iThoughts The Legend

(Originally posted 2016-05-19.)

As I’ve indicated elsewhere, we use iThoughts for outlining our podcast episodes (and use it to track completion).

I’ve developed quite a nice technique for iThoughtsX (the macOS flavour), which I’ll share with you. This is in case you’re inclined to play with newer toys. 🙂

Consider the following fragment of an outline:

You’ll see some of the nodes are filled (arguably) blue, others red and still others green. So we’ve started to colour code the nodes.

Over to the left you see a set of coloured boxes. Zooming in a bit on them:

  • Blue is for “Yes, we will do this bit in this episode”.
  • Red is for “No, we won’t this time”.
  • Green is for “This is where the guest comes in”.

The idea for the green is that we share with the guest the outline by screen sharing when recording (over Skype) and they can concentrate on just their bit.

So this is a kind of legend, describing the colour coding of the nodes.

But there are other times when I want a legend. Namely when I’m abusing mind maps to show, for example, which CICS regions connect to a particular DB2 subsystem.

 

This Is The Stuff Legends Are Made Of

Making one of the boxes of the legend is simple in iThoughtsX (on macOS). You can do it one of two ways:

  • Topic -> New Topic -> Floating and then move the node into place.
  • Context Menu -> New Floating and again move it into place.

You can change the shape to a square as I have done. You also want to set its colour using the colour palette, and those of the “in the tree” topics you want to match it. Finally you’ll want to put some text in the box.

Legends And Templates

We actually have a template for our podcast episodes, which I copy into a new outline. That’s pretty straightforward.

For my other uses I use REXX code to generate the outline – in a particular form of CSV (Comma-Separated Values). I haven’t found a way to robotically generate the legend but there is a simple technique to “parachute” it in: Paste As Floating does the trick.

One example is for CICS-related regions. Specifically I colour code, for example, CICS Data Tables Server address spaces but hang a child node with the text “Data Tables Server” off the node that names the address space. With a legend I can dispense with this child node and tidy up.

An Infestation Of Ticks

You’ll notice the red tick marks.[1] They’re actually to signal we’ve completed that piece of the recording. iThoughts supports tasks and the notion of completion. I defined a pair of Keyboard Maestro hot key combinations to mark completion and unmark it.[2]


So, for those of you who like playing with modern tools (as I do), I hope this has been interesting.

Anyhow, more mainframe technical content soon.


  1. When capturing the graphic I accidentally left them in. But it’s actually a nice feature so I didn’t remake the graphic.  ↩

  2. While iThoughts allows you to specify partial completion we are binary: it’s all or nothing, completion-wise.  ↩

More Fun With DDF

(Originally posted 2016-05-16.)

Already this year I’ve posted thrice on DDF:

It’s clearly something that’s important to me right now. 🙂

So this post is to mention I’m putting the finishing touches to a new presentation (the third of the year so far). I’m giving it to European customers in Munich in mid June. I’m also giving it as an internal IBM webcast in the same timeframe. Of course, I hope to use it again and again.

It’s called “More Fun With DDF”.

The basic thesis is there’s lot of interesting analysis to do for DDF workloads at a number of levels:

  • System
  • WLM Service Class
  • DB2 Subsystem address spaces
  • DB2 Accounting Trace

Obviously you can’t do analysis without data, and it is indeed there aplenty.

So after what I’m calling a Tutorial I dive into a number of customer cases. While I’ve been using them as test data (for my rapidly evolving code) they do illustrate a number of points. None of the cases are exactly “war stories” but I do think they’re interesting.

And after this presentation it’s on to “refurbishing” an older presentation.

But for now previously unthought of slides are popping into my head (and hence into the presentation) at a rate of about 1 a day; I’m well past the “I can’t fill an hour” stage. 🙂 I just hope it is more fun. 🙂

Mainframe Performance Topics Podcast Episode 3 “Getting Better”

(Originally posted 2016-05-07.)

You probably wonder why I post to my blog echoing our podcast episodes. There are two reasons:

  • Yes, it alerts more people to our podcast series. Well duh. 🙂
  • It gives me a chance to inject something more personal about the episode.

So in the latter spirit I’d say the highlight for me of making Episode 3 was getting the “three part disharmony” 🙂 working.

Glenn Wilcock very kindly agreed to be our first guest and having a guest raised an interesting problem:

How do you edit with 3 people? We could’ve gone all mono on you, but I’m not keen on that. Indeed I would encourage listeners to use headphones if at all possible – as I’m playing games with the “stereoscape”. As a Queen fan I’ve been spoilt. 🙂

When I edit I place myself on the left (naturally) 🙂 and (unfortunately for her) I consign Marna to the right. 🙂 So, obviously, a guest has to be placed in the middle.

Now, when it’s just the two of us I use a piece of software on my Mac to record the Skype call. It captures the video but Audacity (our audio editor of choice) will extract the stereo audio from that. I appear on the left and Marna on the right.

The trick with more than two people is to ask everyone to record the Skype call and send me their files. Then I can use Audacity to throw all the “right channel” stuff away. And then I’m in business.

I think you’ll agree this worked really well in this episode.

But if a participant can’t record then we have to fall back on “punching out” each contribution to make a fresh mono recording for them. Cumbersome and requiring clear separation between the contributions.

But Glenn was able to furnish his own recording so all was good.

Arrogantly enough 🙂 I’ve offered my support to others in IBM in getting going with podcasting. “Learn from what I’ve learnt, even though I’m only just ahead of you” has long been my modus operandi.

Below are the show notes.

The series is here.

Episode 3 is here.

Episode 3 “Getting Better” Show Notes

Here are the show notes for Episode 3 “Getting Better”.

Follow Up

We had some follow up items:

  • Following up the Episode 1 “Topics” item on Markdown, Martin talked about John Gruber’s Daring Fireball Dingus page which lets you paste in Markdown and see the HTML generated from it (and how the HTML is rendered).
  • Also following up on an Episode 1 item, but this time the “Mainframe” item, Marna talked about the ISPF 3.17 Mount Table right / left problem she’d been seeing. It turned out to be the PF key definition (which she suspected all along); defining 12 PF keys did the trick.

Mainframe

Our “Mainframe” topic included our first guest, Glenn Wilcock, a DFSMS architect specializing in HSM. Glenn talked about a very important function – using zEnterprise Data Compression (zEDC) for HSM. He discussed some staggeringly good numbers for CPU reduction, throughput improvement, and storage usage reduction. A win on all three fronts. Here are some of the links that you can use to get more information about this topic:

Performance

Our “Performance” item was a discussion on one of Martin’s “2016 Conference Season” presentations: “How To Be A Better Performance Specialist”. Two particular points arising:

  • This presentation might interest a wider audience than just Performance and Capacity people.
  • You might get something out of it even if you’ve been around for a while.

We’ll publish a link to the slides when they hit Slideshare, probably after the 2016 IBM z Systems Technical University, 13 – 17 June, Munich, Germany.

Topics

Under “Topics” we discussed mind mapping and how we use it for this podcast and other uses, such as depicting relationships between CICS systems and the DB2 subsystems they attach to.

Martin mentioned Freemind, an open source cross-platform mindmapping tool, available from here.

He also mentioned the proprietary iThoughts, which has an iOS Version and a macOS version. Data is interchangeable between these two and Martin uses both versions, with the iOS version on his iPad Pro and his iPhone.

On The Blog

Martin posted to his blog:

Marna posted to her blog:

  • Are you electronic delivery secure?

    Warning!!! Regular old ftp for electronic software delivery will be gone on May 22, 2016, for Shopz and SMP/E RECEIVE ORDER. Find a secure replacement.

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

DB2 DDF Transaction Rates Without Tears

(Originally posted 2016-05-02.)

Some of my blog posts revolve around a single SMF field, or maybe a couple. This post is one of those.

If I look at my customers’ mainframe estates [1], especially the DB2 portion of them, they’re getting more complex by the year.

In particular, quite a few customers are in the “tens of DB2 subsystems” category. One of the things driving the number up is SAP implementations, leading to multiple DB2 subsystems for a plethora of applications.

This post is mostly about “DDF heavy” DB2 subsystems – from the z/OS Performance point of view.

 

The Featured SMF Field

Don’t reach for your SMF manual just yet but there is a very nice field I finally realised the significance of recently – SMF30ETC. It’s in the SMF 30 Performance Section, most useful in the Interval records (Subtypes 2 and 3).

This field is Independent Enclave Transaction Count. With interval records you would, of course, turn this into a rate.

(Other fields you might like to consider are SMF30ETA (Independent Enclave Transaction Active Time) and a bunch of Independent Enclave CPU fields (SMF30ENC and some breakout ones for specialty engines). With these I think you can do some interesting calculations.)
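These fields combine nicely. As an illustrative sketch (in Python for brevity; my real tooling is REXX), here is how one interval’s worth of fields might become rates. The names mirror the SMF field names, but decoding the records themselves is assumed, not shown.

```python
# Hypothetical, pre-decoded SMF 30 interval fields for a DIST address space.
# SMF30ETC: Independent Enclave Transaction Count for the interval.
# SMF30ENC: Independent Enclave CPU time, here already converted to seconds.

def ddf_rates(smf30etc, smf30enc_seconds, interval_seconds):
    """Return (transactions per second, CPU seconds per transaction)."""
    rate = smf30etc / interval_seconds if interval_seconds else 0.0
    cpu_per_txn = smf30enc_seconds / smf30etc if smf30etc else 0.0
    return rate, cpu_per_txn

# 9000 transactions and 45 enclave CPU seconds in a 15-minute interval:
# 10 transactions a second, at 5ms of enclave CPU each.
rate, cpu = ddf_rates(smf30etc=9000, smf30enc_seconds=45.0, interval_seconds=900)
```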

 

But What Is An Independent Enclave And Why Do I Care?

An independent enclave is a unit of work that runs in an address space but in a different service class to the one the address space itself runs in.

The most notable case of this is with DDF:[2]

The following diagram outlines what happens when work comes into DDF.

In this example there are three threads – Txn A, Txn B, and Txn C.

  • When a user from outside the LPAR comes into DB2 they come in through the DIST address space using the Distributed Data Facility (DDF).
  • The initial work to set up for the transaction[3] does not run on an independent enclave SRB. It does run under the DIST address space’s service class.[4]
  • After WLM classification, authorisation and a few other things the transaction runs under an independent enclave. Generally this enclave would be classified to a separate WLM service class.[5] In this case Txn A and Txn C are both classified to DDFHI, while Txn B is classified to DDFLO.

One of the key things to note is that for DDF work there is no SMF 30 record for the enclave, just the DIST address space.

DB2 Subsystem Transactions Without DB2 Instrumentation

As you’ve probably gathered by now I like to glean middleware-specific information without using its data.

The reasons for this are straightforward and reasonable, I think:

  • If you have a plethora of, for example, CICS regions you can’t turn on CICS-specific instrumentation for them all.
  • Middleware-specific SMF is generally voluminous, and potentially expensive to collect and still more expensive to process.

So I like to use SMF 30 first, which guides me to the specific CICS regions, DB2 (or MQ) subsystems, etc.

So, SMF30ETC for a DIST address space directly gives me the DDF transaction count for that subsystem, no DB2 instrumentation needed.

That’s it.

 

So Get To The Point

I thought I just had. 🙂

So, suppose I had multiple DB2 subsystems in an LPAR.

The general recommendation is to stick their IRLM address spaces in SYSSTC and the rest – DBM1, DIST, and MSTR – in a notional “STCHI” service class. This service class should have Importance 1 and a velocity goal of something like 70% or more.

There’s probably not much to separate the various DB2 subsystems so often they all end up in the same (“STCHI”) service class.

But suppose you wanted to understand the transaction rate to each subsystem, without using DB2 instrumentation. Then this field (SMF30ETC) gives you that.

 

What About RMF?

You might think that a report class with the DIST address space in it gives you the transaction rate. I don’t believe it does. So SMF30ETC is the best source of information.

There is another approach with RMF, but it solves a slightly different problem.

If clumps of DDF transactions are assigned different report classes their transaction rates will be recorded there. (The same is true for service classes.)

Generally I don’t see DDF work to different DB2 subsystems on the same LPAR assigned to the same report (or service) classes so this is a nice way to work out which clumps of transactions form the bulk of the DDF traffic to each service class.

 

Conclusion

If confronted by a plethora [6] of DB2 subsystems that are likely to have DDF work in the same service class on the same system I’d use SMF30ETC to figure out which have high transaction rates, and which have more expensive transactions.

I’d also add this is a good way to figure out whether any DB2 subsystem has little DDF work.

As I learn more about this deceptively simple piece of instrumentation I’ll let you know.

And there are many more things I could say about DDF. Most of those will have to await my “More Fun With DDF” presentation. It’s the very next conference presentation I’ll be building – with a real “drop dead” date. Stay tuned!


  1. I’ve been using the term “mainframe estate” for some time now, and it seems to be catching on. You could use “mainframe landscape”, I suppose. In any case I mean a self-contained set of machines and what resides in them – from LPARs, through subsystems, to transactions.  ↩

  2. Are there any others? You, dear reader, might know of some.  ↩

  3. For DDF a transaction is a COMMIT or ABORT, not a complete conversation. And then only if the thread goes inactive.  ↩

  4. You can observe this in the SMF 30 as the SRB time is usually small but non-trivial. (The TCB time includes Independent Enclave CPU, which is why it is bigger. But I digress.)  ↩

  5. Most people who have maintained WLM service definitions should be, in outline, familiar with DDF rules. However, the WLM DDF capabilities are very flexible and discussion with DB2 folks is often fruitful.  ↩

  6. See 3 Amigos 🙂  ↩

Java Markdown Processing On z/OS

(Originally posted 2016-05-01.)

In Episode 1 of MPT Podcast we discussed Markdown and Marna asked me if it could be run on z/OS. My answer was “you could try a Python Markdown processor via Jython”.[1]

Then, on IBM-MAIN, Dave Griffiths suggested using one of the Java Markdown processors, instead of Jython.

So I got to experimenting:

  1. I downloaded Java Markdown to my Linux machine. It’s a jar file.
  2. I wrote a short wrapper program in Java and tested it.

Here’s the program. Feel free to swipe it and improve on it.

import org.markdown4j.Markdown4jProcessor;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

class Markdown {
  public static void main(String[] args) {
    try {
      // Read all of stdin into a single string
      BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
      StringBuilder markdownSource = new StringBuilder();
      String inputLine;
      while ((inputLine = br.readLine()) != null) {
        markdownSource.append(inputLine).append("\n");
      }

      // Convert the Markdown to HTML and write it to stdout
      String html = new Markdown4jProcessor().process(markdownSource.toString());
      System.out.println(html);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}

Then I uploaded the jar file[2] and the class file (without ASCII to EBCDIC translation) and tested again. No problems with that.

But it gets better…

Drawing on this post I wrote some wrapper REXX.

The REXX is here:

/* REXX */
env.0=1
env.1="CLASSPATH="
env.1=env.1"/u/userhfs/pmmpac/markdown4j/markdown4j-2.2-cj-1.0.jar"
env.1=env.1":/u/userhfs/pmmpac/markdown4j"

stdin.0=11
stdin.1="###A Heading"
stdin.2=""
stdin.3="First Paragraph"
stdin.4=""
stdin.5="Second Paragraph"
stdin.6=""
stdin.7="* Bullet 1"
stdin.8=""
stdin.9="* Bullet 2"
stdin.10=""
stdin.11="Third paragraph"

cmd="/usr/lpp/java/J7.0/bin/java Markdown"

call bpxwunix cmd,stdin.,stdout.,stderr.,env.

say "stdout:"
do i=1 to stdout.0
  say stdout.i
end

say "stderr:"
do i=1 to stderr.0
  say stderr.i
end

and the JCL:

//STEP10 EXEC PGM=IKJEFT1B,REGION=0M
//SYSEXEC DD DISP=SHR,DSN=PMMPAC.MARKDOWN.JCL
//SYSTSPRT DD SYSOUT=K,HOLD=YES
//SYSTSIN DD *
  %MARKDOWN
/*

Again, feel free to swipe and improve.

  1. I set up the env. stem variable so the environment (CLASSPATH) is right for the Java program to execute.
  2. I fill the stdin. stem variable with the Markdown I want to process; this becomes the program’s stdin.
  3. I invoke BPXWUNIX to run the Java program.
  4. I print the contents of stdout (stored in the stdout. stem variable).
  5. I print the contents of stderr (stored in the stderr. stem variable).

The net effect is I can invoke a Markdown processor from REXX on z/OS in Batch and postprocess the HTML generated to my heart’s content.

One example might be injecting some CSS or some JavaScript.

I would argue this is easier than having your REXX generate HTML in the first place.

And part of the beauty of this is the Java is zIIP-eligible.

The sample program is just a toy, illustrating the principle. I’m sure there’s much more that could be done with it. Have fun!


  1. I’ve talked about Jython on z/OS before  ↩

  2. The jar file I used was markdown4j-2.2-cj-1.0.jar  ↩

Mainframe Performance Topics Podcast Episode 2 “Sound Affects”

(Originally posted 2016-04-30.)

I think we were much more relaxed when we put this episode together, and I hope it shows. We’re learning our craft, and I think quite fast.

Below are the show notes.

The series is here.

Episode 2 is here.

Episode 2 “Sound Affects” Show Notes

Here are the show notes for Episode 2 “Sound Affects”.

Follow Up

We had some follow up items:

  • Marna talked about the new SDSF panels and their commands’ responses being cut off. Use ULOG to see the whole response. BTW – for commands in z/OSMF there is a box for the entire command response you can scroll through.
  • Martin took IBMer Dave Griffiths’ advice and experimented with a Markdown processor written in Java. You can find the Java jar (markdown4j) he used here.

As you can see we do actually pay attention to feedback and follow it up when we can. So keep sending it in.

Mainframe

Our “Mainframe” topic is about blocking the IBM Ported Tools product for ordering with z/OS V2.2. You don’t need the IBM Ported Tools V1.3 product with z/OS V2.2 since the same functions are now contained in the operating system at either the same or a higher level. See here, searching in the page for “OpenSSH” or “Apache”.

Performance

Our “Performance” topic is about Coupling Facility Link Latency, which sounds like a boring topic. It actually has its own share of thrills and spills. Martin has blogged on it several times:

Topics

Under “Topics” we discussed the Android app “Smart Maps Offline”. This handy tool can save you mobile data costs if you are directionally challenged. Maps are free for Android, but might cost for Apple’s version of this app. Good bits: personal pins can be added, common points (hotels, restaurants, transportation) are already marked, good performance, and a small download map size. Things we’d like to see improved: it can’t give you directions to somewhere, and maps are in the local language only. See your app store for downloading this handy tool. One such place is here.

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

Mainframe Performance Topics Podcast Episode 1 “A Luta Continua” (“The Struggle Continues”)

(Originally posted 2016-04-09.)

We enjoyed recording Episode 0, learning as we went. And people seem to have enjoyed it. So we recorded another one.

 

So here are the show notes for Episode 1.

 

By the way we publish the show notes with the audio.

 

You can get the series (and it is a series now) from here, and Episode 1 from here.

And rest assured we have plenty of plans for the future.

Episode 1 “A Luta Continua” (“The Struggle Continues”) Show Notes

Here are the show notes for Episode 1 “A Luta Continua” (“The Struggle Continues”).

Mainframe

Our “Mainframe” topic is about the new z/OS V2.2 ISPF 3.17 Mount Table enhancement.

To try it out on z/OS V2.2: ISPF 3.17, then File_Systems pull down, then the new options are #1 or #2. Type “x” for toggling between expanding and contracting. Use the Help pulldown for more commands available.

Performance

Our “Performance” topic is about long term page fixing DB2 buffer pools, and a related question: Whether to use 1MB pages.

Topics

Under “Topics” we discussed Markdown, a very easy way of creating formatted text. (In fact these notes were written in Markdown using a plain text editor on Linux.)

While we are not heavy users of Android apps, there are apparently plenty of Markdown tools there too. See here.

Martin briefly mentioned the possibility of driving Python’s Markdown processor Python Markdown on z/OS using Jython (which is Python interpreted using Java).

From The Blog

On Martin’s blog:

On Marna’s blog:

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

Mainframe Performance Topics Podcast Episode 0 “Sic Parvis Magna” (“Greatness From Small Beginnings”)

(Originally posted 2016-03-17.)

I’m delighted Marna Walle and I are collaborating on a podcast series. We’re calling it “Mainframe, Performance, Topics Podcast”. You can guess where we got the name from. 🙂

Below are the show notes.

The series is here.

Episode 0 is here.

Episode 0 “Sic Parvis Magna” (“Greatness From Small Beginnings”) Show Notes

Here are the show notes for Episode 0 “Sic Parvis Magna” (“Greatness From Small Beginnings”).

We’re structuring each podcast episode (loosely) in three parts:

We’ll post useful links and supplementary information in each episode’s show notes.

Mainframe

Our “Mainframe” topic is about an SPE to z/OS 2.1 for SDSF.

z/OS 2.1 SDSF SPE

APARs mentioned were:

Performance

Our “Performance” topic is about detecting address spaces in support of FPGA cards.

FPGA Card Support Address Space Detection

Some useful information on the new PCIE and FPGHWAM address spaces is here.

Topics

Other things we discussed were:

And if you think Naughty Dog made up “sic parvis magna” see here.

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

Batch DB2 And MQ The Easy Way

(Originally posted 2016-03-08.)

Here are two questions people would like easy answers to:

  • What are the big CPU DB2 jobs accessing this DB2 subsystem?
  • Which job steps access MQ?

This post shows you how to answer both these questions without DB2- or MQ-specific instrumentation.

The “without DB2- or MQ-specific instrumentation” phrase is important: I’m working with a customer with something like 50 DB2 subsystems and many MQ subsystems. You can’t really take product-specific SMF from all of those.

But most installations collect SMF 30 data, which is what this method relies on.

In fact I’ve just enhanced our tools to make these two questions easy to answer – with single table queries.

The key to this is the SMF 30 Usage Data Section:

  • For DB2 the Product Name field is “DB2” and the Product Qualifier is the DB2 subsystem name. The system is obviously the one that cut the record.
  • For MQ it’s only slightly more complicated: If the Product Qualifier is “FREDBATC” the MQ subsystem is “FRED”. The Product Name contains “MQ”.

With these simple rules it’s easy to write the queries that answer the questions at the top of the post.
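As a sketch of those two rules (in Python for illustration; my real code is REXX), the arguments below stand in for the decoded Product Name and Product Qualifier fields. Real records need proper SMF decoding first, and the exact product name strings may vary.

```python
def subsystem_from_usage(product_name, product_qualifier):
    """Map one SMF 30 Usage Data Section entry to a (type, subsystem) pair."""
    name = product_name.strip()
    qual = product_qualifier.strip()
    if name == "DB2":
        # For DB2 the qualifier is the subsystem name itself
        return ("DB2", qual)
    if "MQ" in name and qual.endswith("BATC"):
        # For MQ, strip the "BATC" suffix: e.g. "FREDBATC" -> "FRED"
        return ("MQ", qual[:-len("BATC")])
    return None  # some other product
```

With that in hand, a single pass over the Usage Data Sections, summing step CPU by (type, subsystem), answers both questions at the top of this post.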

You can also use this method with IMS, with only very minor modifications. My current customer engagement, though, doesn’t include IMS.

Dependency Non-Detection

There is a little subtlety here (and the news isn’t very good):

We usually detect dependencies between jobs by examining the access to data sets:

  • For non-VSAM data sets this would be examining “write” (SMF 15) versus “read” (SMF 14).
  • For VSAM data sets it’s SMF 64 (CLOSE) statistics.[1] We examine the individual statistics deltas in the record.

This is fine for data-set-driven dependencies.
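The writer-then-reader pairing can be sketched like so. This is a Python toy, not my actual (REXX) code; the tuples stand in for decoded SMF 14/15 or SMF 64 records in time order.

```python
def dataset_dependencies(events):
    """events: (job, dsname, access) tuples in time order, where access is
    "write" (SMF 15) or "read" (SMF 14).
    Returns a set of (writer_job, reader_job) dependency pairs."""
    last_writer = {}
    deps = set()
    for job, dsname, access in events:
        if access == "write":
            last_writer[dsname] = job
        elif access == "read":
            writer = last_writer.get(dsname)
            if writer and writer != job:
                deps.add((writer, job))  # the reader depends on the writer
    return deps
```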

You can’t use these records for DB2- or MQ-driven dependencies. The bad news is you can’t get dependency information from SMF 30 either. For this you have to examine the appropriate Accounting Trace records:

  • SMF 101 for DB2.
  • SMF 116 for MQ.

And neither of these tell you what other job (step) this one is dependent on. You just get evidence of write and read activity, which helps.

Conclusion

So Batch remains complex and therefore interesting. But the technique in this post – examining SMF 30 Usage Data – can help make sense of large numbers of jobs very easily.

In the study that inspired this post we can readily see which DB2 subsystems support the “big CPU” jobs. We needed that to direct our DB2 specialist to focus his attention on just those of the many subsystems this customer has (even on a single system).


  1. SMF 62 is an OPEN record, but its statistics aren’t helpful here.  ↩

Oh The Arrogance Of It

(Originally posted 2016-02-21.)

One of my new 2016 presentations is called “How To Be A Better Performance Specialist” – though I really could use a snappier title.

Here’s the abstract:

 

I’ve spent 30 years doing Performance and Capacity. You’d think it’d seem stale and repetitive by now. Not a bit of it. It’s still fresh and interesting.

 

More to the point I think I’m doing it better day by day, even now. So I’d like to share some thoughts on how you too can become more valuable to your organisation as a Performance Specialist. And how you can have fun doing it.

 

Now, who on earth am I to be telling people how to do their jobs better? Particularly as I’ve never worked as a customer Performance Specialist.

I think there are two things going for me:

  • “I’ve spent 30 years doing Performance and Capacity” says I might’ve worked with a fair few customers. Just a few. 🙂
  • I already have plenty of topics for this presentation.
  • It’s highly likely I’ll be doing quite a bit of mentoring over the next few months.

Except that’s three. 🙂

Anyhow, I’m casting my mind in the direction of “higher level” messages. It’ll pass and normal service will be resumed shortly. 🙂

But until it does I have a few slides to write…