Mainframe Performance Topics Podcast Episode 3 “Getting Better”

(Originally posted 2016-05-07.)

You probably wonder why I post to my blog echoing our podcast episodes. There are two reasons:

  • Yes, it alerts more people to our podcast series. Well duh. 🙂
  • It gives me a chance to inject something more personal about the episode.

So in the latter spirit I’d say the highlight for me of making Episode 3 was getting the “three part disharmony” 🙂 working.

Glenn Wilcock very kindly agreed to be our first guest and having a guest raised an interesting problem:

How do you edit with 3 people? We could’ve gone all mono on you, but I’m not keen on that. Indeed I would encourage listeners to use headphones if at all possible – as I’m playing games with the “stereoscape”. As a Queen fan I’ve been spoilt. 🙂

When I edit I place myself on the left (naturally) 🙂 and (unfortunately for her) I consign Marna to the right. 🙂 So, obviously, a guest has to be placed in the middle.

Now, when it’s just the two of us I use a piece of software on my Mac to record the Skype call. It captures the video, but Audacity (our audio editor of choice) will extract the stereo audio from that. I appear on the left and Marna on the right.

The trick with more than two people is to ask everyone to record the Skype call and send me their files. Then I can use Audacity to throw all the “right channel” stuff away. And then I’m in business.

I think you’ll agree this worked really well in this episode.

But if a participant can’t record then we have to fall back on “punching out” each contribution to make a fresh mono recording for them. Cumbersome and requiring clear separation between the contributions.

But Glenn was able to furnish his own recording so all was good.

Arrogantly enough 🙂 I’ve offered my support to others in IBM in getting going with podcasting. “Learn from what I’ve learnt, even though I’m only just ahead of you” has long been my modus operandi.

Below are the show notes.

The series is here.

Episode 3 is here.

Episode 3 “Getting Better” Show Notes

Here are the show notes for Episode 3 “Getting Better”.

Follow Up

We had some follow up items:

  • Following up the Episode 1 “Topics” item on Markdown, Martin talked about John Gruber’s Daring Fireball Dingus page, which lets you paste in Markdown and see the HTML generated from it (and how the HTML is rendered).
  • Also following up on an Episode 1 item, this time the “Mainframe” item, Marna talked about the ISPF 3.17 Mount Table right / left scrolling problem she had been seeing. It turned out to be the PF key definitions (which she suspected all along): defining 12 PF keys did the trick.

Mainframe

Our “Mainframe” topic included our first guest, Glenn Wilcock, a DFSMS architect specializing in HSM. Glenn talked about a very important function – using zEnterprise Data Compression (zEDC) for HSM. He discussed some staggeringly good numbers for CPU reduction, throughput improvement, and storage usage reduction. A win on all three fronts. Here are some of the links you can use to get more information about this topic:

Performance

Our “Performance” item was a discussion on one of Martin’s “2016 Conference Season” presentations: “How To Be A Better Performance Specialist”. Two particular points arising:

  • This presentation might interest a wider audience than just Performance and Capacity people.
  • You might get something out of it even if you’ve been around for a while.

We’ll publish a link to the slides when they hit Slideshare, probably after the 2016 IBM z Systems Technical University, 13 – 17 June, Munich, Germany.

Topics

Under “Topics” we discussed mind mapping: how we use it for this podcast and for other purposes, such as depicting relationships between CICS systems and the DB2 subsystems they attach to.

Martin mentioned Freemind, an open source cross-platform mindmapping tool, available from here.

He also mentioned the proprietary iThoughts, which has an iOS version and a macOS version. Data is interchangeable between the two, and Martin uses both, with the iOS version on his iPad Pro and his iPhone.

On The Blog

Martin posted to his blog:

Marna posted to her blog:

  • Are you electronic delivery secure?

    Warning!!! Regular old ftp for electronic software delivery will be gone on May 22, 2016, for Shopz and SMP/E RECEIVE ORDER. Find a secure replacement.

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

DB2 DDF Transaction Rates Without Tears

(Originally posted 2016-05-02.)

Some of my blog posts revolve around a single SMF field, or maybe a couple. This post is one of those.

If I look at my customers’ mainframe estates [1], especially the DB2 portion of them, they’re getting more complex by the year.

In particular, quite a few customers are in the “tens of DB2 subsystems” category. One of the things driving the number up is SAP implementations, which lead to multiple DB2 subsystems for a plethora of applications.

This post is mostly about “DDF heavy” DB2 subsystems – from the z/OS Performance point of view.

 

The Featured SMF Field

Don’t reach for your SMF manual just yet, but there is a very nice field whose significance I only recently realised – SMF30ETC. It’s in the SMF 30 Performance Section, and is most useful in the Interval records (Subtypes 2 and 3).

This field is Independent Enclave Transaction Count. With interval records you would, of course, turn this into a rate.

(Other fields you might like to consider are SMF30ETA (Independent Enclave Transaction Active Time) and a bunch of Independent Enclave CPU fields (SMF30ENC and some breakout ones for specialty engines). With these I think you can do some interesting calculations.)
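
By way of illustration – and only of the arithmetic, not the record parsing – here’s a minimal REXX sketch. It assumes your own SMF processing has already extracted these fields from an interval record for the DIST address space; the numbers, the 15-minute interval, and treating the enclave CPU figure as seconds are all just illustrative.

/* REXX - sketch of turning SMF30ETC into a rate (and a cost per       */
/* transaction). Assumes the fields were already extracted elsewhere.  */

smf30etc = 45210      /* Independent Enclave Transaction Count for the interval */
smf30enc = 380.5      /* Independent Enclave CPU, treated here as seconds       */
intervalSeconds = 900 /* a 15-minute SMF interval                               */

txnRate = smf30etc / intervalSeconds   /* DDF transactions per second */
cpuPerTxn = 0
if smf30etc > 0 then cpuPerTxn = smf30enc / smf30etc

say "DDF transaction rate :" format(txnRate,,1) "per second"
say "CPU per transaction  :" format(cpuPerTxn * 1000,,3) "milliseconds"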

 

But What Is An Independent Enclave And Why Do I Care?

An independent enclave is a unit of work that runs in an address space but in a different service class to the one the address space itself runs in.

The most notable case of this is with DDF:[2]

The following diagram outlines what happens when work comes into DDF.

In this example there are three threads – Txn A, Txn B, and Txn C.

  • When a user from outside the LPAR comes into DB2 they come in through the DIST address space using the Distributed Data Facility (DDF).
  • The initial work to set up for the transaction[3] does not run on an independent enclave SRB. It does run under the DIST address space’s service class.[4]
  • After WLM classification, authorisation and a few other things the transaction runs under an independent enclave. Generally this enclave would be classified to a separate WLM service class.[5] In this case Txn A and Txn C are both classified to DDFHI, while Txn B is classified to DDFLO.

One of the key things to note is that for DDF work there is no SMF 30 record for the enclave, just the DIST address space.

DB2 Subsystem Transactions Without DB2 Instrumentation

As you’ve probably gathered by now I like to glean middleware-specific information without using its data.

The reasons for this are straightforward and reasonable, I think:

  • If you have a plethora of, for example, CICS regions you can’t turn on CICS-specific instrumentation for them all.
  • Middleware-specific SMF is generally voluminous and can be expensive to collect – and more expensive still to process.

So I like to use SMF 30 first, which guides me to the specific CICS regions, DB2 (or MQ) subsystems, etc.

So, SMF30ETC for a DIST address space directly gives me the DDF transaction count for that subsystem, no DB2 instrumentation needed.

That’s it.

 

So Get To The Point

I thought I just had. 🙂

So, suppose I had multiple DB2 subsystems in an LPAR.

The general recommendation is to stick their IRLM address spaces in SYSSTC and the rest – DBM1, DIST, and MSTR – in a notional “STCHI” service class. This service class should have Importance 1 and a velocity goal of something like 70% or more.

There’s probably not much to separate the various DB2 subsystems so often they all end up in the same (“STCHI”) service class.

But suppose you wanted to understand the transaction rate to each subsystem, without using DB2 instrumentation. Then this field (SMF30ETC) gives you that.

 

What About RMF?

You might think that a report class containing the DIST address space gives you the transaction rate. I don’t believe it does. So SMF30ETC is the best source of this information.

There is another approach with RMF, but it solves a slightly different problem.

If clumps of DDF transactions are assigned different report classes their transaction rates will be recorded there. (The same is true for service classes.)

Generally I don’t see DDF work to different DB2 subsystems on the same LPAR assigned to the same report (or service) classes so this is a nice way to work out which clumps of transactions form the bulk of the DDF traffic to each service class.

 

Conclusion

If confronted by a plethora [6] of DB2 subsystems that are likely to have DDF work in the same service class on the same system I’d use SMF30ETC to figure out which have high transaction rates, and which have more expensive transactions.

I’d also add this is a good way to figure out whether any DB2 subsystem has little DDF work.

As I learn more about this deceptively simple piece of instrumentation I’ll let you know.

And there are many more things I could say about DDF. Most of those will have to await my “More Fun With DDF” presentation. It’s the very next conference presentation I’ll be building – with a real “drop dead” date. Stay tuned!


  1. I’ve been using the term “mainframe estate” for some time now, and it seems to be catching on. You could use “mainframe landscape”, I suppose. In any case I mean a self-contained set of machines and what resides in them – from LPARs, through subsystems, to transactions.  ↩

  2. Are there any others? You, dear reader, might know of some.  ↩

  3. For DDF a transaction is a COMMIT or ABORT, not a complete conversation. And then only if the thread goes inactive.  ↩

  4. You can observe this in the SMF 30 as the SRB time is usually small but non-trivial. (The TCB time includes Independent Enclave CPU which is why it is bigger. But I digress.)  ↩

  5. Most people who have maintained WLM service definitions should be, in outline, familiar with DDF rules. However, the WLM DDF capabilities are very flexible and discussion with DB2 folks is often fruitful.  ↩

  6. See 3 Amigos 🙂  ↩

Java Markdown Processing On z/OS

(Originally posted 2016-05-01.)

In Episode 1 of MPT Podcast we discussed Markdown and Marna asked me if it could be run on z/OS. My answer was “you could try a Python Markdown processor via Jython”.[1]

Then, on IBM-MAIN, Dave Griffiths suggested using one of the Java Markdown processors, instead of Jython.

So I got to experimenting:

  1. I downloaded the Java Markdown processor (markdown4j) to my Linux machine. It’s a jar file.
  2. I wrote a short wrapper program in Java and tested it.

Here’s the program. Feel free to swipe it and improve on it.

import org.markdown4j.Markdown4jProcessor;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

class Markdown {
  public static void main(String[] args) {
    String markdownSource="",inputLine;

    try {
      // Read the Markdown source from stdin, accumulating it into one string
      BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
      while((inputLine=br.readLine())!=null){
        markdownSource+=inputLine+"\n";
      }

      // Convert the Markdown to HTML and write it to stdout
      String html=new Markdown4jProcessor().process(markdownSource);
      System.out.println(html);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}

Then I uploaded the jar file[2] and the class file (without ASCII to EBCDIC translation) and tested again. No problems with that.

But it gets better…

Drawing on this post I wrote some wrapper REXX.

The REXX is here:

/* REXX */
env.0=1
env.1="CLASSPATH="
env.1=env.1"/u/userhfs/pmmpac/markdown4j/markdown4j-2.2-cj-1.0.jar"
env.1=env.1":/u/userhfs/pmmpac/markdown4j"

stdin.0=11
stdin.1="###A Heading"
stdin.2=""
stdin.3="First Paragraph"
stdin.4=""
stdin.5="Second Paragraph"
stdin.6=""
stdin.7="* Bullet 1"
stdin.8=""
stdin.9="* Bullet 2"
stdin.10=""
stdin.11="Third paragraph"

cmd="/usr/lpp/java/J7.0/bin/java Markdown"

call bpxwunix cmd,stdin.,stdout.,stderr.,env.

say "stdout:"
do i=1 to stdout.0
  say stdout.i
end

say "stderr:"
do i=1 to stderr.0
  say stderr.i
end

and the JCL:

//STEP10 EXEC PGM=IKJEFT1B,REGION=0M
//SYSEXEC DD DISP=SHR,DSN=PMMPAC.MARKDOWN.JCL
//SYSTSPRT DD SYSOUT=K,HOLD=YES
//SYSTSIN DD *
  %MARKDOWN
/*

Again, feel free to swipe and improve.

  1. I set up the env. stem variable so the environment is right for the java program to execute.
  2. I fill the stdin. stem variable with the Markdown I want to process. This will be the stdin.
  3. I invoke BPXWUNIX to run the java program.
  4. I print the contents of stdout (stored in the stdout. stem variable.)
  5. I print the contents of stderr (stored in the stderr. stem variable.)

The net effect is I can invoke a Markdown processor from REXX on z/OS in Batch and postprocess the HTML generated to my heart’s content.

One example might be injecting some CSS or some javascript.

I would argue this is easier than having your REXX generate HTML in the first place.

And part of the beauty of this is the Java is zIIP-eligible.

The sample program is just a toy, illustrating the principle. I’m sure there’s much more that could be done with it. Have fun!


  1. I’ve talked about Jython on z/OS before  ↩

  2. The jar file I used was markdown4j-2.2-cj-1.0.jar  ↩

Mainframe Performance Topics Podcast Episode 2 “Sound Affects”

(Originally posted 2016-04-30.)

I think we were much more relaxed when we put this episode together, and I hope it shows. We’re learning our craft, and I think quite fast.

Below are the show notes.

The series is here.

Episode 2 is here.

Episode 2 “Sound Affects” Show Notes

Here are the show notes for Episode 2 “Sound Affects”.

Follow Up

We had some follow up items:

  • Marna talked about the new SDSF panels and their commands’ responses being cut off. Use ULOG to see the whole response. BTW – for commands in z/OSMF there is a box for the entire command response you can scroll through.
  • Martin took IBMer Dave Griffiths’ advice and experimented with a Markdown processor written in java. You can find the java jar (markdown4j) he used here.

As you can see we do actually pay attention to feedback and follow it up when we can. So keep sending it in.

Mainframe

Our “Mainframe” topic is about the IBM Ported Tools product being blocked from ordering with z/OS V2.2. You don’t need the IBM Ported Tools V1.3 product with z/OS V2.2, since the same functions are now contained in the operating system at either the same or a higher level. See here, searching in the page for “OpenSSH” or “Apache”.

Performance

Our “Performance” topic is about Coupling Facility Link Latency, which sounds like a boring topic. It actually has its own share of thrills and spills. Martin has blogged on it several times:

Topics

Under “Topics” we discussed the Android app “Smart Maps Offline”. This handy tool can save you mobile data costs if you are directionally challenged. The maps are free on Android, but might cost something in the Apple version of the app. Good bits: personal pins can be added, common points (hotels, restaurants, transportation) are already marked, performance is good, and the downloaded maps are small. Things we’d like to see improved: it can’t give you directions to somewhere, and maps are in the local language only. See your app store for downloading this handy tool. One such place is here.

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

Mainframe Performance Topics Podcast Episode 1 “A Luta Continua” (“The Struggle Continues”)

(Originally posted 2016-04-09.)

We enjoyed recording Episode 0, learning as we went. And people seem to have enjoyed it. So we recorded another one.

 

So here are the show notes for Episode 1.

 

By the way we publish the show notes with the audio.

 

You can get the series (and it is a series now) from here, and Episode 1 from here.

And rest assured we have plenty of plans for the future.

Episode 1 “A Luta Continua” (“The Struggle Continues”) Show Notes

Here are the show notes for Episode 1 “A Luta Continua” (“The Struggle Continues”).

Mainframe

Our “Mainframe” topic is about the new z/OS V2.2 ISPF 3.17 Mount Table enhancement.

To try it out on z/OS V2.2: go to ISPF 3.17, then the File_Systems pull-down; the new options are #1 and #2. Type “x” to toggle between expanding and contracting. Use the Help pull-down to see the other commands available.

Performance

Our “Performance” topic is about long term page fixing DB2 buffer pools, and a related question: Whether to use 1MB pages.

Topics

Under “Topics” we discussed Markdown, a very easy way of creating formatted text. (In fact these notes were written in Markdown using a plain text editor on Linux.)

Not that we are heavy users of Android apps, but there are apparently plenty of Markdown tools there too. See here.

Martin briefly mentioned the possibility of driving Python’s Markdown processor Python Markdown on z/OS using Jython (which is Python interpreted using Java).

From The Blog

On Martin’s blog:

On Marna’s blog:

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

Mainframe Performance Topics Podcast Episode 0 “Sic Parvis Magna” (“Greatness From Small Beginnings”)

(Originally posted 2016-03-17.)

I’m delighted Marna Walle and I are collaborating on a podcast series. We’re calling it “Mainframe, Performance, Topics Podcast”. You can guess where we got the name from. 🙂

Below are the show notes.

The series is here.

Episode 0 is here.

Episode 0 “Sic Parvis Magna” (“Greatness From Small Beginnings”) Show Notes

Here are the show notes for Episode 0 “Sic Parvis Magna” (“Greatness From Small Beginnings”).

We’re structuring each podcast episode (loosely) in three parts: “Mainframe”, “Performance”, and “Topics”.

We’ll post useful links and supplementary information in each episode’s show notes.

Mainframe

Our “Mainframe” topic is about an SPE to z/OS 2.1 for SDSF.

z/OS 2.1 SDSF SPE

APARs mentioned were:

Performance

Our “Performance” topic is about detecting address spaces in support of FPGA cards.

FPGA Card Support Address Space Detection

Some useful information on the new PCIE and FPGHWAM address spaces is here.

Topics

Other things we discussed were:

And if you think Naughty Dog made up “sic parvis magna” see here.

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

Batch DB2 And MQ The Easy Way

(Originally posted 2016-03-08.)

Here are two questions people would like easy answers to:

  • What are the big CPU DB2 jobs accessing this DB2 subsystem?
  • Which job steps access MQ?

This post shows you how to answer both these questions without DB2- or MQ-specific instrumentation.

The “without DB2- or MQ-specific instrumentation” phrase is important: I’m working with a customer with something like 50 DB2 subsystems and many MQ subsystems. You can’t really take product-specific SMF from all of those.

But most installations collect SMF 30 data, which is what this method relies on.

In fact I’ve just enhanced our tools to make these two questions easy to answer – with single table queries.

The key to this is the SMF 30 Usage Data Section:

  • For DB2 the Product Name field is “DB2” and the Product Qualifier is the DB2 subsystem name. The system is obviously the one that cut the record.
  • For MQ it’s only slightly more complicated: If the Product Qualifier is “FREDBATC” the MQ subsystem is “FRED”. The Product Name contains “MQ”.

With these simple rules it’s easy to write the queries that answer the questions at the top of the post.
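
To illustrate just those two rules (the decoding of the SMF record itself is assumed to have been done elsewhere), here’s a minimal REXX sketch; the variable names and values are mine.

/* REXX - sketch of classifying one SMF 30 Usage Data Section entry.  */
/* Assumes productName and productQualifier were already extracted.   */

productName      = "DB2"    /* illustrative values only */
productQualifier = "DB2A"

select
  when pos("DB2", productName) > 0 then
    say "DB2 subsystem:" strip(productQualifier)    /* qualifier is the subsystem name */
  when pos("MQ", productName) > 0 then
    say "MQ subsystem:" left(productQualifier, 4)   /* e.g. "FREDBATC" gives "FRED"    */
  otherwise
    nop
end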

You can also use this method with IMS, with only very minor modifications. My current customer engagement, though, doesn’t include IMS.

Dependency Non-Detection

There is a little subtlety here (and the news isn’t very good):

We usually detect dependencies between jobs by examining the access to data sets:

  • For Non-VSAM data sets this would be examining “write” (SMF 15) versus “read” (SMF 14).
  • For VSAM data sets it’s SMF 64 (CLOSE) statistics. [1] We examine the individual statistics deltas in the record.

This is fine for data-set-driven dependencies.
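
Purely to illustrate the matching idea, here’s a minimal REXX sketch. It assumes the SMF 14 / 15 (or 64) records have already been boiled down to simple “job, action, data set” facts; the data below is made up.

/* REXX - sketch of deriving job-to-job dependencies from data set accesses. */

access.0 = 3
access.1 = "JOBA WRITE PROD.DAILY.EXTRACT"
access.2 = "JOBB READ PROD.DAILY.EXTRACT"
access.3 = "JOBC READ PROD.DAILY.EXTRACT"

do w = 1 to access.0
  parse var access.w writer wAction wDsn
  if wAction <> "WRITE" then iterate
  do r = 1 to access.0
    parse var access.r reader rAction rDsn
    if rAction = "READ" & rDsn = wDsn & reader <> writer then
      say reader "likely depends on" writer "via" wDsn
  end
end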

You can’t use these records for DB2- or MQ-driven dependencies. The bad news is you can’t get dependency information from SMF 30 either. For this you have to examine the appropriate Accounting Trace records:

  • SMF 101 for DB2.
  • SMF 116 for MQ.

And neither of these tell you what other job (step) this one is dependent on. You just get evidence of write and read activity, which helps.

Conclusion

So Batch remains complex and therefore interesting. But the technique in this post – examining SMF 30 Usage Data – can help make sense of large numbers of jobs very easily.

In the study that inspired this post we can readily see which DB2 subsystems support the “big CPU” jobs. We needed that to direct our DB2 specialist’s attention to just those few of the many subsystems this customer has (even on a single system).


  1. SMF 62 is an OPEN record, but its statistics aren’t helpful here.  ↩

Oh The Arrogance Of It

(Originally posted 2016-02-21.)

One of my new 2016 presentations is called “How To Be A Better Performance Specialist” – though I really could use a snappier title.

Here’s the abstract:

 

I’ve spent 30 years doing Performance and Capacity. You’d think it’d seem stale and repetitive by now. Not a bit of it. It’s still fresh and interesting.

 

More to the point I think I’m doing it better day by day, even now. So I’d like to share some thoughts on how you too can become more valuable to your organisation as a Performance Specialist. And how you can have fun doing it.

 

Now, who on earth am I to be telling people how to do their jobs better? Particularly as I’ve never worked as a customer Performance Specialist.

I think there are two things going for me:

  • “I’ve spent 30 years doing Performance and Capacity” says I might’ve worked with a fair few customers. Just a few. 🙂
  • I already have plenty of topics for this presentation.
  • It’s highly likely I’ll be doing quite a bit of mentoring over the next few months.

Except that’s three. 🙂

Anyhow, I’m casting my mind in the direction of “higher level” messages. It’ll pass and normal service will be resumed shortly. 🙂

But until it does I have a few slides to write…

Suffering Subsystems

(Originally posted 2016-02-14.)

I wish I’d started counting DB2 subsystems before.

A recent study saw 43 DB2 subsystems, in 13 Data Sharing groups (and a few in none), across a large number of z/OS systems.

And if I try to remember other studies these numbers have been typical of them (but this is not a typical set of numbers).

Two thoughts entered my head:

  • How on earth do you get to these sorts of numbers, and is it a blessing or a nuisance?
  • How can you depict your DB2 estate?

This post is about the latter. I might come back to the former.

I want to share a technique I used that you might want to emulate. At any rate it generates diagrams I think you’ll find easy on the eye.

My Motivation

I’m always looking at new ways of depicting things for two reasons:

  • Because I spend way too long generating “orientation” information about customers. I’m lazy, or impatient, or an efficiency-seeker if you prefer. 🙂
  • Because I think there are fresh insights to be had.

As I hinted, I think customer mainframe estates have become more and more complex. So the need for better tooling has become acute.

Source Material

To capture your DB2 estate you need, unsurprisingly, to use SMF 30 Interval records. I’ve written about this many times. But here are a couple of specifics:

  • I look for job names ending with “IRLM” to represent the DB2 subsystem.[1] This I plug into a query against the SMF 74-2 XCF data to retrieve the group name, throwing away any beginning with “IXCLO”. This gives me a “group name” which I can use to find others in the same group.[2]
  • To establish which CICS regions talk to which DB2 subsystem I use the DB2 SMF 30 Usage Data Section – for address spaces with program name “DFHSIP”.

If you read the footnotes you’ll see this isn’t 100% ideal, but it certainly gets you a lot of the CICS / DB2 topology. To me it’s architecturally useful stuff. The question is how to depict this network.

New Tooling

I’ve used mind maps before and one of my favourite tools for creating and manipulating them is iThoughts. There’s an iOS version and a Mac OS X version.

Yes, other tools are available but there’s a specific feature I really like that makes this the tool I’m going with: Comma-Separated Value (CSV) import.[3]

CSV is nice because:

  • It’s plain text and my REXX code can readily generate it (there’s a sketch of this just after the list).
  • You can pull it into a spreadsheet and edit it before saving it and pulling it into iThoughts.
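
Here’s a minimal REXX sketch of the generation side, with made-up topology data. The level / title column layout is purely illustrative – check what your mind mapping tool actually expects on import.

/* REXX - sketch: emit a simple hierarchical CSV from topology facts    */
/* already collected (group or NONE, DB2 subsystem, attaching regions). */

topo.0 = 2
topo.1 = "DSGRP1 DB2A CICSAOR1 CICSAOR2"
topo.2 = "NONE DB2X CICSAOR9"

say '"Level","Title"'
do i = 1 to topo.0
  parse var topo.i group subsys regions
  say '"1","'group'"'
  say '"2","'subsys'"'
  do while regions <> ""
    parse var regions region regions
    say '"3","'region'"'
  end
end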

One other iThoughts feature I like is the ability to Filter on a text string. Actually you can do a Global Replace, which I found useful in sanitising the screen shots for this blog post.[4]

As with most mind mapping tools I can move nodes and subtrees around very easily. I can also add notes such as when the CICS region or DB2 subsystem started.

Some Fragments

So here are a couple of fragments of mind maps my tool has been taught to generate the CSV for. The screenshots are indeed from iThoughts running on Mac OS X.

First, a shot of some DB2 subsystems – one set in a Data Sharing group, another not.

The grey colour was actually specified in the CSV file my code creates. It’s to draw attention to the fact the subsystems in that colour aren’t in a DB2 Data Sharing group. One day I could colour code the Data Sharing groups.

And now a shot of some CICS regions attaching to two DB2 subsystems in the same system:

Conclusion

The two screenshots above are quite pretty and very close to automatic now:

  • My code generates the CSV file automatically
  • I still have to download it and throw it into iThoughts

That isn’t really burdensome.

The nice thing is I have a mind map or two I can rearrange and edit. And there are some more nice tricks like the ability to have my code generate notes for each node and have iThoughts import them at the same time as the actual topology data.

So if I get bored I can see ways to enhance this.

So, I’m sure you could do this with other mind mapping tools. The point of this post, however, is to encourage you to experiment with this kind of depiction. Have fun!


  1. The characters before “IRLM” in the job name might not be the same as those that, for example, the DBM1 address space’s name begins with.  ↩

  2. This is the IRLM XCF group, not the DB2 Data Sharing Group. The latter is not available unless you do something clever with SMF 74–4 Coupling Facility data. (And I haven’t got there yet.)  ↩

  3. Just as there are other Mind Map tools there are other text-file based formats, such as Freemind and OPML.  ↩

  4. It might interest you to know I’m using the Duet iOS app to provide a second screen and using iOS’ built-in screen shot capability to capture sections of the map.  ↩

DDF Batch

(Originally posted 2016-01-24.)

DDF and Batch sound like two opposite ends of the spectrum, don’t they?

Well, it turns out they’re not.

I said in DDF Counts I might well have more to say about DDF. I was right.

I’ve known for a long time that some DDF work can come in from other z/OS DB2 subsystems, but not really thought much about it.

Until now. And I don’t really know why now. 🙂 Maybe it’s just because I’m “in the neighbourhood”.

Why Is Batch DDF An Important Topic?

We look at batch jobs in lots of ways but until now we’ve not considered the case where a batch job goes to DB2 for data but the data is really in a different DB2.[1]

But if a DB2 job does go elsewhere for its data the performance of getting it clearly affects the job’s run time.

There are at least two different aspects to this:

  • The network traffic.
  • The remote DB2 access time.

How Do You Understand A Job’s Remote DB2 Performance?

First you have to detect an external DB2 batch job. Then you need to analyse its performance.

The latter is just the same as any other DB2 batch job, so I won’t dwell on it here. So let’s consider how to detect batch jobs that come in through DDF.

Detecting An External DB2 Batch Job

Let’s assume you have a bunch of SMF 101 (DB2 Accounting Trace) records with QWHCATYP of QWHCRUW or QWHCDUW – denoting DDF.

If field QMDAATYP contains “DSN” the DDF 101 record relates to a remote z/OS system. But these records could be, for example, from a remote CICS transaction.

You can detect remote batch jobs from the SMF 101 record by observing when field QMDACTYP contains “BATCH”. Typically QMDACNAM might contain “BATCH” or “DB2CALL”.

If it is Remote DB2 Batch the first eight characters of the remote Correlation ID (QMDACORR)[2] are the job name.
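
Pulling those detection rules together, here’s a minimal REXX sketch. It assumes your Accounting Trace decoder has already extracted these fields (and has mapped the connection type to its symbolic value); the values are made up.

/* REXX - sketch of spotting a remote z/OS batch job from one SMF 101 record. */

qwhcatyp = "QWHCDUW"          /* connection type, mapped to its symbolic value */
qmdaatyp = "DSN"              /* requester is another z/OS DB2                 */
qmdactyp = "BATCH"            /* requester connection type                     */
qmdacorr = "PAYJOB1 DB2CALL"  /* correlation ID (made up)                      */

isDDF    = (qwhcatyp = "QWHCRUW" | qwhcatyp = "QWHCDUW")
isRemote = (pos("DSN", qmdaatyp) > 0)
isBatch  = (pos("BATCH", qmdactyp) > 0)

if isDDF & isRemote & isBatch then do
  jobname = strip(left(qmdacorr, 8))   /* first 8 characters are the job name */
  say "Remote DB2 batch job:" jobname
end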

Obtaining the step number and name can be done by using timestamp analysis, comparing this record’s timestamps to SMF 30 for the job on its originating system.

One snag with the term “originating system” is that the 101 record doesn’t actually tell you the originating system’s SMF ID. But it will give you some network information, from which you can probably work it out.

Now We Have Two Records To Analyse. Is This Better Than One?

So now we have two SMF 101 records for the job[3]:

  • The one on the job’s originating system.
  • The DDF one on the system it connects to via DDF.

As I pointed out at the end of this discussion thread in 2005 the originating job’s 101 record might contain substantial DB2 Services Wait Other time – which would be the time spent over in the system whose data it accessed.

So I would advocate a two step process:

  1. Analyse the job’s home DB2 101 record to discover the big buckets of time, and tune them down – as usual.

  2. If the DB2 Services Wait Other time is substantial then understand the time buckets in the other 101 record (the one on the system it connects to via DDF).

Actually there is a third aspect: If your concern is actually the CPU time this job causes on the system it connects to via DDF then obviously the DDF 101 is the one you want.

So I think you can do good work with the pair of 101 records – so long as you’re collecting 101s from both DB2 subsystems and processing them appropriately.

What About The Network Traffic?

While you can’t directly see the network time you can see the traffic: The QLAC section in the 101 record gives you such things as SQL statements transmitted, rows transferred, bytes transferred etc.

I think this is useful information – and you might actually be able to do something about it.

Conclusion

Part of the purpose of this post was to sensitise Performance people to the possibility that their batch might be using DDF (and indeed that some of the DDF traffic might be coming from remote z/OS batch jobs).

The other part of the purpose was to outline how you might go about analysing the performance of such batch jobs.

In my code I have a new report that covers this ground. Naturally it’ll evolve – and I expect I’ll be asking customers whose DB2 Batch I study for SMF 101 data from any DB2 subsystems they think it accesses remotely.


  1. For simplicity I’ll write as if the access is read. In reality, of course, update is quite likely. 

  2. For CICS, in contrast, the middle 4 characters are the CICS transaction name. 

  3. I’ve simplified here. In reality the job might be multi-step, so you would then get more than 2 SMF 101 records.