Dock It To Me

With z/OS 2.4 you can now run Docker containers in zCX address spaces. I won’t get into the whys and wherefores of Docker and zCX, except to say that it allows you to run many packages that are available on a wide range of platforms.

This post, funnily enough, is about running Docker on a couple of reasonably widely available platforms. And why you might want to do so – before running it in zCX address spaces on z/OS.

Why Run Docker Elsewhere Before zCX?

This wouldn’t be a permanent state of affairs but I would install Docker and run it on a handy local machine first.

This is for two reasons:

  1. To get familiar with operating Docker and installing packages.
  2. To get familiar with how these packages behave.

As I’ll show you – with two separate environments – it’s pretty easy to do.

The two environments I’ve done it on are

  • Raspberry Pi
  • Mac OS

I have no access to Windows – and haven’t for over 10 years. I’d hope (all snark aside) it was as simple as the two I have tried.

One other point: As the Redbook exemplifies, not all the operational aspects of zCX are going to be the same, but understanding the Docker side is a very good start.

Raspberry Pi / Linux

Raspberry Pi runs a derivative of Debian. I can’t speak for other Linuxes. But on Raspberry Pi it’s extremely easy to install Docker, so it will be for other Debian-based distributions. (I just have no experience of those.)

You simply issue the command (as root, or via sudo):

apt-get install docker

If you do that you get a fully-fledged Docker set up. It might well pull in a few other packages.

My Raspberry Pi 4B has a 16GB microSD card and it hasn’t run out of space. Some Docker images (such as Jupyter Notebooks) pull in a few gigabytes, so you probably want to be a little careful.

After you’ve installed Docker you can start installing and running other things. A simple one is Node.js or “node” for short.

With node you can run “server side” JavaScript. Most of the time I prefer to think of it as command-line JavaScript.

A Simple Node Implementation

I created a small node test file with the nano editor and saved it as test.js.
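The file’s contents aren’t shown here, but a minimal test.js in the same spirit might look like this – an illustrative guess, not the original file:

```shell
# Write a one-line Node.js script; the message text is invented for illustration
cat > test.js <<'EOF'
console.log("Hello from Node.js under Docker!");
EOF
```

Anything that writes to the console will do for this test.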

I can run this with the following command:

docker run -v "$PWD":/usr/src/app -w /usr/src/app node test.js

This mounts the current working directory as the /usr/src/app directory in Docker (-v option of the docker run command), sets the docker working directory to this directory (-w option), and then invokes node to run test.js. The result is a write to the console.

(This combination of -v and -w is a very common idiom, so it’s worth learning it.)

Accessing A Raspberry Pi From iOS

Though I SSH into my Pi from my Mac the most fun is doing it from iOS. (Sorry if you’re an Android user but you’ll have to find your own SSH client as I have little experience of Android. Likewise Windows users. I’m sure you’ll cope.)

My Raspberry Pi is set up so that I use networking over Bluetooth or WiFi. This means I can play with it on a flight or at home. In both cases I address it as Pi4.local, through the magic of Bonjour.

Specifically I can SSH into the Pi from an iOS device in one of two ways:

  • Using the “Run Script over SSH” Shortcuts action.
  • Using the Prompt iOS app.

I’ve done both. I’ve also used a third way to access files on the Pi: Secure ShellFish.

All these ways work nicely – whether you’re on WiFi or networking over Bluetooth.

Mac OS

For the Mac it’s a lot simpler. For a start you don’t need to SSH in.

I installed Docker by downloading it from here. (I note there is a Windows image here but I haven’t tried it.)

Before it lets you download the .dmg file you’ll need to create a userid and sign in. Upon installation a cute Docker icon will appear in the menu bar. You can use this to control some aspects of Docker. You can sign in there.

From Terminal (or, in my case, iTerm2) you can fetch the nginx image with:

docker pull nginx

This is a lightweight web server. You start it in Docker with:

docker run -p 8080:80 -d nginx

If you point your browser at localhost:8080 you’ll get a nice welcome message. This writeup will take you from there.


There’s an excellent Redbook on zCX specifically: Getting started with z/OS Container Extensions and Docker

It’s also pretty good preparation for other Docker environments, though it doesn’t mention them.

Which, I think, tells you something. Probably about portability and interoperability.

So, as I said, it’s easy and rewarding to play with Docker before doing it on z/OS with zCX. And, as I said on Episode 25 of our podcast, I’d love to see SMF data from zCX.

If I get to see SMF data from a zCX situation – whether a benchmark or a real customer environment – I’ll share what I’m seeing. I already have thoughts.

And maybe I’ll write more about my Raspberry Pi setup some day.

Mainframe Performance Topics Podcast Episode 25 “Flit For Purpose”

It’s been a long time since Marna and I recorded anything. So long in fact that I’d forgotten how to use my favoured audio editor Ferrite on iOS.

But it soon came back to me. Indeed the app has moved on somewhat since I last used it – in a number of good ways.

So, we’re back – in case you thought we’d gone away for good.

And here are the show notes. I hope you’ll find some interesting things here.

Episode 25 “Flit for Purpose”

Here are the show notes for Episode 25 “Flit for Purpose”. The show is called this because it relates to our Topic, and also can be related to our Mainframe topic (as a pun for “Fit for Purpose”).

It’s been a long time between episodes, for good reason

  • We’ve been all over the place – too many places to mention – thanks to a very busy end of 2019, with two major GAs (z/OS V2.4 on September 30, 2019, and z15 on September 23, 2019)


  • Twitter user @jaytay had a humorous poke at SMF field “SMF_USERKEYCSAUSAGE” – the way it appears to contain “sausage”. We agree, and are glad to find humor everywhere.

Follow up

  • We mentioned in Episode 24 CICS ServerPac in a z/OSMF Portable Software Instance Format. It GA’ed December 6 on ShopZ. If you will be ordering CICS or CICS program products, please order it in z/OSMF format!

Mainframe Topic: Highest highlights of z/OS V2.4 and z/OS on z15

  1. Highlight 1:  zCX

    • zCX is a new z/OS address space that runs Linux on Z with the Docker infrastructure. Docker containers have become popular in the industry. Examples include nginx, MongoDB, and WordPress.

      • (The use cases depicted reflect the types of software that could be deployed in IBM zCX in the future. They are not a commitment or statement of software availability for IBM zCX.)
    • Take a Docker Hub image and run it “as is”. There are about 3,000 images to choose from that can immediately run. Just look for the “IBM Z” types.

    • The images are not necessarily from IBM, which brings about a “community” and “commonality” with Linux on Z.

    • zCX is packaged with SMP/E, and serviced with SMP/E. However, configuration (getting it up and running) and service updates must be done with z/OSMF Workflow.

    • Application viewpoint: Docker images themselves are accessed through the TCP/IP stack, with the standard Docker CLI using SSH. Application people might not even know it’s running under z/OS. The Docker Command Line Interface is where you control which containers run in which zCX address spaces.

    • For cost: No priced SW feature (IFAPRDxx). However, it does require a priced HW feature (0104, Container Hosting Foundation) on either z14 GA2 or z15. This is verified at zCX initialization. Lastly, zCX cycles are zIIP-eligible.

    • It’s an architectural decision whether to run Docker applications on Linux on Z or z/OS, and that’s for another episode.

      • Martin wants to see some SMF data, naturally. He’s installed Docker on two different platforms: Mac and his Raspberry Pi. In the latter case he installed nginx and also gcc.
  2. Highlight 2: z/OSMF

    • Lots of z/OSMF enhancements have arrived in z/OS V2.4, and the good news is that most of them are rolled back to V2.3 in PTFs that have been arriving quarterly.

    • Security Configuration Assistant: A way within z/OSMF to validate your security configuration with graphic views, on the user and user group level. Designed to work with all three External Security Managers!

      • Security is, and continues to be, one of the hardest parts of getting z/OSMF working. This new application itself has more security profiles that users of Security Configuration Assistant will need access to, but the good side is that once those users are allowed, they can greatly help the rest.

      • Use case: if a user is having problems accessing an application and you don’t know why, you can easily check whether the user has authority to access the application, eliminating that as a possible cause.

      • Available back to V2.3 with APAR PH15504 and additional group id enhancements in APAR PH17871

    • Diagnostic Assistant for z/OSMF: A much simpler way to gather the information a Service person needs to debug your z/OSMF problem.

      • It wasn’t all that easy before – it could have been streamlined, and it took this application to do that. It is now so easy that Marna no longer grudgingly gathers problem doc from the many different locations that contain the necessary diagnostic files.

      • It could not be easier to use (although Marna has requested one tiny additional enhancement from the z/OSMF team: not requiring the z/OSMF server jobname and jobid).

      • You open the Diagnostic Assistant application and select the pieces of information you want to gather. This includes the configuration data, the job log, the server side log, and some other files. Having z/OSMF collect it for you is really nice.

        • It is then zipped up and downloaded to your hard drive (not stored on the z/OS system).
      • Available back to V2.3 with Diagnostic Assistant APAR PH11606

  3. Highlight 3: z/OS on z15: System Recovery Boost. It speeds up your shutdown for up to 30 minutes and your re-IPL for up to 60 minutes, with no increase to your rolling four hour average.

    • Some bits are free; some will cost, should you choose to use them. Note that System Recovery Boost is available on z/OS V2.3 and V2.4. There is no priced z/OS SW feature (IFAPRDxx) at all. It is made up of individual solutions, not all of which you may choose to use (or which may apply to you).

      • No charge for this one: sub-capacity CPs will be boosted to full capacity. If you’re at full capacity already, this will not help.

      • No charge for this one: Use your entitled zIIPs for running workload that usually would not be allowed to run there (General CP workload).

        • Martin will have updates to his zIIP Capacity Planning presentation!
      • If you are a GDPS user, there is GDPS 4.2 scripting and firmware enhancements.  This will allow parallelization of GDPS reconfiguration actions that may be part of your restart, reconfiguration, and recovery process.

        • Martin notes that if you parallelise more than you otherwise would it might affect the resource picture.
      • Lastly – and this one is priced – if you want to, you can purchase HW features (9930 and 6802). These features allow you extra temporary zIIP capacity, which you can then make use of for even more boost processing.

      • Additional reference: System Recovery Boost Redpaper

      • Martin, again, looks forward to seeing data from this, as RMF could show some challenging things for his reporting.

Performance Topic: z15 from chip design on upwards

  • Disclaimer: personal view, not from Development or Marketing. Marna and Martin were talking about the z15 Chip design – and we thought those observations might be useful to include in the Performance topic. Mostly focusing on the CP chip, which is where the processor cores are.

  • Two traditional levers are raising the clock speed and shrinking the feature size.

    • z15 clock speed is still 5.2 GHz. And we’ve been as high as 5.5 GHz, with zEC12.

    • Feature size is still 14 nanometers (nm). Some other fabs have 10nm and even 7nm processes.

  • GHz and nm aren’t the be-all and end-all. So let’s look at chip design.

    • Start with a similar-sized CP chip and put more on it. Getting rid of the Infiniband-related circuits helped, as did some layout enhancements.

      • Very sophisticated software used for laying out all modern chips. Once you have more chip real estate, good stuff can happen.

        • The same size CP chip has 3 billion more transistors – that’s 9.1 billion transistors.
      • This can give us two more cores, taking us to twelve.

        • As an aside, Martin has seen the two more cores on a z14 PU chip allow better Cycles-Per-Instruction (CPI) than on z13.
      • More L2 Instruction Cache, at the core level. Double the L3 Cache size, at the chip level, shared between cores – so almost double per core. All of this has got to lead to better CPI.

      • Nest Acceleration Unit (NXU): Compression on this chip is such a fascinating topic, but for another episode.

      • The drawer can go down from 5 or 6 PU chips to 4 and – for a 1-drawer machine – still have one more purchasable core than z14: 34 vs 33.

        • 33 vs 34 is for 1 drawer. Similar things apply for two or more drawers.

        • As a result they also were able to remove one set of X-Bus circuitry.

          • The X-Bus is used to talk to other CP chips and the System Controller (SC) Chip.

          • Now down from 3 to 2: one for the SC chip and one for the remaining other CP chip in the cluster.

        • The contents of the drawer now fit in an industry standard 19 inch (narrower) rack, following what was done with the z14 ZR1.

      • At the top end there are up to 190 characterisable cores, coming up from 170. This can give us a fifth drawer – which is quite important.

        • Speculation that reducing the number of PU chips enabled the System Controller (SC) chip to talk to four other SC chips, up from 3 in z14, getting us from 170 to 190.
      • Many other things too: like 40TB max memory at the high end vs 32TB, improved branch prediction and a deeper processor design, and enhanced Vector processing with new instructions.

Topics Topic: How To Do A Moonlight Flit

  • This topic is about moving one’s social output, in particular blogs and podcast series. Martin’s blog had to move, because the IBM developerWorks blog site is being shut down.

    • Many blogs moved to WordPress or elsewhere, like Martin’s Mainframe Performance Topics. Marna’s blog remains in the same spot (Marna’s Musings).

    • This podcast probably also has to move, for similar reasons, and we are looking for another location now.

      • When we move it, the feed will have to be replaced in your podcast client, sadly.
  • Immediately people might worry about Request For Enhancements being affected – and it is not.

  • These, we thought, are the important criteria when selecting the right site to move social media to:

    • Cost: ideally it’d be free, but it may be worth paying a few dollars a month for better service and facilities.

    • Good ecosystem: easy to use, especially for publishing a podcast.

    • Good blog publishing tools that integrate well with writing systems, and good preview capabilities.

    • Longevity: you do not want to migrate again soon. Martin has 15 years of blogging!

    • Security: our material is public; tampering is the concern.

  • Moving the media:

    • Retrieval from old location. Martin wrote some Python to webscrape the blog. He built a list of posts and retrieved their HTML.

      • Graphics: The same Python retrieved the graphics referenced in the blog posts.

      • Podcasting needed show notes and keywords (for Search Engine Optimisation). Also audio and graphics.

      • Cross references: Martin’s blog posts have references from one post to another, both absolute and relative. And our podcast show notes have links too, which will break.

    • Re-posting. A lot of HTML editing was required. Different posts, using different authoring tools, have different generated HTML, and had post-cross-references that needed refactoring.

      • Martin’s graphics needed uploading afresh, and how they are positioned on the page had to change.
  • Redirecting Audience:

    • We really don’t want to lose our listeners and readers!

    • Martin posted progress reports on Twitter, so there would be a trace. His blog’s root URL had to change. Fortunately the old blogging site redirected to the new one – but not to individual posts.

    • Our podcast subscribers will need to specify a new feed URL, as there is no possibility of not affecting subscriptions. Watch out for an announcement.

      • New feed will likely cause all episodes to be re-downloaded.

      • Once you re-subscribe at the new location, you should be fine for a long time. However, we don’t know how many listeners we’ll lose – we don’t actually know how many people listen (e.g. on the web).

  • We try to turn such experiences into something useful.

Customer requirements

  • RFE 133491: “Write IEFC001I and IEFC002I information to SMF”

  • Abstract: At the point where IEFC001I and IEFC002I messages are produced, also write this information to SMF. The record should indicate if it was a cataloged procedure (the IEFC001I information) or an INCLUDE group, the member used, whether it came from a private or system library, and the dsname of that library, in addition to job identification information. Possibly the stepname should be included (names internal to a cataloged procedure are not needed).

  • Use Case: Organizations of long standing often have thousands of cataloged procedures. Often a large percentage of these are obsolete or never used but the mechanisms for discovering which ones should be archived or deleted do not exist as far as I can find. Being able to summarize SMF records to compare against member lists would allow us to clean up old cataloged procedures and/or INCLUDE members. This could also have security use if one suspects a malicious person had temporarily substituted a JCLLIB version of a proc.

  • Currently it is an Uncommitted Candidate, moved from the JES component to the BCP component for response.

  • The messages referenced were:

    • IEFC001I (PROCEDURE procname WAS EXPANDED USING text): The system found an EXEC statement for a procedure.

    • IEFC002I (INCLUDE GROUP group-name WAS EXPANDED USING text): The system found an INCLUDE statement to include a group of JCL statements.

  • Our thoughts:

    • Martin thought it could be useful in SMF 30. We already have some stuff about PROCs and PROC STEPs in SMF 30. Some of the information, particularly the data set name, is quite lengthy – so probably an optional section in the record, maybe repeating, and it might need an SMFPRMxx parameter. It looks useful, probably not a new record, and it needs care in designing.

    • Marna likes it for two reasons: it’s helpful in cleanup, and it has security benefits.

Future conferences where we’ll be

On the blog

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below. So it goes…

Normal Service Resumed

I’m sat in a nice coffee shop in Wallingford, relaxing after an interesting few weeks.

Over the past few weeks I’ve migrated my blog over to WordPress. If you’re reading this you’ve followed me over so a big thank you.

I migrated 526 blog posts. This number frankly astonishes me, though it’s been almost 15 years so that’s only 30 something a year.

The topics have varied, which is why I stuck the commas in the title some time ago. In the migration I made it official.

Meanwhile, a few years ago, my dear friend Marna Walle joined me in using the “Mainframe, Performance, Topics” name – for our podcast series.

The focus of the blog has not changed at all. It’s just the “social conditions” that have changed: I’m funding the blog – on WordPress. This gives me very little more latitude but does mean I can take it with me – into (eventual) retirement.

The process of writing still needs sorting out – but my experience of migration tells me that WordPress is going to be a good place to be. I certainly can confect HTML and upload pictures and even PDFs. But WordPress offers a wider ecosystem. For example, I’m writing this across Mac, iPad and iPhone using the very excellent Drafts app, and I know Drafts can publish direct to WordPress. (Drafts’ automation might be handy in helping me write, actually.)

So expect even more

  • Mainframe whatever I want to talk about
  • Performance whatever I want to talk about
  • Topics whatever I want to talk about

You get the picture: It’s whatever I want to talk about. 🙂

So normal service resumed, then. 🙂

More On Native Stored Procedures

(Originally posted 2019-08-25.)

This follows up on Going Native With Stored Procedures?, and it contains a nice illustrative graph.

I could excuse a follow up so soon with the words “imagine how unreadably long the post would be had I written this all as one piece”.

However, the grubby truth is I got to write some code I didn’t expect to just yet. And this post became possible as a result.

Where Were We?

The gist of the previous post is:

  • Non-native stored procedures aren’t zIIP eligible, regardless of their caller’s eligibility.
  • You can see the level of zIIP eligibility for DDF and note if it looks less than you might expect or desire.

And that’s where we (and my code) left it. A useful place to be but not the best that could be done.

The New Code

So, for the very same study as in Going Native With Stored Procedures?, I began to dig, and to teach my code new tricks.

I wanted to know more about the stored procedures that weren’t zIIP-eligible.

To do that would take processing of package-level Db2 Accounting Trace (SMF 101).

Package-Level SMF Records

There are two types of SMF 101:

  • IFCID 3 – Plan-Level Accounting
  • IFCID 239 – Package-Level Accounting

Each IFCID 239 record contains up to 10 QPAC Package Sections[1], each describing one package name. I emphasise “name” because you don’t get one each time you call a stored procedure with that name. Instead you get one section for all calls that invoke that package.

My Old Package-Level Code

To be honest, I stopped developing my package-level code earlier than I wanted to. The code takes an IFCID 239 record and “flattens” it, with each Package Section placed at a fixed position in the output record.

So, I have records still with up to 10 package names in them.

If you’d sent me Package-Level Accounting for a study involving DDF my code flattened the records and left it at that. There was no reporting.

My New Package-Level Code

Now I have code that takes a 10-package record and turns it into 10 1-package records[2]. I also have some reporting.
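The flattening idea is easy to sketch in awk. This is a toy version – my actual records aren’t CSV, and the layout here is invented purely for illustration – turning one record carrying up to 10 package-name slots into one record per non-empty slot:

```shell
# Each input line: key, then up to 10 package-name slots (blank if unused).
# Emit one key,package line per non-empty slot.
awk -F, '{ for (i = 2; i <= NF; i++) if ($i != "") print $1 "," $i }' <<'EOF'
REC1,PKGA,PKGB,,,,,,,,
REC2,PKGC,,,,,,,,,
EOF
```

The real thing also has to carry the CPU fields along with each emitted record, of course.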

The interesting bit is the reporting.

I create a CSV file, sorted by subsystem and, within that, by GCP CPU. Each line is a separate Correlation ID and Package combination. There is a nice field in the QPAC section (QPACAAFG) which says which type of package it is. Here are the values QPACAAFG can take:

Code     Package Type
X’0000’  Regular package
C’01’    Non-Native Stored Procedure
C’02’    User-Defined Function (UDF)
C’04’    Native Stored Procedure
C’05’    Inline UDF

Notice anything strange about the above table?

I think the X’0000’ is odd but it’s probably indicating “this field not filled”.

In any case you can readily tell what kind of package we have.

When I import the CSV file into Excel (and hold my nose) 🙂 I get a useful spreadsheet[3]. I define a field which is the concatenation of the package type and its name. That enables me to produce a nice graph. Here’s an example.

It shows CPU seconds – both GCP and zIIP – for the 20 packages with the biggest GCP CPU.

I’ve obfuscated all but one package name. We’ll get to that. But here are some observations:

  1. This package is a non-native stored procedure. There is very little zIIP eligibility. (Why there is any zIIP eligibility is unclear.)
  2. This package is a native stored procedure. It has about 60% zIIP eligibility.
  3. This package is well known – SYSLH200 – so I’ve not obfuscated it. It also shows about 60% zIIP.
  4. This is a second native stored procedure – again showing good zIIP eligibility.

Most of the rest of the packages are non-native stored procedures, with a couple of native ones. The zIIP usage is much as you would expect.

Let’s return to SYSLH200. It’s a Db2-supplied default package. You see it a lot. In a different subsystem it’s – by miles – the biggest package. This would be entirely normal for a subsystem that wasn’t a heavy user of explicit DDF packages, such as stored procedures.

Here’s a table of such package names:

Package Name  Description
SYSSHxyy      Dynamic placeholders – small package WITH HOLD
SYSSNxyy      Dynamic placeholders – small package NOT WITH HOLD
SYSLHxyy      Dynamic placeholders – large package WITH HOLD
SYSLNxyy      Dynamic placeholders – large package NOT WITH HOLD
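Because the names follow a strict prefix pattern, spotting them is easily scripted. Here’s a small shell helper derived from the table above (my own sketch, not part of my production code):

```shell
# Classify a Db2 dynamic-placeholder package by its name prefix
classify_package() {
  case "$1" in
    SYSSH*) echo "small package WITH HOLD" ;;
    SYSSN*) echo "small package NOT WITH HOLD" ;;
    SYSLH*) echo "large package WITH HOLD" ;;
    SYSLN*) echo "large package NOT WITH HOLD" ;;
    *)      echo "not a dynamic-placeholder package" ;;
  esac
}

classify_package SYSLH200   # prints: large package WITH HOLD
```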

Now that I can spot them, I’ve added this table to my bank of standard slides to use in customer engagements. I’ll add other well-known package names over time.

For this subsystem it’s clear the customer has some familiarity with Native Stored Procedures, whether accidentally or intentionally. (It’s possible the native ones were supplied by a vendor and the install script set up the Db2 Catalog to enable them.) But this customer has some way to go – assuming the remaining major stored procedures can and should be converted to native.


This code is going into Production – just as soon as I can find the time to convert the JCL to an ISPF File Tailoring skeleton.

But I’m already using it so it is in a sense in Production.

So if I ask you for package-level Db2 Accounting Trace this will be one of the good uses I’ll put it to.

The Journey Continues

I’m sure there’ll be “another thrilling instalment” 🙂 in the DDF saga. I just have no idea right now what it would be.

Stay tuned. 🙂

  1. Mapped by mapping macro DSNDQPAC in the SDSNMACS library shipped with Db2.  ↩

  2. I’ll admit it’s not the most efficient code, involving two passes over the data; one day I’ll probably rework my Assembler DFSORT E15 exit to emit 10 records and eliminate a pass over the data.  ↩

  3. Well, useful when I’ve taken each subsystem and plonked it in its own sheet manually 😦 . If anyone knows how to automate this sort of thing I’d love to know.  ↩

Making The Connection?

(Originally posted 2019-08-11.)

I wrote the last blog post on my way to my first SHARE conference. I’m writing this one on the way back.

So a big thank you to all who made me feel so welcome in Pittsburgh. In many cases it was good to meet up with long standing friends; in some cases people I’ve known a long time but never actually met. I hope I’ll be back[1].

Right now the team is working on a customer study, one component of which is looking at big batch CPU consumers. And I think I’ve just solved a long-standing issue of interpretation. So I’m sharing the thought with you.

It concerns batch job steps with a TSO step program name – IKJEFT01, IKJEFT1B, or IKJEFT1A. There are two main cases here:

  1. Those that call Db2 – through the TSO connection.
  2. Those that don’t.

The latter are usually something like REXX, or they could be compiled / assembled programs called from the Terminal Monitor Program (TMP) in Batch.

It would be good to know which is which: there’s no use tuning the SQL of a job step that doesn’t go to Db2; you’re better off tuning the REXX or program code. As it happens, I am perfectly capable of tuning REXX programs; I’m not so hot at tuning SQL. In any case I’m looking to advise the customer as to which is indicated for any given heavy CPU step.

A Previous Method

Previously we would’ve used our code which matches SMF 101 (DB2 Accounting Trace) with SMF 30 Step-End records (Subtype 4) to detect whether a job step accesses a Db2 subsystem or not. This is the most comprehensive way but it’s quite likely Accounting Trace isn’t turned on for the subsystem the job step attaches to. Further, we don’t even know which subsystem it is.

So this is OK – and in practice we’ve used this method for a long time.

By the way, for TSO Attach Batch the Db2 Correlation ID is the job name[2].

A More Scalable Method

There is a more scalable way of distinguishing between TSO job steps that access Db2 and those that don’t. And it’s more scalable because it doesn’t require Db2 Accounting Trace. Further, it yields the subsystem name.

It uses only SMF 30 records.

I’ve mentioned the SMF 30 Usage Data Section before: If an address space accesses Db2 the record will have at least one Usage Data Section pointing to Db2. The Product Name field will say “DB2” and the Product Qualifier field will contain the name of the subsystem. (The Product Version field will tell you which version of Db2 it is, which is also handy.)

Today I tested the following query:

‘Show me all the SMF 30-4 records with a Program Name field beginning “IKJEFT” and with a Usage Data Section that points to Db2.’

This query only uses SMF 30. It ran quickly and produced an interesting list. It would be a small matter to turn this into the following query:

‘Show me all the SMF 30-4 records with a Program Name field beginning “IKJEFT” but without a Usage Data Section that points to Db2.’

The first query yields the first case above: TMP job steps that attach to Db2. The second query yields the other case: Non-Db2 TMP steps.
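The shape of the second query is easy to mock up. Here’s a toy awk version – the CSV layout is invented for illustration, and is not my actual SMF tooling – selecting TMP steps with no Db2 Usage Data Section:

```shell
# Toy input: jobname,program,has_db2_usage_section (Y/N).
# Print jobs whose program begins IKJEFT but which have no Db2 Usage Data Section.
awk -F, '$2 ~ /^IKJEFT/ && $3 == "N" { print $1 }' <<'EOF'
JOBA,IKJEFT01,Y
JOBB,IKJEFT1B,N
JOBC,IEFBR14,N
EOF
```

This prints just JOBB – the non-Db2 TMP step.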

So now we can more usefully discuss any step whose Program Name begins “IKJEFT” – by working out which case it is.

I consider this a nice result – which is why I’m sharing it with you.

And, yes, if the step goes to Db2 we can dive into Accounting Trace to figure out:

  • If there is after all any non-Db2 CPU to tune out.
  • Any other buckets of time worth tuning.

  1. Both to Pittsburgh and SHARE, in case you wondered.

  2. The observant will note that Correlation ID is up to 12 characters and job name is at most 8. The Correlation ID pads to the right with blanks up to 12.

Going Native With Stored Procedures?

(Originally posted 2019-08-04.)

I hope the term “Going Native” isn’t considered offensive. If it is my defence is that I didn't coin the term “Native Stored Procedures” and that’s what I’m alluding to.

This post is about realising the benefit of Native Stored Procedures for DDF callers. Most particularly, assessing the potential benefit of converting Stored Procedures to Native.

It’s not been long since I wrote about DDF. (My last DDF post was in 2018: DDF TCB Revisited.) It’s been rather longer since I last wrote about Db2 Stored Procedures: 2007’s WLM-Managed DB2 Stored Procedure Address Spaces is the latest I can find[1].

You might ask “what on earth is a z/OS Performance guy doing talking about Db2?” In “Low Self Esteem Mode” 🙂 I might agree, but I’ve always thought I could pull off “one foot in each camp” when it comes to Db2 and z/OS. “I consider it a challenge…” 🙂

Seriously, z/OS folks have quite a bit to say that can help Db2 folks, if they try. And so we come to a recent example.

We’ll get to Stored Procedures presently[2]. But let me build up to it.

And, yes, this post is inspired by a recent customer case. And, no, it didn’t alert me to the topic; It just was a good excuse to talk about it.

zIIP Eligibility And DDF

In principle, a DDF workload should see roughly 60% zIIP Eligibility. As I said in DDF TCB Revisited

“The line in the graph – or rather the round blobs – shows zIIP eligibility hovering just under 60% – across all the DB2 subsystems across both customers. (One of the things I like to do – in my SMF 101 DDF Analysis Code – is to count the records with no zIIP eligibility.)”

That last sentence is interesting, and I don’t think I spelt it out sufficiently: For quite a while now a DDF thread is either entirely zIIP-eligible or not at all.

I glossed over something in that post – which was the right thing to do as that set of data didn’t display an important structural feature: If your DDF thread calls a Non-Native Stored Procedure (SP) or User-Defined Function (UDF) it loses zIIP eligibility for the duration of that call.

Native Stored Procedures

You can write SPs and UDFs in pretty much any language you like, and “shell out” to anywhere you like3. For example, you can write them in REXX, Java, COBOL, PL/I. And SPs and UDFs can call each other. Lovely for reuse.

These capabilities have been around for at least 20 years. More recently – though progressively enhanced over recent Db2 versions – a different set of capabilities has emerged: Native Stored Procedures. You write these in an extension to SQL called SQL PL. The term “Native” here refers to using the built-in language, rather than an external programming language.

Native SPs run in the Db2 DBM1 address space – but with the caller’s WLM attributes. Non-Native SPs run in WLM Stored Procedures Server address spaces (as described by WLM-Managed DB2 Stored Procedure Address Spaces and the Redbook it references.)

For the rest of this post I shall talk of SPs – for brevity. UDFs and Triggers are generally similar.

Why Do Native Stored Procedures Matter?

As a Performance person, Native Stored Procedures matter to me because there’s a fundamental difference in zIIP eligibility:

  • When a DDF thread calls a Native SP the zIIP eligibility is preserved.
  • When a DDF thread calls a Non-Native SP the CPU to run the SP is not zIIP eligible.
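Those two bullets amount to a very simple model. Here’s a toy sketch in Python – the function name and the 60% figure are purely illustrative, not any real API:

```python
# Toy model of the zIIP eligibility rules described above.
# Nothing here is a real API; 0.60 is the rough DDF eligibility
# figure discussed earlier in the post.

def ziip_eligible_cpu(thread_cpu, sp_cpu, sp_is_native, eligibility=0.60):
    """zIIP-eligible CPU for a DDF thread that spends sp_cpu
    of its thread_cpu seconds inside a stored procedure call."""
    if sp_is_native:
        # Native SP runs in DBM1 with the caller's attributes:
        # eligibility is preserved across the call.
        return thread_cpu * eligibility
    # Non-Native SP: the SP portion is not zIIP-eligible at all.
    return (thread_cpu - sp_cpu) * eligibility

print(ziip_eligible_cpu(10.0, 4.0, sp_is_native=True))             # 6.0
print(round(ziip_eligible_cpu(10.0, 4.0, sp_is_native=False), 2))  # 3.6
```

So, for a thread spending 40% of its CPU in an SP, going Native raises the zIIP-eligible portion from 3.6 to 6.0 seconds in this made-up example.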

So, there’s an economic advantage to SPs being Native. Actually it’s twofold:

  • The obvious one of zIIP CPU having lower software (etc) costs than GCP CPU.
  • When work switches between zIIP eligibility and ineligibility it doesn’t immediately get undispatched on one processor and redispatched on another. However, if it does – for any reason – get redispatched, its eligibility is checked. That could well cause redispatch on a different type of processor. And a different processor brings the potential for caching loss.

In any case, we’d want to maximise zIIP eligibility, wouldn’t we?

A Case In Point

I said early on in this post that I have a real customer case to discuss.

I did my standard “let’s see how much GCP and how much zIIP this Db2 subsystem uses” analysis. The results were emphatically not 60% zIIP eligible, 40% ineligible.

I examined this at three levels:

  • DDF Service Class, using SMF 72-3 Workload Activity data.
  • DIST Address Space, using SMF 30 Interval Independent Enclave data.
  • Db2 DDF Application, using SMF 101 Db2 Accounting Trace data.

They all agreed on transaction rates, GCP CPU, and zIIP CPU.

The customer has 9 subsystems, across 3 LPARs. These are not cloned subsystems, though the same application “style” – JDBC4 – dominated the DDF traffic in all cases.

Of these 9 subsystems only 1 approached a 60%/40% split. The rest varied up to 90% or so GCP.

Db2 Accounting Trace is helpful here – if you examine Class 2 CPU times:

  • You get overall TCB time on a GCP [1]
  • You get overall zIIP CPU time [2]
  • You get Stored Procedures CPU time [3]
  • You get UDF CPU time [4]

So I took away [3] and [4] from [1]. I compared the result to [2]. With this adjusted measure of GCP CPU Time I got extremely close to 60% being zIIP eligible – for all 9 subsystems.
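That arithmetic is trivial but worth sketching, with invented bucket values (these are the numbered buckets above, not real SMF 101 field names):

```python
# Class 2 CPU buckets from Db2 Accounting Trace - values invented.
gcp_tcb_cpu = 100.0  # [1] overall TCB time on a GCP
ziip_cpu    = 85.0   # [2] overall zIIP CPU time
sp_cpu      = 35.0   # [3] Stored Procedures CPU time
udf_cpu     = 8.0    # [4] UDF CPU time

# Take the (never zIIP-eligible) SP and UDF CPU away from the GCP time...
adjusted_gcp = gcp_tcb_cpu - sp_cpu - udf_cpu

# ...then compute what fraction of the remaining work was zIIP-eligible.
ziip_share = ziip_cpu / (ziip_cpu + adjusted_gcp)
print(f"{ziip_share:.0%}")  # close to 60% for these made-up numbers
```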

As an aside, Stored Procedures Schedule Wait and UDF Schedule Wait time buckets can tell you if there are delays in starting the relevant server address spaces. In this data it was notable that the Test Db2 subsystems were more prone to this than Production. As the machines ran pretty busy it’s good to see the delays being directed to less important work5.

So, Shall We Go Native?

“Not so fast” applies – both as an exclamation and as an expectation:

  • Because these are generally comprehensive application changes it’s going to take a while to convert.
  • It might not even be worth it – at least not for 100% of Non-Native SPs.

Here’s how I think you should proceed:

  1. Figure out how much of a problem this is (and you’ve seen that above).
  2. Use Db2 Accounting Trace at 2 levels to establish which moving parts might be worth reworking:
    • IFCID 3 would show you which applications, including client computer.
    • IFCID 239 would show you which package burns the most CPU, and whether it’s an SP, UDF, or Trigger.
  3. Assess – for each of the “high potential” SPs – which applications call them. Of course, only some of them will benefit – as SPs can be called from a wide variety of non-DDF applications as well as DDF ones.
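Step 2 might boil down to a ranking exercise. Here’s a sketch, assuming the IFCID 239 (package-level) records have already been parsed into tuples – the package names are invented:

```python
from collections import defaultdict

# Hypothetical pre-parsed IFCID 239 records:
# (package name, package type, CPU seconds).
records = [
    ("ORDERSP1", "SP",      120.0),
    ("ORDERSP1", "SP",      140.0),
    ("PRICEUDF", "UDF",      30.0),
    ("AUDITTRG", "TRIGGER",  10.0),
]

# Sum CPU by package to surface the "high potential" conversion targets.
cpu_by_package = defaultdict(float)
for name, ptype, cpu in records:
    cpu_by_package[(name, ptype)] += cpu

for (name, ptype), cpu in sorted(cpu_by_package.items(),
                                 key=lambda kv: kv[1], reverse=True):
    print(f"{name:<8} {ptype:<8} {cpu:8.1f}")
```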

If you carry out the above steps you should see whether it’s worth it to convert to Native.

One final word of caution: Many Non-Native SPs can’t be converted to SQL PL, so any discussion with your Db2 and Applications colleagues needs to be sensitive to that.

  1. I need a Content Management System (CMS) as you, dear reader, might find a more recent one. 🙂 

  2. To “break the fourth wall” a moment, this post is more or less writing itself (at what I’m told is 10972m) so it might or might not let me get to the point. 🙂 

  3. Including to other services, even ones out on the web. 

  4. In Db2 Accounting Trace you can sum up by e.g. Correlation ID. JDBC shows up as “db2jcc_appli”. 

  5. And Workload Activity Report and WLM bear out it is less important work that is getting hit. 

Engineering – Part Three – Whack A zIIP

(Originally posted 2019-08-02.)

(I’m indebted to Howard Hess for the title. You’ll see why it’s appropriate in a bit.)

Since I wrote Engineering – Part Two – Non-Integer Weights Are A Thing I’ve been on holiday and, refreshed, I’ve set to work on supporting the SMF 99 Subtype 14 record.

Working with this data is part of the original long-term plan for the “Engine-ering” project.

Recall (or, perhaps, note) the idea was to take CPU analysis down to the individual processor level. And this, it was hoped, would provide additional insights.[1]

Everything I’ve talked about so far – in any medium – has been based on one of two record types:

  • SMF 70 Subtype 1 – for engine-level CPU busy numbers, as well as Polarisation and individual weights.
  • SMF 113 Subtype 1 – for cache performance.

SMF 99 Subtype 14 provides information on home locations for an LPAR’s logical processors. It’s important to note that a logical processor can be dispatched on a different physical processor from its home processor – especially, probably, in the case of a Vertical Low (VL)[2]. I will refer to such things as a logical processor’s home location as “processor topology”.

It should be noted that SMF 99–14 is a cheap-to-collect, physically small record. One is cut every 5 minutes for each LPAR it’s enabled on.

Immediate Focus

Over the past two years a number of the “hot” situations my team has been involved in have involved customers reconfiguring their machines or LPARs in some ways. For example:

  • Adding physical capacity
  • Shifting weights between LPARs
  • Varying logical processors on and off
  • Having different LPARs have the dominant load at different times of day

All of these are entirely legitimate things to do but they stand to cause changes in the home chips (or cores) of logical processors.

The first step with 99–14 has been to explore ways of depicting the changing (or, for that matter, unchanging) processor topology.

I’ll admit I’m very much in “the babble phase”[3] with this, experimenting with the data and depictions.

So, here’s the first case where I’ve been able to detect changing topology.

Consider the following graph, which is very much zoomed in from 2 days of data – to just over an hour’s worth.

Each data point is from a separate record. From the timestamps you can see the interval is indeed 5 minutes. This is not the only set of intervals where change happens. But it’s the most active one.

There are 13 logical processors defined for this LPAR. All logical processors are in Drawer 2 (so I’ve elided “Drawer” for simplicity.)

Let me talk you through what I see.[4]

  1. Initially two logical processors are offline. (These are in dark blue.) Cluster 2 Chip 1 has 7 logical processors and Cluster 2 Chip 2 has 2.
  2. A change happens in the fifth interval. One logical processor is brought online. Now Cluster 2 Chip 1 has 6 and Cluster 2 Chip 2 has 4. Bringing one online is not enough to explain why Cluster 2 Chip 2 gained two, so one must’ve moved from Cluster 2 Chip 1.
  3. In interval 7 another logical processor is brought online. The changes this time are more complex:
    • A logical processor is taken away from Cluster 1 Chip 1.
    • A logical processor appears on Cluster 1 Chip 2.
    • Two appear on Cluster 1 Chip 3.
    • One is taken away from Cluster 2 Chip 2.
  4. Towards the end, as each of the two logical processors is offlined it is taken away from Cluster 2 Chip 2.
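Detecting moves like these across successive 99–14 snapshots is essentially a dictionary diff. Here’s a minimal sketch with invented data – it loosely mirrors the first change above, with Drawer elided as in the graph:

```python
# Each interval snapshot maps logical CPU number -> (cluster, chip).
# The data is invented, loosely mirroring the first move described.

def diff_topology(before, after):
    """Yield a line of text per topology change between two snapshots."""
    for cpu in sorted(set(before) | set(after)):
        b, a = before.get(cpu), after.get(cpu)
        if b is None:
            yield f"CPU {cpu} online at cluster {a[0]} chip {a[1]}"
        elif a is None:
            yield f"CPU {cpu} offline from cluster {b[0]} chip {b[1]}"
        elif a != b:
            yield (f"CPU {cpu} moved cluster {b[0]} chip {b[1]} "
                   f"-> cluster {a[0]} chip {a[1]}")

interval_4 = {0: (2, 1), 9: (2, 2), 10: (2, 2)}
interval_5 = {0: (2, 2), 5: (2, 2), 9: (2, 2), 10: (2, 2)}  # one CPU online

for change in diff_topology(interval_4, interval_5):
    print(change)
```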

The graph is quite a nice way of summarising the changes that have occurred, but it is insufficient.

It doesn’t tell me which logical processors moved.

What we know – not least from SMF 70–1 – is the LPAR’s processors are defined as:

  • 0 – 8 General Purpose Processors (GCPs)
  • 9 – 12 zIIPs

The Initial State

With 99–14, diagrams such as the following become possible:

This is the “original” chip assignment – for the first 4 intervals.

This is very similar to what the original WLM Topology Report Tool[5] would give you. (I claim no originality.)

I drew this diagram by hand; I can’t believe it would be that difficult for me to automate – so I probably will.[6]

After 1 Set Of Moves

Now let’s see what it looks like in Interval 5 – when a GCP was brought online:

GCP 5 has been brought online in Cluster 2 Chip 2, alongside the 2 (non-VL) zIIPs. But also GCP 0 has moved from Cluster 2 Chip 1 to Cluster 2 Chip 2.

What’s The Damage?[7]

Now, what is the interest here? I see two things worth noting:

  • A processor brought online has empty Level 1 and Level 2 caches.
  • A processor that has actually moved also has empty Level 1 and Level 2 caches.

Moving within the same node/cluster or drawer probably isn’t too bad. (And within a chip – which we can’t see – it’s even less bad, as it’s the same Level 3 cache.) Further afield is worse.

Of course the effects are transitory – except in the case of VLs being dispatched all over the place all the time. Hence the desire to keep them parked – with no work running on them.

After The Second Set Of Moves

Finally, let’s look at what happened when the second offline GCP was brought online – in Interval 7:

GCP 8 has been brought online in Cluster 1 Chip 2. But also zIIP 10 has moved from Cluster 2 Chip 2 to Cluster 1 Chip 1. Also zIIPs 11 and 12 have moved from Cluster 1 Chip 1 to Cluster 1 Chip 3.

This information alone (99–14) isn’t enough to tell you if there was any impact from these moves. However, you can see that in neither case was a “simple” varying of a GCP online quite so simple. Both led to other logical cores moving. This might be news to you; it certainly is to me – though the possibility was always in the back of my mind.

Note: This isn’t a “war story” but rather using old customer data for testing and research. So there is no “oh dear” here.

Later On

To really understand how a machine is laid out and operating you need to consolidate the view across all the LPARs. This requires collecting SMF 99–14 from them all. This, in fact, is a motivator for collecting data from even the least interesting LPAR. (If its CPU usage is small you might not generally bother.)

But there’s a snag: Unlike SMF 70–1, the machine’s plant and serial number isn’t present in the SMF 99–14 record. So to form a machine-level view I have two choices:

  • Input into the REXX exec a list of machines and their SMFIDs.
  • Have code generate a list of machines and their SMFIDs.

I’ll probably do the former first, then the latter.
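The former might be nothing more than a lookup table the exec reads. Here’s a sketch in Python rather than REXX, with invented SMFIDs and machine names:

```python
from collections import defaultdict

# Invented SMFID -> machine mapping; in reality this would be the
# list fed into (or generated for) the exec.
machine_by_smfid = {"SYSA": "MACH1", "SYSB": "MACH1", "SYSC": "MACH2"}

# Invert it to group each machine's LPARs for a machine-level view.
lpars_by_machine = defaultdict(list)
for smfid, machine in sorted(machine_by_smfid.items()):
    lpars_by_machine[machine].append(smfid)

print(dict(lpars_by_machine))
```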

What also needs doing is figuring out how to display multiple LPARs in a sensible way. There is already a tool doing that. My point in replicating it would be to add animation – so when logical processors’ home chips change we can see that.

Maybe Never

SMF 99–14 records aren’t cut for non-z/OS LPARs, which is a significant limitation. So I can’t see a complete description of a machine. For that you probably need an LPAR dump which isn’t going to happen on a 5-minute interval.

However, for many customer machines, IFL- and ICF-using LPARs are on separate drawers. It’s a design aim for recent machines but isn’t always possible. For example, a single-drawer machine with IFLs and GCPs and zIIPs will see non-z/OS LPARs sharing the drawer. Most notably, this is what a z14 ZR1 is.

One other ambition I have is to drive down to the physical core level. On z14, for instance, the chip has 10 physical cores, though not all are customer-characterisable. But this won’t be possible unless the 99–14 is extended to include this information. This would be useful for the understanding of Level 1 and Level 2 cache behaviour.

Finally, there is no memory information in 99–14. I would dearly love some, of course.


While 99–14 doesn’t completely describe a machine, it does extend our understanding of its behaviour by relating z/OS logical processors to their home chips. Taken with 70–1 and 113–1, this is a rather nice set of information.

Which prompts lots of unanswerable questions. But isn’t that always the way? 🙂

A question you might have asked yourself is “do I need to know this much about my machine?” Generally the answer is probably “no”. But if you are troubleshooting performance or going deep on LPAR design you might well need to. Which is why people like myself (and the various other performance experts) might well be involved anyway. So – for us – the answer is “yes”.

The other time you might want to see this data “in action” is if you are wondering about the impact of reconfigurations – as the customer whose data I’ve shown ought to be. 99–14 won’t tell you about the impact but it might illuminate the other data (70–1 and 113). And together they enhance the story no end.

  1. Actually it’s a matter of “legitimised curiosity”. 🙂  ↩

  2. I’ll follow the usual convention of “VL” for “Vertical Low”, “VM” for “Vertical Medium ”, and “VH” for “Vertical High”.  ↩

  3. See, perhaps, Exiting The Babble Phase?.  ↩

  4. You might well see something different. And that’s where the fun begins.  ↩

  5. Which I now can’t find. 😦 and couldn’t run even if I could find it.  ↩

  6. Actually, I’ve experimented with creating such diagrams using nested HTML tables. It works fine. The idea would be to write some code (probably Python) to generate such (fiddly) HTML from the data.  ↩

  7. In UK English, at least, “What’s the damage” means “what’s the bill”.  ↩

Buttoned Up

(Originally posted 2019-06-18.)

In Automation On Tap I talked about NFC tags and QR codes as ways of initiating automation.

This post is about a different initiator of automation.

I recently acquired the cheapest (and least functional) Elgato Stream Deck. This is a programmable array of buttons. My version has 6 buttons but you can buy one with 15 and one with 32 buttons.

A Previous Attempt Didn’t Push My Buttons

A while back I bought an external numeric keypad for my MacBook Pro. When I was editing our podcast with Audacity on the Mac I used this keypad to edit. It’s just numeric, with a few subsidiary keys.

What makes this programmable is that the Mac sees external keypad keys as distinct from their main-keyboard counterparts. So the “1” key on the external keypad has a different hardware code from the “1” key on the main keyboard. Keyboard Maestro macros can be triggered when keys are pressed. So I wrote macros to control Audacity, triggered by pressing keys on the external keypad.

But there was a problem: How do you know what each button does? It’s quite a pretty keypad – as you can see:

So I really didn’t want to stick labels on the key tops. I also tried to make my own template but there was an obvious difficulty: It’s hard to design one that tells you what the middle buttons in a cluster do.

There’s another problem: Suppose I set up automation for multiple apps – which Keyboard Maestro lets you do. Then I need a template for each app – as each will probably have its own key pad requirements. These templates are fragile and in any case this is a fiddly solution.

So this was good – up to a point. It certainly made podcast editing easier. But that was all.

And then I moved podcast editing to Ferrite on iOS, discussed here. And this keypad is now in a drawer. 😦

The One That I Want

Cue the Elgato Stream Deck Mini. First I should say there are other programmable keypads but this is the one that people talk about. I’d say with good reason.

As you can see, it’s corded and plugs into a USB A port.[1] Power to it is supplied by this cable.

It’s easy to set up, literally just plugging it in. To make it useful, though, you need their app – which is available for Mac or Windows. It’s a free app so no problem.

With their app you drag icons onto a grid that represents the Stream Deck’s buttons. Here’s what mine generally looks like when powered up and the Mac unlocked.

The key thing[2] to notice is the icons on the key tops. This is already a big advance on paper templates and ruined shiny key pads. I didn’t have to do anything to get these icons to show up[3]. What you can see is:

  • 4 applications – Sublime Text, Drafts, Airmail and Firefox.
  • A folder called “More”. More 🙂 on this in a moment.
  • Omnifocus Today

The applications are straightforward, but the Omnifocus Today one is more complex. Pushing the button simulates a hot key – in this case Ctrl+Alt+Shift+Cmd+T. I’ve set up a Keyboard Maestro script triggered by this – extremely difficult to type – hot key combination. This script brings Omnifocus (my task manager) to the front and – via a couple of key presses – switches to my Today perspective.

So it’s not just launching applications. It can kick off more complex automation.

And someone built a bridge to IFTTT’s Maker Channel – which could open up quite a lot of automation possibilities. But right now I only have a test one – which emails me when I press the button.

One thing I haven’t tried yet is setting up buttons to open URLs in a browser. I don’t think I’ve got the real estate for that, with only six buttons.

Unreal Estate

In reality I probably should’ve plumped for the 15-button one[4], rather than the 6, but this has been an interesting exercise in dealing with limited button “real estate”.

So, if I press the “More” button I get this:

Because the button tops are “soft” they can change dynamically – as this demonstrates. This is much better than sticking labels on physical keys.

Two features of note:

  • The back button – top left
  • The “Still More” button bottom right

Being able to create the latter means I can nest folders within folders. This greatly increases my ability to define buttons.

The Right Profile

There’s one final trick that’s worth sharing.

Everything I’ve shown you so far has been from the Default Profile. You can set up different profiles for different apps.

I happen to have a few Keyboard Maestro macros for Excel – to make it a little less tedious to use. So I created a profile for Excel.

So when Excel is in the foreground the Stream Deck looks like this:

I’m still setting it up – and you can see two undefined (blank) buttons – but you get the general idea. The two Excel-specific buttons – using an image I got off the web[5] – kick off some really quite complex AppleScript:

  • Size Graph – which resizes the current Excel graph to a standard-for-me 20cm high and 30cm wide. This is fiddly so having a single button press to do it is rather nice.
  • Save Graph – which fiddles with menus to get to the ability to save a graph as a .PNG file with a single button press.

(In retrospect I should probably have changed the text to black – which is easy to do.)

So in real estate terms, application-specific profiles are really handy.

One pro tip: Close the Stream Deck editor if you want application-specific profiles to work. I spent ages wondering why they didn’t – until I closed the editor.


This programmable key pad with dynamically-changing keytops is a really nice way to kick off automation from a single key press (or a few if you use cascading menus).

I would go for the most expensive one you can afford – but I’ve shown you ways you can get round the 6-button constraint in the Stream Deck Mini.

Application-specific profiles are a really nice touch.

The whole thing is rather fun for an inveterate tinkerer like me. 🙂

In short, I think I’ll be throwing this in my backpack and taking it with me wherever my MacBook Pro goes.

And one day I’ll upgrade to a model with more buttons.

  1. As they say not to use a USB hub I don’t know if it would plug into USB C via an adapter.  ↩

  2. Pun intended.  ↩

  3. Except the OmniFocus one – where I had to find an icon – but even that was easy.  ↩

  4. At much greater cost, of course. Still more so with the 32-button variant.  ↩

  5. You can create your own icons using a nice editor they have, which contains quite a wide variety of icons. But Excel wasn’t one of them – so I found an image and used that. The preferred dimensions are a 72-pixel square.  ↩

Mainframe Performance Topics Podcast Episode 24 “Our Wurst Episode”

(Originally posted 2019-06-18.)

You’ll have to pardon the pun in our latest podcast episode’s title.

We were also somewhat delayed in getting this out – due to our busy schedules and a few technical gremlins. Hopefully it’s worth the wait.

It’s also quite a long episode so if you listen to it on your commute you’ll have to ask your chauffeur to drive a little more slowly. 🙂

Episode 24 “Our Wurst Episode”

Here are the show notes for Episode 24 “Our Wurst Episode”. The show is called this because we both attended the IBM TechU in Berlin, Germany, and our Topics topic is our trip report.


  • We have some feedback (again) based on our use of stereo. We now have glorious mono, based on those comments!

Follow up

What’s New

  • APAR OA55959: NEW FUNCTION – PDUU Support for HTTPS
    • AMAPDUPL: Problem Documentation Upload Utility
    • How you get a dump to IBM; it can be compressed, optionally encrypted, and sectioned into smaller data sets
    • HTTPS is important in this because a dump can contain sensitive information and FTP is not an acceptable solution for many customers
    • FTPS had issues with e.g. firewalls
    • Doesn’t look like this option has been incorporated into z/OSMF Incident Log at this time
  • Tailored Fit Pricing for IBM Z
    • Enterprise Capacity Solution
    • Enterprise Consumption Solution
    • Both different from traditional rolling four hour average model
    • For Tailored Fit Pricing, all machines must be IBM z14 Models M01-M05 or ZR1, and at IBM z/OS V2.2, or higher
    • More information here.
    • In Episode 19 Performance topic we talked about Licence-Related Instrumentation.
  • Ask MPT
    • Danny Naicker asks, “In z/OS 2.4 CSA subpool key 8–15, is it usable for user defined applications?”
    • Answer: Prior to z/OS V2.4 User Key Common Storage was available, but it was turned off by default. The downside was no control over who could use it.
      • In base V2.4 that specific capability has gone (the old system-wide switch).
      • Question probably originates from need to still use User-key CSA because of legacy stuff
      • This is where RUCSA (Restricted Use Common Service Area), a new function, comes into play. Allows you to identify applications by using a security definition.
      • Usage of RUCSA prior to V2.4 will need APAR OA56180
      • RUCSA will be offered in V2.4.
    • Thank you to Danny for a good question!

Mainframe Topic: CICS ServerPac in z/OSMF

  • IBM’s first delivery on new installation strategy, will be with CICS and associated SREL products. This is the first of many (really, all).
  • Choice on new installation strategy or old during ShopZ ordering. Choice is:
    • Old is ISPF CustomPac dialogs, or
    • New is z/OSMF Software Management and Workflows.
  • We encourage making the z/OSMF choice, as that is consistent between IBM and other vendors, and is intended to be easier.
  • Infrastructure is already available in continuous delivery PTFs, and rolled back to z/OS V2.2. This makes the driving system have the proper infrastructure so anybody can package and deliver that way.
  • More details on the z/OS installation strategy:
    • Software vendors will package similarly, in a z/OSMF Portable Software Instance,
    • Clients will be able to acquire and deploy and configure using z/OSMF.
    • z/OSMF Software Management is used for the acquisition and deployment. (“Deployment” is the new term for “installation”!)
    • z/OSMF Workflows is used for configuration. You would see the old ServerPac batch jobs as steps in a Workflow.
  • All software that you ordered as a ServerPac, and installed either way, will give you the same (or hopefully better) equivalent installation.
  • There is an IBM Statement Of Direction that this installation choice is coming, but we do not have an exact date yet.
  • For other software ISVs, they can exploit the new z/OS installation strategy whenever they are ready.
  • Prepare now by becoming familiar with z/OSMF Software Management and Workflows

Performance Topic: DB2 And I/O Priority Queuing

  • Follow on from Screencast / Blog post topic: Screencast 12 – Get WLM Set Up Right For DB2.
  • Recent talk has been about whether to turn off I/O Priority Queuing in WLM.
  • Service classes containing DB2 subsystems are heavily I/O-sample oriented, which is unusual among service classes in a system.
  • This means access to CPU is not properly managed, as CPU & zIIP samples are few relative to I/O samples. Reminder: Most of DBM1 is now zIIP-eligible.
  • Can achieve goal even with lots of delay for zIIP or CPU, but that’s definitely not what you want.
  • To see if it is properly managed:
    • See if there are lots of CPU / zIIP Delay samples in RMF Workload Activity.
    • In Db2 you might well see Prefetch etc. engines exhausted, which could cause unwanted Sync I/Os and bad SQL performance.
      • The effect is just like if there is a real zIIP shortage.
    • Instrumentation for DB2 of relevance is Statistics Trace.
  • You don’t want to just turn off WLM I/O Priority Queuing, as it’s sysplex-wide, it might affect other work that needs it, and Db2 might actually need it.
    • As the name suggests, it gives finer control over I/O priority.
    • So, it’s a case of proceeding with caution.
  • First you need a reasonably achievable goal for the service class. Make sure you’re more or less achieving the existing goal.
  • Second, calculate what the velocity achieved would be without I/O priority queuing.
    • Can take out the Using and Delay for I/O sample counts to do this.
  • If you don’t do the analysis and act on it a shift to not using I/O Priority Queuing could have unpredictable results.
  • You would know that turning off I/O Priority Queuing was helpful by seeing evidence that WLM is managing access to CPU for Db2 better, without hurting other stuff we care about. This evidence would come from RMF Workload Activity Report data.
    • On the Db2 side maybe Statistics Trace says Prefetch etc doesn’t get turned off. Or response times get better.
  • You should evaluate or adjust the goal attainment, but that is BAU. Changing WLM always needs some care.
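The velocity recalculation in the second step uses the standard formula – Using / (Using + Delay) – minus the I/O sample counts. A sketch with invented sample counts:

```python
# Invented RMF Workload Activity sample counts.
cpu_using, cpu_delay = 400, 600    # CPU + zIIP Using / Delay samples
io_using, io_delay = 3000, 1000    # I/O Using / Delay samples

# Velocity as reported with WLM I/O Priority Queuing in effect...
with_io = 100 * (cpu_using + io_using) / (cpu_using + io_using
                                          + cpu_delay + io_delay)

# ...and what it would be with the I/O samples taken out.
without_io = 100 * cpu_using / (cpu_using + cpu_delay)

print(f"velocity with I/O samples:    {with_io:.0f}")     # 68
print(f"velocity without I/O samples: {without_io:.0f}")  # 40
```

In this made-up case a velocity goal of 68 would need resetting to something nearer 40 before turning I/O Priority Queuing off – which is exactly the analysis being advocated.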

Topics: Berlin Trip Report May 20–24

  • We both attended IBM Z TechU in Berlin, and got to see each other.
  • Marna had about six sessions.
    • The SMP/E Rookies session had fabulous attendance – 44. Some were more experienced, but most were not.
    • z/OSMF had good attendance too, about 82. More are interested in this topic, especially if you compare to just a couple of years ago.
    • Best attended was the z/OS V2.4 Preview, with about 150 people. There was excellent interest in what is coming in the new release.
  • Marna got to do a couple of things outside the conference:
    • Visiting the Reichstag was fabulous, but make sure to get a reservation.
    • Der Dom was also educational, with a walk to the top!
  • Marna did her own poster to help with z/OSMF configuration, and several people came by to chat.
  • Both Marna and Martin shared a poster about this podcast. We helped with getting one person a podcast app (on each platform), and a subscription to this podcast.
  • Martin had five sessions.
    • Two were with Anna Shugol, Engine-ering, and zHyperLink.
    • One was co-written with Anna, “2 / 4 LPARs”
    • Two were solo efforts: Parallel Sysplex Performance Topics, and Even More Fun With DDF
  • Martin also took a little time out of the conference
    • Each day took a session out to walk in the city.
    • It was interesting to wander round former East Berlin.
  • The next European IBM Z TechU is in Amsterdam May 25–29, 2020.

Customer requirements

  • RFE 131187
    • zOSMF RESTFILES PUT to remove Windows Carriage Return characters
    • Windows files contain a carriage return and line feed and the carriage return character x’0D’ is not being removed. The resulting zOS datasets therefore have a blank line after every data line that shouldn’t be there.

Future conferences where we’ll be

  • Both Marna and Martin in SHARE, Pittsburgh, August 5–9, 2019

On the blog

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below. So it goes…

Elementary My Dear Sherlock

(Originally posted 2019-06-08.)

Students of English literature won’t be alone in recognising the allusion in this post’s title. But this post isn’t about literature.[1]

Sherlocking, as described here, is a phenomenon where a developer ships something – typically an app – but then Apple comes along and announces its own version of it.

In very recent memory, examples might be:

  • Enhancements to Apple’s Reminders apps on iOS and Mac OS versus Omnifocus and other “to do list” apps – in iOS 13 and Mac OS 10.15 Catalina.
  • NFC tags kicking off Apple Shortcuts automation versus LaunchCenter Pro’s support of NFC tags (described in Automation On Tap) – in iOS 13.
  • Apple Watch Calculator and PCalc – in Watch OS 6.

But what of it?

At first sight, having your app Sherlocked must be disheartening. But that’s not the end of the story.

What’s at risk for the vendor that gets Sherlocked is subscriptions and future sales. In other words, revenue. For some apps – particularly those that don’t use a subscription model – the sales pattern might be that most sales happen early in the app’s life. So before Sherlocking happens.

But it’s not that simple. Yes, Sherlocking represents a threat but it also represents an opportunity…

… A platform vendor legitimizes a marketplace by sensitising users to the value of a function. For example the new Watch OS Calculator app will introduce users to the idea of a calculator on their wrist.[2] But if PCalc were a better calculator than the Apple one it could still sell well. Fortunately it is. 😁

A built-in app gives one you have to buy a run for its money and free beats fee for many customers – if the basic function is good enough.

But the built-in (Sherlocking) app is usually relatively basic, so a purchased app wins by differentiating itself from that baseline. The message: Sherlocked apps must up their game.

(For me, while I wouldn’t want to wear the “Power User” badge the basic function is rarely good enough. For example I use the excellent Overcast podcasting app instead of the Apple one and it’s inconceivable I wouldn’t use it and hardly conceivable I wouldn’t pay for the premium version.)

Having said that, first-party apps have some additional opportunities for integration, so could have some advantages. It’s difficult to compete against that. There are private APIs that only Apple can use – but those tend to open up over time.

Basic infrastructure – for example the Reminders database – done right allows third parties to build on it. Reminders data is shareable across platforms and with third party apps designed to take advantage of the database.

A good example of a third party app using the Reminders database is Goodtask. Goodtask also illustrates one downside of building on the built-in database: its developers had to use a lot of ingenuity to get round the database’s limitations. As they articulate here, they store extra task data in the Notes field and had to invent their own metadata format to do it.

Unfortunately Mail on iOS doesn’t have this open database, and nor does Music. So email apps have to use their own databases – which is a real shame. It means, for instance, you can’t operate on the same email account with a mix of built-in and third party app functions. With Reminders and Goodtask you can.

Here’s an example of why I would want Mail to be open: My favourite email client – on Mac and iOS – is Airmail. I favour it because it has more automation hooks than the default Mail app. But I can’t use it with my work email account because it doesn’t share the default Mail app’s database.

Automation is one area where Apple – as the platform vendor – has an advantage. Those of us who are into automation are still waiting for something better than x-callback-url[3], as Shortcuts doesn’t provide a general automation mechanism for third-party apps. A more general mechanism would certainly help. But I digress.

In summary, Sherlocking does represent a threat but also an opportunity. The way to make it the latter is for Sherlocked app developers to continually innovate – to differentiate their apps from Apple’s. Thankfully the best developers are fleet of foot; I don’t envy their position, but the best will survive.

If they do innovate at speed, the net result is that the consumer benefits.

  1. Thankfully, given my background in Science and Technology. 🙂 But I’m not a complete philistine; Bits of me are missing. 🙂  ↩

  2. Not an entirely new idea, of course.  ↩

  3. I like x-callback-url so much I can often be seen sporting the t-shirt. 🙂  ↩