The “Gas Gauge” was indeed more than a gimmick

(Originally posted 2007-10-12.)

Back in April this blog entry incorporated a picture of the z9 HMC display that includes the “gas gauge” (or “power meter” if you prefer).

This press release details a rather more serious purpose for the thing. And thanks to John McKown for pointing out the press release on the IBM-MAIN listserver.

New Support for BatchPipes/MVS

(Originally posted 2007-10-11.)

As many of you know I’ve been very fond of BatchPipes/MVS (aka “Pipes”) down the years (17 to be precise). So I’m pleased to see APAR PK34251 (ADDING BATCHPIPES SUBSYS SUPPORT TO TEMPLATE UTILITY), which describes some new support in the DB2 Load Utility (as driven by the Template utility) that makes it much easier to use with BatchPipes/MVS.

(For reference here’s the BatchPipes For OS/390 Version 2 Release 1 announcement letter.)

I can see a number of scenarios where the ability to load from a pipe would be handy, potentially speeding up the load. (Whether it actually does will depend on all the other factors that govern the Load utility’s speed.) The most usual scenario is an unload step – perhaps using a utility or SQL – some transformation, and then a (re)load. This might or might not be into the same DB2 subsystem, but probably won’t be into the same DB2 table. The unload could be piped into the transformation step. If the transformation step doesn’t involve a sort then all three – the unload, the transformation and the load – could conceivably be done in parallel. (If it does involve a sort then the sort’s input phase would be overlapped with the unload and the sort’s output phase with the load, leaving just the (usually small) intermediate sort merge phase not overlapped.)

For what it’s worth DFSORT has been able to detect pipes automatically for input and output – for 10 years now. 🙂 When it detects a pipe it switches to BSAM to process it, rather than using EXCP. (BSAM and QSAM are the only supported access methods for Pipes – as well as for Extended Format Sequential data sets, whether striped, compressed or not.)
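To make the overlap concrete, here’s a minimal sketch in Python rather than the MVS environment BatchPipes actually runs in: a Unix-style in-memory pipe connects a writer (standing in for the unload) to a reader (standing in for the transform-and-load), so the two run concurrently and no intermediate data set lands on disk. All names and data here are illustrative.

```python
import os
import threading

# Sketch only: BatchPipes/MVS connects separate MVS job steps through an
# in-memory pipe; this Unix-style pipe merely illustrates the idea that
# the "unload" (writer) and "load" (reader) run at the same time, with
# no intermediate data set landing on disk.

def unload(fd, rows):
    with os.fdopen(fd, "w") as pipe:
        for row in rows:
            pipe.write(row + "\n")   # producer: runs while the reader runs

def load(fd):
    with os.fdopen(fd) as pipe:
        # a trivial stand-in "transformation" applied as records arrive
        return [line.rstrip("\n").upper() for line in pipe]

read_fd, write_fd = os.pipe()
writer = threading.Thread(target=unload, args=(write_fd, ["a", "b", "c"]))
writer.start()
loaded = load(read_fd)
writer.join()
print(loaded)  # ['A', 'B', 'C']
```

The reader blocks only when the pipe is momentarily empty, which is the same overlap BatchPipes buys you between job steps.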

The referenced APAR description does a good job, I think, of discussing the considerations when loading data from a pipe.

And now I’m off to edit the BatchPipes Wikipedia entry. 🙂

DB2 Version 9 – STATIME Default Decreased to 5

(Originally posted 2007-10-10.)

I’m in a session where we’re going through DB2 Version 9 migration considerations – and right now there’s a table on display with changes to DSNZPARM defaults.

One change of real value is that the STATIME default has dropped to 5 minutes from 30. Unless you’ve overridden it you should now get much better information at the DB2 subsystem level. This does, of course, mean 12 sets of Statistics Trace records an hour, rather than 2. But it also means that for the “counter” fields subtracting the first value from the last gives you a MUCH better view of the hourly rate at which the counter increments.
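The counter arithmetic is trivial but worth spelling out. Here’s a sketch, with invented counter values, of what first-to-last subtraction gives you – plus the per-interval deltas that 5-minute intervals make six times finer-grained than 30-minute ones:

```python
# Hypothetical cumulative Statistics Trace "counter" snapshots at each
# STATIME boundary across one hour (13 snapshots at STATIME=5).
# The values are invented for illustration.

def hourly_delta(samples):
    """Hourly increment of a cumulative counter: last minus first."""
    return samples[-1] - samples[0]

getpage_counter = [1000, 1150, 1180, 1420, 1500, 1610, 1800,
                   1940, 2100, 2230, 2380, 2490, 2600]

# Per-interval increments: this within-hour detail is what STATIME=30
# (only 2 records an hour) can't show you.
rates = [b - a for a, b in zip(getpage_counter, getpage_counter[1:])]

print(hourly_delta(getpage_counter))  # 1600
```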

I think it’s a long overdue change. Of course, if you’ve hardcoded the old default of 30 then you won’t see the improvement. And if you’ve already dropped STATIME to 5 or (still better) to 1 then Well Done! (In my DB2 Performance engagements I always ask for STATIME to be dropped to 5 or 1 – and no customer of mine has complained about the results.)

DB2 Version 9 RLF – Not Just For SAP

(Originally posted 2007-10-10.)

Namik Hrle, the IBM Distinguished Engineer for DB2 for SAP, presented yesterday to an internal audience. (It’s the first time I’d seen him present and he’s a very dynamic presenter.) His presentation prompted me to download the Enhancing SAP by Using DB2 9 for z/OS Redbook. One item I’d like to pull out is the set of enhancements in DB2 Version 9 to the Resource Limit Facility (RLF). RLF will cancel SQL execution for an application that uses too much resource (and you can specify how much resource that is).

What’s new in Version 9 is in the area of additional work qualifiers. In Version 8 the granularity is at the plan, package, collection, Authid and LU Name level. For distributed applications (those using DDF), these qualifiers usually aren’t enough. In Version 9 four additional qualifiers are recognised: client workstation name, client application name, client userid and TCP/IP address.

While this RLF enhancement was billed yesterday as one of the 42 line items in Version 9 for SAP (and there were a further 53 for SAP in Version 8) I don’t think this one’s value is unique to SAP. For example, that rogue in the Chief Executive’s office who runs a huge DB2 query driven from Excel can be stopped in their tracks – with much more focused targeting. 🙂

So if you’ve tried to use RLF in the past to curtail runaway DDF work, and you’re looking for yet another reason to go to Version 9, consider the enhancements to RLF.

I May Not Know Who You Are But I Have Some Idea Why You’re Here

(Originally posted 2007-10-08.)

I don’t think you can see this information but I can see what are called “Referrer URLs” for hits on this blog. (The HTTP protocol defines a header – the famously misspelled “Referer” – that contains the URL you came from when you clicked on a link.) Disregarding the several “Direct” hits – which tell me nothing – I see lots of referrer URLs with some information in them, such as:

  • Google, Yahoo etc searches.
  • Other blogs.
  • developerWorks routings.
  • Other referrers.

It’s nice to see the latter three categories – but it’s really the first that interests me. I can see, for example, search terms and sometimes the originating country or language. So it’s perhaps a little flattering to see “.fr”, “.in”, “.se” and “.it” in today’s sample. And, given I’m new to this whole “referrer” business, I’m looking at the search strings with interest…

One question I’m forced to ask is “did the visitor get what they came for?” For example, did whoever searched with “http://www.google.co.uk/search?hl=en&q=%22Memory+Matters+in+2008%22&btnG=Google+Search&meta=” and then clicked on a link to my blog get what they came for? They certainly didn’t get the presentation as a PDF. They might not have liked what I wrote that caused Google to generate a hit. But they followed the link. And what am I to make of THAT? I simply can’t know.

But seeing what it was that people searched for that led them here tells me what the purpose of my blog is. 🙂 “Wisdom of Crowds” springs to mind. This doesn’t mean I’m going to abandon writing about what I want to write about. But perhaps it gives me some clues as to what’s of interest and what isn’t. (What I don’t know is which entry caused the hit.)

This data is definitely interesting enough to make me want to write some analysis code to work out what the top search keys are each day. And I know a friend of mine publishes exactly that on her blog. (I think it’s a standard WordPress widget.)
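A minimal sketch of such analysis code, using only Python’s standard library. The referrer URLs below (beyond the Google one quoted above) are invented for illustration; real log entries would supply them:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Sketch of the analysis code mentioned above: pull the search string out
# of search-engine referrer URLs and tally the most common terms.

def search_terms(referrer):
    """Return the 'q' query terms from a Google-style referrer URL, or []."""
    query = parse_qs(urlparse(referrer).query)
    return query.get("q", [])

# The first URL is the real example from the entry; the rest are invented.
referrers = [
    "http://www.google.co.uk/search?hl=en&q=%22Memory+Matters+in+2008%22",
    "http://www.google.com/search?q=HyperPAV",
    "http://www.google.com/search?q=HyperPAV",
]

counts = Counter(term for url in referrers for term in search_terms(url))
print(counts.most_common(2))
# [('HyperPAV', 2), ('"Memory Matters in 2008"', 1)]
```

`parse_qs` decodes the `%22` and `+` escapes for free, which is most of the work.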

The other thing, of course, is that I now know I have an audience. 🙂 But to repeat…

I don’t know who you are. So don’t be afraid to visit. 🙂

And to whoever was looking for Linda August and Joan Kelley I hope you’re glad you found me. 🙂

Bye Bye Internet Explorer

(Originally posted 2007-10-07.)

Finally the last web application I use that required Internet Explorer has been replaced with a Firefox-friendly version (or so we’ve been told). Hurrah!

Except, as always, I’ll believe that when I see it. (This is our Online Travel Reservations application that used an ActiveX control. Now it looks very much like an Eclipse application, except I’m certain that’s just a styling thing.)

And so I start to think “I wonder if there is any mileage in a Firefox extension mashing up this new application”. I happen to have just such a Firefox extension – one that knows which page it’s on and does things the page authors never intended the page to do. 🙂 But “so many ideas, so little time”.

It’s a nice piece of news for me, though, on a Sunday morning as I prepare for a 4-day trip to Germany. Finally I can say “Bye Bye IE”. 🙂

Is UCB Too Obscure For Wikipedia?

(Originally posted 2007-10-04.)

Well, is it? Apparently there are umpteen (count ’em if you want a more precise number) 🙂 expansions of the acronym “UCB”. But the one I care most about is Unit Control Block. (I think I had been born before this meaning came into being – but I’m not sure.) 🙂

Actually Unit Control Block was not one of the listed meanings. So I added it yesterday – but didn’t write much on it. You’re probably wondering at this point why I’d pick on UCB to write about. Well, it’s rather hard to write about PAV and HyperPAV if the reader doesn’t know about UCBs, IOSQ time and UCB Queuing.

The more general point is that I think, as a mainframe community, we should contribute more to Wikipedia, whatever trust and accuracy issues we have with it. I raised this point on the IBM-MAIN listserver yesterday and I seem to be getting exclusively “thumbs up”. So, do get writing and editing!

Late To The Party

(Originally posted 2007-10-03.)

I’m towards the end of revamping our analysis code to support System z9 and z/OS R.8. What took us so long? 🙂 I’m telling you this for two reasons:

  • So you know what there is that might slow you down.
  • So you have some view as to whether we can competently process your data. 🙂

System z9

About the only change in instrumentation between z990 and z9 is the separation of specialty engines into pools. But this is a big change…

We process SMF 70 records into tables – with rows and columns. When we had just 2 pools we could have separate columns for the GCP and “ICF” pools. Now, with 5 pools (GCP, zAAP, IFL, ICF and zIIP) this approach no longer works. So we’ve reworked it to have a separate row for each pool for each LPAR and machine. This caused lots of breakage. But we’re over that. It also prompted a rethink of how we display the pool-level data. And I’m much happier with how it turned out.
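A toy sketch of that wide-to-long rework; the field names are illustrative, not our real table definitions:

```python
# Old "wide" layout: one column per pool, which breaks once there are
# 5 pools. New "long" layout: one row per (LPAR, pool).
# All field names and values here are invented for illustration.

wide_row = {"lpar": "PROD1", "gcp_busy": 72.5, "icf_busy": 10.0}

POOLS = ("GCP", "zAAP", "IFL", "ICF", "zIIP")

def to_long(row):
    """Emit one output row per pool that actually has data for this LPAR."""
    out = []
    for pool in POOLS:
        key = pool.lower() + "_busy"
        if key in row:
            out.append({"lpar": row["lpar"], "pool": pool, "busy": row[key]})
    return out

for r in to_long(wide_row):
    print(r)   # one row per populated pool, in POOLS order
```

The win is that adding a sixth pool someday means one more entry in `POOLS`, not another column rippling through every report.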

Incidentally, I’ve managed to make our (BookMaster) reporting work nicely with B2H so I can now publish HTML versions of the textual reporting. I may well do that for the machine-level reporting in a future blog entry. I think I’d better learn some more CSS first, though, or you’ll be underwhelmed. 🙂

Also on z9 our charts are by pool now. And I’ve taken the “IDLE” and “UNKNOWN” samples off the Service Class Period charts. That way the “Using” and “Delay” samples are clearer. (Also, of course the zAAP- and zIIP-related samples are displayed.) And I “smart out” zero delay buckets. Overall the foils are less cluttered and more punchy (but are still GDDM-originated CGMs – with styling that went out with the (B) Ark). 🙂

z/OS Release 8

z/OS R.8 provides us with some challenges in the Memory area as previously noted. On a more positive note it added the machine serial number (e.g. 51-11D68). This I can now display – and I do when I have it. But so what? Actually one immediate thing (and one deferred)…

  • There is a nice Customer Engineer’s tool called “VPD Formatter for Windows” (VPDFWIN). This takes the VPD that each machine periodically sends to the Boulder server and formats it. The input to the tool is the device type (e.g. “2094”) and the 7-digit machine serial number. With it I get lots of gory information about the machine, such as how much memory is physically on the machine and how much is purchased. Such things tell me the impact of, for example, buying more memory, or even deploying more to an LPAR.
  • A “still to do” is to tie up the 70-1 view of CPU for an ICF with the 74-4 view (as the 74-4 also got the machine serial number). But for now I’m concentrating on higher priority work – such as listed above.

So, I’ve been busy coding – and I like what’s coming out – despite the breakage both z9 and z/OS R.8 caused in my code. And as always having “early sight” of the data gives me a chance to advise my customers on what’s coming and how to use it.

Mor(e )on UIC

(Originally posted 2007-10-03.)

Thanks to the people who responded to this blog entry. And to the people who talked to me offline.

The result is a slight shift in emphasis: I never did talk about UIC as the primary metric of memory constraint. If you read the referenced blog item I mentioned it as one of three, alongside paging rate and free frames. The shift is that I’m going to be more guarded about UIC in future, relegating it to “third place”…

  1. Obviously copious free frames would suggest no constraint.
  2. Obviously significant paging would tend to suggest some level of shortage – though customers have been reporting some paging even when there are tons of frames free (and I haven’t had a good explanation for that yet).
  3. If UIC is dropping then that ought to be some kind of indicator of constraint.
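The ordering above can be sketched as a simple decision function. The thresholds here are entirely invented for illustration and would need tuning against real RMF data:

```python
# Sketch encoding the three-step ordering above: free frames first,
# paging rate second, UIC (the weakest signal) last.
# Threshold values are invented, not recommendations.

def memory_constraint(free_frames, paging_rate, uic_dropping):
    if free_frames > 100_000:      # copious free frames: nothing to see
        return "no constraint"
    if paging_rate > 10:           # significant paging (pages/sec)
        return "some shortage"
    if uic_dropping:               # UIC trending down: weakest indicator
        return "possible constraint"
    return "no clear signal"

print(memory_constraint(5_000, 50.0, False))  # some shortage
```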

As always “constraint” has to be seen in the light of goal attainment for important service class periods. You can see “delay for paging” samples in the RMF Workload Activity Report.

Did I say I was making this up as I go along? I didn’t? 🙂

Seriously, I’ve yet to see a day’s worth of z/OS R.8 data from a Production system. I expect to see some any week now. But rest assured, dear reader, if you get to send me such data I’ll be ready for it. 🙂