My HackDay 6 Project – Mashing Up RMF

(Originally posted 2008-11-03.)

Another 6 months on and another HackDay…

HackDay 6 was on 24 October – and being distracted by little things such as the GSE Conference and an important customer situation, I haven’t blogged about it yet. So, a little late, here goes…

Many of you will know that RMF has a WebPortal for monitoring performance and capacity usage. And you’ll know that the z10 processor’s HMC is web-based (though I don’t think z10 was the first System z processor to have one). Let’s park the HMC thing for a moment…

Before I go any further I should acknowledge my team mate in this: Stephane Rodet from the Boeblingen Lab (where, most relevantly to me, RMF, WLM and Capacity Provisioning Manager come from). He did a lot of the programming work and put together the foils we’re waving around (the latter, thankfully, using OpenOffice).

Here’s the idea…

Any web interface, just about, can be “mashed up”, hacked about or (as I prefer to call it) defaced. So we thought we’d demonstrate this using RMF. Choosing RMF was not exactly gratuitous – just to get “z” in the hack. 🙂 The point is we think that the RMF WebPortal has information in it that would be good to mash up. We don’t really know how people would choose to mash up the data but we think there’s a lot of potential there. Such vague notions are what get me into trouble. 🙂

So, we devised a shopping list of web technologies to use against this web data source, including

  • GreaseMonkey (a most excellent scripter for Firefox).
  • PHP (which would run in a web server that acted as a front end).
  • cURL (a command line HTTP client).
  • Adobe AIR (which uses HTML and javascript to build desktop applications).
  • Java applets.

In the end we concentrated on a very small number of these and went against a small number of RMF-served web pages.

And our hacks were rather modest – just showing we could extract names and numbers (of LPARs), do some arithmetic with them and redisplay the results (building a simple table with scaled bars in it).
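For the curious, the “redisplay” step can be sketched in a few lines of Python. The LPAR names and numbers below are invented (in the hack they came from scraping the WebPortal’s HTML); only the scaling idea is the point:

```python
# Sketch of the "redisplay" step: given LPAR names and numbers,
# build a simple text table with bars scaled so the largest
# value fills the full bar width. The sample data is made up.

def scaled_bar_table(lpars, width=20):
    """Return one line per (name, value) pair, each with a scaled bar."""
    biggest = max(value for _, value in lpars)
    lines = []
    for name, value in lpars:
        bar = "#" * round(width * value / biggest)
        lines.append(f"{name:<8} {value:6.1f} {bar}")
    return lines

if __name__ == "__main__":
    sample = [("LPAR1", 45.0), ("LPAR2", 90.0), ("LPAR3", 22.5)]
    print("\n".join(scaled_bar_table(sample)))
```

In the hack the interesting part was, of course, getting the names and numbers out of the HTML in the first place; this is just the arithmetic-and-redisplay end of it.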

We had a lot of fun doing it and, though the hack was rather small, we count it a success because…

  • We demonstrated RMF WebPortal could be mashed up.
  • We learnt some lessons on behalf of Development… such as the need to be able to navigate direct to pages that are in frames and the need for “id” attributes on some of the key HTML tags.

So, as I mentioned before, Stephane has written some foils (and I’ve contributed a little to them) that he is taking to RMF and WLM development groups – to get their interest. (Stephane works on Capacity Provisioning Manager and is in the same group as WLM and RMF.)

There are no promises here but we’ve started the conversation with RMF about how to make its information more consumable by web clients. (You have to be careful in this because there’s no guarantee of “HTML stability” as, in principle, RMF could change the HTML with a single PTF – to meet some other need.) And, no, I don’t think HTML hacking is the best way to do this… I’d like to see some robust web services (probably JSON or maybe XML).

Now, why the HMC?

Because I think that this is equally mashable and combining its data with RMF might be an interesting thing to do. But we didn’t get there this time. We wanted to actually get something working… to demonstrate the possibilities.

Maybe next time…

Unless you (dear customer) have something in the z/OS space that’s relatively small you’d like us to try and do (or mock up or experiment with). Ideas?

Oh, and by the way, maybe Twitter is a good place to bounce around ideas like that. Lots of mainframers and DB2 folks are there now. DB2 is doing especially well with household names right now. You can find Stephane as “rodet” and me as “martinpacker” on Twitter.

System z Expo, Las Vegas, 13-17 October

(Originally posted 2008-09-25.)

With a couple of working days to go before the deadline I’ve completed my two presentations for Expo.

As usual I’ve added some new stuff, based on new technologies and some situations I’ve encountered. And rather than throwing older stuff out I’ve moved most of it to backup foils. While I’m prepared to present the backup material I don’t expect to have time for it in the sessions. But at least you’ll have the material – and I can present it at other times, in other places.

As my presentations are of the “evolutionary” variety I suspect if you saw me present twice in a year you’d barely feel they’d changed at all. Hopefully most people see them but once a year and get value out of what has changed.

So, here are the two sessions I’m giving:

  • Much Ado About CPU – Now with z10 Too

    Depending on your perspective the most notable new stuff is

    • Structure-Level Coupling Facility CPU
    • More focused zIIP and zAAP stuff
    • Blocked Workloads
    • z10 HiperDispatch
  • Memory Matters in 2009

    For some reason the organisers of the US conference think I should be ultra forward looking and name the presentation after the coming year. In Europe we seem to be more relaxed about this and the same presentation would be “in 2008”. To be honest I don’t know which I called it for GSE Conference (which is at the end of October).

    Again, depending on your perspective the most notable new stuff is

    • More on paging subsystem design and dump space
    • System z10 1MB pages
    • z/OS Release 10 64-Bit Common

It’s going to be interesting to catch up with some of the other presenters and to see what they think is interesting to talk about. And to see some friendly customers I haven’t seen for a year.

I’ll also be updating you via Twitter: Remember my id is MartinPacker so feel free to follow me and to interact with me on Twitter. I’m going to be using the hashtag #zOSExpo2008 and would encourage other twitterers at the conference to do the same. Using a hashtag to mark tweets has worked well at other conferences. I expect, for instance, to flag what session I’m in. So you might want to ensure questions you’re interested in get asked. Of course, I’d have to find the question interesting and not to have asked too many questions already. 🙂 You know how difficult that’ll be. 🙂

I think you can tell I’m looking forward to this conference (don’t I always?) even if the flights are a bit of a bore and so’s the jetlag. I have been to Las Vegas three times before and have the Saturday after the conference free. I think I might hire a car and get out of town and take my camera with me. Anyone else attending who has the same idea?

And if you were wondering about going I can tell you the agenda looks easily good enough to make it a great use of time. Join me in Las Vegas!

How Many Browsers Do You Have On Your Machine?

(Originally posted 2008-09-20.)

Well, how many do you have?

I’ll admit to 4 on my Windows XP Laptop (and 2 versions of 1 on my ASUS EEE PC, running Linux)…

I firmly believe in having an “emergency browser”. At one time that would have been Internet Explorer – for two reasons:

  • To recover a broken browser. As I mainly use Firefox Nightly builds I sometimes get cases where I need a fresh install of a prior nightly to get me out of trouble. (Nightlies are the bleeding edge in Firefox terms, nice but sometimes too bleeding edge.)
  • To access pages that Firefox has a problem with.

Nowadays, however, there are almost no web pages I can’t read with Firefox. So it doesn’t have to be Internet Explorer anymore.

In fact IBM really doesn’t care what browser I use – so long as I don’t want support. 🙂 (I haven’t wanted support for probably 20 years.) 🙂 Certainly, to be serious, there is strong endorsement by all and sundry for Firefox, even though it isn’t in any formal sense the mandated browser.

So what do I have now?

  • Firefox 3 Nightlies (as I said before) as my main browser. And I occasionally get involved in design or bug spotting discussions.
  • Internet Explorer 6 – seldom used.
  • Safari as my real “emergency browser” at the latest released level. Webkit goes on apace and so I might take a more bleeding edge stance soon.
  • Google Chrome – just out of curiosity.

One thing to note now is that there is a great race going on between the various browsers for speed, especially (this year) in their javascript execution engines.

And the significance of the “javascript wars” is that, through means like Adobe AIR and toolkits such as dojo, the web experience (and perhaps the more general application experience) is becoming increasingly dependent on javascript. And heavy execution at that.

So the browser war may have become dull (or maybe not) but it’s fascinating to see the javascript front open up.

But, finally, in case anyone thinks I’m about to shift browser I’m not expecting to replace Firefox as my main browser anytime soon. I really don’t think Google Chrome is a Firefox killer (or an anything else killer, either).

Happy browsing folks and remember you should have more than one browser – not counting the (perhaps ignored) one that came with your operating system.

Minor Good News On Coupling Facility Performance Reporting

(Originally posted 2008-09-19.)

In recent releases RMF have put the machine serial number into both SMF Type 74 Subtype 4 (Coupling Facility Activity) and SMF Type 70 Subtype 1 (CPU Activity). Actually we get two fields in each case: Plant Number and Sequence Number, which you can put together as eg “51-12345”.

Soon we’ll get another piece of the jigsaw: Partition Number in 74-4.

This is a small change but a nice useful one…

It finally enables you to correlate the CPU Activity Report view of a Coupling Facility LPAR and the Coupling Facility Activity Report view. I have seen several customer cases where this has been impossible from SMF alone, even from counting engines in both records.
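To make the correlation idea concrete, here’s a Python sketch. The dict layout is entirely my own invention for illustration – only the idea of matching on Plant Number, Sequence Number and (now) Partition Number comes from the records themselves:

```python
# Sketch of correlating the CPU Activity (70-1) view of a CF LPAR
# with the Coupling Facility Activity (74-4) view, assuming each
# record has been pre-parsed into a dict. The dict keys are my
# assumption, not the real SMF field names.

def machine_key(rec):
    """Build a '51-12345' style serial from plant and sequence number."""
    return f"{rec['plant']}-{rec['sequence']}"

def correlate(cpu_partitions, cf_records):
    """Pair each 74-4 view of a CF with the 70-1 partition data
    for the same machine serial and partition number."""
    index = {(machine_key(p), p["partition"]): p for p in cpu_partitions}
    return [(cf, index.get((machine_key(cf), cf["partition"])))
            for cf in cf_records]
```

The point is that the (serial, partition number) pair is now enough to join the two views, which counting engines never reliably was.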

I intend to say a little more about it in my “Much Ado About CPU – Now with z10 Too” presentation at Expo in October. In fact I’d better get writing the new foils… given that I think it needs restructuring to compress much of the “everybody knows this already” material. (I expect to put quite a bit about z10 in, as well as some more stuff about Coupling Facility CPU such as the Structure-Level CPU field R744SETM.)

System z10 CPU Instrumentation

(Originally posted 2008-09-18.)

Since I got back off vacation in L’Hérault in late August I’ve been working on adding z10 support to our CPU analysis code. It’s quite a substantial set of changes – and I don’t think I’m finished yet. But I’d like to share with you what I’ve learned so far.

But first let’s briefly review what’s changed with z10. (This is a very brief review and not a tutorial on the subjects mentioned.)

  • z10 introduces a bunch of changes in the area of how upgrades – whether temporary or permanent and whether wanted or forced by circumstances – work. So we now have the notion of Permanent and Temporary capacity models (and indeed capacity values).
  • HiperDispatch is a very significant set of changes in the way PR/SM and the z/OS Dispatcher work – especially since they work together.

I’ve had data from one customer who is using HiperDispatch for real. And already I’m seeing “behaviours”.

I would assume, by the way, that MXG already has support for the new fields and has adjusted any calculations that needed adjusting. While I follow MXG-L Listserver I don’t take more than a passing interest in MXG itself. And, also by the way, I’m talking exclusively about Type 70 Subtype 1 in this post.

Capacity Models

We now have four different models in Type 70:

  • SMF70MDL – the original (software) model. (Prior to z990 the software model was equal to the hardware model.)
  • SMF70HWM – hardware model (introduced with z990 because of the book structure)
  • SMF70MPC – permanent capacity model (new with z10)
  • SMF70MTC – temporary capacity model (new with z10)

There are also three capacity ratings:

  • SMF70MCR – corresponding to SMF70MDL
  • SMF70MPR – corresponding to SMF70MPC
  • SMF70MTR – corresponding to SMF70MTC

These are all interesting in an environment where your machine configuration changes – whether through “On-Off Capacity On Demand”, “Capacity Backup”, “Capacity For Planned Events” or whatever. You can now do your usual performance and capacity work even when the configuration changes.

At this point I’m just listing the numbers in my reporting. I suspect I’ll do more when I get performance data from customers who actually do e.g. time-of-the-month upgrades/downgrades (and I know one or two who already do).
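Just to illustrate the sort of thing I have in mind, here’s a Python sketch of spotting when temporary capacity is in play by comparing the capacity models interval by interval. (The intervals-as-dicts layout is mine; the field names are the real ones listed above.)

```python
def temporary_capacity_active(interval):
    """True when the temporary capacity model differs from the
    permanent one, i.e. On/Off CoD, CBU etc. is in effect.
    `interval` is an assumed dict keyed by the Type 70 field names."""
    return interval["SMF70MTC"] != interval["SMF70MPC"]

def capacity_changes(intervals):
    """Return the indices of intervals where the temporary
    capacity model changed from the previous interval."""
    changes = []
    previous = None
    for i, interval in enumerate(intervals):
        if previous is not None and interval["SMF70MTC"] != previous:
            changes.append(i)
        previous = interval["SMF70MTC"]
    return changes
```

Something like this would let you annotate utilisation charts with the upgrade/downgrade points – which is roughly what I expect to need for the time-of-the-month upgrade customers.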

Hiperdispatch

When looking at Hiperdispatch you have to understand there are two major parts to it:

  • Dispatcher Affinity (DA) – from z/OS
  • Vertical CPU Management (VCM) – from PR/SM

Internally I still sometimes hear it described using the terms DA and VCM. The point is it’s got two parts to it. So there is information in sections of the record related to z/OS and other information in sections related to PR/SM. You have to put the two together.

And here’s the most important bit…

You need to collect Type 70s from ALL z/OS images of any significance on the machine to get the full picture.

A good example of this is understanding how many logical engines are really in play when some of them are parked (in most LPARs).

z/OS – Related Information

SMF70HHF has flags for whether Hiperdispatch is supported or is active. These are, fairly obviously, for the reporting z/OS image.

SMF70PPT is the amount of time this engine was “parked” in the interval. (That is when work is deliberately not dispatched to it.) These are some or all of the “Low Polarization” engines. More on that a little later. But parked engines are important because the new calculation for CPU Busy counts parked engines as not part of the z/OS image’s capacity.
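To show the principle (and this is only the principle, not RMF’s exact formula), here’s a Python sketch of a CPU Busy calculation that treats parked time as capacity the image doesn’t have:

```python
def cpu_busy_percent(engines, interval_seconds):
    """Estimate image CPU Busy %, counting a parked engine as not
    part of the z/OS image's capacity. Each engine is an assumed
    dict with 'busy' and 'parked' times in seconds for the interval.
    This illustrates the idea, not RMF's actual calculation."""
    capacity = sum(interval_seconds - e["parked"] for e in engines)
    busy = sum(e["busy"] for e in engines)
    return 100.0 * busy / capacity if capacity else 0.0
```

An engine parked for the whole interval contributes nothing to the denominator – which is exactly why ignoring parked time overstates the image’s capacity and understates its busy.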

PR/SM – Related Information

SMF70POW is used to calculate the Polarization Weight for a logical engine. Logical engines are classified as High, Medium or Low. An LPAR’s weights are spread across its logical engines to ensure the High engines each have a weight corresponding to one physical engine. Each Low engine has a zero weight. Any weight left over from assigning the High weights is assigned to either 1 or 2 Medium engines. (1 if the remainder is more than half an engine, 2 if the remainder would have been less than half an engine.)

You can observe this Polarization Weight distribution using SMF70POW…

The highest value of SMF70POW for an LPAR denotes a High logical engine – that is, one whole physical engine. Any values of SMF70POW smaller than that but greater than zero are for Medium logical engines. I’ve seen cases of both 1 Medium and of 2 Mediums for different LPARs on the same machine.
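That observation can be turned into a little classifier. Here’s a Python sketch – with the (labelled) assumption that the LPAR has at least one High engine:

```python
def classify_polarization(weights):
    """Classify each logical engine's polarization weight as High,
    Medium or Low, per the rule described above: the highest value
    corresponds to one whole physical engine (High), zero is Low,
    anything in between is Medium. Assumes at least one High engine
    exists - an all-Medium LPAR would be misclassified."""
    high = max(weights)
    def classify(w):
        if w == 0:
            return "Low"
        return "High" if w == high else "Medium"
    return [classify(w) for w in weights]
```

Run against the SMF70POW values for one LPAR’s logical engines in one interval, this reproduces the High / Medium / Low split.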

Bringing It All Together

So, to understand Hiperdispatch you need both LPAR and z/OS image information.

Actually, since IRD was introduced, you’ve had to marry up both perspectives, because Online Time (in the case of Logical CP Management) became part of the calculation. And now Parked Time is part of it too.

(After a number of years of owning our CPU Analysis code I’ve recast it – for Hiperdispatch – in a way that makes it much easier to morph our CPU calculations in case anything else happens. I’m not foretelling anything – just knowing that CPU Utilisation is one of those things whose definition will never settle for long.) 🙂

Replace String A With String B But Not If It’s Part Of String A1

(Originally posted 2008-07-31.)

As I mentioned in this blog post DFSORT just shipped a new FINDREP function to do “find and replace”.

I mentioned an example of where I might use it to replace SMFIDs in SMF records. That example generally works well.

But suppose (say, for SMF 42-6 Data Set Performance records) I want to replace “SYS1” with “MVSA” but don’t want to replace “SYS1.” with “MVSA.”.

There’s something useful about FINDREP that saves the day: Only the first match is used. So consider the following FINDREP example:

  OPTION COPY                                                
  INREC FINDREP=(INOUT=(C'SYS1.',C'SYS1.',C'SYS1',C'MVSA'))
  

With this code any data set references of the form “SYS1.” are transformed to themselves – and the cursor moves past them – whereas any other “SYS1” reference is transformed to “MVSA”.
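If you want to see why this works, here’s a little Python model of the “first match wins and the cursor moves past it” scanning behaviour. (It’s a model of the semantics, not of DFSORT’s implementation.)

```python
def findrep(record, pairs):
    """Model FINDREP's scan: at each cursor position try the IN
    strings in order; on the first match, emit the OUT string and
    move the cursor past the matched text. Otherwise copy one
    character and advance."""
    out = []
    i = 0
    while i < len(record):
        for find, replace in pairs:
            if record.startswith(find, i):
                out.append(replace)
                i += len(find)
                break
        else:
            out.append(record[i])
            i += 1
    return "".join(out)
```

With the pairs in the order of the example – ("SYS1.", "SYS1.") then ("SYS1", "MVSA") – "SYS1.DATA SYS1" comes out as "SYS1.DATA MVSA": the "SYS1." is matched first, replaced with itself, and the cursor has moved past it before the "SYS1" rule gets a look in.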

It makes sense to put the most restrictive find string first. Otherwise the following wouldn’t work…

   OPTION COPY                                              
   INREC FINDREP=(INOUT=(C'SYS1.MODIFYME',C'MVSA.MODIFYME',
     C'SYS1.',C'SYS1.',
     C'SYS1',C'MVSA'))

which will change “SYS1.MODIFYME” into “MVSA.MODIFYME” but no other “SYS1.” string (unless it starts with “SYS1.MODIFYME”).

Of course, given you can use FINDREP as part of IFTHEN there’s a lot more you can do with it. But for now I just wanted to point out the “first hit wins” characteristic that can be useful.

New DFSORT Functions

(Originally posted 2008-07-29.)

Yesterday DFSORT announced a new set of functions – as PTF UK90013. The documentation for it can be found here.

Every year or so there’s a new set of DFSORT functions – and generally they’re “out of cycle” with z/OS releases – although they are incorporated into subsequent releases of z/OS. This means that fewer of you will know about the functions, particularly as we don’t make a big fuss about it at z/OS release announcement time. So you quite possibly, when you move to a new release of z/OS, get new DFSORT functions you don’t know about.

I’m privileged to call the DFSORT developers friends. And I get to “beta” the new code ahead of release. This time, due to other commitments (like the “Parallel Sysplex Performance Topics” Redbook), it’s been difficult to find the time to play much with the code.

So, what’s new?

Here are a few highlights:

  • FINDREP makes it MUCH easier to do “find and replace” operations. Hence, presumably, the name. 🙂

    There are a number of ways of specifying the search string and its replacement. For example, you can specify multiple input strings that get changed to the same output string. You can also define pairs of strings so that you can find and replace multiple strings in one pass over the data. But you can also just specify a single string and its replacement.

    These strings can be specified as character strings (e.g. C’XYZ’) or hexadecimal strings (e.g. X’FFAB’). Or as multiples (e.g. 4X’FF’). And DFSORT Symbols can be used for C’XYZ’ and X’FFAB’ styles (but not the “multiplier” styles).

    You can specify search “margins”. So, you could specify that strings are to be sought between positions 11 and 71, for example.

    You can specify the maximum number of times find and replace is performed for a record. So you could specify for example only the first match is to be replaced.

    You can say what is to happen if a replacement operation causes the output record to become wider than the LRECL.

    If you don’t want the remainder of the record to be shifted left or right after a match you can specify that as well.

    All in all a nicely thought out set of options.

    Here’s an example I actually need today:

    OPTION COPY                             
    INREC FINDREP=(IN=(C'#@$'),OUT=(C'SYS'))
    

    All our systems have SMFIDs beginning “#@$” and our tooling currently has a problem with a “$” character in certain places. Replacing “#@$” with “SYS” gets us out of a hole. And it’s pretty much guaranteed that anywhere in the SMF records we see “#@$” is part of a SMFID. So preprocessing with FINDREP will help.

  • Group operations (using WHEN=GROUP) allow you to identify and operate on groups of records.

    A group of records can be identified in one of two ways:

    • Every n records is a new group. (This uses the “RECORDS=” syntax variant.)
    • All the records between a header record and a trailer record form a new group. (This uses the “BEGIN=” and/or “END=” syntax variants.)

    Here’s an example, straight from the new documentation:

    INREC IFTHEN=(WHEN=GROUP,RECORDS=3,PUSH=(15:ID=3,19:SEQ=5))
    

    specifies groups of three consecutive records. Position 15 for 3 is an identifier that increments for each group. Position 19 for 5 is a sequence number that increments by 1 and restarts at the beginning of each group.

    In the above it’s the PUSH that actually edits the records.

    One thing to note: It’s entirely possible (but not in this example) that records fail to fall into any group. With BEGIN and END it’s possible to have records before the first BEGIN hit and after the last END hit and between an END hit and the next BEGIN hit. Here’s a way of detecting them:

    OPTION COPY
    INREC IFTHEN=(WHEN=GROUP,BEGIN=(1,5,CH,EQ,C'START'),END=(1,4,CH,EQ,C'STOP'),PUSH=(10:ID=1))
    OUTFIL FNAMES=REJECTED,INCLUDE=(10,1,CH,EQ,C' ')
    

    In the above case groups start with a record with “START” in it and end with a record with “STOP” in it. The “PUSH” sets a flag for records in a group. The OUTFIL writes records to a sidefile where the flag wasn’t set.

    Just for grins, I tried a “mis-nesting” or “mis-bracketing” where the input stream was:

    START
    START
    STOP
    STOP
    

    The output was:

    START    1
    START    2
    STOP     2
    STOP
    

    So each “START” record starts a new group, regardless of whether the previous group was terminated with a “STOP” record. So the second “STOP” record isn’t part of any group. Still, I expect most applications will have “well formed” input data, cough cough. 🙂

    RECORDS, if specified with BEGIN or END, has a slightly different role to when it appears on its own: it limits the number of records in a group.
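Here’s a little Python model of the BEGIN / END behaviour, including the “ungrouped records” and mis-nesting cases above. (Again, this models the semantics, not DFSORT itself.)

```python
def push_group_ids(records, begin, end):
    """Model WHEN=GROUP with BEGIN/END and PUSH=(ID): tag each record
    with its group number, or None if it falls outside every group
    (before the first BEGIN, or between an END hit and the next BEGIN).
    A BEGIN hit always starts a new group, even mid-group."""
    tagged = []
    group = None    # current group id, None = not in a group
    next_id = 0
    for rec in records:
        if rec.startswith(begin):
            next_id += 1
            group = next_id
        tagged.append((rec, group))
        if group is not None and rec.startswith(end):
            group = None    # an END hit closes the current group
    return tagged
```

Feeding it the mis-nested START / START / STOP / STOP stream gives group ids 1, 2, 2 and then “no group” for the final STOP – matching what the experiment showed.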

One of the nice things about the above is you can fit them into an IFTHEN “pipeline”. IFTHEN lets you pass records through a sequence of filters – much like real pipelines. WHEN=GROUP, in particular, is always used within IFTHEN, even if there are no other filters (or “stages” as they’re sometimes known). Also, WHEN=GROUP can be intermixed with WHEN=INIT but must come before the other WHEN clause types.

The other significant enhancements are all related to ICETOOL:

  • DATASORT is a new operator that allows you to sort the data records in a data set without sorting the header or trailer records. Header and trailer records are copied in their original order, continuing to bracket the sorted data records.

    You use HEADER, FIRST, HEADER(n), FIRST(n) to denote the first one or n records are the header. Similarly, TRAILER, LAST, TRAILER(m), LAST(m) denote the last one or m records are the trailer.

    You can use OUTFIL to post-process the entire output stream, including header and trailer records.

  • SUBSET is a new operator that selects records based on their record number, for example the first 5 records – FIRST(5). You can specify whether subsetting is done on the way into a sort or on the way out. Again you can use OUTFIL to post-process the records.

    NOTE: If you specify eg LAST(n) ICETOOL may have to call DFSORT twice. The first pass is to count the input records. The second is to actually write out to the output data set. Because the first pass doesn’t actually OPEN the output data sets this kind of two-pass approach is fine with a BatchPipes/MVS pipe as the output data set.

  • The SELECT operator is enhanced to allow you to select the first n records with each key or the first n duplicate records with each key. This is useful for a “top list” approach. (I once did something similar with REXX driving DFSORT.)

    While there are FIRST(n) and FIRSTDUP(n) forms there aren’t LAST(n) and LASTDUP(n) forms. But Example 1 in the documentation shows how you get round that using a different sort sequence.

  • The SPLICE operator is enhanced with a new keyword: WITHANY. (As if having WITHEACH and WITHALL wasn’t confusing enough already.) 🙂

    WITHANY creates one output record for each set of duplicates. The first duplicate is written out with the non-blank values of each subsequent duplicate spliced onto it for specified fields. (“Spliced onto” or “spliced into”?) 🙂

    The documentation gives a far better description of this than I can here. Which is another way of saying “SPLICE confuses the heck out of me.” 🙂

  • The DISPLAY operator now allows you to display counts in reports. Formerly you could display totals, maxima, minima and averages (and the “sub” variants of those).
  • The DISPLAY and OCCUR operators have greatly enhanced title capabilities:

    • You can have up to 3 title lines.
    • Each can have up to 3 strings.

    This flexibility enables you to use DFSORT Symbols in titles, including System Symbols. To do the latter code something like:

    //SYMNAMES DD *
    System,S'&SYSNAME'
    Sysplex,S'&SYSPLEX'

    and then in ICETOOL control statements something like:

    TITLE('System: ',System)
  • The COUNT operator has been enhanced to allow you to write its output to a data set. Previously it went to SYSOUT. So you could code something in the form:

    COUNT FROM(EMPIN) WRITE(EMPCT) -
      TEXT('Number of employees is ') -
      EDCOUNT(A1,U08) WIDTH(80)

    to get output like:

    Number of employees is 1,234,567

    A1 is a mask that puts commas in. U08 means “Use 8 digits”.

    COUNT also allows you to add to or subtract from the count with ADD(n) or SUB(m). If criteria like EMPTY are used it’s the modified record count that is used in the comparison (in this case to 0).

So there’s lots there to play with. And it’s available right now.

And if you want all the user guides for “recent” function PTFs go here.

Web 2.0 and System z Pilot Workshop – And Other Conversations

(Originally posted 2008-07-11.)

A couple of days ago I attended the pilot of a “Web 2.0 and System z” workshop in the Boeblingen lab. I’m pleased to say the room was full – both with IBMers and non-IBMers. And I think my presentation – which was basically a rant about Web 2.0 Behaviours and why we should adopt them as mainframe folks 🙂 – went OK.

One of the things that was interesting about it was that on greeting Luis Suarez (@elsua on Twitter) we felt it natural to hug – though we’d never physically met before. I think that’s a testament to the power of relationships built through Social Networking. Oh, and I made new friends as a result of the workshop. So welcome @ansi, @frogpond and @rodet – all Twitter users.

Actually there were lots of things interesting about the workshop and another one will be running in the Autumn. I gather there’ll be some “hands on” then, which I’d encourage.

I’m wondering if other people would like to see such a workshop – in other parts of the world.

And it was nice to reprise my presentation the next day in front of WLM, RMF and Capacity Provisioning Manager (CPM) developers.

And now my copy of the O’Reilly book “Dojo: The Definitive Guide” has arrived. So I’m “Web 2.0’ed out” for the week. 🙂 Actually the book looks like it covers the ground quite well. I have a half-jesting rule of thumb: If there’s an O’Reilly book about a technology it’s probably ready. If there’s a “For Dummies” book on the subject it’s probably boring, dull and passé. 🙂

Coupling Facility Subchannel Busy in RMF

(Originally posted 2008-07-11.)

In the same meeting with RMF Development in which ERBSCAN / ERBSHOW were discussed, we talked about Parallel Sysplex instrumentation. (Both XCF and Coupling Facility aspects.)

One thing I hadn’t noticed is the appearance of Subchannel Busy Percentage as a number in the following three places:

  • In Monitor III
  • In the Spreadsheet Reporter
  • As an Overview Condition (“SUBCHBP”)

I think you can do useful work with this but you have to know a few things about it:

  • It is one system’s view.
  • It is an estimate, derived from request rates and service times, both Sync and Async. Essentially you work out the total request time in the RMF interval and divide that by the interval length multiplied by the number of subchannels. The implicit assumption is that the service time is all subchannel busy time and that there is no other time when the subchannel is busy. It’s a moderately well-known formula, however.
  • It says nothing about individual subchannel busy levels – nor about paths.
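For the formula-minded, the estimate works out something like this Python sketch. (The numbers plugged into the test values are invented; the shape of the calculation is as described above.)

```python
def subchannel_busy_percent(sync_rate, sync_service_us,
                            async_rate, async_service_us,
                            interval_seconds, subchannels):
    """Estimate subchannel busy % as described: total request
    service time in the interval, divided by interval length times
    the number of subchannels. Rates are requests per second;
    service times are microseconds per request."""
    total_us = interval_seconds * (sync_rate * sync_service_us
                                   + async_rate * async_service_us)
    capacity_us = interval_seconds * subchannels * 1_000_000
    return 100.0 * total_us / capacity_us
```

Note it’s an aggregate over all the subchannels one system sees – which is exactly why it can’t tell you about individual subchannels or paths.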

This topic is of interest to me for a couple of reasons:

  • On our Residency in May / June we used coupling facilities that were attached by a mixture of ICB and ISC links. Two different link types between the same z/OS and the same coupling facility. It would’ve been really nice to see which link type actually got favoured.
  • My suspicion is that customers would like to do more detailed connectivity analysis and tuning for their parallel sysplexes. For example, if a link doesn’t get used it’d be nice to know that and to be able to trouble-shoot it. (Perhaps someone left it connected as a 20km fibre suitcase.) 🙂

So, I’m wondering… Is link and connectivity information important? What do you need? And why? Or is the Subchannel Busy Percentage estimate good enough for you to work with?

Oh, and thanks to Matthias Gubitz for the phrase “Gutes tun und darüber sprechen” (“Do good and talk about it.”) 🙂 That’s probably the best reason for blogging and such like.

And the “neologism of the week” award goes to Harald Bender for the word “handisch”. Neither English nor German. 🙂 We believe he meant “by hand”. But for now I prefer “handisch”. 🙂

ERBSCAN / ERBSHOW – More Good News

(Originally posted 2008-07-11.)

I mentioned ERBSCAN and ERBSHOW in this post from January. This is a handy pair of tools for displaying SMF records – most notably the ones produced by RMF. For the uninitiated you type ERBSCAN in ISPF 3.4 against an SMF data set and it pops up a list of SMF records. Typing ERBSHOW followed by the record number formats the record – especially well if it’s one that RMF produced.

And the prior post on ERBSCAN and ERBSHOW talked about the “x” parameter for ERBSHOW that enhances the formatting.

So, I was in Boeblingen for a different purpose this week and took almost 2 hours of the RMF Development team’s valuable time. 🙂 We talked about many things, including “futures”. There was, as usual, a good meeting of minds.

Matthias Gubitz mentioned an enhancement to ERBSCAN that is worth relating. It’s in z/OS Release 9, but it won’t’ve made the headlines…

The list of records that ERBSCAN produces is an ISPF Edit session. So you can do ISPF Edit things like using Find and Exclude All…

In the past I’ve used it to navigate to eg SMF 74-4 with “F ‘074.004’” and to find records cut by a specific SMF ID. So you can imagine what’s displayed affects usability…

Matthias added the following two items:

  • For 72-3 (Workload Activity) the Service Class.
  • For 74-4 (Coupling Facility Activity) the Coupling Facility name.

You can see that being able to search on these, perhaps going “Exclude All” first, is going to make finding the records you want – to ERBSHOW – much easier.

Well I think it’s going to save me time. 🙂

And the RMF guys mentioned to me that ERBSCAN / ERBSHOW seem to have become popular. I wonder why. 🙂