Innsbruck z/OS and Storage Conference – Day 1

(Originally posted 2005-04-11.)

Here’s a summary of MY first day at the z/OS and Storage conference in Innsbruck. I’ll be reporting from a PERSONAL point of view (using Notepad to capture thoughts). Hopefully some of these items will be of some use to you. I admit this is all a bit raw.

Already I’ve run into several UK customers and a lot of developer friends (and fellow presenters I’ve known a long time).

Session Z01: What’s New In z/OS? Speaker: Garry Geokdjian

This was an introductory session to z/OS R.6. It also touched on R.7. As a performance person it’s hard to keep up with the more minor details of each release. So this was a good chance to fill in the gaps.

Garry previewed the “New Face of z/OS” initiative, which proposes a Web User Interface that is consistent across tasks. These tasks would be automated and simplified, with integrated user assistance.

z/OS Load Balancing Advisor is a new feature in R.7. Improved Dynamic Virtual IP Addressing came in R.4 and R.6. You can rename an LPAR without an IPL in R.6 (z890 and z990 required).

There is a PTF to R.4 to change CPU speed of z800 and z890 without an IPL.

In R.6 you can reserve LPARs with a “*” for the LPAR Name.

A 32-way z/OS image was previewed, to be delivered as a PTF for z/OS R.6 later in 2005.

64-bit Java 2 1.4.1 was made available September 2004.

IBM Communication Controller for Linux (CCL) for zSeries 1.1 emulates a 3745 Comms Controller, so you can run (most of) NCP under Linux. Already available – but this is an announcement I’d not spotted.

Initial z/OS support for Enterprise Workload Manager (EWLM) was made available December 2004. This today provides
reporting of business performance objectives and breaks down response times across the whole environment. In the
future it will provide workload balancing recommendations.

From R.7 the root File System will be zFS.

z/OS Load Balancing Advisor will use SASP protocol to provide routing recommendations to a SASP-compliant router to help with load balancing.

IBM Health Checker for z/OS and Sysplex has been very successful and will be incorporated into R.7. Additional checks will be added.

XRC+ makes System Logger more attractive in GDPS environment.

Statement of Direction for The VSAM Connector for z/OS, a JDBC connector for VSAM.

Session TSS06: VSAM RLS Overview, Speaker: Terri Menendez

With RLS, SHAREOPTIONS(2,x) allow some level of sharing between RLS (can read/write) and non-RLS (for read).
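For a data set to be opened in RLS mode it must also carry a LOG attribute. A hedged IDCAMS sketch (the data set name, keys, and sizes are all hypothetical):

```
  DEFINE CLUSTER (NAME(PROD.RLS.KSDS)  -
                  INDEXED              -
                  KEYS(8 0)            -
                  RECORDSIZE(80 80)    -
                  SHAREOPTIONS(2 3)    -
                  LOG(NONE)            -
                  CYLINDERS(5 1))
```

LOG(NONE) marks the data set non-recoverable but still RLS-eligible, and SHAREOPTIONS(2 3) gives the RLS / non-RLS sharing behaviour described above. The data set also needs to land in an SMS storage class whose Cache Set maps to a CF cache structure.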

The SMSVSAM address space is the RLS server. Control blocks and buffer pools are in a dataspace.

To use RLS you have to have a CF even for single-system operation. RLS uses cache structures and a lock structure
(IGWLOCK00). The default lock structure sizing is generous. The cache structures might be a little small. SMS assigns data sets to Cache Sets, each of which is associated with a CF cache structure.

RLS Development has put a very great deal of effort into Reliability Availability and Serviceability (RAS). Much of
this has come out through APARs.

RLS does deadlock detection and can supply a bad return code to the caller in the event of a deadlock.

D SMS,SMSVSAM is a useful operator command for displaying the status of the RLS infrastructure.

Catalog calls SMSVSAM to delete a data set, because there might be retained locks associated with the data set. Likewise DFSMSdss.

Automation between CICS and VSAM RLS: eg F cicsname,CEMT SET DSN(RLSADSW>VFA1D.*),QUI to quiesce the data set on all CICS regions sharing it. (It was news to me that you could issue a CEMT command by modifying the CICS address space.)

Session G04: zSeries Processors Migration Considerations, Speaker: Parwez Hamid

ESCON channels cannot be spanned across LCSSs. FICON can, but needs the same CHPID in each LCSS.

z990 I/O Configuration “Plan Ahead” can be used to install additional cages when installing the z990 – to avoid
outages when upgrading the I/O configuration later on.

I’m reminded that the Bimodal Accommodation Offering is not available for z/OS R.5 and subsequent releases.

z/OS R.4 z990 Exploitation code supports more than 1 LCSS, more than 15 LPARs and 2-digit LPAR IDs. WSC Flash 10236
describes this.

Parwez reminded us of the good reasons why LSPR comparisons between eg a 9672 and a z990/z890 are not directly valid.

z990 GA3 allows conversions of engines to Unassigned and between types.

Adding another z990 book (perhaps for more memory) and POR’ing is quite likely to cause PR/SM to re-evaluate which physical engines to use, so some engines on the new book will probably be pressed into service.

z990 GA3 is required for dynamic LPAR renaming (mentioned above).

If going to CF Level 14 use the CFSIZER tool to determine if you need more CF storage – it’s quite likely you will.
Session ZP03: Much Ado About CPU, Speaker: Me 🙂

I feel I rushed this a little – but it WAS the first time I’d delivered the material. There were a couple of
questions. One related to not being able to treat multiple clusters as one. I think the customer has multiple
parallel sysplexes which touched the one machine and therefore more than one cluster. IRD will not manage between
clusters. The other question was a comment that for bureaux the need is to limit an LPAR’s CPU consumption. My only
answer to that is that LPAR design needs to take that carefully into account.


DB2 UDB for z/OS Performance Topics

(Originally posted 2005-03-09.)

I always look forward to the publication of the “DB2 Performance Topics” red book for each release of DB2. This time was no exception. Except: I was lucky enough to participate in reviewing this one – for Version 8. Most of the credit, though, goes to a great team of residents and developers (my role being minor).

So here’s a link to a draft:

It’s expected to be published by the end of March, but is only a draft at this stage.

WLM Balancing Initiators In Sysplex

(Originally posted 2005-03-09.)

Here’s the gist of an Info APAR on the subject: OA10114.

This documentation APAR was taken to enhance the information in the WLM Planning book to more clearly state that the intent of the WLM Managed Batch Initiator Balancing enhancement is not to evenly distribute the batch workload among systems in the sysplex nor to equalize the CPU utilization across systems in the sysplex. Rather, the intent is to improve the performance and throughput of batch workload in the sysplex by stopping initiators on systems with a CPU utilization over 95% and restarting them on a system which has more idle capacity.

DB2 Fix for Star Join

(Originally posted 2005-03-09.)

PK01266 enables parallelism in Star Join under an additional set of circumstances:

From the APAR Database:


Starjoin queries with non-partitioned fact table can encounter poor performance since CPU parallelism may not be enabled. The problem is corrected to enable CPU parallelism for this situation.

I suspect this will be most useful for SAP BW, but obviously other “Data Warehouse” users of Star Join should find it interesting as well.

An interesting paper in this area is Terry Purcell’s “Evolution of Star Join Optimization”.

A question for those of you that send data into IBM

(Originally posted 2005-02-24.)

I’m investigating FTP support for our process. That would mean allowing customers to send their data via FTP to either of two IBM-owned sites: One in Mainz, Germany and the other in Boulder, Colorado.

The question is: Is it easier for customers to send data via FTP? Or on tape? I’m thinking not only our (largely SMF) data but also things like dumps.

For me I think it would be easier to FTP GET the stuff from one of these two sites. It eliminates the mess of mounting tapes, but it does mean any shortage of disk space at my end would be more of a problem – I don’t think you can IFASMFDP (or even DFSORT COPY) over FTP. So if I ran out of space here I couldn’t get round it by cutting down the data.

I will confer with my colleagues in the USA who do this, but I wonder what customers think.

Processing DB2 Unload / DSNTIAUL data with DFSORT – VARCHAR Fields

(Originally posted 2005-02-21.)

In our process we unload the DB2 Catalog with DSNTIAUL and then use the resulting files in our analysis. When prototyping new code, or just plain trying different queries out, we use DFSORT.

However VARCHARs pose a bit of a problem, encoded as they are with a 2-byte length and padded with trailing nulls to the VARCHAR’s maximum length. If you want to include a record with a specific value shorter than the maximum length for that field, you have to code the 2-byte length, the value itself, and the trailing nulls as separate comparisons. It would be much nicer if you could take advantage of the Symbol name for the full (say, 8-byte) field.

You could if the padding had been done with blanks (spaces) as DFSORT can cope with that. But we have nulls here, instead.
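By way of illustration, a hedged sketch of such an INCLUDE, with hypothetical positions and value: the 2-byte length at byte 11, an 8-byte maximum, and a wanted value of ‘ABC’:

```
* Match the length, the value itself, and the null padding separately
  INCLUDE COND=(11,2,BI,EQ,3,AND,
                13,3,CH,EQ,C'ABC',AND,
                16,5,CH,EQ,X'0000000000')
```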

So here’s a snippet of code that uses the new IFTHEN support to convert trailing chars of a 4-byte field to something else:

IFTHEN=(WHEN=(4,1,CH,EQ,C' '),
        OVERLAY=(4:C'.'),HIT=NEXT),

Note: this example converts trailing blanks to full stops. But it could just as easily convert between any two code points.

Actually, in the above case you don’t need the HIT=NEXT for this simple example. It was extracted from a real example where I was doing this for two fields. And if you do have the HIT=NEXT version you can reduce each IFTHEN to acting on a single character.

I’ve tried informal tests with dozens of IFTHENs strung together and it didn’t seem to cost much.

Time to get creative

(Originally posted 2005-02-20.)

Well, now I’m on the agenda at these two conferences I’m confronted with the task of actually getting 4 presentations out the door by March 4th. Thankfully

  1. Three are updates of existing presentations, varying in effort quite considerably.
  2. Two are shared between the two conferences.

Of course it doesn’t help that the UKCMG presentation slots are shorter by 15 minutes than those in the zSeries conference.

As I’m writing “Much Ado About CPU” – and this is one of the dual-use ones – I have to think about what to leave out of the UKCMG version. As I’m talking about IRD and zAAPs (inter alia) which have an “implementation” element I’m thinking of skipping that at UKCMG.

(I can hide behind the “I’ve always been a pontificator and never a sysprog” line whenever the implementation details become “inconvenient” to talk about.) 🙂

Now to go and find out at least something about IRD managing Linux partitions. None of my clients has presented me with that as a serious topic before. 😦

DFSORT Processing SMF Records

(Originally posted 2005-02-18.)

It’s going to sound like the only thing I talk about is DFSORT. While that isn’t entirely true I am currently updating my “What’s New With DFSORT?” foils. And it made me think about processing SMF data…

In our internal analysis processes we chuck SMF data around quite a bit, whether subsetting it or actually mangling the records. In this entry I want to talk about subsetting – as most people have some other means of doing the actual mangling.

For a while you’ve been able to use the TMx and DTx formats to extract meaningful information from SMF Timestamp fields.

(Actually our process hasn’t caught up with this – we still have a REXX Exec that creates DFSORT statements from human-readable timestamps. But I digress.)

So you could code something like:
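Something along these lines, say (the SYMNAMES positions describe a standard variable-length SMF record header, with the RDW at bytes 1–4; the record type selected is hypothetical):

```
* SYMNAMES data set
SMFRTY,6,1,BI
SMFTIME,7,4
SMFDATE,11,4

* DFSORT control statements
  OPTION COPY
  INCLUDE COND=(SMFRTY,EQ,30)
* RDW first, then readable date and time, then the rest of the record
  OUTREC BUILD=(1,4,SMFDATE,DT1,X,SMFTIME,TM4,X,5)
```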


Although that is a reformatting rather than a subsetting, it illustrates a couple of points:

  1. You can use DFSORT Symbols to map (the front of) SMF records
  2. You can process the timestamp fields into something a little more readable.
    • The SMFDATE field will be reformatted as a DT1 (Z’yyyymmdd’) field, which you can post-process.
    • The SMFTIME field will be reformatted as a TM4 (Z’hhmmssxx’) field.

One of our bugbears with IFASMFDP has been that if you want to specify a time range it applies to all the output destinations. We might want, for example, to send all the RMF data (SMF 70-79) to one data set and only a single day’s worth of CICS CMF data (SMF 110) to another data set. While I think you could do it with an exit it’s cumbersome.

With DFSORT OUTFIL you could write to two separate destinations, specifying totally different conditions…

* Syms mapping the original record
* Syms for use after INREC:
*   _ORIGINAL  The original record, stuck after the prefix

The final OUTFIL statement uses the SAVE parameter to ensure all the records thrown away by the previous OUTFIL statements are written to the "OTHER" DD.

This example looks much more complicated because to use the SMF D/T values you have to reformat them. I've stuck them into a "prefix" which the OUTFIL statements strip off. So really long records won't work with this technique. (So you have to treat the date and time as BI and PD instead.)

The underscored symbols map the record after INREC has been used to apply the prefix. I use this convention to make it easier to handle INREC "invalidating" the original symbols: The fields move but the symbols don't change to handle that.
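To make this concrete, a hedged sketch of the whole thing (the record types, the date value, and the DD names are hypothetical; positions again assume a variable-length SMF record with the RDW at bytes 1–4):

```
* SYMNAMES: Syms mapping the original record
SMFTIME,7,4
SMFDATE,11,4
* Syms for use after INREC has applied the 16-byte prefix
_DATE,5,8,ZD
_TIME,13,8,ZD
_RTY,22,1,BI
_ORIGINAL,21

* Control statements
  OPTION COPY
* Prefix each record with its reformatted date and time
  INREC BUILD=(1,4,SMFDATE,DT1,SMFTIME,TM4,5)
* All the RMF records, with the prefix stripped off again
  OUTFIL FNAMES=RMF,
    INCLUDE=(_RTY,GE,70,AND,_RTY,LE,79),
    BUILD=(1,4,_ORIGINAL)
* A single day's worth of CICS CMF records
  OUTFIL FNAMES=CICS,
    INCLUDE=(_RTY,EQ,110,AND,_DATE,EQ,20050214),
    BUILD=(1,4,_ORIGINAL)
* Everything not written by the OUTFILs above
  OUTFIL FNAMES=OTHER,SAVE,BUILD=(1,4,_ORIGINAL)
```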

Actually I can't resist telling you about one "mangling" feature of the latest PTFs (UQ95213 and UQ95214):

In a lot of records, whether GTF or some of the timing fields in DB2 Accounting Trace (SMF 101), the values are stored as 8-byte STCK values. You can format these using new types. In fact when designing this I suggested both STCK and STCKE (the new format). So using DCn, TCn, DEn and TEn formats you can extract STCK Date, STCK Time, STCKE Date and STCKE Time, respectively.

In the first instance I think this is going to be more useful for formatting GTF records (as their timestamps use STCK format). (Actually I have some prototyping code that flattens SMF records out so DFSORT can process the data fields and so I may well be using STCK formats in prototype analysis, particularly if records are sent to me broken.)
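For instance, a hedged fragment (the position is hypothetical: an 8-byte STCK value at byte 9 of a variable-length record):

```
* Show the STCK date and the STCK time side by side
  OPTION COPY
  OUTREC BUILD=(1,4,9,8,DC1,X,9,8,TC1)
```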

RSS Feed

(Originally posted 2005-02-17.)

Would you like to see updates to my blog without having to visit the site on the off chance? If so, read on.

Pardon me for patronising an essentially mainframe audience by describing basic web technology, but…

If you want to get notified when my blog changes you might like to take an RSS feed. RSS clients exist to notify you when this RSS feed pumps something out.

I RMBed on the “RSS” icon in the right sidebar and copied the link to the clipboard. Then I pasted the URL into my RSS client (an internal IBM one) as an alert source.

You can find RSS feeds all over the web now. Some day I’ll figure out how to use them properly. 🙂

Not Your Grandfather's DFSORT

(Originally posted 2005-02-17.)

I’ve had a long and happy association with DFSORT. So here’s a link to a document describing the most recent enhancements, which came out in December.

There was also another significant set of changes in February 2003. They are described here:

As I previously mentioned I’ll be presenting “What’s New With DFSORT?” in Innsbruck. I’ve added sidebar links to the DFSORT home page, where you can get all the documentation from, and to the main FORUM for DFSORT on the Web.

If you have questions etc on DFSORT I’ll try to answer them but you’re better off using the FORUM.