Self-Documenting?

(Originally posted 2006-01-24.)

As a performance consultant I used to think I was the only one who needed to glean configuration information from performance data – because I didn’t want to ask foolish questions of the customer.

But gradually (it seems to me) customers have ended up in the same boat as me:

  • What we thought we bought ought to demonstrably correspond to what we actually bought.
  • Settings we thought were in play ought to be demonstrably in play.

The sheer complexity of machine configurations and settings – whether z/OS, or DB2, or CICS, or WebSphere, or MQ, or … – means that it’s increasingly difficult to check that things are as we thought they were.

I’m not asking anyone to declare themselves out of control 🙂 but does this view – that things ought to be self-documenting – chime with anyone?

Because that’s an argument I’m increasingly using with Development groups.

Most such groups do a good job, whether it’s DB2 providing its dynamic settings in a Statistics Trace record, or the wealth of configuration information in RMF. There are, however, some untidy areas that I’m working to see fixed.

System z9 and zSeries Technical Conference

(Originally posted 2006-01-12.)

I’m lucky to again be on the agenda at this conference. Here’s a link to the conference website.

This is a very good arena for technical education on the mainframe, with many great speakers from the development laboratories, coupled with a number from the field. I always learn a lot – so I highly recommend it.

My three presentations are:

MIDAWs, FICON and DB2 Performance

I’m thrilled that Jeff Berger asked me to present his paper. Here’s the abstract:

Martin will present Jeff Berger’s paper on how the new z9-109 MIDAW facility, FICON channels and the DS8000 controller can improve DB2’s I/O Performance. This presentation will cover Extended Format DB2 data sets, FICON Express 1 and 2, what MIDAWs are, the new DS8000 controller, and finally how all these have been shown in measurements to improve DB2 I/O performance.

Memory Performance Management in a 64-bit world

This one is an update on last year’s presentation of the same name. Here’s the abstract:

DB2 Version 7 exploits 64-bit real memory, whereas Version 8 also exploits 64-bit virtual. This presentation focuses on managing both real and virtual memory, with an emphasis on DB2. It enables z/OS and DB2 performance people to work together to manage both the real and virtual memory usage by DB2.

It assumes at least a basic understanding of how memory works on zSeries processors.

Tuning “New World” DB2 Applications for MVS Performance Specialists

Again an update on an ever-evolving theme:

MVS Performance specialists are used to handling the quirks of SMF records. They are therefore well placed to support DB2 Application tuning efforts.

This presentation introduces MVS Performance specialists to the DB2 SMF 101 Accounting Trace record, outlining many of its major quirks. Reference is made to other types of instrumentation that complement SMF 101.

After some “vocabulary and syntax”, the presentation shows how records from different application types look.

DB2 Performance Improvements for WebSphere Application Server JCC Clients

(Originally posted 2006-01-12.)

APARs PQ99707, PK07317 and PK08949 document two performance improvements that help WAS applications using the Universal JCC (Java) driver. All the enhancements are available for DB2 Version 8. The “CPU cost” one is also marked as available for DB2 Version 7.

Using the new Java Universal Driver DB2Connection.resetDB2Connection() method can cause higher DB2 CPU costs and longer transaction times. This method allows the JCC client system to pool connections to a remote system for later use by other applications, or transactions, that need to access the same server, thereby saving the performance cost of physically re-establishing a connection to the remote server.

When the JCC client system decides to reuse the connection, the DB2 server must first clean up and release all resources related to the previous transaction before executing the next transaction on the connection. Using this new method, the DB2 server treats each transaction as an independent entity and requires each user associated with each transaction to be authenticated using RACF, even if the userID and password are unchanged from the last use of the connection.

The high rate of re-authentication causes excessive CPU utilization and degrades performance.

DB2 server changes were made to improve the performance of the JCC DB2Connection.resetDB2Connection() method being exploited by WAS. DB2 is changed to only perform authentication every 3 minutes when the userID and password haven’t changed from the last use of the connection.
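The policy amounts to a small per-connection cache: re-authenticate only when the credentials change, or when the last authentication is more than three minutes old. Here is a minimal Python sketch of that policy – purely illustrative, since DB2’s actual implementation is internal and the class and method names here are my own invention:

```python
import time

# Illustrative sketch of the re-authentication policy described above:
# authenticate again only when the credentials change, or when more than
# REAUTH_INTERVAL seconds have passed since the last authentication.
REAUTH_INTERVAL = 180  # 3 minutes

class ConnectionAuthCache:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_creds = None      # (userid, password) last authenticated
        self._last_auth_time = None  # when that authentication happened

    def needs_authentication(self, userid, password):
        """Return True if this reuse of the connection must be re-authenticated."""
        creds = (userid, password)
        if creds != self._last_creds:
            return True  # credentials changed: always authenticate
        return (self._clock() - self._last_auth_time) > REAUTH_INTERVAL

    def record_authentication(self, userid, password):
        """Note a successful authentication for these credentials."""
        self._last_creds = (userid, password)
        self._last_auth_time = self._clock()
```

The point of the sketch is simply that the expensive RACF call drops out of the per-transaction path: a reused connection with unchanged credentials pays for authentication at most once every three minutes.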

Also, with PQ99707, there is a reduction in message flows when the client provides “end user identification” data: userID, workstation name, application name, and accounting data.

Fix to the way extents are calculated in SYSTABLEPART

(Originally posted 2006-01-12.)

Running out of extents for DB2 tablespaces can be a real bore. So installations like to monitor them (and I like to see them too). APAR PK12653 describes a fix to the recording of the number of extents in the SYSTABLEPART Catalog table. This fix is only available for DB2 Version 8.

RUNSTATS updates this table using the extents count in the PB0 control block. This is updated on physical data set OPEN, but not when the data set gets extended. So it can be inaccurate.

The changes are two-fold:

  • Refresh the PB0 value at the end of extend processing.
  • Notify other data sharing members at the end of extend processing.

This should make the number a lot more accurate in SYSTABLEPART because RUNSTATS will be sampling a more accurately maintained control block field.
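For installations doing the monitoring mentioned above, the check reduces to reading the EXTENTS column of SYSTABLEPART and flagging partitions that are getting close to trouble. A minimal Python sketch – the query text follows from the catalog table named in the post, while the threshold of 200 extents is an arbitrary illustrative choice:

```python
# Query text for the kind of extent monitoring discussed above; run it
# against the DB2 Catalog with whatever database access you normally use.
EXTENT_QUERY = """
    SELECT DBNAME, TSNAME, PARTITION, EXTENTS
    FROM SYSIBM.SYSTABLEPART
    ORDER BY EXTENTS DESC
"""

def parts_near_extent_limit(rows, threshold=200):
    """Given (dbname, tsname, partition, extents) rows fetched with a query
    like EXTENT_QUERY, return those at or above the extent threshold."""
    return [row for row in rows if row[3] >= threshold]
```

Remember the caveat in this post, of course: until PK12653 is applied (and RUNSTATS is rerun), the EXTENTS value you fetch may understate reality.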

Better Routing of DDF Work in a Parallel Sysplex with z/OS 1.7

(Originally posted 2006-01-12.)

I’m working through the queue of interesting APARs that have been sitting in my in-basket. (I guess it’s a “New Year’s tidying up” thing.) 🙂 Here’s one that relates to workload routing for DDF. It does prereq z/OS 1.7 on all members so it won’t be of that much interest to customers right now…

APAR PK03045 “DB2 z/OS support for z/OS 1.7 WLM routing service” describes the DB2 enhancement to support this new routing algorithm. This is available for both DB2 Version 7 and Version 8.

In a nutshell, the “old” WLM algorithm used to provide routing recommendations to interested clients based on available CPU capacity. However, that didn’t take into account anything else within DB2 that might cause a particular Data Sharing Group member to be a poor placement choice. DB2 has been changed to ensure that WLM has accurate information about the delay, or queue, time of any of the enclaves that DB2 DDF created and used to perform its work requests. Recall: in-bound DDF work acquires an enclave when it goes through the initialisation process in the DIST address space, prior to executing SQL.

This enhancement advances the state of the art when it comes to workload balancing – a topic that has recently exercised my brain more than it had in the past. I would be interested in hearing of customers’ experiences in this area, whether DB2 or not.

DB2 Catalog and Performance Trace Mismatch

(Originally posted 2005-12-01.)

If you ask Performance Trace nicely enough it’ll tell you what statement in a package is consuming all the CPU.

Over the past couple of years we’ve been working on code to analyse the DB2 Catalog and PLAN_TABLE tables to do SQL tuning. It’s been an interesting journey – and, I guess, a never-ending one.

Each client provides fresh insight and challenges. Our current Chinese one is just such a client: The statement numbers in Performance Trace and SYSIBM.SYSPACKSTMT/PLAN_TABLE don’t always tie up. SYSPACKSTMT and PLAN_TABLE agree with each other. It’s the Performance Trace that’s anomalous.
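The check that exposed the anomaly is, in essence, a set comparison: which statement numbers appear in the trace but have no match in the static data? A minimal Python sketch (the statement numbers in the test are illustrative, not from the client’s data):

```python
def anomalous_trace_statements(trace_stmtnos, catalog_stmtnos):
    """Statement numbers seen in the Performance Trace that have no match in
    SYSPACKSTMT/PLAN_TABLE. A non-empty result is a hint that the static data
    and the trace were collected at different times, with the package edited
    and rebound in between."""
    return sorted(set(trace_stmtnos) - set(catalog_stmtnos))
```

In our case the result was non-empty, which is what sent us looking at the collection dates.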

It turns out that the static data and the Performance Trace were collected some weeks apart. In a busy installation code gets edited, tested and recompiled all the time – particularly if the code is underperforming. I think we got bitten by that.

So in the future we’ll be sure to get fresh Catalog/PLAN_TABLE data when we take a trace – which probably means a respin, as we generally home in on a particular package to trace after the main DB2 (and z/OS) study is done. Unfortunate but necessary.

DB2 in 64 Bit Mode and Hiperpools

(Originally posted 2005-11-23.)

A recent client of mine is still using Hiperpools – while in 64-bit mode (and with > 2GB of memory on each LPAR).

It is my contention that they would save CPU, and probably get better buffer pool effectiveness, if they switched to a judicious mixture of virtual pools and data space pools.

Would any reader care to relate their site’s experience with this sort of reconfiguration?

In this post by virtual pools I mean those backed by DBM1 virtual storage, rather than data space or hiperpools. So virtual pools contribute towards the tally of DBM1 virtual storage usage, which (as we all know) is limited to significantly less than 2GB.

This client is on Version 7 – and I expect them to remain there for a while. I do know – from IFCID 225 (Statistics Trace Class 6) – that they can’t replace the whole of their current hiperpool inventory with virtual pools. So some mixture of data space and virtual is inevitable.

Inside Big Blue – Great Web Stuff

(Originally posted 2005-11-20.)

IBM Shows del.icio.us… is a very interesting article by David Weinberger (author of The Cluetrain Manifesto) about all the interesting things going on on the web.

I’m proud to say that IBM is doing a lot of these interesting things – and I’ve taken time over the last two years to become a part of them. David spent a day at a seminar where all of these interesting things were demoed. I’d encourage you to read his blog entry and see how some of those things could apply to your organisation. I’m sure you’ll think some of them are off the wall, but we are genuinely finding most of them useful – and I personally can attest to that.

Take blogging for instance. As Apple found out (with their Nano) it’s important to know what the blogosphere is saying about you. And it’s far more than just Googling your own name. 🙂

Queen: Return of the Champions DVD

(Originally posted 2005-11-14.)

As many people know I’ve been a Queen fan since 1974. And we Queen fans all know how we feel about Freddie Mercury’s death and the subsequent retirement of John Deacon. So it was with a mixture of anticipation and trepidation that I saw Queen + Paul Rodgers live in Hyde Park this summer. That was a great show. As these guys are all about 15 years older than me, it’s nice to see that in a notoriously youth-oriented business they could still deliver. The parallel with the IT industry is obvious (not that I have to mention it to be able to blog here). 🙂

So on my way through Heathrow I picked up the Queen: Return of the Champions DVD. It was all shot at a concert in Sheffield (with the exception of a very nicely done Imagine which was from Hyde Park).

I have to tell you it recreated the buzz I got when QPR strode onto the stage at Hyde Park. But this time I could see much more.

So, for Queen fans who can cope with the idea of Brian and Roger touring with someone other than Freddie, or for Paul Rodgers fans, this is one nice DVD.