Coping With DB2 Version 10 And Preparing For DB2 Version 11

(Originally posted 2014-03-08.)

I’ve said this many times: I’m not a DB2 person but I’ve bluffed my way in DB2 for many a year.[1] Perhaps that’s why I don’t get to use my DB2 analysis code nearly as much as I’d like.

So it’s perhaps not so surprising my code fell behind when it came to new versions. The last big update was for Version 8. That was a big update – mainly because Accounting Trace (SMF 101) was radically changed. This was to enable enhancements to Package-level statistics.

Now, as is so often the way, I’m dealing with a customer’s DB2 batch performance and they’re the first DB2 Version 10 customer I’ve dealt with at this level of detail.[2] And in fact I’d like to get ready for DB2 Version 11 (hence the title of this post).

The moral of the tale for customers is to keep current with any analysis tooling – whether you maintain it yourself or rely on software vendors to do it for you: It’s not much use trying to analyse systems – or the stuff running on them – with backlevel tools.

Now to some specifics (and bear in mind I leapt from Version 8 (or possibly Version 9) to Version 10).

Shock 1: DB2 Does SMF Subtypes Properly

I discovered my code was throwing away Statistics Trace records when building the database, where it had accepted them before. Specifically it had assumed the SMF subtype was a single byte at offset 22. This is slightly odd, as the standard calls for two bytes, but the code had worked fine up until now.

I remembered MQ had (long ago) had the same problem and fixed it.

And now it turns out that somewhere between Version 8 and Version 10 so had DB2. Easily fixed – by expecting a 2-byte subtype at offset 22 – and we don’t throw records away any more.
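As a minimal sketch of the fix (in Python, with made-up function names – my real code isn’t Python): the standard SMF header, counting from the start of the record including the RDW, carries the record type at offset 5 and – for subtype-capable records – a 2-byte big-endian subtype at offset 22:

```python
import struct

def smf_record_type(record: bytes) -> int:
    """Record type: a single byte at offset 5 of the standard SMF header."""
    return record[5]

def smf_subtype(record: bytes) -> int:
    """Subtype: a 2-byte big-endian halfword at offset 22, not 1 byte.

    ">H" means big-endian unsigned halfword, as SMF fields are on z/OS.
    Reading only record[22] happens to work while the high-order byte is
    zero - which is exactly the bug described above.
    """
    return struct.unpack_from(">H", record, 22)[0]
```

The old single-byte read only breaks once a product starts setting the high-order subtype byte, which is why the bug can lie dormant for years.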

Shock 2: DB2 Widened The SQL Statistics Fields

In Accounting Trace the QXST Section contains fields like QXFETCH – the number of SQL fetch requests – and these used to be 4-byte fields.

So I run off my usual DB2 batch reporting and compare the results with a DB2 Accounting Report (Long) my DB2 colleague created. While many things match, the SQL statistics don’t. Not even close.

After he sent me the mapping macro (member DSNDQXST) for the QXST section in Version 11, I twig that the fields have all (controversially, in my opinion[3]) been widened to 8 bytes. Adjusting my code to expect these wider fields wasn’t hard and doing so yielded the right numbers.
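A sketch of what the width change means for record-processing code (Python again, and the offset here is a placeholder – the real one comes from the DSNDQXST mapping macro for the release you’re processing):

```python
import struct

# Placeholder offset for illustration only; take the real value from
# the DSNDQXST mapping macro for your DB2 release.
QXFETCH_OFFSET = 0

def qxfetch(qxst_section: bytes, eight_byte_fields: bool) -> int:
    """Read the SQL FETCH count from the QXST section.

    Pre-Version-10 records carry 4-byte counters; Version 10 and later
    carry 8-byte counters. Both are big-endian unsigned integers.
    """
    fmt = ">Q" if eight_byte_fields else ">I"
    return struct.unpack_from(fmt, qxst_section, QXFETCH_OFFSET)[0]
```

Reading an 8-byte field with 4-byte expectations doesn’t just truncate – it misaligns every field after it, which is why the numbers were “not even close” rather than merely wrong.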

So now I can trust my code again – for now.

Shock 3: Accounting Trace Class 10

APAR PK28561 was introduced in the life of DB2 Version 8. It introduces a new Accounting Trace class: Class 10. Without this turned on you don’t get package-level SQL counts nor package-level Buffer Pool statistics. This means you can’t say things like “In Program FRED10 in Step 5 of Job JIM you opened a cursor and fetched a gazillion rows” with any certainty.
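For completeness, starting the accounting trace with Class 10 included might look something like the following (the class list here is purely illustrative, not a recommendation – check with your DB2 systems programmer):

```
-START TRACE(ACCTG) CLASS(1,2,3,10) DEST(SMF)
```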

I expect these statistics cost CPU and disk space so it’s reasonable to make them optional.

Actually my code copes with this but I didn’t understand the numbers I was getting in my reports at the package level. My reports needed adjusting to explain what just happened.

Consequences And Considerations

One consequence of these three changes is I’m no longer willing (or able) to process data from DB2 Version 9 or earlier. I could write a lot of code to fix up the relevant issues in a Version 9 compatible manner – but it’s really not worth it as Version 9 is at end of life.

I could give up and rely on someone else maintaining record processing (probably one of the IBM products). I choose not to for a number of reasons:

  • It keeps me close to the data. How my code worked and how Accounting and Statistics Traces fit together came back to me very quickly. I judge expertise in how SMF records work to be valuable, especially when it comes to evolution (as in this case) and quirks.
  • I can integrate my queries into my reporting. Listing scraping is no fun and I often want to go beyond what standard reporting does. And actually the reporting design and evolution is the fun bit.
  • Data provenance is important. For example, knowing RMF SMF 71’s Available Frame Queue (AFQ) counts are solid is valuable. (See What We Share And What We Have In Common for more on this.)
  • I’m not keen on being spoonfed – perhaps to the point I probably should accept more spoonfeeding.
  • Talking to my DB2 colleague we suspect some of the product code – both DB2 and what I still call “DB2PE” – is faulty.[4]

But I bet you’re glad you don’t have to actively get into the guts of evolving data, like the DB2 stuff. Unless you are a vendor. Or me. 🙂

  1. Real DB2 specialists can attest to the truth of this: DB2 was something I came to long after MVS but knowing the two has made life much more interesting. 🙂  ↩

  2. Actually I’m not the primary DB2 person helping them but processing the DB2 data is important here: It should help them talk in terms of jobsteps – which would be more useful to Operations people and performance folks in the customer. So I’m keen to do a good job with the DB2 data.  ↩

  3. This is a tough one as adding new 8-byte fields on the end would’ve added about 1KB to each SMF 101 record. Not doing so has meant reusing space in the record and still extending by a few hundred bytes. Either way supporting both old and new would have meant dual-patching code.  ↩

  4. Generally I trust my own code much less than that of products. But occasionally I have to file a bug report.  ↩

Published by Martin Packer

I'm a mainframe performance guy and have been for the past 35 years. But I play with lots of other technologies as well.
