IBM System z Technical University – Vienna, May 2-6

(Originally posted 2011-04-11.)

I’m working on my presentations for System z Technical University – Vienna, May 2–6 and I’m reviewing the agenda. As well as my four presentations there are lots of other goodies. These range from the Management level down to the purely technical. (I guess mine are towards the latter end of the scale – but I’d say there’s lots of pressure on us all to work on cost, so detailed information on, for example, CPU has real impact.) In the other dimension there’s a very wide range of topics.

For the record I’m speaking on the following topics:

  • Memory Matters in 2011
  • Much Ado About CPU
  • Parallel Sysplex Performance
  • DB2 Data Sharing Performance For Beginners

These are all what I call “rolling” presentations: They evolve with time. If you haven’t seen them for a couple of years they’re substantially different. (Actually that’s probably true if it’s only been a year – as it will be for some of the luckier attendees.)

I’ll be a day late to the conference as I’m seeing Brian May and Kerry Ellis in concert at the Albert Hall the day before, so won’t travel until the Monday. (This concert is for a great cause: Leukaemia and Lymphoma Research.)

I always enjoy these conferences: They’re generally in nice places but, more to the point, it’s great to run into old friends (customers, vendors and IBMers) and make new ones. And it’s always nice to hear things like “I saw you last year in Berlin and I’ll be in Vienna this year” (said by an Austrian customer back in February).

So, I think this conference is a great investment of time and money. And I feel very lucky to be attending yet again. See you there!

(Meanwhile I hope to be publishing my “Batch Architecture, Part One” post some time this week. I’m working on two batch situations that hopefully will inform the post, even if they delay it.)

Experimenting With QR Codes

(Originally posted 2011-04-04.)

Inspired by two of Bob Leah’s posts on QR codes here and here, I started experimenting with creating and consuming QR codes.

But what is a QR code? In short it’s a two-dimensional barcode that can contain e.g. plain text or a URL. In the latter case a QR code reader can pick up the URL – maybe from a real-world object – and open it in a browser.

Creating QR Codes

In my experiment I created the barcode differently from how Bob did: As my laptop is running Ubuntu Linux I looked for a command-line tool. In my case I used the qrencode package. This takes a string and encodes it as a PNG graphic. Here is an example:

This is rather small – which might be handy from the perspective of printing labels.

Command line is important to me because it means I could automate generating QR codes – maybe a page of labels at a time.
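That automation could be sketched with a small wrapper around qrencode – here in Python, with the URL scheme and object numbers invented purely for illustration:

```python
import shutil
import subprocess

def label_command(obj_id, base="http://example.com/objects"):
    """Build the qrencode invocation for one object label.

    -s sets the module (dot) size, -o the output PNG file.
    """
    url = f"{base}?id={obj_id}"
    return ["qrencode", "-s", "3", "-o", f"label-{obj_id}.png", url]

# A "page" of three labels; a real run might do thirty at a time.
for cmd in (label_command(f"{n:05d}") for n in range(1, 4)):
    print(" ".join(cmd))
    if shutil.which("qrencode"):  # only invoke the tool if it's installed
        subprocess.run(cmd, check=True)
```

Each iteration produces one PNG, so a sheet of labels is just a loop over object numbers.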

Reading QR Codes

On my iPhone I installed a nice QR code reader app: Qrafter (in fact the free version). Although the QR code above is rather small it could read it perfectly well. I’m sure there are QR code readers for all kinds of mobile devices. Nowadays anything with a camera can do all sorts of things like barcode reading, QR code reading, and document scanning (with or without OCR).

Possibilities

The ultimate aim of the experiment is to be able to tag objects: if you can tolerate sticking a small QR code label on an object you can annotate it. You could encode a URL on the label and then your device of choice could read the URL and open the page in a browser.

But what could the URL be? In my imagination it could be in two parts:

  1. The URL points to a web server that maintains a database of information about objects. (In fact the URL points you to a page where you can view the information about the object – and optionally edit it.)
  2. The search string is the object number. Each QR code has a different number. Actually it need not be a number, strictly speaking.
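A minimal sketch of the server side of that idea, with the object database as a plain dictionary (the object number, URL and annotation below are all invented):

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical object database: object number -> annotation.
objects = {"00042": "Spare laptop PSU, desk drawer 3"}

def lookup(url):
    """Return the annotation for the object a QR-coded URL points at."""
    query = parse_qs(urlparse(url).query)
    obj_id = query["id"][0]
    return objects.get(obj_id, "unknown object")

print(lookup("http://example.com/objects?id=00042"))
```

A real version would serve an editable web page rather than a string, but the shape – URL in, object record out – is the same.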

Of course you COULD do this with RFID tags. But this seems to me a lighter-weight way to get started. Of course there are many objects you wouldn’t or couldn’t stick paper labels on: Such as clothing. But there are lots of things you could annotate this way.

There are lots of possibilities here. I was just experimenting – admittedly in my hotel room on a Sunday night. I’d be interested in ideas and thoughts on this.

Memory Metrics – An Overdue Update

(Originally posted 2011-03-30.)

In 2007 I posted twice on memory metrics.

I should probably have posted an update some time ago. In the second of those posts I said "Obviously copious free frames would suggest no constraint." That’s true but I would invite installations to consider something else…

Capturing a dump into virtual memory backed by real memory is much faster than capturing it into paging space. (And that in turn is much faster than capturing it into constrained paging space.) Over the past couple of years I’ve progressively updated my "Memory Matters" presentation to cover Dumping and Paging Subsystem design – to reflect this.

So it’s important to consider what your stance on Dumping is. For some customers Availability will be the over-riding consideration and they’ll configure free memory to dump into. For others it’ll have to be a compromise – for machine lifecycle and economic reasons. The point is to decide on a stance on provisioning memory for Dumping. And do it at the LPAR level.

Meanwhile, z/OS Development haven’t neglected this area. I’ve documented the z/OS Release 12 enhancements in "Memory Matters" but in short they are:

  • Batching page-in I/O operations during SDUMP capture eliminates much of the I/O delay.
  • Data captured will no longer look recently referenced, so it will be paged out before other, potentially more important, data.
  • Certain components now exploit a more efficient capture method in their SDUMP exits. For example GRS for SDATA GRSQ data, IOS for CTRACE data, and configuration dataspaces.

I’ve had foils on page data set design, Dumping control parameters etc for some time.

But the key point is that dump speed is an important factor in memory configuration and monitoring.

And the thing that caused me to write this post – at last – is a discussion today on MXG-L on UIC. So thanks to the participants in that.

Batch Architecture, Part Zero

(Originally posted 2011-03-29.)

I’m not an architect. I don’t even play one on TV. 🙂 In fact real architects would probably say I’m in the babble phase, architecture-wise.

But I’ve been involved in a few situations over the past year or so (and I’m involved in a couple starting round about now) which have led me to the following simple conclusion: Many installations would benefit from drawing up a Batch Architecture. I don’t think this is specific to z/OS-based batch, though we do tend to have more complex batch environments than other platforms. (And modern environments seem to have z/OS-based and other batch mixed together, often in the same application.)

As I say, I’m not an architect so some of what follows will seem to real architects a lot like Officer Crabtree. 🙂 But it’s my thinking – and hopefully some of it resonates with you.

So what do I mean by a Batch Architecture? To me it contains the following elements:

  • A description of the operating environment. This contains things like the LPARs Production Batch will run on, the database systems, message queuing systems and the like. You’d also rope in things like transmission networks, tape subsystems, job schedulers and printers supporting the batch. You might include commentary such as "we use PRDBATHI WLM class for critical production batch, PRDBATMD for most of it, and PRDBATLO for stuff that can run slowly but is still classified as Production".
  • An inventory of applications. Though I’ll talk about this element more in a subsequent post I’ll note the minimum would be a list of names of applications and a description for each application. Also the job names (or rule for classifying jobs into a particular application).
  • An understanding of the interfaces between applications. For example "PABC990D in ABC and PXYZ010D mark the boundary between the ABC application and the XYZ application, the former needing to update the General Ledger before the latter can begin to produce reports". Again, something I hope to write about some more.
  • A view of when the window is – if there still is such a thing as a window. And what has to get done by when – with business justification such as "we have to post status with the Bank of England by this time in the morning".
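The inventory and job-classification elements could start life as something as simple as this sketch (the application names, descriptions and job name prefixes are all invented):

```python
# Hypothetical application inventory: name -> description and the
# job-naming rule that assigns jobs to the application.
inventory = {
    "ABC": {"description": "General Ledger update", "jobname_prefix": "PABC"},
    "XYZ": {"description": "Reporting",             "jobname_prefix": "PXYZ"},
}

def classify(jobname):
    """Map a job name to its application via the prefix rule."""
    for app, info in inventory.items():
        if jobname.startswith(info["jobname_prefix"]):
            return app
    return "UNCLASSIFIED"

print(classify("PABC990D"))  # → ABC
```

Even a flat file like this is enough to start talking about batch above the individual-job level.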

The above, far from exhaustive, list enables you to think about your batch in a structured fashion. Done right, and it doesn’t really matter which tooling you use, it begins to enable you to:

  • Talk about your batch at a level above the individual job.
  • Think about the impact of growing business volumes.
  • Plan for merging batch workloads (particularly topical at the moment).
  • Plan for splitting off workloads.
  • Think about how you can move work around.
  • Consider what happens if there is a problem – whether an application failure or a disaster.
  • Put some structure on modernisation efforts.
  • Tune batch in a structured way.
  • Collate the understanding of your batch that so often is in the heads of very good application experts.

Now, I appreciate most customers have huge batch inventories – often in the tens of thousands of jobs a night – and I think many customers are doing elements of this already. So what’s left to do that’s actually doable? I think quite a lot – and of course it varies from installation to installation.

But I do think some architectural thinking about batch would be really useful for most customers – and I’m certainly going to be thinking more about this myself (including seeking the wise counsel of some real architects). 🙂 At a practical level I’m going to post on how to do some of the building of a batch architecture.

Do You Like The New Look?

(Originally posted 2011-03-25.)

I hope you do. Special thanks are due to Bob Leah, Victoria Ovens and David Salinas for getting me this far:

  • Bob created the new template I’m using (and it is discussed further here).
  • Victoria created the new blog header graphic (of a z196 and a zBX) and David put it up for me.

You’ll have noticed I’m also blogging again – after a gap of about a year. (I talk about that somewhere in the middle of here.) So, a fresh look seems like a very good thing.

developerWorks blogs are built using Lotus Connections Blogs (in turn built with Apache Roller/Velocity) – a flexible platform I’m beginning to learn how to tailor. So now my tweet stream is embedded – for one.

I’d like to add a blogroll and a set of other useful links. Basic stuff, I know. It’ll take a little figuring out.

So here’s a question for you: Is there anything else you think I should do to make my blog more useful to you? Apart from (or maybe including) some useful content. 🙂

Exiting The Babble Phase?

(Originally posted 2011-03-23.)

… or "The Nightmare That Never Ends". 🙂

The concept of a babble phase comes from Child Development: When children are learning to talk they start by making sounds they think might get them somewhere. (Some might say that’s a phase they never leave.) 🙂 Such a developmental stage is called "the babble phase". (The term has been borrowed by Artificial Intelligence researchers – and I expect Watson went through that at some point.)

I’d like to think I was an early adopter – and I get frustrated when I realise I’m not. But I think it safe to say I was fairly early in adopting a number of Social Networking tools: One of the earlier bloggers on IBM’s internal BlogCentral, one of the earlier users of Twitter (and thanks to Ben Hardill – who responded to my challenge – an early adopter of BlueTwit, the internal analogue). And a fairly early adopter of LinkedIn and FourSquare. The jury’s still out on Quora but I’m there, albeit not that active.

The point about this post – in case you’re still reading – is the stance to take on personal adoption of technology. I’d like to think the sooner you adopt a technology the sooner you exit the babble phase. And then you’re onto the next one. 🙂 Early adoption actually means you make the mistakes before many people are around to notice (particularly the ones you’d seek to hide such mistakes from). 🙂 So, in the words of Bill and Ted, "they do get better" applies here. 🙂 It also means you can exit early – if a technology doesn’t work out.

One other thing early adoption does for you is to give you a body of experience you can use to help others. If you’re like me (and it’s highly likely you aren’t) 🙂 what works for me might work for you. On the other hand, if you see me doing something that you think is not for you, that’s probably useful, too. I wouldn’t want to spend my life as a walking talking antipattern, though.

Seriously, my experiences as a customer-facing early adopter led me to participate in drafting and revising IBM’s Social Computing Guidelines. I’m one of the people who drafted the "customer" elements of it. And I’m one of the people who injected some thoughts on Geolocation into the current version. (It’s fairly obvious stuff, of which Heisenberg might’ve been proud: Don’t give away your location and the matter-in-hand in a way that would damage or embarrass a customer.)


Did you spot the "Watson" reference a few paragraphs back? It’s all wrong of course: It doesn’t contain a hyperlink, it isn’t the "party line" but it is my way of thinking about it: An "authentic voice" on the matter (but possibly not a useful one). 🙂

I’ve been caused to think over the past few weeks "why am I doing Social Networking?" Now, it’s not any kind of reticence or realisation I’ve been wasting my time. 🙂 Far from it. But I feel comfortable sharing with you my motivations:

  • "If it feels good do it" is probably the underlying motivation. It really does feel good communicating with people (or maybe at people). That’s why I do it, not through any sense of self-aggrandisement.
  • I think I have stuff to say. Now, I’m aware that with a single online identity (through whichever tool we’re talking about) I’m writing stuff that only a subset of my audience will latch onto. Hopefully each reader gets enough from me to make it worthwhile – and I know you’ll vote with your "feet" if that isn’t so. Fair enough.
  • I think I can help. You don’t get to spend this long on the planet, and this long trying to become a Subject Matter Expert (SME in the parlance (which reminds me of Red Dwarf for some reason) 🙂 ) without having knowledge you can share with others.
  • But I’m not trying to sell anything (other than the notion that some of the things I’m interested in are interesting).
  • People might remember me when they see me. This is the nearest thing to a commercial or selfish reason I can think of.

So, you won’t find my Social Networking presence optimised to garner millions of links or generate lots of sales. You also won’t find me pretending to be someone I’m not. (You may well find some of what I have to say stuff you disagree with.)

Which brings us back to Watson. While I will admit I’m impressed by Watson I really don’t have a detailed knowledge of its inner workings, nor do I think I need to. But it is a big advance in Artificial Intelligence and so is a reasonably topical thing to link to "babble phase" in a joke. And that’s my authentic voice speaking.

I think "authentic voice" is terribly important: It has to be real people speaking. I’d like you to read this post – if you already know me – and be able to say "yup, that’s Martin alright". I’d like you to be able to trust me and what I’m saying. (And I’d like to earn that trust.)

Now that authentic voice plays out in how I use the tools:

  • Twitter is my natural arena: I hope what I have to say is worth 140 characters or fewer. 🙂 It’s not a force fit at all.
  • I used to blog a lot. I went away from it for a while – thinking the effort of a long-form piece too much for the payload. But I’m now thinking that rather than lighten the wrapping I should beef up the payload. I do think I have stuff that’s worth much more than 140 characters. And I’d invite (perhaps rashly) you to suggest things I should write on. Just understand some things I won’t be able to write about and others will need a good deal of research.
  • FourSquare feels right, too. I’m very pleased to see a wide diversity of customers – in lots of arenas of varying intimacy and formality. (But note the bit above about protecting customers.)
  • My artistic skills are shockingly poor. Which is why you don’t see many pictures.
  • Podcasting feels like too much of an effort. But having heard some awfully-produced ones recently I think maybe I could do better than that. Maybe I will. (And I’m trying not to be suckered into big-budget effects, having just seen Stormtroopers In Stilettos). 🙂
  • Newsgroups are great too. (In fact I’ve been participating since the VM FORUMs in the mid-1980s.) They often cause me to research stuff and I learn a lot from them as well.

So I think each medium has a different role to play – and differently for each of us. (Both as a deliverer and as a victim consumer.) 🙂

What’s also interesting is the fact that each medium is linked: So Twitter feeds Facebook and LinkedIn, FourSquare feeds Twitter. And, of course, I tend to link to new blog entries from Twitter (including this one).


In summary, I’ve found there’s tremendous (personal) value in Social Networking. I’d urge people to take a "fast forward into the future" approach – as we’ll all benefit from more, authentic voices. (I nearly left it at "more authentic" 🙂 but inserted the comma instead. Well, it made me giggle a little.) 🙂

"Publish and be damned" can be taken in at least two ways as well. 🙂

And "fast forward" for the reasons I’ve articulated: Mentoring, exiting the babble phase ahead of the pack, etc. I also think it enables you to evaluate new tools that much quicker.

Of course, like many nightmares you can appear to wake up. But often you’re still in the dream, just somewhere else. So, to answer the question in the title: "no, not really, and I wouldn’t have it any other way". 🙂

One final thought: If, as William Gibson says, "The future is already here — it’s just not very evenly distributed" then I’m extraordinarily lucky how much of it has come my way. If some comes your way, grasp it with both hands.

Learning Android Programming

(Originally posted 2011-03-17.)

A while back I set myself a technical challenge: To learn how to program an Android device.

NOTE: I don’t have a real application in mind, just idle curiosity and a degree of annoyance at the prerequisites to be able to program an iPhone (apart from as a WebApp).

Here’s what I did and how I’ve got on:

My Equipment

  • I bought the cheapest Android tablet I could find. It’s an Intempo I.D. Spirit. It runs Android 1.6 (not upgradable as far as I know) and you have to fairly thump the "touch screen" to get it to do your bidding. It’s slow but that’s OK. At least it’s colour and very portable.

    You’ll find lots of similar or better machines at a low price.

    The point is that – up to a point – the hardware isn’t important for learning on. (But note this one won’t do "gestures" and the like.)

  • I bought a SanDisk microSDHC card reader, which plugs into a USB port on my laptop. (And an 8GB card to go in it.)

    While I could’ve used wireless to install, it seemed much easier to just do the card-juggling thing. And I could’ve gotten away with a much smaller memory card.

  • I installed the Android toolkit on my laptop. I’m running Ubuntu Linux as it happens. I’m sure you can do this on other platforms.

  • I bought "Hello Android" by Ed Burnette (Third Edition, which covers Android 2 as well as the 1.6 I’m targeting). The website for the book is here.

That’s all: A machine, a way of transferring programs, the SDK and a book. What more could you need?

I experimented with the Eclipse environment for Android development but couldn’t get it to build. (But then I like to use a text editor and manually drive the build tools anyway – just to get a feel for what’s really going on. If I were a real Android application developer I’d probably make Eclipse work for me.)

My Experience

The "Hello, Android" book is well written – easy to follow but not condescending. And the website has downloads and errata. Ed’s clearly taken this seriously. I’ve not had a book from "The Pragmatic Programmers" before but I think I’d buy others if a cursory glance suggested they were similarly executed.

First the book takes you through creating a "Hello, Android" application – using the SDK. This worked well. I did have to look up how to sign the application using keytool and jarsigner (the latter being invoked automatically by the ant script the SDK creates to enable you to build). It ran just fine when installed from the card into the Android device.

Then, and here’s the real meat of it, the remaining chapters teach you how to build a Sudoku application. The book does this in an incremental fashion, which really works well.

It really helps if you’re already familiar with Java and XML: You’d be editing those sorts of files quite a lot.

My Conclusion

It really is possible to go from zero to being reasonably competent in building an Android application with not much hardware, a flash memory card, the SDK, a text editor and a good book. And that’s the way it should be, I think.

The other nice thing is it seems to be something you can learn in small chunks – maybe an hour at a time.

When Display Commands Aren’t Good Enough

(Originally posted 2011-03-13.)

There’ve been times in the past when a request for extra data in SMF has been met by "you can issue a DISPLAY command to get that". (Another variant is "you can go to the HMC for that".)

I’m here to tell you why I don’t think that’s a brilliant answer:

  • Such a command is a point-in-time thing.

    Systems nowadays are much more dynamic than they were. Something as simple as the number of engines assigned to an LPAR is highly likely to change – from one moment to the next.

    So you really can’t tell what happened last week from the results of a command you issued just now.

    And automating the command on a timer pop is a touch fraught as well.

  • The output from commands is usually much harder to parse than well-designed written-out records.

    (That’s true whether we’re talking about SMF or some other kind of instrumentation.)

  • The Systems Management environment is worse, too.

    I’m not sure I really want to permit Performance people to issue operator commands, just so they can get instrumentation. Look, we’re all splendid lads and lasses (I’m quite sure) but I really don’t think it’s good for Systems Security to let us do it: To make it even acceptable we have to put bounds around what commands and parameters are permitted.

    (Think of "F LLA" as a good example of where the same command can be used to change things and get some answers.)

    And as for going to the HMC… 🙂

  • An interesting case is the DB2 Catalog: While it generally has static information in it there is some time-driven stuff (such as in SYSCOPY and the "HIST" tables).
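To illustrate the parsing point above: pulling a value out of free-form command output takes a fragile regular expression, while a fixed-layout record is a simple slice. (Both formats below are invented for illustration, not real z/OS output.)

```python
import re

# Free-form command output: needs a regex, which breaks if the wording
# or spacing of the message ever changes.
display_output = "LPAR PROD1  ENGINES= 12  STATUS=ACTIVE"
engines = int(re.search(r"ENGINES=\s*(\d+)", display_output).group(1))

# Fixed-layout record: a documented offset and length, trivially stable.
record = "PROD1   0012ACTIVE  "
engines_from_record = int(record[8:12])

print(engines, engines_from_record)  # both parse to 12
```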

I think we’re very lucky in z/OS to have good instrumentation in the form of SMF (and possibly things like DB2 logs). It’s the sort of thing many other platforms should be envious of. And, yes, I do value the ability to get "here and now" information (not least for driving Automation).

One other thing: If the information is available for an operator command it’s just a SMOP (Simple Matter Of Programming) to get it into SMF. (Yes, I know it’s not quite that simple: Having rubbed shoulders with developers for 25 years I do appreciate that.)

So you’ll see why I keep pushing for stuff to go into SMF. And I hope you’ll keep pushing, too. 🙂

WLM Velocity – “Rhetorical Devices Are Us”

(Originally posted 2010-01-24.)

I’m beginning to look at performance data slightly differently these days…

As well as plotting things by Time Of Day (which our tools have done for 25 years) I’m beginning to plot things more directly with load. (Time Of Day is sort of code for With Load but not really – telling a story people can relate to more directly.)

The first instance of this "with load" approach was plotting CPU per Coupling Facility request (and also request Response Time) against request rate. That’s proved invaluable (as you will see in previous blog entries).

The second instance is what I want to talk about now…

I plotted – for a single Service Class Period – velocity against CPU consumed. As it happens I had 200 data points, ranging from almost no CPU to 7 engines or so. CPU consumed is on the x axis and velocity is on the y axis. One further wrinkle: Each day was plotted as a differently coloured set of points (with different marker styles as well), enabling me to compare one day against another.
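For anyone wanting to experiment with the same kind of chart, here’s a sketch of the data preparation, using the usual velocity formula – using samples divided by using plus delay samples, as a percentage. The interval data below is invented:

```python
def velocity(using, delay):
    """WLM execution velocity: using / (using + delay), as a percentage."""
    total = using + delay
    return 100.0 * using / total if total else 0.0

# (CPU consumed in engines, using samples, delay samples) per RMF interval.
intervals = [
    (0.5, 900, 100),
    (4.0, 700, 300),
    (7.0, 200, 800),
]

# x = CPU consumed, y = velocity: one point per interval for the scatter plot.
points = [(cpu, velocity(u, d)) for cpu, u, d in intervals]
print(points)
```

Colouring the points by day, as described above, is then just a matter of grouping the intervals before plotting.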

I’m not going to share the graph with you – as it really would be abusing customer confidence. But suffice it to say it was interesting…

As you go from no workload all the way up to 2n engines the following happens: The velocity starts out low and rapidly rises to well above the goal velocity, staying there until n engines’ worth of CPU. Then it steadily but slowly declines to well below the goal velocity. At the highest levels of utilisation the velocity achieved is about 20% of the goal. These highest levels of utilisation, though, appear to be "on the curve" – albeit outliers in x value terms.  I think that’s an interesting dynamic that says at some point the achievement drops off and can’t be sustained at or near the goal level.

The second thing that I noticed was that the points get more widely distributed as utilisation increases – most notably around the point where the velocity starts to drop. It’s a most beautiful broadening out. So we get into a position of unstable velocity. Again not a good thing.

Finally, let’s consider the days themselves. It turns out they’re all pretty much alike, with two exceptions: All the "2n engine" outliers are from one day – a problem day. Also, on the part of the curve where the velocity is dropping away the "problem day" data points are spread both above and below the others. Again we’re getting instability of outcome.

I really wish I could share this prototype chart with you – it’s got truly beautiful structure. I’m going to "hand create" such a chart a few more times with different customers’ data and then "shrink wrap" it into my analysis code. If you get to see it I think you’ll like it. It could rapidly grow to be my new favourite rhetorical device. 🙂

Of course the above only works for Velocity-based Service Class Periods but I’m sure I could dream up an obvious analogue for the other goal types. (PI might be the unifying concept but it doesn’t, in my view, pass the "keep it real" test, not that Velocity is that connected to delivered reality anyway.)

And I share it with you in case it’s something you’d like to experiment with.

Going Global

(Originally posted 2010-01-22.)

In the interstices between finishing off a "rush job" piece of analysis for ONE customer and a conference call with a vendor on behalf of ANOTHER it’s time to catch up with a piece of news…

As some of you will know I have a new job in IBM…

I’ve joined Software Group’s Worldwide Banking Center of Excellence (WWCoE for short) as their z/OS System Performance person.

So, it’s a pretty similar job but with a new focus: The World. 🙂 And, more specifically, mainly banks. Mainly, but not ENTIRELY, banks. So, I know some of my readership is from customers I’ve worked with in the past who don’t happen to work in banking. I don’t regard this as "so long and thanks for all the fish" as far as they are concerned.

And I don’t reckon to be doing fewer conferences but (hopefully) more. And already the season is shaping up that way.

For me, I like the travel and I like the chance to work with customers I’ve not reached yet, some of whom have REALLY thorny problems I can help with.

As my Dad said recently on the phone "it’s the job you’ve always wanted". And now I’ve got it, watch out World. 🙂