(Originally posted 2007-11-11.)
It’s nice to see a flurry of activity in IBM-MAIN about Hiperbatch. And it’s more for the emotional reason of reminiscence than for any stunning insights that I’m blogging about it…
Back in 1988 I ran a technical project in my then customer, Lloyds Bank, to evaluate Data In Memory (DIM). It was a fun project with a wide range of workloads on multiple machines, including the then-new DB2. So we tried out all the analysis tools and even did a bit of Roll Your Own…
Through the IBM internal FORUMs I met another IBM Systems Engineer, John O’Connell, doing a similar thing for his customer, Pratt and Whitney in Connecticut. And he wrote some SAS code to evaluate VIO to Expanded Storage. We shared this code with Lloyds Bank. (I ran into him at several conferences. And later I discovered he’d left IBM and, I think, gone to work for a customer. Are you out there, John?)
And I wrote a presentation on VIO to Expanded Storage (called VIOTOES – which sounds funny if you pronounce it right). :-) There were two key elements in this presentation:
- How to evaluate the opportunities for VIO, and what happened if you fiddled with the VIOMAXSIZE tuning knob – the maximum temporary data set size (primary plus 15 secondary extents) that would end up as VIO (and hence potentially in Expanded Storage). There's a sketch of that kind of sizing exercise just after this list.
- How to use the then new-fangled DFSMS ACS routines to control which data sets were even considered for VIO – and all sorts of other funky tricks with DFSMS.
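To make the first bullet concrete, here's a minimal sketch of the sort of sizing exercise involved – not the original SAS code, and the record layout, field names, sample data and thresholds are all invented for illustration. The idea is simply to see how the population of VIO-eligible temporary data sets changes as you move the VIOMAXSIZE knob.

```python
# Illustrative only: a rough, modern re-imagining of the kind of analysis the
# VIOTOES work did. Assumes you've already extracted temporary data set
# allocation records (e.g. from SMF) into simple (dsname, size) pairs; the
# layout and the thresholds below are made up for the example.

from dataclasses import dataclass

@dataclass
class TempDataSet:
    dsname: str
    size_kb: int        # primary plus secondary extents, in KB

def vio_candidates(datasets, viomaxsize_kb):
    """Return the temporary data sets that would fit under VIOMAXSIZE
    and so could end up as VIO (and hence potentially in Expanded Storage)."""
    return [ds for ds in datasets if ds.size_kb <= viomaxsize_kb]

def summarise(datasets, thresholds_kb):
    """Show how the VIO-eligible population changes as the knob is fiddled."""
    for limit in thresholds_kb:
        hits = vio_candidates(datasets, limit)
        total_mb = sum(ds.size_kb for ds in hits) / 1024
        print(f"VIOMAXSIZE {limit:>8} KB: {len(hits):>3} data sets, "
              f"{total_mb:,.1f} MB of temporary data")

if __name__ == "__main__":
    sample = [
        TempDataSet("SYS07123.T123456.RA000.JOB1.R0100001", 2_048),
        TempDataSet("SYS07123.T123457.RA000.JOB2.R0100002", 51_200),
        TempDataSet("SYS07123.T123458.RA000.JOB3.R0100003", 500_000),
    ]
    summarise(sample, thresholds_kb=[4_096, 65_536, 1_048_576])
```

Run against the three made-up data sets, only the small one qualifies at the lowest threshold and all three at the highest – which is exactly the kind of picture you'd want before deciding whether raising VIOMAXSIZE was worth the Expanded Storage it would consume.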
This presentation went down well with IBMers and customers alike and could be considered my first conference presentation.
The point of the above is to set the scene for Hiperbatch…
So, we announced Hiperbatch as part of MVS/ESA 3.1.3. (Funny how we went 3.1.0, 3.1.0e but not 3.1.1 or 3.1.2.) And it had a hardware prerequisite of a 3090 S processor (because of the MVPG instruction – even though technically one COULD move pages between Central and Expanded Storage using ordinary instructions if we'd chosen to implement it that way). The important thing is that we wanted an exclusive by tying this super duper new facility to a brand new processor.
And because of this my fourth line manager at the time decided we all had to run Hiperbatch Aid (HBAID) studies. At this point I learnt I was not a “team player”. Well duh. :-) I declared I wasn't going to do it because we'd already crawled all over Lloyds Bank looking for genuine DIM benefit. And there wasn't likely to be any from Hiperbatch. With that defence the requirement to run HBAID was waived.
You'd think from that I had a downer on Hiperbatch and HBAID, wouldn't you? :-) Far from it, actually…
I did enjoy running HBAID in one or two other customers and I did get quite creative with Hiperbatch. And that was the trick, in my opinion – getting creative. And that realisation led on to other things…
One of the really nice things about HBAID was that it Gantt'ed out data set lives. And from that you could glimpse where other techniques might be useful (such as VIO and OUTFIL). So I invented[1] a technique called LOADS, which – for those of you averse to “flyovers” :-) – stood for “Life Of A Data Set”. These signatures were really rather handy. Here's a slightly later example…
A “standard” BatchPipes/MVS pipe candidate would be a sequential writer job followed by a sequential reader job for the same data set. There are, of course, lots of scenarios where Pipes is useful, each with their own signatures (LOADS).
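As an aside, here's a minimal, modern-flavoured sketch of what a LOADS-style scan for that writer-then-reader signature might look like – purely illustrative, in Python rather than anything that existed at the time, with the record shape and sample data invented for the example.

```python
# Illustrative only: a toy LOADS-style scan. Assumes you've already reduced
# your job/data-set activity (from SMF or similar) to simple "life" records;
# the field names and the sample data are invented for the example.

from dataclasses import dataclass

@dataclass
class DataSetUse:
    dsname: str
    job: str
    intent: str      # "WRITE" or "READ"
    start: int       # e.g. minutes from the start of the batch window
    end: int

def pipe_candidates(uses):
    """Find the classic BatchPipes signature: a sequential writer job whose
    data set is then read by a later reader job."""
    by_dsname = {}
    for u in uses:
        by_dsname.setdefault(u.dsname, []).append(u)

    candidates = []
    for dsname, lives in by_dsname.items():
        lives.sort(key=lambda u: u.start)
        for writer, reader in zip(lives, lives[1:]):
            if writer.intent == "WRITE" and reader.intent == "READ":
                # Piping the pair lets the reader start with the writer,
                # collapsing the two elapsed times towards the longer one.
                candidates.append((dsname, writer.job, reader.job))
    return candidates

if __name__ == "__main__":
    sample = [
        DataSetUse("PROD.DAILY.EXTRACT", "EXTRJOB", "WRITE", start=0,  end=40),
        DataSetUse("PROD.DAILY.EXTRACT", "SORTJOB", "READ",  start=45, end=70),
        DataSetUse("PROD.DAILY.REPORT",  "RPTJOB",  "WRITE", start=10, end=30),
    ]
    for dsname, wjob, rjob in pipe_candidates(sample):
        print(f"{dsname}: {wjob} -> {rjob} looks like a pipe candidate")
```

The interesting part is the Gantt-like view hiding in the start and end times: once a pair like that is piped, its contribution to the window shrinks towards the longer of the two elapsed times.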
And it was my good friend Ted Blank who told me about Pipes in late 1990. (And he had been involved in HBAID.)
Ted also encouraged me to start writing a book on Batch Performance. This later became SG24-2557 “Parallel Sysplex Batch Performance” and I believe you can still find it online. (Only this week I referred someone to it as a starter manual for what he wanted to do – but I DO regard it as being somewhat dated.) :-(
And the writing of the book got me into writing batch tools – which generalised what HBAID did and then some. And that's how I came to be one of the developers of PMIO, the Batch Window analysis toolset / consulting offering. You may have heard of PMIO.
I'm aware of very few installations running Hiperbatch and even fewer running Pipes. :-( But at least I got something out of it. And we did evolve the “state of the art” as far as batch window was concerned.
As for Hiperbatch's applicability, I think it's worth a look. But there are so many other techniques around that more or less cover the same ground. Some, like Pipes, cost money, and others take a lot of creativity to apply. So I don't think it was ever going to take off in a big way. But that's OK, given Hiperbatch was built to solve a problem for one important customer.
[1] Actually I don't claim to have invented it, just popularised and generalised it. After all, HBAID itself was doing much the same thing. In fact someone once suggested I should patent LOADS, but I declined on the “not exactly original work” basis.