(Originally posted 2011-12-16.)
Four score and seven years ago (or so it seems) 🙂 the Washington Systems Center published a set of mainframe Data-In-Memory studies. These were conducted by performance teams in various IBM labs and were quite instructive and inspiring. I wish I could find the form number (and a fortiori a PDF version) for this book. Anyone? Even hardcopy would be really nice.
I mention this because of a thread overnight in the CICS-L newsgroup about the CPU impact of increasing the size of VSAM LSR buffers in CICS. I seem to recall CICS/VSAM was one of the benchmarks written up in this orange book. The original poster wanted to know what the CPU impact of increasing VSAM buffers would be. I think the study showed there could be some CPU saving with bigger buffer pools. (Compare this with VIO in (then) expanded storage, which showed a net CPU increase for the technique.)
There are a number of points I would have raised in CICS-L but I’ll write them here instead – as most of you probably don’t read CICS-L:
- I would not build a Data-In-Memory (DIM) case on CPU savings (though I would want to satisfy myself there wasn’t a significant net cost). I would build it on throughput enablement and response time decreases. This is true of any DIM technique.
- The thread in CICS-L correctly identifies the need to be able to provision real memory to back the increase in virtual.
- VSAM LSR buffers are allocated from within the virtual memory of the CICS address space. For most customers this isn’t an issue, as the buffers are usually within 31-bit memory. (There is no 64-bit VSAM buffering.) But it’s still worth keeping an eye on CICS virtual storage (whether 31- or 24-bit) – perhaps using what’s in the CICS SMF 110 statistics records.
- Back in the late 1980s there was a tool – VLBPAA – that would analyse the User F61 GTF Trace to establish the benefit of bigger buffer pools, at least in raw I/O reduction terms. The trace is still available and you could process it with DFSORT, but it would be harder to predict buffering outcomes without VLBPAA (a toy illustration of the idea follows this list). In fact I mention this in Memories of Batch LSR.
- One of the comments talked about hit ratios but I prefer to think of miss rates – or, better still, misses per transaction.
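To make “misses per transaction” concrete, here’s a minimal sketch – in Python, over an entirely made-up reference string – of the sort of replay a tool like VLBPAA would have done far more faithfully from real trace data: push a stream of CI references through an LRU pool of a given size and report the miss metrics. None of this is VLBPAA’s actual algorithm or the F61 record layout; the function, the sample data and the candidate pool sizes are invented purely for illustration.

```python
# Toy LRU buffer-pool replay -- NOT VLBPAA, and not a real GTF trace parser.
# 'refs' stands in for a hypothetical list of (transaction_id, ci_number)
# pairs you would have to extract from your own trace data.
from collections import OrderedDict

def simulate_pool(refs, buffers):
    """Replay CI references through an LRU pool of 'buffers' slots."""
    pool = OrderedDict()              # ci_number -> None, kept in LRU order
    misses = 0
    transactions = set()
    for tran_id, ci in refs:
        transactions.add(tran_id)
        if ci in pool:
            pool.move_to_end(ci)      # hit: refresh LRU position
        else:
            misses += 1               # miss: would have been a physical read
            pool[ci] = None
            if len(pool) > buffers:
                pool.popitem(last=False)  # evict least-recently-used CI
    total = len(refs)
    return {
        "buffers": buffers,
        "hit_ratio": 1 - misses / total,
        "miss_rate": misses / total,
        "misses_per_tran": misses / max(len(transactions), 1),
    }

# Entirely made-up reference string, just to show the shape of the output.
sample_refs = [(t, ci) for t in range(100) for ci in (1, 2, 3, t % 40, t % 7)]

for size in (8, 16, 32, 64):          # candidate pool sizes to compare
    print(simulate_pool(sample_refs, size))
```

The point of running it at several pool sizes is to see where misses per transaction stops improving – that knee is where extra buffers stop paying for themselves.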
In general I find CICS VSAM LSR buffering insufficiently aggressive: as memory is generally plentiful these days (at least relative to the CICS VSAM LSR pool sizes I encounter) I think it’s appropriate for installations to consider big increases (subject to the provisos above). Think in terms of doubling rather than adding 20%. And no, 10MB total buffering is not aggressive. 🙂