(Originally posted 2011-03-30.)
In 2007 I posted twice on memory metrics, and I should probably have posted an update some time ago. In the second of those posts I said "Obviously copious free frames would suggest no constraint." That’s true, but I would invite installations to consider something else…
Capturing a dump into virtual memory backed by real memory is much faster than capturing it into paging space (and that, in turn, is much faster than capturing it into constrained paging space). Over the past couple of years I’ve progressively updated my "Memory Matters" presentation to cover Dumping and Paging Subsystem design, to reflect this.
So it’s important to consider what your stance on Dumping is. For some customers Availability will be the overriding consideration and they’ll configure free memory to dump into. For others it’ll have to be a compromise – for machine lifecycle and economic reasons. The point is to decide on a stance on provisioning memory for Dumping – and to do it at the LPAR level.
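As an illustration of the kind of control involved, one knob on the z/OS side is the SDUMP MAXSPACE limit, which bounds the virtual storage SVC dump capture may use. A minimal console sketch, assuming you have decided how much memory this LPAR should provision for dumping (the 2500M value is purely illustrative, not a recommendation):

```
D D,O                          display current dump options, including MAXSPACE
CD SET,SDUMP,MAXSPACE=2500M    raise the SVC dump virtual storage limit
```

CHNGDUMP settings don’t persist across an IPL, so a permanent choice typically goes in a COMMNDxx parmlib member – and the value should be sized against the real memory and paging space you’re actually prepared to back it with.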
Meanwhile, z/OS Development haven’t neglected this area. I’ve documented the z/OS Release 12 enhancements in "Memory Matters", but in short they are:
- Batching page-in I/O operations during SDUMP capture eliminates much of the I/O delay.
- Captured data no longer looks recently referenced, so it will be paged out before other, potentially more important, data.
- Certain components now exploit a more efficient capture method in their SDUMP exits. For example GRS for SDATA GRSQ data, IOS for CTRACE data, and configuration dataspaces.
I’ve had foils on page data set design, Dumping control parameters, etc. for some time.
But the key point is that dump speed is an important factor in memory configuration and monitoring.
And the thing that caused me to write this post – at last – is a discussion today on MXG-L on UIC. So thanks to the participants in that.