(Originally posted 2017-09-06.)
So this is a shorter episode, much to Marna’s pleasure. (Personally I’m indifferent to show length, regularly listening to episodes of other podcasts that run to 1.5 to 2 hours.)
It was very good to have a guest: Barry Lichtenstein. (I kept in the bit where I mispronounced his name, as I thought it a funny mistake. You’ll find another piece of flubbing, again because it was funny.)
Episode 15 “Waits and Measures” Show Notes
Here are the show notes for Episode 15 “Waits and Measures”. The show is called this because our Performance topic is about LPAR weights, and because this episode was after a seasonal hiatus.
Where we’ve been
Martin has been to nowhere in person, but has talked on the phone to several interesting locations.
Marna has just returned from SHARE in Providence, RI and from Melbourne, Australia for conferences.
Our “Mainframe” topic discusses a small new function in z/OS UNIX that customers had requested, and that might not have been highlighted as much as other new functions in z/OS V2.3: automatic unmount of the version “root” file system in a shared file system environment. Our guest was Barry Lichtenstein, the developer of the function, and he told us all about it:
- there is a new BPXPRMxx VERSION statement parameter, UNMOUNT. This means that when no one is using that “version file system” (the new name for the “version root file system”!), it will be automatically unmounted. This is not the default. Syntax here
- this function is nice, as it will allow an unused version file system to be “rolled off” when you don’t need it anymore. Unused here means that no system is using it, or using any file system mounted under it. z/OS UNIX will do this detection automatically, and unmount not just the version file system but also file systems mounted under it that are no longer used by any system, after an unspecified amount of time.
- you can turn this on and off dynamically with a SET OMVS or SETOMVS command, and there is DISPLAY command support for it. Perhaps the best news: the USS_PARMLIB health check will flag when the current settings don’t match the parmlib specification in use. (Marna thinks this check is the gold standard for using dynamic commands without regressing them on the next IPL, via the hardened parmlib member!)
- we weren’t sure if an SMF type 92 record would be cut when the unmount happened, but Barry said nothing unique happens for this function, so today’s behavior most likely applies. Messages are issued in the hardcopy log when the unmounts happen. An SMF type 90 record might be cut for SET changes.
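For a concrete picture, here is a hedged sketch of what the BPXPRMxx specification might look like. The version name `ZOSV2R3` is our own invented example, and you should check the z/OS UNIX System Services Planning and BPXPRMxx documentation for the exact syntax before using it:

```
/* BPXPRMxx fragment -- illustrative sketch only            */
/* UNMOUNT asks z/OS UNIX to automatically unmount this     */
/* version file system when no system is using it; the      */
/* default is not to unmount.                               */
VERSION('ZOSV2R3',UNMOUNT)
```

To pick the change up dynamically, you would activate the updated member with something like `SET OMVS=(xx)` from the console, then let the USS_PARMLIB health check confirm the hardened member matches the live settings.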
Our “Performance” topic was Weights and Online Engines in LPARs, with Martin again drawing on customer information.
- Intelligent Resource Director (IRD) changed how PR/SM worked:
- Dynamic weight adjustment
- Online logical engine management (vary online and offline). Instrumentation shows minimum and maximum weights when IRD changes them.
- HiperDispatch: took away logical engine management (and manages it better!), and kept IRD dynamic weight adjustment. With HiperDispatch’s parking of engines, no work is directed towards a parked engine. An affinity node is a small group of logical engines to which a subset of the LPAR’s work is directed.
- More instrumentation was introduced, such as Parked Time and refined instrumentation on weights (vertical weights, by engine).
- The customer situation was that they did their own version of IRD and HiperDispatch: varying logical engines online and varying weights (not using IRD itself). Martin expected IRD to change weights, but he saw the IRD weight fields were all zero. You must look at the “initial weights”, which really means initial since you last changed them.
- Why not let IRD do it? Martin thinks there was something in the customer’s mind about controlling it themselves, perhaps optimising for something other than WLM goal attainment (which is what IRD would adjust weights for).
- Why not use HiperDispatch? Martin thinks maybe some subtle difference in behaviour was wanted, though LPARs should be designed properly; one possible aim might be to fit within one drawer, for instance. Or maybe it’s a lack of understanding of what HiperDispatch does.
- How did the customer adjust the weights? It was an open question. Probably via BCPii? Feedback would be welcome on this.
- As an aside, with IRD a change in weights would lead to HiperDispatch recalculating how many Vertical High, Medium, and Low logical engines each LPAR has.
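To illustrate that recalculation, here is a much-simplified sketch of how an LPAR’s weight translates into vertical engines. The real PR/SM and HiperDispatch algorithm is considerably more subtle (for instance, in how it spreads the remainder over one or two Vertical Mediums), and the function name and numbers here are our own invention:

```python
import math

def vertical_engines(weight, total_weight, physical_cps):
    """Much-simplified sketch of HiperDispatch vertical polarity.

    An LPAR's share of the shared physical CPs is weight/total_weight.
    Whole engines' worth of share become Vertical Highs; the fractional
    remainder is covered by (up to two) Vertical Mediums; any further
    online logical engines would be Vertical Lows (not modelled here).
    """
    share = weight / total_weight * physical_cps  # engines' worth of weight
    vh = math.floor(share)
    remainder = share - vh
    vm = math.ceil(remainder) if remainder > 0 else 0
    return vh, vm, remainder

# Invented example: an LPAR with weight 250 out of 1000 on a 10-way
# shared pool has 2.5 engines' worth of weight: 2 Vertical Highs plus
# a Vertical Medium covering the remaining half engine.
print(vertical_engines(250, 1000, 10))  # -> (2, 1, 0.5)
```

The point of the aside above is that an IRD-driven weight change shifts `share`, so the Vertical High/Medium/Low split can change without anyone varying engines by hand.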
Lesson learned: assumptions about how something was dynamically changed may not always be correct.
Our podcast “Topics” topic was “Video Killed the Radio Star?” and about screencasting.
Martin has been trying to post screencasts to YouTube. Here’s one.
A screencast is not a video where you see the speaker; it’s just a recording of what is happening on a screen, with a talkover.
The best candidates are graphs and program output. Martin uses these steps to create these screencasts:
1. Make a set of slides or images. Annotations are good to use, to point to a particular feature on the screen. (They don’t have to be animated.)
2. Use a screen recorder to add sound to the slides. (In PowerPoint, record the slides.)
3. Edit, with proper fadeouts. Split the audio out and clean it up with Audacity.
4. Re-unite the audio and the video. Camtasia, while expensive, has some promise.
5. Publish on YouTube.
As well as Barry’s mention of the z/OS V2.3 automatic unmount of version file system requirement (RFE Number 47549, “Automatic disposal of z/OS UNIX version root”), there was another customer requirement we discussed:
The quoted description is:
/dev/random is a special file that serves as a pseudo-random number generator source for applications. On z/OS, this special file is only provided if ICSF is started. If ICSF is not available, we need to resort to some other source of random numbers (which will have to be implemented within applications). Goal here is to make /dev/random available on z/OS, independently of whether ICSF is available or not.
- It’s a major migration action in z/OS V2.3 to have ICSF available for /dev/random.
- ICSF is a dependency for many functions. It’s important to have on every single z/OS system.
- Another aspect: each user had to have authority to use these ICSF services for certain of the functional dependencies (including /dev/random).
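To show why applications care, here is a minimal sketch of reading entropy from the /dev/random special file, the way an application might. This is run on a non-z/OS UNIX system for illustration; on z/OS V2.3 the open would fail if ICSF (and the proper authority) were not in place, which is exactly what the requirement is about:

```python
import os

def read_random(nbytes=16, path="/dev/random"):
    """Read nbytes from the /dev/random special file.

    On z/OS this file is only provided when ICSF is started, so an
    application doing this has an implicit ICSF dependency today.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        data = b""
        while len(data) < nbytes:            # reads can return short
            chunk = os.read(fd, nbytes - len(data))
            if not chunk:
                break
            data += chunk
    finally:
        os.close(fd)
    return data

key = read_random(16)
print(len(key))  # 16 where /dev/random is available
```

If the open raises an error, an application has to fall back on some other entropy source of its own, which is the burden the RFE asks to remove.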
Martin mentioned that random number generators vary in quality and behaviour. He hopes that, if this were done, it would be high quality. One criterion would be a close enough match, distribution-wise, to the ICSF-based algorithm.
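One crude way to sanity-check a generator’s distribution (far short of real test suites such as NIST SP 800-22, and nothing to do with how IBM would actually validate it) is a chi-square test on byte frequencies. The function name and thresholds below are our own choices, and `os.urandom` stands in for a /dev/random-style source:

```python
import os
from collections import Counter

def byte_chi_square(data):
    """Chi-square statistic for the hypothesis that each of the 256
    byte values is equally likely in `data`."""
    expected = len(data) / 256
    counts = Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

sample = os.urandom(65536)   # stand-in for a /dev/random source
stat = byte_chi_square(sample)
# With 255 degrees of freedom the statistic should hover around 255;
# a value far outside roughly 100-400 would be suspicious.
print(100 < stat < 400)      # True for a well-behaved source
```

A distribution-wise comparison between two generators would go further than this, but even a single-source check like the above catches grossly biased output.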
Where We’ll Be
Martin will be in the middle of Italy in August, 2017. He is threatening to drive from Italy to the Munich zTechU conference.
Marna will also be in Munich. She is going to Johannesburg for the IBM Systems Symposium, aka IBM TechU Comes to You, and to the Chicagoland area on Sept 26 and 27, 2017 for some area briefings.
Both Martin and Marna are hoping to do a poster session in Munich, which should be jolly good fun.
On The Blog
Martin has actually not published a blog recently!
Marna actually did publish a blog recently!
Or you can leave a comment below.