(Originally posted 2013-04-08.)
You wouldn’t put all your eggs in one basket, CICSwise, would you? A naive reading of the CICS TS 5.1 announcement materials might lead you to suppose you could. This post is about thinking about your CICS region portfolio in the light of this announcement.
While every CICS release introduces capabilities that make it worthwhile to review your region portfolio, 5.1 majors on scalability. So, in the months (hopefully only months) before you install 5.1 and eventually go live, it would be a good idea to conduct that review.
(Properly I should say “application” rather than “region” – but we Performance Folks are more likely to get involved in discussions about regions than applications. We should still take a more-than-polite interest in applications, though. In any case, this post is indeed rather more about regions than applications.)
So let’s review why installations split applications up into multiple regions. There are essentially three reasons:

- Availability
- Operational and organisational separation
- Performance and scalability
When reviewing your portfolio it’s worth looking at all these categories.
And to me one of the major benefits of 5.1 is that it gives you more choices.
You’re probably thinking I protest too much about not being an architect. I’ve talked about it enough times. 🙂
What I would say is it’s worth understanding the role of each CICS region.
- You can begin by using the SMF 30 Usage information – as I discuss in Another Usage Of Usage Information. In that post I point out you can get topology information – such as which MQ or DB2 subsystem a region uses – just from SMF 30.
- The above trick won’t detect File-Owning Regions (FORs). You could probably spot one from the Disk EXCP counts in SMF 30 or, failing that, in SMF 42-6.
- You could have some fun with region names – as I discuss in He Picks On CICS.
- You could use CICS’ own Performance Trace – and I think CICS Performance Analyzer helps with this – to figure out how transactions flow.
- Or you could actually talk to CICS people. 🙂 Actually that’s not an exclusive or.
From the above you can get to knowing which regions are part of which application, can tell FORs from AORs from DORs from QORs from TORs, and generally have a crack at figuring out how it’s all set up for availability. All before breakfast. 🙂
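Incidentally, the kind of classification code I have in mind could be sketched in a few lines of Python. Everything here is an illustrative assumption: the parameter names, the EXCP threshold and the trailing-letter naming convention are invented, not real SMF layouts or a real site standard.

```python
# Hypothetical sketch: guess a CICS region's role from clues you might
# derive from SMF 30 (subsystem connections, EXCP counts) and from
# region naming conventions. Threshold and naming rule are invented.

def classify_region(name, uses_db2_or_mq, disk_excps):
    """Return a rough guess at a region's role: TOR, FOR, AOR or unknown."""
    if name.endswith("T"):       # invented site convention: trailing T means TOR
        return "TOR"
    if disk_excps > 1_000_000:   # heavy disk I/O suggests a File-Owning Region
        return "FOR"
    if uses_db2_or_mq:           # talks to DB2 or MQ: likely an Application-Owning Region
        return "AOR"
    return "unknown"
```

In practice you’d layer several of these clues on top of each other, and talking to the CICS people remains the best calibration.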
Hmm. I think I’m going to have to write me some more code… 🙂
And, of course, in 5.1 the architectural choices increase again.
Personally I recommend having at least four servers for resilience, though that is sometimes unaffordable.
The reason I recommend four rather than two is quite straightforward: If running out of a resource causes a server to fail, having only two means the other one is likely to fail as well. Having three other survivors makes it much more likely they could handle the load. Virtual Storage is a good example of such a resource.
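The arithmetic behind that is easy to sketch, assuming (idealistically) that the failed server’s load spreads evenly over the survivors:

```python
def surviving_load(n_servers, utilization):
    """Per-survivor utilization after one of n_servers fails, assuming the
    failed server's load redistributes evenly over the remaining n - 1."""
    return utilization * n_servers / (n_servers - 1)

# Two servers each 60% busy: the survivor would need 1.2 engines' worth of
# capacity, so it fails too.
two_way = surviving_load(2, 0.60)

# Four servers each 60% busy: each survivor rises to about 80%,
# strained but survivable.
four_way = surviving_load(4, 0.60)
```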
Of course there’s a cost to provisioning four rather than two – day in, day out. Consider four-way Data Sharing: Thankfully the cost step from non-Data-Sharing to two-way is usually greater than the step from two-way to four-way.
Each installation must make its own decisions on availability versus cost.
Performance and Scalability
There have traditionally been two reasons for limiting the size of a CICS region, performancewise:
- QR TCB Constraint
- Virtual Storage
QR TCB Constraint
I wrote about this in New CPU Information In SMF Type 30 Records, where I posited the new CPU metrics introduced into SMF Type 30 in APAR OA39629 could help establish if the QR TCB is large.
In early client data I consistently see the biggest TCB in CICS regions being “DFHKETCB”, so I think this is the QR TCB. I decode this string as “DFH” for CICS, “KE” for Kernel, and “TCB” is, well, TCB – so this all makes sense to me.
In any case you could work with the SMF 30 TCB time: If it’s a significant portion of an engine you might look at the biggest TCB. Whether or not that is the QR TCB, a large percentage of an engine for the biggest TCB would warrant examination. If it is the QR TCB then you have work to do before such a region could be combined with others.
For example, a CICS region using 90% of an engine at peak would warrant further investigation: If the biggest TCB were DFHKETCB at only 20% of an engine you could combine maybe three such regions without concern for QR TCB constraint.
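Since QR work is single-threaded, the coarse merge test amounts to checking that the combined QR TCB busy fits comfortably inside one engine. This little helper is my own illustrative sketch, not a formal sizing method:

```python
def merge_ok(qr_tcb_fractions, headroom=1.0):
    """Coarse check: could regions with these peak QR TCB utilizations
    (each as a fraction of one engine) be merged without the combined,
    single-threaded QR TCB exceeding `headroom` engines' worth?"""
    return sum(qr_tcb_fractions) < headroom

# Three regions whose QR TCB each peaks at 20% of an engine:
# 0.6 of an engine combined, so the merge passes this coarse test.
```

In real life you’d want rather more headroom than a whole engine, and you’d check whether the regions’ peaks actually coincide.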
If, however, the QR TCB were larger you’d want to consider the appropriateness of Threadsafe before concluding regions couldn’t be merged.
In 5.1 more commands have been made Threadsafe, as has the Transient Data (TD) Facility. This follows all the extensions to Threadsafe applicability over prior releases. (See Threadsafe Considerations for CICS.)
Virtual Storage

Historically CICS has used 24-, 31- and 64-bit virtual storage: Both 24- and 31-bit virtual storage should be viewed as scarce resources, especially 24-bit.
As a coarse upper bound you can use the SMF 30 Allocated virtual storage numbers.
For example, a region with less than 2MB of 24-bit allocated is probably not threatening when combined with a few others. Similarly a region with less than 500MB of 31-bit allocated is probably not an issue if combined with one or two more.
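Assuming per-region allocated numbers pulled from SMF 30, a coarse screen might look like the following. The thresholds are just the illustrative figures above, and the parameter names are mine, not real SMF 30 field names:

```python
MB = 1024 * 1024

def coarse_storage_screen(alloc_24bit, alloc_31bit,
                          limit_24bit=2 * MB, limit_31bit=500 * MB):
    """Flag a region (allocated bytes from SMF 30) as a plausible merge
    candidate only if both its 24- and 31-bit footprints are small."""
    return alloc_24bit < limit_24bit and alloc_31bit < limit_31bit
```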
I emphasise “coarse” because CICS suballocates memory and has its own sophisticated memory management regime. To treat this subject properly you should use the CICS Statistics Trace virtual storage numbers.
In 5.1 a substantial number of areas have been moved to 31-bit virtual storage from 24-bit. Similarly, a substantial number of areas have moved from 31-bit to 64-bit.
Benefits Of Merging Regions
It’s worth pointing out that there are advantages in reducing the number of CICS regions. Two in particular come to mind:
- Reduced operational complexity
- Potentially improved resource usage and performance.
Others can explain the operational benefits much better than I can. As a primarily performance guy I consider questions of resource consumption and effectiveness. Two simple examples are:
CICS doesn’t load a program each time a transaction that uses it runs: It keeps it in virtual storage. Two regions potentially mean two copies – which would require twice the real memory. One region obviously needs only one.
In the case of VSAM (LSR) buffer pools two regions require two pools for every one that a single region would have. Again, to get the same buffer pool effectiveness is highly likely to require twice the amount of real memory to back the pools as in the single region case.
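On the back of an envelope, and assuming identical regions whose program storage and LSR pools would be fully duplicated (an idealisation, and the numbers below are invented), the real memory a merge could save is roughly:

```python
def real_memory_saving_mb(program_storage_mb, lsr_pool_mb, n_regions):
    """Rough real-memory saving (MB) from merging n identical regions into
    one, assuming programs and LSR pools are duplicated per region."""
    per_region = program_storage_mb + lsr_pool_mb
    return per_region * n_regions - per_region

# Merging two regions, each with 100MB of programs and 200MB of pools,
# could save roughly 300MB of real memory.
```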
In the examples in this post I gave some numbers. Please don’t use them as rules of thumb without applying further thought. They are just reasonable examples: Derive your own.
Further, this whole discussion has been necessarily simplistic. But I think asking some basic questions is a very good start. Hopefully I’ve given you a way to look at whether CICS TS 5.1 (and indeed 4.2 or any other release, but less so) provides an opportunity to rework your portfolio of CICS regions and applications.
To recap, if anything, 5.1 gives you choices. (Actually it gives you lots of other things but the focus of this post has been narrow: How many eggs in how few baskets?)
Talking of those other things 5.1 brings, CICS Transaction Server for z/OS Version 5 Release 1: What’s New is well worth a read.
I’m wondering whether it would be useful to work this post up into a presentation on the topic – probably with considerable help from people who major on CICS. What do you think?
Also, I considered inserting some graphics but thought the ones I came up with to be gratuitous and unhelpful. So I didn’t. So there. 🙂