Down In The Dumps?

(Originally posted 2013-09-14.)

That’s such a horrible pun I must’ve used it before. If so, sorry (but not very). 🙂

This post follows on from Enigma And Variations Of A Memory Kind in a way. In that post I mentioned DUMPSRV, in almost a throwaway fashion: I happened to notice the memory usage in SMF 30 by DUMPSRV grew at just the point free memory took a dip.

This post takes that idea and extends it a little – and I think it might be something you want in your everyday reporting.


I ran a query against data from all the customer’s LPARs in one pass – using SMF 30 data for DUMPSRV: I pulled out the hours when DUMPSRV’s CPU was more than 0.1% of a processor, printing the memory used, blocks transferred (think “I/O traffic”) and CPU. This highlighted that quite a lot of dumping happened across the LPARs, sometimes simultaneously on several systems. It made me think that “dump containment” is quite a big issue for this customer.

There are some issues with this approach:

  • The granularity is 1 hour as I summarised to that level. With an SMF interval of 30 minutes it’s a little better but it’s still hard to correlate the surge in DUMPSRV in Type 30 with the time it actually occurs.

  • I can’t tell who was dumped, just that a big dump capture took place at that point.

One thing that is worth working into the reporting is what happened to free memory at that point. If it was driven into the ground that’s a sign you need to take dumping seriously.


As with all such things it’s a matter of priority as to whether I write a “RDUMPSRV” REXX EXEC to detect this sort of thing. It wouldn’t take long.

More to the point I worked up this post from the one liner in the other one because I think it’s a technique worth thinking about: If you’re a Performance person it might not be obvious but you really do want to know about dumping prevalence, and substantial dump occurrences in particular. And if dumping does happen you’ll certainly want to be prepared to handle it in ways I’ve mentioned before – such as adequate memory, good paging subsystem design or, notably, zFlash.

What I’m Looking Forward To In z/OS 2.1

(Originally posted 2013-09-11.)

As I mentioned in We Have Residents! we'll be working with a z/OS 2.1 system in October. In fact I've already logged on to it. I might even get to play with it before the residency starts, depending on current workload – but the priority is to hit the ground running by ensuring the environment is set up to our liking and that we have test data. (And then there are those day-to-day customers…) 🙂

Here are the things that've caught my eye that I'm particularly keen to try out. Of course there's a lot in z/OS 2.1, so don't treat this as the definitive list: It's just the things that leapt out at me as likely to change my daily life – as a programmer and regular user. And, yes, there are other things I like about 2.1 which aren't in this category.

  • Regular expressions in FIND and REPLACE in the ISPF PDF Editor.

    At first I'll experiment interactively with this but I can see myself building it into REXX EXECs.

  • Processing of VBS data in REXX.

    This might seem obscure but it really means SMF to me. (In the official materials I haven't seen SMF mentioned but SMF is VBS data so I'm hopeful.) Assuming my experiments are successful I'll write much more on this. This is the one most likely to be used in my code.

  • Symbol processing enhancements in JCL. At this stage I don't quite know what we'll get out of this but it's the enhancement that's most likely to find its way into our Redbook: I hazard it'll be useful.

One of the nice things about spending 4 weeks in Poughkeepsie is the chance to discuss things with developers: They might ask us to try things out. And we might tell them what we think. 🙂

Of course it's not my production system. And I don't know how long I'll have access to the 2.1 system for. Still, I'll use it while I can and report any highlights (and, hopefully non-existent, lowlights).

And you probably will have your own favourite enhancements.

Enigma And Variations Of A Memory Kind

(Originally posted 2013-09-07.)

This post, unlike the previous one, is “on topic” for a Mainframe Performance Topics blog. I hope nobody’s relying on that. In fact I joke about renaming this blog to “Mainframe, Performance, Topics” 🙂 the next time I ask for the header to be updated. In fact I just might.


I recently got some data from a customer that I thought showed a bug in my code. Instead it illustrated an important point about averages.

We all know averages have problems – taken on their own – so this post isn’t really about that. It’d be a duller post if it were.

It’s about how free memory (or, if you prefer, memory utilisation) varies

  • By time of day
  • By workload mix
  • From the average to the minimum

You’ll notice that last point didn’t mention the maximum. This is consistent with being more interested in the free memory than the utilisation, in a number of contexts. Let me explain:

As technology has moved on it’s become more feasible to configure systems (really LPARs) so that there is some memory free, or at any rate so that paging is (practically) zero. I’m concerned with how well installations meet that aspiration.

So the maximum free doesn’t interest me. But the minimum does. (And so does the average.)


Consider the following graph, from the customer I mentioned.

Before this week I plotted the blue line only (and that for each day in the data). This is the average of RMF’s average memory free number – by hour. But I had code that printed in a table the minimum free across the whole set of data (from a different RMF field in the same SMF 71 record).

While the blue line suggests just under 7GB reliably free, the minimum of the minima showed about 100MB free. This is where I thought my code was buggy – as these two numbers appear to contradict each other. I’d forgotten the 100MB number came from RMF’s minimum field and wasn’t just the lowest of the averages.

The red line is, as the legend says, the minimum free number from RMF plotted across the day. And the mystery (or enigma, to half explain this post’s title) is resolved: The 100MB number is the low point of the red line.

I’ll be throwing this graph into production shortly, maybe with a tweak or two (depending on how well it “performs”).


But the interesting thing – and the point of this post – is how free memory varies. (You’ll’ve guessed that the “Variations” in the title comes from this.)

If you look at the graph you’ll notice that, mainly, the big variability is overnight. Though there is a notable divergence between the two lines at about 11AM, they’re much closer together during the day.

If you’d asked me what I expected to see I’d say this is about right (but I wouldn’t’ve been certain, not having looked at the data this way before).

Overnight, fairly obviously, the customer runs Batch. And I can prove they do from SMF 30, of course. (I actually spent a happy year working on their Batch a while back.)

I would expect a Batch workload to show a higher degree of variation in memory usage, compared to Online (or whatever it’s called nowadays). For at least two reasons:

  • Long-running work managers, such as CICS, MQ, WebSphere Application Server and DB2, acquire most of their memory when they start up and variation in usage is slight thereafter. For example, buffer pools are likely to be acquired at start-up and populated shortly thereafter. (I’ve seen this in almost every set of data I’ve looked at over the past mumbleteen 🙂 years.)
  • Batch comes and goes. (Job steps start, acquire storage, and release it when they end. Furthermore they’re generally highly variable in their memory footprint, one to another.)

Another thing I’m not that surprised about is Batch driving free memory to zero. Assuming large sorts are happening this is often to be expected: Products like DFSORT manage their usage of memory according to system memory conditions. Often they use what’s free if there is sort work data to fill the memory. This can be managed by the installation, if it is so desired. And there often won’t be enough sort work data to fill memory. (It’s not a direct objective to fill memory, but if there’s good use for the memory why not exploit it?)


Interestingly, on another system, on another day, the minimum free memory went to near zero (and the average followed it down, albeit in not quite such an extreme way). This time the cause was a surge in usage by DUMPSRV (from SMF 30). Clearly this was a time when a large dump was captured (or maybe several moderate-sized ones).

I often talk about configuring systems – particularly their memory and paging subsystems – to make Dump Capture as non-disruptive as possible. (Several blog posts over the years have talked about this.) This will be a reminder to talk to the customer about dump containment.


Now, the above may be obvious to you (and the value of comparing minima to averages certainly will be) but I hope there are some things to think about in it – notably how Batch behaves, catering for Dump Capture, and the fact RMF gives you the minimum (and maximum) and average memory free numbers.

But I also appreciate most of you don’t look at nearly as many installations as I do: If what you see in yours matches the above that’s fine. If it doesn’t it’s worth figuring out why not (but not necessarily thinking it’s a problem).

Translation: That’s a really nice graph you might like to make for yourself. 🙂

Tagging Up Stuff

(Originally posted 2013-09-06.)

This post is in response to Kelly's post "Hashtags – love 'em or hate 'em? ".

My true response is "well, neither really." 🙂

A slightly more considered response would be to note that the utility of hashtags has decreased markedly over time:

  • In the beginning there was e.g. Twitter without any kind of searchability.
  • Then it got better.
  • The End. 🙂

Seriously, if you wanted to find material on a subject, outside of something like early Twitter that had no search, you'd use a standard web search engine. You might've noticed that I no longer put tags on my blog posts: Web search results are good enough that I don't bother.

I also think the bother of curating tags is too much – and most tag-bearing sites don't make it easy. Two examples of this are this very blog and Evernote. By curation I mean things like merging two tags, most notably because of spelling or capitalisation issues.

And that's just dealing with my own tags. If we're talking about a tag you expect people to agree on it's even worse.

Having said that, I do plan on using the Twitter hashtag #batchres13 for the residency I'm running in October (and encouraging others to use it as well). That's more a "try to generate a little interest in what we're doing" thing than anything. (Though, used consistently, it might help us keep track of what's said about the residency on Twitter. We'll see.)

Kelly says something interesting:

"I don't attempt to measure my social business with hashtags, simply because I don't think they are ever static or consistently applied by the world at large."

I'd agree with that (and part of it goes hand in hand with what I said about curation). I'd also add that I have no pressing need to measure my social media effectiveness, but rather just to "do what I do". (I've said this before.) Others will have different imperatives and might take a different view.

A phenomenon I'm seeing more of is the "joke" use of tags, particularly on Twitter. Now that's something I can buy into, used lightly.

So, no, I'm not wild about tags either.

And, in case you wonder about the title of this post, the "cultural" reference 🙂 is to this. (This might be slightly NSFW.) I'm guessing I don't get many readers who aren't old enough to be allowed to play this. 🙂

Coupling Facility Duplexing Reporting Warm-Over

(Originally posted 2013-08-02.)

In my experience Coupling Facility Duplexing configuration and performance is something that tends to get neglected – once the initial configuration decisions have been made. After all it’s rare that customers rework their Duplexing design.

Over the past few weeks I’ve been comprehensively reworking my Coupling Facility tabular reporting, as I recently mentioned in Coupling Facility Topology Information – A Continuing Journey .

This post is about the Duplexing part of that. If you agree it’s time to review your Duplexing reporting read on…

In the previously-mentioned post I talked about signalling rates and overall times at the CF level – for Duplexing. I have those now. While those are interesting they are rather macro level and don’t really talk about outcomes that directly affect applications. (Actually nothing does but I think you’ll agree specific middleware-related structures are more interesting than overall CF numbers when it comes to tuning e.g. Data Sharing applications.)

So let’s talk about structures…

User- Versus System-Managed Duplexing

First, there is only one exploiter of User-Managed Duplexing: DB2 Group Buffer Pools.

Second, the two types are very different from the instrumentation (and other) perspectives: Attempting to treat them the same is a bad idea.

Detecting Primary And Secondary Structures

Formally you need RMF SMF from the Sysplex Data Gatherer z/OS system. In several of my sets of data I don’t have that. So I have to improvise.

But first the formal bit: For the Sysplex Data Gatherer one Request Data Section is written for each structure. Bits in field R744QFLG in this section denote whether the structure is the old instance (primary) or the new instance (secondary) or neither. “Old” and “new” might seem strange names but duplexing is built on top of structure rebuilding, so the terms are not so strange.

If you don’t have data from the Sysplex Data Gatherer you can sometimes still get the answer:

For User-Managed structures (DB2 Group Buffer Pools) the traffic to the primary is higher than to the secondary. But if there’s no traffic you’re stuck. So my code performs the traffic test and reports accordingly.

Actually when I say “higher” I really mean “much higher”.
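
If I were to sketch that traffic test in REXX it might look like the following – where the variable names and the factor of 10 are my inventions (for illustration only), not RMF’s:

```
/* Hedged sketch of the primary/secondary traffic test for a     */
/* User-Managed (DB2 GBP) structure pair. reqA and reqB are      */
/* invented names for the two instances' request counts; the     */
/* factor of 10 is an assumed threshold for "much higher".       */
gbpRole: procedure
parse arg reqA,reqB
if reqA=0 & reqB=0 then return "UNKNOWN - no traffic"
if reqA>=10*reqB then return "A primary, B secondary"
if reqB>=10*reqA then return "B primary, A secondary"
return "UNCLEAR - traffic too similar"
```

The “UNCLEAR” case shouldn’t arise in practice for User-Managed structures with real traffic, but it’s worth reporting rather than guessing.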

There is no such traffic test for System-Managed. In Performance terms the primary and secondary are generally identical: The traffic is much the same. So it doesn’t really matter which is which.

Similarly, for the zero-traffic case Performance isn’t a hot topic anyway. So again detecting which is primary and which secondary isn’t important.

Duplexing States And Timings

As I mentioned, User- and System-Managed Duplexing are somewhat different.

  • With User-Managed, the “user” (DB2) coordinates writing to primary and secondary structures. So none of what I’m about to tell you applies to the User-Managed case.

  • With System-Managed, XES and the two CFs coordinate: And operations in both CFs generally have to complete together.

The coordination for System-Managed manifests itself in the data in a series of fields in the Request Data Section for each version of the structure (whether the system is the Sysplex Data Gatherer or not). These fields generally have a count of events, the total time for the events and the sum of the squares of the times for the events – enough to calculate averages and standard deviations.

So these events (and they’re reported in RMF’s Coupling Facility Activity Report) take System-Managed Duplexing down to the structure level. (The numbers are, unsurprisingly, zero for User-Managed.)

One thing they allow you to do is see one aspect of the Service Time cost of Duplexing. I say “one aspect” because, although there are timings in these fields, Duplexing introduces other effects.

For example a non-duplexed LOCK1 lock structure might have service times in the region of 3 to 20 microseconds, depending on link technology. (Here one would expect all the requests to be performed synchronously and these timings reflect that.)

But use System-Managed Duplexing with it and often most of the requests are performed asynchronously and with service times in the tens of microseconds (or hundreds over, say, extended distances).

But at least the structure-level counters and timings can help point out problems.

But there is a role here for the CF-level duplexing statistics: The path-level signal latency times for Duplexing links (if you have them) can also point to why Duplexing performance is what it is. RMF converts them to estimates of distance, at 1 kilometre for each 10 microseconds of latency – so, for example, a 50-microsecond latency would be reported as about 5km – which is a clue that a lot of this has to do with distance.


One final word of caution: None of the RMF statistics related to Duplexing – of either flavour – say anything about application impacts from Duplexing. Where there is any evidence at all is from something like DB2 Accounting Trace where maybe the Asynchronous Database I/O Write Wait time is extended. But this is scant information at best.

Realistically the best you can do is give the performance-critical structures the best performance you can.

So, as you can tell, I’ve been busy warming over my CF Duplexing code. A few studies more and I’ll probably do it again. 🙂

The Missing Link?

(Originally posted 2013-07-30.)

Recently I wrote up some initial results of using OA37826 data in Coupling Facility Topology Information – A Continuing Journey .

That post in turn followed on from System zEC12 CFLEVEL 18 RMF Instrumentation Improvements .

Since then an interesting thing happened – and sooner than I thought it would: I got some data with a broken piece of Coupling Facility (CF) link infrastructure. Lest you think I’m insensitive about bad things happening to customer installations I’m going to say very little about the actual incident.


A colleague sent me a few hours of data from a customer I’d visited before. This customer has a mixture of generations of processors, including zEC12. I noticed the “Path Is Degraded” flag was set for a pair of CHPIDs between the zEC12 and one of the CFs, but that others between the pair were OK. (I also checked the accompanying flag that denotes whether the “Path Is Degraded” flag is itself valid.)

The first version of the code that detected this only listed the CHPIDs with the flag set. So I passed the CHPID list to the account team.

But I wasn’t satisfied with that: It occurred to me that actual configuration data would be better.

So I extracted the PCHID, Adapter ID and Port ID from the same section of the record (Path Data Section). Although the PCHIDs were, unsurprisingly, different, the Adapter ID and Port ID were common to the two CHPIDs that were said to be degraded.

Not being that heavily into how you plug in CF links I’m not sure what that all means – but I’m going to have to learn.

In any case I’ve enhanced my code to give this additional information for degraded links. I’m also thinking I should add this information for non-degraded ones – as it might expose points of vulnerability. But I doubt I’ll get to do it until I get another zEC12 set of data in.


Well that’s the way it was when I first drafted this post. In the 36 hours since I’ve had a fit of “that really won’t do at all” 🙂 and a 3 hour train journey with wifi to press on with coding.

And I’m glad I did:

I can now see that the other two paths between this z/OS system and this coupling facility are on a different adapter. So it seems Installation Planning has been done well and the adapter isn’t a single point of failure. That’s something I’d want to check for in future sets of customer data.


Because I have only a couple of hours’ data I couldn’t detect the onset of the problem: If you’re a customer running daily reports you might want to create one to check for path degradation. I also didn’t get to see how performance was affected by the degradation: More data would’ve helped with that, too.


So the point of this post is to reinforce the view that you should – if it’s available to you – consider tracking the “Path Is Degraded” flag provided by OA37826 which externalises new support in CFLEVEL 18 (on zEC12 and the new zBC12 machines). Otherwise you’re relying on diagnostics and indicators that most of you wouldn’t ordinarily go near. Once again it’s very nice to have it in SMF.

We Have Residents!

(Originally posted 2013-07-18.)

Back in May I wrote about a new batch residency planned for this October and invited good people to apply to join the team. It’s been very pleasing how many people applied to be residents, and the quality of the applicants was high: It was genuinely difficult to pick the eventual team. We also had to reduce the scope a bit – which was a disappointment to both Frank Kyne and me. So, if you didn’t get selected I’d encourage you to apply for other residencies as they come up.

So here’s the final team:

  • Karen Wilkins – DB2 and COBOL.
  • Dean Harrison – Tivoli Workload Scheduler, JCL and Operational Considerations in general.
  • Myself – orchestrating and doing the Performance piece, and probably playing with some XML or JSON.

You’ll notice we lost PL/I and VSAM. I’m hoping to sneak some VSAM in – and definitely some Sequential Data Set processing. You might disagree but the loss of PL/I is less severe to me than not dealing with VSAM issues.

Though the residency starts 7 October the next phase is putting some materials together for us to work from and finalising our (large) sample data and the environment we’ll run on. I’m also hoping to have access to our (z/OS 2.1) system so I can try out the idea that you can process SMF data with REXX, and maybe a few other things. You’ll appreciate, though, that the “day job” intervenes quite often. 🙂

We hope to make this quite a “social” residency – so some blog posts and tweets will come out. Stay tuned! And you might want to follow us on Twitter: Karen, Dean and myself.

We also are pencilled in to present some of the material at GSE Annual Conference just a few days after the residency finishes in early November – which will be a challenging deadline. With luck I’ll get to present a more polished version at the UKCMG One Day Conference in late November. But I’m not on the agenda yet.

Stay tuned!

Refactoring REXX – Temporarily Inlined Functions

(Originally posted 2013-07-16.)

You could consider this another in the Small Programming Enhancement (SPE) 🙂 series. You’ll probably also notice I’ve been doing quite a lot of REXX programming recently. Anyway, here’s a tip for refactoring code I like.

Suppose you have a line of code:

norm_squared=(i-j)*(i-j)

that you want to turn into a function.

No biggie:

norm2: procedure 
parse arg x,y 

return (x-y)*(x-y)

and call it with:

norm_squared=norm2(i,j)

But what about the process of developing that function? Most naturally you might scroll to the bottom of the member you’re editing and add the function there. But that’s a pain as your iterative development would require you to scroll down to where it’s defined at the bottom and up to where it’s called. Many times over.

Try the following, though:

/* REXX */ 

do i=1 to 10 
  do j=1 to 10 
    say i j norm2(i,j)

/* do never */
if 0 then do 

norm2: procedure 
parse arg x,y 

return (x-y)*(x-y) 

/* end do never */
end

  say "After procedure definition"
  end 
end 
exit 

It works just fine: The “if 0” condition is never met, so it’s effectively a “do never” – and you have to have it, or else sequential execution would fall into the function definition and the REXX interpreter would fail. (You can’t use a SIGNAL instruction for this purpose if the definition is inside a loop, as SIGNAL terminates any active loops and throws away their context. An early attempt did use SIGNAL and it failed as soon as I tried it inside a loop: You learn and move on.)

What this enables you to do is to develop the function “inline” and then you can move it later – to another candidate invocation or indeed to the end of the member (or even to a separate member).

It saves a lot of scrolling about and encourages refactoring into separate routines. It’s not the same as an anonymous function but it’s heading in that direction, in terms of utility.

In that vein you might choose not to move the function away from “in line” – particularly if its use is “one time”. If you don’t you could consider an unmnemonic function name, such as “F12345” – which I wouldn’t normally recommend. But sometimes it’s really hard to come up with a meaningful function name. 🙂

Sorting In REXX Made Easy

(Originally posted 2013-07-15.)

In REXX sorting is a fraught exercise – and you tend to have to resort to other programs to do it:

  • DFSORT for the “heavy lifting”
  • UNIX Sort

but you might decide your sort is low enough volume to do in REXX. The trouble is it’s difficult to write a routine that isn’t specific about how the data is sorted – or rather how two items are compared – as we shall see.

Following on from Dragging REXX Into The 21st Century? here’s a technique that is quite general. It uses the REXX INTERPRET instruction.


The problem of sorting comprises two challenges:

  1. Comparing two items.
  2. Deciding which two items to compare when, and acting on the results of the comparison. Much has been written about this, perhaps most notably in Donald Knuth’s Sorting and Searching (The Art of Computer Programming, Volume 3).

This post is mostly about Challenge 1, though the code presented also addresses Challenge 2.

Comparing Two Items

It’s easy to do a straight numeric comparison in REXX, such as

if a<b then do
  ...
end

but more complex comparisons are, ahem, more complex. 🙂 But, more to the point, a generalised comparison is difficult to do, which is where INTERPRET comes in.

Sample Sort Code

The program that follows works. That isn’t to say the sorting algorithm is the most efficient over all sets of data, but this post isn’t really about the sort’s efficiency.

/* REXX */ 

/* Set up stem variable to sort */ 
sortin.0=5 
sortin.1="AXXX XS" 
sortin.2="HXXXX BSS" 
sortin.3="FXXX IS" 
sortin.4="XSSSSSSSS ZS" 
sortin.5="A JSS" 

/* Sort the stem variable */ 
call sortStem "comp1" 

/* Print the results */ 
do s=1 to sortin.0 
  say sortin.s 
end 

exit 

/* Sort comparison routine based solely on length of second word */
comp1: procedure 
parse arg x,y 

/* Calculate lengths of second word of x and y */ 
lx=length(word(x,2)) 
ly=length(word(y,2)) 

select 
when lx=ly then return 0 
when lx>ly then return 1 
otherwise return -1 
end 

sortStem: 
parse arg comparison_routine 
do i=sortin.0 to 1 by -1 
  do j=1 to i-1 
    /* Use the named comparison routine to decide whether to swap. */ 
    /* (Note: this quoting would break if a value contained a quote.) */ 
    interpret "compRes="comparison_routine"('"sortin.i"','"sortin.j"')"
    if compRes<0 then do 
      /* Swap values: position i ends up holding the largest of 1..i */ 
      xchg=sortin.j 
      sortin.j=sortin.i 
      sortin.i=xchg 
    end 
  end 
end 
return

The item comparison routine in this example is, of course, somewhat artificial – and not one I’m likely to use. It compares the length of the second token in each stem variable. I chose it as an example of something that’s not easy to do in, say, DFSORT. The real point is you can compare items using arbitrarily complex criteria. If your collection of stem variables was more complex than this – essentially a list of objects – you’d want the sophistication this approach allows.
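
To illustrate that generality, here’s a hypothetical second comparison routine – this one sorting on the first word of each item, ignoring case – which plugs into the same sortStem routine unchanged, via call sortStem "comp2":

```
/* Hypothetical alternative comparison routine: sort on the first */
/* word of each item, ignoring case. ">>" and "==" are REXX's     */
/* strict (character-by-character, no padding) comparisons.       */
comp2: procedure 
parse arg x,y 

ux=translate(word(x,1)) 
uy=translate(word(y,1)) 

select 
when ux==uy then return 0 
when ux>>uy then return 1 
otherwise return -1 
end
```

The sort itself needn’t change at all; only the routine name handed to sortStem does.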


Incidentally, when it comes to INTERPRET performance I think it depends on which statements are actually interpreted and which have already been tokenised:

  • If the “heavy lifting” is in the code passed to INTERPRET, that will be expensive.
  • If you pass INTERPRET just a call to a routine, it isn’t – so long as the routine itself is “static”, meaning it appears in your source code where it can be tokenised.

In the above example code you’ll notice I’m using a hard-coded set of stem variable names. In my actual production code I’ve generalised to allow any stem variable names – so I’m actually much heavier in my use of INTERPRET.

I’m leaning towards INTERPRET in more cases – provided it doesn’t obscure the meaning.

At the start of this post I mentioned DFSORT and Unix sort as alternatives. I think you should consider them, though their limitations and the need to work hard at “marshalling” to use them will limit their appeal. So, I hope you’ll find the technique presented in this post useful for the cases where they’re not appealing.

The most important piece of this post – and the main reason for writing it – is the notion of using a single sorting routine and handing it the name of an item comparison routine. This should lead to code that’s more maintainable. It’s long been a staple of other programming languages. You can do it with REXX, too!

Min And Max Of Tokens In A String

(Originally posted 2013-07-14.)

A couple of days ago I had a need to take a REXX string comprising space-separated numbers and find their minimum and maximum values. Here’s the technique I used.

(When I say “space-separated” there can be one or more spaces between the numbers, but there has to be at least one.)

The solution has three components:

  1. The REXX SPACE function – to turn the list into a comma-separated string of numbers. (The second parameter is the number of pad characters to separate tokens with – here allowed to default to 1. The third is the actual pad character to use – in my case a comma.)
  2. The REXX MIN (or MAX) function to compute the minimum (or maximum) value from this comma-separated string. These functions take a set of parameters of arbitrary length and do the maths on them. Parameters are separated by commas, hence the need to use SPACE to make it so.
  3. INTERPRET to glue 1 and 2 together.

My need is relatively low volume, so the “health warning” about INTERPRET’s performance is hardly relevant for my use case.

Here’s the code:

/* Return min and max value of string of space-separated numbers */
minAndMax: procedure 
parse arg list 
comma_list=space(list,,",") 
interpret "minimum=min("comma_list")" 
interpret "maximum=max("comma_list")" 
return minimum maximum 
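
A hypothetical call might look like this – the routine returns the two values as a single blank-separated string, so PARSE VALUE splits them back out:

```
/* Hypothetical usage of minAndMax */
parse value minAndMax("3 17  5 9") with lo hi 
say "Minimum:" lo    /* 3  */ 
say "Maximum:" hi    /* 17 */
```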

It’s relatively straightforward, taking a list of numbers and returning the minimum and maximum. You’ll notice it doesn’t check that the tokens really are numbers. If I were to extend it I’d probably check for two SLR conditions: Overflow (“*” or similar) and Missing Value (“—” or similar). I’d probably take some of the “List Comprehension” stuff I talked about in Dragging REXX Into The 21st Century? and apply it to the list.
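
A sketch of that validation might look like the following – with the specific marker characters being assumptions on my part:

```
/* Hypothetical validation pass before calling minAndMax:       */
/* flag overflow ("*") or missing-value markers, and anything   */
/* else that isn't a number, using REXX's DATATYPE function.    */
do k=1 to words(list) 
  t=word(list,k) 
  if \datatype(t,"N") then 
    say "Token" k "is not a number:" t 
end
```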

And my code uses this to decide whether I have a range of values or just a single one. In the former case it turns the pair of numbers into e.g. “1-5”; in the latter just e.g. “4”.
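
That decision might be sketched like this, reusing the minAndMax routine above (formatRange is an invented name):

```
/* Hypothetical range formatter: "1-5" for a genuine range, */
/* just "4" when minimum and maximum coincide.              */
formatRange: procedure 
parse arg list 
parse value minAndMax(list) with lo hi 
if lo=hi then return lo 
return lo"-"hi
```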

Of course there are other ways to do minimum and maximum for a list of numbers but this one seems the simplest and most elegant to me. “6 months later me” might take a different view. 🙂