Dragging REXX Into The 21st Century?

(Originally posted 2013-06-07.)

I like REXX but sometimes it leaves a little to be desired. This post is about a technique for dealing with some of the issues. I present it in the hope some of you will find it worth building on, or using directly.

Note: I’m talking about Classic REXX and not Open Object REXX.

Operations like map, filter and reduce – and the related list comprehensions – are widespread in modern programming languages because they concisely express otherwise verbose concepts, such as looping.

Here’s an example from JavaScript:

var numbers = [1, 4, 9];
var roots = numbers.map(Math.sqrt);
/* roots is now [1, 2, 3], numbers is still [1, 4, 9] */

It’s taken from a good description of JavaScript’s support for arrays.

Essentially it applies the square root function (Math.sqrt) to each element of the array numbers, using the map method. Even though it processes every element there’s no loop in sight. This, to me, is quite elegant and very maintainable. It gets rid of a lot of looping cruft that adds no value.

My Challenge

I have a lot of REXX code – essential for fetching data from the performance databases I build and turning it into graphs and tabular reports. Much of this code iterates over stem variables (similar to arrays, for the non-REXX reader) or character strings consisting of tokens separated by spaces (blanks).

An example of a blank-delimited token string is:

address_spaces="CICSIP01 CICSIP02 CICSPA CICXYZ DB1ADBM1 MQ1AMSTR MQ1ACHIN"

It would be really nice when processing such a string – perhaps to pick up all the tokens beginning “CICS” – to be able to do it simply. Perhaps an incantation like:

cics_regions=filter("find","CICS",address_spaces)

In this example the filter routine applies the find routine to each token in the string, with a parameter "CICS" (the search argument).

And not a loop in sight.
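By way of contrast, here’s roughly the explicit loop such a call would replace – the kind of cruft that adds no value. This is a sketch of mine, using pos to test the start of each token:

/* Conventional approach: an explicit loop over the tokens */
cics_regions=""
do i=1 to words(address_spaces)
  item=word(address_spaces,i)
  if pos("CICS",item)=1 then do
    cics_regions=space(cics_regions item)
  end
end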

My Experiment

I implemented versions of map, filter and reduce. I’ll talk about how, but first here’s what they do:

  • map – applies a routine to each element.

  • filter – creates a subset of the string, with each element being kept or discarded based on the routine’s return value (1 to keep, 0 to throw the item away).

  • reduce – produces a result by starting from an initial value and applying a routine to each element.

Here’s a simple version of filter:

filter: procedure
/* filter(f[,p1[,p2[,p3]]],list)                                     */
/* Apply routine f to each blank-delimited token of list, passing    */
/* the optional parameters p1 to p3 ahead of the token, and keep the */
/* tokens for which f returns 1. Parameters and tokens are quoted in */
/* the interpreted string so they are passed literally; they must    */
/* not themselves contain single quotes.                             */
parse arg f,p1,p2,p3,list
if list="" then do
  parse arg f,p1,p2,list
  if list="" then do
    parse arg f,p1,list
    if list="" then do
      parse arg f,list
      funstem="keepit="f"("
    end
    else do
      funstem="keepit="f"('"p1"',"
    end
  end
  else do
    funstem="keepit="f"('"p1"','"p2"',"
  end
end
else do
  funstem="keepit="f"('"p1"','"p2"','"p3"',"
end
outlist=""
do while list<>""
  parse var list item list       /* peel off the next token             */
  interpret funstem"'"item"')"   /* e.g. keepit=find('CICS','CICSIP01') */
  if keepit=1 then do            /* 1 means keep the token              */
    if outlist="" then do
      outlist=item
    end
    else do
      outlist=outlist item
    end
  end
end
return outlist

Variable “list” is the input space-separated list. “outlist” is the output list that filter builds – in the same space-separated list format.

Much of this is in fact parameter handling: the optional parameters p1, p2 and p3 have to be tested for. But the “heavy lifting” comes in three parts (an example filter routine follows the list):

  • Breaking the string into tokens (or items, if you prefer).

  • Using interpret to invoke the filter function (named in variable f) against each token.

  • Checking the value of the keepit variable on return from the filter function:

    If it’s 1 then keep the item. If not then remove it from the list.
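To make this concrete, here’s what a filter routine such as the find used earlier might look like. The original doesn’t show it, so this is a sketch of mine, keeping any token that begins with the given prefix:

/* Hypothetical find routine: return 1 if item begins with prefix, */
/* 0 otherwise. The signature matches filter's f(p1,item) calling  */
/* convention.                                                     */
find: procedure
parse arg prefix,item
return pos(prefix,item)=1

With that in place, filter("find","CICS",address_spaces) returns “CICSIP01 CICSIP02 CICSPA” – CICXYZ is dropped because it doesn’t begin with “CICS”.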

I also wrote a filter called “grepFilter” (amongst others). Recall the example above where I wanted to find the string “CICS” at the beginning of a token. That could’ve been done with a filter that checked for pos("CICS",item)=1. That’s obviously a very simple case. grepFilter, as the name suggests, uses grep against each token. It worked nicely (though I suggest it fails my long-standing “minimise the transitions between REXX and Unix through BPXWUNIX” test).

And then I got playing with examples, including “pipelining” – from, say, map to filter to reduce – such as:

say reduce("sum",0,filter("gt",8,map("timesit",2,"1 2 3 4 5 6")))

Issues

There are a number of issues with this approach:

  • You’ll notice the function name (first parameter in the filter example above) is in fact a character string.

    It’s not a function reference as other languages would see it: REXX doesn’t have a first-class function data type. Suppose you didn’t have a procedure of that name in your code: you’d get some weird error messages at run time. And while you can pass character strings around all you want, the semantics are different from those of passing around function references.

  • The vital piece of REXX that makes this technique possible is the interpret instruction.

    It’s very powerful but comes at a bit of a cost: when the REXX interpreter starts it tokenises the exec, for performance reasons, but it can’t pre-tokenise the string passed to interpret. So performance could suffer. For my use cases most of the time (elapsed and CPU) is spent in commands (scripted by REXX) rather than in the REXX code itself. (I also think mapping a function over a list suffers less from being run through interpret than the average REXX instruction would.)

  • The requirement to write, for example

    say reduce("sum",0,filter("gt",8,map("timesit",2,"1 2 3 4 5 6")))

    rather than

    say "1 2 3 4 5 6".map("timesit",2).filter("gt",8).reduce("sum",0)

    is inelegant. Fixing this would require subverting a major portion of what REXX is. And that’s not what I’m trying to do.

  • The need to apply a function to each item – particularly in the filter case – can be overkill.

    In my Production code I can write

    filter("item>8","1 2 4 8 16 32")

    as I check the first parameter for characters such as “>” and “=”. So no filtering function is required. (A sketch of this variant follows the list.)

  • REXX doesn’t have anonymous functions and I can’t think of a way to simulate them. Can you? The Wikipedia entry on anonymous functions shows how expressive they can be.
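Here’s a minimal sketch of how that expression-style filter might work – the name exprFilter and the details are my invention, not the Production code:

/* Minimal sketch: treat the first parameter as an expression in  */
/* the variable item, rather than as a routine name.              */
exprFilter: procedure
parse arg expr,list
outlist=""
do while list<>""
  parse var list item list
  interpret "keepit="expr      /* e.g. keepit=item>8 */
  if keepit=1 then outlist=space(outlist item)
end
return outlist

So say exprFilter("item>8","1 2 4 8 16 32") displays “16 32”. A dispatcher could check the first parameter for “>” or “=” and choose between this and the routine-name form.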

These are worth thinking about but not – I would submit – show stoppers. They just require care in using these techniques and sensible expectations.

Conclusions

It’s perfectly possible to do some modern things in REXX – if you work at it. This post is the result of experimentation – experimentation I’m going to use directly in some of my programs. (In fact I’ve taken the prototype code and extended it for Production; I’ve kept it “simple” here.)

I’d note that “CMS Pipelines” would do some of this – but not all. And in any case most people don’t have CMS Pipelines – whether on VM or ported to TSO. (TSO is my case, but mostly in batch.)

I don’t believe “Classic” REXX to be under active development so asking for new features is probably a waste of time. Hence my tack of simulating them, and living with the limitations of the simulation: It still makes for clearer, more maintainable code.

Care to try to simulate other modern language features? Lambda or Currying would be pretty similar.

Of course if I had kept my blinkers on then I wouldn’t know about all these programming concepts and wouldn’t be trying to apply them to REXX. But where’s the fun in that?
