WPS and MXG

(Originally posted 2007-12-12.)

Thanks to Oliver Robinson for pointing this out to me: In MXG Newsletters Numbers 50 and 51 Barry Merrill discusses MXG / WPS support in what I think is a detailed and fair way. (Oliver, for the record, works for WPC who develop WPS.)

Barry and I discussed WPS and MXG over a year ago. I think WPS has come a long way in this time and is now a very credible alternative to SAS for many applications, having a very high degree of programming language compatibility. As Barry notes, it’s available for both z/OS and Windows.

Perhaps I’ll even get round to learning SAS one day. I have the blue book already, of course.

Mashinations

(Originally posted 2007-12-10.)

Nowadays – actually thenadays 🙂 but more so nowadays – the value of a web page is related to how well structured it is. Well structured from a mashup programmer’s point of view…

As I often say, you have to assume that people will want to take your web pages and mash them up in ways you never thought likely. If your web page is hard to navigate, extract material from, and so on, then people will use other pages and sites as the basis of their mashups. And you will lose traffic / business / kudos or whatever other metric of success you choose to use. Unless it’s obscurity you seek. 🙂

Here are two simple rules I’ve become sensitised to as a Firefox extension author…

Use id Attributes On “Structural” Elements

Motivation: To reliably navigate to a particular portion of a page, JavaScript programmers use the getElementById() method. It takes a string parameter and returns the element whose id attribute matches.

Not having this attribute gets you into “I want the third entryfield on the page. No wait, it’s moved to be the fourth” territory. Nasty.

But should you apply this to every element on the page? Not necessarily. But you should think about the structure of the page from the mashup perspective. So a major element like the edit field on your page should probably have an id attribute. (This is a real-world example for me as my Firefox extension parses and injects stuff in a suitable entryfield – for a wide variety of pages. All the “not plain text” in this post was injected programmatically by my extension.)
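To make the contrast concrete, here’s a minimal sketch – the element ids and the page model are purely hypothetical, with a plain array standing in for a real DOM:

```javascript
// Sketch: why lookup by id beats positional lookup.
// In a browser you'd call document.getElementById("postText");
// here a plain array of objects models the page's elements.
const elements = [
  { tag: "input", id: "searchBox" },
  { tag: "input", id: null },
  { tag: "textarea", id: "postText" },
];

// Positional lookup: breaks as soon as an element is added or moved.
const byPosition = elements[2];

// Lookup by id: survives reordering of the page.
const byId = elements.find(e => e.id === "postText");

console.log(byId.tag); // "textarea"
```

The moment someone inserts another entryfield ahead of yours, `byPosition` points at the wrong thing; `byId` carries on working.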

Use name Attributes on Forms And Form Elements

Motivation: In JavaScript the document element has an array property "forms". So you can refer to an individual form as e.g. myDoc.forms["postForm"] or even simply myDoc.postForm, rather than having to hunt through the forms array for a somehow-matching form. But only if you give the form a name attribute. What’s more, you can refer to the elements in the form directly (e.g. myDoc.postForm.textField) but again only if the element in question has a name attribute.

(In fact the value of using the name attribute extends (in part) to images (<img> tags) and Java applets (<applet> tags), but the value is rather less: navigating to an image and, especially, to an applet is relatively rare.)
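Here’s a minimal sketch of the named-access convenience – the form and field names are invented, and a plain object stands in for the browser’s document:

```javascript
// Sketch: named access to forms and their elements.
// In a browser, <form name="postForm"> makes document.forms["postForm"]
// (and document.postForm) work; a plain object mimics that shape here.
const myDoc = {
  forms: {
    postForm: {
      textField: { value: "hello, mashup" },
    },
  },
};

// Direct, name-based navigation - no hunting through an indexed array.
const field = myDoc.forms["postForm"].textField;
console.log(field.value); // "hello, mashup"
```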

One minor Firefox disappointment: I use the built-in DOM Inspector a lot and it neither shows a form’s name property nor enumerates the forms collection by form name. I wonder why not, when there are perfectly good JavaScript ways of doing it.

These are simple examples of how to make life easy (or difficult). I’m sure there are many more. The bottom line, though, is to make web pages easy to mash up with other sources of data.

A Note On JavaScript And Other Languages

I keep mentioning JavaScript, don’t I? That’s simply because it’s the language in this space I’m most familiar with. (It’s typically seen in Firefox extensions and in AJAX, particularly in frameworks such as the Dojo Toolkit, which I’ve just installed.) An increasing number of web servers are going to be doing their own aggregation or mashing up. I suspect they’ll rely on other languages such as PHP.

But it doesn’t matter… The guidelines above apply in just the same way to these server-side mashup languages. (I’ve just installed PHP on Apache and intend to play around with such mashing up at some point.)
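As a deliberately naive sketch of server-side extraction – the markup and the id are invented, and a real mashup would use a proper HTML parser rather than string matching – the same id-based discipline pays off on the server too:

```javascript
// Sketch: extracting an element's content by id on the server side.
// Naive string matching, for illustration only; a real mashup would
// use a proper HTML parser. The markup and id are invented.
const html = '<div id="price">42</div><div>other stuff</div>';

function extractById(page, id) {
  const marker = `id="${id}">`;
  const start = page.indexOf(marker);
  if (start === -1) return null; // no such id on the page
  const from = start + marker.length;
  return page.slice(from, page.indexOf("<", from));
}

console.log(extractById(html, "price")); // "42"
```

If the page author hadn’t given that element an id, the extraction code would be back in “third div on the page, no wait, fourth” territory.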

Where My Searches Come From

(Originally posted 2007-12-06.)

This is really just a test of some Firefox Extension code of mine… but it’s kind of interesting in its own right…

The following is a table containing the country-level summary of Search URLs that somehow get a reader to my blog on any given day. (I have another table ranking search terms.)

Most days the USA and India are at the top of the list. And there’s usually a smattering of other countries represented, mainly in Europe.

What’s interesting is that readers from India seem to be mainly interested in JCL, utilities and DFSORT usage. Whereas those from the USA, Europe, China and Japan appear to be mainly interested in Performance- and Architecture-related topics.

I can’t prove this to you but the picture outlined above stays pretty much static.

And then there’re the searches explicitly on names of women. 🙂

Country   Searches   %
USA       8          33
IN        6          25
Unknown   4          16
FR        2          8
DE        1          4
IT        1          4
LT        1          4
UK        1          4

A Great Song For A Great Cause

(Originally posted 2007-11-30.)

This is a great song for a great cause…

I saw Queen + Paul Rodgers play this in 2005 in Hyde Park. Actually Paul did nothing – Roger Taylor sang it. 🙂

But now they’ve worked on it as part of their current studio sessions and have rushed it out – for free.

So download, play it, donate and tell your friends….

Get it from here and read all about it.

Now I’ve Downloaded And Listened To It

I downloaded it and listened to it a few times. As with all Queen-related things it grew on me after a few listenings. I would say it’s slower than the original 2005 version, which I wish it wasn’t. But on the other hand there’s some excellent guitar work from Brian May. No surprise there. 🙂

All three of them – Paul Rodgers, Brian May and Roger Taylor – shared the vocals. There’s some dispute in our household because I prefer Brian’s and especially Roger’s voices to Paul’s. The rest of the family don’t think much of their voices.

Roll on the album. 🙂

(And here’s the song at its original speed.)

Memories of DFSORT OUTFIL

(Originally posted 2007-11-25.)

In September 1997 DFSORT Release 13 was shipped (to coincide with the release of OS/390 Release 4). It took a nice idea from Syncsort and extended it.

In case you didn’t know OUTFIL allows you to read an input data set (and perhaps sort it) and write to multiple output files from the resulting records – perhaps selecting subsets of the records and reformatting them (and differently to each output file). All in a single pass over the input data.

Perhaps people really still don’t know about OUTFIL as, while I get many searches that hit my blog for DFSORT topics, OUTFIL is rarely one of the search terms.

There are three features that were then unique to DFSORT:

  • SPLIT

    This is a “card dealer” function. The most obvious (to me) use was in BatchPipes/MVS “Pipe Balancing”. This is where multiple readers absorb the output of a single writer. (Or the other way around.) In this case DFSORT would write to multiple pipes.
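    For instance (a hedged sketch – the DD names here are invented), something like

    OUTFIL FNAMES=(PIPE1,PIPE2),SPLIT

    deals the records out in rotation: the first record to PIPE1, the second to PIPE2, the third back to PIPE1, and so on.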

  • FNAMES

    This allows you to use any output DD name you like. Consider the following…

    OUTFIL FILES=(01,02),...
    

    and

    OUTFIL FILES=(GERMANY,FRANCE),...
    

    The latter is much preferable to the former. (The former generates DD names of “SORTOF01” and “SORTOF02” while the latter uses DD names of “GERMANY” and “FRANCE”.)

  • SAVE

    (This one is, if I remember correctly, my one contribution to the release. I like to make “helpful suggestions” 🙂 to the Development team and Frank (Yaeger) kindly puts some of my ideas into the product.)

    Consider the case where you’ve written multiple subsets of the input records to different output destinations. Suppose you then want to write all the records that haven’t been selected for any of those destinations to one final output. It can be very complicated (and error-prone) to figure out the right INCLUDE/OMIT parameter string to make this happen. SAVE does that for you automatically:

    OUTFIL FNAMES=SPECIALS,INCLUDE=(SMFID,EQ,C'PRD1',AND,...)
    OUTFIL FNAMES=THEREST,SAVE
    

    So I really like this one. 🙂

So those are the bare-bones additions to OUTFIL for Release 13. (DFSORT has since added a lot more function to OUTFIL and I would have to assume Syncsort had done the same.)

I had the chance to run a residency that Summer in Poughkeepsie – and it was a lot of fun. A small team of highly skilled people I’m pleased to call friends and I took a version of our batch-driven SMF analysis code and made a mini batch window out of it. (We do have our own mini scheduler and a batch window topology to play with.) So we got to play with DFSORT Release 13 and the (also new) DFSMS System-Managed Buffering and WLM-Managed JES2 initiators and BatchPipes/MVS and had a fine old time of it – running and measuring and tweaking and doing it all again. We got to do a few presentations based on the results of our playing but never got to turn the foils into a Redbook. (I think we had too much fun “playing sysprog” and the like.) 🙂

One feature that proved really useful was DFSORT’s ability to detect when BSAM was needed instead of EXCP. The most common cases are BatchPipes/MVS pipes, Extended Format Sequential (Striped and/or Compressed) and SYSOUT data sets.

Note: I didn’t say “QSAM”. So that ruled Hiperbatch out. To use Hiperbatch you have to write your own E15, E32 or E35 exit to close the data set and to reopen it for QSAM. (There was a sample piece of Assembler code to do this in the HBAID manual.)

From my (largely Performance) perspective, though, OUTFIL is all about avoiding repeated reads of the input data set. Our batch window did a fair amount of that because it read SMF data repeatedly. So OUTFIL fitted nicely into our window. (And if we pretended the output data was VB and not VBS we could get away with piping it as well.)

One thing to be clear about – which I soon realised – was that OUTFIL does not replace multiple sorts with a single one, unless the sort keys etc. are identical. But you could feed the same records from a DFSORT OUTFIL job through multiple pipes into the appropriate number of DFSORT SORT jobs. n sorts become n+1 jobs. More balls to keep in the air, of course. And if those sorts are big enough they’ll compete for memory, which is where another Release 13 feature came in handy: Dynamic Hipersorting. This changed DFSORT Hipersorting from asking MVS (via the STGTEST SYSEVENT) how much storage it could have for sort work once at the beginning of the sort, to asking the question several times over as the data was read in. Because of this change Dynamic Hipersorting was much less likely to cause overcommitment of memory in the multiple-concurrent-sorts case.

So, to me, DFSORT OUTFIL is yet another of those techniques that you have to “engineer in”. Unless you plan the implementation with the usual diligence nothing will happen. We had a lot of fun finding cases where it would be ideal in customer workloads – when conducting PMIO Batch Window studies. And, like so many other techniques, it’s as valid today as it was 10 years ago.

One final thing: I’ve just remembered that I had another idea that got put into DFSORT Release 13…

The PMIO team worked with DFSORT Development to enhance the SMF 16 record (one per DFSORT invocation) to add a few minor fields – in Release 12. (I even forget what they were.) In Release 13 the record was enhanced to contain input and output data set sections – one per data set. These were very detailed – including the number of records read or written, the access method (including Pipes) and so on. Very nice. To get this additional data you need to run with SMF=FULL. (I’d recommend this anyway.)

Memories of Pipes

(Originally posted 2007-11-25.)

Somehow I seem to have ended up writing a “Memories of…” series of blog posts. That wasn’t the intention, but a set of threads on the IBM-MAIN listserver got me thinking about these nice venerable technologies – VIO, Hiperbatch, Batch LSR and Pipes.

By couching these posts in terms of “memories of” it sounds like they’re perhaps obsolete. With the possible exception of Hiperbatch that probably isn’t true. (And the only thing really wrong with Hiperbatch is its non-support of Extended Format VSAM and Sequential data sets.)

So, back to the topic – BatchPipes/MVS, usually shortened to Pipes…

The concept of record-level interlock really wasn’t new, even at the time… Unix had had pipes for at least 15 years before that, probably 20. In 1990, however, there was a good reason to introduce it to MVS/ESA (as it then was called)… A big customer wanted it. Pipes was born as a result of an exec-level challenge from a specific customer in the USA.

The idea is quite simple… Pipe individual records from a writer job to a reader job – with minimal changes to the application. (In this case “minimal changes” meant changing the DD card to point to your Pipes subsystem.) But I’ve already written about this.
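As a sketch of just how minimal that change is (the data set and subsystem names are invented, and I’m assuming a BatchPipes subsystem called BP01), a conventional DD statement like

    //OUTDD    DD DSN=PROD.DAILY.EXTRACT,DISP=(NEW,CATLG),...

becomes, to write to a pipe instead,

    //OUTDD    DD DSN=PROD.DAILY.EXTRACT,SUBSYS=BP01

The application itself doesn’t change at all; the reader job points a DD at the same pipe name and the same subsystem.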

So, when I was in Poughkeepsie for the second burst of writing the “Storage Configuration Guidelines” Redbooks in November 1990 my new-found friend Ted Blank (who was to become a very good family friend a little later on) told me about Pipes. It was more or less the same conversation that led to the “Parallel Sysplex Batch Performance” Redbook (SG24-2557) because it ranged widely onto more general aspects of batch performance.

Pipes was readied for market through until 1993 or so. At that time a “new entrepreneurial spirit” was abroad in IBM. Rather than make Pipes an MVS/ESA component it was decided to market it as a separate product. (Personally I think this was a mistake – in terms of acceptance and ultimately customers’ batch windows.) So the idea was to offer a service to analyse customer data and then lead them into implementing this chargeable product. The data analysis, though, was more or less restricted to finding “one writer, one reader” patterns in the SMF data. True “engineering in” (which is what is called for) happened after this initial screening – if it happened at all.

In parallel (if you’ll pardon the pun) we were starting to build the PMIO Batch Window offering. Our whole premise was to engineer whatever it took into workloads, acknowledging the value of rescheduling to run in parallel, breaking up jobs, and “pacing”. All things you’d need to really get value out of Pipes. So we, in the PMIO team, were fellow travellers seeking to make Pipes successful and build a good Batch Window Tuning practice. (In the middle of this my manager was offered the opportunity to act as an agent for Pipes in Europe. He declined that offer. He’s no longer in IBM, either. Life could’ve been different.) 🙂

I had a great time in the 1990’s “chalking and talking” on Pipes. It taught me I could take a complex topic like Pipes and keep it in my head and chalk and talk it. Who needs foils? 🙂 And sorry if you were a victim of my extemporisation on Pipes in that era. 🙂

What was also nice was when DFSORT automated the detection of when EXCP wasn’t appropriate for sequential I/O…

Not only Pipes but also Extended Format sequential. This means striped and compressed data sets.

In the Summer of 1997 DFSORT Release 13 came out with this support. (Perhaps I should do another piece called “Memories of DFSORT”.)

At the same time – Summer of 1997 – the Pipes team teamed up with BMC, incorporating the latter’s Data Accelerator and Batch Accelerator into SmartBatch. Also CMS Pipelines became available (in the main) as a set of “fittings” called BatchPipeWorks. (You specify them on the file DD as a parameter string to the subsystem. Perhaps I should blog on this as well.) And also BatchPipePlex – which routes pipes through the coupling facility. Neither BatchPipes/MVS nor SmartBatch sold well. But I still think the Pipes approach remains valid and valuable.

Today we have BatchPipes/MVS Version 2 Release 1 – which comprises the original Pipes function, BatchPipeWorks and BatchPipePlex. (BTW folks, that’s the only way to use “comprise” in a sentence.) 🙂 I also see cases where products consider prereq’ing Pipes – or at least offer support as an option.

And I believe I can STILL extemporise on Pipes. So if you need me to (and preferably if you’re on my patch) please get in touch.

So it sounds like I’ve given myself two more topics to blog on:

  • Memories of DFSORT R.13
  • BatchPipeWorks

The latter would require me to fire up Pipes on some system or other again. I’m looking forward to playing. 🙂

Not Boris Johnson but a Chat Show In Secondlife?

(Originally posted 2007-11-22.)

Thanks to Kevin Aires for pointing this out to me…

“Boris in Wonderland” is a chat show in Secondlife – hosted by one Boris Frampton of IBM. Go here for the first episode.

Andy Stanford-Clark (Ginger Mandelbrot in Secondlife) is interviewed about his applications and creations.

The Monty Python quote “look out there are Llamas” is relevant here – but you’ll just have to roll the video to find out why. 🙂

One thing you might notice is the hand gestures when the host and his guest are talking. These “speech gestures” are standard now that Secondlife has Voice.

Not much mainframe relevance, I guess, but one day we’ll all have to make this stuff work – and perform. 🙂 And hopefully lots of people will be wanting to connect virtual worlds to CICS and DB2 and WebSphere and MQ and…