So You Don’t Think You’re An Architect?

Every year I try to write one new presentation. Long ago, it feels like, I started on my “new for 2020” presentation. It’s the culmination-so-far 🙂 of my “architecture thing”.

“What Architecture thing?” some of you might be asking.

It’s quite a simple idea, really: It’s the notion that SMF records can be used for far more than just Performance, even the ones (such as RMF) that were notionally designed for Performance. A few years ago I wrote a presentation called “How To Be A Better Performance Specialist” where I pushed the germ of this notion in two directions:

  • Repurposing SMF for non-Performance uses.
  • Thinking more widely about how to visually depict things.

The first of these is what I expanded into this “Architecture” idea. (The second actually helps quite a bit.) But I needed some clear examples to back up this “who says?” notion.

My day job – advising customers on Performance matters – yields a lot of examples. While the plural of “anecdote” isn’t “data”, the accumulation of examples might be experience. And boy do I have a lot of that now. So I set to writing.

The presentation is called “So You Don’t Think You’re An Architect?” A good friend of mine – who I finally got to meet when I did a customer engagement with him – thought the title a little negative. But it’s supposed to be a provocative statement. Even if the conclusion is “… and you might be right”. So I’ve persisted with it (and haven’t lost my friend over it). 🙂

I start at the top – machines and LPARs – and work my way down to the limits of what SMF 30 can do. I stop there, not really getting much into the middleware instrumentation for two reasons:

  • I’ve done it to death in “Even More Fun With DDF”.
  • This presentation is already quite long and intensive.

On the second point, I could go for 2 hours, easily, but I doubt any forum would let me do a double session on this topic. Maybe this is the book I have in me – as supposedly everybody does. (Funnily enough I thought that was “SG24–2557 Parallel Sysplex Batch Performance”. Oh well, maybe I have two.) 🙂

One hour has to be enough to get the point across and to show some actual (reproducible) examples. “Reproducible” is important as it is not (just) about putting on a show; I want people to be able to do this stuff and to get real value out of it.

One criticism I’ve faced is that I’m using proprietary tools. That’s for the most part true. Though sd2html, An Open Source WLM Service Definition Formatter is a good counter-example. I intend to do more open sourcing, time permitting. And SMF 30 would be a good target.

So, I’ve been on a long journey with this Architecture thing. And some of you have been on bits of the journey with me, for which I’m grateful. I think the notion we can glean architectural insight from SMF has merit. The journey continues, as I’ve recently been exploring further.

I’ll continue to explore – hence my “culmination-so-far” quip. I really don’t think this idea is anything like exhausted. And – in the spirit of “I’ll keep revising it” – I’ve decided to put the presentation on GitHub. (But not the raw materials – yet.) You can find it here.

You might argue that I risk losing speaking engagements if I share my presentation. I have to say this hasn’t happened to me in the past, so I doubt it makes much difference now. And this presentation has already had one outing. I expect there will be more. And anyway the point is to get the material out. Having said that, I’m open to webcasting this presentation, in lieu of being able to travel.

IMS Address Space Taxonomy

(I’m grateful to Dougie Lawson for correcting a few errors in the original version of this.)

I don’t often write about IMS and there’s a good reason for it: Only a small proportion of the customers I deal with use it. I regard IMS as being one of those products where the customers that have it are fanatical – in a good way. 🙂

So when I do get data from such a customer I consider it a golden opportunity to enhance my tooling. And so it has been recently. I have a customer that is a merger of three mainframe estates – and I have data from two of the three heritages. Both of these have IMS.

This merger happened long ago but, as so often happens, the distinct heritages are evident. In particular, the way they set up the IMS systems and regions differs.

You can, to a first approximation, separate IMS-related address spaces into two categories:

  • IMS System Address Spaces
  • IMS Application Regions

In what follows I’ll talk about both, referencing what you can do with SMF 30, specifically. Why SMF 30? Because processing SMF 30 is a scalable method for classifying address spaces, as I’ve written about many times before.

IMS System Address Spaces

IMS system address spaces run with program name “DFSMVRC0” and there are several different such address spaces. For example, over 30 years ago the “DL/I SAS” address space became an option – to provide virtual storage constraint relief. It’s been mandatory for a long time. Also there is a DBRC address space. All have the same program name.

The system address spaces have Usage Data Sections which say “IMS”. The Product Version gives the IMS version. In this customer’s case one part of the estate says “V15” and the other part “V14”.

The IMS Control Region is the only system address space that can attach to Db2 or MQ. So, if the program name is “DFSMVRC0” and there are Usage Data Sections for either Db2 or MQ we know this is the Control Region. But this isn’t always going to be the case – as some IMS environments connect to neither Db2 nor MQ. So here the Product Qualifier field can be helpful:

  • Both DBRC and Control Region address spaces have a Product Qualifier of “TM”. But you can’t necessarily tell them apart from things like I/O rates. However, you might expect a DBRC address space to have a name with something like “DBR” in. (I’m not wowed by that level of fuzziness.)
  • A DL/I SAS has Product Qualifier “DBCTL”.

I’m going to treat IRLM as an IMS System Address Space, when really it isn’t. This is the lock manager – and it’s the same code whether you’re running IMS or Db2. The program name is DXRRLM00 and there is little in SMF to distinguish between an IRLM for IMS and one for a Db2 subsystem. (In fact which Db2 an IRLM address space is associated with isn’t in SMF either.) The best my code can do is parse job names, service class names, report class names and so on for “IMS” or, still worse, “I” but no “D”.

IMS Application Regions

IMS application address spaces – whether MPRs or BMPs – run with program name “DFSRRC00”. They also have Usage Data Sections that say “IMS” but don’t – in the Product Qualifier field – say anything about the subsystem they’re using. Similarly, when CICS attaches to IMS its Product Qualifier isn’t helpful.

To my mind the distinction between an MPR (Message Processing Region) and a BMP (Batch Message Processor) is subtle. For example I’ve seen BMPs that sit there all day, fed work by MQ. You would probably glean something from Service Classes and Report Classes. Relying on the address space name is particularly fraught.
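
To make the above concrete, here’s a toy Python sketch of the kind of first-pass classification I’ve been describing – emphatically not my actual tooling. The function and parameter names (program_name, product_qualifier, has_db2_or_mq_usage, job_name) are just placeholders for values you’d extract from the SMF 30 Identification and Usage Data Sections.

def classify_ims_address_space(program_name, product_qualifier=None,
                               has_db2_or_mq_usage=False, job_name=""):
    # A rough IMS taxonomy from SMF 30 fields, per the discussion above
    if program_name == "DFSMVRC0":
        # Control Region, DL/I SAS and DBRC all share this program name
        if has_db2_or_mq_usage:
            return "IMS Control Region"  # only it attaches to Db2 or MQ
        if product_qualifier == "DBCTL":
            return "DL/I SAS"
        if product_qualifier == "TM":
            # Control Region and DBRC both say "TM"; fall back to fuzzy name matching
            return "DBRC (probably)" if "DBR" in job_name else "Control Region or DBRC"
        return "IMS system address space (unclassified)"
    if program_name == "DFSRRC00":
        return "IMS application region (MPR or BMP)"
    if program_name == "DXRRLM00":
        # IRLM serves both IMS and Db2; name parsing is the best we can do
        return "IRLM (IMS?)" if "IMS" in job_name else "IRLM"
    return "Not IMS-related"

print(classify_ims_address_space("DFSMVRC0", product_qualifier="DBCTL"))  # DL/I SAS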

Two Diverse IMS Estates

This latest customer has two contrasting styles of IMS environment, mainly in their testing environments:

  • One has lots of very small IMS environments.
  • The other has fewer, larger testing environments.

Also, as I noted above, one estate is IMS V14 and the other is V15. This does not appear to be a case of V15 in Test/Development and V14 in Production.

So I guess their testing and deployment practices differ – else this would’ve been homogenised.

I’m going to enjoy talking to the customer about how these two different configurations came to be.

Conclusion

IMS taxonomy can be done – but it’s much messier than Db2 and MQ. It relies a lot on naming conventions and spotting numerical dynamics in the data.

Note: For brevity, I haven’t talked about IMS Datasharing. That would require me to talk at length about XCF (SMF 74–2) and Coupling Facility (SMF 74–4). Something else I haven’t discussed is “Batch DL/I” – where a batch job is its own IMS environment. This is rather less common and I haven’t seen one of these in ages.

I would also say, not touched on here, that SMF 42–6 would yield more clues – as it documents data sets.

And, of course serious IMS work requires its own product-specific instrumentation. Plus, as Dougie pointed out to me, the Procedure JCL.

sd2html, An Open Source WLM Service Definition Formatter

There have been a number of WLM Service Definition formatters over the years. So why do we need another one?

Well, maybe we don’t, but this one is open source, covered by the MIT licence. That means you can change it:

  • You could contribute to the project.
  • You could modify it for your own local needs.

While IBM has other WLM Service Definition Formatters, it was easy to get permission to open source this one.

It’s the one I started on years ago and have evolved over the many engagements where I’ve advised customers on WLM.

If it has an unusual feature it’s that I’ve stuck cross links in wherever I can – which has made it easier for me to use. For example, everywhere a Service Class name appears I have a link to its definition. So, a Classification Rule definition points to the Report Class definition.


Installing sd2html


sd2html is a single PHP script, originally run on a Linux laptop and then on a MacBook Pro. Both platforms come with web servers and PHP built in. In the Mac’s case it’s Apache.

So, to use it you need to provide yourself with a tame (perhaps localhost) web server. It needs to run PHP 7.

Place sd2html.php somewhere that it can be run by the web server.


Extracting A WLM Service Definition


In my experience, most customers are still using the ISPF WLM Application. There is a pull down menu to print the Service Definition. Choose the XML option and it will write to an FB 80 sequential file. This you need to place on the web server, as previously mentioned.

Customers send me their WLM Service Definitions in this format, downloaded with EBCDIC to ASCII translation. It’s easy to email this way.

When I receive the file it looks broken. I keep reassuring customers it isn’t, because I can one-line it, throwing away the new line characters. This used to be a fiddle in my editor of choice – then Sublime Text, now BBEdit. That works well.

But I’ve eliminated the edit step: sd2html now does the edit for me, before passing the repaired text onto the XML parser. (Originally the XML parser read the file on disk directly. Now the code reads the file in, removes the new lines, and then feeds the result to the XML parser.)
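
sd2html does this repair in PHP, but the step itself is trivial. Here’s the same idea sketched in Python – purely for illustration, and assuming the downloaded Service Definition is in a file called wlm.xml:

import xml.etree.ElementTree as ET

# The ISPF export is FB 80, so after EBCDIC-to-ASCII download each 80-byte
# record arrives as its own line; stitch the records back together before parsing
with open("wlm.xml", "r", encoding="utf-8") as f:
    one_line = f.read().replace("\r", "").replace("\n", "")

root = ET.fromstring(one_line)
print(root.tag)  # the Service Definition's top-level element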


Using sd2html


So you’ve got the Service Definition accessible by your PHP web server. Now what?

From a browser invoke sd2html on your web server with something like

http://localhost/sd2html.php?sds=wlm.xml

You obviously need to adjust the URL to point to sd2html. Also the sds query string parameter needs to point to your WLM Service Definition file.

Then browse to your heart’s content, following links which you’ll find in two places:

  • The table of contents at the beginning.
  • Within the document.

Open Sourcing sd2html


I said in filterCSV, An Open Source Preprocessor For Mind Mapping Software I had another open source project in the works. sd2html is it. I have one more piece of code that will need a lot of work to open source – but I think mainframers will like it. And two more potential ones – that aren’t specific to mainframes.

So, I welcome contributions to sd2html, or even just comments / ideas / requirements. Specifically, right now, I’d value:

  • Documentation writing. (This post is all there is right now.)
  • Early testers.
  • Creative ideas.
  • People who know PHP better than I do.
  • People who can think of how to handle the national language characters that show up from time to time.

Anyhow, try it if you can and let me know what you think.

filterCSV, An Open Source Preprocessor For Mind Mapping Software

I have a number of ideas for things I want to open source, some directly related to the day job and some not. This post is about one piece of software that I use in my day job but which you probably wouldn’t recognise as relevant to mainframe performance.

To me the rule of thumb for candidates for open sourcing is clear: Something of use outside of IBM but with little-to-no prospect of being commercialised.

filterCSV is just such a piece of software.


What’s The Original Point Of filterCSV?


filterCSV started out as a very simple idea: Our processing often leads to CSV (Comma Separated Value) files with a tree structure encoded in them.

This tree structure enables me to create tree diagrams in iThoughts. iThoughts is mind mapping software I’m (ab)using to draw tree diagrams. Whereas most people create mind maps by hand, I’m bulk loading them from a CSV file. Strictly speaking, I’m not creating a mind map – but I am creating a tree.

iThoughts has a variant of CSV for importing mind maps / trees. It’s documented here. It’s a very simple format that could be confected by any competent programmer, or from a spreadsheet.

So, to filterCSV: I’ve got in the habit of colouring the nodes in the tree I create in iThoughts. Originally I did it by hand but that doesn’t scale well as a method. If I discern a bunch of nodes (perhaps CICS regions) are part of a group I want to colour them all at once.

The very first piece of filterCSV, which is a Python 3 script, compared nodes to a regular expression. If they matched they’d be coloured with a specified RGB value – by altering the CSV file. I would import this altered CSV file into iThoughts.

In a real customer engagement this saves a lot of time: For CICS regions the nodes have the string “RC: CICSAORS” in, for example. “RC” is short for “Report Class”, of course. So the following works quite well as a command line invocation:


filterCSV < input.csv > output.csv 'RC:CICSAORS' FFBDAA


So every node with “RC: CICSAORS” in its text gets coloured with RGB value FFBDAA.

If I keep going with this I can find rules that colour all the CICS regions. Then I understand them much better.
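
If you’re curious what that regular-expression colouring looks like in code, here’s a minimal Python sketch of the idea – emphatically not the real filterCSV source, and the column names (“text”, “color”) are assumptions for illustration rather than the actual iThoughts CSV layout:

import csv
import re

def colour_matching_nodes(in_path, out_path, pattern, rgb):
    # Colour any row whose "text" cell matches the regular expression
    regex = re.compile(pattern)
    with open(in_path, newline="") as infile, open(out_path, "w", newline="") as outfile:
        reader = csv.DictReader(infile)
        fieldnames = list(reader.fieldnames)
        if "color" not in fieldnames:
            fieldnames.append("color")
        writer = csv.DictWriter(outfile, fieldnames=fieldnames)
        writer.writeheader()
        for row in reader:
            if regex.search(row.get("text", "")):
                row["color"] = rgb  # set the node's RGB value
            writer.writerow(row)

# For example: colour all the CICS AOR report class nodes
colour_matching_nodes("input.csv", "output.csv", r"RC: CICSAORS", "FFBDAA")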


Open Sourcing filterCSV


Let’s generalise the idea: You might be creating a mind map and want to colour some nodes, based on a readily-codifiable criterion. Here’s what you do:

  1. You export the mind map from iThoughts in CSV format.
  2. You throw the CSV file through filterCSV, specifying the regular expression and the colour on the command line.
  3. You import the resulting CSV file into iThoughts.

I don’t know how many users of mind mapping software want to do this, but I bet I’m not alone in wanting it. If the effort to open source it is minimal it makes sense to do so, rather than accepting I’m going to be the only user.

So, I put it on IBM’s internal GitHub site – and I was pleased when Christian Clauss of IBM Zurich joined me in the effort. He’s brought a lot of experience and, in particular, testing knowledge to bear.

Then I got permission to open source filterCSV. This turned out to be very straightforward for a number of reasons:

  • IBM is keen on open sourcing stuff.
  • There is no prospect of this becoming product code.
  • The process for open sourcing when there are no dependencies is streamlined.

I’ll also say this is a good practice run for open sourcing things that are of wider interest – and particularly for the mainframe community. Which is something I really want to do.

So it’s now a project on GitHub. I subsequently went through the process with another one – which I’ll talk about in another blog post.


filterCSV Has Morphed Somewhat


I realised a couple of things while developing filterCSV:

  1. It’s not just iThoughts that the method could be applied to.
  2. What I really have is a tree manipulation tool. In fact that’s essentially what mind mapping software is.

It’s the combination of those two points that made me think the tool could be more generally useful. So here are some things I’ve added to make it so:

  • It can import flat text – creating a tree using indentation. That can include Markdown using asterisks for bullets.
  • It can import XML.
  • You can delete nodes that match a regular expression.
  • You can change the shape of a matching node, or its colour.
  • You can write HTML in tabular or nested list form.
  • You can write XML – either OPML or Freemind.
  • You can promote nodes up the hierarchy, replacing their parents.
  • You can spread Level 0 nodes vertically or horizontally. This helps when you have multiple trees.

Craig Scott, the developer of iThoughts, kindly gave me the RGB values for iThoughts’ colour palette. So now you can specify a colour number in the palette. (You can actually go “next colour” (or “nc” for short), which is quite a boon when you have multiple regular expression rules.)

Some of these things came to me while using filterCSV in real life; the experience of actually using something you built is useful.


Conclusion


So this has been a fun project, where I’ve learnt a lot of Python. I continue to have things I want to do to filterCSV, including something that should be out in the next few days. The general “tree manipulation” and “adjunct to iThoughts” ideas seem to have merit. And I’m enjoying using my own tooling.

If you fancied contributing to this open source project I’m open to that. In any case you can find it here on GitHub. The latest release is 1.2 and 1.3 should, as I say, be out soon.

And I have plenty of ideas for things to enhance filterCSV.

Is Db2 Greedy?

If you want really good Db2 performance you follow the guidelines from Db2 experts.

These guidelines contain such things as “ensure Db2 has excellent access to zIIP” and “put the Db2 address spaces up high in the WLM hierarchy”. Some of these rules come as a bit of a shock to some people, apparently.

If you take them all together it sounds like Db2 is greedy. But is it really? This blog post seeks to answer that question.


Why Is Db2 Performance So Important?


It’s a good idea to understand how important Db2’s performance is – or isn’t. Let’s assume at least some of the work connecting to Db2 is important to the business. Then that work’s performance depends on Db2’s own performance.

The principle is the server needs to have better access to resources than the work it serves.

Let’s take two examples:

  • If writing to the Db2 logs slows down, commits will slow down – and the work that wants to commit will slow down.
  • If Prefetch slows down, the work for which the prefetch is done will slow down. Ultimately – if we run out of Prefetch Engines – the work will revert to synchronous I/O.

So, in both these cases we want the appropriate Db2 components to run as fast as possible.

“Ah”, you might say, “but a lot of this work is asynchronous”. Yes, but here’s something you need to bear in mind: Asynchronous work is not necessarily completely overlapped. There’s a reason we have time buckets in Accounting Trace (SMF 101) for asynchronous activities: A sufficiently slowed down async activity can indeed become only partially overlapped. So it does matter.


What Resources Are We Talking About?


In both the above examples zIIP performance comes into play:

  • Since Db2 Version 10, Prefetch Engines (which are really Dependent Enclaves) are 100% eligible for zIIP. Similarly Deferred Write Engines. These run in the DBM1 address space.
  • Since Db2 Version 11, Log Writes are similarly eligible. These are issued from the MSTR address space.

Note: The number of Prefetch Engines (or rather their limit) is greatly increased in Db2 Version 12 – from 600 to 900. I think this provides some headroom for surges in requests – but it doesn’t obviate the need for DBM1 to have very good access to zIIP.

By the way Robert Catterall has a very good discussion on this in Db2 for z/OS Buffer Pools: Clearing the Air Regarding PREFETCH DISABLED – NO READ ENGINE.

Note Also: Everything I’ve said about Prefetch Engines is true of Deferred Write Engines.

The above has been about zIIP but almost all the same things apply to General Purpose CPU. The two go together in the WLM Policy: Delay for CPU and Delay For zIIP are part of the denominator in the calculation of Velocity.
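
(As a reminder, Execution Velocity is calculated as Using samples ÷ (Using samples + Delay samples) × 100 – so CPU Delay and zIIP Delay samples sit in that denominator and directly depress the attained velocity.)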

By the way, just because a Delay sample count is low doesn’t necessarily mean there was no delay. Furthermore, don’t be overly reassured if the zIIP-on-GCP number is low: In both cases there can be zIIP Delay in between samples. Further, zIIP-eligible work not crossing over to GCPs can still hide delay in getting dispatched on a zIIP.

Further, there are scenarios where a zIIP doesn’t get help from a GCP:

  • The logical zIIP has to be dispatched on a physical one to ask for help. This is less timely for a Vertical Medium (VM) and still less so for a Vertical Low (VL) – compared to a Vertical High (VH).
  • The GCP pool might itself be very busy and so might not offer to help the zIIPs.

(By the way zIIPs ask for help; GCPs don’t go round trying to help.)

Of course, CPU is not the only category of resource Db2 needs good access to. Memory is another. Db2 is a great exploiter of memory, getting better with each release. By Version 10 virtually all of it was 64-Bit and long-term page-fixed 1MB pages are the norm for buffer pools.

It remains important to provision Db2 with the memory it needs. For example, it’s best to prevent buffer pools from paging.

Usually Db2 is the best place to consider memory exploitation, too.

Then there’s I/O. Obviously fast disk response helps give fast transaction response. And technologies like zHyperWrite and zHyperLink can make a significant difference; Db2 is usually an early exploiter of I/O technology.

For brevity, I won’t go into Db2 Datasharing requirements.


Db2 And WLM


The standard advice for Db2 and WLM is threefold:

  • Place the IRLM (Lock Manager) address space in SYSSTC – so above all the application workload and the rest of the Db2 subsystem.
  • Place the DBM1, MSTR, and DIST address spaces above all the applications, with the possible exception of genuine CICS Terminal Owning Regions (TORs), using CPU Critical to further protect their access to CPU.
  • Understand that DDF work should be separately classified from the DIST address space and should be below the DIST address space in the WLM hierarchy.

“Below” means of a lower Importance, not necessarily a lower Velocity. Importance trumps the tightness of the goal.

While we’re talking about Velocity, it’s important to keep the goal realistic. In my experience I/O Priority Queuing keeps attained Velocity higher than it would be without it enabled, because the samples are dominated by Using I/O. This also means CPU and zIIP play less of a role in the Velocity calculation.

Follow these guidelines and you give Db2 the best chance in competing for resources – but still no guarantee a heavily constrained system won’t cause it problems.


What Of Test Db2 Subsystems?


Perhaps a Test Db2 isn’t important. Well it is to the applications it serves, in the same way that a Production Db2 is to a Production workload. So two things flow from this:

  • The test Db2 should have better access to resources than the work it supports.
  • If you want to run a Production-like test you’d better give the Db2 that’s part of it good access to resources.

That’s not to say you have to treat a Test Db2 quite as well as you’d treat a Production Db2.


What Of Other Middleware?


Db2 isn’t necessarily exceptional in this regard. MQ and IMS are certainly similar. For MQ we’re talking about the MSTR and CHIN address spaces. For IMS the Control Region and SAS, at least.

You’d want all of these to perform well – to serve the applications they do.


Conclusion


Despite apparently strident demands, I wouldn’t say Db2 is greedy. I entirely support the notion that it needs exceptionally good access to resources – because of how its clients’ performance is so dependent on this. But there are indeed other things that play an even more central role – such as the address spaces in SYSTEM and those that are in SYSSTC.

One final point: I said I wouldn’t talk about Datasharing but I will just say that an ill-performing member can drag down the whole group’s performance. So we have to take a sysplex-wide view of resource allocation.

Let’s Play Master And Servant

I’ve been experimenting with invoking services on one machine from another. None of these machines have been mainframes. They’ve been iPads, iPhones, Macs and Raspberry Pis.

I’ve used the plural but the only one I actually have two of is the iPhone. And that plays a part in the story.

For the purpose of this post you can view iPads and iPhones the same.

Actually mainframers needn’t switch off – as some of this might be a useful parallel.

Why Would You Invoke Services On Another Machine?

It boils down to two things, in my view:

  • Something the other machine has.
  • Something the other machine can do.

I have examples of both.

At this point I’m going to deploy two terms:

  • Heterogeneous – where the architecture of the two machines is different.
  • Homogeneous – where the architecture of the two machines is the same.

You can regard Pi calling iOS as heterogeneous. You can regard iOS calling iOS as homogeneous. There is quite a close relationship between these two terms and the “has” versus “can do” thing. But it’s not 100% true.

On “has” I would probably only invoke a service on another device of the same type if it had something that wasn’t on the invoking machine. Such as a calendar. So, for example, I could copy stuff from my personal calendar into my work calendar.

On “can do” I would invoke a service on another machine of a different type if it can do something the invoking machine can’t do. For example, a Raspberry Pi can’t create a Drafts draft – but iOS (and Mac) can.

But Pi can do a lot of things iOS can do, and vice versa. So sometimes “has” comes into play in a heterogeneous way. (I don’t have a good example of this – and it’s not going to feature any further in this post.)

Now let’s talk about some experiments – that I consider “Production-ready”.

Automating Raspberry Pi With Apache Webserver

There are certain things Raspberry Pi – which runs Linux – can do that iOS can’t. One of them is use certain C-based Python libraries. One I use extensively is python-pptx – which creates PowerPoint presentations. My Python code uses this library to make presentations from Python. Normally it would run on my Mac.

At 35,000 feet I can run my Pi on a battery and access it on a Bluetooth network. I can SSH in from my iPad and also have the Pi serve web pages to the iPad.

So I don’t need to pull out my nice-but-cumbersome Mac on a plane.

SSH is my method of choice for administering the Pi. But what of invoking services? Here’s where Apache (Webserver) comes in handy. It’s easy to get Apache with PHP up and running. In my case I use URLs of the form http://pi4.local/services/phpinfo.php to access it from iOS:

  • I can use the Safari web browser.
  • I can use a Shortcuts action to GET or POST. In fact, all HTTP verbs.
  • I can write JavaScript in Scriptable to access the webserver.
  • and so on.

Here’s a very simple PHP example:

<?php
phpinfo();
?>

I’ve saved this as phpinfo.php in /var/www/html/services on the Pi. So the URL above is correct to access it.

I don’t want to get into PHP programming but the first and the last lines are brackets around PHP code. Outside of these brackets could be HTML, including CSS and JavaScript. phpinfo() itself is a call to a built-in routine that gives information about the PHP setup in the web server. (I use it to validate PHP is even installed properly and accessible from the web server.)

The point of this example is that it’s very simple to set up a web server that can execute real code. PHP can access, for example, the file system.

Here’s another example:

<?php
print("<table>\n");
print("<tr><th>Parameter</th><th>Value</th></tr>\n");
foreach ($_GET as $key => $value) {
  print("<tr><td>$key</td><td>$value</td></tr>\n");
}
print("</table>\n");

?>

This is queryString.php and it illustrates one more important point: You can pass parameters into a script on the Pi, using the query string portion of the URL. This example tabulates any passed in parameters.
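
To show the client side of that, here’s a small Python sketch using the requests library – the parameter names are entirely arbitrary, and pi4.local is just my Pi’s Bonjour name:

import requests

# Any parameters will do; queryString.php just echoes them back as an HTML table
resp = requests.get(
    "http://pi4.local/services/queryString.php",
    params={"colour": "blue", "size": "large"},
)
print(resp.text)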

So the net of this is you can automate stuff on the Raspberry Pi (or any Linux server or even a Mac) if you set it up as a web server.

Automating iOS With Shortcuts And Pushcut Automation Server

Automating an iOS device from outside is rather more difficult. There are several components you need:

  1. Shortcuts – which is built in to both iPad OS and iOS.
  2. A way to automatically kick off a shortcut from another device.
  3. Some actual shortcuts to do the work.

Most people reading this are familiar with items 1 and 3. Most people won’t, however, know how to do item 2.

This is where Pushcut comes to the rescue. It has an experimental Automation Server. When I say “experimental”, Release 1.11 – three weeks ago at the time of writing – introduced it as such, but there have been enhancements released since then. So I guess and hope it’s here to stay, with a few more enhancements perhaps coming.

Before I go on I should say some features of Pushcut require a subscription – but it’s not onerous. I don’t know if Automation Server is one of them – as I happily subscribed to Pushcut long before Automation Server was a thing. There’s a lot more to Pushcut than what I’m about to describe.

When you start up Automation Server it has access to all your Shortcuts, ahem, shortcuts. You can invoke one using a URL of the form

https://api.pushcut.io/<my-secret>/execute?shortcut=Hello%20World&input=Bananas%20Are%20Nice

The input parameter is the shortcut’s input parameter. You don’t have to have an input parameter, of course, so just leave off the Bananas%20Are%20Nice part. (Notice it is percent encoded, which is easy to do, as is the shortcut name.)

The above invokes a (now obsolete version of my) Hello World shortcut. Obsolete because it now turns the input into a dictionary.

JSON is passed in thus:

https://api.pushcut.io/<my-secret>/execute?shortcut=Hello%20World&input={"a":"123%20456,789"}

Again it’s percent encoded.

On Pi (via SSH from Mac) you can use the Lynx textual browser, wget, or curl. Actually, on Mac, you can do it natively – if you want to.

When you use the URL, Pushcut – via the web – invokes the Automation Server on your device. “Via the web” is important as you can’t use the Automation Server technique without Internet access. You can, however, use another function of Pushcut – Notifications-based running of a shortcut.

Curl Invocation

Curl invocation is a little more complex than invoking using a web browser. Curl is a command line tool to access URLs. Here is how I invoke the same shortcut, complete with a JSON payload:

curl -G -i 'https://api.pushcut.io/<my-secret>/execute' --data-urlencode 'shortcut=Hello World' --data-urlencode 'input={"a":"123 456,789"}'

Note the separation of the query string parameters and their encoding. I found this fiddly until I got it right.
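
If you’d rather sidestep curl’s quoting, the same call is straightforward from Python. Here’s a sketch using the requests library, which does the percent encoding for you – with <my-secret> standing in for your own Pushcut secret, as before:

import requests

# Replace <my-secret> with your own Pushcut secret
url = "https://api.pushcut.io/<my-secret>/execute"
params = {
    "shortcut": "Hello World",
    "input": '{"a":"123 456,789"}',
}
# requests percent-encodes the query string parameters for us
response = requests.get(url, params=params)
print(response.status_code, response.text)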

An Automation Server

It’s worth pointing out that Automation Server should run on a “dedicated” device. In practice few people will have these. So I’ve experimented with what that actually means.

First, buying a secondhand device that is good enough to run iOS 12 (the minimum supported release and not recommended) is prohibitive. In the UK I priced them above £100 so I’d have to have a much better use case than I do.

Second, I don’t have a spare device lying around.

But I do have a work phone (that I donated, actually) that rarely gets calls. It does get notifications but those don’t seem to stop Automation Server from functioning just fine.

So this approach might not be available to many people.

Automating Drafts From Raspberry Pi

One good use case is being able to access my Drafts drafts from my Raspberry Pi. Drafts is an excellent environment for creating, storing and manipulating text. I highly recommend it for iPhone, iPad and Mac users. Again a well-worth-it subscription could apply.

So, my use case is text access on the Pi. I might well create a draft on iOS from the Pi, perhaps storing some code created on the Pi.

So I’ve written shortcuts that:

  • Create a Drafts draft.
  • Retrieve the UUIDs (identifiers) of drafts that match a search criterion. (In the course of writing this post I upgraded this to return a JSON array of UUID / draft title pairs.)
  • Move a draft to the trash (as the way of deleting it).

Here’s what the first one of these looks like.

It’s a simple two-action shortcut:

  • Create the draft.
  • Return the UUID (unique ID) of the newly created draft.

The second action isn’t strictly necessary but I think there will be cases where the UUID will be handy. For example, deleting the draft requires the UUID. So this action returns the UUID as a text string.

The first action is the interesting one. It takes the shortcut’s input as the text of the draft and creates the draft. Deselecting “Show When Run” is important as otherwise the Drafts user interface will show up and you don’t want that with an Automation Server. (It’s likely to temporarily stall automation.)

So this is a nice example where iOS can do something Raspberry Pi (or Linux) can’t. This is not a Mac problem as Drafts has a Mac client.

Outro

A couple of other things:

I did a lot of my testing on a Mac – both directly as a client and also by SSH’ing into the Pi. (Sometimes I SSH’ed into the Pi on both iPad and iPhone using the Prompt app.)

By the way, the cultural allusion in the title is to Depeche Mode’s Master And Servant. I’ve been listening to a lot of Depeche Mode recently.

I’ve titled this section “Outro” rather than “Conclusion” because I’m not sure I have one – other than it’s been a lot of fun playing with one machine calling another one. Or playing Master and Servant. 🙂

My Generation?

Ten years ago in Channel Performance Reporting I talked about how our channel-related reporting had evolved.

That was back when the data model was simple: You could tell what the channel types were and what was attached to each channel:

  • Field SMF73ACR in SMF Type 73 (RMF Channel Path Activity) told you the channel was, for example, FICON Switched (“FC_S”). This was an example I gave in that blog post.
  • SMF Type 78 Subtype 3 (RMF I/O Queuing Activity) gave you the control units attached to each channel. (You had to aggregate them through SMF 74 Subtype 1 (RMF Device Activity) to relate them to real physical controllers.)

SMF 74 Subtype 7 FICON Switch Statistics

In that post I talked about SMF 74 Subtype 7 (FICON Director Statistics) as a possible extension to our analysis. I mapped this record type pretty comprehensively. Unfortunately so few customers have 74–7 enabled that I haven’t got round to writing analysis code.

I’m grateful to Steve Guendert for information on how to turn on these records:

RMF 74–7 records are collected for each RMF interval if:

  • The FICON Management Server (FMS) license is installed on the switching device. (This is the license for FICON CUP)
  • The FMS indicator has been enabled on the switch
  • The IOCP has been updated to include a CNTLUNIT macro for the FICON switch (2032 is the CU type)
  • The IECIOSnn parmlib member is updated to include “FICON STATS=YES”
  • “FCD” is specified in the ERBRMFnn parmlib member

With these instructions I’m hoping more customers will turn on 74–7 records.

Talking About My Generation

What I don’t think we had 10 years ago was a significant piece of information we have today: Channel Generation. This is field SMF73GEN in SMF Type 73.

This gives further detail on how the channel is operating.

There isn’t a complete table anywhere for the various codes but here’s my attempt at one:

Value  Meaning                  Value  Meaning
0      Unknown                  18     Express8S at 4 Gbps
1      Express2 at 1 Gbps       19     Express8S at 8 Gbps
2      Express2 at 2 Gbps       21     Express16S at 4 Gbps
3      Express4 at 1 Gbps       22     Express16S at 8 Gbps
4      Express4 at 2 Gbps       23     Express16S at 16 Gbps
5      Express4 at 4 Gbps       24     Express16S at 16 Gbps
7      Express8 at 2 Gbps       31     Express16S+ at 4 Gbps
8      Express8 at 4 Gbps       32     Express16S+ at 8 Gbps
9      Express8 at 8 Gbps       33     Express16S+ at 16 Gbps
17     Express8S at 2 Gbps      34     Express16S+FEC at 16 Gbps

Actually that “24” entry worries me.

By the way I’m grateful to Patty Driever for fishing many of these out for me.
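
If you’re decoding SMF73GEN programmatically, the table above translates directly into a lookup. Here’s a minimal Python sketch – the dictionary simply mirrors my table, so treat it with the same caution as the table itself (that “24” entry included):

# SMF73GEN value -> channel generation / speed, transcribed from the table above
SMF73GEN_MEANING = {
    0: "Unknown",
    1: "Express2 at 1 Gbps", 2: "Express2 at 2 Gbps",
    3: "Express4 at 1 Gbps", 4: "Express4 at 2 Gbps", 5: "Express4 at 4 Gbps",
    7: "Express8 at 2 Gbps", 8: "Express8 at 4 Gbps", 9: "Express8 at 8 Gbps",
    17: "Express8S at 2 Gbps", 18: "Express8S at 4 Gbps", 19: "Express8S at 8 Gbps",
    21: "Express16S at 4 Gbps", 22: "Express16S at 8 Gbps",
    23: "Express16S at 16 Gbps", 24: "Express16S at 16 Gbps",
    31: "Express16S+ at 4 Gbps", 32: "Express16S+ at 8 Gbps",
    33: "Express16S+ at 16 Gbps", 34: "Express16S+FEC at 16 Gbps",
}

def describe_channel_generation(smf73gen):
    return SMF73GEN_MEANING.get(smf73gen, "Unrecognised code %d" % smf73gen)

print(describe_channel_generation(34))  # Express16S+FEC at 16 Gbps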

A Case In Point

We’ve been decoding these channel “generations” for a number of years but it came into sharper focus with a recent customer situation.

This customer has a pair of z14 processors, each with quite a few LPARs. A few weeks ago I pointed out to them that quite a few of their channels were showing up at 4 Gbps or 8 Gbps, not the 16 Gbps they might have expected.

Where the channels were at 16 Gbps they were indeed using Forward Error Correction (FEC).

Now, why would a 16 Gbps channel run at 8 or even 4 Gbps? It’s a matter of negotiation. If the device or the switch can’t cope with 16 the result might well be negotiation down to a slower speed. So older disks or older switches might well lead to this situation. I guess it’s also possible that unreliable fabric might lead to the same situation.

(Quite a few years ago I had a customer situation where their cross-town fabric was being run above its rated speed, with quite a lot of errors. It was a critsit, of course. 🙂 Later I had another critsit where the customer had ignored their disk vendor’s (not IBM’s) advice on how to configure cross-town fabric. So it does happen.)

Apart from SMF73ACR the above all relates exclusively to FICON. So what about Coupling Facility links?

I’ve written extensively on these. The most recent post is Sysplexes Sharing Links. In this post you’ll find links (if you’ll pardon the pun) to all my other posts on the topic. I don’t need to repeat myself here.

But SMF73ACR does indeed identify broad categories of Coupling Facility links:

  • “ICP” means “IC Peer link”.
  • “CS5” means “ICA-SR link”.
  • “CL5” means “Coupling Express LR link”.

Conclusion

Whether we’re talking about Coupling Facility links or FICON links it’s worth checking what you’re actually getting. And RMF has – both in its reports (which I haven’t touched on) and in its SMF records – the information to tell you how the links panned out.

It’s also worthwhile knowing that raw link speed doesn’t translate directly into I/O response time savings. But modern channels, especially when exploiting zHPF, can yield impressive gains. But this is not the place to go into that.

One final thought: Ideally you’d allow the latest and greatest links to run at their full speed. But economics – such as the need to keep older disks in Production – might defeat this. But, over time, it’s worthwhile to think about whether to upgrade – to get the full channel speed the processor is capable of.

Dock It To Me

With z/OS 2.4 you can now run Docker containers in zCX address spaces. I won’t get into the whys and wherefores of Docker and zCX, except to say that it allows you to run many packages that are available on a wide range of platforms.

This post, funnily enough, is about running Docker on a couple of reasonably widely available platforms. And why you might want to do so – before running it in zCX address spaces on z/OS.

Why Run Docker Elsewhere Before zCX?

This wouldn’t be a permanent state of affairs but I would install Docker and run it on a handy local machine first.

This is for two reasons:

  1. To get familiar with operating Docker and installing packages.
  2. To get familiar with how these packages behave.

As I’ll show you – with two separate environments – it’s pretty easy to do.

The two environments I’ve done it on are:

  • Raspberry Pi
  • Mac OS

I have no access to Windows – and haven’t for over 10 years. I’d hope (all snark aside) it was as simple as the two I have tried.

One other point: As the Redbook exemplifies, not all the operational aspects of zCX are going to be the same, but understanding the Docker side is a very good start.

Raspberry Pi / Linux

Raspberry Pi runs a derivative of Debian. I can’t speak for other Linuxes. But on Raspberry Pi it’s extremely easy to install Docker, so it will be for other Debian-based distributions. (I just have no experience of those.)

You simply issue the command:

apt-get install docker

If you do that you get a fully-fledged Docker set up. It might well pull in a few other packages.

My Raspberry Pi 4B has a 16GB microSD card and it hasn’t run out of space. Some docker packages (such as Jupyter Notebooks) pull in a few gigabytes so you probably want to be a little careful.

After you’ve installed Docker you can start installing and running other things. A simple one is Node.js or “node” for short.

With node you can run “server side” javascript. Most of the time I prefer to think of it as command-line javascript.

A Simple Node Implementation

I created a small node test file with the nano editor:

console.log("Hello")

And saved it as test.js.

I can run this with the following command:

docker run -v "$PWD":/usr/src/app -w /usr/src/app node test.js

This mounts the current working directory as the /usr/src/app directory in Docker (-v option of the docker run command), sets the docker working directory to this directory (-w option), and then invokes node to run test.js. The result is a write to the console.

(This combination of -v and -w is a very common idiom, so it’s worth learning it.)

Accessing A Raspberry Pi From iOS

Though I SSH into my Pi from my Mac the most fun is doing it from iOS. (Sorry if you’re an Android user but you’ll have to find your own SSH client as I have little experience of Android. Likewise Windows users. I’m sure you’ll cope.)

My Raspberry Pi is set up so that I use networking over Bluetooth or WiFi. This means I can play with it on a flight or at home. In both cases I address it as Pi4.local, through the magic of Bonjour.

Specifically I can SSH into the Pi from an iOS device in one of two ways:

  • Using the “Run Script over SSH” Shortcuts action.
  • Using the Prompt iOS app.

I’ve done both. I’ve also used a third way to access files on the Pi: Secure ShellFish.

All these ways work nicely – whether you’re on WiFi or networking over Bluetooth.

Mac OS

For the Mac it’s a lot simpler. For a start you don’t need to SSH in.

I installed Docker by downloading from here. (I note there is a Windows image here but I haven’t tried it.)

Before it lets you download the .dmg file you’ll need to create a userid and sign in. Upon installation a cute Docker icon will appear in the menu bar. You can use this to control some aspects of Docker. You can sign in there.

From Terminal (or, in my case, iterm2) you can install nginx with:

docker pull nginx

This is a lightweight web server. You start it in Docker with:

docker run -p 8080:80 -d nginx

If you point your browser at localhost:8080 you’ll get a nice welcome message. This writeup will take you from there.

Conclusion

There’s an excellent Redbook on zCX specifically: Getting started with z/OS Container Extensions and Docker

It’s also pretty good on other Docker environments, though it doesn’t mention them.

Which, I think, tells you something. Probably about portability and interoperability.

So, as I said, it’s easy and rewarding to play with Docker just before doing it on z/OS with zCX. And, as I said on Episode 25 of our podcast, I’d love to see SMF data from zCX.

If I get to see SMF data from a zCX situation – whether a benchmark or a real customer environment – I’ll share what I’m seeing. I already have thoughts.

And maybe I’ll write more about my Raspberry Pi setup some day.

Mainframe Performance Topics Podcast Episode 25 “Flit For Purpose”

It’s been a long time since Marna and I recorded anything. So long in fact that I’d forgotten how to use my favoured audio editor Ferrite on iOS.

But it soon came back to me. Indeed the app has moved on somewhat since I last used it – in a number of good ways.

So, we’re back – in case you thought we’d gone away for good.

And here are the show notes. I hope you’ll find some interesting things here.

Episode 25 “Flit for Purpose”

Here are the show notes for Episode 25 “Flit for Purpose”. The show is called this because it relates to our Topic, and also can be related to our Mainframe topic (as a pun for “Fit for Purpose”).

It’s been a long time between episodes, for good reason

  • We’ve been all over the place – too many places to mention – thanks to a very busy end of 2019, with two major GAs (z/OS V2.4 on September 30, 2019 and z15 on September 23, 2019)

Feedback

  • Twitter user @jaytay had a humorous poke at the SMF field “SMF_USERKEYCSAUSAGE” and how it looks like “sausage”. We agree and are glad to find humor everywhere.

Follow up

  • We mentioned in Episode 24 CICS ServerPac in a z/OSMF Portable Software Instance Format. It GA’ed December 6 on ShopZ. If you will be ordering CICS or CICS program products, please order it in z/OSMF format!

Mainframe Topic: Highest highlights of z/OS V2.4 and z/OS on z15

  1. Highlight 1:  zCX

    • zCX is a new z/OS address space that runs Linux on Z with the Docker infrastructure. Docker containers have become popular in the industry. Examples include nginx, MongoDB, and WordPress.

      • (The use cases depicted reflect the types of software that could be deployed in IBM zCX in the future. They are not a commitment or statement of software availability for IBM zCX.)
    • Take a Docker Hub image and run it “as is”. There are about 3,000 images to choose from that can immediately run. Just look for the “IBM Z” types.

    • The images are not necessarily from IBM, which brings about a “community” and “commonality” with Linux on Z.

    • zCX is packaged with SMP/E, and serviced with SMP/E.  However, configuration (getting it up and running) and service updates must be done with z/OSMF Workflow.

    • Application viewpoint: Docker images themselves are accessed through the TCP/IP stack, with the standard Docker CLI using SSH. And the application people might not even know it’s running under z/OS. The Docker Command Line Interface is where you determine which containers run in which zCX address spaces.

    • For cost: No SW priced feature (IFAPRDxx). However, it does require a priced HW feature (0104, Container Hosting Foundation) on either z14 GA2 or z15. This is verified at zCX initialization. Lastly, zCX cycles are zIIP-eligible.

    • It’s an architectural decision whether to run Docker applications on Linux on Z or z/OS, and that’s for another episode.

      • Martin wants to see some SMF data, naturally. He’s installed Docker on two different platforms: Mac and his Raspberry Pi. In the latter case he installed nginx and also gcc.
  2. Highlight 2: z/OSMF

    • Lots of z/OSMF enhancements have arrived in z/OS V2.4, and the good news is that most of them are rolled back to V2.3 in PTFs that have been arriving quarterly.

    • Security Configuration Assistant: A way within z/OSMF to validate your security configuration with graphic views, on the user and user group level. Designed to work with all three External Security Managers!

      • Security is, and continues to be, one of the hardest parts of getting z/OSMF working. This new application itself has more security profiles that users of Security Configuration Assistant will need access to, but the good side is once those users are allowed, they can greatly help the rest.

      • Use case: if a user is having problems accessing an application and you don’t know why, you could easily see if this user had authority to access the application, to eliminate that as a problem.

      • Available back to V2.3 with APAR PH15504 and additional group id enhancements in APAR PH17871

    • Diagnostic Assistant for z/OSMF: A much simpler way to gather the necessary information for a Service person to debug your z/OSMF problem.

      • It was not so easy before. It could have been streamlined, and it took this application to give us that. It is now so easy. Now Marna no longer grudgingly gathers problem doc, even though there are lots of different locations that contain the necessary diagnostic files.

      • It could not be easier to use (although Marna has requested one additional tiny enhancement of the z/OSMF team: to make it even easier by not requiring the z/OSMF server jobname and jobid).

      • You open up the Diagnostic Assistant application and you select the pieces of information you want to gather. This includes the configuration data, the job log, the server side log and some other files. Having z/OSMF collect it for you is really nice.

        • It is then zipped up and stored on your hard drive (not on the z/OS system).
      • Available back to V2.3 with Diagnostic Assistant APAR PH11606

  3. Highlight 3: z/OS on z15: System Recovery Boost: Speeds up your shutdown for up to 30 minutes and your re-IPL for up to 60 minutes, with no increase to your rolling four hour average.

    • Some bits are free, some will cost should you choose to use them. Note that System Recovery Boost is available on z/OS V2.3 and V2.4. There is no priced z/OS SW feature (IFAPRDxx) at all. It is made up of individual solutions, not all of which you may choose to use (or may apply to you).

      • No charge for this one: Sub-capacity CPs will be boosted to full capacity. If you’re at full capacity already, this will not help.

      • No charge for this one: Use your entitled zIIPs for running workload that usually would not be allowed to run there (General CP workload).

        • Martin will have updates to his zIIP Capacity Planning presentation!
      • If you are a GDPS user, there are GDPS 4.2 scripting and firmware enhancements. These will allow parallelization of GDPS reconfiguration actions that may be part of your restart, reconfiguration, and recovery process.

        • Martin notes that if you parallelise more than you otherwise would it might affect the resource picture.
      • Lastly, and now this one is priced, if you want to, you can purchase HW features (9930 and 6802).  These features will allow you to have extra temporary zIIP capacity which you can then make use of for even more boost processing.

      • Additional reference: System Recovery Boost Redpaper

      • Martin again, looks forward to seeing data from this as RMF could show some challenging things for his reporting.

Performance Topic: z15 from chip design on upwards

  • Disclaimer: personal view, not from Development or Marketing. Marna and Martin were talking about the z15 Chip design – and we thought those observations might be useful to include in the Performance topic. Mostly focusing on the CP chip, which is where the processor cores are.

  • Two traditional levers were raising clock speed or shrinking the feature size.

    • z15 clock speed still 5.2 GHz. And we’ve been as high as 5.5 GHz with zEC12.

    • Feature size still 14 nanometers (nm). Some other fabs have 10nm and even 7nm processes.

  • GHz and nm aren’t the be all and end all. Looking at chip design now.

    • Start with a similar sized CP chip and put more on it. It helped to get rid of the InfiniBand-related circuits, and there were some layout enhancements.

      • Very sophisticated software used for laying out all modern chips. Once you have more chip real estate, good stuff can happen.

        • Same size CP chip has 3 billion more transistors, that’s 9.1 billion transistors.
      • This can give us two more cores, taking us to twelve.

        • As an aside, Martin has seen the two more cores on a z14 PU chip allow better Cycles-Per-Instruction (CPI) than on z13.
      • More L2 Instruction Cache, at the core level. Double L3 Cache size, at the chip level, shared between cores. So almost double per core. All of this has got to lead to better CPI.

      • Nest Acceleration Unit (NXU): Compression on this chip is such a fascinating topic, but for another episode.

      • A drawer can go down from 5 or 6 PU chips to 4 and – for a 1-drawer machine – still have one more purchasable core than z14: 34 vs 33.

        • 33 vs 34 is for 1 drawer. Similar things apply for two or more drawers.

        • As a result they also were able to remove one set of X-Bus circuitry.

          • The X-Bus is used to talk to other CP chips and the System Controller (SC) Chip.

          • Now down from 3 to 2: one for the SC chip and one for the remaining other CP chip in the cluster.

        • The contents of the drawer now fit in an industry standard 19 inch (narrower) rack, following what was done on the z14 ZR1.

      • At the top end there are up to 190 characterisable cores, coming up from 170. This can give us a fifth drawer – which is quite important.

        • Speculation that reducing the number of PU chips enabled the System Controller (SC) chip to talk to four other SC chips, up from 3 in z14, getting us from 170 to 190.
      • Many other things too: Like 40TB Max Memory at the high end vs 32TB, improved branch prediction so deeper processor design, and enhanced Vector processing with new instructions.

Topics Topic: How To Do A Moonlight Flit

  • This topic is about moving one’s social output, in particular blogs and podcast series. Martin’s blog had to move, because the IBM developerWorks blog site is being shut down.

    • Many blogs moved to ibm.com, or elsewhere like Martin’s Mainframe Performance Topics. Marna’s blog remains in the same spot (Marna’s Musings).

    • This podcast probably also has to move, for similar reasons, and we are looking for another location now.

      • When we move it, the feed will have to be replaced in your podcast client, sadly.
  • Immediately people might worry about Request For Enhancements being affected, and it is not.

  • Following are what we thought were the important criteria when selecting the right site to move social media to:

    • Cost, ideally it’d be free, but may be worth paying a few dollars a month for better service and facilities.

    • Good ecosystem. Easy to use, especially for publishing a podcast.

    • Good blog publishing tools that integrate well with writing systems, and good preview capabilities.

    • Longevity is important. You do not want to migrate again soon. Martin has 15 years of blogging!

    • Security. Our material is public, however tampering is the concern.

  • Moving the media:

    • Retrieval from old location. Martin wrote some Python to webscrape the blog. He built a list of posts and retrieved their HTML.

      • Graphics: The same Python retrieved the graphics referenced in the blog posts.

      • Podcasting needed show notes and keywords (for Search Engine Optimisation). Also audio and graphics.

      • Cross references: Martin’s blogs have references from one post to another, both absolute and relative. And our podcast shownotes have links too that will break.

    • Re-posting. A lot of HTML editing was required. Different posts using different authoring tools have different generated HTML, and had post-cross-references that needed refactoring.

      • Martin’s graphics needed uploading afresh, and how they are positioned on the page had to change.
  • Redirecting Audience:

    • We really don’t want to lose our listeners and readers!

    • Martin posted progress reports on Twitter where there would be a trace. His blog’s root URL had to change. Fortunately the old blogging site redirected to the new, but not to individual posts.

    • Our podcast subscribers will need to specify a new feed URL, as there is no possibility of not affecting subscriptions. Watch out for an announcement.

      • New feed will likely cause all episodes to be re-downloaded.

      • If you’re subscribing, once you re-subscribe to the new location, you should be fine for a long time. However, we don’t know how many we’ll lose. We don’t actually know how many people listen (e.g. on the web).

  • We try to turn such experiences into something useful.

Customer requirements

  • RFE 133491: “Write IEFC001I and IEFC002I information to SMF”

  • Abstract: At the point where IEFC001I and IEFC002I messages are produced, also write this information to SMF. The record should indicate if it was a cataloged procedure (the IEFC001I information) or an INCLUDE group, the member used, whether it came from a private or system library, and the dsname of that library, in addition to job identification information. Possibly the stepname should be included (names internal to a cataloged procedure are not needed).

  • Use Case: Organizations of long standing often have thousands of cataloged procedures. Often a large percentage of these are obsolete or never used but the mechanisms for discovering which ones should be archived or deleted do not exist as far as I can find. Being able to summarize SMF records to compare against member lists would allow us to clean up old cataloged procedures and/or INCLUDE members. This could also have security use if one suspects a malicious person had temporarily substituted a JCLLIB version of a proc.

  • Currently it is an Uncommitted candidate, moved from the JES to the BCP component for response.

  • The messages referenced were:

    • IEFC001I (PROCEDURE procname WAS EXPANDED USING text): The system found an EXEC statement for a procedure.

    • IEFC002I (INCLUDE GROUP group-name WAS EXPANDED USING text): The system found an INCLUDE statement to include a group of JCL statements.

  • Our thoughts:

    • Martin thought it could be useful in SMF 30. Already have some stuff about PROCs and PROC STEPs in SMF 30. Some of the information, particularly data set, is quite lengthy. So probably an optional section in the record, maybe repeating. Might need some SMFPRMxx parameter. It looks useful, probably not a new record, needs care in designing.

    • Marna likes it for two reasons: 1. Helpful in cleanup, and 2. Has security benefits.

Future conferences where we’ll be

On the blog

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below. So it goes…

Normal Service Resumed

I’m sat in a nice coffee shop in Wallingford, relaxing after an interesting few weeks.

Over the past few weeks I’ve migrated my blog over to WordPress. If you’re reading this you’ve followed me over so a big thank you.

I migrated 526 blog posts. This number frankly astonishes me, though it’s been almost 15 years so that’s only 30 something a year.

The topics have varied, which is why I stuck the commas in the title some time ago. In the migration I made it official.

In the meantime a few years ago my dear friend Marna Walle joined me in using the “Mainframe, Performance, Topics” name – for our podcast series.

The focus of the blog has not changed at all. It’s just the “social conditions” that have changed: I’m funding the blog – on WordPress. This gives me very little more latitude but does mean I can take it with me – into (eventual) retirement.

The process of writing still needs sorting out – but my experience of migration tells me that WordPress is going to be a good place to be. I certainly can confect HTML and upload pictures and even PDFs. But WordPress offers a wider ecosystem. For example, I’m writing this across Mac, iPad and iPhone using the very excellent Drafts app and I know Drafts can publish direct to WordPress. (Drafts’ automation might be handy in helping me write, actually.)

So expect even more

  • Mainframe whatever I want to talk about
  • Performance whatever I want to talk about
  • Topics whatever I want to talk about

You get the picture: It’s whatever I want to talk about. 🙂

So normal service resumed, then. 🙂