Mainframe Performance Topics Podcast Episode 21 “Fits and Starts”

(Originally posted 2018-12-08.)

We released this episode a few days ago. But I only got round to writing this post today, 5 days later. That’s because I (and probably half of our listenership 🙂 ) were involved in a rather urgent piece of customer business. There are some learning points from that one, but that’s for another day.

So it goes, indeed.

I’m continuing to experiment with recording techniques – to minimise noise (and hence distortion caused by heavy noise removal). Hopefully this one is better.

The Performance topic was from a real live, very recent, customer situation I had a lot of fun with.

The Topics topic we had a lot of fun making. It seems the Apple versus non-Apple thing we have going works well.

Nobody has mentioned chapter markers yet. If you get to see them let us know how they work for you. In this episode there’s an extra one – which some of you will appreciate… 🙂

Anyhow, enjoy the show!

Episode 21 “Fits and Starts”

Here are the show notes for Episode 21 “Fits and Starts”. The show is called this because we talk about fitness devices, and because the Performance topic features work that started arriving again after a one-minute hiatus.

Where We’ve Been Lately

Marna and Martin have been to GSE UK Annual Conference Nov 5-7 in Whittlebury, UK. Great conference!

Feedback

  • We’ve received feedback from a New Zealand listener requesting a GDPS topic. Good idea, and we’ll look into it!
  • Martin is still playing around with a gentler stereo effect, in response to two listener comments. We hope the audio is improving now that we’re editing with Ferrite.
  • DocBuddy 2.2.1 for iOS and Android is available. The version levels are the same on the two platforms, and yet the functional content appears to differ.

What’s New

  • z/OS V2.3 Enhancements
    • The z/OSMF Sysplex Management task is enhanced by the PTF for APAR PI99307 (PTF UI58355 on V2.3), so you can modify sysplex resources.
    • A new z/OSMF plug-in, the zERT Network Analyzer, lets you visually determine which z/OS TCP and Enterprise Extender traffic is or is not cryptographically protected. It’s available in z/OS V2.3 with APAR PH03137.

Mainframe: SMF Recording of APF Modifications

  • Post-IPL dynamic APF changes are reflected in SMF 90 Subtype 37
  • A lot of the function is in z/OS V2.2, with these fields in the SMF record:
    • Function: Add, Delete, DynFormat, StatFormat
    • Whether the update was via SETPROG, SET PROG, or CSVAPF
    • Parmlib member suffix for the SET PROG case
    • Data set name
    • Volser
    • Time of update (STCK)
    • Jobname
    • Command Scheduling Control Block (CSCB)’s CHKEY field
    • Console ID of issuer (-1 for CSVAPF)
    • Utoken of issuer
  • More in z/OS V2.3:
    • The RACF UTOKEN is stored in its “unencrypted format”
    • The UserID within the UTOKEN is at offset x'98' in the data
    • The console name is provided at offset x'A8'
    • PROGxx supports APFSMFALL:
      • When specified, the SMF record includes information about updates that are “already in the correct state”. The default preserves the initial behaviour of not placing “no change” cases in the SMF records. The record identifies this situation by a bit: SMF90T37_AlreadyAsNeeded – the x'01' bit in byte SMF90T37Flags (offset 1)
  • Triggered when APF changes dynamically post-IPL: PROGxx APF ADD … or APF DELETE …, and SETPROG APF,ADD,… or SETPROG APF,DELETE,…
  • Ensure you collect it, by including type 90 subtype 37 in SMFPRMxx (a parmlib sketch follows this list)
  • Presumably there’s not much overhead, as the record is only produced when changes happen (which is probably not often).
  • Auditors will probably want this
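
Here’s a minimal parmlib sketch. The member contents shown are illustrative – follow your own SMFPRMxx and PROGxx conventions:

/* SMFPRMxx: ensure type 90 (which carries subtype 37) is collected */
SYS(TYPE(30,70:79,89,90,101))

/* PROGxx (z/OS V2.3): also record the "no change" cases */
APFSMFALL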

Performance: An interesting Db2 DDF case

DDF stands for Distributed Data Facility. DDF is how you get to Db2 from outside the LPAR – and even from the same LPAR, if the Java application uses a Type 4 JDBC driver.

Central to Martin’s DDF work is some analysis code to process SMF 101 Db2 Accounting Trace records.

A customer complained their DDF application stopped dead one evening – for 1 minute. It was an application serviced by a 3-way data sharing group. The customer sent SMF 101 data from all 3 members for 3 hours around the stoppage, and for the same 3 hours the previous evening as a presumed “good behaviour” baseline.

Martin plotted application statistics at a one-second interval level – across both days. Lots of data points: 6 hours × 3,600 seconds per hour gives 21,600 of them.
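
As a sketch of that bucketing – not Martin’s actual code – the following assumes the 101s have already been flattened to CSV, one row per transaction, with the commit timestamp as a quoted hh:mm:ss.hh first field, and a CSVIN DD allocated:

/* REXX: count transaction endings per second - illustrative only */
"EXECIO * DISKR CSVIN (STEM row. FINIS"
count. = 0
seconds = ''
do i = 1 to row.0
  parse var row.i '"' endTime '"' .             /* first quoted field   */
  sec = left(endTime, 8)                        /* truncate to hh:mm:ss */
  if count.sec = 0 then seconds = seconds sec   /* remember new keys    */
  count.sec = count.sec + 1
end
do w = 1 to words(seconds)                      /* one CSV row / second */
  sec = word(seconds, w)
  say '"'sec'",'right(count.sec, 8)
end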

Excel was frustrating – as ever! (Martin thinks he’s getting better at this, but the swearing hasn’t diminished – and any progress is despite, rather than because of, Excel.)

He plotted the transaction ending rate for each member. It showed sloshing (workload moving to the lightly used system to the point of that system becoming too busy) but much more:

It showed a 40-second stoppage the evening they hadn’t complained about, which makes the 1-minute threshold interesting as a number.

But why did the app stop?

Theory number one was that the transactions elongated. But they weren’t elongated enough to support that explanation.

NOTE: SMF 101s are written at transaction ending / commit, not on an interval basis.

He looked more at the in-Db2 time components, and there was not much “Not Accounted For” time – which is typically CPU queuing.

Martin “zoomed in” to a much shorter time range. When transactions started again they were elongated, and that was due to the clustered arrivals while clearing the backlog.

The best theory is something external stopped transactions arriving.

Further, he thought there could have been many “near misses”, just short of the 1-minute mark. Martin speculates an external monitor alerted, or a standby process kicked in, based on a 1-minute threshold.

After transactions started coming again there were spikes in arrivals every minute. The speculation is this might be the middle tier doing something on a 1-minute basis: maybe retries of some sort?

The key learning point from this experience is that drilling down well below the RMF interval can help, and you can see sloshing and routing behaviours.

NOTE: SMF timestamps let you go down to 100th-of-a-second granularity, but that’s a lot of data points!

…and you wouldn’t get many transaction endings per data point, either. (100 transactions per second = 1 per 100th, so it would appear “lumpy”.)

Topics: Fitness Tracking

  • Why are we doing this? We’re nosy about data, wanted to get fitter, and wanted to lose weight.
  • Marna uses a Fitbit Charge 2.
    • Key features: sleep analysis, step counts, heart rate.
    • There is a Fitbit Charge 3 coming soon.
    • Fitbit app for Android: calculates floors, miles, calories, and sleep analysis, across timescales – day, month, overall, etc – and compares with your age bracket.
  • Martin uses an Apple Watch, and used to have a Fitbit.
    • He wanted the Watch for other reasons: notably for health, a few months ago – plus all of the above, except sleep tracking.
  • Marna gets employer incentives, to help with health care cost reduction.
    • However, IBM and the health insurance company have the data.
    • Fitbit links to a “Vitality” website, but it asked for permission first.
  • Martin has been successful: he hasn’t failed to close his Apple Watch rings in 3 months. This makes him obsessive.
    • Uses iOS Overcast podcast player to send podcast episodes to the watch.
    • Runs with just the watch and AirPods. Listening to podcasts keeps him going – whether running or walking.
    • He has lost a considerable amount of weight! Cardiac situation much better, lowered resting heart rate, faster recovery from exercise, and clothes fit better.
    • There is plenty of data, and some of it is alarming.
    • Once you start you want to keep going, and you could become obsessive.

Customer requirements

  • z/OSMF supplied ftp profiles need to be GDPR compliant
    • z/OSMF supplied ftp profiles can’t be modified to require passwords (per GDPR compliance) and can’t be removed. So they are now useless, and confusing.
    • Adoption of z/OSMF Problem Management is difficult enough with the additional confusion of ftp profiles that no longer work and can’t be modified or removed.
    • IBM update for requirement: We take this as the ability for a customer to remove the IBM supplied profiles that they don’t want to use, such as profiles that don’t require a userid and password. Martin thinks we take GDPR pretty seriously, so this sounds reasonable in that spirit.

Places we expect to be speaking at

  • Milan, 27/28 November at an IBM technical conference.

On the blog

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below. So it goes…

Appening 7 – Ferrite Audio Editor On iOS

(Originally posted 2018-11-10.)

I’ve moved from editing audio on my Mac with Audacity to editing it on my iPad with Ferrite. So I thought I’d give you my impression of Ferrite. And this is an early impression.

In some ways I’m as much interested in encouraging people to record as anything else. The more voices we hear, particularly Mainframe voices, the better. So, the lower the barrier to entry the better. Having said that, Audacity is free and Ferrite isn’t. But, if Ferrite is usable, the advantages of editing on a tablet or phone are obvious: You can do it anywhere, even at 35,000 feet.1

My use case is voice, specifically podcasting. So I’m not making any claims about recording music and editing it on a tablet.

One other thing I should mention at this point is I’ve got the best iOS editing environment: I have a 12.9 inch iPad Pro and an Apple Pencil.3

Chapter Markers Were A First Step

A while back we introduced chapter markers into our podcast episodes. The idea is you could skip to a specific spot in the podcast episode. A secondary effect is that parts of an episode would display on things like podcast clients with specific graphics. I’ve seen this happen on iPhones and on in-car displays. Apparently this is much less of a thing with Android than iOS, but I can’t help that.2

I regard chapter markers as a professional touch; I’m telling myself our podcast series is useful and in fact the listening numbers suggest that is so. So doing it reasonably professionally appeals to me. I’m not actually convinced it’s a good thing you can skip my “Performance” topic, or even the “Topics” topic. But never mind, I think chapter markers are good.

I mention chapter markers because that was the first thing that drew me to Ferrite, some months ago. I wasn’t able to add them in Audacity. So my editing workflow was to edit and assemble the pieces of an episode in Audacity and transfer it to Ferrite to inject the chapter markers. And then, of course, to transfer it back.

Ferrite makes this pretty easy:

  • I move the editing cursor – by dragging it – to the spot where a chapter should begin and pull up the chapter marker editing dialog. I add a chapter marker there.
  • As part of that I edit the chapter name. (Ferrite offers “Chapter 1” etc – and I want something like “Intro” instead.)
  • I pull in the chapter marker graphic from a folder in Photos. These are all based on the blue background to the podcast series graphic Marna created. With my limited artistic skills I drew a foreground in white against that blue background. I’m reasonably happy with the result.

So that was my first foray into Ferrite on iOS.

Taming Stereo Was The Second Big Step

Prior to Episode 20 I edited the podcast with a 100% left/right stereo separation. Some people didn’t like it. I like stereo but I was inclined to agree this level of stereo separation was a bit harsh. But it was cumbersome to reduce the stereoscope in Audacity. In Ferrite this is ridiculously easy: If I have a pair of tracks I can use the simple controls for each track to place it wherever I want on the stereoscope. These are simulations of rotating knobs. So I chose 30% left/right as a gentler effect. I would’ve preferred to be able to type in “+30” and “-30” or some such.

The effect works and people seem to like 30% better than 100%. It’s much less harsh.

And, if I wanted to, I could explore more stereo effects.

The question, though, was: Should I continue to edit on the Mac with Audacity, just using the iPad for taming stereo and adding chapter markers? Or should I move all the editing to the iPad? Either way requires me to move the recordings to the iPad at some stage – and back again.4

It all comes down to the editing experience.

The Editing Experience

I expected a learning curve – as Ferrite does things rather differently to how Audacity does them. So I set out to edit the three main topics in Episode 20 with Ferrite, learning as I went. That was the right thing to do as you really can’t learn a new tool like Ferrite without a real project to work on. As you might expect, the first topic to be edited – Performance – was slow and painstaking. But I got faster, less error prone, and slicker through these three topics.

I can’t claim Ferrite is faster than Audacity – yet. I don’t know if it will ever be. But all I really want is for it to be effective and quick enough. I don’t think the speed of the iPad is a significant factor. It’s much more about the user’s productivity.

I would also say that if you haven’t edited audio before you might find Ferrite easier than I have. I had to unlearn some habits I’d learnt in Audacity. The paradigms are slightly different.

Noise removal was actually easier with Ferrite: You just ask it to remove noise. Audacity wants to collect a noise sample – which you might not be able to furnish.

Using the Apple Pencil provided precision that a finger wouldn’t. To gain that level of accuracy with a finger you’d have to zoom in more than I was comfortable with.

One thing I learnt quite early on was to change the setting for what happens when you snip a piece of audio in two. By default it keeps both snippets selected. I changed the setting to deselect everything. That worked better for me.

A nice function was Strip Silence. Because Marna and I don’t tend to talk over each other this function created distinct snippets of her and me. These I could “punch out” to left and right – using the fragments created by Strip Silence. I suppose if we were prone to talking over each other we could punch out those snippets to a third track placed in the middle of the stereoscope. When we have guests we can do just that as well. I like putting guests in the middle.5

Rendering the final (MP3) audio is a little slower than it was on the Mac. This doesn’t matter very much as I’m not actively watching it.

Conclusion

I count the move to Ferrite on iOS a success. It means I can edit audio without having to pull out a Mac – which means I can do it in more places. I find Ferrite very usable, now I’ve got used to it. It also has some functions that weren’t available to me before: adding chapter markers (including graphics) and adjusting the stereo, for instance.

I haven’t explored effects like ducking or panning. These are built into Ferrite. I don’t know that they’re actually useful for us. But I might play with them. They seem to me the basis for some good audio gags – which is something I thought I wanted to do before we even got started with podcasting.

In cost terms, Ferrite isn’t free. Unlocking all the functions costs about 20 US dollars or pounds. I consider that a good investment, particularly as I’ve spent rather more on headphones and a microphone – which I now have to work on to get the audio quality up. To that effect I’ve bought another recording tool that works with Skype – in the hope it produces better audio. It’s called Piezo and is developed by Rogue Amoeba – who have a good reputation for such things.

I’m about to play with Ferrite’s Templates support – which might actually save a lot of time and introduce consistency. So I might reprise this, when I think I know what I’m doing.


  1. Actually, I haven’t done that yet. I think it’s feasible, though noise issues might defeat you. 

  2. To be fair, I’ve no evidence that it isn’t just me that can see the chapter marker graphics. 

  3. This isn’t quite the best case: 12.9” iPad Pros have been released since mine. And a newer Apple Pencil has been released. But my rig is the biggest screen and pretty much the most functional setup. Doing this without an Apple Pencil on a small phone would be a pain. 

  4. By the way, I find Airdrop a very convenient mechanism for moving audio between Mac and iPad. 

  5. Generally I want our podcast episode to feel like the conversation it is. Adding a guest should just add to that feeling. 

Three Early SMF 89 Results

(Originally posted 2018-10-31.)

Recently I wrote code to “flatten” SMF 89 records – in REXX.1 Now I want to share with you some of the insights I’ve acquired from early experiments with the data.2

There are three things I’d like to share with you that you might not realise you can get from SMF Type 89. But first a design standard I didn’t mention before.

CSV Files That Are Sortable

When processing data it’s useful to create a series of tables. And so it is with SMF. There are two things I’m aiming for:

  • Ability to pull into Excel – which suggests CSV format
  • Ease of processing with DFSORT or ICETOOL

It turns out these two aims are not incompatible. So here’s how you achieve both:

  • Pad all your fields so they are fixed width. This makes it easier for DFSORT to find the fields – as fixed width supports fixed positions.
  • Wrap character strings in double quotes.
  • Join the fields in a row with commas.

If you do those things you can process the data with a wide variety of methods – whether on z/OS or elsewhere. When I’m flattening SMF 89 I follow these rules. It’s not difficult in REXX, with the right() and left() functions being your friends.
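
For instance, building one such row might look like this – the field names and widths are invented for illustration:

/* REXX: one fixed-width, quoted, comma-separated row */
jobname = 'PAYROLL'
cpuSecs = 12.3
row = '"'left(jobname, 8)'",'right(format(cpuSecs, 6, 2), 9)
say row   /* fields line up in every row, so DFSORT positions hold */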

By the way, what I’ve just said doesn’t just apply to SMF; it’s relevant to any tabular data.

Three Insights

Here are three things I (and you) can see – without much difficulty – from SMF 89.3

zNALC Or Not zNALC?

When examining software economics, for example, it’s important to know if an LPAR is operating under zNALC rules. zNALC is a successor to NALC.

With NALC you had to have the LPAR follow a strict (IBM-specified) naming convention. With zNALC you don’t. Operationally it’s much easier.

In SMF 89 there is a flag for whether the LPAR is zNALC or not. Maybe this one isn’t particularly useful if it’s your systems we’re talking about. For consultants and people like me it’s a different matter, of course. But the standard homily applies: “Systems should be self documenting.” In this case they are.

Actually, in my test data sets, this is a little hard to verify – as none of them contain data from zNALC LPARs. With each new set of customer data I’ll run a query until I find a zNALC LPAR. Then I’ll incorporate the test into my formal code. I quite like the idea of being able to test things. 🙂

MQ Queue Managers And DB2 Subsystems

Believe it or not, you can list the MQ and DB2 subsystems by LPAR – using only SMF 89 Subtype 1 data. You don’t need to use SMF 30 – which I have been using up until now just for this purpose.4

The key here is to look for Usage Data Sections with Product Name “DB2” or “MQM MVS/ESA”. The Product Qualifier field yields the name, perhaps slightly encoded. Decoding turns out to be easy – as you’ll see in a minute.

The Product Version field tells you what release the subsystem is running at.

Here is an example:

DB2 - All at 11.01.00
=====================

ASYS 
    DBG1
    DBR1
    DBS1
    DSN
BSYS
    DSNF
    DSNH
    DSNJ
CSYS
    DBG2
    DBP2
    DBR2
    DBS2
DSYS
    (none)

MQ - All at V8 R0.0
===================

ASYS
    EDIP
    ODSP
    ODSH

This might not be pretty – and before it hits Production it will be much prettier – but it gives useful insight:

  • All the DB2 subsystems are V11 and all the MQ queue managers are V8.
  • MQ only runs on ASYS.
  • DSYS had neither DB2 nor MQ.
  • You might be able to spot a meaningful naming convention – but this is not the technique for discovering Data- or Queue-sharing groups.

If you were an SAP installation, for example, you might be glad of a report based on SMF 89 rather than SMF 30.

MQ Queue Manager CPU By Time Of Day

This is a more surprising result: you can see the CPU consumed in MQ by time of day. The following graph is from a real customer. I teased it on Twitter the other day – or rather a simplified version of it.

The graph is for a single MQ queue manager across a seven-day period.

I mentioned decoding the Product Qualifier field above. For queue manager MQ01 this field takes values including:

  • MQ01
  • MQ01CHIN
  • MQ01BATC
  • MQ01RRSB
  • MQ01CICS

These enable me to plot a graph such as the one above, using the TCB time in the Usage Data Section. I’ve omitted a series “Other”. In this data Other contains zero TCB time. I calculate Other by summing up all the MQ-related TCB time where the Product Qualifier begins with “MQ01” but isn’t in this list. (I can figure out what it is and create another series for it, obviously.)
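
As a sketch of that decoding – the variables qual and tcb are assumed to have been pulled from the flattened Usage Data Section already, and the bucket names are mine:

/* REXX: classify an MQ01 Product Qualifier, accumulate TCB time */
tcbBy. = 0                                /* initialise once */
/* ... then, for each Usage Data Section: ... */
select
  when qual = 'MQ01CHIN'      then bucket = 'CHIN'
  when qual = 'MQ01BATC'      then bucket = 'BATCH'
  when qual = 'MQ01RRSB'      then bucket = 'RRSB'
  when qual = 'MQ01CICS'      then bucket = 'CICS'
  when qual = 'MQ01'          then bucket = 'MQ01'
  when left(qual, 4) = 'MQ01' then bucket = 'OTHER'
  otherwise                        bucket = ''   /* not this queue manager */
end
if bucket <> '' then tcbBy.bucket = tcbBy.bucket + tcb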

So, to what the graph shows. (Or rather what I think it shows – and you might form your own view.)

  • There is a fairly regular daily pattern.
  • RRSB is more pronounced on Tuesday, 9th October.
  • Early on Wednesday, 10th October Batch is much more pronounced.
  • Later the same day CHIN becomes much bigger.

CHIN, by the way, is MQ traffic to and from outside the z/OS LPAR. This data (like SMF 30) doesn’t give us PUT and GET rates. Nor does it explain why the CHIN activity is unmatched by some other connection. But it is enough to alert you to the fact something happened, and to talk to the MQ experts in the installation.

Conclusion

I think these three insights are useful, if a little surprising. That encourages me to work further with the SMF 89 data. Maybe it encourages you, too, to look at this record. And they weren’t hard to get. There’s more to come, I suspect.


  1. If you want to know how to process SMF in REXX see Rexx’Em

  2. There might well be more later, as I gain experience with the data. 

  3. And I’m actually not fussy about how you process the data; I just want you to know you can glean this stuff. 

  4. I very much want customers to send me (and learn how to process for themselves) SMF 30; There’s tons of insight to be gained from it. 

Mainframe Performance Topics Podcast Episode 20 “Two Is One And One Is None”

(Originally posted 2018-10-30.)

Episode 20 marked a departure in editing terms…

Previously I’d been adding chapter markers in Ferrite on iOS and editing the sound on my Mac using Audacity.

However, as you’ll see in the Feedback section, some listeners wanted less aggressive (or even no) stereo. This is much easier to achieve in Ferrite so I basically transferred the whole sound editing job there.

Initially I’ve set the stereo to 30% rather than 100% and I hope people like it – as it’s much gentler.

As the editing progressed – with there being 5 basic sections in the podcast – it got easier and I now prefer Ferrite to Audacity. So this will continue.

I have to admit there are a few sound glitches, but those I think are microphone and capture issues, not editing ones. For the next episode I’m using a different sound recording program and we’ll see if we can make it better.

Anyhow, I hope you enjoy this episode.

Episode 20 “Two Is One And One Is None”

Here are the show notes for Episode 20 “Two Is One And One Is None”. The show is called this because our Topics topic is trying to figure out how to archive family photos and videos.

Where We’ve Been Lately

Marna’s been to Z Tech U in Hollywood, Florida. Great conference!

Feedback

We’ve received feedback that the stereo was too aggressive. This episode has it narrowed, using a different audio tool.

What’s New

  • Dynamic IODF activation for Standalone CFs (previously a POR was needed; now activation can be driven remotely from another CEC)
    • z14 GA2 is needed for both driving and target CECs. It’s a PR/SM-based solution, which needs one more POR on the target to establish a firmware-defined Master Control Services LPAR
    • Need some z/OS APARs: HCM-IO25603, IOS-OA53952, HCD-OA54912, IOCP-OA55404
  • Asynchronous Cache Structure XI also comes with z14 GA2, and Db2 support PTFs are expected.

Mainframe: zFS Shrink, Only in z/OS V2.3

  • Top customer requirement: a system command for reducing the size of a zFS file system. Not to be confused with compressing files within a file system.
  • You specify a target size with the -size option, which gives the final size in KB; it gets rounded up to an 8K boundary
    • The -noai option means “no active increase”. A file system being accessed might need additional blocks beyond the shrink size given, so by default it can be “actively increased”.
      • If the file system needs to be actively increased back to its original size, the shrink command ends in error.
      • If you specified -noai and an active increase is needed, the command also ends in error.
  • During a shrink, a scan occurs to determine which blocks must move – the longest part of the operation. Blocks are moved from the portion to be released into the portion that is to remain.
    • After the blocks are moved, the space is released – during which the file system is briefly quiesced. Applications do not need to be stopped when doing a shrink.
    • It is recommended not to shrink during peak times when you need the files.
  • To know how big to make the file system after the shrink, use zfsadm fsinfo for the aggregate size (in KB), free size (in 8K blocks), and 1K fragments.
    • Even better hint! Use df -kP to see 1K blocks used and available; those sizes are consistent for shrink. (An example command sequence follows this list.)
  • Nothing right now to help with suggesting a size.
    • Don’t pick a final size that is too small, so you don’t keep having to grow it.
  • Reminder: use aggrgrow and zfsadm grow to increase the size of the file system.
  • Monitor with SMF 92 subtype 50 for both grow and shrink events. Subtype 59 gives the number of I/Os and the rate, but it might be cut too often, so use it wisely.
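
As an illustrative command sequence from the z/OS UNIX shell – the aggregate name, path, and target size are made up:

zfsadm fsinfo -aggregate OMVS.PROD.ZFS    # current size and free space
df -kP /u/prod                            # 1K blocks used and available
zfsadm shrink -aggregate OMVS.PROD.ZFS -size 700000          # target in KB
zfsadm shrink -aggregate OMVS.PROD.ZFS -size 700000 -noai    # forbid active increase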

Performance: CPENABLE and HiperDispatch

Each I/O ends with an interrupt, which needs to be handled by a processor – and handled in a timely way. When an I/O interrupt is handled, the handling processor issues a Test Pending Interrupt (TPI) instruction. If this test returns “true” the processor handles the (detected) pending interrupt. If “false” then the processor has no more interrupts to handle – for the time being.

If many of these TPI tests result in “true” it suggests a queue has built up – which might indicate the need to temporarily enable more processors to handle interrupts.

There’s a trade-off between timeliness and processor efficiency. The CPENABLE parameter’s values manage this trade-off. There are two values: if the TPI percentage is below the first, a processor is disabled from handling I/O interrupts; if it’s above the second, a processor is enabled to handle them.

Without HiperDispatch, access to CPU is smeared across the online processors, as the LPAR’s weight is evenly spread across its logical processors. Without HiperDispatch it is recommended that CPENABLE be set to 0,0 – which allows all processors to handle interrupts.

With HiperDispatch, however, access to CPU is corralled into fewer processors – with the weight not being spread evenly across the LPAR’s logical processors: A Vertical High (VH) logical processor has a “full engine” weight; the remaining weight is spread across 0, 1 or 2 Vertical Medium (VM) logical processors; Vertical Low (VL) logical processors have zero weight.

With HiperDispatch it is recommended to set CPENABLE to 10,30. This corrals interrupt handling into fewer processors much of the time. It’s in the spirit of the weight distribution.
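
For reference, the setting lives in IEAOPTxx. A minimal sketch:

CPENABLE=(10,30)   /* below 10% TPI: disable a processor; above 30%: enable one */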

In terms of instrumentation, SMF Type 70 is useful:

  • It documents LPAR Setup, including HiperDispatch State, logical Engines and weights, and Verticals / Horizontals.
  • It counts Interrupts & TPIs, enabling you to calculate the TPI Percentage, down to the logical processor level.

In a recent customer data sample there were a couple of different types of LPARs:

  • Some with HiperDispatch enabled – with CPENABLE of 10,30 – where the logical processors were enabled from 0 upwards to handle I/O interrupts.
  • Some without HiperDispatch enabled – with CPENABLE of 0,30 – where the logical processors were enabled from the highest downwards to handle I/O interrupts.
  • It probably would’ve shown smearing of I/O interrupt handling across all the logical processors with a CPENABLE value of 0,0 – but this is conjecture.
  • It showed LPARs with tiny weights: 0.1 engines’ worth of weight on a 2-, 3-, 4- or 5-way, which is not going to be that timely in servicing I/O interrupts.

Overall this topic shows I/O interrupt enablement is worthy of consideration – to get the timeliness vs efficiency balance right, particularly in the HiperDispatch era. The instrumentation really helps, too.

Topics: Archived Family Information

  • Talking about personal and family information: photos, audio, and video only. Not writings.
  • Backing up: Two Is One And One Is None. Need multiple backup techniques at different physical locations. Multiple cloud locations?
  • Modern media vs legacy: when to adopt new technology and how to convert?
  • Google Photos Retrieval
  • Apple Photos app
  • Finder search inside files is used to find outline elements from previous shows.
  • Some serious questions:
    • “What happens when I’m dead?” Facebook, for one, has a protocol. Google has a protocol.
    • Ideally write a will and tell family how to handle material.
    • Will anyone else care about the material?
    • “What about Big Brother?”: Everyone has something to hide; it’s about trusting the service provider
    • “Who owns the material?” and do you care?

Customer requirements

  • z/OSMF Workflow “Deep” Search 126042
    • Within a z/OSMF Workflow instance, allow the user to look for an argument within the workflow itself. Right now, the search function only finds strings that are in the titles of the steps, and not in the “tabs” on the insides (such as general, instructions, notes, …).
    • What about find and replace? Not sure you want to replace something that is in the instance. Workflow designing isn’t in this scope.

Places we expect to be speaking at

  • Whittlebury Hall, Nov 5-7, 2018, for GSE UK. Huge attendance already registered!

On the blog

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below. So it goes…

Invoking Keyboard Maestro From PopClip

(Originally posted 2018-10-13.)

About 18 months ago I built some automation on Mac that I found rather handy. Since I mentioned it on various forums people have wanted it – or at least to know how I built it.

Given its extensibility as a method, it seemed more appropriate to dedicate this post to how to build.

An Example

Here’s an example of this automation in action.

Suppose you are typing text in an editor and you want to uppercase a portion of it. You would select the text with the cursor and up would pop a menu:

In the above the white on a black background is the pop up that PopClip offers you. Some of the items on the menu are standard and some are from among the many that others have built.

One in particular is the first asterisk (‘*’) – because I’m too lazy or unskilled to create an icon – which is the PopClip extension I built.

Click on the asterisk and you get:

Under the banner ‘Popclip Bridge Macro Group’ you see a whole palette of macros you can choose from.

If you choose ‘Uppercase’ you would get:

The result of the text transformation is typed in over the selected text.

Now, Uppercase is built in to PopClip. (It’s the ‘AB’ icon.) But that’s just a simple example. You could do anything you please with the text.

How Was This Built?

The first thing to say is that you could build some of this without Keyboard Maestro – though the palette of actions in the second graphic wouldn’t be possible.

The automation consists of three pieces:

  1. A PopClip extension that invokes a Keyboard Maestro macro.
  2. This Keyboard Maestro macro that pops up a palette, which enables you to select another Keyboard Maestro macro.
  3. The Keyboard Maestro macros that can be invoked from the palette.

I’m going to describe how you build all three.

The PopClip Extension

A PopClip Extension is a zip file, containing at least two other files. Its extension is ‘popclipext’. Let me show you how simple it is.

There’s a simple XML file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Actions</key>
    <array>
        <dict>
            <key>AppleScript File</key>
            <string>KMBridge.applescript</string>

            <key>Title</key>
            <string>* </string>

            <key>After</key>
            <string>paste-result</string>
        </dict>
    </array>
    <key>Extension Identifier</key>
    <string>com.KMBridge</string>

    <key>Extension Name</key>
    <string>Keyboard Maestro Bridge</string>
</dict>
</plist>

This is called Config.plist.

The other file is even simpler, being a basic AppleScript script:

tell application "Keyboard Maestro Engine"
    do script "PopclipKeyboardMaestroBridge" with parameter "{popclip text}"
end tell

Its name (pointed to in an obvious way in the XML) is KMBridge.applescript.

Note {popclip text} is substituted for by the selected text when the PopClip menu was popped up.

You can create these two files with any plain text editor, add them to a zip file and rename that zip file to have the extension ‘popclipext’.

Which is why I haven’t attempted to furnish a single file for you to download.

In fact I want you to feel free to edit the files: Go ahead and copy the above two from this page and paste them into your editor as Config.plist and KMBridge.applescript. Then zip them up into a file with extension ‘popclipext’. If you double click on this it will – possibly after a warning – install the extension into PopClip.
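
If you prefer Terminal for the packaging step, something like this should work on a Mac – file names as above, and assuming PopClip is installed:

zip KMBridge.zip Config.plist KMBridge.applescript
mv KMBridge.zip KMBridge.popclipext
open KMBridge.popclipext    # PopClip offers to install the extension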

More information on this topic is available here. And, of course, you can get PopClip extensions to do much more, using various scripting languages, for example Ruby or Shell Script.

Keyboard Maestro Macro To Show A Palette

In the AppleScript above a Keyboard Maestro macro ‘PopclipKeyboardMaestroBridge’ is invoked and the selected text passed to it.

You could run any macro you like at this point. Here’s one to pop up a palette and pass the text along:

It’s very simple, only having two actions:

  1. Save the text passed in into a variable ‘TEXT’
  2. Show a palette with a bunch of macros listed.

The second one produces the palette of actions you saw earlier. For a macro to appear it has to be placed in the macro group ‘PopClip Bridge Macro Group’. (Keyboard Maestro users generally organise their macros into groups.)

By the way, you can cancel out of the palette by pressing the ‘Esc’ key.

Example Macro For The Palette

Let me show you the ‘Uppercase’ macro, as a simple example. It also comprises two steps:

Here are the steps:

  1. Convert the input text to upper case and store back in the variable ‘TEXT’.
  2. Type the contents of the variable ‘TEXT’.

All very simple.

In fact I have more complex examples, including:

  • Appending the text to a specific ‘scratchpad.md’ file in Dropbox. This uses a template that embeds the date and time as a Markdown heading.
  • Turning a set of lines into a (javascript) array of strings.

Conclusion

Hopefully the above has served two purposes:

  • Provided a useful tutorial in how to build a simple PopClip extension that invokes some Keyboard Maestro functions
  • Given people unfamiliar with PopClip and Keyboard Maestro a sample of what each of them can do, and how they can usefully work together.

In a sense this post follows on from Automatic For The Person.

Screencast 13 – Topology Today

(Originally posted 2018-10-10.)

I can’t say I’ve learnt much about screencasting since I published Screencast 12 – Get WLM Set Up Right For DB2 but it’s certainly been a while. I have, of course, learnt quite a bit about other stuff.

So I just released Screencast 13 – Topology Today.

It pulls together a couple of use cases for the SMF 30 Usage Data Section. This section, as I’m sure I’ve said many times, gives lots of insight into how address spaces connect together. I’m using the term “Topology” as I really can’t think of a better one.

After some preamble I give two examples:

  1. CICS into DB2
  2. Batch into MQ (and also DB2)

It’s just under 10 minutes long – which is where each of the past three screencasts has been. If you were impatient and skipped past the introductory slides these two examples would make rather less sense.

Production Notes

This time, in Camtasia, I learnt how to fade to black. It took a few goes to get it right – and it basically involves dragging an effects “tile” over the section you want to fade over and then stretching the tile to control the fade out time.

Thankfully with this screencast I didn’t have the same issues with huffing and puffing in the audio: By ramping up my exercise over the past couple of months that issue has gone away, I’m pleased to say.

Mainframe Performance Topics Podcast Episode 19 “You’ve Lost That Syncing Feeling”

(Originally posted 2018-10-06.)

This summer has seen the most travel I think I’ve ever done, and I would imagine Marna feels much the same.

We like to record together – which has made the logistics difficult. We actually met in the summer but thought recording in the same room would be difficult. We’ve stayed with each other a number of times but don’t want to record in our houses because the sound quality would be poor: Wooden floors produce way too much echo.

A lot of water has flowed under the bridge in this time, of course. Which has yielded quite a few blog posts on both our parts. And one new feature…

The “What’s New” subtopic gives us a chance to point out announcements and things like APARs. It’s not meant to be encyclopaedic but just contain a few new things that took our fancy. It’s, as always, an experiment. It might move in the running order, we might can it, we might morph it. I doubt, though, that it will become a topic in its own right.

So, we’re back. We hope you enjoy this episode. And we think we have a good chance of recording more in the near future.

Here are the show notes.

Episode 19 “You’ve lost that syncing feeling”

Here are the show notes for Episode 19 “You’ve lost that syncing feeling”. The show is called this because our Topics topic is about losing the Xmarks URL synchronization tool.

Where we’ve been

This episode had a very long hiatus – more than 5 months – so we’ve been to many places and on vacation/holiday. Sorry we’ve taken so long to get back together to record! It is not through lack of trying!

Feedback

For once we have some follow up: With iOS 12 the built-in Podcasts app now supports MP3 chapter markers. As many listeners on iOS will be using this app they might see chapters (and the nice graphics) show up. We still haven’t found an Android podcast app with correctly working chapter markers, though.

What’s New (in APARs)

  • OA56011: OSPROTECT Flag in RMF SMF 70

  • PH00582: New function to export a workflow in printable format, as a text file.

Mainframe

Our “Mainframe” topic discusses moving from V4 to V5 zFS, prompted by a comment from a user who had a very positive experience.

  • You need to be entirely at z/OS V2.1 or later to use it – which now applies to many installations, since z/OS V2.1 is now end of service.

  • The old version for zFS was V4. V5 gives you directories using a tree structure, for faster searching than a naive linear search approach.

  • This topic was prompted by a customer comment.

    • XCF reduction: IOEZFS group 99%, SYSGRS group 80%

    • Significant CPU reduction in address spaces: XCFAS and GRS

  • To take advantage of this, you need to convert from the old V4 format to V5. V5 file systems can have both V4 and V5 directories; however, V5 directories must be in a V5 file system.

  • You can convert: offline with IOEFSUTL, online with zfsadm convert, with IOEFSPRM CONVERTTOV5=ON, or on MOUNT – you choose.

    • Steps are: ensure you are fully at V2.1, set IOEPRMxx format_aggrversion=5 for new file systems, set IOEPRMxx change_aggrversion_on_mount=on for a fast, safe file system switch to V5, and determine if you want IOEPRMxx CONVERTTOV5=ON for a one-time switch on directory access. Delay is expected!

    • If you cannot tolerate the one-time delay, use MOUNT CONVERTTOV5 selectively where there’s most benefit: large directories and the most heavily used ones (F ZFS,QUERY,FILESETS)

      • Use zfsadm fileinfo to see a directory’s version, and zfsadm aggrinfo -long to look at all the file systems (illustrative commands follow this list).
    • New RMF zFS reports in 2.2 with helpful pop-ups
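
Some illustrative commands – the path is invented:

zfsadm fileinfo -path /u/prod/bigdir    # shows the directory's version
zfsadm aggrinfo -long                   # surveys all the file systems
zfsadm convert -path /u/prod/bigdir     # converts one directory online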

Performance

Our Performance topic is a survey of licence-related instrumentation. Most shops are very conscious of software costs. The key evidence is licence agreement documents and instrumentation. Martin discusses the instrumentation portion.

  • SMF can help you:

    • System level SMF 70 gives you the rolling 4 Hour Average CPU, Defined Capacity and Group Capacity information, and high-level CPU.

    • System level SMF 89 gives you more detailed information on licencing: Product Usage – both names and CPU.

    • Service Class level SMF 72-3 gives you Service Units (SUs) consumed on zIIP, on general purpose CP, and zIIP-Eligible on general purpose CP.

      • Mobile SUs is one set of fields and total SUs another

      • Resource consumption in general

    • Address Space level SMF 30 gives you a Usage Data Section for topology and for CPU in a product sometimes. (An example of topology is which CICS regions connect to which DB2 subsystem.)

  • Container-Based Pricing introduces new metrics in 70-1, 89, and 72-3; Tenant Classes and Tenant Resource Groups explicitly document this.

  • Closing thoughts:

    • Licensing is getting more complex, and it’s difficult to understand it all fluently.
    • It would be wise to become familiar with the instrumentation.
    • And it would be wise to understand aspects of software licensing that cause impact in your installation.

Topics

Our podcast “Topics” topic is about Marna losing a handy and simple URL sync tool, XMarks. XMarks used to let you sync bookmarks between browsers, with other cool capabilities. It was discontinued on May 1, 2018.

  • XMarks was a browser plug-in: log on, sync, and your bookmarks were there! It supported multiple profiles, such as work and home.

  • Here are some possible replacements:

    • NetVibes: RSS feeds and dashboards seem to be its strengths.

    • Google Bookmarks syncs URLs; Marna hasn’t really used it, and it’s still only for Firefox and Chrome. GMarks will connect to Google servers. Some sites need IE.

      • Modern browsers can fake the User Agent to look like IE
    • Diigo, with a toolbar: we’ve not used it. Pricing plans, sharing URLs. A bit too heavyweight.

    • The promising one is called Raindrop, for Chrome, Firefox, and Safari. Marna has just started trying it out. It works between Windows and Android!

    • Safari / Mobile Safari use iCloud syncing and work out of the box. But if you share an Apple ID, watch out!

    • Input from listeners??

Where We’ll Be

Martin will be renewing his passport, so limited travel for him.

Marna will be at a couple of conferences:

We welcome feedback!

On The Blog

Martin and Marna have both had several blog posts due to our long hiatus from the podcast.

Martin has:

Marna has these:

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.

MQ Batch CPU

(Originally posted 2018-09-23.)

This post is an update to Batch DB2 And MQ The Easy Way, which I wrote back in 2016.

There’s nothing wrong with what I wrote then – but there’s something extra I want to impart now.

In that post I said you can answer the question “What are the big CPU DB2 jobs accessing this DB2 subsystem?” If you substitute “MQ” for “DB2” you can answer the question “What are the big CPU MQ jobs accessing this MQ subsystem?” For MQ you can always go further – and that is what this post is all about. I’ll answer the question “Why can’t you go further with DB2?” in a minute. But first things first.

A Further Question For MQ

The question I realised you can ask is “How much MQ CPU is there in this job step?” It’s subtly (and I think usefully) different from the question “How much CPU is there in this MQ job step?” We’ll see why this might matter in a minute.

In the SMF 30 Usage Data Section – as described in Batch DB2 And MQ The Easy Way – you can see which MQ subsystem the job step attaches to. But here’s the extra bit: You can also see the CPU in MQ the step uses.

If you subtract the MQ CPU from the Step CPU you, obviously, get the non-MQ CPU. So you can tell if a step is primarily MQ or not. This is helpful in working out where the real action is in a job step that accesses MQ. What you can’t tell this way is how much elapsed time is MQ related. For that you need the SMF 116 records. And these are rare.
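
As a trivial sketch of that arithmetic – the variable names are invented, standing for fields from the flattened SMF 30 record:

/* REXX: how MQ-ish is this step? */
nonMqTcb = stepTcb - mqUsageTcb                  /* the non-MQ CPU */
mqPct = format(100 * mqUsageTcb / stepTcb, , 1)
say 'Step is' mqPct'% MQ CPU,' nonMqTcb 'TCB seconds elsewhere'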

I revisited this because we were doing a batch study when we spotted that one of the steps accessed MQ. There was a Usage Data Section that pointed – with the Product Qualifier – at a specific MQ subsystem.

It got my interest to the point I revisited our code and added some more columns to the table, including “Not Usage TCB Time”. Hence my comment above. I analysed this customer’s batch jobs accessing MQ. For some jobs – including the ones we spotted – the MQ CPU is over 90% of the step’s CPU. So it’s clear the step is essentially an MQ step. For others there is a considerable amount of non-MQ CPU, so this step is doing something more intensive than just putting messages on a queue or taking them off a queue.

I think this is a useful insight – whether a step is really “just MQ” or not.

Why Can’t We Do This For DB2?

DB2 has a “NO89” switch at the subsystem level. The impact of this is that DB2 won’t record TCB time in SMF 30 – if the “NO89” option is taken. To be clear, you still get step TCB and SRB times, just not DB2 TCB and SRB times in the Usage Data Section.

I have yet to see a customer that has enabled DB2 to record its CPU in the Usage Data Section. So I never see DB2 TCB in the Usage Data Section.

Of course, if you want to see DB2 TCB at a step level, you can get it in the DB2 Accounting Trace record (SMF 101). In fact you can get more detail – at the Package / Program level – if you turn on Package Accounting in DB2.

Conclusion

It’s nice to be able to look inside a step, particularly one where the elapsed time is hard to explain. For MQ you can definitely do it – at least for the CPU component – with SMF 30 and the MQ Usage Data Section. And the key thing is you can tell an intensively-MQ step from a not-so-MQish one. Another step forwards, if you’ll pardon the pun. 🙂

Appening 6 – Rescript NodeJS Environment

(Originally posted 2018-09-09.)

I have another flight where my Inbox is surprisingly close to empty so I’m writing about a nice iOS app that should be of interest to both mainframers and non-mainframers. This app is Rescript by Matteo Villa and it is a javascript programming environment for iPad and iPhone. It, most particularly, allows you to run Node.js on iOS. You can develop and test your scripts anywhere, including at 37,000 feet with no network.

The current version includes Node.js runtime version 8.6.0. I’m expecting the developer will update the app as new releases of Node become available.

The free version of the app allows you to run lots of Node apps. With an in-app purchase you can, most notably, use additional UI and Share modules. These talk to iOS specifically, so your scripts can interact with the device and other apps. The free app is ad-supported, so that might be another reason to pay a small amount of money to the developer.

What Is Node?

Node is a server-side javascript runtime and set of modules. This means that, unlike browser-side javascript, it runs on a server. Which in this case could even be a phone. 🙂 If you’re used to client/browser-side javascript it shouldn’t be much of a stretch to embrace Node.

The programming model, as distinct from the javascript language, is quite different from what you’d experience if you were developing for the browser. But that isn’t a difficult transition.

Here’s a short example that implements a simple web server, from a sample included with Rescript:

const http = require('http')

let server = http.createServer((req, res) => {
    res.writeHead(200, {'Content-Type': 'text/html'});

    let content = `
    <html>
    <head>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    </head>
    <body>
    <h1>Hello from Rescript!</h1>
    </body>
    </html>
    `
    res.write(content);
    res.end();
})

server.listen(8080,() => {
    console.log('Node.js server is ready, listening on http://localhost:8080');
})

Without taking you through it line by line, there being many Node tutorials around, I would note the vast majority of this code is a character string full of the HTML that would actually be served.

In almost any environment, you could point your browser at localhost (or 127.0.0.1) and the HTML would be served.

So far I’ve installed Node on Linux, on my iPhone, my iPad, my MacBook Pro, and my Raspberry Pi. It’s simple to install and lots of modules are available for it.

The javascript implementation is based on Google’s V8 interpreter which was designed to be fast. I won’t get into a “browser / javascript engine wars” discussion but all sensible modern javascript interpreters are much faster than they once were. For most applications, javascript is plenty fast enough.

It’s also worth pointing out that Node can act as a web client, for example using the node-fetch module. It can also handle server-side things like files.
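
For example, a minimal fetch-based client – a sketch, assuming the node-fetch module is available in your environment – could call the server from the earlier sample:

const fetch = require('node-fetch')

fetch('http://localhost:8080')
    .then(res => res.text())            // read the response body as text
    .then(body => console.log(body))
    .catch(err => console.error(err))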

You can also create your own modules and, through Node Package Manager (NPM), distribute them.

Rescript

Rescript is a very new app at the time of writing, but it shows lots of promise.

The app comprises three panes:

  • On the left a document picker.
  • In the middle the editing pane.
  • On the right a combination Console / Output / Help pane.

While the built-in examples are a little sparse, there are plenty of samples and tutorials on the web. Plus there is a book “Node.js Up And Running” published by O’Reilly. I have it open side by side with Rescript on my iPad. So, that’s a nice use of Split Screen – a key modern iPad feature.

However, this reliance on the web is a little problematic at 37,000 feet (or wherever you find yourself without Internet). I mention this because in the Settings dialog are links to a number of open source libraries included in Rescript. But these links are to the web. Also the help is a mixture of built-in information and links to the web. For example the javascript and Node.js links are to the web and don’t work in an aeroplane (for most of us). It would be nice to see a bit more information – really for beginners – locally cached. I realise this is a nit but it might be important to a beginner’s experience. Actually the built-in information is nice. And the modules provided by Rescript are indeed available within the app, without network connectivity.

I didn’t find a way to see the source of the Node libraries included with Rescript. I don’t think this is a problem, however, as you’re supposed to code to the APIs, rather than examining the source code. But if you were intent on contributing to these libraries you might well feel different.

I wouldn’t say that Rescript is a full function IDE but it does have syntax colouring, which helps catch errors. But, frankly, most of my javascript development has been done with nothing more than a text editor. Some people like code folding, for example hiding the body of a javascript function. Personally I don’t tend to use it, perhaps as my slabs of code tend to be quite small, but I think it would be a nice addition.

Another thing – available in a full IDE – is code completion based on the Node modules brought in by require(). I think this is a tall order, though.

So, my experience of Rescript is good, despite the nits I mentioned above.

While I haven’t developed any User-Interface (UI) based apps I’ve run the samples and nosed through the code. Unless you point your browser at localhost you’re not going to see HTML and might well be building scripts that use the UI module. You might also use some of the iOS-specific modules.

One I did try was the iOS share sheet capability – which is only available in the paid version.

I mentioned NPM above. I would like to see some NPM capability, if that’s possible.

The help mentions a number of keyboard shortcuts. If you have an iPad you might not have an external keyboard so these wouldn’t help you. More than 99% of the time I use my iPad with an external keyboard so I appreciate these.

The help also tells me I can invoke a Rescript script from a share sheet in another app. This is nice to see. What I haven’t seen is any support for URL schemes. I’m thinking of x-callback-url in particular. I hope this comes as it would allow Rescript to be invoked from other apps as part of some sophisticated automation.

Another nit is that I didn’t find a way to resize the side panels. Particularly for the help, I would have liked that.

Integration With Other iOS Apps – Via The Share Sheet

As I said, I experimented with the share module.

First, I tried Rescript as a “client” – where a Node script pops up the iOS share sheet. Here is a very simple example:

let share=require('share')

share.shareText('Blah\nblah\nblah')

If you run this very short script it does indeed pop up the share sheet, and the text in the shareText call is indeed passed in.

Now here’s Rescript as a “server” – where a Node script accepts text.

let share=require('share')

console.log(share.getText())

As with most share sheet extensions you have to enable them in the share sheet – but that’s merely flipping a switch.

In Workflow (being reborn as Shortcuts in iOS 12) you nominate a workflow to be usable in the share sheet within the Workflow app – when you edit the workflow. In contrast you nominate Rescript scripts to be usable in the share sheet when you invoke the Rescript extension. But it remembers what you did so it’s a one time setup.

Anyway, when you open the iOS share sheet from another app with some text selected you can pass it to a specific Rescript script. With the console.log function it writes the output to a pop up window. There are two things you can do with that output, if you select some or all of the text:

  • You can copy the selected text to the iOS clipboard.
  • You can invoke the share sheet again with the selected text.

The net of this is that Rescript scripts can participate nicely in workflows involving other apps and the iOS share sheet.

By the way, the clipboard module allows you to read from and write to the clipboard.

Drafts And Rescript

In Appening 5 – Drafts On iOS I talked about another javascript environment on iOS. There are some key differences. Which you want depends on what you’re trying to do. Personally I have both and use them for distinctly different things:

  • Drafts is all about capturing text and processing it, with javascript supporting that through automation.
  • Rescript is all about Node.js and could also be used for automation. If your interest is primarily Node then you’d probably want Rescript.

I would say that javascript is a language that is really worth learning now, with many environments for running it. An increasing number are on iOS. And if you can get to both the relevant book and the development environment being on screen at the same time – through Split Screen – that’s a nice position to be in. Even if you are at 37,000 feet.

Conclusion

I’m very happy with this app but think, as with all new apps, it could get even better. As to whether to pay for the app, I think the £3.99 I spent was good value for money – particularly as the modules included with the paid version add significant function. But, if your interest in Node is very light, you might be happy with the free version.

In any case, I hope Matteo (@mttvll on Twitter) keeps developing Rescript. My suggested areas of improvement are, I think, nits – and I’m just bowled over to be able to run Node on my iPhone and especially my iPad. And to have the nice “iOS integration” extension modules he’s built.

Day One Support; Who Needs It?

(Originally posted 2018-07-28.)

It’s the Time Of The Season1 for thinking about Day One support. Not for z/OS, or DB2, or CICS, or anything mainframe-related. But for iOS, MacOS and their kin.

Before you switch off – if you’re an Android user2 – you can consider the Apple bit an analogue. This post will be light on technical detail, and heavier on developers’ approaches. It might even stimulate some discussion about z/OS.

So, it’s a month or so since Apple announced new iOS, MacOS, etc releases at their Worldwide Developer Conference (WWDC) and developers (and foolish / brave non-developers) have run betas. Several betas, in fact.

Part of the point is to prepare their products for General Availability3. And developers’ approaches to that is what this post is about.

So, you can see this might have some relevance to z/OS and its vendor ecosystem.

Approaches To Day One

As I look around at the many iOS, WatchOS, and MacOS apps I have I see a number of approaches from the various developers. Here are a few examples:

  • I’m beta’ing releases of Drafts (mentioned in Appening 5 – Drafts On iOS). Already the sole developer is experimentally introducing exploitation of the new Siri Shortcuts feature.
  • I use a podcast client called Overcast. The sole developer – Marco Arment – is rebuilding his WatchOS app to use the new watchOS 5 audio playback capabilities.
  • I’ve yet to hear much from the Omni Group but they indicated they were clearing the decks for whatever Apple threw at them – which is a good sign.
  • I’m hearing rumblings that some of the MacOS apps I depend on – some IBM, some not – won’t Day One support the new Mojave MacOS release.
  • There are plenty of apps on my iOS devices that already won’t run – because the developer never updated them to run on iOS 11. The technical point here is they must be 64-Bit. I consider these – 1 year in – as “abandonware” though I wish Apple did a better job of enabling me to dispose of them.

Through these various approaches and stances runs a theme: I’ve emphasized the words “sole” and “Group” for a reason.

  • The sole developers, Greg and Marco, are moving fast and experimenting with exploitation.
  • Casting no aspersions whatsoever, I see Omni Group saying little. But I am jolly sure they are working on stuff for the apps I rely on: OmniFocus and OmniGraffle (and the others of theirs I’m not so reliant on). I’m confident for two reasons: Attitude and Track Record.

In Enterprise we might well appreciate the “more planned” approach Omni Group are taking. In the consumer space not quite so much.

But, back in 2014, I wrote in And Just Complain:

Mobile users, though, have no real understanding of how the service is provided and don’t really care (and nor should they.) So I think they can be characterised as much less patient and much less tolerant of service issues, and that’s fine.

So, tolerance of errors and issues is in limited supply everywhere.

What Do You Need?

There’s obviously a lot of Marketing value to being able to claim “Day One” support – for some markets. So, from a developer’s point of view, something close to Day One support is important. In “real world” terms there’s another point for developers: They really don’t want to field “iOS 124 broke your app” issues.

For the vendor – Apple or IBM – it’s great to have customers able to adopt their new release on Day One. In reality, though, many customers will want to skip early life and the “pioneer cost” issues5 that brings.

A few days ago (as I write this) was World Emoji Day. The relevance of World Emoji Day is surprisingly high: Each year on this day the Emoji standards body releases the new emoji for the year6. Apple traditionally supports these on the first point release after a new version of iOS or MacOS. That’s also where the first batch of “settling in” fixes are delivered for an iOS version. It might seem superficial but getting people to install this point release is a lot easier if there are new emoji to play with.7

And what about us, hapless punters that we are? 🙂 Some of us are insanely 🙂 keen to install the new operating system level on the day of release. I’m not consistently 🙂 one of those. But pretty close.

When I review the myriad material coming out of WWDC, I take note of the new things8. But I consider what the operating system vendor ships as being “one shoe dropping”. I’m really looking forward to the other shoe dropping: What the app vendors ship.

But, of course, z/OS is different, or rather its customers are. Very few will install a new z/OS release at GA, for example. But they would like to know that all their products – whether vendor or IBM – work well before they need them to.

Exploitation might well be a different thing; My suspicion is most customers are less worried about exploitation. Though, if you ask them, quite a few customers will reply “I’m really looking forward to x”.

There is a whole interesting side conversation to be had about what drives customers to upgrade, quite apart from exploitation. Maybe another time. But, even if you’re just upgrading because you have to, it’s still important to know stuff continues to work. If you are a “Last Day Upgrader” (to coin a phrase) the chances are that vendor and IBM products will have introduced toleration.

But I still get excited at reading announcement material and learning about new functions.


  1. Cultural reference. :–)  ↩

  2. Or a reluctant Apple user. :–)  ↩

  3. That, of course, is an IBM term; I’m not sure what Apple call it.  ↩

  4. Or z/OS 2.3, for that matter.  ↩

  5. Whether bugs, or usability, or Performance, or whatever.  ↩

  6. Over 150 this year, taking the total to almost 3,000.  ↩

  7. Imagine I send you an emoji of a platypus playing billiards :–) and all you get is some “dunno, mate” indicator like a question mark. In theory you’d want to upgrade just to get my message in its full, ahem, glory. :–) 9  ↩

  8. And there are a lot of nice things this time round.  ↩

  9. Emoji rendering design and evolution is actually quite an interesting topic in its own right.  ↩