Anatomy Of A Great iOS App

(Originally posted 2019-01-06.)

This post isn’t about what a great iOS app would functionally do. It’s about what would turn a useful app into a great one.

There are obvious personal biases here:

  • Automation is important to me.
  • I have most of the Apple ecosystem – but no HomePod speakers (yet).
  • I really want good quality apps – and I am willing and able to pay for them.

So these thoughts are obviously coloured by these biases.

I’d like to spark discussion among power users. For the rest of us, some insight into what the best iOS apps do might prove useful.

I’ve divided the features into three categories:

  • Good
  • Better
  • Best

I wouldn’t take these categories too seriously. They are in fact varying degrees of “stretch objective”.

Finally all of these items are feasible – as multiple apps have already done them – but some might not be relevant for a given app1.

Good

  • iCloud syncing – so data can be shared between devices.
  • URL support that is deep enough to reach specific bits of the application – so sophisticated automation can be built.
  • iPad Split Screen / Slideover support – to make it pleasant to use alongside other apps.
  • Siri Shortcuts support that is meaningful – again for automation, but also for voice control.

Better

  • Files access – so I can get at the app’s data from multiple apps.
  • Dropbox access – which speaks for itself.
  • x-callback-url support – for calls from one app to another. (Really sophisticated automation has been built this way.)
  • Programmatic automation support – whether Javascript or Python.
  • Well-chosen Siri Shortcuts support – as opposed to basic.
  • Cross-platform syncing – so I can start on e.g. an iPhone and finish on a Mac.

Best

  • Box access – a less prevalent need than Dropbox.
  • TextExpander support – which can save a lot of typing and ensure consistency.
  • Workflow constructors via e.g. Drag and Drop
  • Drag and Drop support – which frankly I haven’t really got into.

And Another Thing

Everything so far has been about the app itself. But there’s more to it than just what the code does.

I’ve been fortunate to be involved with lots of apps where I get to beta test – through TestFlight. So I’m conscious of developers’ attitudes. I like to see a number of things:

  • Frequent updates, even if small. Even if only to correct issues or support new hardware / software.
  • Beta testing through TestFlight.
  • Creative licencing schemes.
  • A vibrant user community.

If you’ve read this far you’re probably part of a vibrant user community anyway. 🙂

And as I finish this post I realise this is a bit of a follow-on to Day One Support; Who Needs It?.

Anyhow, I’m interested in what others think turns a good app into a great one – at least from the perspective I’ve shown in this post.

(This post was written in Drafts on iOS, this paragraph added using the (in beta) counterpart on Mac OS, and the HTML created in Sublime Text. At least one of the attributes in this post is thus demonstrated: iCloud syncing between Drafts versions.)


  1. There is no accounting for ingenuity so some of these that don’t seem useful to me might be just what somebody else really wants. 

Automation On Tap

(Originally posted 2019-01-01.)

While some beer was tasted over the vacation period, this post is about a different kind of tap.

During my Xmas and New Year holiday I’ve been experimenting with ways of kicking off automation that don’t involve talking to a device, or tapping on it, or typing anything. Specifically what you can do by tapping an iOS device on something, or waving it near something.

Most of what I’m talking about here is indeed iOS, but I bet there are similar things possible with Android. So some of this post is “go hunt for how to do it” and some of it is “here is what I did”.

There are two technologies in particular I experimented with: QR codes and NFC tags.

Neither is an Apple technology. Hence my comment that Android users might still get ideas.

I experimented with both but it was quite late in the vacation that some NFC tags appeared, so I’ll talk about QR codes first.

But first some motivation (perhaps): There are quite a few repetitive things I do. For example:

  • When I get in the car I always switch the phone to Overcast, my podcast client of choice. This is fiddly on a phone, particularly in the near-dark.
  • I often want to dictate a quick thought in Drafts, or add a task to my to-do list manager (Omnifocus). I want the minimum amount of friction getting from thought to capture.

These are cases where just tapping on something is going to be quicker, less fiddly, and less error prone. Or at least that was the idea. And anything that reduces the friction or error rate should make me more likely to use it.

Plus, I just wanted to play with some technology away from the “day job”.

Application-Specific URLs

For the rest of this post to make sense I need to tell you what application-specific URLs are.

Consider this URL: omnifocus:///new

Whereas most people are familiar with URLs beginning with http:// and https://, it’s perfectly legitimate for a URL to begin with a different protocol or scheme. In iOS an application can register a protocol handler. In URL terms the protocol name is the bit before the ://. In the above example, the OmniFocus app registers a protocol handler so that it handles anything with a scheme of omnifocus.

But what does an app do when it handles an application-specific URL? It should parse the rest of the URL, including the path and any query string. So, by confecting a URL and having the app handle it you can be specific with what you want the receiving app to do. And hence automate stuff. (To the extent the app supports such URLs.)
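To make that concrete, here’s a minimal sketch in Python of confecting such a URL. The omnifocus example is the one above; the Drafts “create” action and “text” parameter are illustrative, so check each app’s URL scheme documentation for what it actually accepts.

from urllib.parse import quote

# Confect an application-specific URL. The scheme (e.g. "drafts5") is what
# the app registers a handler for; the path and query string are parsed by
# the receiving app. Action and parameter names here are illustrative.
def app_url(scheme, action, **params):
    query = "&".join(f"{k}={quote(v, safe='')}" for k, v in params.items())
    return f"{scheme}:///{action}" + (f"?{query}" if query else "")

print(app_url("omnifocus", "new"))                     # omnifocus:///new
print(app_url("drafts5", "create", text="A thought"))  # drafts5:///create?text=A%20thought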

To use the URL you can:

  • Open it with a web browser – such as Safari.
  • Open it with one of the many automation apps that know how to open a URL.

Whatever you open it with, the opener doesn’t need to know anything about the app the URL invokes. However, some apps take part in a more elaborate protocol built on this called x-callback-url. This protocol enables apps to communicate with each other (bidirectionally) – if they support it. x-callback-url is described here.
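As a sketch of the shape of an x-callback-url (the action, parameter, and callback URLs below are illustrative – each participating app documents what it actually supports):

from urllib.parse import quote

# The target app performs the action, then opens the x-success or x-error
# URL depending on the outcome - which is how two apps hold a conversation.
def x_callback(scheme, action, params):
    query = "&".join(f"{k}={quote(v, safe='')}" for k, v in params.items())
    return f"{scheme}://x-callback-url/{action}?{query}"

print(x_callback("drafts5", "create", {
    "text": "Captured thought",
    "x-success": "launch://",   # opened if the action succeeds
    "x-error": "launch://",     # opened if it fails
}))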

The net of this is that if you have a trigger that furnishes the right URL it can automate apps on the device1 – one or more in a chain. The rest of this post is about doing just that.

QR Codes

A QR code is a kind of two-dimensional bar code – and can be displayed or printed without exotic equipment. It can be read using a device with a camera and decoding software. These requirements aren’t really strenuous with modern phones and tablets.

My idea was to print a sheet of QR codes that I could stick to the wall. Here is an example one:

Stephen Millard kindly wrote a Shortcuts action to create this from a list of items. Each item consists of two elements:

  • The printed name. e.g. “New Draft”
  • The URL to invoke to run the action. e.g. “drafts5:///new”

(His code confects an HTML table and converts it into a PDF. I then printed that.)
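If you wanted to generate such a sheet off-device, here’s a minimal sketch in Python, assuming the third-party qrcode package (pip install qrcode[pil]). The names and URLs are the examples above.

import qrcode  # third-party: pip install qrcode[pil]

# One QR code image per actionable URL; print them, or lay them out in a
# grid as Stephen's Shortcuts action does with an HTML table.
actions = [
    ("New Draft", "drafts5:///new"),
    ("New OmniFocus Task", "omnifocus:///new"),
]

for name, url in actions:
    qrcode.make(url).save(name.replace(" ", "_") + ".png")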

One thing to note about this is that – if you want a hard copy – any change necessitates printing another sheet. While it’s possible you might display such a grid on e.g. an iPad I would think that a rare case.

You can get Stephen’s sample action from here. You will need to edit the first step for your own actions. And you will need Apple’s Shortcuts app to run it. (It’s free, runs only on iOS, and should be regarded as a standard app in iOS 12.)

The best QR Code reader I’ve found – in terms of being able to invoke a wide range of actionable URLs – is Qrafter Pro. And opening URLs is the key point, as I’ve already said.

I got this to work nicely, but I’m not sticking with it. So I won’t be buying a laminator.

NFC Tags

An NFC tag is a very thin piece of electronics – that can be read by placing an NFC reader within 4cm of it. In fact NFC is at the heart of contactless payment systems and its use is very similar. You need an NFC-enabled device to read it – which the latest iPhones are.

Just recently Contrast updated their Launch Center Pro app to include background NFC reading – as a trigger for automation.

I’ve had Launch Center Pro (LCP) for a number of years – and it’s one of the best ways of confecting “actionable URLs” (as Stephen called them).

What’s new is being able to read NFC tags in the background. This means you tap on the tag and it launches an action without actually having to open the LCP app first. (But you do have to unlock your phone.)

But these are specially encoded tags, which cost about £1 each. Here’s one:

As you can see, I haven’t peeled it off the backing paper – as I’m experimenting with precise positioning. It’s actually the one in my home office, and I have others scattered around the house and one in my car2.

They’re very thin and a strip of 5 came in an ordinary letter-sized envelope in the mail from the USA.

Tapping on the office one yields – after tapping on a notification3 – the following menu:

Tapping on one of the items in the menu kicks off the action. They’re all simple actions at this point – and all ones LCP knows directly how to kick off. But they could be the beginning of a complex set of automation. For example, there might be one to set me up for writing a blog post, or for starting my day4.

Unlike printing a QR code grid, I can edit this menu any time I want. Indeed I’ve added actions to each of the 4 NFC tags I’ve deployed so far.

There’s a cautionary tale worth noting here. You see the Sonos item towards the bottom of the menu? All I can do with the Sonos app is open it. I wanted to be able to select which Sonos speaker to play through – but the Sonos app has not been enabled for that. The lesson is you can only automate what the app lets you automate.

Conclusion

So it’s been interesting to experiment with two technologies that allow you to wave a phone over a QR code or tap on an NFC tag. (In case you didn’t get it, the “Automation On Tap” title refers to tapping on an NFC tag.)

I prefer the NFC implementation to the QR code one – though the latter is available to many more people. I do have some feature requests for Contrast, when it comes to Launch Center Pro. A few of them are:

  • The ability to clone the action list associated with a tag – onto another tag.
  • The ability to include a list of actions as a single item in another list.
  • Being able to resequence a list of actions. (Perhaps I already can but I couldn’t figure out how to.)
  • Cascading menus of actions – which I have figured out how to confect in both Shortcuts and Drafts.
  • Sharing a list of actions would be nice.

It’ll be interesting to see if Contrast bite on these – now they have some real users. And these WIBNI5s are an indication of enjoyment and value, rather than frustration.

I would also expect similar things to exist for Android – at the least, background NFC reading should be available on high end phones. I’d be extremely disappointed to find that QR code reading and creation apps weren’t available on Android.

And, as you might expect, it’s been good clean fun playing with the technology.

Now, back to work – which is also good clean fun. 🙂


  1. Or, in the case of something like IFTTT, off the device.

  2. As I’m due to change my car soon it’s floating around the driver console and will probably get lost. On the next car I’ll probably find somewhere permanent to stick it.

  3. This additional step is necessary, according to the iOS security model.

  4. Yes, I know there’s one called “Start The day”. Right now it just kicks off an action – via Shortcuts – to open Omnifocus at today’s task list. It also runs on a timer in the morning – but again the iOS security model requires me to tap on a notification to run it. Lots of us wish there were an iOS equivalent of crontab that didn’t require extra interactions.

  5. Wouldn’t It Be Nice If …

DDF TCB Revisited

(Originally posted 2018-12-11.)

I seem to spend a lot of time working with DB2 DDF, and it’s no wonder: Many modern applications accessing DB2 are built using it, whether through JDBC 1 or some other connective software.

This post is a by-product of a serious customer situation with DDF Performance, which I don’t intend to go into. As I say, it’s a by-product, not the main event.

Before I continue, I have a small correction to make, which is highly relevant to this post: In DB2 DDF Transaction Rates Without Tears I labeled the authorisation unit of work as an SRB. In fact it’s a TCB.

A Brief Recap

SQL processing via DDF is done under an Enclave SRB. But before it starts, and at thread termination, code is run under a non-Enclave TCB. I’ve bolded these terms as they’re important for the discussion. In DB2 DDF Transaction Rates Without Tears I talked about classifying these enclaves, using WLM. This post, however, isn’t about that. I’m more interested in the non-Enclave TCB CPU time.

And, throughout this post, I’m referring specifically to the DB2 DIST address space. Hence the use of address space instrumentation.

zIIP Eligibility

We’ll return to this later in this post but it’s worthwhile talking about zIIP eligibility now.

It’s only the enclave portion of CPU that has any zIIP eligibility. The non-enclave CPU portion has no zIIP eligibility.

In this post, and with the examples I’m using, there is no zIIP-on-GCP. That simplifies things – and happens to be the truth in these cases.

CPU Numbers

To be able to continue this discussion we need to talk about CPU time. So let’s do so. Our source will be SMF 30 Interval records (subtypes 2 and 3). Specifically:

  • SMF30CPT is the Preemptible Class CPU
  • SMF30ENC is Independent Enclave CPU
  • SMF30CPS is Non-Preemptible CPU (SRB)
  • SMF30_ENCLAVE_TIME_ON_ZIIP – Independent Enclave CPU on zIIP
  • SMF30_TIME_ON_ZIIP – CPU on zIIP

This, I’m sure you’ll recognise, is quite a sophisticated set of numbers. But it’s only a subset of those in SMF 30. And, for less exotic address spaces, most of this sophistication isn’t needed. “Less exotic” includes batch jobs.
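To show how these fields might combine, here’s a sketch with made-up numbers. It assumes SMF30ENC is included within SMF30CPT – so the non-enclave TCB time is the difference – and, as in these cases, no zIIP-eligible work running on a GCP. Treat the field relationships as assumptions to check against your own data.

# Made-up numbers, in CPU seconds, for one DB2 DIST address space interval.
smf30cpt = 520.0      # SMF30CPT: preemptible class CPU on GCP
smf30enc = 400.0      # SMF30ENC: independent enclave CPU on GCP
smf30cps = 15.0       # SMF30CPS: non-preemptible (SRB) CPU
enc_on_ziip = 550.0   # SMF30_ENCLAVE_TIME_ON_ZIIP

non_enclave_tcb = smf30cpt - smf30enc    # assumed: the authorisation TCB time
enclave_total = smf30enc + enc_on_ziip   # enclave CPU across GCP and zIIP

# With no zIIP-on-GCP, eligibility is simply what actually ran on zIIP:
ziip_eligibility = 100.0 * enc_on_ziip / enclave_total
print(f"Non-enclave TCB: {non_enclave_tcb:.0f}s; "
      f"zIIP eligibility: {ziip_eligibility:.0f}% of enclave CPU")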

A Tale Of Two Customers

The “meat” of this blog post is how these numbers play out in practice. The following graph incorporates data from two customers I know well, each with multiple DB2 datasharing groups.

I’ve summarised the numbers over an eight hour shift. I’m primarily looking at two things:

  • Percentage zIIP eligibility
  • Distribution of CPU between the various types of work units

These customers show quite diverse DDF behaviours; Each datasharing group is quite different, even within an individual customer.

I’ve, as you might expect and hope, obfuscated the names somewhat:

  • Client A has two datasharing groups: DBAx and DBBx – with x denoting the individual members
  • Client B has three datasharing groups: DBGx, DBSx, and DBPx

I’m showing percentages of the total, rather than absolute values. I think this tells the story better.

zIIP Eligibility

In both these customers, and they weren’t particularly chosen for this, the processors are z13 7xx models – so the zIIP speed is the same as the GCP speed. (This post isn’t about normalisation, or it’d be a good deal longer.)

It’s only the enclave portion of the CPU that has any eligibility. And for these, zIIP eligibility is on an individual thread basis: A thread is either eligible or it isn’t.

The line in the graph – or rather the round blobs – shows zIIP eligibility hovering just under 60% – across all the DB2 subsystems across both customers. (One of the things I like to do – in my SMF 101 DDF Analysis Code – is to count the records with no zIIP eligibility.)

As the folklore suggests you should get around 60%, this all seems normal.

CPU Distribution

This is the bit that got me going in the first place: I’ve always asserted that the non-enclave TCB time should be small.2

But in the case of one of these datasharing groups that wasn’t the case: Looking at the DBBx members in the graph you can see that their non-enclave TCB time on a GCP is around 10% of their entire CPU.

You could argue that this datasharing group is out of line with the others. I don’t want to make that argument; There’s some variability between members and datasharing groups as a whole.

An obvious question is: “What causes the variation?”. Most of the code that’s run on the TCB before the transaction hops on the enclave SRB is authorisation.

One clue is that DBAx members have long-running threads that persist across many DB2 commits 3. DBBx members have shorter-running threads that don’t. So we might expect the latter to go through authorisation more often. It could also be that each pass through authorisation is more expensive. Further on that point, it could be that each pass through authorisation is more expensive relative to the SQL processing.

At this point I’m speculating. But I would want to know why one set of subsystems behaved differently to others.

What Do I Conclude?

First, not all DDF environments behave the same, even within a customer.

Second, SMF 30 is a valuable tool for understanding something about a DB2 subsystem’s DDF work. It’s worth profiling in the way I have here, along with what I described in DB2 DDF Transaction Rates Without Tears.

And there might be value in drilling in to the data, below the shift level. Perhaps next time I have a DDF situation I will.

And, out of the corner of my eye, I see a customer with significantly less than 60% of Enclave CPU being zIIP-eligible. Interesting…

As Robert Catterall points out in this blog post, non-native stored procedures could cause this. I’m just wondering where the CPU gets clocked back to – in SMF 30 terms.


  1. Java connecting – using Dynamic SQL.  ↩

  2. A corollary of this is that one can usually blithely say “60% of DIST should be zIIP eligible” rather than the more qualified “60% of DIST Enclave CPU should be zIIP eligible”.  ↩

  3. A DB2 commit ends a transaction but not necessarily the connection.  ↩

Mainframe Performance Topics Podcast Episode 21 "Fits and Starts"

(Originally posted 2018-12-08.)

We released this episode a few days ago. But I only got round to writing this post today, 5 days later. That’s because I (and probably half of our listenership 🙂 ) were involved in a rather urgent piece of customer business. There are some learning points from that one, but that’s for another day.

So it goes, indeed.

I’m continuing to experiment with recording techniques – to minimise noise (and hence distortion caused by heavy noise removal). Hopefully this one is better.

The Performance topic was from a real live, very recent, customer situation I had a lot of fun with.

The Topics topic we had a lot of fun making. It seems the Apple versus non-Apple thing we have going works well.

Nobody has mentioned chapter markers yet. If you get to see them let us know how they work for you. In this episode there’s an extra one – which some of you will appreciate… 🙂

Anyhow, enjoy the show!

Episode 21 “Fits and Starts”

Here are the show notes for Episode 21 “Fits and Starts”. The show is called this because we talk about fitness devices, and because the Performance topic features work submitted again after a one minute hiatus.

Where We’ve Been Lately

Marna and Martin have been to GSE UK Annual Conference Nov 5-7 in Whittlebury, UK. Great conference!

Feedback

  • We’ve received feedback from a New Zealand listener requesting a GDPS topic. Good idea, and we’ll look into it!
  • Martin is still playing around with a gentler stereo effect, in response to two listener comments. We hope the audio is improving, with the move to Ferrite.
  • DocBuddy 2.2.1 for iOS and Android is available. The levels are the same between the two platforms, and yet it appears the functional content is different.

What’s New

  • z/OS V2.3 Enhancements
    • The z/OSMF Sysplex Management task is enhanced by the PTF for APAR PI99307 which is PTF UI58355 on V2.3 so you can modify sysplex resources.
    • A new z/OSMF plug-in called the zERT Network Analyzer is now available to visually determine which z/OS TCP and Enterprise Extender traffic is or is not cryptographically protected in z/OS V2.3 with APAR PH03137.

Mainframe: SMF Recording of APF Modifications

  • Post-IPL dynamic APF changes are reflected in SMF 90 Subtype 37
  • A lot of the function is in z/OS V2.2, with these fields in the SMF record:
    • Function: Add, Delete, DynFormat, StatFormat
    • Whether the update was via SETPROG, SET PROG, or CSVAPF
    • Parmlib member suffix for the SET PROG case
    • Data set name
    • Volser
    • Time of update (STCK)
    • Jobname
    • Command Scheduling Control Block (CSCB)’s CHKEY field
    • Console ID of issuer (-1 for CSVAPF)
    • Utoken of issuer
  • More in z/OS V2.3:
    • The RACF UTOKEN is stored in its “unencrypted format”
    • The UserID within the UTOKEN is at offset x‘98’ in the data
    • The console name is provided at offset x’A8’
    • PROGxx supports APFSMFALL:
      • When specified, the SMF record includes information about updates that are “already in the correct state”. Defaults to initial behavior of not placing “no change” cases in the SMF records. The record identifies this situation by a bit: SMF90T37_AlreadyAsNeeded – the x‘01’ bit in byte SMF90T37Flags (offset 1)
  • Triggers when APF changes dynamically post-IPL: PROGxx APF ADD … or APF DELETE …; SETPROG APF,ADD,… or SETPROG APF,DELETE,…
  • Ensure you collect the data by specifying type 90 subtype 37 in SMFPRMxx
  • Presumably there’s not much overhead, as the record is only produced when changes happen (which is probably not often).
  • Auditors will probably want this

Performance: An interesting Db2 DDF case

DDF stands for Distributed Data Facility. DDF is how you get to Db2 from outside the LPAR – and also, via a Type 4 JDBC driver, when the Java application is local.

Central to Martin’s DDF work is some analysis code to process SMF 101 DB2 Accounting Trace.

A customer complained their DDF application stopped dead one evening – for 1 minute. It was an application serviced by a 3-way Datasharing group. The customer sent SMF 101 data from all 3 members for 3 hours around the stoppage, and for 3 hours the previous evening as a presumed “good behaviour” baseline.

Martin plotted application statistics at a one second interval level – across both days. That’s a lot of data points: 6 hours at 3,600 seconds per hour gives 21,600 of them.

Excel was frustrating – as ever! (Martin thinks he’s getting better at this but the swearing hasn’t diminished, and it’s despite, rather than because of, Excel.)
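(An alternative to Excel for the bucketing, sketched in Python with pandas – assuming the SMF 101 records have been flattened to a CSV with one row per transaction ending; the column names are illustrative.)

import pandas as pd

# One row per transaction ending, as flattened from the SMF 101 records.
df = pd.read_csv("smf101.csv", parse_dates=["end_time"])

# Transaction ending rate: count of endings per second, per member.
per_second = (df.set_index("end_time")
                .groupby("member")
                .resample("1s")
                .size())
print(per_second.head())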

He plotted the transaction ending rate for each member. It showed sloshing (workload moving to a lightly used system, to the point of that system becoming too busy) but much more:

It showed a 40-second stoppage on the evening they hadn’t complained about, making the 1 minute threshold interesting as a number.

But why did the app stop?

Theory number one is that the transactions elongated. They weren’t elongated enough to support that explanation.

NOTE: SMF 101s are written at transaction ending / commit, not on an interval basis.

He looked more at the in-DB2 time components, and there was not much “Not Accounted For” time – which is typically CPU queuing.

Martin “zoomed in” to a much shorter time range. When transactions started again they were elongated, and that was due to the clustered arrivals while clearing the backlog.

The best theory is something external stopped transactions arriving.

Further, he thought there could have been “near misses” many times – just short of the 1 minute mark. Martin speculates an external monitor alerted, or a standby process kicked in, based on a 1-minute threshold.

After transactions started coming again there were spikes in transactions arriving every minute. The speculation is this might be the middle tier doing something on a 1 minute basis: Maybe retries of some sort?

The key learning point from this experience is that drilling down well below the RMF interval can help, and you can see sloshing and routing behaviours.

NOTE: SMF timestamps give you down to 100th-of-a-second granularity, but that’s a lot of data points!

…and you wouldn’t get many transaction endings per data point, either. (100 transactions per second = 1 per 100th, so it would appear “lumpy”.)

Topics: Fitness Tracking

  • Why are we doing this? Nosy about data, wanted to get fitter, and wanted to lose weight.
  • Marna uses a Fitbit Charge 2.
    • Key features: sleep analysis, step counts, heart rate.
    • There is a Fitbit Charge 3 coming soon.
    • Fitbit app for Android: calculates floors, miles, calories, sleep analysis, across timescales – day, month, overall, etc – and compares with your age bracket.
  • Martin uses an Apple Watch, and used to have a Fitbit.
    • He wanted the Watch for other reasons: for health, a few months ago – and for all of the above, except sleep tracking.
  • Marna gets employer incentives, to help with health care cost reduction.
    • However, IBM and the health insurance company have the data.
    • Fitbit links to a “Vitality” website, but asks for permission.
  • Martin has been successful, as he hasn’t failed to close his rings for 3 months, for the Apple Watch. This makes him obsessive.
    • Uses iOS Overcast podcast player to send podcast episodes to the watch.
    • Runs with just the watch and AirPods. Listening to podcasts keeps him going – whether running or walking.
    • He has lost a considerable amount of weight! Cardiac situation much better, lowered resting heart rate, faster recovery from exercise, and clothes fit better.
    • There is plenty of data, and some of it is alarming.
    • Once you start you want to keep going, and you could become obsessive.

Customer requirements

  • z/OSMF supplied ftp profiles need to be GDPR compliant
    • z/OSMF supplied ftp profiles can’t be modified to require passwords (per GDPR compliance) and can’t be removed – leaving them useless and confusing.
    • Adoption of z/OSMF Problem Management is difficult enough with the additional confusion of ftp profiles that no longer work and can’t be modified or removed.
    • IBM update for requirement: We take this as the ability for a customer to remove the IBM supplied profiles that they don’t want to use such as profiles that don’t require a userid and password. Martin thinks that we take GDPR pretty seriously, so this sounds reasonable in that spirit.

Places we expect to be speaking at

  • Milan, 27/28 November at an IBM technical conference.

On the blog

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below. So it goes…

Appening 7 – Ferrite Audio Editor On iOS

(Originally posted 2018-11-10.)

I’ve moved from editing audio on my Mac with Audacity to editing it on my iPad with Ferrite. So I thought I’d give you my impression of Ferrite. And this is an early impression.

In some ways I’m as much interested in encouraging people to record as anything else. The more voices we hear, particularly Mainframe voices, the better. So, the lower the barrier to entry the better. Having said that, Audacity is free and Ferrite isn’t. But, if Ferrite is usable, the advantages of editing on a tablet or phone are obvious: You can do it anywhere, even at 35,000 feet.1

My use case is voice, specifically podcasting. So I’m not making any claims about recording music and editing it on a tablet.

One other thing I should mention at this point is I’ve got the best iOS editing environment: I have a 12.9 inch iPad Pro and an Apple Pencil.3

Chapter Markers Were A First Step

A while back we introduced chapter markers into our podcast episodes. The idea is you could skip to a specific spot in the podcast episode. A secondary effect is that parts of an episode display with specific graphics in podcast clients. I’ve seen this happen on iPhones and on in-car displays. Apparently this is much less of a thing with Android than iOS, but I can’t help that.2

I regard chapter markers as a professional touch; I’m telling myself our podcast series is useful and in fact the listening numbers suggest that is so. So doing it reasonably professionally appeals to me. I’m not actually convinced it’s a good thing you can skip my “Performance” topic, or even the “Topics” topic. But never mind, I think chapter markers are good.

I mention chapter markers because that was the first thing that drew me to Ferrite, some months ago. I wasn’t able to add them in Audacity. So my editing workflow was to edit and assemble the pieces of an episode in Audacity and transfer it to Ferrite to inject the chapter markers. And then, of course, to transfer it back.

Ferrite makes this pretty easy:

  • I move the editing cursor – by dragging it – to the spot where a chapter should begin and pull up the chapter marker editing dialog. I add a chapter marker there.
  • As part of that I edit the chapter name. (Ferrite offers “Chapter 1” etc – and I want something like “Intro” instead.)
  • I pull in the chapter marker graphic from a folder in Photos. These are all based on the blue background to the podcast series graphic Marna created. With my limited artistic skills I drew a foreground in white against that blue background. I’m reasonably happy with the result.

So that was my first foray into Ferrite on iOS.

Taming Stereo Was The Second Big Step

Prior to Episode 20 I edited the podcast with a 100% left/right stereo separation. Some people didn’t like it. I like stereo but I was inclined to agree this level of stereo separation was a bit harsh. But it was cumbersome to reduce the separation in Audacity. In Ferrite this is ridiculously easy: If I have a pair of tracks I can use the simple controls for each track to place it wherever I want on the stereoscope. These are simulations of rotating knobs. So I chose 30% left/right as a gentler effect. I would’ve preferred to be able to type in “+30” and “-30” or some such.

The effect works and people seem to like 30% better than 100%. It’s much less harsh.

And, if I wanted to, I could explore more stereo effects.

The question, though, was: Should I continue to edit on the Mac with Audacity, just using the iPad for taming stereo and adding chapter markers? Or should I move all the editing to the iPad? Either one requires me to move the recordings at some stage to the iPad and back again.4

It all comes down to the editing experience.

The Editing Experience

I expected a learning curve – as Ferrite does things rather differently to how Audacity does them. So I set out to edit the three main topics in Episode 20 with Ferrite, learning as I went. That was the right thing to do as you really can’t learn a new tool like Ferrite without a real project to work on. As you might expect, the first topic to be edited – Performance – was slow and painstaking. But I got faster, less error prone, and slicker through these three topics.

I can’t claim Ferrite is faster than Audacity – yet. I don’t know if it will ever be. But all I really want is for it to be effective and quick enough. I don’t think the speed of the iPad is a significant factor. It’s much more about the user’s productivity.

I would also say that if you haven’t edited audio before you might find Ferrite easier than I have. I had to unlearn some habits I’d learnt in Audacity. The paradigms are slightly different.

Noise removal was actually easier with Ferrite: You just ask it to remove noise. Audacity wants to collect a noise sample – which you might not be able to furnish.

Using the Apple Pencil provided precision that a finger wouldn’t. To gain that level of accuracy you’d have to zoom in more than I was comfortable with.

One thing I learnt quite early on was to change the setting for what happens when you snip a piece of audio in two. By default it keeps both snippets selected. I changed the setting to deselect everything. That worked better for me.

A nice function was Strip Silence. Because Marna and I don’t tend to talk over each other this function created distinct snippets of her and me. These I could “punch out” to left and right – using the fragments created by Strip Silence. I suppose if we were prone to talking over each other we could punch out those snippets to a third track placed in the middle of the stereoscope. When we have guests we can do just that as well. I like putting guests in the middle.5

Rendering the final (MP3) audio is a little slower than it was on the Mac. This doesn’t matter very much as I’m not actively watching it.

Conclusion

I count the move to Ferrite on iOS a success. It means I can edit audio without having to pull out a Mac – which means I can do it in more places. I find Ferrite very usable, now I’ve got used to it. It also has some functions that weren’t available to me before. Already, adding chapter markers (including graphics) and adjusting the stereo are things I wasn’t able to do before.

I haven’t explored effects like ducking or panning. These are built into Ferrite. I don’t know that they’re actually useful for us. But I might play with them. They seem to me the basis for some good audio gags – which is something I thought I wanted to do before we even got started with podcasting.

In cost terms, Ferrite isn’t free. To get all the functions is about 20 US dollars or pounds. I consider that a good investment, particularly as I’ve spent rather more on headphones and a microphone. Which I now have to work on to get the audio quality up. To that effect I’ve bought another recording tool that works with Skype – in the hope it produces better audio. It’s called Piezo and is developed by Rogue Amoeba – who have a good reputation for such things.

I’m about to play with Ferrite’s Templates support – which might actually save a lot of time and introduce consistency. So I might reprise this, when I think I know what I’m doing.


  1. Actually, I haven’t done that yet. I think it’s feasible, though noise issues might defeat you. 

  2. To be fair, I’ve no evidence that it isn’t just me that can see the chapter marker graphics. 

  3. This isn’t quite the best case: 12.9” iPad Pros have been released since mine. And a newer Apple Pencil has been released. But my rig is the biggest screen and pretty much the most functional setup. Doing this without an Apple Pencil on a small phone would be a pain. 

  4. By the way, I find Airdrop a very convenient mechanism for moving audio between Mac and iPad. 

  5. Generally I want our podcast episode to feel like the conversation it is. Adding a guest should just add to that feeling. 

Three Early SMF 89 Results

(Originally posted 2018-10-31.)

Recently I wrote code to “flatten” SMF 89 records – in REXX.1 Now I want to share with you some of the insights I’ve acquired from early experiments with the data.2

There are three things I’d like to share with you that you might not realise you can get from SMF Type 89. But first a design standard I didn’t mention before.

CSV Files That Are Sortable

When processing data it’s useful to create a series of tables. And so it is with SMF. There are two things I’m aiming for:

  • Ability to pull into Excel – which suggests CSV format
  • Ease of processing with DFSORT or ICETOOL

These two aims turn out to be compatible. So here’s how you do it:

  • Pad all your fields so they are fixed width. This makes it easier for DFSORT to find the fields – as fixed width supports fixed positions.
  • Wrap character strings in double quotes.
  • Splice fields in a row together with commas.

If you do those things you can process the data with a wide variety of methods – whether on z/OS or elsewhere. When I’m flattening SMF 89 I follow these rules; It’s not difficult in REXX, with the right() and left() functions being your friends.
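The same rules translate directly to other languages. Here’s a sketch in Python, with str.ljust() and str.rjust() playing the role of REXX’s left() and right():

# Fixed-width, quoted, comma-spliced - so DFSORT can find fields by
# position and Excel can still parse the file as CSV.
def csv_row(fields, widths):
    cells = []
    for value, width in zip(fields, widths):
        if isinstance(value, str):
            cells.append(('"' + value + '"').ljust(width + 2))  # quoted, padded
        else:
            cells.append(str(value).rjust(width))               # right-aligned number
    return ",".join(cells)

print(csv_row(["DBP1", "DIST", 123.45], [8, 8, 12]))
# "DBP1"    ,"DIST"    ,      123.45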

By the way what I’ve just said doesn’t just apply to SMF; It’s relevant to any tabular data.

Three Insights

Here are three things I (and you) can see – without much difficulty – from SMF 89.3

zNALC Or Not zNALC?

When examining software economics, for example, it’s important to know if an LPAR is operating under zNALC rules. zNALC is a successor to NALC.

With NALC you had to have the LPAR follow a strict (IBM-specified) naming convention. With zNALC you don’t. Operationally it’s much easier.

In SMF 89 there is a flag for whether the LPAR is zNALC or not. Maybe this one isn’t particularly useful if it’s your systems we’re talking about. For consultants and people like me it’s a different matter, of course. But the standard homily applies: “Systems should be self documenting.” In this case they are.

Actually, in my test data sets, this is a little hard to verify – as none of them contain data from zNALC LPARs. With each new set of customer data I’ll run a query until I find a zNALC LPAR. Then I’ll incorporate the test into my formal code. I quite like the idea of being able to test things. 🙂

MQ Queue Managers And DB2 Subsystems

Believe it or not, you can list the MQ and DB2 subsystems by LPAR – using only SMF 89 Subtype 1 data. You don’t need to use SMF 30 – which I have been using up until now just for this purpose.4

The key here is to look for Usage Data Sections with Product Name “DB2” or “MQM MVS/ESA”. The Product Qualifier field then yields the name, perhaps slightly encoded. Decoding turns out to be easy – as you’ll see in a minute.

The Product Version field tells you what release the subsystem is running at.

Here is an example:

DB2 - All at 11.01.00
=====================

ASYS 
    DBG1
    DBR1
    DBS1
    DSN
BSYS
    DSNF
    DSNH
    DSNJ
CSYS
    DBG2
    DBP2
    DBR2
    DBS2
DSYS
    (none)

MQ - All at V8 R0.0
===================

ASYS
    EDIP
    ODSP
    ODSH

This might not be pretty – and before it hits Production it will be much prettier – but it gives useful insight:

  • All the DB2 subsystems are V11 and all the MQ queue managers are V8.
  • MQ only runs on ASYS.
  • DSYS had neither DB2 nor MQ.
  • You might be able to spot a meaningful naming convention – but this is not the technique for discovering Data- or Queue-sharing groups.

If you were a SAP installation, for example, you might be glad of a report based on SMF 89 rather than SMF 30.

MQ Queue Manager CPU By Time Of Day

This is a more surprising result: For MQ you can see the CPU in MQ by time of day. The following graph is from a real customer. I teased it on Twitter the other day. Or rather a simplified version of it.

The graph is for a single MQ queue manager across a seven-day period.

I mentioned decoding the Product Qualifier field above. For queue manager MQ01 this field takes values including:

  • MQ01
  • MQ01CHIN
  • MQ01BATC
  • MQ01RRSB
  • MQ01CICS

These enable me to plot a graph such as the one above, using the TCB time in the Usage Data Section. I’ve omitted a series “Other”. In this data Other contains zero TCB time. I calculate Other by summing up all the MQ-related TCB time where the Product Qualifier begins with “MQ01” but isn’t in this list. (I can figure out what it is and create another series for it, obviously.)
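Here’s that bucketing logic, sketched in Python with made-up TCB times; the qualifier values are the ones listed above.

# Sum TCB time from the Usage Data Sections by Product Qualifier suffix.
# Anything beginning "MQ01" but with an unrecognised suffix goes to "Other".
usage = [                  # (Product Qualifier, TCB seconds) - made up
    ("MQ01", 10.0), ("MQ01CHIN", 25.0), ("MQ01BATC", 5.0),
    ("MQ01RRSB", 2.0), ("MQ01CICS", 8.0),
]
known = {"", "CHIN", "BATC", "RRSB", "CICS"}  # "" is the queue manager itself

buckets = {}
for qualifier, tcb in usage:
    if qualifier.startswith("MQ01"):
        suffix = qualifier[4:]
        key = suffix if suffix in known else "Other"
        buckets[key] = buckets.get(key, 0.0) + tcb
print(buckets)  # {'': 10.0, 'CHIN': 25.0, 'BATC': 5.0, 'RRSB': 2.0, 'CICS': 8.0}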

So, to what the graph shows. (Or rather what I think it shows – and you might form your own view.)

  • There is a fairly regular daily pattern.
  • RRSB is more pronounced on Tuesday, 9th October.
  • Early on Wednesday, 10th October Batch is much more pronounced.
  • Later the same day CHIN becomes much bigger.

CHIN, by the way, is MQ traffic to and from outside the z/OS LPAR. We don’t get PUT and GET rates from this data (or from SMF 30). Nor does it explain why the CHIN traffic is unmatched by some other connection. But this is enough to alert you to the fact something happened, and to talk to the MQ experts in the installation.

Conclusion

I think these three insights are useful, if a little surprising. It encourages me to work further with the SMF 89 data. Maybe it encourages you, too, to look at this record. And they weren’t hard to get. There’s more to come, I suspect.


  1. If you want to know how to process SMF in REXX see Rexx’Em

  2. There might well be more later, as I gain experience with the data. 

  3. And I’m actually not fussy about how you process the data; I just want you to know you can glean this stuff. 

  4. I very much want customers to send me (and learn how to process for themselves) SMF 30; There’s tons of insight to be gained from it. 

Mainframe Performance Topics Podcast Episode 20 "Two Is One And One Is None"

(Originally posted 2018-10-30.)

Episode 20 marked a departure in editing terms…

Previously I’d been adding chapter markers in Ferrite on iOS and editing the sound on my Mac using Audacity.

However, as you’ll see in the Feedback section, some listeners wanted less aggressive (or even no) stereo. This is much easier to achieve in Ferrite so I basically transferred the whole sound editing job there.

Initially I’ve set the stereo to 30% rather than 100% and I hope people like it – as it’s much gentler.

As the editing progressed – with there being 5 basic sections in the podcast – it got easier and I now prefer Ferrite to Audacity. So this will continue.

I have to admit there are a few sound glitches, but those I think are microphone and capture issues, not editing ones. For the next episode I’m using a different sound recording program and we’ll see if we can make it better.

Anyhow, I hope you enjoy this episode.

Episode 20 “Two Is One And One Is None”

Here are the show notes for Episode 20 “Two Is One And One Is None”. The show is called this because our Topics topic is trying to figure out how to archive family photos and videos.

Where We’ve Been Lately

Marna’s been to Z Tech U – Hollywood Florida. Great conference!

Feedback

We’ve received feedback that the stereo was too aggressive. This episode has it narrowed, using a different audio tool.

What’s New

  • Dynamic IODF activation for Standalone CFs (before, a POR was needed; now it can be driven remotely from another CEC)
    • z14 GA2 for both driving and target; a PR/SM-based solution which needs one more POR on the target to establish a firmware-defined Master Control Services LPAR
    • Need some z/OS APARs: HCM-IO25603, IOS-OA53952, HCD-OA54912, IOCP-OA55404
  • Asynchronous Cache Structure XI. Also with z14 GA2, and Db2 support PTFs are expected.

Mainframe: zFS Shrink, Only in z/OS V2.3

  • Top customer requirement. A system command for reducing the size of a zFS file system. Not to be confused with compressing files within a file system.
  • You specify a target size with the size option; it gives the final size in KB, rounded to an 8K boundary.
    • -noai option: means no active increase. A file system that is being accessed might need additional blocks beyond the shrink size given, so it can be “actively increased” by default.
      • If the file system needs to actively increase back to its original size, the shrink command ends with an error.
      • If you specify no active increase (-noai) and an active increase is needed, the command ends with an error.
  • During a shrink a scan occurs to determine which blocks must move – the longest part of the operation. Blocks are moved from the portion to be released into the portion that is to remain.
    • After the blocks are moved, the space is released – during which the file system is briefly quiesced. Applications do not need to be stopped when doing a shrink.
    • It is recommended not to shrink during peak times when you need the files.
  • To know how big to make the file system after the shrink, use zfsadm fsinfo for the aggregate size (in KB), free size (in 8K blocks), and 1K fragments. (There’s a sketch of the command sequence after this list.)
    • Even better hint! Use df -kP to see 1K blocks used and available; those sizes are consistent for shrink.
  • Nothing right now to help with suggesting a size.
    • Don’t pick a final size that is too small, or you’ll keep having to grow it.
  • Reminder: use aggrgrow and zfsadm grow to increase the size of the file system.
  • Monitor with SMF 92 subtype 50, for both grow and shrink events. Subtype 59 covers # of I/Os and rates, but might occur too often, so use it wisely.
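Putting those hints together, the sequence might look something like this from the z/OS UNIX shell. This is a sketch only – the aggregate name and sizes are invented, so check the options against the zFS documentation for your release:

df -kP /u/myfs                              # 1K blocks used and available
zfsadm fsinfo -aggregate OMVS.MYFS.ZFS      # aggregate size, free 8K blocks, 1K fragments
zfsadm shrink -aggregate OMVS.MYFS.ZFS -size 1440000        # target size in KB
zfsadm shrink -aggregate OMVS.MYFS.ZFS -size 1440000 -noai  # ... disallowing active increase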

Performance: CPENABLE and HiperDispatch

Each I/O ends with an Interrupt, which needs to be handled by a processor, and needs to be handled in a timely way. When an I/O interrupt is handled the processor handling it issues a Test Pending Interrupt (TPI) instruction. If this test returns “true” this processor handles the (detected) pending interrupt. If “false” then the processor has no more interrupts to handle – for the time being.

If many of these TPI tests result in “true” it suggests a queue has built up – which might indicate temporarily enabling more processors to handle interrupts.

There’s a trade off between timeliness and processor efficiency. The CPENABLE parameter’s two values manage this trade off: if the TPI% is below the first, disable a processor from handling I/O interrupts; if it’s above the second, enable another processor to handle them.

Without Hiperdispatch, access to CPU is smeared across online processors, as the LPAR’s weight is evenly spread across its logical processors. Without Hiperdispatch it is recommended that CPENABLE be set to 0,0 which allows all processors to handle interrupts.

With Hiperdispatch, however, access to CPU is corralled into fewer processors – with the weight not being spread evenly across the LPAR’s logical processors: A Vertical High (VH) logical processor has a “full engine” weight; The remaining weight is spread across 0, 1, or 2 Vertical Medium (VM) logical processors. Vertical Low (VL) logical processors have zero weight.

With Hiperdispatch it is recommended to set CPENABLE to 10,30. This corrals interrupt handling into fewer processors much of the time. It’s in the spirit of the weight distribution.

In terms of instrumentation, SMF Type 70 is useful:

  • It documents LPAR Setup, including HiperDispatch State, logical Engines and weights, and Verticals / Horizontals.
  • It counts Interrupts & TPIs, enabling you to calculate the TPI Percentage, down to the logical processor level. (The arithmetic is sketched after this list.)
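Here’s the arithmetic, sketched in Python with invented counts; the real numbers come from the SMF 70 logical processor data sections.

# TPI% per logical processor: of all I/O interrupts handled, how many were
# found via the Test Pending Interrupt path - i.e. had queued up.
def tpi_percent(interrupts, via_tpi):
    return 100.0 * via_tpi / interrupts if interrupts else 0.0

# With CPENABLE=(10,30): below 10% a processor gets disabled from handling
# I/O interrupts; above 30% another one gets enabled.
counts = {"CP0": (90_000, 30_000), "CP1": (20_000, 1_000)}  # made up
for cpu, (ints, tpis) in counts.items():
    print(f"{cpu}: TPI {tpi_percent(ints, tpis):.1f}%")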

In a recent customer Data Sample there were a couple of different types of LPARs:

  • Some with Hiperdispatch enabled – with CPENABLE of 10,30 – where the logical processors were enabled from 0 upwards to handle I/O interrupts.
  • Some without Hiperdispatch enabled – with CPENABLE of 0,30 – where the logical processors were enabled from the highest downwards to handle I/O interrupts.
  • It probably would’ve shown smearing of I/O interrupt handling across all the logical processors with a CPENABLE value of 0,0 – but this is conjecture.
  • It showed LPARs with tiny weights: 0.1 engines’ worth of weight on a 2-, 3-, 4-, or 5-way – which is not going to be that timely in servicing I/O interrupts.

Overall this topic shows I/O Interrupt Enablement is a topic worthy of consideration to get timeliness vs efficiency right – particularly in the Hiperdispatch era. Also that the instrumentation really helps.

Topics: Archived Family Information

  • Talking about personal and family information: photos, audio, and video only. Not writings.
  • Backing up: Two Is One And One Is None. Need multiple backup techniques at different physical locations. Multiple cloud locations?
  • Modern media vs legacy: when to adopt new technology and how to convert?
  • Google Photos Retrieval
  • Apple Photos app
  • Finder search inside files is used to find outline elements from previous shows.
  • Some serious questions:
    • “What happens when I’m dead?” Facebook, for one, has a protocol. Google has a protocol.
    • Ideally write a will and tell family how to handle material.
    • Will anyone else care about the material?
    • “What about Big Brother?”: Everyone has something to hide; it’s about trusting the service provider
    • “Who owns the material?” and do you care?

Customer requirements

  • z/OSMF Workflow “Deep” Search 126042
    • Within a z/OSMF Workflow instance, allow the user to look for an argument within the workflow itself. Right now, the search function only finds strings that are in the titles of the steps, and not in the “tabs” on the insides (such as general, instructions, notes, …).
    • What about find and replace? Not sure you want to replace something that is in the instance. Workflow designing isn’t in this scope.

Places we expect to be speaking at

  • Whittlebury Hall, Nov 5-7, 2018 – GSE UK. Huge attendance already registered!

On the blog

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below. So it goes…

Invoking Keyboard Maestro From PopClip

(Originally posted 2018-10-13.)

About 18 months ago I built some automation on Mac that I found rather handy. Since I mentioned it on various forums people have wanted it – or at least to know how I built it.

Given its extensibility as a method, it seemed more appropriate to dedicate this post to how to build.

An Example

Here’s an example of this automation in action.

Suppose you are typing text in an editor and you want to uppercase a portion of it. You would select the text with the cursor and up would pop a menu:

In the above the white on a black background is the pop up that PopClip offers you. Some of the items on the menu are standard and some are from among the many that others have built.

One in particular is the first asterisk (‘*’) – because I’m too lazy or unskilled to create an icon – which is the PopClip extension I built.

Click on the asterisk and you get:

Under the banner ‘PopClip Bridge Macro Group’ you see a whole palette of macros you can choose from.

If you choose ‘Uppercase’ you would get:

The result of the text transformation is typed in over the selected text.

Now, Uppercase is built in to PopClip. (It’s the ‘AB’ icon.) But that’s just a simple example. You could do anything you please with the text.

How Was This Built?

The first thing to say is that you could build some of this without Keyboard Maestro – though the palette of actions in the second graphic wouldn’t be possible.

The automation consists of three pieces:

  1. A PopClip extension that invokes a Keyboard Maestro macro.
  2. This Keyboard Maestro macro that pops up a palette, which enables you to select another Keyboard Maestro macro.
  3. The Keyboard Maestro macros that can be invoked from the palette.

I’m going to describe how you build all three.

The PopClip Extension

A PopClip extension is a zip file, containing at least two files. Its extension is ‘popclipext’. Let me show you how simple it is.

There’s a simple XML file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Actions</key>
    <array>
        <dict>
            <key>AppleScript File</key>
            <string>KMBridge.applescript</string>

            <key>Title</key>
            <string>* </string>

            <key>After</key>
            <string>paste-result</string>
        </dict>
    </array>
    <key>Extension Identifier</key>
    <string>com.KMBridge</string>

    <key>Extension Name</key>
    <string>Keyboard Maestro Bridge</string>
</dict>
</plist>

This is called Config.plist.

The other file is even simpler, being a basic AppleScript script:

tell application "Keyboard Maestro Engine"
    do script "PopclipKeyboardMaestroBridge" with parameter "{popclip text}"
end tell

Its name (pointed to in an obvious way in the XML) is KMBridge.applescript.

Note {popclip text} is substituted for by the selected text when the PopClip menu was popped up.

You can create these two files with any plain text editor, add them to a zip file and rename that zip file to have the extension ‘popclipext’.

Which is why I haven’t attempted to furnish a single file for you to download.

In fact I want you to feel free to edit the files: Go ahead and copy the above two from this page and paste them into your editor as Config.plist and KMBridge.applescript. Then zip them up into a file with extension ‘popclipext’. If you double click on this it will – possibly after a warning – install the extension into PopClip.
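If you’d rather script that packaging step, here’s a sketch in Python, assuming the two files are in the current directory:

import zipfile

# Zip Config.plist and KMBridge.applescript straight into a file with the
# 'popclipext' extension; double-clicking the result installs it.
with zipfile.ZipFile("KMBridge.popclipext", "w") as z:
    z.write("Config.plist")
    z.write("KMBridge.applescript")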

More information on this topic is available here. And, of course, you can get PopClip extensions to do much more, using various scripting languages, for example Ruby or Shell Script.

Keyboard Maestro Macro To Show A Palette

In the AppleScript above a Keyboard Maestro macro ‘PopclipKeyboardMaestroBridge’ is invoked and the selected text passed to it.

You could run any macro you like at this point. Here’s one to pop up a palette and pass the text along:

It’s very simple, only having two actions:

  1. Save the text passed in into a variable ‘TEXT’
  2. Show a palette with a bunch of macros listed.

The second one produces the palette of actions you saw earlier. For a macro to appear it has to be placed in macro group ‘PopClip Bridge Macro Group’. In Keyboard Maestro users generally group their macros into groups.

By the way, you can cancel out of the palette by pressing the ‘Esc’ key.

Example Macro For The Palette

Let me show you the ‘Uppercase’ macro, as a simple example. It also comprises two steps:

Here are the steps:

  1. Convert the input text to upper case and store back in the variable ‘TEXT’.
  2. Type the contents of the variable ‘TEXT’.

All very simple.

In fact I have more complex examples, including:

  • Appending the text to a specific ‘scratchpad.md’ file in Dropbox. This uses a template that embeds the date and time as a Markdown heading.
  • Turning a set of lines into a (JavaScript) array of strings. (Both are sketched below.)
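For flavour, here’s roughly what those two do, sketched in Python. (The real things are Keyboard Maestro actions; the scratchpad path is illustrative.)

import json
from datetime import datetime
from pathlib import Path

def append_to_scratchpad(text, path="~/Dropbox/scratchpad.md"):
    # Append the selected text under a date/time Markdown heading.
    heading = datetime.now().strftime("## %Y-%m-%d %H:%M")
    with open(Path(path).expanduser(), "a") as f:
        f.write(f"\n{heading}\n\n{text}\n")

def lines_to_js_array(text):
    # Turn a set of lines into a (JavaScript) array-of-strings literal.
    return "[" + ", ".join(json.dumps(line) for line in text.splitlines()) + "]"

print(lines_to_js_array("alpha\nbeta"))  # ["alpha", "beta"]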

Conclusion

Hopefully the above has served two purposes:

  • Provided a useful tutorial in how to build a simple PopClip extension that invokes some Keyboard Maestro functions
  • Given people unfamiliar with PopClip and Keyboard Maestro a sample of what each of them can do, and how they can usefully work together.

In a sense this post follows on from Automatic For The Person.

Screencast 13 – Topology Today

(Originally posted 2018-10-10.)

I can’t say I’ve learnt much about screencasting since I published Screencast 12 – Get WLM Set Up Right For DB2 but it’s certainly been a while. I have, of course, learnt quite a bit about other stuff.

So I just released Screencast 13 – Topology Today.

It pulls together a couple of use cases for the SMF 30 Usage Data Section. This section, as I’m sure I’ve said many times, gives lots of insight into how address spaces connect together. I’m using the term “Topology” as I really can’t think of a better one.

After some preamble I give two examples:

  1. CICS into DB2
  2. Batch into MQ (and also DB2)

It’s just under 10 minutes long – which is where each of the past three screencasts has been. If you were impatient and skipped past the introductory slides these two examples would make rather less sense.

Production Notes

This time, in Camtasia, I learnt how to fade to black. It took a few goes to get it right – and it basically involves dragging an effects “tile” over the section you want to fade over and then stretching the tile to control the fade out time.

Thankfully with this screencast I didn’t have the same issues with huffing and puffing in the audio: By ramping up my exercise over the past couple of months that issue has gone away, I’m pleased to say.

Mainframe Performance Topics Podcast Episode 19 "You've Lost That Syncing Feeling"

(Originally posted 2018-10-06.)

This summer has seen the most travel I think I’ve ever done, and I would imagine Marna feels much the same.

We like to record together – which has made the logistics difficult. We actually met in the summer but thought recording in the same room would be difficult. We’ve stayed with each other a number of times but don’t want to record in our houses because the sound quality would be poor: Wooden floors produce way too much echo.

A lot of water has flowed under the bridge in this time, of course. Which has yielded quite a few blog posts on both our parts. And one new feature…

The “What’s New” subtopic gives us a chance to point out announcements and things like APARs. It’s not meant to be encyclopaedic but just contain a few new things that took our fancy. It’s, as always, an experiment. It might move in the running order, we might can it, we might morph it. I doubt, though, that it will become a topic in its own right.

So, we’re back. We hope you enjoy this episode. And we think we have a good chance of recording more in the near future.

Here are the show notes.

Episode 19 “You’ve lost that syncing feeling”

Here are the show notes for Episode 19 “You’ve lost that syncing feeling”. The show is called this because our Topics topic is about losing the Xmarks URL synchronization tool.

Where we’ve been

This episode had a very long hiatus – more than 5 months – so we’ve been to many places and on vacation/holiday. Sorry we’ve taken so long to get back together to record! It is not through lack of trying!

Feedback

For once we have some follow up: With iOS 12 the built-in Podcast app now supports MP3 chapter markers. As many listeners on iOS will be using this app they might see chapters (and the nice graphics) show up. Still, though, Android podcast apps with correctly working chapter markers have not been found yet.

What’s New (in APARs)

  • OA56011: OSPROTECT Flag in RMF SMF 70

  • PH00582: New function to export a workflow in printable format, as a text file.

Mainframe

Our “Mainframe” topic discusses moving from V4 to V5 zFS, prompted by a user comment that had a very positive experience.

  • You need to be totally on z/OS V2.1 to use it – which now applies to many, since z/OS V2.1 has itself reached end of service.

  • The old version of zFS was V4. V5 gives you directories using a tree structure for faster searching. This should be faster than a naive linear search approach.

  • This topic was prompted by a customer comment.

    • XCF reduction: IOEZFS group 99%, SYSGRS group 80%

    • Significant CPU reduction in address spaces: XCFAS and GRS

  • To take advantage of this, you need to convert from old V4 format to V5. V5 file systems can have both V4 and V5 directories, however V5 dirs must be in a V5 file system.

  • You can convert: offline with IOEFSUTL, online with zfsadm convert, IOEFSPRM CONVERTTOV5=ON, or on MOUNT – you choose.

    • Steps are: ensure you are fully at V2.1; set IOEPRMxx format_aggrversion=5 for new file systems; set IOEPRMxx change_aggrversion_on_mount=on for a fast, safe file system switch to V5; determine if you want IOEPRMxx CONVERTTOV5=ON for a one-time switch on directory access. Delay is expected!

    • If you cannot tolerate the one-time delay, use MOUNT CONVERTTOV5 to selectively convert where there’s most benefit: large directories and the most heavily used ones (F ZFS,QUERY,FILESETS)

      • Use zfsadm fileinfo to see a directory’s version; use zfsadm aggrinfo -long to look at all the file systems. (Example commands after this list.)
    • New RMF zFS reports in 2.2 with helpful pop-ups
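A sketch of the investigative commands named above – again, check the exact syntax for your release (the path is invented):

F ZFS,QUERY,FILESETS             # find the largest / most heavily used file systems
zfsadm fileinfo -path /u/bigdir  # shows whether a directory is v4 or v5
zfsadm aggrinfo -long            # sizes across all the file systems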

Performance

Our Performance topic is a survey of Licence-Related Instrumentation. Most shops are very conscious of software costs. The key evidence comprises licence agreement documents and instrumentation. Martin discusses the instrumentation portion.

  • SMF can help you:

    • System level SMF 70 gives you the rolling 4 Hour Average CPU, Defined Capacity and Group Capacity information, and high-level CPU.

    • System level SMF 89 gives you more detailed information on licencing: Product Usage – both names and CPU.

    • Service Class level SMF 72-3 gives you Service Units (SUs) consumed on zIIP, on general purpose CP, and zIIP-Eligible on general purpose CP.

      • Mobile SUs is one set of fields and total SUs another

      • Resource consumption in general

    • Address Space level SMF 30 gives you a Usage Data Section for topology and for CPU in a product sometimes. (An example of topology is which CICS regions connect to which DB2 subsystem.)

  • Container-Based Pricing introduces new metrics in SMF 70-1, 89, and 72-3; Tenant Classes and Tenant Resource Groups explicitly document this.

  • Closing thoughts:

    • Licensing is getting more complex, and it’s difficult to understand it all fluently.
    • It would be wise to become familiar with the instrumentation.
    • And it would be wise to understand aspects of software licensing that cause impact in your installation.

Topics

Our podcast “Topics” topic is about Marna losing a handy and simple URL sync tool, Xmarks. Xmarks used to let you sync bookmarks between browsers, with other cool capabilities. It was discontinued on May 1, 2018.

  • Xmarks was a browser plug-in: log on, sync, and your bookmarks were there! With multiple profiles, such as work and home.

  • Here are some possible replacements:

    • NetVibes: RSS feeds and dashboards seem to be its strengths.

    • Google Bookmarks syncs URLs; Marna hasn’t really used it, but it’s still only for Firefox and Chrome. Gmarks will connect to Google servers. Some sites need IE.

      • Modern browsers can fake the User Agent to look like IE
    • Diigo with a toolbar: not used it. Pricing plans, sharing URLs. A bit too heavyweight

    • The promising one is called Raindrop, for Chrome, FF, and Safari. Just started trying it out. Works between Windows and Android!

    • Safari / Mobile Safari use iCloud syncing and work out of the box. But if you share an Apple ID, watch out!

    • Input from listeners??

Where We’ll Be

Martin will be renewing his passport, so limited travel for him.

Marna will be at a couple of conferences:

We welcome feedback!

On The Blog

Martin and Marna have both had several blog posts due to our long hiatus from the podcast.

Martin has:

Marna has these:

Contacting Us

You can reach Marna on Twitter as mwalle and by email.

You can reach Martin on Twitter as martinpacker and by email.

Or you can leave a comment below.