Appening 1 – Note & Share on iOS

(Originally posted 2013-02-17.)

I’ve added the words “on iOS” because this might spread to other platforms.

A week ago I posted that I’d pick an app a week and try to get value out of it. I also said I might blog about what I think.

This week’s app is Note & Share – an app that runs on both the iPad and the iPhone. (Probably on iPod Touch also but I don’t have one of those.)

Needless to say this isn’t an official IBM view or endorsement but my own personal experience.

I’m actually writing this post (interstitially) using Note & Share. I started it on the Piccadilly Line and am continuing it elsewhere. I wrote my previous post the same way.

So here are my thoughts on the app.


Basic Information

  • iTunes URL. The same company (ignitionsoft) makes EverClip (installed). They are based in Hong Kong.
  • Purpose: Allow note taking using Markdown syntax, saving to Evernote, Dropbox and other services.
  • Release tested: 1.7.2 on iPad with iOS 6.1

Evernote Integration

Setting up the link to Evernote is straightforward. Notes sync to the default notebook for the linked account – and they sync quickly.

You can easily keep a Markdown version in the Note & Share app itself – so you can revise it. Updating the note and re-sending to Evernote leads to the note being updated in Evernote, rather than a new one being created. But the “created” time stamp is also updated, rather than just the “updated” one, when you send the note to Evernote.

Tags don’t seem to make it through to Evernote properly, though they do appear in the app’s own note list. The tags end up in the note title in both the Evernote web app and the iPad app, yet in both cases a tag search still finds the tagged notes appropriately, and the iPad app shows the tags in the note info. Putting the tags on their own line doesn’t help. You might be able to clean this up with AutoEver.

Dropbox Integration

Dropbox integration works really well: When you save a note in Note & Share it is also saved to Dropbox. Even with Markdown Conversion turned on, the note is saved without the conversion being applied. This makes it easy to transfer to another computer.

I started this paragraph using the gedit editor on Linux, using Markdown syntax, and saved it in a folder watched by Dropbox, with the updated file automatically imported into Note & Share. Then I added text to the paragraph in BBEdit on my MacBook Pro, again with the Dropbox client active. (In BBEdit I selected Markdown from the list of languages under “Edit” -> “Text Options” to enable syntax colouring, and formatting with “Markup” -> “Preview in BBEdit”.) Preview in BBEdit also reloads when the file changes, whether locally (even before saving) or in Dropbox.

You have to reload the note in Note & Share for updates made elsewhere to appear on your editing screen, despite Dropbox tapping the app on the shoulder. You might also have to bring Note & Share to the foreground.

BBEdit automatically reloads the note when Dropbox alerts it to the fact the note has been changed – unless you turn off this option in Preferences. gedit prompts you as to whether you want it reloaded.

I also edited the document from Dropbox with Geany on Linux. It will also do syntax highlighting if you set the filetype to Markdown.

Dropbox is the key to sharing between iPads and iPhones: I successfully shared this note between 2 iPads and an iPhone, authoring changes on the 2 iPads.

I created a note in gedit on Linux and saved it to Note & Share’s Dropbox folder and it showed up just fine in Note & Share. Late in the week I installed Marked on the Mac. It takes Markdown and creates other formats, such as HTML and RTF. It works fine.

To get this paragraph and the one before it into another Markdown document is a matter of copying and pasting.

Ease Of Composition

Markdown syntax is simple to master but a little tough to type with the iPad keyboard. The Markdown toolbar makes this much easier, though.

Standard iOS spelling suggestions are quite handy. Otherwise I’d soon get fed up with the on-screen keyboard.

TextExpander works but only after you enable immediate expansion and restart Note & Share. This is also true if you add a snippet to TextExpander. TextExpander support could be handy for creating more complex Markdown. I’ve raised the restart issue with both ignitionsoft and SmileOnMyMac. The latter tells me there’s a specific API the former should be using to avoid the requirement for a restart.

Headings need a blank line after them.

Snippets in iOS 5 or later work OK. For example, typing “zo” offers “z/OS” as an expansion (which you can decline).

Exporting HTML

Enabling the clipboard allows you to put HTML onto the clipboard. If you disable Markdown conversion you can get the original markup there (and can then email it or save the note to Evernote). This setting, however, applies to all services – but the option is near the top of the options dialog, so it’s not too inconvenient.

Safari Bookmarklet

This is quite easy to set up but is not a way to import HTML as it only starts a new note with the page’s URL in.

Other Markdown Editors / Viewers

For an online editor and converter go to Daring Fireball: Markdown Web Dingus. It converts to HTML and displays that HTML. It also has a Markdown cheat sheet.


All the above is the contents of a note I built over the week. To get it into this one I copied and pasted it in. (The copy icon in the app creates HTML which I don’t want at this stage.)

I could be criticised for not being inclined to put bounds round things: One learning point is it’s sometimes difficult (and maybe unhelpful) to review one product in isolation. As you’ll see from the above I roped in other tools (and in one case paid for one, though not much). You might expect a tool to stand alone but conversely to integrate well with others. Note & Share does both nicely. Recall the main point was to live with Note & Share and get value out of it. Writing a review was very much secondary. Hopefully you’ll find this interesting both ways: As a product review and a view of how it fits into my kitbag of tools.

This one’s a keeper – and on the front page of my phone.

Now to decide what next to try out for a week. It might be a game. I don’t know if I’ll write a review – we’ll see. In any case I consider the experiment to be a success.

And standing outside a shop in Oxford Street I’m ready to post. 🙂

Except…

… Between writing this and posting the next day I notice Brett Terpstra has blogged about another (new) Markdown editor: iOS App Review: Write for iPhone. I’m not about to rush out and switch to it, being happy enough with Note & Share.

zIIP Eligibility When You Don’t Have A zIIP

(Originally posted 2013-02-15.)

A couple of things have happened recently that led me to post about projecting the amount of CPU that’s zIIP eligible. (Everything in this post applies equally to zAAPs, of course.)

When we first introduced z/OS specialty engines we introduced the “Project CPU” mechanism, reporting most notably via RMF. (I emphasise “z/OS” because ICF and IFL engines, which don’t run z/OS, don’t have such a mechanism.) This tells you how much work that is zIIP-eligible that is actually running on general-purpose CPs (GCPs).

Note that there are two cases where some zIIP CPU will be projected:

  • Where you have no zIIPs in the LPAR.
  • Where you have zIIPs but still some work that is eligible runs on GCPs.

This worked fine when you had a workload already running but had no specialty engines. (Of course the workload might grow, but that’s just relatively normal Capacity Planning.) If a workload didn’t yet exist then RMF wouldn’t be able to report on its eligibility. A well-known example of this is IPSec where specialty engines made it more affordable to use the function, at a time when it had become more important to installations. So far so good.

In recent months I’ve heard of cases where software doesn’t run the zIIP-eligible path when it determines there is no zIIP. This is said to be to minimise CPU. Fair enough, but it makes it difficult to assess how much work is eligible for zIIP.

Thanks to Don Zeunert, I now know about PM65448 for OMEGAMON XE for DB2 PE/DB2PM. (He mentioned it in OMEGAMON XE DB2 V510+ zIIP Project CPU when no zIIP present.)

So you can elect to turn on Project CPU for Omegamon XE DB2, or not to. I’m not sure how easy it is to make this product pick up a change in this setting.

My initial reaction was to turn it on for a couple of peak hours and see what number it gave you. I’ve moved on from that to thinking that installations should consider:

  • Measuring the CPU consumption by Omegamon XE DB2 with this switched off.
  • Turning it on for at least a day and measuring both the benefit and the additional cost in GCP terms.
  • Considering leaving it on permanently, or at least semi-permanently if you are about to acquire zIIPs.
  • Not rushing to turn it off when you install zIIPs and allow the relevant LPARs to use them. (You’re already paying the overhead of zIIP eligibility anyway and you need the diagnostics Project CPU provides.)

I’ve not seen situations where a significant amount of GCP CPU has been wasted by running the zIIP-eligible paths through products. That doesn’t mean it can’t happen – as I see only a small subset of customer cases. But it does suggest to me it’s not a major concern for most customers.

I’m obviously a fan of lots of knobs and dials when I say I think this Omegamon XE DB2 function is nice to see: At least you have the choice. I’d like to see other products do something similar.

So, two questions for you, dear reader:

  1. Which other products detect the absence of zIIPs (or zAAPs) and choose not to use the zIIP-eligible code paths when they’re absent?
  2. Do any of these products allow you to turn Project CPU back on again, despite the absence of zIIPs?

As It Appens

(Originally posted 2013-02-10.)

We must have over 200 iOS apps in our iTunes account. Some of them we paid for – though usually not much1 – but many were free.2 I’m sure I’m not alone in wondering "how did that happen?" 🙂

It’s got well beyond the point where a new app3 will simply be found on my iPhone: it has to be searched for. Yes, I do use app groups and yes I also know how to find the recently used apps but that’s not the point.

So, starting this week, I’m going to take an app a week and try to get value out of it – whether it’s a game or utility or whatever. I’ll probably write a personal review in Evernote.4 I might even post a review here. Such a review would be, it has to be said, my opinion and not that of IBM. But that’s true of everything I post here.

And at the end of the week I’ll decide what to do with it:

  • Some will get promoted to my first page on the iPhone.
  • Some will get much more use as I finally get to grips with what the app can do.
  • Some will get relegated to groups in "low rent" 🙂 pages.
  • Some will get deleted from my iPhone.

In some ways it’s like what I should do with stuff around the house: Triage it and rediscover it.5 There’s a cautionary tale here:

Twice recently we’ve had to replace significant household items and discovered features in the old one’s manual that would’ve been really handy and will get used in their replacement. I recommend reading the manual (again 🙂 ) 3 months after purchasing something and pressing it into service.

I’ve also dutifully installed updates to all the apps – across all the iPhones and iPads in the house: This "app a week" approach will probably unlock things – added in later releases and largely unnoticed by me – that make the apps more relevant.

Could this be like Christmas all over again? 🙂

And, finally, I might get a better understanding of how we come to acquire these apps (and perhaps other stuff). And how to handle their lifecycle.6

Now he who is without sin may cast the first stone. Form an orderly (and I bet very short) queue. 🙂


1 Defined for the purposes of this exercise as "tuppeny pieces to this value in my pockets wouldn’t make my trousers fall down". 🙂

2 At least two of us (me being one of them) are prone to falling for the "it cost nothing to acquire so there’s no TCO" line. 🙂

3 I’m in two minds about the word "app": The ponderous part of me wonders what’s wrong with the word "application" but the rest of me likes the brevity, now the term has become commonplace and with wider applicability than just iOS apps.

4 I already have a table in Evernote for each Mac app – so the family can find apps they might think useful we’ve already acquired.

5 There is a certain amount of joy in rediscovering some half-forgotten product that actually has use. Or is just plain fun again.

6 There ya go – if you were looking for business relevance: 🙂 There’s an analogy or transferable lesson right there.

DB2 Data Sharing and XCF Job Name – Revisited

(Originally posted 2013-01-27.)

It’s been almost four years since I wrote DB2 Data Sharing and XCF Job Name. It mostly stands the test of time but there are a couple of things I want to bring up.

I was in the DB2 Development lab a couple of days ago, talking with a couple of developer friends about DB2 Data Sharing and XCF. They know DB2 Data Sharing and IRLM much better than I do but XCF not so much. (It’s probable that XCF Development have a complementary set of knowledge.)

So this conversation provided a fresh set of data as well as a chance to rehearse the contents of that blog post again.

The first thing to note is that I was inaccurate in one regard: because in 2009 I’d only seen data from installations where the XCF group name for IRLM was “DXRabcd” – where “abcd” is the DB2 Data Sharing group name – I’d made the poor assumption this was always the case. In this fresh set of data the IRLM XCF group name is “DXRGROUP”, which has nothing to do with the Data Sharing group name. In fact a DB2 Data Sharing group name can be up to 8 characters long, so “DXRgrpname” couldn’t work as a general convention.

(And if you think the terms “XCF group name” and “DB2 Data Sharing group name” are confusingly similar, I’m inclined to agree.)

But all is not lost as the field that started it all – R742MJOB – contains the IRLM address space name. IRLM address space names are quite easy to find – in SMF Type 30 – because the program name is always “DXRRLM00”. But you might have several within the same z/OS image. So the method I outlined for finding the IRLM XCF group name – and monitoring its performance – still stands, with this minor tweak.
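The lookup just described – find the IRLM address spaces via SMF 30, then use the member job name (R742MJOB) to pick out their XCF groups – can be sketched as follows. This is purely illustrative Python: it assumes the SMF records have already been decoded into simple dictionaries (real SMF binary parsing is not shown), and the sample values are invented.

```python
# Hypothetical sketch: match IRLM address spaces (SMF 30, program DXRRLM00)
# to XCF group members (SMF 74-2 field R742MJOB). Assumes records are
# already decoded into dicts; real SMF parsing is out of scope here.

def irlm_jobs(smf30_records):
    """Job names whose program name is DXRRLM00 - i.e. IRLM address spaces."""
    return {r["jobname"] for r in smf30_records if r["program"] == "DXRRLM00"}

def irlm_xcf_groups(smf30_records, smf74_2_records):
    """XCF group names that have an IRLM address space as a member."""
    jobs = irlm_jobs(smf30_records)
    return {r["group"] for r in smf74_2_records if r["member_job"] in jobs}

# Made-up sample data, just to show the shape of the lookup
smf30 = [
    {"jobname": "IRLMPROC", "program": "DXRRLM00"},
    {"jobname": "DB2AMSTR", "program": "DSNYASCP"},
]
smf74 = [
    {"group": "DXRGROUP", "member_job": "IRLMPROC"},
    {"group": "SYSGRS",   "member_job": "GRS"},
]
print(irlm_xcf_groups(smf30, smf74))  # {'DXRGROUP'}
```

With the group names in hand you can then monitor those groups’ XCF traffic over time, as the original post suggested.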

The other thing the conversation did was to reinforce something I’ve been gradually sensitised to:

Keep track of how DB2 and IRLM address space CPU behaves over time.

Here I’m talking about not just the IRLM address space for a subsystem but also DBM1, MSTR and DIST. The conversation started with a customer seeing spikes in IRLM CPU. As we only had very few data points it was impossible to do what I like to do: Plot stuff by time of day over several days. If I’ve worked with your data you’ll know I do this to establish patterns.

So are these spikes regular, or at least vaguely regular? Or are they something specific going wrong? (The notion of “going wrong” is interesting, too.) If you have spikes in IRLM CPU in the Batch Window maybe it’s because some jobs are driving a lot of locking activity. (And so it would be with e.g. DBM1.)

What would be interesting would be to see a coincidence between IRLM CPU and these two XCF groups’ – DXR and IXCLO – traffic spiking. (Or indeed the lack of a coincidence.) It’s important to notice that much IRLM activity goes nowhere near XCF or indeed the LOCK1 Coupling Facility structure.

But we didn’t get to do that. Which is a pity. But still, I learn from every situation: And seeing lots of them is my good fortune.

Evernote, Remember The Milk, SMTP / MIME and z/OS Batch

(Originally posted 2013-01-24.)

Another kernel popped the other day: SMTP / MIME.

But what on earth is MiGueL Mainframe 🙂 troubling himself with SMTP / MIME for? Let’s come at this from a different angle…

You probably know by now that when you send me your data it gets put through some batch reporting: Ultimately I don’t create the graphs by hand, but I do do the analysis and put the presentation together myself. That’s the “high value” creative part.

Workflow

You probably also know that the JCL to build performance databases and do the reporting is generated using ISPF File Tailoring and some panels.

But what about the actual workflow? In broad terms it’s pretty much all the same – for each engagement: I’d like my “to do” list for a project to be automatically generated. And I might well want some other notes to be automatically generated – perhaps a slide template or a “lessons learned” boilerplate note or something.

I keep notes for most aspects of my life in a very fine service: Evernote. I also keep my “to do” list in Remember The Milk. I’m sure other fine services exist but these are the ones I use – and the ones I know the following technique works for.

I’d like to automate my workflow, as I said, and some of my engagement-related documentation.1

Both Evernote and Remember The Milk supply an email address specific to an account: If you knew my Evernote email address, for instance, you could email in a note and Evernote would store it for us.2 So I can teach any email client how to add notes to Evernote and “to do” items to Remember The Milk. The latter accepts a list of items, along with due dates, priorities etc.

(To find your Evernote email address see here. Likewise for Remember The Milk.)

Between them I’m sure I can automate quite a lot of workflow, while continuing to make careful choices to keep client information secure.

Email and z/OS / TSO Batch

So why not have my JCL generator include some steps to generate this material?

Though that was a rhetorical question it does have an answer. 🙂 You have to make it so – with a SMOP. 🙂

But actually it’s not difficult.

In “Standing On The Shoulders Of Giants”3 Mode, I notice we already have a jobstep – very early in our process flow – that uses XMIT to send a small tracking file, containing a File-Tailored set of information about the study. It’s a flat file but it points the way. It doesn’t use SMTP and it doesn’t include HTML.

I found out the appropriate SMTP address that my z/OS system has access to. With it I can send emails to anywhere – inside IBM and beyond (as Evernote and RTM both are).

Putting It Together

I’ve already created a batch job that can send HTML-formatted emails. It looks like this:

//XMITSMTP EXEC PGM=IKJEFT01,DYNAMNBR=50,REGION=0M
//* 
//SYSOUT   DD SYSOUT=K,HOLD=YES 
//SYSPRINT DD SYSOUT=K,HOLD=YES 
//SYSTSPRT DD SYSOUT=K,HOLD=YES 
//SYSTSIN  DD DDNAME=SYSIN 
//SYSUDUMP DD SYSOUT=K,HOLD=YES 
//SYSIN    DD  * 
    XMIT <smtp server address> NONOTIFY + 
             MSGDSNAME('<userid>.JCL.LIB(SMTPDATA)')
/* 
//*

In the above I chose to use MSGDSNAME rather than DSNAME to point to the data. This stands a better chance of having the EBCDIC translation work right. It points to the actual MIME message:

Helo MVSHOS 
mail from:<martin_packer@uk.ibm.com> 
rcpt to:<to-address>
data 
From:  martin_packer@uk.ibm.com 
To: to-address 
Subject: This is a test
MIME-Version: 1.0 
Content-type: multipart/mixed; 
              boundary="simple boundary" 
                                                                  
You have received mail whose body is in the HTML Format. 
--simple boundary 
Content-type: text/html 
                                                                  
<font face="Arial" size="+2" color="blue"> 
This is Arial font in blue. 
</font> 
<br/> 
<ul> 
<li>One</li> 
<li>Two</li> 
</ul>                                                            
<font face="Arial" size="+3" color="red"> 
This is the Arial font bigger and in red. 
</font> 
                                               
--simple boundary 

This is in what is called “multipart MIME format” – and you can tell this from the “Content-type: multipart/mixed;” line. (The parts are separated by lines consisting of the boundary string prefixed with “--”.) The HTML is obvious and the fact it is to be treated as HTML is indicated by the “Content-type: text/html” line.

One of the things this illustrates is that sending HTML by email isn’t complicated at all.
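As a cross-check of that message format, here is the same structure built with Python’s standard email library. This is not part of the z/OS flow above – just a convenient off-host way to see how the multipart/mixed layout and boundary handling fit together. The addresses are placeholders.

```python
# Build a multipart/mixed message with an HTML part, mirroring the
# hand-coded MIME file used in the batch job above. Standard library only.

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("mixed")          # Content-type: multipart/mixed
msg["From"] = "sender@example.com"    # placeholder addresses
msg["To"] = "to-address@example.com"
msg["Subject"] = "This is a test"

html = (
    '<font face="Arial" size="+2" color="blue">This is Arial font in blue.</font>'
    "<br/><ul><li>One</li><li>Two</li></ul>"
)
msg.attach(MIMEText(html, "html"))    # the Content-type: text/html part

raw = msg.as_string()                 # the wire format, boundary included
```

The library generates the boundary string and the MIME-Version header for you; `smtplib.SMTP(...).send_message(msg)` would then send it, playing the role the XMIT step plays on z/OS.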

Note: The actual “to address” in the “rcpt” line needed a relay address in my case – preceded by an “@” and separated from the eventual address by a “:”. You might need one too.

When I sent this HTML to Evernote it worked fine and I have a nicely formatted note, complete with the title preserved. If you want to understand how Evernote handles emails look here. For Remember The Milk look here.

The note in Evernote looked very much like this:


 
This is Arial font in blue. 
 
  • One
  • Two
This is the Arial font bigger and in red.

As I said earlier, sending an HTML-formatted email is not significantly more difficult than sending a plain text one. I hope this blog post demonstrates that: Examine the code you’re using today to send emails from z/OS and I think you’ll agree. And I think you’ll find cases where it would be a better solution.

On a final note, IBM (and others) have email solutions. And indeed workflow solutions. Those have their own applicability – for the more complex or larger-scale applications.

But if you want “lightweight”, “simple”, “informal” workflow my approach might make sense to you. As it is I’m going to build this, small pieces at a time – like I do most of my development work.


Notes:

1 I’m very clear about not compromising customer data or situations. Customer confidentiality is key – and, as with other cloud services, I can’t store sensitive or identifiable data in Evernote or Remember The Milk.4 Similarly, I’m incredibly circumspect in reviewing customer-related stuff in public places.

2 Obviously this is open to abuse – as anyone with the email address can fill your account up with SPAM. But you can change the email address at any time – and I don’t give it out often.

3 When I first heard this cliché I thought it was Albert Einstein. And later on I thought (slightly more accurately) it was Isaac Newton. Obviously giving Maths/Physics giants more credit than they’re due. I wonder why. 🙂

4 As one of the authors of this piece of the IBM Social Computing Guidelines I’d urge you to read this short document to understand IBM’s stance.

Microwave Popcorn, REXX and ISPF

(Originally posted 2013-01-21.)

To me learning is like Microwave Popcorn.

Specifically, turning a flat, unpopped bag into a fully popped one.

Part of the fun of making popcorn is watching the bag and listening to the poppings: As each kernel pops it pushes the bag out.

And so it is with learning: Every piece of knowledge contributes to the overall shape.

Anyhow, enough of the homespun “philosophy”. 🙂

I was maintaining some ISPF REXX code recently and it caused me to come across two areas where REXX can really help with ISPF applications:

  • Panel field validation.
  • File Tailoring

The introduction of REXX support is not all that recent – I think z/OS R.6 and R.9 were the operative releases – but I think most people are unaware of these capabilities.

I’m not an ISPF application programmer so if you want the technical details look them up in the ISPF manuals. But here’s the gist of why you might want to consider them.

Panel Field Validation

On one of our ISPF panels we have eight fields that together represent a time/date range. You can (with VER(), as you probably know) check these fields – two sets of year, month, day, hour, minutes – have numeric values and aren’t blank. I don’t think you can check things like whether the end date is after the start date, or whether these two dates are before today. For that you need REXX:

With *REXX in the )PROC section of the panel (terminated with *ENDREXX) you can inject REXX code. If you set variable zrxrc to 8 (and set zrxmsg to an appropriate ISPF message number) you can fail the validation. If you set zrxrc to 0 you can pass it.
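For illustration only, here is the kind of cross-field check VER() can’t express, sketched in Python. In the panel itself the equivalent logic would sit between *REXX and *ENDREXX, setting zrxrc and zrxmsg; the function name and message texts here are invented.

```python
# Cross-field validation of a time/date range: checks that VER() alone
# can't do. Returns an (rc, message) pair in the spirit of zrxrc/zrxmsg:
# rc 0 passes validation, rc 8 fails it.

from datetime import datetime

def validate_range(start, end, now=None):
    """Require end after start, and both before 'now'."""
    now = now or datetime.now()
    if end <= start:
        return 8, "End date/time must be after start date/time"
    if start >= now or end >= now:
        return 8, "Both dates must be before today"
    return 0, ""

# End before start: validation fails with rc 8
rc, msg = validate_range(datetime(2013, 1, 2, 9, 0),
                         datetime(2013, 1, 1, 9, 0),
                         now=datetime(2013, 2, 17))
print(rc, msg)
```

The panel REXX would assemble the two datetimes from the eight input fields before running checks like these.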

Of course you might be in a position to do this all in the REXX that causes the panel to be displayed in the first place. But there are two reasons why I think you’d want to do it in the panel definition itself:

  • It’s a lot simpler than having the driving REXX redisplay the panel if the fields don’t validate.
  • Keeping all the field validation logic together – VER() and REXX – is much neater.

But you have the choice.

File Tailoring

Again driven by REXX, the code I maintain uses ISPF File Tailoring to create JCL from skeleton files, based on variables from ISPF panels.

You can write some quite sophisticated tailoring logic without using REXX. But with REXX you can do so much more.

(My first test case used the REXX strip() function to remove trailing blanks. Of course you can do that with )SETF without REXX.)

If you code )REXX var1 var2 … then some REXX then a terminating )ENDREXX you can use the full power of REXX.

In the above var1 etc are quite important: If you want to use any of the File Tailoring variables (or set them in the REXX code) you have to list them.

Note: You can use say to write debugging info to SYSTSPRT.

I don’t believe you can directly emit lines in REXX but you could set a variable to 1 or 0 and use )SEL to conditionally include text.

Again, you could perhaps do some of this in the REXX that calls File Tailoring. But I’d prefer as much of the generation logic as possible to be in the one place: The File Tailoring skeleton. This is particularly true of variable validation when you consider you can use )SET in the skeleton to set the value of a variable – after the validation code has run.


So these two items – panel field validation and file tailoring – were areas I unexpectedly found myself researching. I won’t claim they’re core to my “day job” or particularly profound but certainly they proved handy. If you find yourself developing with ISPF facilities they might save you a lot of time.

And certainly I feel my grasp of ISPF is that much better – but maybe because of the 2000 lines of ISPF REXX I reformatted and adopted in the process. 🙂

A Good Way To Kick Off 2013 – Two UKCMG Conference Abstracts

(Originally posted 2013-01-13.)

First, a belated Happy New Year! to everyone. It’s been a busy past few weeks, not least because of the customer situation I’m working on.

But, to kick off 2013 here are the two conference abstracts I submitted for the UKCMG Annual Conference. No pressure, guys. 🙂




Time For DIME

In recent years memory has become cheaper, or certainly more plentiful. This enables us to do new things, or old things faster and better.




I believe it is indeed Time For DIME (Data In Memory Exploitation). But we’ve been here before – in the late 1980s. Much has changed but the basic concepts haven’t. So this presentation reminds us of "the way we were" but brings things right up to date. It covers why you’d want to run a DIME project and how to go about it: It covers both the project phases and technical aspects, preparing you to make a quick start on realising the benefits of DIME.




While the main example presented here is DB2, the presentation also discusses Coupling Facility memory exploitation, as well as a number of other examples.

The Life And Times Of An Address Space

A typical z/OS system has a wide variety of address spaces. So much so that managing their performance can be difficult.




This presentation prepares you to handle this diversity, discussing what’s common to all and what’s different. Centred around SMF Type 30 records, it guides you in deciding when to rely on common instrumentation, and when to go to more specific data, such as CICS instrumentation or data set records.

 


Personally I find it very difficult to write abstracts – particularly as you end up trying to write them before you write the actual presentation. So the finished result can be different. But then every time anyone ever gives a presentation it turns out at least a little different.




As for the UKCMG Annual Conference, this is an event I’ve been proud to present at most years in the last 20. It’s always been a great crowd and a good opportunity to catch up with what people are doing. This time it’s in London at the CBI, instead of being out in the country. I don’t know how much difference that will make. Come and join us if you can. Here’s the link: UKCMG Annual Conference, London, May 14-15, 2013

And a final thought: I write about what I want to write about (and what I think is important). If you have ideas of what I should be presenting on and writing on do let me know.

DB2 Timings For CICS Transactions – With Thread Reuse

(Originally posted 2012-12-11.)

There was a time before blogging 🙂 and what I’m about to talk about is something I used to explain quite often back in those days.

Reminded by a current customer situation – and needing to explain it again – I thought it time to do it this way.

(Here I’m presenting a simplified view, but one that covers the salient features that might help you.)

The CICS / DB2 Connection code provides a number of possibilities for optimisation, one of which is Thread Reuse. This post won’t discuss the mechanics of this in any depth but aims to explain the effect of it on DB2 instrumentation – in fact DB2 Accounting Trace (SMF 101).

Consider the following diagram, with time flowing from left to right…

I’ve shown the two scenarios one above the other. Blue bars represent periods of Class 1 elapsed time. Green bars periods of Class 2 elapsed time. Notice how the blue bars are unbroken but the green ones can have gaps: Because Class 2 represents the time actually in DB2 there can be time between “stanzas”. (But SMF 101 doesn’t record the timings of the gaps – just two numbers which when subtracted give you the total time.)

The diagram shows three CICS transactions running one after the other, with Thread Reuse and without:

  • In the case without Thread Reuse three threads have to be created and terminated, one after the other. This is an expensive process, which is why Thread Reuse is used.
  • In the Thread Reuse case the thread is reused twice, avoiding the thread create-and-terminate lifecycle.

The reason for discussing this is that with Thread Reuse DB2 timings in Accounting Trace (SMF 101) work a little differently.

(One thing that remains unchanged is the relationship between Class 2 elapsed time, Class 2 CPU time, the Class 3 wait components, and what’s not accounted for but still part of Class 2 elapsed time. So I won’t discuss those here. What’s also not changed is Class 1 CPU time – so computing Non-Class 2 CPU time is the same – Class 1 CPU minus Class 2 CPU.)

The most important thing to notice in the diagram is the difference in Class 1 time behaviour:

Instead of – as with the non-reuse case – starting and ending at the transaction boundaries, Class 1 time now ends when the next transaction that uses the thread starts. (And that’s when the DB2 Accounting Trace (SMF 101) record is produced.) This means, as you can see, a lot of the Class 1 elapsed time has nothing to do with executing the transaction. Obviously, under these circumstances, you can’t use Class 1 elapsed time for much.

An obvious question is “when can you trust Class 1 time in a CICS environment?”

Fortunately the answer is quite simple: The value of a field in the 101 record (QWACRINV “Reason For Invoking Accounting”) determines whether you can.

If QWACRINV has a value signifying either “New User Signon” or “Same User Signon” you know the thread was reused. Otherwise – probably with the value signifying “Deallocation” – you know it wasn’t.

(If you wanted to know how effective Thread Reuse was you’d calculate – as my code does – some ratio relating these two Signon values to Deallocation.)
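That bookkeeping can be sketched as follows. This is illustrative Python using symbolic stand-ins for the QWACRINV reason codes – the real field holds numeric values, so check the DB2 mapping macros for the actual mappings.

```python
# Thread-reuse bookkeeping over DB2 Accounting Trace (SMF 101) records.
# "qwacrinv" values are symbolic stand-ins, not the real numeric codes.

REUSE_REASONS = {"new-user-signon", "same-user-signon"}  # thread was reused

def reuse_ratio(records):
    """Signons per deallocation: a rough measure of reuse effectiveness."""
    signons = sum(1 for r in records if r["qwacrinv"] in REUSE_REASONS)
    deallocs = sum(1 for r in records if r["qwacrinv"] == "deallocation")
    return signons / deallocs if deallocs else float("inf")

def trust_class1(record):
    """Class 1 elapsed time is only meaningful when the thread was NOT reused."""
    return record["qwacrinv"] not in REUSE_REASONS

recs = [{"qwacrinv": "same-user-signon"},
        {"qwacrinv": "new-user-signon"},
        {"qwacrinv": "deallocation"}]
print(reuse_ratio(recs))      # 2.0
print(trust_class1(recs[2]))  # True
```

The higher the ratio of signons to deallocations, the more effective thread reuse is – and the fewer records whose Class 1 elapsed time you can use.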

In the case I’m dealing with some of the transactions in the CICS region use Thread Reuse and some don’t. For those that do I’m discarding the Class 1 elapsed time and for the rest I’m using it to give some understanding of the timings outside of DB2.

I have to be very careful when I say “some understanding of the timings outside of DB2” – but that’s really a topic of a completely different discussion, involving things like Unit Of Work Identifiers. (CICS PA does a nice job of bringing it all together – to the extent it can be.)

For now I wanted to explain why I’m careful in handling DB2 Class 1 Accounting elapsed times for CICS transactions. And to socialise a briefing for friends of mine. I had honestly aspired to be brief – and it’s frightening to me how much detail I’ve left out – but brevity was not to be.

Two Potential New Presentations – Coming Soon?

(Originally posted 2012-12-09.)

I’m trying to put some structure on the idea of Life And Times Of An Address Space. The best way to do it, I think, is to attempt a taxonomy of address space types. So here’s an initial stab:

One thing that immediately comes to mind is you can map useful SMF record types onto it:

Three things:

  • I think it immediately betrays my bias towards SMF 30 in what I get to write about. But I think that’s the point: Making instrumentation tell useful stories.
  • I haven’t attempted to draw in the product-specific instrumentation. I may well do so as another point of the presentation is likely to be “to get really useful you sometimes need to go to product-specific stuff”.
  • On a technical note, batch jobs run in initiators – which are typically long running, though (e.g. with WLM-Managed Initiators) this might not be the case. In any case I think this is a useful simplification that might survive this writing process.

At this point the purpose of publishing this “0.0” version (as you’d see from the URLs of the two pictures) is in case someone says “you’ve got the taxonomy all wrong” or “I don’t like the direction you’re headed in”. Though you might find it a useful taxonomy in its own right.

Yes, I know the annotation is unsubtle. It’s an experiment with Skitch annotation. Of course my home-grown HTML5 Canvas annotation code is much nicer. 🙂

And the product logos are also probably not the final ones I’ll end up with: They’re really my first go at adding graphics to a MindNode-produced mind map. (And I’ve not tried using MindNode for taxonomy before.)

Still, it’s better than a beer mat. (Who am I kidding?) 🙂

Now, if you were to annotate either graphic and send it back to me that’d be interesting, dontcha fink?

Two Potential New Presentations – Coming Soon?

(Originally posted 2012-12-08.)

Every year I like to debut one new presentation, though that isn’t a firm rule: In 2012 I debuted “Send In The Clones” (SITC)1 and “I Know What You Did Last Summer” (IKWYDLS), but actually only the first one was written in 2012.

Of course presentations are “slow trains coming”: I widely trailed my desire to write IKWYDLS in 2011 and finally revealed it early this year. (In fact it evolved through the course of the year into “I Know What You Did THIS Summer” and I now refer to it as “I Know What You Did This Last Summer”.) 🙂 Or “IKWYDTLS” for short.

SITC arose much more spontaneously – being initially a presentation for a customer working group. (And it still has some of that genesis in it – with a rather pointed “where from here?” slide left in.)

The point of the above is generally I don’t just get up one day and decide “today I’ll put on a show”: You have to “record” before you can “gig”. (Of course I do just say stuff sometimes.) 🙂

So what about 2013?

I have two ideas swirling around in my head, and I’d like to know if either appeals to you:

  1. Time For D.I.M.E.
  2. The Life And Times Of An Address Space

Time For D.I.M.E.

This is more a “campaign” presentation in that I really do think it’s time (judging by my customer set) for customers – particularly those running z196 and zEC12 machines, but also z114 – to consider memory usage afresh. (DIME is, of course, short for Data In Memory Exploitation.) With the advent – a while back but practically into 2013 – of DB2 Version 10 this becomes even more relevant.

This is probably the presentation my management would be keener I wrote – though it’s one I personally feel strongly about anyway.

The Life And Times Of An Address Space

I like to write occasionally about more abstract things, things with less immediate punch in their message. This presentation is very much in that category. Its origins are, I think, the lower-level pieces of IKWYDTLS. When giving that presentation I had to gloss over the address space piece. And there was so much more I wanted to say than was even on the slides. And stuff has happened this year that makes it even worse – as regular readers of this blog will know.

I also think there’s something of a (pseudo-)intellectual framework to be espoused here: For example, we can view batch jobs and CICS regions as looking very different but actually there is much commonality. I’d like to explore that.

(There is a practical benefit as it’s important to use the commonality but respect the differences when designing reporting.)

I also think it’s important to get beyond the idealised address space and into practical examples, such as CICS and DB2.

(Somehow BBEdit, which I’m writing this in, seems to have learned to prompt me with the words “CICS” and “DB2”.) 🙂

So How About You?

What do you think of those two ideas? Feel free to comment here or in any other way you like. The aim is to take these two ideas (and any others) and turn them into useful material, whether actual presentations, blog posts, analysis code or whatever.

The next step is probably to inflict more of my handwriting on you. 🙂 And, as I’m not so good at graphics, I might collect some napkins from around the world to draw them on and post photos of these rough drafts as we go along. 🙂 Now wouldn’t it be fun to do a presentation composed entirely of photos of drawings on interesting pieces of paper, sides of cows, people holding placards2 etc? 🙂


Notes:

1 Queen fans tend to refer to songs and albums by obscure sets of initials, so that “TMLWKY” is “Too Much Love Will Kill You”, “NOTW” is “News Of The World”, etc. TMI? Perhaps. 🙂

2 I see CICS has already done it. 🙂