Memory Metrics – An Overdue Update

(Originally posted 2011-03-30.)

In 2007 I posted twice on memory metrics.

I should probably have posted an update some time ago. In the second of those posts I said "Obviously copious free frames would suggest no constraint." That’s true, but I would invite installations to consider something else…

Capturing a dump into virtual memory backed by real memory is much faster than capturing it into paging space. (And that in turn is much faster than capturing it into constrained paging space.) Over the past couple of years I’ve progressively updated my "Memory Matters" presentation to cover Dumping and Paging Subsystem design – to reflect this.

So it’s important to consider what your stance on Dumping is. For some customers Availability will be the overriding consideration and they’ll configure free memory to dump into. For others it’ll have to be a compromise – for machine lifecycle and economic reasons. The point is to decide on a stance on provisioning memory for Dumping. And to do it at the LPAR level.

Meanwhile, z/OS Development haven’t neglected this area. I’ve documented the z/OS Release 12 enhancements in "Memory Matters" but in short they are:

  • Batching page-in I/O operations during SDUMP capture eliminates much of the I/O delay.
  • Data captured will no longer look recently referenced, so it will be paged out before other, potentially more important, data.
  • Certain components now exploit a more efficient capture method in their SDUMP exits. For example GRS for SDATA GRSQ data, IOS for CTRACE data, and configuration dataspaces.

I’ve had foils on page data set design, Dumping control parameters, etc. for some time.

But the key point is that dump speed is something you need to factor in to memory configuration and monitoring.

And the thing that caused me to write this post – at last – is a discussion today on MXG-L on UIC. So thanks to the participants in that.

Batch Architecture, Part Zero

(Originally posted 2011-03-29.)

I’m not an architect. I don’t even play one on TV. 🙂 In fact real architects would probably say I’m in the babble phase, architecturewise.

But I’ve been involved in a few situations over the past year or so (and I’m involved in a couple starting round about now) which have led me to the following simple conclusion: Many installations would benefit from drawing up a Batch Architecture. I don’t think this is specific to z/OS-based batch, though we do tend to have more complex batch environments than other platforms. (And modern environments seem to have z/OS-based and other batch mixed together, often in the same application.)

As I say, I’m not an architect so some of what follows will seem to real architects a lot like Officer Crabtree. 🙂 But it’s my thinking – and hopefully some of it resonates with you.

So what do I mean by a Batch Architecture? To me it contains the following elements:

  • A description of the operating environment. This contains things like the LPARs Production Batch will run on, the database systems, message queuing systems and the like. You’d also rope in things like transmission networks, tape subsystems, job schedulers and printers supporting the batch. You might include commentary such as "we use PRDBATHI WLM class for critical production batch, PRDBATMD for most of it, and PRDBATLO for stuff that can run slowly but is still classified as Production".
  • An inventory of applications. Though I’ll talk about this element more in a subsequent post, I’ll note the minimum would be a list of application names and a description for each application, plus the job names (or a rule for classifying jobs into a particular application). There’s a rough sketch of what such an inventory might look like just after this list.
  • An understanding of the interfaces between applications. For example "PABC990D in ABC and PXYZ010D mark the boundary between the ABC application and the XYZ application, the former needing to update the General Ledger before the latter can begin to produce reports". Again, something I hope to write about some more.
  • A view of when the window is – if there still is such a thing as a window. And what has to get done by when – with business justification such as "we have to post status with the Bank of England by this time in the morning".
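
To make the inventory and interface ideas a bit more concrete, here’s one rough sketch of how a fragment of such an inventory might be captured. The shape and the field names are purely illustrative (and, as I say in a moment, it doesn’t really matter which tooling you use):

    // Purely illustrative: the application names and jobs come from the
    // examples above; the shape is just one possibility.
    var applications = [
      {
        name: "ABC",
        description: "Updates the General Ledger",
        jobNameRule: /^PABC/,   // e.g. PABC990D belongs to ABC
        feeds: ["XYZ"]          // XYZ's reports can't start until ABC is done
      },
      {
        name: "XYZ",
        description: "Produces reports from the General Ledger",
        jobNameRule: /^PXYZ/,   // e.g. PXYZ010D belongs to XYZ
        deadline: "post status with the Bank of England by HH:MM"
      }
    ];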

The above, far from exhaustive, list enables you to think about your batch in a structured fashion. Done right, and it doesn’t really matter which tooling you use, it begins to enable you to:

  • Talk about your batch at a level above the individual job.
  • Think about the impact of growing business volumes.
  • Plan for merging batch workloads (particularly topical at the moment).
  • Plan for splitting off workloads.
  • Think about how you can move work around.
  • Consider what happens if there is a problem – whether an application failure or a disaster.
  • Put some structure on modernisation efforts.
  • Tune batch in a structured way.
  • Collate the understanding of your batch that so often is in the heads of very good application experts.

Now, I appreciate most customers have huge batch inventories – often in the tens of thousands of jobs a night – and I think many customers are doing elements of this already. So what’s left to do that’s actually doable? I think quite a lot – and of course it varies from installation to installation.

But I do think some architectural thinking about batch would be really useful for most customers – and I’m certainly going to be thinking more about this myself (including seeking the wise counsel of some real architects). 🙂 At a practical level I’m going to post on how to do some of the building of a batch architecture.

Do You Like The New Look?

(Originally posted 2011-03-25.)

I hope you do. Special thanks are due to Bob Leah, Victoria Ovens and David Salinas for getting me this far:

  • Bob created the new template I’m using (and it is discussed further here).
  • Victoria created the new blog header graphic (of a z196 and a zBX) and David put it up for me.

You’ll have noticed I’m also blogging again – after a gap of about a year. (I talk about that somewhere in the middle of here.) So, a fresh look seems like a very good thing.

developerWorks blogs are built using Lotus Connections Blogs (in turn built with Apache Roller/Velocity) – a flexible platform I’m beginning to learn how to tailor. So now my tweet stream is embedded – for one.

I’d like to add a blogroll and a set of other useful links in. Basic stuff, I know. It’ll take a little figuring out.

So here’s a question for you: Is there anything else you think I should do to make my blog more useful to you? Apart from (or maybe including) some useful content. 🙂

Exiting The Babble Phase?

(Originally posted 2011-03-23.)

… or "The Nightmare That Never Ends". 🙂

The concept of a babble phase comes from Child Development: When children are learning to talk they start by making sounds they think might get them somewhere. (Some might say that’s a phase they never leave.) 🙂 Such a developmental stage is called "the babble phase". (The term has been borrowed by Artificial Intelligence researchers – and I expect Watson went through that at some point.)

I’d like to think I was an early adopter – and I get frustrated when I realise I’m not. But I think it safe to say I was fairly early in adopting a number of Social Networking tools: One of the earlier bloggers on IBM’s internal BlogCentral, one of the earlier users of Twitter (and thanks to Ben Hardill – who responded to my challenge – an early adopter of BlueTwit, the internal analogue). And a fairly early adopter of LinkedIn and FourSquare. The jury’s still out on Quora but I’m there, albeit not that active.

The point about this post – in case you’re still reading – is the stance to take on personal adoption of technology. I’d like to think the sooner you adopt a technology the sooner you exit the babble phase. And then you’re onto the next one. 🙂 Early adoption actually means you make the mistakes before many people are around to notice (particularly the ones you’d seek to hide such mistakes from). 🙂 So, in the words of Bill and Ted, "they do get better" applies here. 🙂 It also means you can exit early – if a technology doesn’t work out.

One other thing early adoption does for you is to give you a body of experience you can use to help others. If you’re like me (and it’s highly likely you aren’t) 🙂 what works for me might work for you. On the other hand, if you see me doing something that you think is not for you, that’s probably useful, too. I wouldn’t want to spend my life as a walking talking antipattern, though.

Seriously, my experiences as a customer-facing early adopter led me to participate in drafting and revising IBM’s Social Computing Guidelines. I’m one of the people who drafted the "customer" elements of it. And I’m one of the people who injected some thoughts on Geolocation into the current version. (It’s fairly obvious stuff, of which Heisenberg might’ve been proud: Don’t give away your location and the matter-in-hand in a way that would damage or embarrass a customer.)


Did you spot the "Watson" reference a few paragraphs back? It’s all wrong of course: It doesn’t contain a hyperlink, it isn’t the "party line" but it is my way of thinking about it: An "authentic voice" on the matter (but possibly not a useful one). 🙂

I’ve been caused to think over the past few weeks "why am I doing Social Networking?" Now, it’s not any kind of reticence or realisation I’ve been wasting my time. 🙂 Far from it. But I feel comfortable sharing with you my motivations:

  • "If it feels good do it" is probably the underlying motivation. It really does feel good communicating with people (or maybe at people). That’s why I do it, not through any sense of self-aggrandisement.
  • I think I have stuff to say. Now, I’m aware that with a single online identity (through whichever tool we’re talking about) I’m writing stuff that only a subset of my audience will latch onto. Hopefully each reader gets enough from me to make it worthwhile – and I know you’ll vote with your "feet" if that isn’t so. Fair enough.
  • I think I can help. You don’t get to spend this long on the planet, and this long trying to become a Subject Matter Expert (SME in the parlance (which reminds me of Red Dwarf for some reason) 🙂 ) without having knowledge you can share with others.
  • But I’m not trying to sell anything (other than the notion that some of the things I’m interested in are interesting).
  • People might remember me when they see me. This is the nearest thing to a commercial or selfish reason I can think of.

So, you won’t find my Social Networking presence optimised to garner millions of links or generate lots of sales. You also won’t find me pretending to be someone I’m not. (You may well find some of what I have to say stuff you disagree with.)

Which brings us back to Watson. While I will admit I’m impressed by Watson I really don’t have a detailed knowledge of its inner workings, nor do I think I need to. But it is a big advance in Artificial Intelligence and so is a reasonably topical thing to link to "babble phase" in a joke. And that’s my authentic voice speaking.

I think "authentic voice" is terribly important: It has to be real people speaking. I’d like you to read this post – if you already know me – and be able to say "yup, that’s Martin alright". I’d like you to be able to trust me and what I’m saying. (And I’d like to earn that trust.)

Now that authentic voice plays out in how I use the tools:

  • Twitter is my natural arena: I hope what I have to say is worth 140 characters or fewer. 🙂 It’s not a force fit at all.
  • I used to blog a lot. I went away from it for a while – thinking the effort of a long-form piece too much for the payload. But I’m now thinking that rather than lighten the wrapping I should beef up the payload. I do think I have stuff that’s worth much more than 140 characters. And I’d invite (perhaps rashly) you to suggest things I should write on. Just understand some things I won’t be able to write about and others will need a good deal of research.
  • FourSquare feels right, too. I’m very pleased to see a wide diversity of customers – in lots of arenas of varying intimacy and formality. (But note the bit above about protecting customers.)
  • My artistic skills are shockingly poor. Which is why you don’t see many pictures.
  • Podcasting feels like too much of an effort. But having heard some awfully-produced ones recently I think maybe I could do better than that. Maybe I will. (And I’m trying not to be suckered into big-budget effects, having just seen Stormtroopers In Stilettos). 🙂
  • Newsgroups are great too. (In fact I’ve been participating since the VM FORUMs in the mid-1980s.) They often cause me to research stuff and I learn a lot from them as well.

So I think each medium has a different role to play – and differently for each of us. (Both as a deliverer and as a victim consumer.) 🙂

What’s also interesting is the fact that each medium is linked: So Twitter feeds Facebook and LinkedIn, FourSquare feeds Twitter. And, of course, I tend to link to new blog entries from Twitter (including this one).


In summary, I’ve found there’s tremendous (personal) value in Social Networking. I’d urge people to take a "fast forward into the future" approach – as we’ll all benefit from more, authentic voices. (I nearly left it at "more authentic" 🙂 but inserted the comma instead. Well, it made me giggle a little.) 🙂

"Publish and be damned" can be taken in at least two ways as well. 🙂

And "fast forward" for the reasons I’ve articulated: Mentoring, exiting the babble phase ahead of the pack, etc. I also think it enables you to evaluate new tools that much quicker.

Of course, like many nightmares you can appear to wake up. But often you’re still in the dream, just somewhere else. So, to answer the question in the title: "no, not really, and I wouldn’t have it any other way". 🙂

One final thought: If, as William Gibson says, "The future is already here — it’s just not very evenly distributed" then I’m extraordinarily lucky how much of it has come my way. If some comes your way, grasp it with both hands.

Learning Android Programming

(Originally posted 2011-03-17.)

A while back I set myself a technical challenge: To learn how to program an Android device.

NOTE: I don’t have a real application in mind, just idle curiosity and a degree of annoyance at the prerequisites to be able to program an iPhone (apart from as a WebApp).

Here’s what I did and how I’ve got on:

My Equipment

  • I bought the cheapest Android tablet I could find. It’s an Intempo I.D. Spirit. It runs Android 1.6 (not upgradable as far as I know) and you have to fairly thump the "touch screen" to get it to do your bidding. It’s slow but that’s OK. At least it’s colour and very portable.

    You’ll find lots of similar or better machines at a low price.

    The point is that – up to a point – the hardware isn’t important for learning on. (But note this one won’t do "gestures" and the like.)

  • I bought a SanDisk microSDHC card reader, which plugs into a USB port on my laptop. (And an 8GB card to go in it.)

    While I could’ve used wireless to install, it seemed much easier to just do the card-juggling thing. And I could’ve gotten away with a much smaller memory card.

  • I installed the Android toolkit on my laptop. I’m running Ubuntu Linux as it happens. I’m sure you can do this on other platforms.

  • I bought "Hello Android" by Ed Burnette (Third Edition, which covers Android 2 as well as the 1.6 I’m targeting). The website for the book is here.

That’s all: A machine, a way of transferring programs, the SDK and a book. What more could you need?

I experimented with the Eclipse environment for Android development but couldn’t get it to build. (But then I like to use a text editor and manually drive the build tools anyway – just to get a feel for what’s really going on. If I were a real Android application developer I’d probably make Eclipse work for me.)

My Experience

The "Hello, Android" book is well written – easy to follow but not condescending. And the website has downloads and errata. Ed’s clearly taken this seriously. I’ve not had a book from "The Pragmatic Programmers" before but I think I’d buy others if a cursory glance suggested they were similarly executed.

First the book takes you through creating a "Hello, Android" application – using the SDK. This worked well. I did have to look up how to sign the application using keytool and jarsigner (the latter being invoked automatically by the ant script the SDK creates to enable you to build). It ran just fine when installed from the card into the Android device.

Then, and here’s the real meat of it, the remaining chapters teach you how to build a Sudoku application. The book does this in an incremental fashion, which really works well.

It really helps if you’re already familiar with Java and XML: You’d be editing those sorts of files quite a lot.

My Conclusion

It really is possible to go from zero to being reasonably competent in building an Android application with not much hardware, a flash memory card, the SDK, a text editor and a good book. And that’s the way it should be, I think.

The other nice thing is it seems to be something you can learn in small chunks – maybe an hour at a time.

When Display Commands Aren’t Good Enough

(Originally posted 2011-03-13.)

There’ve been times in the past when a request for extra data in SMF has been met by "you can issue a DISPLAY command to get that". (Another variant is "you can go to the HMC for that".)

I’m here to tell you why I don’t think that’s a brilliant answer:

  • Such a command is a point-in-time thing.

    Systems nowadays are much more dynamic than they were. Something as simple as the number of engines assigned to an LPAR is highly likely to change – from one moment to the next.

    So you really can’t tell what happened last week from the results of a command you issued just now.

    And automating the command on a timer pop is a touch fraught as well.

  • The output from commands is usually much harder to parse than well-designed written-out records.

    (That’s true whether we’re talking about SMF or some other kind of instrumentation.)

  • The Systems Management environment is worse, too.

    I’m not sure I really want to permit Performance people to issue operator commands, just so they can get instrumentation. Look, we’re all splendid lads and lasses (I’m quite sure) but I really don’t think it’s good for Systems Security to let us do it: To make it even acceptable we have to put bounds around what commands and parameters are permitted.

    (Think of "F LLA" as a good example of where the same command can be used to change things and get some answers.)

    And as for going to the HMC… 🙂

  • An interesting case is the DB2 Catalog: While it generally has static information in it there is some time-driven stuff (such as in SYSCOPY and the "HIST" tables).

I think we’re very lucky in z/OS to have good instrumentation in the form of SMF (and possibly things like DB2 logs). It’s the sort of thing many other platforms should be envious of. And, yes, I do value the ability to get "here and now" information (not least for driving Automation).

One other thing: If the information is available for an operator command it’s just a SMOP (Simple Matter Of Programming) to get it into SMF. (Yes, I know it’s not quite that simple: Having rubbed shoulders with developers for 25 years I do appreciate that.)

So you’ll see why I keep pushing for stuff to go into SMF. And I hope you’ll keep pushing, too. 🙂

WLM Velocity – “Rhetorical Devices Are Us”

(Originally posted 2010-01-24.)

I’m beginning to look at performance data slightly differently these days…

As well as plotting things by Time Of Day (which our tools have done for 25 years) I’m beginning to plot things more directly with load. (Time Of Day is sort of code for With Load but not really – telling a story people can relate to more directly.)

The first instance of this "with load" approach was plotting CPU per Coupling Facility request (and also request Response Time) against request rate. That’s proved invaluable (as you will see in previous blog entries).

The second instance is what I want to talk about now…

I plotted – for a single Service Class Period – velocity against CPU consumed. As it happens I had 200 data points, ranging from almost no CPU to 7 engines or so. CPU consumed is on the x axis and velocity is on the y axis. One further wrinkle: Each day was plotted as a differently coloured set of points (with different marker styles as well), enabling me to compare one day against another.
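
In case you fancy experimenting with something similar, here’s a minimal sketch of how each plot point might be derived. The field names are placeholders for whatever your SMF Type 72 reduction produces; the velocity calculation itself is the standard WLM one (using samples divided by using plus delay samples):

    // A sketch, not my actual tooling: one "interval" object per RMF interval
    // for the Service Class Period in question.
    function toPlotPoint(interval) {
      var velocity = 100 * interval.usingSamples /
                     (interval.usingSamples + interval.delaySamples);
      return {
        x: interval.cpuSeconds / interval.intervalSeconds, // engines' worth of CPU
        y: velocity,
        series: interval.day // one colour / marker style per day
      };
    }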

I’m not going to share the graph with you – as it really would be abusing customer confidence. But suffice it to say it was interesting…

As you go from no workload all the way up to 2n engines the following happens: The velocity starts out low and rapidly rises to well above the goal velocity, staying there until n engines’ worth of CPU. Then it steadily but slowly declines to well below the goal velocity. At the highest levels of utilisation the velocity achieved is about 20% of the goal. These highest levels of utilisation, though, appear to be "on the curve" – albeit outliers in x value terms.  I think that’s an interesting dynamic that says at some point the achievement drops off and can’t be sustained at or near the goal level.

The second thing that I noticed was that the points get more widely distributed as utilisation increases – most notably around the point where the velocity starts to drop. It’s a most beautiful broadening out. So we get into a position of unstable velocity. Again not a good thing.

Finally, let’s consider the days themselves. It turns out they’re all pretty much alike, with two exceptions: All the "2n engine" outliers are from one day – a problem day. Also, on the part of the curve where the velocity is dropping away the "problem day" data points are spread both above and below the others. Again we’re getting instability of outcome.

I really wish I could share this prototype chart with you – it’s got truly beautiful structure. I’m going to "hand create" such a chart a few more times with different customers’ data and then "shrink wrap" it into my analysis code. If you get to see it I think you’ll like it. It could rapidly grow to be my new favourite rhetorical device. 🙂

Of course the above only works for Velocity-based Service Class Periods but I’m sure I could dream up an obvious analogue for the other goal types. (PI might be the unifying concept but it doesn’t, in my view, pass the "keep it real" test, not that Velocity is that connected to delivered reality anyway.)

And I share it with you in case it’s something you’d like to experiment with.

Going Global

(Originally posted 2010-01-22.)

In the interstices between finishing off a "rush job" piece of analysis for ONE customer and a conference call with a vendor on behalf of ANOTHER it’s time to catch up with a piece of news…

As some of you will know I have a new job in IBM…

I’ve joined Software Group’s Worldwide Banking Center of Excellence (WWCoE for short) as their z/OS System Performance person.

So, it’s a pretty similar job but with a new focus: The World. 🙂 And, more specifically, mainly banks. Mainly, but not ENTIRELY, banks. So, I know some of my readership is from customers I’ve worked with in the past who don’t happen to work in banking. I don’t regard this as "so long and thanks for all the fish" as far as they are concerned.

And I don’t reckon to be doing fewer conferences but (hopefully) more. And already the season is shaping up that way.

For me, I like the travel and I like the chance to work with customers I’ve not reached yet, some of whom have REALLY thorny problems I can help with.

As my Dad said recently on the phone "it’s the job you’ve always wanted". And now I’ve got it, watch out World. 🙂

If The Cap Doesn’t Fit…

(Originally posted 2010-01-16.)

… swear at it. 🙂

No, I KNOW that’s not right – but it’s (for me) an irresistibly bad pun. And it’s a natural reaction, too. 🙂

In a recent customer situation I looked at the RMF Workload Activity Report data for a number of service classes. One WLM Sample count was particularly high: "Capped". In fact I look at the SMF data itself, with tooling, and the actual field is R723CCCA. (An IBM Development lab HAD looked at the data through the RMF Postprocessor "prism" and come to the same conclusion.)

It turns out, however, that the service classes in question aren’t part of any WLM Resource Groups. (There IS a service class that is subject to Resource Group capping but it’s not involved here.)

So, how can this be?

A piece of background will help:

The reason I had been asked to look at the SMF data was because a large dump episode had taken rather longer than it should have. It’s the usual lesson of "don’t dump into already busy page packs". The best way to ensure this doesn’t happen is, of course, to dump into memory. (Which might not be affordable, but it IS the best way.)

What had in fact happened was that the system had become under extreme Auxiliary Storage stress. And this had been my suspicion all along.

I’m indebted to Robert Vaupel of WLM Development for confirming this:

Capping delays occur when an address space in the service class is marked non-dispatchable. This can occur when Resource Group capping takes place (switching between non-dispatchable and dispatchable in defined intervals) or when a paging or auxiliary storage shortage occurs and the address space is detected as being the reason for it.

In the above the address spaces are related to dumping, of course.

And the reason I asked Robert is that R723CCCA is populated by a WLM-maintained field (RCAECCAP from IWMRCOLL) – it always pays to understand the source of RMF numbers.

So, if you see values in R723CCCA when Resource Group capping is not in play this might be the cause. I’ve not seen this documented anywhere.
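
If you want to check for this in your own data the test is roughly as below. I’m assuming the SMF Type 72 Subtype 3 records have already been reduced to something queryable; only R723CCCA is a real field name – the rest are stand-ins for whatever your tooling provides:

    // A sketch only: the interesting intervals are those where "Capped"
    // samples show up for a service class period that isn't in any
    // Resource Group.
    var suspects = intervals.filter(function (r) {
      return r.r723ccca > 0 && !r.inResourceGroup;
    });
    suspects.forEach(function (r) {
      // Candidates for the paging / auxiliary storage shortage case
      // Robert describes above.
      console.log(r.system + " " + r.serviceClass + ": " +
                  r.r723ccca + " capped samples");
    });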

(One thing I’d NOT been crisp about – but Robert firmed up in my mind – is that "Capped" samples have NOTHING to do with Softcapping or LPAR Capping in general. That’s a whole ‘nother story.)

So, there may be a moral tale here: If you THINK the cap doesn’t fit – it might well be the case it doesn’t. 🙂

What I Did On My Vacation

(Originally posted 2010-01-04.)

First of all, a happy and prosperous 2010 to one and all.

As with most vacations it’s been a time partially filled with playing with technology and learning stuff there isn’t (legitimate) time to learn about during the rest of the year.

So, lest the rest of this post make you think I ONLY play with web stuff 🙂 I present to you a short list of REALLY good other things from the past few weeks:

  • Avatar in 3D (as the great Doctor Brian May recommended).
  • Uncharted 2 (on the PS3).
  • Beatles Rock Band (also on the PS3).
  • Neil Gaiman’s "American Gods".
  • The company of friends and family.

Now onto the "geek stuff": 🙂

My Performance Management tooling (standing on the shoulders of giants, as it happens) produces reports and charts as Bookmaster and GIFs, respectively. (Actually the GIF bit I built mid-year 2009.)

Some time in late 2009 I installed Apache on my Thinkpad – with PHP support. That enabled me to treat my laptop as an automation platform. I also installed Dojo and B2H. (B2H is a NICE but old piece of REXX that takes Bookmaster output and converts it into HTML.)

So this PHP code allows me to download all the GIFs and Bookmaster source and display it on my laptop.

In November I wrote some PHP code to selectively bundle the GIFs into a zip file – to make it easier to share them with colleagues and customers. (If YOU get one from me I hope you can readily unpack and view its contents.)

In mid-December I took this zip code and modified it to create OpenOffice ODP files from selected GIFs. Although they were legitimate ODP files OpenOffice couldn’t read them – but KOffice on Linux COULD. And when KOffice wrote them out again OpenOffice was then able to read them. (I’ve not got to the bottom of this but it’s something to do with some assumptions OpenOffice makes about XML.)

Vacation Learning and Developing

I think it’s fair to say I’ve been using "interstitial" time to play with stuff and get things built.

Learning How To Hack The DOM with jQuery and Dojo

(For those that don’t know, jQuery and Dojo are JavaScript frameworks – free and Open Source.)

The first thing I did was to install jQuery and buy the excellent O’Reilly "jQuery Cookbook". This introduced me to a better way of parsing HTML / XML. It uses CSS selectors as a query mechanism – which is REALLY nice.

The second thing I did was to see if Dojo could do something similar. It turns out that dojo.query is pretty similar and converging on jQuery’s capabilities. (1.4 adds some more.) If you’re wedded to Dojo (as I am) I recommend you look at dojo.query and (related) NodeList support. It’ll make "hacking the DOM" much easier. (And later developments built on this.)
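
As a flavour of what that looks like (Dojo 1.4 syntax; the selector and the styling are purely for illustration):

    // Find every paragraph directly inside a div of class "section" and dim
    // it. The equivalent jQuery call reads almost identically.
    dojo.addOnLoad(function () {
      dojo.query("div.section > p").style("color", "#666");
    });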

(If you’re looking for a good introduction to Dojo try Matthew Russell’s “Dojo: The Definitive Guide”, also published by O’Reilly. It could do with updating for the next release but it’s perfectly fine for 1.4.)

Using PHP To Simplify Dojo Development

I now have a small set of PHP functions I’ve built up over the months that make it very easy for me to create a web page that takes advantage of Dojo. So, for instance, it’s very easy to write the stuff in the "head" and "body" tags to make Dojo create widgets (Dijits) and pull in the necessary CSS and javascript.

One problem I wanted to solve was to prettify the HTML that B2H generates. It’s at the 3.2 level and is really not at all "structural" so CSS styling would prove to be a bear. (It has no class or id attributes, for example.)

Dojo can automate (with xhrGet) the asynchronous loading of files from the server. So the first thing I taught my PHP code how to do was to load some HTML and then to insert it (via innerHTML) below a specified element in the web page. (At first I used "id" as the anchor but then used dojo.query (see above) to allow the HTML to be injected ANYWHERE in the page.)

(Because not all the data I want to display in a page is HTML I added a "preprocess the loaded file" capability. So, for example I can now take a newline-separated list of names and wrap each name in an "option" tag.)
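
Stripped of the PHP that generates it, the core of the load-and-inject helper boils down to something like the sketch below. The names (injectFile, the URL, the target selector) are made up for illustration rather than lifted from my actual code:

    function injectFile(url, targetSelector, preprocess) {
      dojo.xhrGet({
        url: url,
        handleAs: "text",
        load: function (data) {
          if (preprocess) {
            data = preprocess(data); // the optional "preprocess the loaded file" step
          }
          // dojo.query means the HTML can be injected anywhere in the page
          dojo.query(targetSelector).forEach(function (node) {
            node.innerHTML = data;
          });
        }
      });
    }

    // The "option" example: a newline-separated list of names becomes a set
    // of option tags.
    injectFile("names.txt", "#nameSelector", function (text) {
      return dojo.map(text.split("\n"), function (name) {
        return "<option>" + name + "</option>";
      }).join("");
    });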

So, I can now pull in HTML from a side file. The point is to be able to work on it…

Injecting a CSS link was easy. It’s just a static “link” tag.

But some parts of the dragged-in HTML aren’t really distinguishable from other parts. So I can’t style them differently. So I wrote some more code to be able to post-process the injected HTML (once it’s part of the page). So, for example, a table description acquired a “tdesc” class name – and so CSS selectors can work with that. To do the post-processing I leaned heavily on Dojo’s NodeList capability – as it made the coding MUCH easier.
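
Again just as a sketch (only the “tdesc” class name comes from my real code; the selector is illustrative):

    // Once the B2H output is part of the page, decorate it so that ordinary
    // CSS rules have something to grab hold of.
    dojo.query("#report table + p").addClass("tdesc"); // table descriptions
    // ...after which a stylesheet rule such as .tdesc { font-style: italic; }
    // can pick them out.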

So now, if I show you an HTML report based on your data it should look MUCH prettier. (I’ve been showing customers their machines and LPARs as pretty ugly HTML.)

Dojo TabContainer Enhancements in 1.4

Some time over the vacation I installed Dojo 1.4 and converted from using 1.3.2.

I hadn’t expected this but the dijit.TabContainer widget that I was already using to display GIFs got enhanced in 1.4…

  • Instead of multiple rows of tabs you (by default) now have one – with a drop-down list to display all the tab titles. (Amongst other things this means a PREDICTABLE amount of screen real-estate taken up by the tabs.)
  • Scroll forwards and backwards buttons to allow you to page amongst the tabs. (Actually left and right arrow keys allow scrolling as well.)

Altogether it’s a much slicker design. I’ve opened a couple of enhancement tickets.

These really are “fit and finish” items but they would help with a11y (Accessibility) as well. (I’ve made contact (via Twitter) with IBM’s Dojo a11y advocate and she’s aware of these two tickets.)

Conclusion

This has been a long and winding blog post. But I think it illustrates one thing: Through small incremental enhancements (done in “interstitial time”) you can make quite large improvements in code. But then, this IS hobbyist code.

I’d also like to think I learnt a lot along the way.

Now to go explain to my manager why I’d like (as a mainframe performance guy) to become a contributor to the Dojo code base. 🙂