How Many Browsers Do You Have On Your Machine?

(Originally posted 2008-09-20.)

Well, how many do you have?

I’ll admit to 4 on my Windows XP Laptop (and 2 versions of 1 on my ASUS EEE PC, running Linux)…

I firmly believe in having an “emergency browser”. At one time that would have been Internet Explorer – for two reasons:

  • To recover a broken browser. As I mainly use Firefox Nightly builds I sometimes get cases where I need a fresh install of a prior nightly to get me out of trouble. (Nightlies are the bleeding edge in Firefox terms, nice but sometimes too bleeding edge.)
  • To access pages that Firefox has a problem with.

Nowadays, however, there are almost no web pages I can’t read with Firefox. So it doesn’t have to be Internet Explorer anymore.

In fact IBM really doesn’t care what browser I use – so long as I don’t want support. 🙂 (I haven’t wanted support for probably 20 years.) 🙂 Certainly, to be serious, there is strong endorsement by all and sundry for Firefox, even though it isn’t in any formal sense the mandated browser.

So what do I have now?

  • Firefox 3 Nightlies (as I said before) as my main browser. And I occasionally get involved in design or bug spotting discussions.
  • Internet Explorer 6 – seldom used.
  • Safari as my real “emergency browser” at the latest released level. Webkit goes on apace and so I might take a more bleeding edge stance soon.
  • Google Chrome – just out of curiosity.

One thing to note now is that there is a great race going on between the various browsers for speed, especially (this year) in their javascript execution engines.

And the significance of the “javascript wars” is that, through means like Adobe AIR and toolkits such as dojo, the web experience (and perhaps more general application experience) is becoming increasingly dependent on javascript. And heavy execution at that.

So the browser war may have become dull (or maybe not) but it’s fascinating to see the javascript front open up.

But, finally, in case anyone thinks I’m about to shift browser I’m not expecting to replace Firefox as my main browser anytime soon. I really don’t think Google Chrome is a Firefox killer (or an anything else killer, either).

Happy browsing folks and remember you should have more than one browser – not counting the (perhaps ignored) one that came with your operating system.

Minor Good News On Coupling Facility Performance Reporting

(Originally posted 2008-09-19.)

In recent releases RMF have put the machine serial number into both SMF Type 74 Subtype 4 (Coupling Facility Activity) and SMF Type 70 Subtype 1 (CPU Activity). Actually we get two fields in each case: Plant Number and Sequence Number, which you can put together as e.g. “51-12345”.

Soon we’ll get another piece of the jigsaw: Partition Number in 74-4.

This is a small change but a nice useful one…

It finally enables you to correlate the CPU Activity Report view of a Coupling Facility LPAR and the Coupling Facility Activity Report view. I have seen several customer cases where this has been impossible from SMF alone, even by counting engines in both records.
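To sketch the kind of correlation this enables (the field and record layouts here are simplified stand-ins, not the real SMF field names):

```python
def serial(plant, seq):
    """Combine Plant Number and Sequence Number, e.g. "51" and "12345" -> "51-12345"."""
    return f"{plant}-{seq}"

# Simplified stand-in records; the real SMF 70-1 / 74-4 layouts differ.
cpu_lpars = [  # CPU Activity (70-1) view: one entry per LPAR on the machine
    {"serial": serial("51", "12345"), "partition": 3, "name": "CF01"},
    {"serial": serial("51", "12345"), "partition": 1, "name": "PROD"},
]
cf_views = [  # Coupling Facility Activity (74-4) view
    {"serial": serial("51", "12345"), "partition": 3, "cfname": "CF01"},
]

def correlate(cpu_lpars, cf_views):
    """Match each 74-4 CF view to a 70-1 LPAR by (serial, partition number)."""
    index = {(l["serial"], l["partition"]): l for l in cpu_lpars}
    return [(cf["cfname"], index.get((cf["serial"], cf["partition"])))
            for cf in cf_views]
```

The point is simply that serial number plus partition number gives you a join key that works across both record types.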

I intend to say a little more about it in my “Much Ado About CPU – Now with z10 Too” presentation at Expo in October. In fact I’d better get writing the new foils… given that I think it needs restructuring to compress much of the “everybody knows this already” material. (I expect to put quite a bit about z10 in, as well as some more stuff about Coupling Facility CPU such as the Structure-Level CPU field R744SETM.)

System z10 CPU Instrumentation

(Originally posted 2008-09-18.)

Since I got back off vacation in L’Hérault in late August I’ve been working on adding z10 support to our CPU analysis code. It’s quite a substantial set of changes – and I don’t think I’m finished yet. But I’d like to share with you what I’ve learned so far.

But first let’s briefly review what’s changed with z10. (This is a very brief review and not a tutorial on the subjects mentioned.)

  • z10 introduces a bunch of changes in the area of how upgrades – whether temporary or permanent and whether wanted or forced by circumstances – work. So we now have the notion of Permanent and Temporary capacity models (and indeed capacity values).
  • HiperDispatch is a very significant set of changes in the way PR/SM and the z/OS Dispatcher work – especially since they work together.

I’ve had data from one customer who is using HiperDispatch for real. But already I’m seeing “behaviours”.

I would assume, by the way, that MXG already has support for the new fields and has adjusted any calculations that needed adjusting. While I follow MXG-L Listserver I don’t take more than a passing interest in MXG itself. And, also by the way, I’m talking exclusively about Type 70 Subtype 1 in this post.

Capacity Models

We now have four different models in Type 70:

  • SMF70MDL – the original model. (Prior to z990 this software model was the same as the hardware model.)
  • SMF70HWM – hardware model (introduced with z990 because of the book structure)
  • SMF70MPC – permanent capacity model (new with z10)
  • SMF70MTC – temporary capacity model (new with z10)

There are also three capacity ratings:

  • SMF70MCR – corresponding to SMF70MDL
  • SMF70MPR – corresponding to SMF70MPC
  • SMF70MTR – corresponding to SMF70MTC

These are all interesting in an environment where your machine configuration changes – whether through “On-Off Capacity On Demand”, “Capacity Backup”, “Capacity For Planned Events” or whatever. You can now do your usual performance and capacity work even when the configuration changes.

At this point I’m just listing the numbers in my reporting. I suspect I’ll do more when I get performance data from customers who actually do e.g. time-of-the-month upgrades/downgrades (and I know one or two who already do).

HiperDispatch

When looking at HiperDispatch you have to understand there are two major parts to it:

  • Dispatcher Affinity (DA) – from z/OS
  • Vertical CPU Management (VCM) – from PR/SM

Internally I still sometimes hear it discussed using the terms DA and VCM. The point is it has two parts. So there is information in sections of the record related to z/OS and other information in sections related to PR/SM. You have to put the two together.

And here’s the most important bit…

You need to collect Type 70s from ALL z/OS images of any significance on the machine to get the full picture.

A good example of this is understanding how many logical engines are really in play when some of them are parked (as they are in most LPARs).

z/OS – Related Information

SMF70HHF has flags for whether HiperDispatch is supported or is active. These are, fairly obviously, for the reporting z/OS image.

SMF70PPT is the amount of time this engine was “parked” in the interval. (That is, when work is deliberately not dispatched to it.) These are some or all of the “Low Polarization” engines. More on that a little later. But parked engines are important because the new calculation for CPU Busy counts parked engines as not part of the z/OS image’s capacity.
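As a sketch of that idea (simplified, and not the exact RMF calculation):

```python
def cpu_busy_pct(dispatched_s, online_s, parked_s):
    """Rough CPU busy %, treating parked time as capacity the image doesn't have.

    dispatched_s: seconds of work dispatched to the image's logical engines
    online_s:     total logical engine online seconds in the interval
    parked_s:     total parked seconds in the interval
    """
    effective_s = online_s - parked_s  # parked engines don't count as capacity
    return 100.0 * dispatched_s / effective_s if effective_s > 0 else 0.0
```

So 450 dispatched seconds against 900 online seconds looks like 50% busy; if 300 of those online seconds were parked, the same work represents 75% of the remaining capacity.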

PR/SM – Related Information

SMF70POW is used to calculate the Polarization Weight for a logical engine. Logical engines are classified as High, Medium or Low. An LPAR’s weights are spread across its logical engines to ensure the High engines each have a weight corresponding to one physical engine. Each Low engine has a zero weight. Any weight left over from assigning the High weights is assigned to either 1 or 2 Medium engines. (1 if the remainder is more than half an engine, 2 if the remainder would have been less than half an engine.)

You can observe this Polarization Weight distribution using SMF70POW…

The highest value of SMF70POW for an LPAR is a High logical engine, that is 1 whole physical engine. Any values of SMF70POW smaller than that but greater than zero are for Medium logical engines. I’ve seen cases of both 1 Medium and of 2 Mediums for different LPARs on the same machine.
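Following that rule of thumb (just the rule of thumb above, not the full PR/SM algorithm), you could classify an LPAR’s logical engines from their polarization weights like this:

```python
def classify_engines(pol_weights):
    """Classify logical engines as High / Medium / Low from their
    polarization weights (as derived from SMF70POW): the highest
    non-zero weight marks a High engine, smaller non-zero weights
    are Medium, and zero weights are Low."""
    high_weight = max((w for w in pol_weights if w > 0), default=0)
    def kind(w):
        if w == 0:
            return "Low"
        return "High" if w == high_weight else "Medium"
    return [kind(w) for w in pol_weights]
```

For example, weights of [100, 100, 100, 45, 0, 0] give three Highs, one Medium and two Lows.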

Bringing It All Together

So, to understand HiperDispatch you need both LPAR and z/OS image information.

Actually, since IRD was introduced, you’ve had to marry up both perspectives, because Online Time (in the case of Logical CP Management) became a part of the calculation. And now Parked Time is too.

(After a number of years of owning our CPU Analysis code I’ve recast it – for HiperDispatch – in a way that makes it much easier to morph our CPU calculations in case anything else happens. I’m not foretelling anything – just noting that CPU Utilisation is one of those things whose definition will never settle for long.) 🙂

Replace String A With String B But Not If It’s Part Of String A1

(Originally posted 2008-07-31.)

As I mentioned in this blog post DFSORT just shipped a new FINDREP function to do “find and replace”.

I mentioned an example of where I might use it to replace SMFIDs in SMF records. That example generally works well.

But suppose (say, for SMF 42-6 Data Set Performance records) I want to replace “SYS1” with “MVSA” but don’t want to replace “SYS1.” with “MVSA.”.

There’s something useful about FINDREP that saves the day: Only the first match is used. So consider the following FINDREP example:

  OPTION COPY                                                
  INREC FINDREP=(INOUT=(C'SYS1.',C'SYS1.',C'SYS1',C'MVSA'))
  

With this code any data set references of the form “SYS1.” are transformed to themselves – and the cursor moves past them – whereas any other “SYS1” reference is transformed to “MVSA”.
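The “first hit wins, cursor moves on” behaviour can be sketched in Python terms (my analogy of the semantics described above, not DFSORT code):

```python
def findrep(record, pairs):
    """Analogy of FINDREP INOUT pair semantics: at each position the
    FIRST pair whose find string matches is applied, and the cursor
    then moves past the replacement."""
    out, i = [], 0
    while i < len(record):
        for find, repl in pairs:
            if record.startswith(find, i):
                out.append(repl)
                i += len(find)
                break
        else:  # no pair matched here: copy one character and move on
            out.append(record[i])
            i += 1
    return "".join(out)
```

With pairs [("SYS1.", "SYS1."), ("SYS1", "MVSA")] a data set name like "SYS1.LINKLIB" survives intact while a bare "SYS1" becomes "MVSA" – and reversing the pair order would clobber "SYS1." too.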

It makes sense to put the most restrictive find string first. Otherwise the following wouldn’t work…

   OPTION COPY                                              
   INREC FINDREP=(INOUT=(C'SYS1.MODIFYME',C'MVSA.MODIFYME',
     C'SYS1.',C'SYS1.',
     C'SYS1',C'MVSA'))

which will change “SYS1.MODIFYME” into “MVSA.MODIFYME” but no other “SYS1.” string (unless it starts with “SYS1.MODIFYME”).

Of course, given you can use FINDREP as part of IFTHEN there’s a lot more you can do with it. But for now I just wanted to point out the “first hit wins” characteristic that can be useful.

New DFSORT Functions

(Originally posted 2008-07-29.)

Yesterday DFSORT announced a new set of functions – as PTF UK90013. The documentation for it can be found here.

Every year or so there’s a new set of DFSORT functions – and generally they’re “out of cycle” with z/OS releases – although they are incorporated into subsequent releases of z/OS. This means that fewer of you will know about the functions, particularly as we don’t make a big fuss about it at z/OS release announcement time. So you quite possibly, when you move to a new release of z/OS, get new DFSORT functions you don’t know about.

I’m privileged to call the DFSORT developers friends. And I get to “beta” the new code ahead of release. This time, due to other commitments (like the “Parallel Sysplex Performance Topics” Redbook), it’s been difficult to find the time to play much with the code.

So, what’s new?

Here are a few highlights:

  • FINDREP makes it MUCH easier to do “find and replace” operations. Hence, presumably, the name. 🙂

    There are a number of ways of specifying the search string and its replacement. For example, you can specify multiple input strings that get changed to the same output string. You can also define pairs of strings so that you can find and replace multiple strings in one pass over the data. But you can also just specify a single string and its replacement.

    These strings can be specified as character strings (e.g. C’XYZ’) or hexadecimal strings (e.g. X’FFAB’). Or as multiples (e.g. 4X’FF’). And DFSORT Symbols can be used for C’XYZ’ and X’FFAB’ styles (but not the “multiplier” styles).

    You can specify search “margins”. So, you could specify that strings are to be sought between positions 11 and 71, for example.

    You can specify the maximum number of times find and replace is performed for a record. So you could specify for example only the first match is to be replaced.

    You can say what is to happen if a replacement operation causes the output record to become wider than the LRECL.

    If you don’t want the remainder of the record to be shifted left or right after a match you can specify that as well.

    All in all a nicely thought out set of options.

    Here’s an example I actually need today:

    OPTION COPY                             
    INREC FINDREP=(IN=(C'#@$'),OUT=(C'SYS'))
    

    All our systems have SMFIDs beginning “#@$” and our tooling currently has a problem with a “$” character in certain places. Replacing “#@$” with “SYS” gets us out of a hole. And it’s pretty much guaranteed that anywhere in the SMF records we see “#@$” is part of a SMFID. So preprocessing with FINDREP will help.

  • Group operations (using WHEN=GROUP) allow you to identify and operate on groups of records.

    A group of records can be identified in one of two ways:

    • Every n records is a new group. (This uses the “RECORDS=” syntax variant.)
    • All the records between a header record and a trailer record is a new group. (This uses the “BEGIN=” and/or “END=” syntax variants.)

    Here’s an example, straight from the new documentation:

    INREC IFTHEN=(WHEN=GROUP,RECORDS=3,PUSH=(15:ID=3,19:SEQ=5))
    

    specifies groups of three consecutive records. Position 15 for 3 is an identifier that increments for each group. Position 19 for 5 is a sequence number that increments by 1 and restarts at the beginning of each group.

    In the above it’s the PUSH that actually edits the records.

    One thing to note: It’s entirely possible (but not in this example) that records fail to fall into any group. With BEGIN and END it’s possible to have records before the first BEGIN hit and after the last END hit and between an END hit and the next BEGIN hit. Here’s a way of detecting them:

    OPTION COPY
    INREC IFTHEN=(WHEN=GROUP,BEGIN=(1,5,CH,EQ,C'START'),END=(1,4,CH,EQ,C'STOP'),PUSH=(10:ID=1))
    OUTFIL FNAMES=REJECTED,INCLUDE=(10,1,CH,EQ,C' ')
    

    In the above case groups start with a record with “START” in them and end with records with “STOP” in them. The “PUSH” sets a flag for records in a group. The OUTFIL writes the records where the flag wasn’t set to a sidefile.

    Just for grins, I tried a “mis-nesting” or “mis-bracketing” where the input stream was:

    START
    START
    STOP
    STOP
    

    The output was:

    START    1
    START    2
    STOP     2
    STOP
    

    So each “START” record starts a new group, regardless of whether the previous group was terminated with a “STOP” record. So the second “STOP” record isn’t part of any group. Still, I expect most applications will have “well formed” input data, cough cough. 🙂

    RECORDS, if specified with BEGIN or END, has a slightly different role from when it appears on its own. It limits the number of records in a group.

One of the nice things about the above is you can fit them into an IFTHEN “pipeline”. IFTHEN lets you pass records through a sequence of filters – much like real pipelines. WHEN=GROUP, in particular, is always used within IFTHEN, even if there are no other filters (or “stages” as they’re sometimes known). Also, WHEN=GROUP can be intermixed with WHEN=INIT but must come before the other WHEN clause types.

The other significant enhancements are all related to ICETOOL:

  • DATASORT is a new operator that allows you to sort the data records in a data set without sorting the header or trailer records. Header and trailer records are copied in their original order, continuing to bracket the sorted data records.

    You use HEADER, FIRST, HEADER(n), FIRST(n) to denote the first one or n records are the header. Similarly, TRAILER, LAST, TRAILER(m), LAST(m) denote the last one or m records are the trailer.

    You can use OUTFIL to post-process the entire output stream, including header and trailer records.

  • SUBSET is a new operator that selects records based on their record number, for example the first 5 records – FIRST(5). You can specify whether subsetting is done on the way into a sort or on the way out. Again you can use OUTFIL to post-process the records.

    NOTE: If you specify e.g. LAST(n) ICETOOL may have to call DFSORT twice. The first pass is to count the input records. The second is to actually write out to the output data set. Because the first pass doesn’t actually OPEN the output data sets this kind of two-pass approach is fine with a BatchPipes/MVS pipe as the output data set.

  • The SELECT operator is enhanced to allow you to select the first n records with each key or the first n duplicate records with each key. This is useful for a “top list” approach. (I once did something similar with REXX driving DFSORT.)

    While there are FIRST(n) and FIRSTDUP(n) forms there aren’t LAST(n) and LASTDUP(n) forms. But Example 1 in the documentation shows how you get round that using a different sort sequence.

  • The SPLICE operator is enhanced with a new keyword: WITHANY. (As if having WITHEACH and WITHALL wasn’t confusing enough already.) 🙂

    WITHANY creates one output record for each set of duplicates. The first duplicate is written out with the non-blank values of each subsequent duplicate spliced onto it for specified fields. (“Spliced onto” or “spliced into”?) 🙂

    The documentation gives a far better description of this than I can here. Which is another way of saying “SPLICE confuses the heck out of me.” 🙂

  • The DISPLAY operator now allows you to display counts in reports. Formerly you could display totals, maxima, minima and averages (and the “sub” variants of those).
  • The DISPLAY and OCCUR operators have greatly enhanced title capabilities:

    • You can have up to 3 title lines.
    • Each can have up to 3 strings.

    This flexibility enables you to use DFSORT Symbols in titles, including System Symbols. To do the latter code something like:

    //SYMNAMES DD *
    System,S'&SYSNAME'
    Sysplex,S'&SYSPLEX'

    and then in ICETOOL control statements something like:

    TITLE('System: ',System)
  • The COUNT operator has been enhanced to allow you to write its output to a data set. Previously it went to SYSOUT. So you could code something in the form:

    COUNT FROM(EMPIN) WRITE(EMPCT) -
      TEXT('Number of employees is ') -
      EDCOUNT(A1,U08) WIDTH(80)

    to get output like:

    Number of employees is 1,234,567

    A1 is a mask that puts commas in. U08 means “Use 8 digits”.

    COUNT also allows you to add to or subtract from the count with ADD(n) or SUB(m). If criteria like EMPTY are used it’s the modified record count that is used in the comparison (in this case to 0).
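Stepping back to the DATASORT operator described above, its behaviour is easy to picture. This is a Python analogy of what it does, not ICETOOL syntax:

```python
def datasort(records, header=1, trailer=1, key=None):
    """Python analogy of ICETOOL DATASORT: sort only the data records,
    keeping the first `header` and last `trailer` records in place."""
    cut = len(records) - trailer
    head, body, tail = records[:header], records[header:cut], records[cut:]
    return head + sorted(body, key=key) + tail
```

So with one header and one trailer record, ["HDR", "charlie", "alpha", "TRL"] sorts to ["HDR", "alpha", "charlie", "TRL"] – the brackets stay put.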

So there’s lots there to play with. And it’s available right now.

And if you want all the user guides for “recent” function PTFs go here.

Web 2.0 and System z Pilot Workshop – And Other Conversations

(Originally posted 2008-07-11.)

A couple of days ago I attended the pilot of a “Web 2.0 and System z” workshop in the Boeblingen lab. I’m pleased to say the room was full – both with IBMers and non-IBMers. And I think my presentation – which was basically a rant about Web 2.0 Behaviours and why we should adopt them as mainframe folks 🙂 – went OK.

One of the things that was interesting about it was that on greeting Luis Suarez (@elsua on Twitter) we felt it natural to hug – though we’d never physically met before. I think that’s a testament to the power of relationships built through Social Networking. Oh, and I made new friends as a result of the workshop. So welcome @ansi, @frogpond and @rodet – all Twitter users.

Actually there were lots of things interesting about the workshop and another one will be running in the Autumn. I gather there’ll be some “hands on” then, which I’d encourage.

I’m wondering if other people would like to see such a workshop – in other parts of the world.

And it was nice to reprise my presentation the next day in front of WLM, RMF and Capacity Provisioning Manager (CPM) developers.

And now my copy of the O’Reilly book “Dojo: The Definitive Guide” has arrived. So I’m “Web 2.0’ed out” for the week. 🙂 Actually the book looks like it covers the ground quite well. I have a half-jesting rule of thumb: If there’s an O’Reilly book about a technology it’s probably ready. If there’s a “For Dummies” book on the subject it’s probably boring, dull and passé. 🙂

Coupling Facility Subchannel Busy in RMF

(Originally posted 2008-07-11.)

In the same meeting with RMF Development as ERBSCAN / ERBSHOW was discussed we talked about Parallel Sysplex instrumentation. (Both XCF and Coupling Facility aspects.)

One thing I hadn’t noticed is the appearance of Subchannel Busy Percentage as a number in the following three places:

  • In Monitor III
  • In the Spreadsheet Reporter
  • As Overview Condition (“SUBCHBP”)

I think you can do useful work with this but you have to know a few things about it:

  • It is one system’s view.
  • It is an estimate, derived from request rates and service times, both Sync and Async. Essentially you work out the total request time in the RMF interval and divide that by the interval length and the number of subchannels. The implicit assumption is that the service time is all subchannel busy time and that there is no other time when the subchannel is busy. It’s a moderately well-known formula, however.
  • It says nothing about individual subchannel busy levels – nor about paths.
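That estimate can be sketched as follows. This is my reconstruction of the well-known formula, not RMF’s actual code; note that with per-second request rates the interval length cancels out:

```python
def est_subchannel_busy_pct(sync_rate, sync_serv_us, async_rate, async_serv_us,
                            n_subchannels):
    """Estimate subchannel busy % from request rates (per second) and
    average service times (microseconds), Sync and Async.

    Busy seconds per elapsed second = rate * service time; spread that
    evenly across the subchannels.  Assumes service time is entirely
    subchannel busy time and nothing else keeps a subchannel busy."""
    busy_per_second = (sync_rate * sync_serv_us + async_rate * async_serv_us) / 1e6
    return 100.0 * busy_per_second / n_subchannels
```

For example, 1000 sync requests/second at 20 microseconds plus 500 async requests/second at 100 microseconds, over 7 subchannels, comes out at 1% busy.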

This topic is of interest to me for a couple of reasons:

  • On our Residency in May / June we used coupling facilities that were attached by a mixture of ICB and ISC links. Two different link types between the same z/OS and the same coupling facility. It would’ve been really nice to see which link type actually got favoured.
  • My suspicion is that customers would like to do more detailed connectivity analysis and tuning for their parallel sysplexes. For example, if a link doesn’t get used it’d be nice to know that and to be able to trouble-shoot it. (Perhaps someone left it connected as a 20km fibre suitcase.) 🙂

So, I’m wondering… Is link and connectivity information important? What do you need? And why? Or is the Subchannel Busy Percentage estimate good enough for you to work with?

Oh, and thanks to Matthias Gubitz for the phrase “Gutes tun und darüber sprechen” (“Do good and speak about it.”) 🙂 That’s probably the best reason for blogging and such like.

And the “neologism of the week” award goes to Harald Bender for the word “handisch”. Neither English nor German. 🙂 We believe he meant “by hand”. But for now I prefer “handisch”. 🙂

ERBSCAN / ERBSHOW – More Good News

(Originally posted 2008-07-11.)

I mentioned ERBSCAN and ERBSHOW in this post from January. This is a handy pair of tools for displaying SMF records – most notably the ones produced by RMF. For the uninitiated you type ERBSCAN in ISPF 3.4 against an SMF data set and it pops up a list of SMF records. Typing ERBSHOW followed by the record number formats the record – especially well if it’s one that RMF produced.

And the prior post on ERBSCAN and ERBSHOW talked about the “x” parameter for ERBSHOW that enhances the formatting.

So, I was in Boeblingen for a different purpose this week and took almost 2 hours of the RMF Development team’s valuable time. 🙂 We talked about many things, including “futures”. There was, as usual, a good meeting of minds.

Matthias Gubitz mentioned an enhancement to ERBSCAN that is worth relating. It’s in z/OS Release 9, but it won’t’ve made the headlines…

The list of records that ERBSCAN produces is an ISPF Edit session. So you can do ISPF Edit things like using Find and Exclude All…

In the past I’ve used it to navigate to e.g. SMF 74-4 with “F ‘074.004’” and to find records cut by a specific SMF ID. So you can imagine what’s displayed affects usability…

Matthias added the following two items:

  • For 72-3 (Workload Activity) the Service Class.
  • For 74-4 (Coupling Facility Activity) the Coupling Facility name.

You can see that being able to search on these, perhaps going “Exclude All” first, is going to make finding the records you want – to ERBSHOW – much easier.

Well I think it’s going to save me time. 🙂

And the RMF guys mentioned to me that ERBSCAN / ERBSHOW seem to have become popular. I wonder why. 🙂

First Impressions Of Programming With Adobe AIR

(Originally posted 2008-06-25.)

Many of you will have, by now, installed the Adobe AIR runtime. Most probably it will be to run something like Twhirl.

At this point many of you will be asking “what’s Twhirl?”

If I said it was a nice desktop application that makes using Twitter so much easier I hope you don’t ask “what’s Twitter?” 🙂

So, we’re beginning to see these desktop applications coded using Adobe AIR, which stands for “Adobe Integrated Runtime” (formerly “Apollo”). So, what’s special (if anything) about AIR?

In my experience, two things:

  • AIR applications run across Mac, Windows and (in alpha form) Linux.

    (In fact I installed the Linux AIR alpha plus Twhirl on my ASUS EEE PC some months ago. (Nice machine that EEE, by the way.) It worked fine within the limitations of the AIR alpha code.)

  • It’s entirely possible to code up AIR applications using just a text editor. (In fact that’s precisely what I did.)

    My editor of choice, for whatever that’s worth, is Notepad++ on Windows. (And I’d rather not start an editor war here, thanks.) 🙂 And the EEE comes with the Kate editor, being built on Debian Linux.

    I like environments where there is little, if any, in the way of barriers to entry for programmers.

So what does an AIR application consist of?

In principle one can do fancy things with Adobe Flex Builder and .SWF files but that requires expensive tooling. But there is another way:

You can write an AIR application of arbitrary complexity with two files:

  • A small XML file.
  • Your HTML file, which can also have javascript in, just like normal web pages. In fact your typical AIR HTML file could be thrown straight into a web browser and work just fine. (That is, unless you use some of AIR’s special capabilities – which your javascript can detect the availability of and code around accordingly.) And I assume that when I say “your browser” I mean a standards-compliant one like Firefox. 🙂 Note: The flavour of AIR that uses HTML / javascript is actually built around the Safari browser.

    So I really think the “this builds on what you already know” point makes it attractive to an awful lot of people.

This isn’t an AIR tutorial so I’m not going to give you samples of the small XML and HTML files. You can easily find those on the Web. I just want to leave you with the impression there isn’t much to it. And there is an O’Reilly pocket guide for AIR. (But there isn’t a “For Dummies” book for it – so that’s alright.) 🙂

There are alternatives – such as Mozilla’s XULRunner (whose user interface is based on XUL rather than HTML) and, I suppose, Microsoft’s Silverlight. But neither is as easy, and “I just need a text editor and the SDK to do it” is pretty enticing.

So, I’m impressed by its simplicity and the fact it builds on HTML, XML and javascript. And if you want to be able to build something that looks like a desktop application, complete with drag-and-drop and clipboard capabilities, this may well be a good fit for your needs.

I mentioned the SDK. It’s free (but it doesn’t run on Linux and I’ve not heard any suggestion that it will). It comprises two useful tools:

  • ADL – which you use for testing
  • ADT – which you use for packaging up your completed application. This involves signing the resulting .AIR file – and the ADT tool has the capability to enable you to be “self certifying” so that gets round the cost of getting a certificate from a more formal authority. But it does require users to trust you. 🙂

Both are simple to use.

So, I got a “Hello World” application up and running in about twenty minutes, and that even with typ(o)ing in the XML and HTML files from the O’Reilly book. Swiping from the Web would probably be quicker – but you’d learn less.

And now I’m proving remarkably productive at writing a full-scale application. What’s slowing me down – if anything – is my very basic CSS. And this application uses XMLHttpRequest and walks the DOM tree to build the window contents on the fly. All using standard javascript. Oh, and my CSS is now rather more under control. 🙂 I just need to refactor it into multiple parts as all my CSS / HTML / javascript is in the one file. Oh, and then there’s refactoring to use dojo. I’m told you can use dojo in an AIR application, so long as you ship the dojo runtime with it.

If you want to know more about AIR go to their blog. For Twhirl go here and, of course, Twitter is here.

Note: IBM doesn’t appear to have any commercial interest in AIR. And I certainly don’t. So I’m just telling you how I see it.

Web 2.0 and System z Pilot Workshop

(Originally posted 2008-06-21.)

The same Kevin Keller I mentioned in this blog post is running a one day pilot of a “Web 2.0 and System z” workshop on July 8th, in IBM Böblingen. It’s designed for customers, though some of the audience will undoubtedly be IBMers. I’ve looked at the agenda and, while I’m on it, it’s a great agenda. 🙂 Actually it has the highly-acclaimed Luis Suarez on the agenda as well. (I’m looking forward to meeting Luis as, though we’ve talked many times using all the social media, we’ve never actually met before.)

If you want to attend the workshop – or would like more information – let me know and I’ll see what I can do. I don’t control the attendee list so I can’t promise you a place.

Though run by IBM Germany this workshop is in English.

I’m looking forward to this class tremendously: As my long-suffering audiences 🙂 know I’m keen that people and organisations realise there’s a lot of scope for doing Web 2.0 (or, more generally, modern) stuff on z/OS. So this workshop ought to be the proof of that.