Batch Architecture, Part One

(Originally posted 2011-04-12.)

First a word of thanks to Ferdy for his insightful comment on Batch Architecture, Part Zero. And also to my IBM colleague Torsten Michelmann for his offline note on the subject.

As I indicated in Part Zero, I hoped to talk about jobs in a subsequent post. And this is that post. In particular I want to discuss:

  1. Viewing jobs as part of distinct applications, and
  2. Generating a high-level understanding of individual jobs

Mostly I’m talking about using SMF 30 job-end records, but consider also:

  • SMF 30 step-end records.
  • SMF 16 DFSORT-invocation records (and, for balance, those for Syncsort).
  • SMF 101 DB2 Accounting Trace.
  • Scheduler Information.
  • Locally-held information about jobs.

(When I talk about jobs I’m aware there are other activities that don’t run as z/OS batch jobs. These include other actions on z/OS, such as automated operator actions and recovery actions, as well as jobs running on other platforms. In this post I’m focusing on z/OS-based batch jobs.)


Grouping Jobs Into Applications

There are lots of ways of grouping jobs into applications…

Most installations claim to have a job naming convention. For example:

  • First character is “P” for “Production”, “D” for “Development” and “M” for “Miscellaneous”.
  • Second through fourth characters denote the application. (Maybe the second character denotes a business area and the other two the application within it.)
  • Last character denotes the frequency, e.g. “D” for “Daily”, “W” for “Weekly”, “M” for “Monthly”.
  • The remaining characters (often numeric) are an identifier within the application.

Sometimes I see naming conventions that are the other way round. I would recommend, if you have the choice, having it this way round, so that status and application are at the front. The reason I recommend this is that it makes it much easier to code queries against any instrumentation, whether you’re using SAS, Tivoli Decision Support or the home-grown code I use. (If you’re merging batch portfolios and have to pick a naming convention this is the one I’d definitely go for.)
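To illustrate why prefix-first names make queries easy, here’s a minimal sketch in Python. It assumes the example convention above and a jobs.csv extract of SMF 30 job-end records with a JOBNAME column; both the file and the column name are assumptions for illustration, not anything from a real installation.

```python
import csv

def jobs_for_application(path, app="PAY"):
    """Return Production job names for one application prefix, e.g. 'PAY'.

    Assumes the example convention above: character 1 is the status
    ('P' = Production) and characters 2-4 identify the application.
    """
    selected = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            name = row["JOBNAME"].strip().upper()
            if name.startswith("P" + app):  # status + application prefix
                selected.add(name)
    return sorted(selected)

if __name__ == "__main__":
    for name in jobs_for_application("jobs.csv"):
        print(name)
```

With status and application at the front the selection is a simple prefix test; with them at the back you’d be picking the name apart from the right-hand end instead.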

Your workload scheduler may well use different names for its operations (in Tivoli Workload Scheduler parlance), so some care is required when matching those to job names.

An interesting question is how well an installation observes its naming convention. As the old joke goes, “we love naming conventions: we’ve got lots of them”. 🙂 Analysis of SMF 30 should give you a view of whether the naming convention is being observed.
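A rough conformance check might look like the following sketch. The regular expression encodes the example convention above and assumes fixed eight-character names, so it would need adjusting to the local convention; the job names are assumed to have been pulled from an SMF 30 extract already.

```python
import re

# Encodes the example convention: status (P/D/M), 3-character application,
# 3-character (often numeric) identifier, frequency (D/W/M).
# This is an assumption -- adjust it to the local convention.
CONVENTION = re.compile(r"^[PDM][A-Z0-9]{3}[A-Z0-9]{3}[DWM]$")

def conformance(jobnames):
    """Split a collection of job names into conforming and non-conforming lists."""
    good, bad = [], []
    for name in sorted(set(jobnames)):
        (good if CONVENTION.match(name) else bad).append(name)
    return good, bad

# Example with made-up names:
good, bad = conformance(["PPAY001D", "DPAY002W", "ADHOCJOB"])
print(f"{len(good)} conform, {len(bad)} do not: {bad}")
```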

As well as job names, it’s sometimes interesting to see which userid(s) jobs are submitted under. Often Production batch is submitted from a single userid, according to Type 30. Similarly you can see which job class, WLM workload, service class and report class a job runs in.
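A quick tabulation of that sort of thing might look like the sketch below. The column names (USERID and SRVCLASS) are assumptions about what the local SMF 30 extract carries, as is the jobs.csv file itself.

```python
import csv
from collections import Counter

def tabulate(path, column):
    """Count jobs per value of one field in a CSV extract of SMF 30 records."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[column].strip()] += 1
    return counts

# Show the commonest submitting userids and service classes.
for column in ("USERID", "SRVCLASS"):
    print(column)
    for value, count in tabulate("jobs.csv", column).most_common(5):
        print(f"  {value:<8} {count}")
```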

Sometimes the programmer name field in Type 30 reveals application information.

Within a batch window it is occasionally the case that when a job runs is closely related to which application it’s in, though usually applications are intermingled in time to some degree.

… And the above are just examples of characterisation information.


Understanding Individual Jobs – At A High Level

Whether you’ve grouped jobs into applications or are just looking at individual jobs, it’s useful to characterise them. Typical characterisations include:

  • Whether jobs are long-running or short. Likewise CPU- or I/O-intensive. (There’s a sketch of this kind of classification below.)

  • Whether jobs are in essence single-step. (“In essence” alludes to the fact that many jobs have small first and last steps, for management purposes.)

  • Whether jobs have in-line backups (the presence of e.g. IDCAMS steps being a good indicator).

  • How data sets are created and deleted for steps (e.g. IEFBR14 steps between processing steps).

  • Whether jobs use tape or do non-database I/O (visible in tape and disk EXCP counts).

  • Reliability statistics.

  • Use of DB2. (Slightly tricky for IMS DB2 jobs but it can still be done.)

  • Clonedness (whether a job is one of a set of near-identical clones).

  • Sort product usage.

The above are all discernible from job- and step-level information. At a slightly lower level (because it requires the use of data set OPEN information) is characterising the data access as being to VSAM, BDAM, or QSAM/BSAM (or some combination thereof).
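As a hedged sketch of a few of the characterisations above (long-running, CPU-intensive, “in essence” single step, and IEFBR14 usage), here’s roughly what a roll-up of a step-level extract might look like. The field names (JOBNAME, PROGRAM, CPU_SECONDS, ELAPSED_SECONDS, EXCP_TOTAL), the steps.csv file and the thresholds are all illustrative assumptions to be replaced with local values.

```python
import csv
from collections import defaultdict

# Thresholds are illustrative assumptions, not recommendations.
LONG_RUNNING_SECS = 1800      # 30 minutes of elapsed time
CPU_INTENSIVE_RATIO = 0.5     # CPU time more than half of elapsed time
TRIVIAL_STEP_CPU = 1.0        # steps below this count as "management" steps

def characterise(path):
    """Roll a step-level CSV extract up into simple per-job labels."""
    jobs = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            jobs[row["JOBNAME"]].append(row)

    results = {}
    for name, steps in jobs.items():
        elapsed = sum(float(s["ELAPSED_SECONDS"]) for s in steps)
        cpu = sum(float(s["CPU_SECONDS"]) for s in steps)
        excps = sum(int(s["EXCP_TOTAL"]) for s in steps)
        busy = [s for s in steps if float(s["CPU_SECONDS"]) > TRIVIAL_STEP_CPU]
        results[name] = {
            "long_running": elapsed > LONG_RUNNING_SECS,
            "cpu_intensive": elapsed > 0 and cpu / elapsed > CPU_INTENSIVE_RATIO,
            "total_excps": excps,
            # Approximation of "in essence single step": one busy step,
            # with any others treated as small management steps.
            "in_essence_single_step": len(busy) == 1,
            "iefbr14_steps": sum(s["PROGRAM"].strip() == "IEFBR14" for s in steps),
        }
    return results

if __name__ == "__main__":
    for name, labels in characterise("steps.csv").items():
        print(name, labels)
```

None of this is meant as a definitive classification scheme; it’s the shape of the processing, with the real work being in choosing fields and thresholds that suit the local data.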

A lot of the characterisation of jobs is centred on standards. For example, how the installation sets jobs up features heavily in the above list. Other sorts of standards can only be seen in things like JCL.

While the above applies to individual jobs it can equally be applied to applications (as identified above), though it’s obviously a bit more work.


This post has talked about how to use instrumentation to group jobs into applications and the like. It’s also included some thoughts on how to characterise individual jobs and applications.

I hope in the next part to talk about relationships between applications. And to dive deeper into the application’s data.
