Learning from the past

David Norfolk has Proustian moment over IMS

Comment I'm sitting thinking about our new Reg Developer site and its target audience – professional IT developers who already read The Register but who might like something more targeted on their specific world – when an IMS Newsletter drops on the mat.

For those who don't know it, IMS (Information Management System) was IBM's hierarchical enterprise database management system from the seventies/eighties of the last century, and it's where I learned about IT, as a DBA (database administrator) in Australia. (I once found a bug in IMS – a log tape commit on the wrong side of a memory FREEMAIN, which meant an ABEND at just the wrong moment could back out a change already committed to the log. Oh, happy days.)

IMS was, and is, a powerful DBMS (Database Management System) for very large applications exploiting hierarchical data structures, and these aren't that uncommon (XML data structures are basically hierarchical). It is also about the only technology product on my CV that is still reasonably saleable. IMS version 9 is still a current product (it went GA – General Availability – in October 2004) and IMS celebrated its 30th anniversary some 7 years ago.

Actually, mainframe expertise generally is still in demand, according to Bill Miller (Head of Mainframe Solutions at BMC Software), whose company is actively developing the mainframe aspects of Business Service Management.

An update to BMC's visualisation software, which will take information from Mainview for IMS or DB2 (or from BMC's competitors) and make it available in the service model, is due out next year; automated topology discovery (which will allow an ITIL CMDB to be kept dynamically populated with mainframe assets, without overloading it) is coming; and transaction management across the IMS and DB2 world will let operational support identify points of failure across the enterprise.

In addition, BMC is extending its "smart DBA" to the mainframe – Miller sees mainframe DB2 V8 as a major innovation, although he thinks customers haven't yet converted in the volumes IBM expected. But there is still a (small) world of mainframe developers and, like developers everywhere, they are having to come to terms with service-oriented delivery – the business enterprise wants to be given operationally complete, manageable, automated business services, not just programs.

Incidentally, Miller is most proud of being able to support mainframe DB2 V8 – a mainframe database that is still being actively developed and which IBM thought was moving too fast for the ISVs (independent software vendors) to keep up with. DB2 is still a database of choice for the very largest and most resilient business systems, while IMS is still in use but sees little new development – although I'm old enough to remember getting into trouble in a City bank for suggesting too publicly that DB2 was enterprise-ready and could start offloading processing from IMS.

As an aside, while I'm finishing off this piece, Mark Whitehorn's assessment of SQL Server as “enterprise ready” is stirring up comment in our email – “only mainframe DB2 and Teradata come close to cutting it” is a paraphrase of some of it. Well, if it were my money and the application was big enough, I'd mostly be looking at DB2 today too, because I like playing safe with my career; but things change, and tomorrow SQL Server could well be in the frame, just as DB2 snuck up on IMS.

But back to IMS around 1979. What I learned then was the importance of processing small messages in near real time (using IMS MPPs – message processing programs) rather than relying on batch processing. I learned about distributed processing – our central IMS database was fed from minicomputers in the state capitals, and if the communications failed these minis carried on providing a service (some 90% of processing was local to the State) and updated the central mainframe when the comms came back up.
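
That store-and-forward arrangement is still a useful pattern. Here's a minimal sketch of the idea in Python – nothing to do with the original COBOL/IMS code, and all the names are invented – just to show the shape: serve locally, queue updates for the centre, and drain the queue whenever the link is back.

```python
import queue

class LocalNode:
    """Sketch of a state-capital mini: keep serving locally, queue central updates."""

    def __init__(self, link_up, send_central):
        self.pending = queue.Queue()      # updates waiting for the central mainframe
        self.link_up = link_up            # callable: True when comms to the centre are up
        self.send_central = send_central  # callable: ships one update to the centre

    def process(self, transaction):
        result = self.apply_locally(transaction)  # ~90% of the work stays local
        self.pending.put(transaction)             # remember it for the central database
        self.flush()                              # forward now if the link happens to be up
        return result

    def apply_locally(self, transaction):
        # Stand-in for the local database update the minis actually performed.
        return f"applied locally: {transaction}"

    def flush(self):
        # Drain the backlog once the link to the central site is available again.
        while self.link_up() and not self.pending.empty():
            self.send_central(self.pending.get())
```

The point of the design is that the queue, not the communications link, is the system of record for pending central updates – local service never stops just because the centre is unreachable.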

I learned about abstractions and metadata – our databases were generated automatically from the data dictionary, and any production problems were fed back into the dictionary as an audit trail of implementation issues tied to the business data entities being processed, rather than just to the physical database.
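
To make the idea concrete, here is a toy sketch of schema generation driven by a dictionary of logical entities. It is deliberately relational and the entities are made up; the real dictionary generated hierarchical IMS database definitions, not SQL – the point is only that the physical schema follows the logical model rather than being hand-crafted.

```python
# Invented logical entities standing in for a real data dictionary.
data_dictionary = {
    "CUSTOMER": {"CUST_ID": "CHAR(8)", "NAME": "VARCHAR(40)"},
    "ACCOUNT":  {"ACCT_ID": "CHAR(10)", "CUST_ID": "CHAR(8)", "BALANCE": "DECIMAL(15,2)"},
}

def generate_ddl(dictionary):
    # One physical definition per logical entity, so the schema always follows the model.
    for entity, fields in dictionary.items():
        cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in fields.items())
        yield f"CREATE TABLE {entity} (\n  {cols}\n);"

for statement in generate_ddl(data_dictionary):
    print(statement)
```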

Testing program specs against the logical data structures behind the databases they accessed let us identify and remove defects before coding even started. And, the business logic associated with resolving operational database issues (such as “always restart the database after a transaction failure after logging the failure, since most problems are due to the characteristics of a single rogue transaction; unless we're having repeated failures, in which case leave the database down and call emergency support”) was stored in, and executed from, an active repository.
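
That restart rule is simple enough to write down. The sketch below is only an illustration, in Python, of the logic quoted above – the function names and the retry threshold are invented, and the original rule was held and executed in the active repository rather than hard-coded in an application.

```python
def handle_db_failure(db_name, restart, log_failure, recent_failures, threshold=3):
    """Log the failure and restart after a one-off transaction failure; if
    failures keep repeating, leave the database down and escalate instead."""
    log_failure(db_name)
    if recent_failures >= threshold:
        # Repeated failures point at something worse than one rogue transaction.
        print(f"{db_name}: leaving database down, paging emergency support")
        return "left down"
    restart(db_name)  # most problems trace back to a single rogue transaction
    return "restarted"
```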

This was some 25 years ago – and my impression is that driving a database environment from a logical metadata repository linked to a model of business data structures is pretty advanced even today.
