Original URL: http://www.theregister.co.uk/2005/12/08/ims_database_dba/

Learning from the past

David Norfolk has Proustian moment over IMS

By David Norfolk

Posted in Developer, 8th December 2005 08:57 GMT

Comment I'm sitting thinking about our new Reg Developer site and its target audience – professional IT developers who already read The Register but who might like something more targeted on their specific world – when an IMS Newsletter drops on the mat.

For those who don't know it, IMS (Information Management System) was IBM's hierarchical enterprise database management system from the seventies/eighties of the last century and it's where I learned about IT, as a DBA (database administrator) in Australia. (I once found a bug in IMS, a log tape commit on the wrong side of a memory FREEMAIN, which could ABEND and back out a change already committed to the log – oh, happy days).
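
For the curious, here is a toy sketch of why that ordering matters (illustrative Python only, nothing to do with real IMS internals): once the commit record is safely on the log, any later failure during cleanup must be handled by rolling forward, never by backing the change out.

class ToyTransaction:
    """Illustrative only: shows the commit-ordering invariant, not IMS code."""

    def __init__(self, log):
        self.log = log            # stands in for the durable log tape
        self.pending = []         # in-memory changes not yet hardened

    def update(self, record):
        self.pending.append(record)

    def commit(self):
        # Durability point: the commit record reaches the log first...
        self.log.append(("COMMIT", list(self.pending)))
        # ...so any failure after this point (for example during the storage
        # release that follows) must be recovered by roll-forward. Backing
        # the change out here is exactly the failure mode described above.
        self.pending.clear()

log = []
txn = ToyTransaction(log)
txn.update(("account-7", "+100"))
txn.commit()
print(log)   # [('COMMIT', [('account-7', '+100')])]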

IMS was, and is, a powerful DBMS (Database Management System) for very large applications exploiting hierarchical data structures, and these aren't that uncommon (XML data structures are basically hierarchical). It is also about the only technology product on my CV that is still reasonably saleable. IMS version 9 is still a current product (GA – General Availability – released October 2004) and IMS celebrated its 30th anniversary some 7 years ago.
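
To make the hierarchy point concrete, here is a minimal sketch (Python, with invented names rather than real IMS DL/I calls) of a root segment with dependent child segments, and of how naturally that tree shape serialises to XML:

import xml.etree.ElementTree as ET

# An IMS-style hierarchy: one root segment with dependent child segments.
customer = {
    "CUSTOMER": {
        "name": "ACME Pty Ltd",
        "ORDER": [
            {"number": "0001", "ITEM": [{"part": "A-100", "qty": 3}]},
            {"number": "0002", "ITEM": [{"part": "B-200", "qty": 1}]},
        ],
    }
}

def to_xml(name, node):
    """Render the nested segment structure as an XML element tree."""
    elem = ET.Element(name)
    for key, value in node.items():
        if isinstance(value, list):            # repeating child segments
            for child in value:
                elem.append(to_xml(key, child))
        elif isinstance(value, dict):          # a single child segment
            elem.append(to_xml(key, value))
        else:                                  # a field within the segment
            elem.set(key, str(value))
    return elem

print(ET.tostring(to_xml("CUSTOMER", customer["CUSTOMER"]), encoding="unicode"))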

Actually, mainframe expertise generally is still in demand, according to Bill Miller, Head of Mainframe Solutions at BMC Software, which is actively developing the mainframe aspects of Business Service Management.

An update to BMC's visualisation software, which will take information from Mainview for IMS or DB2 (or from BMC's competitors) and make it available in the service model, is due out next year; automated topology discovery (which will allow an ITIL CMDB to be kept dynamically populated with mainframe assets, without overloading it) is coming; and transaction management across the IMS and DB2 world will let operational support identify points of failure across the enterprise.

In addition, BMC is extending its "smart DBA" to the mainframe – Miller sees mainframe DB2 V8 as a major innovation, although he thinks customers haven't yet converted in the volumes IBM expected. But there is still a (small) world of mainframe developers and, like developers everywhere, they are having to come to terms with service-oriented delivery – the business enterprise wants to be given operationally complete, manageable, automated business services, not just programs.

Incidentally, Miller is most proud of being able to support mainframe DB2 V8 – a mainframe database that is still being actively developed, and one that IBM thought was moving too fast for the ISVs (independent software vendors) to keep up with. DB2 is still a database of choice for the very largest and most resilient business systems, while IMS is still in use but sees little new development – although I'm old enough to remember getting into trouble in a City bank for suggesting too publicly that DB2 was enterprise-ready and could start offloading processing from IMS.

As an aside, while I'm finishing off this piece, Mark Whitehorn's assessment of SQL Server as “enterprise ready” is generating excited comment in our email – “only mainframe DB2 and Teradata come close to cutting it” is a paraphrase of some of it. Well, if it were my money and the application were big enough, I'd mostly be looking at DB2 today too, because I like playing safe with my career; but things change, and tomorrow SQL Server could well be in the frame, just as DB2 snuck up on IMS.

But back to IMS around 1979. What I learnt then was the importance of processing small messages in near real time (using IMS MPPs – message processing programs) rather than relying on batch processing. I learned about distributed processing – our central IMS database was fed from minicomputers in the state capitals, and if the communications failed, these minis carried on providing a service (some 90% of processing was local to the State) and updated the central mainframe when the comms came back up.
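
A minimal sketch of that store-and-forward pattern (invented names, Python rather than anything we actually ran): local service keeps going while updates queue up for the centre, and the queue drains when the link returns.

from collections import deque

class RegionalNode:
    def __init__(self):
        self.local_store = {}            # local data: keeps serving when the link is down
        self.outbound = deque()          # updates waiting to reach the central mainframe

    def apply_update(self, key, value):
        self.local_store[key] = value    # local service continues regardless of the link
        self.outbound.append((key, value))

    def sync(self, central, link_up):
        if not link_up:
            return                       # comms down: just keep queueing
        while self.outbound:             # comms restored: replay queued updates in order
            key, value = self.outbound.popleft()
            central[key] = value

central_db = {}
perth = RegionalNode()
perth.apply_update("policy-42", "renewed")
perth.sync(central_db, link_up=False)    # link down: central copy stays stale
perth.sync(central_db, link_up=True)     # link restored: central catches up
print(central_db)                        # {'policy-42': 'renewed'}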

I learned about abstractions and metadata – our databases were generated automatically from the data dictionary, and any production problems were fed back into the dictionary as an audit trail of implementation issues related to the business data entities being processed, rather than just to the physical database.
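
To illustrate the idea (hypothetical structures, not the dictionary we actually used): drive the physical definitions from one logical source, and record production issues against the business entity rather than only against the generated object.

dictionary = {
    "Customer": {
        "attributes": {"customer_id": "CHAR(8)", "name": "VARCHAR(60)"},
        "issues": [],                 # audit trail of implementation problems
    }
}

def generate_ddl(entity, spec):
    """Generate a physical table definition from the logical entity."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in spec["attributes"].items())
    return f"CREATE TABLE {entity.upper()} (\n  {cols}\n);"

def record_issue(entity, description):
    """Feed a production problem back to the dictionary entry for the entity."""
    dictionary[entity]["issues"].append(description)

print(generate_ddl("Customer", dictionary["Customer"]))
record_issue("Customer", "reorg needed: segment growth exceeded estimates")
print(dictionary["Customer"]["issues"])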

Testing program specs against the logical data structures behind the databases they accessed let us identify and remove defects before coding even started. And the business logic associated with resolving operational database issues (such as “after a transaction failure, log the failure and restart the database, since most problems are due to a single rogue transaction – unless failures are repeating, in which case leave the database down and call emergency support”) was stored in, and executed from, an active repository.
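
That kind of rule is easy to imagine as executable policy. Here is a sketch (thresholds and names invented for illustration) of what an active repository might hold and run:

import time

FAILURE_WINDOW_SECS = 600     # "repeated failures" = more than N in this window
MAX_FAILURES = 3

failure_log = []              # timestamps of recent transaction failures

def on_transaction_failure(db_name, restart_db, page_support):
    """Log the failure, restart unless failures are repeating, else escalate."""
    now = time.time()
    failure_log.append(now)
    recent = [t for t in failure_log if now - t <= FAILURE_WINDOW_SECS]
    if len(recent) <= MAX_FAILURES:
        restart_db(db_name)           # most problems are one rogue transaction
    else:
        page_support(db_name)         # repeated failures: leave it down, call support

# Example wiring with stand-in actions:
on_transaction_failure("CLAIMS01",
                       restart_db=lambda db: print(f"restarting {db}"),
                       page_support=lambda db: print(f"paging support for {db}"))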

This was some 25 years ago – and my impression is that driving a database environment from a logical metadata repository linked to a model of business data structures is pretty advanced even today.

Of course, even though some of what I write today comes out of what I learned way back then, I take care not to mention COBOL, IMS DB/DC, MVS, data analysis and entity/relationship diagramming, because I want editors to take me seriously and pay me.

So, I talk UML, OO, XML, C#, RDBMS and Team System. However, I think that this is a symptom of what is wrong with IT. Today's technology is better than that of 25 years ago – but we shouldn't throw away the underlying principles of metadata abstraction, functional cohesion and so on just because they were also once useful for building COBOL Message Processing Programs – or rather, we shouldn't relearn these principles from scratch; we should build on what we already know.

IT is the only business-critical discipline in which we routinely throw away knowledge every time a new technology implementation appears. I remember watching OO encounter “for the first time” people and management issues that the OO gurus could have anticipated if they'd been prepared to abstract Structured Techniques experiences away from COBOL and databases and learn from them.

I watched the RDBMS supersede my beloved IMS – and I really am a relational data model enthusiast (relational theory is not totally worthless in the design of hierarchical databases, by the way) – and I'm now seeing “post-relational” databases replace the relational databases I'm used to. Yes, I know that the product names haven't changed, but any database in which a column can contain rich XML documents isn't really a relational database, in my book.
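
A small illustration of the point (SQLite and Python used purely for convenience): once a column holds a rich XML document, the relational engine treats it as an opaque value, and the interesting structure has to be handled outside the relational model.

import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, doc TEXT)")
conn.execute("INSERT INTO orders (id, doc) VALUES (1, ?)",
             ('<order><item part="A-100" qty="3"/></order>',))

# The set-based, declarative part stops at the column boundary; to ask about
# parts and quantities we have to parse the document ourselves.
for (doc,) in conn.execute("SELECT doc FROM orders"):
    root = ET.fromstring(doc)
    for item in root.findall("item"):
        print(item.get("part"), item.get("qty"))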

As we move forward, however, are we at risk of throwing away knowledge: the importance of abstraction; of analysing metadata structures; of distinguishing the semantics of data from its format; of trading off flexible access against raw performance in a managed way? Is the importance of providing database mentors – people who help application developers design good, maintainable database accesses that take advantage of the DBMS optimisers and so on – being recognised? This was a DBA function in my day, but these days I get the impression that DBAs don't talk to developers much.

Many people use databases today, but how many of them know why they might choose Ingres or PostgreSQL instead of MySQL? We have better automated DBA tools today than I ever had (from the likes of BMC and Embarcadero), but are database maintenance and performance issues still biting developers on the heel?

I leave these questions to my readers – although I believe that many developers who still have a career path have one because they can learn from the past as well as exploit the new – but please feel free to start a debate if you think that there are issues here. Or even if you think I'm finding issues where none exist... ®

David Norfolk is the author of IT Governance, published by Thorogood. More details here.