Learning from the past

David Norfolk has a Proustian moment over IMS

Comment I'm sitting thinking about our new Reg Developer site and its target audience – professional IT developers who already read The Register but who might like something more targeted on their specific world – when an IMS Newsletter drops on the mat.

For those who don't know it, IMS (Information Management System) was IBM's hierarchical enterprise database management system from the seventies/eighties of the last century and it's where I learned about IT, as a DBA (database administrator) in Australia. (I once found a bug in IMS, a log tape commit on the wrong side of a memory FREEMAIN, which could ABEND and back out a change already committed to the log – oh, happy days).

IMS was, and is, a powerful DBMS (Database Management System) for very large applications exploiting hierarchical data structures, and these aren't that uncommon (XML data structures are basically hierarchical). It is also about the only technology product on my CV that is still reasonably saleable. IMS version 9 is still a current product (GA – General Availability – released Oct 2004) and IMS celebrated its 30th anniversary some 7 years ago.
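
For anyone who has only ever seen flat relational tables, the parallel with XML is worth spelling out. Here's a minimal sketch (the customer/order hierarchy and the names in it are invented for illustration, not taken from any real IMS database definition) of how a parent/child segment hierarchy maps naturally onto nested XML:

    import xml.etree.ElementTree as ET

    # Hypothetical root "segment" with dependent children, IMS-style.
    customer = {
        "name": "ACME PTY LTD",
        "orders": [
            {"number": "A-1001", "lines": ["widget x 10", "sprocket x 2"]},
        ],
    }

    root = ET.Element("customer", name=customer["name"])
    for order in customer["orders"]:
        o = ET.SubElement(root, "order", number=order["number"])
        for line in order["lines"]:
            ET.SubElement(o, "line").text = line

    print(ET.tostring(root, encoding="unicode"))
    # one root, children nested beneath it - a hierarchy, not a join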

Actually, mainframe expertise generally is still in demand, according to Bill Miller (Head of Mainframe Solutions at BMC Software), whose company is actively developing the mainframe aspects of Business Service Management.

An update to BMC's visualisation software, which will take information from MainView for IMS or DB2 (or from BMC's competitors) and make it available in the service model, is due out next year; automated topology discovery (which will allow an ITIL CMDB to be kept dynamically populated with mainframe assets, without overloading it) is coming; and transaction management across the IMS and DB2 world will let operational support identify points of failure across the enterprise.

In addition, BMC is extending its "smart DBA" to the mainframe – Miller sees mainframe DB2 V8 as a major innovation, although he thinks customers haven't converted yet in the volumes IBM expected. But there is still a (small) world of mainframe developers and, like developers everywhere, they are having to come to terms with service-oriented delivery – the business enterprise wants to be given operationally complete, manageable, automated business services, not just programs.

Incidentally, Miller is most proud of being able to support mainframe DB2 V8 – a mainframe database that is still being actively developed and which IBM thought was moving too fast for the ISVs (independent software vendors) to keep up with. DB2 is still a database of choice for the very largest and most resilient business systems, while IMS is still widely used but sees little new development – although I'm old enough to remember getting into trouble in a City bank for suggesting too publicly that DB2 was enterprise-ready and could start offloading processing from IMS.

As an aside, while I'm finishing off this piece, Mark Whitehorn's assessment of SQL Server as “enterprise ready” is stirring up excited comment in our email – “only mainframe DB2 and Teradata come close to cutting it” is a paraphrase of some of the comments. Well, if it were my money and the application was big enough, I'd be mostly looking at DB2 today too, because I like playing safe with my career, but things change; and tomorrow SQL Server could well be in the frame, just as DB2 snuck up on IMS.

But back to IMS around 1979. What I learned then was the importance of processing small messages in near real time (using IMS MPPs – message processing programs) rather than relying on batch processing. I learned about distributed processing – our central IMS database was fed from minicomputers in the state capitals, and if the communications failed, these minis carried on providing a service (some 90% of processing was local to the State) and updated the central mainframe when the comms came back up.
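
In modern terms that was store-and-forward replication. A rough sketch of what those state-capital minis were doing, with names and details invented for illustration rather than taken from the original system:

    from collections import deque

    class RegionalNode:
        """Serves local traffic; queues central updates while the link is down."""
        def __init__(self):
            self.local_db = {}      # local copy handling most of the work
            self.pending = deque()  # updates waiting for the central mainframe

        def apply_update(self, key, value, link_up):
            self.local_db[key] = value       # local service continues regardless
            self.pending.append((key, value))
            if link_up:
                self.flush()                 # comms back: catch the mainframe up

        def flush(self):
            while self.pending:
                send_to_central(self.pending.popleft())

    def send_to_central(update):
        print("forwarded to central IMS:", update)

    node = RegionalNode()
    node.apply_update("policy-42", "renewed", link_up=False)  # comms down: queued
    node.apply_update("policy-43", "issued", link_up=True)    # comms up: both sent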

I learned about abstractions and metadata – our databases were generated automatically from the data dictionary, and any production problems were fed back into the dictionary as an audit trail of implementation issues tied to the business data entities being processed, rather than just to the physical database.
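
The principle, in today's jargon, is model-driven generation with the dictionary as the single source of truth. A toy sketch of the idea (the entities, fields and output format are all made up; this is nothing like real IMS DBD syntax):

    # Toy data dictionary driving generation of physical definitions,
    # with production incidents logged back against the logical entity.
    dictionary = {
        "CUSTOMER": {"fields": ["CUSTNO", "NAME"], "children": ["ORDER"]},
        "ORDER":    {"fields": ["ORDNO", "STATUS"], "children": []},
    }
    incidents = []  # audit trail keyed to business entities, not physical files

    def generate_definition(entity):
        e = dictionary[entity]
        children = ",".join(e["children"]) or "NONE"
        return f"SEGMENT {entity} FIELDS({','.join(e['fields'])}) CHILDREN({children})"

    def log_incident(entity, note):
        incidents.append((entity, note))  # fed back into the dictionary

    print(generate_definition("CUSTOMER"))
    log_incident("ORDER", "duplicate key abend during batch load")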

Testing program specs against the logical data structures behind the databases they accessed let us identify and remove defects before coding even started. And the business logic associated with resolving operational database issues (such as “always restart the database after a transaction failure, after logging the failure, since most problems are due to the characteristics of a single rogue transaction; unless we're having repeated failures, in which case leave the database down and call emergency support”) was stored in, and executed from, an active repository.
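
Spelled out as executable logic, that restart rule amounts to something like the sketch below - a paraphrase of the policy quoted above, not the actual repository code, and the threshold is invented:

    REPEATED_FAILURE_THRESHOLD = 3  # illustrative cut-off for "repeated failures"

    def handle_transaction_failure(db, recent_failures, log, restart, call_support):
        log(f"transaction failure on {db}")
        if recent_failures < REPEATED_FAILURE_THRESHOLD:
            restart(db)       # most problems are down to a single rogue transaction
        else:
            call_support(db)  # leave the database down and escalate

    handle_transaction_failure(
        "CLAIMS", recent_failures=1,
        log=print,
        restart=lambda d: print("restarting", d),
        call_support=lambda d: print("paging emergency support for", d),
    )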

This was some 25 years ago – and my impression is that driving a database environment from a logical metadata repository linked to a model of business data structures is pretty advanced even today.
