My lost COBOL years: Integrating legacy management
When does management itself become the problem?
Workshop: Nearly a quarter of a century ago, I went for a job interview with ICL. “What do you think of COBOL?” they asked. “It’s a dinosaur, won’t last, should be put out of its misery,” I remember saying.
The two grey suits looked at each other and turned back to me. “We’re a COBOL shop,” said one, before the interview very swiftly came to an end.
In hindsight, the strange thing is not the fact that I didn’t get an offer of employment; rather, that COBOL skills are still in demand today. Had I not fluffed the interview I could have taken the job in the knowledge I had (at least) 23 years of forward security, which feels somewhat incongruous in this supposedly fast-changing world of IT.
Technology professionals have seen many things come over the past few decades, but few of them go – while the more proprietary systems have tended to vanish, many companies retain applications written in C, Java and indeed, COBOL and FORTRAN to name a few supposedly old-school languages. But what does this mean when it comes to managing the old alongside the new?
Older systems never die; they tend to be subsumed into the IT environment. The reasons are legion: one IT manager at a finance organisation told me how the idea of decommissioning a system was anathema, simply because it was too complicated to work out whether it was still needed.
There’s an ‘if it ain’t broke, don’t fix it’ attitude that pervades many companies, while for others, the costs of taking a system out of service would exceed the benefits of doing so. Systems are often as valuable as the data they store, and it can make more sense to build a new front end onto an existing database running on an AS/400 (say) than to replace it wholesale with a new hardware acquisition and migrate the data, tools and working practices built around the application.
It may hurt to say so, but it appears that some such platforms have yet to be bested. When it was suggested on The Register a couple of years ago that mainframes might be obsolete for certain workloads, such as data warehousing, readers quite rightly responded in no uncertain terms.
The mainframe platform remains one of the most resilient and secure available, and also offers one of the most attractive cost-per-CPU-hour propositions if it is fully laden. Whether or not virtualisation security is an oxymoron, few can deny that both virtualisation and security were in active use on the mainframe long before the wave of x86 virtualisation activity we are seeing today.
The downside of multiple generations of systems is that each tends to come with its own set of management tools and practices. Research we have carried out recently (soon to be published) suggests a phenomenon of fragmented management: when each system requires its own management capabilities, the resulting staffing overheads outweigh the in-principle benefits of having such management tools in place in the first place.
In other words, the use of tools to support and automate IT management activity is not, by itself, a guarantee of efficiency. Sooner or later, it becomes appropriate to look at the ‘manager of managers’ precept – a single console which can be used to co-ordinate activity across multiple systems – but we know how hard it can be to get funding for such capabilities.
What we don’t yet know is where the threshold lies, beyond which management itself becomes the problem rather than the solution. Equally, however, it will be difficult for IT to move towards the nirvana of true service delivery management without appropriate tools that do play well together. For so-called ‘legacy’ systems and software, what is the tipping point at which the management pain exceeds the gain of keeping things going, and just how do things unfold – or indeed unravel – from that point?
If you have any light you could shed on these issues, we’re all ears. ®