They shoot mainframes, don't they?

Rethinking big iron for the data centre

What does two and two make? The answer: a myth.

It’s a joke, but not a very funny one, for IBM’s System z team: when they talk to customers who refuse to consider a move to a mainframe environment, two and two is their objection: "it will take me two years to do it, and I’m going to spend two million dollars in the struggle," they say.

If this were true, you’d have to be crazy, a masochist or have a really bad data centre to embark on a plan like this. Your staff would despise you, your CEO would disown you, and your users would pretend to be busy when you came to tell them that you’re rewriting their applications for them.

But the assumption is false. Today’s mainframe deployments can be measured in weeks, not years. You don’t have to rewrite or migrate many of your apps (though you might discover that hundreds aren’t being used any more), and the bill won't swallow your hardware budget for years.

If you subscribe to the two and two myth, you are not considering all your options: that’s the conclusion of our Register white paper “Reconsider the Mainframe”.

Mainframes by numbers

Let’s give you some other numbers with twos in them: $200,000, the effective entry level for a mainframe deployment. Twenty per cent: the power a mainframe uses to run the same workload compared with virtualised x86 servers. Two decades: the period over which, IDC reports, mainframe workloads have been growing by 19 per cent year-on-year.

Add to this that staff productivity in the mainframe environment has increased by a factor of 18 in the last 10 years, and the economics of the mainframe are radically different to the last time you entered the numbers in an ROI spreadsheet. As, come to think of it, is the spreadsheet.

For those with long memories, it’s remarkable that there’s even a mainframe left to reconsider that’s not a rusting hulk. If the mainframe had been a horse in the late 1980s, they’d have shot it.

As x86 servers piled on the power in the 1990s while prices tumbled, most analysts and many users consigned the mainframe to history – and with good reason. The momentum towards moving workloads to a server environment was a drip-drip effect, and packaged software innovation made the case unanswerable for many applications.

Startups and growing businesses never seemed to hit the point at which it was appropriate to make the case for the mainframe, because to switch back from their client-server applications to big iron ran counter to all the conventional wisdom of the data centre – especially after 2000, when we assumed that virtualisation and the cloud would solve the problems of management and utilisation.

Also, departments became accustomed to having their own servers, and jealously protected them. These machines were lovingly patched, ran their own apps, and sometimes had their own management and, crucially, their own budgets; consolidation into a standardised mainframe environment seemed like yesterday’s news.

Sprawl-free

As you know, it hasn’t quite worked out. For many types of application, the mainframe has never been bettered. Resilience, availability and manageability have become more important. And just when you thought budgets were under control, the price of virtualisation software has rocketed.

As x86 sprawl threatens manageability, the System z security and management environment is now available for Linux and Java applications, run on their own blades. And here’s a number with a zero in it: no one has ever hacked the mainframe.

None of this means that all, or even most, big IT departments will be using the mainframe in the near future. But as Reconsider the Mainframe points out, you’d be foolish to dismiss the technology because you still believe a bunch of out-of-date myths.

You can download our paper, 'Reconsider the Mainframe' here. ®
