
Where is virtualisation taking you?

Is ‘perfect’ possible – or indeed necessary?

Lab Anyone who’s been in this industry for longer than a decade will know that some of what IT vendors say needs to be taken with a pinch of salt. Virtualisation holds great promise, so we are told – but then, so did blade servers, grid architectures, enterprise management solutions, application service providers... the list goes on. Yet even if we cut through the we’re-already-there marchitecture of the more zealous product pushers, virtualisation does appear to offer a path towards a better way of doing IT.

It’s not as if what we have is completely broken – far from it. In general, and as shown repeatedly in our research, few organisations feel completely under the cosh when it comes to their IT. I can think of a couple of places I’ve worked that really were facing the technological equivalent of a failed marriage, but in general, IT and the business do tend to rub along.

All the same, many IT managers across the globe do reach that point in their careers where they think to themselves, “There has to be a better way of doing things than this.” And no doubt there is merit in exploring certain options, be they in software architectures, systems management, data centre design, backup policy... you name it.

One recurring ‘better way’ is that of running IT in a semi-automated manner – or indeed as automated as possible. I’ve said in the past that I don’t believe IT will become a utility in the short-to-medium term: it’s just too darned complex, and the level of technical competence required to deploy and manage efficient IT services is just too high. However, making IT even a bit more dynamic would be a good start, and many believe virtualisation holds the key to such a transition.

But here’s the rub: what’s the real gain to be had from such a virtualised environment? Imagine pristine rows of servers, each running multiple virtual machines, delivering dynamically scaled services to users as efficiently as possible. While this might sound jolly good in principle, there are several counter-arguments.

First, the cost of hardware – or at least the relative cost per unit of processing – continues to drop. Second, any ideal environment needs to be sustainable: there is no point in defining one if, in three years’ time, you’ll need to do it all again. Business changes as fast as IT, and all it takes is a single merger for all that hard work defining the ideal environment to be thrown out of the window. Meanwhile, there is a big question over people costs. The basic principle is that “people are expensive, automation is good”, but I’m sure you have your own anecdotes about systems that were supposed to simplify things and then required double the operations staff to run.

Many of these questions remain unanswered, and the ultimate cost-benefit of virtualisation has still to be proven in the mainstream context. Perhaps this is a good thing when we consider that some pieces of the IT puzzle are still catching up with the potential of virtualisation – Intel and AMD’s latest chipsets will help, to be sure, and pan-industry vendor partnerships will take things forward in terms of interoperability. In the meantime, we have management best practice and its associated tooling, neither of which could be said to bake in virtualisation right now.

While there’s still work to be done, these are of course early days: most organisations are still in what we could consider a ‘pilot’ stage when it comes to virtualisation, and are only starting to consider what comes next. On Wednesday this week, we considered where virtualisation goes next after the pilot – and there is plenty that can be done with it without needing to take it to its ultra-dynamic conclusion.

So it’s certainly not about being downhearted, more a recognition that for virtualisation, perhaps the best is yet to come. ®
