Virtualisation, the Linux way

De facto standard

IBM, Hewlett-Packard and Sun Microsystems, among others, are creating an imperative. Their infrastructure initiatives - titled On Demand, Adaptive Enterprise and N1 respectively - are all quite similar and aimed at virtualising the hardware layer, writes Robin Bloor of Bloor Research.

The primary reason for wanting to virtualise hardware is this: in the last five years or so, companies have been buying servers in an ad hoc manner, tending to deploy them on a one-server-per-application basis.

Consequently, they have assembled server farms which turn out to have an average hardware utilisation of about 20 per cent. This is, of course, a waste of money and, in the long run, a management headache. However, there are other imperatives, particularly the idea of being able to provide infrastructure as a service - dynamically, so that you pay for what you use and you get what you need when you need it.

So companies, especially large companies, are very receptive to the idea of a corporate computing resource that is both managed and efficient - which is what IBM, HP and Sun are talking about. However, if you talk the talk you are also going to have to walk the walk, and right now what can be delivered doesn't amount to wall-to-wall virtualisation - or anything like it.

So the question is: how is it ever going to be delivered, given legacy systems, existing server farms and the enormous difficulty involved in relocating applications in a heterogeneous network?

Blade technology, grid computing, automatic provisioning, SANs, NAS and so forth will all play a part in this, but for it to work, and work well, it will require a standard OS - and there is only one candidate: Linux.

The easiest way to see the need for a standard OS is to consider why and how TCP/IP became a standard. It didn't happen because it was the best option or because it was purpose-designed to run a worldwide network with hundreds of millions of nodes (it wasn't). It happened because it was the only reasonable choice at the time. The same is now true of Linux as regards hardware virtualisation. Irrespective of its other qualities, it is the only OS that fits the bill.

It qualifies because it spans so many platforms - from small devices up to IBM's zSeries mainframe. It also qualifies because, like TCP/IP, it doesn't actually belong to anyone. It runs on most chips and is rapidly becoming the developer platform of choice. So the idea is starting to emerge that you virtualise storage with SANs and NAS, and you virtualise server hardware with Linux - making it feasible to switch applications from one server to another automatically, and quickly. With this capability you can cater for failover and make highly efficient use of resources.
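
To make the relocation idea concrete, here is a minimal sketch - not anything IBM, HP or Sun actually ship - of a placement loop that assigns application workloads to the least-loaded healthy Linux server and re-places them when a server fails. The host names, capacities and demand figures are invented for illustration; a real system would sit on top of genuine monitoring and migration tooling.

# Toy illustration only: relocating workloads across a pool of
# interchangeable Linux servers for utilisation and failover.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: float              # utilisation budget, arbitrary units
    healthy: bool = True
    apps: list = field(default_factory=list)

    def load(self) -> float:
        return sum(demand for _, demand in self.apps)

def place(apps, hosts):
    """Assign each (app, demand) pair to the least-loaded healthy host."""
    for app, demand in sorted(apps, key=lambda a: -a[1]):
        candidates = [h for h in hosts
                      if h.healthy and h.load() + demand <= h.capacity]
        if not candidates:
            raise RuntimeError(f"no spare capacity for {app}")
        min(candidates, key=Host.load).apps.append((app, demand))
    return hosts

def fail_over(failed, hosts):
    """Mark a host down and re-place its applications on the survivors."""
    failed.healthy = False
    displaced, failed.apps = failed.apps, []
    return place(displaced, hosts)

if __name__ == "__main__":
    pool = [Host("linux-a", 1.0), Host("linux-b", 1.0), Host("linux-c", 1.0)]
    place([("crm", 0.4), ("billing", 0.3), ("intranet", 0.2), ("mail", 0.3)], pool)
    fail_over(pool[0], pool)     # simulate losing one server
    for h in pool:
        print(h.name, "up" if h.healthy else "down", round(h.load(), 2), h.apps)

The point of the sketch is simply that once every server presents the same OS, "which box runs what" becomes a scheduling decision rather than a migration project - which is the efficiency and failover argument made above.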

This doesn't solve all the problems of virtualisation - and there are many, including legacy hardware that will never run Linux and legacy applications that will never run on Linux. But this doesn't actually matter: in the short run they will be excluded from virtualisation, and in the long run they will cease to exist.

The momentum is building and Linux is set to become the standard OS for hardware virtualisation in large networks. Other OSes may eventually have to impersonate the characteristics of Linux or move aside.

© IT-Analysis.com

Related stories

McDATA sets out its virtual stall
StorageTek signs FalconStor for data pooling
Sun repositions first plank in N1 strategy
IBM overhauls Tivoli
Virtualization sells better as something else, HP says
