
Why is virtualisation important?

Show this to your family next time you need to work all weekend

Lower costs are the basic attraction of all enterprise technologies, and virtualisation promises them in spades. In particular, it reduces hardware maintenance costs via what is now a fairly simple process: packaging up physical servers and hosting several of them on one large server.

The technology can also lower energy bills, allowing datacentre owners to claim green credentials, and it makes provisioning a server much, much faster: a new one can be set up in a few minutes, making the whole system far more flexible.

It works by decoupling the software from the hardware. In practice, this means that a virtual server can contain exactly the same software components – operating system, utilities and application software – as before, but instead of running directly on the hardware, it runs inside a sandbox created by a virtualisation hypervisor such as VMware’s ESX or Microsoft’s Hyper-V.


The software divides up physical resources, such as CPU, disk and memory, and allocates them to the servers as they need them. In this way, you can have multiple servers running Windows or Linux, for example, on one piece of hardware. A host server running a hypervisor can run as many virtual servers, or virtual machines (VMs), as it has resources for. The result is that consolidation ratios of ten or so virtual servers per host are commonplace, and some sites report far higher figures.
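To put some flesh on that resource-carving, here is a minimal Python sketch – our own toy model rather than any vendor’s code, with made-up class names and figures – of a host handing out RAM and (overcommitted) CPU to VMs until it runs out. It is where those consolidation ratios come from.

```python
# Toy model of carving one host's resources into virtual machines. The class
# names and figures are illustrative only, not taken from any vendor's tooling.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    vcpus: int
    ram_gb: int

@dataclass
class Host:
    cores: int
    ram_gb: int
    vms: list = field(default_factory=list)

    def can_place(self, vm: VM, cpu_overcommit: float = 4.0) -> bool:
        """RAM is handed out one-for-one; CPU is routinely overcommitted."""
        used_ram = sum(v.ram_gb for v in self.vms)
        used_vcpus = sum(v.vcpus for v in self.vms)
        return (used_ram + vm.ram_gb <= self.ram_gb
                and used_vcpus + vm.vcpus <= self.cores * cpu_overcommit)

    def place(self, vm: VM) -> bool:
        if self.can_place(vm):
            self.vms.append(vm)
            return True
        return False

# Keep packing small VMs onto a modest host until it is full.
host = Host(cores=8, ram_gb=32)
for i in range(30):
    if not host.place(VM(name=f"web{i:02d}", vcpus=2, ram_gb=2)):
        break

print(f"{len(host.vms)} VMs on one host (a {len(host.vms)}:1 consolidation ratio)")
```

With those particular numbers the host fills up at 16 small VMs; change the RAM, core count or overcommit ratio and the consolidation ratio moves with them.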

The trend for virtualisation is up. Before it became widespread in the mid-2000s, only about 10 per cent of servers were virtualised, as many large companies waited for reference sites to appear; canny IT managers didn’t get where they are today by jumping blindly into a brand-new technology pit. Those reference sites have now appeared, and a recent Gartner report suggested that 25 per cent of servers are now virtualised, with the proportion likely to rise to over 80 per cent by 2012. Most companies have some form of virtualisation project or pilot underway: virtualisation market leader VMware claims that its customers include every member of the US Fortune 100.

Virtualisation cannot resolve all server problems, however. VMs need as much software maintenance as physical servers, and they are so easy and cheap to create that you can end up with virtual server sprawl if you are not on top of managing them. They can also lead to friction within the organisation if some departments insist on a physical server, although charging them more for the privilege usually helps to change their minds. More critically, shoehorning an I/O-intensive job such as a big database application into a VM can lead to problems if you haven’t done the sums first and made sure that the host’s I/O capabilities, along with all the downstream technologies, are up to the task.
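Doing those sums need not be complicated. The back-of-the-envelope Python below, with entirely invented IOPS figures, shows the sort of check worth running before an I/O-hungry database lands on a shared host.

```python
# Back-of-the-envelope I/O check before moving a database server into a VM.
# Every figure here is a made-up example; plug in measurements from your own kit.

host_iops_capacity = 12_000          # what the host's storage path can actually sustain
vm_iops_demand = {                   # peak IOPS each candidate VM needs
    "file-server": 300,
    "mail": 900,
    "big-database": 9_500,           # the I/O-hungry workload in question
    "web-frontend": 250,
}

total_demand = sum(vm_iops_demand.values())
headroom = host_iops_capacity - total_demand

print(f"Demand {total_demand} IOPS vs capacity {host_iops_capacity} IOPS "
      f"({headroom} IOPS headroom)")

# Arbitrary rule of thumb: keep at least 20 per cent of capacity spare for bursts.
if headroom < host_iops_capacity * 0.2:
    print("Not enough I/O headroom: the database may be better off on its own host.")
```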

Once your server estate has been virtualised, the next problem is that resources can end up in the wrong place as workloads change and the VMs’ requirements change with them. Suppose, for example, that a host machine has 32GB of RAM but the VMs running on it are using only half of that, while another host is maxed out, with VMs demanding more resources than it can provide. You could move VMs manually from one to the other, but the best long-term answer is what’s called datacentre orchestration, which allows virtual machines to move automatically around the datacentre to find the resources they need when they need them.
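The sketch below, again in Python with invented host names and numbers, shows the basic idea behind that orchestration: spot a host under memory pressure and shift a VM to one with room to spare. Real tools such as VMware’s DRS weigh far more than RAM before moving anything, but the principle is the same.

```python
# A toy rebalancer in the spirit of datacentre orchestration: when one host is
# short of RAM and another has plenty spare, migrate a VM across. The host
# names and figures are invented; real tools also weigh CPU, I/O, affinity
# rules and more.

hosts = {
    "host-a": {"ram_gb": 32, "vms": {"db1": 12, "app1": 10, "app2": 8}},  # nearly full
    "host-b": {"ram_gb": 32, "vms": {"web1": 4, "web2": 4}},              # half empty
}

def free_ram(host):
    return host["ram_gb"] - sum(host["vms"].values())

def rebalance(hosts, pressure_gb=4):
    """Move the smallest VM that fits from any pressured host to the emptiest other host."""
    for name, host in hosts.items():
        if free_ram(host) >= pressure_gb:
            continue  # this host has enough headroom
        target_name = max((n for n in hosts if n != name), key=lambda n: free_ram(hosts[n]))
        target = hosts[target_name]
        for vm, ram in sorted(host["vms"].items(), key=lambda kv: kv[1]):
            if ram <= free_ram(target):
                target["vms"][vm] = host["vms"].pop(vm)
                print(f"Migrated {vm} ({ram}GB) from {name} to {target_name}")
                break

rebalance(hosts)
print({name: f"{free_ram(h)}GB free" for name, h in hosts.items()})
```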

This may seem new, but it was pretty much the way that IBM used the technology – which it invented – back in the 1960s in order to increase the utilisation of its mainframes. Sound like a familiar problem?

We’ll be covering this issue in more depth in a future feature. ®
