Why is virtualisation important?
Show this to your family next time you need to work all weekend
Lower costs are the basic attraction of all enterprise technologies, and virtualisation promises that in spades. In particular, it reduces hardware maintenance costs via what is now a fairly simple process of packaging physical servers up and hosting several of them on one large server.
The technology can also lower energy bills, thereby allowing datacentre owners to claim green credentials, and it makes provisioning a server much, much faster; it can take only a few minutes to set one up, making the whole system much more flexible.
It works by decoupling the software from the hardware. In practice, this means that a virtual server can contain exactly the same software components – operating system, utilities and application software – as before but instead of running on the hardware directly, it runs inside a sandbox created by a virtualisation hypervisor such as VMware’s ESX or Microsoft’s Hyper-V.
The software divides up physical resources, such as CPU, disk and memory, and allocates them to the servers as they need them. In this way, you can have multiple servers running Windows or Linux, for example, on one piece of hardware. A host server running a hypervisor can run as many virtual servers, or virtual machines (VMs), as it has resources for. The result is that ratios of ten or so virtual servers per host are commonplace, with some reporting much higher ratios.
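As a back-of-the-envelope sketch of that consolidation arithmetic (the host and VM figures here are invented, purely for illustration), the ratio of virtual servers to hosts is capped by whichever resource runs out first:

```python
# Toy consolidation check: how many identical VMs fit on one host?
# All figures below are hypothetical.
host = {"cores": 32, "ram_gb": 256, "disk_tb": 10.0}
vm   = {"cores": 2,  "ram_gb": 16,  "disk_tb": 0.5}

# The scarcest resource caps how many VMs the host can carry.
ratio = min(int(host[k] / vm[k]) for k in host)
print(ratio)  # → 16 (limited by cores and RAM, not disk)
```

Real hypervisors overcommit CPU and memory, so practical ratios can exceed this naive division; the sketch only shows why "ten or so per host" falls out of ordinary hardware sizes.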
The trend for virtualisation is up. Before it became widespread in the mid-2000s, only about 10 per cent of servers were virtualised as many large companies waited for reference sites to appear; canny IT managers didn’t get where they are today by jumping blindly into a brand new technology pit. Those reference sites have now appeared and a recent Gartner report suggested that 25 per cent of servers are now virtualised, with the proportion likely to rise to over 80 per cent by 2012. Most companies have some form of virtualisation project or pilot underway: virtualisation market leader VMware claims that its customers include every member of the US Fortune 100.
Virtualisation cannot resolve all server problems, however. VMs need as much software maintenance as physical ones, and they are so easy and cheap to create that you can end up with virtual server sprawl if you are not on top of managing them. They can also lead to friction within the organisation if some departments demand a physical server, although charging them more for this facility usually helps to change their minds. More critically, shoehorning an I/O-intensive job such as a big database application into a VM can lead to problems if you haven’t done the sums first and made sure that the host’s I/O capabilities, along with all the downstream technologies, are up to the task.
Once your server estate has been virtualised, the problem then is that resources might be in the wrong place. This happens as workloads change and the VMs’ requirements change with them. Suppose, for example, that a host machine has 32GB of RAM but the VMs running on it are only using half that. At the same time, another host could be maxed out, with VMs demanding more resources than it can provide. You could move VMs manually from one to the other but the best long-term resolution is what’s called datacentre orchestration, which allows virtual machines to move automatically around the datacentre to find the resources they need when they need them.
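The decision that orchestration automates is essentially the one described above: find the overloaded host and shift a VM to the host with headroom. A minimal sketch, with hypothetical host names and memory demands:

```python
# Minimal sketch of the rebalancing decision datacentre orchestration
# automates. Host names and per-VM RAM demands are hypothetical.
hosts = {
    "host-a": {"ram_gb": 32, "vms": {"vm1": 8, "vm2": 8}},               # half used
    "host-b": {"ram_gb": 32, "vms": {"vm3": 16, "vm4": 12, "vm5": 8}},   # over-committed
}

def free_ram(h):
    """RAM the host has left after its VMs' demands."""
    return h["ram_gb"] - sum(h["vms"].values())

# Move the smallest VM from the most loaded host to the host with the
# most headroom -- the same call an admin would make by hand.
src = min(hosts.values(), key=free_ram)
dst = max(hosts.values(), key=free_ram)
vm = min(src["vms"], key=src["vms"].get)
dst["vms"][vm] = src["vms"].pop(vm)
```

An orchestration product runs this kind of loop continuously, with live migration so the VM keeps serving while it moves.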
This may seem new, but it was pretty much the way that IBM used the technology – which it invented – back in the 1960s in order to increase the utilisation of its mainframes. Sound like a familiar problem?
We’ll be covering this issue in more depth in a future feature. ®
Ask me if I care
We read here of "virtualisation" every few days.
Given its minor position, eclipsed by numerous other branches of IT, many more relevant and almost all more interesting, you have to ask why.
Gravity is important, too, but I don't benefit from reading about it several times a week.
It's actually pretty simple.
It's virtualized versus partitioned/stand-alone.
If you have one _partitioned_ server with 4 virtual machines, each having:
4 CPU cores.
2x SAN ports running at 4Gbit.
2x 1Gbit ports running at 1Gbit.
32 GB of memory.
Then each partition will only ever have the resources allocated to it. Hence:
4 CPU cores of processing power.
8Gbit of SAN bandwidth.
2Gbit of network bandwidth, and
32 GB of memory.
If, on the other hand, you virtualize things in the following way (using POWERVM terms; feel free to translate into other virtualization products' notation), each virtual machine gets:
8 virtual CPU cores.
2 virtual HBAs.
2 virtual network cards.
40 GB of virtual memory.
Now the physical resources from before are virtualized, so beneath them, to serve the virtual machines, you will normally have:
16 physical CPU cores.
2x 4x 1Gbit Etherchannels (8Gbit).
2x 4x 4Gbit SAN bandwidth (32Gbit).
128 GB of physical memory.
Hence each virtual machine potentially has:
twice the processing power (8 virtual cores versus 4 physical),
four times the network bandwidth,
four times the SAN bandwidth, and
25% extra memory.
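The comparison above boils down to a quick calculation, using the figures from the example:

```python
# Per-VM peak resources: partitioned versus virtualized,
# using the POWERVM-style figures from the example above.
partitioned = {"cores": 4, "san_gbit": 8,  "net_gbit": 2, "ram_gb": 32}
virtualized = {"cores": 8, "san_gbit": 32, "net_gbit": 8, "ram_gb": 40}

gains = {k: virtualized[k] / partitioned[k] for k in partitioned}
print(gains)  # → {'cores': 2.0, 'san_gbit': 4.0, 'net_gbit': 4.0, 'ram_gb': 1.25}
```

The catch, of course, is that these are peaks: the gains only materialise because not every VM hits its peak at once.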
It's very simple: you let your active virtual machines use resources that other virtual machines aren't really using, driving up utilization of the physical resources.
Simple, and this is something that many people do on many different virtualization platforms every day.
Actually I have no problem whatsoever doing a 'sh*tload' of I/O in a POWERVM environment.
Entry number 3 on this list:
Here is the exec summary:
Sure, there is an overhead, but the whole idea of virtualization is increasing utilization. And virtualized workloads often run faster than partitioned ones. Why? Because they can use resources that would otherwise have been dedicated to other virtual machines. It's actually quite simple, if you understand the concept.