Cheap as chips: The future of PC virtualisation
A brief history of virtualisation

VMware was founded in 1998, and until the launch of its eponymous product the following year, the PC's x86 architecture had been considered impossible to fully virtualise.
Since that time, although VMware continues to prosper, prices of virtualisation tools have fallen to an all-time low – in fact, most hypervisors are free, with just the management tools costing money.
The canny are therefore asking if or when the bubble is going to burst. It looks like the classic hype cycle: following the "peak of inflated expectations" comes the "trough of disillusionment."
Some of the key weaknesses of current x86 virtualisation methods and technologies can be revealed by comparing PC hypervisors with those on mainframes and large Unix servers.
For instance, compared to mainframe partitioning, if you use a full PC server OS to run full-system VMs containing other full server or client OSs, the result is horribly inefficient.
Whole layers of the software stack are duplicated on both host and inside multiple VMs. It doesn’t even make a huge difference if the host runs Linux and the guests Windows, or the other way round – either way, there is functional duplication in the stack.
On an x86 server with guest OSs running under a hypervisor, a full copy of Windows (say) is running on an emulated chipset connected to emulated disk drives (formatted with a normal filesystem), and talking through emulated Ethernet cards to another real operating system – which is storing the VM images in another filesystem running on a real disk.
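The layering described above can be made concrete with a small sketch. The layer names below are illustrative, not an exhaustive or authoritative trace of any particular hypervisor's I/O path – the point is simply how many stages a single guest disk write passes through under full-system virtualisation compared with a process running directly on the host.

```python
# Illustrative (not measured) model of the layers a guest disk write
# traverses under full-system virtualisation versus running natively.
full_vm_write_path = [
    "guest application",
    "guest filesystem (e.g. NTFS)",
    "guest disk driver",
    "emulated disk controller",
    "hypervisor / host OS",
    "host filesystem holding the VM image file",
    "host disk driver",
    "physical disk",
]

native_write_path = [
    "application",
    "host filesystem",
    "host disk driver",
    "physical disk",
]

print(len(full_vm_write_path), "layers versus", len(native_write_path))
```

Two filesystems, two disk drivers and an emulated controller sit between the guest application and the real hardware – exactly the duplication the article describes.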
If you’re running current or recent Windows guests under Windows Server 2008 R2 with Hyper-V, then the Enlightened I/O drivers allow for dynamic memory sizing and reasonably efficient driver communication between guests and host – but there is still a lot of duplicated code, which is both wasteful and inefficient.
It means VMs run more slowly and take more disk space and memory. It also means that all the OSs in the stack must be patched and updated separately.
If you have half a dozen Windows servers running in VMs, then that is half a dozen copies of Windows that must be updated – and possibly a host copy as well. Even if tools such as WSUS ease the deployment, it still has to be done.
On the other hand, if the guests are running under VMware ESX or XenServer, then the host OS is relatively simple and lightweight, but the guests are still running on software emulations of complete systems – in VMware's case including heavily optimised binary translation of Ring 0 code for the host CPU. Either way, there is a significant amount of emulation overhead.
Let’s consider what could be eliminated. In part three of this series, we looked at the Unix way of doing things: OS-level virtualisation. This means that only the userland of the OS is virtualised, with multiple userlands running atop a single kernel. One installed copy of the OS can appear to be dozens or more – but all sharing the same core binaries, the same memory and real native unvirtualised CPUs.
Parallels’ Virtuozzo Containers brings this integral feature of Solaris, AIX and FreeBSD to Windows. Virtuozzo isn’t cheap, whereas Hyper-V and VMware are essentially free – but then again, half a dozen Virtuozzo VMs need no more RAM and disk than would be taken by installing all the apps in them straight on the host OS. The savings can be very considerable indeed, and the host server’s resources are shared equally by all the VMs – no partitioning or allocation is required.
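The scale of those savings can be sketched with some back-of-envelope arithmetic. The per-guest figures below are assumptions for illustration, not measurements of Windows or Virtuozzo: suppose each full-system guest needs roughly 10GB of disk and 2GB of RAM for its own OS, whereas OS-level containers share one installed copy of the OS and a single kernel.

```python
# Back-of-envelope comparison of OS overhead for six guests.
# Per-guest figures are assumed for illustration, not measured.
GUESTS = 6
OS_DISK_GB = 10   # assumed disk footprint of one full OS install
OS_RAM_GB = 2     # assumed RAM footprint of one idle OS instance

# Full-system VMs: every guest carries its own complete OS.
full_vm_disk = GUESTS * OS_DISK_GB   # duplicated OS files on disk
full_vm_ram = GUESTS * OS_RAM_GB     # duplicated OS code in memory

# OS-level virtualisation: one shared OS image and kernel.
container_disk = OS_DISK_GB
container_ram = OS_RAM_GB

print(f"Full VMs:   {full_vm_disk} GB disk, {full_vm_ram} GB RAM of OS overhead")
print(f"Containers: {container_disk} GB disk, {container_ram} GB RAM of OS overhead")
```

On these assumed figures, six full-system guests carry 60GB of disk and 12GB of RAM in OS overhead alone, against 10GB and 2GB for the shared-kernel approach – before counting the host's own copy, or the ongoing cost of patching each instance separately.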