Cheap as chips: The future of PC virtualisation

Playing catch-up

Another big-iron model

If you need the guest OSs to be different from the host – for example, hosting multiple Windows XP guests – then another big-iron model could be applied. We looked at IBM mainframe-style virtualisation in part two of this series, where it’s normal to run a specialised host OS on the bare metal, supporting specialised guest OSs that can only run inside VMs. How could this be applied to Windows?

If there were a special edition of Windows designed to run only as a guest, what could be removed? In the days of MS-DOS, PCs were sometimes configured as “diskless workstations” – clients with no local hard drive that booted over the network from a server, then mounted a share as their C drive.

These days, the technology is called “OS streaming”, and several companies offer implementations for modern releases of Windows, including Citrix Provisioning Server and Xstreaming Technology’s VHD.

The technology combines well with things like Windows Server’s Volume Shadow Copy service. Multiple VMs can boot off a single shared drive with writes being redirected to another volume, whose contents can be discarded when the machines shut down.
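The write-redirection scheme described above can be sketched in a few lines. This is a hedged toy model, not any vendor's implementation: many VMs share one read-only "golden" image, each VM's writes go into a private overlay, and discarding the overlay at shutdown returns the disk to its pristine state. The class and method names here are invented for illustration.

```python
# Toy copy-on-write disk: a shared read-only base image plus a
# per-VM overlay that captures writes and can be discarded.

class CowDisk:
    """A block device backed by a shared read-only base image."""

    def __init__(self, base_image):
        self.base = base_image   # shared by every VM, never written
        self.overlay = {}        # this VM's private writes

    def read(self, block):
        # Reads prefer the overlay; unmodified blocks come from the base.
        return self.overlay.get(block, self.base.get(block, b"\x00"))

    def write(self, block, data):
        # Writes never touch the shared base image.
        self.overlay[block] = data

    def shutdown(self):
        # Discarding the overlay reverts the disk to its pristine state.
        self.overlay.clear()


golden = {0: b"boot", 1: b"system"}
vm_a, vm_b = CowDisk(golden), CowDisk(golden)
vm_a.write(1, b"patched")
assert vm_a.read(1) == b"patched"   # VM A sees its own write...
assert vm_b.read(1) == b"system"    # ...but VM B still sees the base
vm_a.shutdown()
assert vm_a.read(1) == b"system"    # after shutdown, the changes are gone
```

The same principle underlies real mechanisms such as differencing VHDs and qcow2 backing files: the base image stays immutable, so it can be shared safely between any number of machines.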

In this way, a dedicated guest edition of Windows wouldn't need emulated disk drives or indeed a filesystem of its own at all. It could just store its files natively in the host’s filesystem – one central set of binaries for dozens or hundreds of machines. It wouldn’t need an emulated network card, either, just a simple soft link to the hypervisor – no emulated chipsets or other complexities.

Stripping out the client OS like this would do away with several layers of indirection and emulation.

By the same token, the guest OS would only ever be served to a single user, so it would not need any integral support for creating multiple users, storing their profiles, switching between them and so on – all its data and configuration would be stored on a server anyway. It would need no hardware detection or device drivers of its own – any drivers it needed would be put in place when it was provisioned, and the virtual hardware platform would be essentially static and unchanging.

Memory allocation

Memory allocation of VMs is getting more flexible with time. VMware ESXi lets you "overcommit" a server, assigning more RAM to VMs than the server actually has.

Meanwhile, if you run a version of Windows with "Enlightened I/O" under Microsoft’s Hyper-V, the memory size of the guest can be configured dynamically according to how much the host has free. However, there’s a more elegant method of sharing memory between host and guests than either of these.
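Both overcommit and dynamic memory rest on the same trick: a "balloon" driver inside the guest that can be inflated on demand, pinning pages the guest then cannot use so the host can hand the underlying RAM to another VM. The following is a deliberately simplified sketch of that accounting, with all class and attribute names invented for this example – real hypervisors make these decisions on live page-usage statistics, not a crude loop.

```python
# Hedged toy model of memory ballooning: the host reclaims RAM from
# overcommitted guests by inflating a balloon inside each one.

class Guest:
    def __init__(self, name, assigned_mb):
        self.name = name
        self.assigned_mb = assigned_mb   # what the guest thinks it has
        self.balloon_mb = 0              # pages surrendered to the host

    @property
    def usable_mb(self):
        return self.assigned_mb - self.balloon_mb


class Host:
    def __init__(self, physical_mb):
        self.physical_mb = physical_mb
        self.guests = []

    def add_guest(self, guest):
        # Overcommit: no check against physical RAM at assignment time.
        self.guests.append(guest)

    def committed_mb(self):
        return sum(g.usable_mb for g in self.guests)

    def rebalance(self):
        # Inflate the balloon of whichever guest has the most usable
        # memory, 64 MB at a time, until the books balance again.
        while self.committed_mb() > self.physical_mb:
            victim = max(self.guests, key=lambda g: g.usable_mb)
            victim.balloon_mb += 64


host = Host(physical_mb=4096)
host.add_guest(Guest("web", 2048))
host.add_guest(Guest("db", 3072))    # 5120 MB assigned on a 4096 MB host
host.rebalance()
assert host.committed_mb() <= 4096   # the host is no longer overcommitted
```

The point of the sketch is the asymmetry: the guest voluntarily gives pages back through a driver it runs itself, which is why ballooning needs guest cooperation – "Enlightened I/O" in Hyper-V's terminology, VMware Tools in ESXi's.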

The model to follow comes from a foreign platform, though. There are two different versions of the Linux kernel that are designed to run as programs under another OS: User Mode Linux and coLinux. UML is a version of the kernel rewritten to run as a userland program – i.e., in the processor’s Ring 3 – under a parent Linux system.

To the parent OS, it appears as a single big process, but inside that process is a complete guest OS – no VM required. coLinux does superficially the same thing on a Windows host, although the implementation is very different.

The point is that if a kernel is designed to run under another OS, it can be written so as to request memory and other resources from the host system. With current x86 virtualisation, each guest needs an emulated memory controller, its own allocation of RAM and a complete emulated motherboard chipset – even if hypervisor-aware drivers improve the performance of running systems. None of this is necessary with a purpose-built kernel, which needs only a few simple drivers to handle communication with the host system.
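The contrast can be made concrete with a toy model. In this sketch – every class and call invented for illustration, standing in for nothing in the real UML or coLinux codebases – a hypervisor-aware guest kernel obtains memory by asking the host's allocator directly, rather than by programming an emulated memory controller:

```python
# Toy contrast with full emulation: a paravirtual guest requests
# resources from the host kernel via a direct call (a "hypercall"),
# with no emulated chipset in between.

class HostKernel:
    """Stands in for the parent OS that UML or coLinux runs under."""

    def __init__(self, free_pages):
        self.free_pages = free_pages

    def grant_pages(self, n):
        # The guest asks; the host allocates straight from its own
        # memory manager and returns however much it can spare.
        granted = min(n, self.free_pages)
        self.free_pages -= granted
        return granted


class ParavirtualGuest:
    """A kernel written to run as a process under another OS."""

    def __init__(self, host):
        self.host = host
        self.pages = 0

    def request_memory(self, n):
        self.pages += self.host.grant_pages(n)


host = HostKernel(free_pages=1000)
guest = ParavirtualGuest(host)
guest.request_memory(300)
assert guest.pages == 300 and host.free_pages == 700
```

One function call replaces an entire emulated memory subsystem, which is the source of the efficiency the text describes: the guest and host share a single real memory manager instead of stacking a fake one on top.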

Virtualisation on x86 has a long way to go before it catches up

These are of course just idle speculations, but they give a flavour of how much smaller and simpler a custom "Guest Windows" could be, should Microsoft ever decide to build such a thing.

The key points to take away from all this?

No matter how mature it seems, virtualisation on x86 has a long way to go before it catches up with the systems that were doing it decades before it came to the PC.

There is already a specialised host version of Windows Server called Hyper-V Server, which, like all of Microsoft's virtualisation tools, is a free download. Even so, the potential benefits from a specialised guest version of Windows would be far greater.

But even if no such product ever appears, it would help if future versions of Windows could install in a special "guest" mode, with a kernel designed to be hypervisor-aware when running under another, host OS.

Even a full-fat edition of Windows would perform better in this mode if its kernel were able to communicate directly with the hypervisor rather than using multiple software emulations of PC hardware or even optimised drivers. Not only that, but it would be easier to manage and would use resources more efficiently.

Beyond this, full-system virtualisation is not the only option, and there are persuasive advantages to operating-system-level virtualisation as well. For roles where you expect to run identical host and guest OSs, OS-level virtualisation delivers much the same benefits but with dramatically lower resource usage, plus the management – and licensing – savings of a single system image to configure, maintain and patch.

A final thought is, sadly, perhaps the least likely to appear. There are already quite a few full-system virtualisation products for various operating systems: Bochs, QEMU, KVM, Xen, VMware, Parallels, VirtualBox and the various Microsoft offerings.

The Linux KVM hypervisor shares code with QEMU and thus shares a VM format. All the Microsoft offerings use a common format, too, derived from Connectix VirtualPC's VHD files. Most of the others, however, do not. The difference goes deeper than the arrangement of files in the host's filesystem: the virtual hardware made available to guests differs from one hypervisor to another, as well.

A single, common virtual hardware platform, at least, would be a big win – and a single on-disk format for virtual machines even better.

The PC industry does not have a good track record at adopting standard formats for interchange between rival systems – where there are standards, such as RTF files, they are at best secondary to products' native formats. If there is one common element, it tends to be everyone else adapting their products to read and write the Microsoft file formats.

It is to hypervisor vendors' advantage if their users are locked in, though; it discourages a VMware house from migrating to Hyper-V, for instance. It would be very convenient for users if they could switch readily between different vendors' hypervisors, but don't hold your breath for that to happen any time soon. ®
