
Virtualisation soaks up corporate IT love

Getting the Big Boys excited

A brief history of virtualisation

Virtualisation is the in-thing in corporate IT. You’d think it was some kind of shiny new concept that had never been done before, a panacea for all computing ills. Everyone seems to be doing it.

It’s not even restricted to servers any more. Suppliers and customers are both getting excited about Virtual Desktop Infrastructure these days – running dozens of copies of client editions of Windows in VMs on big servers, talking to thin clients.

Never mind that the last two iterations of this idea never caught on. The more logical and inexpensive network computer – cheap, interchangeable logic on the desktop; expensive hard-to-maintain config and data on the server – failed to take off at all. The Citrix and Windows Terminal Server-style multiuser Windows box never escaped a niche market.

We’ll also gloss over the fact that at a gig or two per client Windows instance, you’ll be stuffing an awful lot of expensive high-capacity DIMMs in those servers.
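As a back-of-envelope sketch of that memory bill – every figure below is an illustrative assumption, not vendor sizing guidance:

```python
# Back-of-envelope VDI memory sizing. All figures here are
# assumptions for illustration, not vendor recommendations.
gb_per_vm = 2        # "a gig or two" per client Windows instance
desktops = 1000      # the thousand-user shop mentioned below
overhead = 0.10      # assume ~10% extra RAM for the hypervisor itself

total_gb = desktops * gb_per_vm * (1 + overhead)
print(f"Host RAM required: {total_gb:,.0f} GB")  # 2,200 GB of pricey DIMMs
```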

You’ll also be paying your techies double time when it’s time to upgrade the host box to the latest service pack, as it has to be over the weekend or a thousand users won’t be able to get into their machines.

People are even talking about running single-tasking single-user hypervisors directly on client machines, just to simplify deployment onto one standard set of virtual hardware. Just ignore the fact that virtualisation incurs performance overhead as well as a licensing one, or that the virtualised OS doesn't get the benefit of things like fancy 3D cards on the client.

The big question remains, though: why is it proving quite so popular?

Licence to print licences

Ostensibly, it seems like good news for everyone.

Operating system suppliers see it as a way of simplifying deployment, so that each server app runs on its own clean copy of the operating system – and thus, incidentally, you need to buy umpteen more licences for said OS.

Of course, the marketing message is that you’ll need less server hardware, because server OS vendors mostly don’t sell server hardware and vice versa. If you’re not ready to move to 64-bit OSs and apps, you can also run several “legacy” 32-bit OSs with their three-and-a-bit gig memory ceiling (NT 4 and Exchange 5, anyone?) on your shiny new 64-bit server with umpteen gigs of RAM.

Desktop virtualisation is better still, even more so than running server OSs under a hypervisor, because no drivers are needed for the client – everything runs on the bland, standard, universal virtual hardware provided by the hypervisor.

Such a VM is easily transferred from one server to another without reconfiguration – so long as you don’t change hypervisor, of course; more of that lovely vendor tie-in – and you don’t need to care what the hardware is on the client end. Handily, OEM licences from workstations don’t transfer across, so you need to buy a ton of new client OS licences. Lots of revenue, less support.

Spreading the cash around

Server vendors love it too, paradoxically enough. The thing is that x86 performance hasn’t risen that dramatically in the last few years, but if you’re going to virtualise, you’ll need new, high-efficiency, more power-frugal virtualisation-ready servers. Your old servers were probably doing OK, but you want to simplify the management, don’t you?

So replace a dozen just-out-of-warranty, fully-depreciated stand-alone boxes with a couple of much more powerful ones that boast lights-out management and can run everything the whole collection of old ones did. You’ll get higher utilisation that way, and it’s "green," too.

It’s not really green at all, of course. Replacing working hardware is environmentally catastrophic. Partly because PCs and their peripherals are tricky and expensive to recycle – so don’t junk them; donate them to Computer Aid instead – but mostly because the majority of the energy used by a computer, from a notebook to a server, goes into mining the ore, refining the metals, fabricating the device and then shipping it halfway around the planet.

The electricity it actually runs on is a small part of the energy it will use in its lifetime – for PCs, for instance, the ratio is often worse than 80 per cent manufacturing to 20 per cent use.

The more efficient the PC, the more unfavourable the ratio. So replacing any serviceable working device with a new one squanders all the energy and resources used in making it and in making the replacement, too.
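To see why the split worsens as efficiency improves, here is a minimal sketch; the kilowatt-hour figures are invented purely to match the 80/20 split above:

```python
# Invented figures, chosen only to match the 80/20 split above.
manufacture_kwh = 800   # embodied energy: mining, refining, fabbing, shipping
use_kwh = 200           # lifetime electricity drawn at the wall

for label, use in [("original PC", use_kwh),
                   ("twice-as-efficient PC", use_kwh / 2)]:
    share = manufacture_kwh / (manufacture_kwh + use)
    print(f"{label}: manufacturing is {share:.0%} of lifetime energy")
# original PC: manufacturing is 80% of lifetime energy
# twice-as-efficient PC: manufacturing is 89% of lifetime energy
```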

This is even more the case when you replace fairly young kit that’s just old enough that the accounts department has fully depreciated it. Junking millions of working CRT monitors to replace them with flatscreens wasted more energy than the flatscreens’ lower usage will ever save… and sadly, saving desk space is not environmentally significant.

Handily, though, OS installations degrade over time like some accidental form of planned obsolescence, as noted by The Reg's very own Verity Stob some years ago.

The best way to upgrade a tired old PC is to wipe and reinstall it, after which it will run like new – but even skipping individualised installs and just pushing out an updated system image onto hundreds of workstations is a significant operation. So don't stop at the servers – no, save the bother of upgrading those old workstations and junk and replace them, too!
