Original URL: https://www.theregister.com/2011/07/21/brief_history_of_virtualisation_part_4/

Virtualisation soaks up corporate IT love

Getting the Big Boys excited

By Liam Proven

Posted in On-Prem, 21st July 2011 12:00 GMT

A brief history of virtualisation: Virtualisation is the in-thing in corporate IT. You’d think it was some kind of shiny new concept that had never been done before, a panacea for all computing ills. Everyone seems to be doing it.

It’s not even restricted to servers any more. Suppliers and customers are both getting excited about Virtual Desktop Infrastructure these days – running dozens of copies of client editions of Windows in VMs on big servers, talking to thin clients.

Never mind that the last two iterations of this idea never caught on. The more logical and inexpensive network computer – cheap, interchangeable logic on the desktop; expensive hard-to-maintain config and data on the server – failed to take off at all. The Citrix and Windows Terminal Server-style multiuser Windows box never escaped a niche market.

We’ll also gloss over the fact that at a gig or two of RAM per client Windows instance, you’ll be stuffing an awful lot of expensive high-capacity DIMMs into those servers.

You’ll also be paying your techies double time whenever the host box needs the latest service pack, because the work has to be done over the weekend or a thousand users won’t be able to get into their machines.

People are even talking about running single-tasking single-user hypervisors directly on client machines, just to simplify deployment onto one standard set of virtual hardware. Just ignore the fact that virtualisation incurs performance overhead as well as a licensing one, or that the virtualised OS doesn't get the benefit of things like fancy 3D cards on the client.

The big question remains, though, why is it proving quite so popular?

Licence to print licences

Ostensibly, it seems like good news for everyone.

Operating system suppliers see it as a way of simplifying deployment, so that each server app runs on its own clean copy of the operating system – and thus, incidentally, you need to buy umpteen more licences for said OS.

Of course, the marketing message is that you’ll need less server hardware, because server OS vendors mostly don’t sell server hardware and vice versa. If you’re not ready to move to 64-bit OSs and apps, you can also run several “legacy” 32-bit OSs with their three-and-a-bit gig memory ceiling (NT 4 and Exchange 5, anyone?) on your shiny new 64-bit server with umpteen gigs of RAM.

Desktop virtualisation is better still, even more so than running server OSs under a hypervisor, because no drivers are needed for the client – everything runs on the bland, standard, universal virtual hardware provided by the hypervisor.

This is easily transferred from one server to another without reconfiguration – so long as you don’t change hypervisor, of course; more of that lovely vendor tie-in – and you don’t need to care what the hardware is on the client end. Handily, OEM licences from workstations don’t transfer across, so you need to buy a ton of new client OS licences. Lots of revenue, less support.

Spreading the cash around

Server vendors love it too, paradoxically enough. The thing is that x86 performance hasn’t risen that dramatically in the last few years, but if you’re going to virtualise, you’ll need new, high-efficiency, more power-frugal virtualisation-ready servers. Your old servers were probably doing OK, but you want to simplify the management, don’t you?

Replacing working hardware is environmentally catastrophic

So replace a dozen just-out-of-warranty, fully-depreciated stand-alone boxes with a couple of much more powerful ones that boast lights-out management and can run everything the whole collection of old ones did. You’ll get higher utilisation that way, and it’s "green," too.

It’s not really green at all, of course. Replacing working hardware is environmentally catastrophic. Partly because PCs and their peripherals are tricky and expensive to recycle – so don’t: donate them to Computer Aid instead – but mostly because the majority of the energy used by a computer, from a notebook to a server, is used in mining the ore, refining the metals, fabricating the device and then shipping it halfway around the planet.

The power it consumes while actually running is a small part of the energy it will use in its lifetime – for PCs, for instance, the ratio is often worse than 80 per cent manufacturing to 20 per cent use.

The more efficient the PC, the more unfavourable the ratio. So replacing any serviceable working device with a new one squanders all the energy and resources used in making it and in making the replacement, too.

This is even more the case when you replace fairly young kit that’s just old enough that the accounts department has fully depreciated it. Junking millions of working CRT monitors to replace them with flatscreens wasted more energy than the flatscreens’ lower power consumption will ever save… and sadly, saving desk space is not environmentally significant.

Handily, though, OS installations degrade over time like some accidental form of planned obsolescence, as noted by The Reg's very own Verity Stob some years ago.

The best way to rejuvenate a tired old PC is to wipe it and reinstall the OS from scratch, after which it will run like new – but even skipping individualised installs and just pushing an updated system image out to hundreds of workstations is a significant operation. So don’t stop at the servers – no, save yourself the bother of upgrading those old workstations and junk and replace them, too!

Lots more cores

CPU vendors love it. For one thing, modern CPUs are not that much quicker than the ones of a few years back – the changes are incremental now, with the biggest difference being lots more cores, which, sadly, most current software cannot use effectively.

Finding ways to automatically parallelise existing single-threaded software is one of the hardest problems in modern computer science, and so far, nobody has much of a clue how to do it.

It all has to be done carefully, by hand, by very smart, highly trained human developers – and even today, most of their training doesn’t cover the complex and difficult process of refactoring code for multi-core machines.
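
To make the distinction concrete, here is a minimal, entirely illustrative Python sketch – nothing from the original article – contrasting work that splits cleanly across cores with work that cannot be parallelised because each step depends on the one before it. The function names and numbers are made up.

```python
# Illustrative only: independent iterations fan out across cores easily;
# a loop-carried dependency forces the work to stay serial.
from concurrent.futures import ProcessPoolExecutor

def expensive(x):
    # Stand-in for real work: sum of squares up to x.
    return sum(i * i for i in range(x))

def parallel_friendly(inputs):
    # Each call to expensive() is independent, so a pool of worker
    # processes can spread the calls over every available core.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(expensive, inputs))

def parallel_hostile(inputs):
    # Each iteration needs the previous result, so the loop is
    # inherently serial no matter how many cores the machine has.
    acc, results = 0, []
    for x in inputs:
        acc = expensive(x + (acc % 7))
        results.append(acc)
    return results

if __name__ == "__main__":
    data = [50_000] * 8
    print(parallel_friendly(data)[0])
    print(parallel_hostile(data)[0])
```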

One thing you can be sure of is that when you buy the next version of your software, whatever it is, it’ll be bigger, take more RAM and more CPU than the previous version, so you might well not notice any marginal benefit from a couple of marginal modules now being multicore-aware.

(This, incidentally, is why this article is being written in Microsoft Word 97.)

Intel Core i5 vPro

The clock speed of a shiny new Core i5 is not hugely faster than that of a dusty old Core 2 Duo – much of the reason for the better benchmark scores is the additional cores, plus tweaks designed to deliver a little more performance per clock cycle.

Those are great for rendering movies into DivX format, or for applying Photoshop filters to large images, but of no real benefit at all if you’re running a word processor, spreadsheet or email client: single-threaded performance might be 25 per cent better at the same clock speed, if you’re lucky.

This makes them a tough sell on the workstation end of things. But on the server? Whereas your old Core 2 Duo boxes were just fine for a single OS, modern chips have more cores – from three or four to even 12 – and virtualisation just loves multicore: you can dedicate one or more cores per VM. Little current software scales well to multiple cores, but run a hypervisor and multiple OS instances and you can use loads of them.
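
As a hedged sketch of what dedicating cores to a VM can look like in practice, the snippet below uses the libvirt Python bindings on an assumed KVM host to pin a guest’s virtual CPUs to specific host cores. The guest name "web-vm", the four-core host and the chosen core numbers are all assumptions for illustration, not details from this article.

```python
# A hedged sketch, assuming a Linux/KVM host with the libvirt Python
# bindings installed (pip install libvirt-python). It pins a guest's
# virtual CPUs to dedicated host cores.
import libvirt

HOST_CORES = 4  # assumed number of physical cores on the host

def pin_vm_to_cores(domain_name, cores):
    """Pin vCPU 0 of the guest to cores[0], vCPU 1 to cores[1], and so on."""
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        for vcpu, core in enumerate(cores):
            # cpumap holds one boolean per host core; True means the
            # vCPU is allowed to run on that core.
            cpumap = tuple(i == core for i in range(HOST_CORES))
            dom.pinVcpu(vcpu, cpumap)
    finally:
        conn.close()

if __name__ == "__main__":
    # Give the hypothetical "web-vm" guest cores 2 and 3 to itself.
    pin_vm_to_cores("web-vm", cores=[2, 3])
```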

Performance issues

And while we're talking about performance, it's worth remembering that virtualisation on x86 still isn’t terribly efficient. It typically incurs a performance cost of 5 to 10 per cent or so.

Remember, too, that now your newly virtualised server is sharing the same disk drives and network cards with half a dozen other server instances. It’s quite possible that your new virtual server will actually be no quicker at all than your old one, despite hardware that is several years younger. You can easily deploy lots more of them, but actually, what most desktop software and its users want is faster single-threaded performance, not more threads.

Server software vendors are rejoicing – there’s less need to make sure that your code plays nice with others if it can expect to have a nice clean VM entirely to itself, and it’s easier to cope with deployment issues, patching, very specific version requirements and so on, too. Plus, as we mentioned, you'll be needing licences for all those VMs and the host machines too.

And it's great for network admins, too. VMs are much easier to deploy than physical servers. You don’t need to take disk images for backups – they are disk images, ready for copying and archiving. Virtual server gone wrong? Just stop it and restart it. All the other functions are on other virtual servers, which won’t be affected. Restart didn’t help?

The IT industry survives only by customers constantly replacing their hardware and software

Roll back to the last known good image. Need more capacity? Plonk a few more images on a new box – the virtual hardware is all identical, no reconfiguration needed. It’s terrific. Beware, though – with large numbers of servers, real or virtual, you get into the arcane territories of load-balancing and failover.
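
For a flavour of that snapshot-and-roll-back workflow, here is another hedged sketch using the libvirt Python bindings. The guest name, the snapshot name and the assumption of a snapshot-capable setup (for example KVM with qcow2 disks) are illustrative, not taken from the article.

```python
# A minimal sketch of "roll back to the last known good image",
# assuming a libvirt-managed hypervisor with snapshot support.
import libvirt

SNAP_XML = "<domainsnapshot><name>known-good</name></domainsnapshot>"

def take_known_good(domain_name):
    """Record the guest's current state as a named snapshot."""
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        dom.snapshotCreateXML(SNAP_XML, 0)
    finally:
        conn.close()

def roll_back(domain_name, snap_name="known-good"):
    """Throw away the broken state and return to the saved snapshot."""
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        snap = dom.snapshotLookupByName(snap_name, 0)
        dom.revertToSnapshot(snap, 0)
    finally:
        conn.close()

if __name__ == "__main__":
    take_known_good("web-vm")  # hypothetical guest name
    # ...later, when a restart doesn't help:
    roll_back("web-vm")
```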

And then, of course, comes the dread day that you have to update the OS on the actual physical server underneath, at which point all those nice easy-to-maintain virtual boxes are going to go down at once. Better cross your fingers and hope the host comes back up again without a hitch.

But most of all, at the end of the day, let’s not forget that the IT industry survives only by customers constantly replacing their hardware and software and it has about as much awareness of the future as a pre-Crunch investment banker.

Sure, for now, all the virtual hardware is uniform, but at some point, who knows if some sweeping change will be necessary that invalidates a million VM images? So one of the key questions is: what changes are waiting down the line for x86 virtualisation? ®

In the fifth and final article, we will look at the future of virtualisation on the PC.