Lots more cores
CPU vendors love it. Firstly, because modern CPUs are not that much quicker than the ones of a few years back – the changes are incremental now, with the biggest difference being lots more cores, which sadly most current software cannot effectively use.
Finding ways to automatically parallelise existing single-threaded software is one of the hardest problems in modern computer science, and so far, nobody has much of a clue how to do it.
It all has to be done carefully, by hand, by very smart, highly trained human developers – and even today, most of their training doesn’t cover the complex and difficult process of refactoring code for multi-core machines.
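To make the point concrete, here is a minimal Python sketch of the kind of hand refactoring involved: a single-threaded loop rewritten, by the developer, to spread independent work across cores with the standard-library multiprocessing module. The workload and function names are invented for illustration.

```python
# Manual parallelisation: the work must be split into independent
# chunks by hand before extra cores are any use at all.
from multiprocessing import Pool

def busy_work(n):
    """An invented, embarrassingly parallel stand-in workload."""
    return sum(i * i for i in range(n))

def single_threaded(jobs):
    # The original code: one core, one thread, one loop.
    return [busy_work(n) for n in jobs]

def multi_core(jobs, workers=4):
    # The refactored code: identical results, but only because a
    # human first spotted that the iterations were independent.
    with Pool(processes=workers) as pool:
        return pool.map(busy_work, jobs)

if __name__ == "__main__":
    jobs = [100_000] * 8
    assert single_threaded(jobs) == multi_core(jobs)
```

Note that nothing here was automatic: the compiler did not find the parallelism, the programmer did – which is exactly why most shipping software never gets this treatment.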
One thing you can be sure of is that when you buy the next version of your software, whatever it is, it’ll be bigger, take more RAM and more CPU than the previous version, so you might well not notice any marginal benefit from a couple of marginal modules now being multicore-aware.
(This, incidentally, is why this article is being written in Microsoft Word 97.)
The clock speed of a shiny new Core i5 is not hugely faster than that of a dusty old Core 2 Duo – much of the reason for better benchmark scores is the additional cores, which add aggregate throughput rather than making any single thread much faster.
Those are great for rendering movies into DivX format, or for applying Photoshop filters to large images, but of no real benefit at all if you’re running a word processor, spreadsheet or email client: single-threaded performance might be 25 per cent better at the same clock speed, if you’re lucky.
This makes them a tough sell on the workstation end of things. But on the server? Whereas your old Core 2 Duo boxes were just fine for a single OS, modern chips have more cores – from three or four to even 12 – and virtualisation just loves multicore: you can dedicate one or more cores per VM. Little current software scales well to multiple cores, but run a hypervisor and multiple OS instances and you can use loads of them.
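The “one or more cores per VM” idea can be illustrated at the process level with a short, Linux-specific Python sketch – `os.sched_setaffinity` is the standard-library call for pinning a process to chosen cores, which is roughly what a hypervisor does when it dedicates cores to a VM. The helper name is invented.

```python
# Linux-specific sketch: pin a process to a single core, roughly
# what a hypervisor does when it dedicates cores to a VM.
import os

def dedicate_core(pid, core):
    """Pin pid (0 means 'this process') to one core and return
    the process's new affinity set."""
    os.sched_setaffinity(pid, {core})
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    print(f"This machine reports {os.cpu_count()} logical cores")
    # Pick a core this process is actually allowed to use,
    # then dedicate the process to it.
    core = min(os.sched_getaffinity(0))
    assert dedicate_core(0, core) == {core}
```

A real hypervisor does this per vCPU, per VM, which is why a 12-core box carved into half a dozen pinned VMs can keep every core busy even when no single guest workload scales past two threads.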
And while we're talking about performance, it's worth remembering that virtualisation on x86 still isn’t terribly efficient. It typically incurs a performance cost of 5 to 10 per cent or so.
Remember, too, that now your newly virtualised server is sharing the same disk drives and network cards with half a dozen other server instances. It’s quite possible that your new virtual server will actually be no quicker at all than your old one, despite hardware that is several years younger. You can easily deploy lots more of them, but actually, what most desktop software and its users want is faster single-threaded performance, not more threads.
Server software vendors are rejoicing – there’s less need to make sure that your code plays nice with others if it can expect to have a nice clean VM entirely to itself, and it’s easier to cope with deployment issues, patching, very specific version requirements and so on, too. Plus, as we mentioned, you'll be needing licences for all those VMs and the host machines too.
And it's great for network admins, too. VMs are much easier to deploy than physical servers. You don’t need to take disk images for backups – they are disk images, ready for copying and archiving. Virtual server gone wrong? Just stop it and restart it. All the other functions are on other virtual servers, which won’t be affected. Restart didn’t help?
Roll back to the last known good image. Need more capacity? Plonk a few more images on a new box – the virtual hardware is all identical, no reconfiguration needed. It’s terrific. Beware, though – with large numbers of servers, real or virtual, you get into the arcane territories of load-balancing and failover.
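The copy-and-roll-back workflow above can be sketched in Python, with an ordinary file standing in for a virtual disk image. This is a toy illustration only – real tools operate on qcow2 or VMDK images – and every path and helper name here is invented.

```python
# Toy sketch of the snapshot / rollback / clone workflow, with a
# plain file standing in for a virtual disk image.
import shutil
import tempfile
from pathlib import Path

def snapshot(image):
    """Take a 'last known good' copy of the image."""
    good = image.with_name(image.name + ".good")
    shutil.copy2(image, good)
    return good

def rollback(image, good):
    """Throw away the broken image and restore the good copy."""
    shutil.copy2(good, image)

def clone(image, n):
    """Need more capacity? Stamp out n identical copies."""
    clones = []
    for i in range(n):
        c = image.with_name(f"{image.stem}-clone{i}{image.suffix}")
        shutil.copy2(image, c)
        clones.append(c)
    return clones

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        img = Path(d) / "server.img"
        img.write_text("known good state")
        good = snapshot(img)
        img.write_text("corrupted state")   # the VM "goes wrong"
        rollback(img, good)
        assert img.read_text() == "known good state"
        assert len(clone(img, 3)) == 3
```

The appeal for admins is that every operation above is a file copy: no reinstall, no driver hunt, no hardware reconfiguration.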
And then, of course, comes the dread day that you have to update the OS on the actual physical server underneath, at which point all those nice easy-to-maintain virtual boxes are going to go down at once. Better cross your fingers and hope the host comes back up again without a hitch.
But most of all, at the end of the day, let’s not forget that the IT industry survives only by customers constantly replacing their hardware and software, and it has about as much awareness of the future as a pre-Crunch investment banker.
Sure, for now, all the virtual hardware is uniform, but at some point, who knows if some sweeping change will be necessary that invalidates a million VM images? So one of the key questions is: what changes are waiting down the line for x86 virtualisation? ®
In the fifth and final article, we will look at the future of virtualisation on the PC.
Virtualisation, like everything, has pros and cons
First off, let's be honest here. ISYS's comment – "Secondly, you don't virtualise services that require high end performance so sharing resources with all the other clients on a host is not an issue." – doesn't tally with what the virtualisation providers advertise. For example, from VMware's site:
"Run Business Critical Applications with Confidence
Deliver better application performance and availability with less complexity at a lower cost.
Scalability and Performance
With 4x more powerful VM’s, vSphere supports the most resource-intensive applications."
Of course VMware wants everything you have to run on virtualised hardware – that's VMware's job. And sharing resources with all the other clients on a host is always an issue. For a start, that's why you're supposed to stuff your new servers full of RAM.
I do see that there are pros to virtualisation, and certainly in the longer term it makes increasing amounts of sense, but right now the con does look a lot like "throw out your stuff and buy new stuff". If your current stuff works, great. All the benefits of reduced power use and server-space rental *need* to pay for themselves between licence renewals, because there's no "it's better now" advantage in virtualisation. It's not supposed to make your server quicker. Your users aren't getting more done because you bought a few new servers and an often surprising number of software licences, and that's a massive failing in a technology upgrade.
Spending money solely for the purpose of saving money needs to be done very, very carefully.
You should never forget that all of these trends (why hasn't your business virtualised / switched to Mac / upgraded from XP / let us install and support Linux / gone paperless etc etc) are advertised precisely because someone intends to make money out of you. The companies selling virtualisation software, as companies, don't really care if virtualisation is right for your situation.
"It's great for network admins too"
There are 2 places where virtualisation really helps.
1) in freeing up rack space at a busy datacentre – all those extra cores are useful because they can be treated as vCPUs, so you can shrink the physical size of your infrastructure, cutting power and cooling requirements. Screw any global environmental effect – it means you don't have to buy more space or electricity to expand.
2) in reducing the amount of time required to manage the OSes, which means the same number of bodies can administer far more virtual images than they could physical servers – so no new bodies need to be hired.
The cost of site rental, power and especially people keeps going up, while the cost of physical hardware holds steady (even as capacity rises), so in the longer term virtualisation makes complete sense.
vSphere 5.0 licensing
You mean this? http://www.theregister.co.uk/2011/07/13/vmware_esxi_5_0_analysis/