Original URL: https://www.theregister.com/2010/06/17/virtual_wetware/

Virtual management - a lesson from history

Wetware still required

By Trevor Pott and Iain Thomson

Posted in Systems, 17th June 2010 10:02 GMT

Blog I have worked with many different virtualisation platforms, from Microsoft’s early attempts with Virtual Server 2005 to the latest VMware, Hyper-V, KVM and Xen offerings. I have taken the time to play with the management software offered by these vendors, and even some of the third-party stuff. Depending on your specific needs, the higher-level management tools can either make your life easier or far more complicated. They will most assuredly make it more expensive.

The most obvious problem is that of keeping all our eggs in one basket, and it is a tough one to get around. In some instances (such as VDI) it may well be a perfectly acceptable risk to run a collection of virtual machines on a single physical server. The downtime that occurs in the rare event that the server suffers a hardware failure can be considered acceptable, and those VMs can then be spun up elsewhere. As soon as you step away from systems that can afford brief outages and start looking at the virtualisation of critical systems, everything gets a lot more complicated.

As a systems administrator, it’s part of my job to look at the state of IT today and the history of IT in general, and from this extrapolate a prediction of where it is going. Buying most hardware requires only fairly short-term foresight; on average, you’ll be replacing your servers and desktops every four years. Software requires a lot more research, both into the capabilities of the product and into the viability of the supplier. Once locked into a given operating system or application, exiting that ecosystem is not easy. Whether you are hopping to a superior competitor or moving because your vendor has failed or exited the market, software changes tend to present far greater difficulty than hardware changes.

The lessons I have learned trying to solve power management issues on my network have brought this required bit of professional prognostication to the fore. I have long maintained that the IT landscape is in the midst of collapsing into a relatively small number of megacorporations. While I can’t possibly say how that will shake out, it will make for an interesting decade wherein everything from storage arrays to operating systems and even cloud services are commoditised to the point of being indistinguishable. This process is generally good for consumers; we get more choice for a lower cost. What it always brings with it in IT, though, are some desperate endgame plays and strategic alliances. Everyone starts pairing off, and the game becomes one of locking your customers into your particular vertical stack of hardware, hypervisor, operating system, management applications and services. Through this integration it becomes nearly impossible to ever leave.

The death of the PC has been predicted many times before, and I certainly am not going to predict it here. There will likely always be a need for general purpose desktops and servers, but I feel that the days where they are the norm rather than the exception will soon be behind us. The reason for this is not politics or money, or even the rise of the iPad. The reason is that Intel has spent the greater part of the aughties stymied by the laws of physics.

Somewhere in the past decade, the ability to keep increasing the speed of a single core hit a wall. Intel loves to tell anyone who will listen that multi-core is the future, and programmers just need to catch up with the hardware. Programmers respond by politely informing the semiconductor behemoth that they are very sorry, but some workloads simply can’t be parallelised. This dance has gone on for years now, and very little progress has been made. Virtualisation has bought us time in the ongoing effort to improve the price/performance/watt of our networks, but it needs to be borne in mind that it isn’t an endgame solution, rather a stop-gap measure. Without that single-thread performance, we are limited to growing only those workloads that can be parallelised. Virtual Desktop Infrastructure (VDI) is one of them. So low are the requirements of most users today that many small, simple, non-critical workloads can be shoved into one of these multi-core monstrosities without any noticeable compromises.
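To put a number on that single-thread wall, it helps to run the argument through Amdahl’s law. The quick Python sketch below is purely my own illustration; the 20 per cent serial fraction is an assumed figure, not a measurement of any real workload:

    # Amdahl's law: overall speedup is capped by the serial fraction of a
    # workload, no matter how many cores you throw at it.
    def amdahl_speedup(serial_fraction: float, cores: int) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    # Assume a workload that is 20 per cent serial (illustrative figure only).
    for cores in (2, 4, 8, 16, 64):
        print(f"{cores:>2} cores: {amdahl_speedup(0.2, cores):.2f}x")
    # 2 cores: 1.67x, 4: 2.50x, 8: 3.33x, 16: 4.00x, 64: 4.71x

Even with only a fifth of the work stuck in a single thread, no pile of cores will ever get you past a 5x speedup, which is why simply adding cores does so little for the workloads that refuse to parallelise.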

On the desktop, two cores are better than one; background processes are now free to run about on a separate compute unit from your main task, but four cores is in many cases still a complete waste over two. Six cores, eight, twelve, sixteen… this is simply ludicrous when we start putting it under Alice’s desk for her to check her email and view a PDF. As a rule of thumb, I ask myself if a user would benefit from having more than two cores in their desktop. If the answer is no, they get a thin client and VDI. Some users I simply can’t justify pushing on to VDI; they need all the power they can get. In fact, we have users with spiffy quad-core systems under their desks, and a virtual machine or two of their own, all of which they cheerfully flatten all day long.
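That rule of thumb is simple enough to write down. The sketch below is only an illustration of the decision; the user names and their answers are hypothetical, not a real inventory:

    # Toy encoding of the thin-client-or-not rule of thumb. The users and
    # their core requirements are hypothetical examples.
    users = {"Alice": False, "Bob": False, "Carol (CAD)": True}

    for name, needs_more_than_two_cores in users.items():
        placement = "physical desktop" if needs_more_than_two_cores else "thin client + VDI"
        print(f"{name}: {placement}")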

On the server front we consolidate our servers from many into one in order to save money, whether in initial purchase costs, electricity or cooling. We drive up the utilisation of our servers in doing so, thus making them more efficient. We then end up running this smaller number of servers harder and hotter, resulting in more frequent component failures. ($15,000 server crashed by failure of ten-cent fan!) In response, server vendors offer us more complex and redundant servers and management software, all of which are more power hungry and expensive. While virtualisation can save money in some deployments, improperly dealing with “eggs in one basket” by trying too hard to virtualise mission critical services can destroy any budget gains made.

Beyond just worrying about a single hardware failure causing multiple VM failures, you need to look at the power utilisation of properly designed servers. It’s easy to look at thirty servers or desktops with 600W power supplies and see that being collapsed into a single server with a 1400W power supply. What’s harder is asking yourself if all thirty of those systems needed enough power to require 600W power supplies in the first place. If you are considering consolidation through virtualisation, then the chances are the answer to this question is no.
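It is worth doing those sums explicitly. The wattages below are assumptions chosen to illustrate the point, not measurements from my network:

    # Back-of-the-envelope consolidation arithmetic, using assumed figures:
    # nameplate PSU ratings versus a more realistic draw per box.
    servers = 30
    psu_rating_w = 600          # what the label on each old box says
    realistic_draw_w = 120      # what a lightly loaded small server might pull
    consolidated_draw_w = 1100  # one big host running near its 1400W supply

    nameplate_saving = servers * psu_rating_w - consolidated_draw_w
    realistic_saving = servers * realistic_draw_w - consolidated_draw_w
    print(f"Saving on paper:    {nameplate_saving}W")  # 16900W
    print(f"Saving in practice: {realistic_saving}W")  # 2500W

The saving is real either way, but nowhere near as dramatic as the nameplate figures suggest, which is exactly the point: most of those boxes never needed 600W supplies in the first place.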

The FTP server or spam filter for a small business is unlikely to need a 130W CPU with four DIMMs and a RAID array. In fact you could probably shove it onto a spiffy little Mini-ITX Atom board with a small flash disk, passively cool the whole thing and have it pull less than 60W. With no moving parts and judicious component choices, you could cheerfully get ten years out of a system like that. It could just sit in a corner and sip power. In my experience, there are quite a few workloads that can be successfully “physicalised”: that is, moved onto a dedicated piece of hardware that is tightly specified for the job. Little overhead for growth, ultra-low power consumption, but zero expectation that the requirements for the system doing that job will ever change.
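A rough comparison of what such an appliance costs to feed over that decade, against a conventional server doing the same job, makes the case. The wattages and the electricity tariff below are assumed figures for illustration only:

    # Ten-year energy cost of a physicalised appliance versus a general
    # purpose server. All wattages and the tariff are assumed figures.
    HOURS_PER_YEAR = 24 * 365
    TARIFF = 0.10  # $ per kWh, assumed flat rate

    def ten_year_energy_cost(watts: float) -> float:
        kwh = watts * HOURS_PER_YEAR * 10 / 1000.0
        return kwh * TARIFF

    print(f"Atom appliance at 60W:        ${ten_year_energy_cost(60):,.0f}")   # ~$526
    print(f"General purpose box at 300W:  ${ten_year_energy_cost(300):,.0f}")  # ~$2,628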

Looked at another way, computing workloads are in many places moving rapidly towards becoming appliances. The general purpose PC, be it desktop or server, is facing a crisis. There is little if any more single-thread performance to be had. This leaves us cutting up our systems either with parallelisation of our applications (not easy), or with virtualisation. Virtualisation brings with it its own extra costs and complexity, so we respond by virtualising only what needs to be virtualised, and making simple appliances out of the rest. To make matters more complicated, everywhere you turn there is someone claiming to have found the “one true path”, with the “one true technology stack” that can solve all problems and cure all ills.

Utter bollocks. In trying to find a magical solution to getting decent power management out of the virtualised portion of my network, I came to realise that every chunk of software or hardware I dug up to solve my problems probably wouldn’t apply to most other networks. For this reason many of these solutions never made it into my VDI power management articles. I also came to the conclusion that there is absolutely nothing special about VDI. It is almost impossible to separate the issues surrounding its implementation from those surrounding the implementation of server workloads.

There are nifty software stacks that can do it all and make you tea, but they bear a heavy price. This price is not only that of licensing, but of vendor lock-in. (Something my spidey-sense tells me is going to become much more of an issue in the coming decade.) For small and medium enterprises, these costs are so high that when you do the math to figure out the cheapest route to high uptime with virtualised systems, the answer isn’t hardware or software. The cheapest solution to these problems is still wetware. It seems that when trying to balance cost, power usage, performance and uptime considerations, there’s just no viable replacement for a well-trained systems administrator who knows his network. ®