Virtual management - a lesson from history
Wetware still required
I have worked with many different virtualisation platforms, from Microsoft’s early attempts with Virtual Server 2005 to the latest VMware, Hyper-V, KVM and Xen offerings. I have taken the time to play with the management software offered by these vendors, and even a few third-party tools. Depending on your specific needs, the higher-level management tools can either make your life easier or far more complicated. They will most assuredly make it more expensive.
The most obvious problem is that of keeping all our eggs in one basket, and it is a tough one to beat. In some instances (such as VDI) it may well be a perfectly acceptable risk to run a collection of virtual machines on a single physical server. The downtime that occurs in the rare event that the server suffers a hardware failure can be considered acceptable, and those VMs can then be spun up elsewhere. As soon as you step away from systems that can afford brief outages and start looking at the virtualisation of critical systems, everything gets a lot more complicated.
As a systems administrator, it’s part of my job to look at the state of IT today and the history of IT in general, and from this extrapolate a prediction of where it is going. Buying most hardware requires only a fairly short-term bit of foresight; on average, you’ll be replacing your servers and desktops every four years. Software requires a lot more research, both into the capabilities of the product and into the viability of the supplier. Once locked into a given operating system or application, exiting that ecosystem is not easy. Whether you are hopping to a superior competitor or fleeing because your vendor has failed or exited the market, software changes tend to present far greater difficulty than hardware changes.
The lessons I have learned trying to solve power management issues on my network have brought this required bit of professional prognostication to the fore. I have long maintained that the IT landscape is in the midst of collapsing into a relatively small number of megacorporations. While I can’t possibly say how that will shake out, it will make for an interesting decade, wherein everything from storage arrays to operating systems and even cloud services is commoditised to the point of being indistinguishable. This process is generally good for consumers; we get more choice for a lower cost. What it always brings with it in IT are some desperate endgame plays and strategic alliances. Everyone starts pairing off, and the game becomes one of locking your customers into your particular vertical stack of hardware, hypervisor, operating system, management applications and services. Through this integration it becomes nearly impossible to ever leave.
The death of the PC has been predicted many times before, and I certainly am not going to predict it here. There will likely always be a need for general purpose desktops and servers, but I feel that the days where they are the norm rather than the exception will soon be behind us. The reason for this is not politics or money, or even the rise of the iPad. The reason is that Intel has spent the greater part of the aughties stymied by the laws of physics.
Somewhere in the past decade, the ability to keep increasing the speed of a single core hit a wall. Intel loves to tell anyone who will listen that multi-core is the future, and that programmers just need to catch up with the hardware. Programmers respond by politely informing the semiconductor behemoth that they are very sorry, but some workloads can’t be parallelised. This dance has gone on for years now, and very little progress has been made. In the attempt to continually increase the price/performance/watt values of our networks, virtualisation has bought us time. It needs to be borne in mind, though, that it isn’t an endgame solution, but rather a stop-gap measure. Without that single-thread performance, we can only grow workloads that can be parallelised. Virtual Desktop Infrastructure (VDI) is one of them. So low are the requirements of most users today that many small, simple, non-critical workloads can be shoved into one of these multi-core monstrosities without any noticeable compromises.
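The ceiling described above is usually formalised as Amdahl's law: if only a fraction of a workload can be parallelised, the serial remainder caps the total speedup regardless of core count. A minimal sketch (the function name and figures are mine, for illustration):

```python
# Amdahl's law: theoretical speedup on n cores when only a fraction p
# of the work can be parallelised. The serial part, (1 - p), never
# shrinks, so speedup is capped at 1 / (1 - p) no matter how many
# cores you add.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n cores if fraction p of the work parallelises."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a workload that is 95% parallel can never exceed 20x
# (1 / 0.05), whether you have 64 cores or 64,000.
for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
```

This is why the single-thread wall matters: adding cores helps embarrassingly parallel workloads like VDI, but does almost nothing for a job dominated by its serial fraction.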
"Would like to ask, what kind of stuff are you talking about that can't be paralised? In the real world I mean, I know the theoretical limits but I've rarely (probably never) butted up against them."
Well, talking strictly "real world" stuff, we run up against this with render engines a lot. You can render many different things at the same time, but you can only render one thing in one thread. If you are applying multiple filters to a frame or image, you have to wait until filter 1 is done before filter 2 is applied, because it needs the corrected image from filter 1. We hit this wall all the time, since the vast majority of our data requirements come from rendering terabytes of images every month.
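The render-pipeline dependency above can be sketched in a few lines. The filters here are trivial stand-ins I invented for illustration, not real render code; the point is the shape of the dependency, which parallelises across frames but not within one:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-frame filters; stand-ins for real render filters.
def filter_one(frame):
    return [px * 2 for px in frame]   # e.g. an exposure correction

def filter_two(frame):
    return [px + 1 for px in frame]   # operates on filter_one's output

def render_frame(frame):
    # Within a single frame the chain is strictly serial: filter_two
    # cannot start until filter_one has produced the corrected image.
    return filter_two(filter_one(frame))

# Across frames, though, the work parallelises trivially: each frame
# is an independent serial pipeline, so throughput scales with cores
# while the latency of any one frame is still bound by one thread.
frames = [[0, 1, 2], [3, 4, 5]]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(render_frame, frames))
```

So a render farm keeps all its cores busy, yet the time to finish any individual frame is still dictated by single-thread speed.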
I also see many video decoders that can at best be partly parallelised, but which still have to do the bulk of their work in a single thread. This leaves you really dependent on the speed of that primary core. There are other examples, such as speech recognition, facial recognition and other biometrics processing, that seem to require one large thread with a bunch of much smaller ones.
“Hmm. I'd like to say well Duh, because there is never a replacement for common sense & knowing your subject. Anyone who thinks there is shouldn't be in a business. 'cept they are aren't they.”
The world moves ever faster towards replacing all wetware with hardware and software. No matter who you are, your job will eventually be done by some form of robot. Why should systems administration be immune? Just think of all the neat programs we use every day that once upon a time would have required a human being to do the work.
If you have the money for the software licences, you can run an IT department on a shockingly low number of people. When you start talking about managing a thousand or so servers, it becomes a real consideration, as the wetware overhead of doing that manually starts to edge higher than the cost of the management software.
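That crossover is just a break-even calculation. Every figure below is a hypothetical placeholder of my own choosing, purely to show the shape of the comparison: a flat licence fee only pays for itself once the fleet is large enough.

```python
# Back-of-envelope break-even between a management suite and manual
# administration. All figures are hypothetical placeholders, not
# real pricing for any product.

ADMIN_COST_PER_SERVER = 500.0   # annual wetware cost per manually managed box
SUITE_FLAT_FEE = 100_000.0      # annual base licence for the management suite
SUITE_COST_PER_SERVER = 150.0   # per-node licence on top of the base fee

def cheaper_option(servers: int) -> str:
    manual = servers * ADMIN_COST_PER_SERVER
    suite = SUITE_FLAT_FEE + servers * SUITE_COST_PER_SERVER
    return "suite" if suite < manual else "manual"
```

With these made-up numbers the suite only wins somewhere past a few hundred servers; below that, the flat fee dominates.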
For SMEs, wetware is simply the cheaper option.
interesting but -
> Without that single-thread performance, we are limited to only being able to grow workloads that can be parallelised
- would like to ask, what kind of stuff are you talking about that can't be paralised? In the real world I mean, I know the theoretical limits but I've rarely (probably never) butted up against them.
I've usually found that bottlenecked stuff is only thus constrained because the 'wetware' hasn't been applied to it. Do that and you can often get orders of magnitude improvements, no new machinery needed.
> there’s just no viable replacement for a well trained systems administrator who knows his network
Hmm. I'd like to say well Duh, because there is never a replacement for common sense & knowing your subject. Anyone who thinks there is shouldn't be in a business. 'cept they are aren't they.