Virtualisation: just a lot of extra software licences?
Counting the cost
A colleague of mine recently remarked that x86 virtualisation makes no sense to any organisation that is cost conscious.
I am an early adopter of virtualisation and wanted to know what he meant.
“When using virtualisation, you are paying for far more software licences than you would if you were to take the time to implement everything on one physical box,” he explained.
I beg to differ. Virtualisation’s greatest cost advantage is that it has allowed us to containerise applications without having to buy dedicated hardware.
Putting the reboot in
There are a number of updates that take down adjacent services during install (for example, Exchange and IIS), and others that require the reboot of the entire system. This is less of an issue with open-source systems, but even here it is a factor.
Industry-specific applications are touchy beasts. They have exacting requirements and are often so poorly written that they crash regularly. Sometimes they crash so spectacularly that they take the operating system with them.
This requires a reboot of the underlying server to get the application back up – a problem if several other critical applications happen to live on the same box.
Malware is another threat. I recently had an issue with a Linux server operating as both an application server and a file server. A nasty little bit of malware was uploaded to the system via FTP and a privilege escalation bug in one of the applications was used to mark it executable.
The system was rooted. Had there been any sensitive information available to that system, very bad things could have occurred.
Windows is even more vulnerable.
Memory leaks and caching algorithm glitches cause their own problems. A single application with a memory leak can starve everything else on a system. A bad caching algorithm – think early Vista – can thrash the disk and significantly impair performance for everything on the system.
Some applications simply have different requirements. I have an old bit of software that requires the Microsoft JVM, and four other software applications all pinned to specific versions of Sun and Oracle’s JVMs.
They are completely incompatible and require separate execution environments. Patching Java in this scenario is complicated.
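The JVM tangle above is exactly the sort of thing one-app-per-VM untangles: each guest pins its own runtime and never fights its neighbours. A minimal sketch of the idea — the application names and install paths below are purely illustrative, not from any real inventory:

```python
# Hypothetical per-guest JVM pinning. With one app per VM, each guest
# exports its own JAVA_HOME at boot; patching Java touches one guest,
# one runtime, one application at a time.

PINNED_JVMS = {
    "legacy-app": "C:/Program Files/Microsoft Java VM",  # the old MS JVM
    "billing":    "C:/Program Files/Java/jre1.5.0_22",   # pinned Sun JRE
    "reporting":  "C:/Program Files/Java/jre6u45",
    "crm":        "C:/Program Files/Java/jre7u80",
}

def java_home_for(app: str) -> str:
    """Return the JAVA_HOME a given guest should export at boot."""
    try:
        return PINNED_JVMS[app]
    except KeyError:
        raise ValueError(f"no pinned JVM recorded for {app!r}")

# Patching becomes a per-guest decision rather than a four-way brawl:
for app, path in sorted(PINNED_JVMS.items()):
    print(f"{app}: JAVA_HOME={path}")
```

On a single shared box the same table would be a lie: only one of those runtimes can own the system-wide Java installation at a time.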
Direct application incompatibilities are not the only issue. In my environment, Exchange requires a UC certificate to work properly whereas other applications I run have differing certificate requirements. Some applications won’t run with UAC turned on; others balk at having it turned off and refuse to execute for security reasons.
The total cost of my installed base of server operating systems is about $100,000. If I compressed this into a single master/slave cluster per site with no virtualisation I could cut that cost to just $10,000. On paper, that saves the company $90,000.
Virtual sprawl isn’t all bad
The reality, however, is that it would probably take me the better part of two years to figure out how to do it all. Factor in the cost of downtime, then add tech time required to thoroughly test and re-certify the single-box environment for every single application update.
Suddenly $90,000 seems like a bargain.
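The back-of-envelope sums can be laid out explicitly. The $100,000 and $10,000 licence figures come from the article; every other number here (labour rate, downtime cost, re-certification cost) is an illustrative assumption, plugged in only to show which way the arithmetic leans:

```python
# Licence-vs-labour trade-off. Only the first two figures are from the
# article; the rest are assumed for illustration.

licence_cost_virtualised = 100_000   # current installed base (from article)
licence_cost_single_box  = 10_000    # hypothetical master/slave cluster
paper_saving = licence_cost_virtualised - licence_cost_single_box

# Assumed costs of actually getting to the single box:
migration_hours   = 2 * 50 * 40 * 0.5  # ~2 years at half an engineer's time
hourly_rate       = 60                 # assumed loaded tech rate, $/hour
downtime_cost     = 50_000             # assumed business cost of outages
recert_per_update = 500                # assumed re-test cost per app update
updates_per_year  = 100

labour_cost   = migration_hours * hourly_rate
annual_recert = recert_per_update * updates_per_year
net = paper_saving - labour_cost - downtime_cost - annual_recert

print(f"paper saving:        ${paper_saving:,}")
print(f"migration labour:    ${labour_cost:,.0f}")
print(f"downtime:            ${downtime_cost:,}")
print(f"yearly re-cert bill: ${annual_recert:,}")
print(f"single-box 'saving' after year one: ${net:,.0f}")
```

Under these assumed figures the $90,000 "saving" goes negative before the first year is out — which is the article's point.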
x86 virtualisation has given rise to virtual sprawl, but virtual sprawl isn’t all bad. A single application running on a single operating system is far more manageable than an environment in which all things must exist on a single box.
Specific licensing schemes related to virtualisation may be atrocious.
Still, most implementations of the technology are only more expensive than the single-box approach if both system downtime and IT staff time have no value. ®
"x86 virtualisation" does the same sort of thing that user separation provided on, well, almost anything but the first and rather trivial time sharing systems all those many years ago. Or what mainframes offer. Or what chroots or jails or zones or containers or a host of other features found in many systems offer. But hey, now windows can benefit from it too, several decades after the rest figured it out, through (third-party) add-ons. Isn't that nice?
It is indeed of particular import in the "windows" racket because that thing just doesn't do very well as anything but eyecandy. And I'd say not even there, but let's not shed bikes. Other systems fail too, even if they do fail far, far less and less egregiously than this. Personally I would indeed say that if software falls over, as it does, and it takes down the entire system with it, then that system isn't very good at all.
Yet if Trevor can't swap it out for something better --and sysadmins are frequently in this position even if they can do and do know better-- then this virtualisation thing comes in mighty handy for containing the damage while reducing the scrap heap of under- and misused-by-the-system hardware a bit. This is Trevor describing what it does for him in his shop; he thinks it's serving him well. More power to him.
Is it me being thick
Is it me being thick, or do all of your "pros" read like a list of Windows-specific problems?
Patching and rebooting
Patching is a centralised operation. It's significantly easier in a virtualised environment because I have one application per operating system. I have to test whether that patch on that OS affects that application. I have no weird interactions between all these different applications to debug. One app per OS. Test and release centrally. That part is easy as pie.
Now, rebooting is again made easier by "one app per OS." Rebooting the OS reboots the infrastructure under ONE application. Just one! I don't tank the whole business with a single reboot, and I don't have to schedule reboots around 15 different departments. I call up the people who use the application in question and go "hey guys, I need to reboot the server for updates, mind if I do that tonight at 7:00pm?"
I get a yay/nay and move forward.
I can schedule and co-ordinate each application independently of the others, and that is a bloody GODSEND. You see, I work in a business where IT doesn't have the almighty word of God. We don't dictate when computers will be available. We work with the affected business units to ensure the best possible quality of service with the fewest possible interruptions.
That means worrying about things like downtime. It also means bearing in mind the real world, where we have telecommuting workers in the systems 24/7.
I cannot even conceive of what it would take to coordinate a shutdown of the entire corporate infrastructure at any of the companies I oversee. A miracle, perhaps. Or six months' worth of proactive planning.
Virtualised and containerised environments make patching/rebooting EASIER. Yes there are more widgets to reboot, but you can do it without nearly as much angst or worry.
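The per-department scheduling described above can be sketched very simply: give each guest its own agreed maintenance window and reboots stop being a business-wide negotiation. The application names and windows below are hypothetical:

```python
# One maintenance window per guest, agreed with the department that
# actually uses the application. Rebooting any one guest never requires
# sign-off from anyone else.

from datetime import time

MAINTENANCE_WINDOWS = {
    "exchange":   (time(19, 0), time(21, 0)),   # agreed with the mail users
    "erp":        (time(23, 0), time(23, 59)),  # finance wants late nights
    "fileserver": (time(6, 0),  time(7, 0)),    # before the early shift
}

def reboots_allowed_at(when: time) -> list[str]:
    """Guests whose agreed window covers the given time of day."""
    return sorted(
        app for app, (start, end) in MAINTENANCE_WINDOWS.items()
        if start <= when <= end
    )

# At 7:30pm only the mail server is fair game; everything else keeps running.
print(reboots_allowed_at(time(19, 30)))   # -> ['exchange']
```

On a single shared box there is only one window, and it has to satisfy every department at once.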
As to tracking and monitoring and securing a fleet of Windows servers, have you tried combinations of some or all of the following:
Windows Server Update Services
Microsoft System Center Suite
++squillions of others
If managing a fleet of servers - physical, virtual or otherwise - well enough to know "are they up, are they patched, are they infected?" is a difficult chore for you, then you are doing it wrong. It's easy to do... and there are programs that let you do it for free.
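Even without the tooling listed above, the "are they up?" half is a few lines of script. A minimal sketch, assuming only that each server exposes a known TCP port (RDP, SSH, whatever); the hostnames are placeholders, and a real shop would feed the list from WSUS, System Center or an inventory database:

```python
# Bare-bones fleet reachability check over TCP.

import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

def fleet_report(fleet: dict[str, int]) -> str:
    """One line per server: UP or DOWN."""
    lines = []
    for host, port in sorted(fleet.items()):
        state = "UP" if is_reachable(host, port) else "DOWN"
        lines.append(f"{host}:{port} {state}")
    return "\n".join(lines)

# Example: probe localhost on a port nothing is normally listening on.
print(fleet_report({"127.0.0.1": 9}))
```

Patch and infection status need the heavier tooling, but "are they up" really is this cheap.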
Managing computers is EASY. Managing people (and budgets) is hard.