Physical vs virtual: What's your poison?
Power management and VDI
Worse yet is load balancing your virtual machines across your hosts. The major virtualization players offer neat software that can do this automatically, but I might as well ask the magic budget fairy for a Toughbook: it isn’t going to happen, and so I load-balance my VMs by hand. This creates interesting conflicts when weighing load balancing against power management and even critical VM distribution.
As much as I want to power down all non-essential systems when they're not in use, I also don’t want a single hardware failure taking out all of the production VMs responsible for the manufacturing equipment in one go. I must also ensure that critical VMs live on hosts with full lights-out management (LOM) capabilities, in case a host develops a problem and needs to be repaired remotely. As not all of my servers have full LOM capabilities, this means being choosy about which hosts those critical VMs live on.
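The hand-balancing act described above boils down to a constrained placement problem. Here is a minimal sketch of one way to model it — a greedy pass that spreads critical VMs across LOM-capable hosts and never doubles them up. All of the host names, VM names, capacities and flags are invented for illustration; this is not any vendor's scheduler, just the shape of the decision being made by hand.

```python
# Hypothetical sketch of manual VM placement under two constraints:
#  1. critical VMs must land on LOM-capable hosts
#  2. no two critical VMs share a host (limits the blast radius of one failure)
# All names and capacities below are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity_gb: int            # RAM available for guests
    has_lom: bool
    guests: list = field(default_factory=list)

    def used_gb(self):
        return sum(vm.ram_gb for vm in self.guests)

@dataclass
class VM:
    name: str
    ram_gb: int
    critical: bool = False

def place(vms, hosts):
    """Greedy placement: biggest VMs first, least-loaded eligible host wins."""
    for vm in sorted(vms, key=lambda v: v.ram_gb, reverse=True):
        eligible = [
            h for h in hosts
            if h.used_gb() + vm.ram_gb <= h.capacity_gb
            and (not vm.critical
                 or (h.has_lom and not any(g.critical for g in h.guests)))
        ]
        if not eligible:
            raise RuntimeError(f"no suitable host for {vm.name}")
        target = min(eligible, key=Host.used_gb)
        target.guests.append(vm)
    return hosts

# Example run: two critical production VMs end up on separate LOM-capable
# hosts, while the non-critical VM can go anywhere with room.
hosts = place(
    [VM("prod-line1", 16, critical=True),
     VM("prod-line2", 16, critical=True),
     VM("office-app", 8)],
    [Host("hv1", 64, True), Host("hv2", 64, True), Host("hv3", 32, False)],
)
```

Automated tools solve roughly this problem continuously and with far better inputs; doing it by hand just means re-running this kind of reasoning every time the VM population changes.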
Virtualization has its power management bonuses, too. Even with the servers running 24/7, I am consuming less electricity overall than if all of those VMs were physical desktops or blade servers. With everything required for after-hours work confined to the datacenter, I can actually shut off entire segments of the network at night: switches, phones, desktops, monitors, printers and all other forms of electronic gadgetry.
Still, it is interesting how much virtualization can complicate the life of a sysadmin. The “eggs in one basket” syndrome common with VDI has power management implications of its very own. Intel would love to come along and tell me that with their ridiculous new shiny servers, I could collapse thirty-two virtual hosts into six. They’d even be right; I’ve run the numbers, and right now I can run my entire network on six ridiculous servers. Eighteen months from now I could run it on three.
If I did that, however, I’d be sitting there praying every night that those three servers don’t blow a stick of RAM or lose a CPU fan, or that rodents of unusual size don’t have a gnaw on the Cat6. For this reason, I feel I am actually better off with my older servers; there is a “sweet spot” past which a host simply has too many guests for comfort.
These problems laid bare, my next article will focus on what I’ve done to overcome them. Some approaches are technological, while others are matters of policy and procedure. I don’t have access to the really awesome tools that make virtualization shine, so it will be an investigation into VDI power management with nothing but the bare basics to help you. ®