Virtualisation extremist? Put down that cable and step away slowly
You've got hardware for a reason: use it
Virtualisation is everywhere, particularly in the data centre, and that's a good thing – if used wisely. Virtualisation can help you milk the greatest possible performance (and hence maximum value) from the physical gear.
Running multiple virtual servers on top of a physical server platform allows you to minimise wasted CPU and RAM capacity. If you're so inclined you can even take it to extremes, so that in the event of a physical host failure your virtual servers keep on humming by virtue of real-time replication onto other physical hosts.
Similarly, virtual networks: how many of us don't use VLANs (virtual local area networks) to separate traffic within the same physical infrastructure? When VLANs became widely available, allowing switches rather than routers to define the broadcast domain, it seemed like some kind of technology voodoo, but now most of us are at it.
Storage virtualisation follows not far behind: hey, let's put all our storage on a high-speed network and thin-provision it so that we don't have terabytes of unused space sitting on internal disks in servers … oh, and if we don't want Fibre Channel let's just use iSCSI over the normal LAN instead.
And if you go with a cloud automation provider such as SolidFire, even a relative storage idiot like me can get the most out of it as all the complexity is hidden behind a simple GUI.
All of the above is great. Although historically a network guy, I'm now responsible for the servers and storage in my company, and the virtualisation aspects available to me make my life an order of magnitude easier than it was 10 or 15 years ago.
Want to run up a new application that'll be installed and managed by a third party? Easy: provision a new LUN (logical unit number), which will carve out a piece of the storage array for your app, define a new Virtual Data Centre on the ESX platform, drop a new VLAN on the infrastructure so the traffic can get securely from the internet to that new application, and we're off.
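Those provisioning steps can be sketched as a simple orchestration script. Everything below is a hypothetical stand-in – in a real environment each function would call the storage array's API, vCenter/ESX, and the switch configuration respectively; the names, sizes and IDs are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch of the three provisioning steps: LUN, Virtual Data
# Centre, VLAN. All function names and values are illustrative stubs.

def provision_lun(array: str, size_gb: int) -> dict:
    """Carve a piece out of the storage array for the app (stub)."""
    return {"array": array, "size_gb": size_gb, "lun_id": 42}

def create_virtual_datacentre(name: str, lun: dict) -> dict:
    """Define a new Virtual Data Centre on the ESX platform (stub)."""
    return {"vdc": name, "datastore": f"lun-{lun['lun_id']}"}

def create_vlan(vlan_id: int, description: str) -> dict:
    """Drop a new VLAN on the infrastructure for the app's traffic (stub)."""
    return {"vlan_id": vlan_id, "description": description}

def provision_app(app_name: str) -> dict:
    """Run the three steps in order and hand back the resulting records."""
    lun = provision_lun("array-01", size_gb=500)
    vdc = create_virtual_datacentre(f"vdc-{app_name}", lun)
    vlan = create_vlan(210, f"{app_name} internet-facing traffic")
    return {"app": app_name, "lun": lun, "vdc": vdc, "vlan": vlan}
```

The point of wrapping it this way is that the sequencing matters: the datastore has to exist before the Virtual Data Centre can sit on it, and the VLAN is what lets the traffic reach the app securely once it's running.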
There's a catch, though: you can overdo virtualisation and make your life worse than it used to be.
A cautionary tale
Let's take virtualised networking as an example of a modern virtualisation concept.
The likes of Nicira (now part of VMware) and Microsoft have virtualised networking offerings, both of which let you do some really cool, useful stuff such as presenting the same IP subnet across multiple networks (generally multiple data centres).
Why is this useful? Easy: if you want high-availability virtual servers that can migrate between data centres in the case of a failure, you don't want to have to change their IP addresses when moving them.
All well and good, but the temptation is there (and people succumb to it) to say: “If I can virtualise my network and control everything from the virtual server GUI instead of at the switches, my management task is easier”.
Yes it is, but think about it: switch and router providers have spent billions of pounds/dollars developing high-speed switching architectures, using custom-built ASIC technology and such like.
Are you really going to get the performance you expect by pulling the network function off the custom network hardware and running it on a bunch of general-purpose servers?
Even age-old technology (relatively speaking) such as VMware and Hyper-V has its downsides – primarily that the virtualisation technology constantly sprints ahead of management and monitoring technology.
Not so long ago I had a demonstration from Enterasys of its network management package, and it was impressively VMware-aware – in the sense that by hooking into the hypervisor it was able to show me useful stuff like which VMs were connected to which switch ports (and to update nearly instantly when a VM moved to another host).
Sadly the average network diagnostic tool doesn't do this, and so you find yourself wandering around in the vSphere client tracing VMs through hosts, vSwitches and port groups to discover where they're currently sitting so you can try to figure out why their network connections are slow.
And one more example, just for fun: take the company whose server kit became end-of-life, and which decided to adopt a spanking new chassis-based blade platform for its servers. Six blades in a chassis, 40-plus virtual machines per blade and, er, a pair of 4Gbit/s uplinks (each four 1Gbit/s links in an LACP bundle, if you're wondering) to the network in active/passive failover mode.
Two-hundred-and-forty machines on a 4Gbit/s uplink? Nah, you don't want to go there, particularly if the load-balancing algorithm is imperfect (which they all are).
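A back-of-the-envelope check makes the point. The hash function below (last octet of each address, modulo the number of member links) is a deliberately simplified illustration of how LACP pins each flow to a single member link – real switches hash on MACs, IPs and/or ports, but the failure mode is the same: busy flows can all land on the same 1Gbit/s link.

```python
# 6 blades x 40 VMs behind one active 4Gbit/s LACP bundle: the average
# per-VM share, plus an illustration of imperfect flow-based balancing.

vms = 6 * 40                      # 240 virtual machines
uplink_mbps = 4 * 1000            # four 1Gbit/s links; active/passive, so one bundle carries it all
per_vm_mbps = uplink_mbps / vms   # ~16.7 Mbit/s each, even if spread perfectly
print(f"{vms} VMs -> {per_vm_mbps:.1f} Mbit/s per VM on average")

def member_link(src: str, dst: str, links: int = 4) -> int:
    """Toy LACP-style hash: pin a flow to one member link by its addresses."""
    return (int(src.split(".")[-1]) + int(dst.split(".")[-1])) % links

# Four flows to the same destination whose source addresses happen to
# hash alike: every one of them lands on the same 1Gbit/s member link.
flows = [("10.0.0.1", "10.0.1.5"), ("10.0.0.5", "10.0.1.5"),
         ("10.0.0.9", "10.0.1.5"), ("10.0.0.13", "10.0.1.5")]
for src, dst in flows:
    print(f"{src} -> {dst}: member link {member_link(src, dst)}")
```

So the 16.7 Mbit/s average is the *best* case; with an unlucky hash, a handful of heavy flows share one 1Gbit/s link while the other three sit idle.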
So, then: should you avoid virtualisation like the plague? No, of course not. Buy it, learn how to use it correctly, use it and love it. Make the most of all of the good features, but don't forget that your physical infrastructure probably has some good features too.
By all means use network virtualisation where it gives tangible value, but similarly why not let your switches do some switching, and your routers do some routing?
Not all of it, granted (hey, if a VM is talking to another VM on the same host, why put the packets on the LAN at all?), but definitely some of it.
And by all means thin-provision the storage on your VMs, but make sure you monitor it closely because if you don't, you'll get bitten when the back end runs out of space.
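The monitoring that paragraph argues for amounts to one comparison: what you've promised to the VMs versus what the back end can physically supply. A minimal sketch, with illustrative numbers and an assumed 80 per cent alert threshold:

```python
# Thin-provisioning sanity check: compare space promised to VMs against
# real capacity, and shout before the array actually fills up.
# All figures and the 80% threshold are illustrative assumptions.

physical_tb = 50.0                        # real capacity on the array
luns_allocated_tb = [20.0, 30.0, 25.0]    # thin-provisioned promises to VMs
used_tb = 42.0                            # blocks actually written so far

overcommit = sum(luns_allocated_tb) / physical_tb   # 1.5x promised vs real
utilisation = used_tb / physical_tb                 # 84% of real space gone

ALERT_THRESHOLD = 0.80
if utilisation >= ALERT_THRESHOLD:
    print(f"WARNING: {utilisation:.0%} of physical capacity used "
          f"(overcommitted {overcommit:.1f}x) -- add disk before the VMs notice")
```

The overcommit ratio is the whole point of thin provisioning – and exactly why the utilisation figure needs watching, because the VMs think they have 75TB to grow into when only 8TB actually remains.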
And while you're at it, let's have a decently provisioned iSCSI VLAN running on 10Gbit/s Ethernet instead of letting it share the LAN with all that other rubbish the servers are throwing around.
Above all, if you're letting your virtual world do things automatically (high availability, for instance, or anything that lets it do stuff without you noticing), implement it robustly and document it to death so that when you're not there, the poor sod who's tasked with fixing a system-down problem has a fighting chance of doing so. ®
Dave Cartwright is a senior network and telecoms specialist who has spent 20 years working in academia, defence, publishing and intellectual property. He is the founding and technical editor of Network Week and Techworld and his specialities include design, construction and management of global telecoms networks, infrastructure and software architecture, development and testing, database design, implementation and optimisation. Dave and his family live in St Helier on the island paradise of Jersey.