Network quality of service: making the switch
Trevor Pott scales up
By cloud computing standards, the networks I oversee are small. The largest network under my care has 250 physical servers; my 9-5 has fewer than 50. Most networks I oversee have fewer than five servers, and all of them have far fewer switches than systems.
Until very recently, local area network bandwidth has simply not been a problem. Commodity 1Gbit links interspersed with the odd 10Gbit port have served well.
This is changing. Moore's law remains intact; more VMs (virtual machines) fit into a single physical box than ever, while the network demands of individual VMs continue to expand. For most deployments, a full network upgrade would be too expensive. Instead, a proper load analysis and rebalancing of the VMs is called for. At such small scales, I can get away with this sort of maintenance by hand.
Scale up just a small fraction – such as recent mandates to ready several of my networks for hosting duties – and the ability to monitor and adjust to network conditions in real time becomes critical.
To provide truly mission-critical computing, I need real switches. That means making the leap from D-Link and its ilk to top-class network kit from the likes of Brocade, Cisco, HP, or Juniper.
I have mostly been studying Brocade switches; they are tight with Dell, their gear is proven solid, and they have offerings in price ranges I can reasonably consider.
As with the other major players, Brocade offers switches with some level of environmental awareness. In Brocade's case, the secret sauce is its Data Center Fabric Manager (DCFM). Its switches measure and monitor your network and can be configured to react accordingly.
Brocade even offers a VMware plug-in for DCFM, bringing critical aspects of network monitoring together in one place. This integration is an important step.
For data centres to become environmentally aware, their individual components must become aware.
Awareness starts with individual applications, but must include all layers of hardware from servers and storage through to switches. The end goal is ultimately a single interface through which a systems administrator can see real-time network load, server utilisation, storage demand and even temperature and humidity conditions for the entire data centre.
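To make that "single interface" idea concrete, here is a minimal sketch of the aggregation step: collapsing per-device readings (network, server, storage, environment) into one view and flagging anything over its threshold. The sources, names, and limits are all hypothetical; a real deployment would pull these numbers from SNMP counters, hypervisor APIs, and environmental sensors.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    source: str   # hypothetical, e.g. "switch-1/port-24", "rack-3/temp-C"
    kind: str     # "network", "server", "storage", or "environment"
    value: float  # current reading (utilisation % or sensor value)
    limit: float  # alert threshold for this reading

def dashboard(metrics):
    """Collapse per-device readings into one view, grouped by kind,
    marking each reading that exceeds its threshold."""
    view = {}
    for m in metrics:
        view.setdefault(m.kind, []).append(
            (m.source, m.value, m.value > m.limit))
    return view

# Illustrative sample data only -- not real readings.
samples = [
    Metric("switch-1/port-24", "network", 92.0, 80.0),
    Metric("esx-host-2/cpu",   "server",  41.0, 85.0),
    Metric("san-1/iops",       "storage", 63.0, 90.0),
    Metric("rack-3/temp-C",    "environment", 27.5, 30.0),
]

view = dashboard(samples)
alerts = [(src, v) for rows in view.values()
          for src, v, hot in rows if hot]
# Only the overloaded switch port trips an alert here.
```

The point of the sketch is the shape of the problem, not the plumbing: once every layer reports into one structure, the admin sees a hot switch port next to a cool CPU and knows the fix is rebalancing traffic, not buying servers.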
Even my bottom-barrel switches can bond multiple gigabit links into a single trunk; what they can't do is tell me where this needs to occur. My switches cannot modify capacity automatically in response to changing network conditions.
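The "where" question is just capacity arithmetic a smarter switch could do for me. A toy sketch, assuming gigabit members and a hypothetical 70 per cent headroom ceiling (real link aggregation via LACP also has to worry about per-flow hashing, which this ignores):

```python
import math

def links_needed(peak_mbps, link_mbps=1000, headroom=0.7):
    """How many bonded links keep the observed peak load under the
    headroom ceiling? Pure capacity arithmetic, not a protocol."""
    usable = link_mbps * headroom  # don't run links flat out
    return max(1, math.ceil(peak_mbps / usable))

# A port peaking at 1.8Gbit/s wants a three-member gigabit trunk;
# one peaking at 500Mbit/s is fine on a single link.
```

An environment-aware switch would run this kind of calculation continuously against live counters and resize trunks itself; mine make me do the arithmetic and the re-cabling by hand.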
Half my switches don't even have spanning tree; dealing with network failures is a manual process. Here is where real switches shine. A move towards networks for grown-ups would mean switches with automated failover and provisioning. It would also mean real-time monitoring capabilities and alerts when things fail.
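What automated failover amounts to, stripped to its core, is picking the cheapest healthy path and moving on when one dies. A toy illustration only: real spanning tree is a layer-2 loop-prevention protocol with timers and port states, none of which appears here. The path tuples are invented for the example.

```python
def pick_path(paths):
    """Return the name of the lowest-cost healthy path.
    paths: list of (name, is_up, cost) tuples (hypothetical format)."""
    live = [p for p in paths if p[1]]
    if not live:
        raise RuntimeError("no healthy path to destination")
    return min(live, key=lambda p: p[2])[0]

# With the primary link down, traffic shifts to the pricier backup --
# the decision a real switch makes without waking the admin.
```

Switches without this logic turn every failed uplink into a phone call; switches with it turn it into a log entry.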
The law of big numbers
If self-healing network capabilities are a selling feature for an operation at the scale of 250 servers, they are essential at larger scales. Despite the common axiom, some press is indeed bad; notoriety for network failures is among the worst.
Data centre networking is a delicate balance. At hyperscale, over-provisioning can mean wasting millions. Under-provisioning means a breach of SLA (service level agreement) potentially also costing millions. Getting the most out of your networking is a big numbers game.
Choosing a network equipment provider is a big decision; it requires intense research and preferably some hands-on testing. My own deeper dive into networking has left me convinced that the software side of the equation is the most important. Being the first kid on the block with the highest number of the fastest ports matters in some situations, but those situations are few.
What really matters are the environmental awareness features; top notch management software paired with switches and HBAs that can sense the changing network environment. Integration with my hypervisors and MPIO-aware applications is equally important. Real-time knowledge of network conditions is the only possible way to maintain an SLA, especially on a budget. ®