
Network quality of service: making the switch

Trevor Pott scales up


By cloud computing standards, the networks I oversee are small. The largest network under my care has 250 physical servers; my 9-5 has fewer than 50. Most networks I oversee have fewer than five servers, and all of them have far fewer switches than systems.

Until very recently, local area network bandwidth has simply not been a problem. Commodity 1Gbit links interspersed with the odd 10Gbit port have served well.

This is changing. Moore's law remains intact; more VMs (virtual machines) fit into a single physical box than ever, while the network demands of individual VMs continue to expand. For most deployments, a full network upgrade would be too expensive. Instead, a proper load analysis and rebalancing of the VMs is called for. At such small scales, I can get away with this sort of maintenance by hand.
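
For what it's worth, the by-hand version of that analysis is not much more than summing per-VM traffic against each host's uplink and seeing which boxes are oversubscribed. A minimal sketch in Python, with invented VM names and throughput figures standing in for the hypervisor's real statistics:

# A minimal sketch of the by-hand load analysis described above.
# The VM-to-host placement and per-VM throughput figures (Mbit/s) are
# invented; in practice they would come from the hypervisor's own stats.

HOST_UPLINK_MBIT = 1000  # one commodity 1Gbit uplink per host

vm_load_mbit = {
    "mail01": 120, "web01": 340, "web02": 310, "db01": 450, "backup01": 200,
}
placement = {
    "host-a": ["web01", "web02", "db01"],
    "host-b": ["mail01", "backup01"],
}

for host, vms in placement.items():
    total = sum(vm_load_mbit[vm] for vm in vms)
    utilisation = total / HOST_UPLINK_MBIT
    flag = "rebalance" if utilisation > 0.8 else "ok"
    print(f"{host}: {total} Mbit/s ({utilisation:.0%}) - {flag}")

In this made-up example, host-a is oversubscribed and one of its heavier VMs wants moving to host-b. That is the whole job, and at a handful of servers it takes minutes.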

Scale up just a small fraction – such as recent mandates to ready several of my networks for hosting duties – and the ability to monitor and adjust to network conditions in real time becomes critical.

To even attempt to provide truly mission-critical computing, I need real switches. That means making the leap from D-Link and its ilk to top-class network kit from the likes of Brocade, Cisco, HP, or Juniper.

Environmentally aware

I have mostly been studying Brocade switches; they are tight with Dell, their gear is proven solid, and they have offerings in price ranges I can reasonably consider.

As with the other major players, Brocade offers switches with some level of environmental awareness. In Brocade’s case, the sauce is in its Datacenter Fabric Manager (DCFM). Its switches measure and monitor your network and can be configured to react accordingly.

Brocade even offers a VMware plug-in for DCFM, bringing critical aspects of network monitoring together in one place. This integration is an important step.

For data centres to become environmentally aware, their individual components must become aware.

Awareness starts with individual applications, but must include all layers of hardware, from servers and storage through to switches. The end goal is a single interface through which a systems administrator can see real-time network load, server utilisation, storage demand and even temperature and humidity conditions for the entire data centre.

Even my bottom-of-the-barrel switches can bond multiple gigabit links into a single trunk; what they can't do is tell me where this needs to occur. My switches cannot modify capacity automatically in response to changing network conditions.
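
To make "tell me where" concrete: at its simplest it means sampling port counters, working out per-port throughput and flagging any link running hot enough to justify another member in the trunk. A rough Python sketch, with made-up counter samples standing in for whatever the switch actually exposes (SNMP octet counters, typically):

# A rough sketch of spotting where link aggregation is needed.
# The port names and octet-counter samples are invented; real values
# would come from SNMP ifHCInOctets/ifHCOutOctets or the switch's API.

LINK_MBIT = 1000   # capacity of a single gigabit link
INTERVAL_S = 60    # seconds between the two counter samples

# (port, octets at t0, octets at t0 + INTERVAL_S)
samples = [
    ("gi0/1", 1_200_000_000, 8_400_000_000),
    ("gi0/2",   900_000_000, 1_300_000_000),
]

for port, before, after in samples:
    mbit_per_s = (after - before) * 8 / INTERVAL_S / 1_000_000
    if mbit_per_s > 0.8 * LINK_MBIT:
        print(f"{port}: {mbit_per_s:.0f} Mbit/s - candidate for an extra bonded link")
    else:
        print(f"{port}: {mbit_per_s:.0f} Mbit/s - fine as a single link")

A real switch with decent management software does exactly this, continuously, and acts on it; mine leave the arithmetic and the recabling to me.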

Half my switches don't even have spanning tree; dealing with network failures is a manual process. Here is where real switches shine. A move towards networks for grown-ups would mean switches with automated failover and provisioning. It would also mean real-time monitoring capabilities and alerts when things fail.

The law of big numbers

If self-healing network capabilities are a selling feature for an operation at the scale of 250 servers, they are essential at larger scales. Despite the common axiom, some press is indeed bad; notoriety for network failures is among the worst.

Data centre networking is a delicate balance. At hyperscale, over-provisioning can mean wasting millions. Under-provisioning means a breach of SLA (service level agreement) potentially also costing millions. Getting the most out of your networking is a big numbers game.
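
As a back-of-the-envelope illustration (every figure below is invented, purely to show the shape of the trade-off), the arithmetic cuts both ways:

# Back-of-the-envelope illustration of the over/under-provisioning trade-off.
# All figures are invented for the sake of the arithmetic.

ports_needed = 40_000              # ports the workload actually requires
cost_per_port = 400                # hypothetical all-in cost per port, in dollars
overprovision_factor = 1.5         # buying 50 per cent more "to be safe"
sla_penalty_per_incident = 2_000_000
incidents_if_underprovisioned = 3  # hypothetical congestion-related breaches per year

waste = ports_needed * (overprovision_factor - 1) * cost_per_port
exposure = incidents_if_underprovisioned * sla_penalty_per_incident

print(f"Over-provisioning waste: ${waste:,.0f}")
print(f"Under-provisioning SLA exposure: ${exposure:,.0f}")

Either way, the wrong answer runs to seven figures, which is why getting the most out of what you have already bought matters more than the sticker price of any one switch.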

Choosing a network equipment provider is a big decision; it requires intense research and preferably some hands-on testing. My own deeper dive into networking has left me convinced that the software side of the equation is the most important. Being the first kid on the block with the highest number of the fastest ports matters in some situations, but those situations are few.

What really matters are the environmental awareness features: top-notch management software paired with switches and HBAs that can sense the changing network environment. Integration with my hypervisors and MPIO-aware applications is equally important. Real-time knowledge of network conditions is the only way to maintain an SLA, especially on a budget. ®
