Data centre networks are getting flatter and fitter

Shed the layers

We have all come across the traditional corporate network with three distinct layers: the core layer dealing with heavy-duty switching and routing, which runs on socking big switches and routers; the distribution layer dealing with lighter (but still intelligent) tasks such as packet filtering and some routing; and the access layer, which does little more than provide a bunch of ports for your servers and other endpoint devices to connect to.

Why, then, are so many vendors coming into the networking arena telling us that we need to move away from this model and “flatten” our networks – that is, throw away some of the layers and do more with less?

Blade runner

Let's imagine for a moment that you are running the traditional three-layer model. Let's also imagine that you have adopted a strategy of virtualising your server infrastructure where possible (and if you haven't, why not?).

And let's also imagine that you care about how much you spend on your server hardware and hosting (and about the environment too). So instead of buying loads of individual servers you tend to use blade-based chassis products. This is a perfectly valid scenario, as I know from several installations I have seen.

The thing is, though, this is not a three-layer model, it is a five-layer one. With the server virtualisation you have actually inherited a load of network virtualisation as well.

The blade chassis most likely has its own built-in switch on the shared LAN connectivity modules, and on top of this you may well be using the networking virtualisation features of your chosen virtual machine platform (Hyper-V or ESX, generally).

These features are great when they can benefit you (for instance when two virtual servers on the same blade are in the same subnet – the traffic never even hits the wire in many cases).

But when you have two virtual servers in different chassis and on different subnets every packet between them is travelling through seven physical and two software forwarding stages.
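
To make that concrete, here is a rough tally in Python of one plausible path, assuming a virtual switch and a blade-chassis switch at each end plus the full access-distribution-core round trip in the middle (the exact decomposition will vary with your kit):

```python
# Rough tally of forwarding stages between two virtual servers in different
# chassis and different subnets, under the assumptions stated above.
path = [
    ("virtual switch on source hypervisor",       "software"),
    ("source blade chassis switch",               "physical"),
    ("access switch A",                           "physical"),
    ("distribution switch A",                     "physical"),
    ("core switch/router",                        "physical"),
    ("distribution switch B",                     "physical"),
    ("access switch B",                           "physical"),
    ("destination blade chassis switch",          "physical"),
    ("virtual switch on destination hypervisor",  "software"),
]

physical = sum(1 for _, kind in path if kind == "physical")
software = sum(1 for _, kind in path if kind == "software")
print(f"{physical} physical and {software} software forwarding stages")
# -> 7 physical and 2 software forwarding stages
```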

The sensible thing to do with a chassis-based server installation, then, is to plumb it into the core. Even a relatively low-level chassis with six or eight server blades can host a couple of hundred or more virtual machines, implying a serious bandwidth requirement and a desire for multiple physical LAN connections trunked into LACP bundles.

In the three-layer model the only place you can do this is in the core (you can't do LACP trunks if you are hanging the servers off separate access switches), so you want to hook your chassis directly into the core switches.
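
As a reminder of why a bundle buys you aggregate bandwidth, here is a minimal sketch of the flow-hashing that LACP-style aggregation typically relies on. The link names, four-link bundle and hash inputs are illustrative assumptions; real switches let you choose which header fields feed the hash, and a single flow always sticks to one member link:

```python
import hashlib

# Toy flow-to-link hashing across a hypothetical four-link LACP bundle.
MEMBER_LINKS = ["gi1/1", "gi1/2", "gi1/3", "gi1/4"]

def pick_link(src_mac, dst_mac, src_ip, dst_ip):
    # Hash the flow's addresses onto one member link; the same flow always
    # lands on the same link, so frames within it stay in order.
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return MEMBER_LINKS[int(hashlib.md5(key).hexdigest(), 16) % len(MEMBER_LINKS)]

print(pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", "10.0.1.10", "10.0.2.20"))
print(pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", "10.0.1.10", "10.0.3.30"))
```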

Now let's reflect for a moment on the functionality of the various layers. Back in the olden days, to guarantee any kind of decent throughput you had to split the routing and switching functions into separate devices (preferably doing as little routing as possible and employing hefty hardware to keep the speed up for the routing work you can't avoid).

You were not splitting the functions because you wanted to but because you had to, as the hardware wasn't up to the job of doing it all in one box.

Then along came the vendor boffins and turned this on its head. The likes of Ipsilon came up with funky flow-based approaches, for instance, where the first packet of a given data stream goes through the layer 3 routing engine but the rest are simply switched through the much faster layer 2 fabric.
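
A toy sketch of that idea follows, with made-up field names and a stub standing in for the real layer 3 lookup: the first packet of a flow pays the routing toll, and everything after it is forwarded from a cache.

```python
# Flow-based forwarding in the Ipsilon mould: route the first packet of a
# flow, cache the decision, and switch the rest of the flow from the cache.
flow_cache = {}

def route_slowly(packet):
    # Stand-in for a full layer 3 lookup (longest-prefix match and so on).
    return "port-towards-" + packet["dst_ip"]

def forward(packet):
    flow = (packet["src_ip"], packet["dst_ip"], packet["src_port"], packet["dst_port"])
    if flow not in flow_cache:
        flow_cache[flow] = route_slowly(packet)   # first packet: slow routing path
    return flow_cache[flow]                       # later packets: fast switched path

pkt = {"src_ip": "10.0.1.10", "dst_ip": "10.0.2.20", "src_port": 5000, "dst_port": 80}
print(forward(pkt))  # populates the cache via the routing engine
print(forward(pkt))  # hits the cached entry, no routing involved
```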

Pass the port

And then Rapid City and 3Com (with the F1200 and the CoreBuilder 3500 respectively) took the rather less cunning, but no less impressive, approach of building a switch that did layer 3 (specifically IP) routing in hardware, at wire speed. All of a sudden there was no need to spread the load over two devices, yet we carried on structuring our networks in layers nonetheless.

Leaving aside the cleverness of virtual switching and such funky new technologies, there is another aspect of virtualisation which has caused the layer requirements to diminish: the simple concept of port count. When every server was physical you needed at least three, probably five, ports per server (two or four for LAN connections, plus one for the lights-out management port).

Virtualise 10 physical servers and plonk them on a single ESX/Hyper-V host – without even considering chassis server products – and even if you upped the port count for bandwidth reasons you could reduce your requirement from 50 ports to perhaps a dozen.
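
The back-of-the-envelope sums, with the dozen ports on the consolidated host being an assumption rather than a rule:

```python
# Port count before and after consolidating ten physical servers onto one host.
physical_servers = 10
ports_per_physical_server = 5   # four LAN ports plus one lights-out management port
ports_before = physical_servers * ports_per_physical_server

ports_after = 12                # a beefed-up single ESX/Hyper-V host (assumed)

saving = ports_before - ports_after
print(f"before: {ports_before} ports, after: {ports_after} ports, "
      f"saving: {saving} ports ({100 * saving / ports_before:.0f}%)")
# -> before: 50 ports, after: 12 ports, saving: 38 ports (76%)
```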

Virtualise more servers, lose even more ports. Previously, access switches at the edge were the sensible, economical way to provide all these connections; with virtual servers they are no longer needed.

The final point of note is that the network hardware itself has simply become fast enough that you can worry less about needing to spread the load. I remember bemoaning Cisco's Catalyst 5000 back in its day, for instance: it couldn't even come close to running at full speed when filled with 10/100Mbps blades.

No need for speed

Contrast this with today's Catalyst 6500: the Supervisor Engine 720 – the card that does the work – is named after the aggregate 720Gbps throughput it can handle in a fully stacked chassis, with each slot of the chassis running at up to 40Gbps.

Now although this isn't quite wire-speed (it won't quite keep up with a maxed-out 48-port 10/100/1000Mbps card, for instance, or an eight-port 10Gbps card), it is as good as you will need in all but the most extravagant implementations.
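
The arithmetic behind that caveat, taking the per-slot figure at face value and ignoring any local switching the line cards might do themselves:

```python
# Per-slot demand versus the Supervisor Engine 720's 40Gbps per slot.
slot_capacity_gbps = 40

cards = {
    "48 x 10/100/1000Mbps": 48 * 1,
    "8 x 10Gbps":           8 * 10,
}

for name, demand_gbps in cards.items():
    verdict = "fits" if demand_gbps <= slot_capacity_gbps else "oversubscribed"
    print(f"{name}: {demand_gbps}Gbps against {slot_capacity_gbps}Gbps per slot -> {verdict}")
# Both cards want more than 40Gbps, hence "not quite wire-speed".
```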

The data centre network no longer makes sense in the traditional three-layer format, then. The fact is that you will probably still have three layers (and hence the requisite number of hops) between server A and server B, but only one of these is likely to be physical LAN infrastructure. The rest is within the server hardware and the virtualisation software layer.

Preserve three layers of network hardware and you will find yourself spending more than you need to. You will also slow things down by increasing hop counts and detracting from clever forwarding technology that works brilliantly – but only if you have everything in one layer. ®
