Data centre networks are getting flatter and fitter

Shed the layers

We have all come across the traditional corporate network with three distinct layers: the core layer dealing with heavy-duty switching and routing, which runs on socking big switches and routers; the distribution layer dealing with lighter (but still intelligent) tasks such as packet filtering and some routing; and the access layer, which does little more than provide a bunch of ports for your servers and other endpoint devices to connect to.

Why, then, are so many vendors coming into the networking arena telling us that we need to move away from this model and “flatten” our networks – that is, throw away some of the layers and do more with less?

Blade runner

Let's imagine for a moment that you are running the traditional three-layer model. Let's also imagine that you have adopted a strategy of virtualising your server infrastructure where possible (and if you haven't, why not?).

And let's also imagine that you care about how much you spend on your server hardware and hosting (and about the environment too). So instead of buying loads of individual servers you tend to use blade-based chassis products. This is a perfectly valid scenario, as I know from several such installations I have seen.

The thing is, though, this is not a three-layer model, it is a five-layer one. With the server virtualisation you have actually inherited a load of network virtualisation as well.

The blade chassis most likely has its own built-in switch on the shared LAN connectivity modules, and on top of this you may well be using the networking virtualisation features of your chosen virtual machine platform (Hyper-V or ESX, generally).

These features are great when they can benefit you (for instance when two virtual servers on the same blade are in the same subnet – the traffic never even hits the wire in many cases).

But when you have two virtual servers in different chassis and on different subnets every packet between them is travelling through seven physical and two software forwarding stages.
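To put numbers on that path, here is a quick Python sketch. The stage names are my own guess at a typical blade-plus-three-layer build rather than any particular vendor's kit, but the totals match the seven-plus-two count above:

```python
# Illustrative only: count the forwarding stages a packet crosses in a
# blade-chassis deployment built on the traditional three-layer model.
# The stage names are assumptions about a typical build.

# Best case: two VMs on the same host and the same subnet. The
# hypervisor's virtual switch delivers the frame locally, so the
# traffic never hits the wire.
same_host_same_subnet = ["vswitch (shared host)"]

# Worst case: two VMs in different chassis and on different subnets.
cross_chassis_cross_subnet = [
    "vswitch (source host)",        # software
    "chassis switch A",             # physical
    "access switch A",              # physical
    "distribution switch A",        # physical
    "core router",                  # physical
    "distribution switch B",        # physical
    "access switch B",              # physical
    "chassis switch B",             # physical
    "vswitch (destination host)",   # software
]

software = sum("vswitch" in stage for stage in cross_chassis_cross_subnet)
physical = len(cross_chassis_cross_subnet) - software
print(f"best case:  {len(same_host_same_subnet)} software stage, 0 physical")
print(f"worst case: {physical} physical + {software} software stages")
```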

The sensible thing to do with a chassis-based server installation, then, is to plumb it into the core. Even a relatively low-level chassis with six or eight server blades can host a couple of hundred or more virtual machines, implying a serious bandwidth requirement and a desire for multiple physical LAN connections trunked into LACP bundles.

In the three-layer model the only place you can do this is in the core (you can't do LACP trunks if you are hanging the servers off separate access switches), so you want to hook your chassis directly into the core switches.
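As a back-of-the-envelope illustration of how quickly the bandwidth adds up, consider the sketch below. Every input figure (VMs per blade, traffic per VM, 10GbE bundle members) is an assumption plucked for the example, not a measurement:

```python
# Rough uplink sizing for a blade chassis full of virtual machines.
# All the input figures here are assumptions for illustration only.
import math

blades_per_chassis = 8
vms_per_blade = 30           # roughly 240 VMs in the chassis
peak_mbps_per_vm = 100       # assumed busy-hour average per VM
member_link_gbps = 10        # 10GbE links in the LACP bundle

demand_gbps = blades_per_chassis * vms_per_blade * peak_mbps_per_vm / 1000
bundle_size = math.ceil(demand_gbps / member_link_gbps)

print(f"aggregate demand: {demand_gbps:.0f} Gbps")
print(f"LACP bundle: {bundle_size} x {member_link_gbps}GbE links")
```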

Now let's reflect for a moment on the functionality of the various layers. Back in the olden days, to guarantee any kind of decent throughput you had to split the routing and switching functions into separate devices (preferably doing as little routing as possible and employing hefty hardware to keep the speed up for the routing work you can't avoid).

You were not splitting the functions because you wanted to but because you had to, as the hardware wasn't up to the job of doing it all in one box.

Then along came the vendor boffins and turned this on its head. The likes of Ipsilon came up with funky flow-based approaches, for instance, where the first packet of a given data stream goes through the layer 3 routing engine but the rest are simply switched through the much faster layer 2 fabric.
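The essence of the flow-based trick fits in a few lines of Python. This is a toy model rather than Ipsilon's actual machinery (the real thing ran IFMP over ATM hardware): the first packet of a flow takes the slow routed path and installs a cache entry, and everything after it takes the fast path.

```python
# Toy model of flow-based switching: route the first packet of each flow
# in software, then cache the decision so the rest of the flow takes the
# fast path. Names and structures are illustrative only.

flow_cache = {}  # (src, dst, proto, sport, dport) -> egress port

def slow_path_route(packet):
    """Stand-in for the layer 3 routing engine's full lookup."""
    # Longest-prefix match, ACLs, TTL handling etc. would live here.
    return hash(packet["dst"]) % 48  # pretend egress port

def forward(packet):
    key = (packet["src"], packet["dst"], packet["proto"],
           packet["sport"], packet["dport"])
    if key in flow_cache:
        return flow_cache[key]          # fast path: layer 2 fabric
    port = slow_path_route(packet)      # slow path: first packet only
    flow_cache[key] = port              # install the flow entry
    return port

pkt = {"src": "10.0.1.5", "dst": "10.0.2.9",
       "proto": 6, "sport": 40312, "dport": 80}
print(forward(pkt))  # first packet of the flow: routed
print(forward(pkt))  # subsequent packets: switched from the cache
```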

Pass the port

And then Rapid City and 3Com (with the F1200 and the CoreBuilder 3500 respectively) took the rather less cunning, but no less impressive, approach of building a switch that did layer 3 (specifically IP) routing in hardware, at wire speed. All of a sudden there was no need to spread the load over two devices, yet we carried on structuring our networks in layers nonetheless.

Leaving aside the cleverness of virtual switching and such funky new technologies, there is another aspect of virtualisation which has caused the layer requirements to diminish: the simple concept of port count. When every server was physical you needed at least three, probably five, ports per server (two or four for LAN connections, plus one for the lights-out management port).

Virtualise 10 physical servers and plonk them on a single ESX/Hyper-V host – without even considering chassis server products – and even if you upped the port count for bandwidth reasons you could reduce your requirement from 50 ports to perhaps a dozen.
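Spelled out, with the per-server counts from the example above and an assumed beefed-up consolidated host:

```python
# Port arithmetic for folding ten physical servers into one virtual host.
# The per-server counts mirror the example above; the host counts are
# assumed for illustration.
servers = 10
ports_per_server = 5        # four LAN connections plus lights-out management
before = servers * ports_per_server

host_lan_ports = 10         # port count upped for bandwidth reasons
host_mgmt_ports = 2
after = host_lan_ports + host_mgmt_ports

print(f"before virtualisation: {before} switch ports")  # 50
print(f"after virtualisation:  {after} switch ports")   # a dozen
```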

Virtualise more servers, lose even more ports. Previously, access switches at the edge were the sensible, economical way to provide all these connections; with virtual servers they are no longer needed.

The final point of note is that the network hardware itself has simply become fast enough that you can worry less about needing to spread the load. I remember bemoaning Cisco's Catalyst 5000 back in its day, for instance: it couldn't even come close to running at full speed when filled with 10/100Mbps blades.

No need for speed

Contrast this with today's Catalyst 6500: the Supervisor Engine 720 – the card that does the work – is named after the aggregate 720Gbps throughput it can handle in a fully populated chassis, with each slot of the chassis running at up to 40Gbps.

Now although this isn't quite wire-speed (it won't quite keep up with a maxed-out 48-port 10/100/1000Mbps card, for instance, or an eight-port 10Gbps card), it is as good as you will need in all but the most extravagant implementation.
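A quick sanity check of those figures, taking the quoted 40Gbps-per-slot number at face value and ignoring duplex accounting for simplicity:

```python
# Compare the Sup720's per-slot bandwidth with what a fully loaded
# line card could ask for. Simplified: one direction only.
slot_gbps = 40

cards = {
    "48 x 10/100/1000Mbps": 48 * 1,   # 48 Gbps
    "8 x 10Gbps": 8 * 10,             # 80 Gbps
}

for card, demand_gbps in cards.items():
    verdict = "wire speed" if demand_gbps <= slot_gbps else "oversubscribed"
    print(f"{card}: wants {demand_gbps} Gbps, "
          f"slot gives {slot_gbps} Gbps -> {verdict}")
```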

The data centre network no longer makes sense in the traditional three-layer format, then. The fact is that you will probably still have three layers (and hence the requisite number of hops) between server A and server B, but only one of these is likely to be physical LAN infrastructure. The rest is within the server hardware and the virtualisation software layer.

Preserve three layers of network hardware and you will find yourself spending more than you need to. You will also slow things down by increasing hop counts and detracting from clever forwarding technology that works brilliantly – but only if you have everything in one layer. ®
