It's all in the fabric for the data centre network
We are at the beginning of the SDN revolution, says Trevor Pott
Another fine mesh
The need for a new topology – a mesh, be it full or partial – has become painfully apparent. Servers need to be able to talk east-west with as little contention as possible without sacrificing north-south connectivity along the way.
Switches need to be able to determine the best path for a packet without needing to get into full-on layer-3 routing.
Getting the packet from A to B needs to be a layer-2 affair: something that doesn't require routing based on IP addresses and where getting more speed between two switches is as simple as plugging in another cable between them.
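To make that concrete, here is a toy sketch of the fabric idea, with invented names throughout: switches form a graph, the best path is simply the fewest hops, and a second cable between two switches just bumps the capacity of an existing edge rather than changing the topology. This illustrates the concept only; it is not how TRILL or SPB actually encode paths.

```python
from collections import defaultdict, deque

class Fabric:
    """Toy model of a layer-2 fabric: switches are nodes, cables are edges.
    Plugging a second cable between two switches just adds capacity."""

    def __init__(self):
        self.links = defaultdict(lambda: defaultdict(int))  # a -> b -> cable count

    def cable(self, a, b):
        self.links[a][b] += 1
        self.links[b][a] += 1

    def path(self, src, dst):
        """Shortest path by hop count (BFS) -- no IP routing involved."""
        seen, queue = {src}, deque([[src]])
        while queue:
            hops = queue.popleft()
            if hops[-1] == dst:
                return hops
            for nxt in self.links[hops[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(hops + [nxt])
        return None

f = Fabric()
f.cable("leaf1", "spine1"); f.cable("leaf2", "spine1")
f.cable("leaf1", "spine2"); f.cable("leaf2", "spine2")
f.cable("leaf1", "spine1")          # second cable: more bandwidth, same topology
print(f.path("leaf1", "leaf2"))     # e.g. ['leaf1', 'spine1', 'leaf2']
print(f.links["leaf1"]["spine1"])   # 2 cables aggregated on this hop
```

Note that adding the fifth cable changes nothing about path selection – exactly the "just plug in another cable" property the fabric promises.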
What is more, the human element of networking has become a problem. Modern data centres are heavily automated. New virtual machines are created and destroyed much faster than a network administrator can manually configure a network port or a storage administrator can assign storage.
Network configuration needs to be automated – something that traditional network equipment and management platforms just aren't good at.
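As a sketch of what that automation looks like, here is a minimal, hypothetical event-driven provisioner: when the hypervisor reports a new virtual machine, the matching switch port is configured with no human in the loop. All class and method names are invented for illustration; no real vendor API works exactly like this.

```python
class PortConfig:
    """Illustrative port state: just a VLAN and an enabled flag."""
    def __init__(self, vlan, enabled=True):
        self.vlan, self.enabled = vlan, enabled

class NetworkAutomator:
    """Hypothetical event-driven provisioner (not a real vendor API).
    VM lifecycle events drive port configuration automatically."""

    def __init__(self):
        self.ports = {}                       # (switch, port) -> PortConfig

    def vm_created(self, switch, port, vlan):
        # Hypervisor fires the event; the port is configured instantly,
        # faster than any administrator could do it by hand.
        self.ports[(switch, port)] = PortConfig(vlan)

    def vm_destroyed(self, switch, port):
        # Tear-down is automated too: the port returns to the pool.
        self.ports.pop((switch, port), None)

net = NetworkAutomator()
net.vm_created("tor-3", 12, vlan=200)
print(net.ports[("tor-3", 12)].vlan)   # 200
net.vm_destroyed("tor-3", 12)
```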
This break with the traditional hierarchical network is one of the foundational considerations behind software defined networking (SDN) and is the most important movement in data centre networking to have occurred in decades. These modern networks are referred to as network fabrics.
Instead of a pyramid with a core router at the top, picture a tapestry of interwoven threads which intersect in an almost haphazard fashion but ultimately give rise to an elegance that belies the chaos of the individual elements.
Command and control
Currently, there are a number of approaches to making some or all of the elements of a modern network happen. Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB) stitch networks together into a fabric. Others take this a step further by completely separating the control plane from the data plane.
Traditional switches are little islands that intercommunicate. Each holds its own configuration and needs to be babied along: ports are configured individually, setup is handled separately, and generally there is a lot of unnecessary labour involved.
Modern switches are starting to be capable of SDN. This means they can be controlled centrally. The industry terminology is "separation of the control plane from the data plane" but that's not exactly helpful.
Put simply, SDN is about separating the decision-making and configuration widget from the device actually doing the work.
For infrastructure guys a great example is RAID controller software. Each RAID controller does the work of turning groups of disks into a single volume, and each RAID controller can be accessed and configured individually if absolutely necessary. This is the equivalent of the data plane that network types go on about.
The control plane is the centralised application from which an entire data centre's worth of RAID cards can be managed, maintained, configured, monitored and so forth.
Move up a level from RAID cards to storage area networks (SANs) and that control plane has the ability to do things such as inter-system replication, mirroring across devices and so forth.
With SDN, routing decisions – layer 2 or layer 3 – are made by a separate controller that can see what is happening across the entire network.
Switches are reconfigured automatically, not only in response to a server being added or a virtual machine being created, but to detection of a downed link, changing traffic patterns or even an alert from various network security systems.
OpenFlow is emerging as the most popular way to do this, though there are other attempts at open standards and some proprietary versions too.
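That network-wide reaction can be sketched as a toy controller, again with invented names (this is not OpenFlow itself): the controller holds the entire topology, computes every switch's forwarding table centrally, and reprograms the lot the moment a link drops.

```python
from collections import deque

class Controller:
    """Toy SDN controller (illustrative only): it sees the whole topology
    and pushes forwarding decisions down; switches just forward."""

    def __init__(self):
        self.adj = {}                         # switch -> set of neighbours
        self.tables = {}                      # switch -> {destination: next hop}

    def link_up(self, a, b):
        self.adj.setdefault(a, set()).add(b)
        self.adj.setdefault(b, set()).add(a)

    def link_down(self, a, b):
        self.adj[a].discard(b); self.adj[b].discard(a)
        self.reprogram()                      # react to the failure network-wide

    def reprogram(self):
        """Recompute every switch's next hop to every destination (BFS)."""
        for src in self.adj:
            table, seen, q = {}, {src}, deque([(src, None)])
            while q:
                node, first_hop = q.popleft()
                if node != src:
                    table[node] = first_hop
                for nxt in self.adj[node]:
                    if nxt not in seen:
                        seen.add(nxt)
                        q.append((nxt, nxt if node == src else first_hop))
            self.tables[src] = table

c = Controller()
c.link_up("A", "B"); c.link_up("B", "C"); c.link_up("A", "C")
c.reprogram()
print(c.tables["A"]["C"])     # direct link: 'C'
c.link_down("A", "C")         # failure detected -> every table is rebuilt
print(c.tables["A"]["C"])     # now via 'B'
```

No switch here decided anything for itself: the detour via B was computed from the controller's global view and pushed down, which is the whole point of separating the control plane from the data plane.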
Brocade gets a nod for "old to new transition therapy": the latest version of its NetIron software can run ports in hybrid mode, allowing both OpenFlow and traditional routing to operate on the same port.
Name a price
We are at the beginning of the SDN revolution. The standards and patent wars have barely begun.
There is an incredible amount of FUD being flung about and a great deal of defensive hand-wringing by those who haven't adapted to changing requirements as well as others.
Amid all the hullabaloo about capabilities or performance, price is a very real consideration. All the sexy automation in the world doesn't help you if you can't afford it or if the minimum buy-in is an order of magnitude larger than your current data centre deployments look set to be.
You can lower the cost of entry if your vendor offers a port-based licensing approach or a subscription alternative. ®