Software defined networking works up a head of steam
Automation takes over the data centre
Software-defined networking (SDN) represents a revolutionary tide flowing through the fusty, slow-moving halls of data-centre networking, bringing speed and dynamism to network connectivity management.
The idea that network connectivity can be set up automatically, have its characteristics changed as needs change, and then be closed down, all by software running in servers outside the network, is a great innovation.
Today's network connections are largely initiated, operated and closed by technicians working at device level through proprietary interfaces to routers, bridges, switches and the like.
SDN is about automating this so that network connectivity can be set up, dynamically managed and closed for applications running on virtual machines.
In large data centres hundreds, if not thousands, of virtual machines can be instantiated, see their network needs change, and be moved from server to server and closed down each day.
Virtual machine operations are largely automated and take place in seconds and minutes. Setting up and changing network connectivity operates in a timescale of hours and days.
Cut out the middlemen
The overall aim of SDN is twofold: firstly to initiate, configure and end network connectivity much faster than by today's technician-driven methods; and secondly to make better use of network resources by making the network dynamically responsive to changes in user need, delivering traffic routing, bandwidth, quality of service, security, encryption and other network services to virtual machines.
This is done by splitting network traffic into two classes:
1) Data traffic, which flows from application to application, or to storage arrays or the cloud, through the network;
2) Control information, which flows from network controller devices (physical or virtual) to alter the way network devices serve the needs of virtual machines, which number in the hundreds, thousands or tens of thousands in today's enterprise data centres.
In effect the network traffic is divided into a data layer or plane and a control layer or plane.
The Metis Files have a schematic illustrating this SDN control plane idea.
The advantage is that a single control facility using the control plane automates the setting up of network resources for virtual machines in servers, without needing a small army of qualified technicians working through different network interfaces, a process that can take days and is error-prone.
If, like a server, a network is virtualised then the network "hypervisor" configures the network on demand, pretty much like a server hypervisor. In fact, the network hypervisor becomes part of the server hypervisor in VMware's view.
This idea has the chance of becoming a practical reality because of a university research project.
The OpenFlow switching specification was developed at Stanford University and details how a remote controller can change the way network devices operate. Its development is now overseen by the Open Networking Foundation.
The specification was developed in 2008. The introduction states: "OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardised interface to add and remove flow entries."
OpenFlow refers to a layer 2 protocol used between an OpenFlow switch or router and an OpenFlow controller, which can be a server. A server application can use this open specification to control and change how network switches operate – where they send data packets of certain kinds, for example.
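The flow-table idea at the heart of OpenFlow can be sketched in a few lines. This is a conceptual illustration only, not the real protocol: the class and field names are hypothetical, and a real OpenFlow switch matches on many more header fields. The point is the split the article describes: the controller (control plane) installs match/action entries, the switch (data plane) merely looks packets up against them, and an unmatched packet is punted back to the controller.

```python
# Sketch of the OpenFlow model: a switch holds a flow table of match/action
# entries, and controller software adds entries over a standard interface.
# All names here are illustrative, not the actual OpenFlow wire protocol.

class FlowEntry:
    def __init__(self, match, action, priority=0):
        self.match = match          # e.g. {"dst_mac": "aa:bb:cc:dd:ee:01"}
        self.action = action        # e.g. ("output", 2) or ("flood",)
        self.priority = priority

class Switch:
    def __init__(self):
        self.flow_table = []        # data-plane state, installed remotely

    def install_flow(self, entry):
        # Control plane: called by the controller, not by a technician.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: -e.priority)

    def handle_packet(self, packet):
        # Data plane: match the packet against installed entries in
        # priority order and apply the first matching action.
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.action
        return ("punt_to_controller",)  # table miss: ask the controller

# The "controller" pushes forwarding decisions into the switch:
sw = Switch()
sw.install_flow(FlowEntry({"dst_mac": "aa:bb:cc:dd:ee:01"}, ("output", 2)))
sw.install_flow(FlowEntry({"dst_mac": "ff:ff:ff:ff:ff:ff"}, ("flood",), priority=10))

print(sw.handle_packet({"dst_mac": "aa:bb:cc:dd:ee:01"}))  # ('output', 2)
print(sw.handle_packet({"dst_mac": "11:22:33:44:55:66"}))  # ('punt_to_controller',)
```

Note that the switch needs no local intelligence beyond the lookup: all policy lives in the controller, which is exactly what lets server-side software reconfigure the network in seconds.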
OpenFlow is an enabler of SDN and virtually every network systems vendor supports it, including Arista, Brocade, Cisco, Dell (Force 10), Extreme Networks, IBM, Juniper Networks, Larch Networks, Hewlett-Packard and NEC. Version 1.2 of the spec was published in February 2012.
At that time Big Switch Networks, a Stanford spin-off, published an open-source package of OpenFlow software, and has since released its Floodlight OpenFlow controller.
Big Switch Networks SDN concept
In its SDN scheme the Big Network Controller manages the network's control plane. Virtual and physical switches hold data-plane information locally and can be updated on demand rather than by dedicated technicians. Big Switch uses Nicira's Open vSwitch as its virtual switch, as do Citrix with Xen and Red Hat Enterprise Linux.
VMware has a proprietary vSwitch running in ESXi, Cisco has its Nexus 1000V virtual switch, and Microsoft's Hyper-V has its own virtual switch, which the Big Network Controller talks to.
Arista Networks, Dell (Force 10), Brocade, Juniper Networks and Extreme Networks have all partnered with Big Switch. The SDN concept has so much support from users that nearly every networking company and server hypervisor supplier is developing its own SDN strategy.
Let's look at some of the runners and riders.
Brocade is the dominant supplier of storage area network switches and directors and also includes a range of Ethernet switching devices in its product set.
CEO Lloyd Carney has said: "If you look at data centre, going forward the firewall is going to be a piece of software. The switch is going to be a piece of software. The router is going to be a piece of software. And that new evolved world where you have a control plane, doing SDN across this infrastructure requires an agile, flexible fabric."
That fabric favours OpenFlow, and Brocade has bought Vyatta for its virtual router, firewall and appliance software, which runs on a Debian-based Linux operating system.
No discussion of SDN would be complete without taking account of Cisco. It is the dominant networking supplier with its various proprietary switches, bridges, routers and more besides.
Cisco cannot afford to ignore the SDN move. It is in bed with VMware in the VCE initiative, which converges Cisco UCS servers, its networking products, EMC storage and the VMware hypervisor into single integrated systems.
Somehow, Cisco has to provide SDN features while protecting its proprietary network equipment and having VMware execute SDN software in its hypervisor running on UCS servers. How is it managing to do this?
Last year it introduced an Open Network strategy, which includes SDN but "also encompasses an array of solutions, products, and technologies that are applicable to most, if not all, use cases that are much broader than what SDN alone could address". It's the old Microsoft embrace-and-extend tactic.
Cisco says it wants to make its network elements more programmable, meaning that network controller software could tell Cisco network elements what to do and they would do it automatically.
It says network programmability is only part of a broader set of needs involving SDN, OpenFlow, OpenStack, network controllers, virtual overlays, APIs providing broad and deep visibility into the network, and so on.
Service providers, operators of massively scalable data centres and cloud providers need to work with a networking supplier that can provide a big-picture response to their needs, not just concentrate on SDN.
Cisco's Open Network Environment (ONE) is a set of "Cisco technologies and open standards that brings programmatic control and application awareness to the network, combining the benefits of hardware and software across physical and virtual".
In effect Cisco is saying you can have your open network standards cake and still use Cisco's hardware and software with its better network visibility and control.
It cites its existing Nexus 1000V switch, which runs as an application in a server and allows "your virtualized workloads to directly control their network services… without forgoing the capabilities you have come to expect from your physical network".
It says its ONE Controller framework is a "modular extensible network and fabric controller [which] can support a variety of protocols like onePK or OpenFlow, as well as a number of open APIs like REST and OSGI".
The implication is that if you go the full SDN-OpenFlow route then your network will be inferior to one using Cisco's programmable elements. You are not locked in, because you can use open protocols and APIs rather than Cisco proprietary ones if you wish.
Cisco reckons that VMware/Nicira will add network management functions to vCenter, for both physical and virtual switches, but that these will cover only a subset of the capabilities needed to provide network facilities for virtual machines. The actual network then has to deliver the requested services, and more besides.
VMware's software-defined data centre concept
VMware's aim is a software-defined data centre whose set of server, storage and networking resources are virtualised. As with ESXi hypervisor-controlled servers, users can create virtual data centres which have an isolated set of the compute, storage and networking resources they need. These resources grow or diminish exactly as virtual servers do on physical servers.
In this concept SDN is seen as virtualised networking, abstracted and pooled, with virtual networks carved out of a set of physical network resources by the equivalent of a network hypervisor which instantiates virtual networks and configures them as circumstances change.
VMware sees all this in the context of cloud computing with its vCloud Director networking, virtual switching and VXLAN protocol.
Cloud computing makes the network resource efficiency problem harder because there are so many more network elements and the overall network is vastly more complex. There is simply more to manage.
VMware bought Nicira for $1.26bn to enter the SDN field. Nicira is another Stanford University OpenFlow-derived startup, like Big Switch.
Steve Herrod, VMware's chief technical officer at the time, said: "Nicira’s software-defined networking starts by virtualising the network, decoupling the logical view of a network from its physical implementation.
"It does so by creating an abstraction layer between server hosts and existing networking gear which decouples and isolates virtual networks from specific networking hardware, turning it into a pool of network capacity.
“This enables the on-demand, programmatic creation of tens of thousands of isolated virtual networks with the simplicity and operational ease of creating and managing virtual machines."
Herrod blogged: "This acquisition expands VMware’s networking portfolio to provide a full suite of SDN capabilities and a comprehensive solution line-up for virtualising the network – from virtual switching to virtualised layer 3-7 services."
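The pool-of-capacity abstraction Herrod describes can be reduced to a toy model. This is a hedged sketch, not Nicira's actual software: the class, the bandwidth figures and the tenant names are all invented for illustration. What it shows is the shape of the idea — isolated virtual networks carved out of, and returned to, a shared pool, with the simplicity of creating and destroying virtual machines.

```python
# Sketch of a "network hypervisor" carving isolated virtual networks out of
# a pool of physical capacity. All names and numbers are hypothetical.

class NetworkPool:
    def __init__(self, total_gbps):
        self.free_gbps = total_gbps       # aggregate physical capacity
        self.virtual_networks = {}
        self._next_id = 1

    def create_network(self, tenant, gbps):
        # On-demand, programmatic creation -- no technician in the loop.
        if gbps > self.free_gbps:
            raise RuntimeError("pool exhausted")
        vnet_id = self._next_id
        self._next_id += 1
        self.free_gbps -= gbps
        # Each virtual network is isolated: it sees only its own members.
        self.virtual_networks[vnet_id] = {"tenant": tenant, "gbps": gbps}
        return vnet_id

    def delete_network(self, vnet_id):
        vnet = self.virtual_networks.pop(vnet_id)
        self.free_gbps += vnet["gbps"]    # capacity returns to the pool

pool = NetworkPool(total_gbps=100)
a = pool.create_network("tenant-a", 40)
b = pool.create_network("tenant-b", 30)
print(pool.free_gbps)   # 30
pool.delete_network(a)
print(pool.free_gbps)   # 70
```

The design point is that tenants deal only with the logical view; which physical switches actually carry their traffic is the pool's problem, not theirs.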
In August 2012, Microsoft set out its stall on SDN courtesy of a post on the Windows Server blog by Sandeep Singhal and Vijay Tewari.
The company may appear to be a little late to the SDN party but says its Windows Server 2012 and System Center 2012 SP1 Virtual Machine Manager are "production tested, production used" after years of running massive data centres for the likes of Hotmail, Bing and Windows Azure.
Microsoft comes at SDN from the perspective of wanting to move virtual machines across a cloud data centre or between data centres, which requires the virtual machine to be given a new IP address to locate it in a network. This problem is solved by enabling network control through software.
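The migration problem above is what overlay schemes such as VXLAN address, and it reduces to a lookup table. The sketch below is illustrative only — the class and host names are invented — but it shows why software control solves it: the overlay maps each VM's (virtual network, IP) pair to the physical host it currently lives on, so moving the VM is a single table update and the VM keeps its IP address.

```python
# Sketch of an overlay location table: tenant addresses are decoupled from
# physical location, so a VM migrates without readdressing. All names are
# hypothetical; real schemes such as VXLAN tunnel Ethernet frames over UDP.

class Overlay:
    def __init__(self):
        self.location = {}   # (vnet_id, vm_ip) -> physical host

    def register(self, vnet_id, vm_ip, host):
        self.location[(vnet_id, vm_ip)] = host

    def migrate(self, vnet_id, vm_ip, new_host):
        # Control plane: rewrite one entry; no new IP address needed.
        self.location[(vnet_id, vm_ip)] = new_host

    def deliver(self, vnet_id, vm_ip):
        # Data plane: tunnel the packet to wherever the VM is now.
        return self.location[(vnet_id, vm_ip)]

ov = Overlay()
ov.register(vnet_id=7, vm_ip="10.0.0.5", host="rack1-server3")
print(ov.deliver(7, "10.0.0.5"))   # rack1-server3
ov.migrate(7, "10.0.0.5", new_host="rack9-server1")
print(ov.deliver(7, "10.0.0.5"))   # rack9-server1
```

Because the table is keyed by virtual network as well as IP, two tenants can even use the same 10.0.0.5 address without colliding.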
Microsoft defines SDN as "being able to configure end hosts and physical network elements, dynamically adjust policies for how traffic flows through the network, and create virtual network abstractions that support real-time [virtual machine] instantiation and migration throughout the data centre".
Microsoft believes SDN has to include "programmability of end hosts, enabling end-to-end software control in the data centre", because the integration of virtual machine management and network control facilitates automation and reliability in large data centres.
It says this definition of SDN is “broader than the definition currently used by many industry players who only focus on configuration of physical network elements”.
SDN needs automation and centralised control and Microsoft provides all the pieces needed for this: the hypervisor (Hyper-V), the SDN control surface on the end host and the management software.
“Everything you need to deploy SDN is built right into these products, so you do not need to acquire separate management tools or product licenses,” Singhal and Tewari write.
We might see here a marketing imperative for Microsoft to differentiate itself from VMware by doing more integration at the operating system and hypervisor level. VMware can't do this because it has no server operating system product of its own.
A question of standards
Instituting SDN will turn into a standards jungle as the various suppliers try to expand their customer bases by claiming they provide benefits that you absolutely can't do without, one size does not fit all, and so on.
We have been there before, way back in IT history, with Cobol, Unix, X/Open and all the other standards crafted by suppliers to make it seem you can have your standards cake and eat their proprietary equipment cake too.
For sure, networks will become more programmable. But no mainstream supplier will devise the network-switch equivalent of JBODs (just a bunch of disks): commodity switches controlled from server hypervisors and running open-source software. Now that would bring down networking costs.
Few enterprises around the world are using storage arrays built with open-source software, such as Nexenta, because they just don't trust them, nor can they sever the apron strings that tie them to the big storage suppliers.
The betting is that the same scenario will play out in networking and that the SDN movement will result in much greater automated control of network elements.
This will be much appreciated but the limits of that control and its integration with other elements such as application management, security and data centre automation will be highly complex. This will take several years to work out.
One point to consider: if applications move to public clouds then the complexities of their network connections will be a matter for the cloud service providers. Enterprises' own data centres will become hollowed-out shells.
SDN will perhaps cease being of central importance to anyone but the service providers. For the rest of us it could simply be a transition phase on the road to fully-blown cloud computing. ®