Let's talk about OpenFlow

Software defined networks, Oh yes

Next gen security for virtualised datacentres

Where they can, data networking equipment vendors like to arrange their proprietary products into vertically integrated stacks, with complex functions often "baked" into the hardware.

Furthermore, a sometimes tortuous standardisation process makes it hard to implement changes and raises the barriers of entry to new equipment vendors.

These are the challenges that the OpenFlow Switch Consortium seeks to address.

OpenFlow starts with a simple premise - that networks should be software-defined and programmable. By enabling dynamic re-programming of network devices, OpenFlow improves traffic flows and eases the introduction of better networking features.

The OpenFlow idea is very much the brainchild of academics. But the technology has received enthusiastic endorsements from many vendors, and the first OpenFlow-enabled devices launched this year. Many more products are on their way.

OpenFlow was inspired by frustrated Stanford University computer scientists, led by Nick McKeown, who wanted to research new ways of doing networking, and test real-life performance, scalability, security and manageability of new technologies.

That was impossible because their experiments would have compromised existing networks. Nor can researchers simply modify proprietary network devices at will. Networking equipment vendors have legitimate concerns about the operation of their carefully tuned devices being disrupted by experiments, leaving users irritated by delayed, lost or compromised traffic.

The researchers had to come up with a scheme that met vendors' objections, which they did in 2008 with OpenFlow. In so doing, they enabled the vendors, and - ultimately - customers, to extend and modify their networks and network devices dynamically and safely, delivering more performance and better value for the cost of network resources.

What is OpenFlow?

OpenFlow starts with the concept of network device flow tables being modified by messages sent from a secure, remote server using a specific protocol. These control messages are logically separate from the data traffic flowing through the devices, occupying a control plane of their own.

The remote station sends Forwarding Instruction Set messages to network devices, telling them what to do with the data packets they receive. This gives the sender central control of the network infrastructure.

OpenFlow is possible because almost all network devices have flow tables with a common core set of functions. A remote server communicates with these devices using the OpenFlow protocol. The forwarding instruction set messages pass across a secure link to the devices, which run a piece of OpenFlow firmware, and are used to modify the flow tables.
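The controller/switch split described above can be sketched in a few lines of Python. This is an illustrative model only, not the real OpenFlow wire format: the `FlowMod` and `Switch` names and their fields are invented for the example.

```python
from dataclasses import dataclass, field

# Illustrative model of the controller/switch split; these names and
# fields are invented for the sketch, not the OpenFlow wire format.

@dataclass
class FlowMod:
    match: dict    # header fields to match, e.g. {"in_port": 1}
    actions: list  # what to do with matching packets, e.g. ["output:2"]

@dataclass
class Switch:
    flow_table: list = field(default_factory=list)

    def handle_flow_mod(self, msg: FlowMod) -> None:
        # The switch's job is simply to update its flow table as
        # instructed; the forwarding logic lives in the remote controller.
        self.flow_table.append(msg)

switch = Switch()
switch.handle_flow_mod(FlowMod(match={"in_port": 1}, actions=["output:2"]))
```

The point of the split is that the switch stays dumb and fast: it never decides policy, it only applies whatever table the controller has pushed down.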

A flow table has entries that identify a traffic flow and specify an action to be performed on packets within that flow.

The action could be to treat packets from one incoming port in a specific way by sending them to a particular destination. For example, incoming packets for an experimental routing scheme set up by researchers could be actioned separately from all other packets. In effect a kind of VLAN is set up by and for the researchers to test their new scheme.
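A match-then-act lookup of this sort can be sketched as follows. The table entries are hypothetical: the researchers' experimental traffic is picked out here by a VLAN tag, while everything else from the same port follows the normal rule.

```python
def lookup(flow_table, packet):
    # First matching entry wins; in a real OpenFlow switch an
    # unmatched packet can instead be punted to the controller.
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "send_to_controller"

flow_table = [
    # Researchers' experimental traffic, tagged with a dedicated VLAN
    {"match": {"in_port": 1, "vlan": 42}, "action": "output:3"},
    # All other traffic arriving on the same port takes the normal path
    {"match": {"in_port": 1}, "action": "output:2"},
]
```

Note that ordering matters with a first-match-wins table: the more specific experimental rule has to sit above the catch-all for port 1.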

What gets network device vendors excited is what they can do once a virtualised network device interface is in place.

For example, imagine a mobile phone user using Wi-Fi and moving between access stations. The current hand-off between access stations is poor and can result in dropped calls.

OpenFlow could be used to dynamically re-programme the access station flow tables and get a pretty seamless handover with no call interruption. It can also be used to drop packets that are no longer required, and deliver better quality of service, enhanced security or other functions. Video-streaming could be prioritised over email forwarding and malicious packets could be speedily dropped, for instance.
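Policies like these reduce to ordinary flow-table entries. The rules below are hypothetical - the field names, addresses and action labels are invented - but they show how prioritising video and dropping traffic from a malicious host become simple match/action pairs.

```python
# Hypothetical policy rules: the field names, addresses and action
# labels are invented for illustration.
rules = [
    {"match": {"tcp_dst": 554}, "action": "enqueue:high"},  # streaming video: high-priority queue
    {"match": {"src_ip": "10.0.0.66"}, "action": "drop"},   # known-malicious host
    {"match": {}, "action": "output:normal"},               # default: forward normally
]

def classify(packet):
    # An empty match acts as a wildcard, so the last rule catches
    # everything the earlier, more specific rules did not.
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
```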

These can be accomplished without affecting or exposing proprietary routing technology inside a router. The vendors are not at risk from opening up their sensitive technology. Instead they get the advantages of a virtualised interface to their own products, which they can update on the fly.

The upshot is a software-defined network which gives them a finer level of control than Access Control Lists or routing protocols.

We should note that OpenFlow can be used to operate at the packet level, as well as the flow level, and control processing specific to particular packet types.

There are obvious security concerns. The remote server originating OpenFlow messages will be a prime target for hackers, and access to it must be extremely carefully controlled and monitored.

OpenFlow Standards

The Stanford researchers, in the OpenFlow Switch Consortium, have handed standardisation activities to the non-profit Open Networking Foundation.

There is a board of directors, with representatives from Facebook, Google, Microsoft and others on it. Member companies include Brocade, Cisco, Dell, HP, IBM, Intel, Riverbed, VMware and many others.

It has broad industry support, and the member companies recognise that virtualising networking in this way fits with virtualising servers, storage and entire data centres. Indeed, without network virtualisation of this kind, IT infrastructure virtualisation as a whole will be held back.

There is now quite comprehensive support for the OpenFlow standard from vendors producing routers, switches, virtual switches, and network access points.

The current standard is v1.1.0, which was announced in February. Once it is implemented in network products, the pace of innovation in network protocols and network operations should increase significantly.

Innovative schemes for improving network operations can be tested in the real world and at scale, giving vendors confidence to make changes and, hopefully, enabling networks and their operators to take on the demands of the hyper-scale operations that will come as cloud computing is more widely adopted.

A broad range of OpenFlow-supporting products could arrive by the end of 2012 and signal a step-change in network operational control and flexibility.

The TCP/IP time warp we are presently stuck in would be ended, and a host of networking innovations should follow to reduce costs and improve efficiency. ®
