OpenDaylight: meet networking's bright newcomer

Prepare to be dazzled


Few of us have come across the word OpenDaylight in polite conversation lately, however many years we have spent using and managing networks.

It is, however, one of a number of related words that we are all going to be using a great deal over the next 12 to 18 months. Let us look at what it is and, more importantly, the wider set of concepts of which it is a part.

OpenDaylight is arguably the most important project in the fledgling genre of software defined networking (SDN).

The aim is simple: to open up the closed world of network infrastructure; and to do so within an open-source framework to encourage widespread development and ensure that it is not just available from whichever vendor offers developers the biggest pot of cash.

Pile on the layers

The network traditionally sits at a low layer under the servers and applications. As virtualisation platforms have become more complex and more advanced, they have started to introduce their own internal network functionality.

This enables you to do things such as keeping elements of traffic entirely within the server hardware if it is going from one virtual machine to another within the same box; and more recently even to do some funky cleverness that lets you implement neat concepts such as layer 2 emulation over the top of a layer 3 network.
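The layer 2-over-layer 3 trick mentioned above is usually done by encapsulation: a raw Ethernet frame is wrapped in a small header and carried as the payload of an ordinary routed packet. As a rough sketch (the header layout here follows the VXLAN style, and the VNI value is purely illustrative):

```python
import struct

VNI = 5001  # virtual network identifier (hypothetical tenant segment)

def vxlan_encapsulate(ethernet_frame: bytes, vni: int) -> bytes:
    """Wrap a raw layer 2 frame in a VXLAN-style header so it can be
    carried as UDP payload across a routed (layer 3) network."""
    # VXLAN-style header: 8 bytes -- flags, 3 reserved bytes,
    # then a 24-bit VNI in the top of the final 4 bytes
    flags = 0x08  # "valid VNI" flag
    header = struct.pack("!B3xI", flags, vni << 8)
    return header + ethernet_frame

frame = b"\x00" * 14 + b"payload"      # stand-in for a real Ethernet frame
packet = vxlan_encapsulate(frame, VNI)
```

The encapsulated packet can then cross any routed network; the receiving end strips the 8-byte header and delivers the original frame, so the two virtual machines believe they share a single layer 2 segment.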

The latter presently involves using creative code to provide cunning features while working within the confines of what the underlying network layer can provide. But what if the higher layers could hook into the network layer?

What if they could tell the network layer how to deal with the traffic that is about to be injected, and let the power of the network layer help out instead of simply acting as a constraint?

That is what OpenDaylight aims to give us, and it is an eye-wateringly desirable thing.

Tortoise and hare

There is, however, a niggling problem with SDN – and therefore with the entire reason for having OpenDaylight. Put simply, hardware is generally fast and software is slow.

In 1996 I sat in Rapid City Communications' offices in Silicon Valley and was told about the new f1200 wire-speed router the company had developed.

This was a huge deal. Until then, layer 3 routing had been performed exclusively in software and so was relatively slow; the f1200 did it on an ASIC (application-specific integrated circuit) and made it an order of magnitude faster.

Now, though, we are talking about moving the networking function out of hardware and back into software. Not just that, but onto the standard PC processors that are being shared with the virtual machines in the virtualisation stack. Isn't this a step backward, speed-wise?

Apparently not, according to Kurt Glazemakers, CTO of CloudFounders. “Software is in general slower than purpose-built hardware, but software allows a cheaper solution for a distributed approach,” he says.

“In fact a large hardware solution that takes care of all ACLs [access control lists] in the network could be far more inefficient than a distributed software solution that sets ACLs to each virtual machine in the network.

“Also CPUs tend to become 20 to 25 per cent faster year over year, so almost double every two years, while the network requirements per virtual machine have definitely not grown at the same pace.”

David Noguer Bau, EMEA head of service provider marketing at Juniper, acknowledges that you can't just throw everything into software.

“[The separation of the control and forwarding planes allows] the centralisation of certain parts of the control plane and certainly results in operational simplification of the network, faster deployment of new services and better orchestration between virtual machines and networks,” he says.

“But the wire-speed routing hardware will still be required to move the packets faster. The networking equipment will provide the muscle while the centralisation of the control plane will help to scale the brain.”

Jesse Rothstein, CEO of ExtraHop, acknowledges the downside of taking network functionality away from the network devices.

“Performing the layer 2/layer 3 switching and routing in software is certainly possible with present day server-class hardware,” he says.

“The important question is at what cost? High-speed packet processing is expensive for general-purpose processors, and the CPU is a shared resource. The amount of time the CPU spends processing interrupts or busy waiting is time not spent doing something else.”

Avoid the bottlenecks

Rothstein points out that the slower components will slow you down significantly only if they are being used a lot.

“For high rates of new flow initiation, the OpenFlow controller likely is a bottleneck. This issue can occur if network traffic consists primarily of short-lived flows and if every single flow requires the intervention of the controller,” he says.

“However, if the OpenFlow controller intervenes only in occasional exception cases, then the performance impact is minimal. In fact, such a design is not dissimilar to the way that modern Ethernet switches employ the host processor.”
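Rothstein's point can be sketched in a few lines: every flow-table miss costs a round trip to the controller, so long-lived flows amortise that cost while short-lived ones pay it on every flow. (This is a toy model, not a real switch pipeline; the flow identifiers and counter are illustrative.)

```python
# Why flow lifetime matters: a table miss punts to the controller (slow
# path); once a rule is installed, later packets take the fast path.
flow_table = set()
controller_hits = 0

def forward(flow_id):
    global controller_hits
    if flow_id not in flow_table:  # table miss -> controller round trip
        controller_hits += 1
        flow_table.add(flow_id)    # install rule for subsequent packets
    # else: packet is handled entirely in the fast path

# 1,000 packets spread over 10 long-lived flows: 10 controller hits
for pkt in range(1000):
    forward(pkt % 10)
print(controller_hits)  # 10

# 1,000 packets, each its own short-lived flow: 1,000 controller hits
flow_table.clear()
controller_hits = 0
for pkt in range(1000):
    forward(("short", pkt))
print(controller_hits)  # 1000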

Nick Williams, EMEA senior product manager at Brocade, takes a similar view, noting that using abstracted software to modify existing flows while traffic is in flight will slow those flows down.

“How much of the programmatic change [modifying the forwarding tables in the network layers] are you going to make in real time? How often will it be used for traffic that's in flight?” he asks.

If the flow-table changes are made automatically and programmatically but not in real time, he says, “once you've made a change it will be just as fast as before as traffic forwarding will continue in hardware”.

Go with the Flow

On balance, then, the vendors I spoke to largely agree that speed hits may well be offset by flexibility.

Rothstein mentioned OpenFlow specifically (albeit because I'd mentioned it specifically to him), so let's touch on that for a moment.


OpenDaylight is a project to define a framework for SDN in general, of which OpenFlow is merely one of the specific supported protocols. But OpenFlow is highly likely to be a hugely prominent part of SDN.

It has established itself in the field simply by having existed in release form since early 2011, and by having the support of all the big vendors including Brocade, Juniper, Cisco, Extreme, IBM and HP.

(I mentioned to Williams that I didn't see the likes of Riverbed – companies that produce WAN optimisation kit as their core product – on the list, and wondered whether we will see them added at some point. “Even if they don't naturally have a fit, they will want to find a fit,” he said.)

OpenFlow also has the benefit of being nicely simple to implement if you are a network vendor.

“The OpenFlow protocol is essentially an API to access the forwarding table of an Ethernet switch, decoupling the control plane from the data plane,” says Rothstein.

That is, the virtualisation layer can dynamically reconfigure the LAN to make it behave in the desired way without the need for manual tweaking of settings or working around the limitations of a closed network layer – precisely where we started when talking about the most important aspect of SDN.
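Rothstein's description of OpenFlow as "an API to the forwarding table" can be illustrated with a deliberately simplified model: the controller writes prioritised match/action rules into the switch's table, and a lookup either finds a matching rule or punts the packet to the control plane. None of the class or field names below come from the OpenFlow specification; they are a hypothetical sketch of the idea.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    match: dict    # header fields to match, e.g. {"dst_mac": "..."}
    actions: list  # what to do with a matching packet, e.g. ["output:3"]
    priority: int = 100

class ForwardingTable:
    """Toy model of a switch forwarding table driven by a controller."""

    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        # Controller "writes" a rule; keep highest priority first
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def lookup(self, packet_headers):
        for rule in self.rules:
            if all(packet_headers.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["controller"]  # table miss: punt to the control plane

table = ForwardingTable()
table.add_rule(FlowRule(match={"dst_mac": "aa:bb:cc:dd:ee:ff"},
                        actions=["output:3"]))
```

The decoupling Rothstein describes is visible here: the entity that populates the table (the controller) is entirely separate from the entity that consults it per packet (the data plane).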

The heart of the matter

To make sure we are clear on how the OpenDaylight framework works, let's make an analogy with a well-known product.

If you are familiar with Microsoft software architectures, you will be familiar with the .NET Framework. It has a complex set of hooks into the underlying operating system and exposes a wide range of interfaces to the applications that developers write to sit on top.

By writing relatively simple code, developers can produce immensely powerful software. While they understand to a certain extent the concepts of the system hooks they are calling, they don't have to be able to write code as complex as that underneath.

The OpenDaylight framework functions in a similar way. It sits between the high-level applications and the low-level network. The apps at the top (a virtualisation hypervisor, for example) make calls to the framework, which can then deal with the complex interface to the network devices below.

At the same time the framework provides hooks to enable apps that provide infrastructure-related functions such as monitoring, management, intrusion prevention and the like.
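In practice an application sitting above such a framework would speak something simple like REST/JSON northbound, leaving the framework to translate that into device-level protocols (OpenFlow among them) southbound. The sketch below builds such a northbound request; the endpoint path, port and payload shape are illustrative assumptions, not OpenDaylight's actual API.

```python
import json

def build_flow_request(controller, switch_id, dst_ip, out_port):
    """Assemble a hypothetical northbound 'install this flow' request.
    The path and JSON field names are made up for illustration."""
    url = f"http://{controller}/restconf/flows/{switch_id}"
    body = {
        "match": {"ipv4-destination": dst_ip},
        "action": {"output-port": out_port},
        "priority": 200,
    }
    return url, json.dumps(body)

url, payload = build_flow_request("controller.local:8181", "openflow:1",
                                  "10.1.2.0/24", 4)
```

The point of the layering is that the application never touches vendor CLIs or wire protocols; it states intent at this level and the framework handles the complex southbound conversation.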

Save the network engineer

So if we have a new layer that lets the high layers interact directly with – and even reconfigure – the bottom layer, does this mean that OpenDaylight and SDN will effectively kill off the network engineer, with the server guys taking over the network as well?

Glazemakers thinks that in fact the evolution applies across the board, not just for network engineers.

“As software is replacing more and more core functionalities, traditional boundaries like storage, network and compute will start to disappear,” he says.

Noguer Bau, by contrast, thinks the role will continue to exist but in a much more collaborative sense than before. “Network and server guys already work closely together in the cloud departments,” he says.

“However, organisations need to evolve clearly defined roles and responsibilities in a world with SDN. In the future there will still be server specialists and network specialists. Juniper's SDN model allows the demarcation between the two groups.”

Corporate prospects

Will OpenDaylight grow into a mature, widely adopted framework? Or will it wither and die on the vine alongside other stunning ideas such as Betamax, 100VG-AnyLAN and ATM networking to the desktop? What will happen in the marketplace into whose door OpenDaylight has stuck a foot?

Glazemakers has a view on where SDN as a whole is likely to grow. “I think SDN will be mainly driven by private cloud solutions and virtualisation, or new advanced offerings of network service providers, but it will be mainly initiatives that live within a corporate network,” he says.

Noguer Bau sees the cloud as a significant market. “Despite the big media coverage, SDN is still in its infancy,” he says.

“In the next 12 months, as more controllers become available we should expect the first implementations to happen. Some of them will be experimental but the majority will be solving problems in the context of the cloud.”

Williams agrees that the concept is still young. “A lot of work needs to be done understanding use cases and how it can be deployed,” he says.

“Within SDN all the manufacturers are still working out what support they can provide for each of the OpenFlow functions.”

Rothstein sees the growth of SDN concepts as a widening of what has been done thus far. “I think the definition of SDN is still evolving,” he says.

“A few years ago, SDN was largely synonymous with OpenFlow. Now, SDN is used to describe smarter and more dynamic networks where configuration is associated with logical entities rather than physical ones.

“The definition will continue to broaden as more vendors embrace the term, so it is difficult to quantify its uptake.”

Full of youthful promise

In short, SDN is too useful not to grow markedly and see huge take-up. Although still a relatively youthful concept, it is not just being talked about but openly implemented – not least thanks to OpenFlow.

This is a standard that has been around for a couple of years now and which the vendors seem universally to have adopted as a good thing to support.

The latter point is crucial, of course. Concepts such as this will succeed only if the vendors sign up. After all, there is no point having an SDN layer that hooks into the network layer if the makers of that network layer aren't opening up their systems.

OpenDaylight has already established itself as the focal point for SDN-related developments. Its first release is due on 9 December and is planned as a simultaneous release, with multiple component projects shipping together. That is early enough to ensure it has not missed the bus.

We can expect, then, that we will hear a whole lot more about it, and that a sizeable number of us will be starting to use it very soon. And far fewer people will be asking: “Open what?” ®
