OpenFlow takes networks in a different direction

An easier route

As network topologies and data access patterns have evolved, load profiles now change so quickly that a completely new approach to networking is required. That approach is OpenFlow.

According to Renato Recio, IBM Fellow and system networking CTO, life before the advent of x86 virtualisation was simple: client computers did most of the heavy lifting.

They crunched the data, dealt with files and mostly moved stuff back to servers as a form of centralised storage. In a campus environment, this north-south data flow could account for up to 95 per cent of the traffic. 

In this scenario, multiple client computers needed to talk to a single, central set of servers. These computers rarely flattened their network connections, so you could oversubscribe the link between their access switch and the next level of switch.

Continue this game until you have a small number of very fat pipes talking to your core switch and your big servers.
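To put rough numbers on that game, here is a back-of-the-envelope Python sketch; the port counts are invented for illustration, not drawn from any particular switch:

```python
# Back-of-the-envelope oversubscription maths with made-up port counts.
edge_bandwidth = 48 * 1      # 48 client-facing ports at 1Gbps each
uplink_bandwidth = 4 * 10    # four 10Gbps uplinks to the next tier

# 48Gbps of possible demand over 40Gbps of uplink: 1.2:1 oversubscription,
# which is harmless while clients rarely flatten their links.
print(f"{edge_bandwidth / uplink_bandwidth:.1f}:1")
```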

Talk show

Eventually, servers started doing most of the work among themselves. The web server talked to the database server, which talked to the object storage head node. The application server talked to a separate database server, and everyone talked to the authentication and logging servers.

Recio views it by the numbers. “I think it has always been east to west in data centres,” he says.

“Microsoft published a paper in 2010 that found 75 per cent of traffic in a service provider data centre was east to west, 50 to 60 per cent in an enterprise data centre. Most of this east-to-west traffic takes place within a rack.”

The traditional tree topology starts to show strain here. Servers tend to be capable of doing a lot more heavy lifting behind a single network link than your average client system ever did.

Those oversubscribed links start to look like a bad idea. With judicious planning you could keep workloads within a single rack, and non-blocking access switches would save the day.

Then along came virtualisation and suddenly we’re back out into the weeds. In a fully virtualised data centre, any workload can be located on any physical server inside any rack. Not only that, but these workloads move. The tree-topology, oversubscription-based network model has become a weakness.

For truly dynamic data centres to work, static network design ends up having to swing entirely the other way – adding massive amounts of additional capacity just in case your highest-load, tightly interdependent servers end up on opposite ends of the network diagram from each other.

Add in the modern push towards converged networking and the network simply had to evolve.

One solution to this problem is OpenFlow.

“Look at all the hypervisor platforms that are out there,” says Recio. “They all live in layer 2. Overlays are going to change that, but now they all live in layer 2. That means you have to stretch the size of layer 2 so that virtual machines live not just in a rack, but across racks.”

Clear the trees

This takes some doing. “To stretch that data centre across layer 2, you really need multipathing. We have a lot of different ways to do this. There’s TRILL, and a lot of people doing proprietary things. OpenFlow is another, open, way,” says Recio.

“You can run algorithms to create multiple paths, including disjoint paths, to create a topology from it. You are not bogged down by spanning tree.

“One of the values of using OpenFlow is that we've seen with some of our partners that you can get much faster convergence times. [Because the OpenFlow controller has these paths already computed and discovered], it can very quickly choose an alternative path.”
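To see what having paths “already computed” might look like, here is a minimal sketch using the open-source networkx graph library. The four-switch topology and its names are invented, and this illustrates only the idea, not IBM's implementation:

```python
# Precompute edge-disjoint paths across a toy four-switch fabric, as an
# OpenFlow controller might, so failover needs no fresh discovery.
import networkx as nx

fabric = nx.Graph()
fabric.add_edges_from([("sw1", "sw2"), ("sw2", "sw4"),
                       ("sw1", "sw3"), ("sw3", "sw4")])

# Two paths that share no links: lose one and the other still works.
paths = list(nx.edge_disjoint_paths(fabric, "sw1", "sw4"))
primary, backup = paths[0], paths[1]
print("primary:", primary, "backup:", backup)
```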

In an OpenFlow environment, switch configuration is managed centrally rather than in a device-centric manner. Network services can be run like apps on top of the network. Instead of having to work with a series of device-specific commands, properties and abilities, OpenFlow takes a fabric-wide, rule-based approach.

An OpenFlow rule starts by matching fields on a frame (switch port, VLAN, MAC source, MAC destination, IP source, IP destination, IP protocol, TCP source port and TCP destination port).

An action is then performed – forward to switch port(s), encapsulate and forward to controller (OpenFlow server), drop, process normally, or modify fields. Optional vendor-specific actions are also possible (say, for load balancing).
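To make the match-then-act model concrete, here is a hedged sketch of installing one such rule using the open-source Ryu controller framework and OpenFlow 1.3; the function name, port numbers and the dp datapath handle are assumptions for illustration:

```python
# Install a single match/action rule: frames arriving on in_port bound
# for dst_mac get forwarded out of out_port.
def install_rule(dp, in_port, dst_mac, out_port):
    ofp = dp.ofproto
    parser = dp.ofproto_parser
    # Match on two of the header fields listed above.
    match = parser.OFPMatch(in_port=in_port, eth_dst=dst_mac)
    # The action: forward to a specific switch port.
    actions = [parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                  match=match, instructions=inst))
```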

OpenFlow devices capture relevant statistics, and all of this can be done today on a single-chip solution capable of processing 1.28Tbps.
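Pulling those statistics is equally terse. A sketch, again assuming Ryu: the controller asks a switch for its per-flow counters, and the answer arrives later as an event (handler not shown).

```python
# Ask a switch for per-flow packet and byte counters; Ryu delivers the
# reply asynchronously as an EventOFPFlowStatsReply.
def request_flow_stats(dp):
    dp.send_msg(dp.ofproto_parser.OFPFlowStatsRequest(dp))
```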

OpenFlow aims to solve several problems. “What gives it the ability to control the network much better than you could with just management plane tools is that you're actually programming the data plane from the controller,” Recio explains.

“The controller is running all the pathing algorithms. It is loading this into the switches: the forwarding tables and the rules that you need to forward data along a path.

“The value of the programmability is that on that controller I can run software that can do things, for example a pathing service, a security appliance, a firewall or an intrusion prevention system. I can send stuff to the controller so that it can do intrusion prevention for me.

“That programmability can leverage OpenFlow to program the data plane flows. It's a powerful, disruptive option we did not have in the past. When you see this address, here's the action you're going to take. It changes. That's one example, but there are many others you can think of based on actions you could take.”

Command centre

To see how an OpenFlow switch might work in a real environment, we have to look at how these rules might be applied.

A switch sees a frame from MAC address A destined for MAC address B. The central configuration server is aware of which MAC addresses live on which ports of which switches across the entire fabric. The server is also aware of link states for every connection, as well as throughput statistics per port.

Since the central database is aware of all this, so too are the individual switches. The best route between the source switch and the destination switch is computed and the frame is forwarded.

Should a link anywhere in the switching fabric become saturated or a cable become unplugged, the central database is made aware of it. The information is quickly disseminated throughout the fabric, and new paths for packets can then be computed as required. This provides high availability with fast convergence.
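In graph terms, that convergence is nothing more exotic than dropping an edge and recomputing. A toy illustration, reusing the same invented four-switch topology as above:

```python
# When a cable is pulled, the controller removes that edge from its view
# of the fabric and recomputes paths; no fabric-wide re-discovery needed.
import networkx as nx

fabric = nx.Graph()
fabric.add_edges_from([("sw1", "sw2"), ("sw2", "sw4"),
                       ("sw1", "sw3"), ("sw3", "sw4")])

print(nx.shortest_path(fabric, "sw1", "sw4"))  # one of the two routes

fabric.remove_edge("sw2", "sw4")               # link goes dark
print(nx.shortest_path(fabric, "sw1", "sw4"))  # traffic reroutes via sw3
```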

Getting packets from A to B is only the beginning. We can do interesting layer-three stuff with OpenFlow.

Is your HTTP server at IP address A down? Have the switches redirect the traffic to the server at B. Do you detect a frame with unknown characteristics? Forward it to the central configuration server to be characterised and examined.
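That HTTP failover is just another rule. A hedged sketch, once more in Ryu/OpenFlow 1.3 terms, with the addresses and port number invented:

```python
# Rewrite web traffic aimed at a dead server so it lands on a live one.
def redirect_http(dp, dead_ip, live_ip, out_port):
    ofp = dp.ofproto
    parser = dp.ofproto_parser
    # Match IPv4 TCP traffic to port 80 on the dead address...
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                            ipv4_dst=dead_ip, tcp_dst=80)
    # ...rewrite the destination IP and forward towards the live server.
    actions = [parser.OFPActionSetField(ipv4_dst=live_ip),
               parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                  match=match, instructions=inst))
```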

Layer two, layer three. If you want your switch to do it, ease of configuration at data-centre scale (instead of merely device scale) is OpenFlow's bailiwick.

Truly dynamic programmable networks are here today, and vendors such as IBM are already shipping gear. ®
