Tracing the direction of data centre travel

Traffic management

Interview The data centre has been evolving constantly, but networking has barely changed in almost 15 years. Even networking will have to adapt to the times eventually, so The Register sat down with Brocade's Julian Starr to understand where we are and where we are going.

Starr's first statement after the initial pleasantries was: "What you care about is that the data flows, not how."

This serves as an excellent summation of Brocade's views.

Feel the fabric

My last major look at the future of networking was discussing OpenFlow with Renato Recio from IBM. Starr's statement sounded like something from within OpenFlow's remit, so I started there.

Starr's response was passionate: he is obviously a strong believer in OpenFlow and related software defined networking (SDN) technologies.

Starr is quite a fan of networking fabrics. He sees traditional networks as full of problems: they do not handle a lost link or heavy congestion well.

“What we should see is what we see in the storage world today: that you can track your recovery time in real time,” he says.

“Remove spanning tree and move towards a fabric-style platform and a lot of things start to go away. As you move to TRILL, OpenFlow and proper SDN, there is a simplification of the platform.”

Starr views OpenFlow in terms of programmability, monitoring and transparency. Programmability is evident in that OpenFlow is basically “a big brain that tells you how your network should be”.

He talks about the programmability of Brocade's switches. “If you automate you can orchestrate; the goal of RESTful APIs is to do that, to have a set of common APIs that can be managed as part of a common infrastructure,” he says.
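
To make that concrete, orchestration through a common set of RESTful APIs might look something like the sketch below. The endpoint paths, credentials and JSON fields are illustrative assumptions, not Brocade's documented interface; the point is that one script can drive many devices through the same verbs and payloads.

```python
# Hypothetical sketch: driving switch configuration through a REST API.
# The host, paths and payload fields are assumptions for illustration,
# not any vendor's documented interface.
import requests

SWITCH = "https://switch01.example.net/api/v1"  # hypothetical endpoint
AUTH = ("admin", "secret")                      # illustrative credentials

def create_vlan(vlan_id: int, name: str) -> None:
    """Create a VLAN through the (assumed) common API."""
    resp = requests.post(f"{SWITCH}/vlans",
                         json={"id": vlan_id, "name": name},
                         auth=AUTH, timeout=10)
    resp.raise_for_status()

def assign_port(port: str, vlan_id: int) -> None:
    """Attach a port to a VLAN with the same request pattern."""
    resp = requests.put(f"{SWITCH}/ports/{port}",
                        json={"vlan": vlan_id},
                        auth=AUTH, timeout=10)
    resp.raise_for_status()

# Because every device answers the same API, orchestration is a loop,
# not a per-box login session.
create_vlan(42, "app-tier")
for port in ("eth1/1", "eth1/2"):
    assign_port(port, vlan_id=42)
```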

Monitoring and transparency are critical as “nobody is going to be able to determine the impacts of failure and resilience if you can't see where traffic is flowing”.

This has a direct impact on business analysis capabilities, an area Starr views as increasingly important.
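
The visibility Starr describes can be pictured as nothing more exotic than aggregating per-flow counters. In the sketch below, get_flow_stats is a hypothetical stand-in for whatever the switches actually expose (OpenFlow flow statistics, sFlow samples and so on):

```python
# Hypothetical sketch: turning raw per-flow counters into visibility.
# get_flow_stats() stands in for whatever the network really exposes.
from collections import Counter

def top_talkers(get_flow_stats, n=5):
    """Rank (src, dst) pairs by bytes moved, so the impact of a
    failure can be judged against the traffic actually in flight."""
    traffic = Counter()
    for flow in get_flow_stats():
        traffic[(flow["src"], flow["dst"])] += flow["bytes"]
    return traffic.most_common(n)
```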

Quick response

Transparency and monitoring feed back into the programmability aspect of OpenFlow in the form of automation. Networks should be responsive to change, not just in the physical architecture (such as new links being added or servers disconnected) but also in traffic flows.
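
A minimal sketch of that responsiveness, assuming hypothetical event types and topology methods rather than any particular controller's API, might look like this:

```python
# Hypothetical sketch of an event-driven control loop. The event types
# and topology methods are illustrative, not a real controller's API.
from dataclasses import dataclass

@dataclass
class LinkDown:
    switch: str
    port: str

@dataclass
class FlowSurge:
    src: str
    dst: str
    mbps: float

def handle_event(event, topology):
    """React to network change instead of waiting for an operator."""
    if isinstance(event, LinkDown):
        # Recompute paths that crossed the failed link and push new
        # forwarding rules, rather than waiting for spanning tree.
        topology.remove_link(event.switch, event.port)
        topology.reroute_affected_flows()
    elif isinstance(event, FlowSurge) and event.mbps > 800:
        # Steer a hot flow onto a less congested path.
        topology.steer_flow(event.src, event.dst, prefer="least-loaded")
```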

Our data centres are increasingly virtualised. Network traffic becomes dynamic in a way that didn't exist before. Workloads talk among themselves without presenting that information to end-users, constantly moving from one physical host to another, changing the servers they communicate with as the day moves on.

According to Starr this has real-world impacts on how the software behind the software-defined data centre is crafted.

"Some of those models and decision-making structures become hard to code. We are going to see a sort of resurgence of management tools," he says.

Go with the flows

In a virtualised environment, we have generalised the hardware and wrapped up the operating system, applications and so forth into a neat little container. We move this back and forth and it is easy to conceptualise.

Network flows can be envisioned in much the same way. Instead of looking at network traffic as a stream of packets with QoS, filters, layer 2 switching, layer 3 routing and so forth, we start to look at it as communication between one system and another, abstracting the messy details of how.

“The goal of virtualisation is to make everything look the same,” says Starr.

This is not too dissimilar from what SDN is trying to achieve in the network space. Ultimately, Starr believes that network flows will move like virtual machines.

We don't care how the data gets there – what QoS is required, what filters it passes through, what switches or routers it visits along the way – only that it does get there and at the speeds we specified.
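
One way to picture that abstraction is as a declarative flow intent: the endpoints and the service level are stated, and the delivery mechanics are left to the network. The field names in this sketch are assumptions for illustration, not any standard's schema:

```python
# Hypothetical sketch of a flow described by intent rather than by
# mechanics: what must be delivered, not which boxes carry it.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowIntent:
    src: str                 # source system, e.g. "web-tier"
    dst: str                 # destination system, e.g. "db-tier"
    min_mbps: float          # the speed we specified
    max_latency_ms: float    # and the latency we will tolerate

    def satisfied_by(self, measured_mbps: float, measured_ms: float) -> bool:
        """The only test that matters: did the data get there as asked?"""
        return (measured_mbps >= self.min_mbps
                and measured_ms <= self.max_latency_ms)

intent = FlowIntent(src="web-tier", dst="db-tier",
                    min_mbps=500.0, max_latency_ms=5.0)
print(intent.satisfied_by(measured_mbps=620.0, measured_ms=3.2))  # True
```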

Analysis of network flows can help with other aspects of the data centre as well. “We have a resource broker that looks at the hypervisor, looks at the transaction load going to the server and measures response time in real time,” says Starr.

Look at those response times and model them. If peaks occur you can spot them, ramp up additional virtual machines, plug into the application controller, spread out the load, bring the response time down and then spin things back down when the load drops.
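
That feedback loop is simple to sketch. The thresholds, monitor and scaler below are hypothetical stand-ins for whatever resource broker and application controller are actually in place:

```python
# Hypothetical sketch of the scale-up/scale-down loop described above.
# monitor and scaler stand in for a real broker and app controller.
import statistics
import time

SCALE_UP_MS = 200.0    # illustrative response-time thresholds
SCALE_DOWN_MS = 50.0
MIN_VMS, MAX_VMS = 2, 16

def control_loop(monitor, scaler):
    vms = MIN_VMS
    while True:
        # Response times for recent transactions, measured in real time.
        samples = monitor.recent_response_times_ms()
        p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
        if p95 > SCALE_UP_MS and vms < MAX_VMS:
            vms += 1
            scaler.add_vm()         # spread out the load
        elif p95 < SCALE_DOWN_MS and vms > MIN_VMS:
            vms -= 1
            scaler.remove_vm()      # spin back down when load drops
        time.sleep(30)              # re-evaluate every half minute
```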

“You can really optimise the infrastructure. We can do that today, but there are always things we can do to help drive that forwards,” says Starr.
