
Tracing the direction of data centre travel


Interview The data centre has been evolving constantly, but for almost 15 years networking has not. Even networking will have to adapt to the times eventually, so The Register sat down with Brocade's Julian Starr to understand where we are and where we are going.

Starr's first statement after the initial pleasantries was: "What you care about is that the data flows, not how."

This serves as an excellent summation of Brocade's views.

Feel the fabric

My last major look at the future of networking was discussing OpenFlow with Renato Recio from IBM. Starr's statement sounded like something from within OpenFlow's remit, so I started there.

Starr's response was passionate: he is obviously a strong believer in OpenFlow and related software defined networking (SDN) technologies.

Starr is quite a fan of networking fabrics. He sees traditional networks as full of problems: they do not handle a lost link or congestion well.

“What we should see is what we see in the storage world today: that you can track your recovery time in real time,” he says.

“Remove spanning tree and move towards a fabric-style platform and a lot of things start to go away. As you move to Trill, OpenFlow and proper SDN, there is a simplification of the platform.”

Starr views OpenFlow in terms of programmability, monitoring and transparency. Programmability is evident in that OpenFlow is basically “a big brain that tells you how your network should be”.

He talks about the programmability of Brocade's switches. “If you automate you can orchestrate; the goal of RESTful APIs is to do that, to have a set of common APIs that can be managed as part of a common infrastructure,” he says.
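The automate-then-orchestrate idea can be sketched in a few lines: build the same API request once, then repeat it across an estate of switches. The endpoint path and payload fields below are hypothetical illustrations, not Brocade's actual REST interface.

```python
import json

# Hypothetical REST-style orchestration sketch. The URL scheme and
# JSON fields are illustrative, not a real vendor API.

def build_vlan_request(switch_host, vlan_id, name):
    """Return (url, body) for creating a VLAN via a RESTful API."""
    url = f"https://{switch_host}/rest/config/vlans"
    body = json.dumps({"vlan-id": vlan_id, "name": name})
    return url, body

def orchestrate(switches, vlan_id, name):
    """The same call repeated across many switches: automation
    first, orchestration layered on top of the common API."""
    return [build_vlan_request(s, vlan_id, name) for s in switches]

for url, body in orchestrate(["sw1.dc.local", "sw2.dc.local"], 42, "app-tier"):
    print(url, body)
```

The point is not the specific call but that a single common API shape lets one management tool drive the whole infrastructure.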

Monitoring and transparency are critical as “nobody is going to be able to determine the impacts of failure and resilience if you can't see where traffic is flowing”.

This has a direct impact on business analysis capabilities, an area Starr views as increasingly important.

Quick response

Transparency and monitoring feed back into the programmability aspect of OpenFlow in the form of automation. Networks should be responsive to changes, not just in the physical architecture (such as new links being added or servers disconnected) but to traffic flows.

Our data centres are increasingly virtualised. Network traffic becomes dynamic in a way that didn't exist before. Workloads talk among themselves without presenting that information to end-users, constantly moving from one physical host to another, changing the servers they communicate with as the day moves on.

According to Starr this has real-world impacts on how the software behind the software-defined data centre is crafted.

"Some of those models and decision-making structures become hard to code. We are going to see a sort of resurgence of management tools," he says.

Go with the flows

In a virtualised environment, we have generalised the hardware and wrapped up the operating system, applications and so forth into a neat little container. We move this back and forth and it is easy to conceptualise.

Network flows can be envisioned in much the same way. Instead of looking at network traffic as a stream of packets with QoS, filters, layer 2 routing, layer 3 routing and so forth, we start to look at it as communication between one system and another, abstracting the messy details of how.
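That abstraction can be made concrete with a small sketch: a flow described by its endpoints and the guarantees it needs, with no mention of the path it takes. The class and field names here are illustrative assumptions, not any vendor's data model.

```python
from dataclasses import dataclass

# Illustrative sketch: a network flow as an intent, not a path.
# The controller, not the application, works out which QoS classes,
# filters and hops satisfy it.

@dataclass(frozen=True)
class Flow:
    src: str                 # logical endpoint, e.g. a VM name
    dst: str
    min_bandwidth_mbps: int  # what we require...
    max_latency_ms: float    # ...not how it is delivered

def migrate(flow, new_dst):
    """Moving a workload just rewrites the endpoint; the fabric
    re-plans the path underneath, like vMotion for traffic."""
    return Flow(flow.src, new_dst, flow.min_bandwidth_mbps,
                flow.max_latency_ms)

db_flow = Flow("web-vm", "db-vm", min_bandwidth_mbps=100, max_latency_ms=5.0)
moved = migrate(db_flow, "db-vm-hostB")
print(moved)
```

Nothing in the object says anything about switches or routes; that is exactly the messy detail being abstracted away.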

“The goal of virtualisation is to make everything look the same,” says Starr.


This is not too dissimilar from what SDN is trying to achieve in the network space. Ultimately, Starr believes that network flows will move like virtual machines.

We don't care how the data gets there – what QoS is required, what filters it passes through, what switches or routers it visits along the way – only that it does get there and at the speeds we specified.

Analysis of network flows can help with other aspects of the data centre as well. “We have a resource broker that looks at the hypervisor, looks at the transaction load going to the server and measures response time in real time,” says Starr.

Look at those response times and model them. If peaks occur you can spot them, ramp up additional virtual machines, plug into the application controller, spread out the load, bring the response time down and then spin things back down when the load drops.
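A minimal sketch of that feedback loop, assuming a simple threshold policy (the thresholds and function names are illustrative, not Starr's resource broker):

```python
# Illustrative autoscaling sketch: watch response times, ramp up
# the VM count when a peak appears, spin back down when load drops.

def decide_scale(response_times_ms, current_vms,
                 high_ms=200.0, low_ms=50.0, min_vms=1):
    """Return the new VM count given recent response-time samples."""
    avg = sum(response_times_ms) / len(response_times_ms)
    if avg > high_ms:
        return current_vms + 1          # peak spotted: ramp up
    if avg < low_ms and current_vms > min_vms:
        return current_vms - 1          # load dropped: spin down
    return current_vms                  # within band: leave alone

print(decide_scale([250, 300, 280], current_vms=2))  # peak -> 3
print(decide_scale([20, 30, 25], current_vms=3))     # quiet -> 2
```

A real broker would plug the new count into the application controller and re-spread the load; the loop structure is the same.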

“You can really optimise the infrastructure. We can do that today, but there are always things we can do to help drive that forwards,” says Starr.
