Original URL: http://www.theregister.co.uk/2013/12/09/feature_network_function_virtualisation/

WTF is... NFV: All your basestations are belong to us

Intel and rather a lot of telcos want networks to operate like data centres

By Tony Smith

Posted in Data Networking, 9th December 2013 08:59 GMT

Mobile network operators would have had an easier life if it wasn’t for smartphones and the flood of data traffic they initiated. Apps have led to a massive increase in the volume of data moving back and forth over phone networks - not just from users; the ads in free apps helped too - and operators are struggling to cope.

And this is before the Internet of Things really takes off, as it’s expected to do in the coming years, adding millions more devices – particularly enthusiastic forecasts put the total at billions – to these networks too. Catering for all this data traffic isn’t simply a matter of widening the pipe; it will require a massive expansion of the infrastructure that hosts these networks.

Quite apart from the time it will take to put that infrastructure in place, there’s the cost. Businesses and consumers want more bandwidth for less money, but the money has to come from somewhere.

Enter chip giant Intel, not with its capacious cheque book at the ready but with a notion to commoditise telecommunications network infrastructure by ridding it of expensive, proprietary, function-specific and purpose-built hardware and replacing it with cheap general-purpose kit able to replicate in software the functionality delivered by the old boxes.

Intel’s motivation is not philanthropic, of course. These new, standard devices will, it hopes, be based on its processors.

The 1990s all over again

Today’s networks are based around boxes designed to do very specific jobs. Most of those tasks were defined years ago, and hardware built near enough on a bespoke basis for each operator. That makes them very expensive. It also means they can’t be readily adapted as network demand changes over time. Instead, vendors come up with new kit, timing its availability to tie in with established telco upgrade cycles.

It used to be that way in the server business too, but through the 1990s and early 2000s, x86-based commodity hardware running Linux or Windows proved itself to be much cheaper, more flexible, more scalable and easier to upgrade than older Risc-based machines.


The old way and the new: NFV replaces proprietary, bespoke boxes (left) with as many standard servers as you need

Intel’s logic is straightforward: if relatively low-cost x86 servers can successfully replace pricier machines running server makers’ own silicon, surely they can likewise replace all those pricey proprietary boxes currently attached to basestations and other parts of the network.

Even the chip giant admits x86 servers aren’t going to push out the established hardware in the near term, and certainly not all of it at once. But it scents a shift in the mood of the telcos themselves. This is a change they want, and rather a lot of them are working together to make it happen.

A process has already been established to define how this shift can be made to happen quickly and in a way that meets telcos’ needs. It’s called Network Functions Virtualisation (NFV).

Bespoke hardware out, commodity kit in

NFV essentially replaces proprietary boxes with software running on standard servers. Better still, it can use those servers’ processor virtualisation capabilities to run the workloads of multiple boxes, each with its own operating system, on a single unit.
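To make the idea concrete, here’s a minimal sketch of that consolidation using the libvirt Python bindings to boot several virtualised network functions (VNFs) as KVM guests on one x86 box. The function names, disk images and resource figures are illustrative assumptions, not any vendor’s actual products.

```python
# Sketch: run several network functions as KVM guests on one x86 server,
# via the libvirt Python bindings. Images and names are illustrative.
import libvirt

# Hypothetical disk images, one per virtualised network function (VNF)
VNF_IMAGES = {
    "firewall": "/var/lib/libvirt/images/fw.qcow2",
    "threat-monitor": "/var/lib/libvirt/images/ids.qcow2",
    "wan-accelerator": "/var/lib/libvirt/images/wanopt.qcow2",
}

DOMAIN_XML = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{image}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'><source bridge='br0'/></interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
for name, image in VNF_IMAGES.items():
    dom = conn.defineXML(DOMAIN_XML.format(name=name, image=image))
    dom.create()                       # boot the guest
    print(f"started VNF '{name}'")
conn.close()
```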

BT is a keen supporter of the scheme. According to Don Clarke, the British telco’s Head of Network Evolution Innovation, the company has been researching NFV for the best part of three years now.

“Two-and-a-half years ago, we started a research programme to build a proof-of-concept platform to test network-type workflows on standard industry servers,” he says. BT took hardware from HP, loaded it with a Wind River embedded software stack and began seeing what network hardware functionality it could replicate in software.

“We implemented a network function that’s well understood in BT, and tested it at scale and at performance,” says Clarke. “Pretty quickly it became apparent that we could get the same or better performance for a quite complex network function from this hardware as we could from a hardware-optimised device we’d bought in volume from one of our network equipment vendors.”

The next stage was to experiment with multiple functions - firewall, threat monitoring, connection acceleration - in parallel. “We asked, ‘OK, if I take an industry standard server, what hardware appliances that currently have to be procured, installed and supported individually can I integrate by loading them as software equivalents on a single box, and have them deliver the same performance as the individual units?’ I can confidently say that, from a technical perspective, we did that. We proved the concept.”
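The consolidation sums behind that question are easy to sketch. Below is a toy first-fit packing of appliance workloads onto standard servers; all the capacity and throughput figures are invented for illustration and bear no relation to BT’s actual measurements.

```python
# Sketch: first-fit consolidation of appliance workloads onto standard
# servers. All capacity and throughput figures are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Server:
    vcpus: int                      # spare CPU capacity
    gbps: float                     # spare packet-processing budget
    hosted: list = field(default_factory=list)

    def fits(self, fn: dict) -> bool:
        return fn["vcpus"] <= self.vcpus and fn["gbps"] <= self.gbps

    def place(self, fn: dict) -> None:
        self.vcpus -= fn["vcpus"]
        self.gbps -= fn["gbps"]
        self.hosted.append(fn["name"])

# Appliances to be replaced by software equivalents on shared hardware
functions = [
    {"name": "firewall",        "vcpus": 4, "gbps": 6.0},
    {"name": "threat-monitor",  "vcpus": 6, "gbps": 4.0},
    {"name": "wan-accelerator", "vcpus": 4, "gbps": 3.0},
]

servers = [Server(vcpus=16, gbps=20.0)]
for fn in functions:
    target = next((s for s in servers if s.fits(fn)), None)
    if target is None:              # no room: scale out with another box
        target = Server(vcpus=16, gbps=20.0)
        servers.append(target)
    target.place(fn)

for i, s in enumerate(servers):
    print(f"server {i}: {s.hosted} ({s.vcpus} vCPUs, {s.gbps} Gbit/s spare)")
```

Under these made-up figures all three functions land on a single box, which is exactly the outcome BT reported.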

Re-inventing the network

The test systems were better in other ways, too. “We also showed dramatic reductions in power consumption - in some scenarios as much as 70 per cent – and in some scenarios a 50 per cent capital expenditure reduction,” Clarke claims. “The flexibility and potential for service acceleration is unprecedented when you move into a software environment.”

Crucially, for Clarke, the research programme demonstrated that NFV was not merely an alternative to the current approach – it was the way forward for BT, and if for BT then for the world’s other telcos too.

He’s unequivocal about what the research work revealed. “Effectively what I’m saying is that the ways in which we design and architect telecommunications networks using bespoke hardware appliances are now obsolete. That’s what we have discovered. We can virtualise network functions, download them as software and put them on industry standard servers.”

For Clarke, the case for NFV had been made. More research would be needed to fine-tune the details, but what was required most was an effort to establish an open ecosystem. Right from the start, Clarke realised this wasn’t something BT could - or should - do on its own.

To take the fullest possible advantage of NFV, the industry needed a common framework on which operators, hardware and software suppliers, and other service-provider partners could begin to build and implement the technology.

Slow standards, quick co-operation

“That takes standards,” he says, “but standards take a long time to be agreed. So rather than embark on a standards process, the idea gelled in my mind that we should get a collaboration process going among the operators.”

He had two goals: first, to encourage telcos to share what they were learning from their own NFV research into the complexities of implementing networks in data centre environments and, second, to devise common approaches for at least some of the technical challenges.

“If we get common approaches and find common requirements, we can begin to get a common message to the industry and start to get the industry solving those challenges for us,” he says.


Use-case scenarios for NFV

A fair few of Clarke’s opposite numbers at rival top-tier operators agreed. Twelve of them from Europe, the US and Asia, along with BT itself, made their intentions plain in a white paper published just over a year ago, in October 2012, the result of the informal discussions Clarke had initiated earlier that year. Together they formed the NFV ISG (Industry Specification Group) under the European standards body ETSI.

The paper was those 13 operators’ way of making clear to the technology industry that they all believe NFV is the way they can expand network performance, reach and capabilities more cost-effectively than has ever been possible before, and that they are all committed to this approach.

Roadmapping new networks

“Networks built out of bespoke appliances is the old way,” says Clarke. “The new way is networks built in data centre environments, and that is a significant shift which will dramatically change the supply chain and the operating environment in telecommunications operators over the next two to five years.”

A year on, in October 2013, the original 13 – by now joined by 12 other operators and service providers, among them Cable Labs, which represents the cable TV industry and maintains the DOCSIS cable modem standard – restated their aims and their vision in a second white paper. It defined the roadmap they want to follow to see that vision realised.

This is not, Clarke stresses, an exercise in setting standards. The group’s definitions of terminology and use-case scenarios have been put in place to ensure telcos, and the software and hardware vendors who supply them, are talking the same language. The end result of the process laid down in the roadmap will be a framework that tells vendors exactly what telcos want to achieve: in what roles NFV kit will first be used, and how the technique’s use can then be extended. Then it will be up to the vendors to come up with products that meet those needs.

“NFV didn’t exist as a concept or a technology, though it was implied in many of the things the cloud industry were doing,” says Clarke. “So we had to design a new terminology. That’s needed because the whole point is not to develop standards, it’s to get the whole industry lined up behind a common endeavour to innovate and solve challenges, and that includes being able to talk the same language.”

This initial definition work isn’t exhaustive, he admits. No one can say what new applications NFV might prompt or make possible. “Who could have forecast SMS would be so successful when some engineer decided he was going to use 128 bits to make a general-purpose text transmission platform? It was unknown at that time what that would grow into. We can’t predict what kind of network innovations will arise if you virtualise networks in data centre environments.”

Adding smarts to dumb basestations

Hardware companies are already taking their first steps toward finding out how NFV may change what networks can do. Nokia Siemens Networks (NSN), for example, is developing a server blade module that slots into its basestation hardware to run applications. It’s currently running trials of the kit with South Korea Telecom.

“The basestation today is a data-in, data-out device,” says Dirk Lindemeier, who runs the application server programme at NSN. “By adding this new component, this application server, we give the basestation a bigger, more relevant role by letting it compute, store content and extract raw radio data which can then be used to contextualise applications that are either locally hosted by this application server or stored online.”

Lindemeier’s team has focused on apps that can improve the user experience, and the trials have shown very positive results, he says. “By placing certain functions or content at the edge of the network, throughput has been improved by 100 per cent, in part due to the flawless TCP behaviour - we’re transmitting data without loss. We’ve seen quite dramatic latency reduction too. That’s a very important metric for service and application providers – latency reduction can be translated directly to money.”

This is what NSN calls “mobile edge computing” - putting intelligence at the edge of the network, where it’s more readily available to process data and send back results than would be the case if everything had to go all the way back down the line to centralised compute facilities.
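A back-of-the-envelope model shows why placement matters. The sketch below compares a transaction served from the core with one served at the basestation; the round-trip delays are assumptions for illustration, not figures from NSN’s trial.

```python
# Sketch: why serving from the basestation cuts latency. The round-trip
# figures are assumptions for illustration, not measurements from NSN's trial.
RADIO_RTT_MS = 30.0     # handset <-> basestation round trip
BACKHAUL_RTT_MS = 40.0  # basestation <-> centralised data centre round trip

def fetch_latency_ms(round_trips: int, served_at_edge: bool) -> float:
    """Total delay for a transaction needing several round trips,
    e.g. a TCP handshake followed by request/response exchanges."""
    per_trip = RADIO_RTT_MS if served_at_edge else RADIO_RTT_MS + BACKHAUL_RTT_MS
    return round_trips * per_trip

for trips in (1, 3, 5):
    core = fetch_latency_ms(trips, served_at_edge=False)
    edge = fetch_latency_ms(trips, served_at_edge=True)
    print(f"{trips} round trips: core {core:.0f} ms, "
          f"edge {edge:.0f} ms ({100 * (core - edge) / core:.0f}% lower)")
```

Because the backhaul penalty is paid on every round trip, chatty protocols benefit most from being served at the edge.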

And it works. “Basestation programmability is a reality. We have implemented certain functionality on the basestation within weeks. Had we taken this functionality through 3GPP and the corresponding standardisation and implementation processes, it would have taken us much, much longer.

Intelligence at the edge

“Contextualisation is a reality. We have done it with a set of parameters that we extracted from the basestation and used with certain applications, and it’s working in real time. So much data is sitting in the basestation, but by the time it has been forwarded to some central analytics system, it would be out of date already. Basestation data changes at a millisecond level. You can’t expose information that is valid for a millisecond to any central analytical system. Either you process it directly on the spot, or you don’t process it at all.”
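That freshness constraint is simple to express in code. The sketch below acts on radio-layer metrics only while they’re still fresh, dropping anything older than a millisecond rather than shipping it to a central system; the record shape, field names and threshold are illustrative assumptions.

```python
# Sketch: process radio-layer metrics at the edge only while fresh. The
# one-millisecond threshold and the record shape are assumptions.
import time

FRESHNESS_S = 0.001   # basestation data "changes at a millisecond level"

def adapt_locally(record: dict) -> None:
    # Stand-in for a local application reacting to live radio conditions
    print(f"cell load {record['load']:.2f} -> adjusting locally")

def handle_metric(record: dict) -> None:
    age = time.monotonic() - record["captured_at"]
    if age > FRESHNESS_S:
        return                    # stale: not worth forwarding anywhere
    adapt_locally(record)         # act on it on the spot

handle_metric({"captured_at": time.monotonic(), "load": 0.83})
```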

This is just a first step, says Clarke. He envisages a dynamic basestation able to adapt to changing patterns of demand over a 24-hour period by downloading network functionality as software when it’s needed and replacing it with different functional modules when it’s not.
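A toy orchestration loop captures the idea. The schedule and function names below are invented for illustration; a real implementation would react to measured demand rather than the clock.

```python
# Sketch: a basestation that swaps its software functions by time of day.
# The schedule and the function names are invented for illustration.
SCHEDULE = {
    range(7, 19):  ["voice-core", "firewall"],        # daytime: call-heavy
    range(19, 24): ["video-cache", "firewall"],       # evening: streaming
    range(0, 7):   ["bulk-transfer", "diagnostics"],  # night: maintenance
}

def wanted_functions(hour: int) -> list:
    for hours, fns in SCHEDULE.items():
        if hour in hours:
            return fns
    return []

def reconcile(running: set, hour: int) -> set:
    wanted = set(wanted_functions(hour))
    for fn in running - wanted:
        print(f"{hour:02d}:00 stop    {fn}")  # unload the software function
    for fn in wanted - running:
        print(f"{hour:02d}:00 deploy  {fn}")  # download and start it
    return wanted

running: set = set()
for hour in range(24):
    running = reconcile(running, hour)
```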

Think about the Japanese earthquake of 2011, he says. Japan’s voice network couldn’t cope with the sudden surge in usage as everyone began calling everyone else to see if they were unhurt. But NTT DoCoMo’s content delivery-centric data network stayed up. Imagine being able to load that entire content network’s hardware with voice traffic management functionality and use it to expand the network’s voice capacity, all without the need to send out a single engineer.


The NFV Industry Specification Group’s roadmap

That’s the goal for NFV: not only to use the low cost of standard servers to expand networks more cheaply - or make them more extensive for the same budget - but also to take advantage of the architecture’s flexibility to support different workloads, potentially at the flick of a switch.

But there’s still plenty of work to do to establish the ground rules. “We have set ourselves the target of having the work done by the end of 2014,” says Clarke. “This is where we differ from the standards body: we are not standardising, we are trying to give the industry as clear a picture of where we want to go as possible and then we want to hand over to the vendors who will supply us with the solutions to take forward the innovation - this is not about the telcos doing the innovation, or competing with the IT industry.”

On the contrary, it’s a major opportunity for the computer business, he says. “If you’re a small ISV who wants to sell your network application to the whole global industry, the chances are you’ve not got the resources to go and talk to 30-40 telcos. The NFV ISG’s work provides enough guidance for that ISV to know the big picture as to what these telcos think is the future.” The result: the ISV can build on generic foundations, secure in the knowledge that its product supports most if not all of the goals telcos want to achieve.

Open to open source

And it’s not just for the benefit of commercial developers. The NFV ISG’s work also embraces the open source community. “We recognise important open source communities that will be relevant to this space, for instance OpenStack, Apache CloudStack and the Linux Foundation’s OpenDaylight software-defined networking project - a number of those are going to be important.”

With the publication of the second NFV ISG white paper, the first five specification documents were released too. “These initial specifications have been developed in record time: under ten months of intensive work,” boasts NFV ISG chairman Prodip Sen. They take in use cases, telco requirements, the architectural framework, terminology and a framework for co-ordinating and promoting public demonstrations of proof-of-concept platforms.

According to the ISG, early NFV deployments are already under way and are expected to accelerate during the next two years. More detailed specifications are scheduled to be published in 2014.

NFV will change the network, and enable so many of the products and services the technology industry is already starting to build.

“Once you implement networks in software, new ways to implement networks become possible,” says Don Clarke. “New ways to monitor networks become possible. New ways to scale and trade workloads become possible. The Internet of Things with its predicted 50 billion devices – connectivity is the big deal for that; there’s no point having these Things if you can’t connect them together – becomes possible.” ®