Microsoft reveals secrets of Azure cloud's networking underbelly
'In the last three years everything changed'
LADIS 2013 Microsoft has revealed the technologies it has pressed into service to provide network virtualization for its hulking Azure cloud – and 'fessed up to some of the thornier problems that appear when you grow a network significantly.
The company's partner development manager for Windows Azure networking, Albert Greenberg, gave details on the approach at this month's LADIS conference in Pennsylvania.
"I can tell you that in the last three years, everything [in Azure networking] changed," Greenberg said. "There's not a wire or protocol that still exists in the same form. It's a nice time to be in networking because there's so many opportunities."
Microsoft's virtualized network is built out of three standard components – an Azure front-end, a network fabric controller, and an Azure VMSwitch, which handles the networking needs of compartmentalized VMs.
The front-end takes in requests and distributes them to a controller, which then shoves them down to the VMSwitch, where they propagate onto the network control plane.
This is made possible by Microsoft's development and use of NVGRE (Network Virtualization using Generic Routing Encapsulation), a technology also backed by Arista, Dell, Intel, Broadcom, and others. It stitches together the network by providing a way to horizontally tunnel layer-two networking packets across the IP fabric without causing bandwidth contention, and is functionally similar to VMware/Cisco's "VXLAN".
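The encapsulation NVGRE performs is simple enough to sketch: per RFC 7637, an inner Ethernet frame is wrapped in a GRE header whose key field carries a 24-bit Virtual Subnet ID plus an 8-bit FlowID used to spread flows across the IP fabric. The following is a minimal illustration of that header layout, not Azure's implementation:

```python
import struct

def nvgre_encap(vsid: int, flow_id: int, inner_frame: bytes) -> bytes:
    """Wrap an inner layer-two frame in an NVGRE header (RFC 7637).

    GRE flags 0x2000 = Key bit set, version 0; protocol 0x6558 is
    Transparent Ethernet Bridging (the payload is an Ethernet frame).
    The 32-bit key packs the 24-bit Virtual Subnet ID and an 8-bit
    FlowID that gives the fabric entropy for ECMP load spreading.
    """
    assert 0 <= vsid < (1 << 24) and 0 <= flow_id < (1 << 8)
    flags_and_version = 0x2000
    protocol_type = 0x6558
    key = (vsid << 8) | flow_id
    header = struct.pack("!HHI", flags_and_version, protocol_type, key)
    return header + inner_frame
```

The encapsulated packet would then be carried inside an ordinary outer IP header, so the physical fabric only ever routes on provider addresses.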
Microsoft was motivated to move to a virtualized network because of the strain put on its old network by the growth of Azure, which now supports "millions of VMs" across "hundreds of servers".
Virtualization? More like FLEXIBLE-IZATION!
Companies are interested in network virtualization because it lets them move control functions up and away from the base switches, and into a control plane that is rapidly modifiable. This lets them spend less on hardware – something that worried Cisco so much the giant has been forced to fund and buy a company called Insieme to give it software-defined networking capabilities.
Microsoft, for instance, uses its VMSwitches for VPN and overlay services, per-tenant access control lists, and network address translation.
Some of the features made possible for Microsoft by this architecture include flexible billing, rate limiting, additional security features, an easier time deploying VLANs at scale, and five-tuple ACLs – access control lists that contain the source IP, destination IP, protocol, source port, and destination port.
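A five-tuple ACL evaluation can be sketched in a few lines – first matching rule wins, with a default deny. The rule layout and field names here are illustrative, not Azure's actual schema:

```python
from ipaddress import ip_address, ip_network

def evaluate_acl(rules, packet):
    """Return the action of the first five-tuple rule the packet matches.

    Each rule is (src_cidr, dst_cidr, proto, src_port, dst_port, action);
    None in any field acts as a wildcard. Unmatched packets are denied.
    """
    for src, dst, proto, sport, dport, action in rules:
        if src is not None and ip_address(packet["src"]) not in ip_network(src):
            continue
        if dst is not None and ip_address(packet["dst"]) not in ip_network(dst):
            continue
        if proto is not None and packet["proto"] != proto:
            continue
        if sport is not None and packet["sport"] != sport:
            continue
        if dport is not None and packet["dport"] != dport:
            continue
        return action
    return "deny"
```

A per-tenant rule set like `[("10.0.0.0/24", None, "tcp", None, 443, "allow")]` would admit HTTPS traffic from one tenant subnet and drop everything else.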
The VMSwitch, which Greenberg calls "the new cool thing in networking," contains tables that map flows to actions, and help it create a pipeline between its virtual network interface card and a specific VM, providing network encapsulation.
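The flow-to-action idea can be sketched as a pipeline of match-action tables, each keyed on some field of the packet and each able to rewrite it – one table might do NAT, another tag packets for encapsulation. This is a toy model of the concept, not the VMSwitch's real data structures:

```python
def run_pipeline(tables, packet):
    """Pass a packet (a dict of header fields) through ordered
    match-action tables.

    Each table is (key_fn, mapping): key_fn extracts the lookup key
    from the packet, and the mapping takes that key to an action – a
    function returning the transformed packet. Flows with no entry
    fall through a table unchanged.
    """
    for key_fn, mapping in tables:
        action = mapping.get(key_fn(packet))
        if action is not None:
            packet = action(packet)
    return packet
```

For example, a NAT-style table keyed on destination address could rewrite a public-facing address to a tenant's internal one before an encapsulation table fires.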
By moving to a virtualized network, Microsoft is able to treat networking as it does other cloud services. "All policy is software - everything is a VM, which means all the underlying mechanisms for building services in the cloud can be re-purposed for networks and we deploy networks like all other services," Greenberg said.
Microsoft's network controller is able to encapsulate VMs or groups of VMs with their own virtual switch, then shuttle data to other virtual switches to aid information sharing without compromising multi-tenant security.
"How you track state is what separates the men from the boys," he said. "You don't want the VMSwitch to know much, but it has to learn."
'A nice way to get your toe in the water'
When they encounter something new, Microsoft's switches will automatically request information from the controller, which pulls the data from a directory service, he said.
Customers have been able to use Microsoft's virtualized network to build multiple private virtual networks, sometimes nested inside one another, and then hook them back to an on-premises network as well (see picture). Greenberg says this is "a nice way to get your toe in the water" if you're a cloud-skeptical organization.
But Microsoft faces challenges, as the number of VMs its network has to support grows. It's partly going to get there by buying 40GbE network cards, Greenberg said, as they've come down in price, but it will also need to optimize packet flows.
"You can figure out the actions needed for the first packet, and then cache that stuff and second and subsequent packets can just fly through much faster," he said. "Those kinds of optimizations are super-important."
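The optimization Greenberg describes is a classic slow-path/fast-path split: classify the first packet of a flow once, memoize the resulting action chain keyed on the five-tuple, and let later packets in the flow skip classification entirely. A minimal sketch, with `classify` standing in for the full (expensive) rule evaluation:

```python
def make_fast_path(classify):
    """Cache per-flow actions computed on the first packet.

    `classify` takes a packet and returns an action (a function that
    forwards or transforms the packet). The first packet of a flow
    takes the slow path through classify; every subsequent packet
    with the same five-tuple hits the cache.
    """
    cache = {}

    def forward(packet):
        key = (packet["src"], packet["dst"], packet["proto"],
               packet["sport"], packet["dport"])
        if key not in cache:              # slow path: first packet only
            cache[key] = classify(packet)
        return cache[key](packet)         # fast path for the rest of the flow

    return forward
```

A real switch would also need flow expiry and invalidation when policy changes – exactly the "how you track state" problem Greenberg flags above.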
By virtualizing the network, Greenberg said, Microsoft has been able to create individual controllers for each major application, though this has forced it to create an allocator to help coordinate multiple controllers at once.
In floating the control layer into software, Greenberg says Microsoft has made its system much easier to manage.
"The complexity of managing these networks at huge scale is reduced - it may seem odd - compared to the management of a small enterprise network [but] there's no fingers touching anything, it's all automated, there's no drift, if a top of rack switch fails you've got another one - redundancy is built in," he said.
That hasn't stopped Microsoft's cloud from juddering to a halt on various occasions, mind, but from what we understand these fails have come from higher up in the stack than the network. ®