
Windows Server 2016 to inherit Azure's load balancer, data plane

Redmond reveals Azure's FPGA-powered NICs, pledges cloud-grade SDN on premises


Microsoft's drip … drip … drip of information about Windows Server 2016 has revealed a couple more droplets of detail, and one big splash of news about Redmond's approach to the new OS.

The splash is that Azure is the wellspring of Microsoft's plans for your data centre. The whole "cloud-first" thing is no mere mantra: Redmond is clearly developing for Azure first and then figuring out how to bring the stuff it builds for the cloud into your humble bit barn.

This post from Azure's chief technology officer Mark Russinovich outlines the doctrine as follows:

Every day we learn from the hyper-scale deployments of Microsoft Azure. Those learnings enable us to bring new capabilities to your datacenter, functioning at a smaller scale to bring you cloud efficiency and reliability. Our strategy is to adapt the cloud design patterns, points of innovation and structural practices that make Azure a true enterprise grade offering. The capabilities for the on-premises components are the same, and they’re resident in technology currently in production in datacenters across the world.

At the end of the post Russinovich reveals that Windows Server 2016 will include enhanced software-defined networking capabilities, thanks to “... a data plane and programmable network controller based on Azure, as well as load balancer that is proven at Azure scale.”
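Microsoft hasn't published the internals of that load balancer, but one property any scale-out software load balancer needs is that every instance in the fleet maps a given flow to the same backend without sharing state. A minimal sketch of that idea, using a hash of the flow's 5-tuple (an assumed scheme for illustration, not Microsoft's actual algorithm):

```python
import hashlib

def pick_backend(five_tuple: tuple, backends: list[str]) -> str:
    """Hash a flow's 5-tuple so every load-balancer instance in a
    scale-out fleet independently picks the same backend.

    This is an illustrative scheme, not the real Azure SLB logic.
    """
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return backends[int.from_bytes(digest[:8], "big") % len(backends)]

# Hypothetical backend pool behind one virtual IP.
backends = ["dip-1", "dip-2", "dip-3"]
flow = ("10.0.0.5", 51000, "10.0.0.100", 443, "tcp")

# Any instance computes the same answer, so the flow stays pinned
# to one backend even if different muxes see different packets.
first = pick_backend(flow, backends)
second = pick_backend(flow, backends)
```

Because the choice is a pure function of the flow and the backend list, the data plane can scale out by simply adding instances, which is the property a cloud-scale balancer has to have.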

The post also explains how Microsoft does SDN for Azure, revealing that Redmond operates “... virtual networks (Vnets) ... using overlay and Network Functions Virtualization (NFV) technologies implemented in software running on commodity servers, on top of a shared physical network.” Russinovich says “Through segmentation of subnets and security groups, traffic flow control with User Defined Routes, and ExpressRoute for private enterprise grade connectivity, we are able to mimic the feel of a physical network with these Vnets.”
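The overlay idea Russinovich describes boils down to wrapping each tenant's traffic in an outer header carrying a network identifier, so many virtual networks can share one physical fabric. A rough sketch of a VXLAN-style encapsulation (the exact format Azure uses is not specified in the post; this follows the generic VXLAN header layout):

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: the VNI field is valid

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Wrap a tenant frame in an 8-byte VXLAN-style header:
    flags (1), reserved (3), VNI (3), reserved (1)."""
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00\x00\x00",
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def decapsulate(packet: bytes) -> tuple[int, bytes]:
    """On the destination host, recover the VNI and original frame."""
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", packet[:8])
    assert flags & 0x08, "VNI not present"
    return int.from_bytes(vni_bytes, "big"), packet[8:]

# Two tenants can reuse identical private addresses; the VNI in the
# outer header is what keeps their traffic apart on the shared fabric.
pkt = encapsulate(5001, b"\xaa" * 64)
vni, frame = decapsulate(pkt)
```

The physical network only ever routes the outer packet, which is why commodity servers and a shared fabric are enough to "mimic the feel of a physical network".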

There's also the Azure Virtual Filtering Platform (VFP), resident in Hyper-V hosts “to enable Azure’s data plane to act as a Hyper-V virtual network switch, enabling us to provide core SDN functionality for Azure networking services.”

“VFP is a programmable switch that exposes an easy-to-program abstract interface to network agents that act on behalf of network controllers like the Vnet controller and our software load balancer controller. By leveraging host components and doing much of packet processing on each host running in the datacenter, the Azure SDN data plane scales massively – both out and up nodes from 1 Gbs to 40 Gbs, and growing.”
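VFP itself is proprietary, but the "programmable switch exposing an abstract interface to network agents" model can be pictured as a match-action table that agents program on a controller's behalf, with every packet flowing through the installed rules. A toy sketch (class and rule names are illustrative, not VFP's actual API):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Rule:
    match: Callable[[dict], bool]              # predicate over packet metadata
    action: Callable[[dict], Optional[dict]]   # transform (encap, NAT, drop...)

@dataclass
class VSwitch:
    """Toy per-host programmable switch: agents install rules on behalf
    of controllers, and each packet is run through them in order."""
    rules: list = field(default_factory=list)

    def program(self, rule: Rule) -> None:
        self.rules.append(rule)

    def process(self, pkt: dict) -> Optional[dict]:
        for rule in self.rules:
            if rule.match(pkt):
                pkt = rule.action(pkt)
                if pkt is None:  # rule dropped the packet
                    return None
        return pkt

# A load-balancer-style rule: rewrite a virtual IP to a chosen backend.
switch = VSwitch()
switch.program(Rule(
    match=lambda p: p["dst"] == "10.0.0.100",       # the VIP
    action=lambda p: {**p, "dst": "192.168.1.7"},   # assumed backend address
))
out = switch.process({"src": "10.0.0.5", "dst": "10.0.0.100"})
```

Doing this per-packet work on every host, rather than in centralised middleboxes, is what lets the data plane scale out with the fleet, as the quote describes.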

Russinovich also reveals that Redmond has cooked up custom “Azure SmartNICs”, network interface cards employing Field Programmable Gate Arrays so they have enough processing grunt to excuse a server's CPU from having to handle networking.

Microsoft says it is unique in operating FPGA-propelled NICs. You almost certainly won't get a chance to run them in your bit barn; instead, they're another example of the industry striving to deliver the hyperscale experience to on-premises data centres without asking customers to buy exotic hardware. ®
