Original URL: http://www.theregister.co.uk/2008/11/18/infiniband_10gbe_solve_datacentre_gridlock/

InfiniBand and 10GbE break data centre gridlock

Bandwidth bonanza in the data centre

By Chris Mellor

Posted in Servers, 18th November 2008 15:06 GMT

Analysis Data centre network pipes are getting choked up. Imagine Germany minus the autobahns or the US without interstate highways and you get the picture - cities trying to send goods and people by road to other cities and the single-carriageway roads jamming up, consigning everybody to gridlock.

What were quaint little cottage industries in the data centre - Windows servers on slow Ethernet LANs - have turned into humming temples to automated IT mass production as bladed and virtualised and multi-core, multi-socket servers scream through their processing loads at Star Trek warp speed and then... wait... wait... for the slow network to cough up the next slug of server processor fuel: the data.

Even the high-performance computing (HPC) world is suffering. Although it has its own dedicated InfiniBand super-highways which make Ethernet look like snail mail compared to email, super-computers became addicted to processing cores: yesterday's 100-core model gave way to a 1,000-core model, and multi-thousand-core super-duper-whooper-computers seem quite common these days. Double data rate (DDR) 20Gbit/s InfiniBand (IB) isn't enough.

A network pipe delivering data to servers is just like a pipe bringing water to a shower head. If the same pipe has to serve two shower heads then each one gets half the water. Four heads means each gets a quarter of the pipe's water. Eight heads - you can see how it goes.

So data centre network pipe technology is getting upgraded as virtualised and bladed servers cry out for faster I/O to keep them busy. InfiniBand and Ethernet products from Alacritech, Mellanox and QLogic are bumping network speeds up twofold and tenfold respectively.

10GigE?

Mellanox has produced a Converged Network Adapter (CNA), a two-port 10GbE product that sits on a server's motherboard, the ConnectX ENt. Mellanox is primarily known for InfiniBand technology so 10GbE is a bit of a departure for it. The technology can support virtualisation acceleration features like NetQueue and SR-IOV, as well as I/O consolidation fabrics like Data Centre Ethernet (DCE), Fibre Channel over Ethernet (FCoE) and InfiniBand over Ethernet (IBoE). (With InfiniBand supporting Ethernet we could have an infinity of network recursion here.)

An eight-core server could use this product to deliver more than 1Gbit/s Ethernet bandwidth to each core. Mellanox says the ConnectX ENt cost is $200-300 per port vs $400-500 for a Fibre Channel HBA and $300-400 per port for a QLogic CNA. The product has neither an onboard TCP/IP offload engine (TOE) nor iSCSI support.
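
A back-of-the-envelope sketch of where that per-core figure comes from, using only the port count and line rate quoted above (the division itself is our own arithmetic, not a Mellanox figure):

    # Rough per-core bandwidth for a two-port 10GbE adapter in an eight-core
    # server; port count and line rate are from the article, the rest is
    # simple arithmetic.
    ports = 2
    port_speed_gbps = 10                    # 10GbE per port
    cores = 8

    total_gbps = ports * port_speed_gbps    # 20 Gbit/s aggregate
    per_core_gbps = total_gbps / cores      # 2.5 Gbit/s per core

    print(f"{per_core_gbps} Gbit/s per core")   # comfortably over 1Gbit/s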

Alacritech has revved its TOE technology to produce a 10GbE NIC (network interface card) that supports either stateful Windows Server 2003/Server 2008 Chimney TCP/IP processing, where Windows is the TCP/IP initiator, or stateless offload for OpenSolaris, Linux, Mac OS X and non-Chimney Windows, where the card carries out that function. It's called an SNA (Scalable Network Accelerator) and we can think of it as a TONIC: a TOE and NIC combined. Alacritech's marketing director, Doug Rainbolt, says the pricing has moved closer to that of a plain 10GbE NIC to encourage offload take-up. He also says the card has been built to work well with VMware and Hyper-V.

Alacritech SNA Netbench performance chart

Look, this is no idle thing. He has a Netbench chart (above) showing a four-core server running the SNA outperforming an equivalent eight-core server with no SNA and no TOE. Taken on its merits, that means you could add the SNA to an eight-core server and recover four cores previously maxed out doing TCP/IP processing. You could double the number of virtual machines in that server, or buy a four-core server plus SNA to start with instead of paying the greater sum for an eight-core machine.
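
A rough sketch of that consolidation arithmetic (the VMs-per-core figure is our own illustrative assumption, not something from Alacritech's chart):

    # Hypothetical consolidation arithmetic based on the Netbench comparison:
    # if a four-core server with the SNA matches an eight-core server without
    # one, roughly four cores' worth of work was TCP/IP processing.
    cores_without_offload = 8
    cores_with_offload = 4                  # equivalent throughput per the chart

    cores_recovered = cores_without_offload - cores_with_offload   # ~4 cores

    # Assume VM count scales roughly with free application cores (illustrative).
    vms_per_core = 2
    extra_vms = cores_recovered * vms_per_core

    print(f"~{cores_recovered} cores recovered, headroom for ~{extra_vms} more VMs")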

The product will cost $1,299 and be available in Q1 next year.

To InfiniBand and beyond

QLogic has announced 40Gbit/s quad data rate (QDR) InfiniBand switches and an HCA (Host Channel Adapter) which connects the cable from the switch to a server. The company based its 20Gbit/s InfiniBand products on Mellanox chipsets but has now developed its own ASICs for the HCA and 12000 series switch.
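
For context, those headline InfiniBand figures are signalling rates on a 4x link; SDR, DDR and QDR all use 8b/10b line coding, so the usable data rate is about a fifth lower. A quick sketch of the arithmetic (generic InfiniBand figures, nothing QLogic-specific):

    # InfiniBand 4x link rates: the headline figures are signalling rates;
    # SDR/DDR/QDR use 8b/10b encoding, so only 80 per cent carries data.
    ENCODING_EFFICIENCY = 8 / 10            # 8b/10b line coding

    signalling_gbps = {"SDR": 10, "DDR": 20, "QDR": 40}

    for generation, raw in signalling_gbps.items():
        data_gbps = raw * ENCODING_EFFICIENCY
        print(f"{generation}: {raw} Gbit/s signalled, ~{data_gbps:.0f} Gbit/s of data")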

The switch can be configured for performance, with 72 to 648 ports and no over-subscription, or for port count, with 96 to 864 ports and the possibility of over-subscription reducing performance.
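
Over-subscription here is the usual switch trade-off: offering more edge ports than the internal fabric can service at full rate simultaneously. A generic sketch of the ratio (the fabric capacity figure is an illustrative assumption, not a QLogic 12000-series specification):

    # Generic over-subscription ratio: edge bandwidth offered divided by the
    # bandwidth the switch fabric can carry at once (1.0 means none).
    # Port counts are from the article; the fabric capacity is illustrative.
    def oversubscription_ratio(edge_ports, port_gbps, fabric_gbps):
        return (edge_ports * port_gbps) / fabric_gbps

    fabric_gbps = 648 * 40    # assume a fabric sized for 648 full-rate QDR ports

    print(oversubscription_ratio(648, 40, fabric_gbps))   # 1.0, performance config
    print(oversubscription_ratio(864, 40, fabric_gbps))   # ~1.33, port-count config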

Both the HCA and the switches will be available by the end of the year but no pricing information is available. QLogic's EMEA marketing head, Henrik Hansen, says that the switch can sub-divide its overall InfiniBand fabric into separate virtual fabrics and deliver different qualities of service to these V-fabrics, which is equivalent to what VLANs (Ethernet) and Cisco VSANs (Fibre Channel) can do.

He says the switches have deterministic latency too: "If you run the switches up to 90 per cent full speed on all ports we maintain the latency. Above that it falls off." He reckons competing switches have latency fall-offs starting when they go over 70 per cent load.

QLogic is punting these products into the HPC market rather than as data centre network consolidation platforms, a role which Mellanox does add to its own InfiniBand marketing pitch.

Balancing processing and network I/O

Without much faster network pipes the greater efficiencies of virtualised and bladed servers can't be fully realised. It is apparent from the Alacritech Netbench chart that coupling virtualised and multi-core servers with balanced network bandwidth (and offloaded network processing) will send their utilisation up and enable server consolidation to deliver data centre cost-savings. This is something that's going to be essential in the quarters ahead to persuade cash-hoarding customers to buy IT products.

The autobahns and interstates cost a lot of money but they helped generate much, much more wealth than they cost to build. So too with faster data centre network pipes. Spend money to free up bottlenecked, choked servers and save a whole lot more by consolidating even more stand-alone servers into virtual machines, freeing up floor space and lowering energy costs into the bargain. Faster network pipes solve data centre gridlock. ®