Mellanox kicks off race to 40 Gigabit Ethernet
And why not?
The ramp of 10 Gigabit Ethernet for servers is just getting underway, and the use of 10GE networks for linking storage to servers is barely in its infancy, with Fibre Channel over Ethernet still more a topic of conversation than a deployed technology.
But why not reach for a future that's even further out? This week at the Intel Developer Forum in San Francisco, Mellanox Technologies is showing off the world's first 40GE converged network adapters.
Mellanox likes to get in way ahead of the curve to catch the early adopters. It started shipping its 10GE adapter cards back in 2007 and debuted a 10GE card with SFP+ links last year. These were known as the ConnectX line of cards. And this year, Mellanox has added support for the physical layer (Layer 1 of the seven-layer OSI networking model) to the 10GE SFP+ card, bringing into being the ConnectX-2 family of cards.
The ConnectX-2 cards added support for Remote Direct Memory Access (RDMA) and wake-on-LAN features, and cut power consumption by anywhere from 15 to 35 per cent compared to earlier ConnectX cards. The latest cards also support virtual NICs and virtual HBAs over InfiniBand and 10GE adapters. Only a few weeks ago, Mellanox put out a mixed-mode card that supports one 40 Gb/sec InfiniBand (also known as quad data rate, or QDR) port alongside a 10GE port on the same physical card, satisfying the requirements of HPC users who sometimes need to mix both protocols on the same servers.
The latest ConnectX-2 card puts a single 40GE port onto a card, and it points the way to the kind of bandwidth and energy efficiency that will drive server and storage buyers to want to beef up their networks with 40GE switches at some point in the not too distant future.
According to John Monson, vice president of marketing at Mellanox, the 40GE chip that the company has developed burns somewhere between 6 and 7 watts, and a full adapter card will do around 10 watts. By comparison, one of Mellanox's own dual-port 10GE adapters uses about 5 watts, and there are plenty of other less energy efficient 10GE adapters out there, says Monson, that burn 20 or even 25 watts.
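Monson's efficiency pitch boils down to watts per unit of bandwidth, which is easy to check as a back-of-envelope calculation from the card-level figures quoted above (the helper function here is our own, and the numbers assume full line-rate use of every port):

```python
# Back-of-envelope power-per-bandwidth comparison using the card-level
# wattages quoted in the article (assumes full line-rate utilisation).
def watts_per_gbps(watts, port_gbps):
    """Power cost per gigabit per second of port bandwidth."""
    return watts / port_gbps

# Mellanox 40GE adapter: ~10 W for a single 40 Gb/s port
forty_ge = watts_per_gbps(10, 40)       # 0.25 W per Gb/s
# Mellanox dual-port 10GE adapter: ~5 W for 20 Gb/s aggregate
dual_10ge = watts_per_gbps(5, 20)       # 0.25 W per Gb/s
# A less efficient 20 W dual-port 10GE card
hungry_10ge = watts_per_gbps(20, 20)    # 1.0 W per Gb/s
```

On these figures the 40GE card merely matches Mellanox's own dual-port 10GE card per Gb/s; the fourfold saving is against the hungrier 20 to 25 watt adapters Monson is pointing at.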
Like the other ConnectX cards, the 40GE adapter card (and the chips that Mellanox will sell to server makers for mezzanine networking cards and to motherboard makers who want to put 40GE on their mobos) supports the emerging Converged Enhanced Ethernet (CEE) protocol enhancements, which aim to make Ethernet as reliable as InfiniBand. It also supports Fibre Channel over Ethernet (FCoE), which allows FC storage traffic to run over Ethernet because the CEE extensions stop Ethernet from dropping packets, a no-no for storage. These standards are not exactly nailed down yet, but they are getting close. The InfiniBand ports, including those on the mixed-mode cards, also support Fibre Channel over InfiniBand, which is analogous to FCoE.
So why launch 40GE adapters now? The same answer as always: Systems need more bandwidth. With PCI-Express 2.0 coming out, peripherals can now drive a theoretical peak of 40 Gb/sec, says Monson. It won’t be long before processors with lots of cores and gobs of memory and I/O bandwidth start saturating their networks.
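That 40 Gb/sec figure follows from PCI-Express 2.0 lane arithmetic; a quick sketch (the function name is our own, and the figures are per direction, counting only the 8b/10b line-encoding overhead):

```python
# Rough PCI-Express 2.0 lane arithmetic behind the "40 Gb/sec" figure:
# each PCIe 2.0 lane signals at 5 GT/s, and 8b/10b line encoding means
# only 8 of every 10 bits on the wire carry payload data.
def pcie2_bandwidth_gbps(lanes, usable=False):
    """Per-direction bandwidth of a PCIe 2.0 link in Gb/s."""
    raw = lanes * 5                   # 5 GT/s per lane, raw signalling
    return raw * 8 / 10 if usable else raw

raw_x8 = pcie2_bandwidth_gbps(8)                  # 40 Gb/s raw
usable_x8 = pcie2_bandwidth_gbps(8, usable=True)  # 32 Gb/s of data
```

So an x8 PCIe 2.0 slot signals at 40 Gb/sec raw, which is exactly what a single 40GE port needs at the physical layer, though the usable payload rate after encoding is closer to 32 Gb/sec.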
Mellanox got its first 40 Gb/sec InfiniBand adapters out there well ahead of the switches, explains Monson, for customers who wanted to do server-to-server and server-to-storage links, and the same will hold true with 40GE. So for now, it will be selling these adapters in a bundle with some cables. Pricing has not been set yet, but Monson says it will be cheaper than four 10GE cards and their cables. So not only will the energy cost per unit of bandwidth be lower with 40GE, but it seems the cost per card will be competitive too.
The ConnectX-EN 40G adapter card has drivers for Windows and Linux and for the latest hypervisors from VMware (ESX Server) and Citrix Systems (XenServer) too. It has copper and fiber optic QSFP connectors.
Monson says that Mellanox is anticipating that the first 40GE switches will hit the market next year, and he adds that it won't be too long before the PCI-Express 3.0 standard emerges, perhaps in 2011 and perhaps with 80 Gb/sec of raw data bandwidth. Can everyone say 100GE and ConnectX-3 adapters somewhere around 2012? History will no doubt repeat itself, and Mellanox will try to be on the front end of this boom too. ®