Mellanox says 25 Gbps is the Goldilocks speed for flashy data flows
10 Gbps is tooooo slow. 40 Gbps is toooo expensive. Is Mellanox just right?
Enterprises are starting to roll 25 Gbps Ethernet into their data centres, and Mellanox is using this week's Open Compute Project (OCP) Summit in San Jose to plant its flag in the emerging market.
The launch represents a chance for Mellanox to catch up in the high-speed Ethernet market, having been outpaced in the 10 Gbps space.
Mellanox marketing veep Kevin Deierling told The Register the company doesn't have much share in the 10 Gbps space, but it claims better than 90 per cent market share in adapters running faster than that speed.
He expects two dynamics to play to Mellanox's strengths.
The first is that 25 Gbps Ethernet is now hitting the market near price parity with 10 Gbps products. Deierling notes that Cisco has gone on the record saying 25 Gbps products will reach parity with 10 Gbps (a position it first staked out in 2014).
Second, while 40 Gbps Ethernet has been pretty much the preserve of the hyperscale data centre, Deierling says 25 Gbps has a clear enterprise use case – flash-based storage.
“Today, when a single NVMe SSD can saturate a 25 Gbps link, people on 10 Gbps Ethernet are throwing away two-thirds of the performance available to them,” he explained.
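Deierling's claim can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below assumes an SSD that exactly saturates a 25 Gbps link, per his example; the figures are illustrative, not benchmarks:

```python
# Illustrative check of the bandwidth-waste claim (assumed figures, not benchmarks).
ssd_throughput_gbps = 25.0   # assumption: an NVMe SSD that saturates a 25 Gbps link
link_10g_gbps = 10.0
link_25g_gbps = 25.0

# Fraction of the SSD's throughput each link can actually carry
usable_10g = min(ssd_throughput_gbps, link_10g_gbps) / ssd_throughput_gbps
wasted_10g = 1 - usable_10g
usable_25g = min(ssd_throughput_gbps, link_25g_gbps) / ssd_throughput_gbps

print(f"10 Gbps link carries {usable_10g:.0%} of the SSD; {wasted_10g:.0%} is wasted")
print(f"25 Gbps link carries {usable_25g:.0%}")
```

On these numbers a 10 Gbps pipe wastes three-fifths of the drive's throughput, in the same ballpark as the "two-thirds" quoted, with the exact fraction depending on how fast the SSD really is.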
As well as the 25 Gbps NICs, Mellanox is using the OCP summit to launch 50 Gbps single-host and multi-host NICs, and a 100 Gbps multi-host NIC.
The company will also be showing off NICs running in Facebook's Leopard server and the OpenPOWER-based Rackspace Barreleye server.
“Open Composable Networks” will also get their first outing at the summit. The approach lets customers mix and match the network operating system, switch and network applications, working with Mellanox's NEO network orchestration and supporting the Linux Switchdev, OCP ONIE and SAI APIs.
Rounding out its software stack, the company has announced support for the Cumulus Linux NOS, so its Spectrum switches now support five network operating systems – the others are Mellanox's own MLNX-OS, HP's OpenSwitch, Microsoft's not-yet-commercial Azure stack, and Metaswitch Networks'.
As noted at The Register's HPC sister publication The Next Platform, Mellanox is using the move to speeds above 10 Gbps as the chance to fire broadsides at Broadcom, the market leader in Ethernet merchant silicon.
With interfaces running at such high speeds, Mellanox reckons it's got the jump on Broadcom in terms of packet loss, because it can run its switches at wire speed without the ASIC dropping packets.
Deierling agrees that some packet loss is inevitable in a busy Ethernet network. For example, if two input ports each running at 25 Gbps have streams destined for the same output port, packets will be dropped once the buffers fill. In that sense, packet loss is a valuable indicator of congestion (more on this in a second).
However, he asserts that Broadcom's Tomahawk is prone to dropping packets not because of the network, but the ASIC – and that's where Mellanox will go on the attack.
“Having the ASIC drop packets is preventable”, he told us. ®