
Mellanox adds networking specs to Open Compute Project

To 100 Gbps ... and beyond!

Open Compute Summit Mellanox has tipped two technologies into the Open Compute Project: an optical data centre interconnect, and a NIC that adds multi-host networking to Facebook's Yosemite chassis specification.

The optical spec is designed to help push OCP technologies into the HPC world, with a framework that Mellanox reckons can ultimately scale to the terabit-per-second level.

What the company has exposed in its contribution, Mellanox marketing VP Kevin Deierling told Vulture South, “allows partners and competitors [to] adopt this channel spacing so they can build products that conform to it.”

The spec contributed to OCP today provides standardisation for transferring up to 1 Tbps using 32 wavelengths per fibre strand, and – importantly for the data centre and web-scale markets – supports distances up to 2,000 metres. Shifting 100 Gbps using four 25 Gbps wavelengths is possible today under the spec.
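
As a rough sanity check on those figures (nothing below comes from the spec itself, it is just arithmetic on the numbers quoted above), the per-wavelength rates work out as follows:

    # Back-of-the-envelope check of the headline numbers quoted in the article.
    def per_wavelength_gbps(aggregate_gbps, wavelengths):
        """Rate each wavelength must carry to reach the aggregate."""
        return aggregate_gbps / wavelengths

    # Today: 100 Gbps carried as four 25 Gbps wavelengths.
    print(per_wavelength_gbps(100, 4))    # 25.0

    # Scaling target: 1 Tbps over the spec's 32 wavelengths per fibre strand.
    print(per_wavelength_gbps(1000, 32))  # 31.25

In other words, hitting the full terabit within the 32-wavelength framework implies per-wavelength rates of a little over 31 Gbps, not far above the 25 Gbps lanes shipping today.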

The spec covers 1550 nm WDM lasers and silicon photonics to let the industry put together QSFP28-form-factor transceivers for single-mode fibre connectors.

Deierling told The Register the demands of the hyperscale market mean that opening the optical spec will bring more switches onto the market, along the way expanding Mellanox's opportunity to sell NICs.

Multiple hosts, one NIC

The company is also contributing its multi-host networking spec to the OCP. Demonstrated with Facebook's Yosemite chassis, the multi-host design is intended to take advantage of that platform's new architecture.

Traditional x86 architecture, Deierling explained, puts the CPU at the centre of the universe – or rather, the centre of the motherboard. The NIC is just another peripheral hanging off it. Yosemite instead puts the NIC and the CPUs side by side on the same board, as peers rather than master and peripheral; as long as the NIC has enough capacity, there's no reason it shouldn't carry traffic for multiple CPUs.

The approach, Deierling said, is designed to save on NICs and on cabling to the top-of-rack switches, “and you can use more affordable CPUs” than in symmetric multiprocessing designs, “because you're using the network to connect the CPUs together”.

The technology Mellanox has handed over to OCP is designed to make multi-host networking transparent to the CPUs. Implementations therefore don't need a new networking stack, and the same physical connection, he said, can support x86, OpenPOWER, ARM, GPU, or FPGA-based processing cards.

In the configuration Mellanox demonstrated, a 648-node cluster would need only 162 each of NICs, ports, and cables.
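
The saving is easy to sanity-check; a minimal sketch using only the node and NIC counts above (the four-to-one sharing ratio simply falls out of the division):

    # Consolidation implied by the demo numbers quoted above.
    nodes = 648
    nics = ports = cables = 162

    print(nodes / nics)    # 4.0 -> each NIC, port and cable is shared by four hosts
    print(nodes - nics)    # 486 -> NICs (and ports, and cables) saved versus one per node

That is, every NIC, switch port, and cable is shared by four hosts, which is where the consolidation Deierling describes comes from. ®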
