
QLogic spans Dell's InfiniBand

Blade pass through

When Dell sells servers to an HPC customer, QLogic wants to be the network interconnect provider of choice when it comes to the InfiniBand protocol.

In the HPC market, every hop in the network interconnect is something to be avoided, and density in server form factors is an issue. This is why blade server makers don't just make or resell others' InfiniBand and Ethernet switches for their blade chassis, but also sell pass-through modules that gather up the links coming off the blades and run them straight out of the chassis to the director or core switch that ties the cluster together.

This takes one hop - the edge switch inside the chassis, sitting between the blades and the core or director switch - out of the loop and can boost performance on certain HPC workloads. I know what you're thinking: Why bother with a blade server at all if you aren't going to use the integrated switching? The answer is simple: You can cram more servers into a standard 42U server rack using blade form factors than you can using rack-style machines. Some people can justify the blade premium, even if they ditch the integrated switches, because of the space savings and the reduction in wiring.
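To put rough numbers on that density claim, here is a back-of-the-envelope sketch in Python. The 10U chassis height and 16 half-height blades per M1000e enclosure are our assumptions about a typical configuration, not figures from the announcement.

```python
# Rough rack-density comparison: blades in M1000e chassis vs 1U rack servers.
# Assumed figures (not from the announcement): the M1000e enclosure is 10U tall
# and holds 16 half-height blades; the alternative is a 1U rack server; the
# full 42U cabinet is available for compute in both cases.
RACK_UNITS = 42
CHASSIS_HEIGHT_U = 10
BLADES_PER_CHASSIS = 16

chassis_per_rack = RACK_UNITS // CHASSIS_HEIGHT_U        # 4 enclosures per rack
blade_servers = chassis_per_rack * BLADES_PER_CHASSIS    # 64 blade servers
rack_servers = RACK_UNITS                                # 42 one-unit rack servers

print(f"Blade servers per 42U rack: {blade_servers}")
print(f"1U rack servers per rack:   {rack_servers}")
print(f"Density advantage:          {blade_servers / rack_servers:.2f}x")
```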

QLogic today is announcing a pass-through module for Dell's PowerEdge M1000e blade chassis. The module links the on-blade InfiniBand mezzanine cards to an InfiniBand switch outside the chassis, whether it comes from QLogic (obviously what the company wants) or rivals Mellanox, Voltaire, and Cisco Systems. The new 12005-PT16 pass-through module takes up only one slot in the back of the M1000e chassis (which has six I/O slots in total). The module has 16 ports running at the full quad data rate (40Gb/sec) coming off the server mezzanine cards, and it has a low 400 picosecond latency.

Jesse Parker, vice president and general manager of the Network Solutions Group at QLogic, says that in a lot of cases in the HPC market, using a pass-through module plus a core or director switch is better than using an integrated InfiniBand switch, like the M3601Q from rival Mellanox that Dell sells. The internal switch eats up two networking slots in the back of the chassis, and it adds a hop on the network - and therefore unwanted latency - to the HPC application. The QLogic pass-through module only takes up one slot, which leaves room to put another module in the chassis, such as 10 Gigabit Ethernet for a second network protocol or Fibre Channel for links to SANs.
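As a rough illustration of the extra-hop argument, the sketch below tallies the latency a packet picks up on its way from a blade to the core switch under the two layouts. The 400 picosecond figure for the pass-through module is the one quoted above; the 150 nanosecond per-hop figure for a switch ASIC is purely an assumed, order-of-magnitude value of ours, not a number from QLogic, Mellanox, or Dell.

```python
# Order-of-magnitude hop accounting for a blade reaching the core/director switch.
# SWITCH_HOP_NS is an assumed, typical figure for a QDR-era switch ASIC hop;
# PASS_THROUGH_NS is the 400 picoseconds quoted for the 12005-PT16 module.
SWITCH_HOP_NS = 150.0
PASS_THROUGH_NS = 0.4

# Integrated chassis switch: blade -> internal switch -> director switch.
integrated_path = SWITCH_HOP_NS + SWITCH_HOP_NS

# Pass-through module: blade -> pass-through -> director switch.
pass_through_path = PASS_THROUGH_NS + SWITCH_HOP_NS

print(f"Via integrated chassis switch: ~{integrated_path:.0f} ns")
print(f"Via pass-through module:       ~{pass_through_path:.1f} ns")
print(f"Latency removed per traversal: ~{integrated_path - pass_through_path:.0f} ns")
```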

Later this month, when the pass-through module ships through Dell, QLogic will also cover the other side of the blade link, with Dell picking up the QME7342 dual-port QDR InfiniBand mezzanine card for its M1000e series blade servers. (Dell already sells QDR and DDR InfiniBand ConnectX mezzanine cards from Mellanox.) Dell recently started selling QLogic's rack-based 12000 series InfiniBand switches, which include edge switches with 18 and 36 ports and director switches with 96, 288, 432, or 864 ports.

Parker says that when you do the math and compare the QLogic InfiniBand pass-through module and mezzanine card combo to the Mellanox alternative offered by Dell, the QLogic setup uses a lot less energy - about 179 watts compared to 326 watts for the Mellanox setup - and it eliminates one hop too. QLogic has also run the SPEC MPI2007 HPC benchmark on the Dell boxes with both setups and shown that its mezz cards offer anywhere from a 5 to 22 per cent performance advantage over the Mellanox mezz cards, with the gap widening as the number of cores in the cluster grows.
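For a sense of what those wattage figures add up to over a year of operation, here is a hedged bit of arithmetic. The 179 and 326 watt numbers are the ones Parker cites; the 24x7 duty cycle and the $0.10 per kWh electricity rate are illustrative assumptions of ours.

```python
# Power-delta arithmetic from the per-setup figures Parker quotes.
# The duty cycle and electricity price are assumptions for illustration only.
QLOGIC_WATTS = 179
MELLANOX_WATTS = 326
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10  # assumed rate, $/kWh

delta_watts = MELLANOX_WATTS - QLOGIC_WATTS              # 147 W saved per chassis
kwh_per_year = delta_watts * HOURS_PER_YEAR / 1000.0     # roughly 1,288 kWh
dollars_per_year = kwh_per_year * PRICE_PER_KWH

print(f"Power saved per chassis:  {delta_watts} W")
print(f"Energy saved per year:    {kwh_per_year:,.0f} kWh")
print(f"Rough savings per year:   ${dollars_per_year:,.0f}")
```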

Dell has not released pricing for the QLogic InfiniBand pass-through module and mezzanine card for its blades. ®
