Mellanox rides networking upgrade wave
QDR InfiniBand, 10 GE taking off
The quad data rate (QDR) InfiniBand and 10 Gigabit Ethernet upgrade cycles within big data centers are well under way, and as the global economy continues to thaw, companies like Mellanox are benefiting from the strengthening demand.
Mellanox, which makes InfiniBand silicon and switches as well as InfiniBand and Ethernet adapters, saw revenues jump by 60.5 per cent in the first quarter, to $36.2m. And even though research, development, sales, and marketing costs all rose sharply, revenues climbed a lot faster than costs, so Mellanox was able to more than double net earnings in the quarter to $5.2m.
Mellanox ended the quarter with $217.4m in cash and short-term investments - plenty to fund the development necessary to compete in the cut-throat networking space. That's also enough money for the company to jump into the Ethernet switching market, should it decide that is a good idea. But thus far, Mellanox seems content to make InfiniBand chips (which it sells to Voltaire and uses in its own switching products), InfiniBand-Ethernet gateways, and ConnectX adapters that support both InfiniBand and Ethernet protocols.
The cheapest way to enter the 10 GE switch market might be to acquire Voltaire, which is within Mellanox's financial reach given that Voltaire has a current market capitalization of $124m and had about $44.6m in cash and equivalents at the end of the December quarter. With Voltaire having posted annual losses in both 2008 and 2009, there may never be a better time to buy. And if Mellanox can't see that, maybe Garnett & Helfrich Capital, the private equity firm behind Blade Network Technologies, can see it and will make a move.
In a separate announcement, Mellanox said that its 10 GE and 40 GE adapters for servers will support the new RDMA over Converged Ethernet (RoCE) protocol that the InfiniBand Trade Association announced earlier this week. The RoCE protocol uses a tweaked version of the InfiniBand drivers from the OpenFabrics Alliance, known collectively as the OpenFabrics Enterprise Distribution (OFED), to allow InfiniBand's Remote Direct Memory Access protocol - the secret sauce that gives InfiniBand such low latency - to run on a server but communicate through Ethernet adapters that support lossless data transmission.
This is yet another key feature that InfiniBand had all to itself for many years, one that carved out its high-performance niche. While RoCE will never offer the same performance as native InfiniBand, if it gets close, Mellanox may find itself wishing it had jumped into the 10 GE switching market when it had the cash and the chance. ®