SC09: Infiniband expands, Mellanox thrives

Pretty sporty

Mellanox was a big presence at SC09. It had a good-sized booth of its own, and its products were featured or referred to at a significant number of other booths.

It also made a major announcement with NVIDIA about a joint effort to provide technology that allows GPUs to talk directly to storage, thus taking load off the general-purpose CPUs and driving performance up even higher.

Mellanox's new HCA (host channel adaptor) will also take some of the overhead inherent in MPI (message passing interface) operations off the shoulders of the CPU. This may provide a significant performance boost – perhaps 20-30 per cent – although mileage will definitely vary according to the workload.
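For a sense of what that offload buys, here's a minimal sketch (generic MPI in C, not Mellanox code; the ring pattern and buffer size are my own assumptions) of the kind of non-blocking communication an offload-capable HCA can progress in hardware while the CPU keeps crunching:

```c
/* Sketch: overlapping computation with MPI communication.
 * With an offload-capable HCA, the non-blocking transfers below can be
 * progressed by the adapter itself, leaving the CPU free to compute.
 * Build with an MPI compiler wrapper, e.g.: mpicc overlap.c -o overlap
 */
#include <mpi.h>
#include <stdio.h>

#define N 1048576

static double sendbuf[N], recvbuf[N];

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int peer = (rank + 1) % size;   /* simple ring exchange */

    /* Post the communication first ... */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... then do useful work while the adapter moves the data. */
    double local = 0.0;
    for (int i = 0; i < N; i++)
        local += sendbuf[i] * 0.5;

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d done (local work = %f)\n", rank, local);
    MPI_Finalize();
    return 0;
}
```

The more of that Isend/Irecv bookkeeping the adapter handles itself, the more of the compute loop genuinely overlaps with the transfer – which is where a workload-dependent boost of the sort quoted above would come from.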

Infiniband was a big star at the show, and Mellanox is making hay with its IB harvest (tortured analogy, I know). Fully 37 per cent of the Top500 list uses Mellanox-sourced IB gear, and IB is used in more than 63 per cent of the Top500 cores. (This list is skewed a little as some of the very largest systems, such as Roadrunner, are Infiniband-based.)

As a technology, Infiniband is pretty sporty, offering speeds of 40Gb/s now with very low latencies of around 200ns. With Mellanox ConnectX technology, latency drops to around 120ns. The future is bright for IB from a performance standpoint. Mellanox (and presumably others) will soon trot out new 120Gbit IB switches – pushing performance threefold over today's rates. Our buddy TPM wrote a nice Reg article outlining SC09 Mellanox announcements and futures.

We visited the Mellanox SC09 booth to get a tour from Brian Sparks. He showed us a nice demo of GPU vs. CPU computing and also gave us a look at their new 648-port, 120Gbit switch, which was being used to run the network at the show. To take a look at a short video of our visit, click here.

It’s a given that supercomputing and HPC users will adopt 120Gb IB, but will this technology see a life outside of academia, Wall Street, and top-secret government labs? (Or, for that matter, super-villain labs?) We believe that it will.

But what about real people?

A big trend coming down the line is something that we’ve dubbed "The Age of Analytics." We’ll be nattering on and on about this topic in coming months (yeah, there’s something to look forward to), but here’s a brief explanation.

Globalization – including freer trade, instant communications, and world-wide sourcing/sales capabilities – is reducing entry barriers, reducing costs, and increasing opportunities in virtually every industry. That’s great, but at the same time, it’s also ratcheting up competition and destroying margins. In macroeconomic terms, we’re moving toward ‘perfect competition’ in many industries.

To compete effectively and earn more than a minimal margin, companies must streamline operations to control expenses AND build the top line. They'll do this by taking advantage of increasingly fleeting high-margin opportunities. Mindless cost-cutting trims expenses, but it doesn't do much to increase sales or sales margins.

We see this in our current technology trends with data centers receiving budget to cut costs and increase flexibility via virtualization, for example. This isn’t to say that there's no tech spending aimed at increasing the top line; there certainly is. But I classify much of this spending as companies tech-enabling existing processes and extending their current business model span and reach with techie solutions.

I think we’re starting to see companies using their most valuable asset – their data – to uncover inefficiencies in their business and also to more effectively discover and exploit profitable opportunities. This isn’t traditional business intelligence (BI); it’s an extension of BI, where internally generated data is combined with appropriate external data to uncover new and actionable data relationships.

It's also predictive, in that companies will attempt to use these methods to, for example, understand when to enter some markets and when to leave others. In my mind, this is the 'final frontier' in business competition. These data-based weapons are already being deployed by industry leaders (Wall Street, Wal-Mart, and some others who are keeping very quiet about it).

So what does this have to do with Infiniband? Let’s connect the dots: The type of number-crunching we’re talking about will require huge data sets and very fast movement of that data from system to system. The workloads will look more like supercomputing than the typical transactional business workloads.

The magnitude of the data, plus the need for speedy results, will demand faster and more efficient data transport mechanisms like Infiniband. However, the road to enterprise IB isn’t nearly as clear as it was for the HPC market.
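As a back-of-envelope illustration (my own assumed numbers – a 10TB working set and raw signalling rates, ignoring protocol overhead, encoding loss, and congestion), just moving the data dominates the job at slower link speeds:

```c
/* Back-of-envelope only: raw link rates, no protocol overhead.
 * The 10TB working set and the link list are illustrative assumptions,
 * not figures from the article.
 */
#include <stdio.h>

int main(void)
{
    const double dataset_tb = 10.0;                  /* assumed working set */
    const double bits = dataset_tb * 1e12 * 8;       /* terabytes -> bits   */

    const char  *links[] = { "1Gb Ethernet", "10Gb Ethernet",
                             "40Gb/s Infiniband", "120Gb/s Infiniband" };
    const double gbps[]  = { 1, 10, 40, 120 };

    for (int i = 0; i < 4; i++)
        printf("%-20s ~%6.0f seconds\n", links[i], bits / (gbps[i] * 1e9));

    return 0;
}
```

At those raw rates, that's roughly 22 hours over Gigabit Ethernet versus around 33 minutes at 40Gb/s – the kind of gap that makes the interconnect, rather than the CPUs, the bottleneck.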

I believe that Gigabit Ethernet will be a strong competitor for enterprise data distribution needs. To many commercial data centers, IB is a new and somewhat exotic technology, while Ethernet is tried and true. There are advantages and drawbacks to both technologies, many of which will be unique to the individual data center; this should ensure fairly wide adoption of both technologies in the short to medium term.
