Michael Dell heralds supercomputing fourth wave

Just like the third wave - with more marketing

SC08 The annual Supercomputing 2008 trade show kicked off this morning in Austin, Texas with a sales call keynote by local billionaire and sometime HPC player, Michael Dell. As chairman and once again chief executive officer of a company that's trying to make a more substantial run at the HPC arena, he can be forgiven (perhaps) for giving a keynote address that at times seemed to be more of a commercial for his company than a revelation about HPC (as supercomputing is now called).

The people who got up early to see the keynote here in Austin probably won't forgive him, though. Personally, I would rather the SC08 staff had booted up the 20-year-old keynote by Seymour Cray from the original Supercomputing 1988 event, since Cray, the father of supercomputing, gave very few interviews in his life.

That said, Dell did make a few good points in his keynote, and he did outline, albeit somewhat thinly, what this next wave of supercomputing might look like.

He started off with an interesting bit of data, showing just how far mankind has to go to build a power-efficient, easily-programmed, redundant supercomputer. The human brain, Dell said, has some 100 billion neurons, each with 1,000 or so synapses, each running at around 200 cycles per second. When you do the math, that's around 20 petaflops of raw computing performance, which if it could be built today - and it can't; we have just broken through the petaflops barrier - would cost an estimated $3.6bn. Here's the catchy bit. "The human brain uses about 20 watts of energy, so we evidently still have a long way to go," quipped Dell.
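For the curious, here is the back-of-the-envelope arithmetic behind that figure - a quick sketch of my own working, not anything Dell showed on a slide:

```python
# Dell's brain-as-computer arithmetic, roughly reconstructed
neurons = 100e9              # 100 billion neurons
synapses_per_neuron = 1_000  # ~1,000 synapses each
cycles_per_second = 200      # each firing ~200 times a second

# Treat every synapse cycle as one floating-point operation
flops = neurons * synapses_per_neuron * cycles_per_second
print(f"{flops / 1e15:.0f} petaflops")  # -> 20 petaflops

# Dell's $3.6bn price tag then works out to roughly $180m per petaflops
print(f"${3.6e9 / (flops / 1e15) / 1e6:.0f}m per petaflops")
```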

Later in the Q&A session, an attendee asked Dell what it would take to simulate the human brain - a bizarre question to put to the CEO of a PC and server maker if you expect an answer that makes sense - and Dell deflected it, saying that he was not suggesting we should be simulating the brain. (There are people working on this, of course, and deploying a lot of computing power to do it.)

He said that what he meant to imply was that HPC clusters are not terribly efficient compared to Mother Nature. "For me, the dream and the excitement about computers was not to replace the human brain," Dell explained. But he does see the need for a better way to interact with the machines and the software that runs on them. "It is a fairly rudimentary process today. We type keys and something happens. I think there is an enormous opportunity to improve the man-machine interface."

Michael Dell delivering the SC08 keynote: What? This isn't a sales call?

Please, feel free to make up your own jokes about the Borg and dildonics.

What this fourth wave of computing seems to be about is an admission that we can make ever larger clusters with lots more main memory, storage, and I/O, but after more than a decade of serious parallel computing, the ability to deliver performance has far outstripped the ability to write applications that can take advantage of the raw iron. And everyone is thinking about energy efficiency these days, which throws another spanner in the works.

The three prior waves of supercomputing, according to Dell, were specialized vector machines with proprietary operating systems in the 1970s, microprocessor-based systems (mostly RISC, but other architectures too) in the 1980s and 1990s, and standards-based (meaning mostly x86) parallel clusters from the late 1990s until now. The fourth wave will deliver higher-density machines, probably in blade or other customized form factors, pools of shared storage, and a focus on one of the pain points in clusters - running and administering them.

Dell cited figures from supercomputing market researcher Tabor Research showing that 70 per cent of HPC budgets are consumed by staffing and administration - those pesky humans, again. (Of course, out there in the data centers of the corporate world, 65 per cent of the IT budget is spent on administration and maintenance, according to IDC, so welcome to the club.)

The fourth wave of HPC will be keenly focused on performance per watt as well, and interestingly, Dell (the man, not the machine) is predicting that some of the systems management tools commonly used in enterprises will swim upstream to the HPC market to help supercomputing labs manage their resources better and more efficiently. Usually, HPC tech flows downstream to the general market over the course of about a decade.

The availability of cheap HPC setups is something Dell is driving, much as the company did with the direct model for PCs and then servers two decades ago. With the price of computing dropping, it becomes more widely available to smaller companies and organizations, as well as to developing countries that could not have dreamed of having a supercomputer. (In some cases, they were not legally allowed to have an American-made supercomputer a decade ago because of export controls.)

Five years ago, according to Dell, a teraflops of computing cost about $1m, but today that same $1m buys you around 25 times as much oomph. The density has not gone up as much as the price of capacity has come down, but it is still impressive. Three years ago, Dell said, a 2,500-core cluster with 1,250 servers using 3 GHz x64 processors delivered about 9.8 teraflops. Today, a 1,240-core machine using a mere 155 servers delivers 10.7 teraflops. That is a reduction in server count of nearly 90 per cent.
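Those figures hold up to a quick sanity check - my arithmetic below, not a slide from the keynote:

```python
# Cluster comparison cited by Dell: three years ago versus today
old_servers, old_tflops = 1250, 9.8   # 2,500 cores, 3 GHz x64 processors
new_servers, new_tflops = 155, 10.7   # 1,240 cores

reduction = 1 - new_servers / old_servers
print(f"Server count cut by {reduction:.0%}")  # -> 88%, i.e. nearly 90 per cent

# Throughput per server, then and now
print(f"Then: {old_tflops / old_servers * 1000:.1f} gigaflops per server")  # ~7.8
print(f"Now:  {new_tflops / new_servers * 1000:.1f} gigaflops per server")  # ~69.0
```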

In actual Dell product news, Dell said that the company started shipping machines based on the new "Shanghai" quad-core Opteron processors yesterday. And looking ahead, as a teaser to HPC shops, he said the company will be the first server maker with quad data rate InfiniBand ports native on its blade servers, and that future machines based on Intel's "Nehalem" next-generation Xeon processors will be able to support up to 1 TB of main memory per node. ®
