Supercomputing past masters resurface with coder-friendly cluster
Convex Convey boasts unified programming
SC08 The Supercomputing 2008 show in Austin is going to be the occasion for a lot of flashbacks, and not just because there are countless nerds on hand who came out of the University of California at Berkeley. The event is hosting the debut of a new supercomputer maker, Convey Computer, and the company's brain trust includes Steven Wallach, the man who co-founded vector minisupercomputer maker Convex Computer in 1982.
The Convey Hybrid Core-1 supercomputer is a cluster of two-socket, run-of-the-mill servers using Xeon processors from Intel. And like many experimental supercomputers being developed these days, the HC-1 also uses field programmable gate arrays (FPGAs) to substantially accelerate the performance of the applications running on the x64 processors. The machine does, of course, run Linux.
Other supercomputer makers and research projects in government, academia, and industry have been playing around with FPGAs as well as using graphics processing units to boost application performance, but there is some secret sauce that Wallach and his team at Convey have come up with to make this a little bit easier on the programmers.
First, the FPGAs used in the HC-1 don't plug into the server I/O or the graphics ports. They drop into one of the two processor sockets on the board. And because Convey has licensed both the frontside bus used with current Xeons and QuickPath Interconnect that will be used in future Xeons, Convey has done something really clever. It has linked the FPGA to the x64 processor in a cache-coherent manner.
While the details of what Convey has done are complex, the practical effect is one many of us can remember: It is akin to plugging an 8087 floating point math unit into a motherboard on an old 8088-based PC. You don't have to do any special programming. When the main CPU needs to do math, it hands it off to the much-faster co-processor.
"This is our fundamental breakthrough - a single programming model," says Bruce Toal, one of the company's co-founders and its president and chief executive officer.
Toal, by the way, was one of the bigwigs at the former Convex, which was eaten by Hewlett-Packard in 1995 and whose technology was part of the underpinnings of the high-end HP 9000 V-class and Superdome PA-RISC servers. And for many years after the HP acquisition of Convex, he ran HP's supercomputing business.
In 2007, Toal was approached by Wallach and his former co-founder at Convex, Bob Paluck, now a venture capitalist, to evaluate the possibility of bringing a hybrid x64-FPGA supercomputer to market. By the summer, Toal had signed up Intel and Xilinx, a maker of FPGAs, as well as Centerpoint Ventures, InterWest Partners, and Rho Ventures, as venture backers, raising a total of $15.1m in funding.
Programmers wanting to use FPGAs to accelerate their code in the HC-1 get an FPGA that has been programmed with a "personality" that is tuned to a particular type of application. One personality makes it behave like a vector math unit (called the SPvector personality), which is useful for seismic processing in the oil and gas industry. Another, called the financial vector personality, which is still in development, replaces pairs of single-precision math units in the SPvector personality with double-precision units. It is being tuned to do lots of parallel random number generations, which financial modeling software requires. Yet another FPGA is programmed to have a personality suitable for accelerating the calculations used in protein sequencing applications.
It might very well be easy to use the personalities, but the packaged vector-op personalities are worthless. FPGAs cannot get good performance masquerading as CPUs, especially on floating-point code. ALL the magic will come from creating custom "personalities": custom software written in a hardware description language that transforms your algorithm into transistors. And doing that is hard.
The problem is that such languages are backwards, held back by a semiconductor industry that doesn't value usability and hates change. The first object-oriented HDL was released barely three years ago, few tools fully support it, and there are hardly any books on it. New generations of languages that automate many of the basic problems of hardware design (particularly the rather difficult task of passing data from one part of your "program" to another, and making sure everything arrives where it needs to in the right number of cycles) have been prophesied for some time but have not materialized. Writing code that compiles to transistors is painful, few people know how to do it, and few study the subject in school or in their spare time.
If an independent software industry springs up, made up of a few brilliant coders who create useful personalities and resell them, then the FPGA idea could thrive. (As I said, using ready-made personalities seems easy.) But the vertical markets this thing targets don't work like that. Every one of their algorithms is unique, they develop everything from scratch, and the FPGA programming model and the transistor languages will be hell for them.
It's a wonderful idea, but I just don't think its time has come yet. And if all you do is use the bundled vector personality, then a GPU will be ten times faster for, literally, one hundredth the price.
Although, if you really can offload code a few lines at a time (to create your own instruction), then rewriting those few lines in an HDL is not nearly as hard as rewriting an entire algorithm. There could be hope, if the granularity is there. But even to get those few lines working, you need to write many more to interface with the DRAM and the FSB. More information is needed on what tools Convey has actually created (in those fleeting dozen months) to automate such chores.
So where is the add in card that will boost my folding@home stats?
The Baby Boomers Have Failed. Sorry.
Let me be the first to congratulate Wallach on having solved the parallel programming crisis. Not!
The baby boomers, with their infatuation with Turing machines and their addiction to everything sequential and algorithmic, gave us the parallel programming and software reliability crises. Now they are old and they've run out of ideas. It is time for them to peacefully retire and let a new generation of thinkers have a go at it.
How to Solve the Parallel Programming Crisis: