Appro sells another flash-happy HPC cluster

Trestles gives Opteron 6100s some love

Appro International, the upstart HPC cluster maker, has got another big order from its biggest customer, the San Diego Supercomputer Center.

The new machine is called "Trestles" because it is a bridge system until the 245 teraflops "Gordon" super is installed next year. But the Trestles box will weigh in at 100 teraflops, have lots of flash memory, and will give researchers a head start on programming for flashy x64-based clusters.

SDSC is trying to carve out a niche for itself among HPC centers by exploring the heavy use of flash-based memory in conjunction with parallel supercomputer clusters. I/O, not CPU-core count, is generally the issue with parallel supers, and as core counts scale up, keeping those cores fed with the data a simulation flings around is the real battle. Chewing on the data is easy once it gets to the cores.

Last November, as El Reg reported, SDSC bagged a $20m grant from the National Science Foundation to get disk drives out of the HPC picture and replace them with much faster flash. As Allan Snavely, associate director at the SDSC and co-principal investigator for Gordon put it last year, "moving a physical disk-head to accomplish random I/O is so last-century" and it was "time to stop trying to move protons and just move electrons."

The problem with Gordon is that it is based on Intel's next generation of Xeons, the "Sandy Bridge" processors that the chip maker will discuss next week at the Intel Developer Forum but has not yet shipped.

SDSC put a baby flash-based super called Dash on the floor in September 2009. This machine is based on Appro's GreenBlade blade servers, and with its 68 nodes it weighs in at only 5.2 teraflops. The Dash cluster uses ScaleMP's vSMP Foundation virtual symmetric multiprocessing systems software to create a virtual 16-node SMP box out of the blades, with a total of 768GB of shared virtual memory. Four of these supernodes are glued together to make up Dash, and the four remaining nodes have 1TB each of flash storage to feed the compute nodes. The whole thing is parked out on the TeraGrid distributed HPC network run by the NSF.
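The Dash numbers hang together: four 16-node vSMP supernodes plus the four flash nodes give the 68-node total, and 768GB of shared virtual memory works out to 48GB contributed by each blade. A quick sketch of that arithmetic in Python, using only the figures quoted above (our own back-of-the-envelope math, not an SDSC spec sheet):

```python
# Sanity check on the Dash layout described above (article figures,
# our arithmetic).
NODES_PER_SUPERNODE = 16      # blades fused into one virtual SMP by vSMP Foundation
SUPERNODES = 4                # four virtual SMP boxes make up Dash
FLASH_NODES = 4               # remaining nodes, 1TB of flash apiece
SHARED_MEMORY_GB = 768        # shared virtual memory per supernode

total_nodes = SUPERNODES * NODES_PER_SUPERNODE + FLASH_NODES
memory_per_blade_gb = SHARED_MEMORY_GB / NODES_PER_SUPERNODE

print(total_nodes)            # 68, matching the node count quoted above
print(memory_per_blade_gb)    # 48GB of DDR contributed by each blade
```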

Dash is interesting, but it lacks scale. Gordon has scale, but doesn't exist yet and is not expected to be installed until mid-2011. And thus SDSC rattled the can in front of the NSF, which dropped in $2.8m to build the Trestles bridge system to give its local and TeraGrid users a more capable machine to play with until Gordon arrives next year.

The Trestles flash cluster is based on Appro's latest quad-socket server using AMD's Opteron 6100 processors. The server nodes, which are based on Appro's 1U-1143H rack-based servers, will sport a mere 64GB of DDR3 memory but will have 120GB of flash memory. Rather than outfit the quad-socket box with twelve-core Opteron 6100s, SDSC is using the lower-powered and cheaper eight-core variants.

The Trestles cluster will comprise 324 nodes, for a total of 10,368 cores, 20TB of main memory, 38TB of flash memory, and a peak theoretical performance of 100 teraflops. It looks like the Trestles super will be using vSMP Foundation to gang up 32 server nodes into a virtual SMP, and then InfiniBand to link those supernodes to each other. That means each virtual, shared-memory node will have 1,024 cores and 2TB of memory.
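For the arithmetically inclined, those headline figures follow straight from the per-node specs. Here is a quick Python sketch using the numbers quoted above (again our arithmetic, not anything out of an SDSC document):

```python
# Sanity check on the Trestles totals quoted above.
NODES = 324
SOCKETS_PER_NODE = 4
CORES_PER_SOCKET = 8            # the eight-core Opteron 6100 variant
MEMORY_PER_NODE_GB = 64
FLASH_PER_NODE_GB = 120
NODES_PER_SUPERNODE = 32        # ganged up with vSMP Foundation

cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
memory_tb = NODES * MEMORY_PER_NODE_GB / 1024
flash_tb = NODES * FLASH_PER_NODE_GB / 1024
supernode_cores = NODES_PER_SUPERNODE * SOCKETS_PER_NODE * CORES_PER_SOCKET
supernode_memory_tb = NODES_PER_SUPERNODE * MEMORY_PER_NODE_GB / 1024

print(cores)                    # 10,368 cores
print(round(memory_tb))         # roughly 20TB of main memory
print(round(flash_tb))          # roughly 38TB of flash
print(supernode_cores)          # 1,024 cores per virtual SMP node
print(supernode_memory_tb)      # 2TB of memory per virtual SMP node
```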

The Trestles super will be up and running by the end of this year.

The Gordon machine will be similar in concept, as it turns out. It will also have 32 supernodes, each comprising 32 two-socket nodes glued together using vSMP Foundation, with each node sporting two Sandy Bridge Xeons and 64GB of memory. That gives each supernode 2TB of main memory, plus 8TB of flash memory.

The Gordon box will weigh in at 245 teraflops, have 64TB of main memory and 256TB of flash memory, and hook up to 4PB of disk storage. The idea is to get teraflops of oomph more or less matched up against terabytes of flash storage and see how that improves the overall performance of a super. The Gordon system will have 2.5 times the oomph in teraflops of the Trestles system, but at $20m it will cost more than seven times as much. The NSF is paying for Gordon as well.
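The same back-of-the-envelope arithmetic covers Gordon and the comparison with Trestles, once more using only the figures quoted above:

```python
# Rough check on the Gordon totals and the Trestles comparison.
SUPERNODES = 32
NODES_PER_SUPERNODE = 32          # two-socket Sandy Bridge Xeon nodes
MEMORY_PER_NODE_GB = 64
FLASH_PER_SUPERNODE_TB = 8

memory_per_supernode_tb = NODES_PER_SUPERNODE * MEMORY_PER_NODE_GB / 1024
total_memory_tb = SUPERNODES * memory_per_supernode_tb
total_flash_tb = SUPERNODES * FLASH_PER_SUPERNODE_TB

print(memory_per_supernode_tb)            # 2TB of main memory per supernode
print(total_memory_tb)                    # 64TB of main memory overall
print(total_flash_tb)                     # 256TB of flash overall
print(245 / 100)                          # ~2.5x Trestles' teraflops
print(round(20_000_000 / 2_800_000, 1))   # ~7.1x Trestles' price tag
```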

SDSC is not Appro's only flash-happy customer. Back in June, Lawrence Livermore National Laboratory, which has five of Appro's Opteron-based clusters with InfiniBand interconnects, tapped Appro and flash partner Fusion-io to build a custom 100TB flash storage array for the Hyperion testbed super.

The Hyperion machine, which was announced in November 2008 and installed in May, has Intel and Dell as its main contractors and is free for ISVs in the HPC space to use for testing the scalability of their software on real iron. Hyperion was built using $5m of funding from the US Department of Energy plus another $5.5m in free equipment and services from various IT vendors. It is not clear how much the 100TB flash storage array cost, or whether it was paid for through donations and DOE money like the basic x64 cluster was. ®
