Pittsburgh boffinryplex fires up Altix UV 1000 combo-box
'Blacklight' to be joined by sister in Fusionopolis
As El Reg told you back in June when the first "UltraViolet" Altix UV 1000 shared memory supercomputers began shipping out of Silicon Graphics' factories, the Pittsburgh Supercomputing Center will be one of the first customers to get one of these boxes. As it happens they are getting two, joined up into a single machine.
The PSC combo, nicknamed "Blacklight", was paid for through a $2.8m grant from the National Science Foundation to bolster the number-crunching power of the TeraGrid, a nationwide network of supers funded by the US government for academic and private research (as opposed to the spooky stuff that goes on at the Department of Energy labs).
The Altix UV 1000 machines were launched last November and started shipping in June of this year. Each machine comprises 128 two-socket blade servers, connected into a shared memory system using the NUMAlink 5 interconnect developed by SGI.
The NUMAlink 5 interconnect hooks into Intel's eight-core Xeon 7500 processors and their "Boxboro" 7500 series chipset. The Xeon 7500s and their chipsets only allow up to 16 TB of main memory to be addressed, which is therefore the upper limit of the global shared memory that SGI can build into a single image of the Altix UV system, packing 256 processors and a total of 2,048 cores linked together in a 2D torus.
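In round numbers, that addressing ceiling works out as follows (a quick sketch based on the figures above; binary terabytes assumed):

```python
TB_IN_GB = 1024                 # binary terabytes assumed
max_mem_gb = 16 * TB_IN_GB      # Xeon 7500 / Boxboro addressing ceiling
sockets = 256                   # fully loaded Altix UV 1000 image
cores = sockets * 8             # eight-core Xeon 7500s

print(max_mem_gb // sockets)    # GB of shared memory per socket
print(max_mem_gb // cores)      # GB of shared memory per core
```

That is 64 GB per socket, or 8 GB per core, across the whole single system image.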
The NUMAlink 5 interconnect has an aggregate of 15 GB/sec of bandwidth across the blades in the Altix UV 1000 setup and under 1 microsecond of latency hopping from blade to blade. The fully loaded 256-socket machine delivers 18.56 teraflops of HPC oomph, which is a lot of flops for a shared memory system.
The shared memory systems offer a simpler programming model than parallel clusters. It is a pity that Intel didn't offer more memory addressability in the Xeon 7500's on-chip memory controller, as the dual-core Itanium 9100 processors did.
The older Itanium chips could address 100 TB of main memory, although no one ever built a box with that much memory in it. Still, at 16 TB in a single image, the Altix UV 1000s are offering eight times the global shared memory of their Itanium-based predecessors, the Altix 4700s, which is an improvement. And you can bet that SGI is lobbying Intel pretty hard to boost the memory addressability of the future "Sandy Bridge" Xeons, due next year.
Blacklight: PSC's Altix UV 1000 shared memory super
The Blacklight super at PSC is the largest machine sold by SGI to date, with two fully loaded Altix UV 1000 systems that are networked on the TeraGrid backbone. SGI can offer much larger Altix UV 1000 configurations, but they are not single shared memory boxes. The blade servers can be linked into a fat tree pod of 128 blades using the NUMAlink 5 interconnect, and then eight of these pods can be lashed together over NUMAlink 5 to create a 16,384-core behemoth weighing in at 74.3 teraflops of aggregate raw performance.
No one has seen fit to shell out the money for such a machine yet - which would have eight copies of the Linux operating system running, compared to Blacklight's two. SGI said last year it could scale the current Altix UV 1000 machines to 32,768 cores, with 148.6 teraflops of raw power and 16 linked shared memory systems and 16 Linux instances, if someone wanted to pay for it.
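The scaling arithmetic is straightforward (a sketch using the blade, socket, and core counts quoted above):

```python
CORES_PER_SOCKET = 8    # eight-core Xeon 7500
SOCKETS_PER_BLADE = 2   # two-socket blade servers
BLADES_PER_IMAGE = 128  # one Altix UV 1000 single system image / Linux instance

cores_per_image = BLADES_PER_IMAGE * SOCKETS_PER_BLADE * CORES_PER_SOCKET
print(cores_per_image)        # cores under one copy of Linux
print(8 * cores_per_image)    # eight linked images: the 16,384-core config
print(16 * cores_per_image)   # sixteen linked images: the 32,768-core top end
```

That's 2,048 cores per Linux instance, 16,384 cores across eight linked instances, and 32,768 cores across sixteen.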
Based on the price that the NSF paid to build the Blacklight super, that top-end Altix UV 1000 machine would cost around $22.4m, or around $153m per petaflops if you ganged up 64 fully loaded Altix UV 1000s to break the petaflops barrier.
That's a whole lot more expensive than what Cray is charging these days for its XE6 machines, which are running about $45m per petaflops based on the few deals where pricing information has been available. The shared memory architecture is more expensive than the fast-interconnect, distributed-node machines Cray is peddling with its Gemini wares.
If SGI could address hundreds of terabytes of memory in a single system image, that more-than-3.4x multiple might be worth paying to save money and time on programming. As it stands, there are different horses for different courses, and both companies will win some deals.
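The pricing comparison above can be reproduced from the article's own figures (a sketch; the per-system price is inferred from the $2.8m Blacklight grant covering two systems, and the per-petaflops numbers are taken as given):

```python
grant_m = 2.8                   # NSF grant for Blacklight, in $m
per_system_m = grant_m / 2      # two UV 1000s in Blacklight -> ~$1.4m each
top_end_m = 16 * per_system_m   # sixteen linked systems -> ~$22.4m

sgi_per_pf_m = 153.0            # SGI cost per petaflops, $m (from above)
cray_per_pf_m = 45.0            # Cray XE6 cost per petaflops, $m (from above)
premium = sgi_per_pf_m / cray_per_pf_m   # shared memory price multiple

print(per_system_m, top_end_m, round(premium, 1))
```

The multiple works out to the 3.4x figure cited above.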
In a related announcement, the Agency for Science, Technology, and Research (A*Star) in Fusionopolis, Singapore, has acquired an Altix UV 1000 system with 2,112 cores and 12.3 TB of shared memory, backed up by a 32 TB PAS 8 disk array from Panasas, for researchers to run their modeling, simulation, and visualization codes on. A*Star says that the large memory system is necessary because of the large data sets it uses in its simulations and because some of its applications do not scale well on cheaper parallel clusters. ®