Original URL: https://www.theregister.com/2009/06/09/sun_constellation_juropa2/

Germans fire up 200 teraflop Juropa2 super

Sun makes good on Constellation promises

By Timothy Prickett Morgan

Posted in Channel, 9th June 2009 23:05 GMT

French server maker Bull might be the prime contractor on the 200 teraflops Juropa2 massively parallel supercomputer installed at the government-sponsored Forschungszentrum Jülich (FZJ), but beleaguered server maker Sun Microsystems wants everyone to know that the box that was turned on last Friday is built from its InfiniBand switches and its Xeon blade servers.

The Bull-Sun deal was announced last October. Juropa is short for Jülich Research on Petaflops Architectures, which pretty much tells you what the goal is. Its number crunching involves climate, chemistry, and medicine research.

The initial prototype machine, built from IBM System x servers using Intel's dual-core "Woodcrest" Xeons and linked together with a prototype Quadrics interconnect running atop 10 Gigabit Ethernet, was installed as Juropa1 in June 2007.

Bull and Sun were quiet about the exact iron behind the Juropa2 machine, as was Quadrics, which was rumored to have lost out on the second phase of the Juropa project and which has since shut down; the company's founder, Duncan Roweth, has taken a position at Cray's European operations.

FZJ is the third largest supercomputer center in Germany, and it's among the largest HPC facilities in Europe. What it buys can sway what other private and public HPC centers acquire - or do not. The Sun gear at FZJ is the third big deal that Sun has closed for its "Constellation" HPC clusters, which combine Sun's own InfiniBand switches (nicknamed "Magnum") with its x64-based "Galaxy" blade servers.

Sun's first big HPC deal for the Constellation machines was the "Tsubame" cluster in Japan, where NEC was the prime contractor. This machine used 648 of Sun's X4600 rack servers (with a total of 11,088 cores, including node controllers, using dual-core Opterons) and math accelerators from Clearspeed to hit 48.8 teraflops of sustained number-crunching performance. Sun's second big Constellation deal was the 433.2 teraflops "Ranger" cluster at the University of Texas (with 62,976 cores using quad-core Opterons). Both of these machines used InfiniBand interconnects to link the nodes.

As it turns out, so does the Juropa2 cluster, and that is no surprise considering the fate of Quadrics and the tapping of Sun for iron by Bull. The Juropa2 machine is using Sun's X6275 blade servers, which were launched in mid-April in the wake of Intel's "Nehalem EP" Xeon 5500 processor announcements at the end of March. The X6275 packs two whole two-socket Xeon 5500 servers onto a single blade, with each node on the blade getting two quad data rate (40 Gb/sec) InfiniBand ports.
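Strictly by way of illustration, the per-blade plumbing tallies up like this - a quick Python sketch using only the figures quoted above, nothing from Sun's spec sheets beyond them:

```python
# Per-blade tally for Sun's X6275, using only the numbers cited above
nodes_per_blade = 2       # two independent two-socket Xeon 5500 servers per blade
ib_ports_per_node = 2     # quad data rate InfiniBand ports per node
gbps_per_port = 40        # QDR InfiniBand signalling rate, per the article

ports_per_blade = nodes_per_blade * ib_ports_per_node
aggregate_gbps = ports_per_blade * gbps_per_port
print(f"{ports_per_blade} QDR ports, {aggregate_gbps} Gb/sec of raw InfiniBand per blade")
# -> 4 QDR ports, 160 Gb/sec of raw InfiniBand per blade
```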

In this case, FZJ is using the 2.93 GHz X5570 Xeons on its 2,208 server nodes, which have a combined 17,664 cores. The server nodes are clustered using the "Project M9" kickers to Sun's Magnum InfiniBand switches, which have 648 QDR InfiniBand ports. Another 14 Sun storage servers are running a Lustre clustered file system that currently has 500 TB of capacity. While Sun didn't say so, it is inconceivable that the Juropa2 cluster runs anything other than Novell's SUSE Linux Enterprise Server.
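For those keeping score at home, the headline figure squares with a back-of-the-envelope check. Here's a minimal sketch, assuming the usual four double-precision flops per clock per Nehalem core - our assumption, not a number Sun or FZJ quoted:

```python
# Back-of-the-envelope peak-performance check for Juropa2
# (assumes 4 double-precision flops per clock per core; not an official figure)
nodes = 2208                  # two-socket server nodes, per the article
cores_per_node = 2 * 4        # two quad-core Xeon X5570s per node
clock_ghz = 2.93              # X5570 clock speed
flops_per_cycle = 4           # assumed DP flops per clock per core

total_cores = nodes * cores_per_node
peak_teraflops = total_cores * clock_ghz * flops_per_cycle / 1000.0

print(f"{total_cores} cores, roughly {peak_teraflops:.0f} teraflops peak")
# -> 17664 cores, roughly 207 teraflops peak, in line with the 200 teraflops headline
```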

No word on when or if FZJ will kick this box up to the petaflops performance level, or how. The odds do not favor a wholesale move to future eight-core "Nehalem EX" Xeon 7500s or a switch over to Opteron versions of Sun's blades, but this being a mix of supercomputing and politics, where egos run just a little bit bigger than budgets (but only just a little bit), anything is possible. ®