US weather meisters buy mini Cray
Getting a foot back into an old door
Supercomputer maker Cray has managed to get the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, to take delivery of a Cray XT5m minisuper, getting its foot back in the door of a facility that helped put Cray on the map.
In 1976 NCAR took delivery of the very first Cray-1A vector super designed by Seymour Cray after he left Control Data Corp to found his own company.
Between 1963 and 1976, NCAR was a CDC shop, as you can see from the timeline of supercomputer installations at NCAR. The facility took delivery of one of the first Thinking Machines massively parallel computers, too, as well as dabbling with IBM PowerParallel RISC boxes in the early 1990s.
But the door at NCAR was slammed shut in the late 1990s, when Cray cried foul after NCAR wanted to install parallel vector supercomputers from NEC to do weather modeling.
Thanks in large part to heavy lobbying by Cray, the US government imposed heavy import duties (several hundred percent of list price) on NEC, Fujitsu, and Hitachi after declaring that they were dumping supercomputers on the US market. The irony, of course, is that IBM, not Cray, was the big beneficiary of the ruling, at least at NCAR, because this was when Big Blue got serious about supercomputers and started delivering the powerful RISC chips and high-bandwidth interconnects behind its RS/6000-based supers.
Cray would obviously like to dislodge IBM from the facility with some heavy-duty, petaflops-class systems, and this could be the first step on that long road to forgiveness.
It is interesting to note that NCAR has only picked up a Cray XT5m minisuper, which is based on last year's six-core "Istanbul" Opteron processors, rather than on the new XT6 blades that use the just-announced twelve-core "Magny-Cours" Opteron 6100 processors.
This machine, nicknamed "Lynx," will be used by weather researchers as a development box when it is installed later this month. Cray was quick to point out that NCAR researchers have access to the "Jaguar" XT5 at Oak Ridge National Laboratory, rated at 1.76 sustained petaflops on the Linpack Fortran benchmark test, the "Kraken" XT5 system at the University of Tennessee, rated at 831.7 teraflops, and the "Franklin" XT4 at the National Energy Research Scientific Computing Center, rated at 266.3 teraflops.
The implication is that Cray is in the running now for the next big upgrade at NCAR. And since NCAR is dabbling with Windows HPC Server as well as Linux in addition to its big production AIX supers, you can bet that NCAR would love to have a box that could run either Windows or Linux, which a super based on Xeon or Opteron processors can do. (IBM's Power-based supers can run either AIX or Linux, except for the BlueGene machines, which are restricted to Linux.)
You can bet that IBM is going to work to keep Cray out of NCAR in a big way. The big box at NCAR right now is a Power6-based Power 575 cluster running AIX called "Bluefire," which has 4,064 processors and which was the first of the water-cooled Power 575 clusters that IBM sold. (NCAR likes to be at the front of the line with new technology.) That machine was installed in April 2008; it uses InfiniBand rather than IBM's Federation interconnect and, at 59.7 teraflops of performance, is a bit of a weakling on the supercomputing front.
IBM can, of course, pitch NCAR on a 1 petaflops or larger super based on its Power7-based Power Systems IH supercomputer nodes, which El Reg told you about last November. These 2U nodes, which are six feet deep and more than three feet wide, have eight of IBM's Power7 IH multichip modules (MCMs), each packing four eight-core Power7 chips on a single package; the drawer has eight high-speed switches (which marry technologies from Federation and InfiniBand) and 1 TB of memory.
A dozen of these drawers slide into a rack, delivering 98.3 teraflops of peak number-crunching power. The Power7 parallel design uses the hub/switch link to gang up four drawers into what IBM calls a super node, and the hub/switch can, in theory, scale to 512 of these supernodes, for a peak theoretical performance of 16.8 petaflops.
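The figures above hang together, as a quick back-of-the-envelope check shows. This sketch works backwards from IBM's quoted 98.3 teraflops per rack of a dozen drawers; the per-drawer and supernode numbers it derives are arithmetic consequences, not additional IBM specs:

```python
# Sanity-check IBM's Power7 IH peak-flops figures quoted above.
# Only RACK_PEAK_TFLOPS, DRAWERS_PER_RACK, DRAWERS_PER_SUPERNODE,
# and MAX_SUPERNODES come from the article; the rest is derived.

DRAWERS_PER_RACK = 12
RACK_PEAK_TFLOPS = 98.3          # IBM's quoted peak per rack

drawer_tflops = RACK_PEAK_TFLOPS / DRAWERS_PER_RACK       # ~8.19 TF

DRAWERS_PER_SUPERNODE = 4        # four drawers ganged by the hub/switch
MAX_SUPERNODES = 512             # theoretical hub/switch scaling limit

supernode_tflops = drawer_tflops * DRAWERS_PER_SUPERNODE  # ~32.8 TF
system_pflops = supernode_tflops * MAX_SUPERNODES / 1000  # ~16.8 PF

print(f"per drawer:    {drawer_tflops:.2f} TF")
print(f"per supernode: {supernode_tflops:.1f} TF")
print(f"max system:    {system_pflops:.1f} PF")
```

Running the numbers lands on 16.8 petaflops, matching the peak theoretical figure IBM cites for a full 512-supernode configuration.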
Cray, no doubt, has similar scalability goals for its future "Baker" Opteron-Linux systems and their "Gemini" interconnect. It will be interesting to see what NCAR does for its next big bad box. ®