SGI chases Cray with baby cluster
Ours is bigger, er, smaller
Cray thinks there is a market for baby supercomputers that bridge the gap between fast two-socket workstations with peppy graphics cards and the rack-based parallel supercomputer clusters that run large-scale simulations.
And the new Silicon Graphics, the result of the acquisition of the old SGI by Rackable Systems, agrees. As it discussed the merged SGI and Rackable server and storage lines just last week, SGI said it had some plans for high-performance computing.
Now, out pops the CloudRack X2, the second generation of the company's cookie-sheet servers, which SGI is billing as a "scalable workgroup cluster."
The CloudRack X2 product weaves together a bunch of different Rackable server technologies - including the CloudRack C2 tray server designs using two-socket x64 server boards, which debuted last October, and the low-powered MicroSlice compute trays that plunk multiple Micro-ATX and Mini-ITX system boards onto a single tray, which were announced back in January.
The CloudRack line was tweaked in March to use 23U and 46U racks that are 24 inches wide instead of the 22U and 44U racks that were 26 inches wide. The new width is closer to a standard data center floor tile, and the change coincided with the launch of Intel's Xeon 5500 processors.
The CloudRack X2 is an even smaller rack. Rather than slotting servers horizontally into full-sized racks, the CloudRack X2 stands them vertically in a chassis that can be put on wheels or mounted in a rack that fits in an office or a lab. It is not a blade server - it has no midplane for communication and management - but it offers most of a blade server's benefits along with some flexibility that blade servers lack in motherboard and switch choices.
The CloudRack X2 enclosure is a 14U chassis that has room for two 1U switches and nine server trays mounted vertically in the front; in the back, there is space for three power supplies - n+1 redundancy - and three fan arrays.
Having all of the power and cooling in the chassis instead of on each server increases the overall efficiency of the baby rack. Also, the rotational vibration from the many fans normally found in rack servers - which can shorten the life of disk drives - is less of an issue because the fans sit on the chassis rather than on each server; the open trays are cooled by the chassis fan arrays instead.
More importantly, says Geoffrey Noer, senior director of product marketing at the new SGI, entry-level HPC and Web 2.0 shops buying ordinary rack servers tend not to pay the extra money for the redundant power and cooling they need. With the CloudRack X2 cluster, the redundancy is in the rack - as it is in a blade server - and comes at no incremental cost.
The baby cluster from SGI currently supports two half-width two-socket servers per tray, and those servers can be based on Intel's quad-core Xeon 5500 processors or AMD's quad-core Opteron 2300 or six-core Opteron 2400 chips. These are the same options as in the bigger CloudRack machines.
The MicroSlice boards are also available on the CloudRack X2 trays. Up to three single-socket systems based on AMD's quad-core Phenom X4 processors - using Micro-ATX boards - and up to six single-socket mobos based on AMD's Athlon X2 or Intel's Atom chip are available, too. The AMD boards were announced with the MicroSlice trays that came out in January, but the Atom boards are new. They will eventually be available on the regular CloudRack products.
Noer says the regular Xeon and Opteron servers are aimed mostly at HPC shops, where performance is key but buyers tend not to want to go above 95 watts per socket. The Phenom X4, Athlon X2, and Atom mobos, by contrast, are aimed at hyperscale data centers running web applications, where performance per watt and price are the key criteria driving a server acquisition - and where buyers tend to pick processors with 50- to 60-watt thermal envelopes.
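The split Noer describes comes down to simple watts-per-core arithmetic. A rough sketch, using only the power envelopes quoted above (the function name and the quad-core assumption for the web-oriented part are mine, for illustration):

```python
# Illustrative watts-per-core comparison using the envelopes quoted in the
# text: up to 95W per socket for HPC parts, 50-60W for web-oriented parts.
def watts_per_core(tdp_watts, cores_per_socket):
    """Power envelope divided evenly across a socket's cores."""
    return tdp_watts / cores_per_socket

hpc_part = watts_per_core(95, 4)  # quad-core Xeon/Opteron class
web_part = watts_per_core(60, 4)  # quad-core Phenom X4 class, top of range

print(hpc_part)  # 23.75
print(web_part)  # 15.0
```

Even at the top of the 50- to 60-watt range, the web-oriented parts land well under the HPC parts on this crude metric, which is the trade-off hyperscale buyers are making.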
None of that iron makes the CloudRack X2 particularly an HPC solution. But sprinkle on some Mellanox ConnectX adapter cards, which support both 10 Gigabit Ethernet and 20Gb/sec or 40Gb/sec InfiniBand networks, and now you get some HPC interest. Later in the third quarter, the Xeon and Opteron server trays will get just that.
Noer says SGI is also going to soon launch trays with nVidia Tesla GPU co-processors, which can be used to drive graphics applications or to augment the number-crunching performance of the main processors on the server trays. SGI has plans to launch other server options for the trays aimed at HPC shops in the coming months, but Noer did not elaborate.
The CloudRack X2 can hold up to 216 processor cores using the six-core Opteron 2400s or up to 108 cores using the single-socket, quad-core Phenom X4 chips. The trays can hold up to 72 drives in a 3.5-inch form factor or up to 108 drives in a 2.5-inch size. Adding drives to the CloudRack X2 trays obviously displaces some compute capacity.
The CloudRack X2 is available now. SGI did not provide list prices at press time. The old SGI did give out pricing, but Rackable never did. This is yet another area where the new SGI can learn from the old one.
What also makes the CloudRack X2 an HPC play - and what the old SGI had plenty of experience with, while Rackable had very little - is a stack of systems software tuned for HPC clusters and lots of application software for doing computational structural mechanics, fluid dynamics, electromagnetics, or chemistry, as well as CAD/CAM/CAE, rendering, seismic processing, bioinformatics, and a slew of simulation programs.
SGI has certified Windows HPC Server 2008, SUSE Linux Enterprise Server 10 and 11 - including the SGI ProPack 6 math library extensions for SLES - and Red Hat Enterprise Linux 4 and 5 for the CloudRacks, and more than 50 popular applications across the HPC spectrum can run on the iron as well.
SGI is touting its ISLE Cluster Manager software - which came from its February 2008 acquisition of Linux Networx and provides version control for patching plus job control and scheduling - to manage the baby super. ®