SGI preps supers for future Intel chips
To Itanium or not to Itanium
SC08 Architectures can change quickly in the supercomputing space, and slow-moving vendors can get left behind or at least find themselves out of step with the next big wave of sales in the HPC area. This has happened in the past with Silicon Graphics, and the company is determined not to let it happen again.
At the SC08 supercomputing show in Austin, Texas, this week, SGI was on hand to show off its future blade-style compute nodes for its Altix ICE Xeon-based clusters and to continue promoting the Itanium-based Altix 4700 systems that it's been selling and expanding for years. The company was happy to talk about future Altix ICE gear, but it wouldn't be pinned down on plans for the Altix 4700 machines.
Michael Brown, sciences segment manager at SGI, was manning the company's booth and showing off a forthcoming Altix ICE blade that will use Intel's "Nehalem" multicore Xeon chip, widely expected to come to market with two, four, and eight cores in March 2009. The SGI compute blade has a dozen memory slots and will support 8 GB DDR3 memory modules, for a total of 96 GB of capacity. Thanks to the QuickPath Interconnect that the Nehalem Xeon and "Tukwila" Itanium processors will share - and their on-chip memory controllers - the "Tylersburg" chipset from Intel doesn't need a separate northbridge, which leaves room to rejigger the CPU sockets, squeeze in more memory slots, and provide better cooling.
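For those keeping score, the per-blade capacity is a simple product of the figures quoted above - a dozen DIMM slots populated with 8 GB DDR3 modules:

```python
# Per-blade memory capacity of the forthcoming Nehalem Altix ICE blade,
# using the figures quoted above: 12 DIMM slots, 8 GB DDR3 modules.
DIMM_SLOTS = 12
MODULE_GB = 8

capacity_gb = DIMM_SLOTS * MODULE_GB
print(f"Per-blade capacity: {capacity_gb} GB")  # prints "Per-blade capacity: 96 GB"
```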
The future SGI Altix ICE blade will come with two dual-rate InfiniBand ports, and this is important because the backplane and switches inside the current Altix ICE racks are based on DDR InfiniBand. Brown said that while quad data rate InfiniBand was just becoming available, SGI felt it was more appropriate to deliver a DDR InfiniBand blade (running at 20 Gb/sec) for its existing customers who want to add more processing power, rather than bumping up to QDR speeds (at 40 Gb/sec). (As we previously reported, Sun Microsystems is gearing up to sell a much larger blade with two two-socket Nehalem boards on a single blade, which also integrates QDR InfiniBand on the blade).
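The DDR and QDR figures quoted above are signalling rates for the usual 4x link width; the payload bandwidth an application actually sees is lower, because both generations use 8b/10b encoding on the wire. A quick sketch of the arithmetic:

```python
# Signalling rates for 4x InfiniBand links, as quoted in the article.
# DDR and QDR InfiniBand use 8b/10b encoding, so only 8 of every 10
# bits on the wire carry payload data.

def usable_gbps(signalling_gbps: float) -> float:
    """Data rate after 8b/10b encoding overhead."""
    return signalling_gbps * 8 / 10

for name, rate in (("DDR", 20), ("QDR", 40)):
    print(f"{name}: {rate} Gb/s signalling, {usable_gbps(rate):.0f} Gb/s data")
# DDR: 20 Gb/s signalling, 16 Gb/s data
# QDR: 40 Gb/s signalling, 32 Gb/s data
```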
Brown said that the Altix ICE boxes would be upgraded with faster InfiniBand "within the next calendar year," since HPC customers always want more bandwidth. QDR InfiniBand would probably be available in SGI's plain vanilla rack servers first and then put into the Altix ICE machines later, since both the blades and the backplane need to be updated, not just the motherboards.
As Quiet as Grass Growing
On the Altix 4700 server line, which is also a blade-style architecture but one employing SGI's NUMAflex clustering technology to create a global memory space for the blades, Brown was as quiet as the grass growing on a hot summer day when pressed about how QuickPath Interconnect might give SGI different processor options. Meaning, for instance, that future Altix 4700 machines might support Nehalem chips and not Itaniums, or both, or maybe just the future quad-core Tukwilas, which are also due to ship in early 2009. "We are committed to global shared memory and moving forward with new processors," Brown said with a smile.
That could mean anything, of course.
Considering all of the tuning that SGI has done for Linux on Itanium, it is hard to believe that the future Altix machines (presumably the 4800s) won't support Tukwila. But given the presumably cheaper flops in a Nehalem chip, it is equally hard to believe such future machines would not also offer Nehalems as an option. That global shared memory (not quite as tightly coupled as the memory in an SMP server, but certainly looking more like a single memory space to HPC applications than does a parallel Linux cluster) is very useful to a certain class of customers, and they will pay a premium for it. Which would seem to argue that if Nehalems are cheaper and can be put into kickers to the Altix 4700s, SGI might boost its bottom line.
While a rack of Altix ICE machines can deliver 3 TB of distributed memory per rack, the Altix boxes can bring 1,024 Itanium cores to bear on a single Linux instance with a globally addressable memory, and SGI has customers with over 8,000 processors and a global shared memory spanning 20 to 30 TB; the architecture, says Brown, scales to 128 TB of global shared memory today.
All that said, you can imagine that SGI would like to cut some other costs, quite possibly by converging the Altix 4700 NUMAflex and Altix ICE lines. Imagine a line of Nehalem-based machines that offered the option of the InfiniBand interconnect used in the ICE products and another line that had NUMAflex as an option, allowing the global shared memory of the Altix 4700. One might even imagine that the networking aspects of these blades could be made modular (like compute, I/O, and memory are in the ICE products already), so a blade could be converted from one style of computing to the other. Or, because of the common QPI socket for Tukwila and Nehalem, from one processor type to another. And if that were not possible or economically feasible, then imagine including both native InfiniBand and NUMAflex ports on a single board to offer both styles.
SGI has lots of options, technically, but economically speaking, it is limited. SGI can only afford to create products that will sell and sell now. It will be interesting to see what SGI does. The one thing that SGI doesn't seem inclined to do is embrace the Opteron processors from Advanced Micro Devices - a position it has taken since the company moved off its own MIPS chips and Irix operating system and embraced Itanium and Linux. ®