SGI to put Intel's Xeon E5s in ICE X systems

Opteron 6200s put on ice

SC11 Supercomputer maker Silicon Graphics has been champing at the bit for Intel to get its "Sandy Bridge-EP" Xeon E5 server chips to market. Rather than wait until early next year to launch its new ICE X parallel machines, and hand rival Cray and its Opteron 6200-based XE6 and XK6 supers all the headlines at the SC11 supercomputing conference in Seattle this week, SGI has decided to preview the dense-pack ICE X machines, which will employ the Xeon E5s and have actually begun shipping to selected customers.

When it became apparent six years ago that the Itanium-based shared memory systems the original SGI sold were not appealing to all customers, the company launched the Altix ICE line of clusters, which share some of the packaging and cooling technologies of the Altix 4700 shared memory systems but are plain vanilla clusters in terms of their interconnect.

Fast-forward to 2011, and the new SGI, the result of the merger of Rackable Systems and the old SGI, is rolling out its fifth generation of Altix ICE machines, nicknamed "Carlsbad 3" or "CB3" internally and now sold under the ICE X brand. The ICE X machines are blade servers and offer considerably more compute density than prior machines. In addition to traditional air cooling, SGI has crafted on-socket water cooling for the Xeon E5 processors to provide more efficient cooling and faster clock speeds, as well as allowing customers to use hotter and cheaper standard 1.5 volt DDR3 main memory instead of 1.35 volt low-power memory. (You can, of course, use low-volt memory in conjunction with the water-cooled heat sinks if you are coping with intense thermals.)
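As a rough, back-of-the-envelope illustration of why the memory voltage matters (generic DRAM arithmetic, not SGI's figures): dynamic power scales roughly with the square of the supply voltage, so standard 1.5 volt DIMMs draw on the order of 20 per cent more than their 1.35 volt low-power cousins.

```python
# Back-of-the-envelope only: DRAM dynamic power scales roughly with V^2.
# Generic arithmetic, not an SGI or JEDEC figure.
standard_volts = 1.50
low_power_volts = 1.35

ratio = (standard_volts / low_power_volts) ** 2
print(f"1.5V vs 1.35V dynamic power: ~{ratio:.2f}x (~{(ratio - 1) * 100:.0f}% more)")
```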

SGI's "Dakota" ICE X blade server

At the heart of the ICE X machine are two half-width, two-socket Xeon E5 motherboards, which are designed by SGI and made by unnamed ODMs (most likely in Taiwan, but SGI is not saying). These server boards also include 56Gb/sec, or Fourteen Data Rate (FDR), InfiniBand mezzanine cards, which are based on the ConnectX-3 adapters from Mellanox Technologies. These mezz cards reach out to integrated switch blades, based on the SwitchX ASICs from Mellanox, that fit into the center of the ICE X chassis.

The first blade is known as "Dakota" internally and is called the IP-113 in the SGI product catalog. The Dakota blade has two Xeon E5 sockets and can use the expected 130 watt parts. The Xeon E5 has on-chip memory controllers and PCI-Express 3.0 peripheral controllers and hooks into the "Patsburg" chipset to offer different ranges of peripherals and bandwidth. (El Reg exclusively detailed the feeds and speeds of the Intel "Romley" server platform back in May.) The Dakota blade has eight memory slots per socket, and has room for two 2.5-inch SATA drives, which can be disk or solid state units, two PCI-Express 3.0 peripheral slots (one from each socket), one baseboard management controller (BMC), and one mezzanine board for network connectivity. The Dakota blade uses traditional metal heat sinks and air cooling, and is expected to burn 400 watts under normal HPC loading.
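Tallying up the figures quoted above gives a feel for the blade's budget. A trivial sketch follows; the 400 watt number is SGI's expected draw, and the split is just arithmetic on the quoted specs rather than anything SGI has published:

```python
# Quick tally of the Dakota (IP-113) blade figures quoted above.
sockets_per_blade = 2
dimm_slots_per_socket = 8
cpu_tdp_watts = 130            # top-bin Xeon E5 parts the blade can take
blade_draw_watts = 400         # SGI's expected draw under normal HPC load

dimm_slots = sockets_per_blade * dimm_slots_per_socket
cpu_watts = sockets_per_blade * cpu_tdp_watts

print(f"DIMM slots per blade: {dimm_slots}")
print(f"CPUs at full TDP: {cpu_watts}W of a ~{blade_draw_watts}W blade")
print(f"Left for memory, disks, mezz card, BMC: ~{blade_draw_watts - cpu_watts}W")
```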

Here's what the Dakota blade looks like:

SGI ICE X "Dakota" blade schematics

The prior Altix ICE 8400 blades had InfiniBand ports soldered onto the system boards, but Paul Kinyon, director of product marketing for scale-out servers at SGI, says that customers were telling SGI they wanted options, and so the company shifted to mezz cards that snap onto the motherboard. With these options, SGI can support single-rail or dual-rail FDR InfiniBand networks. The mezz card options include a single-port card, a dual-port card that hangs off one x8 slot, and a funky dual single-port card that uses two x8 slots. The latter card lets a dual-rail InfiniBand network run at full speed off the Dakota blade while taking up only one mezz card slot.
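To put the single-rail versus dual-rail options in raw bandwidth terms, here is a quick sketch using the standard FDR InfiniBand figures (4 lanes at 14.0625Gb/sec signalling with 64b/66b encoding per port); these are spec-sheet numbers, not SGI benchmarks:

```python
# FDR InfiniBand arithmetic: standard spec figures, not SGI measurements.
lanes_per_port = 4
lane_signalling_gbps = 14.0625   # FDR signalling rate per lane
encoding_efficiency = 64 / 66    # 64b/66b encoding overhead

port_gbps = lanes_per_port * lane_signalling_gbps * encoding_efficiency

for rails in (1, 2):             # single-rail vs dual-rail per blade
    total_gbps = rails * port_gbps
    print(f"{rails}-rail FDR: ~{total_gbps:.1f}Gb/s (~{total_gbps / 8:.1f}GB/s) per direction")
```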
