HP double stuffs Nehalem blades

Skinless Opteron iron for HPC


SC09 Hewlett-Packard may not dominate the HPC headlines, but it does serve up a lot of iron that runs HPC applications. And today, at the SC09 supercomputing show in Portland, Oregon, HP is adding two new servers to its ProLiant portfolio aimed specifically at the cluster crowd.

The company is also touting its StorageWorks X9000 clustered network storage arrays to the HPC faithful.

The first new server is the ProLiant BL2x220c G6, a double-density blade server that packs two whole two-socket Xeon 5500 servers onto a single blade. HP put its first double-density blade out in May 2008, using the quad-core "Harpertown" Xeon 5400 processors and their unimpressive frontside bus.

The workhorse ProLiant G6 boxes sport the new quad-core "Nehalem EP" Xeon 5500s and the latest quad-core and six-core Opterons from Advanced Micro Devices, and the BL2x220c blade is the last of the ProLiant machines to be refitted for the Xeon 5500s, which themselves launched in March. The Nehalems, thanks to the QuickPath Interconnect that replaced the frontside bus architecture of prior Xeons, offer nearly four times the memory bandwidth, which is a big deal to HPC customers.

With the double-density blade servers, HP can cram 16 blades into a single 10U c7000 chassis, and with four of those chassis in a standard 42U rack, that works out to 1,024 cores. While the Nehalem cores themselves do not have much of a performance advantage over the Harpertowns, Ed Turkel, manager of business development for HP's cross-divisional Scalable Computing and Infrastructure organization, says that some customers are seeing a dramatic increase in performance with the Nehalems.
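The core count above is straightforward multiplication; here is the arithmetic spelled out (a sketch, with the four-chassis-per-rack figure inferred from fitting 10U enclosures into a 42U rack and the article's 1,024-core total):

```python
# Core-density arithmetic for the BL2x220c G6 configuration described above.
cores_per_socket = 4        # quad-core "Nehalem EP" Xeon 5500
sockets_per_node = 2        # two-socket server node
nodes_per_blade = 2         # double-density blade: two nodes per blade
blades_per_chassis = 16     # c7000 enclosure capacity
chassis_per_rack = 4        # four 10U chassis in a standard 42U rack

cores_per_chassis = (cores_per_socket * sockets_per_node *
                     nodes_per_blade * blades_per_chassis)
cores_per_rack = cores_per_chassis * chassis_per_rack

print(cores_per_chassis)  # 256
print(cores_per_rack)     # 1024
```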

"We are seeing as much as a 3X gain on some HPC applications - and that is without any optimization of code," says Turkel. Memory intensive applications, such as computational fluid dynamics, are seeing the biggest gains. "We had a lot of customers who were reluctant to move to quad-core Xeons because of Harpertown's frontside bus, which they thought would not give them enough bandwidth. With Nehalems, that inhibition is gone."

Intel's rebounding financials certainly reflect this.

The BL2x220c G6 blade server can use the 60 watt versions of the Xeon 5500s, which range in speed from 2.13 GHz to 2.4 GHz and which have an "L" designation in their model numbers, or the 80 watt versions, which run at between 2.13 GHz and 2.53 GHz and which have an "E" designation. Faster E, X, and W versions of the Xeon 5500s cannot be used in this Siamese-twin blade.

Each node on the two-server blade comes with 24 GB of memory standard, expandable to 96 GB using 8 GB DDR3 memory modules (which almost no one will buy at current prices). Each server node on the blade has space for one small form factor SATA or SSD disk; 120 GB and 250 GB disks and 32 GB and 64 GB SSDs are available per node. Each server node on the blade has a single PCI-Express x8 mezzanine expansion slot and two Gigabit Ethernet ports.

The base price of the BL2x220c G6 - with two server nodes with 24 GB, one processor per server, and no disks - is $6,059. The doubled-up server blade is available starting today.

HP has not shipped a double-density server blade based on AMD's Opteron processors, and Turkel didn't want to talk much about the possibilities. "We are seeing a lot of interest in Web and cloud computing for denser Opteron configurations, but I cannot comment on specific plans," Turkel said.

It stands to reason that HP probably has "Magny-Cours" Opteron 6100s and "Lisbon" Opteron 4100s slated for double-stacked blades and other dense rack-mounted and cookie sheet servers. The density play is there, and it looks like Istanbul had too short a life to merit serious engineering beyond commodity boxes, as El Reg pointed out back in June in the wake of the debut of the six-core Opterons. (See this story for the latest on AMD's planned server platform rollouts for 2010).

HP is, by the way, showing the Opteron chip some love in its "cookie sheet" ProLiant SL skinless server designs, which were announced in June. With these cookie sheet designs, HP is copying the essence of Google's minimalist server designs, eliminating all unnecessary metal from the boxes and plunking raw motherboards, drives, and power supplies onto metal trays that slide into standard racks.

Today at SC09, HP will start peddling the ProLiant SL165z, the first Opteron-based cookie sheet server from HP; the prior machines were based on the Xeon 5500s.

The SL165z is the Opteron variant of the SL160z box that was announced in June and aimed at Web caching and HPC database applications, according to HP. The SL170z has room for six drives and is aimed at Web search and database jobs, and the SL2x170z packs two two-socket Xeon 5500 servers into a single 1U tray for HPC and Web front end processing. The SL165z uses the six-core Istanbul Opteron 2400s and packs two of them onto a single board on a 1U tray.

The machine's EATX motherboard has 16 DDR2 memory slots as well as an on-board SATA controller; it comes with a single Gigabit Ethernet port and a single PCI-Express 1.0 x16 I/O slot, too. The tray has room for two 3.5-inch disk drives (you can use SAS or SATA drives), and they are not hot-swappable, because hyperscale shops usually replace a whole tray at a time when something fails. The SL165z is available today, and has an entry price of $2,965.

HP is also announcing at SC09 that the ProLiant SL family of skinless servers can now be ordered in CL3000 pre-fabbed cluster configurations. A 72-node setup will run you $189,900.

And finally, the recently acquired IBRIX clustered network storage software is being rebranded the StorageWorks X9000 line and pitched to HPC shops as appropriate for managing large datasets feeding into clusters. (Think life sciences, animation studios, and financial services).

The IBRIX Fusion file system has been ported to HP's ProLiant servers, and the current StorageWorks X9000 NAS boxes use two ProLiant servers cross-coupled to two MSA2000 storage arrays as a basic building block. An X9300 gateway will run you $50,000, an X9320 module with 48 TB of capacity and using 10 Gigabit Ethernet links costs $140,000, and a base X9720 array, which starts out with 82 TB but which scales to 656 TB, costs $160,000. ®
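For the bean counters, those list prices and capacities imply a rough price per terabyte (a sketch from the figures quoted above; it leaves out the X9300 gateway, whose capacity depends on the storage hung off it, and uses the X9720's 82 TB base configuration):

```python
# Rough list-price-per-terabyte for the X9000 configurations quoted above.
configs = {
    "X9320 (48 TB)": (140_000, 48),
    "X9720 base (82 TB)": (160_000, 82),
}

for name, (price_usd, capacity_tb) in configs.items():
    per_tb = price_usd / capacity_tb
    print(f"{name}: ${per_tb:,.0f}/TB")
```

On those numbers, the bigger X9720 works out noticeably cheaper per terabyte than the X9320 module, before any scaling toward its 656 TB ceiling.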

