Original URL: https://www.theregister.com/2010/04/27/hp_tukwila_servers/

HP dons blades to scale Superdome 2

Mountain out of Tukwila molehills

By Timothy Prickett Morgan

Posted in Channel, 27th April 2010 06:01 GMT

After several months of silence in the wake of Intel's February launch of the "Tukwila" quad-core Itanium 9300 processors, Hewlett-Packard is finally describing what its Tukwila-based machines will look like.

At the HP Technology@Work 2010 conference, which is being held from April 26 through 29 in Frankfurt, Germany, HP is launching new snap-together Itanium-based Integrity blade servers that offer from two to eight sockets in a single system image. The company is also divulging plans for an Integrity rack server, designed to replace several machines in the current Itanium 9000 and 9100 lineup, and of course, it's raising the curtain just a little bit on the high-end Integrity Superdome 2 server.

The blades are available now, but the rack machine and the Superdome 2 are slated for "later this year," according to Lorraine Bartlett, vice president of marketing, strategy and operations for the Business Critical Systems division at HP. (BCS is part of the Enterprise Servers, Storage, and Networking group). With the Tukwila chips originally slated for 2007, then 2008, then 2009, and then finally pushed to early 2010 as one technology after another was changed around the chip, waiting a few extra months to actually get a full understanding of the Superdome 2 machines is probably not going to kill HP-UX shops that are dependent on these platforms to scale up their workloads. But it may drive them to drink. Again.

If you were expecting spec sheets, data sheets, and loads of information on all the new Tukwila systems, you are bound to be disappointed. HP is just not providing this information yet, even on machines that are supposed to be available starting today. The briefing deck was one of the thinnest I have seen in more than two decades of watching systems, especially for what many (myself included) consider such an important server announcement: one on which somewhere between $4bn and $5bn in server revenues depends, along with heaven only knows how many more billions in storage, software, and services sales.

Here's the family photo of the new Tukwila machines from HP:

HP Tukwila Itanium Systems

Even though the image is smaller, that machine on the left is the Superdome 2, the kicker to the current high-end 64-socket Integrity machines. Across the top are the Integrity BL860c i2, BL870c i2, and BL890c i2, which are really just BL860c i2 blades that snap together to create ever larger SMPs. In the bottom middle is the rx2800 i2 rack-mounted server. And off to the right is a BladeSystem Matrix setup using the new Integrity blades and running HP-UX 11i v3.

The new Tukwila-based Integrity blade servers are based on Intel's "Boxboro" chipset, the same one that supports the "Nehalem-EX" Xeon 7500 processors. (Those are machines such as the ProLiant DL580 and DL980 that El Reg told you about earlier this month, which are not being formally announced today and may not be for some time yet.) Kirk Bresniker, vice president and chief technologist for the BCS unit and an HP Fellow to boot, says that by putting the Blade Link SMP scalability interconnect on the front of the blade, HP was able to allow the Integrity blades to slide into the same c3000 and c7000 chassis it already uses for x64 and Itanium blades.

The Tukwila blades are full-height blades, like the two-socket and four-socket Itanium 9000/9100 machines they replace. The two-socket Tukwila blade offers about three times the compute capacity of the prior BL860c Itanium blades. By the way, as far as I know you can't make a six-socket SMP box using the Blade Link. (There isn't a BL880c i2, but the product naming left room for it.) And based on internal benchmarks conducted inside HP back in February, the company reckons that the new Integrity BL i2 lineup offers up to nine times the oomph in half the space of comparable earlier Integrity rack-mounted systems. In fact, these blades will be replacing the rack-mounted 4U and 7U SMP machines HP sold with prior generations of Integrity iron.
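To make the Blade Link scaling concrete, here is a back-of-the-envelope sketch. The two-sockets-per-blade and four-cores-per-socket figures come from the blade design and the quad-core Tukwila chip described above; the blade count per model is inferred from the naming, so treat this as an illustration rather than a spec sheet:

```python
# Rough sketch of how the Blade Link lineup scales, assuming each BL860c i2
# building block carries two sockets of quad-core Tukwila (Itanium 9300) chips.
CORES_PER_SOCKET = 4   # quad-core Itanium 9300 ("Tukwila")
SOCKETS_PER_BLADE = 2  # each BL860c i2 building block

lineup = {
    "BL860c i2": 1,  # single blade
    "BL870c i2": 2,  # two blades linked via Blade Link
    "BL890c i2": 4,  # four blades linked via Blade Link
}

for model, blades in lineup.items():
    sockets = blades * SOCKETS_PER_BLADE
    cores = sockets * CORES_PER_SOCKET
    print(f"{model}: {blades} blade(s), {sockets} sockets, {cores} cores")

# BL860c i2: 1 blade(s), 2 sockets, 8 cores
# BL870c i2: 2 blade(s), 4 sockets, 16 cores
# BL890c i2: 4 blade(s), 8 sockets, 32 cores
# Note there is no three-blade, six-socket BL880c i2 in the lineup.
```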

If you haven't gotten the message that HP is all about blades, you will by the end of this story.

The new Tukwila BL i2 blades support the homegrown Virtual Connect Flex-10 virtual networking for servers and storage, which is popular on HP's ProLiant x64-based BladeSystems. Integrity Virtual Machines, which is HP's home-cooked virtualization technology for Itanium machines, is also supported on the new boxes, as is HP-UX 11i v3. Presumably, HP will have nice things to say about OpenVMS and NonStop on these machines at some point, but it didn't in any of the materials I have laid eyes on.

The remaining feeds and speeds for the new Integrity blades are a mystery because HP didn't have spec sheets ready as El Reg went to press. But in an ironic shift among server makers, the prices for base machines are available. A BL860c i2 blade costs $6,490 in a base configuration; a BL870c i2 costs $13,970; and a BL890c i2 costs $30,935.

Very little was divulged about the rx2800 i2 rack server besides its name and the fact that it was being put into the field to appease customers who are just not quite ready for blades, like remote offices with modest compute needs. The rx2800, says Bresniker, supports 24 DDR3 memory slots, compared to eight in the rx2600 entry machine it replaces; he adds that the new eight-core box crams the performance of the rx6600 (an eight-core, four-socket machine weighing in at 7U) into a 2U space. That's more than a factor of three improvement in compute density. From the outside, the rx2800 i2 looks more or less like the rx2600 it replaces, with room for eight 2.5-inch disks mounted in the front.
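The density claim is simple arithmetic, based purely on the chassis heights quoted above:

```python
# Density arithmetic behind the rx2800 i2 claim: the same eight-core
# performance moves from a 7U rx6600 chassis into a 2U rx2800 i2 chassis.
rx6600_height_u = 7
rx2800_height_u = 2
print(f"Compute density gain: {rx6600_height_u / rx2800_height_u:.1f}x")  # 3.5x

# Memory slot count also jumps from 8 (rx2600) to 24 (rx2800 i2)
print(f"Memory slot increase: {24 // 8}x")  # 3x
```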

Beyond Superdome

Which brings us around the HP Tukwila Integrity family photo to the Superdome 2, about which HP will say very little today because it is not shipping until the second half of 2010, along with the BladeSystem Matrix machine running HP-UX and offering a ready-to-go virtualized, cloudy infrastructure stack.

The many prior generations of Superdome machines were based on homegrown chipsets. The original "Yosemite" chipset for the Superdomes used PA-RISC 8600 processors - Yosemite National Park being where Half Dome mountain is located and where Dick Lampman, director of HP Labs when the original Superdomes were created a decade ago, used to climb. The kickers were the "Pinnacles" sx1000 chipset, which finally supported Itanium, and the "Arches" sx2000, supporting the Itanium 2 processors. With the Superdome 2, HP continues in this tradition with the sx3000 chipset, which I am told did not have a code name, though I simply don't believe it. That's about where the similarity ends.

With Superdome 2, HP is making some big changes. First, HP is ditching the four-socket cell board architecture and the non-standard, fatter system rack that have defined the PA-RISC and Integrity Superdomes up to now. The Superdome 2 is based on a modified c7000 enclosure, which is 10U high and has room for eight full-height server blades. But you can't cram all of the SMP/NUMA goodness of the Superdome into a chassis that is only 10U high, and you can't just cobble together a Blade Link on the front of the blades to lash all eight blades in a chassis into a single system image.

And so, HP cut the top off the c7000 chassis and added another 8U of space for all the Superdome 2 and sx3000 goodies. Now, instead of cramming 64 cores into a rack that is fatter than everything else in the server room, HP can put 64 cores in under a half rack of space. Some shops are going to need 128-core images, too, like they have with existing Integrity Superdomes based on the 9000 and 9100 series of dual-core Itanium 2 chips, and some are probably really wondering where the heck the 256-core machines are.
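Here is the back-of-the-envelope math on that "under a half rack" claim, assuming a standard 42U rack (my assumption, not anything HP has published):

```python
# Rough check on "64 cores in under half a rack" for Superdome 2.
enclosure_u = 10 + 8                 # modified c7000 plus the 8U extension
blades, sockets_per_blade, cores_per_socket = 8, 2, 4

cores = blades * sockets_per_blade * cores_per_socket
print(f"Cores per enclosure: {cores}")                        # 64
print(f"Enclosure height: {enclosure_u}U")                    # 18U
print(f"Under half of a 42U rack? {enclosure_u < 42 / 2}")    # True
```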

Each Tukwila blade in the 18U Superdome 2 enclosure has 32 DDR3 DIMM sockets and two processor sockets, for a maximum of eight cores per blade. If you assume 8 GB memory sticks, that's 2 TB of main memory for 64 cores, which is a perfectly respectable amount of memory that will no doubt double to 4 TB when 16 GB memory sticks are available. (Probably by the time Superdome 2 ships, in fact.)
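The memory arithmetic works out like this, again just a sketch based on the numbers above:

```python
# Memory math behind the 2 TB figure: eight blades per enclosure, each with
# 32 DDR3 DIMM slots, populated with 8 GB sticks (16 GB sticks double it).
blades = 8
dimm_slots_per_blade = 32

for dimm_gb in (8, 16):
    total_gb = blades * dimm_slots_per_blade * dimm_gb
    print(f"{dimm_gb} GB DIMMs: {total_gb} GB = {total_gb / 1024:.0f} TB")

# 8 GB DIMMs: 2048 GB = 2 TB
# 16 GB DIMMs: 4096 GB = 4 TB
```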

Bresniker said that the sx3000 chipset has three elements: an I/O chip, node controller interfaces, and a crossbar chip. But oddly enough, all the BladeSystem I/O at the bottom of the blade is also available to the Superdome 2 nodes, so they can use Virtual Connect and integrated switching if customers want to go that way.

With Superdome 2, the crossbar fabric is fully redundant and fully fault tolerant, unlike the prior Superdome crossbar, and everything in the system is dual-path and can automatically fail over and retry in the event a component fails. This dual fabric is active/active, and the system load balances across both sets of paths between system components until something fails. When you identify and fix a broken component, which is possible because the system has hot-swap CPU, memory, and I/O, the crossbar figures out when it is fixed and rebalances the load.
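To picture what an active/active dual fabric with automatic failover and rebalancing means in practice, here is a purely conceptual sketch. It is emphatically not HP's sx3000 logic, just an illustration of balancing traffic round-robin across two paths, retrying on the surviving path when one fails, and folding a repaired path back into rotation:

```python
# Conceptual sketch only (not HP's implementation) of the active/active,
# dual-path idea: traffic is balanced round-robin across both fabrics, a send
# falls back to the healthy twin when one fabric fails, and the repaired
# fabric rejoins the rotation once it is fixed.
import itertools

class DualFabric:
    def __init__(self):
        self.healthy = {"fabric A": True, "fabric B": True}
        self._paths = itertools.cycle(self.healthy)  # round-robin balancing

    def send(self, packet):
        for _ in range(len(self.healthy)):           # try each path at most once
            path = next(self._paths)
            if self.healthy[path]:
                return f"{packet} sent via {path}"
        raise RuntimeError("both fabrics are down")

    def fail(self, path):
        self.healthy[path] = False

    def repair(self, path):
        self.healthy[path] = True                    # rebalancing resumes

fabric = DualFabric()
print(fabric.send("read"))    # read sent via fabric A
print(fabric.send("write"))   # write sent via fabric B
fabric.fail("fabric B")
print(fabric.send("retry"))   # retry sent via fabric A
fabric.repair("fabric B")
print(fabric.send("read"))    # read sent via fabric B (back in rotation)
```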

Superdome 2 will also support PCI-Express 2.0 peripherals, and I/O can be added to the system independently of CPU and memory boards. So now you don't have to buy a cell board just because you want more I/O. Even though it is hard to believe, there are some I/O intensive workloads where customers have topped out the I/O slots in a Superdome. Exactly how this works is unclear, but as soon as El Reg gets the details, we'll let you know.

The upshot of the blading of the Superdome line is that an entry Superdome 2 box will cost around 40 per cent less than an existing entry Superdome box. Precise pricing was not announced, since the machines are not yet shipping.

When I asked if it was possible to plunk a pair of Xeon 7500 processors in a modified Superdome 2 motherboard and use the sx3000 chipset to make a very big Xeon machine, Bresniker and Bartlett just laughed. And they had to jump to the next call, so I didn't get a chance to ask about the possibilities of HP-UX being put on such a machine. But it is something to think about. ®