Whither the HP Nehalem-EX beastie boxes?
High end moves at low speed
Although HP had some machinery at Intel's eight-core Nehalem-EX Xeon 7500 launch event last week, it didn't make a peep about its plans for the iron, leaving just about every other major server vendor in the world to do the talking.
The company's reticence is odd. HP is the largest server shipper in the world and the second largest provider of big iron in the world. You would expect the company to be gung-ho about a new high-end processor to pack into its ProLiant systems.
HP's silence was in marked contrast to its performance at Intel's quad-core Tukwila Itanium 9300 launch at the International Solid-State Circuits Conference in San Francisco in early February, when HP did a lot of the talking and none of the other name-brand server makers said squat about their plans.
Although IBM, Silicon Graphics, Cray, Dell, and others have hogged all the headlines for servers based on the Xeon 7500s, HP does indeed have plans for Xeon 7500 boxes. It very likely also has plans for their HPC offshoots, the Xeon 6500s, which are only available in two-socket configurations, are tuned for supercomputing workloads, and offer better bang for the buck on floating-point operations than standard Xeon 7500 parts.
At the launch event's server showroom, HP lugged out a four-socket rack server and a bigger box that was tucked behind a rack enclosure. These machines, an HP source confirmed to El Reg, were supposed to be the ProLiant DL580 G6 and the DL980 G6.
The DL580 G5 machines are based on Intel's quad-core and six-core Dunnington Xeon 7400 processors, which have that stale old frontside bus architecture from the NetBurst era, and can scale up to 256GB of DDR2 main memory. (The company also sells a quad-socket ProLiant DL585 that uses the old six-core Opteron 8400s with the same 256GB memory limit, and has not yet announced a four-socket or eight-socket Opteron 6100 machine, either. So Intel shouldn't feel all that slighted.)
The HP Nehalem-EX boxes on display didn't have their faceplates on, but here is the first one:
Is this a ProLiant DL580 G6? Seems kinda small
The current DL580 G5 machine is a 4U box that has four sockets and room for sixteen 2.5-inch disks. The unit shown above is a 2U machine and looks more like a traditional workhorse DL380 Xeon DP machine. It would be interesting if there is a DL380 variant offering two sockets and lots of memory slots, and if that is the box HP was showing. (I wasn't at the event, so I couldn't peer into the box to see for myself.)
I have a hard time believing that a DL580 G6 won't come with 32 or 64 DDR3 memory slots as well as supporting four sockets, but maybe HP is planning to build only a four-socket box with 32 memory slots. That would merely match the current memory capacity of the DL580 G5, but would do so in half the rack space.
The other machine HP showed off is the ProLiant DL980, which is possibly in this rack:
Unisys has a bigger rack than HP - size, apparently, is not all that
HP has not sold an eight-socket Xeon box for some time, but is still selling eight-socket Opteron machines: the ProLiant DL785 G5, based on the Opteron 8300s from Advanced Micro Devices, and the DL785 G6, based on last year's Opteron 8400s. These two machines come in a 7U form factor, top out at 512GB of main memory, have eleven peripheral slots, and hold eight 2.5-inch disks.
Once again, HP has not said jack about its plans for the twelve-core Magny-Cours Opteron 6100s, which can be configured gluelessly in four-socket machines that are functionally equivalent to the Nehalem-EX machines. You'll end up with 48 cores either way, but the Opteron 6100s top out at 512GB of memory (even with 16GB memory sticks) because that's as much as the memory controllers in the chips will address.
The Xeon 7500s should allow up to 1TB with 64 slots and using 16GB sticks. The Opteron 6100s top out at 384GB using 8GB memory sticks, since they only have 48 memory slots in a four-socket configuration. Advantage Intel on memory expansion. The Intel chips are more expensive than the AMD ones, core for core. Advantage AMD. We'll see how this plays out on the benchmarks and in the market.
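The memory math above is simple slot-times-stick arithmetic; here's a quick sketch showing where the 1TB and 384GB ceilings come from (slot counts and stick sizes taken from the figures cited above, not from a full HP, Intel, or AMD spec sheet):

```python
# Memory ceilings for four-socket configs, using the DIMM slot
# counts and stick sizes cited in the article.
def max_memory_gb(dimm_slots: int, stick_gb: int) -> int:
    """Total capacity = number of DIMM slots x size per stick."""
    return dimm_slots * stick_gb

xeon_7500   = max_memory_gb(dimm_slots=64, stick_gb=16)  # 1024GB, i.e. 1TB
opteron_6100 = max_memory_gb(dimm_slots=48, stick_gb=8)  # 384GB

print(xeon_7500, opteron_6100)  # 1024 384
```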
While the lack of detail about HP's Nehalem-EX boxes is surprising, sources at HP tell El Reg that they will not ship machines using the top-end Intel chips - and specifically the ProLiant DL580 and 980 machines - until the second half of 2010. That's a long time from now, particularly with other vendors already on the field in the x64 server price war.
HP said at the Itanium 9300 launch that it would get Integrity boxes using these Tukwila chips announced within 90 days. That puts the announcement at anywhere from later in April to early May, with early May looking more likely.
It's a reasonable guess that a lot of the components in the ProLiant/Xeon 7500 and Integrity/Itanium 9300 servers are similar. HP has not said what its chipset plans are. But if it is building a kicker to its own Arches sx2000 chipset, used with the Itanium-based Integrity servers, and converging on a single chipset for Xeon and Itanium boxes (as Intel has done with its own Boxboro 7500 chipset for the Itanium 9300s and Xeon 7500s), then it may have chipset issues to resolve before it can get either set of midrange and high-end servers into the field.
If HP is using the Boxboro chipset for Xeon 7500 and Itanium 9300 servers with two, four, and eight sockets, as the Intel chipset can do all by itself, and then relying on an Arches chipset kicker to push up to 16, 32, and 64 sockets for its Itanium boxes, it's a real mystery why the smaller machines are not out now like those from IBM, Dell, Cray, SGI, Fujitsu, and so on.
HP is in no mood to explain the situation further. ®
The problem is the DL980 itself. HP knows the Nehalem-EX does not scale well past four sockets because there are only four QPI links on the chip, so it created an XNC (cross-network connector) to mimic IBM's eX5 architecture. Unfortunately, it is HP's first try at a "glue" chip for Xeons, and it does not work very well. HP also has I/O connected to only some of the CPUs, since it is a top-and-bottom drawer configuration. The 128 DIMM slots are also causing heat issues: they sit in front of the processors, and the heat coming off the memory can do wacky things to the Nehalems.
There is no redundant clock, no redundant fabric or service processor, no double-chip sparing, and no redundant address/control pins on the DIMMs, so you will need two machines clustered together.
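The commenter's scaling point can be put into simple numbers: a glueless, fully connected topology needs each socket to link directly to every other socket, which a four-link chip can't do past a handful of sockets. A rough sketch (link count taken from the four-QPI-link figure above; this ignores that one link typically feeds an I/O hub, so it's a generous upper bound, not a board-level model):

```python
# Why eight sockets need a "glue" chip: an all-to-all topology
# needs n-1 links per socket, but Nehalem-EX has only 4 QPI links.
QPI_LINKS_PER_SOCKET = 4

def links_needed_full_mesh(sockets: int) -> int:
    """Links each socket needs to reach every other socket directly."""
    return sockets - 1

for n in (2, 4, 8):
    needed = links_needed_full_mesh(n)
    glueless = needed <= QPI_LINKS_PER_SOCKET
    print(f"{n} sockets: {needed} links per socket, glueless={glueless}")
```

At eight sockets, each chip would need seven direct links, so a node controller (HP's XNC here, or IBM's eX5 equivalent) has to bridge the gap.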