Original URL: http://www.theregister.co.uk/2010/05/14/nec_nehalem_ex_tukwila/

NEC kicks out glueless Nehalem EX beast

Itanium relegated to mainframes

By Timothy Prickett Morgan

Posted in HPC, 14th May 2010 00:31 GMT

Like all remaining Itanium vendors aside from HP, NEC is shifting its focus to Intel's new Nehalem-EX Xeon 7500 processors for its big-iron beasts.

The times, they are a-changin'. NEC made a big deal about high-end Itanium-based systems in the prior decade, notching up big benchmark wins with its 16-socket "Azuza" and 32-socket "Azama" machines in the Express5800 family, sporting its A family of chipsets.

Now, in an effort to get to market quickly and to offer some differentiation compared to others building Nehalem-EX boxes, Mike Mitsch, general manager for the IT Platform Group at NEC America, says that NEC's engineers took some of the RAS goodies from the Itanium versions of the Express5800 server line (the Express5800/1320Xf and its A3 chipset, to be precise) and ported them over to the new Express5800 GX series, which uses the Xeon 7500 processors.

The idea is exactly what El Reg quipped when the Xeon 7500s were launched - they're like Itanium, only this time you might actually use 'em.

The Express5800 "Glueless Xeon" GX servers are completely designed by NEC, not developed in conjunction with Unisys like the "Monster Xeon" MX machines (using the four-core and six-core Xeon 7400 processors) that were announced in September 2008.

The Express5800 MX machines are cell-based symmetric multiprocessing systems based on four-socket mobos. Each Monster Xeon server chassis has one four-socket cell, and four chassis are lashed together using external SMP links to create a 96-core box, known by the easy-to-remember name Express5800/A1160 MX server.

The MX chipset used in the server sports 80GB/sec of bandwidth to link the main and cache memories on the four server nodes into an SMP configuration, and the box holds 1TB of main memory using 8GB fully buffered DDR2 memory sticks.

The Monster Xeon server designed by NEC and Unisys (and manufactured by NEC) supported the four-core and six-core "Dunnington" Xeon 7400 processors, which used the old frontside bus architecture instead of the new QuickPath Interconnect that the Nehalem and now Westmere Xeon chips employ. The Dunningtons did not support Intel's Hyper-Threading simultaneous multithreading, either, which means the box topped out at supporting 64 threads running at 2.4GHz or 96 threads running at 2.66GHz.
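Those ceilings are just socket arithmetic. A quick sketch of the sums (variable names are ours, not NEC's):

```python
# Thread ceiling for the Express5800/A1160 MX: four chassis, each with a
# four-socket cell, and no simultaneous multithreading on Dunnington.
chassis = 4
sockets_per_chassis = 4
threads_per_core = 1  # Hyper-Threading absent on Xeon 7400

quad_core_threads = chassis * sockets_per_chassis * 4 * threads_per_core  # 2.4GHz parts
six_core_threads = chassis * sockets_per_chassis * 6 * threads_per_core   # 2.66GHz parts
print(quad_core_threads, six_core_threads)  # 64 96
```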

Rather than take the MX chipset and retrofit it to support the new Xeon 7500 processors with their new sockets and interconnect, NEC decided to instead start from scratch and build a midrange box that gluelessly scales from two to four sockets in a single system. By doing so, it could get a machine into the field relatively shortly after the Xeon 7500s were announced. The Xeon 7500s debuted on March 30, and NEC started shipping servers using them on May 5 - but only started telling people this week, for some reason.

Dell was also bragging that its four-socket Xeon 7500 boxes, the PowerEdge R810, R910, and M910, which El Reg told you all about here, started shipping on April 27. IBM's rack and blade servers using the Intel Xeon 7500 chips and Big Blue's ex5 chipset were rolled out on March 30 and will ship on June 25. Cisco Systems and Silicon Graphics have fielded four-socket Xeon 7500 machines, as well. As we previously reported, Hewlett-Packard is working on its own Nehalem-EX beastie boxes, the four-socket ProLiant 580 and the eight-socket ProLiant 980, and Oracle is also cooking up its own eight-socket rack box using the chip. Both HP and Oracle are rumored to be getting their boxes into the field in the June timeframe.

NEC wanted to be on the front-end of this wave, says Mitsch, which is why it went with the glueless GX design instead of re-engineering the MX design.

The Express5800/A1080a GX server comes in a 7U chassis and is offered in three different flavors. The A1080a-S puts a single four-socket board in the box with a single service processor. That service processor is one of the key differences between the NEC machines and other Nehalem-EX boxes, in that it runs the Intelligent QPI BIOS that NEC developed for its Itanium-based Express5800 servers (remember, Itanium was supposed to have QPI already, and NEC was ready for it even if Intel was not).

Machine check? Check.

It is this modified BIOS that hooks into the machine check architecture (MCA) features that Intel says give the Xeon 7500s some of the reliability features they need to compete with Itanium and mainframe architectures.

A second configuration of the machine, the A1080a-D, puts two mobos and two service processors in the box, allowing the machine to use hardware partitioning to put two four-socket systems in a 7U space. The A1080a-E is the eight-socket SMP version that gluelessly expands beyond four sockets to eight using Intel's "Boxboro" chipset, without resorting to the node controller switching architecture that IBM is using in its ex5 chipset.

NEC Express 5800 Nehalem EX Box

Eight Xeon 7500 sockets from NEC, glue not included

In an eight-socket config, the Express5800/A1080a tops out at 64 cores and 128 threads running at 2.26GHz, and 2TB of main memory once 16GB memory sticks are supported, sometime in the second half of this year. NEC is supporting three of the Nehalem-EX chips in the box: the six-core, 2GHz E7540; the eight-core, 2GHz X7550; and the eight-core, 2.26GHz X7560. There are eight other Xeon 7500 and 6500 chips (the latter being special HPC variants) available from Intel, but they are not supported in this NEC box.
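The top-end figures fall straight out of the chip specs. A minimal sketch (names ours):

```python
# Maximum core and thread counts for the eight-socket A1080a-E
# fitted with the eight-core X7560, Hyper-Threading enabled.
sockets = 8
cores_per_socket = 8   # X7560
threads_per_core = 2   # Hyper-Threading

cores = sockets * cores_per_socket     # 64 cores
threads = cores * threads_per_core     # 128 threads
print(cores, threads)  # 64 128
```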

Each mobo in the GX server has a dozen PCI Express 2.0 x8 slots and two x16 slots. The chassis has room for twelve 2.5-inch SATA or SAS disk drives, and the mobos have on-board SAS RAID controllers with six ports.

Eight sockets is a lot of headroom for many customers, but the whole point of the Itanium 91XX and Xeon 7XXX lines is to be able to scale up to 16, 32, or maybe even more sockets for truly large systems. Having sold 32-socket machines since 2002, NEC knows that customers who need more scalability than an eight-socket box want to see a fatter Express5800 system. And so the roadmap from NEC calls for the MX chipset to be available in two flavors - one scaling from 2 to 16 sockets and another scaling from 2 to 32 sockets. This will not be a glueless design, obviously, because the Nehalem-EX chip does not have enough QPI ports to scale gluelessly beyond eight sockets.

By the way, the current Express5800 GX machine will also support the future "Westmere-EX" processor from Intel, due next year and probably sporting a dozen cores and giving a 50 per cent performance bump.

The Express5800/A1080a GX server supports Windows Server 2008 and the R2 update of that Microsoft operating system, which has the code needed to support the MCA features on the Xeon 7500 processors. Red Hat's Enterprise Linux 5.5 also supports Xeon 7500s, but you will need RHEL 6 (which is still in beta and not expected for a few months) to support the MCA features.

If you like SUSE Linux Enterprise Server 11, you're going to have to wait until SP1, which is expected in the coming weeks, for the MCA features to be supported. And while NEC has certified VMware's ESX Server 4.0 hypervisor on the machine, you need ESX Server 4.1, which is in the works as well, to use MCA.

In a base configuration, an A1080a-S GX server with four X7550 processors and 128GB of memory costs $53,658. An eight-socket A1080a-D machine in an SMP setup (meaning it does not use hardware partitioning, which adds costs because of the extra service processor) with 256GB of memory costs $88,774 using the X7550 processors and $101,574 using the faster X7560 processors. That's an extra $12,800, or 14.4 per cent, for maybe 13 per cent more oomph. About $7,704 of that is just the difference in the cost of Intel processors (assuming 1,000-unit prices). The remaining $5,096 appears to be what we might call pure profit - or margin for haggling, more likely.
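If you want to check our sums, here's the back-of-the-envelope math (using only the figures quoted above; the chip-price gap is NEC's implied $7,704 across eight sockets):

```python
# What the X7560 premium buys you in the eight-socket, 256GB config.
x7550_box = 88_774       # system price with eight X7550s
x7560_box = 101_574      # system price with eight X7560s
intel_chip_gap = 7_704   # Intel 1,000-unit price difference across eight chips

system_gap = x7560_box - x7550_box            # the extra you pay NEC
margin = system_gap - intel_chip_gap          # what's left after the chip delta
pct = round(100 * system_gap / x7550_box, 1)  # premium as a percentage
print(system_gap, margin, pct)  # 12800 5096 14.4
```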

One last thing: Like all the other former Itanium enthusiasts, NEC is walking very carefully when it comes to the quad-core "Tukwila" Itanium 9300 processors that were announced in February. There are not going to be upgrades of the Express5800/1320Xf servers running the new Itanium processor and supporting Windows and Linux workloads.

But Mitsch says that NEC is going to ship mainframes running its GCOS and ACOS proprietary platforms supporting the new Itanium chips. The feeds and speeds of these machines were not available at press time, and they are sold mostly in Japan.

Windows Server 2008 R2 and RHEL 5 will be the last releases of operating system software from Microsoft and Red Hat to support the Itanium chip family. Novell has not said what its Itanium plans are, but it could turn out that SLES 11 will be the last release to support Itanium, something we won't learn until SLES 12 is getting closer to coming out.

If NEC gets in a pinch, it can always ink a reseller agreement with HP for an Itanium box. NEC used to be a contributor to the HP-UX Unix platform, which shows you that HP and NEC are not always enemies in the server market. ®