Machine check? Check.
It is this modified BIOS that hooks into the machine check architecture (MCA) features that Intel says give the Xeon 7500s some of the reliability features they need to compete with Itanium and mainframe architectures.
A second configuration of the machine, the A1080a-D, puts two mobos and two service processors in the box, allowing the machine to use hardware partitioning to carve two four-socket systems out of a 7U space. The A1080a-E is the eight-socket SMP version that expands gluelessly beyond four sockets to eight using Intel's "Boxboro" chipset, without resorting to the node controller switching architecture that IBM is using in its eX5 chipset.
Eight Xeon 7500 sockets from NEC, glue not included
In an eight-socket config, the Express5800/A1080a tops out at 64 cores and 128 threads running at 2.26GHz, and 2TB of main memory once 16GB memory sticks are supported, sometime in the second half of this year. NEC is supporting three of the Nehalem-EX chips in the box: the six-core, 2GHz E7540; the eight-core, 2GHz X7550; and the eight-core, 2.26GHz X7560. There are eight other Xeon 7500 and 6500 chips (the latter being special HPC variants) available from Intel, but not in this NEC box.
Each mobo in the GX server has a dozen PCI Express 2.0 x8 slots and two x16 slots. The chassis has room for twelve 2.5-inch SATA or SAS disk drives, and the mobos have on-board SAS RAID controllers with six ports.
Eight sockets is a lot of headroom for many customers, but the whole point of the Itanium 91XX and Xeon 7XXX lines is to be able to scale up to 16, 32, or maybe even more sockets for truly large systems. Having sold 32-socket machines since 2002, NEC knows that customers who need more scalability than an eight-socket box want to see a fatter Express5800 system. And so the roadmap from NEC calls for the MX chipset to be available in two flavors - one scaling from 2 to 16 sockets and another scaling from 2 to 32 sockets. This will not be a glueless design, obviously, because the Nehalem-EX chip does not have enough QPI ports to scale gluelessly beyond eight sockets.
By the way, the current Express5800 GX machine will also support the future "Westmere-EX" processor from Intel, due next year and probably sporting a dozen cores and giving a 50 per cent performance bump.
The Express5800/A1080a GX server supports Windows Server 2008 and the R2 update of that Microsoft operating system, which has the code needed to support the MCA features on the Xeon 7500 processors. Red Hat's Enterprise Linux 5.5 also supports Xeon 7500s, but you will need RHEL 6 (which is still in beta and not expected for a few months) to support the MCA features.
If you like SUSE Linux Enterprise Server 11, you're going to have to wait until SP1, which is expected in the coming weeks, for the MCA features to be supported. And while NEC has certified VMware's ESX Server 4.0 hypervisor on the machine, you need ESX Server 4.1, which is in the works as well, to use MCA.
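The MCA support the operating systems are chasing builds on the basic machine-check capability the processor itself advertises. As a rough illustration only (this is our own sketch, not anything NEC or the OS vendors ship), the snippet below parses the flags line that Linux exposes in /proc/cpuinfo and checks for the mce and mca feature bits. Note that those bits indicate baseline machine-check support on any modern x86 chip, not the newer recoverable-error features specific to the Xeon 7500s.

```python
def has_machine_check(cpuinfo_text: str) -> bool:
    """Return True if the /proc/cpuinfo flags line lists both
    the mce (machine check exception) and mca (machine check
    architecture) feature bits."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return "mce" in flags and "mca" in flags
    return False

# Abbreviated, hypothetical /proc/cpuinfo excerpt
sample = "processor\t: 0\nflags\t\t: fpu vme mce mca sse sse2\n"
print(has_machine_check(sample))  # True
```

On a real box you would feed it the contents of /proc/cpuinfo; whether the OS can actually recover from an error flagged by the hardware is a separate question, which is exactly why RHEL 6, SLES 11 SP1, and ESX Server 4.1 matter here.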
In a base configuration, an A1080a-S GX server with four X7550 processors and 128GB of memory costs $53,658. An eight-socket A1080a-D machine in an SMP setup (meaning it does not use hardware partitioning, which adds cost because of the extra service processor) with 256GB of memory costs $88,774 using the X7550 processors and $101,574 using the faster X7560 processors. That's an extra $12,800, or 14.4 per cent, for maybe 13 per cent more oomph. About $7,704 of that is just the difference in the cost of Intel processors (assuming 1,000-unit prices). The remaining $5,096 appears to be what we might call pure profit - or margin for haggling, more likely.
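The arithmetic above can be checked in a few lines. The per-chip 1,000-unit prices below are our own assumption (the article only gives the $7,704 aggregate), but they reproduce its numbers:

```python
# Assumed Intel 1,000-unit list prices; not stated in the article.
X7550_1K = 2_729
X7560_1K = 3_692
SOCKETS = 8

system_x7550 = 88_774    # eight-socket A1080a-D, 256GB, X7550s
system_x7560 = 101_574   # same box with X7560s

uplift = system_x7560 - system_x7550          # 12,800
uplift_pct = 100 * uplift / system_x7550      # ~14.4 per cent
cpu_delta = SOCKETS * (X7560_1K - X7550_1K)   # 7,704
margin = uplift - cpu_delta                   # 5,096 left over
```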
One last thing: Like all the other former Itanium enthusiasts, NEC is walking very carefully when it comes to the quad-core "Tukwila" Itanium 9300 processors that were announced in February. There are not going to be upgrades of the Express5800/1320Xf servers running the new Itanium processor and supporting Windows and Linux workloads.
But Mitsch says that NEC is going to ship mainframes running its GCOS and ACOS proprietary platforms supporting the new Itanium chips. The feeds and speeds of these machines were not available at press time, and they are sold mostly in Japan.
Windows Server 2008 R2 and RHEL 5 will be the last releases of operating system software from Microsoft and Red Hat to support the Itanium family of chips. Novell has not said what its Itanium plans are, but it could turn out that SLES 11 will be the last release to support Itanium, something we won't learn until SLES 12 gets closer to release.
If NEC gets in a pinch, it can always ink a reseller agreement with HP for an Itanium box. NEC used to be a contributor to the HP-UX Unix platform, which shows you that HP and NEC are not always enemies in the server market. ®
Comments

Glueless Nehalem EX larger than 4 sockets is a bad thing
Best – GlueLess (requires direct connections)
Power7 to 32 sockets
Nehalem EX up to 4 sockets
Tukwila up to 5 sockets
Good – GlueFull (interconnect switch/bus)
IBM eX5 up to 16 sockets
HP Nehalem EX with "two XNC's (cross-network connectors)" up to 8 sockets (DL980 G7)
HP Superdome with SX3000 up to 32 Sockets
Sun(Oracle) SPARC64 and SPARC-CMT
Worst – Chip hopping
Nehalem EX > 4 sockets without proprietary chipset
Tukwila > 5 sockets without HP SX3000 chipset (BL890c)
How much to fill her up?
Will it run Quake? :)