Bull to do homegrown Nehalem EX chipset
Fame 2G and its Mesca servers
French server maker Bull is getting serious about the server racket, and is doing the engineering needed to back up its ambitions.
Only two weeks ago, Bull announced that it was creating its own line of extreme-scale blade supercomputers with the unfortunately and presumably unintentionally hilarious name bullx. The machines do, however, include plenty of clever hardware such as nVidia Tesla GPU co-processors, and software such as a complete Linux stack for HPC shops.
And now El Reg has learned that Bull is cooking up its own chipset to create very large servers based on Intel's forthcoming Nehalem EX family of eight-core processors.
According to documents obtained by El Reg, Bull has been planning for many years to create a single family of chipsets that would span its Itanium and Xeon processors. The plan was hatched in the wake of Intel's announcement that it would deliver a Common System Interface (CSI) for the Itaniums and Xeons, a technology we now know - after many years of delays - as the QuickPath Interconnect (QPI).
QPI offers high bandwidth, point-to-point interconnections between processors, memory, and I/O, and is substantially faster and easier to engineer than the frontside bus architecture of Xeon processors that predate the current Nehalem EP chips, and of the current Montvale Itaniums.
The earliest roadmaps we have seen came out of Bull back in the summer of 2006, when the quad-core Tukwila Itaniums - the first chips slated to use QPI, in theory - were supposed to be readying for launch. Bull's plan was to use its own Fame D chipset to glue together multiple four-socket motherboards based on Intel's E8870 chipset to make machines that scaled to eight processor sockets and beyond.
The revised plan in 2006 called for Intel to ship the quad-core Tukwila Itaniums in mid-2008, which obviously never happened, and to use a new chipset called Fame 2G as the main chipset for four-socket, eight-socket, and larger boxes, which have been given the code-name Mesca. Interestingly, the plan called for the Fame 2G chipset to be used on four-socket and larger servers based on future (and unnamed) Xeon processors from Intel.
In early 2007, Bull updated its chipset and server roadmap to reflect changes that Intel had made in its processor plans, and it has stuck with that roadmap through the most recent delay of the Tukwila Itaniums, which are now slated for the first quarter of 2010.
The QuickPath Interconnect, which made its debut on Xeons that were originally supposed to follow the Tukwilas to market, is now in the field, and is slated to appear by the end of the year in Beckton, as the Nehalem EX processor is code-named.
So with the QPIs all lined up again, Bull is getting ready to deliver its Fame 2G chipset to support both chip families, apparently. The documents we have seen just refer to Fame 2G supporting the Nehalem EX processors in the Mesca servers, but there is no reason (yet) to believe that Bull will not support the Tukwilas.
Bull's presentation focuses on supercomputing workloads, where the company has seen some traction in the European market in recent years. But the Mesca servers will be used to create high-end boxes that will be able to run big databases and other back-end workloads, not just parallel supercomputing jobs.
The Fame 2G chipset is akin to IBM's EX4 and future EX5 chipsets for Xeon processors in that it is used to maintain cache coherency across multiple four-socket motherboards, which themselves have their own chipsets to glue the memories of four processors together into a single symmetric multiprocessing (SMP) image.
The heart of the Fame 2G chipset is a gadget called the Bull Coherent Switch, which sounds like a funky name for the kind of rationalization you might do after a few pints in the pub, but which is actually, as its name suggests, a switch for linking multiple motherboards together by their memories. This switch will support both Itanium and Xeon processors, according to Bull, but not mixed within the same system, because the Itanium and Xeon instruction sets are different.
The switch implements something Bull calls the XCSI fabric, which is probably a throwback in name to the CSI code name that QuickPath Interconnect once had, and which probably means Cross CSI fabric. By the time the Fame 2G chipset launches with Nehalem EX servers in the first quarter of next year, it might be called XQPI.
The Bull Coherent Switch supports six QPI links and six XCSI links and has an aggregate data rate over four server nodes of 230GB/sec. In theory, the switch could be used to link any number of four-socket Nehalem EX or Tukwila processor boards together, but the Mesca machines are going to top out at 16 sockets.
The Mesca server nodes will each have 32 DDR3 main memory slots and support up to four eight-core Nehalem EX chips. That's up to 32 cores and 256GB of main memory per server node. To build out the Mesca server, you place two, three, or four server nodes next to each other and use fiber optic cables to link the boxes together into a big SMP. That's up to 128 cores and 1TB of main memory per single system image, and about as big as any box out there.
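The scaling arithmetic checks out, assuming 8GB DDR3 DIMMs in each of the 32 slots (the DIMM size is an inference from the stated 256GB per node, not a figure from Bull). A quick sketch:

```python
# Sanity check of the Mesca scaling figures quoted above.
CORES_PER_SOCKET = 8       # eight-core Nehalem EX
SOCKETS_PER_NODE = 4       # four-socket server node
DIMM_SLOTS_PER_NODE = 32   # 32 DDR3 slots per node
GB_PER_DIMM = 8            # assumed: 32 x 8GB = 256GB per node

def mesca_config(nodes):
    """Return (cores, memory in GB) for an SMP built from `nodes` boards."""
    cores = nodes * SOCKETS_PER_NODE * CORES_PER_SOCKET
    memory_gb = nodes * DIMM_SLOTS_PER_NODE * GB_PER_DIMM
    return cores, memory_gb

print(mesca_config(1))  # one server node: (32, 256)
print(mesca_config(4))  # full 16-socket box: (128, 1024), i.e. 1TB
```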
There are going to be two different Mesca server nodes: a 3U compute node with a four-socket Nehalem EX motherboard, and a 3U service node that puts only one Nehalem EX mobo into the box, but leaves room for eight SAS or SATA disks and six PCI-Express slots (two x16 and four x8 slots, to be precise). The HPC variants of the servers will have 40 Gb/sec quad data rate InfiniBand host channel adapters built onto the boards.
The Mesca servers will support Windows, Linux, and an emulated version of Bull's proprietary GCOS operating system. ®