Bull waves red flag at HPC with blade supers

Never mind the bullx

There are apparently two chipsets on this accelerator blade and two ConnectX adapters, but still only two processor sockets, which support up to the 2.93 GHz X5570 Nehalem EP chips. (The extra chipsets provide extra I/O, which keeps the GPUs well fed with data.) The accelerator blade has room for one SATA disk or SSD, plus two Gigabit Ethernet ports, and the GPUs and the Mellanox cards plug into PCI-Express 2.0 adapter slots.

The bullx chassis and the Xeon 5500 compute blades are available now, but the accelerator blades will not ship until November. Pricing was not announced for the bullx machines. A CPU-only chassis delivers 1.69 teraflops of number-crunching power, and a 42U rack with 108 blades will peak out at around 10 teraflops. So, 100 racks of these puppies and you are at 1 petaflops, and that is without resorting to GPUs.
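
If you care to check that math, here's a quick back-of-envelope sketch. The 18-blade chassis and 108-blade rack counts come from the announcement; the four flops per clock per core is the usual assumption for Nehalem's SSE units, not a figure Bull quoted:

```python
# Back-of-envelope check of the bullx peak-flops claims. The blade and
# rack counts are from the announcement; flops per cycle is the standard
# SSE assumption for Nehalem (2 adds + 2 muls per clock), inferred here.
sockets_per_blade = 2
cores_per_socket  = 4            # quad-core Xeon X5570
clock_ghz         = 2.93
flops_per_cycle   = 4            # assumed: 2 adds + 2 muls per clock

gflops_per_blade   = sockets_per_blade * cores_per_socket * clock_ghz * flops_per_cycle
tflops_per_chassis = gflops_per_blade * 18 / 1000    # 18 blades per chassis
tflops_per_rack    = gflops_per_blade * 108 / 1000   # 108 blades per 42U rack

print(f"{tflops_per_chassis:.2f} TF per chassis")     # ~1.69 TF
print(f"{tflops_per_rack:.1f} TF per rack")           # ~10.1 TF
print(f"{1000 / tflops_per_rack:.0f} racks for 1 PF") # ~99 racks
```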

The Tesla M1060 GPUs are the same ones that motherboard and whitebox server maker Super Micro just crammed into its own HPC server nodes, which are rack-style boxes, not blades. The M1060 cards were announced at the beginning of June; each has 240 cores clocking in at 1.3 GHz plus 4 GB of its own GDDR3 memory, and is rated at 933 gigaflops on single-precision floating point calculations but only 78 gigaflops on double-precision math.
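
For the curious, those two ratings fall straight out of the GT200 chip's design, assuming Nvidia's usual flops accounting: a multiply-add plus a co-issued multiply (three flops) per single-precision core per clock, and one double-precision unit in each of the 30 multiprocessors doing a fused multiply-add (two flops) per clock. The 1.296 GHz shader clock below is inferred from the quoted figures, not taken from the spec sheet:

```python
# Where the 933/78 gigaflops figures come from, assuming the usual GT200
# accounting. Clock and per-clock issue rates here are inferences that
# reproduce the quoted numbers, not figures from the M1060 data sheet.
shader_clock_ghz = 1.296
sp_cores = 240
dp_units = 30                    # assumed: one DP unit per multiprocessor

sp_gflops = sp_cores * shader_clock_ghz * 3  # MAD + MUL per core per clock
dp_gflops = dp_units * shader_clock_ghz * 2  # one FMA per SM per clock

print(f"single precision: {sp_gflops:.0f} GF")  # ~933 GF
print(f"double precision: {dp_gflops:.0f} GF")  # ~78 GF
```

That works out to roughly a 12:1 gap between single- and double-precision throughput, which is the gap Nvidia's next-generation parts are meant to close.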

The lack of performance on double-precision math limits the appeal of the CPU-GPU hybrid, but Nvidia is supposedly working on a new packaging for the GPUs, due early next year (and, I am guessing, one that plugs into normal processor sockets), that will also sport something close to parity between single-precision and double-precision math. If and when that happens, expect CPU-GPU hybrids to take off like mad.

Bull is supporting Red Hat Enterprise Linux 5 plus its own bullx Cluster Suite on the bullx HPC clusters, and is also supporting Microsoft's Windows HPC Server 2008. Given the popularity of Novell's SUSE Linux Enterprise Server in Europe, and especially among HPC shops, it seems odd that neither SLES 10 SP2 nor SLES 11 is yet supported.

According to a report in HPCwire, the CEA and the University of Cologne in Germany are the first two customers for the bullx boxes. The University of Cardiff, which currently buys Bull boxes, was trotted out as part of the bullx announcements to say that it will keep the bullx boxes in its thoughts as it plans to upgrade its current "Merlin" Xeon-based cluster, which is rated at 20 teraflops and which was installed last June. ®
