Bull waves red flag at HPC with blade supers

Never mind the bullx

Having seen its partner Sun Microsystems get the bulk of the 200 teraflops Juropa supercomputer blade cluster deal at Forschungszentrum Jülich, French server maker Bull is trying to position itself as the European favorite for future deals at places such as FZJ with a new line of Xeon-based blade supers called bullx.

Yes, they named their machines after testicles, unless you want to be generous and say that x is a variable. And even then, you can have all sorts of fun with that. (Amuse yourself while I get on with the feeds and speeds.)

The bullx line is, according to Bull, the first European-designed, extreme-scale supercomputer that can scale from teraflops to petaflops of number-crunching power. Bull says that more than 500 of its supercomputing experts had input into the design, which was done in conjunction with some of its biggest customers (oil giant Total Fina and the French Commissariat à l'Énergie Atomique being the two biggies).

The bullx supers are packed into a fairly dense blade form factor, and include Xeon processor modules as well as hybrid accelerator blades that mix Xeons and graphics processing unit (GPU) math co-processors from nVidia to boost certain kinds of calculations.

The bullx chassis is a 7U rack-mounted case that holds 18 half-height blade servers: ten across the bottom and eight across the top, leaving room in the middle of the upper row of blades for electronics and other gadgetry. This gear includes a chassis management module and a 24-port Gigabit Ethernet switch for managing the blades, as well as a 36-port quad data rate (40 gigabit/sec) InfiniBand switch module.
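
For a sense of the density that works out to, here's a quick back-of-the-envelope sketch in Python. The 42U rack is my assumption, not Bull's spec, and it ignores any rack space you would give up to top-of-rack switches and PDUs.

    # Blade density per rack for the bullx chassis.
    # Assumption: a standard 42U rack, fully packed with chassis.
    CHASSIS_HEIGHT_U = 7       # from Bull's spec
    BLADES_PER_CHASSIS = 18    # half-height compute blades
    RACK_HEIGHT_U = 42         # assumed standard rack

    chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U      # 6 chassis
    blades_per_rack = chassis_per_rack * BLADES_PER_CHASSIS   # 108 blades
    cores_per_rack = blades_per_rack * 2 * 4                  # 2 sockets x 4 cores each

    print(f"{chassis_per_rack} chassis, {blades_per_rack} blades, "
          f"{cores_per_rack} Nehalem EP cores per 42U rack")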

The chassis has room for four power supplies (three plus a spare) and two fan units, and also has a device Bull calls an ultra capacitor module (not a flux capacitor, so don't get excited), which stores up enough juice to let a chassis full of gear ride out a power outage as long as 250 milliseconds. (This may not sound like a lot until you have a simulation running for two months and the server nodes go blinky and you have to start all over again.) But more importantly, the ultra capacitor module means, according to Bull, that in areas that have good, steady electrical power, HPC centers can do away with uninterruptible power supplies, which cost money and consume about 15 per cent of the aggregate power in an HPC cluster because of the inefficiencies of charging batteries.
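
To put a number on that claim, here's a rough sketch of what the 15 per cent overhead costs. Only the 15 per cent figure comes from Bull; the cluster draw and electricity price are made-up round numbers for illustration.

    # Rough annual cost of UPS overhead, per Bull's 15 per cent claim.
    # Cluster draw and power price are hypothetical round numbers.
    cluster_draw_kw = 100      # assumed average draw of the cluster
    ups_overhead = 0.15        # Bull's claimed UPS overhead
    price_per_kwh = 0.10       # assumed price in $/kWh
    hours_per_year = 24 * 365

    wasted_kw = cluster_draw_kw * ups_overhead
    annual_cost = wasted_kw * hours_per_year * price_per_kwh
    print(f"UPS overhead: {wasted_kw:.0f} kW, about ${annual_cost:,.0f} per year")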

The bullx B500 compute blades look a lot like the other two-socket Xeon-based blade servers being announced these days, but they are tweaked to support InfiniBand. The B500 blades are based on Intel's "Tylersburg" 5500 chipset and support the current "Nehalem EP" quad-core Xeon 5500 processors up to the X5570, which runs at 2.93 GHz but kicks out 95 watts. Given the price premium of the X5570 and the heat it generates, it is far more likely that HPC customers will opt for the E5540, which runs at 2.53 GHz, dissipates 80 watts peak, and costs about half as much per chip.
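
A crude way to see why: use clock speed as a very rough proxy for throughput (real HPC codes won't scale exactly with clocks), and take the article's "about half as much" as the only pricing input.

    # Crude price/performance comparison of the two Xeons. Clock speed
    # stands in for throughput; prices are relative, with the E5540
    # pegged at about half the cost of the X5570 per the article.
    chips = {
        # name: (GHz, watts, relative price)
        "X5570": (2.93, 95, 1.0),
        "E5540": (2.53, 80, 0.5),
    }
    for name, (ghz, watts, price) in chips.items():
        print(f"{name}: {ghz / watts:.4f} GHz/W, {ghz / price:.2f} GHz per price unit")

The E5540 comes out slightly ahead on clocks per watt and roughly twice as good on clocks per dollar, which is the point of Bull's likely default choice.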

The amount of memory that the B500 compute blade supports depends on the memory speed you want. If you are cool with 1.07 GHz DDR3 main memory, you can plunk 96 GB into the 12 slots using 8 GB DIMMs, but if you want faster 1.33 GHz memory, then you can only use six of the slots, for a maximum of 48 GB. (It seems far more likely, given the wicked expense of 8 GB DIMMs, that HPC shops will use cheaper 4 GB DIMMs.) Each blade sports a ConnectX converged server and storage InfiniBand adapter from Mellanox, which plugs into the blade's PCI-Express 2.0 slot, plus a two-port Gigabit Ethernet NIC. There is also room on the blade for one SATA disk or SSD.
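
The trade-off boils down to a short table, which this little sketch spits out. Only the speeds, slot counts, and DIMM sizes above feed into it.

    # The B500 memory configurations described in the article.
    configs = [
        # (DDR3 speed in GHz, usable slots, DIMM size in GB)
        (1.07, 12, 8),   # all 12 slots, slower memory: 96 GB
        (1.33, 6, 8),    # half the slots, faster memory: 48 GB
        (1.07, 12, 4),   # the cheaper 4 GB DIMMs the article expects: 48 GB
    ]
    for speed, slots, dimm_gb in configs:
        print(f"{speed} GHz x {slots} slots x {dimm_gb} GB DIMMs = {slots * dimm_gb} GB")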

The B505 accelerator blade in the bullx HPC box is a double-wide blade that pairs a single two-socket Nehalem EP server with two Tesla M1060 co-processors. This blade is based on the 5520 variant of the Tylersburg chipset and has only six DDR3 memory slots for a total of 48 GB of main memory (using 8 GB DIMMs) running at 1.33 GHz.
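
If you assume a double-wide blade eats two of the chassis's 18 half-height slots (Bull doesn't spell this out, so treat it as my arithmetic, not its spec), the GPU count per chassis falls out like so:

    # GPUs per chassis with B505 accelerator blades.
    # Assumption (not stated in the article): one double-wide B505
    # occupies two of the 18 half-height slots.
    SLOTS = 18
    SLOTS_PER_B505 = 2
    GPUS_PER_B505 = 2    # two Tesla M1060s per blade

    b505_per_chassis = SLOTS // SLOTS_PER_B505           # 9 blades
    gpus_per_chassis = b505_per_chassis * GPUS_PER_B505  # 18 GPUs
    print(f"{b505_per_chassis} accelerator blades, {gpus_per_chassis} GPUs per chassis")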
