Bull waves red flag at HPC with blade supers

Never mind the bullx

Having seen its partner Sun Microsystems get the bulk of the 200 teraflops Juropa supercomputer blade cluster deal at Forschungszentrum Jülich, French server maker Bull is trying to position itself as the European favorite for future deals at places such as FZJ with a new line of Xeon-based blade supers called bullx.

Yes, they named their machines after testicles, unless you want to be generous and say that x is a variable. And even then, you can have all sorts of fun with that. (Amuse yourself while I get on with the feeds and speeds.)

The bullx line is, according to Bull, the first European-designed, extreme-scale supercomputer, able to scale from teraflops to petaflops of number-crunching power. Bull says that more than 500 of its supercomputing experts had input into the design, which was done in conjunction with some of its biggest customers (oil giant Total Fina and the French Commissariat à l'Énergie Atomique being the two biggies).

The bullx supers are packed into a fairly dense blade form factor, and include Xeon processor modules as well as hybrid accelerator blades that mix Xeons and graphics processor unit (GPU) math co-processors from nVidia to boost certain kinds of calculations.

The bullx chassis is a 7U rack-mounted case that holds 18 half-height blade servers: ten across the bottom and eight across the top, leaving room in the middle of the upper row for electronics and other gadgetry. This gear includes a chassis management module, a 24-port Gigabit Ethernet switch for managing the blades, and a 36-port quad data rate (40 gigabit/sec) InfiniBand switch module.
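For a sense of what that density buys, here is a quick back-of-the-envelope sketch in Python. It assumes a standard 42U rack filled with nothing but bullx chassis, ignoring any rack units eaten by top-of-rack switches or power distribution; the only hard figures are the 7U chassis, 18 blades, and two sockets per blade quoted above.

    RACK_U = 42               # assumed standard rack height
    CHASSIS_U = 7             # bullx chassis height, per the text
    BLADES_PER_CHASSIS = 18
    SOCKETS_PER_BLADE = 2

    chassis_per_rack = RACK_U // CHASSIS_U                    # 6 chassis
    blades_per_rack = chassis_per_rack * BLADES_PER_CHASSIS   # 108 blades
    sockets_per_rack = blades_per_rack * SOCKETS_PER_BLADE    # 216 sockets

    print(f"{chassis_per_rack} chassis, {blades_per_rack} blades, "
          f"{sockets_per_rack} Xeon sockets per 42U rack")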

The chassis has room for four power supplies (three plus a spare) and two fan units, and also has a device Bull calls an ultra capacitor module (not a flux capacitor, so don't get excited), which stores enough juice to let a chassis full of gear ride out a power outage of up to 250 milliseconds. (This may not sound like a lot until you have a simulation that has been running for two months, the server nodes go blinky, and you have to start all over again.) More importantly, Bull says the ultra capacitor module means that in areas with good, steady electrical power, HPC centers can do away with uninterruptible power supplies, which cost money and consume about 15 per cent of the aggregate power in an HPC cluster because of the inefficiencies of charging batteries.
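To put that 15 per cent figure in context, here is a rough sketch of what ditching the UPS might save. The 1 MW cluster size and the electricity tariff below are made-up illustrative numbers, not anything from Bull:

    cluster_kw = 1000        # hypothetical 1 MW HPC cluster
    ups_overhead = 0.15      # the 15 per cent UPS overhead quoted above
    price_per_kwh = 0.10     # assumed electricity tariff, EUR per kWh

    wasted_kw = cluster_kw * ups_overhead
    annual_eur = wasted_kw * 24 * 365 * price_per_kwh
    print(f"UPS overhead: {wasted_kw:.0f} kW, "
          f"roughly EUR {annual_eur:,.0f} a year")

On those made-up numbers, that is 150 kW of burn and north of EUR 130,000 a year, which buys a lot of ultra capacitors.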

The bullx B500 compute blades look a lot like the other two-socket Xeon-based blade servers being announced these days, but they are tweaked to support InfiniBand. The B500 blades are based on Intel's "Tylersburg" 5500 chipset and support the current "Nehalem EP" quad-core Xeon 5500 processors up to the X5570, which runs at 2.93 GHz but kicks out 95 watts. Given the price premium on the X5570 and the heat it generates, it is far more likely that HPC customers will opt for the E5540, which runs at 2.53 GHz, dissipates 80 watts peak, and costs about half as much per chip.
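A quick peak-flops comparison shows why. Nehalem EP cores do four double-precision flops per clock; the two-to-one price ratio comes from the "about half as much" figure above, while the absolute price units are arbitrary placeholders rather than Intel list prices:

    FLOPS_PER_CLOCK = 4   # DP flops/cycle/core on Nehalem EP (SSE add + mul)
    CORES = 4             # quad-core parts

    chips = {
        # name: (clock in GHz, TDP in watts, relative price)
        "X5570": (2.93, 95, 2.0),   # price ratio per the text, not a list price
        "E5540": (2.53, 80, 1.0),
    }

    for name, (ghz, watts, price) in chips.items():
        gflops = CORES * ghz * FLOPS_PER_CLOCK
        print(f"{name}: {gflops:.1f} GF peak, {gflops / watts:.2f} GF/watt, "
              f"{gflops / price:.1f} GF per price unit")

The E5540 gives up about 14 per cent of the peak flops but matches the X5570 on flops per watt and hammers it on flops per dollar.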

The amount of memory the B500 compute blade supports depends on the memory speed you want. If you are cool with 1.07 GHz DDR3 main memory, you can plunk 96 GB into the 12 slots using 8 GB DIMMs, but if you want faster 1.33 GHz memory, then you can only use six of the slots, for a maximum of 48 GB. (It seems far more likely, given the wicked expense of 8 GB DIMMs, that HPC shops will use cheaper 4 GB DIMMs.) Each blade sports a ConnectX converged server and storage InfiniBand adapter from Mellanox, which plugs into the blade's PCI-Express 2.0 slot, plus a two-port Gigabit Ethernet NIC, and there is room for one SATA disk or SSD mounted on the blade.
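The capacity-versus-speed trade-off is simple enough to capture in a few lines of Python. This little helper is just an illustration of the rules described above, not anything from Bull's configurators:

    def b500_memory(dimm_gb, fast):
        """Capacity and speed for a B500 blade: all 12 slots at 1.07 GHz,
        or only six of them at 1.33 GHz."""
        slots = 6 if fast else 12
        speed = "1.33 GHz" if fast else "1.07 GHz"
        return slots * dimm_gb, speed

    for dimm in (4, 8):                  # the DIMM sizes mentioned above
        for fast in (False, True):
            capacity, speed = b500_memory(dimm, fast)
            print(f"{dimm} GB DIMMs at {speed}: {capacity} GB max")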

The B505 accelerator blade in the bullx HPC box is a double-wide blade that pairs a single two-socket Nehalem EP server with two Tesla M1060 co-processors. This blade is based on the 5520 variant of the Tylersburg chipset and has only six DDR3 memory slots for a total of 48 GB of main memory (using 8 GB DIMMs) running at 1.33 GHz.
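For the curious, here is a rough peak double-precision tally for the B505. The CPU side assumes a pair of E5540s using the Nehalem arithmetic above; the roughly 78 gigaflops per M1060 is the commonly quoted double-precision peak for nVidia's GT200-class Tesla parts, not a figure from Bull, so treat it as an assumption:

    cpu_gf = 2 * 4 * 2.53 * 4   # two sockets x 4 cores x 2.53 GHz x 4 flops/clock
    gpu_gf = 2 * 78             # two M1060s at an assumed ~78 GF DP apiece

    print(f"CPUs: {cpu_gf:.0f} GF, GPUs: {gpu_gf:.0f} GF, "
          f"blade total: {cpu_gf + gpu_gf:.0f} GF peak double precision")

On those assumptions, the GPUs roughly triple the blade's peak double-precision grunt, which is the whole point of the hybrid design.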
