Swiss boffins slap together homegrown zBox4 supercomputer

Overnighter yields 54 teraflops with flashy storage

Students and a few wandering professors at the Institute for Theoretical Physics at the University of Zurich had a perfect weekend: having scraped together the funds to buy the parts for their zBox4 homegrown supercomputer, they assembled the two-rack beast themselves.

This is the fourth generation of homegrown machines that the ITP students have put together, and it represents a serious upgrade over the zBox3 system it replaces. It includes a new "platter" design on which the motherboards and storage are mounted, as well as new custom shelving that looks a bit like the baker's racks Google made famous in its early data centers.

Like many supercomputer upgrades, the zBox4 was stalled a bit by Intel's later-than-expected delivery of the Xeon E5 processors, which were due in late 2011 but did not ship until the spring of 2012. The specs of the machine are detailed here, and by El Reg's calculations, the 192-node machine with 3,072 cores should deliver around 54 teraflops, considerably more than the 576-core zBox3 it replaces, which was based on Core2 processors and had only 1.3TB of aggregate main memory.

The interesting bit about the zBox4 machine is what the boffins opted for in terms of hardware components. To start with, the system has 192 of Super Micro's X9DRT-IBQF motherboards, two-socket boards with on-board QDR InfiniBand ports.

They chose the eight-core E5-2660 processor to slap into these boards, the fastest 95-watt part Intel sells with all eight cores fired up, which delivers 140.8 gigaflops of peak double-precision performance. Each node was configured with 64GB of main memory, for the 4GB per core that is about average out there in HPC Land.
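For the curious, those peak numbers are easy to sanity-check. Here's a back-of-the-envelope sketch, assuming the E5-2660's 2.2GHz base clock and the eight double-precision flops per core per cycle that Sandy Bridge's AVX units can retire; neither figure is spelled out above, so treat both as our working assumptions:

```python
# Back-of-the-envelope peak-flops check for the zBox4.
# Assumptions (not stated in the article): E5-2660 base clock of
# 2.2GHz and 8 double-precision flops per core per cycle (AVX).
GHZ = 2.2
FLOPS_PER_CYCLE = 8
CORES_PER_CHIP = 8
CHIPS_PER_NODE = 2   # the X9DRT boards are two-socket
NODES = 192

per_chip_gf = GHZ * FLOPS_PER_CYCLE * CORES_PER_CHIP
cluster_tf = per_chip_gf * CHIPS_PER_NODE * NODES / 1000

print(f"Per chip: {per_chip_gf:.1f} gigaflops")   # 140.8
print(f"Cluster:  {cluster_tf:.2f} teraflops")    # 54.07
```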

Each node also has a 128GB Vertex 4 flash-based SSD, the skinniest (and therefore least expensive) of the OCZ units in the Vertex 4 family. Across the cluster, that works out to 12.3TB of main memory and 24.6TB of flash storage.
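Those aggregate figures fall straight out of the node count; a minimal check:

```python
# Aggregate memory and flash across the 192-node cluster.
NODES = 192
RAM_GB_PER_NODE = 64
SSD_GB_PER_NODE = 128

print(f"RAM:   {NODES * RAM_GB_PER_NODE / 1000:.1f}TB")   # 12.3TB
print(f"Flash: {NODES * SSD_GB_PER_NODE / 1000:.1f}TB")   # 24.6TB
```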

The nodes are lashed together with QLogic QDR InfiniBand switches from Intel in a 2:1 fat tree configuration, with three core switches and nine leaf switches. Gigabit Ethernet switches link to ports on the nodes for management traffic.
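That 2:1 ratio falls out of the port math if you assume the usual 36-port QDR switch silicon of the day (our assumption; per-switch port counts aren't given): each of the nine leaf switches splits its ports 24 down to nodes and 12 up to the spine, and the 108 uplinks divide evenly across the three core switches. A quick sketch:

```python
# Fat-tree port math for the zBox4 interconnect.
# Assumption: 36-port QDR InfiniBand switches, typical of the era;
# the article does not give per-switch port counts.
PORTS = 36
LEAVES, CORES = 9, 3
NODES = 192

down_per_leaf = 24                      # node-facing ports per leaf
up_per_leaf = PORTS - down_per_leaf     # 12 uplinks, hence 2:1

assert LEAVES * down_per_leaf >= NODES          # 216 ports cover 192 nodes
assert LEAVES * up_per_leaf == CORES * PORTS    # 108 uplinks fill 3 cores
print(f"Oversubscription: {down_per_leaf}:{up_per_leaf}, i.e. 2:1")
```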

All of the nodes run Scientific Linux 6.3, a clone of Red Hat Enterprise Linux 6.3 with math libraries and other tunings for HPC workloads. The Swiss boffins are using Slurm as their queuing system.
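On a Slurm-managed cluster like this, work typically goes in as batch scripts. Here's a minimal, hypothetical example of feeding one in from Python; the job name, resource numbers, and mpirun launcher are all our own illustrative guesses rather than ITP's actual setup, and sbatch reads the script from stdin when no file name is given:

```python
# Feed a hypothetical MPI batch job to Slurm; sbatch reads the
# script from stdin when no file name is given.
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=nbody         # hypothetical job name
#SBATCH --nodes=16               # 16 of the 192 nodes
#SBATCH --ntasks-per-node=16     # one MPI rank per core
#SBATCH --time=12:00:00
mpirun ./nbody_sim               # hypothetical MPI binary
"""

subprocess.run(["sbatch"], input=job_script, text=True, check=True)
```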

The whole shebang burns around 44 kilowatts under full load and cost under $750,000, or about $13,888 per teraflops. The entire machine was built in under 24 hours.
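Taking the sticker price and our 54 teraflops estimate at face value, the bang-for-buck and power-per-flops figures are straightforward to verify:

```python
# Price and power per unit of peak performance for the zBox4.
COST_USD = 750_000
POWER_KW = 44
PEAK_TF = 54   # El Reg's peak estimate

print(f"${COST_USD / PEAK_TF:,.2f} per teraflops")             # $13,888.89
print(f"{POWER_KW / PEAK_TF * 1000:.0f} watts per teraflops")  # ~815 W
```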

The zBox4 system will link into an existing homegrown Lustre file system with 684TB of capacity. That storage, which supports 10 Gigabit Ethernet or QDR InfiniBand links and spans 48 racks, is built from 342 1.5TB disks and 171 2TB disks. ®
