Summit for the readers who are hot for petaFLOPs: Server nodes flashed at SC17

Oak Ridge Top 500-leading system's innards

By Chris Mellor


Analysis IBM offered HPC fans at SC17 a gawk at the server tray for the upcoming Summit supercomputer at Oak Ridge National Laboratory (ORNL), Tennessee.

This is the system slated to knock China's 93 petaFLOPS Sunway TaihuLight machine off the top of the supercomputer tree when it goes live, pumping out a hoped-for 200 petaFLOPS.

The Summit system follows on from ORNL's current 27 petaFLOP Titan system, computing 5-10 times faster, storing eight times more data and moving it 5-10 times faster as well. It will enable simulation models with finer resolution than Titan, meaning higher fidelity and more accurate simulations.

Summit will have around 4,600 server tray nodes, based on IBM's Witherspoon (Power AC922) trays.

SC17 Summit server tray tweet (https://twitter.com/ibmpowerlinux)

According to Tom's Hardware, these water-cooled trays feature a pair of POWER9 processors, each connected by 150GB/sec NVLink 2.0 links to three 7.5 teraFLOPS NVIDIA Volta V100 accelerators (each with a GV100 GPU), which are themselves interconnected across NVLink.

Volta GV100 GPU with 84 streaming multiprocessors

Both the CPUs and the GPUs are water-cooled. There is 300GB/sec of aggregated NVLink bandwidth.

The POWER9 CPUs have up to 24 cores and 96 threads. NVLink supports CPU mastering and cache coherence capabilities with IBM POWER9 CPU-based servers. The tray will have from 512GB to 2TB of coherent DDR4 memory, with 340GB/sec of memory bandwidth. All six GPUs and the two POWER9 CPUs can access main memory.
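A quick back-of-the-envelope check, using only the figures quoted above (peak arithmetic, not delivered performance), shows how the pieces add up:

```python
# Back-of-the-envelope check of Summit's peak figures, using only the
# numbers quoted in the article. Peak arithmetic, not delivered performance.

nodes = 4600               # approximate node count
gpus_per_node = 6          # two POWER9 CPUs x three Volta V100s each
tflops_per_gpu = 7.5       # quoted double-precision peak per V100

node_gpu_peak_tflops = gpus_per_node * tflops_per_gpu         # 45 TFLOPS per tray
system_gpu_peak_pflops = nodes * node_gpu_peak_tflops / 1000  # ~207 PFLOPS

nvlink_per_cpu_gbs = 150   # NVLink 2.0 bandwidth per POWER9
aggregate_nvlink_gbs = 2 * nvlink_per_cpu_gbs                 # 300 GB/sec per tray

titan_pflops = 27          # ORNL's current Titan system
speedup_vs_titan = system_gpu_peak_pflops / titan_pflops      # ~7.7x

print(f"Per-tray GPU peak:  {node_gpu_peak_tflops} TFLOPS")
print(f"System GPU peak:    ~{system_gpu_peak_pflops:.0f} PFLOPS")  # matches the hoped-for 200 PFLOPS
print(f"Aggregate NVLink:   {aggregate_nvlink_gbs} GB/sec")
print(f"Speedup over Titan: ~{speedup_vs_titan:.1f}x")              # within the quoted 5-10x range
```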

The system will use PCIe gen 4 and CAPI to hook up SSDs, FPGAs and NICs, and there is 1.6TB of burst buffer NV-RAM.

Trays will be connected across 100Gbit/s Mellanox EDR InfiniBand links.

Summit racks

The Summit machine will have up to 250PB of storage, accessed through Spectrum Scale (GPFS) with 2.5TB/sec of aggregate bandwidth, and interfaced via the burst buffers.

Simplistically, the data flows from Spectrum Scale across InfiniBand and into a server node's memory. Each POWER9 CPU controls the activities of three GPUs, and these eight compute entities access main memory and crunch the data. The results are streamed out to the burst buffer and then pushed out to GPFS storage.
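To make that staging pattern concrete, here is a minimal conceptual sketch; every function name and stub body below is a hypothetical placeholder for illustration, not Summit's actual software stack:

```python
# Conceptual sketch of the staged data flow described above:
# parallel file system -> node memory -> CPU/GPU compute -> burst buffer -> back to GPFS.
# All names and stub bodies are illustrative placeholders.

def read_from_gpfs(path):
    # Stand-in for a parallel read from Spectrum Scale across InfiniBand
    # into the node's main memory.
    return list(range(1000))

def compute_on_cpus_and_gpus(data):
    # Stand-in for the two POWER9 CPUs and six GPUs crunching the data,
    # all addressing the same coherent main memory.
    return [x * x for x in data]

def stage_to_burst_buffer(results, buffer):
    # Stand-in for streaming results to the node-local NV-RAM burst buffer,
    # so the job isn't throttled by filesystem bandwidth.
    buffer.extend(results)

def drain_burst_buffer_to_gpfs(buffer, path):
    # Stand-in for draining the burst buffer out to GPFS in the background.
    buffer.clear()

burst_buffer = []
data = read_from_gpfs("/gpfs/input")
results = compute_on_cpus_and_gpus(data)
stage_to_burst_buffer(results, burst_buffer)
drain_burst_buffer_to_gpfs(burst_buffer, "/gpfs/output")
```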

Altogether the system will need 15MW of power and take up around 9,000 square feet of space. ORNL is installing it now. Get a Summit fact sheet here and FAQs here. ®
