
Waiting to exascale: Now that IBM has Summit-ed, who's to node what comes next?

Big Blue's rig with Nvidia grunt looks to be the basis of the first truly exascale system

Comment IBM's 200-petaFLOPS (200,000 trillion calculations per second) Summit supercomputer was unveiled at Oak Ridge National Laboratory last Friday and, suitably scaled, has already proven itself capable of exascale computing in some applications.

That's 1,000 petaFLOPS, or one quintillion floating-point operations per second.

In comparison, the Cray/Intel Aurora supercomputer project was specced at 180 petaFLOPS from 50,000 x86 nodes, interconnected with 200Gbit/s OmniPath 2.

These nodes were supposed to be augmented with Intel's Knights Hill version of its multicore Phi co-processor. However, the Knights Hill development was canned in November 2017. Aurora has given way to Aurora 2, due for delivery in 2021, which should be an exascale system with redesigned Phi processors.

The US Department of Energy is part-funding the development of exascale computers through its Coral-2 programme (Coral being "Collaboration of Oak Ridge, Argonne and Livermore", three national labs). The original Coral programme generated the Aurora and Summit systems, the latter of which was kicked off by IBM in 2014. Cray and Intel were awarded $200m in April 2015 to build Aurora. Though Aurora was due to be delivered this year, the failed co-processor design meant that wasn't possible.

Six bidders – AMD, Cray, HPE, IBM, Intel and Nvidia – were invited by the DoE to respond to a Coral-2 request for proposals, and some or all did so by the May 24 deadline. Which of them actually bid has not been revealed, and the bids are being evaluated.

There are three server/HPC system builders – Cray, HPE and IBM – and three processor/co-processor vendors – AMD, Intel and Nvidia.

We may assume Cray and Intel are bidding for the Aurora follow-on (called A21). We have also looked at aspects of a possible HPE exascale system, suggesting an HPE/AMD partnership might be feasible.

The Summit reveal provided hints about an IBM exascale system and that's what we're going to dig into.

Summit nodes

Summit has just 4,608 nodes, each more powerful than Aurora's x86 ones. As Nicole Hemsoth pointed out at our sister publication, The Next Platform, the system also has far fewer nodes than the 18,688 of its Oak Ridge neighbour and previous US supercomputing speed record-holder, Titan, but nevertheless "deliver[s]... 5X to 10X more performance while only increasing power consumption from nine to 13 megawatts."

Each node, basically an AC922 server, has two 22-core 3.1GHz POWER9 CPUs and six Tesla V100 GPUs, connected by NVLink 2. There is 1.6TB of memory per node.
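As a quick sanity check on those specs, here's a back-of-envelope sketch in Python. The only number not from this article is Nvidia's published ~7.8 teraFLOPS peak FP64 rating for the SXM2 V100:

```python
# Back-of-envelope: do 4,608 nodes with six V100s each land near 200 petaFLOPS?
NODES = 4_608
GPUS_PER_NODE = 6
V100_FP64_TFLOPS = 7.8  # Nvidia's peak FP64 figure for the SXM2 V100 (assumption)

peak_pflops = NODES * GPUS_PER_NODE * V100_FP64_TFLOPS / 1_000
print(f"GPU-only peak: {peak_pflops:.0f} petaFLOPS")  # ~216 petaFLOPS
# In the right ballpark for the quoted 200 petaFLOPS, ignoring the POWER9
# CPUs' own modest FP64 contribution and any real-world derating.
```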


The nodes are interconnected with Mellanox dual-rail EDR 100Gbit/s InfiniBand links, 200Gbit/s per node.

There is more than 10PB of main memory across Summit, and it uses IBM's Spectrum Scale filesystem, initially with about 3PB of capacity and 30GB/sec of bandwidth. Those numbers will rise to 250PB, with 2.5TB/sec sequential and 2.2TB/sec random IO. Peak power usage is 13MW.

HPE has suggested that exascale computers could have tens of thousands, if not hundreds of thousands, of nodes. It wouldn't need to be that way if Summit could simply be scaled up into an exaFLOPS machine, which would mean a fivefold increase in performance.

That would mean 23,040 nodes using the current POWER9/six-GPU node setup.
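That figure is straight-line arithmetic, worth spelling out; a minimal sketch, making no allowance for interconnect overheads or efficiency losses:

```python
# Naive scale-out: how many of today's Summit nodes would 1 exaFLOPS take?
SUMMIT_PFLOPS = 200
SUMMIT_NODES = 4_608
TARGET_PFLOPS = 1_000  # 1 exaFLOPS

scale = TARGET_PFLOPS / SUMMIT_PFLOPS      # 5x performance needed
nodes_needed = round(SUMMIT_NODES * scale)
print(f"{scale:.0f}x performance -> {nodes_needed:,} POWER9/six-GPU nodes")  # 23,040
```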

But Nvidia has moved on, having announced its HGX-2, a 2-petaFLOPS (at lower, AI-friendly precision) GPU grunt box with 16 Tesla V100s, the latest Volta-generation GPUs, connected using a dozen NVSwitches. And there may well be a Volta follow-on, with the Ampere and Turing names floating around.

IBM is moving on too, developing a POWER10 CPU due to arrive in 2020; the Coral-2 systems are meant to be deliverable from 2021. POWER10 could have 48 cores and support a faster NVLink 3 interconnect.

Mellanox, for its part, is developing 400Gbit/s NDR InfiniBand switching.

Could Spectrum Scale have its performance pushed higher? There's no reason to doubt that.

Join the dots

Let's suggest that a scaled-up Summit node, using POWER10 CPUs, souped-up Nvidia GPUs with faster NVLink, NDR InfiniBand internode links, and a faster and larger Spectrum Scale, could provide a pathway to exascale with fewer than 23,040 nodes.

If we scale up Summit's nodes 2.5x in performance using these technologies, then 9,216 of them would get us to 1 exaFLOPS. There's a 40MW limit for power consumption, and scaling up Summit's 13MW power usage by 2.5x gets us to 32.5MW (a simplistic extrapolation, since power usage is broadly proportional to node count and performance when the node technology stays the same). An enticing prospect.
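Spelling that scenario out the same way (and it really is simplistic: the 2.5x power multiplier is just a straight-line extrapolation from Summit's 13MW, not a measured figure):

```python
# Scaled-up Summit scenario: nodes 2.5x faster, same basic architecture.
SUMMIT_NODES = 4_608
SUMMIT_PFLOPS = 200
SUMMIT_MW = 13
PER_NODE_SPEEDUP = 2.5  # hypothetical POWER10 + next-gen GPU node (assumption)
TARGET_PFLOPS = 1_000

nodes_needed = SUMMIT_NODES * TARGET_PFLOPS / (SUMMIT_PFLOPS * PER_NODE_SPEEDUP)
print(f"Nodes needed: {nodes_needed:,.0f}")  # 9,216

# Crude power extrapolation: scale Summit's draw by the same 2.5x.
power_mw = SUMMIT_MW * PER_NODE_SPEEDUP
print(f"Estimated draw: {power_mw}MW, against the 40MW ceiling")  # 32.5MW
```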

There are other technologies that could help, such as high-bandwidth memory and storage-class memory. As long as these don't need application software rewrites, they could go into the mix too.

HPE's Machine-based exascale technology set is adventurous and exciting – for HPE. A scaled-up Summit is basically more of the same: less adventurous perhaps, but possibly a safer bet.

Cray/Intel's Aurora A21 looks to be as risky as HPE's system: Intel co-processor development has stalled (perhaps A21 will use its in-development GPUs), and Xeon's under-performance compared to POWER10 would dictate many tens of thousands of nodes built on unproven co-processor/GPU technology.

Big Blue could stalk the processor design halls in triumph: Yeah, Xeon. x86? More like ex-86. Feel the POWER, etc. etc. ®
