Stephen Hawking's boffin buds buy HPE HPC to ogle universe

But can COSMOS find a way to improve HPE profits? Hmmm

By Paul Kunert

Posted in HPC, 28th November 2017 16:31 GMT

Well, would ya look at that? Hewlett Packard Enterprise has retained a customer. Stephen Hawking's Centre for Theoretical Cosmology (COSMOS) has slurped the firm's latest data-crunching HPC system to better understand the universe.

The Superdome Flex server was deployed eight days ago at the unit, part of the University of Cambridge's mathematics department, but the box's cost and specs used remain a closely guarded secret. The rest of the world will be able to buy the product from today.

Director Paul Shellard has worked at COSMOS since its inception in 1997, when the unit bought its first in-memory compute platform, SGI's Origin 2000 (HPE acquired SGI in 2016).

"Our purpose is to test our mathematical theories against the latest observational data so we can develop a seamless history of the universe from its origins to the present day and understand all the structures we see around us," Shellard told a bunch of journos at HPE Discover in Madrid.

Current research projects centre on simulating "mini Big Bangs" on the computers to "make predictions about the universe" and then use data to "see if we can see those signatures". Another is using HPC to study the collision of black holes that made ripples in space-time.

COSMOS is faced with a two-pronged challenge, said Shellard, the "flood of new data and new categories of data". It needs a single system to compare computer-modelled data with "theory-driven science".

"You can develop your data analytics pipelines more rapidly, can validate them and then scale them up. It is faster to implement our theoretical ideas," he said.

Now comes the HPE Superdome Flex sales pitch, which must surely have secured COSMOS a healthy discount: "The key factor is flexibility and ease of use so you can do more with fewer people, you don't have to be an expert parallel programmer basically, you can get going and scale up to large problems quickly," said Shellard.

"Advanced HPC systems are very complicated, you've got vector levels, threads and then you've got nodes – three nested levels of parallelism – and all sorts of layered hierarchies. You want to simplify that so the scientists can get traction, get moving and develop ideas at scale."

He described software development as a "bottleneck".

"Taxpayers are generous in allowing us to do blue sky research projects but not that generous... Developing this stuff is difficult and you only have limited support for advanced programs."

Shellard claimed it is "very easy to write programs" for the Superdome Flex architecture.

Beyond gravitational waves, the next big focus will be on neutron stars, he told us.

COSMOS was using a previous generation SGI HPC box. ®

