Data Centre


Stephen Hawking's boffin buds buy HPE HPC to ogle universe

But can COSMOS find a way to improve HPE profits? Hmmm

By Paul Kunert


Well, would ya look at that? Hewlett Packard Enterprise has retained a customer. Stephen Hawking's Centre for Theoretical Cosmology (COSMOS) has slurped the firm's latest data-crunching HPC system to better understand the universe.

The Superdome Flex server was deployed eight days ago at the unit, part of the University of Cambridge's mathematics department, but the box's cost and specs remain a closely guarded secret. The rest of the world will be able to buy the product from today.

Director Paul Shellard has worked at COSMOS since its inception in 1997, when the unit bought its first in-memory compute platform, SGI's Origin 2000 (HPE acquired SGI in 2016).

"Our purpose is to test our mathematical theories against the latest observational data so we can develop a seamless history of the universe from its origins to the present day and understand all the structures we see around us," Shellard told a bunch of journos at HPE Discover in Madrid.

Current research projects centre on simulating "mini Big Bangs" on the computers to "make predictions about the universe" and then use data to "see if we can see those signatures". Another is using HPC to study the collision of black holes that made ripples in space-time.

COSMOS is faced with a two-pronged challenge, said Shellard: the "flood of new data and new categories of data". It needs a single system to compare computer-modelled data with "theory-driven science".

"You can develop your data analytics pipelines more rapidly, can validate them and then scale them up. It is faster to implement our theoretical ideas," he said.

Now comes the HPE Superdome Flex sales pitch, which must surely have secured COSMOS a healthy discount: "The key factor is flexibility and ease of use so you can do more with fewer people, you don't have to be an expert parallel programmer basically, you can get going and scale up to large problems quickly," said Shellard.

"Advanced HPC systems are very complicated, you've got vector levels, threads and then you've got nodes – three levels of parallelism – and all sorts of memory hierarchies. You want to simplify that so the scientists can get traction, get moving and develop ideas at scale."

He described software development as a "bottleneck".

"Taxpayers are generous in allowing us to do blue sky research projects but not that generous... Developing this stuff is difficult and you only have limited support for advanced programs."

Shellard claimed it is "very easy to write programs" for the Superdome Flex architecture.

In addition to studying gravitational waves, the next big focus will be on neutron stars, he told us.

COSMOS was previously using an earlier-generation SGI HPC box. ®


