Stephen Hawking's boffin buds buy HPE HPC to ogle universe

But can COSMOS find a way to improve HPE profits? Hmmm

Pictured: galaxies stretching back in time across billions of light-years of space. The image covers a portion of a large galaxy census called the Great Observatories Origins Deep Survey (GOODS).

Well, would ya look at that? Hewlett Packard Enterprise has retained a customer. Stephen Hawking's Centre for Theoretical Cosmology (COSMOS) has slurped the firm's latest data-crunching HPC system to better understand the universe.

The Superdome Flex server was deployed eight days ago at the unit, part of the University of Cambridge's mathematics department, but the box's cost and specs remain a closely guarded secret. The rest of the world will be able to buy the product from today.

Director Paul Shellard has worked at COSMOS since its inception in 1997, when the unit bought its first in-memory compute platform, SGI's Origin 2000 (HPE acquired SGI in 2016).

"Our purpose is to test our mathematical theories against the latest observational data so we can develop a seamless history of the universe from its origins to the present day and understand all the structures we see around us," Shellard told a bunch of journos at HPE Discover in Madrid.

One current research project centres on simulating "mini Big Bangs" on the computers to "make predictions about the universe", then using observational data to "see if we can see those signatures". Another uses HPC to study the collision of black holes that made ripples in space-time.

COSMOS is faced with a two-pronged challenge, said Shellard: a "flood of new data and new categories of data". It needs a single system to compare computer-modelled data with "theory-driven science".

"You can develop your data analytics pipelines more rapidly, can validate them and then scale them up. It is faster to implement our theoretical ideas," he said.

Now comes the HPE Superdome Flex sales pitch, which must surely have secured COSMOS a healthy discount: "The key factor is flexibility and ease of use so you can do more with fewer people. You don't have to be an expert parallel programmer, basically – you can get going and scale up to large problems quickly," said Shellard.

"Advanced HPC systems are very complicated. You've got vector levels, threads and then you've got nodes – three levels of parallelism – and all sorts of memory hierarchies. You want to simplify that so the scientists can get traction, get moving and develop ideas at scale."

He described software development as a "bottleneck".

"Taxpayers are generous in allowing us to do blue sky research projects but not that generous... Developing this stuff is difficult and you only have limited support for advanced programs."

Shellard claimed it is "very easy to write programs for this architecture".

In addition to studying gravitational waves, the next big focus will be on neutron stars, he told us.

COSMOS had previously been using an earlier-generation SGI HPC box. ®

