The Reg chats to HPE's HPC man about NASA's supercomputers, lunar ambitions and Columbia
From Spaceborne to Aitken and on to the Moon
The Reg had a chat with supercomputing veteran Bill Mannel, vice president and general manager of HPC and AI at HPE, who told us the company is looking beyond Earth orbit.
NASA's newest machine, based on Hewlett Packard Enterprise's SGI 8600 platform, is a 3.69-petaFLOPS beast that represents the beginning of a four-year collaboration between NASA Ames and HPE. Named Aitken, after US astronomer Robert Grant Aitken, it will support modelling and simulation of entry, descent and landing for the agency.
It will also, of course, play a role in NASA's upcoming Artemis programme, aimed at landing humans on the lunar South Pole region by 2024.
Mannel was formerly with supercomputing stalwart Silicon Graphics (SGI), and rose to the position of general manager of Compute and Storage during his long career there before joining HPE in 2014 to head up the High Performance Computing and AI group.
HPE, of course, snapped up SGI in 2016 for $275m.
HPE and SGI have enjoyed a lengthy relationship with NASA, according to Mannel, who told us about his time working at NASA on SGI's IRIS 3000 before he signed up with the supercomputer company.
He went on to recall the aftermath of the Space Shuttle Columbia disaster, when NASA urgently needed a hefty supercomputer in response to what Mannel delicately described as "the Columbia situation".
The space agency, Mannel said, needed the machinery to ensure the Space Shuttle programme was "healthy". Certainly, a vast amount of modelling and simulation was required before the orbiters could fly again.
We'd recommend a read of former NASA Flight Director Wayne Hale's blog to get a feel for the era.
Speaking with pride well over a decade on, Mannel recalled that it took just 120 days to get the machine set up, whereupon it briefly took the number-two position on the TOP500 supercomputing list. "It was almost the number one", he said ruefully, "but at the very last minute Lawrence Livermore added some compute..."
As manager for the Altix product line at the time, Mannel told us the deployment was one of the most challenging the company had undertaken, and added that it had a "major impact" on his private life that year.
Named Columbia, after the lost orbiter, the supercomputer was eventually decommissioned in 2013.
Going modular with NASA
Remarking that the US space agency has "a very ambitious agenda for supercomputer capability" – Mars is, after all, on and off the agenda depending on who is running the US government at any given time – Mannel emphasised the flexibility of the current approach at Ames.
Describing the new facility as "really just a concrete pad", he went on to explain the nifty design aspect within it that would allow the agency "to drop containers down of different sizes and capabilities as technology changes over the coming years".
The container in question this time around contains the HPE SGI 8600, aka "Aitken". As well as 3.69 petaFLOPS of theoretical peak performance, the supercomputer also includes 221TB of memory and 46,080 cores. Quite a jump from Columbia.
And it is a tad more energy efficient. "It actually uses ambient air," said Mannel, "to cool the water that is used to cool the supercomputer."
"Basically, you dribble water over a pad and blow air through it." Readers might be more familiar with the term "swamp cooler".
The approach has enabled the supercomputer to achieve an impressive Power Usage Effectiveness (PUE) of 1.03.
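For readers unfamiliar with the metric: PUE is simply the ratio of total facility power draw to the power consumed by the IT equipment itself, so 1.0 would mean zero cooling or distribution overhead. A minimal sketch of the arithmetic, using hypothetical power figures chosen to reproduce Aitken's reported 1.03:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

# Hypothetical figures: 1,000 kW of IT load plus 30 kW of cooling
# and distribution overhead gives the reported PUE of 1.03.
print(pue(1030.0, 1000.0))  # → 1.03
```

By comparison, a conventional chilled-water data centre often runs a PUE well above 1.3, which is what makes the evaporative approach notable.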
Supercomputing in the cloud?
The modular approach will see NASA able to update its capability over the years. However, Mannel was less than convinced that the public cloud might be pressed into service any time soon for similar scenarios.
Explaining that at 99 per cent utilisation the public cloud would "just be incredibly expensive", Mannel cited the example of a customer who had spent $20m on one of HPE's finest, having calculated that doing the same in the public cloud would cost nearer $200m.
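The back-of-envelope arithmetic behind that anecdote is straightforward: at near-constant utilisation, pay-per-hour pricing has no idle time to amortise. A hypothetical sketch, with the hourly rate invented purely to match the figures Mannel quoted:

```python
def cloud_cost(rate_per_hour: float, utilisation: float,
               years: float = 4.0) -> float:
    """Total cloud spend for the machine-hours actually consumed.
    Ignores power, staff, storage and egress costs on either side."""
    hours = years * 365 * 24 * utilisation
    return rate_per_hour * hours

# Hypothetical rate chosen to reproduce the anecdote: ~$5,800/hour
# for equivalent capacity, at 99% utilisation over four years, is
# roughly $200m -- ten times the $20m purchase price quoted.
print(round(cloud_cost(5800.0, 0.99) / 1e6))  # → 201 (millions of dollars)
```

The flip side, which Mannel did not dwell on, is that the sums reverse for bursty workloads that would leave a purchased machine idle most of the year.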
"NASA did actually look at a public cloud, as opposed to doing the Aitken supercomputer," said Mannel, but "they found it was not cost competitive at all."
And it wouldn't be like NASA to waste millions or even billions of taxpayer dollars now, would it?
To the Moon and Mars
Mannel was also keen to see supercomputers play a part in the future of human space exploration. The company is flush with success following the nearly two-year sojourn of its Apollo (no, not that one) machine aboard the ISS.
Mannel hoped that the experience had demonstrated to the US space agency that it can equip its spacecraft with more up-to-date machinery rather than hardening something in-house that might be well out of date before getting anywhere near the launchpad.
Indeed, Mannel told us that the gang had been getting a lot of interest from the likes of Northrop Grumman following the success of the Spaceborne computer. Northrop is currently expected to build the habitation module for NASA's Lunar Gateway.
Assuming the agency doesn't change direction again.
A cynic might wonder if all this computing grunt is actually needed. After all, the compute power behind the missions of the 1960s and early 1970s is often a punchline used by those who perhaps don't grasp the challenges involved.
For Mannel, it's all about the amount of analysis that could be done on the spacecraft or habitat rather than having to transmit data back to Earth for processing. Having gathered telemetry from aircraft back in the day, he was impressed at how much was done by today's computers on the aeroplane itself.
The same approach on a crewed spacecraft, he reckoned, "can make the mission safer and the crew safer".
And about that 2024 goal? "I can't answer that... but we will be ready to sell them computer hardware to help them get there."
It is HPE, after all. ®