China's Nebulae supercomputer - zero to second in 3 months

The perils of big-system assembly

HPC Blog Those of you interested in what it really takes to bring up a massive system don't want to miss the "Lessons Learned Deploying the World's First GPU-Based Petaflop System" session.

In it, NVIDIA's senior hardware architect, Dale Southard, discusses his experience with China's Nebulae supercomputer - which, in addition to being the #2 system on the TOP500, was also probably the quickest big build of all time, at around three months total.

Southard, who describes himself as a professional debugger, has quite a track record with big systems. Before NVIDIA, he was at Lawrence Livermore, where he participated in a number of large builds (and probably has the scars to prove it). While the Nebulae super went from bare floor to petaflop processing in record time (about 90 days), it had its share of birthing pains.

In his session, Southard spent some time explaining the differences between small, medium and massive systems. According to him, the 'interesting times' begin in earnest when you move from thousand-node systems to something bigger.

The thousand-node system isn't trivial; it requires considerable - often custom - tooling for management and configuration tasks. But the bigger systems are a whole new world of complexity, mainly because things that rarely (if ever) go wrong on smaller systems malfunction frequently when you amass such a huge array of gear.

Southard related a story about a capacitor that blew up like a mini hand grenade, damaging other components as bits of it wormed their way into nooks and crannies. That's not something you see every day.

He also shared a laundry list of things to check proactively before beginning big-system assembly. For example: make sure that all the systems have a common BIOS level and that they have the correct processors running at the right speed. With such a large number of systems, even stringent quality control can let one or two inconsistent builds slip through. Catching these problems early will save countless hours of troubleshooting down the road.
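That kind of consistency check is easy to automate. Here's a minimal sketch - not Southard's actual tooling, and the node names, field names, and firmware versions are all hypothetical - that flags any node whose BIOS level or CPU clock differs from the cluster majority, given a per-node inventory you might gather with something like pdsh and dmidecode:

```python
from collections import Counter

# Hypothetical per-node inventory, as you might collect with pdsh + dmidecode.
inventory = {
    "node001": {"bios": "1.2b", "cpu_mhz": 2930},
    "node002": {"bios": "1.2b", "cpu_mhz": 2930},
    "node003": {"bios": "1.1a", "cpu_mhz": 2930},  # stale BIOS that slipped through QC
    "node004": {"bios": "1.2b", "cpu_mhz": 2660},  # wrong clock speed
}

def find_outliers(inventory, field):
    """Flag nodes whose value for `field` differs from the cluster majority."""
    counts = Counter(node[field] for node in inventory.values())
    expected, _ = counts.most_common(1)[0]  # majority value is assumed correct
    return {name: node[field]
            for name, node in inventory.items()
            if node[field] != expected}

print(find_outliers(inventory, "bios"))     # {'node003': '1.1a'}
print(find_outliers(inventory, "cpu_mhz"))  # {'node004': 2660}
```

Treating the majority value as the reference is a simplification - on a real build you'd compare against the spec sheet - but it's enough to catch the one-in-a-thousand odd node before it costs you days of debugging.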

The videos of the sessions aren't up yet, but I'm told that they should be posted by the end of the week. I'll put in links as soon as I get them... ®
