Inspur jacks up huge heat sink-sporting beast

New GPU’riffic box vies for dominance

By Dan Olds, OrionX

Posted in HPC, 30th May 2017 22:21 GMT

HPC Blog At trade shows, I’m always attracted by the sight of huge heat sinks bunched together on a system board. Big and powerful hardware is a weakness of mine. The sight of them pulls me to the booth like a giant tractor beam.

That’s exactly what happened when I wandered by the Inspur booth at GTC17. Its new AGX-2 server is quite the system. Into a single 2U server, it has packed eight GPUs, dual CPUs, and 16 DIMM slots. Now that’ll run your Crysis for you…

Better yet, the GPUs are attached to the system board via the newest NVLink 2.0 interface. For the uninitiated, NVLink 1.0 was a collaboration between IBM and NVIDIA, with the goal of providing a high-speed, dedicated, direct CPU-GPU and GPU-GPU connection.

The first version of NVLink offered up 80GB/s of bandwidth, more than double the 35GB/s you’d get from attaching the GPUs via PCIe. The newest NVLink, the aptly named NVLink 2.0, provides a mind-blowing 300GB/s of bandwidth between CPUs and GPUs and from GPU to GPU.
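To put those figures in perspective, here’s a back-of-envelope sketch (using the peak bandwidth numbers quoted above, which real workloads won’t hit) of how long it would take to shift a 16GB payload over each interconnect. The 16GB payload size is an illustrative assumption, not something from Inspur’s spec sheet:

```python
# Rough transfer-time comparison using the peak bandwidth figures
# quoted in the article. Real sustained throughput will be lower.
links = {
    "PCIe (quoted)": 35,   # GB/s
    "NVLink 1.0":    80,   # GB/s
    "NVLink 2.0":   300,   # GB/s
}

payload_gb = 16  # illustrative payload size in GB

for name, bw in links.items():
    # time (s) = size (GB) / bandwidth (GB/s)
    print(f"{name}: {payload_gb / bw:.3f} s to move {payload_gb} GB at peak")
```

At peak rates, the same payload that takes nearly half a second over PCIe moves in roughly 50 milliseconds over NVLink 2.0 — which is the whole point of wiring eight GPUs together this way.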

The AGX-2 offers up two M.2 PCIe drive slots on the motherboard, which provide close to 4x the speed of SATA 3. Users can also host up to eight 2.5-inch drives in the 2U chassis.

Inspur is touting this box for extreme AI deep learning and HPC applications. It’s certainly the most powerful single server your correspondent saw at GTC17, with the possible exception of NVIDIA’s own DGX-1.

The AGX-2 offers both air-only and hybrid air/liquid cooling options for the GPUs (which generate the most heat). The box has some super-powerful fans, meaning that with either cooling option you can run it flat out without being throttled by thermal limitations.

There was no pricing info available.
