
OpenAI uses cunning code to speed up GPU machine learning

Sparse is more

By Katyanna Quach


Researchers at OpenAI have released a library of tools to help other researchers build faster, more efficient neural networks that take up less memory on GPUs.

Neural networks are made up of layers of connected nodes. The architecture of these networks varies widely depending on the data and application, but all models are limited by how they run on GPUs.

One way to train larger models for less computation is to introduce sparse matrices. A matrix is considered sparse if most of its entries are zero. Those zero elements can be compressed away and skipped during matrix multiplication, and the compressed matrix takes up less memory on the GPU.
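As a rough illustration (our example, not OpenAI's code), a mostly-zero matrix can be stored in a compressed form that records only the non-zero values and where they sit, as in this Python sketch using SciPy:

    import numpy as np
    from scipy import sparse

    # Build a 1,000 x 1,000 matrix in which roughly 99 per cent of entries are zero
    dense = np.random.rand(1000, 1000)
    dense[dense < 0.99] = 0.0

    # Compressed Sparse Row format keeps only the non-zero values,
    # their column indices, and per-row offsets
    compressed = sparse.csr_matrix(dense)

    print(dense.nbytes)  # 8,000,000 bytes for the dense array
    print(compressed.data.nbytes + compressed.indices.nbytes
          + compressed.indptr.nbytes)  # roughly 125,000 bytes here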

The computational cost of carrying out operations is proportional to the number of non-zero entries in the matrices, Durk Kingma, a research scientist at OpenAI, explained to The Register.
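A back-of-the-envelope example (our numbers): multiplying two dense n-by-n matrices costs about 2n³ floating-point operations, whereas a sparse multiply only pays for the non-zero entries:

    n = 4096         # matrix dimension
    density = 0.10   # fraction of entries that are non-zero

    dense_flops = 2 * n**3                      # every entry participates
    sparse_flops = 2 * int(density * n**2) * n  # only non-zeros participate

    print(f"dense:  {dense_flops:.2e} FLOPs")   # ~1.4e+11
    print(f"sparse: {sparse_flops:.2e} FLOPs")  # ~1.4e+10, a tenfold saving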

The computation saved by sparsity can instead be spent on building wider or deeper networks, which can be trained more efficiently and perform inference up to ten times faster.
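For instance (illustrative figures, not OpenAI's): a 2,048-wide layer in which only a quarter of the weight blocks are kept has the same number of non-zero parameters as a dense 1,024-wide layer, so the sparse model can be twice as wide for roughly the same cost:

    dense_width = 1024
    sparse_width = 2048
    block_density = 0.25  # fraction of weight blocks kept

    dense_params = dense_width ** 2                         # 1,048,576
    sparse_params = int(block_density * sparse_width ** 2)  # 1,048,576

    assert dense_params == sparse_params  # same budget, double the width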

A dense network (left) can be made wider (center) or deeper (right) by adding sparsity. (Image credit: OpenAI)


Nvidia's software doesn't really support block-sparse models, Kingma said. So a team at OpenAI decided to develop kernels - small programmes that run directly on the GPU - optimised for building block-sparse networks, and to share them with the wider research community.
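To give a flavour of the interface, here is a minimal sketch along the lines of the usage example OpenAI published with the announcement; the BlocksparseMatMul name and w_shape attribute follow that example, and details may differ in the released code:

    import numpy as np
    import tensorflow as tf
    from blocksparse.matmul import BlocksparseMatMul  # OpenAI's TensorFlow wrapper

    hidden_size = 4096
    block_size = 32  # the kernels skip whole 32x32 blocks of weights, not single entries

    # Binary layout saying which weight blocks exist at all
    layout = np.random.randint(2, size=(hidden_size // block_size,
                                        hidden_size // block_size))

    # Build a matmul op specialised to this block-sparsity pattern
    bsmm = BlocksparseMatMul(layout, block_size=block_size)

    x = tf.placeholder(tf.float32, shape=[None, hidden_size])
    w = tf.get_variable("w", bsmm.w_shape, dtype=tf.float32)

    y = bsmm(x, w)  # multiplies x by the block-sparse weight matrix

Because the sparsity pattern is fixed ahead of time, the multiplication can be compiled once per layout and skip absent blocks entirely at run time.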

The researchers at Elon Musk's AI research arm have used the library internally to train long short-term memory networks to perform sentiment analysis on text from Amazon and IMDB reviews.

“Our sparse model improves the state of the art on the document level IMDB dataset from 5.91 per cent error to 5.01 per cent. This is a promising improvement over our previous results which performed best only on shorter sentence level datasets,” they wrote in a blog post.

The kernels are written in CUDA, and OpenAI has so far developed only a TensorFlow wrapper, so researchers working in other frameworks will have to write their own. The kernels also support Nvidia GPUs only.

Scott Gray, a member of technical staff at OpenAI, told The Register that “this can indeed be extended to other architectures that support smallish blockwise matrix multiplication. This includes most architectures I’m aware of, but Google’s TPU2 isn’t one of them.”

Although the results are promising, “since the kernels are still so new we do not have [a] definitive view yet on when and where they help [neural network architectures]. In experiments, we provide some situations where it helps to add sparsity to the model. We encourage the community [to] help explore this space further,” Kingma said.

Nvidia is aware of the work and is waiting for the code release so it can support the kernels more generally, Gray added.

OpenAI’s work is similar to Taco, a piece of software created by researchers at the Massachusetts Institute of Technology that automatically generates the code needed to process sparse matrices.

You can play around with the block sparse GPU kernels here. ®


