Nvidia builds CUDA GPU programming library for machine learning – so you don't have to

Craft a deep neural network on a graphics chipset

Nvidia has released a set of software routines for accelerating machine-learning algorithms on its massively parallel graphics processors.

Over the weekend, the GPU maker uploaded cuDNN – the CUDA Deep Neural Network library – a set of primitives for building software that trains neural networks.

The component is optimized for Nvidia's processors and should, in theory, save programmers time: by using the library, developers won't have to reinvent the wheel when tuning machine-learning algorithms to run in parallel on GPUs, offloading the mathematical work from the host CPU onto the graphics chip.
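
To give a flavour of what that looks like from the programmer's side, here is a minimal sketch of the handle-and-descriptor style the library uses, running a ReLU activation over a small tensor on the GPU. It assumes a current cuDNN release and an installed CUDA toolkit (later versions than the initial drop described here), and the tensor sizes and data are made up purely for illustration:

```c
// Minimal cuDNN sketch: apply a ReLU activation to a small 4D tensor on the GPU.
// Assumes the CUDA toolkit and cuDNN are installed; build with, for example:
//   nvcc relu_demo.cu -lcudnn -o relu_demo
#include <cudnn.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

#define CHECK_CUDNN(call)                                              \
    do {                                                               \
        cudnnStatus_t s = (call);                                      \
        if (s != CUDNN_STATUS_SUCCESS) {                               \
            std::printf("cuDNN error: %s\n", cudnnGetErrorString(s));  \
            return 1;                                                  \
        }                                                              \
    } while (0)

int main() {
    // A library context, analogous to a cuBLAS handle.
    cudnnHandle_t handle;
    CHECK_CUDNN(cudnnCreate(&handle));

    // Describe a small NCHW float tensor: 1 image, 1 channel, 2x4 pixels.
    const int n = 1, c = 1, h = 2, w = 4, count = n * c * h * w;
    cudnnTensorDescriptor_t desc;
    CHECK_CUDNN(cudnnCreateTensorDescriptor(&desc));
    CHECK_CUDNN(cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW,
                                           CUDNN_DATA_FLOAT, n, c, h, w));

    // Describe the activation: rectified linear units.
    cudnnActivationDescriptor_t act;
    CHECK_CUDNN(cudnnCreateActivationDescriptor(&act));
    CHECK_CUDNN(cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                             CUDNN_NOT_PROPAGATE_NAN, 0.0));

    // Copy some made-up input data to the GPU.
    std::vector<float> host = {-3, -1, 0, 1, 2, -2, 5, -4};
    float *dev = nullptr;
    cudaMalloc(&dev, count * sizeof(float));
    cudaMemcpy(dev, host.data(), count * sizeof(float), cudaMemcpyHostToDevice);

    // y = relu(x), computed in place on the device.
    const float alpha = 1.0f, beta = 0.0f;
    CHECK_CUDNN(cudnnActivationForward(handle, act, &alpha, desc, dev,
                                       &beta, desc, dev));

    // Pull the result back and print it: negatives are clamped to zero.
    cudaMemcpy(host.data(), dev, count * sizeof(float), cudaMemcpyDeviceToHost);
    for (float v : host) std::printf("%g ", v);
    std::printf("\n");

    // Tidy up.
    cudaFree(dev);
    cudnnDestroyActivationDescriptor(act);
    cudnnDestroyTensorDescriptor(desc);
    cudnnDestroy(handle);
    return 0;
}
```

The same pattern of a library handle plus descriptors applies to the rest of the library's building blocks, such as its convolution, pooling and softmax routines.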

Announcing cuDNN, Nvidia pointed to examples of machine learning and neural networks being used by financial companies, web firms and research bodies in areas such as fraud detection and gaming.

In particular, Nvidia highlighted the use of these techniques for image processing, in tasks such as handwriting and facial recognition.

“The success of DNNs has been greatly accelerated by using GPUs, which have become the platform of choice for training large, complex, DNN-based ML systems,” the company’s solutions architect Larry Brown has blogged.

Brown added that Nvidia was introducing the primitives library due to the “increasing importance” of DNNs and the key role played by GPUs. ®
