
Secret HPE AI chip, TensorFlow updates, neural networks writing themselves – and more

Your weekend dose of machine-learning updates

Roundup It's been an interesting fortnight, sorry, two weeks in AI. In addition to what we've already reported, we have news about HPE developing what looks like a neural network accelerator chip, TensorFlow updates, Google's effort to teach software to make software, and other bits and pieces.

HPE’s ‘neural network processor’? The next biz in line working on a fast custom-designed chip supposedly for neural networks is HPE.

It was first reported on our sister site, The Next Platform, this week, although there are scant details on how the hardware works, its specs or even if it can deal with stuff like deep learning. That's mainly because our colleagues got wind of the secret R&D effort before it was due to be made public, and the enterprise IT giant is keeping schtum for now.

The mysterious chip's “dot product engine” (DPE) architecture is apparently geared toward carrying out matrix operations at speed, which is useful for executing AI algorithms quickly. It also uses memristors, allegedly, which were supposed to drive HPE's now-defunct Machine computer architecture.

Tom Bradicich, veep and general manager of servers, converged edge, and IoT systems at HPE, told Nicole Hemsoth at The Next Platform: “[It] can be used for inference of several types of neural networks: deep neural networks, convolutional neural networks, recurrent neural networks. Hence it can do neural network jobs and workloads.

“DPE executes linear algebra in the analog domain, which is more efficient than digital implementations, such as dedicated ASICs. And, further, it has the advantage of reconfigurability on the fly. It’s fast because it accelerates vector matrix math, dot product multiplication, by exploiting Ohm’s Law on a memristor array. It can also be used for other types of operations, such as FFT, DCT, and convolution.”
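To unpack that a little: in a memristor crossbar, the programmed conductance of each cell encodes a matrix weight, and an input voltage vector applied to the rows produces column currents that are, by Ohm’s Law, the vector-matrix product. Here’s a toy numerical sketch of that math in Python – a simulation of the principle with made-up values, not of HPE’s actual hardware:

import numpy as np

# Toy model of an analog dot-product engine: G[i][j] is the
# conductance of the crossbar cell at row i, column j, standing in
# for a matrix weight. V is the vector of input voltages applied to
# the rows. By Ohm's Law (I = G * V), the total current flowing down
# each column is the dot product of V with that column's weights.
G = np.array([[0.5, 1.2],
              [0.3, 0.8],
              [1.0, 0.1]])       # conductances, i.e. the weight matrix
V = np.array([0.2, 0.5, 0.9])    # input voltages

I = V @ G   # column currents = vector-matrix product, in one step
print(I)    # what the "dot product engine" would read out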

The mention of memristors makes the chip sound more neuromorphic than Google’s TPU and TPU2 accelerator chips or Nvidia’s GPUs. The DPE is expected to be showcased at the end of the month at an HPE partner conference in Spain, so perhaps we’ll learn more about it then.

HPE declined to comment.

TensorFlow update Google has released TensorFlow r1.4, which makes it easier to drive the low-level machine-learning framework through Keras, a high-level, programmer-friendly interface. For example, developers can reach, via Keras, TensorFlow's Estimator API to add common tools, such as linear classifiers or regressors, to their models.
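In practice, the bridge is tf.keras.estimator.model_to_estimator, which wraps a compiled Keras model as an Estimator. A minimal sketch – the two-layer model here is a made-up example:

import tensorflow as tf  # r1.4 or later

# Build and compile a toy model with the friendly Keras API...
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# ...then hand it to the Estimator machinery, which takes care of
# checkpointing, distributed training, and model export.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)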

Also, TensorFlow's Dataset API has been updated so that it supports Python generators for feeding pipelines of data into neural networks. A new function, tf.estimator.train_and_evaluate, has also been added to make it easier to train, evaluate, and export models for distributed machine learning.
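The generator support looks like this – a minimal sketch with a hypothetical generator churning out random features and labels:

import numpy as np
import tensorflow as tf

def sample_gen():
    # Made-up generator: yields (features, label) pairs forever.
    while True:
        x = np.random.rand(10).astype(np.float32)
        yield x, np.float32(x.sum() > 5.0)

# Wrap the Python generator in the Dataset API and batch it up,
# ready to be fed into a model's input pipeline.
dataset = tf.data.Dataset.from_generator(
    sample_gen,
    output_types=(tf.float32, tf.float32),
    output_shapes=(tf.TensorShape([10]), tf.TensorShape([])))
dataset = dataset.batch(32)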

TensorFlow is the most widely used framework in AI, and here's some advice for those looking to get stuck in.

AI designing AI for image recognition AutoML, Google’s effort to develop machine-learning models that can design neural network architectures, has been underway for months. Now, AutoML has been applied to ImageNet and COCO, two large datasets containing millions of images, to get the software to create layers in neural networks for image recognition tasks.

The team has reported some promising results: AutoML has generated a novel architecture, dubbed NASNet – a small, two-layered model designed purely by Google's code. It achieves a prediction accuracy of 82.7 per cent on ImageNet – a decent score that’s on par with SENet, the winning architecture in this year’s Large Scale Visual Recognition Challenge, an ImageNet competition.

On the object detection task using the COCO dataset, NASNet achieved a mean average precision of 43.1 per cent – four per cent better than Faster R-CNN, an older model from 2015.

It’s a very interesting project to follow, since having AI that can build AI means that some coding can be automated. It makes it easier for developers to create these systems, and could potentially tackle the shortage of people with specialist AI programming skills.

Deep learning training Speaking of a lack of expert AI knowledge, Nvidia has announced new courses, workshops and partnerships to teach more people about deep learning.

Greg Estes, vice president of developer programs at Nvidia, said: “The world faces an acute shortage of data scientists and developers who are proficient in deep learning, and we’re focused on addressing that need. As part of the company’s effort to democratize AI, the Deep Learning Institute is enabling more developers, researchers and data scientists to apply this powerful technology to solve difficult problems.”

The Deep Learning Institute is working with Booz Allen Hamilton, a management consulting biz, to train government employees, including US Air Force personnel, for defense purposes.

It’ll also team up with deeplearning.ai, an educational startup created by ex-Baidu chief scientist Andrew Ng, to create new content that will cover natural language processing, financial trading, and video analytics.

There are a few free courses, but for most of the online labs you’ll need to purchase credits. Most individual programs require 30 credits, which cost $29.99 (£22.78). You can find out more here.

Embodied Intelligence A handful of researchers have left OpenAI, Elon Musk's AI research arm, to embark on their own AI robotics startup called Embodied Intelligence.

It is led by Peter Chen as chief executive, with Pieter Abbeel as president and chief scientist, Rocky Duan as chief technology officer, and Tianhao Zhang as research scientist.

Embodied Intelligence will focus on bridging the gap between software and hardware in robotics. Abbeel spoke to The New York Times and said that current hardware is good enough to mimic human motion, but the field needs new software for higher levels of automation.

“This is largely a computer science problem – an artificial intelligence problem,” he said.

The startup has received $7m in funding from Amplify Partners and other Silicon Valley venture capital investors. It is based in Emeryville, a city east of San Francisco in California.

Pyro Uber AI Labs has released Pyro, a probabilistic programming language for deep learning and Bayesian modeling.

In a blog post, Noah Goodman, a member of the Pyro development team, explained that “specifying probabilistic models directly can be cumbersome and implementing them can be very error-prone.”

Uber has to deal with a lot of uncertainty in problems like where to position its drivers, matching drivers to people looking for rides, and working out the fastest routes. It relies on looking for common patterns in data generated from drivers and riders to predict the best solutions. So it’s not surprising that Uber has developed a probabilistic programming language like Pyro for research purposes.

Goodman continued: “Probabilistic programming languages (PPLs) solve these problems by marrying probability with the representational power of programming languages. A probabilistic program is a mix of ordinary deterministic computation and randomly sampled values.”

Pyro is written in Python and built on top of PyTorch.
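To give a flavour of that mix of deterministic code and random sampling, here's a minimal sketch in Pyro – a toy model of a noisy weight measurement, not taken from Uber's codebase, and written against Pyro's class-based distributions API, which may differ in the alpha release:

import torch
import pyro
import pyro.distributions as dist

def scale(guess):
    # Latent variable: the true weight, normally distributed
    # around our guess.
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    # Observed variable: a noisy measurement of that weight.
    return pyro.sample("measurement", dist.Normal(weight, 0.75))

print(scale(torch.tensor(8.5)))  # draws one sample from the model

Running the model forward just simulates it; the point of a PPL is that the same program can then be conditioned on observed measurements to infer the latent weight.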

“We believe the critical ideas to solve AI will come from a joint effort among a worldwide community of people pursuing diverse approaches. By open sourcing Pyro, we hope to encourage the scientific world to collaborate on making AI tools more flexible, open, and easy-to-use,” Goodman said.

You can play with the current alpha version here.

And finally, AMD GPU support has been added to the TVM deep-learning toolchain. ®
