Chip giants pelt embedded AI platforms with wads of cash

Shift from supercomputer to mobile continues

Analysis Artificial intelligence and machine learning engines underpin many emerging applications and services, from making sense of big data for enterprises, to supporting hyper-personalized consumer content, to virtual reality gaming.

The current challenge is to move AI from the supercomputer to the mobile device, supporting technologies like computer vision locally on the handset, car, camera or VR headset.

Qualcomm has been a leader here, but recent weeks have seen Intel and its Chinese partner Rockchip invest in chip-level computer vision and AI capabilities, while Apple has acquired machine learning startup Turi, presumably to enhance Siri, its AI-driven personal assistant.

Rockchip licenses CEVA XM4

Rockchip has licensed the XM4 imaging and vision DSP (digital signal processor) design from IP provider CEVA to enhance these aspects of its system-on-chip (SoC) products. Rockchip says the DSP will enable advanced vision features at the low power levels required for mobile devices, supporting digital video stabilization, object detection and tracking, and 3D depth sensing, among others.

Rockchip’s CMO, Feng Chen, said in a statement that the firm wanted to “embrace the potential of computational photography, computer vision and machine learning in our product designs, seamlessly handling even the most complex use cases and algorithms”.

This is not just about vision – the XM4 also targets augmented reality, advanced driver assistance systems (ADAS) and other applications, offloading these computationally intensive tasks from the CPU or GPU; and it will enable Rockchip to use the machine learning capabilities of CEVA’s Deep Neural Network (CDNN2) framework.

Intel buys Nervana

Intel, which is an investor in Rockchip, has AI ambitions of its own, many focused on the massive cloud platforms which will boost its core server chip business, but it is also investing heavily in client-side capabilities such as intelligent assistants (increasingly a key interface to the smart home and car) and virtual reality.

It is spending $400m to acquire AI start-up Nervana, which should be particularly valuable for its work in intelligent cars. In July, Intel announced a new partnership with BMW and Mobileye focused on driverless vehicles, with Mobileye providing the detection capabilities. Nervana could add a far greater level of intelligence or "brainpower" to the car platform.

But Intel’s bigger agenda, as EETimes points out, is likely to be to sideline the GPU (graphics processing unit) as the workhorse of machine learning platforms. Nvidia has stolen a march on Intel in high-end GPUs, threatening the larger firm’s incumbency in supercomputers. If the move away from GPUs and towards specialized AI processors accelerates, it will benefit Intel.

Nervana has already mounted a credible challenge to Nvidia’s CUDA software, which powers its GPUs, with its own CUDA-compatible Neon cloud service. It also has its own deep learning accelerator chip on the drawing board, set to launch next year.

Both these offerings could gain far greater power with Intel behind them, and could enable the US giant to offer AI accelerator boards to challenge Nvidia’s GPU boards, while enhancing Neon to outperform CUDA too. Intel will incorporate Nervana's algorithms into its Math Kernel Library for integration with industry-standard frameworks, and it can add Nvidia-compatible deep learning to its portfolio of cloud services.

The upcoming Nervana chip, called Engine, promises an 8Tbps multichip module with terabytes of 3D memory surrounding a 3D fabric of connected neurons, each using low-precision floating point units (FPUs). This design enables it to carry out far more deep learning calculations per second on a smaller chip than general purpose GPUs, says the firm.
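Nervana has not published its exact number format, but the general trade-off behind low-precision arithmetic can be illustrated with a minimal NumPy sketch: halving the width of each value halves the memory footprint and bandwidth per operand, while the rounding error typically stays small enough for neural network workloads.

```python
import numpy as np

# Illustrative sketch only -- float16 stands in for whatever low-precision
# format the Engine's FPUs actually use.
n = 1_000_000
weights32 = np.random.rand(n).astype(np.float32)  # single precision
weights16 = weights32.astype(np.float16)           # half precision copy

# Half the bytes per value: half the memory footprint and bus traffic,
# which is where the "more calculations per second" claim comes from.
print(weights32.nbytes)  # 4000000
print(weights16.nbytes)  # 2000000

# The precision loss is small for values in [0, 1): worst-case rounding
# error for float16 near 1.0 is about 2**-11, i.e. under 0.001.
max_err = float(np.max(np.abs(weights32 - weights16.astype(np.float32))))
print(max_err < 1e-3)  # True
```

The point is not that float16 is Nervana's format, but that trading mantissa bits for throughput is the common design lever behind specialized deep learning silicon.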

Karl Freund, senior analyst at Moor Insights & Strategy, told EETimes: “GPUs are the prominent way to train deep learning neural networks and Nvidia is the leader. Intel has its multicore Xeons and Xeon Phi and with the acquisition of Altera, its FPGAs, but it doesn't have GPUs. Acquiring Nervana is a way of getting into the deep learning market not by copying the general purpose GPU strategy, but by offering a specialized coprocessor specifically designed for neural networks."

Qualcomm’s Zeroth

With specialized deep learning chips being deployed from supercomputers to cars to handsets, the competition will only heat up, and Nvidia may respond with its own offering. At the mobile end, Qualcomm has been highly active in this area with its Zeroth machine learning platform, which can be embedded in the Snapdragon SoC.

Initially, Qualcomm’s Zeroth/Snapdragon work focused on bringing AI algorithms and vision processing to cars, but many other applications are in its sights, as highlighted by its recently announced deep learning software developers’ kit (SDK) for the Snapdragon 820 SoC.

Called the Snapdragon Neural Processing Engine, the SDK is an attempt by Qualcomm to expand beyond its core smartphone markets – by enticing automotive, industrial, and IoT developers to its Snapdragon platform. It also sees the chip supplier looking for broader market reach, and accelerated uptake, by enabling a great deal of its brain-like functionality in software.

Back in May 2014, when Qualcomm first introduced the Zeroth project, the goal was to build a neural processing unit (NPU) that would be a hardware module for its advanced SoCs. But dedicated hardware not only adds cost and power consumption to mobile devices, but also raises barriers to developer support.

So the new SDK for the neuromorphic Zeroth Machine Intelligence Platform – Qualcomm’s equivalent of IBM’s TrueNorth chip – aims to bring the hardware-specific functions of these brain chips to a more general purpose Snapdragon platform. This will help mobile devices to leverage the potential of silicon that acts in a manner somewhat akin to the human brain.

While IBM is planning on shipping its brain chips to end-customers, who will put them to use in servers, the Qualcomm approach relies on software to bring those capabilities to a far more adaptable chipset – one that can be used in mobile devices, industrial IoT endpoints and vehicles.

Apple makes latest AI-related acquisition

The makers of those endpoints and devices will also be taking a keen interest. Apple has made its latest AI-related acquisition, paying $200m for machine learning startup Turi.

Turi offers tools for developers to embed machine learning into applications which automatically scale and tune. Use cases for the technology include product recommendations, sentiment analysis and churn prediction.

It is not yet clear whether these capabilities will be used for in-house development or made available to iOS developers, but Apple CEO Tim Cook has identified AI as a key way to improve the customer experience of the firm’s devices and software. And of course, it is vital for Apple to continue to lead in user experience and keep customers as addicted to its home, auto and IoT offerings as they are to its phones. It seems likely that Turi’s ability to analyse sentiment could be harnessed to improve Siri’s intelligence and responsiveness, extending the assistant beyond its current fairly basic capabilities.

On the company’s most recent earnings call, Cook said: “These experiences become more powerful and intuitive as we continue our long history of enriching our products through advanced artificial intelligence. We have focused our AI efforts on the features that best enhance the customer experience.”

During the briefing, Cook highlighted the potential for Siri not only to understand the user's words, but also to identify the sentiment behind them. The acquisition of Turi could be a bridge from what is currently a relatively simplistic function to one which can more effectively predict what the consumer wants and better refine search results.

“We’re also using machine learning in many other ways across our products and services, including recommending songs, apps, and news,” said Cook on the call. “Machine learning is improving facial and image recognition in photos, predicting word choice while typing in messages and mail, and providing context awareness in maps for better directions. Deep learning within our products even enables them to recognize usage patterns and improve their own battery life. Most of the AI processing takes place on the device rather than being sent to the cloud.”

Formerly known as Dato, Seattle-based Turi raised more than $25m from venture capital investors including New Enterprise Associates and Madrona Venture Group.

AI has been at the heart of several of Apple’s 15 acquisitions of the past 18 months, including speech recognition firm VocalIQ, image recognition specialist Perceptio, and facial recognition business Emotient.

Copyright © 2016, Wireless Watch

Wireless Watch is published by Rethink Research, a London-based IT publishing and consulting firm. This weekly newsletter delivers in-depth analysis and market research of mobile and wireless for business. Subscription details are here.