Alibaba crafts AI chips, Facebook uses Bayesian magic to tweak code performance, and more

Your dose of machine-learning medicine

There is no reason for this robot image, we just like it

Roundup Good morning – here's some machine-learning bits and bytes to kick start your week. If this sort of tech is right up your street, then perhaps check out our AI conference, M3, in London, England. OK, on with the show...

Alibaba to make its own AI chips: Jack Ma, executive chairman of Alibaba, announced that the giant Chinese e-commerce conglomerate will set up a new company to build a neural-network accelerator chip.

The hardware will perform inference, the decision-making stage in which a trained network acts on incoming data and spits out, say, the type of objects in a photo or the age of someone from their picture. Ma said it was a “core technology” that China needed to develop itself if it were to stop relying on US imports. Both countries are locked in a tit-for-tat trade tariff war, with the US slapping extra levies on billions of dollars of Chinese-built components coming into America – from aerospace parts to networking equipment. China retaliated by promising tariffs on $60bn of US imported goods.

China is investing heavily in its own homegrown hardware. Bitmain, a company pumping out ASICs for crypto-mining, is a unicorn startup valued at over a billion dollars.

Facebook and Bayesian optimization: Here’s a technical post about how Facebook uses Bayesian machine learning methods to help researchers tune parameters for models.

Neural networks come with a raft of hyperparameters – learning rates, layer sizes, and so on – that have to be carefully tweaked, and it’s a time-consuming process. Bayesian optimization builds a statistical model of the relationship between parameter settings and model performance, and helps researchers decide which experiments they should run next to find the optimal settings for their system.
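As a toy illustration of the idea – not Facebook's actual code, which uses far more sophisticated Gaussian-process tooling – here is a minimal one-dimensional Bayesian-optimization loop in plain NumPy: a Gaussian-process surrogate models the parameter-to-score relationship, and an expected-improvement rule picks the next experiment to run. The kernel, length scale, and toy objective are all invented for the sketch.

```python
import numpy as np
from math import erf

def rbf(a, b, length_scale=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP at the test points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    var = 1.0 - np.sum((K_s.T @ K_inv) * K_s.T, axis=1)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """Expected improvement (for minimisation) under a Gaussian posterior."""
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    return (best - mu) * cdf + sigma * pdf

def objective(x):
    """Stand-in for an expensive training run we want to minimise."""
    return (x - 0.7) ** 2

xs = np.array([0.1, 0.5, 0.9])          # initial experiments
ys = objective(xs)
grid = np.linspace(0.0, 1.0, 200)       # candidate parameter settings
for _ in range(8):
    mu, var = gp_posterior(xs, ys, grid)
    ei = expected_improvement(mu, var, ys.min())
    x_next = grid[np.argmax(ei)]        # most promising next experiment
    xs = np.append(xs, x_next)
    ys = np.append(ys, objective(x_next))

best = xs[np.argmin(ys)]                # should home in near 0.7
```

The point of the loop is that each "experiment" is chosen where the surrogate expects the biggest payoff, so far fewer real training runs are needed than with grid or random search.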

It reduces the time researchers need to spend experimenting and helps the model reach good outcomes. Facebook used this technique to tinker with its recommendation systems. You can read the paper here if the blog post hasn't already frazzled your brain.

Image and speech recognition all in one: Computer scientists at the Massachusetts Institute of Technology in the US have developed a system [paper PDF] that can pick out the right pictures based on an audio description.

It uses two convolutional neural networks: one encodes an image, the other encodes a spoken audio caption, and both map into the same embedding space. That way, each image ends up close to its matching audio.
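The matching trick can be sketched without any deep-learning framework. In the toy below – entirely invented for illustration, not the MIT system – the two "encoders" are pseudo-inverse linear maps standing in for the trained CNNs, synthetic image and audio features share a hidden content vector, and retrieval is simply a cosine-similarity argmax in the shared space:

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_img, d_aud, n = 8, 64, 128, 5

A = rng.normal(size=(d_img, d_latent))   # how "content" shows up in pixels
B = rng.normal(size=(d_aud, d_latent))   # how the same content shows up in audio
z = rng.normal(size=(d_latent, n))       # one shared content vector per pair
images = A @ z + 0.01 * rng.normal(size=(d_img, n))
audios = B @ z + 0.01 * rng.normal(size=(d_aud, n))

# Stand-ins for the trained encoders: each maps its own modality back into
# the shared space (pseudo-inverses here, learned conv nets in the real thing).
enc_img = np.linalg.pinv(A)
enc_aud = np.linalg.pinv(B)

def normalise(E):
    """Unit-normalise each column so dot products are cosine similarities."""
    return E / np.linalg.norm(E, axis=0)

img_emb = normalise(enc_img @ images)    # (d_latent, n)
aud_emb = normalise(enc_aud @ audios)

sim = aud_emb.T @ img_emb                # similarity: rows = spoken captions
retrieved = sim.argmax(axis=1)           # best-matching image per caption
```

Because matched pairs land near the same point in the shared space, the argmax recovers the correct image for each spoken caption – which is exactly the behaviour the two CNNs are trained to produce.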

The model can only handle a few hundred different words at the moment. The idea is that this could be a novel way to teach computers translations for words between different languages. The words for objects may be different, but the image representing the two words is the same.

“One promising application is learning translations between different languages, without need of a bilingual annotator,” according to MIT's Rob Matheson this week. “Of the estimated 7,000 languages spoken worldwide, only 100 or so have enough transcription data for speech recognition.

“Consider, however, a situation where two different-language speakers describe the same image. If the model learns speech signals from language A that correspond to objects in the image, and learns the signals in language B that correspond to those same objects, it could assume those two signals — and matching words — are translations of one another.”

What does your social media profile say about you?: Deepsense is an online tool that allows users to enter a Twitter handle, email address, Instagram handle – you name it – and it'll come up with a profile of that netizen.

It then uses that person's public posts to predict things like their persona, social activities, interests, and what they like to talk about. It's supposed to show off machine-learning tools for analyzing us as online creatures.

But imagine if it could also, worryingly, be used to filter out potential employees: someone turned down for a job because an AI bot thought their online persona was too negative. It’s not a good look if a machine decides you’re a bit of a tool. Don't be too worried, though, because it's not particularly brilliant. There's typically not enough information for it to draw from – unless you’re as big as Kim Kardashian or Kanye West.

You can try it here. ®

We'll be examining machine learning, artificial intelligence, and data analytics, and what they mean for you, at Minds Mastering Machines in London, between October 15 and 17. Head to the website for the full agenda and ticket information.




Biting the hand that feeds IT © 1998–2018