
Don't fall for the hype around OpenAI's Rubik's Cube-playing robot, Berkeley bans facial recognition, and more

All in a week's work

Roundup Just in case you're addicted to the world of AI, here's more news beyond what we have already covered this week.

Yay or nay – OpenAI’s robot Rubik’s Cube hand: Some in the AI community have been gushing over OpenAI’s most recent video showing a mechanical hand deftly solving a Rubik’s Cube, even when it was being disturbed by a stuffed giraffe toy.

Here’s the video below. It’s well produced, colourful, and looks pretty impressive at first. Most people use two hands to play with a Rubik’s Cube, and here’s a robot solving it with just one hand.

[YouTube video]

But a closer inspection of the research paper [PDF] reveals that the robot, known as Dactyl, could only complete the puzzle from a fully scrambled state 20 per cent of the time – that’s just two times out of the ten trials the researchers performed.

The success rate was higher, at 60 per cent, when the cube was half-scrambled – a state that requires just 15 moves, instead of the full 26, to crack the challenge. All that depends on the robot not dropping the toy, too, which it did 80 per cent of the time in testing.

There are superior algorithms that can solve a Rubik’s Cube faster. The focus of attention here should not be on the puzzle-solving technique, though, but rather on the training of a robot hand.

OpenAI taught Dactyl using reinforcement learning, a method that teaches an agent how to perform a specific task through trial and error. The bot is given a reward every time it makes a good move that gets it closer to cracking the Rubik’s Cube, a bonus reward when the cube is fully solved, and a negative reward when it drops the toy. It played with the Rubik’s Cube for an equivalent of 10,000 years during training – an amount that obviously surpasses many human lifetimes.
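For the curious, here’s a minimal Python sketch of that kind of shaped reward. It is purely illustrative – the function name, reward magnitudes, and state flags are our own assumptions, not OpenAI’s actual code:

```python
# Illustrative reward shaping for one manipulation step. The reward
# values and state flags are assumptions, not taken from OpenAI's paper.

def step_reward(prev_progress: int, progress: int,
                solved: bool, dropped: bool) -> float:
    reward = 0.0
    reward += float(progress - prev_progress)  # reward moves toward the goal
    if solved:
        reward += 5.0                          # bonus for fully solving the cube
    if dropped:
        reward -= 20.0                         # penalty for dropping the toy
    return reward

# Example: the hand makes one step of progress without dropping the cube.
print(step_reward(prev_progress=3, progress=4, solved=False, dropped=False))  # 1.0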

Crucially, a non-AI algorithm was used to solve the cube, and reinforcement learning was used to perform the robotic manipulation of the gizmo. Again, the real focus of attention should be on training the robot hand, not whether it's any good at solving a Rubik's Cube efficiently.
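In other words, the pipeline splits cleanly in two. Here’s a rough sketch of that split, using the open-source kociemba package (a classical two-phase solver) as a stand-in for the planning step, and a hypothetical policy object with an execute_rotation() method standing in for the trained hand controller:

```python
# Sketch of the two-stage split: a classical solver plans the moves,
# and the RL-trained policy merely executes them physically.
# `kociemba` is a real open-source solver; `policy` and its
# execute_rotation() method are hypothetical stand-ins.
import kociemba

def solve_with_hand(facelets: str, policy) -> None:
    # Stage 1 (non-AI): compute the move sequence, e.g. "R U2 F' ...",
    # from a 54-character facelet description of the scrambled cube.
    moves = kociemba.solve(facelets).split()

    # Stage 2 (reinforcement learning): perform each face rotation
    # with the robot hand without dropping the cube.
    for move in moves:
        policy.execute_rotation(move)
```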

“Since May 2017, we’ve been trying to train a human-like robotic hand to solve the Rubik’s Cube,” the AI research lab said this week.

“We set this goal because we believe that successfully training such a robotic hand to do complex manipulation tasks lays the foundation for general-purpose robots. We solved the Rubik’s Cube in simulation in July 2017. But as of July 2018, we could only manipulate a block on the robot. Now, we’ve reached our initial goal.”

While this is a fascinating and important step forward for machine learning, we’re not so sure that a 20 per cent success rate really counts as having fully solved a problem, or that playing with a Rubik’s Cube gets us any closer to general-purpose robots. AI robots still need to be carefully trained to do a specific task: just because one of them can rotate a Rubik’s Cube, it doesn’t mean it can, say, roll a die.

Berkeley has banned facial recognition: Berkeley in California has become the latest US city to ban the governmental use of facial recognition technology.

Berkeley City Council unanimously voted this week in support of an ordinance that prevents the technology from being used by government agencies, including the local police. The city joins San Francisco and Oakland in California, as well as Somerville, Massachusetts, which have all passed similar bans.

“We cannot afford to write off the various performance issues related to facial recognition technology as mere engineering problems; facial recognition surveillance poses a range of fundamental constitutional problems,” said Kate Harrison, a councilmember who pushed the ordinance, according to The Mercury News, a local Bay Area publication. “In the face of federal and state inaction, it is incumbent upon cities to enact laws that protect communities from mass surveillance.”

The ordinance is motivated by the well-known shortcomings of facial recognition. Machine learning models often struggle to identify women and people with darker skin as accurately as white men, owing to their under-representation in skewed training datasets – making the technology inappropriate for use by law enforcement and other government agencies.

Play the COMPAS game! Remember the dodgy COMPAS algorithm that was used in courtrooms to estimate how likely an accused criminal was to reoffend?

Well, if you don’t, here’s ProPublica’s investigation from 2016, which showed that black people were more likely to score higher than any other group. Now, MIT Tech Review has taken the same dataset analysed by ProPublica – 7,200 profiles containing people’s names, race, age, and their risk scores as calculated by COMPAS – and turned it into an interactive game.

Each defendant is represented as a dot and is sorted into one of ten bins, numbered 1 to 10. A score of one signifies a 10 per cent chance of reoffending, five a 50 per cent chance, and ten a 100 per cent chance.

Players are then guided through different scenarios and asked to find the threshold at which most of the defendants marked as having a high chance of reoffending were indeed arrested for a new crime. That would mean the algorithm was accurate.
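To see what finding that threshold involves, here’s a toy version in Python. The data below is made up for illustration – the real game uses the 7,200 ProPublica profiles:

```python
# Toy version of the threshold exercise, with made-up data rather than
# the real COMPAS dataset. Each defendant is a (risk_score, reoffended)
# pair, with scores running from 1 to 10 as in the game.
defendants = [(2, False), (3, False), (4, True), (6, False),
              (7, True), (8, True), (9, True), (10, True)]

def accuracy_at(threshold: int) -> float:
    # Treat scores at or above the threshold as "high risk" and count
    # how often that prediction matched what actually happened.
    correct = sum((score >= threshold) == reoffended
                  for score, reoffended in defendants)
    return correct / len(defendants)

# Sweep every threshold to find where the algorithm looks most accurate.
best = max(range(1, 11), key=accuracy_at)
print(best, accuracy_at(best))  # 7 0.875 with this toy data
```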

But as you keep playing, the threshold becomes trickier to find. Some people were judged too harshly by COMPAS and kept in jail for longer than necessary, while others were set free only to reoffend. The game becomes even more confusing when race is involved, and the accuracy drops more drastically for black people compared to white people.

You can play it here. ®
