# Wanna get started with practical AI? Check out this chap's Rubik's Cube solving neural-net code

## Written in Python, it's not perfect – but it's pretty cool

The Rubik’s Cube is one of those toys that just won't go away. Solving it is either something you can do in minutes to impress your friends, or something you find so hard the cube ends up as a paperweight.

There are several algorithms for solving the classic cube, which has a whopping 43,252,003,274,489,856,000 – about 43 quintillion – possible combinations. One state-of-the-art approach has shown the puzzle can always be defeated in 20 or fewer moves, no matter how the colored squares are arranged. This figure is dubbed God's Number.

These algorithms were crafted by mathematicians and expert human players. The computers just obey them as they would any other instructions. The machines would be lost without their programmers.

Can software learn how to solve the cube all on its own? It’s an interesting problem that Jeremy Pinto, a systems design engineering master’s student who recently graduated from the University of Waterloo, Canada, decided to crack. He wanted to see if a neural network could figure out how to solve a Rubik’s Cube.

### A simple AI for a simple game

The project is “just a proof of concept,” Pinto told *The Register*. The convolutional neural network (CNN) – a type of neural network normally used in computer vision – he coded for the job is very simple.

Pinto used Keras, a Python deep-learning library that interfaces with Google’s popular TensorFlow, to build his CNN.

It’s made of two layers: a convolutional layer and a feedforward layer. He imported open-source code, including MagicCube, which simulates and visualizes Rubik’s Cubes, so he could generate and solve the puzzle virtually in software. The position of each colored square on the cube is encoded as an array of integers.
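Pinto's exact code isn't reproduced here, but a minimal two-layer Keras model along those lines might look like this – the layer sizes, the 6×9 sticker layout, and the one-hot color encoding are assumptions for illustration, not his actual choices:

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_COLORS = 6   # one channel per sticker color
NUM_MOVES = 12   # 6 faces x 2 rotation directions (assumed move set)

model = keras.Sequential([
    # treat the cube's 54 stickers as a 6x9 grid, one-hot encoded by color
    layers.Input(shape=(6, 9, NUM_COLORS)),
    layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
    layers.Flatten(),
    # feedforward layer: a probability over the possible next moves
    layers.Dense(NUM_MOVES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Treating move prediction as a classification problem is what lets a softmax output stand in for "the next move to make."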

The cube is randomly shuffled from a solved state to a scrambled configuration in ten steps – that is, it's twisted and rotated randomly ten times. These ten steps are then fed into the CNN backwards, Pinto explained, effectively teaching it step by step how to go from a particular scrambled state to the solved state. At each step, the neural network guesses what the next move toward solving the cube should be, is then told the correct move, and gradually learns from that.

It's a neat way of teaching the network how to go from a scrambled state to a solved cube without having to solve the puzzle yourself over and over, either by hand or by an algorithm, to train the CNN.

“It would be given the tenth move as input, and be trained to produce the ninth move, then the ninth move is given and it has to get to the eighth move. This carries on until it’s finally down to the last move and then it’s solved,” Pinto said.
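That reverse-scramble trick can be sketched in a few lines of Python. The move names and the `apply_move` hook below are hypothetical stand-ins for the cube simulator's API, not MagicCube's actual interface:

```python
import random

# Quarter-turn moves: 6 faces, clockwise and counter-clockwise
MOVES = ["U", "U'", "D", "D'", "L", "L'", "R", "R'", "F", "F'", "B", "B'"]

def invert(move):
    """Inverse of a quarter-turn: U <-> U'."""
    return move[:-1] if move.endswith("'") else move + "'"

def scramble_to_training_pairs(apply_move, solved_state, depth=10):
    """Scramble `depth` random moves from solved, then walk the scramble
    backwards, emitting (state, correct_next_move) training pairs."""
    state = solved_state
    history = []
    for _ in range(depth):
        move = random.choice(MOVES)
        state = apply_move(state, move)
        history.append(move)
    pairs = []
    # At each state on the way back, the label is the inverse of the
    # move that produced it -- no human or solver algorithm needed.
    for move in reversed(history):
        pairs.append((state, invert(move)))
        state = apply_move(state, invert(move))
    return pairs
```

The labels come for free from the scramble itself, which is why no hand-solved examples are needed to build the training set.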

After 12 hours of training using an Nvidia GeForce GTX 1050 graphics card with a Pascal GPU to accelerate the task, and around 50,000 games played, he realized the performance of the CNN had plateaued. “That’s when you know to stop training, because it won’t really lead to any more improvements,” Pinto said.
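Keras ships an `EarlyStopping` callback that halts `model.fit` when a monitored metric stops improving; whether Pinto used it is not stated. A rough stand-alone version of the same plateau check might look like this:

```python
def plateaued(losses, patience=5, min_delta=1e-3):
    """True if the loss hasn't improved by at least min_delta
    over the best earlier value in the last `patience` epochs."""
    if len(losses) <= patience:
        return False  # not enough history to judge
    best_before = min(losses[:-patience])
    return min(losses[-patience:]) > best_before - min_delta
```

Stopping at the plateau avoids burning GPU-hours on epochs that no longer move the needle.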

### Room for improvement

It was now ready for testing. Pinto gave the CNN new random combinations to crack, and to his surprise it managed to figure out, within seconds, how to solve the Rubik’s Cube when it was six or seven moves away from solved, about 75 per cent of the time. Give it more moves than that – that is, shuffle it more than seven times – and it struggles. That's no surprise, seeing as it was trained on ten-step scrambles.

“Surprisingly, the network works decently well for any position less than six moves away from solved. I found that by shuffling up to six moves, sometimes more, it is able to correctly solve it. It doesn't always work and can definitely use improvement, but it’s a good place to start,” Pinto said on his GitHub page.
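At test time a setup like this can simply apply the network's most-confident move over and over until the cube is solved or a move budget runs out. A sketch of that loop, where `predict_move`, `apply_move`, and `is_solved` are hypothetical stand-ins for the trained model and the cube simulator:

```python
def greedy_solve(state, predict_move, apply_move, is_solved, max_moves=20):
    """Repeatedly apply the network's suggested move.
    Returns the move list on success, or None if the budget runs out."""
    solution = []
    for _ in range(max_moves):
        if is_solved(state):
            return solution
        move = predict_move(state)   # e.g. argmax over the softmax output
        state = apply_move(state, move)
        solution.append(move)
    return None  # gave up: scrambled beyond what the network learned
```

A greedy loop like this has no backtracking, which helps explain why accuracy falls off once the scramble is deeper than the training data covered.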

The crude CNN is no match for the algorithms that achieve God’s Number – it can’t find the most efficient solution for an arbitrary scramble – but it’s a solid start for a small neural network. More training, on deeper scrambles, would likely improve its puzzle cracking.

It's also a fun, practical starting point for anyone who wants to get into AI but needs some working code to get to grips with it. Folks with a more engineering bent, like those here at *El Reg*, may prefer real-world code to the pages and pages of mathematics seen in most neural-network papers and textbooks.

“Humans can generally do it in about five or six shuffles but at ten or more it gets too complex and it becomes a job for those specialized algorithms,” he said. “The entropy for the number of possible moves gets too big.”

At the moment it’s just a side project born from curiosity that took under three days to complete, but Pinto told us it’s “definitely worth exploring.”

“There are many ways to improve upon it," he said. "Using something like a reinforcement learning approach would be a great place to start. Also finding a way of exploiting the symmetries of the cube and invariance to colors would be good. Training larger networks would be also likely necessary. Finally, it would be interesting to inject known algorithms for specific cube solving to see if the network could pick up on it. I'm sure there are a million better ways to improve it.”

The source code can be found here if you want to find out how it works and how to write something similar. ®