Top video game dev nerve-center Unity can now be used to train AI
Hot graphics and complex worlds provide a good testbed for algorithms
Unity, the popular cross-platform engine favored by video game developers, on Tuesday opened up its platform to machine learning researchers who want to test their algorithms.
Reinforcement learning is a strand of machine learning that teaches agents to perform a specific task in a given environment. It’s been useful for training self-driving cars in simulation before they are tested on the roads, for training robots, and for teaching agents to play games like Go and poker.
If a bot makes a good move, it is rewarded with a point. Since it has been programmed to maximize these points, it keeps repeating good actions until it learns to complete the task. Transferring skills learned digitally to real life is difficult, and it helps if the bot has been trained in a realistic rendering of its physical environment.
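That reward loop can be illustrated with a toy example. The sketch below is a minimal tabular Q-learning agent on a hypothetical five-cell corridor, not Unity's actual code or API: the bot earns a point only when it reaches the rightmost cell, and its value estimates gradually steer it toward actions that collect that point.

```python
# Illustrative sketch only -- a toy tabular Q-learning loop,
# not the Unity ML-Agents API. The agent starts in cell 0 of a
# 5-cell corridor and is rewarded only on reaching cell 4.
import random

random.seed(0)  # deterministic run for reproducibility

N_STATES, GOAL = 5, 4
ACTIONS = (1, -1)            # step right, step left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def train(episodes=500):
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            action = choose_action(state)
            next_state = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if next_state == GOAL else 0.0
            # Temporal-difference update: nudge Q toward the reward
            # plus the discounted value of the best next action.
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = next_state

train()

# The learned greedy policy: the best action from each non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)  # → {0: 1, 1: 1, 2: 1, 3: 1} -- step right everywhere
```

The same principle scales up: swap the corridor for a rendered 3D scene and the Q-table for a neural network, and you have the kind of training loop researchers run against environments built in Unity.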
“At Unity, we wanted to design a system that provides greater flexibility and ease-of-use to the growing groups interested in applying machine learning to developing intelligent agents. Moreover, we wanted to do this while taking advantage of the high-quality physics and graphics, and simple yet powerful developer control provided by the Unity Engine and Editor,” the company said in a blog post.
The platform, known as Unity Machine Learning Agents, includes additional features like the Agent Monitor class, which allows researchers to better understand how agents make decisions so that mistakes are easier to debug.
The difficulty of a task can be gradually increased during the training process by increasing the complexity of the environment, and agents can also access multiple camera streams so they can survey more of their environments.
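This gradual ramp-up is often called curriculum learning. As a rough sketch of the idea (hedged: this is a hypothetical helper, not Unity's curriculum API), an agent graduates to a harder "lesson", i.e. a more complex environment, once its recent success rate clears a threshold:

```python
# Illustrative sketch of curriculum progression, not Unity ML-Agents code.
# A "lesson" stands in for an environment difficulty level.

def next_lesson(lesson, recent_successes, threshold=0.9, max_lesson=5):
    """Return the lesson to train on next, given recent episode outcomes (1 = success)."""
    if not recent_successes:
        return lesson
    success_rate = sum(recent_successes) / len(recent_successes)
    if success_rate >= threshold:
        return min(lesson + 1, max_lesson)  # graduate to a harder environment
    return lesson

# A struggling agent stays on its current lesson...
print(next_lesson(2, [1, 0, 0, 1, 0]))  # → 2
# ...while a proficient one moves on.
print(next_lesson(2, [1, 1, 1, 1, 1]))  # → 3
```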
Since the Unity Engine and Editor are geared toward game development, they also make it easier for machine learning researchers to construct scenarios, or even create full games, to use as testbeds for their reinforcement learning algorithms.
But there is a tradeoff for having prettier graphics and more complex environments: algorithms find it harder to make sense of the playing field, and training demands more processing power from accelerators such as GPUs, so it will be more expensive to train agents.
Other companies such as Elon Musk’s OpenAI and DeepMind have also open-sourced similar environments: Gym and DeepMind Lab, respectively.
The beta for Unity Machine Learning Agents is available on GitHub, and currently supports Windows, Mac and Linux. ®