

Google DeepMind’s AI learns to play with physical objects

By Timothy Revell

10 November 2016

[Image: a little girl stacks wooden blocks into a tower on a windowsill. Caption: She could teach AI a thing or two]

Push it, pull it, break it, maybe even give it a lick. Children experiment this way to learn about the physical world from an early age. Now, artificial intelligence trained by researchers at Google’s DeepMind and the University of California, Berkeley, is taking its own baby steps in this area.

“Many aspects of the world, like ‘Can I sit on this?’ or ‘Is it squishy?’ are best understood through experimentation,” says DeepMind’s Misha Denil. In a paper currently under review, Denil and his colleagues have trained an AI to learn about the physical properties of objects by interacting with them in two different virtual environments.

In the first, the AI was faced with five blocks that were the same size but had a randomly assigned mass that changed each time the experiment was run. The AI was rewarded if it correctly identified the heaviest block but given negative feedback if it was wrong. By repeating the experiment, the AI worked out that the only way to determine the heaviest block was to interact with all of them before making a choice.
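
The paper doesn't publish its code, but the reward structure of this task is easy to sketch. Below is a minimal, hypothetical Python version of the heaviest-block experiment: the class name, the mass range and the "poke"/"answer" actions are all invented for illustration, and the researchers' agents act in a full physics simulator rather than anything this simple. The key property is preserved, though: masses are re-randomised every episode, so the only reliable strategy is to interact before answering.

    import random

    class HeaviestBlockEnv:
        """Toy stand-in for the heaviest-block task (hypothetical interface)."""

        def __init__(self, n_blocks=5):
            self.n_blocks = n_blocks

        def reset(self):
            # New random masses each episode, as described in the paper.
            self.masses = [random.uniform(1.0, 10.0) for _ in range(self.n_blocks)]

        def step(self, action):
            kind, idx = action
            if kind == "poke":
                # Pushing with a fixed force: lighter blocks move further.
                displacement = 1.0 / self.masses[idx] + random.gauss(0.0, 0.01)
                return ("moved", idx, displacement), 0.0, False
            # kind == "answer": +1 for the heaviest block, -1 otherwise.
            heaviest = max(range(self.n_blocks), key=lambda i: self.masses[i])
            return None, (1.0 if idx == heaviest else -1.0), True

    # A hand-coded probe-then-answer policy: poke every block, then
    # name the one that moved least under the same push.
    env = HeaviestBlockEnv()
    env.reset()
    moves = {i: env.step(("poke", i))[0][2] for i in range(env.n_blocks)}
    _, reward, _ = env.step(("answer", min(moves, key=moves.get)))
    print(reward)  # 1.0 (barring observation noise)

An agent that answers at random is penalised far more often than it is rewarded, which is exactly the pressure that pushes the learned policy towards probing first.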

The second experiment also featured up to five blocks, but this time they were arranged in a tower. Some of the blocks were stuck together to make one larger block, while others were not. The AI had to work out how many distinct blocks there were, again receiving a reward or negative feedback depending on its answer. Over time, the AI learned it had to interact with the tower – essentially pulling it apart – to determine the correct answer.
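
The towers task boils down to a similar guess-and-score signal. A hypothetical sketch of just the reward, where the list encoding of which blocks are glued together is invented for illustration (the paper's simulator tracks this physically):

    def towers_reward(answer, group_of):
        """+1 if the agent's count of rigid pieces is right, -1 otherwise.

        `group_of` maps each block to the id of the rigid group it is
        glued into (hypothetical encoding).
        """
        true_count = len(set(group_of))
        return 1.0 if answer == true_count else -1.0

    # Five blocks, with blocks 0 and 1 glued together: four distinct pieces.
    print(towers_reward(4, [0, 0, 1, 2, 3]))  # prints 1.0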

It’s not the first time AI has been given blocks to play with. Earlier this year, Facebook used simulations of stacked blocks to teach neural networks to predict whether a tower would topple.

AI is child’s play

The technique of training computers using rewards and punishments is called deep reinforcement learning, an approach that DeepMind is well known for. In 2013, it used the method to train an AI to play Atari video games better than human players, a demonstration that helped prompt Google to buy the company early the following year.
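
As a rough illustration of that reward-and-punishment loop, here is a minimal tabular Q-learning sketch in Python. It is far simpler than the deep networks DeepMind trains, and the `env` interface is assumed rather than taken from the paper, but the core idea is the same: the agent is never told how to solve the task, only scored afterwards, and it nudges its value estimates towards actions that led to reward.

    import random

    def q_learning(env, n_actions, episodes=1000, lr=0.1, gamma=0.99, eps=0.1):
        """Tabular Q-learning: learn action values purely from rewards.

        Assumes a hypothetical `env` with reset() -> state and
        step(action) -> (next_state, reward, done), where states are
        hashable. No task-specific instructions appear anywhere below.
        """
        q = {}  # state -> list of estimated action values
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                values = q.setdefault(state, [0.0] * n_actions)
                # Explore occasionally; otherwise exploit the best-known action.
                if random.random() < eps:
                    action = random.randrange(n_actions)
                else:
                    action = max(range(n_actions), key=lambda a: values[a])
                next_state, reward, done = env.step(action)
                # Nudge the estimate towards reward plus discounted future value.
                future = 0.0 if done else gamma * max(
                    q.setdefault(next_state, [0.0] * n_actions))
                values[action] += lr * (reward + future - values[action])
                state = next_state
        return q

Deep reinforcement learning replaces the lookup table with a neural network, which is what lets the approach scale to rich observations like game screens or simulated physics.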

“Reinforcement learning allows solving tasks without specific instructions, similar to how animals or humans are able to solve problems,” says Eleni Vasilaki at the University of Sheffield, UK. “As such, it can lead to the discovery of ingenious new ways to deal with known problems, or to finding solutions when clear instructions are not available.”

The virtual environments in the research are very basic: the AI has only a small set of possible interactions and doesn’t have to deal with the distractions and imperfections of the real world. Even so, it solves both tasks without any prior knowledge of the objects’ physical properties or of the laws of physics.

Ultimately, this work will be useful in robotics, says Jiajun Wu at the Massachusetts Institute of Technology. For example, it could help a robot figure out how to navigate precarious terrain.

“I think right now concrete applications are still a long way off, but in theory any application where machines need an understanding of the world that goes beyond passive perception could benefit from this work,” says Denil.
