AI is now closer to perceiving 3D space the same way as living beings do

The Siliconreview
15 June, 2018

The last few years have witnessed important advances in the field of artificial intelligence (AI). With machine learning algorithms becoming more and more effective, autonomous machines may be closer than we previously imagined. In what could be a step closer to Terminator-like robots, DeepMind Technologies, an artificial intelligence subsidiary of Google, has published a paper outlining a system in which a neural network, given just a couple of 2D images of a scene and little to no labelled data, can construct a considerably accurate 3D representation of it.
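The paper in question introduces the Generative Query Network (GQN), which encodes each observed image together with its camera viewpoint and sums the per-view encodings into a single scene representation that a second network can then "render" from new viewpoints. The toy sketch below illustrates only the aggregation idea; the dimensions and the fixed random projection are placeholders standing in for DeepMind's trained convolutional representation network, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes; the real system uses convolutional image encoders.
IMG_DIM, POSE_DIM, REPR_DIM = 16, 7, 8

# A fixed random projection stands in for a trained representation network.
W = rng.standard_normal((IMG_DIM + POSE_DIM, REPR_DIM))

def encode_view(image, pose):
    """Embed one (image, camera-pose) observation as a representation vector."""
    return np.concatenate([image, pose]) @ W

def scene_representation(observations):
    """Sum the per-view embeddings. Summation makes the aggregate
    order-invariant, so any number of views can be consumed in any order."""
    return sum(encode_view(img, pose) for img, pose in observations)

# Three simulated observations of the same scene.
views = [(rng.standard_normal(IMG_DIM), rng.standard_normal(POSE_DIM))
         for _ in range(3)]

r_forward = scene_representation(views)
r_reversed = scene_representation(views[::-1])
assert np.allclose(r_forward, r_reversed)  # view order does not matter
```

The design choice worth noticing is the sum: because addition is commutative, the scene representation does not depend on the order in which views arrive, mirroring how the system can take "a couple of 2D images" without any fixed input structure.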

When we see an image of a room, we can picture the objects in it as well as the approximate distances between them. Moreover, spatial awareness comes naturally to us: we would have no difficulty drawing a map of the room, with all the objects in it, from any angle. Computer systems lack this innate ability, but the latest research from DeepMind claims to have made significant headway towards something akin to synthetic spatial awareness for robots.

Most machine learning algorithms learn through supervised learning, which entails ingesting mounds of data that have been appropriately labelled and structured. DeepMind's new system needs no such data. It learns to assess real-world phenomena on its own, such as how an object's apparent size changes with distance, how a scene is laid out, and so on.

“It was not at all clear that a neural network could ever learn to create images in such a precise and controlled manner,” said Ali Eslami, lead author of the paper. He went on to add, “However, we found that sufficiently deep networks can learn about perspective, occlusion and lighting, without any human engineering. This was a super surprising finding.”