Humans can do a lot with a little context. If you see a picture of a toilet, you’ll know it’s probably flanked by a bathtub and sink. The picture didn’t include any of that stuff, but the human brain has a knack for filling in missing pieces. And now, thanks to computer scientists at Google DeepMind, so does artificial intelligence.
In a paper published in Science Magazine in June, the company described how it created a Generative Query Network (GQN) that can see and imagine almost like a human. In an accompanying video, research scientist S. M. Ali Eslami explains how the algorithm used “something akin to imagination” to turn a few two-dimensional images of a virtual room into a fleshed-out, three-dimensional environment.
This is #1 on Inverse’s list of the 20 Ways A.I. Became More Human in 2018.
“Our ability to learn about the world by simply looking at it is simply incredible. One of the biggest open problems in A.I. is figuring out what is necessary to allow computers to do the same,” he told Inverse in a written statement. “In this work, we train a neural network to predict what a scene might look like from new viewpoints.”
This A.I. was brought to life using a two-part system. The “representation network” condenses the handful of sample images, along with the viewpoints they were captured from, into a compact numerical description of the scene. The “generation network” then uses that description to predict what the room looks like from viewpoints that weren’t in the initial images.
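To make that two-part idea concrete, here is a minimal, hypothetical sketch of the encoder-plus-generator pattern described above. It is not DeepMind’s actual implementation; the class names (SceneEncoder, ViewGenerator), the 7-number viewpoint format, and the tiny network sizes are illustrative assumptions, and the real GQN uses far more sophisticated convolutional and recurrent architectures described in the Science paper.

```python
# Hypothetical sketch of a GQN-style two-part system (not DeepMind's code).
import torch
import torch.nn as nn


class SceneEncoder(nn.Module):
    """"Representation network": turns an observed image plus its camera
    viewpoint into a compact scene-representation vector."""

    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Assumed 7-D viewpoint: camera position plus orientation.
        self.fc = nn.Linear(64 + 7, repr_dim)

    def forward(self, image, viewpoint):
        feats = self.conv(image)
        return self.fc(torch.cat([feats, viewpoint], dim=-1))


class ViewGenerator(nn.Module):
    """"Generation network": predicts an image of the scene from a new,
    unseen viewpoint, conditioned on the scene representation."""

    def __init__(self, repr_dim=256, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.fc = nn.Linear(repr_dim + 7, 3 * image_size * image_size)

    def forward(self, scene_repr, query_viewpoint):
        flat = self.fc(torch.cat([scene_repr, query_viewpoint], dim=-1))
        return torch.sigmoid(flat).view(-1, 3, self.image_size, self.image_size)


# Usage: sum the representations of a few observed views into one scene code,
# then "imagine" the room from a viewpoint the model never saw.
encoder, generator = SceneEncoder(), ViewGenerator()
images = torch.rand(3, 3, 64, 64)   # three observed 64x64 RGB views
viewpoints = torch.rand(3, 7)       # their camera positions/orientations
scene_repr = encoder(images, viewpoints).sum(dim=0, keepdim=True)
query = torch.rand(1, 7)            # a new, unseen viewpoint
predicted_view = generator(scene_repr, query)  # shape: (1, 3, 64, 64)
```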
Like some of the other A.I. breakthroughs in 2018, this DeepMind A.I.’s ability to extrapolate is a key step toward voice assistants that not only serve our needs, but anticipate them. Instead of having to command them, they’ll use context to know when to hand us a coffee or cook us a meal.
This was the landmark A.I. breakthrough of 2018 because it effectively made artificial intelligence significantly more intelligent across the board.