Apple’s New Research Will Let A.I. Explore Virtual Worlds
Apple has published its first academic research paper on artificial intelligence, and it’s going to help A.I. get smart by exploring virtual worlds. The breakthrough is all about making it easier to train computers to recognize the contents of a photo. Publishing research is a new approach for Apple, and it could help improve A.I. services like Siri that struggle compared to their competitors.
Starting with iOS 10, Apple scans your iPhone’s images to make them easier to find, without using tags. Searching “dog,” for example, will bring up all your pictures of dogs. There’s no manual organization needed: the technology looks at every picture and understands what it shows. This new research is about making that system better by training it in a lab on large numbers of computer-generated pictures.
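Apple hasn’t detailed how the Photos feature works internally, but the general recipe is well known: run every picture through a trained image classifier and index the predicted labels. Here’s a minimal sketch of that idea in Python using an off-the-shelf pretrained model; the model choice, label set, and filenames are illustrative assumptions, not Apple’s actual system.

```python
import torch
from torchvision import models
from PIL import Image

# Off-the-shelf ImageNet classifier as a stand-in for an on-device model.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

def top_label(path):
    """Return the single most likely label for one photo."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(image)
    return weights.meta["categories"][logits.argmax().item()]

# A tag-free "search" is then just a filter over predicted labels.
# (Real systems map fine-grained labels like "golden retriever"
# to broader concepts; this substring check is a simplification.)
photos = ["IMG_0001.jpg", "IMG_0002.jpg"]  # hypothetical filenames
dog_photos = [p for p in photos if "dog" in top_label(p).lower()]
```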
The paper, titled “Learning from Simulated and Unsupervised Images through Adversarial Training,” was published last Thursday and represents Apple’s new approach toward openness and collaboration in A.I. research. Less than a month ago, the company’s director of A.I. research revealed that Apple would soon allow its engineers to publish papers, a move expected to help the company engage with the wider research community and attract A.I. talent.
The new research means the computer can show the A.I. a simulated picture, find out what it recognizes, and compare that against what’s actually in the scene. Because the picture is computer-generated, the computer already knows what’s in it: the system expects the A.I. to recognize a dog because a dog model has been placed in the virtual scene.
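The paper itself works with images of eyes and hands rather than dogs, but the training loop the article describes is standard supervised learning with labels that come free with the simulation. Below is a hedged sketch of that loop: `render_scene` is a hypothetical simulator standing in for the virtual world, and the tiny classifier is a placeholder for a real vision model.

```python
import torch
import torch.nn as nn

# Stand-in for a real vision model.
classifier = nn.Sequential(
    nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 10)
)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def render_scene(label: int) -> torch.Tensor:
    """Hypothetical renderer: place object `label` in a virtual scene."""
    return torch.rand(3, 64, 64)  # placeholder pixels

for step in range(1000):
    label = torch.randint(0, 10, (1,))           # we chose what's in the scene...
    image = render_scene(label.item()).unsqueeze(0)
    prediction = classifier(image)               # ...so we can grade the A.I.'s answer
    loss = loss_fn(prediction, label)            # distance from the known truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```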
This is not really new: Intel Labs is using Grand Theft Auto to train self-driving cars for similar reasons. The game already contains trees, pedestrians, and other objects along the streets the player drives down. The team can place a self-driving car A.I. in the virtual world and see how its interpretation of the scene compares with what the computer knows it should see.
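The scoring step in that setup is easy to picture: the game engine knows the true class of every pixel it drew, so the car’s perception can be graded directly against it. Below is a minimal sketch of one common comparison, intersection-over-union between predicted and true label maps; the class count, image size, and random data are made up for illustration.

```python
import torch

def iou_per_class(pred: torch.Tensor, truth: torch.Tensor, num_classes: int):
    """Intersection-over-union of predicted vs. ground-truth label maps."""
    scores = {}
    for c in range(num_classes):
        inter = ((pred == c) & (truth == c)).sum().item()
        union = ((pred == c) | (truth == c)).sum().item()
        if union:
            scores[c] = inter / union
    return scores

truth = torch.randint(0, 3, (256, 256))  # ground truth from the game engine
pred = torch.randint(0, 3, (256, 256))   # the A.I.'s interpretation of the frame
print(iou_per_class(pred, truth, num_classes=3))
```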
Apple’s doing something slightly different, because this approach has a couple of flaws. First off, simulated images aren’t real images, so it’s possible that when everyone unwraps their shiny new iPhone, the A.I. is flummoxed because it’s seeing a real tree for the first time. Apple’s solution? Make the simulated images more realistic. It’s all about using Generative Adversarial Networks (GANs) to improve the virtual images, a complicated bit of A.I. wizardry in which neural networks play a game against each other to “learn” how to make images as realistic as possible. The paper reports that an A.I. trained on these refined images performs significantly better than one trained on the raw simulated images.
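In the paper’s setup, one network (a “refiner”) adjusts the simulated image while the other (a discriminator) tries to tell refined images from real photos; a self-regularization term keeps each refined image close to its simulated original so its known label stays valid. The sketch below captures that game in simplified form; the tiny networks, random stand-in data, and loss weight are placeholder assumptions, not the paper’s actual architecture.

```python
import torch
import torch.nn as nn

# Refiner: maps a simulated image to a (hopefully) more realistic one.
refiner = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
# Discriminator: guesses whether an image is a real photo or a refined fake.
# 31x31 is the feature-map size after one stride-2 conv on a 64x64 input.
discriminator = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                              nn.Flatten(), nn.Linear(16 * 31 * 31, 1))
opt_r = torch.optim.Adam(refiner.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    synthetic = torch.rand(8, 3, 64, 64)  # stand-in for simulator output
    real = torch.rand(8, 3, 64, 64)       # stand-in for unlabeled real photos

    # Refiner's turn: fool the discriminator, but stay close to the input
    # so the simulated image's known label remains correct.
    refined = refiner(synthetic)
    fool = bce(discriminator(refined), torch.ones(8, 1))
    stay_close = (refined - synthetic).abs().mean()
    loss_r = fool + 0.1 * stay_close  # 0.1 is an illustrative weight
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # Discriminator's turn: real photos are "1", refined fakes are "0".
    loss_d = bce(discriminator(real), torch.ones(8, 1)) + \
             bce(discriminator(refined.detach()), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```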
Until now, Apple has done very little to collaborate with academia compared to rivals like Amazon. When the wider community made a discovery or advanced the field, Apple’s go-it-alone approach made those advances easier to sideline or miss entirely. Siri, the voice-activated assistant Apple acquired in 2010, has floundered compared to the likes of Alexa and Cortana. A change of tack could mean future versions of Siri stop getting the most basic requests wrong.
Hopefully, Apple’s breakthrough with image generation is the first step to putting the smart back in smartphone.