Bob Ross Plus Google’s DeepDream A.I. Is Utterly Terrifying

by Graham Templeton

In 2015, Google released a “tool” called DeepDream that let neural networks loose on innocent imagery, with truly terrifying results. Though it was hailed as a window into the secret experiences of A.I., in reality, DeepDream was a demonstration of just how primitive the mind of modern A.I. really is. But could even an artificial intelligence be alien enough to pervert the peaceful, soothing imagery of the great Bob Ross?

A new video titled “Deeply Artificial Trees,” which applies DeepDream to every frame of a Bob Ross video, proves that the answer to this question is, well, yes. Good grief, yes. Further, it even applies the technique to the audio — making the experience truly nightmarish.

DeepDream works by running an image through a trained neural network, taking the network’s best guess at the objects the basic shapes in the image represent, then nudging the pixels so that guess becomes slightly more true, and trying again. Over many iterations, the image actually becomes what the A.I. perceived it to be, in a sort of self-reinforcing downward spiral toward freakish pictures from the world’s worst acid trip. Take a look at the video (if you dare), and check out our explanation below.
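For the technically curious, that whole loop fits in a few lines of code. The sketch below is a minimal approximation in PyTorch, not Google’s actual implementation: the pretrained GoogLeNet, the choice of layer, the step size, and the file names are all assumptions for illustration, and real DeepDream code adds input normalization and multi-scale “octave” passes on top of this.

```python
# Minimal DeepDream-style loop (illustrative sketch, not Google's code):
# amplify whatever a pretrained network already "sees" in an image, repeat.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()

# Capture activations from one mid-level layer with a forward hook.
# (inception4c is an arbitrary choice; different layers dream differently.)
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(out=output)
)

# Hypothetical input file; any RGB image works.
img = T.Compose([T.Resize(384), T.ToTensor()])(
    Image.open("bob_ross_frame.jpg").convert("RGB")
)
img = img.unsqueeze(0).requires_grad_(True)

for _ in range(20):
    model(img)
    # "Reinforce the guess": gradient ascent on the layer's activation energy.
    activations["out"].norm().backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)  # keep pixels in a valid range

T.ToPILImage()(img.squeeze(0).detach()).save("dreamed_frame.jpg")
```

Each pass makes the image look a little more like whatever the network thought it saw, which is exactly the feedback loop that turns happy little trees into writhing, dog-faced nightmares.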

“Deeply Artificial Trees” was designed to highlight “the unreasonable effectiveness and strange inner workings of deep learning systems. The unique characteristics of the human voice [are] learned and generated as well as hallucinations of a system trying to find images which are not there.”

Researchers think DeepDream’s system of “guess at the image, reinforce that guess, then guess again” is roughly similar to the first steps of human visual perception in the brain. But the A.I. version gets off track because a neural network doesn’t receive any direction or correction from a higher-level organizer, the way the visual system does from the rest of the human brain. Without some evolved wisdom to go with the raw intellect of the computer, every roundish shape could, over many iterations, get categorized as an eye, even if it’s nowhere near where eyes should logically go in the real world. The network produces animals and other specific types of imagery because that’s what was in the pictures Google used to train it — like a human being, it can only draw from what it knows (and apparently, it’s seen a lot of dogs and cars).
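That training-data bias is easy to demonstrate: point the same gradient-ascent trick at a single output category instead of a whole layer, and the network hallucinates that category on demand. Another hedged sketch, again assuming torchvision’s pretrained GoogLeNet; the class index (207 is “golden retriever” in ImageNet) and the settings are illustrative.

```python
# Steer the dreaming toward one trained category: climb a single class logit
# instead of a whole layer's activations (an illustrative sketch).
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

for _ in range(100):
    logits = model(img)
    logits[0, 207].backward()  # reinforce only the "golden retriever" guess
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)

# Dog-like eyes and fur emerge from pure noise, because dog breeds make up
# a big slice of the ImageNet training set.
```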

One of DeepDream's earliest renderings. Seems like a nice spot for a vacation!

Without enough information about the context of a scene, algorithms can slowly turn details like clouds into creepy mutants on unicycles. Funnily enough, that sort of chaotic pattern-matching is also thought to be part of the cause of human hallucinations on certain drugs. These substances are believed to, among other things, moderately decouple the brain’s sensory regions from its logical faculties. In some ways, a person tripping balls on acid is actually seeing a more baseline, raw level of image perception than the logically corrected version of reality they would see while sober.

That being said, we really don’t recommend getting high before trying out DeepDream.
