
Video Shows How A.I.-Generated Art Can Both Mesmerize and Haunt Your Dreams

by Danny Paez

Earlier this month, the auction house Christie's sold what it says is the first piece of algorithmically generated art ever sold by a major auction house. The price tag of nearly half a million U.S. dollars has raised a number of questions about the nature of authorship, the novelty-obsessed art market, and, perhaps most importantly: why?

And yet the efforts underway to teach machines about art, or more precisely about images, are hardly a publicity stunt. From detecting deceptive videos to retroactively changing the cast of a movie, computer scientists have a number of practical reasons to teach machines how to better engage with the visual world.

Daniel Heiss is one such technology enthusiast. A creative developer at the ZKM Center for Art and Media, he was an early adopter of a neural network published by NVIDIA researchers in April, built to generate pictures of imaginary celebrities after training on thousands of photos of real ones. The network inspired Heiss to plug in 50,000 photobooth images collected by one of ZKM's interactive art installations to see what kind of art the A.I. would produce. In an online interview, he tells Inverse the results were better than he ever imagined.

“I saw the crazy warping of one face images into three face images into two face images and so on. That was much better than I ever thought,” he said. “I even tried to filter the images so that only images with one face are used, but while I was working on that the samples generated from the unfiltered dataset came out so good that I stop that.”

Heiss’ video has since garnered more than 23,000 upvotes on Reddit. He originally tweeted the footage seen above on November 4, in response to another trippy use of NVIDIA’s algorithm by programmer Gene Kogan. Instead of feeding the neural network selfies, Kogan used roughly 80,000 paintings.

Kogan was also blown away by the A.I.'s ability to create frames that resembled distinct styles instead of mashing everything together.

“I was surprised by its ability to memorize so many different aesthetics without getting too jumbled,” he tells Inverse. “I think that’s the effect of having several hundred million parameters to toy with.”

How We Teach A.I. to Make Its Own Pictures

The NVIDIA research team, led by Tero Karras, made use of a generative adversarial network, or GAN, a technique first proposed by the esteemed computer scientist Ian Goodfellow in 2014. It works differently from Google's DeepDream tool, which made waves in the field and online by amplifying the patterns an image-recognition network already sees in a picture.

A GAN consists of two networks: a generator and a discriminator. These programs compete against each other millions upon millions of times, refining their image-generating skills until they're good enough to create what have since become known as deepfakes.

The generator starts from random noise and tries to synthesize images that resemble the training photos as closely as possible. Real photos and generated ones are then shown to the discriminator, whose job is to tell them apart. The more trials conducted, the better the generator gets at synthesizing images and the better the discriminator becomes at catching fakes. The result is some pretty convincing, but completely fake, faces and paintings.
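To make that tug-of-war concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is an illustration only, not NVIDIA's progressive-growing network: the two models are tiny fully connected networks, the batch of "real" photos is a random stand-in tensor, and every size and name is assumed for the example.

```python
# Minimal GAN training loop (illustrative sketch, assumed dimensions).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # noise size and flattened image size

# Generator: maps random noise to a synthetic "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

batch_size = 32
for step in range(1000):  # real training runs vastly longer
    # Stand-in for a batch of real photos (e.g. celebrity faces).
    real_images = torch.randn(batch_size, image_dim)

    # Train the discriminator: label real images 1, generated fakes 0.
    noise = torch.randn(batch_size, latent_dim)
    fake_images = generator(noise).detach()  # don't update G on this step
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch_size, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(batch_size, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator call fakes real.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(batch_size, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The telling design choice is that the generator never sees the real photos directly; it improves only through the discriminator's feedback, which is what drives the arms race the two networks play out.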

How This Tech Can Help Artists

A.I. has already made a name for itself in the art world. In addition to the computer-generated portrait that sold at Christie's, DeepDream has been making trippy landscapes since before deepfakes were a thing.

Heiss believes the machine learning tools being created today are ripe for use by artists, but wielding them still requires technical prowess. That's why ZKM hosts its Open Codes exhibit, which aims to inspire more collaboration between the tech and creative sectors.

“Tools that are now emerging can be very useful tools for artists but it's hard for an artist without any knowledge of programming and system administration skills to use them,” he said. “This connection between science and art can lead to great things, but it needs collaboration in both directions.”

Google DeepDream reimagines Michelangelo: "The Creation of Adam" altered with a Dreamscope DeepDream filter.

Early iterations of A.I., like GANs, can soak up millions upon millions of data points, surfacing patterns and even images humans could never come up with on their own. Their creative vision, however, is still limited by the raw data humans choose to feed them.

With a sharp eye for aesthetics and coding skills to match, the A.I.-using artists of the future might use machine learning to jumpstart a whole new age of creativity or breathe life into older styles of art. But it will take a lot of data to teach the machines to better mimic human ingenuity, and human judgment to take what the computer spits out a step further.
