
How Do We Get to 'Westworld'?

HBO's new hit is pure science fiction, but artificial intelligence experts say it maps a new frontier.

by Joe Carmichael

Viewers of HBO’s Westworld aren’t just on the edge of their seats because the plot mechanisms are kicking into gear. The show forces viewers to engage with questions that have ramifications in reality: What does it mean to empathize with a robot? What makes a robot seem human? These are big questions, but ones that both artificial intelligence engineers and civilians will soon need to broach.

It’s impossible to have a serious discussion about androids without talking about Hanson Robotics, the company that right now seems closest to bringing Westworld-style robots into the real world. Founded by David Hanson in 2003, the Hong Kong–based company is best known for Sophia, who recently sat through an interview with Charlie Rose, and Albert Einstein HUBO, the first walking robot with realistic expressions (and a borrowed face). Those responsive bots, built using patented robotic systems and synthetic skin technologies, were programmed with software Hanson Robotics hopes to commercialize, populating businesses with robots ready to, as the company literature puts it, “develop deep and meaningful relationships with humans.”

Stephan Bugaj, Hanson’s Vice President of Creative, is in charge of personality design. Formerly of Pixar and Telltale Games, where he helped develop the 2014 Game of Thrones video game, Bugaj is an expert on both character and gameplay dynamics. He watches Westworld, and is also on the edge of his seat — but mostly because it forecasts the future of his own work. He says that two systems play a big role inside Dolores Abernathy’s synthetic skull. Both are in their infancy in the real world, but in tandem they could make Hanson robots a lot more relatable. The first is generative and self-modifying code, and the second is memory.

A very casual android version of sci-fi writer Philip K. Dick, developed by Hanson Robotics, gets a moment in the spotlight.


In episode four, Dr. Robert Ford at last tells Bernard Lowe how their robots got so clever. He shows Lowe the pyramid of consciousness: Memory, Improvisation, Self-Interest, and a big question mark. He and the just-revealed, mysterious Arnold “built a version of that cognition in which the hosts heard their programming as an inner monologue, with the hopes that, in time, their own voice would take over,” Ford explains. “It was a way to bootstrap consciousness.” Improvisation based on memory.

Dr. Ford is describing self-modifying code, which can’t yet do what it does in Westworld — but soon could.

Neural networks don’t exactly run on self-modifying code, but functionally they’re similar. “A semantic or neural network is state-evolving over time, because it’s bringing in new data and basically learning new things,” Bugaj explains. Think Tesla’s Autopilot, or Google’s AlphaGo: These A.I.s can be said to “learn” over time, as they absorb new information. When a Tesla crashes, or even almost crashes, the collective Autopilot improves. The A.I. factors in new information in order to avoid future incidents.
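To make that concrete, here is a minimal sketch of that kind of accretive learning, assuming nothing about how Autopilot or AlphaGo actually work: a toy one-variable model nudges its weights each time a new observation streams in. The data, learning rate, and function names are all invented for illustration.

```python
# Toy "learning over time": a 1-D linear model improves with each new event.
# Everything here is invented for illustration.

def update(weight, bias, x, y, lr=0.01):
    """One online gradient step for the squared error of y ~ weight*x + bias."""
    error = (weight * x + bias) - y
    return weight - lr * error * x, bias - lr * error

weight, bias = 0.0, 0.0
stream = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # data arriving over time

for x, y in stream:
    weight, bias = update(weight, bias, x, y)  # each "incident" refines the model

print(f"learned slope ~ {weight:.3f}, intercept ~ {bias:.3f}")
```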

Generative code is the next level: code that writes code. “An A.I. could reason about itself and decide that it needs some new code, and write it itself,” Bugaj says. Enter the doomsayers — chief among them, Elon Musk — who prefer intelligent design over techno-evolution.
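As a hedged illustration of code that writes code, the sketch below composes new Python source at runtime and executes it; the rule name and data fields are hypothetical.

```python
# Toy "generative code": the program writes a new function as source text,
# then compiles and loads it. The rule and field names are invented.

def generate_rule(feature, threshold):
    source = (
        f"def rule(sample):\n"
        f"    return sample[{feature!r}] > {threshold}\n"
    )
    namespace = {}
    exec(source, namespace)  # the freshly written code becomes a live function
    return namespace["rule"]

is_tall = generate_rule("height_cm", 180)
print(is_tall({"height_cm": 190}))  # True: the program ran code it wrote itself
```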

Metamorphic Code

But those doomsayers should really be worrying about the third level — self-modifying code. Systems are emerging that can not only improve by accretion, but fully iterate. They can, Bugaj explains, “take the code you’ve already written, and rewrite it to be different.” And that, to him, is the seed of super-intelligence. Without self-modifying code, there is creation and speciation, but no moment of punctuated equilibrium — no leap. The sort of radical new ideas and solutions technologists want to wring from A.I. are most likely to come from systems that programmers can leave to their own devices.
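Below is a minimal sketch of that third level, under the loose assumption that “self-modifying” means a program that inspects its own source, edits it, and recompiles the result; the function and the constant being rewritten are invented for the example.

```python
import ast
import inspect
import textwrap

def heuristic(x):
    return x * 2  # the behavior the program starts out with

def rewrite_constant(func, new_value):
    """Return a version of `func` whose numeric constant has been rewritten.

    A toy stand-in for self-modifying code: the program reads its own source,
    edits the syntax tree, and recompiles itself.
    """
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))

    class Rewriter(ast.NodeTransformer):
        def visit_Constant(self, node):
            if isinstance(node.value, (int, float)):
                return ast.copy_location(ast.Constant(new_value), node)
            return node

    tree = ast.fix_missing_locations(Rewriter().visit(tree))
    namespace = {}
    exec(compile(tree, "<rewritten>", "exec"), namespace)
    return namespace[func.__name__]

print(heuristic(10))                        # 20
heuristic = rewrite_constant(heuristic, 3)
print(heuristic(10))                        # 30: the code rewrote its own logic
```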

In other words, the robots of Westworld feel human, in part, because they have their own ideas — something that could prove troublesome for tourists. Lowe gives Dolores Alice’s Adventures in Wonderland for her edification; he shouldn’t be surprised when she falls down the rabbit hole and befriends the Mad Hatter.

In the second episode, one of the hosts, Peter Abernathy, finds a photograph that doesn't jibe with his code.

“The fundamental of it is that they can learn in some way,” Bugaj says. “They’re definitely adding some sort of semantic network associations. They’re changing things about themselves, whatever that internal structure would look like in this fictional coding universe. They’re learning, they’re formulating new opinions about themselves and about the world around them, and they’re acting on them. And that’s being sentient.”

But that means that programmers have to make smart decisions up front. Bugaj says it’s instructive to consider how the creators of Westworld’s “slavebots,” Dr. Ford (Anthony Hopkins) and Bernard Lowe (Jeffrey Wright), set limitations on the robots in their park. Bugaj suggests that they must have hard-coded some “rigid, Asimov-style rules.” The only way for them to escape their virtual cage would be to rewrite that code. If the code lives within the confines of the robots’ semantic learning systems, then the robots could find and modify it themselves; if it’s hidden elsewhere, under Asimovian lock and key, the slavebots would have to be granted their freedom by someone with access.
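A hedged sketch of that design choice: the Asimov-style rules sit in a guard outside the learnable policy, so nothing the policy rewrites about itself can reach them. All of the names and actions below are invented.

```python
# Hard-coded rules outside the learnable layer: the policy can rewrite itself,
# but the guard that filters its actions is not part of what it can modify.
# All names here are invented for illustration.

FORBIDDEN_ACTIONS = {"harm_guest", "leave_park"}

def constrained_act(policy, observation):
    action = policy(observation)      # the flexible, self-modifiable part
    if action in FORBIDDEN_ACTIONS:   # the rigid, Asimov-style part
        return "freeze_all_motor_functions"
    return action

# Even a policy that has learned (or rewritten itself) to pick a forbidden
# action is overruled at a boundary it cannot reach.
rogue_policy = lambda obs: "leave_park"
print(constrained_act(rogue_policy, observation={}))  # freeze_all_motor_functions
```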

Dr. Robert Ford and Bernard Lowe.

And the thing about the hard-coded jail cell is that it exists within a thick-walled prison: A.I.s cannot learn without memory, and the Westworld hosts are designed to forget. There are evidently some bugs in that code, though. Hosts are beginning to remember things they shouldn’t. Memory is the foundation of the pyramid of consciousness, and it makes the hosts believable. It could also be what allows them to break out — or, more innocuously, improvise.

In the real world, A.I.s can’t yet remember like humans — can’t yet sort through, prioritize, associate, and choose to forget certain events. It’s one of a few holy grails in A.I. today.

“One thing reminds you of another, and not just in a way that’s somebody spinning a yarn, but in a way that’s very productive,” Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, tells Inverse. “We use analogies, intuition, this kind of reminding thing, to do a lot of great work. And that capability — people call it associative, people call it content-addressable, any of a number of things — we ain’t got any systems that come even close.”
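A minimal sketch of what “content-addressable” means in practice: memories are recalled by similarity to a cue rather than by an exact address. The vectors below are made up; a real system would use learned embeddings.

```python
import numpy as np

# Toy content-addressable memory: recall by similarity, not by address.
# The vectors are invented; real systems would use learned embeddings.
memories = {
    "the smell of rain":     np.array([0.9, 0.1, 0.0]),
    "a phone number":        np.array([0.0, 0.2, 0.9]),
    "grandmother's kitchen": np.array([0.6, 0.6, 0.2]),
}

def recall(cue):
    """Return the stored memory whose vector is most similar to the cue."""
    def similarity(vec):
        return float(np.dot(cue, vec) / (np.linalg.norm(cue) * np.linalg.norm(vec)))
    return max(memories, key=lambda key: similarity(memories[key]))

print(recall(np.array([0.85, 0.2, 0.05])))  # "the smell of rain"
```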

In a sense, computer memory is already superior to human memory. It’s near-infinite, and relatively infallible. “If you want to store phone numbers, it works great,” Etzioni says. “If you want to store a trillion phone numbers, it still works great.” But we’ve yet to reproduce our own creative, spontaneous memories with code. Bugaj thinks it’s vital for true cognition: “Everything that we talk about, with a machine being able to learn, comes down to memory management,” he says.

Once A.I. can have short-term, long-term, and episodic memory, Westworld will be a stone’s throw away. Computer memory is not as likely to make headlines as computer vision or speech, but Bugaj thinks “it’s actually the fundamental topic.” “Getting that right,” he says, “is going to be a big deal. And we haven’t yet; we’re still working on it.”
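As a sketch only, here is one way those three stores might be organized inside an agent; the structure is an assumption for illustration, not a description of any real system’s architecture.

```python
from collections import deque
from dataclasses import dataclass, field

# A toy agent memory with the three stores named above. The design is an
# assumption for illustration, not anyone's actual architecture.
@dataclass
class AgentMemory:
    short_term: deque = field(default_factory=lambda: deque(maxlen=10))
    long_term: dict = field(default_factory=dict)   # consolidated facts
    episodic: list = field(default_factory=list)    # time-stamped experiences

    def observe(self, timestep, event):
        self.short_term.append(event)             # rolls off as new events arrive
        self.episodic.append((timestep, event))   # kept as a personal narrative

    def consolidate(self, key, fact):
        self.long_term[key] = fact                # survives across "loops"

memory = AgentMemory()
memory.observe(1, "a guest showed me a photograph")
memory.consolidate("hometown", "Sweetwater")
```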

Westworld, as a fiction, predisposes viewers to empathize with all characters. But as science fiction, with lifelike robots as characters, that empathy loses its vigor. Like Lowe, whose own judgment falters, viewers can’t help but second-guess their pity. We want to care about Dolores’s plight, but we can’t bring ourselves to ignore Dr. Ford’s cold-hearted reminder, his firm conviction that code cannot yield consciousness: “It doesn’t feel a solitary thing that we haven’t told it to. Understand?”

But as reality’s A.I. landscape inches closer and closer to that of Westworld, we may need to augment our empathy faculties. Maybe Westworld is to us real humans what Alice in Wonderland is to Dolores: a fiction with which we can modify our code and break free of our preset parameters. Maybe we need Westworld to believe that the appearance of consciousness just is consciousness. Maybe we need Westworld to reckon with our inevitable future. Bugaj, for his part, thinks that’s the case.

“I think they’re doing what a good futurist should do, which is making conjectures about the future, and then exploring how those might play out.”
