
A Radical New Theory Could Change the Way We Build A.I.

One A.I. scientist wants to ditch the metaphor of the brain and think smaller and more basic.

by Neel V. Patel

From early on, we’re taught that intelligence is inextricably tied to the brain. Brainpower is an informal synonym for intelligence — and by extension, any discussion of aptitude and acumen uses the brain as a metaphor. Naturally, when technology progressed to the point where humans decided they wanted to replicate human intelligence in machines, the goal was to essentially emulate the brain in an artificial capacity.

What if that’s the wrong approach? What if all this talk about creating “neural networks” and robotic brains is actually a misguided approach? What if, when it comes to advancing A.I., we ditched the metaphor of the brain in favor of something much smaller: the cell?

This counter-intuitive approach is the work of Ben Medlock, who’s not your average A.I. researcher. As co-founder of SwiftKey, a company that builds machine-learning-powered smartphone keyboard apps, he spends his day job figuring out how A.I. systems can augment many of the standard tools we already use on our gadgets.

But Medlock moonlights as something of an A.I. philosopher. His ideas stretch beyond how to slash a few seconds from texting. He wants to push forward what essentially amounts to a paradigm shift in the field of A.I. research and development — as well as how we define intelligence.

“I lead this kind of double life,” says Medlock. “My work with SwiftKey has all been around how you take A.I. and make it practical. That’s my day job in some ways.

“But,” he says, “I also spend quite a bit of time thinking about the philosophical implications of development in A.I., and intelligence is something that is very, very much a human asset.”

This sort of thinking brought him to the building block of human life, the cell.

“I think the place to start, actually, is with the eukaryotic cell,” he says. Instead of treating A.I. as an artificial brain, he argues, we should think of the human body as an “incredible machine.”

Typically, A.I. scientists prefer the brain as the model for intelligence. That’s why certain machine-learning approaches are described with terms like “neural networks.” These systems don’t actually contain anything that takes in and processes information the way neurons and neurological structures do, yet “neural network” conveys a complexity that’s akin to the human brain.
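To see how thin the metaphor is, here is a minimal sketch of what a “neural network” amounts to in software: layers of weighted sums and simple nonlinearities. The layer sizes and inputs below are made up purely for illustration.

```python
import numpy as np

def layer(x, weights, bias):
    """One 'layer' of artificial neurons: a weighted sum followed by a
    simple nonlinearity (ReLU). That is roughly the whole biological analogy."""
    return np.maximum(0.0, weights @ x + bias)

rng = np.random.default_rng(0)

# A toy two-layer "network": 4 inputs -> 8 hidden units -> 2 outputs.
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

x = rng.normal(size=4)                      # a stand-in for sensory input
output = layer(layer(x, w1, b1), w2, b2)    # two numbers, e.g. scores for two categories
print(output)
```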

The metaphor of a neural system is what Medlock wants to tear down, to a certain extent. “If you’re in the field of A.I., you know that actually there’s a chasm between where we are now and anything that looks like human level intelligence,” he says.

Right now, A.I. researchers are trying to model reasoning and independent decision-making in machines by taking an individual task, breaking it down into smaller steps, and training a machine to accomplish that task, step-by-step. The more these machines learn how to identify certain patterns and execute certain actions, the “smarter” we perceive them to be. It’s a focus on problem-solving.
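As a rough sketch of that problem-solving recipe, consider the toy example below, which uses the scikit-learn library on an invented task: reduce the task to labeled examples, train a model on them, and judge the result by how often it spots the pattern.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Step 1: reduce the task to labeled examples.
# Toy task: is the sum of three sensor readings positive?
X = rng.normal(size=(1000, 3))
y = (X.sum(axis=1) > 0).astype(int)

# Step 2: train a model on those examples.
model = LogisticRegression().fit(X[:800], y[:800])

# Step 3: the better it identifies the pattern on unseen data, the "smarter"
# it looks -- on this one narrow task and no other.
print("accuracy:", model.score(X[800:], y[800:]))
```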

But Medlock says this isn’t how humans operate — tasks aren’t processed and completed in such a neat approach. “If you start to look at human intelligence, or organic biological intelligence, it’s actually a mistake to start with the brain,” he says.

“Cells are information-processing machines”

“Cells are much more like mini information-processing machines with quite a bit of flexibility. And they’re networked so they’re able to communicate with other cells in populations.” One might say the human body is made up of 37.2 trillion individual machines.

Medlock digs deeper into this idea, using the biological process of DNA replication to make his point. The traditional model of evolution has assumed that life advances thanks to mutations in the genetic code: copying mistakes that inadvertently lead to adaptations, which then get passed down.

But that mutation-based model of evolution has been shifting of late, thanks to what geneticists are learning about the replication process. Evolution, on this view, is not as accidental, or as driven by random copying errors, as we once thought.

“The cellular machinery that copies DNA is way too accurate,” says Medlock; it makes only about one mistake for every 4 billion letters of DNA it copies.

Here’s where the A.I. part comes in: a series of proofreading mechanisms irons out copying mistakes in DNA, and cells possess tools for actively modifying their own DNA to adapt to changing conditions, a process the University of Chicago biologist James Shapiro, in his landmark 1992 study, called “natural genetic engineering.”

“Intelligence is not the ability to play chess”

“It comes back, I think, to what intelligence actually is,” reasons Medlock. “Intelligence is not the ability to play chess or to understand speech. More generally, it’s the ability to process data from the environment and then act in the environment. The cell really is the start of intelligence, of all organic intelligence, and it’s very much a data processing machinery.”

This organic intelligence, he says, gives the conscious organism an embodied model of the world. “The data that’s coming in [through the senses] only really matters at the point where it violates something in the model that I’m already predicting.”
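A rough sketch of that idea in code, with everything about the “model” invented for illustration: the agent keeps a running prediction of its environment and only updates when an observation is surprising enough to violate what it already expects.

```python
class PredictiveAgent:
    """Toy illustration of prediction-driven processing: the agent holds a
    simple internal estimate of a sensed quantity and only revises that
    estimate when an observation violates it by more than its tolerance."""

    def __init__(self, expected=0.0, tolerance=1.0, learning_rate=0.5):
        self.expected = expected        # the agent's current model of the world
        self.tolerance = tolerance      # how large a violation has to be to matter
        self.learning_rate = learning_rate

    def sense(self, observation):
        error = observation - self.expected
        if abs(error) <= self.tolerance:
            return "ignored"            # data matches the prediction; nothing to do
        self.expected += self.learning_rate * error
        return "model updated"

agent = PredictiveAgent(expected=20.0, tolerance=1.0)   # e.g. expecting about 20 degrees
for reading in [20.3, 19.8, 27.5, 26.9]:
    print(reading, agent.sense(reading), round(agent.expected, 2))
```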

Medlock’s point is that if the goal is to create machines as intelligent and adaptable as human beings, we should build A.I. systems with this kind of embodied model of the world, because that is what gives humans their power and flexibility.

Of course, that raises a bigger question of whether this is what we want out of A.I. “We can keep focusing on the problem-solving approach,” Medlock says, if we’d prefer to see our A.I. focus on executing specific tasks and fulfilling narrow goals.

But Medlock argues that there is probably a limit to this approach. The brain model is useful for developing A.I. that is in charge of one or a few things, but it walls such systems off from a higher stratum of creativity and innovation that feels far less bounded. It’s perhaps the difference between the first panel and the fourth panel of the infamous “expanding brain” meme.

“With our current approaches — deep learning, artificial neural networks, and everything else — we’re going to start to hit barriers,” he says. “I think we won’t need to then go back to sort of trying to simulate the way organic intelligence has evolved, but it’s a really interesting question as to what we do do.”

Medlock doesn’t have a clear answer on how to apply his theory that A.I. should be thought of as a cell, not a brain. He acknowledges that the idea is, for now, an abstract exercise. A.I. developers may choose to run with the cell as the appropriate metaphor, but how that might tangibly manifest in the short or long term is entirely a matter of speculation. Medlock has a few thoughts, though.

For one, the whole bodies of these machines would need to be information processors. Although they could be connected to the cloud, they would have to be able to absorb and analyze information in the physical world on their own, independent of a larger server accessed wirelessly. “I don’t believe that we will be able to grow intelligence that doesn’t live in the real world,” he says, “because the complexity of the real world is certainly what spawns organic intelligence.” So these A.I. would need their own physical bodies, fitted with sensors of all kinds.

Second, they need to be mobile. “To be able to have an intelligence that has human-level flexibility, or even animal-level flexibility, it feels like you need to be able to roam,” he says. Interacting with the world, and all its parts, is paramount to simulating human-level cognition. “Movement is key.”

The last major cog is self-awareness: the machine has to have an understanding of its own self, and of its separation from the rest of the world. That’s an incredibly large obstacle, not least because we’re nowhere near certain how self-awareness arises in humans. But if we ever manage to pinpoint how it occurs in the organic mind, we could perhaps emulate it in the artificial one as well.
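Pulled together, those three requirements suggest something like the control loop sketched below. Every interface here is hypothetical, invented only to show the shape of the idea: a body that senses the world directly, the ability to move through it, and a minimal model of the agent’s own state kept separate from its model of the world.

```python
from dataclasses import dataclass, field
import random

@dataclass
class EmbodiedAgent:
    """Hypothetical sketch of an embodied, mobile, minimally self-aware agent."""
    position: float = 0.0
    world_model: dict = field(default_factory=dict)                      # beliefs about the surroundings
    self_model: dict = field(default_factory=lambda: {"battery": 1.0})   # beliefs about itself

    def sense(self):
        # Stand-in for onboard sensors; no cloud round-trip required.
        return {"obstacle_ahead": random.random() < 0.3}

    def act(self, reading):
        # Movement is key: the agent roams unless its senses say otherwise.
        self.position += -1.0 if reading["obstacle_ahead"] else 1.0
        self.self_model["battery"] -= 0.01   # self-awareness, in the most minimal sense
        self.world_model.update(reading)

agent = EmbodiedAgent()
for _ in range(5):
    agent.act(agent.sense())
print(agent.position, agent.self_model, agent.world_model)
```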

Although it’s an idea that takes A.I. to a new level of sci-fi imagination, it’s not totally strange. Medlock suggests looking at the self-driving car. It’s a rudimentary machine right now, fitted with a set of optical sensors and a few others to detect physical impacts, but that’s about it. What if it were instead covered in a nanomaterial that could detect even minor physical touch, absorb sensory information of all kinds, and then act on that information? Suddenly, an object shaped like a car is capable of doing a hell of a lot more than simply ferrying people back and forth.

Moreover, all of this should be good news for anyone who fears a Skynet-like robot insurrection. Medlock’s idea basically precludes the notion that A.I. should operate as an interconnected hive-mind. Instead, each machine would work as a discrete self, with its own experiences, memories, decision-making methods, and choices for how to act. Like humans.

Beyond technical constraints, there’s another major hurdle for what Medlock is advocating: the question of ethics. In remodeling the metaphors we use to approach A.I., he’s also suggesting that A.I. development shift away from alleviating specific problems and toward the goal of essentially creating a sentient person made of metal and wire.

“I do think there are some arguments to say, from an ethical perspective, maybe we should avoid [building human-level systems],” he says. “However, in practice, we’re driven by problem-solving, and we just keep chipping away at problems, and we see where it takes us. And, hopefully, as we’re progressing, we’re open and we have the kind of conversations about what this means for regulatory systems, for legal systems, for justice systems, human rights, etc.”

Ultimately, Medlock is both hindered and freed by the fact that his ideas are far from showing up in real, present-day development and testing. It could be a long time, if ever, before the A.I. community embraces the cell as the inspiration for future intelligent systems, but in the meantime Medlock has plenty of time to sharpen the idea and to play an influential role in whether, and how, it gets adopted.
