Science

MIT Scientists Design Artificial Synapse to Power Brain-Like Computer Chips

For once, analog is more powerful than digital.

by Alasdair Wilkins

A new era of computing just got closer: researchers have designed and run the first-ever practical test of an artificial synapse that could let computers replicate some of the brain’s most powerful and intricate functions.

While computers might seem more powerful than our brains, our brains can actually handle a much wider range of possible signals than the “on” and “off” of binary, thanks to the synapses that manage the connections between neurons.

Replicating that capability in a computer requires artificial synapses that can reliably send all those subtly different signals. As they describe in Monday’s issue of the journal Nature Materials, researchers at the Massachusetts Institute of Technology have performed what they call the first-ever practical test of such an artificial synapse, a key step toward what’s known as neuromorphic computing.

While the tests took place only in computer simulations, the results were promising. The researchers used the artificial synapse designs to recognize different handwriting samples. Their simulation came close to matching the accuracy of existing traditional algorithms, 95 versus 97 percent, which is an impressive starting point for tech in its absolute infancy.
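For a sense of what that benchmark looks like in software, here is a minimal sketch of a conventional handwriting classifier built on scikit-learn’s small digits dataset. It is not the team’s simulation, which modeled the synapse hardware itself; it just illustrates the kind of recognition task, and accuracy figure, being measured.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load 1,797 8x8 grayscale images of handwritten digits (0-9).
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small fully connected network, standing in for the "existing
# traditional algorithms" the accuracy comparison refers to.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print(f"Test accuracy: {clf.score(X_test, y_test):.1%}")  # typically mid-90s
```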

Traditional digital computers rely on binary signaling. A value of one means “on,” while a value of zero means “off.” Because computers can perform specific calculations much faster and more efficiently than we can, it’s easy to assume that this binary approach is better than what goes on in our brains.

But the analog setup of the 100 billion neurons inside each of our brains is arguably much more sophisticated. The 100 trillion synapses that manage the connections between those neurons don’t simply send on or off signals.

The different types and numbers of ions that flow across a given synapse determine how strong a signal it sends to a particular neuron, and that spectrum of possible messages means our brain can unlock a far greater variety of computations. If computers could add that kind of complexity to their already sizable toolkits, you would be looking at some seriously powerful machines, and they wouldn’t need to be giant, either.
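To see the difference in purely software terms, here is a toy sketch, our illustration rather than anything from the paper, of a single artificial neuron. Binary connections can only pass or block each input; analog connections, like synapses, can scale each input anywhere along a continuum.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
inputs = rng.random(8)  # signals arriving at one neuron

# Binary connections: each input is either passed (1) or blocked (0).
binary_weights = rng.integers(0, 2, size=8)

# Analog connections: each input is scaled by a continuous strength,
# the way ion flow tunes the strength of a biological synapse.
analog_weights = rng.random(8)

# The neuron responds to a weighted sum of its inputs. Analog weights
# allow a vastly larger space of possible input-output mappings.
print("binary activation:", inputs @ binary_weights)
print("analog activation:", inputs @ analog_weights)
```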

Here’s the problem: Nature has had a couple billion years to perfect the synapses in our brains and those of other species. Researchers have only been trying to create the synthetic equivalent for a few years, and there are some major stumbling blocks. The biggest is that any artificial synapse must reliably send precisely the same kind of signal for each input it receives; otherwise, the intricacy just degrades into chaos.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” said Jeehwan Kim, the MIT engineering professor who led the research. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

The MIT researchers are optimistic that their design has made significant headway on this problem by using a different material: single-crystalline silicon, which conducts perfectly, without defects. When they designed artificial synapses atop this foundation in simulation, using the common transistor material silicon germanium, they were able to create currents that varied by only about four percent between different synapses. That’s not perfect, but it’s a huge improvement on what has been achieved previously.
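As a rough picture of what that four percent figure means, the sketch below writes the same target weight into several simulated synapses whose programmed values vary from device to device. The 30 percent spread used for the amorphous case is an assumed number chosen for contrast, not a figure from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def write_weight(target, spread, n=5):
    """Store one target value in n simulated synapses whose
    programmed value varies by the given fractional spread."""
    return target * rng.normal(1.0, spread, size=n)

target = 0.50
# Assumed, illustrative spread for a defect-ridden amorphous synapse.
print("amorphous (assumed 30%):", np.round(write_weight(target, 0.30), 3))
# The roughly 4% device-to-device variation reported for the new design.
print("crystalline SiGe (~4%):", np.round(write_weight(target, 0.04), 3))
```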

For now, this work remains theoretical, and there’s a difference between demonstrating promising results in a simulation and realizing them in a real-world device. But Kim and his team are optimistic.

“This opens a stepping stone to produce real artificial hardware,” he said.