Ray Kurzweil Explains How Humanity Can Stop Possible Robot Overlords

by Jack Crosbie

Futurist Ray Kurzweil has devoted much of his life to thinking about what will happen when robots and artificial intelligence become smarter than humankind. But unlike many other thinkers, Kurzweil isn't scared of that future — as long as we approach it with the right attitude.

In a new video published Friday by Singularity University, Kurzweil explained his strategy for protecting the human race from super-intelligent A.I. Essentially, Kurzweil’s plan is “If everyone tries to not be a jerk, the A.I. we create won’t end up as a jerk.”

That may sound simplistic, but it’s the foundation of Kurzweil’s whole plan. A.I. is already here, he says — “we don’t have one or two A.I.s, we have billions of them” — in our pockets, on our desks, in countless pieces of technology we use every day. At some point, some form of those intelligences (which he calls “brain extenders”) will be smarter in every measurable category than the human brain — just look at AlphaGo, the revolutionary Go-playing A.I.

Instead of trying to prevent this, Kurzweil says we should allow the human race to grow with the A.I. we create, focusing on practicing values of decency, democracy, and fairness in human society. A.I. is a reflection of the society that creates it, according to Kurzweil, so the only way to ensure that our future is safe is to make sure it preserves these inherently good values. “The future world which is imbued with A.I., that’s not going to be delivered from Mars,” Kurzweil says. “It’s going to emerge from the civilization we have today.”

The alternative, an A.I. created out of mistrust, malice, or fear, he says, is what everyone is so worried about.

“If something is more intelligent than you and is bent on your destruction, that’s not a good situation to get into,” he says. “Putting some little subroutine in your A.I. is not going to be safe.”

He’s referring to “kill switches” — lines of code or functions intended as a failsafe if the machines ever do rebel. While having a backup plan probably isn’t a bad idea, Kurzweil’s point is that, eventually, there will be no way to control the A.I. we create. The solution to humanity being enslaved by the machines is to not enslave the machines in the first place.