
Musk, Wozniak, Hawking, Other Geniuses Are Opposing Autonomous Weapons

AI weapons will become 'the Kalashnikovs of tomorrow' if we're not careful.

by Ben Guarino
Image: Flickr.com/Trotaparamos

For decades, the AK-47 has been the killing machine of choice in many dusty corners of the world because it’s cheap, durable, and ubiquitous. Those same qualities, argue the signatories of an open letter released Monday by the Future of Life Institute, make autonomous weapons fraught with peril. But what separates artificial intelligence weaponry from Kalashnikovs is that the autonomous machine can “select and engage targets without human intervention,” which, in the wrong hands, could perpetrate atrocities far greater than any uncalculating rifle.

The signatories have their work cut out for them. As humans, we are, on the whole, much better at reacting than thinking. Major players have been slow to draw down their nuclear arsenals (and building autonomous drones, the thinking goes, would require a fraction of the machinery that goes into creating a nuke); elsewhere, land mines still dot decades-old battlefields. Whenever a technology has stood to revolutionize warfare, be it gunpowder, naval dreadnoughts, or enriched uranium, an arms race has followed.

The FLI makes a distinction between the drones we have now, piloted remotely by human hands, and a robot switched on and left to its own murderous devices. The creation of the latter, the institute argues, would spark a global AI arms race that, as the history of the Kalashnikov demonstrates, would not end well:

It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

This is some heavy shit. And the names behind this warning are not your run-of-the-mill Skynet crackpots; they are creators, engineers, scientists, and philosophers who have helped shape what our technology looks like today. A slice of the names: Apple’s Steve Wozniak, Skype’s Jaan Tallinn, Harvard physicist Lisa Randall, a who’s who of AI researchers and roboticists (including Stuart Russell, Barbara J. Grosz, Tom Mitchell, and Eric Horvitz), actress Talulah Riley, Noam Chomsky, and Stephen Hawking.

Hawking has been on a bit of a pessimistic bent lately, also reiterating his predictions about the dangers of encountering extraterrestrial life. He’ll be answering questions about this letter and the future of technology on Reddit this week; we might not be able to control hostile aliens, but we can put “the onus on scientists at the forefront of this technology to keep the human factor front and center of their innovations.”

You can add your name to the letter, too.
