
The Pentagon Is Worried About Killer Robots From China and Russia

We're racing China and Russia for military A.I. domination 

by Peter Rugg

Can we ensure our national security without pursuing killer robot technology? The Pentagon doesn’t think so.

Such was the discussion at Monday’s National Security Forum at the Center for a New American Security, a D.C. think tank specializing in military issues. Deputy Defense Secretary Robert Work was on hand to say that humanity stood at “an inflection point” with artificial intelligence and that the battlefield of the future will be dominated by autonomous mechanical warriors. Work has been speaking out about the major technological shift happening beneath the military’s feet for a while now. He doesn’t mince words.

“We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff Valery Vasilevich Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield and he said, and I quote, ‘In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations,’” Work told the increasingly nervous crowd. This info did little to quell fears: The company behind Russia’s new Armata T-14 tank has pledged to roll out an army of combat robot prototypes within the next two years.

Work predicted authoritarian regimes would naturally gravitate toward autonomous designs because they philosophically view people with independent thought as weak links. The logic there holds, but the thought is horrifying, especially in the context of a half century of civil wars fought with Russian-made AK-47s.

This is not to say that America isn’t interested in augmenting its own weapons with A.I. Pentagon officials have already predicted several jobs that will likely be better suited to robotic decision makers in the near future: cyber warfare and over-the-horizon targeting battles in which a human intelligence would be too slow to react effectively, telling F-35 pilots what to target, assisting flying and landing operations, and programming drones and boats.

Despite its democratic ideals, America’s autonomous weapons research comes with its own set of troubling ethical ramifications, the kind that led Elon Musk and Stephen Hawking to call for a “killer robot” ban this summer. The questions are massive. For instance: If an autonomous weapon received an illegal order, would it be programmed with the ethics to reject it? That’s the sort of thing one has to mull before fielding a loaded bot.

Could machines commit war crimes independently? That’s another.

Work demonstrated real thoughtfulness at the meeting, but provided no insight into whether his foreign counterparts had a similarly moralistic approach.
