Prevent the Robot Apocalypse by Contemplating This A.I. Question of Morality

by William Hoffman

Panelists at the World Science Festival discussed robot morality. (LiveStream)

There is a moral question in robotics that must be answered before artificial intelligence can advance.

Imagine there’s a robot in control of a mine shaft, and it realizes a cart filled with four human miners is hurtling down the tracks, out of control. The robot can shift the tracks and kill one unaware miner, thus saving the four in the cart, or leave the tracks as they are and allow the four miners to run into a wall and die. Which would you choose? Would your answer change if the one miner were a child instead? If we can’t answer this, how do we program robots to make that decision?

Those were the questions posed to panelists and the audience at the World Science Festival event in New York City titled “Moral Math of Robots: Can Life and Death Decisions Be Coded?” Most chose to direct the cart toward the one miner, and changing the person to a child didn’t change many respondents’ minds, but the answer wasn’t unanimous; human moral decisions often aren’t.

It was a pointed example of the struggles humanity will face as artificial intelligence continues to advance and we confront the possibility of programming robots with morality.

“A concept like harm is hard to program,” said panelist Gary Marcus, a professor of psychology and neural science at NYU and CEO and co-founder of Geometric Intelligence, Inc. Concepts like justice, he said, don’t have a cut-and-dried answer to program, and we currently don’t have a good set of morals for robots beyond Asimov’s Three Laws of Robotics, which were featured in movies such as I, Robot.

That, he said, was the point of joining the panel: to start a conversation about how we should tackle robot morality.
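To see why harm is hard to program, here is a minimal sketch, in Python, of what a naive utilitarian controller for the mine-shaft dilemma might look like. Every name and number in it is hypothetical, chosen for illustration; no panelist proposed this implementation:

```python
# A naive utilitarian controller for the mine-shaft dilemma.
# All names and weights here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Person:
    # The crux of the problem: someone has to pick this number.
    moral_weight: float = 1.0

def expected_harm(track: list[Person]) -> float:
    """Total weighted loss of life if the cart runs down this track."""
    return sum(p.moral_weight for p in track)

def choose_track(current: list[Person], siding: list[Person]) -> str:
    """Divert the cart only if the siding costs less weighted harm."""
    return "siding" if expected_harm(siding) < expected_harm(current) else "current"

# Four miners in the cart's path, one unaware miner on the siding.
print(choose_track([Person()] * 4, [Person()]))                   # -> "siding"
# Swap the lone miner for a child: what weight, if any, should flip the choice?
print(choose_track([Person()] * 4, [Person(moral_weight=5.0)]))   # -> "current"
```

The comparison itself is trivial arithmetic; the part no one knows how to justify is the moral_weight assignment, which is exactly the judgment the audience split on.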

Panelists discuss robot morality. (LiveStream)

Elon Musk discussed his worries about artificial intelligence this week, going so far as to suggest we all live in some sort of simulation, like The Matrix, and that humans should invest in a neural lace that would attach to the brain and increase human cognitive abilities. Other prominent figures such as Stephen Hawking have warned of the dangers of A.I., and this week Matt Damon cautioned the same during his commencement speech at MIT.

Other scenarios included militarized autonomous weapons systems, how we’ll deal morally and philosophically with handing battlefield decisions to machines, and the choices autonomous cars may have to make in the not-so-distant future.

The panelists came to no consensus on what to do, but said the enormous benefits of A.I. shouldn’t be thrown out the window just because these questions are hard. They suggested talking more openly about these sorts of moral mine-shaft questions; maybe then we’ll get closer to preventing the robot apocalypse.
