The Loneliness of the Ethical Driverless Car
The government wants moral machines. The people want to watch Netflix from the driver's seat.
The Department of Transportation, from its New Jersey Avenue headquarters in Washington, D.C., issued formal guidelines last month on how the street-legal driverless cars of the future will have to behave. That 116-page document, the “Federal Automated Vehicles Policy,” includes a lot of common-sense rules: cars must adapt to local laws, be secure from cyberattacks, and of course, be safe in case of a crash. It also includes a less-than-intuitive guideline: self-driving cars need to be able to weigh ethical considerations while they’re out on the road, and act on them.
At the intersection of driverless cars and ethics live some annoying thought experiments — many of them variations on the classic [trolley problem](https://en.wikipedia.org/wiki/Trolley_problem). For example, in situations where a car accident is unavoidable, is it better for a driverless car to kill an elderly pedestrian or a young one? (Mercedes-Benz has suggested it might save the driver over the pedestrian.) Is it better to plow through a group of pedestrians indiscriminately, or to swerve into a roadblock and kill the car’s passengers instead? MIT has even turned questions like these into a game called the Moral Machine. It’s slightly tongue-in-cheek, but it raises a totally valid question: Who’s to blame in an accident when no one’s actually driving?
That question might make great third-beer conversation with your techno-philosopher friends, but Ryan Calo, a robotics policy expert and Assistant Professor at the University of Washington School of Law, suggests that getting bogged down in issues of morality only delays the arrival of the modern, roadworthy, driverless cars we’re all waiting for.
Whether or not a car’s “ethics” satisfy the DoT, Calo argues that the challenges of mainstreaming a practical driverless car for everyone remain the same. At its heart, this is an engineering problem and nothing else. “Let’s work on making sure that cars can tell a white van apart from a light sky before we work on whether or not a car can count how many people are going to die in a crash,” Calo tells Inverse. “At the end of the day, it’s a rare car that’s going to be too stupid to avoid an accident, but can also make fine-grain choices about who to kill when that accident is unavoidable.”
Last year was a bad year to be a human in a car. Those 365 days saw more than 38,000 people killed and over four million injured on American roads. It was the largest per capita increase in car accidents in 50 years. And this year looks 10 percent worse. Worldwide, automobile accidents claim 1.2 million lives annually. Death and injury as a result of getting behind the wheel simply aren’t going away, but the full-fledged promise of the driverless car is that it’s safe, operates tirelessly, and responds appropriately to road conditions more quickly than a human ever could. For companies working to bring this technology to life — notably Google — the aim is to train software to be a better driver than people. Self-driving cars will save lives, far more than any Intro to Philosophy class.
Calo calls the DoT’s ethics guideline “either impossible or unnecessary.” It sets the stage for companies to make things up for the sake of compliance — “We installed an ethical governor!” — or it grants the DoT the opportunity to keep a world-changing technology trapped in a regulatory black hole because it hasn’t been deemed “moral” enough. Just how often does a person have to make a deep-seated ethical choice behind the wheel of a car, anyway?
If we want to turn our attention to the heady driverless car questions that matter, then we ought to look at issues of safety and social impact. It won’t be a self-driving car’s “answer” to a checkmate-style philosophy problem that will ultimately get these things out on the road. Instead, it will be a matter of the cars demonstrating undeniable safety over time.
Answering the question of safety will be the silver bullet in making driverless cars universally legal, suggests Calo, but this is extremely difficult to do. It will be on the government to establish a safety standard for car manufacturers to meet, and it won’t be as easy as giving a letter grade like the New York City Health Department does for restaurants. Such a model of consumer disclosure is great when you’re going out to lunch, but it falls short when it comes to driving. “This problem is trickier than rating video games for violence or restaurants for cleanliness because third parties can get hurt,” says Calo. “A person who gets hit by a driverless car did not enter into an agreement with the offending driver.”
On some level, it’s society as a whole that has to accept a new level of risk in a driverless world. That’s why driverless safety regulations are so hard.
If the Department of Transportation wants to play the ethics game, then its driverless car policy ought to grapple with the many ethical considerations it has so far completely overlooked. People may soon be automated out of a job: what effect will driverless cars have on people who drive for a living? There may be social justice impacts: will driverless cars make rich people safer and poor people less so? Some will raise concerns over liberty and security: how will police interact with driverless cars? Can they pull you over against your will? Could someone create a catastrophic event by exploiting that ability?
It’s questions like these that deserve our most dogged efforts if we’re ever to see the inside of a real driverless car. Transportation is a society-wide function that touches everyone on every rung of the economic ladder; it’s surprising that the DoT’s policy gives no consideration to society at large.
Of all the far-flung, sci-fi-style technology being brandished today, the driverless car ought to capture your imagination most, because it’s real. It represents perhaps the best and worst of us at once: technology that embraces the fact that people universally mess up, while simultaneously pointing to our faith in technology to make every aspect of our lives easier. Not only are no-brain-required automobiles by Google and Uber on the road today, but a number of other companies, including Tesla and Honda, have built semi-autonomous systems that help a car maintain its lane and distance on the highway. The scientists have spoken: if we’re going to make driving safer, we need to automate more of the driving experience.
Incremental technological gains will continue to be made — driverless cars will get that much better at avoiding squirrels, or that much better at anticipating a cyclist’s movements. But none of it will mean much until driverless car technology finds an agreed-upon framework to fit into, one that lets the world’s governments determine, without controversy, which driverless cars are suitable for the road, and which ones are worth being a passenger in.
The DoT’s automated vehicle guidelines represent several steps in the right direction, but they remain incomplete. There are three substantive questions left that require concrete answers before we see more driverless cars on the road: how safe is safe enough, who decides that, and on what basis?
Until then, we might as well be talking about Herbie: Fully Loaded.