Self-Flying Helicopters Are the Future of Rescues, Deliveries, and War
Like self-driving vehicles, automated flying robots carry amazing promise -- and serious ethical questions.
A Black Hawk helicopter touches down on a landing pad, picks up an amphibious all-terrain vehicle, flies five to seven kilometers, lands again, and off-loads its cargo. The ATV then drives 10 miles, navigating around biological and chemical obstacles and transmitting data back to the waiting chopper. Every step of the process happens without human input.
Though it may sound futuristic, that scene actually took place last month, as reported by Defense One. The automated helicopter that carried out the test is manufactured by Sikorsky Aircraft, maker of the ubiquitous Black Hawk, and the demonstration showed that a pilotless aircraft can effectively “team” with a ground-based robot.
The concept of driverless cars is now a familiar one, thanks to Google. Less familiar is the idea of self-flying helicopters, though in some ways taking a pilot out of a whirlybird presents fewer challenges than taking a driver out of a Toyota. One main reason is that helicopters generally have fewer obstacles to contend with than cars (or driverless tanks). And although self-driving cars aren’t on the market, two self-flying helicopters operated for three years in Afghanistan.
“The teaming of unmanned aerial vehicles [UAVs] and unmanned ground vehicles like what was demonstrated has enormous potential to bring the future ground commander an adaptable, modular, responsive, and smart capability that can evolve as quickly as needed to meet a constantly changing threat and environment,” Dr. Paul Rogers, director of the U.S. Army Tank Automotive Research, Development and Engineering Center, tells me.
In 2010, Lockheed Martin, which is currently in the process of buying Sikorsky, developed an automation system to control the Kaman K-1200 K-Max, an existing helicopter model designed for heavy-load hauling. The result was a pair of efficient, automated helicopters that reportedly transported 30,000 pounds a day, and about 4.5 million pounds over their three-year deployment. The K-Max wasn’t flown remotely the way a drone is. Instead, it was given a flight path before each mission, which a human operator could then amend via satellite.
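To make that control model concrete, here is a minimal sketch in Python. It is purely illustrative, and every name in it (Waypoint, Mission, amend) is my own invention, not Lockheed’s software: the aircraft follows a preplanned path on its own, and a remote operator can only swap out legs it hasn’t flown yet.

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float

@dataclass
class Mission:
    """A preplanned flight path the aircraft follows autonomously."""
    waypoints: list = field(default_factory=list)
    next_index: int = 0  # leg the aircraft is currently flying toward

    def amend(self, index: int, new_wp: Waypoint):
        """Operator uplink (e.g., via satellite) replaces a future waypoint.

        Legs already flown can't be changed; the aircraft, not a remote
        pilot, handles the moment-to-moment flying.
        """
        if index <= self.next_index:
            raise ValueError("can only amend waypoints not yet reached")
        self.waypoints[index] = new_wp

# Example: re-route the third leg around newly reported weather.
mission = Mission([Waypoint(34.5, 69.2, 1800),
                   Waypoint(34.6, 69.3, 2100),
                   Waypoint(34.7, 69.4, 2000)])
mission.amend(2, Waypoint(34.7, 69.5, 2400))
```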
After the aircraft returned from Afghanistan, the makers of K-Max gave it a new task: fighting fires. In a test carried out in upstate New York, the robotic helicopter was able to put out a fire without human input beyond an initial flight plan.
Steven Rush, a former contract helicopter instructor for the U.S. Air Force, told me that although remotely piloted aircraft have been carrying out logistical operations for years, he still feels more comfortable with a human at the helm. “I have never met anyone yet who was too keen on climbing in the back of an unmanned helicopter,” Rush says. “However, if it were a tight situation and we needed to get out of Dodge pronto, I could be swayed.”
Earlier this year, I covered human-robot teaming for Vocativ, looking primarily at the issue of “trust” between person and machine. One of the main ethical issues that arises with complex, semi-automated weapons is what experts call “automation bias.” The everyday example is when the GPS on your phone tells you to take a route you know is wrong, but you follow it anyway because, the thinking goes, computers are smarter than humans.
For the average person, that might mean taking the long way to a restaurant. But in war, blind faith in a machine can be deadly. In the early days of the Iraq war, a Patriot missile battery mistakenly identified a UK plane as an enemy missile. Though the battery had human oversight, the operators deferred to the machine and the plane was shot down, killing the two men in it. A week later, the same system made the same mistake and shot down a U.S. fighter jet, killing the pilot.
That event was more than a decade ago. Since then, human rights groups have lobbied hard for an international ban on autonomous weapons systems, sometimes called “killer robots.” Human Rights Watch has led the call, arguing that people should always be “in the loop” and that machines should never be able to select and kill a target entirely on their own. For now, at least, the Pentagon has a policy prohibiting fully autonomous weapons.
Some defense analysts have pushed back against banning a weapon that may be two decades away, citing the possible benefits lethal autonomous robots could bring. For one, humans will naturally shoot if they feel they are in danger; a robot could be programmed with a higher threshold for firing, since its sense of self-preservation is a design parameter rather than an instinct. For another, humans in war zones (and elsewhere) are forced to make snap judgments shaped by bigotry, fear, and stress. It’s possible robots could be free from those limitations.
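To see what a “higher threshold for firing” could mean in practice, here is a deliberately toy sketch, with made-up names and numbers, of an engagement gate in which self-preservation is just a tunable parameter:

```python
def should_fire(threat_confidence: float, under_fire: bool,
                self_preservation: float) -> bool:
    """Hypothetical engagement gate, not any real system's logic.

    threat_confidence: 0.0-1.0 estimate that the target is hostile.
    self_preservation: 0.0-1.0; a human taking fire effectively runs
    near 1.0, while a robot's value is a design choice that can be
    set low, raising the bar for firing even when it is at risk.
    """
    required = 0.95 - 0.4 * (self_preservation if under_fire else 0.0)
    return threat_confidence >= required

# A human under fire may shoot at ~55% confidence; a robot with
# self_preservation=0.1 still demands ~91% before it may engage.
print(should_fire(0.60, under_fire=True, self_preservation=1.0))  # True
print(should_fire(0.60, under_fire=True, self_preservation=0.1))  # False
```

The point isn’t the particular formula; it’s that a machine’s urge to shoot back can be set by policy rather than adrenaline.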
One danger autonomous weapons pose is that, like drones today, they could lower the human stakes of war, making it more tempting for belligerent countries to wage. It’s much easier for leaders to authorize combat missions if there’s no chance of their own soldiers dying.
Even non-lethal autonomous robots raise serious ethical questions. As technology philosopher Peter Asaro told me in an email earlier this year, Google’s self-driving cars are perfect examples. “The current prototypes from Google and other manufacturers require a human to sit behind the steering wheel of the car and take over when the car gets into trouble,” he said. “But how do you negotiate that hand-over of control?” He raises several scenarios: a driver wakes up from a nap and incorrectly thinks an oncoming truck is a threat. Does the car let the driver swerve into trouble when he or she is perfectly safe? “What if the person is legally drunk, but wants to take control?” he asks. And when something does go wrong with a self-driving car, how is blame apportioned?
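Asaro’s hand-over question can be framed as an explicit control policy. The sketch below is hypothetical (none of these names come from Google’s system), but it shows how each of his scenarios becomes a branch the designers must decide in advance:

```python
from enum import Enum, auto

class Decision(Enum):
    GRANT = auto()
    REFUSE = auto()
    SLOW_AND_PULL_OVER = auto()

def negotiate_handover(driver_alert: bool, driver_sober: bool,
                       maneuver_is_safe: bool) -> Decision:
    """Illustrative policy for a human requesting manual control.

    Each branch embodies one of Asaro's dilemmas: the legally drunk
    driver (handover refused), the panicked driver waking from a nap
    (unsafe swerve overruled), and the default case.
    """
    if not driver_sober:
        return Decision.REFUSE              # and who is liable for refusing?
    if not driver_alert or not maneuver_is_safe:
        return Decision.SLOW_AND_PULL_OVER  # overrule the swerve, de-escalate
    return Decision.GRANT

# The driver waking from a nap, requesting an unnecessary swerve:
print(negotiate_handover(driver_alert=False, driver_sober=True,
                         maneuver_is_safe=False))
```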
Those questions become even more profound in a war zone. What if a human commander programs an automated helicopter, with automated weapons, to blow up a building that turns out to be a hospital? Should the program have safeguards in place to comply with international law? Should it be allowed to carry out its assigned task even if doing so is illegal?
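One way to picture such a safeguard is as a legality gate that runs before any strike order executes, with a human kept in the loop for the final call. This is a hypothetical sketch, not a description of any fielded system, and the hard, unsolved part is the check itself: knowing that a set of coordinates is a hospital.

```python
# Hypothetical sketch: a strike order must clear a legal check first.
PROTECTED_SITES = {("34.5275", "69.1608"): "hospital"}  # illustrative registry

class UnlawfulOrderError(Exception):
    pass

def execute_strike(target: tuple, authorize) -> str:
    """Refuse orders against known protected sites; escalate to a human.

    `authorize` is a callback standing in for the human commander kept
    "in the loop": the machine can flag, but a person must decide.
    """
    if target in PROTECTED_SITES:
        raise UnlawfulOrderError(
            f"target is a registered {PROTECTED_SITES[target]}; aborting")
    if not authorize(target):
        return "strike withheld by human controller"
    return "strike authorized"

# Usage: the order against the registered hospital never executes.
try:
    execute_strike(("34.5275", "69.1608"), authorize=lambda t: True)
except UnlawfulOrderError as e:
    print(e)
```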
Even as robotic technology progresses, humans will remain in the cockpit for the foreseeable future. Glenn Bloom, a former Army pilot, describes the thrill of flying a helicopter. “The ability to hover in one place is absolutely amazing,” Bloom says. “The capacity to hover forward, then turn sideways while maintaining the same track on the ground to see what’s behind you and then turn back to continue a takeoff in one fluid motion is unique.”
Rush, the flight instructor, agrees. “Flying a helicopter,” he says, “is the most fun you can have with your clothes on.”