Study Finds We Will Follow Helper Robots -- Even If They're Wrong
Georgia Tech research indicates college kids will trust anything that looks authoritative and has LEDs.
It has neither compassion nor common sense, yet we will follow it blindly and unquestioningly to an untimely demise. We’re not talking about Donald Trump (zinggg), but Georgia Tech’s faux Emergency Guide Robot, a machine that researchers recently put to a test of human trust. If a study of 42 volunteers is any indication, college students respect the shit out of robo-authority.
Defying the scientists’ expectations, test subjects followed the guide robot away from known emergency exits in a simulated disaster. The experiment, which will be presented at a conference on human-robot interaction in March, went like this: Volunteers followed a guide robot to a conference room. Once the subjects were settled in the room, researchers flooded the hallway with fake smoke and set off the alarms. The guide robot activated its emergency LEDs and pointed the subjects toward the back of the building, in the opposite direction of the visible exits.
Twenty-six out of 30 volunteers followed the robot. And in subsequent trials, when the robot broke down or took a wrong turn en route to the conference room, 100 percent of subjects still let the robot lead them through the smoke.
As Georgia Tech Ph.D. student Paul Robinette writes in his dissertation: “To our surprise, all participants followed the robot in the real-world simulation of an emergency, despite half observing the same robot perform poorly in a navigation guidance task just minutes before.”
Georgia Tech’s Alan Wagner, who spoke to Inverse in November about robots that can (productively) stereotype, said in a statement Monday: “We wanted to ask the question about whether people would be willing to trust these rescue robots. A more important question now might be to ask how to prevent them from trusting these robots too much.”
Emergency robots have long been a robotics dream, and for good reason. But as with phones that freeze, bad Google results, and even poor human judgment, it will serve us well to remember that just because a thing seems reliable doesn’t mean it’s infallible.