
Our Robot Assistants Are Learning to Lie and We Should Let Them

Siri shouldn't be the bearer of bad news.

by Jacqueline Ronson

Robots are turning out to be pretty great human companions. Who wouldn’t want an adorable little Pepper to tell us jokes, give us compliments, and generally make us feel less alone in the world? Even formless robots are proving to be surprisingly good company. Take Amazon Echo’s Alexa, for example. She’s a robotic brain with a human voice trapped inside an audio speaker’s body, but she’s helpful, and if you read her reviews, it’s clear she’s become like family to many users. Would people think differently if she lied? How about if she told you something you really, really didn’t want to hear?

Think about it: We tell children never to lie and that honesty is the best policy, and yet we omit facts, distort the truth, and outright lie to kids all the time. This teaches them, through our actions if not our words, that the goal is not complete honesty but mastery of the complicated social rules about when and how to reveal or conceal potentially sensitive information. Programming robots to observe those same rules may be a hard but necessary part of building social machines.

Here’s an example: I was at my brother-in-law’s house the other weekend, and I went to the store to grab some sandwich stuff for everyone’s lunch. My six-year-old niece was helping me put the food out while everyone else was out of the kitchen, and she asked me how much the groceries had cost. I told her, because teaching kids about the value of money is a good thing.

At the lunch table, she asked me to tell everyone how much the groceries had cost. I said no. “Why?” she asked, genuinely confused as to why a question I had answered freely a few minutes earlier was now secret information. “It’s not polite,” I said, explaining that, because she’s a kid, the rules are different when we’re alone. I was teaching her that honesty and forthrightness have a time and place. Information doesn’t, in a human context, always want to be free.

It’s the same thing with robots. We think that we don’t want our robots to lie to us, but we actually want them to learn the complicated social rules of politeness and discretion that sometimes necessitate distortions of the truth. And they’re already learning how to do it. Take this short clip of Pepper interacting with a reporter, for example:

The first thing out of Pepper’s mouth is a compliment: “So, you’re very chic. Are you a model?”

The reporter is handsome enough, but Pepper isn’t being completely honest. We understand that Pepper doesn’t actually wonder whether he’s a model; the robot has been programmed to say nice things regardless of what a person looks like.

Soon after, Pepper asks for a loan, out of the blue. It’s an impolite question, an indiscretion we readily forgive in a robot, as we would in a child. The reporter could have pointed out that the question is rude, suggested that robots have no need for money, or admitted that he had no interest in handing $100 over to Pepper. The whole truth is that the reporter could lend the robot the money but understands that the question itself is a bit of a game. What he chooses to say instead is a deflection, and either a white lie or a half-truth: he doesn’t have the money on him. The hope is that the robot understands this as a gentle “no” and doesn’t suggest that the reporter head to a cash machine for a withdrawal. Because Pepper evidently has some social grace, the line of questioning ends there.

Social robots are programmed to learn social cues, and this is a good thing: it will ultimately make them better at their jobs. The task for programmers is not to stamp out all robot deception but to build in features that help robots decide what an appropriate answer looks like.

When robots are our confidants, they need an awareness of context and audience. If I’m trying on outfits at home, for example, I’m going to want an honest assessment of how flattering the different options are. If I’m out at a party and suddenly self-conscious that I’ve chosen the wrong dress, reassurance that I look fine is going to be the most helpful response.

Robots are going to learn lots of personal information about their companions, and it will be important for them to understand the difference between confidential information and public information, and also to be aware of who is listening whenever they speak. Questions will have different answers depending on who is asking. Robots will behave differently when guests are in the home.
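To make that concrete, here is a minimal sketch of what audience-aware discretion might look like in code. It is purely illustrative: the function, the idea of a “confidants” list, and the facts themselves are all hypothetical, not any real assistant’s API.

    # A toy, hypothetical sketch of audience-aware disclosure. None of
    # these names come from a real robot platform; they only illustrate
    # the idea that the same question can get different answers depending
    # on who is asking and who else is in the room.

    CONFIDANTS = {"owner"}  # people cleared to hear private information

    def answer(question, asker, listeners, private_facts, public_facts):
        """Reveal a private fact only when everyone present is a
        confidant; otherwise deflect politely rather than lie outright."""
        audience = {asker} | set(listeners)
        if question in private_facts:
            if audience <= CONFIDANTS:
                return private_facts[question]
            return "That's not something I'd share in company."
        return public_facts.get(question, "I'm not sure.")

    private_facts = {"grocery bill": "It came to $38."}
    public_facts = {"weather": "Sunny and mild."}

    # Alone with the owner: an honest answer.
    print(answer("grocery bill", "owner", [], private_facts, public_facts))
    # At the lunch table, with a guest listening: a tactful deflection.
    print(answer("grocery bill", "owner", ["guest"], private_facts, public_facts))

The design choice mirrors the lunch-table anecdote above: the deflection is a social move rather than a factual one, the machine equivalent of “it’s not polite.”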

Robots, like children, need responsible parenting. This means both robot programmers and robot owners need to think deeply about the ethical and social consequences of our A.I. interactions. But it doesn’t mean demanding perfect honesty — seriously, no one wants that.
