Researchers Show How AI Can Fake Its Way Through Conversations Just Like Humans
How to learn without asking stupid questions.
If nothing else, you can say this much for artificial intelligence: It's rarely afraid to look stupid. If a learning A.I. encounters something outside its preprogrammed knowledge, it typically won't be shy about asking the person with whom it is speaking to clarify.
This can, however, make for rather monotonous conversation for the human talking to the chatbot, voice assistant, or generally conversant robot: "What's an apple?" "What's tiramisu?" "What's cured meat?" "Do you know literally anything about food, you stupid recipe chatbot?"
You get the idea. As researchers from Japan's Osaka University point out in a recent spotlight on their work, that last line is indicative of the real problem facing A.I.: Asking questions might be the best way for these systems to learn, but that doesn't count for much if the barrage of questions is so irritating or tedious that the human wanders off. It's not enough for the A.I. to know what it doesn't know. It also has to know how to keep humans engaged enough to fill in the gaps in its knowledge.
Their newly devised method uses what's known as lexical acquisition through implicit confirmation, which is basically a fancy way of saying artificial intelligence can now bullshit its way through conversations just as well as humans can. It pulls off this trick not by asking humans to confirm what something is, but by saying something else that indirectly gets the conversation partner to confirm or deny that the A.I.'s hunch is correct.
Let's take a look at an example. Suppose a human mentions, "I'm going to try cooking nasi goreng this weekend," and the A.I. replies with something like, "There are some good Indonesian restaurants around here."
The A.I. knows enough to make a guess about the origins of the phrase "nasi goreng," judging that it sounds Indonesian. The fact that the human said they will try to cook this dish suggests it's a food of some sort, and so the A.I. generates that statement (not a question, though one is hidden within it) about the quality of Indonesian restaurants in the area.

The A.I. could presumably use basic data about recent restaurant openings and their online reviews to figure out whether that statement makes sense factually, perhaps instead saying something like "There really need to be more good Indonesian restaurants around here" if it didn't find many options.

But that's just to reduce confusion: even if the human disagrees with the statement, the reply still answers the underlying question of whether nasi goreng is Indonesian. It's only if the person asks what the A.I. is talking about that it becomes clear the machine made a mistake.
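To make the trick concrete, here's a toy version of that loop in Python. This is a minimal sketch under loose assumptions: the function names and keyword rules below are invented for illustration, and the researchers' actual system relies on classifiers trained on real dialogue data rather than hand-written rules like these.

```python
# Toy sketch of lexical acquisition through implicit confirmation.
# All names here (guess_category, phrase_statement, classify_reply) are
# hypothetical stand-ins, not the Osaka team's real components.

def guess_category(unknown_word: str) -> str:
    """Guess what kind of thing an unknown word is.
    Stand-in for the real system's prediction model."""
    if unknown_word == "nasi goreng":
        # Sounds Indonesian; conversational context says it's cooked.
        return "Indonesian food"
    return "thing"

def phrase_statement(category: str) -> str:
    """Embed the guess in a statement instead of asking a direct question."""
    if category == "Indonesian food":
        return "There are some good Indonesian restaurants around here."
    return f"I hear that kind of {category} is popular lately."

def classify_reply(reply: str) -> str:
    """Crudely decide whether the human's reply confirms or denies the guess."""
    reply = reply.lower()
    if "what are you talking about" in reply or "huh" in reply:
        return "deny"    # confusion signals the guess was wrong
    return "confirm"     # engaging with the statement implicitly confirms it

# One turn of the loop:
category = guess_category("nasi goreng")
print(phrase_statement(category))
print(classify_reply("Yeah, I love that place downtown."))  # -> "confirm"
print(classify_reply("Huh? What are you talking about?"))   # -> "deny"
```

The point of the design is that either a "confirm" or an ordinary disagreement lets the A.I. quietly file nasi goreng under Indonesian food; only outright confusion tells it to throw the guess away.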
So then, if you thought faking your way through conversations at parties was something only humans could ever do… sorry, automation really does come for us all.