
A.I. Is Getting Better at Anticipating Shitposts and Not a Moment Too Soon

The robotic hero the internet needs.

by Danny Paez

The internet is a hellscape run by trolls and dominated by futile Facebook arguments, or worse. And while social media giants across the web have finally, begrudgingly, begun rolling out more robust efforts to dissuade derogatory dialogue, these solutions tend to be reactive; in other words, they come only after the damage has already been done.

It should come as no surprise, then, that artificial intelligence researchers are eager to develop systems that can foresee an online argument before it happens. But as a team of researchers at Cornell University recently explained to Inverse, building such a system is not far from trying to tell the future.

“In the past, there has been work on detecting whether a given comment is toxic,” says computer science Ph.D. student Jonathan Chang. “Our goal is slightly different: we want to know if it is possible to predict whether a currently civil conversation will get out of hand sometime in the future. To explore this question, we look at conversations between Wikipedia editors, some of which stay civil and others of which get out of hand.”


Chang and his colleagues analyzed hundreds of messages sent between the sometimes easily irritated Wikipedia curators. They then used 1,270 conversations that took an offensive turn to train a machine-learning model in an attempt to automate this kind of moderation. Their results were published in a paper presented at the Association for Computational Linguistics’ annual meeting on July 15.
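The team’s actual model leaned on linguistic cues in a conversation’s opening exchange, such as politeness strategies; the sketch below swaps in a generic bag-of-words classifier just to make the prediction setup concrete. The example comments, labels, and feature choices here are all invented for illustration, not taken from the paper.

```python
# A minimal sketch of the derailment-prediction setup, with placeholder data.
# The Cornell model used richer hand-crafted cues, not raw bag-of-words.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Opening exchanges of conversations, labeled by how they later turned out
# (1 = got out of hand, 0 = stayed civil). Hypothetical examples.
openings = [
    "Thanks for catching that, I'll double-check the source.",
    "Why do you keep reverting my edits? This is getting ridiculous.",
    "Good point, maybe we should discuss this on the talk page first.",
    "You clearly have no idea what you're talking about.",
]
derailed = [0, 1, 0, 1]

# Featurize the opening comments and fit a simple classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(openings, derailed)

# Score a new, still-civil conversation for its risk of derailing later.
risk = model.predict_proba(["Please stop undoing my work immediately."])[0, 1]
print(f"estimated derailment risk: {risk:.2f}")
```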

So how’d the “let’s all calm down” bot do? Fortunately, not too shabby. It ended up being 65 percent accurate, slightly lower than humans’ 72 percent success rate. The researchers arrived at that human benchmark by creating an online quiz where people could test their comment moderation skills. Turns out, it’s pretty damn hard to figure out whether social media users will wild out or stay civil.

“The idea is to show that the task is hard but not impossible: if humans got only 50 percent accuracy, for instance, this would be no better than random guessing, and there would be no reason to think that we could train a machine to do any better,” says Chang. “Comparing our model to human responses has given us some insight into how similar, or dissimilar, our computational approach is to human intuition.”
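For intuition on why 50 percent is the floor Chang describes: on a test set evenly split between derailed and civil conversations, coin-flip guessing lands near 50 percent accuracy. This tiny simulation, with made-up labels, illustrates the point.

```python
import random

random.seed(0)
# Balanced ground truth: half the conversations derailed, half stayed civil.
truth = [0, 1] * 500

# Random guessing hovers around 50 percent accuracy on a balanced set,
# which is the baseline the researchers compare against.
guesses = [random.randint(0, 1) for _ in truth]
chance = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
print(f"chance baseline: {chance:.2f}")  # ~0.50
# The paper's model reached about 0.65; human quiz-takers, about 0.72.
```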


Chang doesn’t believe this will rid the internet of trash talk, but he sees it as a way to help human social media moderators. Instead of wading through the millions of comments posted each day, moderators could use the algorithm to identify the hundreds that are at risk of turning into an argument.
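As a rough sketch of that triage idea, reusing the hypothetical classifier from the earlier snippet: score every active thread and surface only those above a cutoff. The thread contents and the cutoff value here are invented for illustration.

```python
# A rough triage sketch reusing the hypothetical `model` from the earlier
# snippet; thread contents and the risk cutoff are invented.
conversations = {
    "thread-001": "I appreciate the feedback, I'll revise the section.",
    "thread-002": "Stop reverting my edits or there will be consequences.",
    "thread-003": "Could you point me to the relevant policy page?",
}

RISK_CUTOFF = 0.6  # arbitrary threshold for routing a thread to a human

# Score each still-civil thread and surface only the riskiest ones.
scores = {
    tid: model.predict_proba([text])[0, 1]
    for tid, text in conversations.items()
}
for tid, risk in sorted(scores.items(), key=lambda kv: -kv[1]):
    if risk >= RISK_CUTOFF:
        print(f"{tid}: derailment risk {risk:.2f} -> flag for a moderator")
```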

While one might think that combing through all of these potentially unpredictable quarrels would give Chang a bout of sadness, the scientist says the experience has actually given him hope for humanity.

“These days, there’s a lot of pessimism surrounding online conversations, but here what we’re seeing is that even in cases where a conversation starts out in an unfriendly manner, there is still a chance for the participants to have second thoughts, change their tone, and put the conversation on track for a brighter outcome,” he says. “The future is not set in stone.”

Maybe there is hope for the internet after all.
