
What YouTube's New Anti-Terrorist Policies Will Actually Do

The new strategy involves stifling problematic videos.

by Cory Scarola

YouTube will begin implementing new policies to stifle violent and extremist content.

The changes include heavy restrictions on videos that are offensive but don’t necessarily qualify for outright removal under the existing community guidelines.

More engineering resources are going into technology that can automatically flag extremist content, and YouTube will also employ more human reviewers to flag videos. Proactively, the site is promoting more YouTube creators who speak out against the kind of hate and radicalization the new reforms target, while restricting the accessibility and promotability of extremist videos.

The announcement, delivered on Sunday by Google, YouTube’s parent company, outlined the guidelines. Whether the new policies actually work remains to be seen, though.

The New York Times reported that after the London Bridge attack, friends and family of Khuram Shazad Butt — identified as one of the attackers on the bridge — were worried about him watching YouTube videos by Ahmad Musa Jibril, an Islamic cleric based in Michigan. Videos posted to Jibril’s YouTube channel receive thousands of views on average.

YouTube and Facebook have struggled with how to treat content that some may view as objectionable but that is protected as free speech and doesn’t violate their community guidelines. YouTube’s chosen approach, explained in the announcement, will be to erect a number of barriers around such content.

“Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content,” writes Kent Walker, Google’s general counsel, in the blog post published over the weekend, without referencing any specific content.

“In future these will appear behind an interstitial warning and they will not be monetized, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”

Videos in that gray-area category will no longer appear in the recommendations panel, will have comments permanently disabled, and won’t be eligible for monetization. Back in March, a slew of advertisers — like AT&T, Enterprise, and Verizon — pulled their ads from YouTube after discovering that they were appearing on extremist videos and therefore helping to fund the groups that created them.
