Google steps up its fight against extremist and inflammatory videos
Months after some advertisers fled Google over concerns about ads appearing alongside YouTube videos that promote hate and extremism, the Internet giant has announced new steps aimed at tackling such content.
“There should be no place for terrorist content on our services,” Google said in a Sunday blog post outlining ways it will identify problematic videos and remove them from YouTube — or at least stop them from being monetized and make them harder to find.
In March, after a report by Britain’s The Times showed examples of ads appearing next to videos by homophobic American preacher Steven Anderson and white supremacist David Duke, brands including AT&T, Verizon and Enterprise Rent-A-Car said they would halt or scale back their advertising deals with Google.
The uproar centered on ads placed on YouTube as well as websites and apps that use Google’s ad technology. It was a real concern for Google’s parent company, Alphabet Inc., which has struggled to generate significant profits outside of advertising.
In its Sunday blog post, Google said one way it will fight extremist content is by devoting more engineering resources to applying advanced machine-learning research to identify such videos. More than half of the terrorism-related content Google has removed in the last six months was found and assessed by its video analysis models, the company said, and this step will build on that work.
Google also said it plans to increase the number of independent experts in YouTube’s Trusted Flagger program, in which vetted individuals and organizations alert the company to problematic videos. It will add 50 expert nongovernmental organizations to the 63 already in the program.
When it comes to videos that are troublesome but do not clearly violate the company’s policies, such as those containing inflammatory supremacist content, Google said it will take a tougher stance: those videos will be preceded by a warning, will not carry advertisements, will not be recommended and will not be eligible for comments.
“We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” the company said in the blog post, written by Google general counsel Kent Walker.
Finally, Google announced the expansion of two programs that try to sway people’s views directly: Creators for Change, which promotes YouTube videos that counter hate and radicalization, and the Redirect Method, which uses targeted online advertising to reach potential Islamic State recruits and steer them toward anti-terrorism videos.
Seamus Hughes, deputy director of the Program on Extremism at George Washington University, said Google’s announcement is a positive step and signifies that “they are looking at this with fresh eyes.”
And because Google is such a large company, he said, its move will prompt other companies that host user-generated material to step up their own efforts against terrorism-related content.
AT&T was not moved by Google’s announcement.
“We are deeply concerned that our ads may have appeared alongside YouTube content promoting terrorism and hate,” it said, reiterating a statement it made in March. “Until Google can ensure this won’t happen again, we are removing our ads from Google’s non-search platforms.”