The technology giant has amped up its AI-driven efforts to block extremist content on the video sharing platform.
Terrorist social media marketing – and how to block it – has become an area of central focus for the top platforms as both Facebook and YouTube work to combat the spread of extremist propaganda.
Google and Facebook have both opened up to the public in their own ways to tackle this issue.
To help fight terrorist social media marketing via YouTube, Google released an op-ed in the Financial Times, publicly discussing the ways in which it is working to boost its extremist content blocking.
Both YouTube and Facebook have seen considerable criticism regarding their anti-terrorism policy enforcement. The pressure has been particularly high in Europe, where there is a greater call to stop extremist content in its tracks. Politicians in Germany and the United Kingdom have placed the blame squarely on YouTube and other social sharing platforms for playing host to content spreading hate and extremism.
Stopping terrorist social media marketing has become especially important in Europe where it is increasingly prevalent.
Attacks have been growing in recent years in several parts of Europe. The UK alone has suffered a series of attacks since March of this year. Governments in the United Kingdom and France are both looking into new legislation that would hold tech platforms liable if they do not remove terrorist and extremist content quickly enough. They argue that this type of content assists in radicalizing terrorists.
Earlier in June, Prime Minister May of the United Kingdom called for agreements across democratic allied governments in order to “regulate cyberspace to prevent the spread of extremism and terrorist planning.”
Similarly, Germany has proposed a law that would impose massive fines on platforms failing to remove terrorist social media marketing content such as hate speech. There is already significant government backing for this penalty. On a more organic level, social media platforms have found that failing to make a concerted, prompt effort to reduce this type of content has a direct impact on their revenues.
YouTube experienced this earlier in 2017, when advertisers withdrew their ads en masse after discovering that their advertisements had been displayed alongside extremist video content. Google responded with an exceptionally quick update to the platform guidelines to make it less likely for ads to be displayed on controversial, hateful, demeaning, or incendiary content.