YouTube wrongly removes Syria conflict videos
Removing the videos could harm documentation of human rights violations, activists say
Thousands of videos documenting violence in Syria were wrongly removed by YouTube after its machine learning software flagged them, forcing staff to intervene and re-upload some of the videos.
The machine learning technology the site uses to police extremist content flagged the videos as inappropriate, prompting activists to warn that such removals could jeopardise future efforts to prosecute war crimes.
Eliot Higgins, founder of Bellingcat, a citizen journalism website, told the BBC: "We have a situation where a few videos get wrongly flagged and a whole channel is deleted. For those of us trying to document the conflict in Syria, this is a huge problem."
YouTube does not allow violent, harmful or dangerous content on its site unless the purpose is "educational, documentary, scientific or artistic" (EDSA) and the material "isn't gratuitously graphic". While there is little detail on what factors the machine learning software considers when assessing a video, human reviewers at YouTube weigh a video's metadata, description and, most importantly, its context before deciding whether to take it down.
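To illustrate how such a policy exception might be expressed in code, the sketch below models a review decision that weighs a video's metadata, description and context against the EDSA exception. All field names, the keyword list and the scoring logic are hypothetical; YouTube has not published how its reviewers or classifiers actually weigh these signals.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Video:
    # Hypothetical fields standing in for the signals reviewers reportedly consider.
    title: str
    description: str
    tags: List[str]
    context_notes: str          # e.g. uploader statement, channel history
    gratuitously_graphic: bool  # would come from a separate human judgement

# Invented hints of an EDSA purpose, purely for illustration.
EDSA_HINTS = ("documentary", "documentation", "evidence", "news", "education", "research")

def should_remove(video: Video) -> bool:
    """Illustrative only: remove violent content unless an EDSA purpose is
    apparent from the metadata/context AND it is not gratuitously graphic."""
    text = " ".join([video.title, video.description, video.context_notes, *video.tags]).lower()
    has_edsa_purpose = any(hint in text for hint in EDSA_HINTS)
    return not (has_edsa_purpose and not video.gratuitously_graphic)

# Under this sketch, footage uploaded with clear documentary context would be kept.
clip = Video(
    title="Shelling of a residential district, August 2017",
    description="Raw footage recorded for documentation of the conflict.",
    tags=["syria", "documentation"],
    context_notes="Uploaded by a monitoring group as evidence of an attack on civilians.",
    gratuitously_graphic=False,
)
print(should_remove(clip))  # False
```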
The video streaming site introduced machine learning to help it police the 400 hours of content uploaded to YouTube every minute. Earlier this month it revealed that 75% of the violent extremist videos it removed were spotted by the machine learning software before any human viewer notified the firm, helping it more than double its take-down rate.
Keith Hiatt, a vice president at human rights technology firm Benetech, told the New York Times that removing these videos means losing "the richest source of information about human rights violations in closed societies".
A YouTube spokesperson said: "YouTube is a powerful platform for documenting world events, and we have clear policies that outline what content is acceptable to post. We recently announced technological improvements to the tools our reviewers use in video takedowns and we are continuing to improve these.
"With the massive volume of videos on our site, sometimes we make the wrong call. When it's brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it."
02/08/2017: Machine learning doubles YouTube's extremist video take-down rate
Machine learning has helped YouTube identify and take down extremist content before any viewer flags it.
The video streaming site recently began using machine learning technology to identify and remove "violent extremism and terrorism-related content in a scalable way".
In the last month, over 75% of the violent extremist videos YouTube removed were spotted using machine learning before any human viewer notified the firm, it said.
With 400 hours of content uploaded to YouTube every minute, removing inappropriate or extremist videos is a challenge for the website, but it said: "Our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism, as well as the rate at which we've taken this kind of content down."
The system is also "more accurate than humans at flagging videos that need to be removed", YouTube said.
The company is now looking to hire more employees to review and enforce upload policies, and plans to invest in technical resources to address these issues.
Other measures it has taken include working with 15 additional expert NGOs to understand complex issues like hate speech and radicalisation, so the site can identify this content.
It is also implementing some features from Jigsaw's Redirect Method, under which users who search for sensitive keywords on YouTube are redirected to playlists of videos that "confront and debunk violent extremist messages".
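For illustration, a minimal sketch of how a Redirect Method-style lookup could work is shown below: a search query is checked against a curated list of sensitive terms and, on a match, a counter-narrative playlist is returned instead of ordinary results. The terms, playlist IDs and matching rule are all invented for the example; Jigsaw has not published the implementation details of the method.

```python
from typing import Optional

# Hypothetical curated mapping of sensitive search terms to counter-narrative playlists.
# The terms and playlist IDs below are placeholders, not real data.
SENSITIVE_TERMS = {
    "how to join isis": "PL_counter_narrative_01",
    "jihad recruitment": "PL_counter_narrative_02",
}

def redirect_playlist(query: str) -> Optional[str]:
    """Return a curated counter-narrative playlist ID if the search query
    matches a sensitive term, otherwise None (fall through to normal search)."""
    normalised = " ".join(query.lower().split())
    for term, playlist_id in SENSITIVE_TERMS.items():
        if term in normalised:
            return playlist_id
    return None

print(redirect_playlist("How to join ISIS  videos"))  # PL_counter_narrative_01
print(redirect_playlist("cat videos"))                # None
```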
Tech giants are facing growing calls to take a more proactive role in controlling what appears on their platforms. This comes after the UK government pulled its advertising from YouTube in March because ads were appearing next to videos of a hate preacher banned in the UK.
Furthermore, the UK and French governments are working together to tackle online radicalisation, seeking to introduce laws that will punish tech firms that fail to remove radical material or hate speech from their platforms.
YouTube and other tech companies met UK home secretary Amber Rudd this week at a Silicon Valley forum set up to counter terrorism. The forum hopes to develop new approaches to identifying and removing terrorist content on the internet.
Zach Marzouk is a former ITPro, CloudPro, and ChannelPro staff writer, covering topics such as security, privacy, worker rights, and startups, primarily in the Asia Pacific and US regions. Zach joined ITPro in 2017, where he was introduced to the world of B2B technology as a junior staff writer, before returning to Argentina in 2018 to work in communications and as a copywriter. In 2021, he made his way back to ITPro as a staff writer during the pandemic, before going freelance in 2022.