EU will fine social media firms for failing to remove extremist material
Security commissioner signals step change from self-policing


The European Union (EU) will draw up new plans to fine social media companies that fail to remove extremist content from their services.
Under the new proposals, companies such as Facebook and YouTube would be compelled to remove terrorist propaganda within one hour or face hefty fines, according to the European commissioner for the Security Union, Julian King, who spoke to the Financial Times.
"We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon," King said, adding that new regulations would create legal certainty for websites of all sizes.
"The difference in size and resources means platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent.
"All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform."
The draft proposals, set to be published next month, signal a shift from the EU's current regulatory approach, under which companies voluntarily remove content deemed to incite terrorist violence or radicalise users.
The EU's decision to make its guidelines legally enforceable mirrors a change of heart in the UK's strategy, with the government earlier this year hinting at new rules that would mark a clear shift away from voluntary guidelines and self-policing.
After ten of the 14 companies invited to government talks failed to turn up, the then secretary of state for digital, culture, media and sport (DCMS), Matt Hancock, said in May that the UK would draft laws to fine firms that failed to tackle online abuse or remove inappropriate content.
"The fact that only four companies turned up when I invited the 14 biggest in; it gave me a big impetus to drive this proposal to legislate through," Hancock said on BBC One's the Andrew Marr Show.
"Before then, and until now, there has been this argument - work with the companies, do it on a voluntary basis, they'll do more that way because the lawyers won't be involved.
"And after all, these companies were set up to make the world a better place. The fact that these companies have social media platforms with over a million people on them, and they didn't turn up [is disappointing]."
The wider movement towards tougher, more meaningful regulation has been motivated in part by the still-unfolding data misuse scandal involving Facebook and the now-defunct Cambridge Analytica.
The DCMS select committee, for instance, proposed several new laws in an interim report published last month that would make social media companies such as Facebook liable for misinformation allowed to spread on their platforms.
Meanwhile, in February the former home secretary Amber Rudd unveiled an auto-blocking tool the government hopes can automatically detect and flag extremist content without human intervention. Developed by London-based artificial intelligence company ASI Data Science, the tool was trained by analysing thousands of hours of ISIS-produced content and forms part of the government's wider efforts to tackle online hate speech and extremist material.
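ASI Data Science has not published the tool's internals, but systems of this kind typically follow a supervised text-classification pattern: train a model on labelled examples of extremist and benign material, then flag new uploads whose predicted score crosses a threshold. The sketch below is a minimal, purely illustrative example of that general approach, using placeholder data and an assumed scikit-learn pipeline; it is not a description of ASI's actual tool.

```python
# Illustrative sketch only: a generic supervised text classifier of the kind
# used to flag extremist content. Placeholder data; not ASI Data Science's tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = flagged, 0 = benign).
texts = [
    "placeholder example of propaganda-style text",
    "placeholder example of violent incitement",
    "ordinary news report about local weather",
    "cooking video description and recipe notes",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Flag new content when the predicted probability crosses a threshold,
# mirroring the "detect and flag without human intervention" idea.
THRESHOLD = 0.9
new_post = "placeholder text of an uploaded video description"
score = model.predict_proba([new_post])[0][1]
if score >= THRESHOLD:
    print(f"Flag for review or blocking (score={score:.2f})")
else:
    print(f"No action (score={score:.2f})")
```

In practice such systems would rely on far larger training sets, video and audio features, and human review queues, with the threshold trading false positives against missed content.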

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars at LiveScience, where he runs its Technology section.