EU to give tech firms one hour to remove illegal content
The European Commission is following up on its calls for social media firms to develop tools to tackle extremist and hate content online

The European Union has warned internet companies that they must remove illegal terrorist material within an hour of it being posted online or risk facing new EU-wide laws.
As reported by Bloomberg, the European Commission outlined a series of stringent recommendations for companies and EU member states to comply with, covering terrorist content, incitement to hatred and violence, child sexual abuse material, counterfeit products and copyright infringement. Significantly, it has given internet firms one hour to remove illegal content, in an effort to stop dangerous and harmful propaganda and hate speech from spreading like wildfire.
This follows widespread accusations, levelled against social media giants in particular, that the high-profile tech firms are not taking responsibility for the negative consequences of their technologies.
The main talking point will be the EU's one-hour limit. The Computer & Communications Industry Association, which represents giants like Google and Facebook, criticised the EU's plans, arguing that "such a tight time limit does not take due account of all actual constraints linked to content removal and will strongly incentivise hosting services providers to simply take down all reported content".
However, the EU maintained that this was necessary, arguing: "Considering that terrorist content is most harmful in the first hours of its appearance online, all companies should remove such content within one hour from its referral as a general rule."
This comes at a time when policymakers are increasingly pressing the tech industry to confront dangerous content circulating online. The UK government recently announced that it was developing an extremism-blocking tool to filter ISIS content out of the internet.
The European Commission last year implored social media companies to develop a common set of tools designed to identify, stop and eliminate terrorist and hate content. The new recommendations build on its push to see the tech industry take a greater role in protecting security and "redouble their efforts to take illegal content off the web more quickly and efficiently".