Explained: The state of end-to-end encryption in the UK now the Online Safety Bill saga is over
Industry stakeholders have previously criticized the Online Safety Bill over its heavy-handed approach to encryption
The UK’s Online Safety Bill is set to become law after officially passing through parliament on Tuesday.
Framed by the government as a push to make Britain the “safest place in the world to be online”, the bill will impose requirements on tech firms and social media platforms to moderate and remove illegal content, and will be enforced by Ofcom.
One of the most prominent debates around the Online Safety Bill so far has concerned its so-called ‘spy clause’, which would allow the UK government to read messages and files sent over end-to-end encrypted messaging platforms.
The government’s stance on the matter has been firm, despite heavy pushback from the technology and privacy sectors.
Some of the most popular encrypted messaging platforms have suggested they would leave the UK if they were compelled to remove end-to-end encryption (E2EE) technology from their products.
With the bill passing and set to become law imminently, questions still remain over the future of E2EE in the UK. But is it as bad as first thought?
Here’s what you need to know.
Will E2EE still exist in the UK?
Simply put, yes. The Online Safety Bill doesn’t equate to an outright ban on end-to-end encryption. However, certain clauses within the bill will introduce changes.
Chief among these is a requirement under which encrypted messaging apps, such as WhatsApp or Signal, can be legally obliged to examine user messages for illegal or harmful material, such as terrorism or child abuse-related content.
Ofcom, the regulator in charge of overseeing the implementation of the bill, can require providers to carry out such scanning if it suspects material of this kind is being distributed via a given platform.
This aspect of the bill is a serious point of contention, especially with regard to the feasibility of the practice.
Why does the UK think it can break E2EE?
The Online Safety Bill specifically mentions the use of “accredited technology” to conduct such assessments, but some industry stakeholders have noted that the wording remains highly ambiguous.
Earlier this month, techUK deputy CEO Antony Walker questioned the feasibility of this move and raised serious concerns about user privacy.
"One of our biggest concerns centers on Ofcom’s ability to remotely view tests which use live user data, a move that not only poses a security threat but also jeopardizes user privacy,” he said.
“We urge the government to restrict Ofcom’s ability to view live user data and to ensure these powers are only applied to viewing information in a test environment.”
"We support the objectives of the Online Safety Bill. Our emphasis lies in the practical implementation of these objectives.”
The government has repeatedly touted ‘client-side scanning’ (CSS) as a solution for identifying extreme content distributed via encrypted messaging apps. However, given the nature of encrypted messaging, only the sender and recipient of a message can view its content.
Providers themselves cannot access that content, so CSS would require messages to be examined on the device before they are even sent.
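To make the mechanics concrete, here is a minimal sketch of that flow in Python. Everything in it is hypothetical: detect_prohibited stands in for whatever “accredited technology” a provider might deploy, and encrypt_and_send for its E2EE layer; neither reflects any real product’s implementation.

```python
def detect_prohibited(content: bytes) -> bool:
    """Placeholder for an on-device detector (hash matching or a local model)."""
    return False  # nothing is flagged in this sketch


def send_message(content: bytes, encrypt_and_send) -> None:
    # The defining feature of CSS: scanning happens on the sender's device,
    # before the content is encrypted, so the provider never sees plaintext.
    if detect_prohibited(content):
        # A real system might warn the user, block sending, or file a report.
        raise ValueError("content flagged by on-device scanner")
    encrypt_and_send(content)
```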
Opponents have argued that this is simply unfeasible. Apple famously announced client-side scanning technology for iCloud content, but later rolled back the move after serious flaws and privacy concerns were identified.
However, research into the use of CSS techniques has been conducted by leading cryptographers in the UK in recent years.
A 2022 research paper authored by experts at the UK’s NCSC and GCHQ lent credence to the idea that CSS-style content moderation could be achieved without compromising user privacy or security.
“Researchers rightly point out that poor designs for safety systems could have catastrophic effects on users’ safety and security,” the paper reads.
“However, we do not believe that the techniques necessary to provide user safety will inevitably lead to these outcomes.”
This, the paper argued, could include the use of language models “running entirely locally on the client” to detect language related to grooming, for example.
The paper noted that this would require human moderation in certain circumstances, however.
“If the model suggests that a conversation is heading towards a risky outcome, the potential victim is warned and nudged to report the conversation for human moderation.
“Since the models can be tested and the user is involved in the provider’s access to content, we do not believe this sort of approach attracts the same vulnerabilities as others.”
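As a rough illustration only, the sketch below follows the ‘warn and nudge’ flow the paper describes, using a hypothetical local scoring function in place of a real on-device language model. The property it demonstrates is that nothing reaches the provider unless the user opts in.

```python
def grooming_risk_score(conversation: list[str]) -> float:
    """Placeholder for a language model running entirely on the device."""
    return 0.0  # a real model would score the conversation text locally


def submit_report(conversation: list[str]) -> None:
    """Hypothetical hook that forwards the conversation for human moderation."""


def check_conversation(conversation: list[str], threshold: float = 0.8) -> None:
    if grooming_risk_score(conversation) >= threshold:
        print("Warning: this conversation may be heading towards a risky outcome.")
        if input("Report it for human moderation? (y/n) ").strip().lower() == "y":
            # Content reaches the provider only because the user opted in -
            # the property the paper's argument relies on.
            submit_report(conversation)
```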
Is E2EE weaker now?
End-to-end encryption is still very much available to users of apps such as WhatsApp or Signal at present. The government has been keen to emphasize that this doesn’t equate to a ban.
However, the requirements of the bill do raise questions over the potential long-term implications for encryption on such platforms.
Privacy campaigners have argued that the bill will weaken encryption and, as a result, wider digital privacy across the country.
Does the UK still want to break E2EE?
The government’s stance on end-to-end encryption has been steadfast for several years now. Fundamentally, lawmakers view this as a serious inhibitor for law enforcement in tackling extreme materials online.
This isn’t the first attempt by the government to tackle the technology, either.
Former prime minister Theresa May called for new legislation to tackle encryption in the wake of a 2018 terrorist attack in London, having banged the same drum the year before.
Two former home secretaries, Amber Rudd and Priti Patel, were both vocal on the matter, calling for stronger legislative powers to tackle the issue.
The government’s crusade against end-to-end encryption may have calmed temporarily in the wake of the bill’s passing. In the long term, however, there could be a renewed charge against the technology, provided the feasibility of techniques such as CSS or alternatives is demonstrated.
Will tech firms have to make any changes to E2EE?
Since there is no actual ban on E2EE, tech firms won’t have to make any changes to the technology itself, but they may be responsible for developing, or helping to co-develop, a bespoke tool that can scan encrypted messages “as a last resort”.
That’s according to technology secretary Michelle Donelan, speaking to Reuters earlier this month. She also told Times Radio that if tech companies were not implementing adequate mitigations, or could not show they would abide by the Online Safety Bill’s requirements, then there would be a conversation about the long-term survival of E2EE.
She also admitted that the tools don’t currently exist, but pointed to the possible implementations outlined in the paper written by leading cryptographers Dr Ian Levy of the UK’s NCSC and Crispin Robinson of GCHQ.
Junior minister Stephen Parkinson, addressing the House of Lords, also said that Ofcom’s ability to request content scanning would be limited to cases where it was “technically feasible”.
The two ministers’ statements appear to contradict one another: one says tech firms must demonstrate they’re taking steps to adhere to the Online Safety Bill’s requirements or risk the survival of E2EE altogether, while the other says they won’t have to comply unless it’s technically feasible.
How could the government achieve its goals while maintaining E2EE?
According to the paper from Levy and Robinson, due to messages being encrypted and not seen by servers, there must be some activity on the client’s (user’s) side to adequately identify harmful or criminal content.
The research paper outlined a selection of methods that could be employed in this scenario, but stressed that it doesn’t endorse any method specifically, acknowledging that a range of approaches would likely be necessary to achieve results.
Included are ‘matching techniques’, whereby known abuse images or content would be compared with media being distributed via a service.
Matching techniques can themselves be approached in a variety of ways, the paper notes, such as ‘client-side hashing with client-side matching’.
In this instance, the hashing process and the hashed database of abuse material would be stored on the user’s device.
While the researchers said this approach would afford “maximum privacy” to users, as both the images and the hash would “never leave the device unless a match occurs”, having the algorithm and database stored on the user’s device could open both up to compromise. It’s possible that criminals could reverse-engineer the algorithm or manipulate the database in a way that allows them to evade detection.
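The sketch below illustrates the idea, with two loud caveats: SHA-256 stands in for the perceptual hashing a real deployment would use (an exact hash breaks on any re-encoding of an image), and the hash database is an empty placeholder.

```python
import hashlib

# Hypothetical on-device database of hashes of known illegal images.
KNOWN_HASHES: set[str] = set()


def image_hash(image_bytes: bytes) -> str:
    # SHA-256 as a stand-in; real systems use perceptual hashes
    # that tolerate resizing and re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()


def matches_known_content(image_bytes: bytes) -> bool:
    # Neither the image nor its hash leaves the device unless this
    # returns True - the "maximum privacy" property noted above.
    return image_hash(image_bytes) in KNOWN_HASHES
```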
Other techniques, such as ‘client-side hashing with server-side matching’ and ‘hashing with client-server multi-party computation’, were also touted as impactful, E2EE-preserving approaches.
The former would see the hashed material computed on the user’s device and sent to a third-party server holding the database of criminal material. This would lessen the risk of compromising the database itself, but not eliminate it entirely.
“It would be possible, at least in theory, for a malicious server to build a database of all known images, and thereby to gain the capability to discover which images are being shared in the majority of communications,” the paper reads.
This would be an incredibly expensive way to undermine E2EE, but the cryptographers noted that even the possibility of such a scenario could erode confidence in the model.
Using a multi-party model, whereby multiple servers communicate with a device to compute the hash, would mean less of the algorithm is stored on the user’s device, lessening the chance of it being reverse-engineered. It would also mean the server sees less of the material, or in some cases none of it, preserving user privacy.
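A simplified sketch of the server-side matching variant follows, with the server’s lookup abstracted as a callable rather than any real service; the multi-party variant would split that lookup across several servers so no single one sees the full hash or algorithm.

```python
import hashlib
from typing import Callable


def check_with_server(image_bytes: bytes, lookup: Callable[[str], bool]) -> bool:
    # Only the digest crosses the network; the image itself never does.
    digest = hashlib.sha256(image_bytes).hexdigest()
    # The trade-off noted above: a malicious server could hash every known
    # image itself and log which digests users submit.
    return lookup(digest)
```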
There are drawbacks to these approaches that involve a degree of on-device processing, however.
The possibility of compromising servers, databases, and algorithms has already been acknowledged, but less discussed is the potential for these types of hashing, which connect to a third-party and potentially government-controlled database, to be used for other forms of surveillance without the user’s knowledge.
The Internet Society cited a 2021 Columbia University study, saying: “System[s] could be built in a way that gives an agency the ability to preemptively scan for any type of content on any device, for any purpose, without a warrant or suspicion.
“Likewise, the same techniques to prevent the distribution of child sexual abuse material (CSAM) can be used to enforce policies such as censorship and suppression of political dissent by preventing legitimate content from being shared or blocking communications between users, such as political opponents.”
Why does the UK want to ban E2EE?
The two main arguments for breaking E2EE center on detecting and preventing crimes related to child abuse and terrorism.
E2EE is seen as a major inhibitor for law enforcement, which cannot monitor content sent over encrypted channels in the way it can SMS messages or telephone calls, for example.
The Online Safety Bill is not the first time the UK has tried to break E2EE with legislation. The Investigatory Powers Act 2016 also has provisions for encrypted messaging platforms to be “active participants” in data interception efforts - a legal gray area that has not led to any major cases of compromising E2EE for the benefit of law enforcement.
It’s thought that the UK government has so far resisted enacting its power to ban E2EE outright for fear of major tech companies withdrawing from the UK.
What are the arguments against banning E2EE?
Digital privacy campaigners have long fought for E2EE to remain. Encrypted messaging channels are seen as crucial tools in democratic societies to preserve personal privacy from unauthorized actors looking to monitor individuals. These could be the platforms themselves, national governments, or cyber criminals of all flavors (including hackers and stalkers).
Proponents of E2EE also argue that the crimes the technology is said to facilitate are neither caused by E2EE nor would they be entirely solved by removing it.
Both crimes against children and terror offenses existed before widespread access to E2EE messaging platforms. Campaigners therefore argue that efforts should be focused on combating the source of the crime rather than on a privacy-preserving technology that, regardless of its merits, undoubtedly also helps to shield the worst kind of criminals in the UK.
If E2EE were banned, criminals would likely seek out alternative forms of anonymity such as a greater reliance on the dark web to mask their communications.
While the dark web doesn’t offer the same strength of protection for communications as E2EE, it would still make the work of law enforcement agencies significantly more difficult, all while the public is left without a means to communicate privately.