UK AI research is under threat from hostile nations, says Alan Turing Institute

University-led AI research is at risk of being stolen for malicious use, says the UK's leading AI institute – with cross-government support needed


The UK's AI research ecosystem is vulnerable to hostile state threats, the Alan Turing Institute has warned.

AI research is attractive to hostile states because of its use of sensitive datasets, as well as the potential to reverse engineer AI advancements or co-opt AI models for malicious activity. For example, attackers could study stolen tools designed to counter the misuse of AI systems and use them to develop new ways of evading detection.

The institute, which is the UK's national institute for data science and AI, stressed that the UK's advanced level of AI research makes it a particularly high-priority target for state-sponsored threat actors and espionage activity.

"Furthering AI research is rightly a top priority for the UK, but the accompanying security risks cannot be ignored as the world around us grows ever more volatile," said research associate Megan Hughes.

"Academia and the government must commit to and support this long overdue culture change to strike the right balance between academic freedom and protecting this vital asset."

The report warns that awareness of security risks across the academic sector is patchy. Meanwhile, there's little incentive for researchers to follow existing government guidance on research security, especially given the pressures academics face to publish their research.

There have been numerous attacks on UK academic institutions in the past, with high-profile examples including the cyber attack on the University of Manchester in 2023.

In April 2024, MI5 briefed university vice-chancellors on the efforts of hostile states to steal intellectual property to boost their own economic and military capabilities, with hints that China was the main culprit.

In October, Microsoft warned that almost half of UK higher education institutions were experiencing weekly breaches or cyberattacks. The attackers' main tactics involved using malware, Internet of Things (IoT) vulnerabilities and phishing, it said.

It's against this backdrop that the Alan Turing Institute has called for changes within universities and action from government.

All academic institutions, it said, should be required, as a condition of their grant funding, to deliver accredited research security training to new staff and postgraduate research students, and to conduct risk assessments on AI research prior to publication.

It also called for trusted organizations such as Universities UK or UK Research and Innovation (UKRI) to oversee a new "centralized due diligence repository", which would document risks and decisions made on AI research.

Under its proposals, the National Protective Security Authority (NPSA) and the National Cyber Security Centre (NCSC) would engage more widely with UK-based publishing houses, academic journals, and other research bodies on threats, offering tailored support. Alongside this, the Department for Science, Innovation and Technology could provide further funding and advice targeted toward research-intensive universities, helping them retain UK AI talent and avoid engagement with institutions considered “high risk”.

"If the UK is to mitigate the risks involved in AI research, there will need to be a culture change within academia to ensure research security is perceived as essential to high-quality research," the Alan Turing Institute researchers wrote.

"Equally, government will need to formulate a long-term strategy to address structural problems such as funding gaps and talent retention, while providing clear information on the threat landscape and continued support to institutions in raising awareness."

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.