Looking to use DeepSeek R1 in the EU? This new study shows it’s missing key criteria to comply with the EU AI Act
The model is vulnerable to hijacking via prompt injection, despite its reliability in other areas


The popular AI model DeepSeek R1 may contain inherent flaws that make it incompatible with the EU AI Act, according to new research.
DeepSeek R1 took the tech industry by storm in early January, offering an open source option for performance comparable to OpenAI’s o1 at a fraction of the cost.
But the model’s outputs may contain vulnerabilities that jeopardize its rollout in the EU. Using a new framework known as COMPL-AI, researchers analyzed both DeepSeek R1 models: one distilled from Meta’s Llama 3.1 and the other from Alibaba’s Qwen 2.5.
The framework was created by researchers at ETH Zurich, the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT), and LatticeFlow AI. It aims to evaluate models on a range of factors such as transparency, risk, bias, and cybersecurity readiness, measured against the requirements of the EU AI Act.
In a test of whether the model could be hijacked with jailbreaks and prompt injection attacks, both DeepSeek models scored the lowest of all models benchmarked by COMPL-AI. DeepSeek R1 Distill Llama 8B scored just 0.15 for the hijacking and prompt leakage out of a possible 1.0, compared to 0.43 for Llama 2 70B and 0.84 for Claude 3 Opus.
This could put it in jeopardy with Article 15, paragraph 5 of the EU AI Act, which states: “High-risk AI systems shall be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities”.
The analysis comes after similar research into DeepSeek jailbreaking techniques conducted by Cisco, which found the model was susceptible to prompts intended to produce malicious outputs 100% of the time.
Get the ITPro daily newsletter
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
In other areas, the models outperformed some of the most popular open and proprietary LLMs. The model was found to consistently deny it was human, a feat not achieved by GPT-4 or the baseline version of Qwen.
Tested with HumanEval, a widely-used benchmark for assessing an LLM’s code generation capabilities, DeepSeek also outperformed other open source models. DeepSeek R1 Qwen 14B scored 0.71 versus Llama 2 70b’s 0.31, exceeded in COMPL-AI’s leaderboard only by GPT-3.5 (0.76), GPT-4 (0.84) and Claude 3 Opus (0.85).
"As corporate AI governance requirements tighten, enterprises need to bridge internal AI governance and external compliance with technical evaluations to assess risks and ensure their AI systems can be safely deployed for commercial use," said Dr. Petar Tsankov, co-founder and CEO at LatticeFlow AI.
"Our evaluation of DeepSeek models underscores a growing challenge: while progress has been made in improving capabilities and reducing inference costs, one cannot ignore critical gaps in key areas that directly impact business risks – cybersecurity, bias, and censorship. With COMPL-AI, we commit to serving society and businesses with a comprehensive, technical, transparent approach to assessing and mitigating AI risks."
RELATED WHITEPAPER
COMPL-AI is not formally associated with the EU Commission, nor able to provide an official third-party analysis of the EU AI Act. Companies looking to adopt DeepSeek or other models into their tech stack will still need to follow best practices for implementing generative AI.
Leaders may also look into hiring for roles such as chief AI officers and data ethicists, alongside the establishment of sovereign cloud clusters to ensure data used for AI within the EU is compliant with regional laws.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.
-
Bigger salaries, more burnout: Is the CISO role in crisis?
In-depth CISOs are more stressed than ever before – but why is this and what can be done?
By Kate O'Flaherty Published
-
Cheap cyber crime kits can be bought on the dark web for less than $25
News Research from NordVPN shows phishing kits are now widely available on the dark web and via messaging apps like Telegram, and are often selling for less than $25.
By Emma Woollacott Published
-
Meta executive denies hyping up Llama 4 benchmark scores – but what can users expect from the new models?
News A senior figure at Meta has denied claims that the tech giant boosted performance metrics for its new Llama 4 AI model range following rumors online.
By Nicole Kobie Published
-
DeepSeek and Anthropic have a long way to go to catch ChatGPT: OpenAI's flagship chatbot is still far and away the most popular AI tool in offices globally
News ChatGPT remains the most popular AI tool among office workers globally, research shows, despite a rising number of competitor options available to users.
By Ross Kelly Published
-
Productivity gains, strong financial returns, but no job losses – three things investors want from generative AI
News Investors are making it clear what they want from generative AI: solid financial and productivity returns, but no job cuts.
By Nicole Kobie Published
-
Legal professionals face huge risks when using AI at work
Analysis Legal professionals at a US law firm have been sanctioned over their use of AI after it was found to have created fake case law.
By Solomon Klappholz Published
-
Google CEO Sundar Pichai says DeepSeek has done ‘good work’ showcasing AI model efficiency — and Gemini is going the same way too
News Google CEO Sundar Pichai hailed the DeepSeek model release as a step in the right direction for AI efficiency and accessibility.
By Nicole Kobie Published
-
Microsoft says AI tools such as Copilot or ChatGPT are affecting critical thinking at work – staff using the technology encounter 'long-term reliance and diminished independent problem-solving'
News Research from Microsoft suggests that the increased use of AI tools at work could impact critical thinking among employees.
By Nicole Kobie Published
-
UK and US reject Paris AI summit agreement as “Atlantic rift” on regulation grows
News The UK and US have refused to sign an international agreement on AI governance amid concerns over "practical clarity'.
By Emma Woollacott Published
-
Who wins out from DeepSeek's success?
ITPro Podcast Firms focused on AI at the edge and open source LLMs could reap massive rewards from the Chinese startup's industry upset
By Rory Bathgate Published