AI-generated code is now the cause of one-in-five breaches – but developers and security leaders alike are convinced the technology will come good eventually
Most security leaders still think AI tools will eventually write secure, reliable code
AI coding tools are creating serious security risks in production, with one-in-five CISOs saying they've suffered major incidents because of AI-generated code.
AI coding tools now write 24% of production code – 21% in Europe and 29% in the US – according to a new report from Aikido. But it's risky, with 69% of security leaders, security engineers, and developers across Europe and the US revealing they'd found serious vulnerabilities in AI-written code.
US-based respondents were among the worst hit by AI-related flaws, with 43% of organizations reporting serious incidents, compared with just 20% in Europe.
This, the study noted, appears to be down to better prevention and oversight. For example, EU-based firms reported more “near misses” with AI-generated code than their US counterparts, potentially highlighting more robust testing practices.
Adding more tools to address the issue isn’t helping, Aikido found. Indeed, organizations with more security tools report more incidents, with more overhead and slower remediation.
Nearly two-thirds (64%) of those with just one or two tools had an incident; the figure rose to 90% for those with between six and nine tools.
All-in-one AI coding tools are helping bridge gaps
Notably, teams using tools designed for both developers and security teams were more than twice as likely to report zero incidents as those using tools made for only one specific group.
“Giving developers the right security tool that works with existing tools and workflows allows teams to implement security best practices and improve their posture,” commented Walid Mahmoud, DevSecOps lead at the UK Cabinet Office.
Teams using separate AppSec and CloudSec tools were 50% more likely to face incidents, and 93% of those with separate tools reported integration headaches such as duplicate alerts or inconsistent data.
The security blame game is heating up
The blame for incidents caused by AI code is now becoming a serious point of contention within enterprises, the report noted. For example, 53% of respondents blamed security teams for failing to address issues, while 45% blamed developers who failed to spot issues before pushing to production.
Meanwhile, 42% pointed toward whoever merged it. This blame game is expected to continue escalating, according to Aikido. Half of developers reckoned they personally would be blamed if AI code they wrote introduced a vulnerability, a higher share than pointed to the security team itself.
“There's clearly a lack of clarity among respondents over where accountability should sit for good risk management,” commented Andy Boura, CISO at Rothesay.
Despite concerns across the board, enterprises are expected to continue driving ahead with adoption of AI coding tools, the study noted. Nine-in-ten said they expect AI to take over penetration testing within the next five years, for example.
Meanwhile, 96% believe AI will write secure, reliable code at some point, with the biggest proportion (44%) thinking it will happen in the next three-to-five years.
Only 21% think this will be achieved without human oversight, however, underlining the importance of keeping humans in the loop.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.