Businesses are taking their eye off the ball with vulnerability patching
Most exploitable vulnerabilities go unresolved, according to new research


Security leaders are overconfident in their organization’s security posture while allowing vulnerability patching to fall by the wayside, new research suggests.
According to penetration testing firm Cobalt’s 2025 State of Pentesting Report, only 48% of exploitable vulnerabilities uncovered during penetration testing are fixed – although this rises to 69% for those rated high or critical.
Of particular concern is an apparent blind spot when it comes to AI applications. Of the firms surveyed, 95% had carried out penetration testing on their generative AI apps in the past year, and 32% of those found vulnerabilities rated high or critical.
These include risks of prompt injection, model manipulation, and data leakage.
Despite this – and despite 72% of respondents ranking AI attacks as their number one concern – only 21% of these high-risk vulnerabilities were patched following their discovery.
Additionally, while 81% of security leaders surveyed said they are confident in their organization’s security posture, that confidence runs up against reality: only 50% said they fully trust their ability to identify and prevent vulnerabilities originating from their software suppliers.
AI security is a growing area of concern for IT and business leaders. Concerns have been raised about the use of AI-generated code, the use of ‘shadow AI’, and data privacy compliance – particularly in the public sector.
Gunter Ollman, CTO of Cobalt, struck a fairly sanguine tone over the findings, saying: “It’s a concern that 31% of serious vulnerabilities are not being fixed, however at least these firms are aware of the problem and can develop strategies to mitigate the risk.”
Ollman added: “Organizations that do take an offensive security approach are ... getting ahead of any compliance requirements and reassuring their customers that they’re safe to do business with.”
However, this may be cold comfort for the 52% of respondents who said they are being pressured to prioritize speed at the cost of security.

Jane McCallion is Managing Editor of ITPro and ChannelPro, specializing in data centers, enterprise IT infrastructure, and cybersecurity. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers while continuing to specialize in enterprise IT infrastructure and business strategy.
Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.