Agentic AI could be a blessing and a curse for cybersecurity
A new report warns that hackers using agentic AI systems could revolutionize the global threat landscape


Agentic AI systems will “further revolutionize cyber criminal tactics,” according to new research from Malwarebytes.
In its 2025 State of Malware report, the security firm warned that businesses need to be prepared for AI-powered ransomware attacks. The firm specifically highlighted the threat posed by malicious AI agents that can reason, plan, and use tools autonomously.
The report claimed that, up until this point, the impact of generative AI tools on cyber crime has been relatively limited. This is not because they cannot be used offensively: there have been notable examples of generative AI being used to craft phishing content and, in limited cases, even to produce exploits.
For the most part, however, their offensive use has served to increase the efficiency of attacks rather than to introduce new capabilities or alter the underlying tactics hackers rely on.
That could all be about to change in 2025, according to Malwarebytes, which argued that agentic AI could help attackers not only scale up the volume and efficiency of their attacks, but also strategize on how best to compromise victims.
“With the expected near-term advances in AI, we could soon live in a world where well-funded ransomware gangs use AI agents to attack multiple targets at the same time,” Malwarebytes warned.
“Malicious AI agents might also be tasked with searching out and compromising vulnerable targets, running and fine-tuning malvertising campaigns, or determining the best method for breaching victims.”
Use of offensive agentic AI could be years away
That isn’t to say agentic AI lacks defensive applications, however, and Malwarebytes noted that the technology could also be used to address the cybersecurity skills gaps that plague the industry.
As these systems become more capable, security teams will increasingly be able to hand off parts of their workload to autonomous agents that can action them with minimal oversight.
“It is not far-fetched to imagine agents being tasked with looking out for supply-chain vulnerabilities, keeping a running inventory of internet-facing systems and ensuring they’re patched, or monitoring a network overnight and responding to suspicious EDR alerts,” the report argued.
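To make that vision concrete, below is a minimal sketch of what an overnight alert-triage agent of the kind the report describes might look like. Every function name and the triage policy itself are hypothetical, standing in for the EDR and LLM integrations a real deployment would wire in.

```python
# Hypothetical sketch of an overnight alert-triage agent, as described in
# the Malwarebytes report. All functions are invented placeholders: a real
# agent would call an EDR vendor's API and an LLM service at these points.
import time
from dataclasses import dataclass


@dataclass
class Alert:
    id: str
    host: str
    summary: str


def fetch_new_alerts() -> list[Alert]:
    """Stand-in for polling an EDR API for unreviewed alerts."""
    return []  # placeholder: no live data source in this sketch


def classify(alert: Alert) -> str:
    """Stand-in for an LLM call that labels an alert.

    Returns one of 'benign', 'suspicious', or 'critical'.
    """
    # A real agent would send the alert context to a model and parse its
    # verdict; this sketch defaults to the cautious option.
    return "suspicious"


def contain_host(host: str) -> None:
    """Stand-in for a containment action, e.g. network-isolating a host."""
    print(f"[agent] isolating {host} pending analyst review")


def triage_loop(poll_seconds: int = 60) -> None:
    """Poll for alerts and act autonomously within narrow guardrails."""
    while True:
        for alert in fetch_new_alerts():
            verdict = classify(alert)
            if verdict == "critical":
                contain_host(alert.host)  # act now, review in the morning
            elif verdict == "suspicious":
                print(f"[agent] escalating {alert.id} to on-call analyst")
            else:
                print(f"[agent] closing {alert.id} as benign")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    triage_loop()
```

The guardrail worth noting in a design like this is that the agent only auto-closes what it judges benign; anything it is unsure of is escalated or contained for a human to review, which is what "minimal oversight" would mean in practice.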
ReliaQuest, which claimed to have launched the first autonomous AI security agent in September 2024, recently said its agent is capable of processing security alerts 20 times faster than traditional methods, with 30% greater accuracy at picking out genuine threats.
Speaking to ITPro, Sohrob Kazerounian, distinguished AI researcher at AI security specialists Vectra AI, acknowledged the efficiency gains generative AI has already unlocked for threat actors, but agreed the more interesting shift will come as attackers begin experimenting with AI agents.
“In the near term, we will see attackers focus on trying to refine and optimize their use of AI. This means using generative AI to research targets and carry out spear phishing attacks at scale. Furthermore, attackers, like everyone else, will increasingly use generative AI as a means of saving time on their own tedious and repetitive actions,” he explained.
“But, the really interesting stuff will start happening in the background, as threat actors begin experimenting with how to use LLMs to deploy their own malicious AI agents that are capable of end-to-end autonomous attacks.”
Kazerounian cautioned, however, that the reality of cyber criminals integrating AI agents into their operations is still years away, as these systems will require a significant amount of fine-tuning and troubleshooting before they work reliably.
“While threat actors are already in the experimental phase, testing how far agents can carry out complete attacks without requiring human intervention, we are still a few years away from seeing these types of agents being reliably deployed and trusted to carry out actual attacks,” he argued.
“While such a capability would be hugely profitable in terms of time and cost of attacking at scale, autonomous agents of this sort would be too error-prone to trust on their own.”
Regardless, Kazerounian said the industry should be getting ready for this eventuality, as it will require significant changes to the traditional approach to threat detection.
“Nevertheless, in the future we expect threat actors will create Gen AI agents for various aspects of an attack – from research and reconnaissance, flagging and collecting sensitive data, to autonomously exfiltrating that data without the need for human guidance. Once this happens, without signs of a malicious human on the other end, the industry will need to transform how it spots the signs of an attack.”
Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.