AI security tools promise to supercharge productivity, but experts worry cyber pros could become too reliant
AI security tools will deliver a boost by automating tasks that frequently bog down cyber professionals, but some industry experts worry teams could become too reliant on these systems


As AI seeps into every corner of the tech sector and beyond, a wave of security tools has flooded the market, promising to simplify cyber operations for enterprises and practitioners.
A significant part of the value proposition vendors are making with AI security assistants is that companies can free up swamped security teams by automating the more mundane, repetitive tasks in their workflows.
Many of the latest security assistants unveiled by Microsoft, Cisco, Check Point, and others can automate tasks that used to bog down workers, such as organizing the overwhelming stream of alerts security teams deal with every day.
Research from Mandiant in 2023 specifically highlighted “information overload” as a key barrier to effective threat intelligence. It’s an issue that plagues security teams across the industry, the company said, and one that contributes to staff burnout and poorer performance.
In theory, AI security tools should allow security staff to focus on critical tasks that require a human’s attention. Research from Censornet, for example, found 49% of SMBs think AI will boost their cyber defenses by freeing up security teams to proactively investigate risks.
After announcing its Copilot for Security would be rolled out to general availability from 1 April 2024, Microsoft claimed its assistant could carry out documentation tasks - a common pain point for security teams - 46% faster than a human and with greater accuracy.
Another task security professionals often get bogged down with is manually reverse engineering obfuscated scripts. Attackers will obfuscate the scripts employed in their attack chain in order to conceal their methods and keep victims in the dark.
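To illustrate what this kind of task involves, here is a minimal Python sketch (an illustration, not from any vendor's tooling) that peels back a single layer of base64 obfuscation; the command string is hypothetical, and real malicious scripts typically stack several encoding layers, which is what makes manual reverse engineering so time-consuming:

```python
import base64

# Hypothetical example of a trivially obfuscated command of the kind
# attackers embed in their attack chains to conceal their methods.
obfuscated = base64.b64encode(b"net user backdoor P@ssw0rd /add").decode()

def deobfuscate(payload: str) -> str:
    """Decode a single layer of base64 obfuscation to reveal the command."""
    return base64.b64decode(payload).decode()

print(deobfuscate(obfuscated))  # reveals the hidden command
```

An analyst triaging a real sample would repeat this kind of step across nested encodings and string manipulations, which is precisely the grind that AI assistants promise to automate.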
With skills gaps plaguing the security sector, finding security staff who have the expertise and experience to manually decode these scripts is becoming an uphill task for businesses.
Microsoft, naturally, said its tool will unlock huge productivity gains by enabling junior security analysts to reverse engineer these scripts without having to constantly consult senior colleagues.
No longer reliant on their more knowledgeable senior colleagues, junior security staff will be able to use AI assistants to take on tasks that might previously have been above their skill level. But what does this actually mean for their development?
If junior staff forgo the process of learning from senior colleagues, who might pass on handy tricks of the trade or spot knowledge gaps in new team members, will the skill levels of new cyber professionals suffer as a result of these new security copilots?
Do teams risk becoming reliant on AI security tools?
Speaking to ITPro, Jeff Watkins, chief product and technology officer at xDesign, said the benefits AI adoption can bring to cyber security are clear, specifically in terms of automation.
“If there’s one field that has a huge potential for AI adoption, outside of the blindingly obvious customer support/CRM, it’s cyber security”, he explained.
“From attack analysis and simulations to reverse engineering and adaptive countermeasures, there’s a lot of potential in the area to intelligently automate processes and content generation. In many ways, this is a good thing, given the size of most security teams, as even in a well-resourced organization there’s usually one security professional per 100 developers/technologists.”
But concerns around skills erosion are legitimate, Watkins noted, citing the digital amnesia associated with ‘The Google Effect’ and a worst-case scenario where underskilled cyber professionals are toothless to address sophisticated new attacks.
“There’s a number of important factors in AI adoption in the context of cyber security, the first of which is analogous to ‘The Google Effect’. With too much AI assistance, there’s a chance that security engineers could end up relying entirely on how to leverage the AI assistant over actually learning how to problem solve themselves,” he warned.
“This has greater potential for impact in an organization that uses AI tooling in lieu of a good mentorship and pairing/shadowing approach.”
“The nightmare scenario is that cyber security allows itself to become deskilled in organizations, meaning innovative new threats leave the team feeling helpless to problem-solve their way out of the situation.”
AI is no cyber “silver bullet”, but it could actually boost cyber skills
Chris Stouff, chief strategy officer at Armor Defense, told ITPro it’s important businesses recognize the proficiencies and limitations of AI assistants, highlighting the continued importance of a robust, human-led security operations center (SOC).
“I think the representation of AI as a cyber security ‘silver bullet’ is a dangerous one. Whilst I agree its use will be beneficial for things like the automation of repetitive security tasks, what concerns me is the inference that AI, like some of the security products and services hailed before it, could become a standalone solution which will somehow negate the requirement for an effective Security Operations Center (SOC).”
Stouff explained why AI is not the panacea that struggling security leaders might hope for, lacking a human’s situational awareness, judgment, and ability to prioritize tasks.
Mike Isbitski, director of cybersecurity strategy at cloud security firm Sysdig, echoed these thoughts, and cautioned against exaggerating the effect AI will have on skill levels among cyber professionals.
Isbitski noted that, long-term, AI assistants could actually help junior team members improve skills.
“Concerns about security teams becoming complacent with overuse of AI to do their work are overblown,” he said.
“The rapid adoption of generative AI is akin to calculators and computers becoming pervasive in the classrooms over the past few decades. It’s an inevitability, and generative AI is yet another powerful tool. The technology should be embraced as it’ll allow junior practitioners to skill up faster and security programs to scale appropriately in order to mitigate advanced threats.”

Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.