How AI can augment security professionals’ capabilities


While artificial intelligence (AI) has been a fixture in cyber security for many years, from automation to machine learning, its recent evolution has led to an explosion of possibilities for the industry. As with all emerging technologies, businesses need a solid understanding of how AI can be harnessed to serve their interests – and this is proving particularly important for modern cyber security.

AI can be used across a variety of disciplines to solve problems and augment human capabilities, and the cyber security industry is no exception. The technology – particularly machine learning – has been integrated into systems to help analyze logs, predict threats, scan for vulnerabilities, and improve tooling. Now, with the advent of generative AI, organizations are seeking to go further, harnessing these new services alongside existing tooling to augment their workforce as much as possible.

AI is inducing a major shift in cyber security

The overwhelming majority (93%) of surveyed businesses say they have now adopted generative AI tools in some form, as have 91% of security teams, according to research published this year by Splunk.

There is, undoubtedly, a rush to adopt these tools – evidenced by the fact that 34% of teams operate without a company policy on generative AI, while only half of security teams say they are actively developing one. This chimes with earlier research by Acumen, which projected that the market for AI in cyber security would balloon from $14.9 billion just two years ago – when ChatGPT first hit the scene – to $133.8 billion by 2030.

This is an inflection point, according to Jeetu Patel, executive vice president and general manager of security and collaboration at Cisco, who recently highlighted the sheer scale of the sea change the security industry is about to experience.

“The cyber security industry is about to have a pretty seismic change in the way that it’s going to operate,” Patel said in his keynote address at the RSA Conference. “This is the first time in the history of humanity that I think you can start to see... us entering into a state of abundance. The ability for us to augment capacity to humans is going to be so profound and grow at such different scales and proportions to what we’ve seen before that if you had, suppose, 20 developers on your team expanding that to 100 through digital workers is not going to be hard to do and is going to be very plausible.”

Harnessing AI to bolster cyber security workflows

There are several ways that cyber security professionals can augment their day-to-day workflows with generative AI tools, according to the Splunk report. The technology is seen as a "force multiplier" and can aid in various tasks including identifying risks, threat intelligence analysis, prioritizing and detecting threats, and summarizing data.

The rise of large language models (LLMs) in particular can help professionals aggregate massive, diverse datasets and deliver that information far more quickly than any human could – while LLMs can also identify indicators of compromise and attack techniques, summarizing them in an intelligence report.
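As a rough illustration of that kind of workflow, the sketch below passes a raw threat report to an LLM and asks it to pull out indicators of compromise and produce a short summary. It assumes the OpenAI Python client with an API key configured; the model name, prompt wording, and report text are placeholders for illustration, not a reference to any specific vendor's product.

```python
# Hypothetical sketch: extracting indicators of compromise (IoCs) from a
# threat report with an LLM. Assumes the OpenAI Python client is installed
# and OPENAI_API_KEY is set; model name and report text are illustrative.
from openai import OpenAI

client = OpenAI()

raw_report = """
Observed beaconing to 203.0.113.45 over TCP/443. Dropped payload
update_svc.exe (SHA-256: e3b0c442...) and persisted via a scheduled task.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a threat intelligence assistant. Extract all "
                "indicators of compromise (IPs, hashes, filenames) and "
                "likely attack techniques, then summarise the report "
                "in two sentences."
            ),
        },
        {"role": "user", "content": raw_report},
    ],
)

print(response.choices[0].message.content)
```

In practice, an analyst would still verify the extracted indicators before acting on them.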

The scope for human error can also be minimized by deploying generative AI to comb through possible incidents, with the technology helping to prioritize and triage alerts that might otherwise have been misclassified, however small that risk.
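A minimal sketch of that triage idea, under the same assumptions about the OpenAI Python client, might wrap an LLM call in a helper that assigns a priority label to each alert. The alert fields, model name, and label set are invented for illustration.

```python
# Hypothetical sketch: using an LLM as a second-opinion triage step for
# security alerts. Alert fields and labels are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()

def triage_alert(alert: dict) -> str:
    """Ask the model to assign one of four priority labels to an alert."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the following security alert as exactly one "
                    "of: CRITICAL, HIGH, MEDIUM, LOW. Reply with the label "
                    "only."
                ),
            },
            {"role": "user", "content": json.dumps(alert)},
        ],
    )
    return response.choices[0].message.content.strip()

alert = {
    "rule": "Multiple failed logins followed by success",
    "user": "svc-backup",
    "source_ip": "198.51.100.23",
    "count": 47,
}
print(triage_alert(alert))  # e.g. "HIGH"
```

Constraining the model to a fixed label set makes the output easy to route programmatically, though a human analyst should still review high-impact classifications.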

Finally, security teams can benefit from generative AI tools that summarize new legislation and policy shifts at the national and international level, condensing the information into a shorter, more digestible format.

The strengths of AI in pattern recognition, according to Red Hat, can also be deployed to help detect cyber security anomalies, which may be precursors to attacks. Machine learning models, in particular, can learn what 'normal behavior' in a system looks like and flag any incidents or deviations that go against the grain (a minimal sketch of this approach follows below). Using AI to automate otherwise manual tasks can also free up cyber security professionals to spend more time on valuable projects and important work.

Likewise, generative AI tools can help cyber security professionals review source code alongside configuration and infrastructure code. Attack simulations can then be run against that code to test whether common attack types might exploit as-yet undetected vulnerabilities.
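To make the anomaly detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest: the model learns a baseline of 'normal' behavior from historical session features and flags new sessions that deviate from it. The features and values are invented for illustration; real deployments would engineer features from their own logs.

```python
# Minimal sketch of ML-based anomaly detection on network session features.
# The features (bytes sent, failed logins, duration) are invented for
# illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline of "normal" sessions: [bytes_sent_kb, failed_logins, duration_min]
normal = np.column_stack([
    rng.normal(500, 100, 1000),   # typical data volumes
    rng.poisson(0.2, 1000),       # occasional failed logins
    rng.normal(30, 10, 1000),     # typical session lengths
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# Two new sessions: one ordinary, one exfiltration-like outlier
new_sessions = np.array([
    [520.0, 0, 28.0],       # looks normal
    [25000.0, 15, 240.0],   # huge transfer, many failures: suspicious
])

# predict() returns 1 for inliers and -1 for anomalies
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{session} -> {status}")
```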

Avoiding 'silver bullet' syndrome with AI adoption

The danger when a technology like AI comes around is that teams within organizations see it as an easy-to-implement solution that won't require any preparation or planning – especially one as intuitive and user-friendly as generative AI.

Speaking to ITPro earlier this year, Chris Stouff, CSO at Armor Defense, said it's important to recognize not just the benefits but also the limitations of tools like AI assistants. In particular, seeing AI as a cyber security 'silver bullet' is "dangerous", and teams shouldn't view it as a standalone solution that negates the need for a security operations center (SOC).

This chimes with Splunk's findings that a lack of process and planning "could come back to haunt security teams", just as businesses struggled with a lack of preparation during the emergence of cloud computing and the internet of things (IoT). According to Red Hat, special attention needs to be paid to the potential for AI to grow the attack surface of a business, particularly if tools are not properly configured.

As with any new technology, AI can be harnessed for good or ill, yet organizations are understandably keen to integrate new AI services into their workflows as quickly as possible. For cyber security professionals to get the most out of the technology, businesses must prioritize careful planning, establish clear policies and processes, and avoid treating AI as a catch-all or silver bullet that justifies relaxing existing measures and legacy processes the technology may not yet be able to replicate effectively.

Keumars Afifi-Sabet
Contributor

Keumars Afifi-Sabet is a writer and editor who specialises in public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. A regular contributor to other tech sites in the past, these days he can be found at LiveScience, where he runs its Technology section.