How important is it to balance AI, automation, and human-in-the-loop in your response tools?

Ever since ChatGPT burst onto the scene in November 2022, there has been almost constant debate over the effects generative AI will have on employment and skills. At the center of this discussion is whether this new branch of artificial intelligence will augment jobs, erase them, or have a negligible effect.

Some 44% of total working hours across all industries “have the potential to be impacted by generative AI”, according to experts from Accenture writing in Harvard Business Review. They say the financial services industry presents the largest “opportunity” for working hours to be “transformed by generative AI” – in banking, 72% of working hours could be transformed, while in insurance the figure is 68%.

For the authors, the ability to use generative AI could significantly improve employee performance. For example, in the case of a data scientist, their research found that “76% of all work time can be impacted by generative AI, enabling a 25% improvement in achievable productivity given the current state of technology and practice”.

The idea that 76% of a role could be “impacted by generative AI” may sound disconcerting – especially if it’s a role you occupy. However, futurist and author Bernard Marr, writing in Forbes, says that this is actually a good position to be in.

“Those in the 20% of jobs that are highly likely to be transformed by generative AI are in a position of privilege,” he says, referencing research from Indeed. “The ability to adopt generative AI into their workflow will make them more efficient, productive and valuable.”

AI’s long pedigree in cybersecurity

When it comes to establishing the impact (and utility) of generative AI, cybersecurity has perhaps more experience to draw from than some other industries.

At McAfee’s annual conference in 2017, the company’s CTO, Steve Grobman, spoke about the importance of humans in cybersecurity, despite the already widespread use of machine learning and automation.

"What makes cyber security such a different field than almost everything else … is there's an adversary applying game theory on the other side of the table that's going to change the game. What machines are not good at is recognizing human intellect – you really need humans to understand human intellect," Grobman argued.

Even earlier, in 2008, Carnegie Mellon University professor Lorrie Faith Cranor was writing about the role of humans in cybersecurity and their function within a secure system.

Ultimately, the lesson drawn from these many years of research and experience is that a balance is needed between automation and AI on the one hand, and human input on the other.

It might sound trite, but it’s true that automation can take a lot of the drudgery out of security roles. Depending on the size of an organization, its security operations center (SOC) could receive tens or even hundreds of thousands of alerts every day. Even the most well-resourced SOC would struggle with such a deluge of data, but thankfully not all of these incidents require human intervention to resolve them.

Techniques like rules-based monitoring, as well as more advanced behavioral analytics using AI, can be used to automate responses to these alerts. This means that many, if not most, of these straightforward incidents can be resolved without a human needing to look at them. It also helps mitigate alert fatigue, where humans start to ‘tune out’ alerts – particularly if there’s a high rate of false positives – which can open the door to a data breach or other incident.

The remaining few dozen that can’t be resolved automatically will be flagged for a cybersecurity professional to assess and deal with. The human in this role has skills and abilities that even the best cybersecurity software doesn’t currently have, and quite possibly never will.
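
As a purely illustrative sketch – not any vendor’s actual implementation – a rules-based triage step might look something like the following Python, in which the Alert fields, rule names, severity scale, and thresholds are all hypothetical:

from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int    # hypothetical scale: 1 (low) to 10 (critical)
    signature: str

# Hypothetical auto-resolution rules: each returns True if the alert
# can be safely closed without human review.
AUTO_RESOLVE_RULES = [
    # Known-benign signature, e.g. a vulnerability scanner we run ourselves
    lambda a: a.signature == "internal-vuln-scan",
    # Low-severity noise below any actionable threshold
    lambda a: a.severity <= 2,
]

def triage(alerts):
    """Split alerts into an auto-resolved queue and a human-review queue."""
    auto_resolved, needs_human = [], []
    for alert in alerts:
        if any(rule(alert) for rule in AUTO_RESOLVE_RULES):
            auto_resolved.append(alert)
        else:
            needs_human.append(alert)
    return auto_resolved, needs_human

alerts = [
    Alert("scanner-01", severity=1, signature="internal-vuln-scan"),
    Alert("vpn-gateway", severity=8, signature="impossible-travel-login"),
]
resolved, escalated = triage(alerts)
print(f"{len(resolved)} auto-resolved, {len(escalated)} escalated to an analyst")

In a real SOC the rule set would be far larger – and in the behavioral analytics case, learned from data rather than hand-written – but the essential split between an auto-resolved queue and a human-review queue is the same.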

No matter how good it is, AI is incapable of abstract thought, intuition, or creativity – all of which are vital when dealing with unexpected or novel threats. It’s limited in the information it has access to, which means it doesn’t have the full contextual awareness a human does. It’s also difficult for it to add to its knowledge pool on the fly – a person can call or message a colleague to find out if an individual currently trying to log on from another country is there on business or if their account has likely been compromised, for example. An AI, on the other hand, may struggle to do so in a useful or meaningful way.

Onwards to a balanced future

In a June 2024 paper published in ACM Transactions on Internet Technology, researchers from Australia’s national science agency – CSIRO – wrote of the importance of “human-AI teaming”. 

“AI can play a role in supporting human capabilities, with varying degrees of involvement and impact. In the SOC context, these functions guide the process of understanding potential threats, evaluating their significance, deciding on countermeasures, and executing responses to strengthen security, thereby contributing directly to incident triage, analysis, and response,” the researchers said.

Humans also have a role to play in making AI better in a security setting. As Si West, director of customer engagement at Resilience, wrote in Insurance Edge in August 2024: “Security controls, such as cyber risk modeling and simulations, are already dependent on AI, but human involvement is crucial to actively manage cyber threats. Such controls need continuous monitoring and updating to keep pace with evolving threats, and humans are key to enhancing a feedback loop.”
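
To make that feedback loop concrete, here is a minimal sketch assuming a simple score-threshold detector; the class, scores, and adjustment step are invented for illustration, not taken from any real product:

class ThresholdDetector:
    """A toy detector that flags an event as malicious when its risk
    score crosses a threshold, and lets analysts tune that threshold."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.feedback = []  # (score, analyst_verdict) pairs

    def is_malicious(self, score: float) -> bool:
        return score >= self.threshold

    def record_verdict(self, score: float, was_malicious: bool) -> None:
        # An analyst confirms or rejects an alert; store the verdict.
        self.feedback.append((score, was_malicious))

    def retune(self) -> None:
        # Nudge the threshold: raise it when false positives dominate,
        # lower it when genuine threats slipped under it.
        false_positives = sum(1 for s, v in self.feedback
                              if s >= self.threshold and not v)
        missed_threats = sum(1 for s, v in self.feedback
                             if s < self.threshold and v)
        if false_positives > missed_threats:
            self.threshold = min(1.0, self.threshold + 0.05)
        elif missed_threats > false_positives:
            self.threshold = max(0.0, self.threshold - 0.05)
        self.feedback.clear()

detector = ThresholdDetector()
detector.record_verdict(0.6, was_malicious=False)  # analyst: false positive
detector.record_verdict(0.7, was_malicious=False)  # analyst: false positive
detector.retune()
print(f"Threshold after analyst feedback: {detector.threshold:.2f}")  # 0.55

Production systems would retrain detection models or rewrite rules rather than nudge a single number, but the principle West describes is the same: human judgment flows back into the automated control.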

When looking at their cybersecurity strategy, businesses must take into account that humans, automated systems, and AI aren’t interchangeable. Each has a different but complementary role to play, and who does most of the decision-making will shift depending on the situation.

The strengths and weaknesses of both the cybersecurity professionals and the software tools in use should be identified, so that each element can support the others and mitigate cyber threats. By striking this balance and enabling what the CSIRO researchers describe as “flexible autonomy”, organizations stand the best possible chance of defending their operations and data from would-be malicious actors.

Jane McCallion
Managing Editor

Jane McCallion is ITPro’s Managing Editor, specializing in data centers and enterprise IT infrastructure. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers while continuing to specialize in enterprise IT infrastructure and business strategy.

Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.