The top three trends in AI security in 2024 so far

Cyber security has been one of the earliest fields to adopt artificial intelligence (AI). Automated threat detection and response has been a feature of many cyber security products for years, allowing for more dynamic, less labor-intensive risk management.

In the past 18 months, however, there have been significant shifts in how AI is used in cyber security and in the risks the technology itself presents.

Here are three of the top trends in AI security in 2024 so far:

1. Generative AI

Generative AI has been one of the biggest breakthroughs in technology in recent years. While it has plenty of consumer uses – from clarifying difficult concepts to simply chatting – it has also made itself useful in business, including in cyber security.

For example, in February 2024 cyber security firm CrowdStrike launched Charlotte AI, a generative AI chatbot incorporated into another new product, Falcon for IT. According to the firm, early adopters of Charlotte reported they could answer questions about their security posture 75% faster, write queries 57% faster, and track down attackers 52% more efficiently.

2. Security professionals’ cautious optimism

According to research by the Cloud Security Alliance, sponsored by Google Cloud, most security professionals (63%) believe AI will improve security within their organization, compared to only 12% who disagree.

Just over a third (34%) feel AI will be more beneficial for security teams than malicious third parties, while almost the same percentage (31%) feel it will equally benefit both defenders and attackers.

Speaking on this topic at RSA Conference 2024, CrowdStrike CEO George Kurtz said: "I've been doing this for a long time. And I really think it [generative AI] has the ability to revolutionize security, but more importantly, the operations of security."

3. Keeping data secure

One of the early issues surrounding organizations' use of generative AI is the exposure of sensitive data. In 2023, for example, Samsung employees uploaded meeting notes and source code to ChatGPT, not realizing that the tool's developer, OpenAI, retains information fed into it for training purposes.

This has led enterprise cloud providers to build generative AI services that isolate customer data from the underlying platform. AWS's Amazon Bedrock, for example, lets customers build generative AI applications, and the company promises that customer data is never used to train the large language models that underpin the service. Microsoft, meanwhile, is introducing automated safety evaluations in Azure AI Studio to help developers ensure their AI applications aren't vulnerable to risks such as jailbreaking.
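To make the isolation model concrete, here is a minimal sketch of calling a foundation model through Bedrock's runtime API using Python and the boto3 SDK. The region, model ID, and prompt below are illustrative assumptions, not details drawn from this article:

import json
import boto3

# Bedrock runtime client; the region is an illustrative choice
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke a hosted model. Per AWS's stated policy, prompts and completions
# sent through Bedrock are not used to train the underlying models.
response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": "Summarize this incident report: ..."},
        ],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])

The application only ever touches the managed API; the model weights and training pipeline stay on the provider's side of the boundary, which is the isolation these services are promising.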

Organizations must still take care when using generative AI, though: a poorly configured deployment can still leak sensitive information.
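One common safeguard, sketched below as a hypothetical pre-processing step (the patterns and helper function are illustrative, not taken from any vendor), is to scrub obviously sensitive strings before a prompt ever leaves the organization:

import re

# Illustrative patterns only; real deployments should rely on a dedicated
# DLP or PII-detection service rather than a handful of regexes
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def redact(text: str) -> str:
    """Replace anything matching a known-sensitive pattern before the
    text is sent to an external generative AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = redact("Email jane.doe@example.com, key AKIAABCDEFGHIJKLMNOP")
# -> "Email [REDACTED EMAIL], key [REDACTED AWS_KEY]"

Guardrails like this don't remove the need for vendor-side protections, but they limit what can leak if a prompt ends up somewhere unexpected.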
