AI security tools see mounting investment as businesses scramble to mitigate generative AI’s issues


Over a third (34%) of organizations adopting generative AI are also investing in AI application security solutions to mitigate known risks such as data leaks.

A new report from Gartner found that in addition to application security solutions, greater investments into privacy-enhancing technologies (PETs), AI model operationalization (ModelOps), and model monitoring are being planned.

Despite enthusiasm for generative AI among business leaders, many felt the need for safeguards to prevent inaccurate or harmful outputs and to reduce the risk of proprietary information being leaked through public AI services.

Among those surveyed, 57% of respondents were particularly concerned about sensitive data being leaked in AI-generated code, while 58% were most concerned about incorrect outputs or models showing bias.

Avivah Litan, distinguished VP analyst at Gartner, told ITPro that organizations were very worried about the risks of poor data privacy when it comes to generative AI, and that public AI options such as Azure OpenAI ask for a degree of trust that some IT leaders find difficult to provide.

“That’s what they’re really worried about: how can we trust OpenAI or Google, or any of them, Microsoft, with our data? Even if they say that they're not sharing it, they're not using or training our data for improving their model, no one will take any liability if we're compromised.

“Number one, the users have to trust without the ability to verify. Number two, it's not a new issue. This has been around as long as SaaS and cloud applications have been around, but it seems to be heightened because LLMs are already a black box and no one knows what's going on inside the model. So people just get more paranoid.”


Litan suggested that organizations could run on-premise AI models if they wanted to be certain that their data was not being stored, used to train other models, or leaked.

“The only thing that'll stop your data from getting breached in a third-party environment is don't use the third-party environment,” she said.

“Use your own model, download an open-source model, and host it yourself. But most companies don't have the resources for that.”
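As a rough illustration of the self-hosting approach Litan describes, the sketch below loads an open-source model locally with the Hugging Face transformers library, so prompts and outputs never leave an organization's own infrastructure. The model name and prompt are illustrative choices, not ones named in the article.

```python
# Minimal sketch of self-hosting an open-source model so that prompts and
# outputs stay inside your own infrastructure rather than a third-party API.
# The model name is illustrative; swap in any open-source model you can host.
from transformers import pipeline

generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

prompt = "Summarise our internal incident report in three bullet points."
output = generator(prompt, max_new_tokens=200)
print(output[0]["generated_text"])
```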

ModelOps is a governance method similar to DevOps through which firms automate the oversight of machine learning (ML) or AI models, to measure how effectively they are operating and whether they conform to safety expectations.
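ModelOps tooling varies by vendor, but a core piece is automated checks on model behavior in production. The sketch below is a hypothetical example of such a check, comparing a model's live accuracy against its baseline and flagging it for review when performance drops; the model name, metrics, and tolerance are all placeholders.

```python
# Hypothetical ModelOps-style monitoring check: compare a model's live
# performance against its baseline and flag it when it drifts out of bounds.
from dataclasses import dataclass

@dataclass
class ModelHealthReport:
    model_name: str
    baseline_accuracy: float
    live_accuracy: float
    max_allowed_drop: float = 0.05  # illustrative tolerance

    @property
    def needs_review(self) -> bool:
        return (self.baseline_accuracy - self.live_accuracy) > self.max_allowed_drop

report = ModelHealthReport(
    model_name="support-ticket-classifier",
    baseline_accuracy=0.92,
    live_accuracy=0.84,
)
if report.needs_review:
    print(f"{report.model_name}: accuracy drop exceeds tolerance, escalate for review")
```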

PETs protect data from being exposed while in use, through techniques such as encryption, and can be used to train third-party generative AI models on data without exposing it unnecessarily. In the same manner, PETs can be used to encrypt AI or ML models to prevent threat actors from reverse-engineering them to reveal the sensitive data on which they were trained.
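PETs span several techniques, from homomorphic encryption to differential privacy. As a rough, hypothetical illustration of the latter, the sketch below adds calibrated noise to an aggregate statistic before it is shared with a third party, so that no individual record can be inferred from the released value; the data, epsilon value, and value range are placeholders.

```python
# Hypothetical sketch of one privacy-enhancing technique (differential privacy):
# add Laplace noise to an aggregate statistic before sharing it externally,
# so individual records cannot be recovered from the released value.
import numpy as np

salaries = np.array([52_000, 61_500, 48_250, 73_000, 55_800])  # placeholder data

def private_mean(values: np.ndarray, epsilon: float, value_range: float) -> float:
    """Return the mean with Laplace noise calibrated to the query's sensitivity."""
    sensitivity = value_range / len(values)  # sensitivity of a bounded mean query
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(values.mean() + noise)

print(private_mean(salaries, epsilon=1.0, value_range=100_000))
```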

Apple made headlines in May when it banned its employees from using ChatGPT or GitHub Copilot over fears that staff could hand sensitive data such as source code to OpenAI. Samsung workers accidentally leaked source code via ChatGPT, prompting the firm to issue an internal warning against the use of third-party AI in April.

Gartner’s survey, which ran 1-7 April 2023 and analyzed responses from 150 IT and information security leaders, also revealed an inconsistent view of where responsibility for AI systems rests within organizations.

Almost all (93%) IT and security respondents stated that they have a part to play in managing the risk of generative AI, but only 24% stated that they wholly own this responsibility.

Just under half (44%) of those who stated that they do not own this responsibility identified IT as the department that is responsible for overseeing this risk management, while 20% pointed instead to the governance, risk, and compliance department within their organization.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.