Gartner urges CISOs to adopt new forms of trust and risk management for AI

Organizations need to embed new strategies for AI trust, transparency, and security by 2026 to take full advantage of the technology’s business benefits, according to research by Gartner.

The research organization found that firms that operationalize AI trust, risk, and security management (TRiSM) could see up to a 50% improvement in AI adoption, attainment of business goals, and user acceptance.

TRiSM is an umbrella term for a suite of strategies and technologies, including those that keep generative AI models explainable and performing as intended, as well as security tooling that protects AI systems from external threats.

CISOs who speed up the path from AI model to production, or who enable better governance and rationalize their firm’s AI model portfolio, could eliminate up to 80% of faulty and illegitimate information over the same period.

Gartner also urged CISOs to treat AI as a distinct class of application, one that requires new strategies and complementary technologies outside of their normal workflow.

This will involve maintaining oversight of all AI solutions within their stack, identifying the level of transparency and explainability each requires, and integrating risk management at the source of AI operations.

“It calls for education and cross-team collaboration,” said Jeremy D’Hoinne, VP analyst at Gartner. 

“CISOs must have a clear understanding of their AI responsibilities within the broader dedicated AI teams, which can include staff from the legal, compliance, and IT and data analytics teams.”

Workforce training on the risks posed by AI, as well as ethical AI use, could become an indispensable part of the CISO’s toolbox in the coming years.

More than a third of businesses investing in generative AI are also investing in AI application security tools, driven by fears that AI could leak data or produce unsafe results. For example, irregular or incorrect outputs are a major concern for businesses seeking to expose their AI models to their customers.

A recent study found OpenAI’s generative AI chatbot ChatGPT produced incorrect answers to programming questions 52% of the time. Researchers also noted that because users tend to prefer the language style of ChatGPT’s answers, mistakes were often overlooked.

Of the answers users preferred, 77% were found to be incorrect, yet 17% of users still marked ChatGPT answers as correct.
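
Numbers like these help explain the appetite for AI application security tooling. As a minimal, hypothetical sketch of the kind of output check such a tool might run before a model’s response reaches a customer (the patterns, length cap, and function name below are illustrative assumptions, not any vendor’s actual implementation):

```python
import re

# Illustrative output guardrail: scan a model response for obviously
# unsafe content before returning it to a customer. The rules below are
# assumptions for this sketch, not a real product's rule set.
UNSAFE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-style identifiers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # leaked credential strings
]

def is_safe_to_return(response: str, max_length: int = 4000) -> bool:
    """Return False if the response trips any basic safety check."""
    if len(response) > max_length:  # catch runaway generations
        return False
    return not any(p.search(response) for p in UNSAFE_PATTERNS)

print(is_safe_to_return("Your order ships on Tuesday."))            # True
print(is_safe_to_return("Sure, the api_key=sk-12345 should work"))  # False
```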

This wave of investment also includes stronger protections against data leaks, such as privacy-enhancing technologies (PETs), which help ensure that proprietary data and personally identifiable customer information are not exposed through errors in a generative AI system.
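
As a rough illustration of the input-side equivalent, the sketch below redacts personally identifiable information from a prompt before it leaves the organization. Real PETs (differential privacy, confidential computing, and the like) go much further; the patterns and names here are assumptions for illustration only:

```python
import re

# Hypothetical PET-style safeguard: strip PII from a prompt before it is
# sent to an external generative AI service. Patterns are illustrative.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(prompt: str) -> str:
    """Replace matched PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_pii("Contact jane.doe@example.com or +44 20 7946 0958."))
# -> Contact [EMAIL] or [PHONE].
```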

Measures to prevent threat actors from gaining access to AI models are also front of mind for many IT teams. Some have suggested that stronger machine identity controls, linked to a ‘kill switch’ for such systems, would help prevent an organization’s AI from being weaponized by hackers.
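
One way to picture that suggestion is a gate that checks a workload’s machine identity and a centrally controlled revocation flag before any model call is allowed. Everything in this sketch (the file path, the allow-list, the function names) is hypothetical, not a description of any real product:

```python
import os

# Assumed control points for the sketch: a shared kill-switch flag (a file
# here; a secrets/identity platform in practice) and an identity allow-list.
KILL_SWITCH_FILE = "/etc/ai/kill_switch"
ALLOWED_IDENTITIES = {"billing-svc", "support-bot"}

def may_invoke_model(machine_identity: str) -> bool:
    """Deny model access if the kill switch is engaged or the identity is unknown."""
    if os.path.exists(KILL_SWITCH_FILE):  # operators have pulled the kill switch
        return False
    return machine_identity in ALLOWED_IDENTITIES

def invoke_model(machine_identity: str, prompt: str) -> str:
    if not may_invoke_model(machine_identity):
        raise PermissionError(f"model access denied for {machine_identity!r}")
    return f"(model response to {prompt!r})"  # stand-in for the real model call

print(invoke_model("support-bot", "Summarise ticket #4521"))
```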

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.