Gartner urges CISOs to adopt new forms of trust and risk management for AI
CISOs will need to deploy new strategies to get the best out of AI implementations


Organizations need to embed new strategies for AI trust, transparency, and security by 2026 to take full advantage of the technology’s business benefits, according to research by Gartner.
The research firm found that organizations that operationalize AI trust, risk, and security management (TRiSM) could see up to a 50% improvement in AI adoption, in meeting their business goals with the technology, and in user satisfaction.
TRiSM is a broad term applying to a suite of strategies and technologies, including those that help to keep generative AI models explainable and performing as intended, as well as security applications to protect AI systems from external threats.
CISOs who work to speed up the journey from AI model to production, or who enable better governance and rationalize their firm’s AI model portfolio, could eliminate up to 80% of faulty and illegitimate information over the same period.
Gartner also urged CISOs to view AI as a distinct category of application, one that necessitates new strategies and complementary technologies outside of their normal workflow.
This will involve maintaining oversight of all AI solutions within their stack, identifying the level of transparency and explainability each requires, and integrating risk management solutions at the source of AI operations.
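One way to picture that oversight is a simple AI asset register that records each system’s owner, risk tier, and required safeguards. The sketch below is purely illustrative, assuming a minimal in-memory inventory; names such as AIAsset and RiskTier are hypothetical and not part of any Gartner framework.

```python
# Illustrative sketch only: a toy register of AI systems and their controls.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIAsset:
    name: str                      # e.g. "support-chatbot" (hypothetical)
    owner: str                     # accountable team
    risk_tier: RiskTier            # drives the depth of review and controls
    explainability_required: bool  # must outputs be explainable to users?
    controls: list[str] = field(default_factory=list)  # attached safeguards


# Record every AI system in the stack alongside its required controls.
registry: dict[str, AIAsset] = {}


def register(asset: AIAsset) -> None:
    registry[asset.name] = asset


register(AIAsset(
    name="support-chatbot",
    owner="customer-experience",
    risk_tier=RiskTier.HIGH,
    explainability_required=True,
    controls=["output-filtering", "pii-redaction"],
))

# Flag high-risk assets whose explainability evidence should be reviewed.
for asset in registry.values():
    if asset.risk_tier is RiskTier.HIGH and asset.explainability_required:
        print(f"{asset.name}: review explainability evidence before audit")
```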
“It calls for education and cross-team collaboration,” said Jeremy D’Hoinne, VP analyst at Gartner.
“CISOs must have a clear understanding of their AI responsibilities within the broader dedicated AI teams, which can include staff from the legal, compliance, and IT and data analytics teams.”
Workforce training on the risks posed by AI, as well as ethical AI use, could become an indispensable part of the CISO’s toolbox in the coming years.
More than a third of businesses investing in generative AI are also investing in AI application security tools, driven by fears that AI could leak data or produce unsafe results. Irregular or incorrect outputs, for example, are a major concern for businesses looking to put their AI models in front of customers.
A recent study found OpenAI’s generative AI chatbot ChatGPT produced incorrect answers to programming questions 52% of the time. Researchers also noted that because users tend to prefer the language style of ChatGPT’s answers, mistakes were often overlooked.
Answers that users preferred were nonetheless incorrect 77% of the time, and 17% of users still marked ChatGPT answers as correct.
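To make the concern concrete, below is a minimal sketch of the kind of output guardrail such security tools might apply: responses must parse as JSON and cite at least one source before reaching a customer. The schema and the validate_response function are illustrative assumptions, not any vendor’s API.

```python
# Illustrative sketch only: hold back malformed or unsourced model output.
import json

REQUIRED_KEYS = {"answer", "sources"}  # hypothetical response schema


def validate_response(raw: str) -> dict | None:
    """Return the parsed response only if it passes basic structural checks."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output never reaches the customer
    if not isinstance(payload, dict) or not REQUIRED_KEYS.issubset(payload):
        return None  # wrong shape, or answer body / citations missing
    if not payload["sources"]:
        return None  # unsourced answers are held back for human review
    return payload


good = validate_response('{"answer": "Use TLS 1.3.", "sources": ["RFC 8446"]}')
bad = validate_response("Sure! Here is some confident but unsourced advice.")
print(good is not None, bad is None)  # prints: True True
```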
This wave of investment also includes stronger protections against data leaks, such as privacy-enhancing technologies (PETs), to ensure that proprietary data and personally identifiable customer information are not exposed through errors in a generative AI system.
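A heavily simplified illustration of that idea follows: a preprocessing step that masks common PII patterns before a prompt ever leaves the organization. Real PETs, such as differential privacy or confidential computing, are far more sophisticated; the regexes here are only a sketch.

```python
# Illustrative sketch only: naive regex redaction before an external API call.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def redact(prompt: str) -> str:
    """Mask common PII patterns so they are never sent to an external model."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = CARD.sub("[CARD]", prompt)
    return prompt


print(redact("Refund jane.doe@example.com on card 4111 1111 1111 1111"))
# -> "Refund [EMAIL] on card [CARD]"
```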
Measures to prevent threat actors from gaining malicious access to AI models are also front of mind for many IT teams. Some have suggested that stronger machine identity controls, linked to a ‘kill switch’ for such systems, would help prevent an organization’s AI from being weaponized by hackers.
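As a rough sketch of how a machine identity check and a kill switch might combine, consider the toy gate below. Every name in it is hypothetical, and a production control would rely on attested workload identities rather than a hard-coded allow-list.

```python
# Illustrative sketch only: gate model access on identity plus a kill switch.
ALLOWED_CLIENTS = {"inference-gateway", "batch-scoring"}  # hypothetical IDs
KILL_SWITCH_ENGAGED = False  # flipped by the security team during an incident


def authorize(client_id: str) -> bool:
    """Deny model access when the kill switch is on or the identity is unknown."""
    if KILL_SWITCH_ENGAGED:
        return False  # one switch revokes every caller at once
    return client_id in ALLOWED_CLIENTS


print(authorize("inference-gateway"))  # True
print(authorize("unknown-scraper"))    # False
```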

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.