Transparency is “vital” in big tech’s new coalition on AI security, experts suggest

Some of the world's most influential tech companies have banded together to form a coalition on AI security, though experts have told ITPro that certain concerns need to be allayed. 

Dubbed the Coalition for Secure AI (CoSAI), the group will see the likes of Amazon, Microsoft, Anthropic, OpenAI, and others collaborate on securing the burgeoning technology. CoSAI has been about a year in the making, according to Google, following on from the tech giant’s introduction of the Secure AI Framework (SAIF) in 2023.

Working towards similar ends, this coalition will focus on three areas - or “workstreams” as Google has termed them - to help support a “collective investment in AI security.”

One such area is security in the AI software supply chain. Google has extended SLSA provenance to AI models to “help identify when AI software is secure” by providing an understanding of how a model was created and handled across the supply chain.
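SLSA provenance is, in essence, a machine-readable record of how an artifact was produced. As a rough illustration of the idea, not of Google’s actual implementation, the sketch below builds an in-toto style provenance statement for a model file in Python; the field layout follows the published in-toto/SLSA v1 statement format, while the model filename, builder ID, dataset URI, and training config are hypothetical placeholders.

```python
import hashlib
import json

# Hedged sketch of an in-toto style SLSA provenance statement for an AI model.
# All concrete values (file names, URIs, builder ID) are hypothetical examples.

def model_provenance(model_path: str) -> dict:
    # Hash the model artifact so the statement is bound to these exact bytes
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": model_path, "digest": {"sha256": digest}}],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {
                # What produced the model, and from which inputs (illustrative)
                "externalParameters": {
                    "trainingConfig": "configs/train.yaml",           # hypothetical
                    "dataset": "https://example.com/data/corpus-v2",  # hypothetical
                },
            },
            "runDetails": {
                # The pipeline that ran the training job (hypothetical ID)
                "builder": {"id": "https://example.com/builders/training-pipeline"},
            },
        },
    }

if __name__ == "__main__":
    # Create a stand-in artifact so the sketch runs end to end
    with open("model.safetensors", "wb") as f:
        f.write(b"dummy model weights")
    print(json.dumps(model_provenance("model.safetensors"), indent=2))
```

Anyone downloading the model can then recompute its hash and compare it against the statement, which is broadly the kind of supply chain assurance the workstream describes.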

As part of this workstream, CoSAI will look to aid the management of third-party model risks and expand on the “existing efforts” of supply chain frameworks. 

CoSAI will also look to assist security practitioners in “day-to-day” AI governance challenges by creating clearer pathways to “identify investments and mitigation techniques to address the security impact of AI use.”

Finally, the coalition will work to construct a taxonomy of AI risks and controls, as well as a checklist and scorecard to help guide practitioners in preparedness, management, and monitoring. 
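Google’s announcement does not specify what the taxonomy, checklist, or scorecard will look like. Purely as an illustration of the kind of artifact being described, a preparedness scorecard might pair each risk in the taxonomy with a mitigating control and a status, along these lines; every risk, control, and status name here is a hypothetical example.

```python
from dataclasses import dataclass

# Illustrative sketch only: CoSAI has not published a schema.
# Risk categories, controls, and statuses are hypothetical examples.

@dataclass
class ScorecardEntry:
    risk: str      # entry from the AI risk taxonomy
    control: str   # mitigating control mapped to that risk
    status: str    # e.g. "prepared", "in progress", "not started"

scorecard = [
    ScorecardEntry("model supply chain tampering", "verify provenance attestations", "prepared"),
    ScorecardEntry("training data poisoning", "dataset integrity checks", "in progress"),
    ScorecardEntry("prompt injection", "input filtering and monitoring", "not started"),
]

# A simple preparedness summary a practitioner might track
prepared = sum(1 for entry in scorecard if entry.status == "prepared")
print(f"{prepared}/{len(scorecard)} controls prepared")
```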

However, as Peter Wood, CTO at Spectrum Search, pointed out, there are concerns around the self-regulatory nature of the body, given that big tech is exercising a degree of control over its own security measures.

“A principal worry is the matter of who's held accountable. When these tech titans join forces to lay down the law on AI security, there's the worry that these guidelines could skew towards their benefit rather than that of the public interest,” Wood told ITPro.  

“This could potentially quash innovation from more modest firms and startups lacking the same resources. The concern is that self-regulation might become a tool for these giants to keep hold of their stronghold, forming a wall against newer contenders in the AI arena,” Wood added. 

CoSAI is a positive move if handled correctly 

According to Wood, “transparency in how the coalition operates is vital” as, without it, there could be growing concerns that CoSAI is setting standards for its own ends.

“Without clear transparency, there's a risk that these self-imposed rules could be less about security and more about managing the story around AI,” he said.

In principle, however, the move is a positive one and “underscores a sector-wide recognition of the importance of AI security,” Wood said, suggesting a push towards developing standards that might take longer to establish if the onus were placed solely on the government.

“It needs to be handled with care to ensure it serves the wider interests of society, not just those of the coalition members. Striking the right balance between innovation, regulation, and public interest will be the key to its success,” Wood added. 

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.