Big Tech AI alliance has ‘almost zero’ chance of achieving goals, expert says
Companies like Microsoft, Google, and OpenAI all have competing objectives and approaches to openness, making true private-sector collaboration a serious challenge


A new partnership between Microsoft, Google, OpenAI, and Anthropic that aims to improve AI safety and responsibility is unlikely to meet its targets, according to an analyst.
The Frontier Model Forum has committed itself to the safe development of ‘frontier models’, which it defines as large-scale machine learning models that exceed the capabilities of the most advanced models currently available.
It aims to identify AI best practices, assist the private sector, academics, and policymakers in the implementation of safety measures, educate the public, and deploy AI to fight issues such as climate change.
“It’s nice that these companies have formed this new body because the issues are challenging and common to all model creators and users,” said Avivah Litan, distinguished VP analyst at Gartner.
“But the chances that a group of diehard competitors will arrive at a common useful solution and actually get it to be implemented ubiquitously – across both closed and open source models that they and others control across the globe – is practically zero.”
At present, the Forum comprises only its founding members, but it has stated it would welcome applications from organizations that can commit to developing frontier models safely and to assisting wider initiatives in the sector.
Meta, which has focused on ‘open’ AI through models such as Llama 2, and AWS, which has called for ‘democratized’ AI and favors a broad approach to model access, were both absent from the group at the time of launch.
OpenAI and Microsoft have had a highly successful year of collaboration, with the industry-leading model GPT-4 powering tools such as ChatGPT and Microsoft 365 Copilot.
Google has heavily invested in its own models including PaLM 2, which it uses for its Bard chatbot. Its AI subsidiary Google DeepMind has publicly committed itself to AI safety, with CEO Demis Hassabis long having been a proponent of an ethical approach to the technology.
Anthropic has attracted fewer headlines than its Forum co-founders, but it has made waves in the industry since it was founded by former OpenAI staff in 2021. It claims that its chatbot Claude is safer than alternatives, and has taken a more cautious approach to market than its competitors.
Litan expressed hope that the companies could work to find solutions that could be enforced by governments, and stressed that an international governmental body is necessary to enforce global AI standards but will prove difficult to form.
“We haven’t seen such global government cooperation on climate change, where the solutions are already known. I wouldn’t expect global cooperation and governance around AI to be any easier. In fact, it will be harder, since the solutions are as yet unknown. At least these companies can strive to identify solutions, so that’s a good thing.”
AI regulation continues at different paces around the world. The UK has appointed a chair to its Foundation Model Taskforce, which seeks to identify guardrails for the technology that can be applied worldwide, but may be falling behind the EU which has already progressed its AI Act through several important rounds of voting.
Dr Kate Devlin, senior lecturer in social and cultural artificial intelligence in the Department of Digital Humanities at King's College London, told ITPro that the Frontier Model Forum is welcome “in principle”, and highlighted the intention of the companies involved to engage with wider partners as a good step.
“It is not yet clear what form this will take: the announcement seems to focus particularly on technical approaches, but there are much wider socio-technical aspects that need to be addressed when it comes to developing AI responsibly,” said Dr Devlin.
“It is important to ensure there is no conflict of interest when critically examining the impact this technology is having on the world.”
Dr Devlin is a member of Responsible AI UK, a program that aims to consolidate the responsible AI ecosystem and improve collaboration in the space.
Backed by the public body UK Research and Innovation, it has committed itself to research and innovation projects in socio-technical and creative fields in addition to STEM, in order to back the full potential of the UK’s AI ecosystem.
Other groups already aiming to provide safety guidelines for the use of AI include the Global Partnership on AI (GPAI), which brings together 29 countries under a rotating presidency to support the OECD’s Recommendation on Artificial Intelligence.
It works to translate AI theory into practice, in line with the concerns of legislators, industry professionals, academics, and civil society groups.
The Frontier Model Forum named the G7’s Hiroshima AI process as an initiative it will support. This was an agreement made at the G7 Summit in Japan, with world leaders in attendance asked to collaborate with the OECD and GPAI.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.