Meta and IBM launch new AI alliance, but where's Google and Microsoft?
The alliance will support a transparent and collaborative approach to developing responsible, safe AI technologies

IBM and Meta are set to lead a new AI alliance focused on the responsible, open development of artificial intelligence (AI), but notable industry competitors will not participate.
The coalition’s stated goal is to support a transparent and collaborative approach to AI development that the duo said will help produce evaluation standards and tools for building AI systems, as well as educational content to inform public discourse on the technology.
The two technology giants are joined by over 50 founding members and collaborators, including hardware and infrastructure producers, model builders, research institutions, and open source foundations.
Other industry stakeholders pledging support for the alliance include AMD, Intel, Hugging Face, Dell Technologies, Cerebras, Sony, and Oracle.
A number of research institutions have also joined the coalition, including Harvard University, Yale University, Imperial College London, the University of Tokyo, the University of California, Berkeley, and the University of Notre Dame.
IBM AI Alliance: What will the coalition do?
The focus of the alliance appears to be on ensuring that the future of AI development is open while keeping safety and trust front of mind.
In a blog post announcing the partnership, IBM said an open, collaborative approach to AI innovation will be critical to advancing the technology.
“Open and transparent innovation is essential to empower a broad spectrum of AI researchers, builders, and adopters with the information and tools needed to harness these advancements in ways that prioritize safety, diversity, economic opportunity and benefits to all.”
IBM and Meta hope the coalition will bring together influential members in the sector to pool resources and knowledge to ensure safety concerns are adequately addressed during the ongoing wave of AI development.
This will see the alliance launch or enhance AI projects that satisfy a list of core objectives centered around fostering a healthy environment for responsible AI innovation.
One of these objectives aims to develop and implement evaluation standards for AI systems, produce a list of vetted safety and trust tools, and advocate for the widespread adoption of these standards and tools.
These projects should also facilitate wider AI skills development on a global scale and produce resources to inform public discourse on AI, the duo said, which in the long term will enable more precise and productive regulation of the technology.
Does this alliance represent a new or false hope?
Notable omissions from the list of alliance members include Google, Amazon, Microsoft, Anthropic, and OpenAI, which are among the most influential AI model developers in the sector.
Microsoft, Google, OpenAI, and Anthropic all joined the Frontier Model Forum in July 2023, perhaps explaining their disinclination to join this new alliance.
The Forum's stated goals run along similar lines to those of the coalition announced by IBM and Meta, but with a particular focus on safeguarding the development of frontier models, defined as large language models (LLMs) that surpass the most advanced models available today.
Some experts have expressed pessimism about whether the partnership can realistically achieve its stated goals of cooperation, given that the Forum's members are major competitors.
Is open AI collaboration a pipe dream?
There’s no doubt that new methods for improved model training, or frameworks through which models could avoid unwanted hallucinations or harmful output entirely, are welcome. Collaboration between the public and private sectors will be a necessary part of this.
Ultimately, however, these kinds of partnerships come second to the guardrails of the market as defined by legislation and competition. AI developers will continue to collaborate on best practices and enter into dialog with world leaders over the potential risk of AI implementation, but the bar for transparency will always lie with regulators.
The ‘black box’ approach to AI development, in which training weights and the nature of data used to train AI models are kept largely secret by each company, is likely to be a sticking point for open AI development. If you don’t know the precise decisions and information that has gone into making a model, evaluation systems can only go so far.
With its focus on open source AI, Meta is arguably in a better position to engage in the kind of meaningful collaboration with academia that will be necessary to lay the groundwork for ethical AI.
But even Meta has set limitations according to market competition. The firm’s open LLM, Llama 2, has been advertised as an open source model, but firms with more than 700 million monthly active users must request permission from Meta before they can use it.
Solomon Klappholz is a former Staff Writer at ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in IT regulation, industrial infrastructure applications, and machine learning.