Meta ditching its responsible AI team doesn't bode well
In dropping its responsible AI team, Meta highlights its innovation-at-all-costs mentality


With Meta reportedly breaking up its Responsible AI (RAI) team, the tech giant has become the latest in a growing list of firms to dance dangerously with AI safety concerns.
Members of the RAI team will be distributed among other areas of Meta, including its generative AI product team, with others set to focus on AI infrastructure projects, The Information reported last week.
A Meta spokesperson told Reuters that the move is intended to "bring the staff closer to the development of core products and technologies".
This does, to a degree, make sense. Embedding those responsible for ethical AI within specific teams could bring alternative voices into the development process to weigh potential harms.
However, the move means Meta’s responsible AI team, which was tasked with fine-tuning the firm’s AI training practices, has effectively been gutted. Earlier this year, the division underwent a restructuring that left it a “shell of a team”, according to Business Insider.
The team was hamstrung from the get-go, reports suggest, with the publication noting that it had “little autonomy” and was bogged down by red tape.
Ordinarily, a restructuring and redistribution of staff from a team like this would merely raise eyebrows, but given the intense discussion of AI safety in recent months, Meta’s decision seems perplexing to say the least.
Concerns over AI safety and ethical development have been growing in intensity amidst claims that generative AI could have an adverse impact on society.
AI-related job losses, the use of generative AI tools for nefarious purposes such as disinformation and cyber attacks, and the potential for discriminatory bias have all been flagged as lingering concerns.
Lawmakers and regulators on both sides of the Atlantic have been highly vocal on the topic in a bid to get ahead of the curve. The European Union (EU) has taken a particularly aggressive position on AI regulation with the EU AI Act, for example.
The US government has also been pushing heavily for AI safeguards in recent weeks, with President Biden signing an executive order aimed at establishing new standards for AI safety and security, including requirements for developers of the most powerful models to share safety test results with the government.
This confluence of external pressure has prompted big tech to act in anticipation of pending legislation, suggesting firms are willing to bow to pressure to avoid heightened regulatory scrutiny.
In July, a host of major players in the AI space, including Anthropic, Google, Microsoft, and OpenAI, launched the Frontier Model Forum, a coalition aimed specifically at shaping AI safety standards.
Yet despite this, Meta appears intent on swerving safety concerns entirely as it doubles down on AI development, disregarding its “pillars of responsible AI”, which include transparency, safety, privacy, and accountability.
Jon Carvill, senior director of communications for AI at Meta, told The Information that the company will still “prioritize and invest in safe and responsible AI development” despite the decision.
Team members redistributed throughout the business will “continue to support relevant cross-Meta efforts on responsible AI development and use”, he added.
While these comments are clearly aimed at alleviating concerns over AI safety, they are unlikely to put minds at ease in the long term. If anything, they highlight that Meta is disregarding safety as it scrambles to keep pace with industry competitors.
Playing catch-up
Meta’s sharpened focus on generative AI development appears to be the key reason behind the gutting of its responsible AI team.
While Microsoft, Google, and other big tech names went all in on generative AI, Meta was left playing catch-up, prompting CEO Mark Zuckerberg to push the company’s metaverse pipedream to the back burner and pivot to this new focus.
Stiff competition in the AI space has forced the firm to pour money, resources, and staff into its generative AI push, which has so far delivered positive results.
Earlier this year, Meta released its own large language model (LLM), dubbed LLaMA. The model, released in sizes ranging from seven billion to 65 billion parameters, was its first major foray into the space, and was followed by the more powerful Llama 2 and by Code Llama, Meta’s own equivalent of GitHub Copilot.
Meta isn’t alone in cutting resources for responsible AI development. X (formerly Twitter) cut staff responsible for ethical AI development in the wake of Elon Musk’s takeover late last year, right around the time the generative AI ‘boom’ ignited with the launch of ChatGPT.
Microsoft, too, cut staff in its Ethics and Society team, one of the key divisions that led research on responsible AI at the tech giant.
Meta is no stranger to criticism and has found itself in repeated battles with regulators on both sides of the Atlantic on topics such as data privacy in recent years, racking up astronomical fines in the process.
This latest move from the tech giant should set alarm bells ringing. A company willing to disregard its own internal AI safety team in a bid to drive innovation at all costs isn’t a good look, and it may create long-term headaches for the firm.
