Slack refutes claims that customer data is used to train AI models
Slack says its ML models are trained on de-identified, aggregate data and do not access message content


Slack has responded to confusion and concern over the use of customer data for AI model training, insisting that its practices do not include user message content.
Last week, claims circulated online that the productivity platform, which has its own integrated ‘Slack AI’ service, was using customer data, including internal company communications, as training material.
A spokesperson for Salesforce, which owns Slack, told ITPro it has since updated the language of its policy to better reflect its position on using organizational data for training purposes.
According to the firm’s policies, Slack uses some customer data to develop “non-generative AI/ML models” that support various features such as emojis or channel recommendations.
If organizations don’t want their data used to train “Slack global models”, they can opt out, meaning their data will only be used to improve the experience in their “own workspace”.
Slack users say its ‘opt-out’ approach doesn’t cut it
This has still prompted consternation among users, however. Many have taken issue with the fact that opting out requires a proactive move on the part of the organization, rather than Slack excluding users by default or asking for a much clearer declaration of their desire to opt in.
Instead, an organization must contact Slack’s customer experience team with an opt-out request email, after which point the firm will “process your request and respond once the opt-out has been completed”.
Initial responses to this policy were fiercely critical and Slack quickly faced backlash from a variety of sources, including many users on the Hacker News forum.
One user questioned why such a policy was opt-out and why it needed to be discovered, claiming that their company was “discussing switching to Teams” in an act of retaliation. Another user called it an “incredible rug pull” from the firm.
Engineer and writer Gergely Orosz also took to social media to criticize Slack, claiming it was treating paying customers as a product.
“It’s unacceptable that this is automatic opt-in, and paying organizations are not opted out by default,” Orosz said.
Slack responded to Orosz, saying that the firm has “platform-level machine learning models for things like channel and emoji recommendations and search results” but that customers are able to exclude their data from it.
One staff member at Slack, Aaron Maurer, responded to Orosz and admitted that “we do need to update this particular page to explain more carefully how these privacy principles play with Slack AI”.
Slack has since done this in a company blog post.
“We recently heard from some in the Slack community that our published privacy principles weren’t clear enough and could create misunderstandings about how we use customer data in Slack … as we looked at the language on our website, we realized that they were right,” the company said.
The firm stated that its traditional machine learning models use de-identified, aggregate data and do not access message content in “DMs, private channels, or public channels”.
A Salesforce spokesperson referred ITPro to this blog post while also reiterating some of the post’s key points, highlighting that Slack’s platforms are not trained on user message content.
“Slack has industry-standard platform-level machine learning models to make the product experience better for customers … these models do not access original message content in DMs, private channels, or public channels to make these suggestions,” the spokesperson said.
“We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data.”
They added that while Slack also uses third-party large language models (LLMs), these are not trained on customer data, and the “off-the-shelf” models the firm uses are hosted in its own AWS environment to ensure security.

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.