Slack refutes claims that customer data is used to train AI models


Slack has responded to confusion and concern over the use of customer data for AI model training, insisting that user message content is not used to train its models.

Last week, claims circulated online that the productivity platform, which has its own integrated ‘Slack AI’ service, was using customer data - including internal company communications - as training materials. 

A spokesperson for Salesforce, which owns Slack, told ITPro it has since updated the language of its policy to better reflect its position on using organizational data for training purposes.  

According to the firm’s policies, Slack uses some customer data to develop “non-generative AI/ML models” that support features such as emoji and channel recommendations.

If organizations don’t want their data used in training “Slack global models”, they can opt out, meaning their data will only be used to improve the experience on their “own workspace”.

Slack users say its ‘opt-out’ approach doesn’t cut it 

The policy has nonetheless prompted consternation among users. Many have taken issue with the fact that exclusion requires a proactive move on the part of the organization, rather than Slack opting users out by default or requiring a clear, explicit opt-in.

Instead, an organization must contact Slack’s customer experience team with an opt-out request email, after which point the firm will “process your request and respond once the opt-out has been completed”.

Initial responses to the policy were fiercely critical, and Slack quickly faced backlash from a variety of sources, including many users on the Hacker News forum.

One user questioned why such a policy was opt-out and why it needed to be discovered, claiming that their company was “discussing switching to Teams” in an act of retaliation. Another user called it an “incredible rug pull” from the firm.

Engineer and writer Gergely Orosz also took to social media to criticize Slack with claims it was treating paying customers as a product.

“It’s unacceptable that this is automatic opt-in, and paying organizations are not opted out by default,” Orosz said. 

Slack responded to Orosz, saying that the firm has “platform-level machine learning models for things like channel and emoji recommendations and search results” but that customers are able to exclude their data from them.

One staff member at Slack, Aaron Maurer, responded to Orosz and admitted that “we do need to update this particular page to explain more carefully how these privacy principles play with Slack AI”. 

Slack has since done this in a company blog post.

“We recently heard from some in the Slack community that our published privacy principles weren’t clear enough and could create misunderstandings about how we use customer data in Slack … as we looked at the language on our website, we realized that they were right,” the company said. 

The firm stated that its traditional machine learning models use de-identified, aggregate data and do not access message content in “DMs, private channels, or public channels”.

A Salesforce spokesperson referred ITPro to this blog post while also reiterating some of its key points, highlighting that Slack’s models are not trained on user message content.

“Slack has industry-standard platform-level machine learning models to make the product experience better for customers … these models do not access original message content in DMs, private channels, or public channels to make these suggestions,” the spokesperson said. 

“We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data.”

The spokesperson added that while Slack also uses third-party large language models (LLMs), these are not trained with customer data, and that the “off-the-shelf” models used by the firm are hosted in its own AWS environment to ensure security.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.