Zoom rejects claims AI training policy is mandatory after users vent confusion
The video conferencing firm has denied customers’ data will be used to train AI without consent


Zoom has issued a clarification over items in its terms of service relating to the use of user data for AI training, after a series of LinkedIn posts criticized the firm for allegedly giving users no way to opt out.
Users had objected to several clauses stating that Zoom could use customer content, such as uploaded files and data, or the transcripts and analytics resulting from Zoom calls, to train and tune algorithms and models for artificial intelligence (AI) and machine learning (ML).
Points 10.2 and 10.4 in the terms of service had drawn particular concern, as they stated Zoom could use the content for the purposes of “machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof”.
Zoom has rejected criticisms of its terms, and restated that customers have a choice over what their data is used for.
“Zoom customers decide whether to enable generative AI features, and separately whether to share customer content with Zoom for product improvement purposes,” a Zoom spokesperson told ITPro.
Alongside its well-known video conferencing software, the firm has released Zoom IQ, an AI assistant that can summarize meetings and capture action items from conference calls.
Within businesses that adopt Zoom IQ, administrators have control over whether information is shared with Zoom for training purposes, and can revoke any consent granted to Zoom for these purposes at a later date.
Users can continue to use Zoom IQ's generative AI features whether or not they opt in to data sharing.
Zoom also collects ‘Service Generated Data’ (SGD), the phrase it uses for diagnostic and telemetry data alongside any other data that Zoom collects or generates as a result of customer use of its services and software.
The terms agreement states that Zoom users grant the firm a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license” to handle SGD as the firm deems appropriate within the boundaries it has set out.
Greg Wilson, senior software engineering manager at Deep Genomics, wrote a post on LinkedIn urging businesses to drop Zoom as a platform while the practice remained in place.
Zoom COO Aparna Bawa replied to the post, explaining that the terms were intended to improve transparency rather than cause concern.
Separately, the company published a blog post addressing the new terms of service, which came into effect in March 2023.
“In Section 10.4, our intention was to make sure that if we provided value-added services, such as a meeting recording, we would have the ability to do so without questions of usage rights,” wrote Smita Hashim, chief product officer at Zoom.
“An example of a machine learning service for which we need license and usage rights is our automated scanning of webinar invites/reminders to make sure that we aren’t unwittingly being used to spam or defraud participants.
“The customer owns the underlying webinar invite, and we are licensed to provide the service on top of that content. For AI, we do not use audio, video, or chat content for training our models without customer consent.”
Some LinkedIn users compared the changes to recent moves by Google.
In July, the search giant changed its terms of service to explicitly allow the company to train its AI models on publicly-available data.
A passage in its privacy policy, which had referenced the use of publicly available information such as text from open-access websites for training Google's language models, was amended to include references to Google AI products such as Bard.
Google specifically stated that firms with business information published on a website could have that content indexed for use in Google services.
The change drew criticism from some in the industry, who pointed out that it could give the firm an unfair advantage over competitors. The move has also drawn comparisons to OpenAI, which is widely believed to have used a large amount of public data to train models such as GPT-4.
OpenAI quietly published configuration details for its GPTBot web crawler on 7 August, allowing site administrators to prevent their web data from being scraped for inclusion in future models trained by the firm.
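According to OpenAI's documentation, GPTBot respects the robots.txt standard, so administrators who want to keep the crawler off their sites can add a short entry; a minimal sketch, assuming a site-wide block, looks like this:

# Blocks OpenAI's GPTBot crawler from the entire site
User-agent: GPTBot
Disallow: /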
Other companies are facing similar questions as the AI race heats up, with productivity tools such as Google Duet AI and Microsoft 365 Copilot facing scrutiny.
In a post on Mastodon, security researcher Kevin Beaumont suggested that changes such as automatically generating transcripts with generative AI could give individuals insight into private corporate meetings through GDPR mechanisms.
“Probably my favorite unconsidered vector with AI is MS Copilot plans to do meeting summaries and transcriptions... so if you want to find out what a company is saying about you in meetings, wait a year and get in that GDPR subject access request,” wrote Beaumont.
ITPro has reached out to Google for more information.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.