Zoom rejects claims AI training policy is mandatory after users vent confusion
The video conferencing firm has denied customers’ data will be used to train AI without consent
Zoom has issued a clarification over items in its terms of service relating to the use of customer data for AI training, after a series of LinkedIn posts criticized the firm for allegedly giving users no way to opt out.
Users had objected to several clauses stating that Zoom could use customer content, such as uploaded files and data, or the transcripts and analytics generated from Zoom calls, to train and tune algorithms and models for artificial intelligence (AI) and machine learning (ML).
Points 10.2 and 10.4 in the terms of service had drawn particular concern, as they stated Zoom could use the content for the purposes of “machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof”.
Zoom has rejected criticisms of its terms, and restated that customers have a choice over what their data is used for.
“Zoom customers decide whether to enable generative AI features, and separately whether to share customer content with Zoom for product improvement purposes,” a Zoom spokesperson told ITPro.
Alongside its well-known video conferencing software, the firm has released Zoom IQ, an AI assistant that can summarize meetings and action items from conference calls.
Within businesses that adopt Zoom IQ, administrators control whether information is shared with Zoom for training purposes, and can revoke any consent they grant at a later date.
Users can continue to use Zoom IQ's generative AI features regardless of whether they opt in to data sharing.
Zoom also collects ‘Service Generated Data’ (SGD), its term for diagnostic and telemetry data, along with any other data Zoom collects or generates as a result of customers’ use of its services and software.
The terms agreement states that Zoom users grant the firm a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license” to handle SGD as the firm deems appropriate within the boundaries it has set out.
Greg Wilson, senior software engineering manager at Deep Genomics, wrote a post on LinkedIn urging businesses to drop Zoom as a platform while the practice remained in place.
Zoom COO Aparna Bawa replied to the post, explaining that the terms were intended to improve transparency rather than cause concern.
Separately, the company published a blog post addressing the new terms of service, which came into effect in March 2023.
“In Section 10.4, our intention was to make sure that if we provided value-added services, such as a meeting recording, we would have the ability to do so without questions of usage rights,” wrote Smita Hashim, chief product officer at Zoom.
“An example of a machine learning service for which we need license and usage rights is our automated scanning of webinar invites/reminders to make sure that we aren’t unwittingly being used to spam or defraud participants.
“The customer owns the underlying webinar invite, and we are licensed to provide the service on top of that content. For AI, we do not use audio, video, or chat content for training our models without customer consent.”
Some LinkedIn users compared the changes to recent moves by Google.
In July, the search giant changed its terms of service to explicitly allow the company to train its AI models on publicly available data.
A passage in its privacy policy that had referenced the use of publicly available information, such as text from open-access websites, for training Google’s language models was amended to include references to Google AI products such as Bard.
Google specifically stated that firms with business information on a website could have this indexed for use in Google services.
The change drew criticism from some in the industry, who pointed out that it could give the firm an unfair advantage over competitors. The move has also drawn comparisons to OpenAI, which is widely believed to have trained models such as GPT-4 on large amounts of public data.
OpenAI quietly released configuration details for GPTBot, its web crawler, on 7 August, allowing website admins to prevent their web data from being scraped for inclusion in models the firm trains in future.
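By way of illustration, opting out of a crawler of this kind is typically handled through a site's robots.txt file; the two lines below are a minimal sketch, assuming the "GPTBot" user-agent string OpenAI has documented:

User-agent: GPTBot
Disallow: /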
Other companies are facing similar questions as the AI race heats up, with productivity tools such as Google Duet AI and Microsoft 365 Copilot facing scrutiny.
In a post on Mastodon, security researcher Kevin Beaumont suggested that changes such as automatically generating transcripts with generative AI could give individuals insight into private corporate meetings through GDPR mechanisms.
“Probably my favorite unconsidered vector with AI is MS Copilot plans to do meeting summaries and transcriptions... so if you want to find out what a company is saying about you in meetings, wait a year and get in that GDPR subject access request,” wrote Beaumont.
ITPro has reached out to Google for more information.