Apple staff restricted from using ChatGPT, GitHub Copilot
The ban follows lingering concerns that employees using ChatGPT might leak company information


Apple employees have been restricted from using generative AI tools such as ChatGPT amid concerns that company information might be leaked or exposed.
A report from the Wall Street Journal revealed that the blanket ban is in direct response to worries that employees might input confidential data into the popular AI chatbot.
The WSJ report added that Apple is building its own internal generative AI toolset to support staff.
The restriction is thought to apply to a raft of external AI tools, including GitHub’s AI Copilot platform, which is used by developers at a host of major tech companies.
Apple’s ban on the use of ChatGPT and generative AI tools isn’t without justification. In March, OpenAI revealed that a bug in ChatGPT led to a leak of user data.
The flaw meant that some ChatGPT Plus users were shown other subscribers’ email addresses, names, payment addresses, and partial credit card information.
This incident prompted OpenAI to temporarily take the chatbot down to work on a fix.
The glitch in ChatGPT also allowed some users to view the conversation history of others.
The incident heightened concerns over use of the chatbot in workplace environments, with organizations warning that confidential company information entered into the platform by employees could be put at risk.
What can OpenAI access?
User conversations in ChatGPT can be inspected by OpenAI moderators in certain circumstances.
The company recently introduced a feature that enables users to turn off their chat history. However, OpenAI still stores conversations for up to 30 days before deleting them.
A key concern among businesses has been that OpenAI models are, in part, trained on user inputs, meaning that there is a potential risk that confidential information could be accessed or used in the training of models.
OpenAI has been vocal on this issue, revealing in April that it was working on a new ChatGPT Business subscription that would give enterprises better ways to “manage their end users”.
“ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” the company said in an April statement.
A host of cyber security companies are developing tools aimed at helping businesses reduce the risk of data leakage when using platforms such as ChatGPT.
Security firm ExtraHop recently unveiled a tool that enables companies to determine whether staff are inadvertently leaking confidential data when using generative AI tools.
ExtraHop said the new tool will help organizations “understand their risk exposure” from internal use of generative AI tools and “stop data hemorrhaging in its tracks”.
ChatGPT bans
Apple isn’t alone in limiting the use of ChatGPT and generative AI tools for employees. In recent months, a host of major organizations globally have implemented similar policies to mitigate potential risks.
In February, JPMorgan Chase announced a temporary ban on ChatGPT for employees. At the time, the bank said the restriction stemmed from its policies on the use of third-party software.
Amazon and US telco giant Verizon are among the other companies to have prevented employees from inputting confidential information into ChatGPT.
Perhaps most famously, Italy implemented a temporary ban on the technology, one of the first moves in a wave of restrictions that continues today.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.