OpenAI says hackers keep trying to use its services for cyber attacks
The AI developer has disrupted 20 attempts to use its tools for malicious purposes, saying there's been little impact to date


OpenAI has disrupted more than 20 attempts to misuse its models since the beginning of the year, but said the attempts to use its tools to disrupt elections or build malware appear to have largely failed, as did a targeted phishing attack against its staff.
In a report, OpenAI described threat actors using ChatGPT to debug malware, write content for fake social media accounts, and create disinformation articles.
"Activities ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts," the company added. "They even included a hoax about the use of AI."
The report comes amid growing concerns about the use of AI to spread disinformation during elections, as well as the possibility of hackers turning to AI-generated content to improve or accelerate their spam and malware campaigns. Last month the US Department of Commerce called on AI providers to prove their systems can't be abused by hackers.
OpenAI called on the wider industry to keep working together to fight back against such attempts, but repeatedly suggested that, so far, AI isn't making the situation worse.
"Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the report said.
Organized networks abuse OpenAI tools
OpenAI noted that it disrupted a "handful of networks" that were using its technology to generate social media content about elections in the US, Rwanda, India and the EU.
That included an Iranian "influence operation" that OpenAI had previously written about, which was using ChatGPT to generate social media posts and longer-form articles published on websites posing as news outlets. The output focused on political content as well as posts about fashion and beauty, which OpenAI suggested were intended to make the accounts look more authentic or to build a follower base.
In another example, ChatGPT accounts in Rwanda were generating election-related content to post on X.com, but OpenAI said that most of the posts identified as written by its models gained little interaction.
Even beyond OpenAI halting such activity, the networks gained little traction. The report added: "in these, we did not observe these networks attracting viral engagement or building sustained audiences."
Not a malware machine
The same was true for the potential to build malware using OpenAI's models. The company admitted that hackers were using its tools for debugging, including one group known as STORM-0817 that was working on "relatively rudimentary" Android malware, but said they weren't able to create entirely new attack techniques.
Some threat groups used OpenAI's tools at the "intermediate" stage of their actions, such as to write posts for stolen social media accounts, rather than to directly hack someone. OpenAI noted that the attackers didn't do anything that couldn't have been achieved without AI.
That said, a China-based hacker known as SweetSpecter not only used OpenAI tools for research, scripting support, and more, but it also used the tools to attempt spear phishing against OpenAI staff, targeting their personal and corporate emails. That campaign was unsuccessful, OpenAI said.
The SweetSpecter hackers posed as ChatGPT users looking for support from the employees. The emails included a malicious attachment — ironically named "some problems.zip" — that contained a file that did list errors in the chatbot, but also ran the "SugarGh0st RAT" malware in the background.
"The malware is designed to give SweetSpecter control over the compromised machine and allow them to do things like execute arbitrary commands, take screenshots, and exfiltrate data," OpenAI said.
Though the hackers made use of OpenAI's technologies to target the company's staff, the reverse was also true: OpenAI's security teams used ChatGPT to translate, categorize and summarize communications from the attackers.
"As our models become more advanced, we expect we will also be able to use ChatGPT to reverse engineer and analyze the malicious attachments sent to employees," the company added.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.