Enterprise AI is surging, but is security keeping up?


Enterprises are ramping up the adoption of AI tools, according to new research, but the heightened security and data protection risks associated with the technology are causing serious headaches for cybersecurity professionals.

A new report from Zscaler showed a 3,000% year-on-year increase in enterprise AI and machine learning (ML) adoption, based on analysis of over 536 billion AI transactions processed on its cloud platform between February and December 2024.

The US and India were found to be leading the world in terms of AI/ML transaction volumes, with businesses in the UK, Germany, and Japan also showing significant uptake of AI tools.

Overall, businesses around the world sent a total of 3,624 TB of data to AI tools during that period, with OpenAI’s ChatGPT being the most popular application, accounting for 45.2% of transactions.

But while ChatGPT was far and away the most popular tool, Zscaler noted it was also the most widely blocked application, with security leaders citing concerns over their lack of visibility into how employees were using it.

AI-enhanced content creation and productivity tools tended to be the most frequently blocked by enterprises; other commonly restricted applications included Grammarly, Microsoft Copilot, QuillBot, and Wordtune.

The report found enterprises had blocked 59.9% of all AI/ML transactions, which it said signals a growing awareness of the potential risks associated with using these tools, such as data leakage, unauthorized access, and compliance issues.

Businesses are succeeding at reducing AI-driven exposures amid rapid adoption

A separate report from cloud security specialist Sysdig identified a similar pattern, finding that 75% of its customers now use AI or ML packages in their environments, a share that has more than doubled since the previous year.

Sysdig revealed that its telemetry showed an eye-watering 500% increase in AI workloads over the last year. This surge was mostly driven by widespread adoption of data analysis tools, the report noted, but the share of generative AI packages also more than doubled over the course of the year, rising from 15% to 36%.

Speaking to ITPro, Crystal Morin, cybersecurity strategist at Sysdig, said that she and her colleagues felt this huge growth in AI usage was being driven in part by shadow AI, where employees use AI tools without explicit permission from their employer.

Morin added that businesses are paying close attention to how they are securing AI workloads, however, stating that the proportion of workloads that are publicly exposed to the internet without appropriate security controls has shrunk by 38% in less than eight months.

The report found 12.8% of workloads containing AI packages were publicly exposed in 2025, with only 1% of these harboring critical vulnerabilities and just 0.5% in active use.

Morin said she felt this was largely the result of businesses and their IT teams placing added scrutiny on how AI is being used in their organization, owing to the level of prominence these tools and their associated risks have been given.

“IT and security teams know what to look for, and they are definitely prioritizing it. They’re seeing these packages pop up, they’re getting alerted, and they’re locking them down.”

Morin added that the fact such a tiny percentage of these workloads were deemed critically vulnerable was reassuring, and showed that the bolstered security efforts observed across the industry were paying off.

“It’s super exciting to see because there is very low risk of attackers being able to [exploit them], it’s still a concern but a very low risk in comparison to other concerns that we have so that’s a really great security effort there.”

Morin said it was important to emphasize that the work cloud security professionals are doing in this area is bearing fruit.

“The story I wanted to tell with this [report] is that we’re doing a good job, cloud defenders have ‘made it’ this year,” she declared.

“If we keep the momentum we can continue making progress as cloud defenders. I think we know enough about defense at this point that we can keep going, continue implementing AI for cloud defense, we know about preventative measures, we know how to defend ourselves.”


Solomon Klappholz
Staff Writer

Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.