Why 'shadow SaaS' is becoming a major blind spot for enterprise security teams
Shadow SaaS and shadow AI are exposing organizations to data loss, reduced visibility and control, and data breaches
Security experts have issued a warning over the rise of 'shadow SaaS', with new research showing nearly three-quarters of security professionals admitting to using unauthorized SaaS applications at work.
In a survey of more than 250 global security professionals by Next DLP, carried out at RSA Conference 2024 and Infosecurity Europe 2024, 73% admitted to using SaaS applications that had not been approved by their company’s IT team in the past year.
This was despite respondents appearing highly aware of the risks: 65% named data loss, 62% cited lack of visibility and control, and 52% identified data breaches as the top risks of using unauthorized tools.
Four-in-ten security professionals said they didn't think employees properly understand the data security risks associated with shadow SaaS and shadow AI.
Despite this, only 37% said they had developed clear policies and consequences for using these tools, and just 28% promoted approved alternatives to minimize the problem.
Chris Denbigh-White, chief security officer at Next DLP, said the research highlights a clear disparity between employee confidence in using unauthorized tools and their organization’s ability to mitigate the risks.
One-in-ten even admitted they were certain their organization had suffered a data breach or data loss as a result of shadow SaaS use. With stakes this high, Denbigh-White said it's imperative that security teams implement strict measures or, at the very least, offer approved alternatives.
"Security teams should evaluate the extent of shadow SaaS and AI usage, identify frequently used tools, and provide approved alternatives,” he said. “This will limit potential risks and ensure confidence is deserved, not misplaced."
Shadow SaaS concerns mirrored in AI
Organizations appear to be a little more cautious when it comes to shadow AI, according to the study. Half of respondents said AI use had been restricted to certain job functions and roles in their organization, while 16% said their organization had banned the technology altogether.
Overall, 46% of respondents said their organization has implemented tools and policies to control employees' use of generative AI.
"Security professionals are clearly concerned about the security implications of GenAI and are taking a cautious approach. However, the data protection risks associated with unsanctioned technology are not new," Denbigh-White said.
"Awareness alone is insufficient without the necessary processes and tools. Organizations need full visibility into the tools employees use and how they use them. Only by understanding data usage can they implement effective policies and educate employees on the associated risks."
A recent WalkMe study based on freedom of information requests found that nearly four-in-ten UK councils allow staff to use AI tools without a responsible use policy in place.
Meanwhile, two-fifths of UK office workers told Veritas that they or a colleague have input sensitive information, such as customer, financial, or sales data, into a public generative AI tool.
Six-in-ten failed to realize that doing so could leak confidential information and breach data privacy compliance regulations.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.