A lack of AI guidance is causing GDPR headaches for UK businesses
Businesses are failing to make sure that their staff know the rules around data privacy
Two in five UK businesses are failing to offer staff any guidance on the use of artificial intelligence (AI), and that gap is leading to some risky behavior.
While half of UK office workers use generative AI at least once a week, and one in five use it every day, they are not getting the advice they need, according to research by Veritas.
Half said they need guidelines or mandatory policies on generative AI use from their employers, yet 44% of employers offer no guidance at all.
One in four believe such policies would create a more level playing field, 68% believe it is essential to know how to use AI tools in the right way, and 85% believe there should be some form of national or international regulation around AI.
And, it seems, guidance is badly needed. Two-fifths of UK office workers admit that they or a colleague have inputted sensitive information, such as customer, financial or sales data, into a public generative AI tool.
Six in ten fail to recognize that doing so could result in sensitive information leaking outside the corporate walls, and a similar number are unaware that it could cause their organization to fall foul of data privacy regulations.
"Without guidance from leaders on how or if to utilize generative AI, some employees are using it in ways that put their organizations at risk," said Sonya Duffin, solutions lead at Veritas Technologies.
"Organizations could face regulatory compliance violations or miss out on opportunities to increase efficiency across their entire workforce. Both issues can be resolved with effective generative AI guidelines and policies on what’s OK and what’s not."
Two-fifths of UK office workers are using AI to do their research, 43% to write their emails, and 17% to help write company reports. One in ten say they're simply using it to look good in front of their boss.
However, AI use is causing division among colleagues: 29% believe that co-workers who use it should be reported to their line managers, and a quarter believe they should either get a pay cut or face disciplinary action.
Under the UK GDPR, anyone entering personal data into a generative AI tool needs a lawful basis for the processing and must meet transparency and security requirements. Organizations should carry out a Data Protection Impact Assessment (DPIA) and limit processing to what is necessary.
Organizations developing or using generative AI, says the Information Commissioner's Office (ICO), should be considering their data protection obligations from the outset, taking a data protection by design and by default approach.
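To illustrate what a basic technical safeguard might look like (this sketch is not drawn from the Veritas research or the ICO's guidance), an organization could strip obvious identifiers from prompts before they reach a public generative AI tool. The `redact_prompt` helper and the regex patterns below are hypothetical examples only; a production system would rely on a vetted data loss prevention or PII-detection service rather than ad-hoc patterns.

```python
import re

# Illustrative only: rough patterns for data a firm might not want sent to a
# public generative AI tool. A real deployment would use a vetted DLP or
# PII-detection service rather than ad-hoc regular expressions.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_PHONE": re.compile(r"\b0\d{2,4} ?\d{3,4} ?\d{3,4}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}


def redact_prompt(prompt: str) -> str:
    """Replace likely personal or financial details with placeholder tokens."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


if __name__ == "__main__":
    raw = ("Summarise this complaint from jane.doe@example.com, who paid "
           "with card 4111 1111 1111 1111 and can be reached on 020 7946 0958.")
    print(redact_prompt(raw))
    # Summarise this complaint from [EMAIL REDACTED], who paid with card
    # [CARD_NUMBER REDACTED] and can be reached on [UK_PHONE REDACTED].
```

Pattern-matching of this kind is a stopgap rather than a compliance strategy; it only works alongside the clear policies and governance tooling described below.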
"The message is clear: thoughtfully develop and clearly communicate guidelines and policies on the appropriate use of generative AI and combine that with the right data compliance and governance toolset to monitor and manage their implementation and ongoing enforcement," Duffin said.
"Your employees will thank you, and your organization can enjoy the benefits without increasing risk."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.