Nearly four in ten UK councils are letting staff use AI tools without any safeguards
Freedom of Information requests reveal that a significant number of local authorities are allowing staff to use AI without proper guidance
Nearly four in ten UK councils are allowing staff to use AI tools without having a responsible use policy in place, according to new research.
Freedom of Information requests by digital adoption firm WalkMe found that only a quarter have usage policies in place for AI in the workplace.
While a further 19% are in the process of developing policies, 39% currently allow employees to use the technology without any safeguards in place.
Ofir Hatsor, SVP EMEA at WalkMe, said the use of generative AI tools by local authorities could help improve productivity and deliver better services, but warned that the lack of appropriate safeguards at many councils should be a serious cause for concern.
"Without policies in place, councils cannot understand how their employees are using AI, let alone control it. This lack of understanding means that employees could be opening themselves, citizens, and the council itself up to unintended consequences: from embarrassment to putting sensitive data at risk."
Generative AI tools such as ChatGPT have previously been banned by a range of private companies due to concerns over data protection and the risk of exposing sensitive corporate information.
Last year, Amazon, Apple, and a host of other organizations banned staff from using the chatbot.
So far, it seems, councils have largely been getting away with it: only one has had to take action against an employee for breaching its AI use policy.
Similarly, only one council has had a potential security incident caused by AI use – in this case, a data leak. On that occasion, the council concerned carried out a thorough investigation into AI use and policy development.
The reason for the lack of AI policies appears to be partly due to financial restrictions, the study found. In its FoI, WalkMe found that only three councils said they had a dedicated budget for implementing generative AI tools or training employees.
"It’s not too late for organizations to act. Usage policies should accept that employees will want to use generative AI, offer guidelines on safe and effective use, and demonstrate the consequences if these aren’t followed. This will go a long way towards educating and empowering workers," Hatsor said.
"Beyond policies, councils and businesses should have the funding to invest in approaches that will help deal with the rapid rise of generative AI or any other technology.”
In September 2023, the London Office of Technology and Innovation (LOTI), a coalition of London boroughs, released guidance on how councils should use AI.
It highlighted risks relating to trust and transparency - including issues around hallucination - along with ethics and bias in the models, and data privacy.
Councils aren't alone in allowing staff to use AI without a proper policy in place - and it's leading to some risky behavior.
A recent study by Veritas found that while half of UK office workers are using generative AI at least once a week, 44% are offered no guidance at all.
This, the firm said, could lead to missed opportunities, but also to violations of data privacy regulations.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.