Shadow AI is creeping its way into software development – more than half of developers admit to using unauthorized AI tools at work, and it’s putting companies at risk
Enterprises need to create smart AI usage policies that balance the benefits and risks
With software developers increasingly flocking to AI tools to support daily activities, new research suggests a concerning portion are using unauthorized solutions.
Findings from Harness’ State of Software Delivery Report show that more than half (52%) of developers don’t use IT-approved tools, raising significant compliance and intellectual property concerns.
“Perhaps the most alarming observation was around the use of company-approved coding tools, or lack thereof,” the report states.
“The unauthorized adoption of AI codegen tools creates significant shadow IT challenges that extend far beyond immediate security concerns.”
Shadow AI is a serious concern for development teams, the report added, with developers potentially exposing sensitive code snippets to third-party AI services without proper governance.
“Ultimately, they can’t track the origin of AI-generated code, nor can they ensure consistent security standards across teams,” Harness said.
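Harness doesn’t prescribe a remedy, but one lightweight way teams approach the provenance problem is to mark AI-assisted commits with an agreed commit-message trailer and then audit for it. The sketch below is illustrative only: the “AI-Assisted:” trailer is a hypothetical convention a team would have to adopt (for example via a commit template), not a git built-in or anything named in the report.

```python
#!/usr/bin/env python3
"""Audit which commits were flagged as AI-assisted via a commit-message
trailer. The "AI-Assisted:" trailer is a hypothetical team convention,
not a git built-in."""
import subprocess

# List every commit hash together with its full message body, using
# ASCII unit/record separators so multi-line messages parse cleanly.
log = subprocess.run(
    ["git", "log", "--format=%H%x1f%B%x1e"],
    capture_output=True, text=True, check=True,
).stdout

for record in log.split("\x1e"):
    record = record.strip()
    if not record:
        continue
    sha, _, body = record.partition("\x1f")
    # Flag commits whose message carries the agreed trailer.
    if any(line.lower().startswith("ai-assisted:") for line in body.splitlines()):
        print(f"AI-assisted commit: {sha[:12]}")
```

A convention like this only tracks what developers declare, of course, which is exactly why the report pairs provenance with governance rather than treating it as a purely technical problem.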
Software developers aren’t the only ones flocking to shadow AI
The rise of shadow AI has become a recurring talking point over the last two years as enterprises around the world adopt the expanding range of AI tools on the market.
In its Chasing Shadows report, Software AG found that 75% of knowledge workers already use AI, a figure set to rise to 90% in the near future, and that more than half of this group rely on personal or non-company-issued tools to do so.
Another study by customer service platform Zendesk recorded sharp year-on-year growth in the use of unsanctioned AI tools in certain industries. The financial services sector was the worst affected, with a 250% spike on 2023 levels, while the healthcare (230%) and manufacturing (233%) sectors also exhibited very high levels of shadow AI use.
Developing robust AI usage policies will be integral to ensuring this growing reliance on unvetted AI tools does not expose enterprises to unnecessary risk.
Harness’ report listed the critical gaps software engineering leaders identified in their organizations’ AI coding tool policies.
Three-fifths of engineering leaders said companies need policies prescribing how AI-generated code is assessed for vulnerabilities or errors, while 58% said policies should outline the specific use cases where AI is or isn’t safe to use.
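By way of illustration only (the report names the gap, not a remedy), a prescribed assessment process could be as simple as a CI step that blocks a merge whenever a static analysis scan reports findings. The scanner choice, the src/ path, and the severity bar in the sketch below are assumptions made for this example, not recommendations from Harness.

```python
#!/usr/bin/env python3
"""Minimal CI gate: run a static analysis scan before code is merged.
Bandit is used purely as an example scanner for Python codebases; a
real policy would name the org's approved tooling and severity bar."""
import subprocess
import sys

# Bandit exits non-zero when it finds issues at or above the requested
# severity, which lets the pipeline fail the merge automatically.
result = subprocess.run(
    ["bandit", "-r", "src/", "-ll"],  # report medium severity and above
)

if result.returncode != 0:
    print("Security findings detected; blocking merge per AI usage policy.")
    sys.exit(1)
print("Scan clean; merge may proceed.")
```

The point of such a gate is less the specific tool than that the policy spells out a repeatable process, which is precisely what the engineering leaders surveyed said was missing.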
Bharat Mistry, field CTO at Trend Micro, told ITPro that the policies outlined in the Harness report were all wise, but highlighted the importance of training when trying to shape employee behaviour and foster responsible use of personal AI systems.
“I agree with the policies given above, however for me it begins with the human aspect. By investing in comprehensive training and awareness programs, businesses can empower their employees to use AI responsibly, identify and mitigate risks and contribute to the development of ethical and effective AI solutions,” he argued.
“This proactive approach not only enhances the organization’s AI capabilities but also builds a culture of trust and accountability around AI technologies.”
Speaking to ITPro, Steve Ponting, director of Software AG, echoed Mistry’s comments, noting that training will be essential in mitigating the risks associated with employees using their own AI tools.
“Workers have been clear: they will use AI whether it’s sanctioned or not. This means that businesses could struggle to manage AI applications, leading to cybersecurity risks, skills gaps, and inaccurate work,” he explained.
“Businesses must have a plan in place to reduce risk, build skills and plan for AI’s inclusion in daily work. If people are determined to use their own AI, training is vital in this regard. Better training would make 46% of employees use AI more, but crucially, they would use it effectively and responsibly.”
Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing which led to him developing a particular interest in IT regulation, industrial infrastructure applications, and machine learning.