DevSecOps teams are ramping up the use of AI coding tools, but they’ve got serious concerns — AI-generated code is causing major security headaches and slowing down development processes


While organizations worldwide are now using AI in their software development processes, DevSecOps teams are worried about a growing array of security risks, new research shows.

In a recent survey by Black Duck Software, nine-in-ten developers reported using AI coding tools in their daily workflow and highlighted the marked benefits of integrating AI within the development lifecycle.

The sectors most enthusiastic about AI-generated code development are technology, cybersecurity, fintech, education, and banking/financial, the study found.

Even in the non-profit sector, traditionally less of an early adopter, at least half of organizations surveyed reported that they were using AI.

Yet despite the excitement surrounding AI coding tools, developers and software engineers have reported serious issues. Two-thirds of respondents said they’re growing increasingly concerned about the security and safety of AI-generated code.

"AI is a technology enabler that should be invested in, not feared, so long as the proper guardrails are being prioritized," said Jason Schmitt, CEO of Black Duck.

"For DevSecOps teams, that means finding sensible uses to implement AI into the software development process and layering the proper governance strategy on top of it to protect the heart and soul of an organization – its data."

In terms of security testing, DevSecOps teams' main priorities were the sensitivity of the information being handled, industry best practice, and easing the complexity of testing configuration through automation, each cited by around a third of respondents.

Most survey respondents (85%) said they had at least some measures in place to address the challenges posed by AI-generated code, such as potential IP, copyright, and license issues that an AI tool may introduce into proprietary software.

However, fewer than a quarter said they were ‘very confident’ in their policies and processes for testing this code.

DevSecOps teams are being hampered by testing hurdles

The central tension here appears to be security versus speed, with around six-in-ten reporting that security testing significantly slows development. Half of respondents also said that most projects are still added to security testing manually.

Another major hurdle for teams is the dizzying number of security tools in use, the study noted. More than eight-in-ten organizations said they're using between six and 20 different security testing tools.

This growing array of tools makes it harder to integrate and correlate results across platforms and pipelines, respondents noted, and to distinguish genuine issues from false positives.

Indeed, six-in-ten reported that between 21% and 60% of their security test results are ‘noise’ (false positives, duplicates, or conflicts), which can lead to alert fatigue and inefficient resource allocation.


"While there's a clear trend toward automation and integration of security into development processes, many organizations are still grappling with noise in security results and the persistence of manual processes that could be streamlined through automation," wrote Black Duck’s Fred Bals in a blog post.

"Moving forward, the most successful organizations will likely be those that can effectively streamline their tool stacks, leverage AI responsibly, reduce noise in security testing, and foster closer collaboration between security, development, and operations teams."

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.