Developers are at their wits' end trying to build generative AI applications – skills gaps, complexity, and 'tool sprawl' are creating major hurdles


Generative AI app development faces significant obstacles due to a growing engineering skills gap and a dearth of effective tooling, an IBM study has found.

Though the majority of respondents who identified as AI developers or data scientists considered themselves AI experts, less than a quarter (24%) of application developers ranked themselves at the same level.

This highlights a growing generative AI skills gap, IBM said, as generative AI is new and complicated terrain for most developers, with a steep learning curve and a rapid pace of innovation.

Developers also feel underprepared when it comes to reliable toolkits and frameworks. One-third (33%) of respondents cited the lack of a standardized AI development process and trusted AI lifecycle as top challenges.

While developers cited performance, flexibility, ease of use, and integration as the four most essential qualities in enterprise AI development tools, over a third of respondents said these traits are also the rarest.

Tool sprawl is an issue too, with 72% claiming to use between five and 15 tools in AI application development, while 13% claimed to use 15 or more.

“The upshot is clear: developers are facing real complexity challenges in the AI stack – which has real consequences,” IBM said.

“Enterprises are investing in generative AI for a competitive advantage. An overly complex AI stack saps this investment and ripples out to other systems,” IBM added.

AI coding tools could help, but devs still need core skills

IBM suggested AI coding tools could help address skills issues, noting that 99% of those surveyed use AI coding assistants in some capacity.

Around two-fifths (41%) said these tools saved them between one and two hours a day, while 22% reported saving more than three hours through the use of AI coding tools.

Such tools can boost efficiency and will doubtless become useful copilots for software developers over time, FDM Group COO Sheila Flavell told ITPro, though businesses should be wary of over-reliance.

“In their current state they often result in a surge of errors, security vulnerabilities, and downstream manual work that burdens developers,” Flavell said.

Engineers should still master core software engineering principles and gain expertise in managing AI-generated code, she added. With this in mind, firms should prioritize upskilling to improve code review, quality assurance, and security validation, Flavell said.

“This is about setting up your team for long-term success and skills development with AI, rather than looking solely at output of lines of code,” Dom Couldwell, head of field engineering EMEA at DataStax, told ITPro.

“AI tools can be very effective in producing a starting point for a coding task but without the experience required to spot more subtle flaws in the code, and adequate testing to further root out gaps, there is a danger we’ll see an explosion of lower quality code,” he added.

A recent report from Harness found that 59% of developers reported problems with code deployments at least 50% of the time when using AI coding tools, despite 92% of respondents reporting an uptick in the volume of code shipped into production.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.