Why are so many AI projects destined for failure? Inexperienced staff, poor planning, and a shoehorned approach to agile development are all stifling innovation


AI projects fail because of technical challenges, misunderstandings about what the technology can actually do, and poorly applied agile development, according to research from the RAND Corporation.

Research cited by RAND suggests as many as 80% of AI projects fail — that's twice the rate of other technology projects and a serious problem for the industry given the costs involved with AI. RAND notes that the US Department of Defense is spending $1.8 billion annually on military AI applications.

RAND researchers interviewed 65 AI experts, revealing five key causes for failure — what the research institute calls "anti-patterns of AI".

As is often the case, while some of the challenges relate to the technology itself, others have more to do with people.

"AI projects have two components: the technology as a platform (i.e., the development, use, and deployment of AI to complete some set of business tasks) and the organization of the project (i.e., the process, structure, and place in the overall organization)," the report notes.

"These two elements enable organizations and AI tools to work together to solve pressing business problems."

IT projects fail for many reasons beyond technology, RAND notes, pointing to poor execution, problems with how users interact with the end results, and troubles meeting high expectations.

"However, AI seems to have different project characteristics, such as costly labor and capital requirements and high algorithm complexity, that make them unlike a traditional information system," the report states.

"The high-profile nature of AI may increase the desire for stakeholders to better understand what drives the risk of IT projects related to AI."

Five points of failure

To start, RAND reports that the problem to be solved using AI isn't always well understood or communicated, so people simply don't address the real challenge. Because leadership lacks detailed technical skills and the data science team lacks the full business context, teams can build models that are just far enough off target to be useless.

"[They] do not realize that the metrics measuring the success of the AI model do not truly represent the metrics of success for its intended purpose," the report notes.

As an example, the report describes business leaders asking for an algorithm to set the price for a product, when what they actually need is the price that maximizes profit rather than the one that sells the most items.

"The data science team lacks this business context and therefore might make the wrong assumptions," the paper says.


Second, projects fail because organizations lack sufficient data to train an AI model properly. A business may collect data for compliance purposes or to track sales figures, but that may not be the right kind of data to feed a behavioral model.

"For example, an e-commerce website might have logged what links users click on — but not a full list of what items appeared on the screen when the user selected one or what search query led the user to see that item in the first place," the paper says.

Third, organizations focus more on using the "latest and greatest" technology than on solving the problem at hand, which may call for a different technique or not need a cutting-edge solution at all.

"Not every problem is complex enough to require an ML solution: As one interviewee explained, his teams would sometimes be instructed to apply AI techniques to datasets with a handful of dominant characteristics or patterns that could have quickly been captured by a few simple if-then rules," the researchers said.

The fourth issue highlighted in RAND's report is that organizations may lack the infrastructure needed to manage data and deploy an AI model, dooming the project to failure.

That supporting infrastructure includes staff with the right skills, as well as the operational environment needed for quick and easy deployment.

"Some interviewees noted that they had observed cases where AI models could not be deployed from test environments to production environments because the production environments were incompatible with the requirements of the mode," the report said.

And lastly, sometimes AI simply can't solve the problem. Senior leaders have "inflated expectations" of what AI is actually capable of, exacerbated by hype-filled pitches from salespeople.

"In reality, optimizing an AI model for an organization’s use case can be more difficult than these presentations make it appear," the researchers said, with the researchers adding that many leaders expect AI projects to take weeks instead of months or longer to complete, meaning many are delivered partially complete.

The agile development conundrum in AI projects

Beyond those five general concerns, RAND also raised issues with agile development, a software development methodology that breaks projects into short, iterative phases so teams can work flexibly.

A large number of interviewees told the researchers that agile software development can be a poor fit for AI projects.

That's partly because of how rigidly some companies interpret agile processes, but also because of sprints, the short, fixed-length cycles in which aspects of a system are developed.

AI development may not fit into existing project management schedules because initial phases of data exploration and experimentation have an "unpredictable duration", the researchers said.

"Rigid interpretations of existing software development processes rarely suit the cadence of an AI project," the researchers found.

"Instead of forcing project teams to follow a uniform set of procedures designed for a different type of engineering, organizations should empower their teams to adapt their processes to fit their workloads. Ultimately, organizations will need to rediscover how to make the agile software development process be adaptive and — truly — agile."

What to do?

Some of the fixes are practical. Partnerships on data could ensure there's enough to train a model properly, while investing in infrastructure will help support AI projects through to deployment.

Plus, RAND advises leaders to be patient: teams will need to be committed to a project for at least a year to see results.

More generally, organizations need to understand AI's limitations, clearly communicate the intent and purpose of a project, focus on the problem to be solved, and choose the right technology rather than jumping on the AI bandwagon.