“We’re right on the tip of the iceberg”: Appian SVP on how AI could be as transformative as networking
Longer context windows and reduced hallucinations could be the perfect recipe for enterprise AI success in 2025
AI may be on the cusp of redefining how businesses derive value from their computer systems at the most basic level, from customer insights to improved business processes, according to a senior executive at enterprise software firm Appian.
In conversation with ITPro, Malcolm Ross, SVP of product strategy at Appian, said AI has matured to the point where customers are seeing efficiency improvements as a result of AI adoption, but that it still carries far more potential for 2025 and beyond.
To illustrate his point, Ross kicked off the conversation by pointing to Cisco as an example of a company that, in 2000, ranked among the most valuable in the world but has since fallen out of even the top 50.
“The most valuable companies in the world were the companies who used networking to revolutionize business models such as Amazon, Google, Facebook, things like that,” said Ross.
“Suddenly, it’s Nvidia, which is interesting because they're an infrastructure provider of AI – but the companies who are going to revolutionize business operations using AI are the ones who are probably going to win long-term.”
Expanding on this comparison, Ross suggested AI could be as influential to the computing world as networking was in the 2000s, explaining that AI is closer in its potential to this transformative period in computing than it is to other forms of automation such as robotic process automation (RPA).
“Imagine all the different ways you can use networking: you can create a company called Netflix, you can create Twitch, you can do all these different things and AI is much more akin to that than it is an individual piece of technology.”
The biggest challenge networking faced before HTTPS was data security, Ross noted, and it is also a major sticking point for AI adoption. Just as the sector overcame the former, he’s confident it can overcome AI data concerns – and in the meantime, he urges leaders to think hard about how they can make AI work as a “core capability” for their business.
Ross told ITPro that in 2024, Appian has seen customers approach developers with more specific demands for what they want to achieve and deploy with AI, their confidence buoyed by small AI productivity gains.
But he added that for developers, the real value lies in the more technical capabilities of generative AI that don’t grab the limelight. By way of example, he pointed to Appian’s recent annual conference, where an Appian AI Copilot capability for generating filler data to test applications against drew particular customer interest.
“The one that got the most applause was the generate mock data one, because it seems mundane, but developers know that’s a pain in the butt, and now I can just use a magic AI button to do it.”
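Appian hasn’t published the internals of that Copilot feature, but the general pattern is straightforward to sketch. The snippet below is a hypothetical illustration rather than Appian’s implementation: it uses the OpenAI Python client (with an assumed model name) to ask a general-purpose model for mock rows matching a schema the developer supplies.

```python
# Hypothetical sketch of AI-generated mock data - not Appian's Copilot feature.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

schema = {
    "customer_id": "integer",
    "full_name": "string",
    "signup_date": "ISO 8601 date",
    "country": "ISO 3166-1 alpha-2 code",
}

prompt = (
    "Generate five rows of realistic but entirely fictional test data as a JSON "
    f"array of objects matching this schema: {json.dumps(schema)}. "
    "Return only the JSON array, with no surrounding text."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

# In practice the output should be validated (and any markdown fences stripped)
# before being loaded into a test database.
mock_rows = json.loads(response.choices[0].message.content)
for row in mock_rows:
    print(row)
```

The appeal Ross describes is that a chore which once meant hand-writing fixtures becomes a single prompt.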
This is far removed from the extensive changes Ross has envisioned for AI. But over time, and with the right people streamlining AI strategy, Ross suggests it could transform business processes from the ground up.
“Actually, a lot of people are still in exploration mode when long-term, they should be looking at AI as an underlying capability of an operating system kernel,” he told ITPro.
“This allows you to do a huge number of different things like generating applications, generating mock data, tracking all sorts of patterns that you might not have been able to before, and how users interact with digital experiences.”
“So we're right on the tip of the iceberg right now, where we're unleashing that, and I'd say you're just going to see the interest explode as we go into 2025-26 as well.”
Regulation is driving customer concern
All businesses, particularly those in the EU, must follow strict data protection rules, and this could temper AI ambition. Laws such as GDPR and the EU AI Act are already coming to a head with enterprise AI rollouts, and AI developers like Meta have already shelved plans to train AI systems on EU citizens’ data.
But Ross is confident that by the end of 2024, customers will have cleared what he dubs the “trust hurdle” – the gap between achieving audit compliance and certifications and actually deploying AI in sensitive environments.
“Everyone intuitively knows that AI models are a probabilistic representation of data, so when you send your data and it's trained in an AI model, there's a representation of that data in that model,” Ross told ITPro.
“If I have a German customer who is under GDPR compliance laws and I use that customer’s information to train an AI model to better target products and goods to them, then that German customer comes to me and says ‘I want to exercise my right to be forgotten’, how do I get them out of the AI model?”
Ross notes that models can’t “unlearn” information, much as humans can’t forget sensitive information on demand. If you learned something you weren’t supposed to, he adds, the best you can do is promise not to tell anyone.
“So it's an inherent problem with AI models, as far as their representational data. And I think more people are understanding that, getting a grasp of it, getting these private AI architectures in place in order to manage that as a part of their data set just like they do anything else, and being very careful with what they send to these AI models for retraining.”
Context as the key to AI effectiveness
While policy and strategy can define AI success more generally, this is one area where Ross said technology can definitively step in to solve the problem. Specifically, he pointed to the larger context windows now available, which control the amount of information a user can give to an AI model in one go.
“The biggest innovation of generative AI in 2024 has been the expanding context windows,” Ross said.
“Because context windows in generative AI are essentially short-term memory, I can send the information about that German customer to a generative AI model with a large context window, and make sure that the AI model will not retain that information.
“It only retains it in the context of that one transaction and then, in the short term, it just forgets that information after the transaction is completed. So, it's a much better model for using predictive aspects of AI, while ensuring that it doesn't retain that information for a retraining purpose.”
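The pattern Ross describes amounts to keeping sensitive records out of any training or fine-tuning pipeline and supplying them only inside the prompt of each individual request. The sketch below is a generic illustration of that idea (again using the OpenAI Python client and an assumed model name), not Appian’s private AI architecture; whether a provider logs or trains on submitted inputs is ultimately governed by its data-use terms rather than by the calling code.

```python
# Sketch of "context window as short-term memory": the customer record exists
# only in this single request's prompt and never enters a training data set.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY; the model name below is an assumption


def recommend_products(customer_record: dict, catalogue: list[str]) -> str:
    """Ask the model for suggestions, passing the record only as request context."""
    prompt = (
        "Using only the customer details below for this single request, suggest "
        "three items from the catalogue that suit them.\n"
        f"Customer: {customer_record}\n"
        f"Catalogue: {catalogue}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Nothing here feeds a retraining job; the record lives only in this call.
    return response.choices[0].message.content


print(recommend_products(
    {"name": "Example Kundin", "country": "DE", "recent_purchases": ["router", "switch"]},
    ["firewall appliance", "access point", "patch cables", "rack shelf"],
))
```

On this model, a right-to-be-forgotten request reduces to deleting the record from the business’s own systems, since no model weights were ever updated with it.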
Context windows have grown progressively larger over the past two years as AI providers seek to make LLMs more useful for enterprises that may have privacy concerns or seek to input huge troves of data such as codebases.
When the GPT-3.5-powered ChatGPT first launched, it only allowed 4,000 tokens – equivalent to around 3,000 words – of information in any one input. This has since been greatly surpassed, with GPT-4o offering a 128,000-token context window, Anthropic’s Claude 3.5 Sonnet a 200,000-token context window, and Google’s Gemini 1.5 Pro a 1-2 million token context window for select customers.
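Those limits are measured in tokens rather than words, so a common first step before sending a large document is simply to count them. Below is a minimal sketch, assuming the tiktoken library and its cl100k_base encoding (used by GPT-4-era OpenAI models; other model families use different tokenizers, so treat the count as an estimate):

```python
# Estimate whether a document fits a given context window.
# Assumes tiktoken; cl100k_base is only an approximation for non-OpenAI models.
import tiktoken


def fits_in_context(text: str, context_window: int = 128_000) -> bool:
    encoding = tiktoken.get_encoding("cl100k_base")
    token_count = len(encoding.encode(text))
    print(f"{token_count:,} tokens against a {context_window:,}-token window")
    return token_count <= context_window


document = "Networking revolutionised business models. " * 2_000
print(fits_in_context(document))  # well under 128,000 tokens, so True
```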
Ross acknowledges that while major steps have been taken to improve the accuracy of AI outputs in the past year alone – Appian has recorded a jump from 60% to 90% confidence in eliminating AI hallucinations – there will always be a need for a human in the loop to meet customer accuracy demands.
“[Given] the nature of AI, I’m not sure if it’ll ever be 100%, because there’s always a probabilistic nature around taking the structure of the question, and you could easily ask one that’s not sourceable from the data set. So there's some level of probabilistic response back that always seems to need to be sanitized.”
In the future, Ross posits that businesses could seek to derive much more value from AI outside of the natural language chats we’re currently seeing in the various AI Copilots on the market.
“When I'm interacting with a computer, the context of what I want is often passed in many other ways such as where my screen is, where my mouse is moving, what information I look at, the past five things.
“It's the same fashion as how human communication, like us right now, is more than just our words, it's our eye contact, our facial expressions, there’s more depth to the context of a communication. So a lot of that can be derived, fed to generative AI models to dynamically build things on the fly for customers or predict things for our customers as well.”
Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.