There's no single route to AI adoption, and Anthropic is charting all available paths
Advances in model capabilities allow firms to get more done with less data preparation, but data regulations must still be met


Businesses should adopt a hands-on approach to AI for successful deployment and take steps to avoid being locked into specific models, according to an Anthropic expert.
Since generative AI became a major industry focus, several companies have risen to the forefront of the AI field – the two standout names being OpenAI and Anthropic. Both firms have competed to secure major investment from hyperscalers and to provide their customers with a vision for how AI can unlock business value.
Frances Pye, head of European Partnerships at Anthropic, sat down with ITPro to discuss how the market has matured since 2022 and the best routes to AI adoption.
“We're getting far fewer questions about the very basics of what an LLM is, or what AI is. We're doing much less of that kind of education and much more on the specific areas where Claude excels, and the use cases and tasks it's particularly good at.
“I think what we're also seeing is a move from people thinking in isolation, in its own little sandbox, about what Claude can potentially do and what skills it has, into thinking more holistically: ‘How do we actually productionize some of these use cases?’”
Anthropic launched its Claude Enterprise plan in September to meet this growing business need, and has targeted business productivity with its Projects feature for Claude Pro and Teams.
But while Anthropic has the technological expertise and is a leader in where AI is headed, Pye acknowledged it doesn’t have decades of experience in business change management or specific customer domains.
This is why its partnerships with firms such as AWS are crucial, Pye told ITPro, particularly when they’ve worked with potential customers for years and already helped them through digital transformation journeys.
Pye said that among Anthropic’s customers, there have been two main paths to successful AI deployment: “top-down and bottom-up”. Elaborating on the latter, Pye explained:
“In some businesses, they've created these playgrounds and model gardens, where any employee can go in and just test and tinker and play around with things, experiment.
“So if you have people in the business building their own things in these model gardens, that all bubbles up, and it's actually a really effective way to surface the key use cases that customers within the business really want to build.”
Anthropic is leaning on its partners to enable this approach, such as the account teams at AWS who understand the individual tech stacks of the businesses looking to leverage Anthropic’s solutions.
But Pye cautioned that this approach is only successful with endorsement from the very top, with leaders closely watching AI use cases as they arise and providing staff with the right level of support and expertise to build out products.
The alternative is the ‘top-down’ approach, which involves a more holistic push to AI. Pye explained that this sees whoever sets out the AI budget, be it the CIO or CTO, convening the entire business to make long-term decisions on how AI can align with their key cost drivers.
“Whether or not the first or second works is often a function of the technical maturity of the organization.
“Those that are in less technically advanced industries, perhaps that second option won't work as well for them because the leadership isn't often thinking about technology in the same way as, perhaps, more technologically advanced industries.”
Data continues to be a stumbling block for partners
Data readiness continues to be a pain point for customers, Pye said, acknowledging that this has been an issue that predates generative AI adoption. But while Pye noted that good data infrastructure is still a key requirement for businesses, generative AI models like Claude can go further without preparation than previous tools.
“One thing that is interesting about generative AI, compared to ‘classical’ or ‘traditional’ AI is that these unstructured, messy data formats are a lot more usable by the model than if you were dealing with classical AI.”
With an aim to improve the extent to which its customers can power AI with their data, Anthropic has released integrations between Claude and direct sources of data such as GitHub. It has also worked closely with its customers to meet data sovereignty needs and pressing legislation such as the EU AI Act.
“Regional hosting, for example, being able to use a generative AI model that's hosted in a data center within your region or your country, there are very specific requirements there for some organizations and they've designed whole compliance systems around that.
“So, that's another benefit of our relationship with AWS: they have that infrastructure around the world already, and so increasingly we're continuing to light up more regions to allow more processing within the regions so that that compliance can happen.”
The early discourse around enterprise generative AI adoption was filled with calls to fine-tune LLMs on one’s own data for more tailored outputs. This was a key focus for Dell and for OpenAI, for example.
But recent advances, along with concerns over the cost of training AI models – Anthropic CEO Dario Amodei has warned that models could soon cost as much as $100 billion to train – have led firms to re-evaluate the need for this step.
Pye pointed to Claude’s 200,000 token context window – equivalent to around 150,000 words – as space enough to include a lot of task-specific data in just one prompt. She also highlighted Anthropic’s ‘prompt caching’ feature as another technique that negates the need for extensive fine-tuning.
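The arithmetic behind that equivalence can be sketched as a quick back-of-envelope check. The 0.75 words-per-token ratio below is a common rule of thumb for English prose, not an exact figure, and the headroom value is an illustrative assumption; real token counts depend on the tokenizer.

```python
# Back-of-envelope check of whether a document fits a 200,000-token
# context window, using a rough words-per-token heuristic for English.

CONTEXT_WINDOW_TOKENS = 200_000
WORDS_PER_TOKEN = 0.75  # rule-of-thumb ratio, not an exact tokenizer figure


def estimated_tokens(word_count: int) -> int:
    """Estimate the token count for a given number of English words."""
    return round(word_count / WORDS_PER_TOKEN)


def fits_in_context(word_count: int, reserved_for_output: int = 4_000) -> bool:
    """Check whether a document plausibly fits, leaving headroom for the reply."""
    return estimated_tokens(word_count) + reserved_for_output <= CONTEXT_WINDOW_TOKENS


print(estimated_tokens(150_000))  # 200000 - matching the article's figure
print(fits_in_context(100_000))   # True
```

On this heuristic, 150,000 words of prose consume roughly the full window, which is why a single prompt can carry a substantial amount of task-specific data.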
Prompt caching is available to developers and stores frequently used contextual information in temporary memory between each Claude API call. This allows the model to use information given to it once to inform multiple outputs within a specific interaction.
This reduces the cost of each prompt – caching data has a set price, but it’s cheaper to use cached data to inform outputs than it is to input it repeatedly – and lowers the latency of conversations. Anthropic has suggested that it could be ideal for agentic chatbots recalling the context of a long conversation, or coding assistants referring to a cached codebase.
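In practice, caching is requested by marking a reusable block of context in the API call. The sketch below builds such a request as a plain dictionary, based on the `cache_control` content-block field Anthropic documents for its Messages API; the model name and document text are placeholders, and no network call is made.

```python
import json

# Sketch of a Messages API request body that marks a large, reusable
# system context (e.g. a codebase or policy document) for prompt caching.
# Placeholder model id and context; illustrative only, no API call made.


def build_cached_request(shared_context: str, user_question: str) -> dict:
    return {
        "model": "claude-3-5-sonnet-latest",  # placeholder model id
        "max_tokens": 1024,
        "system": [
            {"type": "text", "text": "You are a helpful assistant."},
            {
                "type": "text",
                "text": shared_context,  # the large, frequently reused context
                # Mark this block as cacheable: later calls sending an
                # identical prefix can read it from cache at a lower price.
                "cache_control": {"type": "ephemeral"},
            },
        ],
        "messages": [{"role": "user", "content": user_question}],
    }


request = build_cached_request("<entire codebase or long document>",
                               "Where is authentication handled?")
print(json.dumps(request["system"][1]["cache_control"]))
```

Each follow-up question reuses the cached prefix rather than resending it, which is where the cost and latency savings described above come from.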
Where this doesn’t provide the desired results, Pye said there is still a definite place for retrieval augmented generation (RAG) or fine-tuning.
“But what we don’t want people to do is jump to the most difficult option first and also lock themselves into a model, because model generation changes really fast. So you don't want to spend a lot of time doing some really big technical alteration of the model for then a better model to just come out.
“I think we've seen examples of this, where our customers have spent a lot of time and money fine-tuning. And then a new generation of model has come out from whichever provider and out of the box, that model is then even better than their fine-tuned model.”
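The retrieval-augmented alternative Pye mentions can be illustrated with a minimal sketch: instead of baking knowledge into the model via fine-tuning, fetch the most relevant stored chunk at query time and prepend it to the prompt. Production RAG systems use vector embeddings and a vector store; the term-overlap scoring below is a toy stand-in to show the shape of the technique.

```python
# Minimal sketch of the retrieval step in RAG: ground the model's answer
# in retrieved context rather than in fine-tuned weights.
# Toy term-overlap scoring stands in for real embedding similarity.


def score(chunk: str, query: str) -> int:
    """Count query terms that appear in the chunk (toy relevance score)."""
    chunk_terms = set(chunk.lower().split())
    return sum(1 for term in query.lower().split() if term in chunk_terms)


def retrieve(chunks: list[str], query: str) -> str:
    """Return the stored chunk that best matches the query."""
    return max(chunks, key=lambda c: score(c, query))


def build_prompt(chunks: list[str], query: str) -> str:
    """Prepend the retrieved context so the model answers from it."""
    context = retrieve(chunks, query)
    return f"Using only this context:\n{context}\n\nAnswer: {query}"


docs = [
    "Invoices are processed by the finance team every Friday.",
    "VPN access requires a ticket approved by IT security.",
]
print(retrieve(docs, "how do I get VPN access"))
```

Because the model's knowledge lives in the document store rather than its weights, swapping in a newer base model requires no retraining, which is exactly the lock-in risk Pye warns about with fine-tuning.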
Ultimately, this comes back to an organization’s understanding of how AI could improve its existing systems and the readiness of its staff to use AI once it’s deployed. That, Pye said, means taking the hands-on, proof-of-concept approach to figuring out how AI can work best for you.
“We often say to businesses the best thing you can do is just give everyone access to these tools.
“Do it in a way that is compliant and fulfills the things that you need to fulfill from a data perspective, but just get people trying to use them across your whole business and that will help you build the institutional knowledge of these things.”

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.