Amazon Olympus could be the LLM to rival OpenAI and Google
With a reported two trillion parameters, Amazon Olympus would be among the most powerful models available


Amazon is working on the development of a new large language model (LLM) known as ‘Olympus’ in a bid to topple ChatGPT and Bard, according to reports.
Sources at the company told The Information that the tech giant is working on the LLM and has allocated both resources and staff from its Alexa AI and science teams to spearhead its creation.
Development of the model is being led by Rohit Prasad, former head of Alexa turned lead scientist for artificial general intelligence (AGI), according to Reuters.
Prasad moved into the role to specifically focus on generative AI development as the company seeks to contend with industry competitors such as Microsoft-backed OpenAI and Google.
According to sources, the Amazon Olympus model will have two trillion parameters. If correct, this would make it one of the largest and most powerful models currently in production.
By contrast, OpenAI’s GPT-4, the current market-leading model, is widely reported to have around one trillion parameters.
Olympus could be rolled out as early as December and there is a possibility the model could be used to support retail, Alexa, and AWS operations.
Is Amazon Olympus the successor to Titan?
Amazon already has its Titan foundation models, which are available for AWS customers as part of its Bedrock framework.
Amazon Bedrock offers customers a variety of foundation models, including models from AI21 Labs and Anthropic, which the tech giant recently backed with a multi-billion-dollar investment.
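As a rough illustration of how that catalogue is exposed to AWS customers (not drawn from the article itself), Bedrock's list of available foundation models can be queried through the AWS SDK. A minimal sketch with boto3, assuming valid AWS credentials and a region where Bedrock is available:

```python
import boto3

# Control-plane client for Amazon Bedrock. The region is an assumption;
# Bedrock is only available in selected AWS regions.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the foundation models offered through Bedrock, which include
# Amazon's own Titan family alongside third-party providers such as
# Anthropic and AI21 Labs.
response = bedrock.list_foundation_models()

for model in response["modelSummaries"]:
    print(f'{model["providerName"]}: {model["modelId"]}')
```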
Amazon Olympus could be the natural evolution of Amazon’s LLM ambitions. Earlier this year, the company revealed it planned to increase investment in the development of LLMs and generative AI tools.
ITPro has approached Amazon for comment.
Amazon Olympus deviates from Bedrock "ethos"
Olympus could be seen as a departure from Amazon’s AI strategy to date. Amazon Bedrock has set the firm apart from the AI ‘arms race’ between Google and Microsoft by offering as wide a range of third-party AI models as possible alongside its own Titan foundation models.
A core feature of Bedrock in demonstrations has been the ability to use multiple models across a single process like a production line, with models assigned to constituent tasks that play into their individual strengths.
The firm showed an example of this at its AWS Summit London conference, in which a marketing department used a Titan model to SEO-optimize a product listing, Anthropic’s Claude to generate the product description, and Stability AI’s Stable Diffusion to produce the product image.
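To make the ‘production line’ idea concrete, here is a minimal, hypothetical sketch of such a pipeline using boto3’s bedrock-runtime client. The model IDs and request payload shapes below are assumptions based on publicly documented formats and may differ by model version and region; this is illustrative, not a reproduction of the AWS demo.

```python
import base64
import json

import boto3

# Runtime client used to invoke individual Bedrock foundation models.
# Region is an assumption.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")


def invoke(model_id: str, payload: dict) -> dict:
    """Send a JSON request to one Bedrock model and decode the JSON reply."""
    response = runtime.invoke_model(
        modelId=model_id,
        body=json.dumps(payload),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())


# Step 1: a Titan text model drafts SEO keywords for the listing.
# (Model ID and payload format are assumptions.)
titan = invoke(
    "amazon.titan-text-express-v1",
    {"inputText": "Suggest five SEO keywords for a stainless steel water bottle."},
)
keywords = titan["results"][0]["outputText"]

# Step 2: Anthropic's Claude writes the product description around those keywords.
claude = invoke(
    "anthropic.claude-v2",
    {
        "prompt": f"\n\nHuman: Write a short product description using: {keywords}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    },
)
description = claude["completion"]

# Step 3: Stable Diffusion generates the product image from a text prompt.
sdxl = invoke(
    "stability.stable-diffusion-xl-v1",
    {"text_prompts": [{"text": "Studio photo of a stainless steel water bottle"}]},
)
with open("product.png", "wb") as f:
    f.write(base64.b64decode(sdxl["artifacts"][0]["base64"]))

print(description)
```

Each stage plays to a different model’s strengths, which is the point the Bedrock demonstrations were making: orchestration across models rather than reliance on a single one.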
If the firm has diverted some of its attention to creating a ‘killer’ AI model, one able to outperform most or all rivals on its own, it raises the question of where that fits with Bedrock’s ethos.

Reports that the model has two trillion parameters don’t stretch believability, and if accurate this would almost certainly make Olympus the largest LLM on the market, at double the reported size of GPT-4.
Whether Olympus’ reportedly mammoth size translates into raw performance remains to be seen. LLMs are complex systems shaped as much by their training data and fine-tuning as by their sheer scale, and experts in the field have been divided for years on whether bigger is better when it comes to generative AI.
For many years, it was thought there might be a cutoff point for generative AI model size, beyond which a model would lose its specificity and begin providing vague, generalized answers. That barrier has long since been passed, and the success many companies have had with ever-larger models has seemingly proved the sentiment wrong.
But there are also examples of smaller models performing at the same level as, or better than, their larger counterparts. Google’s PaLM 2, which powers its Bard chatbot, has 340 billion parameters compared to the original PaLM’s 540 billion, yet has consistently outperformed its predecessor across benchmarks and is competitive with GPT-4.
Its performance will speak louder than its spec sheet, and it’s too early to judge Olympus before its proper announcement. But the very existence of the model represents a possible major course correction for Amazon, and customers will be watching with anticipation in the months to come.
