AWS distances itself from Google, Microsoft with ‘unique’ approach to generative AI
The cloud giant has gone all in on its pick-and-choose ecosystem


AWS has emphasized its commitment to providing customers with the widest range of AI tools, in a strategy that distances it from other hyperscale AI providers.
The firm, the third of the hyperscalers to fully enter the generative artificial intelligence (AI) fray, has distanced itself from Microsoft Azure and Google Cloud with its focus on an AI ecosystem that combines proprietary large language models (LLMs) with third-party models.
Microsoft has put OpenAI, and its popular GPT-n foundation models, at the center of its AI strategy, with GPT-4 used throughout its Copilot-branded productivity tools.
Google Cloud has committed to providing a range of AI models to its customers, but at the wider company level Google has thrown its weight behind large, one-size-fits-all models such as PaLM 2, which powers its search chatbot Bard.
With that in mind, the ability to access a range of models that are particularly good at one thing out of the box - with no further fine-tuning required - could let smaller firms embrace generative AI without having to settle for ‘jack of all trades’ models that lack specialization.
An example of this can be seen in Amazon’s Titan LLMs, which come pre-trained to filter out profanity and hate speech.
In the keynote presentation at its AWS Summit London conference, the firm ran through an example of a digital marketing firm using a slew of AI models in order to push through a product campaign.
It used Anthropic’s Claude AI assistant for a product description, Stable Diffusion to generate a product image, AI21’s Jurassic-1 LLM for social media copy, and AWS’ own Titan foundation model (FM) for SEO-optimized terms.
All models are accessible through Amazon Bedrock, which provides customers with an ecosystem of AI and machine learning via APIs, enabling firms to pick and choose which data is passed to which model for tailored outputs.
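For illustration, a minimal sketch of what such calls might look like through the Bedrock runtime API is below. The model IDs, request payload shapes, and region are assumptions that vary by model version and account, and the example presumes boto3 credentials with Bedrock access.

```python
# Hypothetical sketch: calling two different Bedrock-hosted models through the
# same runtime API. Model IDs and payload formats are illustrative.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Product description via Anthropic's Claude.
claude_body = json.dumps({
    "prompt": "\n\nHuman: Write a short product description for a reusable water bottle.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})
claude_resp = bedrock.invoke_model(modelId="anthropic.claude-v2", body=claude_body)
print(json.loads(claude_resp["body"].read())["completion"])

# SEO keywords via Amazon's own Titan text model.
titan_body = json.dumps({
    "inputText": "List five SEO keywords for a reusable water bottle campaign.",
    "textGenerationConfig": {"maxTokenCount": 100},
})
titan_resp = bedrock.invoke_model(modelId="amazon.titan-text-express-v1", body=titan_body)
print(json.loads(titan_resp["body"].read())["results"][0]["outputText"])
```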
Swami Sivasubramanian, VP of database, analytics, and machine learning at AWS, emphasized that in combination with Amazon SageMaker JumpStart, customers can embrace pre-trained models or train their own ML tools using its cloud architecture.
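A deployment through SageMaker JumpStart can be sketched in a few lines with the SageMaker Python SDK, as below. The model ID and instance defaults are assumptions; available models, quotas, and permissions depend on the account and region.

```python
# Minimal sketch of deploying a pre-trained model from SageMaker JumpStart.
from sagemaker.jumpstart.model import JumpStartModel

# Hypothetical model ID; JumpStart lists the catalogue of available IDs.
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

# Deploy the pre-trained model to a real-time endpoint, then query it.
predictor = model.deploy()
response = predictor.predict({"inputs": "Summarise the benefits of managed ML infrastructure."})
print(response)

# Tear the endpoint down when finished to avoid ongoing costs.
predictor.delete_endpoint()
```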
The cloud giant has been a vocal proponent of ‘democratizing’ AI in recent months, having partnered with the data and machine learning platform Hugging Face to provide trusted open models.
Sivasubramanian rejected the idea that any one model could meet the use cases of every customer, all the time, in contrast to competitors such as Microsoft, which has focused its efforts on powerful all-rounder LLMs such as GPT-4.
He noted that different models vary in their effectiveness when presented with tasks such as handling languages other than English, or in more subtle ways such as the tone of voice they tend to take.
“Even among text models, different ones tend to have different support for languages beyond English, for example,” he told ITPro.
“Different customers, depending on their use cases - price profile, performance - end up picking the right tool. That’s one of the reasons we designed Bedrock to provide access to the best-in-class models and provide the ability to customize these foundation models with their data.”
He acknowledged that at present, businesses lack concrete measures to benchmark AI models for specific use cases, and suggested that companies would instead come to know which models were best trained for what use cases through exposure over time.
“I expect as this space matures more and more, you're going to find out which models tend to have better capabilities for different languages, different modalities, and even different use cases and so forth.
“Because these are a function of the areas and data they were trained on in the pre-training phase as well.”
This last point could prove especially useful for smaller firms, which may not have the extensive data needed to tailor a model to their specific company brand to a significant degree.
AWS’ API-led approach could also allow firms to freely explore which models are best suited for their purposes on a trial-and-error basis.
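Because every model sits behind the same API, that trial-and-error comparison can be as simple as looping the same prompt over several candidates, as in the hypothetical sketch below. The model IDs and payload builders are assumptions and differ between providers and versions.

```python
# Illustrative sketch: send one prompt to several Bedrock-hosted text models
# and compare their raw responses before committing to one.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
prompt = "Write a two-sentence product pitch for a smart thermostat."

# Each provider expects a different request body shape.
candidates = {
    "anthropic.claude-instant-v1": lambda p: {
        "prompt": f"\n\nHuman: {p}\n\nAssistant:",
        "max_tokens_to_sample": 150,
    },
    "ai21.j2-mid-v1": lambda p: {"prompt": p, "maxTokens": 150},
}

for model_id, build_body in candidates.items():
    resp = bedrock.invoke_model(modelId=model_id, body=json.dumps(build_body(prompt)))
    print(model_id, json.loads(resp["body"].read()))
```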
ITPro spoke to Dr. Pandurang Kamat, chief technology officer at digital transformation firm Persistent, about the firm’s adoption of the AI code-generation tool Amazon CodeWhisperer, which operates similarly to Microsoft’s GitHub Copilot.
Persistent has been an AWS partner since 2012, and has indicated that it will roll out CodeWhisperer across its 16,000-strong organization dependent on the effectiveness of its outputs.
“There are a lot of questions because of the way the product came into the public psyche, which was from a consumer-facing way”, said Kamat.
“So customers have questions like ‘hey, what happens to my data here? How is it being retained? What boundaries is it crossing? Is it being used to train the model itself?’”
He stated that Persistent would be trialing it on its projects to test the specific gains associated with CodeWhisperer before it could recommend it to its customers, in line with AWS’ approach of urging customers to try models out before fully committing to adopt them.
Amazon has claimed that CodeWhisperer helps developers complete coding tasks an average of 57% faster, and has made the individual tier of the tool free to use.
