Intel targets AI hardware dominance by 2025
The chip giant's diverse range of CPUs, GPUs, and AI accelerators complements its commitment to an open AI ecosystem


Intel has laid out a roadmap for establishing product leadership in the processor market by 2025, alongside a goal of democratising AI under a consolidated range of AI-optimised hardware and software.
Core to its proposition is a diverse range of products including central processing units (CPUs), graphics processing units (GPUs), and dedicated AI architecture alongside open-source software improvements.
Businesses can expect to benefit from fourth-generation ‘Sapphire Rapids’ Xeon CPUs immediately, with the fifth-generation Xeon codenamed ‘Emerald Rapids’ set for a Q4 2023 release. This will be followed in 2024 by two processors known as Granite Rapids and Sierra Forest.
Sapphire Rapids can deliver up to ten times greater performance than previous generations. Internal test results also showed that a 48-core, fourth-generation Xeon delivered four times better performance than a 48-core AMD EPYC for a range of AI imaging and language benchmarks.
With Granite Rapids and Sierra Forest, Intel will address current limitations for AI and high-performance computing workloads such as memory bandwidth, offering 1.5TB of memory bandwidth capacity and an 83% increase in peak bandwidth over current generations.
Separately, Intel is also focusing development on GPUs and FPGAs (field-programmable gate arrays) to meet the demands of large language model training, largely through its Intel Max and Gaudi chips.
It stated that Gaudi 2 has demonstrated two times higher deep learning inference and training performance than the most popular GPUs.
Training on this level is key for large language models (LLMs), and demand has risen since the meteoric rise of generative AI models such as ChatGPT.
Around 15 FPGA products will be brought out this calendar year, expanding Intel's compute product range for deep learning, artificial intelligence, and other high-performance computing needs.
Over time, Intel intends to draw together its GPU and Gaudi AI accelerator portfolios to allow developers to run software across both architectures.
A plan for an open AI ecosystem
In addition to its achievements and plans for hardware, the firm said it aims to capture and democratise the AI market through software development and collaboration.
With 6.2 million active developers in its community, and 64% of AI developers using Intel tools, its ecosystem already has strong foundations for further AI development.
Intel cited its recent work with Hugging Face, enabling the 176 billion-parameter LLM BLOOMZ through its Gaudi2 architecture. This is a refined version of BLOOM, a text model that can process 46 languages and 13 programming languages, and is also available in a lightweight 7-billion-parameter model.
“For the 176-billion-parameter checkpoint, Gaudi2 is 1.2 times faster than A100 80GB,” wrote Régis Pierrard, machine learning engineer at Hugging Face.
“Smaller checkpoints present interesting results too. Gaudi2 is 3x faster than A100 for BLOOMZ-7B! It is also interesting to note that it manages to benefit from model parallelism whereas A100 is faster on a single device.”
Hugging Face noted that the first-generation Gaudi accelerator also offers a better price proposition than A100, with a Gaudi AWS instance costing $13 per hour in comparison to Nvidia’s $30 per hour.
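The trade-off between hourly price and throughput can be sketched with the figures quoted above. This is illustrative arithmetic only, assuming the article's numbers ($13/hour for a first-generation Gaudi AWS instance, $30/hour for an A100 instance) and a hypothetical 100-hour workload; the `cost_for_job` helper and the sample speedup are assumptions for illustration, not vendor figures:

```python
# Illustrative price/performance arithmetic using the figures quoted above.
GAUDI_HOURLY = 13.0   # USD/hour, first-gen Gaudi AWS instance (per Hugging Face)
A100_HOURLY = 30.0    # USD/hour, Nvidia A100 AWS instance (per Hugging Face)

def cost_for_job(hourly_rate: float, baseline_hours: float, speedup: float = 1.0) -> float:
    """Cost of a job that takes baseline_hours at speedup 1.0,
    run on hardware `speedup` times faster, billed at `hourly_rate`."""
    return hourly_rate * (baseline_hours / speedup)

# A hypothetical 100-hour workload, assuming equal throughput:
a100_cost = cost_for_job(A100_HOURLY, 100)    # 3000.0
gaudi_cost = cost_for_job(GAUDI_HOURLY, 100)  # 1300.0

print(f"A100: ${a100_cost:.0f}, Gaudi: ${gaudi_cost:.0f}")
print(f"Hourly cost ratio: {A100_HOURLY / GAUDI_HOURLY:.2f}x cheaper on Gaudi")
```

Note that the $13/hour figure applies to first-generation Gaudi, while the 1.2x and 3x speedups quoted above are for Gaudi2, so the two sets of numbers should not be combined directly.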
Intel did not provide benchmarks comparing Gaudi against the H100, the successor to the A100 and part of the reason big tech is choosing Nvidia for AI.
But lining up Nvidia's GPUs, long considered the best on the market, against its own shows Intel is confident that it can deliver and exceed shareholder expectations when it comes to market dominance by 2025.
As part of the same Hugging Face collaboration, fourth-generation Xeon was used to improve the speed of the open source image generation model Stable Diffusion by more than three times.
The company affirmed its commitment to keep contributing upstream software optimisations to frameworks like TensorFlow and PyTorch, as one of the top three contributors to the latter.
To further open the AI ecosystem, Intel is adding more features to oneAPI, its cross-architecture programming model that offers an alternative to Nvidia’s CUDA software layer.
One of these improves access to SYCL, an open source, royalty-free programming model based on C++ that is heavily used to access hardware accelerators.
Intel’s SYCLomatic can be used to migrate CUDA source code automatically, freeing programmers from time constraints that could otherwise lock them into Nvidia’s architecture.
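To give a flavour of what SYCL code looks like, here is a minimal vector-add sketch in the SYCL 2020 style; it is written for illustration, assumes a SYCL-capable compiler such as Intel's DPC++ (it will not build with a plain C++ toolchain), and is not drawn from any Intel example:

```cpp
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    sycl::queue q;  // selects a default device: CPU, GPU, or accelerator
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);
    {
        // Buffers manage data movement between host and device.
        sycl::buffer bufA(a), bufB(b), bufC(c);
        q.submit([&](sycl::handler &h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);
            // The kernel runs once per index, on whichever device was chosen.
            h.parallel_for(1024, [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffers go out of scope here, copying results back to the host
    return c[0] == 3.0f ? 0 : 1;
}
```

The same source targets any conformant SYCL device, which is the vendor-neutral portability Intel is contrasting with CUDA's Nvidia-only model.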
“We believe that the industry will benefit from an open, standardised programming language that everyone can contribute to, collaborate on, and which is not locked into a particular vendor so it can evolve organically based on its community and public requirements,” said Greg Lavender, CTO and GM of the software and advanced technology group at Intel.
“The desire for an open, multi-vendor, multi-architectural alternative to CUDA is not diminishing. Fundamentally, we believe that innovation will flourish the most in an open field, rather than in the shadows of a walled garden.”

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.