Presented by AMD
What is exascale computing? Exploring the next step in supercomputers
60 years after the birth of the first supercomputers, we are entering a new era


When the first computers were built, they famously filled entire rooms yet had computing power comparable to a modern-day calculator. Between then and now, the chips that power computers have become both smaller and – following Moore’s Law – exponentially more powerful.
Today, the computers that sit on our desks are more powerful than Jack Kilby, the father of the modern microchip, could ever have imagined. In this broad sweep of history, the progress that has been made in computing power is incredible.
It’s not just PCs that have experienced an explosion in processing power, though. In recent years, innovations in chip design and manufacturing have allowed for the next step in supercomputing: exascale.
The supersize world of supercomputers
Supercomputers, while resting on many of the same fundamental principles as desktop computers, differ markedly in how they process data and what they’re used for.
The devices most people use every day, be that a laptop, desktop, phone, or tablet, use sequential computing. Put simply, this means one operation is performed after another in sequence. For most use cases this is perfectly fine, even if what’s being asked is quite demanding – such as some scientific calculations.
Supercomputers use parallel processing, where multiple computations are undertaken at once to solve much bigger problems faster. The exact speed increase depends on how much of the workload can be parallelized, in accordance with Amdahl’s Law, but it can be up to 20 times faster than a standard computer with similar specifications. That said, such a comparison would be increasingly hard to make given the advances in supercomputing architecture overall, from network fabric through to chips, cooling, and beyond.
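Amdahl’s Law makes the limit on that speed-up concrete. As a minimal sketch (the function name and the 95% figure are illustrative assumptions, not from the article), a workload that is 95% parallelizable can never run more than about 20 times faster, no matter how many processors are thrown at it:

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Theoretical speedup for a workload where `parallel_fraction`
    of the work can be spread across `n_processors` simultaneously."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# The remaining 5% serial work caps the speedup near 20x,
# even with a million processors:
print(round(amdahl_speedup(0.95, 1_000_000), 1))  # 20.0
```

This is why the fraction of a problem that can be parallelized, not raw processor count, ultimately governs how much faster a supercomputer is than a conventional machine.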
Despite being far more advanced than early computers, supercomputers do resemble them in one respect: they are exceptionally large. Consisting of rows and rows of specialised hardware, superficially at least those first computing pioneers would recognise the layout. One major difference, however, is that these machines aren’t merely big enough to occupy an entire room – they occupy an entire specially built or adapted building.
Taking the next step: exascale computing
Until recently, the fastest supercomputers were measured in petaflops – 10^15 floating-point operations per second (flops). The first computer to break the petascale barrier was Roadrunner, an IBM-built supercomputer featuring a mix of IBM and AMD chips, which came online in 2008 and reached a top performance of 1.456 petaflops. It was followed by the Cray-built supercomputer Jaguar, which also featured AMD chips and had a peak performance of 1.75 petaflops.
More recent supercomputers like Lumi and Tuolumne have a sustained performance of hundreds of petaflops, which sounds – and in many ways is – impressive. However, recent advances in technology have allowed for even greater power.
Exascale – a system capable of 10^18 flops – is the next step beyond petascale in supercomputing power.
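To put those prefixes in perspective, the jump from petascale to exascale is a factor of 1,000 – and 10^18 operations per second is a number that defies everyday intuition. A quick back-of-the-envelope calculation (my own illustration, not from the article):

```python
# Exa- and peta- prefixes step up by powers of 1,000.
PETAFLOP = 10**15  # operations per second at petascale
EXAFLOP = 10**18   # operations per second at exascale

# An exascale machine delivers a thousand petascale machines' worth of throughput:
print(EXAFLOP // PETAFLOP)  # 1000

# A person doing one calculation per second would need over 31 billion
# years to match what an exascale system does in one second:
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
print(round(EXAFLOP / SECONDS_PER_YEAR / 1e9, 1))  # ~31.7 (billion years)
```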
This is still something of an emerging technology: there are only three exascale supercomputers as measured by the LINPACK benchmark, all of which are in the US.
The first of these to come online was Frontier. Hosted at the Oak Ridge Leadership Computing Facility and jointly operated by the Oak Ridge National Laboratory (ORNL) and the US Department of Energy, Frontier became operational in 2022. It’s based on the HPE Cray EX, a liquid-cooled, blade-based, high-density system, and is powered by 9,472 AMD third-generation Epyc 7713 64-core 2GHz CPUs and 37,888 AMD Instinct MI250X GPUs.
It achieved a 1.1 exaflop performance when it became fully operational in 2022, as measured by Top500, putting it at the top of the list of the fastest supercomputers in the world. This has since increased to 1.35 exaflops as of November 2024, with a theoretical peak of 2.05 exaflops.
It’s also surprisingly energy efficient: while it consumes almost twice the power of its predecessor Summit (21 MW vs 13 MW), it’s approximately nine times more powerful. Indeed, when it first came online it ranked second on the Green500 list of energy-efficient supercomputers, with an energy efficiency of 52.2 GFlops/watt.
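That efficiency figure follows almost directly from the numbers above. As a rough cross-check (my own arithmetic; the Green500 uses power measured during the benchmark run, so this is approximate):

```python
# Frontier's debut Linpack performance divided by its power draw
# lands close to the reported 52.2 GFlops/watt figure.
rmax_flops = 1.1e18   # 1.1 exaflops at launch
power_watts = 21e6    # ~21 MW

gflops_per_watt = rmax_flops / power_watts / 1e9
print(round(gflops_per_watt, 1))  # ~52.4
```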
Although a number of other, less powerful supercomputers, such as Adastra 2 and Capella, have since surpassed it on that list, it’s still in the top 50 and is one of the most powerful machines to rank there, alongside HPC6, Lumi, and El Capitan – the fastest, most powerful supercomputer in the world.
Raising the bar
In November 2024, El Capitan dislodged Frontier from the number one spot on the Top500, with a performance level of 1.74 exaflops and a theoretical peak of 2.75 exaflops.
Like Frontier, El Capitan is built on hardware from HPE and AMD – namely the HPE Cray EX255A architecture, with 43,308 Epyc 24-core 1.8GHz CPUs and 43,808 Instinct MI300A GPUs, for a combined core count of over 11 million.
Despite requiring more power than Frontier (30 MW vs 21 MW), it’s more energy efficient, ranking 18th in the November 2024 Green500 with an energy efficiency of 58.89 GFlops/watt.
Unlike Frontier, which is available to researchers from around the world to use, El Capitan is exclusively for use by the US National Nuclear Security Administration (NNSA). Housed at the Lawrence Livermore National Laboratory (LLNL), its primary focus is to support the maintenance and management of the USA’s nuclear stockpile, which includes simulating nuclear testing. Additional uses, according to LLNL, are modeling high-energy-density physics experiments like fusion reactions and exploring in detail how materials behave in extreme conditions.
The past two years have seen an incredible leap forward in the processing power of supercomputers. Combined with the generative AI revolution, it’s an exciting time to be a researcher or an IT professional.
Jane McCallion is Managing Editor of ITPro and ChannelPro, specializing in data centers, enterprise IT infrastructure, and cybersecurity. Before becoming Managing Editor, she held the role of Deputy Editor and, prior to that, Features Editor, managing a pool of freelance and internal writers, while continuing to specialize in enterprise IT infrastructure, and business strategy.
Prior to joining ITPro, Jane was a freelance business journalist writing as both Jane McCallion and Jane Bordenave for titles such as European CEO, World Finance, and Business Excellence Magazine.