How IT decision makers can prepare for the AI Age
Whether it’s the hardware in your office or what’s powering your virtual machines, the right processors are the key to success

The world is adopting generative AI at a rapid pace, unleashing innovations from ChatGPT to the neural processing units (NPUs) that power such services. This global expansion of generative AI tools may seem like it's all chatbots and AI apps, but adoption in the B2B space has also been startling.
A recent poll from Gartner found that 40% of its respondents had deployed generative AI in more than three business units. McKinsey’s annual State of AI report produced similar findings, with 65% of respondents saying their organization was regularly using generative AI in at least one of its business functions.
The hype, it would seem, is very real. However, before an organization can deploy AI tools and processes, it needs a strategy. Achieving ‘AI readiness’ will mean different things to different organizations. What will they use AI for, how much budget can they allocate to it, and who will take charge of it? These are the basic initial questions. It’s also worth asking whether this is something to be handled in-house or with a vendor.
Whether you’re looking to power your data center with AI processors or put Copilots on the desks of your employees, the key thing to know about AI is that you need lots of processing power.
GPU, CPU, and NPU
The first area to look at is the processor; hardware requirements will vary depending on the business needs and the type of AI application, but there are some vital components to consider. Graphics processing units (GPUs) are essential for AI applications; their ability to swiftly handle large volumes of data makes them particularly valuable for machine learning systems, such as those that deal with image and speech recognition.
CPUs hardly need an introduction as they’re a fundamental part of all computers. In the context of AI, while they don’t possess the vital importance of a GPU, they are integral for running operating systems and managing the resources of a computer – which helps other components manage AI functions.
And then there’s the new kid on the block: the NPU. NPUs are designed to run AI-related jobs locally on your computer, such as blurring the background in video calls. The other processors (GPU/CPU) can handle these kinds of tasks too, but the NPU has lower power demands and takes the work away from the CPU. Essentially, its job is to support the other components and handle most of the generative AI tasks the computer runs.
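The division of labor described above can be sketched as a simple preference rule: send light on-device AI tasks to the NPU when one is present, fall back to the GPU, and keep the CPU free for everything else. This is an illustrative sketch only; real scheduling is handled by the operating system and vendor runtimes, and the `pick_device` helper and device names here are assumptions for the example.

```python
def pick_device(available_devices):
    """Choose where to run a light on-device AI task (e.g. background blur).

    Illustrative only: the preference order reflects the article's point --
    NPU first (lowest power draw), then GPU, with the CPU as the fallback
    so it stays free to manage the rest of the system.
    """
    for device in ("npu", "gpu", "cpu"):
        if device in available_devices:
            return device
    raise RuntimeError("no compute device available")


# A machine with an NPU offloads the task; an older one falls back to the GPU.
print(pick_device({"cpu", "gpu", "npu"}))  # npu
print(pick_device({"cpu", "gpu"}))         # gpu
```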
It is these main components – CPU, GPU, NPU – that are powering the hardware that supports AI tools and services whether it’s on an office device, in a data center, or for a cloud-based application.
However, your machine’s data storage system is also key, as it’s tasked with handling the large data volumes your AI applications will need. The most common forms of computer storage are solid-state drives (SSDs) and hard disk drives (HDDs), and your device will need a minimum of 512GB to run AI workloads.
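As a quick sanity check against that figure, total disk capacity can be read with Python's standard library. The 512GB floor comes from the article; the helper names are my own for the sketch.

```python
import shutil

GB = 1024 ** 3  # bytes per gigabyte (binary)


def capacity_ok(total_bytes, minimum_gb=512):
    """Check a raw byte count against the suggested 512GB minimum."""
    return total_bytes >= minimum_gb * GB


def drive_meets_minimum(path="."):
    """Check the drive holding `path`, using only the standard library."""
    return capacity_ok(shutil.disk_usage(path).total)


print(drive_meets_minimum())
```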
Although it is recommended to have the latest and greatest when it comes to hardware for AI applications, apps like DeepSeek have shown us that somewhat older machines can handle powerful language models, so a shrewd eye for any of the above elements can save you money.
PCs, data centers, and virtual machines
Our laptops and PCs are currently going through dramatic changes, with new ‘AI’ models flooding the market. At face value, these new machines have been marketed based on the addition of NPU chips and keyboards featuring dedicated Copilot buttons. But under the hood, they’re powered by CPUs and GPUs that can process high-volume workloads, power large language models, and help the humble office worker create and build with generative AI.
What’s key here, particularly for AI workloads, is how the data is moved around the system. An AMD GPU, for instance, uses Infinity Fabric, a technology that facilitates high-speed data transfer between the GPU, CPU, and other components within the system.
For a GPU you’ll use for AI workloads and/or training large models, you should prioritize high core counts, large memory capacity, and high bandwidth. If it’s for inference on smaller models, however, high clock speeds mean faster processing.
You should also consider power consumption, ensuring your server infrastructure is capable of handling the power requirements of the GPUs you want to use. It’s also key to ensure software compatibility, as your preferred AI frameworks will need to be optimized for the GPU in question. Scalability should be mulled over here too, as you may need to save room for expansion if it all goes well.
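Those selection criteria can be expressed as a small filter-and-rank sketch. The spec fields and weighting below are illustrative assumptions rather than any vendor's API: candidates that exceed the server's power budget are dropped first, then training picks favor memory capacity and bandwidth while inference picks favor clock speed, per the guidance above.

```python
from dataclasses import dataclass


@dataclass
class GPUSpec:
    name: str
    memory_gb: int       # capacity for model weights and activations
    bandwidth_gbps: int  # memory bandwidth
    clock_mhz: int       # boost clock
    power_w: int         # board power draw


def choose_gpu(candidates, power_budget_w, workload="training"):
    """Filter GPUs by power budget, then rank by workload priorities."""
    viable = [g for g in candidates if g.power_w <= power_budget_w]
    if not viable:
        return None
    if workload == "training":
        # Training large models: memory capacity and bandwidth dominate.
        return max(viable, key=lambda g: (g.memory_gb, g.bandwidth_gbps))
    # Inference on smaller models: clock speed matters most.
    return max(viable, key=lambda g: g.clock_mhz)


cards = [
    GPUSpec("big-hbm-card", memory_gb=192, bandwidth_gbps=5300,
            clock_mhz=2100, power_w=750),
    GPUSpec("fast-clock-card", memory_gb=24, bandwidth_gbps=1000,
            clock_mhz=2600, power_w=300),
]
print(choose_gpu(cards, power_budget_w=800).name)                        # big-hbm-card
print(choose_gpu(cards, power_budget_w=800, workload="inference").name)  # fast-clock-card
print(choose_gpu(cards, power_budget_w=200))                             # None
```

The power-budget filter runs before any ranking, mirroring the order of concerns in the text: a card your infrastructure can't feed is not a candidate, however fast it is.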
What’s interesting, though, is that using fewer GPUs should also be a consideration. Certain GPU models are designed for virtual machines, where users can increase their workloads without adding more physical hardware.
This is an area Microsoft has considered with its Azure cloud platform, becoming the first cloud provider to offer virtual machines based on AMD's latest Instinct GPU, the MI300X. The new VM series (the ND MI300X v5) is the first cloud offering of its kind, according to Microsoft, and is designed to deliver the highest high-bandwidth memory capacity of any available VM. The aim is to allow customers to serve larger models faster and with fewer GPUs.
The age of AI is upon us. And whether you’re looking at what’s on the desks of your employees, the hardware in your data center – or what’s powering your virtual machines – the right processors are the key to success.
Bobby Hellard is ITPro's Reviews Editor and has worked on CloudPro and ChannelPro since 2018. In his time at ITPro, Bobby has covered stories for all the major technology companies, such as Apple, Microsoft, Amazon and Facebook, and regularly attends industry-leading events such as AWS Re:Invent and Google Cloud Next.
Bobby mainly covers hardware reviews, but you will also recognize him as the face of many of our video reviews of laptops and smartphones.