The cost of overprovisioning
Having extra capacity available "just in case" may seem prudent, but it could be more expensive than you think

Disruption. Agility. Hybrid. When it comes to some of the biggest business buzzwords – especially in the tech industry – the common thread that runs through all of them is flexibility. The ability to adapt, to take some services from here and others from there to create the solution you need, to be responsive.
Yet the reality of the business landscape hasn’t been particularly true to these ideals. Certain tenets that, it’s fair to say, are rather old-fashioned still sit at the heart of IT strategy, and one of the most deep-rooted is overprovisioning.
At first glance, overprovisioning may sound like either no big deal or a sensible precaution: buying more capacity than you need, in anticipation of future growth or unexpected spikes in demand, seems prudent. The cost of this unused hardware, however, can far outweigh any benefit of having a safety buffer in place.
A problem the cloud can’t solve
Not long ago, industry pundits would have told you that the answer to overprovisioning was for businesses to turn their backs on their on-premises data centres and take their IT to the public cloud. The flexibility and scalability of the public cloud, particularly with the big hyperscalers, were supposed to mean there was no reason for organisations to overprovision – you could instantly scale up or down depending on what you needed at any time.
The reality, however, was somewhat less straightforward, for multiple reasons. The first is that old habits die hard – rather than managing cloud capacity continually, many organisations simply replicated what they had been doing on-premises. Their reasons vary, from wanting a predictable cost from month to month to feeling safer with extra capacity ready and waiting.
This was further complicated by the fact that the public cloud isn’t suited to all workloads, whether for latency, legislative or cost reasons, or something else altogether. In these situations, organisations were still left needing on-premises hardware that they would, once again, often overprovision.
In fact, research from Hewlett Packard Enterprise (HPE) carried out in late 2020 found that two-thirds of organisations are using less than 60% of their infrastructure, whether that’s on-premises in a private cloud, or in a public cloud. This can end up being more expensive than you might think – each year, overprovisioning across all types of infrastructure costs companies a whopping $15 million (£10 million) in misaligned resources.
But this isn’t to say cloud doesn’t have its advantages. Elastic compute in particular is one of the standout features of public cloud computing (even if overprovisioning does undermine it somewhat) and public cloud still attracts customers some 20 years after it was first developed.
The question, then, is what can be done to marry the advantages of scalable, on-demand computing offered by the cloud with the security offered by on-premises infrastructure, all while eliminating the problem of overprovisioning.
The data centre on demand
Thankfully, there is an answer to this question: on-demand, on-premises infrastructure. This new trend, pioneered by HPE with its GreenLake portfolio, is gaining traction among businesses of all sizes and across all verticals, from the biggest enterprises down to SMBs.
On-demand infrastructure offers the best of both worlds between cloud and on-premises. It is scalable, meaning capacity can be turned up and down as needed, right down to the level of individual cores with HPE GreenLake Silicon on Demand.
With infrastructure on demand, you know that the capacity is there and ready to use when you need it, taking away many of the concerns that cause overprovisioning in the first place. As with public cloud, however, you only pay for the capacity you use – if you only use 60% of the storage you have available, for example, you only pay for that 60%. Should you need to scale up, whether temporarily or permanently, you can do so quickly and easily through the GreenLake Central console.
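To make the arithmetic concrete, here is a minimal sketch in Python comparing a fixed, overprovisioned spend with a metered, pay-per-use spend at roughly 60% utilisation. The capacity and unit-cost figures are illustrative assumptions for the sake of the comparison, not actual GreenLake pricing.

```python
# A minimal sketch comparing a fixed, overprovisioned spend with a
# consumption-based model. All figures are illustrative assumptions,
# not HPE GreenLake pricing.

PROVISIONED_TB = 500          # capacity installed "just in case"
COST_PER_TB_MONTH = 20.0      # assumed unit cost, same under both models

# Assumed monthly usage, echoing the ~60% utilisation cited above
monthly_used_tb = [290, 300, 310, 305, 295, 320]

# Overprovisioned: you pay for everything installed, used or not
fixed_cost = PROVISIONED_TB * COST_PER_TB_MONTH * len(monthly_used_tb)

# Pay-per-use: you pay only for the capacity actually consumed
metered_cost = sum(used * COST_PER_TB_MONTH for used in monthly_used_tb)

print(f"Overprovisioned spend: ${fixed_cost:,.0f}")
print(f"Pay-per-use spend:     ${metered_cost:,.0f}")
print(f"Saving:                ${fixed_cost - metered_cost:,.0f}")
```

At around 60% utilisation, the metered model costs roughly 60% of the fixed one, which is the whole point: the safety buffer is still physically there, but idle capacity no longer shows up on the bill.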
What’s more, by having the necessary hardware located in your own data centre, there’s none of the network latency that still bedevils public cloud. This means you can use a consumption-based model for provisioning infrastructure even for edge workloads, where latency often causes the greatest problems.
For organisations that prefer not to manage their own infrastructure at all, GreenLake services are also available through HPE managed service partners, freeing up the IT department to do other valuable work around the business.
GreenLake was the first service of its kind when it was rolled out some five years ago, and continues to pioneer in new areas, such as Silicon on Demand and the incorporation of AI and machine learning into its offerings.
As far as HPE is concerned, this model of on-demand infrastructure provisioning, which eliminates the problems of overprovisioning associated with both traditional on-premises and public cloud, is where all business IT is heading. Indeed, CEO Antonio Neri has described GreenLake as the future of HPE, saying: “GreenLake should be synonymous to HPE.
“This will be our leading product, leading offer, our leading experience, where everything else underneath is part of that experience, whether it's connectivity as a service. Whether it’s... data services, whether it's load optimisation, whether it’s [artificial intelligence], machine learning – all of that caters to that platform.”
With infrastructure on demand becoming ever more comprehensive, both in what is available and how it’s delivered, there has never been a better time to stop overprovisioning and start right-sizing in a way that guarantees extra capacity will be available when needed, without having to worry about latency or data sovereignty.
Find out how HPE GreenLake can deliver more efficient solutions for your business