Cost of critical application failures revealed
Financial losses are heightened during periods of peak customer demand

On average, critical application failures cost enterprises between $500,000 and $1 million per hour, according to research recently published by DataStax.
Sudden surges in customer demand are cited as playing a key role in failures, with existing IT infrastructure unable to offer the scalability necessary to effectively manage such events.
By critical applications, we mean a software program, or suite of related programs, that must function continuously for a business, or a segment of a business, to succeed. If a critical application experiences downtime, however brief, financial consequences follow, as DataStax have quantified.
Failures can result from various causes, from bugs embedded in code, to erroneous deployment features and hardware failures. However, consumer trends are an increasingly prevalent catalyst for application shutdown, the spurts of sudden demand threatening to overload technology stacks.
Sudden rises in demand can be reliably pin-pointed on the calendar, with several dates each year certain to bring spikes in consumer spending both online and in-store. Most sectors are affected, particularly around traditional consumer sales events such as Black Friday, and around newer occasions such as Cyber Monday, a term coined in 2005.
One notable example of the damage caused by critical application failure came on Prime Day. The e-commerce giant Amazon created the annual shopping holiday to offer a plethora of deals and packages exclusively to its Amazon Prime users. In 2018, however, Prime Day was marred by service disruptions caused by heavy online traffic, preventing users from finalising purchases.
In 2017, Amazon generated an estimated $1 billion in sales from its 30-hour Prime Day event, or roughly $33 million per hour. However brief the 2018 outage, downtime on that scale equates to mammoth losses in revenue.
Application failures during peak periods are not only financially damaging in the short term, but can also be destabilising in the long term. DataStax's report found that 53% of potential customers leave a website if performance lags by just three seconds. With an abundance of choice available, customers take this as their cue to jump to an alternative, on the high street by popping next door, or online with a flick of the wrist and a click of a button.
Retailers need to ensure they provide positive customer experiences that encourage smooth and repeat purchases, but this is difficult given fluctuating demand. If infrastructure is sized to handle the very peak of the workload, the steep and sustained drops in demand throughout the rest of the year leave an expensive surplus of capacity.
Scalability is the key. By investing in a technology stack that can scale to meet demand, both surpluses and shortfalls in capacity can be eliminated. To further minimise risk, database infrastructure can be simplified to lower stack complexity, distributed to improve uptime and elasticity, and open sourced to reduce security and operational risk.
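The elastic-scaling idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function and parameter names are assumptions, not drawn from DataStax's report): capacity is recalculated from observed load against a per-instance target, so the fleet grows ahead of a Black Friday spike and shrinks again afterwards, avoiding both overload and year-round surplus.

```python
import math


def desired_instances(current_load, target_load_per_instance, min_instances=1):
    """Return the instance count needed to keep per-instance load near target.

    current_load: observed demand (e.g. requests per second across the service).
    target_load_per_instance: the load one instance can comfortably handle.
    min_instances: a floor so the service never scales to zero.
    """
    needed = math.ceil(current_load / target_load_per_instance)
    return max(min_instances, needed)


# A traffic spike scales the fleet up...
peak = desired_instances(current_load=900, target_load_per_instance=100)

# ...and the post-event lull scales it back down, releasing surplus capacity.
quiet = desired_instances(current_load=150, target_load_per_instance=100)
```

Real autoscalers (such as those offered by the major cloud platforms) add smoothing, cooldown periods, and scaling limits on top of this basic calculation, but the core loop is the same: measure, compare to target, resize.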
