Four-hour Google Cloud outage blamed on 'network congestion'
Multiple Google products and third-party services were taken offline on Sunday night
Google Cloud Platform (GCP) suffered a significant outage on Sunday night that lasted nearly four hours, knocking offline Google services including G Suite and YouTube, as well as third-party services that rely on the platform.
The issue was first noted on the company's cloud status dashboard at 8.25pm BST on 2 June as a Google Compute Engine problem.
Shortly afterwards, however, reports of problems with Google Cloud, YouTube and other services began appearing on Twitter, and by 8.59pm the dashboard acknowledged it was a "wider network issue".
By 12.09am on 3 June the issue had been resolved, but little detail was given as to what happened beyond "high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, G Suite and YouTube".
However, someone claiming to work on Google Cloud (but currently on holiday) posted a message on Hacker News saying: "It's disrupting everything, including unfortunately the tooling we usually use to communicate across the company about outages."
"There are backup plans, of course, but I wanted to at least come here to say: you're not crazy, nothing is lost ... but there is serious packet loss at the least," they added.
In a statement, Google told Cloud Pro: "We will conduct a post mortem and make appropriate improvements to our systems to prevent this from happening again. We sincerely apologise to those that were impacted by yesterday's issues. Customers can always find the most recent updates on our systems on our status dashboard."
Some, however, have questioned what exactly Google meant by "high levels of network congestion in the eastern USA".
Clive Longbottom, co-founder of analyst house Quocirca, told Cloud Pro: "If this was the case, a lot more than GCP would have been impacted: this does not seem to have been the case. As such, it would appear that what Google possibly means is that it was excessive network traffic in its own environment in the Eastern USA."
He suggested that the excessive network traffic was potentially caused by something internal.
"This could be something like a memory leak on an app going crazy, or (like AWS some time back) human error through a script causing a looping command bringing chaos to the environment."
This doesn't mean that organisations should abandon cloud for business-critical workloads, however. Owen Rogers, research director at the digital economics unit of 451 Research, told Cloud Pro: "Four hours is quite a long time ... but it's a tricky issue, because outages are going to happen now and then, and all customers can do is to build resiliency such that if an outage does occur, they have a backup.
"Using multiple availability zones and regions is a must, but if applications are business critical, multi-cloud should be considered. Yes, it's more complex to manage; yes, you'll have to train more people. But if your company is going to go bust because of a few hours of outage, it is an investment worth making. It appears some hyperscalers are more resilient than others, but even the best are likely to slip up occasionally."