All hail the ‘supercloud’
The supercloud model would do wonders for IT, but is the truly open and flexible cloud really a possibility?
There have been some high-profile cloud outages of late.
In the madness that ensued, those with hyperbolic tendencies rushed to criticise the very nature of cloud computing and the relinquishing of control it forces upon customers.
Amazon, Google and Microsoft all know how infuriating both the outages themselves and the subsequent pejorative reaction can be.
Whilst the need for greater redundancy is more than apparent, everyone knows internet-based services are likely to fail at one time or another. Whether one of Zeus's bolts has taken out a data centre power supply, or a botched attempt at improving services has caused carnage - as in the case of Google last week - sometimes there is little that can be done.
Cloud users may be infuriated when Google Docs, Office 365 or EC2 go down, but they need to have systems in place for when blackouts occur. Yet they will still lose out on the benefits of the cloud whilst an outage is ongoing.
Besides greater resiliency, added redundancy and a smarter approach on the part of the customer, is there anything we can do to ease the pain of an outage aftermath? Enter the supercloud.
The what now?
Admittedly, the supercloud does sound like overblown marketing puffery, but underneath the name is a concept which could be the panacea for all our cloudy woes.
Nathan Pearce of F5 Networks, the man who appears to have coined the term, explained how the concept of the 'supercloud' entails using more than one vendor, moving applications between them and drawing on resources from different vendors based on usage cost, time of day, or any other factor that determines uptime or expense.
"Imagine, if you will, that you could run your applications in your own data centre then, when you get a surge in users, burst your capacity to your cloud vendor, whether it is Amazon, Rackspace or any other cloud provider," Pearce told IT Pro.
"What if you could detect failures in your infrastructure or your provider's infrastructure, and re-provision to another cloud provider? The ultimate aim should be to keep your applications available."
In the world of the 'supercloud', the application delivery controller (ADC) would determine what data or apps would go where, and at what time.
"For example, a supercloud could be configured so that one server functions as an overflow' manager. In the event of a private cloud getting close to capacity, the overflow' host would take on the extra burden," Pearce continued.
"Similarly, by placing an application delivery controller (such as F5's BIG IP traffic manager) between clouds, IT teams could examine the resources which each application has, and if necessary, reduce connectivity and resource access to tier 2 and 3 applications. Consequently, tier 1 applications would have room to flex', enabling them to take more resources during busy times."