AWS takes on proprietary relational database giants with Aurora
Cloud giant claims relational database engine combines proprietary systems' performance with open source pricing
Amazon Web Services (AWS) is taking on the likes of Oracle and IBM with the preview release of its cloud-based relational database engine Aurora.
The service was announced during the opening keynote of the cloud giant’s re:Invent partner and customer conference in Las Vegas earlier today by AWS senior vice president Andy Jassy.
During his time on stage, he said the problem with most proprietary relational databases is that they’re expensive to use, carry a high risk of vendor lock-in, and can hamper enterprise cloud migration plans.
As a result, it’s not uncommon for enterprises to seek out open source alternatives, he said, but the performance they provide often pales in comparison.
“It’s pretty rare that we meet enterprises who don’t ask us to help them shift from what they’re doing right now [with a proprietary platform] to something that’s a little more customer-friendly,” he said.
“[But] to get the sort of performance from these open source database engines you get from proprietary databases is really hard.”
Aurora, he claims, represents the best of both worlds: a “commercial-grade” database engine that delivers the same level of performance as its proprietary competitors, but at an open source-type price point.
As such, Jassy said the offering would be sold at one tenth of the cost of competing commercial database solutions.
Aurora has been in development at AWS for three years and reportedly boasts five times the performance of a “typical” MySQL implementation. It is also compatible with the open source database.
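To illustrate what that compatibility means in practice, here is a minimal sketch of an ordinary MySQL client connecting to an Aurora endpoint; the hostname, credentials and database name below are placeholders for illustration, not real AWS values.

```python
# Minimal sketch: because Aurora speaks the MySQL wire protocol, a standard
# MySQL driver can connect to it unchanged. Endpoint and credentials are
# hypothetical placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="my-aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="admin",
    password="example-password",
    database="appdb",
)

cur = conn.cursor()
cur.execute("SELECT VERSION()")  # plain MySQL statement, no Aurora-specific syntax
print(cur.fetchone())

cur.close()
conn.close()
```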
Anurag Gupta, general manager of Amazon Aurora, told attendees the offering is designed to provide AWS customers with an alternative to the “expensive” and “monolithic” on-premises relational databases of years gone by.
“Databases have been around for a long time and when they first got started they were really pretty innovative, introducing things like SQL and transactions... but they were expensive, monolithic software running off expensive mainframe hardware,” Gupta explained.
“Back then that was the way all software was written. The problem is, 40 years later, databases are completely ubiquitous, but they’re still built around that same mainframe mindset [and it’s] still super complicated and still super expensive.”
This kind of attitude to software design has no place in the cloud era, according to AWS.
“There is a better way, and that’s the same way you [the audience] have been building your own services and applications. You don’t build monolithic software... you use scalable, multi-tenancy components... and that’s what we’ve done,” he added.
He also talked up the system’s performance on stage, declaring it can handle six million inserts a minute.
“That’s a lot faster than stock MySQL running on the largest instances available from AWS,” he added.
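Six million inserts a minute works out to roughly 100,000 rows a second. As a rough illustration only, and not the benchmark methodology behind AWS’s quoted figure, the sketch below simply counts how many single-row inserts a MySQL-compatible endpoint completes in one minute; the endpoint, credentials and table are hypothetical.

```python
# Rough insert-throughput check against a MySQL-compatible endpoint.
# Illustrative sketch only: single connection, single-row inserts,
# placeholder credentials -- not AWS's benchmark harness.
import time
import mysql.connector

conn = mysql.connector.connect(
    host="my-aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="admin",
    password="example-password",
    database="appdb",
)
cur = conn.cursor()
cur.execute(
    "CREATE TABLE IF NOT EXISTS bench "
    "(id INT AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(64))"
)

rows = 0
deadline = time.time() + 60  # measure for one minute
while time.time() < deadline:
    cur.execute("INSERT INTO bench (payload) VALUES ('x')")
    rows += 1
conn.commit()

print(f"{rows} single-row inserts completed in one minute")

cur.close()
conn.close()
```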
Furthermore, any data processed by the system is backed up to Amazon’s S3 cloud storage service and replicated six ways across three availability zones to guarantee the service’s durability.
Aurora is available for AWS users to trial on a preview basis from today.