
Amazon outage spans clouds 'insulated' from each other

Not what it says on the tin


It's not surprising that Amazon's infrastructure cloud has gone on the fritz. This is what happens to internet services from time to time. What is surprising – or at least more troubling – is that today's outage affected multiple "availability zones" across the service.

"Availability Zones," according to Amazon, "are distinct locations that are engineered to be insulated from failures in other Availability Zones."

It would seem they don't exactly work as they're designed.

Amazon has not responded to our inquiries about the outage. The company is typically tight-lipped about such things. But in brief messages posted to its Amazon Web Services (AWS) status page, the company acknowledged that the outage affected multiple availability zones in the service's "US East" region.

Amazon's Elastic Compute Cloud (EC2) serves up on-demand processing power from multiple geographic regions. Two are in the US – an "East" region located in Northern Virginia, and a "West" located in Northern California – and others are now up and running in Europe and Asia.

Each region may then be divided into multiple availability zones. "By launching instances in separate Availability Zones," Amazon says, "you can protect your applications from failure of a single location." But today's outage – which began around 1:41am Pacific time and also affected the use of Amazon's Elastic Block Store (EBS) service – spread across multiple zones in the East region.
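To put that advice in concrete terms, here is a minimal sketch of the multi-zone pattern Amazon recommends – one instance launched into each of two Availability Zones in the same region – using the modern boto3 Python SDK rather than anything Amazon shipped at the time, and with a placeholder AMI ID:

    import boto3

    # Minimal sketch of the multi-zone pattern: one instance per Availability Zone.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    for zone in ("us-east-1a", "us-east-1b"):
        ec2.run_instances(
            ImageId="ami-12345678",                # placeholder AMI ID, not a real image
            InstanceType="m1.small",
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},  # pin each copy to its own zone
        )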

Amazon's EC2 service level agreement guarantees 99.95 per cent availability for each region if you're operating in multiple availability zones.
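A quick back-of-the-envelope calculation – ours, not Amazon's – shows what that 99.95 per cent promise actually buys you:

    # What does a 99.95 per cent availability guarantee allow?
    sla = 0.9995

    minutes_per_month = 30 * 24 * 60      # 43,200 minutes in a 30-day month
    hours_per_year = 365 * 24             # 8,760 hours in a year

    print((1 - sla) * minutes_per_month)  # ~21.6 minutes of downtime per month
    print((1 - sla) * hours_per_year)     # ~4.4 hours of downtime per year

Today's outage, which began before 2am Pacific and was still rumbling on twelve hours later, has eaten that monthly allowance many times over for the sites it took down.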

The outage brought down multiple public websites using Amazon's service, including Foursquare, Reddit, Quora, and Hootsuite. According to Amazon, the outage was caused by a "network event" that caused the service to "re-mirror" a large number of EBS volumes in the East region. "This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes," the company said at 8:54am Pacific.

"Additionally, one of our internal control planes for EBS has become inundated such that it's difficult to create new EBS volumes and EBS backed instances. We are working as quickly as possible to add capacity to that one Availability Zone to speed up the re-mirroring, and working to restore the control plane issue. We're starting to see progress on these efforts, but are not there yet. We will continue to provide updates when we have them." This also affected the use of EC2 instances.

As of 1:48pm Pacific, EBS problems persisted in the zone where they originated, but Amazon said all other zones had been restored.

Within its EC2 regions, Amazon offers "inexpensive, low latency network connectivity" between availability zones, so many customers have chosen this option rather than spreading their sites across multiple regions. If you straddle regions, you have to send traffic across the public internet. According to one user, this is "comparatively expensive, slow and unreliable".

"So if you're playing the AWS game and setting up a master/slave MySQL database (to take a highly pertinent example), what you do is you put the master and the slave in the same Region, but make sure they're in different Availability Zones," Justin Santa Barbara, the founder of FathomDB, wrote this morning.

"You don't normally put them in separate Regions, otherwise you have to cross the expensive, slow and unreliable links between Regions, and you'll likely have more problems trying to keep your databases in sync. You are at risk e.g. if a hurricane hits the eastern seaboard and destroys the datacenter, but short of that you should be OK - as long as AWS does what they promised."

But today, it didn't.

Amazon has never explained how its availability zones are designed. They may be in the same data center. They may not. And it's unclear how they're designed to prevent simultaneous outages. Whatever the case, they didn't behave as Amazon said they would. "AWS broke their promises on the failure scenarios for Availability Zones. It means that AWS have a common single point of failure (assuming it wasn't a winning-the-lottery-while-being-hit-by-a-meteor-odds coincidence)," Santa Barbara wrote.

"The sites that are down were correctly designing to the 'contract'; the problem is that AWS didn't follow their own specifications. Whether that happened through incompetence or dishonesty or something a lot more forgivable entirely, we simply don't know at this point. But the engineers at quora, foursquare and reddit are very competent, and it's wrong to point the blame in that direction."

Amazon has said that after the problems are corrected, it will post a "postmortem" describing the problem in detail. Hopefully, it will finally explain how availability zones are separated. Not that they really are. ®

Update: This story has been updated to show that Amazon's SLA guarantees 99.95% uptime if you're running across multiple availability zones. Previously, it said that the guarantee was for the entire region.


