
Amazon outage spans clouds 'insulated' from each other

Not what it says on the tin

It's not surprising that Amazon's infrastructure cloud has gone on the fritz. This is what happens to internet services from time to time. What is surprising – or at least more troubling – is that today's outage affected multiple "availability zones" across the service.

"Availability Zones," according to Amazon, "are distinct locations that are engineered to be insulated from failures in other Availability Zones."

It would seem they don't exactly work as they're designed.

Amazon has not responded to our inquiries about the outage. The company is typically tight-lipped about such things. But in brief messages posted to its Amazon Web Services (AWS) status page, the company acknowledged that the outage affected multiple availability zones in the service's "US East" region.

Amazon's Elastic Compute Cloud (EC2) serves up on-demand processing power from multiple geographic regions. Two are in the US – an "East" region located in Northern Virginia, and a "West" located in Northern California – and others are now up and running in Europe and Asia.

Each region may then be divided into multiple availability zones. "By launching instances in separate Availability Zones," Amazon says, "you can protect your applications from failure of a single location." But today's outage – which began around 1:41am Pacific time and also affected the use of Amazon's Elastic Block Store (EBS) service – spread across multiple zones in the East region.

Amazon's EC2 service level agreement guarantees 99.95 per cent availability for each region if you're operating in multiple availability zones.
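
For context, 99.95 per cent availability works out to roughly 4.4 hours of permitted downtime over a year (0.05 per cent of 8,760 hours).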

The outage brought down multiple public websites using Amazon's service, including Foursquare, Reddit, Quora, and Hootsuite. According to Amazon, the outage was caused by a "network event" that caused the service to "re-mirror" a large number of EBS volumes in the East region. "This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes," the company said at 8:54am Pacific.

"Additionally, one of our internal control planes for EBS has become inundated such that it's difficult to create new EBS volumes and EBS backed instances. We are working as quickly as possible to add capacity to that one Availability Zone to speed up the re-mirroring, and working to restore the control plane issue. We're starting to see progress on these efforts, but are not there yet. We will continue to provide updates when we have them." This also affected the use of EC2 instances.

As of 1:48pm Pacific, EBS problems persisted in the zone where they originated, but Amazon said all other zones had been restored.

Within its EC2 regions, Amazon offers "inexpensive, low latency network connectivity" between availability zones, so many customers have chosen this option rather than spreading their sites across multiple regions. If you straddle regions, you have to send traffic across the public internet. According to one user, this is "comparatively expensive, slow and unreliable".

"So if you're playing the AWS game and setting up a master/slave MySQL database (to take a highly pertinent example), what you do is you put the master and the slave in the same Region, but make sure they're in different Availability Zones," Justin Santa Barbara, the founder of FathomDB, wrote this morning.

"You don't normally put them in separate Regions, otherwise you have to cross the expensive, slow and unreliable links between Regions, and you'll likely have more problems trying to keep your databases in sync. You are at risk e.g. if a hurricane hits the eastern seaboard and destroys the datacenter, but short of that you should be OK - as long as AWS does what they promised."

But today, it didn't.

Amazon has never explained how its availability zones are designed. They may be in the same data center. They may not. And it's unclear how they're designed to prevent simultaneous outages. Whatever the case, they didn't behave as Amazon said they would. "AWS broke their promises on the failure scenarios for Availability Zones. It means that AWS have a common single point of failure (assuming it wasn't a winning-the-lottery-while-being-hit-by-a-meteor-odds coincidence)," Santa Barbara wrote.

"The sites that are down were correctly designing to the 'contract'; the problem is that AWS didn't follow their own specifications. Whether that happened through incompetence or dishonesty or something a lot more forgivable entirely, we simply don't know at this point. But the engineers at quora, foursquare and reddit are very competent, and it's wrong to point the blame in that direction."

Amazon has said that after the problems are corrected, it will post a "postmortem" describing the problem in detail. Hopefully, it will finally explain how availability zones are separated. Not that they really are. ®

Update: This story has been updated to show that Amazon's SLA guarantees 99.95% uptime if you're running across multiple availability zones. Previously, it said that the guarantee was for the entire region.
