Original URL: http://www.theregister.co.uk/2011/04/29/amazon_ec2_outage_post_mortem/

Amazon cloud fell from sky after botched network upgrade

'Catholic penance' awards 10 days of credit

By Cade Metz

Posted in Cloud, 29th April 2011 18:42 GMT

Amazon has apologized for the extended outage that hit its AWS infrastructure cloud late last week, providing an extensive explanation for the problem and extending 10 days of credit to customers who were using data stores in the portion of the service where the problem originated.

The outage was sparked, the company said, when engineers attempted to upgrade network capacity in a single "availability zone" in the service's East Region and network traffic was shifted to the wrong router. AWS is divided into multiple geographical regions, and each region is divided into availability zones designed to be insulated from one another's failures. But for several hours, the outage spread to other availability zones in the region.

Credits will automatically be provided to customers who were using data stores in the original availability zone, whether or not their resources or applications were affected by the outage, Amazon said. During the outage, some customer data was lost, but Amazon did not explain how this was allowed to happen.

The company did promise to improve its communication during future outages and offer additional online tools customers can use to monitor the health of their resources. Amidst the outage, Amazon was heavily criticized for offering relatively little information to the outside world. "We would like our communications to be more frequent and contain more information," the company said in an outage "post mortem".

"We understand that during an outage, customers want to know as many details as possible about what’s going on, how long it will take to fix, and what we are doing so that it doesn’t happen again."

Justin Santa Barbara, founder of FathomDB, a startup that uses AWS, was among those who criticized Amazon during the outage, taking particular issue with the fact that the problems spread across multiple availability zones. Following the release of Amazon's post mortem, Santa Barbara welcomed some of the company's decisions, but said he still feels the situation was mishandled.

"Judging by the length [of the post mortem], we can understand what took them so long. I am sure everyone would have appreciated more details during the outage itself, so that we could make an informed restore vs. ride it out decision, rather than continually being told 'just a few more minutes' until we lose faith," he told The Register.

"The length of their communication reminds me of a Catholic penance, yet it contains surprisingly little actionable information. Important information, such as exactly how it is that data was lost, or the normal failure models for data in a system that appears to be constantly failing, are missing."

According to Amazon, the problem began at 12:47 am Pacific on April 21 when engineers upgraded the capacity of the primary network in a single availability zone in the AWS East Region, located in Northern Virginia. Amazon Web Services (AWS) offers on-demand access to readily scalable computing resources, including processing power and storage. One service, known as Elastic Block Storage (EBS), provides storage volumes that customers can move between virtual server instances on the company's primary Elastic Compute Cloud (EC2) service, and EBS was at the heart of the outage.
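EBS is worth unpacking, since it sits at the center of what follows. As a rough illustration of the attach-a-volume workflow the article describes (this sketch uses the present-day boto3 Python SDK, which is not mentioned in the article, and the zone, instance ID and device name are placeholders):

# Hypothetical sketch, not taken from Amazon's post mortem: create an EBS
# volume in one availability zone and attach it to an EC2 instance there.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An EBS volume lives in exactly one availability zone and can only be
# attached to instances running in that same zone.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the volume to a (placeholder) running instance in the same zone.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)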

When upgrading network capacity, Amazon said, it usually shifts traffic from one router in its primary EBS network to a second router in the same network. But on Thursday, traffic was incorrectly routed to another, lower-capacity EBS network, and this second network could not handle the extra load.

"Many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in its cluster," Amazon said. "Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."

During the outage, Amazon referred to this only as a "network event".

The mistake meant that many EBS nodes could not connect to their replicas, and they started searching for free space where they could re-mirror their data. With so many volumes affected, not all could find available space.

"Because the issue affected such a large number of volumes concurrently, the free capacity of the EBS cluster was quickly exhausted, leaving many of the nodes 'stuck' in a loop, continuously searching the cluster for free space," the company said. "This quickly led to a 're-mirroring storm', where a large number of volumes were effectively 'stuck' while the nodes searched the cluster for the storage space it needed for its new replica."

Thirteen per cent of EBS volumes in the availability zone were affected.

Clouds as dominos

This caused a kind of domino effect. The degraded EBS cluster couldn't handle API requests to create new volumes, and as those requests backed up in a queue, the EBS service couldn't handle API requests from other availability zones either. At 2:40 am, engineers disabled all requests to create new volumes in the affected availability zone, and ten minutes later, the company said, requests from other zones were operating normally.

But then EBS nodes in the affected zone started failing, and at about 5:40 am, this again caused problems in other zones. Amazon said that within about 3 hours, engineers began to lower error rates and latencies in those other zones and that by 12:04 pm, they had isolated the problem in the original zone. For about 11 hours that morning, users were also unable to launch new EBS-backed EC2 instances in the affected zone.

Just after noon, about 13 per cent of EBS volumes in the original zone remained "stuck" and EBS APIs remained disabled. By 12:30 pm on April 22 (the next day), all but 2.2 per cent of EBS volumes had been restored. By 2 pm on April 24, all but 0.07 per cent had been restored, and those remaining volumes, Amazon said, won't be restored. The company did not explain why.

The outage also affected Amazon's Relational Database Service (RDS), as RDS relies on EBS for storage.

Amazon said it will automatically provide customers with 10 days of credit equal to 100 per cent of their usage of EBS volumes, EC2 instances and RDS database instances that were running in the affected availability zone at the time of the outage. It did not mention credits for services operating in the other availability zones.
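As a rough worked example of what that credit amounts to (the hourly rates below are invented for illustration and are not Amazon's 2011 pricing):

# Hypothetical arithmetic only: the rates are made up, not Amazon's pricing.
# The credit covers 10 days at 100 per cent of whatever was running in the
# affected availability zone when the outage began.
hourly_rates = {
    "ec2_instance": 0.34,   # assumed $/hour for one instance
    "ebs_volume":   0.02,   # assumed $/hour for one provisioned volume
    "rds_instance": 0.44,   # assumed $/hour for one database instance
}

credit = sum(rate * 24 * 10 for rate in hourly_rates.values())
print(f"credit for this hypothetical deployment: ${credit:.2f}")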

The company did say that availability zones are physically separate from each other, but did not elaborate. It's unclear whether they're in separate data centers. In the post mortem, the company also said it intends to improve the design of the availability zones so that an EBS outage like this cannot spread from one zone to another.

Amazon also promised to expose additional APIs that will allow customers to more easily determine whether their instances have been affected by an outage. This move was applauded by FathomDB's Santa Barbara, but he believes that the world should consider alternatives to Amazon, which pioneered the infrastructure cloud market and controls the largest market share.
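The post mortem does not name the APIs Amazon planned to expose. As a sketch of the kind of self-service health check being described, a customer can poll instance and volume status along these lines (the present-day boto3 SDK and the placeholder resource IDs are assumptions, not details from the article):

# Sketch of a self-service health check; these are the generally available
# status calls, not necessarily the APIs Amazon promised in 2011, and the
# resource IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Instance-level checks: system reachability and instance reachability.
statuses = ec2.describe_instance_status(
    InstanceIds=["i-0123456789abcdef0"], IncludeAllInstances=True
)
for s in statuses["InstanceStatuses"]:
    print(s["InstanceId"], s["SystemStatus"]["Status"], s["InstanceStatus"]["Status"])

# Volume-level checks: "ok" versus "impaired".
for v in ec2.describe_volume_status(VolumeIds=["vol-0123456789abcdef0"])["VolumeStatuses"]:
    print(v["VolumeId"], v["VolumeStatus"]["Status"])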

"Amazon has been open about admitting to failure, and has promised to expose more of the private APIs so that customers and partners can be better able to help themselves in future outages, without relying on AWS to do so," he said.

"This is reassuring, though I believe that in the long term customers will be looking at other cloud operators and technologies, for redundancy and different philosophies in terms of timely and open customer communication, but also in terms of relying on well-understood technologies and on the broader community of engineering talent rather than just those at AWS." ®