
Amazon cloud fell from sky after botched network upgrade

'Catholic penance' awards 10 days of credit

Amazon has apologized for the extended outage that hit its AWS infrastructure cloud late last week, providing an extensive explanation for the problem and extending 10 days of credit to customers who were using data stores in the portion of the service where the problem originated.

The outage was sparked, the company said, when engineers attempted to upgrade network capacity in a single "availability zone" in the service's East Region and network traffic was shifted to the wrong router. AWS is divided into multiple geographical regions, and each region may be sub-divided into zones designed to be insulated from each other's failures. But for several hours, the outage spread to other availability zones.
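
For those unfamiliar with the jargon, the short sketch below shows how that layout looks to a customer. It uses Amazon's current boto3 Python SDK, which post-dates this outage, and simply lists the availability zones within a region; it is illustrative only, not part of Amazon's account of the incident.

    import boto3

    # Hypothetical sketch: list the availability zones of the US East Region
    # with the modern boto3 SDK. Each zone is intended to be an isolated
    # slice of the region that fails independently of its siblings.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(zone["ZoneName"], zone["State"])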

Credits will automatically be provided to customers using data stores in the original availability zone, whether or not their resources or applications were affected by the outage, Amazon said. During the outage, some customer data was lost, but Amazon did not explain how this was allowed to happen.

The company did promise to improve its communication during future outages and to offer additional online tools that customers can use to monitor the health of their resources. Amidst the outage, Amazon was heavily criticized for offering relatively little information to the outside world. "We would like our communications to be more frequent and contain more information," the company said in an outage "post mortem".

"We understand that during an outage, customers want to know as many details as possible about what’s going on, how long it will take to fix, and what we are doing so that it doesn’t happen again."

Justin Santa Barbara, founder of FathomDB, a startup that uses AWS, was among those who criticized Amazon during the outage, taking particular issue with the fact that the problems spread across multiple availability zones. Following the release of Amazon's post mortem, Santa Barbara welcomed some of the company's decisions, but he still feels the situation was mishandled.

"Judging by the length [of the post mortem], we can understand what took them so long. I am sure everyone would have appreciated more details during the outage itself, so that we could make an informed restore vs. ride it out decision, rather than continually being told 'just a few more minutes' until we lose faith," he told The Register.

"The length of their communication reminds me of a Catholic penance, yet it contains surprisingly little actionable information. Important information, such as exactly how it is that data was lost, or the normal failure models for data in a system that appears to be constantly failing, are missing."

According to Amazon, the problem began at 12:47 am Pacific on April 21 when engineers upgraded the capacity of the primary network in a single availability zone in the AWS East Region, located in Northern Virginia. Amazon Web Services (AWS) offers on-demand access to readily scalable computing resources, including processing power and storage. One service, known as Elastic Block Storage (EBS), provides storage volumes that customers can move between virtual server instances on the company's primary Elastic Compute Cloud service, and EBS was at the heart of the outage.
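
As a rough illustration of that model, the following sketch uses the modern boto3 SDK (not available at the time) to create an EBS volume in a single availability zone and attach it to an EC2 instance in the same zone. The instance ID, device name, and volume size are placeholders rather than details from the incident.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create an EBS volume in a single availability zone of the East Region.
    # A volume lives in exactly one zone; size and type here are arbitrary.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,          # GiB
        VolumeType="gp2",
    )

    # Wait until the volume is ready before attaching it.
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    # Attach the volume to an EC2 instance in the same zone. The instance ID
    # and device name are placeholders.
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",
        Device="/dev/sdf",
    )

The point of the example is simply that a volume and the instance it backs sit in the same zone, which is why a zone-wide EBS failure strands the applications relying on it.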

When upgrading network capacity, Amazon said, it usually shifts traffic from one router in its primary EBS network to a second router in the same network. But on Thursday, traffic was incorrectly routed to another, lower-capacity EBS network, and this second network could not handle the extra load.

"Many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in its cluster," Amazon said. "Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."

During the outage, Amazon referred to this only as a "network event".

The mistake meant that many EBS nodes could not connect to their replicas, and they started searching for free space where they could re-mirror their data. With so many volumes affected, not all could find available space.

"Because the issue affected such a large number of volumes concurrently, the free capacity of the EBS cluster was quickly exhausted, leaving many of the nodes 'stuck' in a loop, continuously searching the cluster for free space," the company said. "This quickly led to a 're-mirroring storm', where a large number of volumes were effectively 'stuck' while the nodes searched the cluster for the storage space it needed for its new replica."

Thirteen per cent of EBS volumes in the availability zone were affected.
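
Amazon's description of the "re-mirroring storm" can be illustrated with a toy Python model, which is emphatically not the company's actual implementation: a cluster with a small amount of spare capacity is hit by a wave of orphaned volumes all hunting for somewhere to re-mirror at once. The node counts and capacities below are invented; only the rough 13 per cent figure comes from Amazon's account.

    import random

    # Toy model of the failure mode Amazon describes; numbers are invented.
    NODES = 100
    CAPACITY_PER_NODE = 10       # replica slots per node
    VOLUMES_PER_NODE = 9         # most capacity already in use
    AFFECTED_FRACTION = 0.13     # roughly the figure Amazon reported

    free_slots = {n: CAPACITY_PER_NODE - VOLUMES_PER_NODE for n in range(NODES)}

    # Volumes whose replicas were lost when their nodes were cut off.
    orphaned = [
        (node, vol)
        for node in range(NODES)
        for vol in range(VOLUMES_PER_NODE)
        if random.random() < AFFECTED_FRACTION
    ]

    stuck = []
    for node, vol in orphaned:
        # Each orphaned volume searches the cluster for somewhere to re-mirror.
        targets = [n for n, free in free_slots.items() if n != node and free > 0]
        if targets:
            free_slots[random.choice(targets)] -= 1
        else:
            # Free capacity exhausted: the volume is 'stuck', retrying
            # endlessly, which is the 're-mirroring storm' in miniature.
            stuck.append((node, vol))

    print(len(orphaned), "orphaned volumes;", len(stuck), "stuck with nowhere to go")

In a typical run of this sketch the spare slots run out well before every orphaned volume finds a home, which is the "stuck" state Amazon describes.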
