Amazon cloud fell from sky after botched network upgrade

'Catholic penance' awards 10 days of credit

Amazon has apologized for the extended outage that hit its AWS infrastructure cloud late last week, providing an extensive explanation for the problem and extending 10 days of credit to customers who were using data stores in the portion of the service where the problem originated.

The outage was sparked, the company said, when engineers attempted to upgrade network capacity in a single "availability zone" in the service's East Region and network traffic was shifted to the wrong router. AWS is divided into multiple geographical regions, and each region may be sub-divided into zones designed to be insulated from each other's failures. But for several hours, the outage spread to other availability zones.
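
For the purposes of illustration only, the region-and-zone arrangement Amazon describes can be sketched as a simple data model. The zone names and structure below are assumptions for the sketch, not AWS internals:

    # Illustrative model only; names and structure are assumptions, not AWS code.
    from dataclasses import dataclass, field

    @dataclass
    class AvailabilityZone:
        name: str
        healthy: bool = True

    @dataclass
    class Region:
        name: str
        zones: list = field(default_factory=list)

        def healthy_zones(self):
            # A failure confined to one zone should leave workloads in the
            # region's other zones untouched.
            return [z for z in self.zones if z.healthy]

    east = Region("us-east-1", [AvailabilityZone(f"us-east-1{s}") for s in "abcd"])
    east.zones[0].healthy = False   # the zone where the upgrade went wrong
    print([z.name for z in east.healthy_zones()])  # the zones meant to be unaffected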

Credits will automatically be provided to customers using data stores in the original availability zone, whether or not their resources or applications were affected by the outage, Amazon said. During the outage, some customer data was lost, but Amazon did not explain how this was allowed to happen.

The company did promise to improve its communication during future outages and to offer additional online tools that customers can use to monitor the health of their resources. Amid the outage, Amazon was heavily criticized for offering relatively little information to the outside world. "We would like our communications to be more frequent and contain more information," the company said in an outage "post mortem".

"We understand that during an outage, customers want to know as many details as possible about what’s going on, how long it will take to fix, and what we are doing so that it doesn’t happen again."

Justin Santa Barbara, founder of FathomDB, a startup that uses AWS, was among those who criticized Amazon during the outage, taking particular issue with the fact that the problems spread across multiple availability zones. Following the release of Amazon's post mortem, Santa Barbara welcomed some of the company's decisions, but he still felt the situation was mishandled.

"Judging by the length [of the post mortem], we can understand what took them so long. I am sure everyone would have appreciated more details during the outage itself, so that we could make an informed restore vs. ride it out decision, rather than continually being told 'just a few more minutes' until we lose faith," he told The Register.

"The length of their communication reminds me of a Catholic penance, yet it contains surprisingly little actionable information. Important information, such as exactly how it is that data was lost, or the normal failure models for data in a system that appears to be constantly failing, are missing."

According to Amazon, the problem began at 12:47 am Pacific on April 21, when engineers upgraded the capacity of the primary network in a single availability zone in the AWS East Region, located in Northern Virginia. Amazon Web Services (AWS) offers on-demand access to readily scalable computing resources, including processing power and storage. One service, known as Elastic Block Store (EBS), provides storage volumes that customers can move between virtual server instances on the company's Elastic Compute Cloud (EC2) service, and EBS was at the heart of the outage.
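
As a purely illustrative aside, moving an EBS volume between instances boils down to detaching it from one instance and attaching it to another in the same availability zone. The sketch below uses the boto3 Python SDK with placeholder IDs; it is not part of Amazon's explanation:

    # Illustrative sketch with the boto3 SDK; the IDs are placeholders, not real resources.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    VOLUME_ID = "vol-0123456789abcdef0"    # hypothetical EBS volume
    OLD_INSTANCE = "i-0aaaaaaaaaaaaaaaa"   # instance the volume currently backs
    NEW_INSTANCE = "i-0bbbbbbbbbbbbbbbb"   # instance taking the volume over

    # Detach the volume from its current instance and wait until it is free...
    ec2.detach_volume(VolumeId=VOLUME_ID, InstanceId=OLD_INSTANCE)
    ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

    # ...then attach it to another instance in the same availability zone.
    ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=NEW_INSTANCE, Device="/dev/sdf")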

When upgrading network capacity, Amazon said, it usually shifts traffic from one router in its primary EBS network to a second router in the same network. But on Thursday, traffic was incorrectly routed to another, lower-capacity EBS network, and this second network could not handle the extra load.

"Many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in its cluster," Amazon said. "Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."

During the outage, Amazon referred to this only as a "network event".
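
A toy model of the failure mode Amazon describes, with invented capacity figures (this is not AWS code): traffic meant for a second router on the primary network lands on the lower-capacity network instead, which cannot absorb it, leaving nodes with no usable path on either network.

    # Toy model of the shift described in the post mortem; capacities are invented.
    PRIMARY_LOAD = 80          # hypothetical traffic that should have moved
                               # to another router on the primary network
    SECONDARY_CAPACITY = 20    # the lower-capacity network it was sent to instead

    def can_absorb(load, capacity):
        """True if the target network has headroom for the shifted traffic."""
        return load <= capacity

    if not can_absorb(PRIMARY_LOAD, SECONDARY_CAPACITY):
        # With the primary path drained and the secondary overwhelmed, EBS nodes
        # lose both networks at once -- the "complete isolation" quoted above.
        print("secondary network overloaded: nodes isolated from both networks")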

The mistake meant that many EBS nodes could not connect to their replicas, and they started searching for free space where they could re-mirror their data. With so many volumes affected, not all could find available space.

"Because the issue affected such a large number of volumes concurrently, the free capacity of the EBS cluster was quickly exhausted, leaving many of the nodes 'stuck' in a loop, continuously searching the cluster for free space," the company said. "This quickly led to a 're-mirroring storm', where a large number of volumes were effectively 'stuck' while the nodes searched the cluster for the storage space it needed for its new replica."

Thirteen per cent of EBS volumes in the availability zone were affected.
