Amazon cloud fell from sky after botched network upgrade

'Catholic penance' awards 10 days of credit

Amazon has apologized for the extended outage that hit its AWS infrastructure cloud late last week, providing an extensive explanation of the problem and offering 10 days of credit to customers who were using data stores in the portion of the service where the problem originated.

The outage was sparked, the company said, when engineers attempted to upgrade network capacity in a single "availability zone" in the service's East Region and network traffic was shifted to the wrong router. AWS is divided into multiple geographical regions, and each region may be sub-divided into zones designed to be insulated from each other's failures. But for several hours, the outage spread to other availability zones.
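
To make the region and zone structure concrete, here is a minimal sketch, assuming the boto3 Python SDK and configured AWS credentials (neither of which figures in Amazon's report), of how a customer can list the availability zones that make up a region:

# Illustrative sketch only: lists the availability zones within one AWS region.
# Assumes the boto3 SDK and configured credentials (not part of Amazon's report).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # the East Region discussed here
response = ec2.describe_availability_zones()
for zone in response["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])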

Credits will automatically be provided to customers using data stores in the original availability zone, whether or not their resources or applications were affected by the outage, Amazon said. During the outage, some customer data was lost, but Amazon did not explain how this was allowed to happen.

The company did promise to improve its communication during future outages and offer additional online tools customers can use to monitor the health of their resources. Amidst the outage, Amazon was heavily criticized for offering relatively little information to the outside world. "We would like our communications to be more frequent and contain more information," the company said in an outage "post mortem".
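
As an illustration of the kind of self-service health checking customers were asking for, the sketch below, assuming boto3 and a hypothetical volume ID, polls the reported status of EBS volumes:

# Illustrative sketch only: polls the status of EBS volumes so an operator can
# watch for impairment. Assumes boto3 and a hypothetical list of volume IDs.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
volume_ids = ["vol-0123456789abcdef0"]  # hypothetical volume ID for illustration

status = ec2.describe_volume_status(VolumeIds=volume_ids)
for vol in status["VolumeStatuses"]:
    print(vol["VolumeId"], vol["VolumeStatus"]["Status"])  # e.g. "ok" or "impaired"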

"We understand that during an outage, customers want to know as many details as possible about what’s going on, how long it will take to fix, and what we are doing so that it doesn’t happen again."

Justin Santa Barbara, founder of FathomDB, a startup that uses AWS, was among those who criticized Amazon during the outage, taking particular issue with the fact that the problems spread across multiple availability zones. Following the release of Amazon's post mortem, Santa Barbara welcomed some of the company's decisions, but still feels the situation was mishandled.

"Judging by the length [of the post mortem], we can understand what took them so long. I am sure everyone would have appreciated more details during the outage itself, so that we could make an informed restore vs. ride it out decision, rather than continually being told 'just a few more minutes' until we lose faith," he told The Register.

"The length of their communication reminds me of a Catholic penance, yet it contains surprisingly little actionable information. Important information, such as exactly how it is that data was lost, or the normal failure models for data in a system that appears to be constantly failing, are missing."

According to Amazon, the problem began at 12:47 am Pacific on April 21, when engineers upgraded the capacity of the primary network in a single availability zone in the AWS East Region, located in Northern Virginia. Amazon Web Services (AWS) offers on-demand access to readily scalable computing resources, including processing power and storage. One service, known as Elastic Block Storage (EBS), provides storage volumes that customers can move between virtual server instances on the company's primary Elastic Compute Cloud (EC2) service, and EBS was at the heart of the outage.
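
For readers unfamiliar with EBS, the following sketch, again assuming boto3 plus a hypothetical instance ID, shows the basic workflow the service supports: creating a volume in a single availability zone and attaching it to an EC2 instance in that same zone.

# Illustrative sketch only: creates an EBS volume in one availability zone and
# attaches it to an EC2 instance. Assumes boto3 and a hypothetical instance ID.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EBS volumes live in a single availability zone and can only be attached to
# instances in that same zone -- which is why a zone-level failure matters.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=10, VolumeType="gp2")

waiter = ec2.get_waiter("volume_available")
waiter.wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance in us-east-1a
    Device="/dev/sdf",
)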

When upgrading network capacity, Amazon said, it usually shifts traffic from one router in its primary EBS network to a second router in the same network. But on Thursday, traffic was incorrectly routed to another, lower-capacity EBS network, and this second network could not handle the extra load.

"Many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in its cluster," Amazon said. "Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."

During the outage, Amazon referred to this only as a "network event".

The mistake meant that many EBS nodes could not connect to their replicas, and they started searching for free space where they could re-mirror their data. With so many volumes affected, not all could find available space.

"Because the issue affected such a large number of volumes concurrently, the free capacity of the EBS cluster was quickly exhausted, leaving many of the nodes 'stuck' in a loop, continuously searching the cluster for free space," the company said. "This quickly led to a 're-mirroring storm', where a large number of volumes were effectively 'stuck' while the nodes searched the cluster for the storage space it needed for its new replica."

Thirteen per cent of EBS volumes in the availability zone were affected.
