Amazon cloud fell from sky after botched network upgrade

'Catholic penance' awards 10 days of credit

Amazon has apologized for the extended outage that hit its AWS infrastructure cloud late last week, providing an extensive explanation for the problem and extending 10 days of credit to customers who were using data stores in the portion of the service where the problem originated.

The outage was sparked, the company said, when engineers attempted to upgrade network capacity in a single "availability zone" in the service's East Region and network traffic was shifted to the wrong router. AWS is divided into multiple geographical regions, and each region may be sub-divided into zones designed to be insulated from each other's failures. But for several hours, the outage spread to other availability zones.
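For those not steeped in AWS jargon, the region-and-zone carve-up is easy enough to poke at yourself. As a rough sketch (assuming the boto3 Python SDK and configured AWS credentials, neither of which feature in Amazon's post mortem), listing the zones in the East Region looks something like this:

# Illustrative only: enumerate the availability zones in the US East Region.
# Assumes the boto3 Python SDK is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Each region is split into zones that are designed to fail independently.
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])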

Credits will automatically be provided to customers with data stores in the availability zone where the problem originated, whether or not their resources or applications were affected by the outage, Amazon said. During the outage, some customer data was lost, but Amazon did not explain how this was allowed to happen.

The company did promise to improve its communication during future outages and offer additional online tools customers can use to monitor the health of their resources. Amidst the outage, Amazon was heavily criticized for offering relatively little information to the outside world. "We would like our communications to be more frequent and contain more information," the company said in an outage "post mortem".

"We understand that during an outage, customers want to know as many details as possible about what’s going on, how long it will take to fix, and what we are doing so that it doesn’t happen again."

Justin Santa Barbara, founder of FathomDB, a startup that uses AWS, was among those who criticized Amazon during the outage, taking particular issue with the fact that the problems spread across multiple availability zones. Following the release of Amazon's post mortem, Santa Barbara welcomed some of the company's decisions, but still feels the situation was mishandled.

"Judging by the length [of the post mortem], we can understand what took them so long. I am sure everyone would have appreciated more details during the outage itself, so that we could make an informed restore vs. ride it out decision, rather than continually being told 'just a few more minutes' until we lose faith," he told The Register.

"The length of their communication reminds me of a Catholic penance, yet it contains surprisingly little actionable information. Important information, such as exactly how it is that data was lost, or the normal failure models for data in a system that appears to be constantly failing, are missing."

According to Amazon, the problem began at 12:47 am Pacific on April 21 when engineers upgraded the capacity of the primary network in a single availability zone in the AWS East Region, located in Northern Virginia. Amazon Web Services (AWS) offers on-demand access to readily scalable computing resources, including processing power and storage. One service, known as Elastic Block Storage (EBS), provides storage volumes that customers can move between virtual server instances on the company's primary Elastic Compute Cloud service, and EBS was at the heart of the outage.
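The relationship between the two services is easier to see in code than in prose. The rough sketch below (our illustration, using the boto3 Python SDK and a placeholder instance ID, none of which appear in Amazon's post mortem) creates a volume in one availability zone and attaches it to an instance in that zone:

# Illustrative sketch only: create an EBS volume and attach it to an EC2 instance.
# Assumes boto3 is installed, credentials are configured, and the instance ID
# below is swapped for a real one (this one is a placeholder).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EBS volumes live in a single availability zone...
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=8, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# ...and can be attached to, detached from, and re-attached to EC2 instances
# in that same zone.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder
    Device="/dev/sdf",
)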

When upgrading network capacity, Amazon said, it usually shifts traffic from one router in its primary EBS network to a second router in the same network. But on Thursday, traffic was incorrectly routed to another, lower-capacity EBS network, and this second network could not handle the extra load.

"Many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in its cluster," Amazon said. "Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."

During the outage, Amazon referred to this only as a "network event".

The mistake meant that many EBS nodes could not connect to their replicas, and they started searching for free space where they could re-mirror their data. With so many volumes affected, not all could find available space.

"Because the issue affected such a large number of volumes concurrently, the free capacity of the EBS cluster was quickly exhausted, leaving many of the nodes 'stuck' in a loop, continuously searching the cluster for free space," the company said. "This quickly led to a 're-mirroring storm', where a large number of volumes were effectively 'stuck' while the nodes searched the cluster for the storage space it needed for its new replica."

Thirteen per cent of EBS volumes in the availability zone were affected.
