Amazon outage spans clouds 'insulated' from each other

Not what it says on the tin

It's not surprising that Amazon's infrastructure cloud has gone on the fritz. This is what happens to internet services from time to time. What is surprising – or at least more troubling – is that today's outage affected multiple "availability zones" across the service.

"Availability Zones," according to Amazon, "are distinct locations that are engineered to be insulated from failures in other Availability Zones."

It would seem they don't exactly work as designed.

Amazon has not responded to our inquiries about the outage. The company is typically tight-lipped about such things. But in brief messages posted to its Amazon Web Services (AWS) status page, the company acknowledged that the outage affected multiple availability zones in the service's "US East" region.

Amazon's Elastic Compute Cloud (EC2) serves up on-demand processing power from multiple geographic regions. Two are in the US – an "East" region located in Northern Virginia, and a "West" located in Northern California – and others are now up and running in Europe and Asia.

Each region may then be divided into multiple availability zones. "By launching instances in separate Availability Zones," Amazon says, "you can protect your applications from failure of a single location." But today's outage – which began around 1:41am Pacific time and also affected the use of Amazon's Elastic Block Store (EBS) service – spread across multiple zones in the East region.
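
To make that advice concrete, here's a minimal sketch of the pattern using the Python boto library. Credentials are assumed to be configured, and the AMI ID, key pair and instance type are placeholders rather than anything Amazon has published:

# Minimal sketch: spread redundant instances across Availability Zones
# within a single region. The AMI ID and key pair are placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Enumerate the zones Amazon exposes in this region (us-east-1a, us-east-1b, ...)
zones = [z.name for z in conn.get_all_zones()]
print('Zones in us-east-1: %s' % zones)

# Launch one instance into each of two distinct zones, per Amazon's guidance
for zone in zones[:2]:
    conn.run_instances('ami-12345678',          # placeholder AMI
                       instance_type='m1.small',
                       key_name='my-keypair',   # placeholder key pair
                       placement=zone)          # pin to an explicit zone

It is this zone-pinned placement that setups like the master/slave database example discussed below depend on – which is why the promise that zones fail independently matters so much.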

Amazon's EC2 service level agreement guarantees 99.95 per cent availability for each region if you're operating in multiple availability zones.
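
For a sense of scale, 99.95 per cent leaves only a sliver of permitted downtime. Here's a quick back-of-the-envelope calculation in Python, assuming the SLA window is a calendar month or a 365-day service year – an illustrative assumption, since Amazon's exact measurement terms aren't quoted here:

# Rough downtime budget implied by a 99.95 per cent availability target.
# The window lengths are illustrative assumptions, not Amazon's exact terms.
availability = 0.9995

for label, hours in (('30-day month', 30 * 24),
                     ('365-day service year', 365 * 24)):
    allowed = (1 - availability) * hours
    print('%s: %.2f hours (about %.0f minutes) of downtime allowed'
          % (label, allowed, allowed * 60))

That works out to roughly 22 minutes a month, or about 4.4 hours a year. Today's outage, which by Amazon's own status updates ran from before 2am Pacific into the afternoon, comfortably exceeds the monthly figure on its own.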

The outage brought down multiple public websites using Amazon's service, including Foursquare, Reddit, Quora, and Hootsuite. According to Amazon, the outage was triggered by a "network event" that caused the service to "re-mirror" a large number of EBS volumes in the East region. "This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes," the company said at 8:54am Pacific.

"Additionally, one of our internal control planes for EBS has become inundated such that it's difficult to create new EBS volumes and EBS backed instances. We are working as quickly as possible to add capacity to that one Availability Zone to speed up the re-mirroring, and working to restore the control plane issue. We're starting to see progress on these efforts, but are not there yet. We will continue to provide updates when we have them." This also affected the use of EC2 instances.

As of 1:48pm Pacific, EBS problems persisted in the zone where they originated, but Amazon said all other zones had been restored.

Within its EC2 regions, Amazon offers "inexpensive, low latency network connectivity" between availability zones, so many customers have chosen this option rather than spreading their sites across multiple regions. If you straddle regions, you have to send traffic across the public internet. According to one user, this is "comparatively expensive, slow and unreliable".

"So if you're playing the AWS game and setting up a master/slave MySQL database (to take a highly pertinent example), what you do is you put the master and the slave in the same Region, but make sure they're in different Availability Zones," Justin Santa Barbara, the founder of FathomDB, wrote this morning.

"You don't normally put them in separate Regions, otherwise you have to cross the expensive, slow and unreliable links between Regions, and you'll likely have more problems trying to keep your databases in sync. You are at risk e.g. if a hurricane hits the eastern seaboard and destroys the datacenter, but short of that you should be OK - as long as AWS does what they promised."

But today, it didn't.

Amazon has never explained how its availability zones are designed. They may be in the same data center. They may not. And it's unclear how they're designed to prevent simultaneous outages. Whatever the case, they didn't behave as Amazon said they would. "AWS broke their promises on the failure scenarios for Availability Zones. It means that AWS have a common single point of failure (assuming it wasn't a winning-the-lottery-while-being-hit-by-a-meteor-odds coincidence)," Santa Barbara wrote.

"The sites that are down were correctly designing to the 'contract'; the problem is that AWS didn't follow their own specifications. Whether that happened through incompetence or dishonesty or something a lot more forgivable entirely, we simply don't know at this point. But the engineers at quora, foursquare and reddit are very competent, and it's wrong to point the blame in that direction."

Amazon has said that after the problems are corrected, it will post a "postmortem" describing the problem in detail. Hopefully, it will finally explain how availability zones are separated. Not that they really are. ®

Update: This story has been updated to show that Amazon's SLA guarantees 99.95% uptime if you're running across multiple availability zones. Previously, it said that the guarantee was for the entire region.
