AWS users felt a great disturbance in the cloud, as S3 cried out in terror

S3izure made things tricky for an hour, but was no apocalypS3 to match March mess

The world received an unpleasant reminder of what it's like to live without the cloud on Thursday, after Amazon Web Services' Simple Storage Service fluttered for an hour or so.

The incident invoked memories of the S3 outage in March 2017 that caused interruptions to plenty of web services and apps, sparking much rending of garments and gnashing of teeth as the fact of AWS being fallible worked its way into the minds of the faithful.


The US-EAST-1 region that caused so much trouble in March was again the culprit: at 11:58 AM on Thursday, AWS reported “increased error rates” on S3. By 12:20 PM the company admitted: “We can confirm that some customers are receiving throttling errors accessing S3.” By 12:38 the problem was identified and a fix was under way, error rates fell by 12:49, and at 1:05 PM AWS sounded the all-clear, confident that errors had ceased nine minutes earlier.

AWS CodeCommit, Elastic Beanstalk and Storage Gateway all wobbled too, all in the same northern Virginia data centre.

While this incident was nowhere near as bad as March's ApocalypS3, users were predictably grumpy about this latest S3izure.

AWS hasn't revealed the cause of the problem. If it does, we'll update this story.

®
Biting the hand that feeds IT © 1998–2018