Amazon Web Services customers vent spleen
Cloud giant says sorry for outage and resulting issues
Weaknesses in Amazon's web services, exposed after lightning struck power supplies at the weekend, have drawn stinging criticism from some customers.
The bolt knocked out the utility supply and back-up generators in Dublin, causing a blackout that took down the Elastic Compute Cloud (EC2) and Relational Database Service (RDS).
Efforts to bring EC2 back online were delayed because the Elastic Block Store (EBS) servers required manual intervention before customer volumes could be restored, while making extra copies of data consumed capacity, forcing Amazon to find spare capacity elsewhere.
Amazon said on Monday it would complete the recovery process within 48 hours, but wrote to customers yesterday informing them it had discovered an error in EBS software which "incorrectly deleted" one or more blocks when cleaning up snapshots.
"The root cause was a software error that caused the snapshot references to a subset of blocks to be missed during the reference counting process," the company said.
Snapshots containing the missing blocks were disabled, and Amazon created copies of the affected snapshots in which the missing blocks were replaced with empty ones.
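Amazon has not published the faulty code, but the failure mode it describes — a shared block wrongly freed because one snapshot's reference to it was missed during the reference-counting pass — can be sketched roughly as follows. All names and structures here are illustrative, not Amazon's actual EBS implementation:

```python
# Illustrative sketch only: block-level snapshots share data blocks,
# and a cleanup pass frees any block whose counted references hit zero.
# Names are hypothetical; this is not Amazon's EBS code.

def clean_snapshots(blocks, snapshots, missed=None):
    """Delete blocks that no snapshot appears to reference.

    `missed` simulates the reported bug: snapshots whose block
    references are skipped during the reference-counting pass.
    """
    missed = missed or set()
    refcount = {block_id: 0 for block_id in blocks}
    for name, block_ids in snapshots.items():
        if name in missed:  # the bug: these references never get counted
            continue
        for block_id in block_ids:
            refcount[block_id] += 1
    # Blocks with zero counted references are "cleaned up" (deleted)
    return {b: data for b, data in blocks.items() if refcount[b] > 0}

blocks = {1: b"boot", 2: b"data", 3: b"logs"}
snapshots = {"snap-a": [1, 2], "snap-b": [2, 3]}

# A correct pass keeps every block that any snapshot still references
assert clean_snapshots(blocks, snapshots) == blocks

# If snap-b's references are missed, block 3 is incorrectly deleted
# even though snap-b still depends on it
assert 3 not in clean_snapshots(blocks, snapshots, missed={"snap-b"})
```

Replacing such a lost block with an empty one, as Amazon did, preserves the snapshot's structure but not the missing data — which is what left some customers with "trashed blocks".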
"We apologise for any potential impact this might have on your applications," Amazon said.
On its service health dashboard today, the US firm said its EC2 services were still experiencing connectivity issues.
One customer said his data contained significant numbers of "trashed blocks". Fortunately he had already migrated to a virtual server at another hosting firm, so he deleted the snapshots and breathed "a sigh of relief" — but warned it could have been far worse had he relied on EBS snapshots for backups.
"This just goes to confirm my own assessment, which is that AWS is not suitable for small-scale deployments. The economies of scale and price/performance just don't work at the low end.
"They are much more suitable for large scale deployments where service provision and backups can be split across multiple availability zones. It also serves as a reminder not to put all one's eggs into one basket," he said.
Another agreed, "There is clearly a massive defect in the multi-availability zone products."
The master database instances hosted in areas unaffected by the Dublin incident were "taken down by their slaves being hosted inside the affected zone", said another source.
One customer summarised the situation as "if you stored your backup in the cloud too, [Amazon] hosed it".