Amazon tries to ruin infosec world's fastest-growing cottage industry (finding data-spaffing S3 storage buckets)

AWS comes up with blanket policies to smother public-facing cloud silos

Amazon Web Services is taking steps to halt the epidemic of data leaks caused by customers accidentally leaving the S3 cloud storage buckets it hosts wide open to the internet.

Thus, if you are among the growing bunch of infosec researchers on the hunt for misconfigured public-facing S3 silos packed with slurpable private info and other goodies, it may be about to become a little more difficult or tedious to hit pay dirt.

This assumes people take notice and use the new security features, of course. We're not holding our breath.

AWS evangelist (translation: marketing guy) Jeff Barr today introduced a new set of controls that let customers set blanket policies across their accounts to block public access to cloud storage. These can be applied to S3 buckets and access control lists (ACLs).

With the protections in place, objects placed in the buckets cannot be granted public or cross-account access. The idea, said Barr, is to make it clear to both admins and end users of S3 buckets that public access is intended to be very limited in scope, and should only be enabled for things like web hosting – not for the general storage of internal documents.
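For those who would rather script it than click through the console, the account-wide block is exposed via the S3 Control API. Here's a minimal sketch using Python's boto3 library – the account ID is a placeholder, and the four flags cover the ACL and policy blocks Barr describes:

```python
import boto3

# Account-level S3 Block Public Access: applies to every bucket in the
# account, including buckets created later.
s3control = boto3.client("s3control")

s3control.put_public_access_block(
    AccountId="123456789012",  # hypothetical account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs on buckets/objects
        "IgnorePublicAcls": True,       # treat any existing public ACLs as private
        "BlockPublicPolicy": True,      # reject bucket policies that grant public access
        "RestrictPublicBuckets": True,  # limit access to public-policy buckets
    },
)
```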

The problem is that S3 does double duty: it can store files you want to make public as part of your website, and it can hold private data in the cloud – data that sometimes ends up accidentally being made public. It would be cool if AWS found a way to enforce a harder separation between the private storage of files and public-facing web page materials. In the meantime, we have these aforementioned blanket policies.

"This is a new level of protection that works at the account level and also on individual buckets, including those that you create in the future," Barr explained.

"You have the ability to block existing public access (whether it was specified by an ACL or a policy) and to ensure that public access is not granted to newly created items. If an AWS account is used to host a data lake or another business application, blocking public access will serve as an account-level guard against accidental public exposure."

Barr noted that administrators will still be able to tweak their access controls as needed. If public access is necessary for a specific object in one bucket, for example, the administrator can go into that bucket's settings and enable public access on a case-by-case basis.
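In practice, because AWS applies the most restrictive combination of the account-level and bucket-level settings, "case by case" tends to mean applying the block bucket by bucket and deliberately skipping the bucket that genuinely needs to serve public content. A rough boto3 sketch of that approach, with a made-up bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical: lock down every bucket except the one serving the
# public website, rather than flipping the single account-wide switch.
PUBLIC_BUCKETS = {"example-public-website-assets"}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    if name in PUBLIC_BUCKETS:
        continue  # leave the web-hosting bucket's settings alone
    s3.put_public_access_block(
        Bucket=name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```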

A quick glance at the past year's headlines will make it abundantly clear why Amazon is adding the new access controls. Dozens of high-profile exposure incidents have been traced back to S3 buckets and objects that were improperly configured to allow public access, leaving the sensitive data open to anyone who happened to come across the bucket.

While Amazon has taken steps to try to limit the exposure of buckets (newly created buckets are private by default, Barr said), researchers continue to come across storage silos that, for one reason or another, have had that setting changed either on the entire bucket or on an individual object.
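For the curious (or nervous), here's a rough way a bucket owner could audit their own account using boto3. It only inspects bucket ACLs, so a public-granting bucket policy or a stray object-level ACL would slip past it – treat it as a first pass, not a clean bill of health:

```python
import boto3

s3 = boto3.client("s3")

# The URI S3 uses in ACL grants to mean "everyone on the internet".
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        g["Permission"]
        for g in acl["Grants"]
        if g["Grantee"].get("URI") == ALL_USERS
    ]
    if public_grants:
        print(f"{name}: ACL grants {', '.join(public_grants)} to AllUsers")
```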

This time last year Amazon tried to encourage more responsible behavior with some additional warning dialogue boxes for admins and better group control policies. It seems that wasn't enough, and now AWS is tightening things further.

This is where the new controls come in. By giving admins a blanket option that blocks public access, and public-granting policy changes, across an account by default, AWS hopes to provide an additional layer of protection against unintended changes to public and cross-account visibility. ®
