Putting a mule on a cloud: one man's battle with Amazon S3

Are we there yet?

We wanted to use S3 to store the Mule Enterprise Edition download, which runs about 58MB. With each release of Mule EE there is a build and release cycle that results in a file being uploaded from the development servers to our product download site. The resulting "blessed" file is migrated by a member of the development team.

In contrast to Mule Enterprise Edition, our Community version is on the bleeding edge, with a continuous integration server managing a broad file structure with multiple dependencies. It's not really worth it for us to go through the scripting hassles of moving all the dev files onto S3, so we maintain a server specifically for those releases and all the associated parts.
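For what it's worth, the "scripting hassle" mostly boils down to signing requests. As a rough sketch only (the bucket, file name, and credentials below are made-up placeholders, and this shows the classic HMAC-SHA1 signing scheme S3's REST API used, not anything from our actual build scripts), an upload is a PUT against the REST API with an Authorization header:

```php
<?php
// Sketch of the signed PUT request behind an S3 upload (classic
// HMAC-SHA1 scheme). Bucket, key and credentials are placeholders.
function buildS3PutRequest($bucketName, $objectName, $contentType, $keyId, $secretKey) {
    $date = gmdate("D, d M Y H:i:s") . " GMT";
    $resource = "/" . $bucketName . "/" . $objectName;

    // S3 signs: verb, Content-MD5, Content-Type, date, resource
    $stringToSign = "PUT\n\n$contentType\n$date\n$resource";
    $signature = base64_encode(hash_hmac("sha1", $stringToSign, $secretKey, true));

    return array(
        "url"     => "http://s3.amazonaws.com" . $resource,
        "headers" => array(
            "Date: " . $date,
            "Content-Type: " . $contentType,
            "Authorization: AWS " . $keyId . ":" . $signature,
        ),
    );
}

// Example: build the request for a (hypothetical) installer upload.
$req = buildS3PutRequest("MyFileToDownload", "mule-ee-installer.jar",
                         "application/java-archive", "MYKEYID", "MYSECRET");
echo $req["url"] . "\n";
// Hand $req["url"] and $req["headers"] to curl, with CURLOPT_CUSTOMREQUEST
// set to "PUT" and the file as the request body, to perform the upload.
?>
```

Tools that "make S3 feel like FTP" are doing essentially this on every upload.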

We wanted a non-technical person to be able to update the website and let the engineers keep writing code. To upload files to S3 you can use any number of client tools that make it feel like FTP or SFTP, as well as hip new things like S3browse.com that let you browse S3 like a file system. Another problem solved. On to the time-bombed URLs.

One business requirement was time-bombed URLs that expire, so we can analyze the exact number of downloads. The Amazon API lets us create a per-user URL with a built-in expiry time, so we can figure out exactly how many URLs are generated versus how many are actually downloaded.

Our website is primarily LAMP-based, and we use the REST API to make the calls to S3. S3 organizes files into buckets and objects: an object corresponds to a stored file, and each object has an identifier, an owner, and permissions. An object is addressed by a URL, such as http://s3.amazonaws.com/TheRegister/MyFileDownload

We're using the GET request via PHP to grab the file. Easy as pie.

Here is a code sample of how it works:

// create a unique url for an amazon download
// the bucketName is the directory (bucket), the objectName is the file
// requires the PEAR Crypt_HMAC package
require_once 'Crypt/HMAC.php';

// helper: Crypt_HMAC returns a hex digest, but S3 wants it base64-encoded
function hex2b64($str) {
    return base64_encode(pack("H*", $str));
}

function getS3Redirect($bucketName, $objectName) {

    // unique credentials for our S3 account
    $S3_URL = "http://s3.amazonaws.com";
    $keyId = "XXXXXXXXXXXXYYYYYYYYYYYYYYY";
    $secretKey = "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZ";

    // configure the link timeout (one hour from now)
    $expires = time() + 3600;

    // create the GET request string to sign:
    // verb, Content-MD5, Content-Type, expiry time and resource
    $bucketName = "/" . $bucketName . "/";
    $stringToSign = "GET\n\n\n$expires\n$bucketName$objectName";
    $hasher = new Crypt_HMAC($secretKey, "sha1");
    $sig = urlencode(hex2b64($hasher->hash($stringToSign)));

    // We now have a lovely URL to return
    return "$S3_URL$bucketName$objectName?AWSAccessKeyId=$keyId&Expires=$expires&Signature=$sig";
}

// draw the link
<a href="<?php echo getS3Redirect("MyFileToDownload", "myfiletodownload-installer.jar"); ?>"
   target="_blank">Download My File (58 MB)</a>
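To actually get the generated-versus-downloaded numbers, you need a record of each URL handed out, which can then be compared against S3's server access logs. A minimal sketch — the logging function and file path here are our own invention for illustration, not part of the S3 API:

```php
<?php
// Sketch: log every signed URL we hand out, so the "generated" count can
// later be reconciled against S3 server access logs. Path is hypothetical.
function logGeneratedUrl($objectName, $expires, $logFile) {
    $line = gmdate("c") . "\t" . $objectName . "\t" . $expires . "\n";
    file_put_contents($logFile, $line, FILE_APPEND);
}

// Example: record a URL generated for the installer, expiring in an hour.
$logFile = sys_get_temp_dir() . "/s3-generated-urls.log";
logGeneratedUrl("myfiletodownload-installer.jar", time() + 3600, $logFile);
echo file_get_contents($logFile);
?>
```

Counting lines in this log versus successful GETs in the S3 access logs gives the generated/downloaded ratio.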

Services like S3 are great for file serving and for running VM (virtual machine) images (we do that too - more in a separate article), but not great if you are dealing with a complicated build system or if you are really paranoid. It's also tough to do things that are programmatically complicated or diverse unless you are using VM images. For example, Google App Engine only runs Python, so if you need Java you can't run your app there. And since Google doesn't support VMs, you would be SOL.

There are many risks and annoyances that have to be balanced against the (theoretical) rewards. For example, users have little to no control over the security measures the Cloud vendor takes with their files. You also have to trust that the provider is doing enough in the way of disaster recovery, backups and so on, so that if the Cloud gets blown away your files/pictures/code aren't completely lost. While we expect very high levels of uptime, history has shown that the vendors aren't completely there yet.

That said, I think that we're using S3 in the right way. That is, our business doesn't fail if Amazon goes down, and the cost-benefits definitely outweigh the occasional downtime, especially since we run our own mirrors and could make a switch in a matter of minutes.

It's still early days in the Clouds, but there are some advantages to be had right away. Minimal administration, unlimited storage and plenty of bandwidth mean that you too can make your downloads ready for scale. ®
