Amazon invites 5 terabyte mondo-files into the heavens

Time to stream your genome sequencer

Amazon has increased the maximum object size on its S3 online storage service to 5 terabytes. Previously, S3 capped individual objects at 5 gigabytes, forcing users to break larger files into chunks.

"When a customer wanted to access a large file or share it with others, they would either have to use several URLs in Amazon S3 or stitch the file back together using an intermediate server or within an application," Amazon said in a Friday blog post. "No more. We've raised the limit by three orders of magnitude."

Each S3 object can now range from one byte to 5 terabytes, letting you store extremely large files – including scientific or medical data, high-resolution video, and backup files – as single objects.

You upload these larger objects with the relatively new Multipart Upload API, which splits a beefy file into parts that are pushed up independently and then stitched back into a single object on Amazon's end.
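
For the curious, here's a minimal sketch of the dance in Python, using the modern boto3 SDK (which postdates this announcement); the bucket, key, and filenames are placeholders, and production code would add retries and parallel part uploads:

    import boto3

    BUCKET = "example-bucket"       # placeholder bucket name
    KEY = "video/master.mov"        # placeholder object key
    PART_SIZE = 100 * 1024 * 1024   # 100MB parts; S3's minimum part size is 5MB

    s3 = boto3.client("s3")

    # Kick off the multipart upload and keep hold of its ID.
    upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]

    # Push the file up one part at a time.
    parts = []
    with open("master.mov", "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(PART_SIZE)
            if not chunk:
                break
            resp = s3.upload_part(
                Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                PartNumber=part_number, Body=chunk,
            )
            # S3 hands back an ETag per part, which must be echoed on completion.
            parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
            part_number += 1

    # The complete call stitches the parts into a single object,
    # now allowed to total as much as 5 terabytes.
    s3.complete_multipart_upload(
        Bucket=BUCKET, Key=KEY, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )

Because each part travels separately, a failed chunk can be retried on its own rather than resending the whole multi-terabyte beast.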

OpenStack – the (truly) open source project that lets you mimic Amazon's S3 and EC2 services inside your own data center – says that it's working on larger-object support as well. OpenStack community manager Bret Piatt of Rackspace tells us that this will arrive early next year with OpenStack's "Bexar" release, and that it too will lift its object-size ceiling to 5 terabytes.

OpenStack was founded by Rackspace and NASA after both outfits struggled to scale up their infrastructure clouds. OpenStack is based on Nova, a cloud fabric controller designed by NASA, and Cloud Files, a storage controller built by Rackspace. The storage platform is known as Swift, and Rackspace says that it now has a "mature" Swift codebase running in a production environment.
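
Swift's eventual answer to jumbo objects is segmentation: upload the pieces as ordinary objects, then write a zero-byte manifest whose X-Object-Manifest header points at the segment prefix. Here's a rough Python sketch of that pattern – our illustration, not Rackspace's code – with a hypothetical endpoint, auth token, and container names:

    import requests

    STORAGE_URL = "https://swift.example.com/v1/AUTH_demo"  # hypothetical endpoint
    TOKEN = "AUTH_tk_placeholder"                           # hypothetical auth token
    AUTH = {"X-Auth-Token": TOKEN}
    SEGMENT_SIZE = 1024 * 1024 * 1024  # 1GB segments, comfortably under the 5GB cap

    # Upload the file as numbered segments in a dedicated container.
    with open("huge.iso", "rb") as f:
        index = 0
        while True:
            chunk = f.read(SEGMENT_SIZE)
            if not chunk:
                break
            r = requests.put(
                "%s/segments/huge.iso/%08d" % (STORAGE_URL, index),
                headers=AUTH, data=chunk,
            )
            r.raise_for_status()
            index += 1

    # The manifest object itself is empty; its X-Object-Manifest header
    # names the segment prefix.
    r = requests.put(
        STORAGE_URL + "/files/huge.iso",
        headers=dict(AUTH, **{"X-Object-Manifest": "segments/huge.iso/"}),
        data=b"",
    )
    r.raise_for_status()

A GET on the manifest streams the segments back in order, so the client sees one seamless multi-terabyte object.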

Incidentally, Bexar is named for a county in Texas, Rackspace's home state. We're told it's pronounced "bear." We would make fun of this, but we're also told that messin' with Texas is verboten. ®
