AWS flicks switch for S3-to-Glacier migration

Ice your data

Amazon Web Services has turned on a new facility that allows migration of data from its S3 cloud storage service to its new Glacier cloud archive service.

Glacier was launched a few weeks ago, and offers cloud storage at $0.01/GB/month in most US regions ($0.011 in the Northern California and Ireland regions, and $0.012 in Tokyo), well below the $0.125/GB/month charged for standard storage in S3.

Data stored in Glacier is not immediately available, with three to five hours being the advertised restore time. Data resident in S3, by contrast, is accessible in real time.

Those hoping for a simple 'Take this bucket and send it to Glacier' button will be disappointed by AWS's scheme for moving data between the two services. Doing so uses a tab dubbed "Lifecycle", now available when S3 users invoke the Properties of an S3 bucket. The Lifecycle tab offers the chance to create rules that determine when data should be shuffled into Glacier, and also offers the chance to expire data.

Those rules require users to enter a prefix, and it appears the rules will only archive files whose names begin with that string. For users with lots of files in S3 who don't fancy renaming them all just to send them to Glacier, a thread in the AWS forums indicates that leaving the prefix empty will mean all files in a bucket are put on ice in Glacier.
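For those driving S3 through its REST API rather than the console, the same rules can be expressed as a bucket lifecycle configuration. A sketch of one such rule follows; the rule ID, the `logs/` prefix, and the day counts are illustrative, not prescribed by AWS:

```xml
<!-- PUT to the bucket's ?lifecycle subresource.
     An empty <Prefix/> applies the rule to every object in the bucket. -->
<LifecycleConfiguration>
  <Rule>
    <ID>archive-to-glacier</ID>
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <!-- Move matching objects to Glacier 30 days after creation -->
    <Transition>
      <Days>30</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <!-- Optionally expire (delete) them after a year -->
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

A rule may carry a transition, an expiration, or both, which is what makes the Lifecycle tab a data-retention tool as much as an archiving one.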

The inevitable blog post about the new service also offers some new details about how to restore data from Glacier back into S3.

A new RESTORE command is now offered in S3 that lets users commence the process of disinterring data, which requires one to specify a retention period. Once that value has been determined and data restored, AWS says the following will happen:

"Your restored object will remain in both Glacier and S3's Reduced Redundancy Storage (RRS) for the duration of the retention period. At the end of the retention period the object's data will be removed from S3; the object will remain in Glacier."
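That restore flow maps to a POST against the object's restore subresource. A minimal sketch, assuming a seven-day retention period (the value is illustrative):

```xml
<!-- POST to the object's ?restore subresource.
     <Days> sets the retention period for the temporary copy kept in
     S3 Reduced Redundancy Storage; the Glacier copy is untouched. -->
<RestoreRequest>
  <Days>7</Days>
</RestoreRequest>
```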

While the new details about Glacier's operations are welcome, AWS is still silent on the technology powering the service. The three-to-five hour restore time suggests whatever storage medium sits behind Glacier has slow seek times or may even need to be brought online (perhaps physically) before data transfers can flow. Tape is therefore a strong candidate, with a homebrew variant of Copan's MAID scheme, in which spun-down disks hold archived data, another subject of speculation.

Sadly, AWS won't offer a straight answer when asked about the innards of Glacier: the last time we asked, we were helpfully advised that the company uses "low cost commodity hardware."

No matter what hardware Glacier uses, it represents a challenge to conventional storage vendors. AWS's recent release of a cloud storage gateway that can tier between on-premises kit and S3 means the cloud company can now offer users three tiers of storage with automated means of moving between them. It may not offer automation as elegant as that from hardware-selling rivals, but it certainly has a strong story. ®
