IBM's monster tape will take three days to fill

35TB cartridge poses whole new set of problems

IBM Research has devised technology with FujiFilm to create a 35TB capacity tape, but it will take 3 days to write the data at LTO5 speeds.

The new hyper-capacity half-inch tape technology has been successfully read and written at an areal density of 29.5bn bits/sq in, which the researchers say equates to a cartridge capacity of 35TB. That is roughly 44 times the 800GB raw capacity of LTO4 tape. From a technological point of view the gee whiz factor is impressive.
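A trivial check of that ratio, as a minimal sketch assuming decimal units and LTO4's 800GB native cartridge capacity:

```python
# Quick check of the capacity ratio quoted above; decimal units are
# assumed, with LTO4's native cartridge capacity taken as 800GB.
LTO4_RAW_TB = 0.8
NEW_TAPE_TB = 35

print(NEW_TAPE_TB / LTO4_RAW_TB)  # 43.75 - roughly 44 times LTO4's raw capacity
```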

The medium is FujiFilm's Nanocubic tape, with an ultra-fine, perpendicularly-oriented barium-ferrite magnetic coating that apparently does not require expensive metal sputtering or evaporation coating methods. IBM has developed new servo control technologies enabling a 25X increase in the number of parallel tracks on half-inch tape, with a track width of less than 0.45 micrometers.

The drive technology includes an ultra-narrow 0.2 micrometer data reader head and a data read channel based on a data-dependent noise-predictive, maximum-likelihood (DD-NPML) detection scheme developed at IBM Research in Zurich. IBM Research at Almaden developed a reduced-friction head assembly that allows the use of smoother magnetic tapes, plus an advanced GMR (Giant Magneto-Resistive) head module incorporating optimised servo readers.

The areal density can be increased to the 100bn bits/sq in level, according to the IBM researchers. However, one issue that IBM and FujiFilm do not discuss is the time needed to read or write 35TB of tape data. At LTO5's tape transfer speed of 140MB/sec it would take 2.89 days (69.44 hours) to write the full 35TB. To write 35TB in the same time that LTO5 takes to write its 1.5TB of raw data, which is 2.98 hours, the tape speed would have to increase 23.33 times, and that assumes the read/write heads could process the signals passing to and from the tape that quickly.
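A back-of-envelope sketch reproducing those figures, assuming LTO5's 140MB/sec streaming rate, decimal units (1TB = 1,000,000MB) and a drive that can sustain that rate for the whole cartridge:

```python
# Timing check for the figures above (assumptions as stated in the lead-in).
LTO5_RATE_MB_S = 140
LTO5_CAPACITY_TB = 1.5
NEW_CAPACITY_TB = 35

def hours_to_write(capacity_tb, rate_mb_s):
    """Hours needed to stream a full cartridge at a given transfer rate."""
    return capacity_tb * 1_000_000 / rate_mb_s / 3600

print(hours_to_write(NEW_CAPACITY_TB, LTO5_RATE_MB_S))   # ~69.44 hours, i.e. 2.89 days
print(hours_to_write(LTO5_CAPACITY_TB, LTO5_RATE_MB_S))  # ~2.98 hours for 1.5TB
print(NEW_CAPACITY_TB / LTO5_CAPACITY_TB)                # ~23.33x speed increase needed
```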

Accelerating the tape 23.33X would also increase the risk of tape deformation or breakage, and would require more power for the drive. It seems likely that either multi-head tape drives, or a big increase in the number of tracks readable by a single head, would need to be developed to cut tape read/write times down to more practicable levels. A back-of-the-envelope calculation suggests a 4-head drive, or a drive reading four times as many tracks, would cut the 35TB read/write time to 17.36 hours. Another possibility would be to stripe the data across two or more tape drives: a 4-drive setup using such heads would deal with 35TB in 4.34 hours, which starts to look reasonable (see the sketch below).
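Extending the timing sketch above to the multi-head and striped-drive cases, under the optimistic assumption that throughput scales linearly with parallelism:

```python
# Effective write time if the work is spread across several heads per drive
# (or head-equivalents in extra tracks) and striped across several drives.
BASE_HOURS = 69.44  # 35TB at LTO5's 140MB/sec, from the sketch above

def parallel_hours(base_hours, heads_per_drive, drives):
    """Write time when heads and drives work in parallel on one data set."""
    return base_hours / (heads_per_drive * drives)

print(parallel_hours(BASE_HOURS, heads_per_drive=4, drives=1))  # ~17.36 hours
print(parallel_hours(BASE_HOURS, heads_per_drive=4, drives=4))  # ~4.34 hours
```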

Such striping across multi-headed drives implies that a tape library using 35TB cartridges would need more drives and more robotic capability to move cartridges between slots and drives, so that, for example, four cartridges could be delivered to four drives simultaneously. If tape libraries are to remain usable because tape storage economics will stay ahead of disk's for many more years, then changes to allow tape cartridge striping, multi-headed drives, and simultaneous loading of multiple cartridges into drives look necessary. ®
