IBM builds biggest-ever disk for secret customer

120 PB monster 'for simulation'. Nuke labs?

Analysis Flash may be one cutting edge of storage action, but big data is driving developments on the other side of the storage pond, with IBM building a 120 petabyte, 200,000-disk array.

The mighty array is being developed for an unnamed supercomputer-using customer "for detailed simulations of real-world phenomena", according to MIT's Technology Review, and takes current large-array technology trends a step or two further.

IBM Almaden storage systems research director Bruce Hillsberg says that 200,000 SAS disk drives are involved, rather than SATA ones, because performance is a concern. A back-of-an-envelope calculation (120PB spread across 200,000 spindles) suggests 600GB drives are being used, and Seagate's 2.5-inch Savvios come to mind.
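
For those who want to check the sums, a quick bit of Python reproduces that back-of-an-envelope figure (this is purely our arithmetic, using decimal units as drive vendors do; nothing here comes from IBM):

# Sanity check of the per-drive capacity: 120PB across 200,000 spindles.
# Decimal units (1PB = 10^15 bytes, 1GB = 10^9 bytes), as drive vendors quote them.
TOTAL_CAPACITY_PB = 120
DRIVE_COUNT = 200_000

capacity_per_drive_gb = TOTAL_CAPACITY_PB * 1e15 / DRIVE_COUNT / 1e9
print(f"~{capacity_per_drive_gb:.0f}GB per drive")  # prints ~600GB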

We're told that wider-than-normal racks are being used to pack the drives into less floorspace than standard racks would require. These racks are also water-cooled rather than fan-cooled, which seems logical if wide drawers crammed full of small form factor (SFF) drives are being used.

Some 2TB of capacity may be needed just to hold the metadata for the billions of files in the array. The GPFS parallel file system is being used, with a hint that flash memory storage is used to speed its operations. This would suggest that the 120PB array includes, say, some Violin Memory arrays to hold the metadata, and would scan 10 billion files in about 43 minutes.
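
Taking the quoted figures at face value (roughly 10 billion files, 2TB of metadata and a 43-minute scan), the implied per-file and per-second numbers work out as below; the calculation is our own extrapolation, not anything IBM or Hillsberg have stated:

# Back-of-envelope figures implied by the numbers quoted above; the per-file
# and per-second results are our extrapolation, not IBM's.
FILE_COUNT = 10_000_000_000     # ~10 billion files
METADATA_BYTES = 2e12           # ~2TB of metadata, decimal units
SCAN_MINUTES = 43

bytes_per_file = METADATA_BYTES / FILE_COUNT
files_per_second = FILE_COUNT / (SCAN_MINUTES * 60)

print(f"~{bytes_per_file:.0f} bytes of metadata per file")        # ~200 bytes
print(f"~{files_per_second / 1e6:.1f} million files scanned/sec") # ~3.9 million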

RAID 6, which can protect against two drive failures, is not enough – not with 200,000 drives to look after – so a multi-speed RAID set-up is being developed. Multiple copies of data would be written and striped so that a single drive failure could be tolerated easily: the failed drive would be rebuilt slowly in the background, and the rebuild would barely slow the accessing supercomputer down, if at all. A dual-drive failure would get a faster rebuild, and a three-drive failure a faster rebuild still, with, we assume, the compute side of the supercomputer slowing down somewhat due to a lower array I/O rate.
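
The broad idea seems to be that rebuild effort, and therefore the impact on host I/O, scales with how exposed a protection group is. A minimal sketch of such a policy might look like the snippet below; the thresholds, rates and function name are invented for illustration and are not IBM's GPFS code:

# Illustrative "multi-speed" rebuild policy: the more drives a protection
# group has lost, the more aggressively it is rebuilt, at greater cost to
# host I/O. All thresholds and rates here are made up for clarity.
def rebuild_rate_mb_per_sec(failed_drives_in_group: int) -> int:
    if failed_drives_in_group <= 0:
        return 0       # nothing to rebuild
    if failed_drives_in_group == 1:
        return 50      # slow background rebuild; the supercomputer barely notices
    if failed_drives_in_group == 2:
        return 200     # faster rebuild, some impact on array I/O
    return 800         # three or more failures: rebuild flat out, host I/O suffers

for failures in range(4):
    print(failures, "failed ->", rebuild_rate_mb_per_sec(failures), "MB/s")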

Hillsberg doesn't say how many simultaneous drive failures the array could survive. The MIT article says the array will be "a system that should not lose any data for a million years without making any compromises on performance". Really, give it a rest, this is marketing BS. Having it work and not lose data for 15 years will be good enough.

We're interested that homogeneous disk drives are being used – presumably all the data on the array will be classed as primary data, apart from the file metadata, which will need a flash speed-up. That means no tiering software is needed.

There will be lessons here for other big data drive array suppliers, such as EMC's Isilon unit, DataDirect and Panasas. It will be interesting to see if they abandon standard racks in favour of wider units, SFF drives, water-cooling and improved RAID algorithms too. ®

Bootnote

Storage-heavy supercomputer simulations are used in tasks such as weather forecasting, seismic surveying and complex molecular science - but there would seem to be no reason to keep any such customer's identity a secret. Another area in which supercomputer simulations are important is nuclear weapons: without live tests it becomes a difficult predictive task to tell whether a warhead will still work after a given period of time. As a result, the US nuclear weaponry labs are leaders in the supercomputing field.
