IBM builds biggest-ever disk for secret customer

120 PB monster 'for simulation'. Nuke labs?

Analysis Flash may be one cutting edge of storage action, but big data is driving developments at the other end of the storage pond, with IBM developing a 120 petabyte, 200,000-disk array.

The mighty array is being developed for a secret supercomputer-using customer "for detailed simulations of real-world phenomena", according to MIT's Technology Review, and takes current large-array technology trends a step or two further.

IBM Almaden storage systems research director Bruce Hillsberg says that 200,000 SAS disk drives are involved, rather than SATA ones, because performance is a concern. A back-of-an-envelope calculation suggests 600GB drives are being used and Seagate 2.5-inch Savvios come to mind.
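
That guess is simple arithmetic; a minimal sketch, assuming the decimal units drive vendors quote (1PB = 1,000,000GB):

```python
# Back-of-the-envelope per-drive capacity, assuming vendor-style decimal units
total_capacity_pb = 120        # reported array capacity, petabytes
drive_count = 200_000          # reported number of SAS drives

per_drive_gb = total_capacity_pb * 1_000_000 / drive_count   # 1 PB = 1,000,000 GB
print(f"{per_drive_gb:.0f} GB per drive")                    # -> 600 GB, Savvio-class territory
```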

We're told that wider racks than normal are being used to accommodate the drives in less floorspace than standard racks would require. Also, these racks are water-cooled rather than fan-cooled, which would seem logical if wide drawers crammed full of small form factor (SFF) drives were being used.

Some 2TB of capacity may be needed to hold the metadata for the billions of files in the array. The GPFS parallel file system is being used, with a hint that flash memory storage is used to speed its operations. This would indicate that the 120PB array would include, say, some Violin Memory arrays to hold the metadata, and would scan 10 billion files in about 43 minutes.
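
Those figures roughly hang together; here is a minimal sanity check using the numbers quoted above (the per-file metadata budget and scan rate it prints are our inference, not IBM's disclosure):

```python
# Rough sanity check on the GPFS metadata and scan figures quoted above
file_count = 10_000_000_000     # 10 billion files
metadata_tb = 2                 # quoted metadata capacity, terabytes
scan_minutes = 43               # quoted time to scan all files

bytes_per_file = metadata_tb * 1_000_000_000_000 / file_count
files_per_second = file_count / (scan_minutes * 60)

print(f"~{bytes_per_file:.0f} bytes of metadata per file")        # ~200 bytes
print(f"~{files_per_second / 1e6:.1f} million files scanned/sec") # ~3.9 million/sec
```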

RAID 6, which can protect against two drive failures, is not enough – not with 200,000 drives to look after – and so a multi-speed RAID set-up is being developed. Multiple copies of data would be written and striped so that a single drive failure could be tolerated easily. A failed drive would be rebuilt slowly in the background, and the rebuild would not slow the accessing supercomputer down much, if at all. A dual-drive failure would trigger a faster rebuild, and a three-drive failure a faster rebuild still, with, we assume, the compute side of the supercomputer slowing down somewhat due to a lower array I/O rate.
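
IBM hasn't published the algorithm, so the following is only an illustrative sketch of the multi-speed idea: the more drives a protection group has lost, the more I/O bandwidth the rebuild is allowed to take from the host workload. The function name, thresholds and bandwidth shares are our own invention, not IBM's code.

```python
# Illustrative multi-speed rebuild policy (thresholds and shares are invented,
# not IBM's): rebuild effort scales with the number of concurrently failed drives.
def rebuild_bandwidth_share(failed_drives: int) -> float:
    """Fraction of array I/O bandwidth devoted to rebuilding, by failure count."""
    if failed_drives <= 0:
        return 0.0   # nothing to rebuild
    if failed_drives == 1:
        return 0.05  # slow background rebuild; the supercomputer barely notices
    if failed_drives == 2:
        return 0.25  # faster rebuild, modest impact on host I/O
    return 0.60      # three or more failures: rebuild aggressively and accept
                     # a real hit to the compute side's I/O rate

for failures in range(4):
    print(failures, "failed drive(s) ->", rebuild_bandwidth_share(failures))
```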

Hillsberg doesn't say how many simultaneous drive failures the array could survive. The MIT article text says the array will be "a system that should not lose any data for a million years without making any compromises on performance". Really, give it a rest, this is marketing BS. Having it work and not lose data for 15 years will be good enough.

We're interested that homogeneous disk drives are being used – presumably all the data on the array will be classed as primary data, apart from the file metadata, which will need a flash speed-up. That means no tiering software is needed.

There will be lessons here for other big data drive array suppliers, such as EMC's Isilon unit, DataDirect and Panasas. It will be interesting to see if they abandon standard racks in favour of wider units, SFF drives, water-cooling and improved RAID algorithms too. ®

Bootnote

Storage-heavy supercomputer simulations are used in such tasks as weather forecasting, seismic surveying and complex molecular science - but there would seem to be no reason to keep any such customer's identity a secret. Another area in which supercomputer simulations are important is nuclear weapons: without live tests it becomes a difficult predictive task to tell whether a warhead will still work after a given period of time. As a result, the US nuclear weaponry labs are leaders in the supercomputing field.
