
IBM builds biggest-ever disk array for secret customer

120 PB monster 'for simulation'. Nuke labs?

Analysis Flash may be one cutting edge of storage action, but big data is driving developments on the other side of the storage pond, with IBM building a 120 petabyte, 200,000-disk array.

The mighty array is being developed for a secretive supercomputer-using customer "for detailed simulations of real-world phenomena", according to MIT's Technology Review, and takes current large-array technology trends a step or two further.

IBM Almaden storage systems research director Bruce Hillsberg says that 200,000 SAS disk drives are involved, rather than SATA ones, because performance is a concern. A back-of-an-envelope calculation suggests 600GB drives are being used, and Seagate's 2.5-inch Savvios come to mind.
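For what it's worth, the arithmetic behind that guess is simple enough. A minimal sketch, assuming vendor-style decimal units:

```python
# Back-of-an-envelope check (our illustration, not IBM's published figures).
total_capacity_bytes = 120 * 10**15   # 120PB, decimal units assumed
drive_count = 200_000                 # drive count quoted by Hillsberg

per_drive_gb = total_capacity_bytes / drive_count / 10**9
print(per_drive_gb)                   # -> 600.0, hence the 600GB drive guess
```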

We're told that wider-than-normal racks are being used to pack the drives into less floorspace than standard racks would require. These racks are also water-cooled rather than fan-cooled, which would seem logical if wide drawers crammed full of small form factor (SFF) drives were being used.

Some 2TB of capacity may be needed to hold the file metadata for the billions of files in the array. The GPFS parallel file system is being used, with a hint that flash memory storage is used to speed its operations. This would indicate that the 120PB array would include, say, some Violin Memory arrays to hold the metadata, and would scan 10 billion files in about 43 minutes.
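Running the same round numbers gives a sense of what flash-hosted metadata and that scan rate imply. A quick sketch, using our own assumptions about the units involved:

```python
# Rough figures implied by the numbers above (our arithmetic, not IBM's).
files = 10 * 10**9             # ten billion files
metadata_bytes = 2 * 10**12    # ~2TB of metadata, as suggested above
scan_seconds = 43 * 60         # the quoted 43-minute scan

print(metadata_bytes / files)         # -> ~200 bytes of metadata per file
print(files / scan_seconds / 10**6)   # -> ~3.9 million files scanned per second
```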

RAID 6, which can protect against two drive failures, is not enough – not with 200,000 drives to look after – and so a multi-speed RAID set-up is being developed. Multiple copies of data would be written and striped so that a single drive failure could be tolerated easily. A failed drive would be rebuilt slowly in the background, and the rebuild would barely slow the accessing supercomputer, if at all. A dual-drive failure would get a faster rebuild, and a three-drive failure a faster rebuild still, with, we assume, the compute side of the supercomputer slowing down somewhat due to a lower array I/O rate.
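To make the idea concrete, here is a toy model of that multi-speed rebuild policy. It is purely illustrative, assuming a simple three-tier scheme of our own invention; it is not IBM's GPFS code:

```python
# Toy model of a 'multi-speed' rebuild policy: the more exposed the data,
# the more array bandwidth gets diverted to the rebuild. Illustrative only.
def rebuild_pace(failed_drives: int) -> str:
    """Pick a rebuild pace based on how many drives in a protection group have died."""
    if failed_drives <= 1:
        return "slow background rebuild - the supercomputer barely notices"
    if failed_drives == 2:
        return "faster rebuild - some array I/O bandwidth diverted"
    return "flat-out rebuild - compute side slows on reduced array I/O"

for failures in (1, 2, 3):
    print(failures, "drive(s) down:", rebuild_pace(failures))
```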

Hillsberg doesn't say how many simultaneous drive failures could be tolerated. The MIT article says the array will be "a system that should not lose any data for a million years without making any compromises on performance". Really, give it a rest, this is marketing BS. Having it work and not lose data for 15 years will be good enough.

We're interested that homogeneous disk drives are being used – presumably all the data on the array will be classed as primary data, apart from the file metadata, which will need a flash speed-up. That means no tiering software is needed.

There will be lessons here for other big data drive array suppliers, such as EMC's Isilon unit, DataDirect and Panasas. It will be interesting to see if they abandon standard racks in favour of wider units, SFF drives, water-cooling and improved RAID algorithms too. ®

Bootnote

Storage-heavy supercomputer simulations are used in such tasks as weather forecasting, seismic surveying and complex molecular science, but there would seem to be no reason to keep any such customer's identity a secret. Another area in which supercomputer simulations are important is nuclear weapons: without live tests it becomes a difficult predictive task to tell whether a warhead will still work after a given period of time. As a result, the US nuclear weaponry labs are leaders in the supercomputing field.
