IBM builds biggest-ever disk for secret customer

120 PB monster 'for simulation'. Nuke labs?

Analysis Flash may be one cutting edge of storage action, but big data is driving developments on the other side of the storage pond, with IBM building a 120 petabyte, 200,000-disk array.

The mighty array is being developed for a secret supercomputer-owning customer "for detailed simulations of real-world phenomena", according to MIT's Technology Review, and takes current large-array technology trends a step or two further.

IBM Almaden storage systems research director Bruce Hillsberg says that 200,000 SAS disk drives are involved, rather than SATA ones, because performance is a concern. A back-of-an-envelope calculation suggests 600GB drives are being used and Seagate 2.5-inch Savvios come to mind.
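The arithmetic behind that guess is simple enough. A quick sanity check – the capacity and drive-count figures come straight from the article; the decimal-units assumption is ours:

```python
# Sanity-check the per-drive capacity implied by the article's figures.
# Assumes decimal units (1 PB = 1,000,000 GB), as drive makers use.
total_capacity_pb = 120      # reported array capacity
drive_count = 200_000        # reported number of SAS drives

gb_per_drive = total_capacity_pb * 1_000_000 / drive_count
print(gb_per_drive)          # 600.0 - matching 600GB SFF SAS drives
```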

We're told that wider-than-normal racks are being used to fit the drives into less floorspace than standard racks would need. These racks are also water-cooled rather than fan-cooled, which would seem logical if wide drawers crammed full of small form factor (SFF) drives are being used.

Some 2TB of capacity may be needed just to hold the metadata for the billions of files in the array. The GPFS parallel file system is being used, with a hint that flash memory storage speeds up its operations. This would indicate that the 120PB array could include, say, some Violin Memory arrays to hold the metadata, letting it scan 10 billion files in about 43 minutes.
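Those figures hang together, as a back-of-an-envelope cross-check shows – the 2TB, 10-billion-file and 43-minute numbers are from the article; the arithmetic and decimal-units assumption are ours:

```python
# Cross-check the metadata figures quoted above (decimal units assumed).
metadata_bytes = 2 * 10**12          # 2TB of metadata capacity
file_count = 10 * 10**9              # 10 billion files
scan_minutes = 43                    # quoted GPFS scan time

bytes_per_file = metadata_bytes / file_count
files_per_sec = file_count / (scan_minutes * 60)

print(bytes_per_file)                # 200.0 bytes of metadata per file
print(round(files_per_sec))          # ~3.9 million files scanned a second
```

Two hundred bytes a file is a plausible inode-plus-index budget, which is why a couple of terabytes of fast flash would cover the lot.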

RAID 6, which can protect against two simultaneous drive failures, is not enough – not with 200,000 drives to look after – and so a multi-speed RAID set-up is being developed. Multiple copies of data would be written and striped so that a single drive failure could be tolerated easily: the failed drive would be rebuilt slowly in the background, barely slowing the accessing supercomputer, if at all. A dual-drive failure would get a faster rebuild, and a three-drive failure a faster rebuild still – with, we assume, the compute side of the supercomputer slowing down somewhat due to the reduced array I/O rate.
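The escalating-rebuild idea can be sketched roughly like this – a hypothetical illustration only, with invented rate numbers; IBM has published no such code:

```python
def rebuild_rate_mb_per_sec(failed_drives: int) -> int:
    """Illustrative multi-speed rebuild policy: the more redundancy a
    stripe group has lost, the more aggressively it rebuilds, even at
    the cost of host I/O bandwidth. All rates are made up."""
    if failed_drives <= 0:
        return 0      # fully redundant: nothing to rebuild
    if failed_drives == 1:
        return 50     # slow background trickle; hosts barely notice
    if failed_drives == 2:
        return 200    # faster: the protection margin is shrinking
    return 800        # flat out: restore redundancy, let hosts slow down
```

The design choice is the trade the article describes: with full redundancy intact, host I/O wins; as copies are lost, restoring protection wins.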

Hillsberg doesn't say how many drives could simultaneously fail. The MIT article text says the array will be "a system that should not lose any data for a million years without making any compromises on performance". Really, give it a rest, this is marketing BS. Having it work and not lose data for 15 years will be good enough.
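For a sense of why rebuild speed matters at this scale: assuming a typical 2 per cent annual failure rate – our assumption, as Hillsberg quotes no AFR – a 200,000-drive array is replacing drives more or less continuously:

```python
# Expected drive failures at array scale (the 2% AFR is our assumption).
drives = 200_000
annual_failure_rate = 0.02

failures_per_year = drives * annual_failure_rate
failures_per_day = failures_per_year / 365

print(failures_per_year)             # 4000.0 failed drives a year
print(round(failures_per_day, 1))    # roughly 11 a day - rebuilds never stop
```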

We're interested that homogeneous disk drives are being used – presumably all the data on the array will be classed as primary data, apart from the file metadata, which gets a flash speed-up. That means no tiering software is needed.

There will be lessons here for other big data drive array suppliers, such as EMC's Isilon unit, DataDirect and Panasas. It will be interesting to see if they abandon standard racks in favour of wider units, SFF drives, water-cooling and improved RAID algorithms too. ®

Bootnote

Storage-heavy supercomputer simulations are used in such tasks as weather forecasting, seismic surveying and complex molecular science – but there would seem to be no reason to keep any such customer's identity a secret. Another area in which supercomputer simulations are important is nuclear weapons: without live tests it is a difficult predictive task to tell whether a warhead will still work after a given period of time. As a result, the US nuclear weaponry labs are leaders in the supercomputing field.
