
Traditional RAID is outdated and dying on its feet

Well, it sure is for large-scale data



HPC blog The video below is an interview I did with IBM storage guru Robert Murphy at SC13 in Denver. I’m still in catch-up mode after my recent near-disastrous rootkit episode.

[YouTube video: interview with IBM's Robert Murphy at SC13 in Denver]

In the video, Robert and I talk about how today’s typical RAID mechanisms (1, 5, 6, 10) just aren’t up to the job of protecting data against drive failure while providing ongoing access. Two trends in particular are responsible for the murder of traditional RAID:

Larger spindle size: Disks today are bigger than ever, with enterprises deploying single drives with an astounding 4TB capacity. Unfortunately, durability isn’t keeping up. A modern 4TB drive isn’t any more reliable than a 1TB version of the same spinner.

When a 4TB drive fails, it takes a long time to rebuild. Exactly how long depends on your array, how many drives are in it, and other factors, but I’ve seen estimates ranging from 20 hours to several days for a single 4TB rebuild.
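To put rough numbers on that claim, here's a minimal back-of-the-envelope sketch. The 50MB/s and 15MB/s effective rebuild rates are my own illustrative assumptions, not figures from IBM or the video, but they show why estimates land anywhere from about 20 hours to several days:

```python
# Back-of-the-envelope rebuild time for a single failed drive. Traditional RAID
# rebuilds are throttled by the effective write rate onto the replacement drive,
# which drops sharply when the array is also serving user I/O.

TB = 10**12  # drive makers count terabytes in decimal


def rebuild_hours(capacity_bytes, rebuild_mb_per_s):
    """Hours to rewrite an entire replacement drive at a given effective rate."""
    return capacity_bytes / (rebuild_mb_per_s * 10**6) / 3600


# An uncontended 50 MB/s vs a busy array crawling along at 15 MB/s (assumed rates):
for rate in (50, 15):
    print(f"4TB drive at {rate} MB/s: ~{rebuild_hours(4 * TB, rate):.0f} hours")
# -> roughly 22 hours in the best case, about three days when the array is busy
```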

While the drive is being rebuilt, your RAID array is still operating, but in degraded mode. How much it degrades varies with your hardware, software, and workload, but you could be running 25% to 35% slower.

Many more spindles: With the explosion of data, enterprises today have many more spindles than in the past. Even uber-large drives (technically, any drive above 2TB qualifies as ‘uber-large’) haven’t allowed organizations to cut down on the sheer number of spindles they’re spinning.

When you’re flogging thousands or tens of thousands of drives, you’re going to get more failures (now there’s a blinding glimpse of the obvious). Drives will need to be replaced and volumes rebuilt – and you’ll be running in degraded mode while this is going on.

Add more drives and you'll have more failures, which means yet more time spent in degraded mode during rebuilds. And as rebuild times stretch out, the odds of a second drive failing on the same array before the rebuild completes creep up – and that's the scenario that can end in data loss.
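To see how those two effects compound, here's a rough sketch. The 3 per cent annualised failure rate, the 12-drive array, and the rebuild windows are assumptions for illustration, not measured numbers:

```python
# Why big fleets are nearly always rebuilding something, and why longer rebuilds
# raise the risk of a second failure hitting the same array. Failure rates and
# array sizes below are assumed, and drive failures are treated as independent.

HOURS_PER_YEAR = 24 * 365
AFR = 0.03  # assumed annualised failure rate per drive


def failures_per_year(n_drives, afr=AFR):
    """Expected drive failures per year across the whole fleet."""
    return n_drives * afr


def p_second_failure(drives_in_array, rebuild_hours, afr=AFR):
    """Chance that at least one surviving drive in the same array fails
    before the rebuild finishes."""
    p_one = afr * rebuild_hours / HOURS_PER_YEAR
    return 1 - (1 - p_one) ** (drives_in_array - 1)


print(f"10,000-drive fleet: ~{failures_per_year(10_000):.0f} failures a year")
for hrs in (20, 72):
    print(f"12-drive array, {hrs}h rebuild: "
          f"{p_second_failure(12, hrs):.2%} chance of a second failure mid-rebuild")
# Small odds per rebuild, but multiply by hundreds of rebuilds a year and the
# chance of eventually losing an array stops looking negligible.
```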

Is there a better way?

IBM thinks so. Its answer is its homegrown GPFS (General Parallel File System), which approaches the data protection and access problem from a much wider angle. GPFS uses a form of declustered RAID that stripes data uniformly across every drive in the array, so when drives inevitably fail, all of the surviving drives pitch in to help on the rebuild.

According to IBM, rebuilds are radically faster (minutes rather than hours) and ‘degraded mode’ operation is 3-4 times less degrading.
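A toy model shows why spreading the rebuild across every surviving drive collapses the time from hours to minutes. The per-drive rates and the 180-drive array below are my own assumptions, not GPFS Storage Server measurements:

```python
# Toy comparison: traditional RAID funnels the whole rebuild through one spare
# drive, while a declustered layout scatters the lost data so every surviving
# drive reconstructs a small slice in parallel. Rates and counts are assumed.

TB = 10**12


def traditional_rebuild_hours(drive_tb, spare_write_mb_s=50):
    """Whole drive rewritten through a single replacement drive."""
    return drive_tb * TB / (spare_write_mb_s * 10**6) / 3600


def declustered_rebuild_minutes(drive_tb, surviving_drives, per_drive_mb_s=20):
    """Lost data split across all survivors, each contributing only a modest
    rate so user I/O takes a much smaller hit."""
    aggregate_bytes_per_s = surviving_drives * per_drive_mb_s * 10**6
    return drive_tb * TB / aggregate_bytes_per_s / 60


print(f"Traditional, 4TB drive:             ~{traditional_rebuild_hours(4):.0f} hours")
print(f"Declustered, 4TB drive, 180 drives: ~{declustered_rebuild_minutes(4, 180):.0f} minutes")
```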

IBM has also introduced a new storage appliance, the GPFS Storage Server, which combines GPFS with inexpensive hardware to yield a storage server with high levels of data protection and high performance.

What’s surprising is the cost. While Robert didn’t give out any numbers, he said that the GPFS-SS should cost about half of what other storage vendors charge for their proprietary arrays. It’s not very often that you see IBM competing on price, right?

Check out the video at the top for more details, charts, and some interesting storage talk.


