Researchers reveal radical RAID rethink

“Pipelined erasure coding” helps storage to scale at speed

Singaporean researchers have proposed a new way to protect the integrity of data in distributed storage systems and say their “RapidRAID” system offers top protection while consuming fewer network, computing and storage array resources than other approaches.

RAID – redundant arrays of inexpensive disks – has been a storage staple for almost a quarter of a century. The technique involves spreading redundant copies of data across a number of disks so that failure or loss of a single spindle does not result in data loss. When a drive dies, a replacement can be added to the array and the lost drive's data rebuilt onto it. Different “levels” of RAID work with varying numbers of disks and deliver different levels of reliability.
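
To illustrate the parity idea, here is a minimal Python sketch of the XOR arithmetic used by parity-based RAID levels such as RAID 5; the blocks and layout are invented for illustration rather than taken from any particular array.

```python
# A minimal sketch (not any vendor's implementation) of the parity idea
# behind RAID levels such as RAID 5: each stripe keeps a parity block,
# the byte-wise XOR of its data blocks, so the contents of any single
# failed drive can be rebuilt from the surviving drives.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks held on three drives
parity = xor_blocks(stripe)            # parity block held on a fourth drive

# Drive two dies: rebuild its block from the remaining data plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```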

RAID has, of late, become less popular as various scale-out architectures offer different approaches to redundant data storage. The technique is also challenged by multi-terabyte disk drives, as the sheer quantity of data on such disks means rebuilding a drive can take rather longer, and hog more IOPS, than many users are willing to endure.

Erasure codes are one of the techniques challenging RAID. They break data into fragments, expand them with redundant pieces and spread the lot across a wider pool of disks, so the original data can be re-assembled from a subset of fragments drawn from multiple sources. Erasure codes feature in the Google File System, Hadoop’s file system, Azure and several commercial products.
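
As a rough illustration of the idea (a toy sketch, not the Reed-Solomon codes those systems actually use), the following Python builds an (n, k) erasure code over a prime field: the data symbols become the coefficients of a polynomial, each fragment is that polynomial evaluated at a different point, and any k surviving fragments can be solved back into the original data.

```python
# Toy (n, k) erasure code over the prime field GF(P): any k of the n
# fragments rebuild the original k data symbols. Production systems use
# Reed-Solomon codes over GF(2^8) on bytes; a big prime just keeps the
# arithmetic readable here.
P = 2_147_483_647  # the Mersenne prime 2^31 - 1

def encode(data, n):
    """Treat the k data symbols as polynomial coefficients and evaluate
    the polynomial at x = 1..n, yielding n fragments (x, y)."""
    return [(x, sum(d * pow(x, i, P) for i, d in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(fragments, k):
    """Recover the k data symbols from any k surviving fragments by
    solving the Vandermonde system with Gaussian elimination mod P."""
    rows = [[pow(x, i, P) for i in range(k)] + [y] for x, y in fragments[:k]]
    for col in range(k):
        piv = next(r for r in range(col, k) if rows[r][col])   # pivot row
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)                     # Fermat inverse
        rows[col] = [v * inv % P for v in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

data = [42, 7, 99]                                        # k = 3 data symbols
fragments = encode(data, n=5)                             # spread across five nodes
survivors = [fragments[0], fragments[2], fragments[4]]    # any three will do
assert decode(survivors, k=3) == data
```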

Some have even described erasure codes as delivering RAIN – a redundant array of inexpensive nodes – that is positioned as a successor to RAID.

The Singaporean researchers’ work, available on arXiv, proposes a new scheme called RapidRAID that goes beyond other implementations of erasure codes, reducing the storage required for a viable archive while also cutting the time needed to create it.

The team thinks this is possible with what it calls “pipelined erasure coding”, under which:

“… the encoding process is distributed among those nodes storing replicated data of the object to be encoded, which exploits data locality and saves network traffic. We then arrange the encoding nodes in a pipeline where each node sends some partially encoded data to the next node, which creates parity data simultaneously on different storage nodes, avoiding the extra time required to distribute the parity after the encoding process is terminated.”
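
The mechanics can be sketched roughly as follows; the coefficients, block layout and arithmetic here are invented for illustration and are not RapidRAID's actual code construction.

```python
# Hedged sketch of pipelined encoding: the object is already replicated on
# the encoding nodes, so each node can fold a block it holds locally into
# the partially encoded data arriving from the previous node, keep the
# result as its own parity block, and forward it. Parity thus materialises
# on several nodes at once, with no final redistribution step.
P = 2_147_483_647  # toy prime field, as in the sketch above

blocks = [[42, 7], [99, 13], [8, 61]]   # the object split into three blocks,
                                        # all present on every replica-holding node
coeffs = [3, 5, 11]                     # one (illustrative) coefficient per encoding node

partial = [0, 0]                        # the partially encoded block passed along the pipeline
parities = []
for block, c in zip(blocks, coeffs):    # each loop iteration = one node in the pipeline
    partial = [(p + c * s) % P for p, s in zip(partial, block)]
    parities.append(list(partial))      # this node stores its partial result as a parity block

print(parities)   # three parity blocks, created on three different nodes in one pass
```

Because each node only hands one partially encoded block to its neighbour, parity ends up distributed across the pipeline as a by-product of encoding, rather than being computed in one place and shipped out afterwards.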

The paper goes on to define RapidRAID as a family of erasure codes which, like RAID, offer different levels of data protection.

Tests of the new codes are described in the paper, which compares RapidRAID to the Reed-Solomon erasure codes used in many current implementations. In a test involving 50 thin clients and 16 EC2 instances, the researchers report RapidRAID coming out ahead in several respects, notably the speed at which data can be encoded.

The researchers therefore declare RapidRAID a viable enabler for big data, but conclude that more work is needed before it can be considered suitable for applications that require more than two copies of data.

The codes are available for download on GitHub. ®
