
Perish the fault! Can your storage array take a bullet AND LIVE?

Sysadmin Trevor's gentle guide to protecting your data - and your career

Feature Storage doesn't have to be hard. It isn't really all that hard. If you ask yourself "can my storage setup lead to data loss?" then you have already begun your journey. As a primer, I will attempt to demystify the major technologies in use today to solve that very problem.

Certain types of storage technologies (rsync, DFSR) are employed with the understanding that there will be some data loss in the event of an error. Changes that haven't been synchronised will be unavailable or even lost. This is generally viewed as highly available (HA) as opposed to fault tolerant (FT) storage, though the terms are frequently interchanged and misused.
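To make that concrete, here is a minimal sketch of the sort of loop that underpins many of these setups: push changes to a second box every few minutes and accept that anything written in between can vanish. The paths, host name and interval are hypothetical, not a recommendation.

```python
import subprocess
import time

# Hypothetical periodic replication: highly available, not fault tolerant.
# Anything written to /srv/data between passes is lost if the primary dies
# before the next pass completes.
SYNC_INTERVAL = 300   # seconds; also the worst-case data-loss window

while True:
    subprocess.run(
        ["rsync", "-a", "--delete", "/srv/data/", "backup01:/srv/data/"],
        check=True,
    )
    time.sleep(SYNC_INTERVAL)
```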

Where the sort of data loss that accompanies an HA setup isn't acceptable, we start talking about true fault tolerance. In storage this takes the form of RAID (or ZFS) and various forms of clustering. A proper storage area network (SAN) from a major vendor may well incorporate one or more techniques, often combined with proprietary home-grown technologies.

RAID (redundant array of independent disks)

RAID is pretty crap. Oh, it was the absolute bee's knees once upon a time; but that time is well past. RAID lashes multiple hard drives together into a single volume and is theoretically capable of surviving differing numbers of failed drives, depending on the RAID level. Due to rising drive sizes (among other factors), the most commonly implemented level, RAID 5, is now a liability in some scenarios. Its successor, RAID 6, soon will be.

I speak, of course, of the dreaded Unrecoverable Read Error (URE). The short version is that you can get UREs on any drive without them being detected by the drive itself or the controller electronics. Lots of things can cause UREs, some you cannot prevent (such as cosmic rays). RAID is supposed to allow you to suffer drive failures, but if you encounter a URE on a supposedly "good" drive during a rebuild you're hooped.

This isn't the end of the world; the maths on this is well known and straightforward. Your chances of a URE depend on the quality of the drives you use. If you use low-end SATA drives, then you need to consider RAID 5 in the same manner as you consider RAID 0 (completely unsafe), and RAID 6 should be employed only as a temporary patch until you can get your data onto something more reliable.
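By way of illustration, here is that back-of-the-envelope maths as a short Python sketch. The array geometry is hypothetical, and the one-error-per-10^14/10^15/10^16-bits figures are the sort of numbers quoted on vendor datasheets rather than measurements of any particular drive.

```python
import math

def p_rebuild_hits_ure(read_tb, ure_rate_bits):
    """Chance of at least one unrecoverable read error (URE) while reading
    read_tb terabytes, given a spec of one URE per ure_rate_bits bits read."""
    bits_read = read_tb * 1e12 * 8                    # decimal TB -> bits
    return 1 - math.exp(-bits_read / ure_rate_bits)

# Hypothetical array: RAID 5 across four 2TB drives. Rebuilding one failed
# drive means reading the three survivors end to end, i.e. 6TB.
to_read_tb = 3 * 2

for label, rate in [("consumer SATA, 1 in 10^14", 1e14),
                    ("nearline SAS,  1 in 10^15", 1e15),
                    ("enterprise,    1 in 10^16", 1e16)]:
    print(f"{label}: {p_rebuild_hits_ure(to_read_tb, rate):.1%} chance of a URE mid-rebuild")
```

Roughly a one-in-three chance of blowing the rebuild on cheap SATA, versus a few per cent or less on better disks: that is the whole argument in miniature.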

Life's certainties: taxes, death and drive failures

Better quality drives – typically SAS drives without an identical SATA model, or Fibre Channel disks – have lower URE rates. This dramatically lowers your chances of a catastrophic array failure while rebuilding your storage system and gives RAID a few more years of usefulness. A good hardware RAID controller can cope with UREs in RAID 6 and map around them. The chances of two disks in a RAID 6 set hitting UREs on the same mapped sector at the same time are very small.
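A rough, hypothetical estimate of just how small, using the same datasheet-style numbers as above:

```python
# Back-of-the-envelope odds of the *same* 4KB mapped sector being unreadable
# on two members of a RAID 6 set at once (hypothetical 2TB drives with a
# 1-in-10^14 URE spec; real geometry varies by controller).
sector_bits = 4096 * 8
p_sector_ure = sector_bits / 1e14            # chance any one sector read fails
sectors_per_drive = 2e12 / 4096              # roughly 500 million sectors
print(f"{sectors_per_drive * p_sector_ure ** 2:.1e}")   # ~5e-11: effectively never
```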

These higher-quality drives will only save us for so long, however. Longer rebuild times and drive failure correlation are also problems: disks in an array tend to be the same age, from the same production run, with the same defects, and thus die in groups. Flash has its own problems and isn't going to save RAID either. Properly designed RAID arrays with enterprise-class components will remain viable well into the next decade. Consumer-grade stuff, not so much.

ZFS designed by Sun

ZFS is a filesystem as designed by a paranoid schizophrenic. It is also a replacement for RAID. The true depth of its data integrity technologies is beyond the scope of this article, but suffice it to say that it can withstand triple disk failures and actively works to combat things like UREs. While it is almost magical in its ability to ensure the integrity of your data, there is one condition when using ZFS: never, ever, under any circumstances, lie to ZFS.

Do not use ZFS on a virtual disk (hypervisor-created, iSCSI or FCoE) or on hardware RAID. ZFS must have complete transparent control of the hard drives under its care. Using features such as VMware's "raw device mapping" is fine so long as what you are mapping is a local disk.
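As a sketch of what not lying to ZFS looks like in practice, the following hands ZFS five whole, local disks and asks for triple parity (raidz3), with no hardware RAID or hypervisor volume in the way. The pool name and device names are made up; the commands are issued from Python purely for consistency with the other examples here.

```python
import subprocess

# A minimal sketch of giving ZFS what it wants: whole, local disks and
# triple parity (raidz3). Pool name and device names are hypothetical;
# adjust for your own kit.
disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]

subprocess.run(["zpool", "create", "tank", "raidz3", *disks], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)   # confirm the layout
```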

Some administrators run ZFS on hardware RAID anyway, disabling the ZFS Intent Log and configuring the hardware controller to ignore ZFS's commands to flush data to the disks. This leaves the RAID controller to decide when data is flushed, relying on a battery-backed cache in case of power outages.

This is typically part of a tuning strategy to drive up performance, measured in IO operations per second (IOPS). It is most common among administrators mixing ZFS and NFS, as NFS asks the system to flush data to disk after each write, a design feature that clashes with ZFS's more advanced algorithms for balancing IOPS and data integrity.

Other administrators – myself among them – frown on this because it removes some of ZFS's data integrity features from play. If IOPS are a concern, I prefer to rely on hybrid storage pools with solid-state disks or NVRAM drives. Failing that, it is better to configure ZFS itself to lie to NFS about having flushed writes to disk and allow ZFS to retain all its protection mechanisms intact.
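A sketch of that preference, with hypothetical pool, dataset and device names: bolt fast devices onto the pool for the synchronous write traffic, and only reach for the "lie to NFS" knob as a last resort.

```python
import subprocess

# Sketch of the hybrid-pool approach: keep every ZFS integrity feature and
# soak up NFS's synchronous writes with fast devices instead of cheating.
# Pool, dataset and device names are hypothetical.
subprocess.run(["zpool", "add", "tank", "log", "/dev/nvme0n1"], check=True)    # SLOG for sync writes
subprocess.run(["zpool", "add", "tank", "cache", "/dev/nvme1n1"], check=True)  # L2ARC read cache

# The "lie to NFS" alternative: the pool stays consistent on disk, but an
# acknowledged write is no longer guaranteed to have reached stable storage.
# subprocess.run(["zfs", "set", "sync=disabled", "tank/nfs"], check=True)
```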

ReFS from Redmond

ReFS is often touted as Microsoft's answer to ZFS. Let me be perfectly clear here: it is no such thing, in any way, shape or form. ReFS is a huge advance over NTFS, but there is still a lot of work to do. Hopefully there will be a future in which Microsoft's resilient storage technologies can withstand the loss of more than a single disk, but at present ReFS is nothing more than a technology demonstration, in my opinion.

ReFS and Storage Spaces need to get together and have little proprietary babies for a few generations before they are ready to go toe-to-toe with ZFS. In the here and now, nothing should replace traditional hardware RAID for Microsoft administrators using local storage on their servers.
