Original URL: http://www.theregister.co.uk/2013/03/13/2013_storage_primer/

Perish the fault! Can your storage array take a bullet AND LIVE?

Sysadmin Trevor's gentle guide to protecting your data - and your career

By Trevor Pott

Posted in Storage, 13th March 2013 10:03 GMT

Feature Storage doesn't have to be hard. It really isn't all that hard. If you ask yourself "can my storage setup lead to data loss?" then you have already begun your journey. As a primer, I will attempt to demystify the major technologies in use today to solve that very problem.

Certain types of storage technologies (rsync, DFSR) are employed with the understanding that there will be some data loss in the event of an error. Changes that haven't been synchronised will be unavailable or even lost. This is generally viewed as highly available (HA) as opposed to fault tolerant (FT) storage, though the terms are frequently interchanged and misused.

Where the sort of data loss that accompanies an HA setup isn't acceptable, we start talking about true fault tolerance. In storage this takes the form of RAID (or ZFS) and various forms of clustering. A proper storage area network (SAN) from a major vendor may well incorporate one or more techniques, often combined with proprietary home-grown technologies.

RAID (redundant array of independent disks)

RAID is pretty crap. Oh, it was the absolute bee's knees once upon a time; but that time is well past. RAID lashes multiple hard drives together into a single volume and is theoretically capable of surviving differing numbers of failed drives depending on the RAID level. Due to rising drive sizes (among other factors), the most commonly implemented level, RAID 5, is now a liability in some scenarios. Its successor, RAID 6, soon will be.

I speak, of course, of the dreaded Unrecoverable Read Error (URE). The short version is that you can get UREs on any drive without them being detected by the drive itself or the controller electronics. Lots of things can cause UREs, some you cannot prevent (such as cosmic rays). RAID is supposed to allow you to suffer drive failures, but if you encounter a URE on a supposedly "good" drive during a rebuild you're hooped.

This isn't the end of the world; the maths on this is well known and straightforward. Your chances of a URE depend on the quality of the drive you use. If you use low-end SATA drives, then you need to consider RAID 5 in the same manner as you consider RAID 0 (completely unsafe) and RAID 6 should be employed only as a temporary patch until you can get your data onto something more reliable.
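
For the curious, here's the back-of-the-envelope version in Python. The numbers are illustrative assumptions on my part rather than anything measured: vendor-quoted URE rates of roughly one error per 10^14 bits read for consumer SATA and one per 10^15 for enterprise drives, independent errors, and a rebuild that has to read every surviving disk in full.

    import math

    def p_rebuild_hits_ure(surviving_disks, disk_size_tb, ure_rate=1e-14):
        """Chance of at least one URE while reading every surviving disk in
        full during a rebuild. ure_rate is errors per bit read: ~1e-14 for
        consumer SATA, ~1e-15 for enterprise SAS/FC (assumed figures)."""
        bits_to_read = surviving_disks * disk_size_tb * 1e12 * 8
        p_clean = math.exp(bits_to_read * math.log1p(-ure_rate))  # P(no URE)
        return 1 - p_clean

    # Six 4TB drives in RAID 5: one dies, the other five must be read in full.
    print(p_rebuild_hits_ure(5, 4, ure_rate=1e-14))  # ~0.80 on consumer SATA
    print(p_rebuild_hits_ure(5, 4, ure_rate=1e-15))  # ~0.15 on enterprise disks

On those assumptions, losing one 4TB drive from a six-disk RAID 5 of cheap SATA gives you roughly a four-in-five chance of hitting a URE before the rebuild finishes.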

[Image: hard drives in a plastic case. Caption: Life's certainties: taxes, death and drive failures]

Better-quality drives – typically SAS drives with no identical SATA twin, or Fibre Channel disks – have lower URE rates. This dramatically lowers your chances of a catastrophic array failure while rebuilding your storage system and gives RAID a few more years of usefulness. A good hardware RAID controller can cope with UREs in RAID 6 and map around them. The chances of two UREs striking the same mapped sector on two disks in a RAID 6 at the same time are very small.

These higher-quality drives will only save us for so long, however. Longer rebuild times and drive failure correlation are also problems. Disks in an array tend to be the same age, from the same run, have the same defects, and thus die in groups. Flash has its own problems as well and isn't going to save RAID either. Properly designed RAID arrays with enterprise-class components will still be viable until well into the next decade. Consumer-grade stuff, not so much.

ZFS designed by Sun

ZFS is a filesystem as designed by a paranoid schizophrenic. It is also a replacement for RAID. The true depth of its data integrity technologies is beyond the scope of this article, but suffice it to say that it can withstand triple disk failures and actively works to combat things like UREs. While it is almost magical in its ability to ensure the integrity of your data, there is one condition when using ZFS: never, ever, under any circumstances, lie to ZFS.

Do not use ZFS on a virtual disk (hypervisor-created, iSCSI or FCoE) or on hardware RAID. ZFS must have complete transparent control of the hard drives under its care. Using features such as VMware's "raw device mapping" is fine so long as what you are mapping is a local disk.
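
By way of a sketch – device and pool names are placeholders of my own, and I'm driving the commands from Python purely for consistency with the other examples in this piece – doing it the way ZFS wants looks like this: whole disks, triple parity, regular scrubs.

    import subprocess

    # Whole, raw disks handed straight to ZFS: no hardware RAID and no virtual
    # disks in between. Device names and the pool name are placeholders.
    disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg"]

    # raidz3 is the triple-parity layout: the pool survives any three failed
    # disks, and every block is checksummed so a URE is detected and repaired
    # from parity instead of being silently returned as good data.
    subprocess.run(["zpool", "create", "tank", "raidz3"] + disks, check=True)

    # Periodic scrubs read and verify every block, catching latent UREs while
    # the redundancy needed to repair them still exists.
    subprocess.run(["zpool", "scrub", "tank"], check=True)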

Some administrators run ZFS on hardware RAID anyway, disabling the ZFS Intent Log and configuring the hardware controller to ignore ZFS's commands to flush data onto the disks. This leaves the RAID controller to decide when data actually hits the platters, relying on its cache battery to cover power outages.

This is typically part of a tuning strategy to drive up performance, measured in IO operations per second (IOPS). It is most common among administrators mixing ZFS and NFS, because NFS asks the system to flush data to disk after each write, a design feature that clashes with ZFS's own, more sophisticated algorithms for balancing IOPS against data integrity.

Other administrators – myself among them – frown on this because it removes some of ZFS's data integrity features from play. I prefer to rely on hybrid storage pools with solid-state disks or NVRAM drives if IOPS are a concern. It is better to configure ZFS to lie to NFS about having flushed writes to disk and allow ZFS to retain all its protection mechanisms intact.
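
Here's a sketch of what that looks like in practice, again with placeholder device and dataset names of my own choosing: a mirrored flash intent log to soak up NFS's synchronous writes, an optional read cache, and the per-dataset sync property as the "lie to NFS" knob.

    import subprocess

    # A mirrored pair of SSD/NVRAM devices as a separate intent log (SLOG):
    # NFS's synchronous writes are committed to fast flash, then flushed to
    # the spinning disks in ZFS's own transaction groups.
    subprocess.run(["zpool", "add", "tank", "log", "mirror",
                    "/dev/nvme0n1", "/dev/nvme1n1"], check=True)

    # Optionally, another SSD as a read cache (L2ARC).
    subprocess.run(["zpool", "add", "tank", "cache", "/dev/sdh"], check=True)

    # The "lie to NFS" knob: sync=disabled acknowledges synchronous writes
    # immediately. The pool itself stays consistent, but the last few seconds
    # of acknowledged writes can vanish in a power cut, so apply it only to
    # datasets where that trade-off is acceptable.
    subprocess.run(["zfs", "set", "sync=disabled", "tank/nfs-exports"], check=True)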

ReFS from Redmond

Microsoft's ReFS is often touted as Redmond's answer to ZFS. Let me be perfectly clear here: this is in no way, shape or form reality. ReFS is a huge advance over NTFS; however, there is still a lot of work to do. Hopefully there will be a future in which Microsoft's resilient storage technologies can withstand the loss of more than a single disk, but at present it is nothing more than a technology demonstration, in my opinion.

ReFS and Storage Spaces need to get together and have little proprietary babies for a few generations before they are ready to go toe-to-toe with ZFS. In the here and now, nothing should replace traditional hardware RAID for Microsoft administrators using local storage on their servers.

Get this right and you'll be singing in the RAIN

RAIN is a redundant (or reliable) array of inexpensive nodes. For a brilliant explanation I direct you to this video by Gene Fay of Nine Technology. Short version: RAIN copies your data across multiple individual computers for redundancy.

[Image: a server rack full of storage nodes. Caption: Seize the RAINs, keep your servers' data protected]

There are many different implementations of RAIN out there today; this is a large part of what the kerfuffle over "big data" is all about. When you have conversations about HDFS, GlusterFS or Amazon's S3 you are talking about RAIN. In general, RAIN setups don't work like traditional file systems, although the Gluster team is building tech on top of GlusterFS that seeks to change this.

With most RAIN setups, your operating system doesn't mount them and you don't create NFS or SMB shares. If you really want to do those sorts of things, you need to layer virtual disks on top of the RAIN array with something like FUSE. At this point you're way out in the weeds and you should probably be reassessing the whole project. Still, if you really want to, you can be bizarre and run VMware virtual machines on Gluster via an NFS server translator.

While you can throw layers of translation on top of a RAIN setup in order to make it pretend to be a traditional disk, RAIN is generally for object (not file) storage. It's better to think of RAIN setups as really big databases rather than traditional file systems.
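
To make the "big database" point concrete, here is a minimal sketch against Amazon's S3 using the boto3 client; the bucket and key names are invented for illustration. Every operation is a whole-object PUT or GET keyed by name – no mount point, no seek, no in-place rewrite.

    import boto3

    s3 = boto3.client("s3")

    # A "write" is a whole-object PUT, keyed by name...
    s3.put_object(Bucket="example-bucket",
                  Key="backups/vm01.img",
                  Body=b"contents of the object")

    # ...and a "read" is a whole-object GET. There is no directory tree to
    # mount and no partial, in-place update as on a POSIX file system.
    obj = s3.get_object(Bucket="example-bucket", Key="backups/vm01.img")
    data = obj["Body"].read()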

Bulletproof clusters

Of course, if ZFS or RAID underpin your storage layer, then what happens if I shoot the storage server? RAIN would seem to be resilient to the loss of an individual system, but there's nothing native to ZFS or RAID to deal with a bullet through the CPU.

This is where clustering comes in. An ideal deployment for fault tolerance would have two servers in bit-for-bit lock-step. In the free software world you are looking at DRBD with Linux or HAST with FreeBSD.

Assuming you have a solid hardware RAID underneath, Microsoft's Server 2012 is actually the basis for a very reliable cluster. Cluster Shared Volumes v2 is how I get my RAID 61: hardware RAID 6 on each node, mirrored. (I turn write caching off in order to ensure that I don't lose data in memory if a node dies. Slower, but safer.)

Combine that with Server 2012's new NFS 4.1 server, the iSCSI target or SMB 3.0 (which supports multichannel, transparent failover and node fault tolerance) and I can shoot one of my Microsoft servers without the VMware cluster that uses them for storage knowing anything's happened.

Speaking of VMware, it offers the vSphere Storage Appliance. It is a reliable technology for creating a storage cluster; however, it only scales to three physical systems per storage appliance.

It's all rather a mess right now, isn't it?

If you are starting to sense some holes in feature availability here, you aren't alone. This is why storage vendors exist as separate entities. Honest-to-$deity fault-tolerant storage with open-source tools is an absolute pig to implement and Microsoft needs time to get all its technology ducks in a row. (It needs triple disk redundancy with ReFS on Cluster Shared Volumes scaling to hundreds of nodes before it is a real player.) VMware has the basic technology, but it needs to scale quite a bit more before it is a real consideration.

This is why there are so many storage startups out there. It is also why the storage giants can still sell those big, expensive SANs. There is a lot to consider when planning your storage today, even if it is only for a single server. What you knew ten years ago doesn't really apply any more. What you knew five years ago is probably just enough to get you into trouble.

Of course, these technologies are for fault tolerance only. Fault tolerance is not a backup. If your data doesn't exist in at least two physical locations, then your data does not exist; so on top of utilising the fault-tolerant technologies discussed above, make sure you have a proper backup plan. And remember: a fault-tolerant system (or a backup) that hasn't been tested isn't any form of protection at all. ®