Finally – from brandbox to whitebox: Storage fabric is SDS realised
An existential threat to today's big players
A new term is popping up among startups: the storage fabric. Depending on who you talk to, the term itself may be recycled, but the concepts and implementations are definitely new. Today's storage fabrics come closer to the originally hyped promise of Software Defined Storage (SDS). What remains to be seen is whether they have arrived in time to matter.
The basic idea behind modern storage fabrics is really not that dissimilar to proper storage virtualization. You take whatever storage you have, wherever you have it, and you mash it all together into one great big amorphous blob. It's not quite centralized storage a la SAN or NAS. It's also not NOT centralized storage a la SAN or NAS. Repeat for hyperconverged, cloud, hybrid cloud, local and so forth.
This is perhaps best explained with an example.
Let's say that you're a shop with a handful of NASes offering up SMB and NFS shares for consumption, a couple of SANs serving up LUNs, and a bunch of servers with hard drives in them. Without a storage fabric, if you want to do neat virtualization things like vMotion with the VMs running on those servers, you need either to install some hyperconvergence software or to store your workloads on one of the SANs or NASes.
This has worked fine for some time, but it starts to fall apart in a few different places. If you want to scale up you have two non-optimal choices. The first is to do a complicated needs analysis to determine what kind of storage you are likely to need in the future. You then buy a great big blob of it in the form of a SAN or NAS and hope you didn't screw up.
Alternately, if you've gone the hyperconvergence route, you can add more nodes to your hyperconverged cluster, assuming you haven't hit your cluster size limit. If you have hit that limit, you have to either shard your existing cluster (not easy) or start a new one (usually a minimum of three nodes). Adding nodes just to get storage might leave you with a lot of unused compute capacity, and a hyperconvergence vendor's idea of a "storage heavy" solution tends not to align well with real-world needs.
All of the above is before we've even discussed balancing performance against capacity, or the network implications of any of this, and it still leaves public cloud storage as its own separate thing.
Along come storage fabrics. There are a few different versions running around, but I'll take the most flexible among them as the basis for what's possible. A storage fabric essentially claims all the storage you have and lashes it together into a single clustered pool of storage from which you provision what you need.
In the case of systems with local storage, a storage fabric would either install directly onto the system or let you deploy a virtual machine into a hypervisor. The software then offers you the choice to claim all the storage it can see, and whatever RAM you choose to devote to it. In the case of VMs, you can usually pass disks through directly to the VM and/or assign it virtual disks, as you see fit. So far, I have only encountered bare metal installers for Linux; I'm not sure why no one seems to have made one for Windows yet.
One or more of the installed copies of the fabric software can claim SAN and/or NAS storage. In most fabric solutions you have to evacuate the storage you wish to claim and then assign it to the fabric; however, this is changing.
At least one startup offers the ability to "ingest" SAN and NAS storage. For SANs, what happens is that you take your LUN offline for a moment and assign it to the fabric. The fabric then immediately posts that LUN from its own IP address, and will usually also offer you the option to expand the size of the LUN if you so choose. This works more or less the same way for NAS shares.
Once the fabric is responsible for serving the storage, it applies its storage policies to the SAN or NAS storage it's been assigned. If, for example, you assigned a policy of three copies of data with a target latency of 1ms and a maximum of 100ms, the storage fabric would start duplicating the data to other storage devices within the fabric and moving hot blocks to flash.
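To make the policy idea concrete, here is a minimal sketch of policy-driven replica placement. It assumes nothing about any vendor's actual API — the `Device`, `Policy` and `place` names are all hypothetical — but it captures the logic: keep only the devices that can meet the policy's latency ceiling, then put the required number of copies on the fastest of them.

```python
# Hypothetical sketch of policy-driven placement; not any real fabric's API.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    latency_ms: float   # measured access latency to this device
    tier: str           # e.g. "flash", "disk", "cloud"

@dataclass
class Policy:
    copies: int          # replicas required
    max_latency_ms: float  # latency ceiling from the storage profile

def place(devices: list[Device], policy: Policy) -> list[Device]:
    """Pick the required number of replica locations on compliant devices,
    preferring the lowest-latency ones (so hot data gravitates to flash)."""
    compliant = [d for d in devices if d.latency_ms <= policy.max_latency_ms]
    if len(compliant) < policy.copies:
        raise RuntimeError("policy cannot be satisfied by the current fabric")
    return sorted(compliant, key=lambda d: d.latency_ms)[:policy.copies]
```

With a three-copy, 100ms-maximum policy, a pool of one flash box, two disk boxes and one cloud target would land the replicas on flash and disk, leaving the too-slow cloud target out.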
The manager nodes in a storage fabric can usually post multiple storage types: storage that appears local when installed on bare metal, plus iSCSI, FCoE, SMB and NFS for "remote" consumption. I use quotes around remote because, in the case of nodes running as VMs, you will have to assign that storage back to the hosts those VMs run on using one of these remote protocols if you wish to use it for workloads. In essence, the VMs act as virtual storage appliances in a manner very similar to many hyperconverged solutions.
Storage fabrics can also claim public cloud storage. Each vendor has a different set of public cloud providers they support. There are usually two basic ideas behind the public cloud and/or service provider cloud support. The first is that regular snapshots of provisioned storage are shipped up to the cloud for data protection purposes.
The second cloud integration allows really cold blocks to be shipped up to the public cloud, turning the on-premises storage fabric into something of a cloud storage gateway. Vendors differ here in whether they use a distributed metadata system across all the data nodes, so that on-premises and cloud storage are truly blended, or make the public cloud copy the "single source of truth", with the on-premises hardware acting predominantly as a cache.
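The "cloud as single source of truth" approach can be sketched as a toy cache model — every block lives in the (simulated) cloud tier, the local tier only caches it, and an eviction pass ships out whatever has gone cold. The `CloudGateway` class and its thresholds are invented for illustration; real gateways track access patterns far more carefully.

```python
# Toy model of the cloud-as-source-of-truth design; names are illustrative.
class CloudGateway:
    def __init__(self, cold_after_s: float):
        self.cloud = {}          # block_id -> data (the source of truth)
        self.local = {}          # block_id -> (data, last_access_time)
        self.cold_after_s = cold_after_s

    def write(self, block_id: str, data: bytes, now: float) -> None:
        """Writes land in both tiers; the cloud copy is authoritative."""
        self.cloud[block_id] = data
        self.local[block_id] = (data, now)

    def read(self, block_id: str, now: float) -> bytes:
        """Serve from the local cache if possible, else fetch from cloud
        and re-warm the cache."""
        if block_id in self.local:
            data, _ = self.local[block_id]
        else:
            data = self.cloud[block_id]
        self.local[block_id] = (data, now)
        return data

    def evict_cold(self, now: float) -> list[str]:
        """Drop blocks untouched for longer than the threshold from the
        local cache; the cloud copy remains, so nothing is lost."""
        cold = [b for b, (_, t) in self.local.items()
                if now - t > self.cold_after_s]
        for b in cold:
            del self.local[b]
        return cold
```

A cold read after eviction simply misses the cache and comes back from the cloud copy, which is exactly the latency trade the gateway model makes.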
Storage fabrics can layer data services on top of the whole shebang. This means that, regardless of the capabilities of the underlying hardware, any storage you provision could have support for deduplication, compression, encryption and so forth.
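Two of those services — deduplication and compression — can be sketched with a toy content-addressed store. The `DedupStore` class below is hypothetical, but the mechanism is the standard one: hash each block, store compressed data once per unique hash, and hand back the hash as the block's address. The backing hardware never needs to know.

```python
# Toy dedup + compression layer; illustrative, not any vendor's implementation.
import hashlib
import zlib

class DedupStore:
    def __init__(self):
        self.chunks = {}   # sha256 hex digest -> compressed bytes

    def put(self, data: bytes) -> str:
        """Store a block, compressed, keyed by its content hash.
        Identical blocks are stored only once (deduplication)."""
        key = hashlib.sha256(data).hexdigest()
        if key not in self.chunks:
            self.chunks[key] = zlib.compress(data)
        return key

    def get(self, key: str) -> bytes:
        """Decompress and return the block for a given content hash."""
        return zlib.decompress(self.chunks[key])
```

Writing the same block twice costs one stored chunk, which is the whole point of layering the service above the hardware.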
All about the whitebox
The endgame of storage fabrics is to allow migration to whitebox storage. If you want more capacity, you add a cheap box full of cheap disks. If you want more speed you add a cheap box full of NVMe flash and/or lots and lots of RAM. If you want more archival storage, add more cloud.
Data access is constantly monitored to ensure that latency is within the limits specified by the profile for the provisioned storage. If it's not, blocks are moved to faster and/or closer devices. This creates some locality awareness.
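The monitoring side of that loop reduces to a simple check: compare each block's observed latency against the profile's limit and queue the offenders for promotion to a faster or closer device. The function below is an invented illustration of that decision, worst offenders first so the mover can prioritise.

```python
# Illustrative latency check; no real fabric exposes exactly this function.
def blocks_to_promote(observed: dict[str, float], limit_ms: float) -> list[str]:
    """Return the IDs of blocks whose measured latency breaks the
    profile limit, sorted worst-first for the block mover."""
    offenders = [b for b, ms in observed.items() if ms > limit_ms]
    return sorted(offenders, key=lambda b: observed[b], reverse=True)
```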
A storage fabric is an existential threat to all of today's big storage players. It renders them as irrelevant as VMware made the hardware vendors. Whitebox, tech titan, it doesn't matter. Just as virtualization made the workload the bit that we care about, storage fabrics are putting the data front and center once more.
This means that the real evolution and battleground points on today's fabrics are all around allowing finer-grained control over data locality and incorporating content awareness so that sensitive data can be flagged. This would let you do things like "not put data containing personally identifiable information into clouds or geos that don't support appropriate privacy measures". It's still early days for this level of capability, but what I've seen thus far is encouraging. ®