

EMC crying two SAN breakup tears

Binary split Humpty SAN Dumpty logically put back together again

By Chris Mellor


Analysis Dell EMC is working on fixing the widening split between primary data stored on flash and capacity data stored on object arrays, by logically combining the two underneath a 2 TIERS software abstraction layer.

The starting point is that SAN disk or hybrid flash/disk arrays are diverging into separate systems under twin pressures: faster access is needed for primary data, and more space is needed for secondary data.

One array is for primary data on flash and is called a hot edge or fast tier in Dell EMC's scheme of things.

The other is for secondary (nearline) data on disk, accessed through an object storage system that could be on- or off-premises. It has slower access to data, but a significantly lower cost per GB than the hot edge store.

Dell EMC calls this a cold core or capacity tier, and the company is working on a 2 TIERS abstraction layer [PDF] to unify the two. A slide from that deck shows its starting premises:

We would disagree with the third sub-bullet on this slide: on-premises file- or block-accessed capacity disks are not being replaced by the cloud per se, but by object-accessed capacity disks that could be either on-premises or off-premises (in the cloud).

EMC thinks that the hot edge could be in the hundreds-of-terabytes range, while the capacity tier is much larger: think hundreds of petabytes.

A unifying abstraction layer would have metadata indicating which tier a data item is located on and where it is within that tier. There would then be a single global namespace for data items, one capable of encompassing trillions of objects. The layer code could also move data between the tiers as necessary, using a policy-driven approach for automated data placement (tiering).
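The mechanics of such a layer can be sketched in a few lines. This is purely our illustration, not Dell EMC's implementation: the class and field names are hypothetical, and we are simply modelling a single namespace whose entries record which tier holds each item and where.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    tier: str       # "fast" or "capacity" (hypothetical labels)
    location: str   # tier-local address: e.g. a flash block range or an object key

class GlobalNamespace:
    """Toy model of the unifying layer: one namespace spanning both tiers."""

    def __init__(self):
        self._index: dict[str, Placement] = {}

    def put(self, name: str, tier: str, location: str) -> None:
        self._index[name] = Placement(tier, location)

    def locate(self, name: str) -> Placement:
        # Apps see one namespace; the layer resolves which tier holds the data.
        return self._index[name]

    def demote(self, name: str, object_key: str) -> None:
        # Policy-driven move: re-point the entry at an object in the capacity tier.
        self._index[name] = Placement("capacity", object_key)

ns = GlobalNamespace()
ns.put("/proj/frame-0001.dpx", "fast", "lba:0x4000-0x8000")
ns.demote("/proj/frame-0001.dpx", "bucket/proj/frame-0001.dpx")
print(ns.locate("/proj/frame-0001.dpx").tier)  # capacity
```

The point is that moving data between tiers only re-points a metadata entry; the name the application uses never changes.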

EMC believes that other approaches to logically unifying the two tiers can run out of metadata space in the hot edge, forcing slower round-trips to the capacity tier for the overflow metadata. The way to fix this is not to massively increase metadata storage space in the fast tier, but to cache metadata in it instead.

The way this would work is that a set of client servers accessing the fast tier would send requests to a shared DSSD array over RDMA, or would access a virtual flash SAN built with ScaleIO aggregating local direct-attached flash drives.

Direct aggregated or network-attached fast tier storage

Behind this is an object-storage-based capacity tier, which could be an Isilon array or an ECS scale-out commodity appliance cluster. These two tiers can grow or shrink independently.

Note that this overall scheme, minus the flash-based fast tier, is somewhat similar to Quantum's StorNext product, which is sold into the entertainment and media workflow market.

Each server would access 2 TIERS software, which presents a SAN via a POSIX API and single namespace to the server's apps, and has policy-driven tiering to send old or unwanted data to the capacity tier. It maps the apps' access to that data into objects on the capacity tier.
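The policy-driven tiering mentioned above can be illustrated with a trivial age-based rule. The 30-day threshold and the function name are our assumptions for the sketch; the deck does not spell out what policies 2 TIERS would support.

```python
import time

# Hypothetical policy: demote anything not accessed for DEMOTE_AFTER seconds.
DEMOTE_AFTER = 30 * 24 * 3600  # 30 days

def select_for_demotion(catalog: dict[str, float], now: float) -> list[str]:
    """catalog maps path -> last-access timestamp on the fast tier.

    Returns the paths whose data should be pushed down to the capacity
    tier (where it would be stored as objects) under the age policy.
    """
    return [path for path, atime in catalog.items()
            if now - atime > DEMOTE_AFTER]

now = time.time()
catalog = {
    "/scratch/new.dat": now - 3600,            # accessed an hour ago: stays hot
    "/scratch/old.dat": now - 90 * 24 * 3600,  # untouched for 90 days: demote
}
print(select_for_demotion(catalog, now))  # ['/scratch/old.dat']
```

A real implementation would presumably also weigh fast-tier free space and access frequency, not just age.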

The EMC software has the fast tier using a distributed OrangeFS (Orange File System) with a read-only, read-through translation service on a local FUSE file system. This service uses dynamically loaded namespaces (DLNs) for metadata tiering. A DLN points to a part of the global namespace, like a file system's sub-tree. Within that part are pointers to objects, like inodes in a file system directory.

We have no information on how DLNs are loaded or possibly pre-fetched.

With this general 2 TIERS scheme in mind, EMC suggests two ways to instantiate the idea, using a DSSD fast tier with Isilon or ECS capacity tiers:

Dell EMC 2 TIERS example

An alternative is to host the whole thing in AWS using Omnibond's CloudyCluster, which deploys OrangeFS in AWS.

What we have here, in general, is an approach to a post-SAN/post-NAS world, on-premises or in the public cloud. The SAN/filer is broken into two separate pieces and logically re-combined by the 2 TIERS software.


This kind of imaginative storage thinking is what we have come to expect of EMC, and we know of no equivalent in the storage development shops of its mainstream competitors. Indeed, the only comparable inventive creativity we can think of is in HPE's server division, where we see developments such as Synergy.

If Dell, infused with such EMC inventiveness, can apply this to its servers, then HPE would have cause to watch out. And if HPE could apply its Synergy creativity to its storage products, then Dell EMC would have stronger competition.

By the way, Dell EMC has registered the 2 TIERS trademark. ®
