Where is Compellent's technology going?
Primary dedupe and synchronous Live Volume, it seems
Compellent's latest controllers have CPU power to spare, and that headroom forms the basis for a series of developments intended to scale out its storage and improve its efficiency.
That's the scenario painted by people telling us they are familiar with the themes of Compellent's roadmap.
Currently a Storage Center tops out at just over 1,000 disk drives. A firmware upgrade, thought to be coming in Storage Center 5.4.2, will double the number of drives that can be attached to a SAS chain from 48 to 96, raising the theoretical maximum to around 1,500 drives. It is thought this upgrade will be delivered in late December this year or January next.
Live Volume and deduplication
The Live Volume capability uses asynchronous replication to copy data from a primary volume in one Storage Center array to a secondary volume in a second array. Only the changed data in any update to the primary volume, the delta, is copied to the second volume. The replication will become synchronous, with a distance limit of roughly 50 miles, in the next release of Storage Center. This could come in the first half of 2011 if our sources are correct.
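The delta-only approach can be illustrated with a short sketch. This is not Compellent's implementation, just a minimal Python model of the idea: compare the volume's current pages against the baseline captured at the last replication cycle and ship only the pages that changed.

```python
# Hypothetical sketch of delta-only asynchronous replication.
# Volumes are modelled as dicts mapping page IDs to page contents;
# all names here are illustrative, not Compellent's.

def compute_delta(previous_snapshot, current_state):
    """Return only the pages that changed since the last replication cycle."""
    return {
        page_id: data
        for page_id, data in current_state.items()
        if previous_snapshot.get(page_id) != data
    }

def replicate(primary, secondary, last_snapshot):
    """Ship the delta to the secondary and return the new baseline snapshot."""
    delta = compute_delta(last_snapshot, primary)
    secondary.update(delta)   # only the changed pages cross the wire
    return dict(primary)      # new baseline for the next cycle
```

The bandwidth saving comes from `compute_delta`: after the first full pass, each cycle transfers only what was modified since the previous baseline.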
Primary data deduplication could also come in the first half of next year, built with Compellent's own technology and not OEM'd from another supplier such as Permabit. This will increase the effective capacity of Compellent arrays.
What might happen is that, as a page comes into the array, its hash value is computed and stored as part of the page's metadata. When Data Progression runs, it examines the metadata and eliminates pages with duplicate hash values, replacing each duplicate with a pointer to the surviving copy.
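That post-process scheme can be sketched in a few lines. This is a speculative model of the mechanism described above, not Compellent's code: hash on ingest, then an offline sweep (analogous to a Data Progression run) that collapses duplicates into pointers. All class and method names are invented for illustration.

```python
import hashlib

class PageStore:
    """Toy model of hash-then-sweep primary deduplication."""

    def __init__(self):
        self.pages = {}      # page_id -> bytes actually stored
        self.metadata = {}   # page_id -> content hash
        self.pointers = {}   # deduped page_id -> canonical page_id

    def write_page(self, page_id, data):
        # On ingest, compute the hash and store it with the page's metadata.
        self.pages[page_id] = data
        self.metadata[page_id] = hashlib.sha256(data).hexdigest()

    def dedupe_sweep(self):
        # Offline pass: keep one copy per unique hash, replace the
        # rest with pointers and reclaim the duplicate space.
        canonical = {}
        for page_id, digest in list(self.metadata.items()):
            if digest in canonical:
                self.pointers[page_id] = canonical[digest]
                del self.pages[page_id]
            else:
                canonical[digest] = page_id

    def read_page(self, page_id):
        # Follow the pointer if this page was deduplicated.
        return self.pages[self.pointers.get(page_id, page_id)]
```

A production system would typically verify hash matches with a byte-for-byte compare before discarding a page, to guard against hash collisions; that step is omitted here for brevity.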
Combining tiering and Live Volume
The arrays use Data Progression to automatically allocate hot blocks of data to faster tiers of storage, with data moving up from bulk SATA drives through 10K and then 15K disk drive tiers and ultimately into solid state storage. It will be possible, it is thought, to combine Live Volume and Data Progression in such a way that the different tiers in a volume can be stored in different Storage Centers. One Storage Center could focus on bulk data storage while a second holds more active data on, say, SSD and 10K SAS drives.
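The promote-hot, demote-cold policy behind such tiering can be sketched as follows. This is a hedged illustration in the spirit of Data Progression, not its actual algorithm: the thresholds, tier names and function are all invented for the example.

```python
# Illustrative tiering pass: promote frequently accessed pages toward
# faster tiers, demote cold pages toward bulk storage, one tier at a time.
# Thresholds and tier labels are assumptions, not Compellent's values.

TIERS = ["SSD", "15K", "10K", "SATA"]  # fastest to slowest

def retier(page_tiers, access_counts, hot_threshold=100, cold_threshold=5):
    """Return a new page -> tier mapping after one progression pass."""
    result = {}
    for page, tier in page_tiers.items():
        idx = TIERS.index(tier)
        hits = access_counts.get(page, 0)
        if hits >= hot_threshold and idx > 0:
            idx -= 1  # hot page: promote one tier up
        elif hits <= cold_threshold and idx < len(TIERS) - 1:
            idx += 1  # cold page: demote one tier down
        result[page] = TIERS[idx]
    return result
```

Moving pages one tier per pass, rather than straight to the extremes, damps oscillation when access patterns are bursty; in a split-tier Live Volume arrangement the fast tiers would live on one Storage Center and the slow tiers on another.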
We could then have an active data node and a bulk data node, with accessing servers seeing a single logical volume that is actually split between two Storage Centers. Data would move automatically between the two: promoted to a faster tier by Data Progression, and moved to the active member of a Storage Center pair by sync Live Volume. This would substantially increase the overall capacity of a Storage Center implementation.
Sync Live Volume might be deliverable in the third quarter of 2011.
It has been pointed out that sync Live Volume gives you host-based mirroring that is agnostic to the server application layer and whatever hypervisors and operating systems are found there. Sync Live Volume is thought to be equivalent to EMC's VPLEX, but without the separate VPLEX controller hardware needed on top of EMC's array controllers, which would make it much less expensive than VPLEX.
On this basis a combination of sync Live Volume and Data Progression would be in advance of what VPLEX can do.
Compellent has an active:active dual-controller architecture; it does not have a multi-controller architecture, known as N+1 controllers, of the kind found in high-end storage arrays from EMC, HP, HDS and IBM. It is unclear whether Compellent will develop an N+1 controller architecture or instead find a way to combine high-availability pairs of Storage Centers in some kind of cluster or federation.
What's the importance of all this? Picture a managed service provider with multiple tenants using Compellent storage. Thin provisioning, deduplication, Data Progression and Live Volume together would provide the secure, scalable bulk and high-speed storage needed; automatically place data blocks in the right tiers within and between Storage Centers; balance load across a cluster or federation of the arrays; and support business continuity and disaster recovery arrangements to a distant set of Storage Center arrays.
All of these developments would certainly consume a lot of controller CPU cycles, and would strengthen the appeal of Compellent's storage to the medium-sized and entry-level large enterprises that appear to be the company's heartland. ®