EMC delivers sub-volume tiering to mid-range
CLARiiON and Celerra get FAST 2
CLARiiON and Celerra arrays can now automatically move chunks of data inside logical volumes from slow to fast access storage tiers and back again.
FAST (Fully-Automated Storage Tiering) currently moves entire logical volumes (LUNs) from, say, high-capacity but slow SATA disk drives to faster drives when the LUN has a high access rate. However, not all data in a LUN is uniformly needed at the same rate. It makes sense to promote only the high-demand chunks of the LUN to faster, and more expensive, storage tiers such as solid state drives (SSDs).
This is what FAST 2 does. It has policy options such as "auto-tiering", which moves data chunks on the basis of their I/O pattern, "highest tier" to allocate high-access data to SSD, and "lowest tier" to dump low-demand data into the SATA pool.
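The chunk-promotion idea behind such policies can be sketched in a few lines. This is purely illustrative, not EMC's actual FAST 2 algorithm: the tier names, I/O thresholds and chunk identifiers are invented for the example.

```python
# Toy sub-LUN tiering pass: place each chunk on a tier according to
# its recent I/O rate. Thresholds and tier names are hypothetical.

TIERS = ["ssd", "fc", "sata"]  # fastest to slowest

def retier(chunk_iops, hot_iops=100, cold_iops=5):
    """Map chunk IDs to tiers based on recent I/Os per second."""
    placement = {}
    for chunk_id, iops in chunk_iops.items():
        if iops >= hot_iops:
            placement[chunk_id] = "ssd"   # promote high-demand chunks
        elif iops <= cold_iops:
            placement[chunk_id] = "sata"  # demote idle chunks
        else:
            placement[chunk_id] = "fc"    # leave warm data mid-tier
    return placement

print(retier({"c1": 500, "c2": 2, "c3": 40}))
# {'c1': 'ssd', 'c2': 'sata', 'c3': 'fc'}
```

The point is that only the hot chunks of a LUN consume expensive SSD capacity, while the rest of the same LUN sits on cheaper disk.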
Finer granularity is better for such tiering and the gold standard here is Compellent's block-level tiering called Data Progression. IBM and 3PAR have both recently introduced their own sub-LUN automated data movement in their arrays, making it pretty much a standard feature of enterprise drive arrays.
FAST 2 is integrated with EMC's Unisphere management software for CLARiiON and Celerra, which handles its scheduling and configuration. Unisphere can be used directly or through VMware. It has been refreshed too, with some reports being generated up to 18 times faster than before.
With FAST 2 data can be automatically compressed by the drive array controller at block-level, and this would typically be done with inactive data on SATA drives, releasing disk capacity for other use.
Another new feature is FAST Cache for CX4 CLARiiONs and Celerra NS arrays, which uses their SSD storage tier to cache read and write data during what EMC calls unpredicted spikes in application workload. This indicates that FAST 2 may not respond quickly enough to a sudden increase in access rate for a particular piece of data, in which case FAST Cache overrides the auto-tiering policy in FAST 2, scooping up in-demand data and caching it.
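The override behaviour described above can be sketched as a simple spike detector. Again this is a hedged illustration, not EMC's implementation: the time window, spike threshold and class name are all invented for the example.

```python
# Toy spike-triggered SSD caching: if a chunk is accessed often enough
# within a short window, cache it regardless of its tier placement.
# Window and threshold values are hypothetical.
from collections import deque

class SpikeCache:
    def __init__(self, window=10, spike=5):
        self.window = window   # seconds of history to keep
        self.spike = spike     # accesses within window that trigger caching
        self.recent = {}       # chunk_id -> deque of access timestamps
        self.cached = set()    # chunk_ids currently held in the SSD cache

    def access(self, chunk_id, now):
        """Record an access; return True if the chunk is SSD-cached."""
        hits = self.recent.setdefault(chunk_id, deque())
        hits.append(now)
        while hits and now - hits[0] > self.window:
            hits.popleft()     # drop accesses outside the window
        if len(hits) >= self.spike:
            self.cached.add(chunk_id)  # scoop up in-demand data
        return chunk_id in self.cached

cache = SpikeCache()
for t in range(6):
    hot = cache.access("c1", t)
print(hot)  # True: six accesses inside the window trip the threshold
```

A cache layer like this reacts within seconds, whereas a tiering policy that migrates whole chunks between drive pools necessarily works on a much slower cycle.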
This is comparable to controller SSD caching as offered by NetApp with its PAM (Performance Acceleration Module). The only effective difference is the location of the SSD cache: in the controller with NetApp, and in the array with EMC.
FAST 2, FAST Cache and the new Unisphere will arrive in the third quarter of the year. ®
How many intros?
Now let's get this straight. EMC just pre-announced a FAST2 product that they already pre-announced over a year ago in April 2009.
How many pre-announcements get covered as news these days? Next we'll get an "introduction" announcement. Then a "delivery" announcement. This is media manipulation at its best.
not here yet
How can one say "EMC delivers" when the last sentence of the article says it'll be available in Q3? Last I checked it's not Q3.
Everyone has known for some time now that the next gen FAST was coming this year.
My question would be what size blocks EMC is working with. I suppose Chuck might be able to answer; I expect him to blog on it and I can ask him. 3PAR is using 128MB, IBM is using 1GB, and I've never heard what size Compellent uses, so I would be curious.
IBM's Easy Tier technology didn't pan out well in their SPC-1 tests. I suspect other automatic storage tiering technologies will be similar: the array won't be able to react fast enough to take full advantage of the faster tier. I think in the longer term the only solution is a full write-through cache layer of SSD.
I wrote about it here:
Automagic storage tiering is supposed to help reduce costs, but somehow IBM managed to come up with a solution that costs 19 times more per SPC-1 IOP with storage tiering on vs traditional 15k RPM drives without SSD.