NetApp says tiering is dying

The FAST and the dead

Comment NetApp CEO Tom Georgens thinks 3PAR, Dell, EMC, Compellent and others are wrong - automated tiering of data across different levels of drives is a dying concept.

The tiering idea is that data should sit on the right type of drive at each point in its life cycle. Frequently accessed data should be on solid state drives (SSDs) or 15,000rpm Fibre Channel drives; intermediate-access data on middling-speed disks, say 10K SAS; and rarely accessed data on 7,200rpm or 5,400rpm SATA bulk-capacity drives.
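For illustration only, here is a minimal Python sketch of that placement policy: blocks are promoted or demoted to a tier according to how often they are accessed. The tier names, activity thresholds and helper functions are hypothetical, not any vendor's actual algorithm.

```python
# Illustrative sketch of access-frequency tiering; thresholds are made up.
from dataclasses import dataclass, field

# Tiers ordered fast-to-slow; thresholds are accesses per day (hypothetical).
TIERS = [
    ("SSD",        1000),   # hot data
    ("15K FC",      100),   # warm data
    ("10K SAS",      10),   # intermediate data
    ("7.2K SATA",     0),   # cold, bulk-capacity data
]

@dataclass
class Block:
    block_id: int
    accesses_per_day: float
    tier: str = field(default="7.2K SATA")

def place(block: Block) -> str:
    """Assign a block to the fastest tier whose activity threshold it meets."""
    for tier_name, threshold in TIERS:
        if block.accesses_per_day >= threshold:
            return tier_name
    return TIERS[-1][0]

def rebalance(blocks: list[Block]) -> None:
    """Periodic sweep: promote or demote each block as its activity changes."""
    for b in blocks:
        b.tier = place(b)

if __name__ == "__main__":
    blocks = [Block(1, 5000), Block(2, 250), Block(3, 30), Block(4, 0.5)]
    rebalance(blocks)
    for b in blocks:
        print(f"block {b.block_id}: {b.accesses_per_day}/day -> {b.tier}")
```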

This type of thing was pioneered by Compellent with its automated progression or placement of blocks of data according to their activity level. EMC is implementing FAST (Fully-Automated Storage Tiering) on its Symmetrix, CLARiiON and Celerra products, with phase one being LUN (Logical UNit) migration and the coming phase two being more granular sub-LUN migration.

Georgens said on his Q3 FY2010 earnings call: "FAST is kind of an umbrella name for a bunch of point technologies that are different on every platform. But first and foremost whatever NetApp does, it's going to [be] consistent across all of the SAN and NAS, high end and low end."

He is dismissive of multi-level tiering, saying: "The simple fact of the matter is, tiering is a way to manage migration of data between Fibre Channel-based systems and serial ATA based systems."

He goes further: "Frankly I think the entire concept of tiering is dying."

Fast-access data will come to be stored in flash, with the rest on SATA drives: "With the advent of Flash, and we talked about our performance acceleration module (PAM), basically these systems are going to go to large amount[s] of Flash which are going to be dynamic with serial ATA behind them and the whole concept of having tiered storage is going to go away."

NetApp's PAM is a controller-based cache, not a flash replacement for a hard drive. The prospect raised here is that NetApp arrays will simplify, having just SSD and SATA drives with nothing in between. The flash will be like a huge cache.
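To make the contrast concrete, here is a minimal Python sketch, on the assumption that the flash layer behaves purely as a read cache in front of SATA rather than as a tier data is migrated onto. The FlashCache class, its sizes and the backing-store stand-in are hypothetical illustrations, not NetApp's PAM design.

```python
# Illustrative sketch: a large flash layer acting as an LRU read cache over SATA.
from collections import OrderedDict

class FlashCache:
    """A tiny LRU read cache standing in for a controller-based flash layer."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # block_id -> data, most recent last

    def read(self, block_id: int, sata_read) -> bytes:
        if block_id in self.cache:          # cache hit: served from flash
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = sata_read(block_id)          # cache miss: fetch from SATA
        self.cache[block_id] = data
        if len(self.cache) > self.capacity: # evict the least recently used block
            self.cache.popitem(last=False)
        return data

if __name__ == "__main__":
    backing = {i: f"data-{i}".encode() for i in range(10)}  # stand-in for SATA drives
    cache = FlashCache(capacity_blocks=3)
    for block_id in [1, 2, 3, 1, 4, 1]:
        cache.read(block_id, backing.__getitem__)
    print("blocks held in flash:", list(cache.cache))       # [3, 4, 1]
```

The difference from the tiering sketch above is that nothing is permanently moved: hot blocks simply sit in the flash cache while they are hot and fall out of it when they are not.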

Georgens' comments could be taken as a hint that an intermediate stage for NetApp will be to provide some form of automated data movement in Data ONTAP, possibly later this year or early next. With an SSD-SATA array as the destination, there will need to be ways to move data from SATA to SSD and back again, so such code wouldn't be wasted.

NetApp under Georgens' reign is striking out on an individualistic course. It's more or less withdrawn from the deduplicated virtual tape library (VTL) product space. Now it's saying that anything beyond automated data movement between SSD and SATA is a short-term fix to a dying problem.

There is also just a scent of NetApp not being as excited about scale-out NAS as IBM is with its SONAS product. Unlike in previous earnings calls, NetApp did not mention Data ONTAP 8 and its clustering. Maybe the Georgens strategy is to concentrate on supplying mainstream low-end, mid-range and high-end unified storage arrays, with block and file access protocols, while ignoring niche areas like VTL, extreme scale-out NAS and fancy tiering.

NetApp wants to do the basics very, very well, ride the virtualisation wave and milk the mainstream storage market for all it's worth. And with two storming quarters under his belt, the Georgens way looks pretty good right now. ®
