NetApp's 50 per cent guarantee
Not against named competitors though
NetApp will guarantee customers they will use half as much NetApp storage in virtual server and desktop environments compared to 'traditional storage', but won't make comparisons to specific products from competing suppliers.
NetApp's chief marketing officer, Jay Kidd, said: "The pressure for cost reduction has led to the rapid adoption of storage-efficiency technologies such as thin provisioning, deduplication, RAID-DP, and Snapshot ... Our customers are consistently realising unprecedented space savings greater than 70 per cent with these technologies in their virtual infrastructures."
General storage utilisation without de-duplication, thin provisioning and other techniques is said to be 25-40 per cent, and can be worse in virtual server environments where multiple virtual machine (VM) images store the same data over and over again. NetApp reckons its customers can save at least 50 per cent of the storage capacity used in such systems by following its best practices and using standard NetApp features such as:
- Thin Provisioning — Aggregating unused capacity across storage volumes and sharing it dynamically across all applications as needs change.
- De-duplication — Eliminating redundant copies of data on primary, archive, and backup data.
- RAID-DP — Safeguarding data from double disk failure. NetApp claims it provides better protection and performance levels than RAID 10 without the high-capacity overhead.
- Snapshot — Recovering data from a point-in-time copy and protecting data with no performance impact and minimal consumption of storage space.
How it works
The customer's new NetApp FAS system - other NetApp storage is excluded - purchased for primary storage only, will be compared to a theoretical baseline system rather than to actual products from specific suppliers. NetApp says the baseline system size will be:
Determined from the amount of data to be stored and the amount of storage overhead that a system of similar protection and performance levels typically requires. For example, suppose that you need a system to accommodate 10TB of data. Here’s how we calculate the baseline:
- Add on 100 per cent overhead for RAID 10 protection; 2.6 per cent overhead for rightsizing and formatting; and two spare drives.
- Total raw capacity required for 10TB of data on a traditional storage system is roughly 21.75TB.
- 50 per cent less storage means that the customer will need to purchase only 10.75TB of raw space with NetApp.
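For the arithmetic-minded, here is a rough reconstruction of that baseline sum. The per-spare-drive size is our assumption to make the totals line up; NetApp only quotes the rounded totals.

```python
# Rough reconstruction of NetApp's "traditional storage" baseline for 10TB of data.
# The 0.6TB-per-spare-drive figure is an assumption; NetApp itself quotes only
# ~21.75TB raw for the baseline and 10.75TB with NetApp.
data_tb = 10.0
raid10_mirror = data_tb * 1.00                      # 100% overhead for RAID 10
rightsizing = (data_tb + raid10_mirror) * 0.026     # 2.6% rightsizing/formatting
spares_tb = 2 * 0.6                                 # two spare drives (assumed size)

baseline_raw_tb = data_tb + raid10_mirror + rightsizing + spares_tb
netapp_raw_tb = baseline_raw_tb / 2                 # the "50 per cent less" promise

print(f"Baseline raw: {baseline_raw_tb:.2f}TB")     # ~21.7TB
print(f"NetApp raw:   {netapp_raw_tb:.2f}TB")       # ~10.9TB (NetApp rounds to 10.75TB)
```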
Customers anywhere in the world who purchase a new system for their virtual desktop and server environment can take advantage of the NetApp guarantee. Specifically, if customers don't use 50 per cent less storage after following best practices as vetted by NetApp Professional Services, NetApp will provide the additional capacity as needed to meet the shortfall at no additional charge, up to 50 per cent of the original capacity purchased.
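Reading the small print, the remedy is a capped shortfall payment in capacity rather than cash. A minimal sketch of that logic, on our reading of the terms - the function and the figures below are illustrative, not NetApp's own formula:

```python
def free_capacity_owed_tb(purchased_tb, actually_needed_tb, baseline_tb):
    """Our reading of the guarantee terms, not NetApp's published formula.

    If the deployment ends up needing more than half the theoretical
    baseline, NetApp supplies the shortfall free of charge, capped at
    50 per cent of the capacity originally purchased.
    """
    guaranteed_tb = baseline_tb * 0.5
    shortfall_tb = max(0.0, actually_needed_tb - guaranteed_tb)
    return min(shortfall_tb, purchased_tb * 0.5)

# Illustrative figures only: 10.75TB bought against a 21.75TB baseline,
# but the virtual environment actually ends up needing 13TB.
print(free_capacity_owed_tb(10.75, 13.0, 21.75))    # 2.125TB provided free
```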
Pillar Data offers an 80 per cent capacity utilisation guarantee, but no other storage suppliers offer guarantees of capacity utilisation or lower capacity needs. We can expect competing suppliers such as 3PAR, EMC, HP, Hitachi Data Systems and others to rapidly produce their own capacity-saving calculations against similar theoretical baseline systems and try to trump NetApp's ace. NetApp does have an advantage in that its ASIS de-duplication works on primary storage; other suppliers prefer de-duplication to be used with disk-held backup and archive data.
The NetApp programme starts now and will finish at the end of March 2009. Access an efficiency calculator and more details here. ®
> Anyone who has had questions or qualms about it had 45 days to respond and didn't... In fact they still haven't responded.
The only people who would have had an interest in responding to this were EMC, and, just to set the record straight, only SPC members can challenge SPC results. EMC is not a member of the SPC.
The point here is that there are so many veiled preconditions attached to this offer that the 50% guarantee headline is extremely misleading.
* New FAS systems must be purchased for primary storage only. V-Series, S line, and VTL are excluded.
* The program is not applicable to N series from IBM.
* Can be using any one or more of the following protocols: FC, iSCSI, and NFS.
* Must be running Data ONTAP® 7.3 or later. Data ONTAP 10 is excluded.
* Capacity on the system supporting the virtual environment must be at least 14 drives.
* Must agree to have the following features enabled:
  o Thin provisioning without LUN reservation
  o NetApp Snapshot
* Must follow the NetApp best practices described in the following technical reports:
  o TR 3428: NetApp and VMware VI3 Storage Best Practices
  o TR 3505: Deduplication Implementation and Best Practices
  o Whitepaper: 50% Virtualization Guarantee Program Technical Guide
* Must purchase a minimum level of Professional Services deployment and implementation services to help with the implementation:
  o NetApp Installation and Deployment
  o NetApp VMware Implementation Service
* No more than 10% of the data under the Program can consist of the following data types: images and graphics, XML, database data, Exchange data, and encrypted data. This also means that large database and Exchange deployments are excluded from this Program. These data types are deduplicated at a lower rate.
* Must have at least 10 similar virtual machines per flexible volume, so that deduplication can work properly to realize the capacity savings.
* Excludes workloads with high performance requirements that require spindles; to be determined by SE/PS during sizing.
Researching NetApp - they seem to recommend 8 (data) + 2 (parity) drives for their RAID-DP model, and recommend no more than 60% space utilization before you risk degrading performance. (Once you deep-dive into how RAID-DP actually works, this makes sense.)
Immediately one sees: available storage = 0.8 x 0.6 x raw storage = 0.48 x raw storage. This is worse than RAID 1+0.
And then there's dedup. NetApp does post-write deduplication, which means you need to write data to disk first *before* cleaning up the replicated data. So you need to allow around a .75 multiplier for deduplication working space, and if you use snapshots there is some overhead on top, so multiply by .8 again and you are at .48 x .75 x .8 ≈ .29 utilization of raw storage. Call it .25 after allowing for metadata, etc. (Yes, I know you don't really need to do snapshots, but then why pay for them?)
Going back to their magic 10.75TB value, you get 2.7TB usable. Getting the next 50% free puts you to 4.0TB usable before you start paying again (assuming you are willing to face your cheesed off financial controller, cap in hand for more storage budget.)
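For anyone who wants to check my working, here is the whole chain in one place - every multiplier is my assumption from above, not a vendor-published figure:

```python
# Back-of-envelope only: every factor below is an assumption, not a measured figure.
raid_dp_usable = 8 / 10        # 8 data + 2 parity drives per RAID-DP group
max_fill = 0.60                # stay under ~60% full to avoid degrading performance
dedup_headroom = 0.75          # room to land data before post-write dedup runs
snapshot_headroom = 0.80       # reserve for snapshots

utilization = raid_dp_usable * max_fill * dedup_headroom * snapshot_headroom
print(f"Effective utilization: {utilization:.2f}")           # ~0.29; call it 0.25 with metadata

purchased_tb = 10.75
print(f"Usable:        {purchased_tb * 0.25:.1f}TB")         # ~2.7TB
print(f"With free 50%: {purchased_tb * 1.5 * 0.25:.1f}TB")   # ~4.0TB before paying again
```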
NetApp uses the same math as the American banking industry.
Just like their triple disk protection argument: "we know we can't do proper RAID 10, so let's just make it sound like doing a software mirror of RAID 4 is a really good idea" - genius! They are the masters of using spin to turn competitive weaknesses into advantages.
If you look at most vendors' TCO studies, you can soon spot the flaws when you see the configurations used for their "independent" comparisons, done by the supposedly independent analyst (read: stooge). The NetApp one is a classic example, in that they always use RAID 6/DP versus RAID 1, use space-efficient snapshots, and assume the other vendors use full copies (when most also have space-efficient snapshots).
They do have some great software and features, but I lose respect for them as a company when they try this kind of BS.