Avere menaces NetApp with accelerators
Promises satisfaction without short-stroking
Avere says it is capturing business that would have gone to NetApp because its clustered accelerators cost less than NetApp storage upgrades, and run faster.
The FXT products from Avere are called optimisers by the company, and they sit in front of filers. They use tiers of storage, including DRAM, NV-RAM, flash and 10K SAS disk drives. The idea is to cache filer I/O in the appropriate storage tier so accessing servers get a much faster I/O completion than if the I/Os were being handled by the actual filers.
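The tier-and-cache idea can be sketched in a few lines. This is purely illustrative, not Avere's actual algorithm: the tier names follow the article, and the miss-handling here is a plain "promote to DRAM" assumption.

```python
# Illustrative sketch of a tiered read cache sitting in front of a filer:
# serve a block from the fastest tier that holds it, and only fall back
# to the back-end filer on a miss. Tier names mirror the article.

TIERS = ["DRAM", "NVRAM", "flash", "SAS"]  # fastest to slowest

class TieredCache:
    def __init__(self):
        # one dict per tier, mapping block id -> data
        self.tiers = {name: {} for name in TIERS}

    def read(self, block_id, filer_read):
        # check each tier in speed order
        for name in TIERS:
            if block_id in self.tiers[name]:
                return self.tiers[name][block_id]
        # miss: fetch from the back-end filer and promote into DRAM,
        # so repeat reads never touch the filer again
        data = filer_read(block_id)
        self.tiers["DRAM"][block_id] = data
        return data
```

The point of the pattern is visible in the miss path: once a block is promoted, subsequent reads complete from cache and the filer's CPUs and spindles are left alone, which is exactly the load-shedding effect Avere claims.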
Up to six FXT nodes can be clustered together and any NFS-using filers can be accelerated by Avere. It is focusing on NetApp and EMC because they are the big fish in its sea. El Reg was briefed by Avere CEO Ron Bianchini on a couple of recent wins against NetApp.
The first was ION Geophysical, which runs an SRME (Surface Related Multiple Elimination) application that removes unwanted energy traces from seismic data records. It's called a demultiple tool and is intended to clean up the seismic records by getting rid of extraneous inputs deriving from the air-water interface involved in much seismic recording activity.
ION had a pair of NetApp FAS6080s, fitted with Flash Cache, and these fed data to the thousands of processors and co-processing elements involved, with NetApp delivering 2GB/sec from the arrays. It wasn't enough, but the FAS6080s had all the spindles they could handle: their controller CPUs were running at 100 per cent utilisation.
Bianchini said: "Effectively they were short-stroking drives." Avere slides show ION suffering latencies of up to 40ms from the filers, which were delivering five kilo operations a second (kops/s).
ION put a 6-node FXT 2700 cluster in place in front of the 6080s. This included 3TB of flash, 500GB per node, which was globally shared.
The operations rate went up to 250 kops/s and throughput rose to 5GB/sec, with latency falling to less than 1ms. The load on the filers dropped by 90 per cent with their CPU utilisation falling to 10 per cent. Bianchini said that after the FXT cluster was installed, ION only needed a third of the total capacity of the FAS6080s.
We don't know what the actual cost-savings of buying the FXT set-up versus buying more FAS6080s were for ION, but we do have cost comparisons from the SPEC organisation for the sfs2008 benchmark. A FAS6080 delivering 120,011 ops/sec cost $1,351,000 and used Fibre Channel disk drives to provide 14TB of usable capacity. A 6-node Avere FXT 2500 cluster delivered 131,591 ops/sec, with 15.3TB of SATA disk storage behind it, and cost $445,000. It took up 16 rack units, with the 6080 needing 84.
SPEC gives us an EMC comparison too. A Celerra NS-G8 gateway sitting in front of a 12.9TB usable storage VMAX delivered 110,621 ops/sec and cost $8,435,000. Ouch, and it needed 95 rack units too.
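To make the price/performance gap concrete, here's the per-op arithmetic on the SPEC sfs2008 figures quoted above (prices and ops/sec as reported; the dollars-per-op metric is our own division, not a SPEC-published number):

```python
# Cost per op/sec for each SPEC sfs2008 configuration quoted in the article
configs = {
    "NetApp FAS6080":     (1_351_000, 120_011),
    "Avere FXT 2500 x6":  (445_000, 131_591),
    "EMC NS-G8 + VMAX":   (8_435_000, 110_621),
}

for name, (price_usd, ops_per_sec) in configs.items():
    print(f"{name}: ${price_usd / ops_per_sec:.2f} per op/sec")
```

That works out to roughly $11.26 per op/sec for the FAS6080, $3.38 for the Avere cluster, and $76.25 for the EMC gateway-plus-VMAX combination.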
The school run
Avere told us about a Massachusetts school district that, for some reason, was using virtual desktop infrastructure (VDI). Students used laptop computers, but these were fixed in each classroom. When a lesson finished, every 50 minutes, the students would log off their laptop, move to the next classroom, and boot the laptop they were using in that room. They booted from Cisco UCS servers, which serviced the 400-500 Windows clients from a FAS2000-class array.
I know: guaranteed boot-storm city every 50 minutes. The boot storm could last for up to 15 minutes. It was insanity, but the IT guys wanted out from Microsoft maintenance madness on all those laptops, and they wanted a single, consistent laptop environment, which they couldn't guarantee if students provided their own machines. Hence the VDI route.
The escape route using NetApp storage would have been to buy a FAS6000-class array, according to Bianchini. Instead the school district installed a 2-node FXT cluster with parallel access to a golden VDI master held in flash. The boot storm still took place every 50 minutes, but it lasted only two minutes instead of 15. Everything from an IT operations perspective, such as backup, was unchanged. The FXTs were slotted into the data path between the UCS servers and the FAS2000 array and just did their accelerating schtick.
Basically, Bianchini is saying, Avere FXT accelerators can be used to avoid NetApp or EMC upgrades costing thousands of dollars more, tens of thousands of dollars more, and even, in the extreme, hundreds of thousands of dollars more.
He says his technology can accelerate access to filers in the cloud too. You can get rid of filers in your own data centre, putting the data up in the cloud, and rely on FXTs to give you pretty much the same access speed as before because of their caching. One thing they don't do is WAN optimisation and compression, as Riverbed's Whitewater appliance does. So the two products complement each other.
Bianchini says Avere FXTs have been used in front of BlueArc filers too. But, we said, BlueArc's kit is hardware-accelerated and goes fast. "No it doesn't," Bianchini replied: "It's one box and eventually the box runs out of gas... We sit in front of BlueArc, Isilon, whatever." ®
Pressure on Big Iron?
With massive performance at a low $/IOPS, these accelerator appliances will have a large impact on both the block and file/NAS markets.
Mainly, we'll see a downsizing of array HDD requirements by as much as 5x on spindle count, with a ripple-through effect on the array controllers, racks, frames, etc. This is a revenue risk for the Big 3, but it may also remove the mystique of needing Big Iron at all, and open up a wave of opportunity for Dell, Huawei and others to sell both the accelerator appliances and the storage.
When we add in OpenStack and its Linux-like implications for being hostable on x86 COTS boxes, the object storage space is also vulnerable to an accelerator approach, though I suspect this will be on the drive, rather than user, side of the array appliance.
Life in storage is about to get interesting!