
IO Turbine claims to turbo-charge storage I/O

Hot product or hot air?


Comment IO Turbine says its Accelio software accelerates virtualised server-storage I/O better than any other flash-cache product because it is implemented as a hypervisor plug-in: sitting close to the seat of ESX power, it gets its work done faster.

The Accelio software consists of a hypervisor-resident component and a virtual machine agent. We were interested in understanding how the I/O improvement actually happens, compared to other flash-based products intended for the same purpose, and how Accelio does it better.

Firstly, a number of caching products are emerging – LSI CacheCade and Gridiron are just two recent ones. How does IO Turbine differ from these?

According to a statement IO Turbine sent to us: "Gridiron is iron. We are 100 per cent software. Gridiron is a SAN accelerator of which there are several in this category and several more in the NAS accelerator space. All of these suffer from a common limitation; they are downstream from the host and have no visibility into the source of the data they are caching. Put another way, they only see the application's I/O requests after they've been effectively filtered by the hypervising software.

"Accelio learns what is most important to cache by being right in the guest virtual machine making the requests, and can redirect those requests so they never get out of the host. Any I/O that can be contained within the host will naturally be much faster than those that have to be put out on the SAN. Furthermore, Accelio can take advantage of host-installed PCI devices that can provide I/O responses at near memory speeds where SAN/NAS caches can, at best, operate over 4, 8, or 10Gbit/s network structures. So I/O requests that Accelio can keep local can be satisfied in a handful of microseconds where anything to the SAN/NAS must take hundreds or even thousands of microseconds simply by the fact they get out on the network.

"Comparing the LSI CacheCade device, the most significant difference is the lack of guest-aware data patterns. The supported VMware driver for CacheCade will accelerate I/O at the ESX layer but has no knowledge of the source of the I/O requests as Accelio does. In addition, deployment of CacheCade at the ESX layer and the subsequent deployment of guest disk storage will break the ability for that particular guest to be able to vMotion in a VMware cluster.

"So while CacheCade might provide many similar caching benefits, most of these will be difficult to realise in a hypervised environment. SAN/NAS caches automatically suffer from the increased latency imposed by the I/O request leaving the host and having to be routed through the network to be satisfied."

Hypervisor plug-in

Accelio is implemented as a software plug-in to the hypervisor. What are the benefits of this? Does it mean more VMs can be supported? Is latency reduced? How does this hypervisor plug-in approach make Accelio better?

IO Turbine's statement said: "In general, one can clearly observe that Accelio software will free up system resources on the host by allowing I/Os to be more easily completed in less time. So applications will run faster, I/O loads on primary storage will be reduced, both of which combine to free up resources and conceivably allow a greater number of VMs to be hosted.

"Accelio has a component that runs in the guest O/S and a component that runs in the hypervisor."

The claimed benefits due to the guest O/S component are:

  • We act on the data at the source, ie, near the application rather than at the storage subsystem level where it is already mixed with I/O from other VMs/applications and is far more difficult to efficiently identify for redirection to an SSD/Flash.
  • Accelio is unique in that it has highly optimised/efficient algorithms to "distill" an application's most frequently read or "hot" data set.
  • Once Accelio has identified the hot data set, it then transparently redirects appropriate I/O to SSD/Flash.
  • This proximity to the application greatly reduces latency (no traversing the network to get to data on NAS or SAN). [It] reduces latency from milliseconds (spinning disk) to microseconds (SSD/Flash).
  • Increases IOPS from hundreds (disk) to hundreds of thousands (SSD/Flash).
  • Accelio in the Guest O/S provides file and application level knowledge to enable fine-grained control and tuning on a per VM basis, at the file, volume or disk level.
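
IO Turbine does not disclose its "hot data" algorithms, but the behaviour described above – watch read traffic in the guest, distil out the frequently read blocks, and redirect those reads to local flash – can be illustrated with a minimal sketch. The class name, the simple read-frequency heuristic, and the promotion threshold are all assumptions for illustration, not the vendor's actual design:

```python
from collections import Counter

# Hypothetical sketch of guest-side hot-data distillation. Assumes a
# naive read-frequency heuristic; Accelio's real algorithms are
# proprietary and doubtless far more sophisticated.

class HotSetCache:
    def __init__(self, hot_threshold=3):
        self.read_counts = Counter()   # reads observed per block address
        self.flash = {}                # block -> data held on local flash
        self.hot_threshold = hot_threshold

    def read(self, block, backing_store):
        # Serve from local flash if the block is already cached:
        # the I/O never leaves the host (microseconds).
        if block in self.flash:
            return self.flash[block]
        # Otherwise the request goes out to the SAN/NAS (milliseconds).
        data = backing_store[block]
        self.read_counts[block] += 1
        # A block read often enough joins the "hot" set on flash.
        if self.read_counts[block] >= self.hot_threshold:
            self.flash[block] = data
        return data
```

The point of doing this in the guest, per IO Turbine, is that the counter sees the application's own request stream rather than the blended, hypervisor-filtered stream a downstream SAN cache would see.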

IO Turbine also said there was a particular benefit accruing from the hypervisor component:

  • Accelio's hypervisor component allows thin provisioning of cache across VMs and enables support of vMotion.

VMware vMotion

With reference to Accelio supporting vMotion, we assumed this meant vMotion within the physical server hosting the VMs rather than vMotion between physical servers, since Accelio uses cache inside a physical server. We asked IO Turbine to spell out the reasons why competing caching technologies don't or can't fully support vMotion.

The company's response was this: "Accelio does not hinder vMotion and does support vMotion between physical servers. This is done through careful adherence to the shared storage requirements imposed by VMware that allows for vMotion. VMs are not aware that their data is being stored on a local storage device; that is all hidden by Accelio software. Any storage solution that allocates storage for a VM onto a locally attached storage device will disable vMotion for that VM. Accelio is unique in its ability to cache data for a guest VM locally and not cause vMotion to be disabled."
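
The logic here rests on VMware's rule that a VM's storage must live on a datastore shared by source and destination hosts. A cache can satisfy that rule if every write lands on shared storage, leaving the local flash copy disposable. The sketch below assumes Accelio behaves as a write-through read accelerator – a plausible reading of the statement, not something IO Turbine spells out:

```python
# Illustrative write-through cache: shared storage stays authoritative,
# so abandoning the local flash copy (as a vMotion would) loses nothing.
# Class and method names are invented for this example.

class WriteThroughCache:
    def __init__(self, shared_storage):
        self.shared = shared_storage   # SAN/NAS datastore visible to all hosts
        self.local = {}                # this host's flash cache

    def write(self, block, data):
        self.shared[block] = data      # every write reaches shared storage...
        self.local[block] = data       # ...so the flash copy is only ever a cache

    def read(self, block):
        # Prefer the local flash copy; fall back to shared storage.
        return self.local.get(block, self.shared.get(block))

    def migrate(self):
        # On vMotion the VM starts cold on the destination host; no data is
        # stranded because shared storage already holds everything.
        self.local.clear()
```

By contrast, a product that allocates a VM's actual storage (not just a cache) on a local device breaks the shared-storage requirement, which is the failure mode IO Turbine attributes to competitors.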

These answers indicate that Accelio software should enable a virtualised server to make more effective use of flash. Bear in mind that, unless you buy the flash equivalent of a JBOD, the flash will come with its own controller, which may duplicate some of the Accelio software's functions. As ever, your mileage may vary and pilot tests are probably worthwhile. ®
