Original URL: https://www.theregister.com/2013/10/16/pernixdata_better_thanvmwares_own_flash_cache/

PernixData shows off write-caching, clustering one-trick FVP pony

Can VMware kit do this? No. Well, not for now, anyway...

By Chris Mellor

Posted in Virtualization, 16th October 2013 14:04 GMT

The CEO of server-side flash virtualisation biz PernixData says its FVP flash caching software for the VMware hypervisor is better than VMware's own Flash Read Cache because it does write caching and clustering, two things VMware's gear lacks.

VMware's Flash Read Cache was announced in August as virtual storage that uses server direct-attached flash as a read cache for VMs (virtual machines) and the hypervisor. Chunks of the cache are allocated to each VM and can "teleport" from one server to another with vMotion, provided both servers have the same flash cache.

PernixData's FVP does write caching as well as read caching, and clusters different servers' flash caches into one aggregated virtual cache. Flash capacity is assigned to VMs dynamically rather than statically, and a vMotion doesn't require the source and target servers to have the same flash capacity, either.
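The contrast with fixed per-VM carve-outs is easy to illustrate. The sketch below is my own illustration under a simple pooled model, not FVP's actual code; the class, method names and capacity figures are invented. Several hosts' flash devices, of different sizes, feed one cluster-wide pool, and a VM's share grows or shrinks with demand rather than being reserved up front.

```python
# Hypothetical sketch of a cluster-wide flash pool with dynamic per-VM allocation.
# Not PernixData's implementation; names and numbers are illustrative only.

class ClusterFlashPool:
    def __init__(self, host_flash_gb):
        # host_flash_gb: {"esx01": 400, "esx02": 800} - hosts may have different flash sizes
        self.capacity = sum(host_flash_gb.values())
        self.allocated = {}               # vm_name -> GB currently assigned

    def free_gb(self):
        return self.capacity - sum(self.allocated.values())

    def demand_changed(self, vm, wanted_gb):
        # Grow or shrink a VM's share as its working set changes; whatever the VM
        # already holds goes back into the pot before the new grant is worked out.
        grant = min(wanted_gb, self.free_gb() + self.allocated.get(vm, 0))
        self.allocated[vm] = grant
        return grant

pool = ClusterFlashPool({"esx01": 400, "esx02": 800})
pool.demand_changed("sql-vm", 300)        # a hot VM takes what it needs: 300 GB
pool.demand_changed("web-vm", 1000)       # capped at the 900 GB left in the pool
```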

Poojan Kumar, PernixData's co-founder and CEO, says write caching isn't really worth doing unless you can protect against a failure that loses the written data in the cache. Clustering is a way to do that – the write caching and clustering are two sides of the same coin, in his view.
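Kumar's point is easiest to see in pseudocode. The sketch below is my own illustration of the general write-back-with-replication idea, not PernixData's implementation; the class and its methods are invented. A write is acknowledged to the VM only once a copy sits on a second host's flash, so a single host failure cannot lose acknowledged data that hasn't yet reached the array.

```python
# Hypothetical write-back flash cache that replicates to a peer host before
# acknowledging. Illustrative only; plain dicts stand in for flash and array.

class FlashWriteBackCache:
    def __init__(self, local_flash, peer_flash, backing_array):
        self.local = local_flash      # block -> data on this host's SSD
        self.peer = peer_flash        # replicas of un-destaged writes on another host's SSD
        self.array = backing_array    # the shared storage array behind the cache
        self.dirty = set()            # blocks acknowledged but not yet written to the array

    def write(self, block, data):
        self.local[block] = data      # land the write on local flash
        self.peer[block] = data       # replicate to the peer's flash before acknowledging
        self.dirty.add(block)
        return "ack"                  # the VM sees flash latency, not array latency

    def read(self, block):
        # Serve from flash when possible, fall back to the array.
        return self.local.get(block, self.array.get(block))

    def destage(self):
        # Lazily push dirty blocks down to the array; once the array holds the
        # data, the peer replica is no longer needed for safety.
        for block in list(self.dirty):
            self.array[block] = self.local[block]
            del self.peer[block]
            self.dirty.discard(block)

# If the caching host dies, whatever remains in peer_flash is exactly the set of
# acknowledged-but-not-destaged writes, and a surviving host can replay it:
#   for block, data in peer_flash.items():
#       backing_array[block] = data
```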

Kumar thinks FVP may have the edge over VMware when it comes to read caching because PernixData's product is designed for flash, whereas the VMware file system used in the Flash Read Cache is probably still disk-based.

Because of this, he says, it will be hard for VMware to add write caching and clustering, since its product wasn't designed for flash from day one. Kumar reckons PernixData has a two to three year lead over VMware, and over anyone else offering hypervisor and VM flash caching.

PernixData VSA

General roadmap ideas include support for Hyper-V and possibly other hypervisors. Another train of thought is that, since FVP sees all the IO from the virtualised server, it could perhaps do something more with it, such as providing storage array controller functions.

Far from seeing FVP as a one-trick pony, PernixData sees it not only as speeding up access to data but also as well placed to organise, in some way, the data downstream of the cache. If that could make data storage less expensive and/or more flexible than traditional arrays, while speeding up server data access even further, then PernixData could well expand its product's capabilities.

If it doesn't do that and VMware adds write caching and cluster support to its read cache, then PernixData might well be in trouble – its one trick would no longer be needed. ®