Original URL: https://www.theregister.com/2011/04/04/marvell_dragonfly/

Marvell builds gateway to the clouds

DragonFly caching HBA

By Chris Mellor

Posted in SaaS, 4th April 2011 16:04 GMT

Marvell has a new take on server I/O caching: a two-tier NV-RAM and SSD DragonFly caching adapter claimed to increase server storage I/O performance tenfold.

The idea is a PCIe card with 1-8GB of level 1 NV-RAM cache, Marvell embedded processors, and software that turns the NV-RAM, plus a level 2 solid state drive (SSD) cache also hooked up to the server, into a distributed storage I/O cache. That cache fronts direct-attached storage, filers (NAS) and block-access storage area network (SAN) storage, using iSCSI, SCSI, FCoE or Fibre Channel.
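Marvell hasn't published the cache internals, but the two-tier idea can be pictured in miniature: a small, fast level 1 in front of a bigger level 2, with reads promoted to level 1 on a hit and writes absorbed up top before being de-staged downwards. Here is a toy Python sketch; the LRU and promote-on-hit policies are our assumptions, not Marvell's actual algorithms.

from collections import OrderedDict

class TwoTierCache:
    """Toy model of an L1 (NV-RAM) cache in front of an L2 (SSD) cache.
    The LRU and promote-on-hit policies are illustrative assumptions,
    not Marvell's actual HyperScale algorithms."""

    def __init__(self, l1_blocks, l2_blocks):
        self.l1 = OrderedDict()   # small, fast NV-RAM tier
        self.l2 = OrderedDict()   # larger, slower SSD tier
        self.l1_blocks, self.l2_blocks = l1_blocks, l2_blocks

    def read(self, lba):
        if lba in self.l1:        # L1 hit: served at NV-RAM speed
            self.l1.move_to_end(lba)
            return self.l1[lba]
        if lba in self.l2:        # L2 hit: promote the block into L1
            return self._insert_l1(lba, self.l2.pop(lba))
        return None               # miss: the read goes to the backend array

    def write(self, lba, data):
        self._insert_l1(lba, data)   # write-back: absorbed by NV-RAM first

    def _insert_l1(self, lba, data):
        self.l1[lba] = data
        self.l1.move_to_end(lba)
        if len(self.l1) > self.l1_blocks:       # de-stage coldest L1 block to SSD
            old_lba, old_data = self.l1.popitem(last=False)
            self.l2[old_lba] = old_data
            if len(self.l2) > self.l2_blocks:   # L2 full: evict (a dirty block
                self.l2.popitem(last=False)     # would be flushed to disk first)
        return data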

Marvell DragonFly VSA with its covering on.

It's called the DragonFly Virtual Storage Appliance, and Marvell OEM customers or the reselling channel have to buy the SSDs separately. They can choose single- or multi-level cell SSDs, in 2.5-inch or 3.5-inch form factors, with 3Gbit/s or 6Gbit/s SAS or SATA interfaces.

The context Marvell describes is a world of rapidly virtualising servers making ever-heavier I/O demands on backend storage arrays, which, it claims, are becoming storage I/O bottlenecks. Instead of expensively upgrading the arrays, you stick a DragonFly cache into each array-accessing application server and cache both storage reads and writes, improving server storage I/O performance affordably.

HyperScale caching

Marvell reveals a fair bit of detail about its HyperScale-branded caching. First, there is flash-aware write-buffering, which de-stages writes from NV-RAM to the SSD storage in what Marvell says is a log-structured manner that is cognisant of SSD wear-levelling and garbage collection. Secondly, there is re-ordered write-coalescing, which de-stages writes from the SSD storage to hard disk drive storage ordered in such a way as to reduce the number of IOPS needed. In other words, random writes are re-ordered to be more sequential. Altogether, such caching can improve write performance tenfold, according to Marvell.
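The write-coalescing step is easy to picture. In this illustrative sketch (the log-structured de-staging itself is Marvell's proprietary sauce), buffered random writes are sorted by logical block address and adjacent blocks merged into contiguous runs, so many small random disk writes become a few larger sequential ones:

def coalesce_writes(writes, block_size=4096):
    """Merge buffered random writes into sequential runs before de-staging.
    writes: dict mapping LBA -> data block. Returns (start_lba, bytes) runs.
    Illustrative only; not Marvell's actual de-staging code."""
    runs = []
    run_start, run_data = None, b""
    for lba in sorted(writes):                # re-order random writes by address
        if run_start is not None and lba == prev_lba + 1:
            run_data += writes[lba]           # extend the current sequential run
        else:
            if run_start is not None:
                runs.append((run_start, run_data))
            run_start, run_data = lba, writes[lba]
        prev_lba = lba
    if run_start is not None:
        runs.append((run_start, run_data))
    return runs                               # one disk IOP per run, not per block

# Example: ten scattered 4KB writes collapse into four sequential runs
buffered = {lba: b"\0" * 4096 for lba in [7, 8, 9, 3, 4, 100, 101, 102, 50, 51]}
print([(start, len(data) // 4096) for start, data in coalesce_writes(buffered)])
# -> [(3, 2), (7, 3), (50, 2), (100, 3)]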

On the read side, Marvell says, "Intelligent population and eviction algorithms enable granular I/O temperature mapping to distinguish hot vs cold data at a sub-VM (virtual machine) level (block and file). Configured as an option to default write-back mode, write-through operates as a read cache so that only random reads benefit. This offers up to 10x improvement in random read performance."
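Marvell doesn't say how its temperature map works. A common heuristic, assumed here purely for illustration, is to track per-block access counts with exponential decay, so recently busy blocks rank as hot while stale ones cool off into eviction candidates:

import time
from collections import defaultdict

class TemperatureMap:
    """Toy I/O temperature tracker using exponentially-decayed access counts.
    The decay model is a common heuristic assumed here; Marvell's actual
    population and eviction algorithms are not public."""

    def __init__(self, half_life_s=60.0):
        self.half_life = half_life_s
        self.heat = defaultdict(float)    # block -> decayed access count
        self.stamp = {}                   # block -> time of last access

    def touch(self, block):
        now = time.monotonic()
        last = self.stamp.get(block, now)
        decay = 0.5 ** ((now - last) / self.half_life)
        self.heat[block] = self.heat[block] * decay + 1.0
        self.stamp[block] = now

    def is_hot(self, block, threshold=4.0):
        """Hot blocks are worth caching; cold ones are eviction candidates."""
        return self.heat.get(block, 0.0) >= threshold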

Having storage I/O caches in servers is what Fusion-io provides with its ioDrives, and Virident with its tachIOn cards. Marvell says its DragonFly is better because it uses faster-than-flash NV-RAM as a level 1 cache, with the flash used as a larger-capacity level 2 cache. Marvell passes over the benefits of having flash cache in storage arrays, as NetApp does, and of having flash drives replace hard drives in arrays. In fact, Marvell says, you can use slower arrays for storing data, with the DragonFlys providing the speed needed to handle servers running lots of VMs.

Performance

As well as the headline 10X increase in server storage I/O, Marvell says DragonFly offers a maximum of 200,000 random 4KB block writes and reads per second, the read figure requiring a 100 per cent hit rate in a maximum-size read cache. It can also sustain 3GB/sec for 256KB block writes and reads.
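For scale, a quick back-of-the-envelope conversion of those vendor figures:

# Nothing here beyond arithmetic on Marvell's quoted numbers.
random_iops = 200_000                    # 4KB random I/Os (100 per cent cache hit)
print(f"4KB random: ~{random_iops * 4 * 1024 / 1e9:.1f}GB/sec")   # ~0.8GB/sec
seq_bw = 3e9                             # 3GB/sec at 256KB blocks
print(f"256KB sequential: ~{seq_bw / (256 * 1024):,.0f} IOPS")    # ~11,444 IOPS

So the 4KB random figure is worth about 0.8GB/sec of traffic, while the big-block bandwidth figure corresponds to roughly 11,400 large IOPS.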

Marvell DragonFly VSA with its casing removed.

It seems intuitively obvious, though, that overall system performance will depend upon the amount and type of SSD installed in the host server alongside the DragonFly card.

There is a supercapacitor to safeguard cache contents in the case of power failure, and one DragonFly can act as a synchronous mirror of another to provide high availability.
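Synchronous mirroring means a cached write is acknowledged to the host only after both cards hold it, so a single card failure loses no data. A hypothetical sketch of that acknowledgement rule, with CacheCard and commit invented purely for illustration:

class CacheCard:
    """Stand-in for one DragonFly's NV-RAM; purely illustrative."""
    def __init__(self):
        self.nvram = {}
    def commit(self, lba, data):
        self.nvram[lba] = data    # supercap keeps this safe across power loss

def mirrored_write(lba, data, primary, mirror):
    # Synchronous mirror: the host write completes only after BOTH cards
    # have committed the block, so either card can fail without data loss.
    primary.commit(lba, data)     # land in the local card's NV-RAM
    mirror.commit(lba, data)      # replicate to the partner card
    return "ack"                  # only now is the write acknowledged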

Marvell claims DragonFly represents an entirely new architecture in the industry and is a good fit for cloud service provider requirements, as the embedded DragonFly processors put almost no load on host servers and let storage arrays serve many more virtual machines than they otherwise could.

Beta testing will start in the third quarter and general availability should commence by year-end. A DragonFly should cost about the same as a high-end host bus adaptor. ®