
Marvell builds gateway to the clouds

DragonFly caching HBA

Marvell has a new take on server I/O caching: a two-tier NV-RAM and SSD DragonFly caching adapter claimed to increase server storage I/O performance tenfold.

The idea is a PCIe card with 1-8GB of level 1 NV-RAM cache, Marvell embedded processors, and software that combines the NV-RAM with a level 2 solid state drive (SSD) cache, also hooked up to the server, into a distributed storage I/O cache. That cache fronts direct-attached storage, filers (NAS) and block-access storage area network (SAN) storage, using iSCSI, SCSI, FCoE or Fibre Channel.
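
To picture how a two-tier lookup like that behaves, here is a minimal Python sketch: a small, fast level 1 (standing in for the NV-RAM) in front of a larger level 2 (standing in for the SSD). The tier sizes and the LRU eviction policy are illustrative assumptions, not Marvell's disclosed design.

    from collections import OrderedDict

    class TwoTierCache:
        """Illustrative two-tier cache: a small fast L1 (standing in
        for NV-RAM) in front of a larger L2 (standing in for SSD).
        Both tiers use LRU eviction purely for simplicity."""

        def __init__(self, l1_blocks=1024, l2_blocks=65536):
            self.l1 = OrderedDict()   # block_id -> data
            self.l2 = OrderedDict()
            self.l1_blocks = l1_blocks
            self.l2_blocks = l2_blocks

        def read(self, block_id, backend_read):
            if block_id in self.l1:               # L1 hit: fastest path
                self.l1.move_to_end(block_id)
                return self.l1[block_id]
            if block_id in self.l2:               # L2 hit: promote to L1
                data = self.l2.pop(block_id)
            else:                                 # miss: fetch from the array
                data = backend_read(block_id)
            self._insert_l1(block_id, data)
            return data

        def _insert_l1(self, block_id, data):
            self.l1[block_id] = data
            if len(self.l1) > self.l1_blocks:     # demote LRU block to L2
                old_id, old_data = self.l1.popitem(last=False)
                self.l2[old_id] = old_data
                if len(self.l2) > self.l2_blocks: # evict LRU block from L2
                    self.l2.popitem(last=False)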

Marvell DragonFly Virtual Storage Appliance, pictured with its cover on.

It's called the DragonFly Virtual Storage Appliance, and Marvell OEM customers or the reselling channel have to buy the SSDs separately. They can choose single or multi-level cell SSDs, in 2.5-inch or 3.5-inch form factors, with 3 or 6Gbit/s SAS or SATA interfaces.

The context Marvell describes is a world of rapidly virtualising servers making heavier and heavier I/O demands on backend storage arrays, which, it claims, are becoming storage I/O bottlenecks. Instead of expensively upgrading the arrays, the pitch goes, stick a DragonFly cache into each array-accessing application server and cache both storage reads and writes to improve server storage I/O performance affordably.

HyperScale caching

Marvell reveals a fair bit of detail about its HyperScale-branded caching. First, there is flash-aware write-buffering, which de-stages writes from NV-RAM to the SSD storage in what Marvell says is a log-structured manner, cognisant of SSD wear-levelling and garbage collection. Secondly, there is re-ordered write-coalescing, which de-stages writes from the SSD storage to hard disk drive storage, ordered so as to reduce the number of IOPS needed. In other words, random writes are re-ordered to be more sequential. Altogether, such caching can deliver a tenfold increase in write performance, according to Marvell.
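
The write-coalescing step is easy to sketch: buffer the random writes, sort them by logical block address, and merge adjacent blocks into sequential runs that each need only one disk I/O. A minimal Python illustration, assuming 4KB blocks (the merge policy is our assumption; Marvell hasn't published its algorithm):

    def coalesce_writes(pending, block_size=4096):
        """Merge buffered random writes into sequential runs.
        `pending` is a dict of {lba: data}; returns a list of
        (start_lba, joined_data) runs, each issuable as one disk I/O."""
        runs = []
        for lba in sorted(pending):                   # re-order by address
            data = pending[lba]
            if runs and lba == runs[-1][0] + len(runs[-1][1]) // block_size:
                runs[-1] = (runs[-1][0], runs[-1][1] + data)  # extend the run
            else:
                runs.append((lba, data))              # start a new run
        return runs

    # Eight scattered 4KB writes collapse into three sequential I/Os:
    writes = {100: b'a'*4096, 7: b'b'*4096, 101: b'c'*4096, 8: b'd'*4096,
              9: b'e'*4096, 500: b'f'*4096, 102: b'g'*4096, 6: b'h'*4096}
    for start, data in coalesce_writes(writes):
        print(start, len(data) // 4096, "blocks")     # 6: 4, 100: 3, 500: 1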

On the read side, Marvell says, "Intelligent population and eviction algorithms enable granular I/O temperature mapping to distinguish hot vs cold data at a sub-VM (virtual machine) level (block and file). Configured as an option to default write-back mode, write-through operates as a read cache so that only random reads benefit. This offers up to 10x improvement in random read performance."
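
A temperature map of this general sort can be sketched simply: count accesses per block, decay the counts periodically so idle blocks cool off, admit hot blocks to the cache, and evict the coldest. The threshold and decay values below are illustrative assumptions; Marvell does not disclose its actual algorithms.

    from collections import defaultdict

    class TemperatureMap:
        """Track per-block access heat so a read cache can admit hot
        blocks and evict cold ones. The decay factor and admission
        threshold here are illustrative assumptions."""

        def __init__(self, hot_threshold=3.0, decay=0.5):
            self.heat = defaultdict(float)
            self.hot_threshold = hot_threshold
            self.decay = decay

        def touch(self, block_id):
            """Record a read; repeated hits push a block toward 'hot'."""
            self.heat[block_id] += 1.0

        def decay_all(self):
            """Called periodically so idle blocks cool off over time."""
            for block_id in list(self.heat):
                self.heat[block_id] *= self.decay
                if self.heat[block_id] < 0.1:
                    del self.heat[block_id]       # stone cold: forget it

        def is_hot(self, block_id):
            """Hot blocks are worth keeping in the SSD read cache."""
            return self.heat[block_id] >= self.hot_threshold

        def coldest(self, cached_ids):
            """Eviction candidate: the coolest block currently cached."""
            return min(cached_ids, key=lambda b: self.heat[b])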

Server-side storage I/O caching is what Fusion-io provides with its ioDrives, and Virident with its tachIOn cards. Marvell says its DragonFly is better because it uses faster-than-flash NV-RAM as a level 1 cache, with the flash used as a larger-capacity level 2 cache. Marvell ignores the benefits of having flash cache in storage arrays, as NetApp does, and of having flash drives replace hard drives in arrays. In fact, Marvell says, you can use slower arrays for storing data, with the DragonFlys providing the speed needed to handle servers running lots of VMs.
