NetApp may serve app server flash cache

'Won't catch us with our Flash pants down' says NetApp

Comment We've seen EMC and Dell looking to manage flash caches in application servers and contrasting this with NetApp's array controller Flash Cache strategy. Actually NetApp is involved with storage array management of flash caches in app servers - witness its Project Mercury presentation at the FAST '11 conference at San Jose in February.

A Reg commenter pointed this out, as did a NetApp person responding to the Dell server flash story, saying there's "no catching us with flash pants down".

The idea was to link a flash cache in the server to a shared, centrally-managed storage array so that virtual machines (VMs) and their storage could be moved between physical servers in a shared-pool datacentre without losing any I/O speed-up benefits from having a local flash cache.

NetApp Project Mercury code stack (schematic: NetApp)

NetApp built Mercury as a block-oriented, write-through flash cache, implemented as a KVM/QEMU block driver in a Linux guest VM. It provided an "hg" disk format; requests sent to the hg device were handed over to the SSD cache.
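The write-through behaviour is the key design point: reads are served from flash on a hit and populate it on a miss, while writes go to both the cache and the backing store, so the cache never holds data the array doesn't have. A minimal conceptual sketch (hypothetical Python; Mercury itself is a KVM/QEMU block driver, not anything like this code):

```python
# Toy sketch of a block-level write-through cache. All names are
# illustrative; this is not NetApp's implementation.

class WriteThroughCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # the shared array: block -> data
        self.cache = {}                # local flash cache: block -> data

    def read(self, block):
        if block in self.cache:        # cache hit: serve from flash
            return self.cache[block]
        data = self.backing[block]     # miss: fetch from the array...
        self.cache[block] = data       # ...and warm the cache
        return data

    def write(self, block, data):
        # Write-through: update cache AND backing store together,
        # so a cache loss never loses committed data.
        self.cache[block] = data
        self.backing[block] = data

array = {0: b"old"}
c = WriteThroughCache(array)
c.write(0, b"new")
assert c.read(0) == b"new" and array[0] == b"new"
```

Because the cache is never dirty, a VM can be migrated to another physical server and simply re-warm a fresh cache there, without any cache-flush step — which is the point of tying the server cache to the centrally managed array.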

The cache was "warmed" (loaded) with a few days' activity and then NetApp engineers looked at the server I/O effects. There was a "nearly 40 per cent reduction in mean I/O service time" with a "near 50 per cent reduction of requests sent to [the] server." Almost all the reads were serviced from the Mercury cache.

Serial I/O showed a small improvement whereas random I/O had a substantial improvement.

Server flash cache or storage array flash?

These were measurements of server I/O with and without the Mercury cache. We don't know how a Mercury-cached server would compare against a storage array with a Flash Cache in its controller, or against an array using SSDs as a drive tier.

It seems likely that a Mercury-cached server would show some read I/O improvement over a Flash-Cached storage array controller, but not much of one: we could assume the flash contents would be the same, with the Mercury cache merely a few microseconds nearer the server's DRAM in latency terms.

It would be a few more microseconds nearer the server's main memory than an SSD drive tier in a network-accessed storage array, again assuming identical data contents. Intuition can't say whether this latency improvement is significant; we need engineers and measurements to tell us.

Judging by the existence of Dell, EMC, and NetApp work in this area, the indications are that it is significant.

Texas Memory Systems view

Jamon Bowen, Director of Sales Engineering at Texas Memory Systems, blogged on this topic, answering this question: "Doesn’t being on the PCIe bus increase performance by being as close to the CPU as possible?"

He wrote: "Yes, but nowhere near the degree it is promoted. Going through a HBA to FC-attached RamSan adds about 10µs of latency – that's it. The reason that accessing SSDs through most SAN systems take 1-2 ms is because of the software stack in the SAN head – not because of the PCIe to FC conversion.

"For our customers the decision to go with a PCIe RamSan-70 for a FC/IB-attached RamSan-630 comes down to whether the architecture needs to share storage."

TMS is not working on a way to make its PCIe RamSan-70 card shareable: "If the architecture needs… shared storage, use our shared storage systems."
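Bowen's figures can be turned into a rough latency budget. The ~50µs raw flash access time below is our illustrative assumption; the 10µs HBA/FC hop and the 1-2ms SAN software stack are his numbers:

```python
# Rough latency budget built from the figures quoted above.
# flash_us is an illustrative assumption; the other two figures
# come from Bowen's blog post.

flash_us = 50            # assumed raw flash access time (microseconds)
fc_hop_us = 10           # HBA + FC conversion, per Bowen
san_stack_us = 1500      # SAN head software stack, mid-range of 1-2 ms

pcie_local = flash_us                              # PCIe card in the server
fc_direct = flash_us + fc_hop_us                   # FC-attached RamSan
san_array = flash_us + fc_hop_us + san_stack_us    # SSD behind a SAN head

print(pcie_local, fc_direct, san_array)  # 50 60 1560
```

On these (assumed) numbers the FC hop adds about 20 per cent to access time, while the SAN software stack multiplies it roughly thirty-fold — which is Bowen's point: the stack, not the wire, is the penalty.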

He is not saying server PCIe flash has no role in large-scale server infrastructures needing co-ordination. This is how he sees that role:

In a shared storage model, a big core network is needed so each server can access the storage at a reasonable rate. This is one of the main reasons a dedicated high-performance Storage Area Network is used for the server-to-storage network.

However, after there are more than a few dozen servers, the network starts to become rather large. Now imagine if you want to have tens of thousands of servers: the network becomes the dominant cost … In these very large clusters the use of a network-attached shared storage model becomes impractical.

A new computing model developed for these environments – the shared-nothing scale-out cluster. The basic idea is that each computer processes the part of the data that is stored locally; many nodes do this in parallel, and then an aggregation step compiles the results. This way all of the heavy data-to-CPU movement takes place within a single server and only the results are compiled across the network. This is the foundation of Hadoop as well as several data warehouse appliances.
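The shared-nothing pattern Bowen describes fits in a few lines (a toy Python sketch, not any particular product's code): each node reduces only its local shard, and only the small per-node results cross the "network" to the aggregation step.

```python
# Toy sketch of shared-nothing scale-out: each "node" processes the
# data on its local storage; only tiny partial results are shipped.

local_shards = [            # data as it sits on each node's local disks
    [3, 1, 4],              # node 0
    [1, 5, 9],              # node 1
    [2, 6, 5],              # node 2
]

# Map phase: heavy data-to-CPU movement stays inside each node.
partial_sums = [sum(shard) for shard in local_shards]

# Aggregation phase: only the small partials cross the network.
total = sum(partial_sums)
print(partial_sums, total)  # [8, 15, 13] 36
```

The bytes moved between nodes scale with the number of nodes, not with the size of the data — which is why the core network stops being the dominant cost in this model.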

In effect, rather than virtualized servers, a big network, and virtualized storage via a SAN or NAS array, the servers and storage are virtualized in a single step using hardware that has CPU resources and Direct-Attached Storage.

PCIe SSDs are important for this compute framework because reasonably priced servers are really quite powerful and can leverage quite a bit of storage performance. With the RamSan-70 each PCIe slot can provide 2 GB/s of throughput while fitting directly inside the server. This much local performance allows building high performance nodes for a scale-out shared-nothing cluster that balances the CPU and storage resources.

Otherwise, a large number of disks would be needed for each node or the nodes would have to scale to a lower CPU power than is readily available from mainstream servers. Both of these other options have negative power and space qualities that make them less desirable.
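Some rough arithmetic shows the disk-count penalty Bowen alludes to. The 2GB/s per card is TMS's own figure; the ~150MB/s sustained throughput per spinning disk is our illustrative assumption for a drive of the era:

```python
# How many conventional disks match one RamSan-70 PCIe card on
# throughput? 2,000 MB/s per card is TMS's figure; 150 MB/s per
# disk is an illustrative assumption, not a quoted number.

card_mbps = 2_000        # sustained MB/s from one PCIe RamSan-70
disk_mbps = 150          # assumed sustained MB/s per spinning disk

disks_needed = -(-card_mbps // disk_mbps)   # ceiling division
print(disks_needed)  # 14
```

On that assumption each node would need a shelf of a dozen-plus spindles to keep pace with one PCIe slot — hence the "negative power and space qualities" of the disk-heavy alternative.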

The rise of SSDs has provided a quantum leap in storage price-performance, at a reasonable cost per unit of capacity, just as these new compute frameworks move into mainstream applications.

A shared, centrally-managed storage array could conceivably pre-load the server PCIe caches in Bowen's shared-nothing, scale-out cluster model, but it would then have no further role to play. We might suspect TMS sees no role for shared storage in such clusters because it doesn't want RamSan-70 sales to be beholden to suppliers of such systems.

It will be interesting to see how HP, IBM and Oracle view the role of app server flash cache technology, and even more interesting to see server flash cache I/O behaviour with and without flash-cached and flash-tiered storage arrays. ®
