NetApp may serve app server flash cache

'Won't catch us with our Flash pants down' says NetApp

Comment We've seen EMC and Dell looking to manage flash caches in application servers and contrasting this with NetApp's array-controller Flash Cache strategy. Actually NetApp is involved with storage array management of flash caches in app servers - witness its Project Mercury presentation at the FAST '11 conference in San Jose in February.

A Reg commenter pointed this out, as did a NetApp person responding to the Dell server flash story, saying there's "no catching us with flash pants down".

The idea was to link a flash cache in the server to a shared, centrally-managed storage array so that virtual machines (VMs) and their storage could be moved between physical servers in a shared-pool datacentre without losing any I/O speed-up benefits from having a local flash cache.
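
One way this could work in outline, assuming the write-through design described below so the shared array always holds the authoritative copy: only a list of hot block numbers needs to travel with the VM, and the destination host re-warms its own local cache from the array. The sketch below is purely illustrative, not NetApp's actual Project Mercury code.

    # Illustrative sketch only: not NetApp's Project Mercury implementation.
    # Assumes a write-through cache, so nothing dirty ever sits in server flash
    # and the shared array always holds the current data. shared_array is any
    # mapping of block number to data; a plain dict stands in for the array.

    class HostFlashCache:
        def __init__(self):
            self.blocks = {}                       # block number -> cached data

        def hot_block_numbers(self):
            return list(self.blocks)               # cheap metadata, easy to ship

        def warm_from_array(self, shared_array, block_numbers):
            for bn in block_numbers:               # re-fetch hot blocks from the
                self.blocks[bn] = shared_array.get(bn)   # shared array after the move

    def migrate_vm(src_cache, dst_cache, shared_array):
        hot = src_cache.hot_block_numbers()        # only block numbers cross the wire
        # ... the hypervisor moves the VM's memory and device state here ...
        dst_cache.warm_from_array(shared_array, hot)
        src_cache.blocks.clear()                   # safe to drop: write-through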

NetApp Project Mercury code stack schematic (NetApp)

NetApp devised its Mercury block-oriented, write-through flash cache as a KVM/QEMU block driver in a Linux guest VM. It provided an "hg" disk format, and requests sent to the hg device were handed over to the SSD cache.
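
In rough terms, a write-through cache serves reads from the local SSD on a hit, while every write goes to the backing array as well as the cache, so the array never holds stale data. A minimal sketch, with Python dictionaries standing in for the SSD and the array (the real Mercury code is a KVM/QEMU block driver, not Python):

    class SharedArray:
        """Stand-in for the backing storage array."""
        def __init__(self):
            self.blocks = {}

        def read(self, block):
            return self.blocks.get(block, b"\x00" * 4096)   # unwritten blocks read as zeros

        def write(self, block, data):
            self.blocks[block] = data

    class WriteThroughCache:
        """Stand-in for the server-side SSD cache in front of the array."""
        def __init__(self, backing):
            self.backing = backing
            self.cache = {}                    # block number -> data (the "SSD")

        def read(self, block):
            if block in self.cache:            # hit: served from local flash
                return self.cache[block]
            data = self.backing.read(block)    # miss: fetch from the array
            self.cache[block] = data           # populate the cache on the way back
            return data

        def write(self, block, data):
            self.backing.write(block, data)    # write-through: array updated first
            self.cache[block] = data           # cache kept coherent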

The cache was "warmed" (loaded) with a few days' activity and then NetApp engineers looked at the server I/O effects. There was a "nearly 40 per cent reduction in mean I/O service time" with a "near 50 per cent reduction of requests sent to [the] server." Almost all the reads were serviced from the Mercury cache.

Serial I/O showed a small improvement whereas random I/O had a substantial improvement.

Server flash cache or storage array flash?

These were measurements of server I/O with and without the Mercury cache. We don't know how a Mercury-cached server would compare with a server backed by a storage array with Flash Cache in its controller, or by an array using SSDs as a drive tier.

It seems likely that a Mercury-cached server would have some read I/O improvement over a Flash Cache-equipped storage array controller, but not that much, as we could assume the flash contents would be the same and the Mercury cache would be a few microseconds nearer the server's DRAM in latency terms.

It would also be a few more microseconds nearer the server's main memory than an SSD drive tier in a network-accessed storage array, again assuming the data contents were the same. Whether this latency improvement is significant, intuition can't say; we need engineers and measurements to tell us.
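
As a back-of-envelope illustration, with latency figures that are our assumptions rather than anyone's measurements: effective read latency is the hit rate times the cache latency plus the miss rate times the fall-through latency, so a few microseconds' difference between server flash and array flash only shows up once the hit rate is very high.

    # Back-of-envelope model; all latency figures are illustrative assumptions,
    # not measurements. Misses are assumed to fall through to disk at ~5ms.
    def effective_latency_us(hit_rate, hit_latency_us, miss_latency_us=5000):
        return hit_rate * hit_latency_us + (1 - hit_rate) * miss_latency_us

    for hit_rate in (0.80, 0.95, 0.99):
        local = effective_latency_us(hit_rate, 50)   # flash in the server (assumed 50µs)
        array = effective_latency_us(hit_rate, 60)   # flash behind the fabric (assumed 60µs)
        print(f"hit rate {hit_rate:.0%}: server flash {local:.0f}µs, array flash {array:.0f}µs")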

Judging by the existence of Dell, EMC, and NetApp work in this area, the indications are that it is significant.

Texas Memory Systems view

Jamon Bowen, Director of Sales Engineering at Texas Memory Systems, blogged on this topic, answering this question: "Doesn’t being on the PCIe bus increase performance by being as close to the CPU as possible?"

He wrote: "Yes, but nowhere near the degree it is promoted. Going through a HBA to FC-attached RamSan adds about 10µs of latency – that's it. The reason that accessing SSDs through most SAN systems take 1-2 ms is because of the software stack in the SAN head – not because of the PCIe to FC conversion.

"For our customers the decision to go with a PCIe RamSan-70 for a FC/IB-attached RamSan-630 comes down to whether the architecture needs to share storage."

TMS is not working on a way to make its PCIe RamSan-70 card shareable: "If the architecture needs… shared storage, use our shared storage systems."

He is not saying server PCIe flash has no role in large-scale server infrastructures needing co-ordination. This is how he sees that role:

In a shared storage model, a big core network is needed so each server can access the storage at a reasonable rate.  This is one of the main reasons a dedicated high performance Storage Area Network is used for the server to storage network.

However, after there are more than a few dozen servers, the network starts to become rather large.  Now imagine if you want to have tens of thousands of servers, the network becomes the dominant cost …  In these very large clusters the use of a network-attached shared storage model becomes impractical.

A new computing model was developed for these environments – a shared-nothing scale-out cluster. The basic idea is that each computer processes a part of the data that is stored locally; many nodes do this in parallel, and then an aggregation step compiles the results. This way all of the heavy data-to-CPU movement takes place within a single server and only the results are compiled across the network. This is the foundation of Hadoop as well as several data warehouse appliances.

In effect, rather than virtualized servers, a big network, and virtualized storage via a SAN or NAS array, the servers and storage are virtualized in a single step using hardware that has CPU resources and Direct-Attached Storage.

PCIe SSDs are important for this compute framework because reasonably priced servers are really quite powerful and can leverage quite a bit of storage performance. With the RamSan-70 each PCIe slot can provide 2 GB/s of throughput while fitting directly inside the server. This much local performance allows building high performance nodes for a scale-out shared-nothing cluster that balances the CPU and storage resources.

Otherwise, a large number of disks would be needed for each node or the nodes would have to scale to a lower CPU power than is readily available from mainstream servers. Both of these other options have negative power and space qualities that make them less desirable.

The rise of SSDs has provided a quantum leap in storage price-performance at a reasonable cost for capacity as new compute frameworks are moving into mainstream applications. 
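
A toy version of the shared-nothing pattern Bowen describes, with in-process lists standing in for servers and their direct-attached flash (illustrative only; frameworks such as Hadoop do this across thousands of real machines):

    from collections import Counter

    # Each "node" holds its own shard of the data on local (PCIe flash) storage.
    node_local_data = [
        ["error", "ok", "ok"],                 # node 0's shard
        ["ok", "error", "error"],              # node 1's shard
        ["ok", "ok", "ok", "error"],           # node 2's shard
    ]

    def local_step(shard):
        return Counter(shard)                  # heavy work runs where the data lives

    def aggregate(partials):
        total = Counter()
        for partial in partials:               # only small summaries cross the network
            total += partial
        return total

    partial_results = [local_step(shard) for shard in node_local_data]
    print(aggregate(partial_results))          # Counter({'ok': 6, 'error': 4})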

A shared, centrally-managed storage array could conceivably pre-load the server PCIe caches in Bowen's shared-nothing, scale-out cluster model, but would then have no further role to play. We might think that TMS would see no role for shared storage in such clusters because it doesn't want to be beholden to suppliers of such systems for RamSan-70 sales.

It will be interesting to see how HP, IBM and Oracle view the role of app server flash cache technology, and even more interesting to see server flash cache I/O behaviour with and without flash-cached and flash-tiered storage arrays. ®
