Dell pours Fluid Cache into PowerEdge servers
Flash the cash for faster database and HPC cache
RNA Networks got its start doing memory clustering across multiple server nodes, but since buying the company nearly two years ago Dell has been tweaking the memory-caching engine to speed up disk accesses. Specifically, the Fluid Cache software for PowerEdge servers that Dell is finally delivering converts Express Flash solid state disk modules into a peppy caching front-end for internal disks and JBOD arrays attached to servers.
Brian Payne, executive director for PowerEdge servers at Dell, tells El Reg that Fluid Cache for DAS, short for direct attached storage, is the first of several different products that the IT giant will roll out based on the RNA Networks technology.
Some history is in order to explain what Dell has done and what it could do in the future.
RNA Networks was founded in Intel's chip stomping grounds of Portland, Oregon, back in 2006 by Jason Gross and Ranjit Pandit, the latter of whom worked at Intel on the InfiniBand interconnect when it was being created, as well as the Pentium 4 chip; he also led the database clustering project at SilverStorm Technologies (which was acquired by QLogic).
The company came out of stealth mode four years ago with its first product, RNAmessenger. The idea behind RNAmessenger was that main memory, not processing or I/O, is the real bottleneck in the server. Instead of upgrading servers to get more memory capacity in each node, the founders argued, it made more sense to couple servers together so they could share their main memory as a pool.
The RNAmessenger software did not have tight coupling like SMP or NUMA CPU and memory clustering, but it was tight enough – and coherent enough – to trick a Linux kernel into thinking it had a lot of memory, and letting it use it. You didn't need InfiniBand's Remote Direct Memory Access (RDMA) or the Ethernet equivalent, RDMA over Converged Ethernet (RoCE), to make it work – but it sure helped. But you did need workloads that had a message passing (as in supercomputers) or publish/subscribe (as in financial trading systems) architecture for it to work well.
In July 2010, RNA Networks tweaked its caching engine and created a variant called Memory Virtualization Acceleration (abbreviated MVX, of course) that similarly created a memory pool across a cluster of server nodes for applications to splash in. Importantly, the caching engine in this MVX variant turned main memory into a cache for network-attached storage, riding above the Linux kernel, intercepting reads and writes bound for the NAS device, and redirecting them to main memory.
The important thing three years ago – and what will be important for future Dell products – is that a cluster of server nodes running MVX all saw the memory pool as their own main memory, all at the same time. It is a bit like software-based NUMA: the RNA software allocates some main memory in each server node as local memory and some main memory across all server nodes as remote memory, then lies to each node's operating system and presents the remote memory as if it were local.
With Fluid Cache for DAS, explains Gordon Bookless, senior engineering technologist at Dell, the caching engine is more or less the same, except now instead of caching to main memory, it caches to the Express Flash SSDs that Dell can plug into its PowerEdge 12G servers. (Small racks and blades get two of these units, and fatter servers get up to four of these hot-pluggable, front-loading SSDs.) And instead of pointing the cache to network-attached storage that connects to many servers, it points it to internal disk drives or JBOD arrays linked directly to a single server.
The big change with Fluid Cache is that there is a virtual block driver now, instead of a virtual memory overlay. And in case you are wondering, Dell is replicating data to two Express Flash SSD units at the same time, writing data to one and then the other and not committing the write until a block of data is on the second SSD. Once that is done, the data is replicated to the actual disk drives, and then removed from the second SSD. The net effect is speed on writes without risking data loss.
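The write path described above – acknowledge only after the block lands on both SSDs, then destage to disk and drop the redundant flash copy – can be sketched in a few lines of Python. This is purely illustrative, with dictionaries standing in for the two Express Flash SSDs and the backing disks; it is not Dell's implementation.

```python
# Illustrative sketch of the replicated write-back scheme: a write is
# committed only after both SSD copies exist; destaging later flushes the
# block to disk and removes the now-redundant second copy.

class ReplicatedWriteCache:
    def __init__(self):
        self.ssd_primary = {}   # stands in for the first Express Flash SSD
        self.ssd_replica = {}   # stands in for the second SSD
        self.disk = {}          # stands in for the internal disks or JBOD

    def write(self, block_id, data):
        self.ssd_primary[block_id] = data
        self.ssd_replica[block_id] = data  # write is only committed here
        return True                        # acknowledge once both copies exist

    def destage(self, block_id):
        # Later: flush the block to disk, then drop the second SSD copy.
        self.disk[block_id] = self.ssd_primary[block_id]
        del self.ssd_replica[block_id]

    def read(self, block_id):
        # Serve from flash if cached, otherwise fall back to disk.
        return self.ssd_primary.get(block_id, self.disk.get(block_id))


cache = ReplicatedWriteCache()
cache.write("blk0", b"payload")
cache.destage("blk0")
assert cache.read("blk0") == b"payload"
assert "blk0" not in cache.ssd_replica  # replica copy dropped after destage
```

The point of the two-copy commit is that a write can be acknowledged at flash speed while still surviving the loss of one SSD before the data reaches spinning disk.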
It stands to reason that Dell will eventually offer a version of Fluid Cache that spans multiple machines, perhaps called Fluid Memory and allowing the kind of main memory aggregation that RNA Networks had originally tried to sell. That could be especially useful for certain kinds of HPC and financial services applications, but admittedly a harder sell than accelerating storage on any PowerEdge 12G server regardless of operating system or workload.
This first release of Fluid Cache for DAS only supports the Linux variants that are supported on PowerEdge 12G servers, and it doesn't sound like Windows will be supported in the same manner. In the second half of this year, Dell will stretch Fluid Cache so it can front-end Compellent arrays attached to PowerEdge 12G servers, and Windows support "will be linked to the incorporation of the SAN," as Payne put it.
A PowerEdge rack server spitting out an Express Flash SSD module
The Fluid Cache feature only talks to those Express Flash modules at the moment, which are made by Micron Technology and which come in 175GB and 350GB capacities in their 2.5-inch form factor. The 175GB unit costs $2,843 and the 350GB unit costs $5,147, which is roughly twice as expensive as "value" SAS SLC solid state drives that Dell is selling for the PowerEdge 12G boxes. You don't have to pay for a Fluid Cache license for each SSD or even pair of SSDs.
The Fluid Cache for DAS software will come with a perpetual license and costs $3,500 per physical machine plus $700 per year for maintenance on that software license. (This is a lot less than RNA Networks was trying to charge, but it is also a more limited use case.) Still, the combination of the flash drives and software ain't cheap.
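To put "ain't cheap" in numbers, here is the back-of-the-envelope arithmetic for one server using the list prices above, assuming a pair of 350GB SSDs (the write-replication scheme needs two) plus the license and a year of maintenance:

```python
# First-year cost of the Fluid Cache/Express Flash combo on one server,
# using Dell's quoted list prices. Illustrative arithmetic only.
ssd_350gb = 5147      # one 350GB Express Flash SSD
license_fee = 3500    # Fluid Cache for DAS, per physical machine
maintenance = 700     # per year, on the software license

first_year = 2 * ssd_350gb + license_fee + maintenance
print(first_year)  # 14494
```

So a minimally configured setup with two of the larger SSDs runs to roughly $14,500 in year one, before the cost of the server itself.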
So what do you get for the money on the Fluid Cache/Express Flash combo?
Dell is still putting the Fluid Cache for DAS software through its paces, but Payne says it is particularly useful for two workloads. For traditional HPC workloads where NFS is used to link a server to its storage, the combination of Fluid Cache and Express Flash delivered up to 23X more I/O operations per second (IOPS) on random reads and up to 7X more on writes. For database workloads, front-ending the internal storage in the server with Express Flash and the Fluid Cache software doubled transaction throughput and cut the average transaction response time by 95 per cent. ®