
RNA rejiggers server memory pooling

Better performance, simpler pricing


Since early 2009, RNA Networks has been selling memory virtualization and pooling software to help companies get around memory bottlenecks in their standalone systems. Now it's taking another whack at the market with a rejiggered product lineup, more sensible packaging, and simpler pricing.

As El Reg told you in February 2009 when Portland, Oregon-based RNA Networks came out of stealth mode and started peddling its first product, called RNAmessenger, the company has a slightly different twist on server virtualization. RNA's software, which is now called MVX (short for Memory Virtualization Acceleration), creates a memory pool from a network of servers for applications to splash in as they run on each server node separately.

There are virtualization tools that carve up one physical server into many virtual servers — the ESX Server, KVM, Hyper-V, and Xen hypervisors commonly used on x64 iron do this, as do z/VM, PowerVM, and IntegrityVM on mainframe, Power, and Itanium iron.

There are other tools, like vSMP from ScaleMP and the ill-fated initial instance of Virtual Iron, that aggregate the server capacity across a network of many separate physical machines and use a different kind of hypervisor to make it all look like one giant server using symmetric multiprocessing (SMP) or non-uniform memory access (NUMA) clustering.

That first kind of server virtualization is relatively easy, but the latter is very tough to make work — or else no one would still be making real SMP and NUMA systems, with chipsets and on-chip SMP support.

With the MVX release 2.5 lineup, RNA Networks has taken the core RNAmessenger and RNAcache programs and merged them into a single product with a set of features aimed at accelerating server workloads in three distinct ways. This will make the job of selling MVX a lot easier initially, and given what has happened with virtual machine and logical partition hypervisors on servers, there's plenty of room down the road to break out differentiated features and give them their own SKUs and additional costs.

But as RNA Networks has learned, when you are starting out you have to keep the message — and the products — simple.

The memory-pooling software at the heart of MVX 2.5 does just what the name implies: it takes the raw main memory in servers and carves it up into local main memory for each physical server and a shared extended memory space that all of the servers can access remotely.

This access will work best over InfiniBand links with Remote Direct Memory Access (RDMA) or 10 Gigabit Ethernet links with the analogous RDMA over Converged Ethernet (RoCE) protocol. But the MVX product will also work over a standard Gigabit Ethernet network without direct memory access between nodes. Depending on how you configure MVX, you can make that memory pool look like extended memory or a fast RAM disk, to use PC metaphors.
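RNA doesn't publish the MVX programming interface in this announcement, but the basic carve-up is easy to picture. The toy, single-process Python sketch below illustrates it with entirely invented class and method names: each node keeps some RAM for itself and donates the rest to a pool that any node can read or write by page number. In the real product those remote accesses would cross InfiniBand RDMA, RoCE, or plain Gigabit Ethernet rather than a Python slice.

# Illustrative only: not RNA's MVX API; all names here are hypothetical.
PAGE_SIZE = 4096  # bytes per page, as on most x64 systems

class Node:
    """One server: some memory kept local, the rest donated to the pool."""
    def __init__(self, name, local_pages, donated_pages):
        self.name = name
        self.local = bytearray(local_pages * PAGE_SIZE)
        self.donated = bytearray(donated_pages * PAGE_SIZE)

class MemoryPool:
    """Aggregates every node's donated region into one shared address space."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.pages_per_node = len(nodes[0].donated) // PAGE_SIZE

    def _locate(self, page_no):
        # Map a pool-wide page number to a node and an offset in its donated RAM
        node = self.nodes[page_no // self.pages_per_node]
        offset = (page_no % self.pages_per_node) * PAGE_SIZE
        return node, offset

    def write_page(self, page_no, data):
        node, off = self._locate(page_no)
        node.donated[off:off + PAGE_SIZE] = data.ljust(PAGE_SIZE, b"\0")

    def read_page(self, page_no):
        node, off = self._locate(page_no)
        return bytes(node.donated[off:off + PAGE_SIZE])

if __name__ == "__main__":
    # Three servers, each keeping four pages local and donating four to the pool
    pool = MemoryPool([Node(f"server{i}", 4, 4) for i in range(3)])
    pool.write_page(9, b"this page lives in server2's donated RAM")
    print(pool.read_page(9).rstrip(b"\0"))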

The MVX 2.5 features are called Memory Cache, Memory Motion, and Memory Store, and they all rely on the memory-pooling technology that launched in 2009.

The Cache function turns the memory pool into a cache for network attached storage (NAS) file systems linked to the servers, storing data that the servers use frequently in the memory pool. Provided the servers have enough memory, terabytes of data can be held in the server cluster and accessed by many server nodes at memory speeds, avoiding the bottleneck of waiting for disk drives and networks to feed data to the server nodes.

This scheme, says Rod Butters, RNA Networks' chief marketing officer and VP of products, is a lot less expensive than putting big gobs of cache memory on the NAS. The important thing about the Cache function (which was originally shipped as RNAcache in June 2009) is that each server node has simultaneous access to the datasets stored in the memory pool. MVX handles the contention issues.
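The general shape of that kind of read-through cache is familiar enough. The Python sketch below, with made-up names throughout, shows the pattern the Cache feature is described as implementing: check the shared pool first, go to the NAS filer only on a miss, and serialise updates so concurrent nodes don't trample one another. It is an illustration of the technique, not RNA's actual contention protocol.

# Illustrative read-through cache in the spirit of Memory Cache; hypothetical names.
import threading

class PooledFileCache:
    def __init__(self, nas_read):
        self.nas_read = nas_read        # callable that fetches a block from the filer
        self.pool = {}                  # stands in for the shared memory pool
        self.lock = threading.Lock()    # crude stand-in for MVX's contention handling

    def read(self, path, block_no):
        key = (path, block_no)
        with self.lock:
            if key in self.pool:        # hit: served at memory speed
                return self.pool[key]
        data = self.nas_read(path, block_no)   # miss: one trip to the NAS
        with self.lock:
            self.pool[key] = data
        return data

if __name__ == "__main__":
    trips = []

    def slow_nas_read(path, block_no):
        trips.append((path, block_no))
        return b"x" * 4096              # pretend this came off a disk shelf

    cache = PooledFileCache(slow_nas_read)
    for _ in range(1000):
        cache.read("/exports/db/table.dat", 42)
    print(f"NAS round trips for 1000 reads: {len(trips)}")   # -> 1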

Memory Motion is an MVX feature that gives operating systems on physical servers, as well as virtualized servers and their hypervisors, access to a shared memory pool that functions as a giant swap device for the physical and virtual machines participating in the pool. (This is just another way of getting around waiting for disk drives or even solid state disks to feed data to the servers.)
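In effect the pool behaves like a swap device that happens to live in other servers' DRAM. The toy Python below, again with hypothetical names, shows the paging pattern: a node keeps a fixed number of pages resident and pushes the least recently used page out to the pool when local RAM fills up. MVX does this underneath the operating system or hypervisor, not in application code as this sketch does.

# Toy illustration of swap-to-pool paging; not how MVX is actually wired in.
from collections import OrderedDict

class SwapToPool:
    def __init__(self, resident_frames):
        self.resident = OrderedDict()   # page_no -> data, kept in LRU order
        self.capacity = resident_frames
        self.pool = {}                  # stands in for the remote memory pool

    def touch(self, page_no, data=None):
        if page_no in self.resident:
            self.resident.move_to_end(page_no)          # already in local RAM
        else:
            if page_no in self.pool:
                data = self.pool.pop(page_no)           # page it back in
            if len(self.resident) >= self.capacity:
                victim, victim_data = self.resident.popitem(last=False)
                self.pool[victim] = victim_data         # page out over the wire
            self.resident[page_no] = data
        return self.resident[page_no]

if __name__ == "__main__":
    swap = SwapToPool(resident_frames=2)
    swap.touch(1, b"a"); swap.touch(2, b"b"); swap.touch(3, b"c")  # evicts page 1
    print(sorted(swap.pool))        # [1] now lives in the pool
    print(swap.touch(1))            # b'a', pulled back from the pool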

The Memory Store feature turns the memory pool into a collection of virtual block storage devices that server workloads can use instead of swapping big temporary files out to disk as they perform calculations. One server can mount multiple instances of these virtual RAMdisks, and multiple servers can mount a single virtual RAMdisk if they are sharing data.

This virtual block device is actually new code in the MVX 2.5 release, and it can sneak in under virtual machine hypervisors before they load on servers, making a single host look like it has more memory than it actually does and allowing it to support even more virtual machines than the physical limits might imply are possible.
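Mechanically, what the workload or hypervisor sees is just a block device: reads and writes by block number, served out of pooled DRAM instead of a disk spindle. Here's a minimal Python sketch of that interface, with invented names, purely to show the semantics; the real Memory Store presents itself to the OS or slips in beneath the hypervisor rather than living in a user program.

# Minimal block-device-style interface backed by pool memory; hypothetical names.
BLOCK_SIZE = 4096

class PoolBackedBlockDevice:
    """A virtual RAMdisk carved out of the shared pool."""
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks   # stand-in for pool memory

    def write_block(self, block_no, data):
        assert len(data) <= BLOCK_SIZE
        self.blocks[block_no] = data.ljust(BLOCK_SIZE, b"\0")

    def read_block(self, block_no):
        return self.blocks[block_no]

if __name__ == "__main__":
    # Two workloads on different servers can "mount" the same device and
    # share scratch data through it at memory speed instead of via disk.
    scratch = PoolBackedBlockDevice(num_blocks=256)
    scratch.write_block(0, b"intermediate result written by server A")
    print(scratch.read_block(0).rstrip(b"\0"))            # read back on server B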

Here's what one benchmark that RNA Networks has done on its MVX block device code shows in terms of speeding up reads and writes over SATA and solid state disks:

[Chart: RNA MVX Block Device Benchmark]

RNA Networks' MVX software is not restricted to x64 machines and can be used with servers running 32-bit or 64-bit Sparc, Power, and Itanium processors as well as 32-bit x86 iron. It is intended to scale across hundreds of nodes and deliver multiple terabytes of shared memory in the pool. It currently works on Unix and Linux servers.

The original RNAmessenger was priced on a per-node basis, with prices ranging from $7,500 to $10,000 per server depending on architecture. RNAcache cost $2,000 per server node. With the converged MVX 2.5 product, RNA Networks is shifting to a single price for the product: $80 per gigabyte in the shared pool. That works out to more or less the same money, since each server node in a cluster donates around 96GB to the shared pool, which comes to roughly $7,680 per node. ®
