RNA rejiggers server memory pooling

Better performance, simpler pricing

Since early 2009, RNA Networks has been selling memory virtualization and pooling software to help companies get around memory bottlenecks in their standalone systems. Now it's taking another whack at the market with a rejiggered product lineup, more sensible packaging, and simpler pricing.

As El Reg told you in February 2009 when Portland, Oregon-based RNA Networks came out of stealth mode and started peddling its first product, called RNAmessenger, the company has a slightly different twist on server virtualization. RNA's software, which is now called MVX (short for Memory Virtualization Acceleration), creates a memory pool from a network of servers for applications to splash in as they run on each server node separately.

There are virtualization tools that carve up one physical server into many virtual servers — the ESX Server, KVM, Hyper-V, and Xen hypervisors commonly used on x64 iron do this, as do z/VM, PowerVM, and IntegrityVM on mainframe, Power, and Itanium iron.

There are other tools, like vSMP from ScaleMP and the ill-fated initial instance of Virtual Iron, that aggregate the server capacity across a network of many separate physical machines and use a different kind of hypervisor to make it all look like one giant server using symmetric multiprocessing (SMP) or non-uniform memory access (NUMA) clustering.

That first kind of server virtualization is relatively easy, but the latter is very tough to make work — or else no one would still be making real SMP and NUMA systems, with chipsets and on-chip SMP support.

With the MVX release 2.5 lineup, RNA Networks has taken the core RNAmessenger and RNAcache programs and merged them back into one product with a set of features aimed at accelerating server workloads in three distinct ways. This will make the job of selling MVX a lot easier initially, and given what has happened with virtual machine and logical partition hypervisors on servers, there's plenty of room down the road to break differentiated features out into their own SKUs at additional cost.

But as RNA Networks has learned, when you are starting out you have to keep the message — and the products — simple.

The memory-pooling software at the heart of MVX 2.5 does just what the name implies: it takes the raw main memory in servers and carves it up into local main memory for each physical server and a shared extended memory space that all of the servers can access remotely.

This access works best over InfiniBand links with Remote Direct Memory Access (RDMA) or 10 Gigabit Ethernet links with the analogous RDMA over Converged Ethernet (RoCE) protocol, but MVX will also run over a standard Gigabit Ethernet network without direct memory access between nodes. Depending on how you configure MVX, you can make that memory pool look like extended memory or a fast RAM disk, to use PC metaphors.
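To make the idea concrete, here's a toy Python sketch of the pooling concept (our illustration, not RNA's code; the Node and MemoryPool names are invented): each node keeps some RAM for itself and donates the rest to a flat shared address space that routes reads and writes to whichever node owns the region.

    class Node:
        """One server in the cluster: keeps some RAM local, donates the rest."""
        def __init__(self, name, total_mb, donated_mb):
            self.name = name
            self.local_mb = total_mb - donated_mb        # kept for the node's own use
            self.donated = bytearray(donated_mb << 20)   # slice lent to the pool

    class MemoryPool:
        """Flat address space stitched together from every node's donation."""
        def __init__(self, nodes):
            self.segments, offset = [], 0
            for node in nodes:
                size = len(node.donated)
                self.segments.append((offset, offset + size, node))
                offset += size
            self.size = offset

        def _locate(self, addr):
            for start, end, node in self.segments:
                if start <= addr < end:
                    return node, addr - start
            raise IndexError("address outside the pool")

        def read(self, addr, length):
            # In the real product this hop would be an RDMA read over
            # InfiniBand or RoCE rather than a local slice access.
            node, off = self._locate(addr)
            return bytes(node.donated[off:off + length])

        def write(self, addr, data):
            node, off = self._locate(addr)
            node.donated[off:off + len(data)] = data

    # Two 128MB nodes each donating 96MB to a 192MB shared pool
    pool = MemoryPool([Node("a", 128, 96), Node("b", 128, 96)])
    pool.write(0, b"hello")
    assert pool.read(0, 5) == b"hello"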

The MVX 2.5 features are called Memory Cache, Memory Motion, and Memory Store, and they all rely on the memory-pooling technology that launched in 2009.

The Cache function turns the memory pool into a cache for network attached storage (NAS) file systems linked to the servers, keeping the data the servers use most frequently in the memory pool. Provided the servers have enough memory, terabytes of data can be held in the server cluster and accessed by many server nodes at memory speeds, avoiding the bottleneck of waiting for disk drives and networks to feed data to the nodes.
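The mechanics resemble an ordinary read-through cache, just with the shared pool as the backing store. A minimal sketch (illustrative only; nas_read and the block layout are our assumptions, not MVX's API):

    import collections

    class PoolBackedCache:
        """Read-through cache: a block is fetched from the NAS once, then
        served from pooled memory at RAM speed. LRU eviction keeps the
        cache inside its share of the pool."""
        def __init__(self, nas_read, capacity_blocks):
            self.nas_read = nas_read                 # callable that fetches one block from NAS
            self.capacity = capacity_blocks
            self.blocks = collections.OrderedDict()  # stands in for pooled memory

        def read_block(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)    # hit: refresh LRU position
                return self.blocks[block_id]
            data = self.nas_read(block_id)           # miss: pay the disk/network cost once
            self.blocks[block_id] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)      # evict the coldest block
            return data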

This scheme, says Rod Butters, RNA Networks' chief marketing officer and VP of products, is a lot less expensive than putting big gobs of cache memory on the NAS. The important thing about the Cache function (which originally shipped as RNAcache in June 2009) is that each server node has simultaneous access to the datasets stored in the memory pool; MVX handles the contention issues.

Memory Motion is an MVX feature that gives operating systems on physical servers or virtualized servers and their hypervisors access to a shared memory pool that functions as a giant swap device for the physical and virtual machines that participate in the pool. (This is just another way of getting around waiting for disk drives or even solid state disks to feed the servers data.)
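In swap-device terms, a page-out becomes a network memory copy instead of a disk write. A rough sketch of the idea, reusing the MemoryPool toy from above (again, invented names, not RNA's interfaces):

    class PoolSwap:
        """Cold pages are pushed out to the shared pool; paging one back
        in is a remote memory read rather than a disk seek."""
        PAGE = 4096

        def __init__(self, pool, base_addr):
            self.pool = pool        # the MemoryPool toy from the earlier sketch
            self.base = base_addr   # region of the pool reserved for swap
            self.slots = {}         # page number -> slot index in that region
            self.next_slot = 0

        def swap_out(self, page_no, page_bytes):
            if page_no not in self.slots:
                self.slots[page_no] = self.next_slot
                self.next_slot += 1
            slot = self.slots[page_no]
            self.pool.write(self.base + slot * self.PAGE, page_bytes)

        def swap_in(self, page_no):
            slot = self.slots[page_no]
            return self.pool.read(self.base + slot * self.PAGE, self.PAGE)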

The Memory Store feature turns the memory pool into a collection of virtual block storage devices that server workloads can use instead of swapping big and temporary files out to disk as they perform calculations. One server can mount multiple instances of these virtual RAM disks, and multiple servers can mount a single virtual RAM disk if they are sharing data.
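The block-device view is the same trick with a disk-shaped interface on top. One more toy sketch (hypothetical, sector size and all): carve a region of the pool into fixed-size sectors and let servers read and write them as if the pool were a very fast disk.

    class PoolBlockDevice:
        """A virtual RAM disk carved out of the shared pool: fixed-size
        sectors, addressable like a disk, served at memory speed. Several
        servers could mount the same region to share data, as long as
        something (MVX, in the real product) arbitrates the writes."""
        SECTOR = 512

        def __init__(self, pool, base_addr, num_sectors):
            self.pool = pool
            self.base = base_addr
            self.num_sectors = num_sectors

        def _check(self, n):
            if not 0 <= n < self.num_sectors:
                raise IndexError("sector out of range")

        def read_sector(self, n):
            self._check(n)
            return self.pool.read(self.base + n * self.SECTOR, self.SECTOR)

        def write_sector(self, n, data):
            assert len(data) == self.SECTOR, "whole sectors only"
            self._check(n)
            self.pool.write(self.base + n * self.SECTOR, data)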

This virtual block device is actually new code in the MVX 2.5 release, and it can sneak in under virtual machine hypervisors before they load on servers, making a single host look like it has more memory than it actually does and allowing it to support even more virtual machines than the physical limits might imply.

Here's what one benchmark that RNA Networks has done on its MVX block device code shows in terms of speeding up reads and writes over SATA and solid state disks:

[Chart: RNA MVX Block Device Benchmark, comparing read and write speeds against SATA and solid state disks]

RNA Networks' MVX software is not restricted to x64 machines: it can be used with servers running 32-bit or 64-bit Sparc, Power, and Itanium processors, as well as 32-bit x86 iron. It is intended to scale across hundreds of nodes and deliver multiple terabytes of shared memory in the pool. It currently works on Unix and Linux servers.

The original RNAmessenger was priced on a per-node basis, with prices ranging from $7,500 to $10,000 per server depending on architecture. RNAcache cost $2,000 per server node. With the converged MVX 2.5 product, RNA Networks is shifting to a single price: $80 per gigabyte of capacity in the shared pool. That works out to more or less the same money, since a server node donating around 96GB to the shared pool comes to $7,680, in line with the old per-node pricing. ®
