Gear6 satiates hungry apps with 500GB RAM monster

I'll take a rack of memory, please


Gear6 fits into that elite, blissful class of start-ups with an easy-to-digest premise and infrastructure-friendly gear.

The Silicon Valley-based firm ships a pair of caching appliances. These RAM-based boxes plug right into existing Ethernet networks and work as complements to disk-based shared storage systems. As a result, applications that depend on accessing large data sets tend to enjoy dramatic performance improvements by getting much of their information straight from the speedy appliances rather than always going out to disk.
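The premise is the classic read-through caching pattern, just implemented as a network appliance instead of in application code. A toy sketch of the idea (all names here are illustrative, not Gear6's actual interface):

```python
# Toy illustration of read-through caching: serve hot reads from
# RAM, fall back to slow shared storage on a miss, and populate
# the cache so the next read of that key is fast.

class ReadThroughCache:
    def __init__(self, backing_store):
        self._ram = {}              # fast in-memory cache
        self._disk = backing_store  # stand-in for slow disk storage
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self._ram:        # fast path: served from RAM
            self.hits += 1
            return self._ram[key]
        self.misses += 1            # slow path: go out to disk
        value = self._disk[key]
        self._ram[key] = value      # cache it for subsequent reads
        return value

disk = {"block-42": b"seismic data"}
cache = ReadThroughCache(disk)
cache.read("block-42")   # miss: fetched from disk, now cached
cache.read("block-42")   # hit: served straight from RAM
print(cache.hits, cache.misses)  # prints: 1 1
```

The win comes when many servers read the same large data set: after the first miss, repeat reads never touch the disk array at all.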

Gear6 likes to focus its pitch around I/O operations per second, or IOPS. Its base CACHEfx G200 appliance boasts 250,000 IOPS with 1.6GB/s of throughput and less than half a millisecond of latency. This system takes up half a rack, has 250GB of capacity and costs $400,000 including hardware, software and support. The G400 appliance is twice the machine across the board.

At 250,000 IOPS, Gear6 boasts that it can beat out a high-end NAS (network attached storage) system from, say, NetApp by 5X. The company also says that its half a millisecond latency easily beats out the typical 2 to 8 milliseconds of latency experienced by customers with demanding, clustered software. All told, Gear6 promises a 10X to 30X performance improvement for applications with large data sets.
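A quick sanity check on the published G200 numbers: 1.6GB/s at 250,000 IOPS works out to about 6.4KB per operation, which is consistent with the small-block random I/O these claims target.

```python
# Back-of-the-envelope check on Gear6's claimed G200 figures.
throughput_bps = 1.6e9   # 1.6 GB/s, in bytes per second (claimed)
iops = 250_000           # operations per second (claimed)

avg_op_size = throughput_bps / iops
print(avg_op_size)       # 6400.0 bytes, i.e. ~6.4KB per operation
```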

So, you can imagine oil and gas, video, bio-tech, financial services and database companies wanting to have a look at the gear. And, in fact, the likes of Sony and GX Technology, a player in the oil and gas field, have already bought Gear6 systems.

While the hardware may seem expensive, it should prove attractive to high-performance computing shops and big business types that demand the most out of their hardware at just about any price.

To use the memory boost, customers slot the Gear6 appliances into their existing Ethernet networks (10Gbps Ethernet is supported) in between server clusters and shared storage systems. Administrators then point servers with low I/O performance at the Gear6 systems.

"At that point, hundreds of servers can read the cached information simultaneously without any impact on our system's performance," Gear6 VP Gary Orenstein told us. "We're talking about maintaining thousands upon thousands of concurrent sessions with all of these CPUs and keeping everything coherent, synchronized, accurate and reliable."

At the moment, each G200 appliance is actually constructed out of eight server-like systems with their own x86 CPU, networking and memory. All of those systems are joined to form a single coherent cache. Gear6 then layers its Reflex OS management software on top to monitor the health of each server and the overall performance of the system. In addition, the company provides a number of tweaks for spreading software across the memory and scheduling different jobs.
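Gear6 doesn't disclose how Reflex OS places cached data across those eight nodes. One standard way to do it (shown purely as an illustration, not a description of Gear6's method) is to hash each key to a node, so every client deterministically finds the same data on the same node without duplicating it:

```python
import hashlib

# Hypothetical sketch: deterministically map cache keys across
# eight server-like nodes. NOT Gear6's actual placement scheme.
NODES = [f"node-{i}" for i in range(8)]

def node_for(key: str) -> str:
    # A stable hash means every client maps a given key to the
    # same node, keeping the combined cache coherent.
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(NODES)
    return NODES[index]

print(node_for("block-42"))  # same node every time for this key
```

Real systems typically use consistent hashing instead of a plain modulus, so that adding or removing a node only remaps a fraction of the keys.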

In the future, Gear6 hopes to ship more software for divvying up appliances into separate partitions.

The company also hopes that customers will come up with some novel uses for its hardware. You just dump a ton of memory in users' laps and wait to see what happens.

You all knew that someone would pump out MAS (memory attached storage) boxes sooner or later. There are just too many tempting uses for such a system - like slapping a database in memory - to ignore the concept. And it seems very likely that Gear6 will face some direct competition soon enough.

Gear6's website seems pretty thin on benchmarks or detailed statistical information beyond its basic performance claims. But its simple speed-up premise remains sound and easy to digest.

We'll be curious to see just how many customers are willing to shell out for this I/O edge.

You can test your "Gear6 readiness" here.

The company's management page is worth a look too. As is customary for a start-up, Gear6 has a number of ex-Sun employees. In particular, we're glad to see that Martin Patterson found a home after Sun banished his start-up Terraspring to the place where Sun acquisitions go to die.®
