Original URL: https://www.theregister.com/2012/02/16/rainstor_hadoop/

Big data elephant mates with RainStor

RainStor Hadoops its storage

By Chris Mellor

Posted in Storage, 16th February 2012 18:38 GMT

RainStor, the deduping database supplier, is bringing its analytics engine and enterprise database to Hadoop, rather than bringing Hadoop data to its engine.

Hadoop is becoming a standard for storing big data, but most business intelligence analytics software – such as that pushed out by Greenplum, Netezza and Teradata – does not natively support the Hadoop file system, HDFS, so data has to be extracted and moved to the analytics engine. This takes time and needs disk space for the copied data.

John Bantleman, RainStor's CEO, briefed us on the company's Hadoop support – RainStor Big Data Analytics on Hadoop – and said existing business intelligence (BI) analytics routines run against extracted Hadoop data can take hours, whereas RainStor's Hadoop-supporting analytics engine is claimed to run them 10 to 100 times faster. Before we get to that, let's just acquaint ourselves with RainStor's history.

The story starts with a UK company called Clearpace back in 2008. Its NParchive product archived less frequently accessed data from an Oracle database or other RDBMS – uniquely, in deduplicated form, at a 20:1 or better dedupe ratio – on cheap SATA drives. SQL routines could be run against the NParchive directly, with no need to rehydrate the data first.
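Clearpace never published NParchive's internals, but the broad trick – evaluating SQL predicates against deduplicated, dictionary-encoded values without expanding the data back out – can be sketched in a few lines of Python (the encoding scheme below is an illustrative assumption, not NParchive's actual format):

    # Sketch of querying deduplicated (dictionary-encoded) data without
    # rehydrating it. The encoding here is illustrative, not NParchive's.
    values = ["ERROR", "OK", "OK", "ERROR", "OK"]       # raw column
    dictionary = sorted(set(values))                    # ["ERROR", "OK"]
    encoded = [dictionary.index(v) for v in values]     # [0, 1, 1, 0, 1]

    # WHERE col = 'ERROR': resolve the predicate against the dictionary once,
    # then scan the small integer codes - no rehydration of the strings.
    code = dictionary.index("ERROR")
    matches = [i for i, c in enumerate(encoded) if c == code]
    print(matches)  # rows 0 and 3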

Bantleman moved Clearpace across to Silicon Valley and renamed both the company and the product RainStor, although there was no RAIN – redundant array of internet nodes – aspect to the name. The second phase of its development saw a move into telecommunications, using its database to store records of the tens of billions of network events a day that carriers generate.


One RainStor customer is Softbank in Japan. It stores 2PB of raw data, compressed and deduped down to 135TB held on HP scale-out NAS storage, and gets answers to questions about what individual subscribers did in a day in two to five seconds. A traditional database/data warehouse scheme would involve many petabytes of data at an average cost of $20,000/TB, meaning a 3PB setup would cost upwards of $60m. The RainStor/HP system cost around $5m.
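A quick back-of-the-envelope check of those Softbank numbers (assuming decimal terabytes per petabyte):

    raw_tb = 2 * 1000            # 2PB of raw data, in TB
    stored_tb = 135              # after RainStor compression and dedupe
    print(f"reduction: {raw_tb / stored_tb:.1f}:1")     # ~14.8:1

    warehouse_tb = 3 * 1000      # the hypothetical 3PB warehouse
    cost_per_tb = 20_000         # the quoted average $/TB
    print(f"warehouse cost: ${warehouse_tb * cost_per_tb / 1e6:.0f}m")  # $60m
    # versus roughly $5m quoted for the RainStor/HP system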

Big data elephant

This is big data under any definition, and big data means Hadoop – which brings us to RainStor's third development phase. It has spent over a year integrating Hadoop support into its product, enabling RainStor to run natively on Hadoop and execute both MapReduce and SQL queries against compressed and deduped Hadoop data. The company claims it can dedupe and compress such data at up to a 40:1 ratio – 97.5 per cent compression. Telco records, for example, are highly repetitive in their content and rewardingly susceptible to compression and deduplication.
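For reference, the 40:1 ratio and the 97.5 per cent figure are the same claim stated two ways:

    ratio = 40
    saved = (1 - 1 / ratio) * 100
    print(f"space saved at {ratio}:1 = {saved:.1f} per cent")  # 97.5 per cent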

RainStor says: "The compressed multi-structured data set running on HDFS delivers maximum efficiency and reduces the cluster size by 50 per cent to 80 per cent, which significantly lowers operating cost."

What about EMC Isilon's Hadoop integration and integration with Greenplum?

Bantleman says: "Greenplum doesn't allow you to run MapReduce; it's actually a Postgres database inside, and about parallel relational SQL queries. We are the only enterprise database that can run on HDFS ... [and] we've added the ability to support MapReduce.

"Greenplum, Teradata, Netezza and Vertica have built connectors to allow you to bring data out of Hadoop into their own databases. They can't run natively on Hadoop clusters; we can. .. RainStor allows you to run ad hoc analytics directly on the Hadoop environment."

Bantleman added that he thinks data transfers at big data scale are silly.

Fast and really fast

The RainStor Hadoop product can avoid big data transfers, and it can run queries against Hadoop data quicker than other approaches; Bantleman says RainStor can provide a 10-100X performance boost for analytics.

He quotes an extreme instance of RainStor's analytics acceleration, drawn from the New York Stock Exchange: the task was to calculate the average daily trading price for a single stock over one day. There were 1.5 billion trades on the day in question, in November 2011, stored in a Hadoop data store.

A Hadoop MapReduce batch run took four hours, while a RainStor MapReduce run scanning all the data took 80 minutes. Treated as an ad hoc query, the Hadoop MapReduce time was the same four hours; a RainStor MapReduce run with partition filtering took two minutes, and a RainStor SQL run took eight seconds.

Bantleman provides these figures with a straight face. Apparently, a four-hour Hadoop MapReduce run to find a single stock's NYSE average price for a day – 1.5 billion trades in around 8,000 files – ran 1,800 times faster as a SQL query against the Hadoop data stored natively in RainStor.
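The speedup arithmetic does check out against the quoted timings; here it is as a quick Python verification:

    hadoop_s       = 4 * 3600   # 14,400s: Hadoop MapReduce, batch or ad hoc
    rainstor_mr_s  = 80 * 60    # 4,800s: RainStor MapReduce, full scan
    rainstor_f_s   = 2 * 60     # 120s: RainStor MapReduce, partition filtering
    rainstor_sql_s = 8          # 8s: RainStor SQL query

    for name, t in [("full-scan MapReduce", rainstor_mr_s),
                    ("filtered MapReduce", rainstor_f_s),
                    ("SQL query", rainstor_sql_s)]:
        print(f"RainStor {name}: {hadoop_s / t:,.0f}x faster")
    # -> 3x, 120x and 1,800x faster respectively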

Partition filtering vs brute force

Bantleman said: "We have partition filtering. Most databases have rows and columns and row indices. The RainStor filter tells me what not to read. The query looks at our metadata and asks which partitions contain, for example, IBM. There might be 8 instead of 8,000. Brute force reads everything, taking lots of time; we don't."

When RainStor was forced to read everything in the batch run – all 8,000 partitions – it was still three times faster, because its data was compressed 25:1 whereas the raw Hadoop data wasn't: "We ran faster because the I/O overhead was massively reduced."
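RainStor's internals aren't public, but the idea Bantleman describes – consult partition metadata first, then read only the partitions that could possibly match – looks roughly like this sketch (the partition layout and the per-partition symbols field are illustrative assumptions, not RainStor's actual format):

    partitions = [
        # Each partition's metadata records which stock symbols it contains;
        # only every 1,000th partition here holds IBM trades, so a query on
        # IBM touches 8 partitions out of 8,000.
        {"id": i, "symbols": {"IBM", "MSFT"} if i % 1000 == 0
                             else {"AAPL", "GOOG"}}
        for i in range(8000)
    ]

    def candidate_partitions(symbol):
        # Metadata-only check: decide what NOT to read before touching data
        return [p for p in partitions if symbol in p["symbols"]]

    hits = candidate_partitions("IBM")
    print(f"read {len(hits)} of {len(partitions)} partitions")  # 8 of 8,000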

Other goodies in the RainStor Hadoop product include geo-replication and the ability to set retention and expiration times for data. Data is ingested under one schema, and the product copes with schema changes, so the same data can be viewed through different schemas without having to be re-ingested.
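RainStor hasn't said how its multi-schema views are implemented, but the general schema-on-read idea – store records once, apply whichever schema you like at query time – can be sketched as follows (the record fields and schema contents here are invented for illustration):

    records = [
        {"msisdn": "447700900123", "cell": "A1", "bytes": 512},
    ]

    schema_v1 = ["msisdn", "bytes"]            # the original, narrower view
    schema_v2 = ["msisdn", "cell", "bytes"]    # a later, wider view

    def view(rows, schema):
        # Project stored rows through a schema; fields absent from a row
        # surface as None instead of forcing the data to be re-ingested.
        return [{field: row.get(field) for field in schema} for row in rows]

    print(view(records, schema_v1))
    print(view(records, schema_v2))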

Looking ahead, Bantleman believes machine-to-machine messaging will cause a huge increase in the amount of data organisations have to deal with. He also thinks big data compression and deduplication will be extremely valuable for anyone needing to store big data in flash memory, enabling many concurrent high-speed queries against a far smaller footprint than the raw data you started with.

RainStor Enterprise Big Data Analytics On Hadoop is available now. ®