Big data elephant mates with RainStor

RainStor Hadoops its storage

Fast and really fast

The RainStor Hadoop product avoids big data transfers and runs queries against Hadoop data faster than other approaches; Bantleman says RainStor can deliver a 10-100X performance boost for analytics.

He cites an extreme example of RainStor's analytics acceleration from the New York Stock Exchange, where the task was to calculate the average daily trading price of a single stock. There were 1.5 billion trades on the day in question in November 2011, all held in a Hadoop data store.

A Hadoop MapReduce batch run took four hours while a RainStor MapReduce run looking at all the data took 80 minutes. With the query treated as an ad hoc query the Hadoop MapReduce time was the same: four hours. A RainStor MapReduce run with filtering took two minutes and a RainStor SQL run took eight seconds.

Bantleman provides these figures with a straight face. Apparently, a four-hour Hadoop MapReduce run to find a single stock's NYSE average price for a day, with 1.5 billion trades in around 8,000 files, ran 1,800 times faster using a SQL query against the Hadoop data stored natively in RainStor.
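The arithmetic behind those claims checks out against the quoted wall-clock times. A quick sketch, using only the run times given above:

```python
# Back-of-envelope check of the quoted run times from the NYSE example.
HOUR, MINUTE = 3600, 60

hadoop_mapreduce = 4 * HOUR        # baseline Hadoop MapReduce batch run
rainstor_mapreduce = 80 * MINUTE   # RainStor MapReduce, full scan of all data
rainstor_filtered = 2 * MINUTE     # RainStor MapReduce with partition filtering
rainstor_sql = 8                   # RainStor SQL query, in seconds

print(hadoop_mapreduce / rainstor_mapreduce)  # 3x    - compression alone
print(hadoop_mapreduce / rainstor_filtered)   # 120x  - plus filtering
print(hadoop_mapreduce / rainstor_sql)        # 1800x - the SQL path
```

The 1,800X figure is simply four hours (14,400 seconds) divided by eight seconds.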

Partition filtering vs brute force

Bantleman said: "We have partition filtering. Most databases have rows and columns and row indices. The RainStor filter tells me what not to read. The query looks at our metadata and asks which partitions contain, for example, IBM. There might be 8 instead of 8,000. Brute force reads everything, taking lots of time; we don't."
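Partition filtering of this kind amounts to a metadata lookup that prunes partitions before any data is read. A minimal sketch of the idea — the partition map, symbol sets and function below are illustrative assumptions, not RainStor's actual internals:

```python
# Illustrative sketch of metadata-driven partition pruning (hypothetical
# structures; RainStor's internal format is not public in this detail).

# Per-partition metadata: which stock symbols appear in each partition.
partition_metadata = {
    "part-0001": {"IBM", "AAPL"},
    "part-0002": {"MSFT", "ORCL"},
    "part-0003": {"IBM"},
    # ... thousands more partitions in a real store
}

def partitions_to_read(symbol):
    """Return only the partitions whose metadata says the symbol is present."""
    return [p for p, symbols in partition_metadata.items() if symbol in symbols]

# Brute force would scan every partition; pruning reads a small subset -
# the "8 instead of 8,000" Bantleman describes.
print(partitions_to_read("IBM"))  # ['part-0001', 'part-0003']
```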

When RainStor was forced to read everything in the batch run – all 8,000 partitions – it was still 3 times faster because its data was compressed 25 times, whereas the raw Hadoop data wasn't: "We ran faster because the I/O overhead was massively reduced."
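The full-scan numbers are consistent with I/O shrinking by the compression ratio while other work stays constant. A rough model, assuming (purely for illustration) that the four-hour Hadoop run was I/O-bound:

```python
# Rough model (assumed, for illustration): run time = I/O time + other work.
raw_scan = 4 * 3600           # assume the 4-hour Hadoop run is mostly I/O
compressed_io = raw_scan / 25  # 25x compression -> 25x fewer bytes to read
observed = 80 * 60             # the 80-minute RainStor full scan

# Under this model, most of the compressed run is CPU and decompression,
# not I/O - which is why 25x compression yields "only" a 3x overall speedup.
print(compressed_io, observed - compressed_io)
```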

Other goodies in the RainStor Hadoop product include geo-replication and the ability to set retention and expiry times for data. Data ingested under one schema can cope with schema changes, so it can be viewed through different schemas without having to be re-ingested.

Looking ahead, Bantleman believes machine-to-machine messaging will cause a huge increase in the amount of data organisations have to deal with. He also thinks big data compression and deduplication will be extremely valuable for anyone needing to store big data in flash, enabling many concurrent high-speed queries against a much smaller data footprint than you started with.

RainStor Enterprise Big Data Analytics On Hadoop is available now. ®
