Original URL: https://www.theregister.com/2012/04/12/microsoft_in_memory/

Microsoft opens trenchcoat, reveals 'in-memory' Big Data column

Just the (100 billion) facts, man

By Gavin Clarke

Posted in Databases, 12th April 2012 07:32 GMT

If there’s one thing scarier than the big-data tsunami, tech vendors tell us, it’s being left out of the big-data conversation.

Microsoft is the latest software maker to crowbar itself into the debate on big data, this time claiming a place at the table on in-memory databases.

According to a blog post here, flagged up here, in-memory database technologies are reaching a tipping point and will become mainstream over the next five to ten years.

And guess what? Microsoft is poised to exploit this. SQL Server Technical Fellow Dave Campbell writes:

"Microsoft has been investing in, and shipping, in-memory database technologies for some time."

Campbell identified “in-memory” in the Microsoft world as the column-based storage engine that first shipped in PowerPivot for Excel and SharePoint. That engine now ships with the newly released SQL Server 2012 as the xVelocity in-memory analytics engine, part of SQL Server Analysis Services.

Campbell claimed a 200-fold performance gain for one SQL Server 2012 customer “through the use of this new in-memory optimized columnstore index type.”
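
For readers unfamiliar with the term, a columnstore keeps each column’s values together rather than keeping each row’s values together, so an analytic query reads only the columns it actually touches, and each column compresses well because it holds values of a single type. The toy Python below is our own sketch of that general idea, not SQL Server’s engine:

```python
# Toy illustration of why column-oriented storage speeds up analytic scans.
# This is not SQL Server's xVelocity engine -- just the underlying idea.

# Row store: each record carries every column.
rows = [
    {"order_id": i, "region": i % 4, "amount": float(i % 100)}
    for i in range(100_000)
]

# Column store: one contiguous array per column.
columns = {
    "order_id": [r["order_id"] for r in rows],
    "region":   [r["region"] for r in rows],
    "amount":   [r["amount"] for r in rows],
}

# SELECT SUM(amount): the row store touches every field of every row...
row_total = sum(r["amount"] for r in rows)

# ...while the column store scans only the one array it needs, which is
# also far friendlier to CPU caches and to compression.
col_total = sum(columns["amount"])

assert row_total == col_total
```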

Microsoft’s man promised more from Redmond’s labs.

“Microsoft is also investing in other in-memory database technologies which will ship as the technology and opportunities mature,” he said. He didn’t reveal details, but said this includes an in-memory database solution in the company’s labs and building out real-world scenarios “to demonstrate the potential.”

“One such scenario, based upon one of Microsoft’s online services businesses, contains a fact table of 100 billion rows. In this scenario we can perform three calculations per fact – 300 billion calculations in total, with a query response time of 1/3 of a second. There are no user defined aggregations in this implementation; we actually scan over the compressed column store in real time,” he said.
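
Campbell’s arithmetic holds up: three calculations across 100 billion facts is 300 billion calculations, and finishing that in a third of a second implies a throughput of roughly 900 billion calculations per second. Numbers like that are only plausible because the scan operates on the compressed data directly; with a run-length-style encoding, one step can cover an entire run of repeated values. A rough Python sketch of the general trick, not Microsoft’s implementation:

```python
# Rough sketch of aggregating over a run-length-encoded column without
# decompressing it -- one reason compressed column scans can be so fast.
# This illustrates the general technique, not Microsoft's code.

# A column of 12 values...
raw = [5, 5, 5, 5, 9, 9, 2, 2, 2, 2, 2, 2]

# ...stored as (value, run_length) pairs.
rle = [(5, 4), (9, 2), (2, 6)]

# SUM over the raw column: one addition per row.
assert sum(raw) == 50

# SUM over the RLE column: one multiply-add per *run*, however long.
rle_sum = sum(value * length for value, length in rle)
assert rle_sum == 50
```

The fewer the distinct values and the longer the runs, the bigger the win, which is why columnstores typically dictionary-encode and reorder data before compressing it.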

When it comes to in-memory, database giant Oracle at least has some legitimacy. Years before Big Data was a blob on the horizon, Larry Ellison’s database beast swallowed tiny TimesTen in 2005. TimesTen uses replication and memory-optimized access techniques to keep data in a system’s RAM rather than writing it out to disk.
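
The general shape of such a system is simple enough: the authoritative copy of the data lives in RAM, reads and writes never wait on disk, and durability comes from asynchronously shipping a change log to disk or to a replica. A minimal Python sketch, assuming nothing about TimesTen’s actual internals:

```python
import json

# Minimal sketch of an in-memory store with a deferred durability path.
# TimesTen's real replication and logging are far more sophisticated;
# this only shows the shape: RAM holds the primary copy, and the disk
# (or a replica) only ever sees an asynchronous change log.

class InMemoryStore:
    def __init__(self, log_path="changes.log"):
        self.data = {}            # primary copy lives entirely in RAM
        self.pending = []         # changes not yet made durable
        self.log_path = log_path

    def put(self, key, value):
        self.data[key] = value            # served from memory immediately
        self.pending.append((key, value)) # durability is deferred

    def get(self, key):
        return self.data.get(key)         # no disk I/O on the read path

    def flush(self):
        # A real system runs this on a background thread, or ships the
        # change stream to a replica node instead of a local file.
        with open(self.log_path, "a") as log:
            for key, value in self.pending:
                log.write(json.dumps({"k": key, "v": value}) + "\n")
        self.pending.clear()

store = InMemoryStore()
store.put("account:42", {"balance": 100})
print(store.get("account:42"))
store.flush()
```

The obvious trade-off is that a crash before the flush loses the most recent writes, which is exactly the gap replication to a second node is there to close.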

Last week, Ellison’s great white whale SAP revived its own HANA in-memory platform, announcing a $337m database adoption program and a $155m SAP HANA Real-Time Fund for startups and entrepreneurs to develop real-time apps.

SAP Ventures, the ERP giant’s venture-capital wing, has also joined Toshiba, Juniper Networks and others in putting $50m into flash-array start-up Violin Memory.

All that money and noise might account for Microsoft’s sudden eagerness to plant its own in-memory flag.

While Oracle might be the in-memory leader, though, it isn't above a little shameless bandwagon-jumping when it needs to. In January 2011 the database giant was laying some tenuous claims of its own, this time to NoSQL.

"Is Berkeley DB a 'NoSQL' solution today?" Oracle asked here of the embedded database it bought in 2006.

"Nope. Could Berkeley DB grow into a NoSQL solution? Absolutely" - given the right changes. ®