Microsoft boasts of Big Data chops for in-memory SQL
Data warehousing gets handier with Hadoop
Microsoft claims that new in-memory data processing capabilities in the next major release of SQL Server will improve performance by up to fifty times over current speeds.
"We're bringing an in-memory transactional capability to SQL Server," said Ted Kummert, corporate vice president of the Business Platform Division, in his keynote presentation at the Professional Association for SQL Server (PASS) conference in Seattle. "And this thing is wicked fast."
He demoed the software, currently codenamed Hekaton, running applications in-memory without any coding changes at speeds between nine and 30 times current performance. Microsoft has put a lot of work into ensuring that applications can be converted to in-memory processing with a minimum of recoding, Kummert said.
Hekaton is going to be built into the next major release of SQL Server, he said, although no timeframe was given. In the first half of next year, however, Microsoft's Parallel Data Warehouse (PDW) will also get a refresh, with a new data processing engine dubbed PolyBase that can query both relational data and non-relational data stored in Microsoft's version of Hadoop.
"This thing's built for Big Data," Kummert said. "From the storage architecture, dramatically lower cost per terabyte, performance, and now we're introducing PolyBase to unify the relational and non-relational worlds at the query level."
As for the here and now, Wednesday sees the release of the first service pack for SQL Server 2012, which brings tighter integration of SQL Server into Office applications. In particular, Excel gets a makeover that folds the Power View and PowerPivot controls more closely into the spreadsheet, increasing its usefulness for business intelligence apps.
"Excel is now the complete end-user BI tool," Kummert promised. ®