Inside the Hekaton: SQL Server 2014's database engine deconstructed

Nadella's database squares the circle of cheap memory vs speed

Stored procedures

Also of note in this release, Microsoft has quietly introduced natively compiled stored procedures. These compile down to DLLs containing ready-to-run machine code, working with in-memory tables to reduce CPU overhead on high-volume transactional systems. Although they do not currently support disk-based tables, cursors, multi-row inserts, CTEs or correlated subqueries, they are a useful optional performance tweak that can easily be introduced with a few minor syntax changes wherever in-memory tables are used.
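As a minimal sketch of those syntax changes (the table, procedure and column names here are ours, and a memory-optimised table dbo.Orders is assumed to exist already):

```sql
-- Natively compiled stored procedure (SQL Server 2014)
-- NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS and an ATOMIC
-- block are all required by the new syntax
CREATE PROCEDURE dbo.usp_InsertOrder
    @OrderID INT,
    @Amount  MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
     LANGUAGE = N'us_english')

    INSERT dbo.Orders (OrderID, Amount)
    VALUES (@OrderID, @Amount);
END;
```

On creation, the procedure is compiled to a DLL and loaded into the process; calling it thereafter executes machine code rather than interpreted plans.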

ColumnStore Indexes

ColumnStore indexes caused a sensation in SQL Server 2012, offering big performance gains and very impressive in-memory and on-disk data compression. Unfortunately, because you could not update them, they were limited mostly to data warehouse scenarios where indexes are regularly dropped and rebuilt, and were not adopted as widely as might be expected.

We were all waiting for something we could use on our OLTP databases.

SQL Server 2014 addresses this shortcoming with updateable columnstore indexes, removing the need to drop and rebuild the index each time data is updated. In addition, the COLUMNSTORE_ARCHIVE option offers further compression for rarely accessed, archive and legacy data, a feature particularly useful in data warehouses in conjunction with partitioning.
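By way of illustration (the table name and partition number are hypothetical), a clustered columnstore index can now simply be created on a table and left in place through updates, with archive compression applied to cold partitions:

```sql
-- Updateable clustered columnstore index (SQL Server 2014)
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;

-- Apply deeper COLUMNSTORE_ARCHIVE compression to a
-- rarely accessed partition of the same table
ALTER TABLE dbo.FactSales
    REBUILD PARTITION = 1
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```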

It's worth remembering that the prospect of updateable columnstore indexes caused much excitement when they were first discussed at SQL Server conferences immediately after the 2012 launch. Although overshadowed by Hekaton, this is a very significant performance-enhancing feature that should be understood by, and in the arsenal of, every DBA and developer.

BPE: Buffer Pool Extensions

Whilst the addition of buffer pool extensions may not be the most dazzling new feature in SQL Server 2014, we should not make light of its benefits. Unlike Hekaton and ColumnStore indexes, this new feature can be used on existing OLTP systems independently, transparently, without any conversion and with minimal effort.

Broadly speaking, when a query requests data from a database, SQL Server will read the data pages from disk into the buffer pool, so that the next time they’re requested they can be read from memory rather than disk. Updated pages are modified in the buffer pool before being written back to disk.

In a typical OLTP system, these intensive, random I/O operations are small reads and writes, which contend for the use of a mechanical disk arm and the rotation of a spinning disk in order to meet the request. Under load, I/O latency increases and transactional throughput decreases. Database server performance drops. Specifically, queries that return data from disk are slower than those that return data already cached in memory – the buffer pool.

The traditional solution for reducing these bottlenecks is to add DRAM to the server’s motherboard to create a larger buffer pool. The larger the buffer pool, the less disk access, which improves performance. This, incidentally, is why SQL Server grabs as much memory as it can at start-up.

But main memory is limited, and expanding it involves unacceptable service outage time on many systems. Microsoft have recognised this problem, and with the rapidly falling price, increased capacity and superior speed of SSD storage over disk, have introduced the SSD as a dedicated, intermediate layer between DRAM and disk. This creates a two-level, tiered storage hierarchy for the buffer pool: level 1 (L1) is the primary buffer pool, stored in DRAM, and level 2 (L2) is the buffer pool extension, held on the SSD. To maintain data consistency and prevent data loss, only clean (unmodified) pages are written to the L2 SSD cache, with the buffer manager responsible for moving pages between the two.

All of this is transparent and can be configured by the DBA in-flight with a single, simple T-SQL command, extending the buffer pool up to 32 times the size of RAM. The only downsides? As might be expected, removing the BPE causes memory pressure and subsequent performance degradation. We also found that when a SQL Server 2014 instance is shut down, the BPE is automatically removed at restart and has to be manually reconfigured. This is something we’d like Microsoft to reconsider for a future minor release.
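That single command looks like this (the file path and size below are illustrative; the SIZE must be at least the size of max server memory):

```sql
-- Enable a buffer pool extension on a dedicated SSD volume
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'S:\BPE\SQLServerCache.BPE', SIZE = 64 GB);

-- Remove it again - expect memory pressure and a performance
-- dip while the L1 buffer pool warms back up
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;
```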

SQL Server 2014 floats closer to the cloud

You can now back up and restore databases and logs, both directly and through maintenance plans, to a URL pointing at Windows Azure storage. This can be performed at the database or instance level, and from on-site or SQL Azure servers. Addressing security concerns about offsite data, proper database backup encryption has been introduced at long last, offering AES and Triple DES and effectively protecting backups from anyone without the appropriate key or certificate. This feature is also available for normal, non-cloud backups.
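A sketch of an encrypted backup to Azure storage (the storage account, credential and certificate names are assumptions; the credential holding the storage account key must already exist, as must the server certificate):

```sql
-- Back up directly to Windows Azure blob storage, encrypted
BACKUP DATABASE Sales
TO URL = 'https://myaccount.blob.core.windows.net/backups/Sales.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     ENCRYPTION (ALGORITHM = AES_256,
                 SERVER CERTIFICATE = BackupCert),
     STATS = 10;
```

The same ENCRYPTION clause works with a conventional TO DISK backup.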

High availability is there, too. AlwaysOn replicas can be hosted on a Windows Azure machine. This is useful and makes sound financial sense for those global organisations that want to make their data catalogue available through their websites in local markets around the world, or want to test their presence in a new market without rolling out a new data centre.

A mix of up to eight (previously four) secondary replicas can be created, with secondary replicas now remaining readable even in the event of a cluster loss, a significant high availability advance. The AlwaysOn management toolset has also been improved with a suite of new stored procedures and DMVs to further reduce troubleshooting time for production DBAs – critical in a DR situation.

Finally, for operations such as large data loads that would be re-started in the event of a failure, delayed durability increases log write performance, and can be set at the database or transaction level.
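Setting it at both levels is straightforward (the database name is illustrative):

```sql
-- Permit delayed durability at the database level...
ALTER DATABASE Staging SET DELAYED_DURABILITY = ALLOWED;

-- ...then opt in per transaction at commit time, trading a small
-- window of durability for faster log writes
BEGIN TRANSACTION;
    -- bulk data load work here
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```

Setting the database option to FORCED instead applies delayed durability to every transaction without code changes.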


SQL Server 2014 dispenses with the fanfare of recent releases. Instead, it quietly but solidly positions the product for the future by exploiting the ongoing sea changes and cost reductions in storage and CPU technologies, and places it as the mainstream product others will have to emulate and follow to catch up. We think it’s the release with which SQL Server has finally come of age. Our opinion? Mission: Accomplished. ®

Jon is an independent SQL Server consultant, having worked as a data warehouse developer, DBA, team lead, trainer, author and manager for 18 years. He also runs the Bristol SQL Server user group, holds a BSc in Computer Science and an MSc in Business Intelligence from the University of Dundee. Rachel has worked as a software and database developer for 17 years across a wide variety of private industries. She is MCP Certified, runs the Bristol SQL Server user group and is involved in the technical community.

