Two-faced about flash? Not us, say Google Janus researchers

Boffins discover storage cache workload-splitting technique

A research project has found an algorithm for getting the best use out of cloud-scale data centre server flash caches and identifying workloads best placed completely in flash.

Yesterday's USENIX conference audience saw a presentation on the work, titled "Janus: Optimal Flash Provisioning for Cloud Storage Workloads".

In Roman mythology, Janus was a god who faced two ways, looking both to the future and the past. In the Googleplex the two directions are towards flash and disk.

The presentation paper abstract says "Janus is a system for partitioning the flash storage tier between workloads in a cloud-scale distributed file system with two tiers, flash storage and disk." You can download the paper here (pdf).

Roman God Janus

It's a 12-page document discussing how Google researchers characterised the read profile of some of its file-accessing workloads, in terms of the age of the data being read, and devised a cacheability score for each workload. Based on this information, a limitation on flash write rates, and a priority rating for the workload, a precise amount of flash cache is provisioned to maximise the workload's total reads from flash.

Newly created files are written directly to flash and evicted using either a FIFO (First-In-First-Out) or LRU (Least-Recently-Used) policy.
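As a rough illustration of that write-to-flash-then-evict scheme (our sketch, not Google's code; the class and file names are invented), a FIFO flash tier fits in a few lines of Python:

```python
from collections import OrderedDict

class FIFOFlashCache:
    """Minimal FIFO flash-tier sketch: new files go straight to flash,
    and the oldest insertion is evicted when capacity runs out."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = OrderedDict()  # insertion order doubles as eviction order

    def write(self, name, size):
        # Newly created files are written directly to flash; make room first
        while self.used + size > self.capacity and self.files:
            _, evicted_size = self.files.popitem(last=False)  # evict oldest
            self.used -= evicted_size
        self.files[name] = size
        self.used += size

    def read(self, name):
        # A read is a flash hit only if the file has not yet been evicted
        return name in self.files
```

An LRU variant would differ only in promoting a file to the back of the queue on each read hit.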

This approach is in contrast to having a unified flash tier shared by all workloads, and it improves the flash hit rate by between 47 and 76 per cent over the unified flash tier approach.

How did Google’s boffins do it?

First of all, they scanned the file system data and sampled traces of I/O activity, collecting information about the age of the bytes stored and the age of data accessed for different workloads.

Janus trace chart showing the cumulative distribution function of the bytes stored, and of read operations, sorted by the (FIFO) age of the data for a particular workload.

In the example chart above, "50 per cent of the data stored by this particular user is less than 1 week old, but that corresponds to over 90 per cent of the read activity."

They then calculated a cacheability function that "tells us the rate of read hits we are likely to get for a workload if we allocate it a given amount of flash."
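To make the idea concrete, here is a hypothetical sketch of such a function (ours, not the paper's): under FIFO eviction and a steady write rate, an allocation of x GiB holds roughly the data written in the last x/write-rate days, so the expected hit rate is the read rate times the fraction of reads touching data that young:

```python
import bisect

def cacheability(read_ages_days, write_rate_gib_per_day, read_rate_iops):
    """Hypothetical cacheability sketch. read_ages_days is the age, in days,
    of the data touched by each sampled read; the returned function maps a
    flash allocation in GiB to an expected flash read-hit rate."""
    ages = sorted(read_ages_days)

    def hits_per_sec(x_gib):
        # Under FIFO, x GiB of flash covers this many days of fresh writes
        horizon_days = x_gib / write_rate_gib_per_day
        # Fraction of sampled reads that land on data young enough to be cached
        frac = bisect.bisect_right(ages, horizon_days) / len(ages)
        return read_rate_iops * frac

    return hits_per_sec
```

The curve this produces mirrors the trace chart above: it climbs steeply while young, hot data is being covered and flattens out beyond that.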

They find a break-even point, "which is the IOPS/GiB threshold determining whether a workload would be cheaper on flash storage or on disk." This threshold can be derived from the IOPS/$ of disk and the GiB/$ of flash. Using Seagate Savvio 10K.3 disk drives, they calculate a break-even point of 1.5 IOPS/GiB. Workloads with values above this are better served from flash, and ones with values below it are better served from disk.
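The arithmetic behind that threshold can be sketched as follows; the cost figures in the test case are made-up illustrations, not the paper's numbers. Serving r IOPS from disk costs roughly r divided by the disk's IOPS-per-dollar, while storing s GiB on flash costs s divided by flash's GiB-per-dollar, so flash wins once r/s clears their ratio:

```python
def break_even_iops_per_gib(disk_iops_per_dollar, flash_gib_per_dollar):
    """Threshold above which a workload is cheaper to serve from flash.
    Disk cost scales with IOPS served; flash cost scales with GiB stored."""
    return disk_iops_per_dollar / flash_gib_per_dollar

def better_tier(workload_iops, workload_gib, threshold_iops_per_gib):
    """Pick the cheaper tier for a workload's access density (IOPS/GiB)."""
    density = workload_iops / workload_gib
    return "flash" if density > threshold_iops_per_gib else "disk"
```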

The Janus project then calculates how much flash cache workloads served from disk should optimally have, while reducing the flash write rate. The text talks about "piecewise linearity" and "concavity assumptions" and how to "relax the concavity assumption on the cacheability function," and also how to "relax the constraint on the write rate via Lagrangian relaxation."
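The concavity assumption does the heavy lifting here: if each workload's cacheability curve has diminishing returns, a greedy scheme that keeps handing the next slice of flash to whichever workload promises the biggest marginal hit-rate gain lands on an optimal split. A hypothetical sketch of that idea (ours, and it ignores the write-rate constraint and the Lagrangian machinery):

```python
def allocate_flash(cacheability_fns, total_gib, step_gib=1.0):
    """Greedy flash partitioning sketch. cacheability_fns maps each workload
    to a function from flash GiB to expected read hits; under concave curves,
    repeatedly funding the best marginal gain is optimal."""
    alloc = [0.0] * len(cacheability_fns)
    remaining = total_gib
    while remaining >= step_gib:
        # Marginal hit-rate gain of one more slice for each workload
        gains = [f(a + step_gib) - f(a) for f, a in zip(cacheability_fns, alloc)]
        best = max(range(len(gains)), key=gains.__getitem__)
        if gains[best] <= 0:
            break  # no workload benefits from more flash
        alloc[best] += step_gib
        remaining -= step_gib
    return alloc
```

With piecewise-linear curves the same search can be done per segment rather than per slice, which is roughly what the paper's linear-programming treatment buys.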

It also has equations with lots of thetas and sigmas and this hack has had to take lots of paracetamol to reduce the impact of his headache function.

The boffins tested Janus against several datasets using Google's Colossus distributed filesystem, and write:

This system has been in use at Google for 6 months. It allows users to make informed flash provisioning decisions by providing them a customised dashboard showing how many reads would be served from flash for a given flash allocation. Another view helps system administrators make allocation decisions based on a fixed amount of flash available in order to maximise the reads offloaded from disk.

They say "flash hit rates using the optimised recommendations are 47-76 per cent higher than the option of using the flash as an unpartitioned tier."

We should note that Google operates at massive cloud scale, with tens of thousands of servers. Optimising flash cache usage across that many machines can bring significant cost efficiencies.

Also, Google's workloads are unique to Google. What this paper suggests, though, is that it is worth analysing flash read hit rates for workloads in cloud-scale data centres, so as to buy the optimal amount of flash and carve it into partitions sized for different workloads.

Read the paper to check out the concepts and maths involved more closely. But have some paracetamol to hand just in case. ®
