Two-faced about flash? Not us, say Google Janus researchers

Boffins discover storage cache workload-splitting technique

A research project has found an algorithm for getting the best use out of cloud-scale data centre server flash caches and identifying workloads best placed completely in flash.

An audience at yesterday's USENIX conference heard a presentation on the work, titled "Janus: Optimal Flash Provisioning for Cloud Storage Workloads".

In Roman mythology, Janus was a god who faced two ways, looking both to the future and the past. In the Googleplex the two directions are towards flash and disk.

The presentation paper abstract says "Janus is a system for partitioning the flash storage tier between workloads in a cloud-scale distributed file system with two tiers, flash storage and disk." You can download the paper here (pdf).

Roman God Janus

It's a 12-page document discussing how Google researchers characterised the read profile of some of its file-accessing workloads, in terms of the age of the data being read, and devised a cacheability score for each workload. Based on this information, a limitation on flash write rates, and a priority rating for the workload, a precise amount of flash cache is provisioned to maximise the workload's total reads from flash.

Newly created files are written directly to flash and evicted using either a FIFO (First-In-First-Out) or LRU (Least-Recently-Used) policy.
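
A minimal Python sketch of what such a FIFO-evicted flash tier looks like in principle (the class, its capacity bookkeeping and file IDs are our own illustrative assumptions, not the Janus or Colossus implementation):

    from collections import OrderedDict

    class FlashCacheFIFO:
        """Toy flash tier: new files go straight to flash; once capacity is
        exceeded, the oldest-inserted file is evicted first (FIFO).
        Illustrative sketch only."""

        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.files = OrderedDict()   # file_id -> size, in insertion order

        def write(self, file_id, size):
            # Newly created files are written directly to flash.
            if file_id in self.files:              # overwrite: drop the old copy
                self.used -= self.files.pop(file_id)
            self.files[file_id] = size
            self.used += size
            while self.used > self.capacity and self.files:
                _, evicted = self.files.popitem(last=False)   # oldest goes first
                self.used -= evicted

        def read_hit(self, file_id):
            # True if the read would be served from flash rather than disk.
            return file_id in self.files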

This approach is in contrast to having a unified flash tier shared by all workloads, and it improves the flash hit rate by between 47 and 76 per cent over that unified approach.

How did Google’s boffins do it?

First of all, they scanned the file system data and sampled traces of I/O activity, collecting information about the age of the bytes stored and the age of data accessed for different workloads.

Janus trace chart showing the cumulative distribution function of the bytes stored, and of read operations, sorted by the (FIFO) age of the data for a particular workload.

In the example chart above, "50 per cent of the data stored by this particular user is less than 1 week old, but that corresponds to over 90 per cent of the read activity."

They then calculated a cacheability function that "tells us the rate of read hits we are likely to get for a workload if we allocate it a given amount of flash."
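
In rough terms, with FIFO eviction a flash allocation of X bytes holds the newest X bytes of a workload's data, so the expected hit rate is the fraction of reads touching data younger than the age at which the stored-bytes curve reaches X. Here is a back-of-the-envelope Python reconstruction of that idea from the paper's description; the function name and its trace-sample inputs are our assumptions, not the authors' code:

    def cacheability(flash_bytes, stored_ages, stored_sizes, read_ages):
        """Estimated fraction of reads served from flash if this workload
        gets `flash_bytes` of FIFO-managed flash.

        stored_ages/stored_sizes: age and size of sampled stored data
        read_ages: age of the data touched by each sampled read
        (Illustrative reconstruction, not the paper's implementation.)
        """
        # Walk the stored data youngest-first to find the age cut-off
        # that just fits inside the flash allocation.
        cutoff_age = float("inf")   # assume everything fits until shown otherwise
        total = 0
        for age, size in sorted(zip(stored_ages, stored_sizes)):
            total += size
            if total >= flash_bytes:
                cutoff_age = age
                break

        # Reads to data younger than the cut-off would be flash hits.
        if not read_ages:
            return 0.0
        hits = sum(1 for a in read_ages if a <= cutoff_age)
        return hits / len(read_ages)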

They find a break-even point, "which is the IOPS/GiB threshold determining whether a workload would be cheaper on flash storage or on disk." This threshold can be derived from the IOPS/$ of disk and the GiB/$ of flash. Using Seagate Savvio 10K.3 disk drives, they calculate a break-even point of 1.5 IOPS/GiB. Workloads with values above this are better served from flash, and ones with values below it are better served from disk.
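
A rough worked example of that arithmetic in Python; the dollar figures are placeholders chosen only so the result lands near the paper's 1.5 IOPS/GiB, not actual prices from the paper:

    def break_even_iops_per_gib(disk_iops_per_dollar, flash_gib_per_dollar):
        """IOPS/GiB threshold above which a workload is cheaper on flash.

        The reasoning (our reconstruction): disk is effectively bought for
        IOPS, flash for capacity. A workload doing I IOPS over G GiB costs
        roughly I / disk_iops_per_dollar on disk and G / flash_gib_per_dollar
        on flash; setting the two costs equal gives the threshold on I/G.
        """
        return disk_iops_per_dollar / flash_gib_per_dollar

    # Hypothetical prices -- placeholders, not the paper's figures.
    threshold = break_even_iops_per_gib(disk_iops_per_dollar=0.6,
                                        flash_gib_per_dollar=0.4)
    print(f"break-even: {threshold:.2f} IOPS/GiB")   # 1.50 with these numbers

    # A workload above the threshold is better served from flash.
    def better_on_flash(workload_iops, workload_gib, threshold):
        return workload_iops / workload_gib > threshold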

The Janus project then calculates how much flash cache workloads served from disk should optimally have, while reducing the flash write rate. The text talks about "piecewise linearity" and "concavity assumptions" and how to "relax the concavity assumption on the cacheability function," and also how to "relax the constraint on the write rate via Lagrangian relaxation."
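
Stripped of the maths: if each workload's cacheability curve is concave (each extra gigabyte of flash buys a little less benefit than the last), then handing out flash a sliver at a time to whichever workload currently gains the most extra flash-served reads per GiB approximates the optimum. A toy Python sketch along those lines, which ignores the paper's write-rate constraint and priority weighting:

    import math

    def allocate_flash(total_flash_gib, workloads, step_gib=1.0):
        """Greedy flash partitioning across workloads.

        `workloads` maps a name to its cacheability function: flash-served
        reads/sec as a function of the flash GiB allocated. With concave
        curves, repeatedly giving a small step of flash to the workload
        with the biggest marginal gain approximates the optimal split.
        (Sketch only: no write-rate limit, no priorities.)
        """
        alloc = {name: 0.0 for name in workloads}
        remaining = total_flash_gib
        while remaining >= step_gib:
            best_name, best_gain = None, 0.0
            for name, fn in workloads.items():
                gain = fn(alloc[name] + step_gib) - fn(alloc[name])
                if gain > best_gain:
                    best_name, best_gain = name, gain
            if best_name is None:      # no workload benefits any further
                break
            alloc[best_name] += step_gib
            remaining -= step_gib
        return alloc

    # Hypothetical concave curves for two made-up workloads.
    workloads = {
        "logs":   lambda g: 100 * math.log1p(g),
        "photos": lambda g: 40 * math.log1p(g),
    }
    print(allocate_flash(256, workloads, step_gib=8.0))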

It also has equations with lots of thetas and sigmas and this hack has had to take lots of paracetamol to reduce the impact of his headache function.

The boffins tested Janus against several datasets using Google's Colossus distributed filesystem, and write:

This system has been in use at Google for 6 months. It allows users to make informed flash provisioning decisions by providing them a customised dashboard showing how many reads would be served from flash for a given flash allocation. Another view helps system administrators make allocation decisions based on a fixed amount of flash available in order to maximise the reads offloaded from disk.

They say "flash hit rates using the optimised recommendations are 47-76 per cent higher than the option of using the flash as an unpartitioned tier."

We should note that Google operates at massive cloud scale, with tens of thousands of servers; optimising flash cache usage across an estate like that can bring significant cost efficiencies.

Also, Google's workloads are unique to Google. What this paper says, though, is that it is worthwhile analysing flash read hit rates for workloads in cloud-scale data centres so as to buy the optimal amount of flash and use it with partitions, as it were, sized for different workloads.

Read the paper to check out the concepts and maths involved more closely. But have some paracetamol to hand just in case. ®
