Flash and the five-minute rule

NAND then there was disruption

Enter Flash

Enter the flash dragon. It is, generally speaking, less expensive than disk drives were in 1987, and that changes things. Flash is the disruptive technology that brings the RI discontinuity back into balance - that was the substance of Dale's SNIA pitch.

At random reads it has 100 times better IOPS per dollar and 1,000 times better IOPS per milliwatt than disk, and it is 10 times better than disk at bandwidth per milliwatt. It is also 10 times better than DRAM at MB per milliwatt, and it wins big over disk in the latency stakes, though DRAM is better still.

Notwithstanding that disks are really good at sequential writes - flash doesn't really buy you anything on writes - flash will show up as both a disk replacement and a DRAM replacement.

Dale provided five-minute rule RI numbers for flash against DRAM. A slide stated: "Assuming that the cost of cache is dominated by its capacity, and the cost of backing store is dominated by its access cost (cost per IOPS), then the break even interval for keeping a page of data in cache is given by dividing the backing store cost per IOPS by the cache cost per page."
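
As a rough sketch of the rule the slide describes (the function and variable names here are mine, not from Dale's slide), the calculation is just a ratio:

    # Break-even interval, in seconds, for keeping a page in cache:
    # backing-store cost per IOPS divided by the cost of the cache capacity one page occupies.
    def break_even_seconds(backing_cost_per_iops, cache_cost_per_byte, page_bytes):
        return backing_cost_per_iops / (cache_cost_per_byte * page_bytes)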

In 1987, using these metrics, disk cost $2,000/IOPS and RAM was $5/KB. A 1KB page's break-even point was 400 seconds.
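
Plugging the 1987 figures into that ratio reproduces the same answer (a sanity check of the arithmetic, not something from the slide):

    # 1987: disk at $2,000/IOPS, RAM at $5/KB, 1KB pages.
    ram_cost_per_byte = 5.0 / 1024          # dollars per byte
    page_cost = ram_cost_per_byte * 1024    # $5 for one 1KB page
    print(2000.0 / page_cost)               # 400.0 seconds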

In 2008, Dale said, disk was $1 per IOPS, a 2,000x reduction, and DRAM was $50/GB, a 100,000x reduction, meaning it was $0.05/MB. The 50KB page break-even was five minutes, the 4KB one was one hour and the 1KB one was five hours. There needed to be a 50-fold increase in page size to cache for break-even at five minutes.
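
The same arithmetic with the 2008 figures lands in the same ballpark as those intervals; the quoted values are rounded, and the exact results depend on whether a GB is taken as 10^9 or 2^30 bytes:

    # 2008: disk at $1/IOPS, DRAM at $50/GB (about $0.05/MB).
    dram_per_kb = 50.0 / (1024 * 1024)      # dollars per KB of DRAM
    for page_kb in (1, 4, 50):
        print(page_kb, round(1.0 / (dram_per_kb * page_kb)))
    # -> about 21,000s, 5,200s and 420s: the same ballpark as
    #    five hours, one hour and five minutes respectively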

Looking at break-even for flash and hard disk drives (HDDs) in 2010, he said HDDs cost $1/IOPS, single-level cell (SLC) flash around $10/GB and multi-level cell (MLC) around $4/GB. A 250KB page break-even with SLC was five minutes, but it was five hours with a 4KB page size. With MLC flash it was five minutes at a 625KB page size and 13 hours at a 4KB page size.
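
The HDD-behind-flash numbers work the same way. This is a rough reconstruction from the round prices quoted above; the 13-hour MLC figure suggests Dale's exact inputs differed slightly:

    # 2010: HDD at $1/IOPS; SLC flash ~$10/GB and MLC ~$4/GB as the cache tier.
    for name, dollars_per_gb, page_kb in (("SLC", 10.0, 250), ("SLC", 10.0, 4),
                                          ("MLC", 4.0, 625), ("MLC", 4.0, 4)):
        cost_per_page = dollars_per_gb * page_kb / (1024 * 1024)
        print(name, page_kb, round(1.0 / cost_per_page))
    # -> SLC breaks even at ~420s with 250KB pages and ~26,000s with 4KB pages;
    #    MLC at ~420s with 625KB pages and ~66,000s with 4KB pages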

Again there needed to be a 50-fold increase in page size to cache for break-even at five minutes.

Looking at DRAM and flash, his numbers were $0.05/IOPS for 4KB enterprise SLC and $0.02/IOPS for 4KB enterprise MLC, with DRAM at $20/GB. A 6KB page size SLC break-even came out at five minutes, as did a 2KB page size MLC break-even.
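
For DRAM caching a flash backing store, the same ratio again comes out near the five-minute mark (rounded figures, same caveats as above):

    # DRAM in front of flash: enterprise SLC at ~$0.05 per 4KB IOP, MLC at ~$0.02; DRAM at $20/GB.
    dram_per_kb = 20.0 / (1024 * 1024)       # dollars per KB of DRAM
    print(round(0.05 / (dram_per_kb * 6)))   # SLC with 6KB pages -> ~440 seconds
    print(round(0.02 / (dram_per_kb * 2)))   # MLC with 2KB pages -> ~520 seconds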

What does this mean?

Flash makes it cost-effective to keep more small random data in a NAND cache than in DRAM - say a five-plus-hour working set in NAND and a one-hour working set in DRAM. The random-data working set held in DRAM can therefore be reduced.
