
'Wear levelling' - a bedroom aid for multi-level cell Flash

Helps it last longer


Wear-levelling

Wear-levelling algorithms are used to reduce the likelihood of particular blocks being worn out - reaching their maximum number of writes and falling out of service - which would reduce both the capacity of the SSD and its ability to free up blocks for fresh writes.

With dynamic wear-levelling, written data is put in blocks taken from the free block pool. Garbage collection patrols the existing blocks and hands ones containing deleted (invalid) data to background erase processing, after which they are added back to the free block pool.
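To make the mechanics concrete, here is a minimal sketch in Python of how dynamic wear-levelling and garbage collection could interact; the block structure, pool layout and method names are illustrative assumptions, not any vendor's actual firmware logic.

import heapq

class Block:
    def __init__(self, block_id):
        self.id = block_id
        self.erase_count = 0   # program/erase cycles this block has seen
        self.stale = False     # True when its data has been deleted (invalidated)

class DynamicWearLeveller:
    def __init__(self, blocks):
        # Free pool kept as a min-heap on erase count, so the least-worn
        # free block is handed out for each fresh write.
        self.free_pool = [(b.erase_count, b.id, b) for b in blocks]
        heapq.heapify(self.free_pool)
        self.in_use = []

    def allocate_for_write(self):
        _, _, block = heapq.heappop(self.free_pool)
        self.in_use.append(block)
        return block

    def garbage_collect(self):
        # Background pass: erase blocks holding invalid data and return
        # them to the free pool for reuse.
        for block in list(self.in_use):
            if block.stale:
                block.erase_count += 1   # the erase is what wears the cells
                block.stale = False
                self.in_use.remove(block)
                heapq.heappush(self.free_pool,
                               (block.erase_count, block.id, block))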

However, blocks containing static, unchanging data just sit there and don't get rewritten. We could imagine that over a period of, say, six months such blocks receive zero writes while other blocks receive - let's be dramatic - 200 writes, creating an imbalance. Static wear-levelling locates these static data blocks and moves their data to the more often written blocks, transferring the frequently rewritten data to the previously static blocks.

Then, over the next six months, the previously static blocks get 200 writes and the previously well-used blocks get none. At the end of the year both sets of blocks have had 200 writes - a levelled wear count across the two sets of blocks.
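Sketched in the same spirit, a static wear-levelling pass might look like the following; the wear-gap threshold and the copy_fn callback are assumptions made for illustration, and real controllers track all of this inside the flash translation layer.

WEAR_GAP_THRESHOLD = 100   # assumed trigger: act when wear differs by this many erases

def static_wear_level(blocks, copy_fn):
    # Sort by erase count: coldest blocks (static data, few erases) first.
    by_wear = sorted(blocks, key=lambda b: b.erase_count)
    coldest, hottest = by_wear[0], by_wear[-1]
    if hottest.erase_count - coldest.erase_count >= WEAR_GAP_THRESHOLD:
        # Park the static data on the well-worn block, then erase the
        # lightly-worn block so future writes land on it instead.
        copy_fn(src=coldest, dst=hottest)
        coldest.erase_count += 1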

All the foregoing applies to both SLC and MLC NAND. The problem with MLC NAND is that its endurance is lower than SLC NAND's. For example, Samsung has suggested SLC flash can support up to 100,000 writes, 2-bit MLC a tenth of that at 10,000 writes, and 3-bit MLC a tenth of that again at 1,000 writes. Extending the trend would have 4-bit MLC supporting just 100 writes - clearly a complete no-no for its deployment unless radical measures are taken.
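That divide-by-ten trend is simple enough to write down; the first three figures below are the Samsung estimates quoted above, and the 4-bit value is the same straight-line extrapolation rather than a measured number.

# Quoted endurance figures plus the extrapolated 4-bit value (writes per cell).
endurance = {"SLC (1-bit)": 100_000, "2-bit MLC": 10_000, "3-bit MLC": 1_000}
endurance["4-bit MLC (extrapolated)"] = endurance["3-bit MLC"] // 10   # 100 writes
for cell_type, cycles in endurance.items():
    print(f"{cell_type}: ~{cycles:,} program/erase cycles")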

SandForce says its controllers level the number of writes across flash blocks. The controllers have a recycler for garbage collection, and the company says its DuraWrite technology optimises the number of program cycles; this can, SandForce claims, extend the endurance of its flash by up to 20 times compared to other controllers.

Over-provisioning

The main point of basic wear-levelling is to ensure an even spread of writes across the blocks in the SSD. Above and beyond that, there are other techniques used to extend endurance.

One technique is to over-provision the flash - the opposite of the thin-provisioning idea seen in shared storage arrays. An SSD with a nominal capacity of 200GB may actually have 250GB, with the extra 50GB hidden from the host system and used solely at the discretion of the SSD controller. As flash blocks in the SSD wear out they are mapped out of use by the controller, and a new block is added to the general free block pool from the 50GB reserve.
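The remapping idea can be sketched as below, again with hypothetical structures; a real flash translation layer is far more involved, but the principle of promoting reserve blocks as others wear out is the same.

class OverProvisionedSSD:
    def __init__(self, visible_blocks, reserve_blocks):
        self.free_pool = list(visible_blocks)   # backs the advertised 200GB
        self.reserve = list(reserve_blocks)     # the hidden 50GB
        self.retired = []

    def retire_block(self, block):
        # Block has hit its erase limit: map it out of use and, if possible,
        # promote a reserve block into the general free pool in its place.
        if block in self.free_pool:
            self.free_pool.remove(block)
        self.retired.append(block)
        if self.reserve:
            self.free_pool.append(self.reserve.pop())
        # Once the reserve is exhausted, usable capacity shrinks with each failure.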

There is a limit to how long this will work because eventually the 50GB reserve is used up and the SSD then faces a slow death as blocks fail one after the other. If the SSD is targeted at a known application, such as a consumer media player, then its makers know most writes will be of long sequential files - music tracks or videos - and they can predict how long a given amount of flash will last if they assume an average number of bytes written per day.

With a combination of wear-levelling and over-provisioning they can produce flash for a consumer device that could last, say, five years with 500GB of data being written per day.
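A back-of-envelope version of that estimate, using the article's figures as inputs; the endurance value and a write-amplification factor of 1 are assumptions, and changing either scales the answer proportionally.

# Rough lifetime estimate under ideal wear-levelling (all inputs illustrative).
physical_capacity_gb = 250     # 200GB advertised plus 50GB over-provisioned
endurance_cycles = 10_000      # assumed per-block limit (the 2-bit MLC figure above)
daily_writes_gb = 500          # workload assumed in the article
write_amplification = 1.0      # idealised: one flash write per host write

total_writable_gb = physical_capacity_gb * endurance_cycles / write_amplification
lifetime_years = total_writable_gb / daily_writes_gb / 365
print(f"Estimated lifetime: {lifetime_years:.1f} years")   # ~13.7 years with these inputs

Lower per-cell endurance or a write-amplification factor above one brings that figure down quickly, which is why the combination of wear-levelling and over-provisioning matters.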

