Resistive Ram cache to make Flash fly, say boffins
Wham, bam, thank you, RRam
SSDs appeal to ordinary computer users because of their speed and silence. Data centre folk appreciate those qualities too, but also like the SSD's very low power consumption. Energy is no small cost for a data centre, where there can be tens of thousands of drives all slurping electricity at once.
But it's not a free ride. SSDs aren't cheap, and they have longevity issues: write to a Flash cell too many times and it wears out, leaving it readable but no longer writable. And because data must be written in whole pages - after first erasing an even larger block - SSDs are poorly suited to applications that unceasingly write and re-write data.
Solving these problems is something of a Holy Grail in the storage business. Many options exist - or have, at least, been proposed - but all are many years away from wholesale replacement of the NAND Flash chips used in today's SSDs.
One Japanese group, from Chuo University, Tokyo, has suggested a half-way house: using one of the new kinds of non-volatile memory, Resistive Ram (RRam), as a buffer between the outside world and the host drive's NAND chips. This "hybrid" SSD uses Flash for bulk storage, and RRam for speed.
RRam exploits a property of certain dielectric materials, which normally exhibit a high resistance: apply a sufficiently high - but not impractically high - voltage and they spontaneously form physical, persistent conductive paths, called "filaments", down which current can flow.
The pathways reduce the material's resistance, but it's possible to break them, returning the dielectric to its high resistance state. Re-apply the voltage, new paths form, and the material re-enters low-resistance mode.
It's this ability to maintain either of two states - high and low resistance - that allows the dielectric to store binary information. Hook it up to a controlling transistor and you have a usable non-volatile memory cell.
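The two-state cell described above can be sketched in a few lines of Python. This is purely illustrative - the class, resistance values, and method names are assumptions for the sake of the example, not details of any real RRam part:

```python
# Illustrative model of an RRam cell: two resistance states store one bit.
# "Set" forms a filament (low resistance = 1); "reset" ruptures it
# (high resistance = 0). Values are arbitrary placeholders.

class RRamCell:
    HIGH_R = 1_000_000  # ohms: no filament, dielectric intact (stores 0)
    LOW_R = 1_000       # ohms: conductive filament formed (stores 1)

    def __init__(self):
        self.resistance = self.HIGH_R  # pristine cell, high resistance

    def set_bit(self):
        """Apply the forming voltage: a filament grows, resistance drops."""
        self.resistance = self.LOW_R

    def reset_bit(self):
        """Break the filament: the dielectric returns to high resistance."""
        self.resistance = self.HIGH_R

    def read(self):
        """Sense the resistance; low reads as 1, high as 0."""
        return 1 if self.resistance == self.LOW_R else 0

cell = RRamCell()
cell.set_bit()
print(cell.read())   # 1
cell.reset_bit()
print(cell.read())   # 0
```

Because reading only senses resistance rather than disturbing the filament, the stored bit survives power loss - which is what makes the cell non-volatile.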
And it's one that can switch much more quickly than its Flash equivalents can: 10ns compared to 100,000ns. The upshot: its write speed is much, much higher - closer, in fact, to volatile memory like Ram.
Coupling it with Flash puts the RRam in the role of a cache. Controller algorithms store frequently read data in the RRam, from where the information can be gathered more quickly.
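A controller of that sort can be sketched as a small read cache in front of slower bulk storage. The capacity, the LRU eviction policy, and all the names below are assumptions for illustration - the Chuo University paper's actual algorithms may differ:

```python
# Illustrative sketch of a hybrid SSD controller: hot reads are served
# from a small, fast RRam buffer; misses fall back to NAND and are
# promoted into the buffer. Eviction policy here is LRU (an assumption).

from collections import OrderedDict

class HybridSsd:
    def __init__(self, rram_capacity=4):
        self.nand = {}              # bulk storage: large but slow
        self.rram = OrderedDict()   # small, fast cache in LRU order
        self.capacity = rram_capacity

    def write(self, block, data):
        self.nand[block] = data
        self.rram.pop(block, None)  # invalidate any stale cached copy

    def read(self, block):
        if block in self.rram:      # fast path: RRam, ~10ns-class access
            self.rram.move_to_end(block)
            return self.rram[block]
        data = self.nand[block]     # slow path: NAND read
        self.rram[block] = data     # promote frequently read data
        if len(self.rram) > self.capacity:
            self.rram.popitem(last=False)  # evict least recently used
        return data
```

The win comes from locality: if a small working set absorbs most reads, most requests never touch the slow NAND at all.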
Re: How much does RRAM cost?
If you read the article thoroughly, you'll note that RRAM, like most post-Flash tech, isn't yet at the same economies of scale that NAND Flash possesses. There just isn't enough RRAM to go around to use it in quantity. Furthermore, the chips in use today don't hold a whole lot, especially compared to a same-sized chip of NAND. It is correctly stated in the article: a technological bridge, a means of bringing a nascent tech into the mainstream to take advantage of its benefits, even in small amounts, while economies of scale continue to build.
Re: How much does RRAM cost?
I'm guessing that RRAM yields are low and cost is high.
Flash is (relatively) cheap.
Eventually, as you say, if yields rise and costs fall far enough, and there prove to be no fundamental problems with the technology, then flash is history.
There are solutions...
There's an obvious fix for optimising the write cycles on SSDs in enterprise arrays, and that is to make use of the massive amounts of NV Ram (battery backed up) and use a NetApp-style "write anywhere" file system such as WAFL, or Sun's ZFS. (Files can be used to emulate block mode systems.) That way, there's no necessity to scrub and write a whole page of SSD just because a few KB need updating. As SSDs don't suffer from seek times, fragmentation is not a performance issue (speak it softly, but a NetApp with high space utilisation can suffer rather badly from that - it won't happen with SSDs).
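The "write anywhere" idea above can be sketched as a remapping layer: instead of erasing and rewriting the page that holds a logical block, each update lands on a fresh, pre-erased page and a mapping table is repointed. All names and structures here are hypothetical, for illustration only:

```python
# Illustrative "write anywhere" remapping. Updates never rewrite a page
# in place; they go to a clean page and the logical-to-physical map is
# repointed. Real FTLs/file systems reclaim and erase old pages in bulk.

class WriteAnywhereFlash:
    def __init__(self, num_pages=8):
        self.pages = [None] * num_pages     # physical pages
        self.free = list(range(num_pages))  # pool of pre-erased pages
        self.map = {}                       # logical block -> physical page

    def write(self, logical_block, data):
        page = self.free.pop(0)             # take a clean page: no erase now
        self.pages[page] = data
        old = self.map.get(logical_block)
        self.map[logical_block] = page      # repoint the mapping
        if old is not None:
            self.pages[old] = None          # old copy becomes garbage,
            self.free.append(old)           # reclaimed (erased) lazily

    def read(self, logical_block):
        return self.pages[self.map[logical_block]]
```

The erase cost hasn't vanished - it has been deferred and batched, which is exactly what spares the drive a scrub-and-rewrite on every small update.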
Of course, there's still a write-cycle limit, but since it's possible to hot-swap HDDs anyway, there's surely no fundamental barrier to hot-swapping write-exhausted SSDs. All that's required is a financial model that includes write activity in the cost of ownership, rather than just capacity charges based on GB.
Enterprise arrays already use these very large NV RAMs (along with pointers etc.) to optimise write-back to HDDs, decoupling the host's write operation from the back-end activity. That's why such arrays often offer sub-millisecond random-write times against 7-10ms random reads: the writes are buffered. Only when the number of back-end I/Os saturates the back-end I/O capacity and swamps the NV RAM do random-write times suffer badly (although people might be amazed how many enterprise arrays hit internal processing and data-path limits before the back-end disks - not even SSDs - are saturated).
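That decoupling can be sketched with a buffer between the host and the back end: the array acknowledges a write as soon as it is safely in battery-backed RAM, and destages to disk in the background. Everything below - names, the 7ms sleep standing in for back-end I/O - is a hypothetical illustration:

```python
# Illustrative write-back buffering through NV RAM. The host gets a fast
# acknowledgement once data is in the (battery-backed) buffer; a worker
# destages it to the slow back end later. Timings are made up.

import queue
import threading
import time

nvram = queue.Queue()            # stands in for battery-backed NV RAM
backend = {}                     # stands in for the back-end drives

def host_write(block, data):
    nvram.put((block, data))     # buffered: sub-millisecond from host's view
    return "ack"                 # returned without waiting for the disk

def destage_worker():
    while True:
        block, data = nvram.get()
        time.sleep(0.007)        # pretend back-end I/O takes ~7ms
        backend[block] = data
        nvram.task_done()

threading.Thread(target=destage_worker, daemon=True).start()
assert host_write(42, b"payload") == "ack"  # immediate acknowledgement
nvram.join()                                # later: destage completes
assert backend[42] == b"payload"
```

The failure mode described above falls out of the model too: if writes arrive faster than the worker can destage, the buffer fills and the host must eventually wait on back-end speed.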