Cache in hand
In the other direction, the RRam cache is large enough for data to be held there until there's enough of it to be written out to Flash efficiently.
So quite apart from letting the drive appear to operate at RRam rather than Flash speeds, this has the added benefit of organising the data to minimise - and possibly eliminate - the small, random writes, and the data fragmentation they engender, which are the bugbear of Flash performance and reduce the chips' longevity.
The design of Flash chips makes it necessary to write a whole chunk of data - called a Page - of up to 4KB in size even if only a few bytes need to be changed. This is because Flash Pages need to be erased before they can be written to. Worse, Flash has to be erased a Block at a time, and a Block is a much larger amount of memory than a Page, 16KB or bigger. Each erase and write operation gobbles up power.
Caching allows that entire 4KB write to comprise new data. So while 4KB still needs to be written, it does so once rather than every time one part of the page changes.
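The coalescing idea can be sketched in a few lines of Python. This is an illustration of the general technique, not the Chuo controller's actual logic; the 4KB page size and the `WriteCache` class are assumptions for the example. Small writes accumulate in the cache, and each dirty page is programmed to Flash only once when flushed.

```python
# Illustrative sketch of write coalescing (not the Chuo design itself):
# small updates are buffered per page, so each Flash page is programmed
# once, rather than once per update.
PAGE_SIZE = 4096  # bytes; 4KB page size assumed, as in the article

class WriteCache:
    def __init__(self):
        self.pages = {}        # page number -> bytearray of pending data
        self.flash_writes = 0  # count of whole-page programs to Flash

    def write(self, addr, data):
        """Fast path: buffer a small write in the cache."""
        page, offset = divmod(addr, PAGE_SIZE)
        buf = self.pages.setdefault(page, bytearray(PAGE_SIZE))
        buf[offset:offset + len(data)] = data

    def flush(self):
        """Each dirty page goes to Flash exactly once."""
        self.flash_writes += len(self.pages)
        self.pages.clear()

cache = WriteCache()
for i in range(100):              # 100 small updates, all in the same page
    cache.write(i * 8, b"\xff" * 8)
cache.flush()
# Without the cache: up to 100 page programs. With it: one.
```

A real controller would also track which pages are dirty versus clean and flush under memory pressure, but the saving comes from the same principle: one full-page write replaces many partial ones.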
The Chuo University team designed such a hybrid SSD using 256GB of Flash and 1GB of RRam. The team didn't build such a device, but simulations yielded an 11x increase in write performance - 4.2MB/s for the Flash alone, rising to 46MB/s with the RRam cache in place - and a 79 per cent reduction in the energy consumed for all these writes: 0.12J/MB down to 0.024J/MB.
The team reckons that with smarter, 3D chip construction - connecting the controller, RRam and Flash chips with lines that run through each part, called Through Silicon Vias - the energy saving is even greater, with consumption falling to 0.008J/MB.
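A quick back-of-the-envelope check of those simulation figures, using only the numbers quoted above (the quoted 79 per cent presumably reflects the team's unrounded data, as the rounded figures give 80 per cent):

```python
# Sanity-checking the quoted figures from the Chuo simulations.
flash_only_mbps = 4.2    # MB/s, Flash alone
cached_mbps = 46.0       # MB/s, with the RRam cache
speedup = cached_mbps / flash_only_mbps     # roughly 11x

flash_only_j = 0.12      # J/MB, Flash alone
cached_j = 0.024         # J/MB, with the RRam cache
saving = 1 - cached_j / flash_only_j        # roughly 80 per cent

tsv_j = 0.008            # J/MB, with Through Silicon Vias
tsv_saving = 1 - tsv_j / flash_only_j       # over 90 per cent

print(f"speedup: {speedup:.1f}x")
print(f"cache saving: {saving:.0%}, TSV saving: {tsv_saving:.0%}")
```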
Reduced writing means increased longevity: the Flash chips in the Hybrid SSD would last, the team reckons, more than seven times as long as those in a Flash-only SSD. Hybrid drives will cost more, of course, but as long as the extra is less than the price of seven SSDs, that's good news for data centres bulk-buying solid-state drives.
A more logical approach would be to migrate to RRam SSDs, but these are some way off. RRam chips aren't expected to go into mass production until 2014. It will be some years more before they reach today's NAND Flash chip prices.
And the Chuo team's approach applies equally to SSDs with a cache of regular Ram. The downside here, of course, is that Ram loses its data when the power is cut, so the drive needs to be able to hold sufficient energy to write the contents of the Ram cache to Flash as soon as a power cut is detected.
While the Japanese team waits for RRam to become more readily available, companies like Sandforce and Indilinx are building better Flash controllers more able to work around NAND's limitations. But RRam's speed advantage will be hard to beat. ®
Re: How much does RRAM cost?
If you read the article thoroughly, you'll note that RRAM, like most post-Flash tech, isn't yet at the economies of scale that NAND Flash enjoys. There just isn't enough RRAM to go around to use it in quantity. Furthermore, the chips in use today don't hold a whole lot, especially compared to a same-sized chip of NAND. As the article correctly states, it's a technological bridge: a means of bringing a nascent tech into the mainstream to take advantage of its benefits, even in small amounts, while economies of scale continue to build.
Re: How much does RRAM cost?
I'm guessing that RRAM yields are low and cost is high.
Flash is (relatively) cheap.
Eventually, as you say, if the yields rise and the cost falls far enough, and there prove to be no fundamental problems with the technology, then flash is history.
There are solutions...
There's an obvious fix for optimising the write cycles on SSDs in enterprise arrays, and that is to make use of the massive amounts of NV Ram (battery-backed) and use "write anywhere" file systems such as NetApp's WAFL or Sun's ZFS. (Files can be used to emulate block-mode systems.) That way, there's no need to scrub and write a whole page of SSD just because a few KB need updating. As SSDs don't suffer from seek times, fragmentation is not a performance issue (speak it softly, but a NetApp with high space utilisation can suffer rather badly from that - it won't happen with SSDs).
Of course, there's still a write-cycle limit, but as it's possible to hot-swap HDDs anyway, then there's surely no fundamental barrier to hot-swapping write-exhausted SSDs. All that's required is a financial model for including write activity in the cost of ownership, and not just capacity charges based on GB.
Enterprise arrays already use these very large NV RAMs to optimise write-back to HDDs (as well as holding pointers etc.) by decoupling the write operation seen by the server from the back-end activity; thus you often find such arrays offering sub-millisecond random-write times against 7-10ms random reads, as the writes are buffered. Only if the number of back-end I/Os saturates the back-end I/O capacity and swamps the NV RAM do random-write times suffer badly (although people might be amazed how many enterprise arrays hit internal processing and data-path limits before the back-end disks - and not even SSDs - are saturated).
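The decoupling described above can be sketched as a toy model (an illustration of the general pattern, not any vendor's actual controller; the class and capacity figure are invented for the example): host writes are acknowledged the moment they land in the battery-backed buffer, while a separate path drains them to the back end, so the host only feels disk latency once the buffer saturates.

```python
from collections import deque

# Toy model of NV RAM write-back (hypothetical, for illustration):
# the host's ack is decoupled from the back-end write.
class NvRamBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pending = deque()

    def host_write(self, block):
        """Fast path: ack as soon as the data is in NV RAM."""
        if len(self.pending) >= self.capacity:
            return False   # buffer saturated: host now sees back-end latency
        self.pending.append(block)
        return True        # sub-millisecond ack, regardless of disk speed

    def drain_one(self):
        """Slow path: background task writes one buffered block to disk."""
        if self.pending:
            return self.pending.popleft()
        return None

buf = NvRamBuffer(capacity=2)
acks = [buf.host_write(b) for b in ("a", "b", "c")]
# The first two writes ack immediately; the third finds the buffer full.
```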