AMD readies HyperFlash cache tech
To be built into SB700 Southbridge
AMD's 'Puma' laptop platform, due to debut next year with the 'Griffin' processor and the M780 chipset, will incorporate the chip maker's alternative to Intel's TurboMemory Flash cache technology, as expected.
AMD's M780 chipset incorporates the company's own SB700 Southbridge chip. The SB700 will feature what AMD calls HyperFlash, a native link to a bank of Flash storage that can be used by Windows Vista and other operating systems to store frequently needed data to save the system from going to the hard drive for it.
The upshot: data loads more quickly and less power is consumed in doing so.
Intel announced its plan to support Flash caches last year and recently shipped its Turbo Memory module, a 512MB or 1GB PCI Express Mini Card, which is an optional part of the 'Santa Rosa' Centrino platform and future desktop chipsets.
HyperFlash feeds straight into the SB700 rather than going via the PCIe bus - an advantage, claimed AMD Fellow Maurice Steinman, because it keeps the PCIe lanes free for other add-ins, such as wireless cards.
However, he did admit that the link between the cache and the SB700 is proprietary. But he said the details of the connection have been provided to a number of Flash chip makers - including, we guess, Flash-making AMD spin-off Spansion - to ensure laptop vendors have a choice of suppliers.
He would not be drawn on whether AMD will in due course support the Intel, Hynix, Micron and Sony-backed Open NAND Flash Interface (ONFI) initiative, charged with eliminating proprietary links to Flash memory chips.
This cache is managed differently
To Oliver, the first commentator:
This is a non-volatile cache, so its contents survive a shutdown. The transfer rate from flash isn't really better than a hard drive's, but access latency is so much lower that, by putting the most-used bits of start-up libraries and favorite applications in flash, overall performance feels much snappier at the times when the waits are most annoying.
It's different from a traditional most-recently-used cache scheme (CPU cache and filesystem RAM cache) in that once you've shown a pattern of commonly used files, they'll live in flash with relatively little turnover.
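The distinction the commenter is drawing can be sketched in Python - a hypothetical model of the two policies, not AMD's or Vista's actual algorithm. A recency-based cache churns on every one-off read, while a frequency-pinning tier keeps the hot start-up files in place:

```python
from collections import Counter, OrderedDict

class LRUCache:
    """Classic most-recently-used scheme: contents churn with every access."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def access(self, key):
        if key in self.items:
            self.items.move_to_end(key)          # refresh recency
        else:
            if len(self.items) >= self.capacity:
                self.items.popitem(last=False)   # evict least recently used
            self.items[key] = True

class FrequencyPinCache:
    """Pin the most frequently used items; contents change only when
    long-term usage patterns shift, giving low turnover."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = Counter()

    def access(self, key):
        self.counts[key] += 1

    def pinned(self):
        return {k for k, _ in self.counts.most_common(self.capacity)}

# A workload dominated by a few start-up files, plus some one-off reads:
lru = LRUCache(3)
pin = FrequencyPinCache(3)
workload = (["boot.dll", "app.exe", "fonts.dat"] * 10
            + ["once_%d" % i for i in range(5)])
for f in workload:
    lru.access(f)
    pin.access(f)

print(sorted(lru.items))     # the one-off reads pushed the hot set out
print(sorted(pin.pinned()))  # the hot start-up files stay pinned
```

After this run the LRU cache holds only the last few one-off files, while the frequency-pinned tier still holds the three start-up files - the "relatively little turnover" behaviour described above.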
The article didn't state that the flash isn't replaceable or integrated into the chip, just the 'link', which I took to mean a bus controller of sorts. I suppose one could read "a bank of Flash storage" either way, but memory slots are often referred to as 'bank 0', 'bank 1' and so on, and nobody speaks of those as being fixed in place.
It's a very brave (aka foolish) idea...
to integrate a proprietary extension into a motherboard when the flash chip will die after a certain number of writes. Performance will slowly degrade as more pages of the flash chip go bad, and it can't be replaced: it's a proprietary technology, so the chances of finding a replacement module in the shops are slim.
Also, there are plenty of standard connection protocols that could be used to interface a flash module. How about SATA, or a single PCI Express lane? CPUs with native PCIe controllers could even use them as memory-mapped devices, which is a fast, standard and cheap approach. (Video cards usually share their RAM this way, but 286-era EMS and Intel's new flash solution use it too.)
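The memory-mapping idea the commenter suggests can be illustrated in Python, with an ordinary file standing in for the flash module's address window (a sketch only - a real PCIe flash device would expose a BAR or a block device node, not a temp file):

```python
import mmap
import os
import tempfile

# Create a file to stand in for a flash module's address window.
path = os.path.join(tempfile.mkdtemp(), "flash.img")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # a 4 KiB "flash" region, initially blank

with open(path, "r+b") as f:
    # Map the region into the process's address space: reads and writes
    # become plain memory operations rather than read()/write() calls.
    with mmap.mmap(f.fileno(), 4096) as region:
        region[0:5] = b"cache"        # store data with a slice assignment
        assert region[0:5] == b"cache"

# The shared mapping wrote through to the backing store.
with open(path, "rb") as f:
    print(f.read(5))
```

This is the appeal of memory-mapped I/O: once the device is mapped, software addresses it like RAM, with no driver-level transfer calls in the hot path.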
I am admittedly ignorant of the full operation of these new super-giga-boost caches, but what is the point? How are they different from adding extra RAM? I would have thought one could adapt the standard RAM interface and improve its throughput to support other data transfers, such as those normally handled by the hard disk.
Surely we should be simplifying and integrating computer architecture, not adding more components that may well be outdated when faster flash storage comes to the mainstream market.
OR - with all these caches about in chips, hard disks, RAM and now the motherboard, is it pointing to some sort of distributed cache architecture?