Besides the XPoint: Persistent memory tech is cool, but the price tag... OUCH
No economies of scale = piss-poor adoption
Posted in Storage, 2nd February 2018 13:39 GMT
Analysis The prospects of XPoint and other persistent memory technologies becoming a standard part of server design are being held up because the darn stuff costs too much, an analyst has said.
That's because it is made in small quantities, so the economies of scale that would bring the price down never kick in.
Objective Analysis analyst Jim Handy explained this at the SNIA's Flash Memory Summit in January, and his argument starts with the memory hierarchy idea.
The chart displays memory and storage technologies in a space defined by performance, meaning bandwidth (vertical axis), and cost (horizontal axis).
There is a sweet spot diagonal running up the chart from left to right, from tape (slow/cheap) at the bottom through disk, SSD, DRAM and the cache levels to L1 cache, which is the fastest and most expensive item on the chart.
Any new technology aiming to punch its way into the memory hierarchy at any point needs to be better performing than things below it and less expensive than items above it on the chart.
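Handy's placement rule can be sketched as a simple check — a candidate fits a gap only if it is faster than the tier below and cheaper than the tier above. The bandwidth and $/GB figures here are invented placeholders for illustration, not real market data:

```python
# Sketch of the memory-hierarchy placement rule. All numbers are
# invented placeholders, not actual prices or bandwidths.

tiers = [
    # (name, bandwidth GB/s, price $/GB) -- illustrative values only
    ("NAND SSD", 3, 0.30),
    ("DRAM", 25, 7.00),
]

def fits_gap(candidate, below, above):
    """True if candidate beats the lower tier on speed and undercuts the upper tier on cost."""
    _, cand_bw, cand_price = candidate
    _, below_bw, _ = below
    _, _, above_price = above
    return cand_bw > below_bw and cand_price < above_price

# A hypothetical XPoint placement between SSD and DRAM...
xpoint = ("XPoint", 10, 4.00)
# ...and the same part priced above DRAM, as low-volume manufacturing forces.
overpriced = ("XPoint, sub-scale volume", 10, 9.00)

print(fits_gap(xpoint, tiers[0], tiers[1]))      # True  -- slots into the gap
print(fits_gap(overpriced, tiers[0], tiers[1]))  # False -- priced out of it
```

This is the NVDIMM story in miniature: the performance condition held, but the price condition didn't.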
We have seen NVDIMMs trying to push their way into the SSD-DRAM gap and generally failing – witness Diablo Technologies.
Handy said NAND suffered from the same issue until around 2004. Before then, in its then SLC (1 bit/cell) form, it was more expensive than DRAM ($/GB) even though a 100mm² die using a 44nm process stored 8GB compared to an equivalent DRAM die storing 4GB. Twice the bits should mean half the cost, but it didn't – because not enough of the stuff was made to bring in economies of scale.
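The arithmetic behind "twice the bits should mean half the cost" is worth spelling out. At equal die cost, doubling the gigabytes per die halves the $/GB — unless a low-volume manufacturing premium inflates the die cost and wipes out the density advantage. The die cost and premium below are invented placeholders; only the 8GB/4GB densities come from the article:

```python
# Back-of-the-envelope version of Handy's point. The $20 die cost and
# the 2.5x low-volume premium are assumed figures for illustration.

die_cost = 20.0          # assumed cost per die at equal process/area
nand_gb_per_die = 8      # SLC NAND, 100mm^2 die, 44nm (from the article)
dram_gb_per_die = 4      # equivalent DRAM die (from the article)

nand_cost_per_gb = die_cost / nand_gb_per_die   # 2.5 $/GB
dram_cost_per_gb = die_cost / dram_gb_per_die   # 5.0 $/GB
print(dram_cost_per_gb / nand_cost_per_gb)      # 2.0 -- NAND "should" be half price

# But pre-2004 NAND volumes were too small: a sub-scale manufacturing
# premium on the die cost can leave NAND *more* expensive per GB than DRAM.
low_volume_premium = 2.5
actual_nand_cost_per_gb = die_cost * low_volume_premium / nand_gb_per_die  # 6.25 $/GB
print(actual_nand_cost_per_gb > dram_cost_per_gb)  # True
```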
In 2004, Handy said, the number of NAND flash wafers made reached a third of the number of DRAM wafers made, and we saw a crossover.
Ever since, NAND and DRAM prices have been diverging, with MLC (2 bits/cell), TLC (3 bits/cell) and 3D NAND (many more bits/die) widening the gap, El Reg suggests.
Incoming persistent memory (PM) technology carries high manufacturing costs because it involves new materials and processes. That makes it more expensive, which in turn extends the time needed to reach economies of manufacturing scale.
Make more, support more
The lesson of NAND and NVDIMM-N for XPoint, and for other persistent memory technologies aimed at the same DRAM-NAND gap, is that their manufacturing volume needs to be high enough to deliver a cost-performance profile matching that slot on the memory hierarchy chart.
Their manufacturing volume needs to approach that of DRAM, in Handy's view. They also need software support, particularly for persistence, and this is coming on Linux, Windows and VMware. The initial PM take-up will be driven by performance, which requires faster-than-NAND speed at lower-than-DRAM prices.
Until XPoint gets there, it won't become ubiquitous, Handy believes. He said Intel is motivated enough to make that happen.
It seems to the Vulture's storage desk that if Samsung can make its Z-SSD cheap enough, it could slot into the DRAM-SSD gap on the memory hierarchy faster than XPoint and prevent XPoint becoming mainstream, while Samsung develops its own post-NAND persistent memory technology to leapfrog it.
Secondly, technologies competing with XPoint, such as STT-RAM, ReRAM and Phase-Change Memory, are all out in the wilderness unless and until there is a realistic path to manufacturing volume for them. It's a harsh world out there.