Big Data storage of the future: Fat spinning tubs smothered in NVRAM gravy

No more tiers, vows one storage guru as industry places its bets

Jean-Luc Chatelain, EVP of one of the major high-capacity storage firms, says he sees storage tiers collapsing, leaving only server non-volatile memory (NVRAM) and massively fat spinning data tubs of up to 64TB, rendering tape irrelevant. But how do we get to this point?

Chatelain - aka JLC - is the exec who heads up strategy and technology for privately held DataDirect Networks. DDN supplies huge great drive arrays that suck in and pump out data at great rates for supercomputing, high performance computing, media and entertainment, pharma and geo-science apps - all the usual HPC suspects. It has been having a great time doing this and has announced a $100m R&D investment in exascale computing storage.

DDN has also started running application software, such as file systems, inside its storage arrays, opening the door to running other data-munching software that needs low-latency access to mountains of data directly on its array controllers - effectively embedded x86 servers.

Chatelain, speaking in his personal capacity rather than as a representative of the company, said that HPC storage is currently used in a set of niche vertical areas, but believes the onrush of Big Data-style processing into general business and public sector organisations is going to make it more of a horizontal activity. That, he said, should bring a concomitant need for HPC-style storage to enable the real-time Big Data analytics processing users will want - a big opportunity for storage vendors with the right Big Data analytics storage products. Step forward DDN.

Chatelain highlights DDN's WOS (Web Object Scaler) as a clusterable highly scalable object storage array that's in use today in massive Big Data applications, including defence intelligence analytics work.

He believes that in future the right storage products will need to do two things: handle the huge volumes of data involved, and provide exceedingly low-latency access to the working subsets of it. That's where Chatelain sees very much bigger data tub drives and very much faster non-volatile storage memory coming in.
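The two-roles idea boils down to a small, very fast tier holding the hot working set in front of a vast, slower bulk tier. Here's a minimal Python sketch of that pattern - ours, for illustration only; the class, names and capacities are assumptions, not anything DDN ships:

    from collections import OrderedDict

    class TwoTierStore:
        """Illustrative two-tier store: a small, fast 'NVRAM' cache
        holding the hot working set in front of a huge, slow bulk tier.
        Names and sizes are hypothetical, for illustration only."""

        def __init__(self, hot_capacity=4):
            self.hot = OrderedDict()   # fast tier: recently used objects
            self.bulk = {}             # slow tier: everything, fat disk tubs
            self.hot_capacity = hot_capacity

        def put(self, key, value):
            self.bulk[key] = value     # the bulk tier always holds the data
            self._promote(key, value)  # and the working set stays hot

        def get(self, key):
            if key in self.hot:        # fast path: NVRAM-class latency
                self.hot.move_to_end(key)
                return self.hot[key]
            value = self.bulk[key]     # slow path: spinning data tub
            self._promote(key, value)
            return value

        def _promote(self, key, value):
            self.hot[key] = value
            self.hot.move_to_end(key)
            if len(self.hot) > self.hot_capacity:
                self.hot.popitem(last=False)  # evict least recently used

    # Example: only the most recently touched objects stay in the hot tier
    store = TwoTierStore(hot_capacity=2)
    for k in ("a", "b", "c"):
        store.put(k, k.upper())
    print(list(store.hot))  # ['b', 'c'] - 'a' has been demoted to bulk only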

We can summarise his ideas like this:

Starting in 2014 and gathering pace in 2016, we're going to see two tiers of storage in Big Data/HPC-class systems. There will be storage-class memory built from NVRAM - post-NAND stuff - in large amounts per server to hold the primary, in-use data, complemented by massive disk data tubs with form factors of up to 8.5 inches, spinning relatively slowly at 4,200rpm. They will render tape operationally irrelevant, he says, because they could hold up to 64TB of data with a 10msec access latency and 100MB/sec bandwidth.
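A quick back-of-envelope pass over those projected specs (the numbers are Chatelain's, the arithmetic is ours) shows why such tubs would compete with tape rather than with performance disk: at 100MB/sec it takes the best part of a week to stream a full 64TB drive, but random access still lands in milliseconds rather than the minutes a tape mount needs.

    # Rough arithmetic on the projected fat-drive specs quoted above
    # (64TB capacity, 100MB/sec bandwidth, 10ms access latency).
    capacity_tb = 64
    bandwidth_mb_per_s = 100
    access_latency_ms = 10

    capacity_mb = capacity_tb * 1_000_000          # 64TB in MB, decimal units
    full_read_seconds = capacity_mb / bandwidth_mb_per_s
    full_read_days = full_read_seconds / 86_400

    print(f"Full sequential read of one tub: {full_read_days:.1f} days")
    # => roughly 7.4 days - tape-like in throughput terms, but with
    #    random access in ~10ms instead of minutes for a tape mount.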

This idea of much higher-capacity disk drives attacking tape in the online archive space has a surface appeal: the disk drive manufacturers might well like the idea of replacing performance-disk volume lost to flash with new online archive disks taking share from tape reels.

Gartner analyst Valdis Filks says tape has a unique advantage: offline files can't be corrupted or deleted, making it the safety net enterprises need. He says the big, fat disk ideas remind him of IBM's old SLED (Single Large Expensive Drive) - the 3390 from 1989, now discontinued.

What do others think of JLC's ideas?

James Bagley of Storage Strategies Now said:

With regards to persistent memories other than flash, I think his timetable is too aggressive, since the only real alternative is MRAM and Everspin is just starting to sample 64Mb parts, while 64Gb 20nm flash parts are flooding the market from Micron and Toshiba.

Everspin has an aggressive plan to continue to shrink lithographies but they have a long way to go, current parts are around 120nm cell size. I’m pretty bullish on MRAMs taking a piece of the server and controller NVRAM market over the next 2-3 years but don’t see it displacing flash in the typical cache and top tier. LSI’s 12Gb SAS controllers will likely use the Everspin chips.

We are in agreement with Jean-Luc that object storage is going to dominate many applications because of the unbridled growth of unstructured data.

With regard to a coordinated attack on tape by HDD, he is probably correct, but tape will still be around for my grandchildren.

Josh Krischer of Josh Krischer and Associates thought the NVRAM ideas were good, seeing new NVRAM products as being (rough numbers sketched after the list):

  • Next Generation SSDs – Storage Class Memory (SCM)
  • Cost within 10x of enterprise disk
  • Performance within 3x of DRAM
  • Endurance superior to NAND.
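To get a feel for what those targets would mean in practice, here's a rough calculation; the per-GB enterprise disk price and DRAM latency figures below are our own ballpark assumptions, not Krischer's:

    # Illustrative only: what the SCM targets above would imply, using
    # assumed ballpark figures for enterprise disk cost and DRAM latency.
    enterprise_disk_usd_per_gb = 0.10   # assumption, not from the article
    dram_latency_ns = 100               # assumption, not from the article

    scm_max_usd_per_gb = 10 * enterprise_disk_usd_per_gb   # "within 10x of enterprise disk"
    scm_max_latency_ns = 3 * dram_latency_ns               # "within 3x of DRAM"

    print(f"SCM cost ceiling:    ~${scm_max_usd_per_gb:.2f}/GB")
    print(f"SCM latency ceiling: ~{scm_max_latency_ns}ns")
    # A part meeting both ceilings would slot neatly between DRAM and NAND flash.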

He said: "In my opinion there will be [a] new type of SSD based on Storage Class Memory (SCM). [It's] not clear which technology will win but one (or two) out of MRAM, FeRAM, Racetrack, Organic and Polymer, and Resistive RAM (RRAM)... It will be a new storage tier between the memory and the Flash SSDs or lower performance SCM which will replace the flash technology."

He noted that in November Everspin Technologies had announced the industry's first Spin-Torque Magnetoresistive RAM (ST-MRAM) chip due to be shipped in 2013.

Krischer agrees that big spinning disk data tubs will be needed, but can't see 8.5-inch form factor disks arriving to replace tape:

Why … kick a dead horse? Tape is not [a] growing business. The smart tape vendors, like Fujitsu with CentricStor, are not enjoying great success. I bet on “cheap” disks with … mirroring in de-clustered RAID [or] Erasure Coding. [For example] 2.5-inch HDDs (2020 - 12TB, 3.5-inch - 60TB), more platters, all SAS.
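Erasure coding spreads data plus redundancy across many cheap drives so that some can fail without losing data. As a minimal illustration of the principle - a toy sketch, not Krischer's or any vendor's scheme - here is single-parity striping, the simplest possible erasure code, where one XOR parity chunk lets any single lost data chunk be rebuilt:

    # Toy sketch of the simplest erasure code: single XOR parity across
    # data chunks. Real de-clustered RAID / erasure-coded systems use
    # stronger codes (e.g. Reed-Solomon) across many more drives.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(chunks):
        """Compute one parity chunk over equally sized data chunks."""
        parity = bytes(len(chunks[0]))
        for chunk in chunks:
            parity = xor_bytes(parity, chunk)
        return parity

    def recover(surviving, parity):
        """Rebuild a single missing data chunk from the survivors + parity."""
        missing = parity
        for chunk in surviving:
            missing = xor_bytes(missing, chunk)
        return missing

    data = [b"AAAA", b"BBBB", b"CCCC"]   # three data 'drives'
    parity = encode(data)                # one parity 'drive'

    lost = data.pop(1)                   # lose drive 1
    rebuilt = recover(data, parity)
    assert rebuilt == lost               # b"BBBB" reconstructed
    print("rebuilt:", rebuilt)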

We asked the disk drive manufacturers - Hitachi GST, Seagate, Toshiba and Western Digital - what they made of all this. None replied, but then they don't generally discuss product roadmaps four years out with the likes of us.

El Reg's take on this is that JLC is largely right: Big Data-processing servers will get post-NAND NVRAM storage memory alongside their main DRAM, and will hold the bulk of the data they need in a massive networked single-tier, scale-out disk drive array, likely enough using object storage technology.

Who's right? Tell us what you think in our storage forum. ®
