Guess who: Storage chip maker [blank] can't wait for all-storage-chip data centers

It's Intel, we're talking about Intel

Comment Intel is pushing the idea of an all-flash data center so it can make up for slowing processor revenue growth by selling 3D NAND and XPoint chips and SSDs.

Here is a re-envisaged copy of a slide Jerry Xu, Intel's general manager for NVM (non-volatile memory) solutions in China, showed at the Huawei Storage Summit in Shenzhen, Nov 3.

[Slide: Intel's storage and memory hierarchy today, re-envisaged]

It shows the storage and memory hierarchy in use today in data centers that have adopted SSDs. It's a straightforward graphic: DRAM (memory) holds the hot data, warm data sits in PCIe NAND SSDs, and cold data is stored on cheap and slow SATA disk drives. Some call this scheme flash-and-trash storage.

There is a third storage tier for even colder data (frozen?), which is an archive using either SATA disk drives or old-school tape media.

We see that DRAM has a 10GB/sec bandwidth and about a 100 nanosecond latency. PCIe SSDs have around a 3.2GB/sec bandwidth and a latency under 100 microseconds, roughly a thousand times slower than DRAM.

The poor sluggish SATA disks run at 6Gbit/s, meaning about 540MB/sec (so slow), and have a latency approaching 50 milliseconds; glacial compared to PCIe NAND.
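For a rough sense of scale, here's a back-of-the-envelope sketch of those figures in Python. The numbers are the ones on the slide; the script itself is purely our illustration, not anything Intel showed.

```python
# Back-of-the-envelope comparison of the tiers on Intel's "today" slide.
# The figures come from the slide; the script is purely illustrative.

tiers = {
    # name: (bandwidth in GB/s, latency in seconds)
    "DRAM":          (10.0, 100e-9),   # ~100 nanoseconds
    "PCIe NAND SSD": (3.2,  100e-6),   # ~100 microseconds
    "SATA disk":     (0.54, 50e-3),    # ~50 milliseconds
}

dram_lat = tiers["DRAM"][1]
for name, (bw, lat) in tiers.items():
    print(f"{name:14s} {bw:5.2f} GB/s  "
          f"latency ~{lat * 1e6:>9.1f} µs  "
          f"({lat / dram_lat:,.0f}x DRAM latency)")

# SATA's 6Gbit/s line rate uses 8b/10b encoding, so the usable payload
# rate is roughly 6 x 8/10 / 8 = 0.6 GB/s, i.e. ~600MB/s theoretical
# and about 540MB/s in practice.
print(f"SATA payload ceiling: {6 * (8 / 10) / 8:.2f} GB/s")
```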

After Jerry Xu strode through slides extolling NVMe PCIe SSDs, 3D NAND, and then 3D XPoint, he showed a Storage Hierarchy Tomorrow slide which heralds an all-solid-state future, apart from one option in the archive space:

[Slide: Intel's original source slide from the Huawei Storage Summit]

Jerry Xu's presentation slide captured on iPhone. We tidied it up.

This source slide is hard to read, so we tidied it up by making our own copy:

[Slide: Intel's storage and memory hierarchy tomorrow, re-envisaged]

Jerry Xu's source slide re-envisaged

We have the same three-layer pyramid of hot, warm, and cold data, but the four media tiers have all moved up one layer, so DRAM now sits above the pyramid with the same bandwidth and latency characteristics as before. Below that, things are different, and the media in each layer are generally much faster.

Hot data lives first in 3D XPoint DIMMs, with nanosecond-class latency of about 250ns and a 6GB/sec bandwidth. Slightly less hot data resides in NVMe 3D XPoint SSDs, which have a much longer latency of about 10 microseconds, a tenth of the access latency of the PCIe SSDs in the Storage and Memory Hierarchy Today slide above, but they can store more data than the socket-limited and physically constrained XPoint DIMMs.

Warm data moves to NVMe 3D NAND SSDs. These have a PCIe 3.0 x2 link running at around 3.2GB/sec, with latency approaching 100 microseconds. That's pretty much the same as the warm data scheme used today, with the proviso that 3D NAND should provide higher capacity at a lower cost/GB than today's planar (2D) flash.

The cold tier is split in two like the hot tier. First are NVMe 3D NAND SSDs, and below them are SATA or SAS disk drives for customers who don't want to pay a flash premium when archiving data. The SATA speed is 6Gbit/s again, and it can take minutes to bring a disk from offline to online.

There is no longer any tape in this tier.
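Pulled together, the tomorrow hierarchy reads roughly as follows. This is just our sketch of the tiering in table form; the figures are the ones quoted above, blanks mean the slide (as reported) gave no number, and the layout is ours, not Intel's.

```python
# Sketch of the "storage hierarchy tomorrow" tiering described above.
# Figures are those quoted in the article; blanks mean no number was
# given. The table layout is ours, not Intel's.

tomorrow = [
    # (layer, media, bandwidth, latency)
    ("above pyramid", "DRAM",               "10 GB/s",   "~100 ns"),
    ("hot",           "3D XPoint DIMM",     "6 GB/s",    "~250 ns"),
    ("hot",           "NVMe 3D XPoint SSD", "",          "~10 µs"),
    ("warm",          "NVMe 3D NAND SSD",   "~3.2 GB/s", "~100 µs"),
    ("cold",          "NVMe 3D NAND SSD",   "",          ""),
    ("cold/archive",  "SATA/SAS disk",      "~540 MB/s", "ms-minutes"),
]

print(f"{'layer':14s} {'media':20s} {'bandwidth':>10s} {'latency':>12s}")
for layer, media, bw, lat in tomorrow:
    print(f"{layer:14s} {media:20s} {bw:>10s} {lat:>12s}")
```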

The Intel slide lists a set of use cases for each layer, with low-cost archive assigned to the cold data layer. In the warm tier we see Big Data analytics, an active-active object store, and Swift, Lambda, HDFS, and Ceph. Specifically, we have 3D NAND SSDs being used for Big Data and object storage, which some people might call a stretch.

The hot data use cases are server-side and/or AFA (all-flash array) deployments, business processing, high-performance and in-memory processing, analytics, scientific, cloud, web, search, and graph workloads.

Comment

Xu was openly looking for collaboration opportunities with Huawei, and it seems to us El Reg storage desk denizens that Intel is making a play for the server and storage array media market, with flash booting out disk except for the cheapest and deepest archives, which still need SATA disk drives.

If Intel's reading of the situation is correct, then a good supply of 3D NAND and 3D XPoint chips and SSDs will be needed, as well as significant numbers of XPoint DIMMs. That thought would help explain its go-it-alone China chip fab adventure.

With the server processor market saturating, the PC market looking less and less likely to recover, and the tablet market declining, Intel, whose revenue is currently dominated by server, PC, and notebook processors, faces the prospect of stalling growth, if not outright decline.

By shipping 3D NAND and 3D XPoint chips and components, Intel could add storage-class memory and more NAND products to its portfolio and restore growth to hungry Chipzilla. For Intel, the all-solid-state data center is therefore a necessity, not a marketeer's wet dream*. ®

* And Violin Memory's evangelism of its all-flash data center idea becomes less extreme.
