Let the compute see the data... to smash storage networking bottlenecks

So laggy, no likey

Part 1 Bringing compute to data sounds like a great way to bypass storage access bottlenecks, but progress is difficult because of software issues, the need to develop special hardware, and a non-x86 environment.

As data volumes grow from terabytes to petabytes and beyond, the time it takes to bring data to the processors is becoming an increasing pain in the ass.

All computing involves bringing compute and data together, loading DRAM with data from storage so that the processor can do its stuff. It's not really a matter of physical closeness; whether the processor is 1cm or 20cm from the data doesn't matter that much. It's more about reducing the data access latency and increasing the rate at which data can be read from or written to storage.

There is a bottleneck between storage and compute because storage media access, meaning disks mostly, is slow. Storage networking is slow too and processing the storage IO stack takes too many cycles. There have been several attempts to fix this problem, some of which have failed, and others which are still being developed, notably adding compute to SSDs.

These are:

  • Bringing compute to storage array
  • Bringing storage to compute
  • In-memory systems
  • Bringing compute to disk drives
  • Bringing compute to flash drives
  • Bypassing the problem with NVMe-oF

Bringing compute to storage array

Coho Data attempted adding compute to its storage array, the DataStream MicroArray. It was introduced in May 2015 with a Xeon-based server/controller, PCIe NVMe flash cards, and disk storage. However, the product failed to progress and the company closed down in August 2017.

Coho Data DataStream array

The compute was there for so-called closely coupled storage tasks such as video stream transcoding and Splunk-style data analysis.

It was not there to run general applications, which executed on host servers. There were two obvious issues. First, software had to be written or procured to run on the array and carry out the closely coupled storage tasks. Second, host server code for initiating, orchestrating and managing that software, and for processing the results of its computations, had to be written or procured.

Tasks previously carried out on a server with an attached storage array now had to be subdivided into a host server part and a storage array part, and then managed. This will apply to any other product bringing compute to storage media as well. The Coho array used x86 processors. If the compute brought to storage drives is not x86, then procuring and/or writing code to run on it will be outside the x86 mainstream development path.
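To make that division of labour concrete, here is a minimal, entirely hypothetical sketch of the host-side half: the host asks an array with on-board compute to run a task next to the data and later collects the result. The endpoint, message format and function names are invented for illustration and are not Coho Data's API.

    # Hypothetical host-side orchestration of an array-resident task. The
    # endpoint and message format are invented; the point is the host/array
    # split, not any real product's interface.
    import json
    import urllib.request

    ARRAY_API = "http://array.example.local/tasks"   # assumed array task endpoint

    def submit_array_task(kind: str, volume: str, params: dict) -> str:
        """Ask the array to run a task (e.g. transcode, index) next to the data."""
        body = json.dumps({"kind": kind, "volume": volume, "params": params}).encode()
        req = urllib.request.Request(
            ARRAY_API, data=body, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["task_id"]

    def fetch_result(task_id: str) -> dict:
        """Collect the (much smaller) result once the array-side worker is done."""
        with urllib.request.urlopen(f"{ARRAY_API}/{task_id}/result") as resp:
            return json.load(resp)

The array-side worker that actually does the transcoding or indexing is a separate codebase again, which is exactly the software burden described above.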

As far as we know, there are no other ongoing attempts to bring compute to storage arrays in any significant way.

Bringing storage to compute

Hyperconverged infrastructure (HCI) appliances bring storage to compute in the sense that they do away with external shared storage arrays.

The local storage on an HCI node is used instead, and the nodes have their storage aggregated into a virtual SAN. This is still accessed using the standard storage IO stack, such as iSCSI, and data may need to be accessed on a remote node across, say, an Ethernet link.

So this form of bringing compute and storage closer together doesn't do away with the storage access IO stack or networked access to remote storage. The HCI benefits lie elsewhere.
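As a rough illustration of why, here is a toy sketch of the read path in an aggregated virtual SAN. The node names and replica map are made up; the local-versus-remote decision is the point.

    # Toy read-path sketch for a virtual SAN spread across HCI nodes.
    # Node names and the replica map are invented for illustration.
    REPLICA_MAP = {
        "vm_disk_a": ["node1", "node3"],   # extents of this virtual disk live here
        "vm_disk_b": ["node2", "node3"],
    }

    def read_block(virtual_disk: str, this_node: str) -> str:
        """Work out where a read is served from in the aggregated pool."""
        replicas = REPLICA_MAP[virtual_disk]
        if this_node in replicas:
            return "local read: storage IO stack only"
        # The data lives on another node, so the read still crosses the cluster
        # network (iSCSI or similar over Ethernet) and both nodes' IO stacks.
        return f"remote read from {replicas[0]}: network hop plus two IO stacks"

    print(read_block("vm_disk_a", "node1"))   # served locally
    print(read_block("vm_disk_b", "node1"))   # still a networked access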

In-memory systems

An in-memory (DRAM) system tries to do away with storage for run-time processing altogether. Data is loaded into memory from storage and then used there, with radically faster access than if the data had to be fetched from disk.

GridGain and Hazelcast are examples of suppliers producing software to run in-memory systems.

GridGain stack

The SAP HANA database is another such system. The source data on disk is accessed infrequently to load the in-memory system, and changes to the in-memory data are written out to disk, again infrequently.

Alternatively, changes to the in-memory database are written to transaction logs which are stored on disk. A crashed database can be recovered from these. Example products that log in-memory transactions are Redis, Aerospike and Tarantool.
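The pattern is simple enough to sketch. Below is a toy write-ahead log in a few lines of Python (illustrative only, and nothing like the engineering inside Redis, Aerospike or Tarantool), showing the in-memory copy, the append-to-disk log, and crash recovery by replaying that log.

    # Toy write-ahead-log sketch: the working copy lives in DRAM, every change
    # is appended to a log on disk, and recovery replays the log.
    import json, os

    LOG_PATH = "store.log"      # assumed log file location
    store = {}                  # the in-memory dataset

    def put(key: str, value: str) -> None:
        """Apply a change in memory and persist it to the transaction log."""
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps({"op": "put", "key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())          # make the change crash-durable
        store[key] = value

    def recover() -> dict:
        """Rebuild the in-memory dataset after a crash by replaying the log."""
        recovered = {}
        if os.path.exists(LOG_PATH):
            with open(LOG_PATH) as log:
                for line in log:
                    entry = json.loads(line)
                    if entry["op"] == "put":
                        recovered[entry["key"]] = entry["value"]
        return recovered

    put("user:42", "alice")
    assert recover() == {"user:42": "alice"}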

The in-memory system is limited to using DRAM and will have practical size limitations as a result.

Alternative schemes of adding compute to disk drives aim to supply many terabytes of capacity more cheaply than DRAM, and to provide a different kind of performance boost from that delivered by all-flash arrays accessed through the storage stack and across a storage network.

Bringing compute to disk drives

Seagate more or less initiated this idea with its Kinetic technology – strapping small processors to disk drives and adding an object access protocol and storage scheme to the drives.

Seagate Kinetic disk drive

Part of the justification was simpler storage access stack processing. But upstream applications that use these drives have been slow in coming, partly a software difficulty and partly because the disk drives were still disk drives, and hence slow compared to flash drives.
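To show the shape of the idea, here is a conceptual sketch of key-value access to a drive-resident object store, the kind of interface Kinetic put in place of block reads and writes. This is not the real Kinetic protocol; the class and its methods are invented purely to illustrate the concept.

    # Conceptual sketch: each drive is its own network endpoint speaking a
    # key-value interface, rather than a LUN behind an array controller.
    # Class, methods and the address below are invented for illustration.
    class DriveKVClient:
        def __init__(self, drive_address: str):
            self.drive_address = drive_address
            self._objects = {}            # stand-in for the drive's on-media store

        def put(self, key: bytes, value: bytes) -> None:
            """Store an object by key; no host-side filesystem or block layer."""
            self._objects[key] = value

        def get(self, key: bytes) -> bytes:
            return self._objects[key]

    drive = DriveKVClient("192.168.1.50:8123")   # hypothetical drive endpoint
    drive.put(b"photo/0001", b"...jpeg bytes...")
    print(len(drive.get(b"photo/0001")), "bytes read back")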

OpenIO Arm-y disk drives

OpenIO has added Arm CPUs to disk drives, turning them into nano-nodes for object storage.

From OpenIO – WDC disk drive with added Arm CPU system

It has a Grid for Apps scheme and Enrico Signoretti, strategy head, said: "The HDD nano-node is good for traditional object storage use cases (such as active archives for example), but we want to replicate what we are already doing on x86 platforms on the nano-node.

"Thanks to Grid for Apps [serverless computing framework], we have already demonstrated image recognition and indexing, pattern detection, data validation/preparation during ingestion and, more in general, data processing and metadata enrichment. With the right amount of CPU power we will be able to move most of these operations directly at the disk level, creating value from raw data while it is saved, accessed or updated."

He mentioned video surveillance as an example application area: "A remote camera could have one or more nano-nodes storing all the video stream, doing operations locally (like face recognition, removal of useless parts, and so on) and send to the core only relevant information (with metadata included in it). All data is saved locally, but only relevant information is moved to the cloud.

"By operating in this way, you can save a huge amount of network bandwidth while removing all the clutter from the central repository, which also results in faster operations and less storage costs in the cloud. This is an advanced application, but it is a game changer."

He says flash-based nano-nodes look promising because of the faster media speed: "At the moment the HDD in the nano-node is limiting the range of applications because [of] the lack of IOPS.

"As soon as Flash become[s] a viable option in terms of $/GB for capacity-driven applications we will be ready to leverage our serverless computing framework to run more applications closer to the data. Real time video encoding, AI/ML, IoT, real time data analytics are all fields we are looking very closely and we will share more on this in the following months."

Reg comment

It appears that, generally, users do not see enough justification for adding compute to disk drives when disk IO latency far exceeds the time spent in storage IO stack processing. It exposes disk's slowness whereas bringing compute to radically faster flash drives – SSDs – looks more promising.
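Back-of-envelope numbers make the point. The figures below are illustrative orders of magnitude, not benchmark results, but they show why shaving the IO stack and the network barely registers when the media is a disk, and matters a great deal when it is flash.

    # Back-of-envelope latency sketch; assumed, order-of-magnitude figures only.
    US = 1e-6
    MS = 1e-3

    stack_and_network = 25 * US + 100 * US   # assumed host IO stack + one Ethernet hop
    disk_media  = 8 * MS                     # typical 7,200rpm random read
    flash_media = 100 * US                   # typical NAND flash read

    for name, media in (("disk", disk_media), ("flash", flash_media)):
        total = media + stack_and_network
        print(f"{name}: media is {media/total:.0%} of a {total*1e6:.0f}us remote read")

On those assumed figures the disk itself accounts for roughly 98 per cent of the total, so removing the stack and network hardly moves the needle; on flash the media drops to well under half, and the rest of the path suddenly looks worth attacking.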

It is as if it's not just necessary to kill storage networking latency; it's necessary to kill disk drive latency as well before bringing compute to storage starts making sense.

And that's what we are going to look at in part 2 of this examination of moving compute to data – in-situ flash drive processing. ®
