IT admins hate this one trick: 'Having something look like it’s on storage, when it is not'

What about the tech?

We've heard a lot about how Komprise's tech works in Subramanian's rebuttal, so we asked Cree some questions about infinite-io's offering to get to the bottom of how it plans to solve the same problem.

El Reg: If we think of infinite-io as a storage metadata reference layer, then how would you compare and contrast it to Primary Data’s technology?

Mark Cree: Primary Data and infinite-io both rely on live metadata to intelligently migrate files to a cloud or appropriate storage tier. Primary Data virtualizes NAS systems, servers, and clouds into one or more new namespaces. Virtualization is accomplished through a complex mapping process that requires workflow changes as new mount points and/or drive letters are introduced. [It offers] similar functionality to companies ... like PolyServe, Acopia, and most recently Formation Data.
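
Neither vendor publishes its internals, but the mapping approach Cree describes can be caricatured in a few lines of Python. Everything below, from the backend names to the paths, is invented for illustration: a virtualization layer resolves paths in a new global namespace to their real back-end locations, and the remount against that new namespace is the workflow change he is pointing at.

```python
# Toy sketch of the namespace-virtualization idea Cree describes.
# Every name and path here is invented for illustration; this is not
# Primary Data's (or anyone else's) actual implementation.

BACKENDS = {
    "nas1": "/mnt/nas1",            # an existing NAS mount
    "s3":   "s3://archive-bucket",  # a cloud tier
}

# The virtualization layer keeps a map from paths in the *new* namespace
# to (backend, real path) pairs. Clients must remount against the new
# namespace -- the workflow change Cree is pointing at.
MAPPING = {
    "/global/projects/report.doc": ("nas1", "projects/report.doc"),
    "/global/archive/2015.tar":    ("s3", "2015.tar"),
}

def resolve(virtual_path: str) -> str:
    """Translate a virtual-namespace path to its real location."""
    backend, real_path = MAPPING[virtual_path]
    return f"{BACKENDS[backend]}/{real_path}"

print(resolve("/global/archive/2015.tar"))  # s3://archive-bucket/2015.tar
```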

Primary Data also provides metadata acceleration, but [its] performance is limited by the layers and I/Os it must traverse to get to the metadata and then serve it up.

At infinite-io [there are] absolutely no workflow changes and [we tier] data to a cloud like Primary Data, but we do this totally transparently to installed applications and users. We may have moved 90 per cent of a customer's data to a low-cost cloud or storage tier, but to the end-user or application, the data appears and responds just as if it were still on a local NAS system. ...

We don't need to traverse a file system to respond; we see the request on the network and respond directly out of memory.
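
infinite-io hasn't published its wire-level details, so here is only a minimal sketch of the idea as Cree states it: walk the share once, hold every file's attributes in an in-memory table, and answer GETATTR-style requests from that table without ever consulting the back-end file system. The share path, function names, and attribute fields are assumptions for illustration, not the product's API.

```python
# Minimal sketch of answering metadata requests from memory on the
# network path. Field names and the simplified request format are
# invented; a real implementation would speak NFS/SMB on the wire.

import os

def preload(share_root: str) -> dict:
    """Walk the share once and hold every file's metadata in memory."""
    table = {}
    for root, dirs, files in os.walk(share_root):
        for name in dirs + files:
            path = os.path.join(root, name)
            st = os.lstat(path)
            table[path] = {
                "size": st.st_size,
                "mtime": st.st_mtime,
                "mode": st.st_mode,
            }
    return table

def handle_getattr(table: dict, path: str) -> dict:
    """Answer a GETATTR-style request straight from memory; the back-end
    NAS (or the cloud tier behind it) is never consulted."""
    return table[path]

# Usage sketch:
#   table = preload("/srv/share")
#   attrs = handle_getattr(table, "/srv/share/projects/report.doc")
```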

El Reg: If we think of infinite-io as a front-end storage array accelerator, then how would you compare and contrast it to Avere’s FXT technology?

Mark Cree: Avere and infinite-io both install in front of existing NAS storage and accelerate metadata, but the similarities stop there. Avere virtualizes the NAS systems and clouds behind them into one or more new namespaces using a complex mapping system similar to Primary Data's.

Avere operates like a cache and requires a "warm-up" period before it can effectively speed up metadata and file operations. In addition to metadata, it can cache files. Avere's performance, like [that of] Primary Data, is throttled by the fact that it uses a file system to respond to metadata requests. ... It's a lot of layers and I/Os just to get to the metadata, and its current implementation can't get down to double-digit microsecond performance serving metadata or files.

Unlike Avere, we’ll never have any metadata cold spots or misses since we store all metadata in memory.
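
To make the contrast concrete, here is a hypothetical Python sketch of the two designs: a capacity-limited, warm-up-style cache that misses until it has seen (and not yet evicted) an entry, versus a fully preloaded table that cannot miss. The class name, capacity, and toy workload are all invented.

```python
# Hypothetical contrast between the two designs. A warm-up cache only
# holds entries it has already seen (and not yet evicted), so early
# lookups miss and fall through to the slower back end; a fully
# preloaded table never misses. All names and sizes are illustrative.

from collections import OrderedDict

class WarmUpCache:
    """Cache-style front end: entries appear on first access, LRU-evicted."""
    def __init__(self, backend: dict, capacity: int):
        self.backend, self.capacity = backend, capacity
        self.entries = OrderedDict()
        self.misses = 0

    def getattr(self, path: str):
        if path not in self.entries:
            self.misses += 1                      # cold spot: hit the filer
            self.entries[path] = self.backend[path]
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recent
        else:
            self.entries.move_to_end(path)
        return self.entries[path]

backend = {f"/share/f{i}": {"size": i} for i in range(1000)}

cache = WarmUpCache(backend, capacity=100)
for path in backend:          # first pass over a cold cache
    cache.getattr(path)
print(cache.misses)           # 1000 -- every lookup missed while warming up

preloaded = dict(backend)     # all metadata resident in memory up front
print(all(path in preloaded for path in backend))  # True: no miss path
```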

El Reg: We can see how using infinite-io would be beneficial when accessing data on disk drive arrays but, surely, it's less beneficial with faster all-flash arrays, where metadata accesses would be fulfilled faster. Are we wrong?

Mark Cree: Our metadata response times are record-setting at 65 microseconds on average. We had the CTO of one of the major storage vendors tell us we are 5 times faster than anything they sell and 3 times faster than any in-memory file system they have simulated responding to metadata requests.

So, yes, we can make all-flash NAS arrays faster. ... The secret sauce of our performance comes from being on the network and not having to traverse the complexity and layers of a file system to respond to a metadata request.

El Reg: What role would infinite-io play if the shared storage resource was an NVMe-over-Fabrics-accessed array?

Mark Cree: We don't really play directly in the block storage market where NVMe is targeted. Infinite-io is focused on unstructured file data, which most analysts predict is growing at more than 5 times the rate of structured block data. Having said that, if the NVMe-over-Fabrics array were front-ended with a file system, we would treat that file system like any other NAS system and perform inactive data migration and metadata acceleration.

El Reg: What role does infinite-io have in a data centre built with hyper-converged infrastructure appliances, where the shared storage is distributed between the (server) nodes?

Mark Cree: This is an interesting application, and we've had some discussions with one of the large hyper-converged vendors. Most hyper-converged systems store VM files on a file system. Under that model, we would be able to tier cold VMs on the hyper-converged system to a low-cost storage tier, like a cloud, while making them appear and recall just as if they were local, increasing the hyper-converged system's performance by freeing local storage and lowering inactive VM storage costs.
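
In miniature, such tiering might look like the sketch below: files that have gone cold are aged out to a stand-in "cloud" (a plain dict), a zero-length stub keeps the name locally visible, and the bytes are recalled transparently on the next read. The stub format and the 90-day threshold are assumptions for illustration; a real system would do this at the NFS/SMB layer so applications see no difference.

```python
# Rough sketch of a tier-and-recall flow. The "cloud" is a plain dict,
# and the zero-length stub format and 90-day threshold are invented for
# illustration only.

import os
import time

cloud = {}  # stand-in for an object store

def tier_out(path: str, cold_after_days: int = 90) -> bool:
    """Move a file that hasn't been read recently to the cloud tier,
    leaving a zero-length stub so the name stays locally visible."""
    age_days = (time.time() - os.stat(path).st_atime) / 86400
    if age_days < cold_after_days:
        return False              # still warm: leave it alone
    with open(path, "rb") as f:
        cloud[path] = f.read()
    open(path, "w").close()       # truncate to a stub, freeing local space
    return True

def read(path: str) -> bytes:
    """Transparent recall: the caller never learns where the bytes lived."""
    if path in cloud and os.path.getsize(path) == 0:
        data = cloud[path]
        with open(path, "wb") as f:  # bring it back on first access
            f.write(data)
        del cloud[path]
        return data
    with open(path, "rb") as f:
        return f.read()
```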

El Reg: Would infinite-io have a role to play in the public cloud if the storage array it was front-ending was a software implementation in AWS or Azure, for example?

Mark Cree: The infinite-io system is based on standard x86 code. We could theoretically run it on any hardware with enough horsepower to give the expected data throughput rates. Being a startup, we've focused on on-premises customer solutions, but there is nothing from a technical perspective that would stop us from front-ending storage that was a software implementation in a cloud like AWS, Azure, Google, IBM/SoftLayer, or Virtustream.

+RegComment

Should you use infinite-io or Komprise to tier your data, moving cooling data to cheaper, slower storage tiers and, ultimately, to the public cloud?

Komprise does not dispute infinite-io's usefulness for hot or primary data, but beyond that the two companies hold differing views, views that perhaps only a comparative benchmark run could prove or disprove with real-world data.

It would make sense, perhaps, to run pilot installations if your shortlist included both of these suppliers. ®
