IT admins hate this one trick: 'Having something look like it’s on storage, when it is not'

Memory... lights the access speed of RAM. (Or does it?)

Debate An argument has sprung up between two rival startups about how to solve the same technical problem, each with plenty of reason to say the other's tech is not up to scratch. But the exchange raises some interesting issues about how to solve slow access to moved files, where to store metadata, and more.

How best to archive files yet preserve ready access? Opinions differ between using symbolic links and keeping metadata in memory: Komprise uses the former, infinite-io the latter.

Infinite-io's CEO Mark Cree recently took issue with Komprise's view of how to solve slow access to moved files by replacing stubs with symbolic links.

Komprise co-founder, president and COO Krishna Subramanian quickly responded with ripostes to infinite-io's assertions.

(Some of the answers have been edited for brevity).

Mark Cree: I applaud Komprise for getting a product to market quickly... The real issue is leaving something behind that’s not the real data, no matter what you call it. Either way, you run into problems:

  1. The space savings between a stub and a symbolic link [are] almost irrelevant.
  2. IT admins hate having something look like it’s on storage, when it is not. It makes it extremely hard to do triage when a disaster happens.
  3. Scans are outdated before they finish.
  4. These types of solutions don’t scale well and kill NAS performance.

Komprise on 1: Space savings between a stub and a symbolic link

Krishna Subramanian: I think he is missing the point – we are not replacing a stub with a symbolic link because it takes up less space.

There are two reasons customers hate stubs. The first is that stubs are proprietary, so you need either storage agents so that each storage system can understand the stub, or some proprietary interface to each storage system. Stubs are therefore not portable, and managing stub revisions alongside storage upgrades or migrations is a nightmare.

The second reason is that stubs are static and point to the moved data – it is as if you had only one map to your data, and that map lived in the stub. If that stub is corrupted or deleted for any reason, your data is orphaned. So stub management is a nightmare, and it often requires a database that must itself be backed up.

Komprise eliminates both these issues by using dynamic links to create an open, standards-based cross-storage interface that is resilient to failures.

First, a link is a standard construct that the file systems understand, so no proprietary interface is required. We use links not to save space (over what is used by stubs) but to move data transparently without proprietary, restricted approaches such as a stub.

Since the advent of the [Windows] XP operating system, both SMB and NFS file systems have supported symbolic links. With that development we are now able to use a standard construct that a file system understands and supports to transparently forward an access request for archived data to Komprise.

Second, unlike other stub-based approaches, we don’t store the context in the stub. With those approaches, if you lose the stub, you lose access to the moved file. Komprise maintains context internally and within the target storage. Thus if a stub is deleted, it can be recreated, assuming the deletion was inadvertent.
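
For illustration, here is a minimal sketch of the general symlink-archiving pattern being described – move a file to secondary storage, leave a standard symbolic link behind, and keep the mapping outside the link so a deleted link can be recreated. The archive root, registry and helper names are hypothetical, not Komprise’s actual code:

    import os
    import shutil

    ARCHIVE_ROOT = "/mnt/capacity-tier"   # hypothetical target storage
    link_registry = {}                    # context held outside the link itself

    def archive(path):
        # Move the file to capacity storage, then leave a standard
        # symbolic link behind so existing applications still resolve it.
        target = os.path.join(ARCHIVE_ROOT, path.lstrip("/"))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.move(path, target)
        os.symlink(target, path)
        link_registry[path] = target      # remember the mapping independently

    def repair(path):
        # Because the mapping lives outside the link, an inadvertently
        # deleted link can be recreated rather than orphaning the data.
        if not os.path.lexists(path) and path in link_registry:
            os.symlink(link_registry[path], path)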

Komprise on 2: IT admins hate having something look like it’s on storage, when it is not

Krishna Subramanian: We’ve heard just the opposite! IT administrators are unable to determine what data to archive because they are not the owners of the data, so today they must ask the users for permission – and of course users never want their data moved, so nothing gets moved.

With Komprise, they don’t need to ask. They can move data based on IT policies and the data is still accessible and visible to the user so that they can operate on it if needed.

We’ve found that any time you rely on humans/users to do something, it never works. This approach bypasses that crucial roadblock.

We provide the option to give a visual indication that the file is indirect or make it fully transparent – but almost all our customers choose the fully transparent path.

Komprise on 3: Scans are outdated before they finish

Krishna Subramanian: Yes, they are … for hot data! Our success has always come from mapping the appropriate technology to the use case at hand. When moving data that is, say, over six months old (and we are finding that on average 50 per cent of the data on primary storage is over one year old), we use an adaptive scan that runs in the background without interfering with active work, and we find that the file servers are not impacted.

We also find that during that scan period maybe 0.01 per cent of the files cross the threshold and become six months old. We catch these on the next scan and move the files then. Since we are dealing with cold data we do not need to be real-time, and this eliminates the unwanted, disruptive overhead on the source file servers. Had we been fronting the data and providing metadata access to hot data, this approach would not work.
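
As a rough sketch of such a threshold scan (the six-month cutoff and the use of access time are assumptions for illustration, not the product’s actual logic), a background pass only has to pick out files older than the cutoff; anything that ages past the threshold later is caught on the next pass:

    import os
    import time

    COLD_AFTER = 180 * 24 * 3600   # assumed six-month threshold, in seconds

    def find_cold_files(root):
        # One background pass; files that cross the threshold after
        # this scan are simply picked up on the next one.
        cutoff = time.time() - COLD_AFTER
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    if os.stat(path).st_atime < cutoff:
                        yield path
                except OSError:
                    pass   # file vanished mid-scan; skip it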

We let existing companies (e.g. NetApp, Pure, EMC) who are good at managing hot data manage hot data. We provide a risk-mitigated approach that our customers really appreciate.

Komprise on 4: These types of solutions don’t scale well and kill NAS performance

Krishna Subramanian: He might be thinking of legacy client-server solutions that are limited by central bottlenecks, such as databases, and so have trouble scaling.

We are a fully distributed scale-out architecture with no central bottlenecks. And we don’t kill NAS performance because we run in the background – traditional approaches run in the foreground and so they disrupt active usage.

We are like a housekeeper of data – just as you would not want your housekeeper clearing dishes while you are eating dinner, Komprise adaptively backs off and runs non-intrusively in the background when the file servers are in active use or the network is in active use.
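
A toy version of that housekeeper-style back-off might look like the following – the load-average check stands in for whatever signal the real product actually measures, and the threshold is an assumption:

    import os
    import time

    BUSY_LOAD = 4.0   # assumed 'server is busy' threshold

    def throttled(work_items):
        # Hand out work only while the host is quiet; otherwise sleep
        # and retry, so active users never compete with the housekeeper.
        for item in work_items:
            while os.getloadavg()[0] > BUSY_LOAD:   # Unix-only load signal
                time.sleep(30)
            yield item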

Our typical customer manages petabytes of data across 10,000+ shares involving several hundred million files across file servers, and we scale seamlessly without customers having to set any special QoS policies or manage our environment.

El Reg: How does infinite-io define the value of metadata?

Mark Cree: Komprise seems to totally miss the value of metadata. The model that does back-end metadata scanning is flawed and slow to recall data. Since the metadata is constantly changing as users access files, a static scan is obsolete before it even finishes. You’re likely to get a lot of false positives on file migrations that will lead to file ping-ponging as active files get migrated and then need to be brought back.

On metadata

Krishna Subramanian: Again, he has completely missed the boat. What he is saying makes sense if you are managing ALL of the metadata – hot and cold – as infinite-io does. For his solution, what he is saying is indeed correct; it does not apply to us.

Infinite-io does an initial scan to create the metadata and, I assume, sniffs the network for any metadata changes while scanning. In the process it creates a metadata server that then becomes the central point through which all data transactions occur. If infinite-io goes down, you lose access to all of your data. It is fronting a customer’s data – hot and cold. We find this to be a very high-risk approach.

Mark Cree: The real value of metadata, in our opinion, is to enable the management of vast amounts of data AND to maintain the performance for both active data and archive data as they grow.

Krishna Subramanian: We fundamentally disagree. We feel hot data should be managed by primary storage which the customer bought to manage hot data. We see the primary storage as a large cache of all the hot data. Over time, 99.999 per cent of the data will be cold and that will be on capacity storage. Komprise manages all of that cold data and provides ways to transparently access, search and potentially restore that data as needed.

When cold data is accessed it is cached on Komprise, thus providing fast access. If the access rate exceeds some custom policy, the data is re-hydrated back onto the primary storage. This approach allows us to leverage the primary storage as a cache for hot data. As a result, we do not require extensive, super-fast and expensive hardware.
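
One way to picture that caching and re-hydration policy is a simple access counter with a promotion threshold – the dictionaries and the three-access trigger below are illustrative stand-ins, not Komprise’s real tiers or policy engine:

    # Stand-ins: one dict as the Komprise-side cache, one as primary storage.
    cache = {"projects/old_report.dat": b"...archived bytes..."}
    primary = {}
    access_counts = {}
    REHYDRATE_AFTER = 3   # assumed policy: promote after three accesses

    def read_cold(path):
        data = cache[path]                             # fast on-premises hit
        access_counts[path] = access_counts.get(path, 0) + 1
        if access_counts[path] >= REHYDRATE_AFTER:
            primary[path] = data                       # re-hydrate to primary
        return data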

El Reg: So how do you process metadata?

Mark Cree: At infinite-io we take a different approach. We install like a network switch in front of installed storage. Our product is totally transparent to all installed apps and hardware, making us easier to install and maintain. We do a one-time scan of all metadata and put the results in DRAM in our platform.

The metadata is then kept up to date by watching network activity. In fact, we actually learn metadata from the network while performing the initial scan. Since we have all the metadata, and it is continually being updated in real-time by watching network traffic, we don’t need or use stubs. We know where everything is and simply redirect the request at the network-level.
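
In spirit, this amounts to holding the whole namespace in an in-memory map and answering lookups from it, redirecting anything else to wherever the file actually lives. The sketch below uses invented paths and fields to show the shape of that lookup, not infinite-io’s code:

    # Entire namespace in RAM: path -> (current location, stat attributes).
    metadata = {
        "/vol/projects/q3.xlsx": ("nas1", {"size": 48213, "mtime": 1503912000}),
        "/vol/archive/2015.tar": ("cloud", {"size": 10**9, "mtime": 1420070400}),
    }

    def handle(path, op):
        location, attrs = metadata[path]
        if op == "stat":
            return attrs       # answered from DRAM; storage never touched
        return location        # data ops are redirected to the real location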

Krishna Subramanian: As stated above, they front all data. If they go down, what happens to the access to the customer’s data? When they come back up, how long does it take to replace their stale metadata with fresh metadata?

I liken this to a housekeeper who tells you, “I will keep your house totally in order and neat, but there is only one catch – I will tell you and your family when to wake up and when to eat your meals, and I will watch everything you do. As long as you abide by these rules, you will be OK.”

Would you hire this housekeeper?

This is the problem solutions like Acopia had, and it is why network-level data management has not worked.

El Reg: Does this have an effect on data access time and performance?

Mark Cree: Since metadata requests make up 80 per cent or more of most workloads, having the metadata in DRAM allows us to dramatically enhance the performance of any NAS system(s) behind us. We serve metadata on average at 65 microseconds directly on the network, totally off-loading the NAS system(s) behind us. The fastest SSD-based NAS systems today generally respond to metadata requests in the 500 microsecond to millisecond range – yes, we can make a NetApp appear 5x-10x faster.

Krishna Subramanian: I would agree with this. Back in the day, metadata chatter was killing NAS file servers (FSs). There were many “metadata” servers geared to take up that chatter, thus freeing FSs to do what they do – read/write files. They are not in business today. FSs have solved this problem with fast SSDs. While infinite-io may have still faster SSDs, it comes at a cost, and it sits in the path of hot data. Why would a customer buy expensive primary storage only to front it with expensive network-layer metadata servers?

El Reg: You say there is a public cloud access angle to this. What is it please?

Mark Cree: Where this gets really interesting is with cloud-migrated data. We give our customers the tools to create effective cloud migration policies. With them, they rarely need to recall data that has been migrated to a cloud – usually less than 5 per cent of the time.

Even better, of the 5 per cent we may need to recall, 80 per cent of those recalls are requests for metadata. In those cases, infinite-io can intercept that metadata request on the network and respond out of DRAM, making the cloud faster than a flash array and rarely requiring an actual file recall from the cloud. If you are going to a public cloud, this dramatically reduces in-and-out file charges.
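
Taking those figures at face value, the arithmetic is straightforward – of the migrated files, only recalls that are not metadata-only ever leave the cloud (a back-of-envelope check using the percentages quoted above):

    recall_rate = 0.05        # share of migrated files ever recalled
    metadata_share = 0.80     # of those recalls, answered from DRAM
    egress_rate = recall_rate * (1 - metadata_share)
    print(f"{egress_rate:.0%} of migrated files incur egress")  # prints: 1%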

A system that scans data on the back-end has no way to performance-enhance anything. It’s usually the reverse: the continual scanning slows overall system performance.

Krishna Subramanian: To me this makes little sense. But this paragraph does say what we’ve been saying: cold data that you migrate is rarely accessed. (In fact, their 5 per cent seems quite high. We’re not seeing such high numbers.)

In my mind, it does not make sense to risk existing infrastructure by putting in expensive, fast hardware resources to accelerate access to data that is rarely accessed! Furthermore, the latency is not just in accessing the metadata; the bigger issue has to do with accessing the content, and infinite-io does not address this bigger issue.

Komprise will cache data accessed from the cloud and reduce further access to the cloud, thus reducing costs and providing on-premises access latency. It will re-hydrate that data onto the primary storage based on custom policies, to further improve access latency and reduce cloud egress costs.

Their statement that a system that scans data on the back-end does not improve performance, and actually slows things down, is correct only if that system is designed incorrectly – if it gets in the way of active data usage, and if it simply scans data and then sits on its hands. We are an adaptive, analytics-driven, scale-out data management solution designed to optimize handling of cold data non-intrusively across storage using open standards.
