
Tegile founder: Will we IPO? Yes. And we're NOT another NetApp

CEO Rohit Khetrapal chews the fat about all things hybrid


Servers would compute and storage store

WtH: How do you view the trend towards server-based storage?

Rohit Khetrapal: We have always believed that you should do the work where it is required; the architecture must be natural in its implementation. If you start taking storage components and placing them in the server layer, you lose cache consistency.

For instance, you can use server flash to speed up a virtual desktop. The reality is that the server is doing the writing, and if that server fails in some way, you cannot guarantee the write. When you let the server do the compute and let the storage do its part, you can deliver a much more robust solution.
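To make the failure window he is describing concrete, here is a minimal Python sketch contrasting the two acknowledgement models; the class and method names are hypothetical, not any vendor's API. With write-back caching the ack arrives before the array has persisted the data, so a host failure in that window loses an acknowledged write; with write-through it cannot.

```python
# Hypothetical sketch, not a real product's API: contrasts write-back
# server-side caching with write-through against a shared array.

class Array:
    """Shared storage array: a write here is considered durable."""
    def __init__(self):
        self.persisted = {}

    def write(self, key, value):
        self.persisted[key] = value


class Server:
    """Host with local flash used as a cache in front of the array."""
    def __init__(self, array):
        self.array = array
        self.local_flash = {}  # stranded if this host dies before flushing

    def write_back(self, key, value):
        # Ack immediately; the flush to the array happens later. If this
        # host fails before flushing, the acknowledged write is lost.
        self.local_flash[key] = value
        return "ACK"

    def write_through(self, key, value):
        # The array persists the data before we ack; local flash only
        # accelerates subsequent reads.
        self.array.write(key, value)
        self.local_flash[key] = value
        return "ACK"
```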

To us the naturalness of the architecture cannot be disturbed that easily.

WtH: This is a very interesting topic to me, as several storage companies placed their bets the other way.

In my talk with the CEO of Diablo Technologies, he expressed the opposite view, namely that it is all about increasing the amount of work per server. His perspective is that storage in itself does not produce work and that it should be as close as possible to compute to increase the work done per server.

If you take that idea and add a technique like PernixData's, which provides data resilience, how is that not a good solution?

Rohit Khetrapal: I believe the purity of an architecture always makes it more robust. Now we are trying to have different servers take care of robustness.

In the storage layer, the write guarantee is something that has been delivered for decades.

We can move compute very easily; VMware has made that possible. But data cannot be moved that quickly. Being able to guarantee the data is extremely critical. In a ULLtraDIMM failure, what does one do? To me it is really about data guarantee and data robustness; this is why you've got these separated architectures.

WtH: I get your point here about purity of architecture. I do believe, however, that from a customer standpoint there are two valid options when your old storage array does not perform. You either invest in a new array, or you implement a server-side caching layer like we discussed.

Rohit Khetrapal: You tell me, William: where should the caching architecture sit if EMC or NetApp could do it correctly? It should sit where the storage layer sits. It is because they are unable to do it that we are moving caching to the server layer.

WtH: Well… to be specific, in PernixData FVP a write is written to two other hosts before it is acknowledged. That sounds pretty robust to me. If you add that to the extremely fast flash options out there, from Diablo for instance, you can't deny that makes for a compelling solution.
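The mechanism WtH is pointing to works roughly like the sketch below: a write is acknowledged only once it has landed on the local host plus a set number of peer hosts, so no single host failure can lose an acknowledged write. This is an illustrative Python sketch of that replicate-before-acknowledge idea, not PernixData's actual code; all names are hypothetical.

```python
# Illustrative sketch of replicate-before-acknowledge write caching;
# hypothetical names, not PernixData's implementation.

class Host:
    def __init__(self, name):
        self.name = name
        self.flash = {}  # host-local flash holding cached writes

    def store_replica(self, key, value):
        self.flash[key] = value
        return True


def acknowledged_write(key, value, local, peers, replicas=2):
    """Ack only once the write lands on `replicas` peer hosts as well,
    so a single host failure cannot lose an acknowledged write."""
    local.flash[key] = value
    confirmed = 0
    for peer in peers:
        if peer.store_replica(key, value):
            confirmed += 1
        if confirmed == replicas:
            return "ACK"  # durable on the local host plus two peers
    raise IOError("not enough peers confirmed; write not acknowledged")


hosts = [Host(f"esx{i}") for i in range(3)]
print(acknowledged_write("blk42", b"data", hosts[0], hosts[1:]))
```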

Rohit Khetrapal: Fair! That is true, but to me you are still compensating for what the storage architecture cannot do. If the storage architecture did this naturally, you would not need it.


