
Parting the clouds for IT admins: We chat to CloudPhysics

Part 1: Kabuki theatre with sales and pre-sales

VMware and CloudPhysics

WtH: Is it true that VMware is looking at CloudPhysics and scratching its head, thinking it should have come up with this solution itself?

John Blumenthal: VMware was and is a great company, and many of us made our careers there, so in many ways it is regarded as the mothership. Much of what we were working on was not really in scope for the work being done at VMware, mainly because ours is a SaaS-oriented approach to delivering analytics, unlike the on-premises approach that VMware took.

We have many discussions with VMware, as we are in its partnership programme. We still have a great deal of allegiance, and an interest in offering more value to VMware customers.

WtH: How quickly can CloudPhysics include new technologies like PernixData, Infinio and others, and make recommendations on them?

John Blumenthal: Something like PernixData is a very interesting layer that your IT infrastructure might contain.

Our goal is ultimately to model and simulate an entire data centre. Today we have broken that down into smaller, discrete simulations, one of which has to do with caching. We have a caching analytics service with a module that allows us to work with any vendor and tweak the model to incorporate how their caching mechanism actually works. We sit down with a lot of storage vendors, like Fusion-io and Proximal Data, and we know the PernixData guys very well from our time at VMware.
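CloudPhysics has not published its simulator internals, but the per-vendor module idea can be pictured as a pluggable cache model. A minimal sketch, assuming a hypothetical CacheModel interface in which each vendor's behaviour is captured by overriding a single access() method:

```python
from abc import ABC, abstractmethod
from collections import OrderedDict

class CacheModel(ABC):
    """Hypothetical plug-in point: each vendor's caching behaviour
    is captured by overriding access()."""

    @abstractmethod
    def access(self, block_id: int) -> bool:
        """Process one block access; return True on a cache hit."""

class SimpleLRU(CacheModel):
    """One possible vendor model: admit every miss, evict strict LRU."""

    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.cache = OrderedDict()  # block id -> None, ordered by recency

    def access(self, block_id: int) -> bool:
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # refresh recency on a hit
            return True
        self.cache[block_id] = None
        if len(self.cache) > self.size:
            self.cache.popitem(last=False)  # evict least recently used
        return False
```

Tweaking the model for a specific product would then amount to subclassing it with that vendor's admission and eviction rules.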

It would be great to sit down with Satyam Vaghani and Frank Denneman to update the model for PernixData, so that a user can run a CloudPhysics service before procuring Pernix and understand the value proposition and benefit of introducing it before they even purchase. Use real data to do that, and avoid a POC and the cost and exercise that come along with it.

WtH: So CloudPhysics customers can dry-run new technology to see how it will impact their actual production environment?

John Blumenthal: That is one of our main use cases, yes. The procurement process is often a very wasteful exercise in today's IT infrastructure, because your storage vendors do not actually know your environment and, conversely, you don't really understand their technology.

The way the dance goes today is a kind of kabuki theatre with sales and pre-sales. It involves trying to replicate a production system and generate data that may or may not be indicative of what would actually happen in production.

So we looked at this and said: we can build a model of caching technology that has a mathematical basis to it. We then gather workload traces non-disruptively from a cluster – and that is our secret sauce, being able to do these collections non-disruptively.

Then we can run these traces through our simulator and, literally within 2 to 3 per cent variance, indicate to a user what the benefit would be for one workload, or a group of workloads, running in a cluster with a cache of a certain size. That benefit is highly accurate and highly quantitative.
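Again, the real simulator is proprietary; as an illustration of the trace-replay idea, here is a minimal sketch that replays a hypothetical trace of block IDs through a strict-LRU model and reports the hit ratio each candidate cache size would have delivered:

```python
from collections import OrderedDict

def replay_hit_ratio(trace, cache_size_blocks):
    """Replay a block-access trace through an LRU cache of the given
    size and return the fraction of accesses that would have hit."""
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)           # mark as most recently used
        else:
            cache[block] = None
            if len(cache) > cache_size_blocks:
                cache.popitem(last=False)      # evict least recently used
    return hits / len(trace) if trace else 0.0

# Hypothetical trace of block IDs captured from a production cluster;
# the same trace is replayed against several candidate cache sizes.
trace = [1, 2, 3, 1, 2, 4, 1, 5, 2, 1, 3, 2]
for size in (2, 4, 8):
    print(f"cache of {size} blocks -> hit ratio {replay_hit_ratio(trace, size):.0%}")
```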

You can avoid a lot of the danger involved in making the wrong purchase this way, too.

Being able to simulate exactly what is in production, and to do it non-disruptively without having to spin up a proof of concept, is what we believe is the future of how IT infrastructure will be sold.

WtH: Do you consider this to be your biggest use case?

John Blumenthal: It is the one that is bringing revenue to the company most immediately. We built this as one of our first services about a year ago and it gathered the interest of many storage vendors. It is the basis of the company's first revenues.

But expanding upon that, we have introduced other services that are focused less on procurement accuracy and efficiency and more on risk and safety.

For example, we have a High Availability simulator, which has an HA health check service attached to it. This is based on the work that Frank and Duncan have put together in their analysis and writings on High Availability. We have encapsulated much of that in our HA simulator and HA health check services.

The nature of the problem we are solving here is that as you provision virtual machines or modify HA policy groups, you don't have visibility into the impact of those changes.

Meaning you do not know whether you have reserved enough resources elsewhere in the cluster to succeed in the event of a fail-over. Our simulators allow you to look at the consequences of a particular change and understand very accurately whether you are wasting resources by having too much capacity, or have too little, in which case you will not have a successful fail-over.
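This is not CloudPhysics' actual model, but the core of such a check can be sketched as a packing problem: given hypothetical per-host capacities and per-VM memory reservations, test whether each host's VMs could be restarted on the remaining hosts after a failure.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    capacity_gb: int        # usable memory on the host
    vm_reservations: list   # per-VM memory reservations, in GB

def survives_failure(hosts, failed):
    """Can the VMs on `failed` be restarted on the remaining hosts?
    Uses first-fit-decreasing placement against spare capacity."""
    spare = {h.name: h.capacity_gb - sum(h.vm_reservations)
             for h in hosts if h is not failed}
    for vm in sorted(failed.vm_reservations, reverse=True):
        target = next((n for n, s in spare.items() if s >= vm), None)
        if target is None:
            return False    # fail-over would not succeed
        spare[target] -= vm
    return True

# Hypothetical three-host cluster: failing esx2 strands its 96 GB VM.
cluster = [
    Host("esx1", 128, [32, 16, 16]),
    Host("esx2", 128, [96, 24]),
    Host("esx3", 128, [32, 16]),
]
for h in cluster:
    print(h.name, "failure tolerated:", survives_failure(cluster, h))
```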

Additionally, we have a couple of other services focused on understanding particular operational hazards; these are starting to kick in and take on dramatic interest among the user base we are involved with.


Blumenthal claims his firm provides Google-level IT infrastructure utilisation through its sensing and analysis of data from VMware virtualised servers. The second part of Willem's interview with John Blumenthal covers this and will be published next week. ®
