Parting the clouds for IT admins: We chat to CloudPhysics
Part 1: Kabuki theatre with sales and pre-sales
A company's IT infrastructure is crucial to its survival and success, which is why most companies invest heavily in it to reduce risk and increase performance.
Despite these investments, the complex and dynamic nature of IT environments means that most companies do not have a complete picture of their infrastructure and the workloads that run on it. There is a risk and performance penalty associated with this. The risk lies in making changes to a largely unknown environment - such as adding resources, adding workloads or changing policies.
The performance penalty arises from the fact that an unknown IT infrastructure cannot use its resources optimally.
I recently spoke to the CEO of a company hoping to exploit this situation: CloudPhysics. It has developed a non-disruptive data collection technique for VMware vSphere environments that gathers data about the IT infrastructure and the workloads running on it. This data is analysed using VMware knowledge bases and anonymised data from other CloudPhysics customers.
Based on this analysis, IT administrators are given a set of recommendations with precise execution directions to reduce risk and optimise performance in their IT infrastructure.
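To make the collect-analyse-recommend pipeline concrete, here is a minimal sketch in Python. The metric names, thresholds and advice strings are entirely invented for illustration - they are not CloudPhysics' actual analytics or APIs, just a toy version of turning raw per-VM data into actionable recommendations.

```python
# Illustrative sketch only: hypothetical metrics and thresholds,
# not CloudPhysics' real analysis engine.

def recommend(vm_metrics):
    """Turn raw per-VM metrics into (vm, advice) recommendations."""
    recs = []
    for vm, m in vm_metrics.items():
        # High CPU ready time usually means the host is over-committed.
        if m["cpu_ready_pct"] > 5.0:
            recs.append((vm, "Migrate to a less contended host: "
                             f"CPU ready {m['cpu_ready_pct']}% exceeds 5%"))
        # Consistently low active memory suggests over-provisioning.
        if m["mem_active_pct"] < 20.0:
            recs.append((vm, "Right-size: reduce configured memory, "
                             f"only {m['mem_active_pct']}% is active"))
    return recs

sample = {
    "web-01": {"cpu_ready_pct": 8.2, "mem_active_pct": 55.0},
    "db-02":  {"cpu_ready_pct": 1.1, "mem_active_pct": 12.0},
}
for vm, advice in recommend(sample):
    print(vm, "->", advice)
```

The point of the real product, as described above, is that each recommendation arrives with an execution path rather than just a flagged anomaly.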
CloudPhysics offers services that allow IT administrators to dry-run emergency procedures like fail-overs on the real production environment and evaluate the impact of new technology on their production environment.
I interviewed John Blumenthal, CloudPhysics CEO, to get more insight into its product and to hear his views on the market.
WtH: So how are things going John?
John Blumenthal: I am quite amazed by the attention we have received. The launch at VMworld in San Francisco went very well, and we seem to have strong appeal to VMware admins and to the specialists who design and implement IT infrastructure.
CloudPhysics overall process
WtH: CloudPhysics extensively gathers information and suggests changes and optimisation strategies. Does it implement them as well?
John Blumenthal: Not yet. We do make recommendations; our goal is to not just find a problem, but also to provide an execution path and a remediation plan in the analytics that we are delivering. We believe that is the next generation of how data is put into use by a VMware admin or designer.
It is not enough just to index and search this data looking for correlations; you have to actually find causation. As the well-known phrase in data science goes, correlation is not causation - correlated variables are frequently not causally related at all.
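Blumenthal's point can be shown with a tiny, deliberately contrived example: two data-centre metrics that both track a hidden third factor (overall daily load, in this sketch) will correlate almost perfectly even though neither causes the other. The metric names and numbers here are invented for illustration.

```python
def pearson(a, b):
    """Pearson correlation coefficient for two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Hidden confounder: overall workload rises through the day.
load = list(range(24))
# Both metrics track load; neither causes the other.
vm_count = [2 * h + 10 for h in load]        # more VMs at peak hours
storage_latency = [3 * h + 5 for h in load]  # latency grows with load

print(pearson(vm_count, storage_latency))    # correlation is ~1.0
```

A search over logs would surface this near-perfect correlation, but acting on it (say, deleting VMs to fix latency) would miss the real cause - which is exactly the gap between correlation-finding and causal analysis described here.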
We think that a lot of log analytics platforms, which effectively allow you to do these forms of searches, fall short of what an administrator needs: an analytics-based answer that provides a direction of what to do.
However, we do not take the final step, which is to actually execute the recommendation. That has more to do with the nature of a SaaS service attaching to your network and the concerns people have about a remote system executing changes in their environment. So we go as far as the data and the execution plan, but not the actual execution at this time.
WtH: Do you imagine developing a perhaps locally installed add-on that does allow for execution in the near future?
John Blumenthal: We do. We ultimately intend to do that and deal with all the security concerns. So in that sense it will look like a highly informed resource management approach. Among the team members we have people who were responsible for much of the resource scheduler at VMware.
Our idea has been to implement greater quality and quantity of the analytics that drive those changes. As the market adopts our solution, we will step forward with options for making these final changes, as you pointed out.
WtH: So a large portion of your team consists of ex-VMware and ex-Google employees, right?
John Blumenthal: Yes indeed. One of my co-founders, Irfan Ahmad, was a core member of the DRS team and the author of Storage DRS and Storage I/O Control, and Carl Waldspurger, who works with us as an advisor, was the principal engineer responsible for the original architecture and implementation of DRS. Carl spends quite a bit of time with us on architecture and direction.
Our goal is to ultimately model and simulate an entire data centre.