CloudSleuth slaps dashboards on heavenly grids

Google performance anxiety cure

With more and more companies deploying real applications out there on public clouds, someone is going to have to deliver sophisticated performance monitoring and modeling tools that span multiple clouds. Such tools would not only help companies using cloudy infrastructure do their comparative shopping, but also let them point fingers at their vendors when application service level agreements are not met.

The CloudSleuth portal, launched by Compuware, which peddles its own line of performance management tools under the Gomez brand name, and a bunch of partners, is interesting and useful for cheapskates. But it has a ways to go before it is a truly useful portal for linking the performance you perceive on any given cloud and network running your applications to what other people are seeing.

CloudSleuth is not the first freebie portal showing performance data for popular clouds. The Cloud Commons, which was formed by chip maker Advanced Micro Devices, Linux giant and cloud wannabe Red Hat, the Silicon Valley branch of Carnegie Mellon University, and a bunch of other companies, has a neat little dashboard called CloudSensor. It shows rudimentary performance metrics for creating and destroying files on Rackspace Hosting clouds and on the four Amazon EC2 data centers (US East and West, Ireland, and Singapore); for Microsoft Azure SQL database and file serving; the uptime over 15-minute intervals for Google Gmail and Windows Live Hotmail email services; and dashboard response time for AWS, Google Apps, Salesforce.com, and Rackspace Cloud services.

The information is neat but primitive, and only shows a graphical window of one or two hours of data. It's fun and somewhat useful, but it's not exactly like getting real-time telemetry that you can extract from your own systems and then use to model future performance.

Compuware and its partners in the CloudSleuth portal - cloud management tool maker GoGrid, content delivery provider CDNetworks, networking giant and server wannabe Cisco Systems, and cloud hosters OpSource and Internet Initiative Japan - want to do a better job of showing customers real-time performance on the public clouds. And they also want to be able to upsell to cloudy infrastructure buyers who are frustrated by the lack of performance information and who would consider moving from one cloud to another, or paying for a content delivery network, to boost the performance of their applications out there on the intertubes.

The CloudSleuth Provider View dashboard, which you can see here, shows the response time and availability of a relatively simple two-page e-commerce Web retailing site selling sneakers. This reference application is hosted on Amazon EC2 images in the company's four data centers, in GoGrid's east and west data centers, on Google's App Engine and Microsoft's Azure platform clouds, as well as on the IIJ GIO, OpSource, Rackspace, and Terremark clouds.

Where possible, the instances running the baby retail application are running Tomcat 6.0.24 in its default configuration on the cloud; App Engine and Azure run their own stuff. This part of the CloudSleuth portal lets you see the response time and availability of these continually running benchmark images over 6-hour, 24-hour, 7-day, and 30-day windows.

CloudSleuth Global Provider View

Not only does the tool show uptime and response time for the virtual machine running the benchmark, but it also takes latency readings across a network of more than 100,000 real PCs in the Gomez customer network that have been programmed to smack these baby sneaker-selling Web sites from 160 different countries over more than 2,500 local ISPs. The idea is to measure performance not only back in the clouds, but also out on the backbone and on the last mile of the Internet link, where real users often see pretty pathetic performance. The benchmark is set up to take measurements 200 times an hour from 125 different general geographical areas around the globe.
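The measurement approach described above, timing page loads from distributed agents and rolling the samples up into response-time and availability figures, can be sketched in a few lines. This is a hypothetical illustration, not CloudSleuth's actual agent code; the URL and timeout are placeholders:

```python
# Minimal sketch of a synthetic monitoring probe of the sort the Gomez
# agent network runs: time an HTTP GET against a target page, record
# whether it succeeded, and roll samples up into an availability figure.
# The URL and timeout values are illustrative placeholders.
import time
import urllib.request

def probe(url, timeout=10.0):
    """Return (response_time_seconds, available) for one request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()  # pull the whole page, as a real agent would
            available = 200 <= resp.status < 400
    except Exception:
        available = False
    return time.monotonic() - start, available

def availability(samples):
    """Percentage of successful probes in a list of (time, ok) samples."""
    ok = sum(1 for _, up in samples if up)
    return 100.0 * ok / len(samples) if samples else 0.0
```

A real deployment would run something like probe() on a schedule (CloudSleuth samples 200 times an hour) from many geographic vantage points and keep per-region time series; the sketch only shows the shape of a single measurement.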

The Cloud Performance Analyzer, which you can play with here, takes a sample of the same application running in the Amazon EC2 East data center and lets you see the performance delay around the networks of the world, drilling down into backbone availability and showing the effect of using content delivery networks to speed up access to this application.

CloudSleuth Performance Analyzer

This tool has a rolling scrollbar like a stock ticker showing the real-time effect of the networks on EC2 performance around the globe.

Compuware is providing these two services to anyone for free, and Doug Willoughby, director of cloud computing at the company, says that they will remain free. Willoughby says that the CloudSleuth tools provided by Compuware are just a starting point and that, over time, Compuware hopes other companies will snap in their own tools and real-time data. He also concedes that service providers or tool vendors may try to directly monetize the large amounts of performance data currently being gathered about clouds and networks.

Compuware already has a year and a half of data stored up, as it turns out. But CloudSleuth only shows a rolling 7 days in the Cloud Performance Analyzer view and only up to 30 days in the Global Provider View. None of this data is in a form that you can download; you can look at it, but you can't touch it. ®
