How to get a firm grip on application performance
Monitoring for good behaviour
When applications go wrong, they can either stop working or slow to a crawl. The problem for IT managers is keeping track of when this happens and why, and preferably preventing it altogether. How can they do this?
Ideally, your applications are all built on a single framework, such as J2EE or .NET, which makes monitoring web server performance a lot easier.
In the real world, however, application performance usually relies on a complex fabric of interconnected components, including web and database servers, legacy applications, network hardware and software and policy management tools.
Analyst Gartner argues that this complexity is often exacerbated by technologies that “bind late”, bringing together the components needed to complete a transaction long after it has been launched. A supposedly identical repeated transaction may take different routes through the infrastructure, using different resources, with different performance profiles.
What really matters is what the end-user sees, according to Jeff Cotrupe, global program director at Frost & Sullivan’s Stratecast practice.
“It’s really focusing on monitoring the system’s performance from a user perspective, and how that fits into overall customer experience monitoring (as in ‘jeez, I didn’t like my bill’),” he says.
See the world
Some application performance management systems such as Keynote use background agents in different locations across the world to sense whether users in Brazil, say, are suffering from particular access problems.
“Then they can go back to their customers, and say that the point of failure was at the ISP or a given server. This is one browser we found, and this is why,” says Cotrupe.
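A background agent of this sort is essentially a script that issues synthetic requests and records latency and failures. As a minimal sketch (the URL, field names and thresholds here are illustrative assumptions, not any vendor's actual API), such a probe might look like this:

```python
import time
import urllib.request

def probe(url, timeout=5.0):
    """Issue one synthetic request and report status and latency.

    Real monitoring services run probes like this from agents in many
    geographic locations, so a failure can be pinned to a region or ISP.
    """
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:
        # Any network error counts as a failed check for this location.
        return {"url": url, "ok": False, "error": str(exc)}
    return {
        "url": url,
        "ok": 200 <= status < 300,
        "latency_s": time.perf_counter() - start,
    }

# Hypothetical usage: probe an unreachable endpoint and log the result.
result = probe("http://localhost:1")
print(result["ok"])
```

A fleet of these, each reporting its region alongside the result, is enough to answer the "are users in Brazil suffering?" question at a basic level.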
Such agents can monitor mobile applications as well as the PC sector. Seattle-based Rootmetrics, for example, is one company that focuses on handsets and mobile operators.
The Apdex Alliance has published a user experience index designed to be used for all transactional applications.
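The Apdex score itself is a simple ratio: samples at or under a target threshold T count as "satisfied", those between T and 4T count as "tolerating" at half weight, and anything slower is "frustrated". A minimal sketch, assuming an illustrative target of 0.5 seconds (T is chosen per application):

```python
def apdex(response_times, t=0.5):
    """Return the Apdex score (0.0-1.0) for response times in seconds.

    Samples <= T are 'satisfied', samples in (T, 4T] are 'tolerating'
    (counted at half weight), and anything over 4T is 'frustrated'.
    """
    satisfied = sum(1 for r in response_times if r <= t)
    tolerating = sum(1 for r in response_times if t < r <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# Hypothetical page-load samples: 2 satisfied, 2 tolerating, 1 frustrated.
samples = [0.2, 0.4, 0.9, 1.4, 2.5]
print(round(apdex(samples), 2))  # prints 0.6
```

A score near 1.0 means users are happy; the index deliberately collapses many raw timings into one number a business manager can track.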
But end-user experience monitoring is not the only approach. Ideally, you want to get a handle on problems before the users do. That requires measuring the resources used by the application and often features some predictive analysis.
Gartner lists five dimensions of application performance monitoring: end-user experience monitoring; user-defined transaction profiling; application component discovery and modelling; application component deep-dive monitoring; and the application performance management database.
Transaction profiling involves following a transaction through the system, while component discovery and modelling identifies the application components used to execute it.
Deep-dive monitoring involves intense monitoring of the different architectural pieces used to fulfil a user request (such as application servers), while the database stores all of the resulting information.
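In practice, following a transaction through the system usually means tagging each request with a correlation ID that every component records against its own timings. A minimal sketch, assuming hypothetical component functions and an in-memory trace store (real tools persist this to the performance management database):

```python
import time
import uuid

# Hypothetical trace store: transaction ID -> list of (component, seconds).
trace_log = {}

def traced(component):
    """Decorator that records how long a component spends on a transaction."""
    def wrap(fn):
        def inner(txn_id, *args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(txn_id, *args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                trace_log.setdefault(txn_id, []).append((component, elapsed))
        return inner
    return wrap

@traced("web")
def handle_request(txn_id):
    query_database(txn_id)  # the ID follows the call down the stack

@traced("db")
def query_database(txn_id):
    time.sleep(0.01)  # stand-in for real database work

txn = str(uuid.uuid4())  # correlation ID assigned when the request arrives
handle_request(txn)
for component, elapsed in trace_log[txn]:
    print(f"{component}: {elapsed:.3f}s")
```

Because the same ID travels with the request, a slow transaction can be decomposed into per-component timings even when repeated transactions take different routes through the infrastructure.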
That is a lot of dimensions for companies already struggling with delivery and budgeting issues. David Chapman, an application performance management specialist at Fujitsu, cuts to the chase.
“Correct levels of utilisation and proper capacity management can prevent issues before they occur,” he says.
At a basic level, scripting can help here, as your automated jobs look for things that could cause problems, such as memory leaks, table defragmentation or network traffic congestion, and try to take action before they hinder performance.
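Such a job can be very simple. The sketch below checks two cheap early-warning signals, disk usage and load average, against illustrative thresholds (the thresholds and alert handling are assumptions; a real job would run from cron and feed an alerting system):

```python
import os
import shutil

# Hypothetical thresholds: tune these to your own environment.
DISK_WARN = 0.90   # warn when a filesystem is more than 90% full
LOAD_WARN = 4.0    # warn when the 1-minute load average exceeds this

def check_disk(path="/"):
    """Return a warning string if the filesystem is nearly full, else None."""
    usage = shutil.disk_usage(path)
    used_frac = usage.used / usage.total
    if used_frac > DISK_WARN:
        return f"WARNING: {path} is {used_frac:.0%} full"
    return None

def check_load():
    """Return a warning string if system load is high, else None (POSIX only)."""
    one_min = os.getloadavg()[0]
    if one_min > LOAD_WARN:
        return f"WARNING: 1-minute load average is {one_min:.2f}"
    return None

alerts = [a for a in (check_disk(), check_load()) if a]
for alert in alerts:
    print(alert)  # in a real job: page someone, email, or open a ticket
```

The point is to act on the trend before it hits users: a disk at 91 per cent is a ticket today rather than an outage on Friday night.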
One small step
Eric Marks, chief executive of consultancy Agile Path, adds that virtualisation can help to make automation easier. “If you just take the first baby step of virtualising your infrastructure, that’s a huge payoff right there,” he says.
This can make provisioning faster, and allow developers and testers to simulate peak load in pre-configured test systems more effectively.
Well, maybe. But virtualising the operating system and the application also risks breaking old models of time-based performance monitoring, creating another challenge for monitoring tools which must take the new mode of operation into account.
As we grope blindly towards these elusive application performance goals, the best approach is to attack the problem from both top and bottom. Monitoring user experiences provides you with useful intelligence about what people are seeing on the desktop (or tablet or phone).
Monitoring system resources through a combination of scripting and management tools will give IT administrators a sense of which parts of the infrastructure need tweaking.
Multi-level monitoring of this kind builds a more holistic view of application performance, and hopefully avoids those embarrassing conversations with business managers in the hallway. ®