Today's users don't expect things to fail
But we know how many support calls we get!
Desktop poll results

This week's mini-poll reveals, as perhaps expected, that the primary sources of information used to measure desktop support issues are the number of support calls hitting the help desk and the number of calls successfully resolved. For over three quarters of organisations, these two rather primitive measurements form the foundation of desktop support measurement. Time to resolve calls is the other significant metric.
More specialised metrics based on the type of call are monitored by just a third of operations, and estimates of diagnosis and resolution costs are followed by only one in ten organisations. We are still a long way from measuring the real cost of support!
We get a better picture of what tools are employed to measure downtime and time-to-fix processes: a wide range of solutions have been adopted, with automated tracking supplied by support desk tools leading the way.
But even these are used by fewer than one organisation in three. We recognise that responses will have been gathered from organisations of all sizes; information garnered from more generic systems management tools is used by another 25 per cent. All the same, this leaves over 40 per cent reporting that they either have no formal tools in place to measure downtime and repair times, or that they have only a general idea of such metrics for reporting purposes.
When it comes to having formal service level policies in place covering time-to-fix operations, 42 per cent of organisations either have standard agreements in place with users, or use agreements based around a variable time to fix dependent on the problem type. Another seven per cent use agreements in which users select a time to fix, each option associated with a defined cost. The largest proportion of support organisations has no formal agreements in place, but attempts to fix each call based on user needs.
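The two agreement models described above — a standard time to fix per problem type, and user-selected tiers with a defined cost — can be sketched as a simple lookup. This is a hypothetical illustration; the table names, problem types, and figures are invented, not drawn from any particular help desk product:

```python
# Hypothetical SLA table: maps problem type to a target time to fix (hours).
SLA_BY_PROBLEM_TYPE = {
    "password_reset": 2,
    "printer_fault": 8,
    "hardware_failure": 24,
}

# Pay-per-tier model: the user selects a time to fix, each with a defined cost.
PAID_TIERS = {
    "gold": {"hours": 4, "cost": 50},
    "silver": {"hours": 24, "cost": 20},
    "bronze": {"hours": 72, "cost": 5},
}

def target_hours(problem_type, tier=None):
    """Return the agreed time to fix for a ticket, in hours."""
    if tier is not None:
        return PAID_TIERS[tier]["hours"]
    # Fall back to a default SLA for problem types not explicitly listed.
    return SLA_BY_PROBLEM_TYPE.get(problem_type, 48)
```

A ticket logged as a "printer_fault" with no paid tier would get the standard eight-hour target; the same ticket under a "gold" agreement would get four hours regardless of problem type.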
Damning with faint praise?

Measuring user perceptions is challenging, but it is clear that the primary metric in use is the number of complaints received. This measure of users' perceptions is counted far more often than the amount of praise received. Nearly half of organisations use internal or external surveys as a guide to user satisfaction with support call handling, while more than 40 per cent rely on the old standby of "gut feeling". Around a quarter of respondents use quality management and reporting tools such as end user response time monitoring. One in ten do nothing at all.
With these standards in place, an interesting reader comment makes the point: "Why no mention of user satisfaction in the metrics? We added this to our support system a couple of years back and it's incredibly helpful to see when users are (dis)satisfied with how their issues are dealt with and you get better participation than with a periodic survey. Investigating individual calls with negative feedback has enabled us to tighten up a number of procedures and it's meant a focus away from volume and 'time to resolve' both of which generally have little relevance to user satisfaction - which is the aim in providing a support service, isn't it?"
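The per-call feedback loop this reader describes could be aggregated along these lines — a minimal sketch with invented record and field names, assuming each closed ticket optionally carries a 1-to-5 satisfaction score captured at closure:

```python
# Hypothetical closed-ticket records with an optional satisfaction score (1-5).
tickets = [
    {"id": 101, "hours_to_resolve": 2, "satisfaction": 5},
    {"id": 102, "hours_to_resolve": 1, "satisfaction": 2},
    {"id": 103, "hours_to_resolve": 30, "satisfaction": 4},
    {"id": 104, "hours_to_resolve": 3, "satisfaction": None},  # no feedback given
]

def flag_for_review(tickets, threshold=3):
    """Return tickets with negative feedback, regardless of how fast they closed."""
    return [t for t in tickets
            if t["satisfaction"] is not None and t["satisfaction"] < threshold]

def mean_satisfaction(tickets):
    """Average score across tickets that actually received feedback."""
    scores = [t["satisfaction"] for t in tickets if t["satisfaction"] is not None]
    return sum(scores) / len(scores) if scores else None
```

Note that ticket 102 closed within an hour yet scores badly — exactly the reader's point that time to resolve can have little relevance to user satisfaction.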
To conclude, monitoring and measurement of support call resolution times and end users' perceptions of the help desk service is a maturing science. Expectation management, and reporting tools that let everyone know how well the support service and the general IT services are operating, can help smooth interactions between users and IT. There is still some way to go to identify the best methods to address these areas. Indeed, the whole of desktop service delivery remains a work in progress for most organisations.
time to close metrics - what management fodder
Ahh, those "time to close" metrics - management bean-counters love them because they think they somehow measure how great a job they're doing. The reality is much different.
If you have a problem that goes unresolved, eventually your IT department will close the ticket out of boredom, or out of the aforementioned bean-counters' pressure to "make their numbers." There is always some "other" resolution code that lets them park it on a technicality and get it off their active list. To the user, what this looks like is harassing phone calls of "can we close the ticket now?"
Pressure is on closing the tickets; any actual resolution of a problem is just an accidental side-effect and not necessarily guaranteed.
Of course, this only measures the success rates for calls which make it to the support desk.
In my experience (as a user, not a helldesk operator), companies outsource the helpdesk to a bunch of monkeys who are incapable of suggesting anything better than "reboot & try again". This is soon followed by a drop in helpdesk calls, since everyone gives up and just gets the office 'expert' to help instead.
The resulting numbers are interpreted as a great step forward; fewer calls to the helpdesk /must/ mean that everything is working better, of course. Then the poor sod who has now become the informal support tech gets caned in his annual review for not meeting his real project deadlines, because he's so distracted by all the cries for help. He quits or gets laid off, so the company has lost one person who really does know what he's doing.
Nothing left to do but award the cost-cutting bonuses for the year, while wondering why everything is going down the pan.
Ending versus solving
The IT industry seems (from a user's perspective) to measure success by the speed of closing calls. This does not, however, necessarily equate with resolving the issue. Or, to put it another way: once the support script has come to its end, they will palm the user off and close the call. "You need to reinstall Windows and lose all your data" or, in the case of HP, "We can't resolve this issue, even though it is clearly covered by your warranty. You'll have to pay for a different level of support and I guarantee they'll be able to resolve the issue even though I couldn't."