Original URL: https://www.theregister.com/2007/07/02/hp_speed_survey/

Another damned thick survey

HP seems to be continuing and expanding on Mercury's approach

By David Norfolk

Posted in Software, 2nd July 2007 08:02 GMT

There are two sorts of people in the world: those who divide the world into two sorts of people and those who don't... No: those who measure what they do, and those (probably a larger group) who trust to luck and public opinion and love the dangerous life. I think the first group should be running the businesses that people (customers, employees) depend on.

On the other hand, I've gone on record as being very cynical about the "measurements" represented by most surveys. So I'm pleased that Hewlett-Packard (HP) is continuing the approach of its recent acquisition, Mercury: measuring the environment it's playing in with properly designed surveys overseen by an independent third party (the Economist Intelligence Unit). As someone in HP marketing said to me, "we'd perhaps have liked it to say that Requirements Management was more important in Europe than it did - but you simply can't manipulate this sort of survey...". Good.

I was at a panel discussion and Q&A session to celebrate the publication of HP's "IT at the Speed of Business" report, produced in collaboration with the Economist Intelligence Unit - you can read more here and download the whole report after registering here. One question was "what percentage of IT initiatives undertaken in your company over the past three years has had the intended positive business outcomes?". Overall, more than half of those surveyed said under 50 per cent. Which would worry me if I were in IT in those companies - I've lived through a culling of the IT department.

Well over half of the respondents thought their company would experience a substantial increase in profitability from faster delivery of IT services and projects - although, interestingly, agreement was over 70 per cent in Asia Pacific and under 50 per cent in the Americas. And this doesn't really fit with the idea that half of IT projects don't do anything for the business - even well-conducted surveys often don't deliver easy answers, just an informed basis for discussion.

Nevertheless, the report claims to show that there's a correlation between company performance (profit growth) and improving the speed of IT delivery. Nearly 60 per cent of respondents from Asia Pacific thought that better requirements definition would speed delivery, but only 22-23 per cent in Europe and the Americas thought the same. And remember that the outsourcing industry doesn't use only Asian programmers, so it's not just Asia that's involved in remote delivery these days.

The report contains an opinion from Philip Everson, a Deloitte consulting partner, to the effect that IT staff are often incentivised wrongly - rewarded for the number of help desk calls answered within 30 seconds, for example, instead of for the number of problems fixed in the first interaction.

One interesting result falls into the "glass half full/half empty" category. About 50 per cent didn't think that their career development or remuneration would be damaged by late delivery of a project they were responsible for...

So perhaps that's why IT is always "late, over-budget and wrong". Or is it a sign that we're often just a convenient scapegoat for (business) management failures elsewhere? It would be so unfair to punish the sacrificial goat, and perhaps it wouldn't volunteer to take the flak next time if you did.

One discussion issue occupying the group was the definition of "failure". Suppose a project is cancelled before it ever goes live, because the business has moved on while it was being built and it no longer has a business justification (which implies that some form of "portfolio management" is going on). Is this a "success" or a "failure"? Is the process a success, because it allows you to cancel projects that are no longer justified at an early stage, before they waste too much money?

An attendee described a project which used Oracle tools to generate application code from models. This wasn't easy - the tools didn't do quite what the marketroids promised and weren't quite as supportive as the developers would have liked. The developers decided that the model-driven approach, with these tools at least, wasn't for them.

However, the project worked: it delivered a fit-for-purpose system, although it did come in over budget and late. Now, was this project a failure? By some criteria, yes (partially, anyway), but by others - delivering a working system and increasing useful organisational knowledge - it was a success. Perhaps "success" and "failure" are relative, and depend on how you specify the "success criteria" at the very beginning.

You do define "success criteria" and baseline current practice before you start anything, of course? If you don't, how do you know whether anything succeeds or fails? Is the real metric in many companies, perchance, whether the CEO likes the project - or goes drinking with its project manager? ®