Compuware's testing roundtable

Software quality is back on the board's agenda

I’ve just been representing Reg Developer as one of only three journalists at an interesting roundtable discussion devoted to “Software Quality, Best Practice and Governance” (which seems to mean Testing, in the widest sense).

Among those present were Sarah Saltzman of Compuware, the sponsor of the roundtable event; Teresa Jones of Butler Group; Brian Wells, chair of something called the TMMi foundation (of which more later); Geoff Thompson, of Experimentus; and, representing the businesses using this stuff, Philip Griffiths, Global Applications Architect at Heinz. Most noticeable, by the way, was Sarah’s reluctance to talk about Compuware’s products too much – well done!

The discussion started off conventionally – but got more controversial later. The importance of early defect removal was stressed, as was the need for business resources to be made available, so that testers could relate the evolving system to the needs of the business as well as (or instead of) what the IT group said it was going to deliver. Clearly, it helps if the testers contribute to the project from inception, rather than being brought in at the end as a “barrier to going live”.
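To make “early defect removal” a little more concrete, here is a minimal, purely illustrative sketch (the function, figures and test are invented, not anything discussed at the roundtable): a check written alongside the code it exercises, so a defect surfaces at the developer’s desk rather than in a last-minute test phase.

    # Illustrative only: a unit test written alongside the code it checks,
    # so a defect is caught at check-in rather than at go-live.
    def net_price(gross, vat_rate=0.175):
        """Strip VAT from a gross price (17.5 per cent assumed for the example)."""
        return gross / (1 + vat_rate)

    def test_net_price_round_trip():
        # Adding VAT back onto the net price should recover the gross price.
        gross = 117.50
        assert abs(net_price(gross) * 1.175 - gross) < 1e-9

    if __name__ == "__main__":
        test_net_price_round_trip()
        print("net_price round-trip test passed")

The point is not the arithmetic but the timing: the check exists before anyone is tempted to treat testing as a barrier to going live.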

But I’ve heard all this before, at various times over the last 30 years or so. We even touched on the fact that if your project is behind schedule and testing is being squeezed, shipping in “testing mercenaries” on short-term contracts probably puts you further behind schedule (someone on the team has to stop what they’re doing and get these people up to speed). But this is one of Fred Brooks’ insights from 30 years or so ago. Why is it still relevant? And, since we really do know a lot about managing software quality, why do we still accept bugs in software that is only marginally fit for purpose, sometimes just because it is popular (the Microsoft syndrome)?

And here’s where the discussion turned interesting. You won’t get blamed for buying popular (or well-marketed) software even if it isn’t high quality, and bugs in features you don’t use don’t matter much – so perhaps we come to accept bugs in the features we do use (after all, if popular software contains bugs, perhaps bugs don’t matter). Then, sometimes the problems are not in the software but in the spec it was written to, so you can’t safely blame anyone (“the user is always right”), even though using software which automates the wrong process is seriously expensive and disruptive.

Once again, it’s the people/culture/change issues that bite. Philip Griffiths pointed out that getting the resources for managing quality out of a management often preoccupied with short-term stock market performance isn’t easy (in the long term, quality is free, but only if you are prepared to invest in it up front in the shorter term). And people often have a vested interest in the status quo.

“Hero management” – by people who get a lot of fun (and probably overtime and bonuses) out of waiting for things to go wrong and then fire-fighting with flair and enthusiasm – is rife. And it’s a very expensive approach to managing quality. It was even noted that a profitable testing market is evolving, full of people who’ve done a course on testing but who have little experience of “defect risk management” and who “don’t know what they don’t know”, yet are paid high rates to help with last-minute quality panics. And often a system will go live even though people at the sharp end know it can’t work, because no one is brave enough to tell management the bad news, and because people believe they can manage their way out of the mess anyway – hero management again.

Griffiths sees a need for a business model fundamentally based on “fitness for purpose” quality. The CEO must define a way of working that is process-oriented, not technology-focussed, and must support the quality champions in middle management who are actually managing change. And s/he must allocate business resources to testing and analysis.

It all comes down to organisational maturity and process improvement, it seems to me. Without this, you can even “get it right” – usually in response to some crisis – and then slip back into old ways. Geoff Thompson described a firm which had done just that. In a situation of extreme pressure with no time for rework, quality was built in from the start and the developers/testers were given the time they needed – on condition that that was all they got – and, on delivery, the product “just worked” without any problems. But now that the pressure is off, techniques like “pair testing” (with the testers embedded in development, alongside the programmers) that helped achieve this are less popular.

Which is where TMMi comes in. It’s based on the TMM (Test Maturity Model) process model, from around 2003, which adds testing to the Capability Maturity Model (CMM; now Capability Maturity Model Integration, CMMI), which doesn’t go into much detail on actually achieving demonstrable quality. According to Brian Wells, testing is what demonstrates quality, and it covers all forms of defect removal.

Now, the TMMi foundation is taking the textbook TMM model and “refactoring” it (simplifying it, adding stuff, moving stuff around – training is moving from level 3 to level 2, for example). The aim of this independent foundation is to produce a “generic but detailed enough” public domain reference model for the testing process – which (amongst other things) should help developers choose tools without the fear that their tool choice is being driven by a vendor agenda embodied in a proprietary process.

The TMMi model is non-prescriptive – it won’t mandate an independent test team, for example, but it does suggest that there is a need for independence in at least part of the testing process – and it is research-driven, not commercial. The first draft of the new TMMi model should be available for review in 1Q 2007; delivery of version 1 is expected at the end of 2007, and I hope to look at it in more detail in Reg Developer in due course.

And Compuware? Well, it is looking at mapping TMMi onto the requirements-driven “risk-based testing” process models it is developing for itself (see, for example, the Risks Are For The Weekend Executive Guide, at the bottom of the page, registration needed). That is something else we may look at in Reg Developer – and perhaps we’ll look at some of Sarah’s testing and requirements management tools too.
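As a rough illustration of what requirements-driven “risk-based testing” means in practice (a sketch of the general idea only, not Compuware’s process or tooling), test cases can be ordered by a simple risk score – likelihood of failure times business impact – so that whatever testing time is left before a deadline goes to the riskiest requirements first. The test names and numbers below are invented for illustration.

    # Hypothetical sketch of risk-based test prioritisation:
    # risk score = likelihood of failure x business impact, highest first.
    test_cases = [
        {"name": "invoice totals",      "likelihood": 0.7, "impact": 9},
        {"name": "login screen layout", "likelihood": 0.3, "impact": 2},
        {"name": "order submission",    "likelihood": 0.5, "impact": 8},
    ]

    for case in sorted(test_cases,
                       key=lambda c: c["likelihood"] * c["impact"],
                       reverse=True):
        print("%-20s risk score %.1f" % (case["name"],
                                         case["likelihood"] * case["impact"]))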

David Norfolk is the author of IT Governance, published by Thorogood.
