Ain't testing finished yet, Ma?

Making the most of seminar breaks

A week or so ago I met Geoff Brace, a Director of Owldata Ltd, at a BCS CMSG (Configuration Management Special Interest Group) seminar on ITIL - yes Matilda, ITIL is relevant to developers. We got into a discussion about testing, sparked by this article.

It reminded Geoff about some attempts he'd made to predict defect rates in software - "if testing finds a defect rate above a certain threshold," Geoff suggests, "perhaps the code should be rejected [and rewritten] rather than tested further".

This makes sense: if your testing is estimated at 50 per cent efficient, then finding 20 bugs means you probably have 20 left. The more you find, the more likely the software is to be defective when you finish (making some assumptions about the quality and consistency of testing).
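To make that arithmetic concrete, here's a minimal sketch in Python (the 50 per cent figure and the bug counts are just the ones from the example above, and the fixed-efficiency assumption is exactly the one flagged in the brackets):

```python
def estimated_remaining_defects(defects_found: int, test_efficiency: float) -> float:
    """Estimate defects still in the code, assuming testing finds a fixed
    fraction (test_efficiency) of everything that is there:
    total ~= found / efficiency, so remaining ~= total - found."""
    if not 0 < test_efficiency <= 1:
        raise ValueError("test_efficiency must be in (0, 1]")
    return defects_found / test_efficiency - defects_found

# Testing that is 50 per cent efficient and finds 20 bugs
# suggests roughly 20 more are still in there.
print(estimated_remaining_defects(20, 0.5))  # -> 20.0
```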

Now consider a certain “cowboy” style of programming - code a chunk of a program and throw it at the compiler, using the compiler to check for undeclared variables and syntax errors. You then iterate, refining and correcting the code until it compiles - any defects found in testing are then obviously the tester's fault :-)

Well, Geoff says he found out the hard way that if you write carefully and desk-check thoroughly, you not only get code that compiles first time but code that also runs correctly. So, he suggests, why not combine these ideas? Write the code and compile it - then count the number of iterations needed to get a clean compile.

Geoff suggests that this number could be a good predictor of the number of defects detectable at run time. Unfortunately, it's difficult to prove - he points out that you'd have to deny the code writer a syntax-directed editor and control access to the compiler so that you can actually count the number of times the same item (in successive versions) is submitted.
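You couldn't enforce that with 1970s rigour today, but you could approximate the count with a small wrapper in front of the compiler. A sketch only - the wrapper idea, log file name and use of gcc are all assumptions for illustration, not anything Geoff proposed:

```python
#!/usr/bin/env python3
"""Hypothetical wrapper: invoke this script instead of the compiler
directly, and it logs one line per attempt so you can later count how
many submissions each file needed before a clean compile."""
import subprocess
import sys
import time

LOG = "compile_attempts.log"  # invented log file, for illustration

def main() -> int:
    # Pass everything straight through to the real compiler (gcc here).
    result = subprocess.run(["gcc", *sys.argv[1:]])
    status = "clean" if result.returncode == 0 else "failed"
    with open(LOG, "a") as log:
        log.write(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} "
                  f"{' '.join(sys.argv[1:])} {status}\n")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```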

You'd probably end up with something like the old batch processing of FORTRAN compiles, when you punched the program up on cards, put them in a tray in the computer room and got back a printout the following morning. Well, who'd want to go back to those days? But, thinking about it, desk-checking code to eliminate compile errors did find a lot of logic errors too.

So, we decided that this idea probably didn't get us anywhere in the end. But it did highlight one point. If you don't know how many errors are in your code when you start testing, how do you know whether you've finished testing?

The obvious answer is 'when the time allocated for testing has run out' - i.e., when you hit the go-live date the boss agreed to before anyone knew what the project involved in detail - but that's really hard to defend.

Another approach is to plot 'defects found' against time and to stop when the curve flattens out - but that might just show that the test pack is inadequate...
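If you wanted to automate the 'has the curve flattened?' judgement, a crude sketch might look like this (the window and threshold values are invented, and any real stopping rule would need calibrating against your own defect history):

```python
def curve_has_flattened(cumulative_defects: list[int],
                        window: int = 5, threshold: int = 2) -> bool:
    """Crude stopping rule: report 'flattened' if the last `window`
    test periods added fewer than `threshold` new defects in total."""
    if len(cumulative_defects) < window + 1:
        return False
    newly_found = cumulative_defects[-1] - cumulative_defects[-(window + 1)]
    return newly_found < threshold

# Cumulative defects found at the end of each week of testing.
weekly_totals = [5, 12, 18, 22, 24, 25, 25, 25, 25, 25]
print(curve_has_flattened(weekly_totals))  # True - hardly anything new lately
```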

And there are various mathematical predictors for potential defects, so you can stop when you've found something like that number.
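One well-known example of such a predictor is capture-recapture (the Lincoln index): set two testers loose independently and estimate the total defect population from how much their findings overlap. A rough sketch, with invented figures:

```python
def lincoln_index(found_by_a: set, found_by_b: set) -> float:
    """Capture-recapture estimate of the total defect population:
    total ~= (A * B) / overlap, where A and B are the numbers each
    independent tester found and overlap is what they found in common."""
    overlap = len(found_by_a & found_by_b)
    if overlap == 0:
        raise ValueError("no overlap - the estimate is unbounded")
    return len(found_by_a) * len(found_by_b) / overlap

# Invented figures: tester A logs 25 defects, tester B logs 20,
# and 10 appear on both lists -> estimated total = 25 * 20 / 10 = 50.
# They found 35 distinct defects between them, so ~15 may remain.
a = set(range(25))          # defects 0-24
b = set(range(15, 35))      # defects 15-34
print(lincoln_index(a, b))  # -> 50.0
```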

However, it really comes down to balancing risks - the longer you delay going live, the greater the risk of business losses (assuming the program does something useful) caused by staying on the old processes. The earlier you go live, the greater the risk of the new application not working, or causing damage to the business. I find risk-based testing (see here, for example) a very attractive approach.

But there's another predictor in all of this. If someone doesn't have a clear idea of the "success factors" for testing and can't come up with a rational approach to deciding when it's finished, I predict there's a good chance that the application will be rubbish.

Last word to Geoff: "When I was working with a team developing avionic systems, their final acceptance criterion was, 'Would you fly it?' They actually visualised themselves as the pilot. This was an interesting approach and seemed effective."
