Ain't testing finished yet, Ma?

Making the most of seminar breaks

A week or so ago I met Geoff Brace, a Director of Owldata Ltd, at a BCS CMSG (Configuration Management Specialist Group) seminar on ITIL - yes Matilda, ITIL is relevant to developers. We got into a discussion about testing, sparked by this article.

It reminded Geoff of some attempts he'd made to predict defect rates in software - "if testing finds a defect rate above a certain threshold," Geoff suggests, "perhaps the code should be rejected [and rewritten] rather than tested further".

This makes sense: if your testing is estimated at 50 per cent efficient, then finding 20 bugs means you probably have 20 left. The more you find, the more likely the software is to be defective when you finish (making some assumptions about the quality and consistency of testing).
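
To make that arithmetic concrete, here is a minimal sketch of the estimate; the efficiency figure and the rejection threshold are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope defect estimate from an assumed test efficiency.
# All figures here are illustrative assumptions.

def estimate_remaining(defects_found: int, test_efficiency: float) -> float:
    """If testing catches test_efficiency of all defects, estimate how
    many remain once defects_found have been caught."""
    estimated_total = defects_found / test_efficiency
    return estimated_total - defects_found

# Testing that is 50 per cent efficient and has found 20 bugs
# suggests roughly 20 are still in there.
print(estimate_remaining(20, 0.5))  # 20.0

# Geoff's rejection rule: if the estimate exceeds an agreed threshold,
# reject and rewrite the code rather than test it further.
REJECT_THRESHOLD = 10  # assumption: each team would pick its own figure
if estimate_remaining(20, 0.5) > REJECT_THRESHOLD:
    print("Reject and rewrite rather than continue testing")
```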

Now consider a certain “cowboy” style of programming - code a chunk of a program and throw it at the compiler, using the compiler to check for undeclared variables and syntax errors. You then iterate, refining and correcting the code until it compiles - any defects found in testing are then obviously the tester's fault :-)

Well, Geoff says that he found out the hard way that if you write carefully and desk-check thoroughly, you get code that not only compiles first time but also runs correctly. So, he suggests, why not combine these ideas? Write the code and compile it - then count the number of iterations needed to get a clean compile.

Geoff suggests that this number could be a good predictor of the number of run-time detectable defects. Unfortunately, it's difficult to prove - he points out that you'd have to deny the code writer use of a syntax-directed editor and control access to the compiler so that you can actually count the number of times the same item (at different versions) is submitted.
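
You could get close to that measurement today with a thin wrapper that stands in for the compiler and tallies every submission of each source file. A minimal sketch, assuming gcc as the underlying compiler and a local JSON log - both arbitrary choices for illustration:

```python
#!/usr/bin/env python3
# Minimal sketch of a compile-counting wrapper: invoke this instead of
# the real compiler and it tallies how many attempts each file takes.
# The compiler name and log location are assumptions for illustration.
import json
import os
import subprocess
import sys

LOG = os.path.expanduser("~/.compile_counts.json")

def main() -> int:
    counts = {}
    if os.path.exists(LOG):
        with open(LOG) as f:
            counts = json.load(f)
    # Count one submission for every source file on the command line.
    for arg in sys.argv[1:]:
        if arg.endswith((".c", ".cpp")):
            counts[arg] = counts.get(arg, 0) + 1
    with open(LOG, "w") as f:
        json.dump(counts, f)
    # Hand the real work to the actual compiler and pass back its result.
    return subprocess.call(["gcc"] + sys.argv[1:])

if __name__ == "__main__":
    sys.exit(main())
```

The per-file tallies it accumulates are exactly the "iterations to a clean compile" number Geoff wanted to correlate with run-time defects.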

You'd probably end up with something like the old batch processing of FORTRAN compiles, when you punched the program up on cards, put them in a tray in the computer room and got back a printout the following morning. Well, who'd want to go back to those days - but, thinking about it, desk checking code to weed out compile errors did find a lot of logic errors too.

So, we decided that this idea probably didn't get us anywhere in the end. But it did highlight one point. If you don't know how many errors are in your code when you start testing, how do you know whether you've finished testing?

The obvious answer is 'when the time allocated for testing has run out' - i.e., when you hit the go-live date the boss agreed to before anyone knew what the project involved in detail - but that's really hard to defend.

Another approach is to plot 'defects found' against time and to stop when the curve flattens out - but that might just show that the test pack is inadequate...

And there are various mathematical predictors for potential defects, so you can stop when you've found something like that number.
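
One of the simplest such predictors fits a reliability-growth curve to the cumulative defect counts and reads off its asymptote as the estimated total. A sketch using the classic Goel-Okumoto form m(t) = a(1 - e^(-bt)), with invented weekly figures:

```python
# Fit a Goel-Okumoto reliability-growth curve to cumulative defect
# counts; the asymptote 'a' estimates the total defects in the code.
# The weekly figures below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # Expected cumulative defects found by time t.
    return a * (1.0 - np.exp(-b * t))

weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cumulative_defects = np.array([12, 21, 28, 33, 36, 38, 39, 40], dtype=float)

(a, b), _ = curve_fit(goel_okumoto, weeks, cumulative_defects, p0=[50, 0.3])
print(f"Estimated total defects: {a:.0f}")
print(f"Found so far: {cumulative_defects[-1]:.0f}, "
      f"estimated remaining: {a - cumulative_defects[-1]:.0f}")
```

With figures shaped like these, the fitted asymptote lands only a little above the 40 defects already found - the cue that testing is approaching the 'stop' point.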

However, it really comes down to balancing risks - the longer you delay going live, the greater the risk of business losses (assuming the program does something useful) caused by sticking with the old processes. The earlier you go live, the greater the risk of the new application not working, or causing damage to the business. I find risk-based testing (see here, for example) a very attractive approach.
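
That balance can be put in crude expected-cost terms. A toy sketch in which every figure is an invented assumption, just to show the shape of the trade-off:

```python
# Toy risk balance for the go-live decision. Every figure below is an
# invented assumption; the point is the shape of the calculation.
COST_OF_DELAY_PER_WEEK = 10_000  # business loss from using old processes
EST_REMAINING_DEFECTS = 20       # from whichever predictor you trust
COST_PER_LIVE_DEFECT = 3_000     # average damage if a defect goes live
DEFECTS_FOUND_PER_WEEK = 5       # current testing throughput

def expected_cost(extra_weeks_of_testing: int) -> float:
    """Expected cost of going live after a given number of extra weeks."""
    remaining = max(0, EST_REMAINING_DEFECTS
                    - DEFECTS_FOUND_PER_WEEK * extra_weeks_of_testing)
    return (extra_weeks_of_testing * COST_OF_DELAY_PER_WEEK
            + remaining * COST_PER_LIVE_DEFECT)

for extra_weeks in range(6):
    print(f"{extra_weeks} more weeks of testing -> "
          f"expected cost £{expected_cost(extra_weeks):,.0f}")
```

With these invented numbers the expected cost bottoms out at four more weeks of testing - the answer falls out of the risk figures, not the calendar.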

But there's another predictor in all of this. If someone doesn't have a rational idea of the "success factors" for testing and can't come up with a rational approach for deciding when it's finished, I predict that there's a good chance that the application will be rubbish.

Last word to Geoff: "When I was working with a team developing avionic systems, their final acceptance criterion was, 'Would you fly it?' They actually visualised themselves as the pilot. This was an interesting approach and seemed effective."
