Mathematical approaches to managing defects

Radical new approaches toward software testing needed?

Bayesian analysis, by Pan Pantziarka

Some of the hardest questions to answer in development are about whether testing is "finished": Have we done enough testing, where should we concentrate testing effort, and when do we release the software?

There are usually countless pressures influencing these decisions – with enormous penalties in terms of loss of prestige as well as financial consequences if the decisions are badly wrong – and yet very often we depend on "gut feel" for an answer.

Even when software is passed through a formal testing process, the question of when to stop testing is not an easy one to answer. Does the fact that a component or module has had a lot of defects picked up (and corrected) during testing tell us more about the quality of the component or about the efficacy of the tests?

Given the reality that we can never get the resources required to test as much as we would want, and, just as importantly, that the testing process is itself imperfect, is there anything better than intuition to help developers gauge when software is ready to roll?

One of the things that would help is an objective model of the quality of a package at any given phase of the development lifecycle. Such a model can then be used to predict accurately the number of defects that remain to be discovered at any stage in the development lifecycle. It then becomes possible to base the "when do we release" decision on something other than gut instinct.

This is precisely the task that Paul Krause of the University of Surrey set out to tackle with the Philips Software Centre (PSC), together with Martin Neil and Norman Fenton of Agena Ltd.

Using Bayesian Networks, they have developed a general model of the development processes at PSC, which has been applied to a number of different software projects (see the detailed research paper here, together with the references therein). Similar work has also been done at Motorola Research Labs in Basingstoke and at QinetiQ.

Bayesian Networks, also known as Bayesian Belief Networks or graphical probabilistic models, are ideal for tasks of this kind. They are a technique for representing causal relationships between events and utilising probability theory to reason about these events in the light of available evidence.

Set of nodes

A Bayesian Network consists of a set of nodes which represent the events of interest, and directed arrows which represent the influence of one event on another. Each node may take on a range of values or states – a node which represents a thermostat, for example, may have states corresponding to "hot" or "cold", or it could represent different temperature ranges or even a continuous temperature scale.

Each node is assigned probabilities representing the degree of belief that it is in each of its possible states. Where a node is influenced by other nodes (i.e. it has inputs from other nodes), it is necessary to compute the conditional probability that it takes on a given state, based on the states of those causal nodes.
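
To make that concrete, here is a minimal sketch in Python of how a node's states and its conditional probability table might be represented. The node names, states and probability values are purely illustrative assumptions for this article – they are not taken from the Philips model.

    # Illustrative sketch of Bayesian Network nodes and a conditional
    # probability table (CPT). All names and numbers are hypothetical.

    # States of two "cause" nodes and one "effect" node.
    spec_quality = ["good", "poor"]        # quality of the specification
    team_experience = ["high", "low"]      # experience of the development team
    defects_introduced = ["few", "many"]   # defects injected during coding

    # CPT for defects_introduced, conditioned on both parents.
    # Each row sums to 1.0: P(defects_introduced | spec_quality, team_experience)
    cpt_defects = {
        ("good", "high"): {"few": 0.90, "many": 0.10},
        ("good", "low"):  {"few": 0.70, "many": 0.30},
        ("poor", "high"): {"few": 0.60, "many": 0.40},
        ("poor", "low"):  {"few": 0.25, "many": 0.75},
    }

    # Belief that many defects are introduced when the spec is poor
    # and the team is inexperienced.
    print(cpt_defects[("poor", "low")]["many"])   # 0.75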

Bayes' Theorem is used to simplify the calculation of these conditional probabilities. When a node takes on a given state – for example, a thermostat with only two states reads "hot" – the probability for that state is set to one and the probability for the "cold" state is set to zero. This information is propagated through the network, updating the other nodes to which it is connected and resulting in a new set of "beliefs" about the domain being modelled.

Bayesian Networks can be used in a number of ways. Firstly, the structure of the network and the various probabilities make it possible to use them for predictive purposes: given this structure and these facts, event x has y chance of occurring. Alternatively, the same network can be used to explain that event x took place because of the influence of events y and z. Reasoning can move in either direction between causes and effects.
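
The two directions of reasoning can be illustrated with a two-node network in plain Python. Again, the node names, states and probabilities are invented for the example; a real tool would propagate evidence through a much larger graph.

    # A two-node illustration: ModuleQuality -> TestFailures.
    # All probabilities are invented for the example.

    # Prior belief about module quality.
    p_quality = {"good": 0.7, "poor": 0.3}

    # CPT: P(many test failures | module quality).
    p_failures_given_quality = {"good": 0.1, "poor": 0.6}

    # Predictive direction: the chance of seeing many test failures
    # before any evidence is observed.
    p_failures = sum(p_failures_given_quality[q] * p_quality[q] for q in p_quality)
    print(f"P(many failures) = {p_failures:.2f}")          # 0.25

    # Diagnostic direction: many failures *have* been observed (evidence),
    # so that node's probability is set to one and Bayes' Theorem updates
    # the belief about the cause.
    posterior_poor = (p_failures_given_quality["poor"] * p_quality["poor"]) / p_failures
    print(f"P(poor quality | many failures) = {posterior_poor:.2f}")  # 0.72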

Applying these principles to software development at Philips, the team created and linked Bayesian Networks for every stage in the lifecycle – from specification through to design and coding, unit test and integration. Using an approach pioneered in previous research projects, the sub-networks for each phase were constructed from a set of templates, leading to an approach that Fenton and Neil dubbed object-oriented Bayesian Networks.

The end result was called AID (Assess, Improve, Decide). The model takes in data about the type of product (number and scale of components, experience of the developers, etc.) and other data relevant to each phase of the lifecycle, and is able to deliver an estimate of the number of defects at any point in the process. The network was validated by using historical data from a number of projects and comparing estimated defects with those actually found.
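
As a rough illustration of the idea of chaining phase-level estimates – and only that, since the phase names, rates and simple arithmetic below are assumptions made for this article rather than the published AID model – one can imagine residual defects from each phase feeding into the next:

    # Purely illustrative: chaining expected defect counts across lifecycle
    # phases. The real AID model is a full Bayesian Network, not a simple
    # expected-value calculation like this.

    phases = [
        # (phase name, expected defects introduced, fraction found and fixed)
        ("specification", 40, 0.50),
        ("design",        60, 0.60),
        ("coding",        90, 0.70),
        ("unit test",     10, 0.80),
        ("integration",    5, 0.85),
    ]

    residual = 0.0
    for name, introduced, detection_rate in phases:
        present = residual + introduced   # defects carried over plus new ones
        found = present * detection_rate  # expected defects found in this phase
        residual = present - found        # expected defects escaping to the next phase
        print(f"{name:13s} present={present:6.1f} found={found:6.1f} residual={residual:6.1f}")

    print(f"Estimated defects remaining at release: {residual:.1f}")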

The results have been very encouraging and the AID tool is being further developed so it can be used in a production environment. One other property of Bayesian Networks is that techniques for "theory revision" – or learning from experience – exist, so that data from each project can be used to refine and improve the network.
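
A simple form of that refinement is updating a conditional probability table from observed project outcomes. The sketch below uses a basic count-based (Laplace-smoothed) update; the node names and data are invented, and real theory-revision techniques for Bayesian Networks are considerably more sophisticated.

    # Illustrative only: refine one CPT row from observed outcomes using
    # Laplace-smoothed counts. Data and node names are invented.

    # Outcomes from past projects where the spec was poor and the team
    # inexperienced: did the module end up with "few" or "many" defects?
    observations = ["many", "many", "few", "many", "few", "many"]

    states = ["few", "many"]
    counts = {s: 1 for s in states}   # start counts at 1 (Laplace smoothing)
    for outcome in observations:
        counts[outcome] += 1

    total = sum(counts.values())
    updated_row = {s: counts[s] / total for s in states}
    print(updated_row)   # {'few': 0.375, 'many': 0.625}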

Many of the lessons learned from the work at Philips – such as dynamic discretisation of probability intervals – have been incorporated into AgenaRisk, a tool which can be used to build software defect risk models. While we are a long way from having such Bayesian models available as Eclipse or Visual Studio add-ins, the work is progressing in the right direction, and once the results start to trickle out from research labs and into the wild, perhaps the answers to those hard questions won't seem so shrouded in doubt after all.

Next page: Formal methods, by David Norfolk
