Rules of Thumb (sometimes) considered harmful
On software testing
Comment We all like rules of thumb - operational, design, or coding shortcuts that spare us any tedious evaluation of alternatives. In effect, they are re-use of experience, which is a Good Thing.
But there is a risk. When I worked for DBA, I used to meet people applying rules of thumb that optimised a DBMS's memory utilisation long after memory had become a commodity - and who were making their programs less usable as a result.
It is worthwhile questioning your favourite rules of thumb occasionally - when (or if) you have a spare moment. Take the one that says, "Never test your own work". It makes some sense, because errors in your understanding of the program spec or the way the business works may well be reflected in your tests, just as much as they are in your code.
Let's suppose you don't test your own work. What happens? Well, you probably hand the QA team programs that don't work properly and don't interoperate. Certainly, they will efficiently find the silly errors you've left lying around and correct errors in the interface spec - and then time will run out and the complicated errors that should be found in systems testing may slip through the net. It would make more sense for the programmers to carry out basic unit testing themselves, leaving the QA team to concentrate on discovering whether the working code actually contributes to what the business is trying to do.
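To make the point concrete, here is a minimal sketch of what "basic unit testing by the programmer" might look like, using Python's standard unittest module. The function under test and its behaviour are purely illustrative assumptions, not anything described in the article:

```python
import unittest

# Hypothetical function under test - the name and behaviour are
# illustrative assumptions, not taken from the article.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to 2 places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Check the ordinary case the spec obviously intends.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    # Check a boundary case: no discount leaves the price unchanged.
    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    # Check that clearly invalid input is rejected rather than silently accepted.
    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Tests like these catch the "silly errors" before the code reaches QA; run them with, for example, `python -m unittest discount_test.py`. They do not, of course, guard against the shared-misunderstanding problem the "never test your own work" rule is aimed at.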
But you will need some help if you plan to unit test your own work - or so Jerry Rudisin (CEO of Agitar, which sells Agitator, a tool that does exactly that) tells me. He suggests that any organisation serious about improving software quality and economics will see great value in unit testing, and he asks such organisations two questions:
"Do you encourage developers to unit test their own code as it is being developed, so that QA can integrate and do system testing starting with a set of individual software units whose behaviour has been validated" And "Do you set objective targets for the thoroughness of unit testing, and manage to those targets?"
Given this, Agitar's offering, unsurprisingly, includes a "management dashboard" that can present the progress of unit testing in a form that management, and even business users, can understand. If programmers are to unit test their own work, this must be both a "business process" and highly transparent to all stakeholders - after all, wishful thinking is a very human characteristic, even among developers!
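One common way to turn "thoroughness of unit testing" into an objective, managed target - sketched here with the widely used Python tool coverage.py, as an illustration only (Agitar's own metrics are not described in this article, and the 80 per cent figure is an arbitrary example) - is a coverage threshold enforced in the build:

```shell
# Run the unit test suite under coverage measurement.
coverage run -m unittest discover

# Fail the build (non-zero exit status) if statement coverage
# falls below the agreed target - 80% here, purely as an example.
coverage report --fail-under=80
```

A hard threshold like this makes the gap between target and achievement visible to everyone, which is exactly the kind of transparency a dashboard is meant to provide.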
This all makes sense to me, as long as the organisation concerned is reasonably mature - that is, it sets objectives and measures the gap between what it achieves and the original objective. And, of course, it then does something to reduce the gap. I think that adoption of formal process-improvement methods (CMM or CMMI - the Capability Maturity Model and its successor - perhaps) provides a "rule of thumb" metric for management's real commitment to "maturity", but Jerry questions my "rule of thumb".
"I think the real issue with CMM is that it tends to be "high ceremony" and rewards repeatability - even of a foolish process - instead of success," he says. "Looser and less "deterministic" approaches, such as agile and XP often lead to better s/w than some CMM Level 3, 4, or 5 processes."
In many cases, this is probably true, but some of the people I meet who really understand CMMI (Capability Maturity Model Integration, which is replacing CMM) would accept eXtreme Programming as very much part of a "level 3", or even higher, process. Another rule of thumb is that skilled and intelligent practitioners often have a better insight into what a process is really about than the "process police" - or even those practitioners who can be spared from the job to talk to journalists about process. ®
David Norfolk is co-editor of Application Development Advisor