Driven to test-action

A practical view of unit testing

Automated tests

Automating test execution relieves much of the tedium that fuels the common perception of testing as a dull and menial task. Unit tests focus on what can be automated within a localised and code-centric view - usability testing and system performance testing, for example, are excluded. Tests need to be written in executable form rather than left in the abstract in a programmer's head or in a document. For unit tests, the most natural executable form is code in the same language as the unit being tested. And how much more fun is that than the manual approach? The programmer testing responsibility is realised as a coding activity!
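
To make that concrete, the following is a minimal sketch of what such an executable test might look like. The LeapYear class, its isLeap method and the use of JUnit are illustrative assumptions rather than anything prescribed here:

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // The unit being tested, written in the same language as its tests.
    class LeapYear {
        static boolean isLeap(int year) {
            return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
        }
    }

    // Each test is executable code rather than a checklist in a document:
    // running it yields a definite pass or fail, not "it kinda looks OK".
    public class LeapYearTest {
        @Test
        public void yearsDivisibleByFourButNotByAHundredAreLeap() {
            assertTrue(LeapYear.isLeap(2004));
        }

        @Test
        public void yearsNotDivisibleByFourAreNotLeap() {
            assertFalse(LeapYear.isLeap(2003));
        }

        @Test
        public void centuryYearsAreLeapOnlyWhenDivisibleByFourHundred() {
            assertFalse(LeapYear.isLeap(1900));
            assertTrue(LeapYear.isLeap(2000));
        }
    }

A test runner gives a definite verdict on each of these cases, which is exactly the kind of judgement discussed next.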

Automated unit tests are definite and repeatable in their judgement, leaving less room for "it kinda looks OK, I think", "I think it's OK, but I don't have the time to check" or "it didn't appear to trip any of the debug assertions when I last ran it by hand, so I guess it's OK".

Automation offers continuous and visible feedback: "All tests passed, none failed". Many people underestimate the value of such feedback. It raises the safety net to a comfortable level and gives a more concrete and local indication of status and progress than many other measures: counting lines of code is simply not a useful measure of progress at any level; end-user-accessible functionality is a system- and team-level indicator; test cases offer a personal minute-to-minute, day-to-day progress indicator.

Unit test coverage is inevitably incomplete in practice, so one's ability to trap defects is limited by the quality and quantity of those tests. This is not a criticism of testing, just an observation of its practical limits. If one were to claim "we will catch all defects through unit testing", it would be a flight of fancy that should be shot down. However, I know of no one advocating any form of unit testing who believes such a proposition. What is guaranteed is that when you have no unit tests you will catch precisely zero defects through unit testing! Defects represent a form of waste that can brake - and even break - development if allowed to accumulate.

It is worth remembering that perhaps one of the most wasteful code-related activities of all is debugging. Pretty much any opportunity to prevent a situation where debugging becomes a normal and necessary activity should be taken - tests, static analysis, reviews, and so on. Unlike debugging, these activities can be estimated reasonably in terms of the time they take. Writing a test involves effort, but that effort is linear, far easier to estimate and has a significantly smaller and more consistent schedule footprint, and running an automated test is not labour intensive at all. Debugging, by contrast, is both hard to estimate and labour intensive: it is not a sustainably cost-effective development practice.

Example-based test cases

The goal of unit tests is to test functional behaviour, which can be expressed using assertions, rather than operational behaviour such as performance. There are many different unit testing styles. A distinction often drawn is between black-box testing and white-box testing (also known as structural testing or glass-box testing). The premise of white-box testing is that tests are based on the code as written, exploring paths and values suggested by the internal structure of the code. For classes, this means the question of examining private data and calling private methods often rears its head.

And it is this question that highlights some of the shortcomings of the white-box approach. White-box testing can, of necessity, only be carried out after the code is written: before the code exists there is no structure on which to perform structural testing. Its emphasis on coverage, and the sequencing this imposes on development, can expose it to the cutting edge of schedule pressure. White-box testing is also coupled to the implementation of a concept: private details get exposed, poked and prodded. Changes to the implementation, even when functional behaviour is preserved, will likely break the tests. The set of test cases is therefore brittle in the face of change, which will discourage programmers either from making internal changes that ought to be made or from carrying out white-box testing in the first place.
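
As a sketch of that brittleness - with a hypothetical Counter class and field name, and again assuming JUnit - consider a white-box test that reaches past the interface to a private detail:

    import java.lang.reflect.Field;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    class Counter {
        private int count;                      // an implementation detail
        public void increment() { ++count; }
        public int current()    { return count; }
    }

    public class CounterWhiteBoxTest {
        @Test
        public void incrementUpdatesThePrivateCountField() throws Exception {
            Counter counter = new Counter();
            counter.increment();
            Field count = Counter.class.getDeclaredField("count");
            count.setAccessible(true);
            // Passes today, but renaming the field or changing the internal
            // representation breaks the test even though the observable
            // behaviour through the interface is unchanged.
            assertEquals(1, count.getInt(counter));
        }
    }

A black-box version of the same test would simply assert that counter.current() returns 1, and would survive any change of internal representation.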

The point of partitioning a system into encapsulated parts is to reduce the coupling of one part of a system to another, allowing more degrees of freedom in implementation behind an interface. White-box testing can strip a system and its developers of that freedom. It is interfaces that define the usage and the paths of dependency within a partitioned system. Consequently, it is interfaces that shape the lines of responsibility and communication in development. This suggests that other development activities need both to respect and to support these boundaries and intentions; working against them can introduce unnecessary friction and distortion [4].

Many previous articles have emphasised the contract metaphor as one of the richer ways of reasoning about an interface [5, 6, 7]. Contracts take a black-box view, focusing on interfaces and on constraints on - but not details of - implementation. A black-box test asserts expected effects for given inputs in a given situation.

An example-based style of black-box testing focuses on presenting tests of an implementation through specific examples that use its interface. The contract can be formulated, framed and tested through representative samples [8], as opposed to exhaustively and exhaustingly running through all possible combinations of inputs for their outputs.
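
As a sketch of what such example-based test cases might look like, the following exercises the standard Arrays.sort purely through its interface; the choice of samples and the test names are illustrative, not exhaustive:

    import java.util.Arrays;
    import org.junit.Test;
    import static org.junit.Assert.assertArrayEquals;

    public class SortingExamples {
        @Test
        public void sortingAnEmptyArrayLeavesItEmpty() {
            int[] values = {};
            Arrays.sort(values);
            assertArrayEquals(new int[] {}, values);
        }

        @Test
        public void sortingAnOrderedArrayLeavesItUnchanged() {
            int[] values = {1, 2, 3};
            Arrays.sort(values);
            assertArrayEquals(new int[] {1, 2, 3}, values);
        }

        @Test
        public void sortingReordersValuesIntoAscendingOrder() {
            int[] values = {3, 1, 2};
            Arrays.sort(values);
            assertArrayEquals(new int[] {1, 2, 3}, values);
        }

        @Test
        public void sortingRetainsDuplicates() {
            int[] values = {2, 1, 2};
            Arrays.sort(values);
            assertArrayEquals(new int[] {1, 2, 2}, values);
        }
    }

Each example states an expected effect for a given input without reference to which sorting algorithm is used internally: the representative samples stand in for the contract rather than an enumeration of all possible inputs.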

Beyond the basics...

The combination of programmer testing responsibility, automated tests and example-based test cases offers motivation and a platform for practical unit testing. Test-Driven Development stands on this base, employing the combination of active testing, sufficient design and refactoring to take the role of testing more solidly into design, and vice versa.

References/bibliography:

1. Kevlin Henney, "Learning Curve", Application Development Advisor, March 2005
2. James O Coplien and Neil B Harrison, Organizational Patterns of Agile Software Development, Pearson Prentice Hall, 2005
3. Rex Black, Critical Testing Processes, Addison-Wesley, 2004
4. Melvin E Conway, "How Do Committees Invent?", Datamation, April 1968, available from http://www.melconway.com/research/committees.html
5. Kevlin Henney, "Sorted", Application Development Advisor, July 2003, available from http://www.curbralan.com
6. Kevlin Henney, "No Memory for Contracts", Application Development Advisor, September 2004, available from http://www.curbralan.com
7. Kevlin Henney, "First Among Equals", Application Development Advisor, November 2004, available from http://www.curbralan.com
8. Kevlin Henney, "Put to the Test", Application Development Advisor, November 2002, available from http://www.curbralan.com

This article originally appeared in Application Development Advisor.

Kevlin Henney is an independent software development consultant and trainer. He can be reached at http://www.curbralan.com.
