Original URL: http://www.theregister.co.uk/2007/03/09/test_driven_development/

Driving on the right side of the code

Test-driven development

By Kevlin Henney

Posted in Developer, 9th March 2007 01:08 GMT

Column Perhaps one of the most interesting things about TDD is not the specification-oriented and design-centred role in which testing is employed, but the amount of explanation it requires as a term. And I don't just mean expanding the abbreviation to Test-Driven Development or Test-Driven Design, as opposed to, say, Telecommunications Device for the Deaf. (That said, I have heard it mistaken for Top-Down Design.)

One illustration of the problem of explanation can be found in the abstract for the session "Are Your Tests Really Driving Your Development?", run by Nat Pryce and Steve Freeman at the XP Day 2006 conference:

Everybody knows that TDD stands for Test Driven Development. However, people too often concentrate on the words "Test" and "Development" and don't consider what the word "Driven" really implies. For tests to drive development they must do more than just test that code performs its required functionality: they must clearly express that required functionality to the reader. That is, they must be clear specifications of the required functionality. Tests that are not written with their role as specifications in mind can be very confusing to read. The difficulty in understanding what they are testing can greatly reduce the velocity at which a codebase can be changed.

One of the common misinterpretations of TDD is that it is no more than getting developers involved in testing. Developers are responsible for their work, which includes ensuring that code is tested at the code level rather than only indirectly, as part of the software system as a whole. This responsibility defines a foundation on which TDD is built, but it is not itself TDD: there is no sense in which the tests are driving any aspect of development. In a traditional developer-based testing approach, tests are employed to discover defects in code, whereas the emphasis of TDD is that tests are also used for specification and design.
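
To make that emphasis concrete, here is a minimal sketch, in Java with JUnit 4 (a choice of language and framework made purely for illustration; the column itself names neither), of tests written with their role as specifications in mind. The Basket class is a hypothetical example, nested into the test class only so that the sketch stands alone:

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class BasketSpecificationTest {

        // A deliberately minimal Basket, nested here only so the sketch compiles
        // on its own; in a real codebase it would live in production code.
        static class Basket {
            private final List<Integer> pricesInPence = new ArrayList<Integer>();
            void add(String item, int priceInPence) { pricesInPence.add(priceInPence); }
            int total() {
                int sum = 0;
                for (int price : pricesInPence) { sum += price; }
                return sum;
            }
        }

        // Each test names one required behaviour and demonstrates it, rather than
        // hunting for defects with an anonymous pile of assertions.
        @Test
        public void totalOfAnEmptyBasketIsZero() {
            assertEquals(0, new Basket().total());
        }

        @Test
        public void totalSumsThePricesOfAllAddedItems() {
            Basket basket = new Basket();
            basket.add("tea", 150);
            basket.add("milk", 80);
            assertEquals(230, basket.total());
        }
    }

Read as specifications, the test names state the required behaviour; a failing test then reports which decision has been broken, not merely that something, somewhere, is wrong.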

The need for terminology explanation, however, often goes further than just clarifying the distinction between traditional developer-based testing and TDD. Not only do many people stop listening after the first word, they follow through by applying one particular interpretation to that word. They hear test and make assumptions and projections about the whole approach based on a limited understanding of what that word might entail.

Presenting a concept by examining the individual words in its name is effective as pedagogy, as the previous quote demonstrates, but such a literal analysis of a jargon term is hopeless if you are trying to read anything much deeper into a concept. Instead of depth, you are likely to end up with an understanding that is at best superficial and at worst just plain wrong.

For another example of misinterpreted jargon, let's consider a related term: Test-First Programming. Historically, the term Test-First Programming predates Test-Driven Development.

Many developers treat them as synonyms, but in practice their usage differs slightly. Test-First Programming is normally used in the context of Extreme Programming and emphasises the practice that code that tests for a behaviour should be written before, rather than alongside or after, the code that fulfils the behaviour. Test-Driven Development often refers to a slightly broader set of practices, often considered outside the context of Extreme Programming, and although a test-first style may be employed, strictness of timing is emphasised less than the role of tests as a form of feedback. Both terms embody the notion that testing is continuous and integrated with development, rather than separated, or even divorced, from it. The Pragmatic Programmers' tips "Test Early. Test Often. Test Automatically." and "Coding Ain't Done 'Til All the Tests Run" capture the essence.
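
A minimal sketch of that rhythm, again in Java with JUnit 4 and again purely by way of illustration (LeapYear is a hypothetical example, not anything these terms prescribe): write a small failing test, then just enough code to make it pass.

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    public class LeapYearTest {

        // Step one: the test is written first. Until LeapYear exists it does not
        // even compile, and that failure is the first piece of feedback.
        @Test
        public void recognisesGregorianLeapYears() {
            assertTrue(LeapYear.isLeap(1996));
            assertFalse(LeapYear.isLeap(1900));
            assertTrue(LeapYear.isLeap(2000));
        }

        // Step two: write just enough code to make the test pass, then move on to
        // the next test. Nested here only to keep the sketch self-contained.
        static class LeapYear {
            static boolean isLeap(int year) {
                return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
            }
        }
    }

Test-First Programming insists on that ordering; Test-Driven Development cares at least as much that the passing and failing of tests keeps feeding back into the design.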

And yet it is possible to miss all of these clues and connotations, and to misinterpret the idea of Test-First Programming as writing all of the test code for a module or major subsystem before writing any of the code: in other words, do the tests first. Yes, this would be a naïve interpretation of the practice according to its name, but that doesn't stop people making such literalist interpretations. So, inevitably, a few have tried to pursue this mutated style doggedly and dogmatically in the name of buzzword compliance.

A little embarrassingly, one of the software testing classics, Glenford Myers' The Art of Software Testing, was updated by authors who appeared to use this coarse-grained interpretation as the basis of a whole chapter on Extreme Testing. Claims such as "In XP, you must create the unit tests first, then you create the code to pass the tests" and "All code modules must have unit tests before coding begins" are enough to make you cringe and hanker for the unadulterated first edition.

Given the confusion sometimes surrounding terminology of any form, and not just TDD-related terminology (although test certainly deserves special mention), one approach to clarification is to use a completely different term. This is part (but not all) of the motivation behind the adoption of terms such as Behaviour-Driven Development and Example-Driven Development.

As Dan North notes, "'Behaviour' is a more useful word than 'test'", which is not just down to a difference in meaning and emphasis: in software development, the word behaviour comes with greater precision and less baggage than test. On the other hand, the term Test-Driven Development is out there and has mindshare. So, if test is where the confusion arises, another path to clarification is to reclaim the word. This latter approach is a steeper uphill struggle, but it can serve to highlight more clearly where misunderstandings spring from and where there is in fact common ground.

For many involved in development the notion of testing is intimately tied with the idea of "trying it out". To "try it out", therefore, needs an "it" to "try out", which necessarily places the act of testing after the act of making — to test-drive a car you first need a car, for example. With such an interpretation, it is indeed difficult to see how development could be driven forward with tests: Development-Driven Testing perhaps, but Test-Driven Development appears not to make any useful sense.

It makes even less sense if you view testing as a primarily human activity. This perspective is surprisingly common, even in this modern age of newfangled, electrically powered, automated computing machinery. If "test it" means "try it out", there is a strong sense that "someone" is doing the "trying". It is true that humans provide the ultimate reality check in software development and are essential actors for some kinds of testing, such as usability testing.

But a general approach to testing that is intrinsically labour intensive is an expensive way of doing something that computers can be programmed to do at a fraction of the cost. Fortunately, most developers and managers who view testing as a manual activity have the wit and wisdom to see that an approach where testing is both continuous and human-based is unsustainable, and therefore do not attempt it. Most, but not all. I have come across a couple of projects that set out on a path to adopt what they thought of as TDD based on the idea of testing as "someone trying it out". Fortunately, these attempts ended in exhaustion and boredom before the projects ended in failure.

Taking a step back, we can see that, in its most general form, a test is a proposition about the execution of a piece of code or some aspect of a system. A test quantifies and qualifies something about a piece of software, whether from the inside (where we may, for example, talk about unit tests and class design) or the outside (where we may, for example, talk about system tests and use cases). A test therefore states a decision as well as the means to confirm that the decision is observed.
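
For instance, in a sketch using Java and JUnit 4 (neither of which this column prescribes), each of the following tests states a decision about a standard library class and supplies the means to confirm it:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.NoSuchElementException;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DequePropositionsTest {

        // Decision: pushed elements come back in last-in, first-out order.
        @Test
        public void pushedElementsArePoppedInLastInFirstOutOrder() {
            Deque<String> deque = new ArrayDeque<String>();
            deque.push("first");
            deque.push("second");
            assertEquals("second", deque.pop());
            assertEquals("first", deque.pop());
        }

        // Decision: popping an empty deque is an error reported by exception,
        // not by a null or default result.
        @Test(expected = NoSuchElementException.class)
        public void poppingAnEmptyDequeFails() {
            new ArrayDeque<String>().pop();
        }
    }

The same form scales outwards: a system test states a decision about a use case and confirms it through the behaviour visible at the system boundary rather than through a class's interface.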

What are the decisions that are being made? They are about requirements: how we organise and interpret requirements, all the way from the system level down to individual classes and methods. Put another way, there is an intimate relationship between the idea of software requirements and testing: they represent two sides of the same coin rather than different currencies. When viewed at the system level, such requirements make up the classical notion of requirements that relate to the purpose of a software product. When viewed internally to the system, requirements represent the design decisions made by developers about how code is structured and the distribution of responsibilities across the code.

Now, here comes the driver. Instead of leaving such decisions in prose, verbal or even unspoken form, write them down for posterity in a way that is both readable and executable. Decisions can be stated explicitly and checked automatically. Importantly, decisions can be made, and tests can be written, in the presence or the absence of code. The process of making these decisions is a dialogue that can be interwoven with other drivers in the development of a system. The degree of interweaving, the role of feedback and the granularity of the dialogue are what distinguish one model of development from another, one project from another, and so on.
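
As a brief sketch of what a readable and executable decision can look like before any working code exists (Java and JUnit 4 once more, purely for illustration; Greeter and SkeletonGreeter are hypothetical names invented for this sketch), the decision is written down as a test and keeps failing, automatically and visibly, until the code honours it:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class GreetingDecisionTest {

        // The decision, stated before any implementation exists: a greeter
        // addresses the user by name.
        interface Greeter {
            String greet(String name);
        }

        // A deliberately unfinished implementation, so that the test below
        // compiles and runs today, and fails until the decision is honoured.
        static class SkeletonGreeter implements Greeter {
            public String greet(String name) {
                throw new UnsupportedOperationException("not yet implemented");
            }
        }

        @Test
        public void greetsTheUserByName() {
            Greeter greeter = new SkeletonGreeter();
            assertEquals("Hello, Ada", greeter.greet("Ada"));
        }
    }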

Of course, a test-driven perspective is not the only one a software developer needs to consider. As with so many other -driven developments in software (model, use case, story, risk, priority, etc.), even this deeper, more specification-oriented and design-centred view of tests should not be considered the sole driver. Likewise, when considered as an act of confirmation, automated testing is also only one of a number of complementary means, which include peer review, static analysis and user feedback. The key is to recognise it as one of many, rather than none of many. The visibility and clarification that TDD can bring to understanding requirements and articulating design make it a skill (yes, it requires skill) worth considering, acquiring and applying.

Welcome back to Kevlin Henney, who’s been occupied with co-writing two volumes in Wiley’s Pattern-Oriented Software Architecture series (these should be available in April).

Pattern-Oriented Software Architecture: A Pattern Language for Distributed Computing, Volume 4

Pattern-Oriented Software Architecture: On Patterns and Pattern Languages, Volume 5