One programmer's unit test is another's integration test

Word games

Returning to the question of programmer testing and what to test, if we believe that the quality of class interfaces, class implementations, class and object relationships, and so on matters, we should demonstrate this care and attention with appropriate tests. Such care and attention is normally described in terms of unit tests.

However, there is a (not so) small matter of terminology we ought to clear up. In spite of the relative maturity of testing as a discipline, there is no single or standard accepted meaning of the term unit test. It is very much a Humpty Dumpty term. But this is not to say that all definitions are arbitrary and any definition will do. When it comes to unit test, what distinguishes one definition from another is its utility. If a definition does not offer us something we can work with and use constructively, then it is not a particularly useful one.

One circular definition of unit test is any test that can be written using something that claims to be a unit-testing framework. Such a definition is often implicitly assumed by many users of the xUnit family of frameworks – it's got unit in the name, so it must be a unit test, right? This means that a unit can be pretty much anything, including a whole system.
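To see how permissive that reading can be, consider a minimal sketch along the following lines, written in JUnit 4 form. Application and Receipt are hypothetical stand-ins for an entire deployed system, not any particular API:

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// A sketch of a test that "counts" as a unit test only because it is written
// with a unit-testing framework. Application and Receipt are hypothetical
// stand-ins for a whole running system, not any real API.
public class WholeSystemAsUnitTest {

    @Test
    public void acceptsAnOrderEndToEnd() {
        // Boot the whole application: configuration, wiring, persistence, the lot.
        Application app = Application.start("test-config.properties");

        // Drive it through its outermost interface, as a user or client would.
        Receipt receipt = app.submitOrder("widget", 3);

        assertTrue(receipt.isConfirmed());
        app.stop();
    }
}
```

Nothing in the framework objects to any of this; whether it deserves the name unit test is precisely the question.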

While we can say that this is not an unreasonable interpretation of the word unit as an English word, it does not give us a particularly useful definition of unit testing to work with in software. It singularly fails to distinguish code-focused tests from external system-level tests, for example.

Kent Beck tends to treat any code-focused test that is smaller than the whole system as a unit test. This at least distinguishes between whole-system tests of the software and internal tests of the code, but it accommodates perhaps coarser "units" than would fit many people's intuitive notion of what a unit test ought to cover.
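As a rough illustration of that coarser notion, a sketch of the following kind (again JUnit 4, with made-up Splitter and CsvLine classes) is code-focused and far smaller than the whole system, yet the "unit" under test is a two-class cluster exercised only through the outer class's interface:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A deliberately small, hypothetical cluster: CsvLine delegates to a Splitter it owns.
class Splitter {
    String[] split(String text, char separator) {
        return text.split(java.util.regex.Pattern.quote(String.valueOf(separator)));
    }
}

class CsvLine {
    private final Splitter splitter = new Splitter();

    int fieldCount(String line) {
        return splitter.split(line, ',').length;
    }
}

public class CsvLineTest {
    // Code-focused and well short of the whole system: a unit test in the
    // broader sense, even though the unit is a cluster rather than one class.
    @Test
    public void countsCommaSeparatedFields() {
        assertEquals(3, new CsvLine().fieldCount("a,b,c"));
    }
}
```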

At the other end of the scale, there are definitions of kinds of tests that identify many different levels of granularity and scale. The problem with too many levels is not just that there is even less consensus about the terms, but that the distinctions are, in practice, either not useful or not consistent.

For example, one scheme for classifying tests differentiates unit testing, component testing, integration testing and system testing. This may or may not be useful depending on what we mean by unit versus what we mean by component. If a unit is taken to be a function or a class in strict isolation from other classes, and component simply means a combination of units, we have a definition, albeit not a very useful one. There are very few classes that are not built in terms of other classes, so everything becomes a component test and the idea of a unit test is irrelevant and impractical, except for the most trivial classes.
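A small, hypothetical example makes the point. The Basket below cannot be tested without also exercising the Money class (and the ArrayList) it is built from, so under the strict reading even this modest test is already a component test:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical classes: even a modest class is built in terms of other classes.
class Money {
    private final long pence;
    Money(long pence) { this.pence = pence; }
    Money plus(Money other) { return new Money(pence + other.pence); }
    long inPence() { return pence; }
}

class Basket {
    private final java.util.List<Money> items = new java.util.ArrayList<>();

    void add(Money price) { items.add(price); }

    Money total() {
        Money sum = new Money(0);
        for (Money price : items) {
            sum = sum.plus(price);
        }
        return sum;
    }
}

public class BasketTest {
    // By the strict definition this is a "component test": Basket is exercised
    // together with Money and ArrayList, not in isolation from other classes.
    @Test
    public void totalsThePricesOfItsItems() {
        Basket basket = new Basket();
        basket.add(new Money(199));
        basket.add(new Money(250));
        assertEquals(449, basket.total().inPence());
    }
}
```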

If, on the other hand, a component is taken to be a DLL or other well-defined and deployable unit of executable code, as is commonly meant in the context of component-based development, then that clearly distinguishes between units, which are classes (and may be built from other classes that are also unit tested), and components, which contain them and can be independently deployed.

While this definition makes sense for a typical .NET or COM project, it doesn't necessarily make sense for other platforms and systems. So, although there are conventions that locally give concrete meaning to these terms, there's enough variation across development platforms and application architectures to mean that the fine distinctions are perhaps too fine for us to use this scheme more generally. The variety of meanings associated with unit test is rivalled only by the diversity of meanings component has gathered in the context of software development.
