Test, test and test again

Should testing drive development or development drive testing?

One tester per developer…

With a ratio of one test engineer to every development engineer, it's hard for design flaws to stay hidden for long, particularly as, from 'Day One' of the development cycle, Watts and his colleagues submit all new builds to an overnight hosedown of test data.

But it's about more than process design: it's about how the software comes apart. When a tool performs a relatively straightforward function, such as comparing two databases, the temptation is to cut straight to the chase and build the logic into the UI as a single testable entity. This seems sensible and economical, but the downside is that it's only testable once the whole thing is finished. Imagine trying to build a motorbike without being able to test that the parts are sound before you bolt them in place.

The solution is to build a fully accessible logic engine and keep a very thin UI as an entirely separate "presentation" layer on top. That way, test engineers can detect the proverbial hairline fracture in a method or subroutine as soon as it's minted, not when it's being taken for a test drive by a potential customer.
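To make the split concrete, here is a minimal sketch in Java. The names (SchemaComparer, ComparisonUi, diffTableNames) are hypothetical illustrations, not Red Gate's actual API; the point is simply that the engine is a plain class a test harness can hammer directly from day one, while the UI is a disposable shell that only formats and prints.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * The logic engine: no UI code anywhere, so test harnesses can
 * exercise it directly. (Hypothetical names, for illustration only.)
 */
class SchemaComparer {
    /** Reports tables present in one schema map but not the other. */
    public List<String> diffTableNames(Map<String, String> left,
                                       Map<String, String> right) {
        List<String> differences = new ArrayList<>();
        for (String table : left.keySet()) {
            if (!right.containsKey(table)) {
                differences.add("missing from right: " + table);
            }
        }
        for (String table : right.keySet()) {
            if (!left.containsKey(table)) {
                differences.add("missing from left: " + table);
            }
        }
        return differences;
    }
}

/** The thin "presentation" layer: it only calls the engine and prints. */
class ComparisonUi {
    public static void main(String[] args) {
        SchemaComparer comparer = new SchemaComparer();
        List<String> diffs = comparer.diffTableNames(
                Map.of("Orders", "id INT", "Customers", "id INT"),
                Map.of("Orders", "id INT"));
        diffs.forEach(System.out::println); // prints: missing from right: Customers
    }
}

Because nothing in SchemaComparer knows about the UI, the overnight test runs can pound on it long before anyone has drawn a single dialog box.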

This process of 'relentless testing' also stretches beyond the development and test engineers. A team of usability engineers works with customers and designers from the earliest phase of a new tool's development through to the polishing of the final button. They ensure that, from the splash screen onwards, the tool is completely self-explanatory and needs no more thinking about than a pair of scissors.

This relentless focus on testing probably also means more time spent making sure that internal team relationships are working as well as they should. As mentioned earlier, software developers are often prone to thinking in a rather "community of geeks" way, and can have a hard time seeing the commercial wood for the trees of fascinating code.

You only have to compare software developers with test engineers to see the difference instantly. The test engineer sees everything in terms of 'Them and Us': 'Them' being the soft-hearted software developers and 'Us' being the hard-bitten bastids whose job it is to make them cry. And cry they do. I would too if I'd spent days coding up a spiffy new interface or regular expression, only to have the person sitting at the desk across from me break it in the first five minutes [I might even be driven to build it right in the first place – Ed]. It must be like having the Cousin from Hell come visiting on Christmas morning and grab your lovingly assembled F-117 with an evil glint in his eye; except that, in the test engineer's case, s/he also arrives with a lovingly packed toolbox of infernal instruments to help speed the disassembly along.

Along with the CruiseControl continuous-integration build rig and the equally widely used NUnit framework for automated regression testing, all the testers have their own preferred pliers, callipers and drills to hand: hard-wearing 'building site' tools with very specific jobs. The main 'using point' (and it's curious how often this differs from the selling point) is that the tools work first time and don't stop. Michelle Taylor, for example, one of Jonathan Watts' fellow test engineers, regularly reaches for Xenu's Link Sleuth, PassMark's TestLog and StudioSpell, an add-in for Visual Studio from Keyoti. The sight of any of these can bring a developer out in hives.
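NUnit is .NET's counterpart of JUnit, so a JUnit 5 sketch gives a fair flavour of the automated regression tests in question. These hypothetical tests pin down the behaviour of the SchemaComparer sketched earlier, so that a hairline fracture in the engine fails the overnight build rather than a customer demo.

import static org.junit.jupiter.api.Assertions.*;

import java.util.List;
import java.util.Map;
import org.junit.jupiter.api.Test;

/** Regression tests for the hypothetical SchemaComparer sketched above. */
class SchemaComparerTest {
    private final SchemaComparer comparer = new SchemaComparer();

    @Test
    void identicalSchemasProduceNoDifferences() {
        Map<String, String> schema = Map.of("Orders", "id INT");
        assertTrue(comparer.diffTableNames(schema, schema).isEmpty());
    }

    @Test
    void tableMissingFromRightIsReported() {
        List<String> diffs = comparer.diffTableNames(
                Map.of("Orders", "id INT", "Customers", "id INT"),
                Map.of("Orders", "id INT"));
        assertEquals(List.of("missing from right: Customers"), diffs);
    }
}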

Happy endings

There is a happy ending: typically, when you put these two attitudes in close proximity, the good money drives out the bad and a shared concern for durability prevails. Developers quickly learn to build stuff that defeats their colleagues' meanest efforts and a healthy quality arms race ensues [well, it does if management understands the people issues involved and sets suitable goals and rewards - Ed].

In many software companies, nothing happens between unit testing and the start of system testing. But set things up as I recommend and there's a real opportunity to test earlier, because every product then has an API through which its functionality can be tested before the UI is in place; and the earlier issues are found, the better the payback.
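To illustrate, again with the hypothetical SchemaComparer rather than a real product API, functional testing through the API before any UI exists can be as simple as driving the engine end to end with fixture data:

import java.util.List;
import java.util.Map;

/** An API-level functional check that needs no UI at all. */
public class ApiSmokeTest {
    public static void main(String[] args) {
        // Fixture schemas standing in for two real databases.
        Map<String, String> production = Map.of(
                "Orders", "id INT, total MONEY",
                "Customers", "id INT, name NVARCHAR(100)");
        Map<String, String> staging = Map.of(
                "Orders", "id INT, total MONEY");

        List<String> diffs = new SchemaComparer()
                .diffTableNames(production, staging);

        // Fail loudly if the engine misses the dropped table.
        if (!diffs.contains("missing from right: Customers")) {
            throw new AssertionError("comparison engine missed a dropped table");
        }
        System.out.println("API smoke test passed: " + diffs);
    }
}

Wire something like this into the overnight build and a dropped table gets caught months before system testing would have begun.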

It all feeds into the bottom line. It's well established that a problem fixed at the requirements or design stage costs far less than one found later on. Investment at each stage turns into payback: a simpler, more usable product that is easier to test and therefore less likely to let customers down, which means customers recommend it to colleagues, and a virtuous circle starts to turn.

Richard Collins is Development Strategist in the SQL tools team at Red Gate Software, a specialist maker of tools for SQL Server and .NET.
