Original URL: https://www.theregister.com/2007/01/04/analysis_design_meeting/

Analysis, design, and never the twain shall meet

Jumping the chasm

By Matt Stephens

Posted in Software, 4th January 2007 11:29 GMT

Not many people know how to get from analysis to design. Software agilists “solve” the problem by blurring the distinction between the two. There has to be a better way...

It’s a familiar scenario to many. Deep in the basement where coders dwell, Hero Programmer #1138 stops at an apparent impasse and thinks: “Oh hang on, we didn’t agree on what happens if the user cancels at this point. In fact, we didn’t even talk about it. Well, I’ll just roll back the whole transaction and pop up a warning dialog taunting the user. But we also didn’t talk about transactions for this part of the system, so they’re not factored into the design. Well, here goes, let’s start coding!”

The programmer, bless his cotton socks, makes project-shaking decisions then and there, without a customer or analyst in sight, to retro-design new functionality into the system. Programmers are great at making technical decisions, but they’re not the right people to be defining and signing off on new requirements. Yet this scenario happens, a lot. It’s the way in which many projects wobble and stumble towards eventual completion. Then the bug hunt begins.

The project’s development process (if there is a defined process at all) must have some serious shortcomings if great swathes of functionality can be missed. In the agile world, where flexibility is sometimes taken to extremes, these shortcomings are addressed by putting safety nets in place: an on-site customer (or team of analysts) at the programmers’ beck and call. Incidentally, this arrangement was partly why the infamous Chrysler “C3” project failed, because such readily available paragons seldom represent the real paying users well. And rethinking the design as new functions are added is dressed up as “evolutionary design”, with further safety nets such as copious unit tests and pair programming.

Evolutionary design does have its place, but using it as a “process hack” to catch forgotten requirements just isn’t it. It’s much better to explore the requirements in depth first, looking at all the “rainy day scenarios”, i.e. things that can go wrong, or events where the end user steps off the trail and does unexpected things. Getting this right isn’t as difficult, or as time-consuming, as it sounds: but software agilists wouldn’t sell nearly as many books, or get to speak in public as often, if they admitted it.

The reason why software agility appears to be such a saviour is that it skirts around the need to get from requirements to source code, by blurring the distinction between them. But – and here’s the thing – getting from requirements to working, maintainable source code really isn’t that difficult, if you do it right. And if you get it right, then you don’t need those process hacks. I’ll have a go at explaining how to do this in this article.

If it isn’t too disingenuous to plug my own book, the process that I describe here is illustrated in more detail, with oodles of examples and exercises, in Use Case Driven Object Modeling with UML: Theory and Practice (co-written with Doug Rosenberg, and published later this month). The process is adapted and refined from Ivar Jacobson’s approach to OO analysis and design (with use cases, robustness diagrams and sequence diagrams).

So, how to get from analysis to design in a few easy steps. Here goes:

First, create a domain model. This is a glossary of real-world terms used in the project. Some excellent books have been written on domain modeling, most notably Domain-Driven Design by Eric Evans. Your initial attempt at the domain model shouldn’t take very long; a couple of hours at most. You just want a rough draft, from which you can write the use cases.
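If it helps to make that concrete, here’s a minimal sketch of what such a first cut might amount to if you jotted it down as bare classes rather than on a whiteboard. The Book, Shopping Cart and Customer terms are purely illustrative (Book and Shopping Cart crop up again later in this article); a real domain model would, of course, use the terms from your own problem domain.

```java
import java.util.ArrayList;
import java.util.List;

// A first-cut domain model is little more than a glossary in class form:
// the real-world terms and how they relate, with hardly any behaviour yet.
// All of these names are illustrative, not taken from a real project.
class Book {
    String title;
    String isbn;
}

class ShoppingCart {
    List<Book> items = new ArrayList<Book>();
}

class Customer {
    ShoppingCart cart = new ShoppingCart();
}
```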

Second, write said use cases and (this is important) reference the domain objects in the use case text. Use cases are behavioural requirements: they define how the user and the system will interact, and are written in user action/system response couplets:

“The user enters his username and password and clicks Login; the system validates the login credentials from the Master Account List and logs the user in.”

It isn’t exactly Shakespeare, but it’s clear and unambiguous. Use cases, when written properly at least, are divided into one basic course and many alternate courses (aka sunny day and rainy day scenarios). You’d want to describe what happens if the user entered the wrong password, for example, so this would go in an alternate course.
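To illustrate the shape (this is only a sketch, not lifted from the book), that alternate course might read:

“Alternate course (invalid password): the system displays a warning message and prompts the user to try again; the user re-enters his password.”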

Third, and this is the lynchpin, do some preliminary design. This is really the bridge that gets you from analysis (use cases) to detailed design. The goal of this step is to bash your use cases into shape, so that you can do good stuff with them such as create an object-oriented design, produce accurate estimates, and write meaningful unit tests.

Such a lot of advice has been written about how to write use cases, and most of it is contradictory. Depressingly often, I hear people being told to write “high-level” use cases: that is, make them vague and technology-independent. Trouble is, if you hand a vague and technology-independent use case to a programmer, you’ll end up with a fuzzy design that is ambiguous, buggy, and full of undiscovered functionality. And this is where our hero programmer comes into play: he’s coding away, and is the first to discover that the requirements spec is missing some crucial “what-ifs”. He’s on a roll and doesn’t want to stop, so he hacks in whatever’s convenient from a programming point of view. This entire dysfunctional situation can be avoided by doing preliminary design while you’re still writing the use cases.

So the goal of preliminary design is to turn your use cases into something much more concrete: to eliminate ambiguity, turn passive voice descriptions into active voice, separate out any functional requirements that may have slipped into the use case text, and to discover missing requirements. To do this, use a technique called [drum roll please...] robustness analysis. Not many people have heard of it; in fact it’s one of the industry’s best-kept secrets, but it works. Robustness analysis involves drawing a picture of your use case, using three UML-esque elements: boundary objects, entity objects, and controllers (logical software functions). It’s all very MVC, and is great for creating the preliminary design for web applications.
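To make those three stereotypes a little more concrete, here’s a minimal sketch (using the login use case quoted earlier, and entirely hypothetical class names) of how each element tends to end up in code:

```java
// A hypothetical mapping of the robustness-analysis stereotypes onto code.

// Boundary object: what the user sees and interacts with (a screen, page or form).
class LoginPage {
    String username;
    String password;
}

// Entity object: a domain object that the use case reads or writes.
class MasterAccountList {
    boolean isValid(String username, String password) {
        // Look the account up and check the password (details omitted in this sketch).
        return false;
    }
}

// Controller: a logical software function, the verb in the use case text
// ("the system validates the login credentials from the Master Account List").
class ValidateLoginController {
    private final MasterAccountList accounts;

    ValidateLoginController(MasterAccountList accounts) {
        this.accounts = accounts;
    }

    boolean validate(String username, String password) {
        return accounts.isValid(username, password);
    }
}
```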

The key is to write the use case text in the context of your domain model, and to reinforce this relationship on the robustness diagram. So if the domain model has objects called “Book” and “Shopping Cart”, in your use case you wouldn’t write “The user selects a stock item to purchase”; you’d write something more concrete and specific, like “The user clicks to add the Book to his Shopping Cart”. Book and Shopping Cart will both become entity objects on your robustness diagram, and “add to Shopping Cart” will become a controller, as it’s an action, a verb.
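Carrying the hypothetical Book and ShoppingCart classes forward from the domain model sketch above, that controller might boil down to very little code indeed; the point is that the verb in the use case sentence maps directly onto something you can design, estimate and test:

```java
// The controller is the verb in the use case sentence:
// "The user clicks to add the Book to his Shopping Cart."
// Book and ShoppingCart are the entity objects from the earlier domain model sketch.
class AddToCartController {
    void addToCart(Book book, ShoppingCart cart) {
        // A real design might also check stock levels or update a running total.
        cart.items.add(book);
    }
}
```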

Robustness analysis is great as a sanity check: if your use case doesn’t translate well onto a quick ’n’ simple robustness diagram, how in the blazes will you be able to create a solid design from it?

The technique also has an interesting and useful effect: it turns your use cases into compact, unambiguous units of behaviour that are easy to design from, easy to estimate, and easy to write unit tests for (you write one unit test class per controller).
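As a sketch of what “one unit test class per controller” can look like in practice, here’s a JUnit-style test for the hypothetical AddToCartController from the earlier sketches:

```java
import junit.framework.TestCase;

// One unit test class per controller; one test method per scenario
// (basic course or alternate course) that the controller has to handle.
public class AddToCartControllerTest extends TestCase {

    public void testBookIsAddedToCart() {
        Book book = new Book();
        ShoppingCart cart = new ShoppingCart();

        new AddToCartController().addToCart(book, cart);

        assertTrue(cart.items.contains(book));
    }
}
```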

This has been a high-level summary, of course. But in future articles I’ll explore different parts of this process in more detail, as well as exploring different aspects of software agility with a critical eye.®