Evolutionary vs. traditional database design

DBA fights back

We recently published an article on the advantages of evolutionary database design (EDBD), a process which has its roots in the agile/extreme programming world. To provide a little balance, some yang for the yin, we asked Mark Whitehorn to comment on the article and give his views on EDBD vs. the more traditional database design approach.

What is traditional database design?

Just as it is clear that there is no consensus about exactly what constitutes agile modelling and eXtreme programming, the same is true of traditional database design (TDBD). Where three traditional designers are gathered together, there you’ll find four views of how best to design databases. However, for the sake of discussion, let’s assume that we are talking about development based on User, Logical and Physical modelling.

Business analysts talk to users (who have the User model located conveniently inside their heads); from these discussions, the analysts develop a Logical model. This is typically an Entity Relationship (ER) model, which can be signed off by the users. The developers then take this off into a corner and add the geeky, technical stuff (data types, indexes, etc.), which turns the Logical model into a Physical one. Finally, they push a virtual button in the modelling tool and out pops the database schema appropriate for their engine of choice.
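To make that last step a little more concrete, here is a minimal sketch in Python using the standard library's sqlite3 module. The Customer and Order entities, their columns and the index are all invented for illustration; they simply stand in for whatever the ER model actually contains, and the DDL a real modelling tool produces would be tailored to the engine of choice.

    # A sketch of the Logical-to-Physical step: the entities come from the
    # ER model; the data types, keys and index are the geeky, technical
    # stuff added by the developers. All names here are invented examples.
    import sqlite3

    PHYSICAL_SCHEMA = """
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        email       TEXT
    );

    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        placed_on   TEXT NOT NULL
    );

    CREATE INDEX idx_order_customer ON customer_order(customer_id);
    """

    if __name__ == "__main__":
        # The "virtual button": build the schema on the engine of choice
        # (an in-memory SQLite database here) and list what popped out.
        conn = sqlite3.connect(":memory:")
        conn.executescript(PHYSICAL_SCHEMA)
        for (name,) in conn.execute(
                "SELECT name FROM sqlite_master WHERE type = 'table'"):
            print(name)
        conn.close()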

If you’re not with us, you’re against us.

One facet of the article under discussion is that it treats traditional database designers somewhat dismissively: ‘data professionals have pretty much missed out on the fundamental ideas and techniques…. They've got a lot of catching up to do.’ and ‘It’s going to be a huge effort to get existing data professionals to recognize and then overcome many of the false assumptions which their “thought leadership” have made over the years.’

While this is guaranteed to cause a glow of schadenfreude-like satisfaction in EDBD devotees, it is also likely to alienate the very people the article is presumably trying to convert, which I believe is a shame. Any established process like TDBD actively benefits from the occasional challenge to see if it can be improved or should be replaced. Such dialogue is not aided by a flame war where the only winners are the sellers of entrenching tools.

The article puts forward the premise that ‘Traditional approaches to database design clearly aren't working out for us’, so let’s start by taking a look at that idea.

Does traditional database design cut it (the mustard, that is)?

I certainly agree that some impressively huge projects have had their share of media attention – no one mention the NHS IT project, for example (oops, too late). Most of us are aware of traditional projects which have gone down so spectacularly that the flames have lit up the sky for miles around.

Even if we look at less illuminating failures there is no doubt that TDBD projects do regularly produce, to quote the original article, ‘tables that have columns that are no longer being used… columns that are being used for several purposes because it was too hard to add new columns … tables with data quality problems’. Indeed, I wouldn’t even try to pretend that this is anything other than a major problem.

Oh, so that’s it, Mark; you accept that the TDBD process is flawed then? Well, no. The EDBD argument at this point appears to be “We see a great number of bad databases, therefore the design process is flawed, and therefore we must change it.” The problem here is the non sequitur between the first two clauses.

Consider the following argument: “We see a great number of road accidents, therefore the rule set governing driving is flawed, therefore we must change it.” In fact, the majority of accidents occur when people implement the rule set badly. The rule set says don’t drink and drive, but people do. People break the speed limit; they jump the lights; and so on. So the accidents we observe tell us nothing about how good (or bad) the rules are.

In like manner, I agree we observe many flawed databases, but simply observing them neither proves nor disproves the efficacy of the process.

So, why do we observe badly designed databases?

In my experience, there are two main reasons why databases end up poorly designed.

  1. Even in this day and age, it is relatively common to come across commercially available databases that were initially designed by specialists in the field the database was meant to serve, rather than by professional database developers. Here I am thinking of accounting databases designed by accountants or HR systems designed by heads of personnel. These applications are often good in terms of the functionality they try to provide but very poor in terms of design.
  2. Equally sad, but equally true: we also see commercial databases designed by ‘computer professionals’ who, with the best will in the world, are not trained database developers either and do not have a full understanding of the task and its ramifications. Here, too, the design does not proceed according to the traditional model.

My experience (and that is all any of us can apply with certainty to issues like this) is that these two account for by far the majority of the poorly designed databases that I’ve come across.

So, do we observe any traditional databases that are well designed? Of course we do. There are plenty of examples but they’re usually unremarkable, invisible even. Well designed, well structured, they just work. In a perfectly fair world they would attract headlines like: “Shock Horror! Database comes in on time, below budget and works! Heads won’t roll!” But, for fairly obvious reasons, they don’t.

Can the traditional model handle change?

Another major criticism aimed at TDBD by the EDBD community is that the traditional approach is poor at handling change. ‘Unfortunately, the traditional data community assumed that evolving database schemas is a hard thing to do and as a result never thought through how to do it.’

So, does TDBD have a mechanism for handling and implementing change? Oh, yes. Users propose a change and, following discussions to ascertain a full understanding of what is required, the change is incorporated into the logical model. This is echoed down to the physical model; a change schema is produced, tested and ultimately applied to the operational database. Does it work? In my experience, it works perfectly well when the process is neither over- nor under-managed, is properly resourced and applied intelligently.
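To give a flavour of what that looks like at the sharp end, here is a hypothetical change schema sketched in Python with sqlite3, continuing the invented customer table from the earlier example. The loyalty_tier column and the post-change check are assumptions made purely for illustration; the point is that the change is scripted, checked and either committed or rolled back as a single unit.

    # A sketch of a change schema being applied to the operational database.
    # The customer table and loyalty_tier column are invented examples.
    import sqlite3

    CHANGE_SCHEMA = (
        "ALTER TABLE customer "
        "ADD COLUMN loyalty_tier TEXT NOT NULL DEFAULT 'standard'"
    )

    def apply_change(conn):
        # The change and its post-change check go in together or are
        # rolled back together.
        conn.execute("BEGIN")
        try:
            conn.execute(CHANGE_SCHEMA)
            columns = [row[1] for row in conn.execute("PRAGMA table_info(customer)")]
            if "loyalty_tier" not in columns:
                raise RuntimeError("change schema was not applied as expected")
            conn.execute("COMMIT")
        except Exception:
            conn.execute("ROLLBACK")
            raise

    if __name__ == "__main__":
        # isolation_level=None hands transaction control to the explicit
        # BEGIN/COMMIT/ROLLBACK above; the small table stands in for the
        # operational database.
        conn = sqlite3.connect(":memory:", isolation_level=None)
        conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
        apply_change(conn)
        print([row[1] for row in conn.execute("PRAGMA table_info(customer)")])
        conn.close()

In practice, of course, such a script would have been exercised against a test copy of the database long before it went anywhere near the operational system.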

Does it always work? Sadly not. It is demonstrably true that some traditional databases are very, very difficult to evolve. No question, this is also a serious problem and once again I think it is important to look at why. In my opinion, there are two main causes.

  1. The database is initially well designed and, in order to keep it so, the development team goes overboard with processes to control change management. The change process is made cumbersome to the point where it is unworkable. Changes can only be made very slowly; in practice, too slowly to be effective.
  2. The database is initially well designed but poor management thereafter prevents it from being properly maintained. Lip service is often paid to the need for a change management process but in practice ill-managed changes are rapidly and unintelligently applied to the database. These cause the structure to degrade over time, rendering it more and more difficult to change the schema.
