
Evolutionary vs. traditional database design

DBA fights back


We recently published an article on the advantages of evolutionary database design (EDBD), a process which has its roots in the agile/extreme programming world. To provide a little balance, some yang for the yin, we asked Mark Whitehorn to comment on the article and give his views on EDBD vs. the more traditional database design approach.

What is traditional database design?

Just as it is clear that there is no consensus about exactly what constitutes agile modelling and eXtreme programming, the same is true of traditional database design (TDBD). Where three traditional designers are gathered together, there you’ll find four views of how best to design databases. However, for the sake of discussion, let’s assume that we are talking about development based on User, Logical and Physical modelling.

Business analysts talk to users (who have the User model located conveniently inside their heads); from these discussions, the analysts develop a Logical model. This is typically an Entity Relationship (ER) model, which can be signed off by the users. The developers then take this off into a corner and add the geeky, technical stuff (data types, indexes, etc.) which turns the Logical model into a Physical one. Finally, they push a virtual button in the modelling tool and out pops the database schema appropriate for their engine of choice.
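By way of illustration, here is a minimal sketch of the sort of physical schema that might pop out of such a tool for a simple customer/order pair in the Logical model. The table and column names are hypothetical, and SQLite is assumed purely for convenience; a real tool emits DDL in the dialect of the chosen engine.

    # A minimal sketch, assuming SQLite and hypothetical table names, of the
    # physical schema a modelling tool might emit for a simple customer/order
    # pair in the Logical model.
    import sqlite3

    PHYSICAL_SCHEMA = """
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,   -- surrogate key added at the physical stage
        name        TEXT NOT NULL,
        email       TEXT
    );
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        placed_on   TEXT NOT NULL          -- data types are engine-specific choices
    );
    -- the 'geeky, technical stuff': an index chosen for the expected query patterns
    CREATE INDEX idx_order_customer ON customer_order(customer_id);
    """

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.executescript(PHYSICAL_SCHEMA)   # create the physical schema
        print([row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")])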

If you’re not with us, you’re against us.

One facet of the article under discussion is that it treats traditional database designers somewhat dismissively: ‘data professionals have pretty much missed out on the fundamental ideas and techniques…. They've got a lot of catching up to do.’ and ‘It’s going to be a huge effort to get existing data professionals to recognize and then overcome many of the false assumptions which their “thought leadership” have made over the years.’

While this is guaranteed to cause a glow of schadenfreude-like satisfaction in EDBD devotees, it is also likely to alienate the very people that the article is presumably trying to convert, which I believe is a shame. Any established process like TDBD actively benefits from the occasional challenge to see if it can be improved or should be replaced. Such dialogue is not aided by a flame war where the only winners are the sellers of entrenching tools.

The article puts forward the premise that ‘Traditional approaches to database design clearly aren't working out for us’, so let’s start by taking a look at that idea.

Does traditional database design cut it (the mustard, that is)?

I certainly agree that some impressively huge projects have had their share of media attention – no one mention the NHS IT project, for example (oops, too late). Most of us are aware of traditional projects which have gone down so spectacularly that the flames have lit up the sky for miles around.

Even if we look at less illuminating failures there is no doubt that TDBD projects do regularly produce, to quote the original article, ‘tables that have columns that are no longer being used… columns that are being used for several purposes because it was too hard to add new columns … tables with data quality problems’. Indeed, I wouldn’t even try to pretend that this is anything other than a major problem.

Oh, so that’s it, Mark; you accept that the TDBD process is flawed then? Well, no. The EDBD argument at this point appears to be “We see a great number of bad databases, therefore the design process is flawed, and therefore we must change it.” The problem here is the non sequitur between the first two clauses.

Consider the following argument: “We see a great number of road accidents, therefore the rule set governing driving is flawed, therefore we must change it.” In fact, the majority of accidents occur when people implement the rule set badly. The rule set says don’t drink and drive, but people do. People break the speed limit; they jump the lights and so on. So the accidents we observe tell us nothing about how good (or bad) the rules are.

In like manner, I agree we observe many flawed databases, but simply observing them neither proves nor disproves the efficacy of the process.

So, why do we observe badly designed databases?

In my experience, there are two main reasons why databases end up poorly designed.

  1. Even in this day and age it is relatively common to come across commercially available databases that were initially designed by specialists in the field the database serves, rather than by professional database developers. Here I am thinking of accounting databases designed by accountants, or HR systems designed by heads of personnel. These applications are often good in terms of the functionality they try to provide but very poor in terms of design.
  2. Equally sad, but equally true, we see commercial databases designed by ‘computer professionals’ who, with the best will in the world, are not trained database developers either, and who do not have a full understanding of the task and its ramifications. In these cases the design does not proceed according to the traditional model.

My experience (and that is all any of us can apply with certainty to issues like this) is that these two account for by far the majority of the poorly designed databases that I’ve come across.

So, do we observe any traditional databases that are well designed? Of course we do. There are plenty of examples but they’re usually unremarkable, invisible even. Well designed, well structured, they just work. In a perfectly fair world they would attract headlines like: “Shock Horror! Database comes in on time, below budget and works! Heads won’t roll!” But, for fairly obvious reasons, they don’t.

Can the traditional model handle change?

Another major criticism aimed at TDBD by the EDBD community is that the traditional approach is poor at handling change. ‘Unfortunately, the traditional data community assumed that evolving database schemas is a hard thing to do and as a result never thought through how to do it.’

So, does TDBD have a mechanism for handling and implementing change? Oh, yes. Users propose a change and, following discussions to ascertain a full understanding of what is required, the change is incorporated into the logical model. This is echoed down to the physical model; a change schema is produced, tested and ultimately applied to the operational database. Does it work? In my experience, it works perfectly well when the process is neither over- nor under-managed, is properly resourced and applied intelligently.
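To make that mechanism concrete, here is a minimal sketch, again assuming SQLite and hypothetical names: the change agreed with the users (a new phone attribute for customer) is echoed down to the physical model as a change schema, which can be tested against a copy of the database before being applied to the operational one.

    # A minimal sketch, assuming SQLite and hypothetical names, of a change
    # schema: the new attribute agreed in the Logical model is echoed down to
    # the physical model as DDL, tested against a copy, then applied.
    import sqlite3

    CHANGE_SCHEMA = """
    ALTER TABLE customer ADD COLUMN phone TEXT;             -- the change the users proposed
    CREATE INDEX idx_customer_phone ON customer(phone);     -- physical detail added by developers
    """

    def apply_change(conn: sqlite3.Connection) -> None:
        # run every statement in the change schema, in order
        conn.executescript(CHANGE_SCHEMA)

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")   # stand-in for a test copy of the database
        conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
        apply_change(conn)
        print([row[1] for row in conn.execute("PRAGMA table_info(customer)")])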

Does it always work? Sadly not. It is demonstrably true that some traditional databases are very, very difficult to evolve. No question, this is also a serious problem and once again I think it is important to look at why. In my opinion, there are two main causes.

  1. The database is initially well designed and, in order to keep it so, the development team goes overboard with processes to control change management. The change process is made cumbersome to the point where it is unworkable. Changes can only be made very slowly; in practice, too slowly to be effective.
  2. The database is initially well designed but poor management thereafter prevents it from being properly maintained. Lip service is often paid to the need for a change management process but in practice ill-managed changes are rapidly and unintelligently applied to the database. These cause the structure to degrade over time, rendering it more and more difficult to change the schema.
