Read, test, don't repeat - how to avoid code complexity

No, not really. Remember, accidental coupling is coupling that arises unintentionally, or for a misguided, unnecessary reason. Intentional coupling is coupling that I intend, and that helps my project. I am generally not stupid, and neither are you. What we do on purpose is usually better than what happens beneath our notice.

Here, I am introducing inheritance coupling in order to solve the redundancy problem. I want changes in Weapon to propagate down to Pistol and TommyGun. It is my intent that this happen: it is no accident, and I am not liable to forget that it is going to happen. I am going to depend on it, in fact.

Protected data members

In my gun-toting example, I used protected members myBullets and safety in the abstract class Weapon, so that I could access them via inheritance in the same way that I had been accessing them when they were local, private members of Pistol and TommyGun.
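
To make that concrete, here is a rough sketch of the arrangement so far. The field types and the bodies of fire() are my own assumptions for illustration; only Weapon, Pistol, TommyGun, myBullets and safety come from the example itself, and in Java each public class would sit in its own file.

    // Weapon holds the shared state; the subclasses reach it directly
    // through protected access.
    public abstract class Weapon {
        protected int myBullets;      // shared ammunition count
        protected boolean safety;     // shared safety-catch state

        public abstract void fire();
    }

    public class Pistol extends Weapon {
        @Override
        public void fire() {
            if (!safety && myBullets > 0) {   // direct access to the inherited fields
                myBullets--;
            }
        }
    }

    public class TommyGun extends Weapon {
        @Override
        public void fire() {
            if (!safety && myBullets > 0) {
                myBullets -= Math.min(myBullets, 5);   // burst fire: an invented detail
            }
        }
    }

Any change to the way Weapon keeps its ammunition now propagates to both subclasses, which is exactly the intentional coupling described above.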

My next step is to change this. I would make myBullets and safety private, and then declare protected or public methods to get() them and set() them. Why?

We said that inheritance creates coupling. That can be a good thing, as it is here, where it eliminates the redundancies we have been dealing with. But a bad kind of inheritance coupling can emerge with protected data members.

With protected data members, subclasses are coupled to the existence of those members. If I later want to store the bullets outside the Weapon object, maybe in a Clip or Magazine object, I will have to change Pistol, TommyGun, and whatever other subclasses I have created so that they call methods to get the bullet count rather than accessing myBullets directly. If instead I make a getBullets() method in Weapon, and have Pistol, TommyGun, and so on call it to get the bullet count, then I can change the way the count is stored and retrieved in that one place: getBullets().
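
Here is that idea sketched in code. Clip is the hypothetical object mentioned above, and everything beyond safety and getBullets() is invented for illustration:

    // Hypothetical Clip object that now owns the bullet count.
    public class Clip {
        private int bullets;

        public Clip(int bullets) { this.bullets = bullets; }

        public int getBullets() { return bullets; }
    }

    // Weapon now keeps its ammunition in a Clip, but getBullets() keeps the
    // same signature, so Pistol, TommyGun and any other subclass that calls
    // it compiles and behaves exactly as before.
    public abstract class Weapon {
        private Clip clip = new Clip(0);
        private boolean safety;

        protected int getBullets()           { return clip.getBullets(); }
        protected void setBullets(int count) { clip = new Clip(count); }

        public boolean isSafetyOn()          { return safety; }
        public void setSafety(boolean on)    { safety = on; }

        public abstract void fire();
    }

The subclasses are now coupled to the behaviour of getBullets(), not to the existence of a particular field, and that is a much looser grip.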

Also, with get() and set() methods, I can create read-only data members (just do not provide a set() method) or write-only ones, I can put validating code into the set(), and so on. This is such an important thing that it has led to the creation of an entirely new language feature in .Net: the property.
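
Sticking with get() and set() methods rather than a .Net property, that looks roughly like this; the serialNumber field and the validation rule are invented purely for illustration:

    public abstract class Weapon {
        private int myBullets;
        private final String serialNumber;   // read-only: a getter, but no setter

        protected Weapon(String serialNumber) {
            this.serialNumber = serialNumber;
        }

        public String getSerialNumber() { return serialNumber; }

        protected int getBullets() { return myBullets; }

        // Validating setter: no caller can put the object into a
        // nonsensical state.
        protected void setBullets(int count) {
            if (count < 0) {
                throw new IllegalArgumentException("bullet count cannot be negative");
            }
            myBullets = count;
        }

        public abstract void fire();
    }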

Generally, protected data members are a bad idea. Public data members are even worse. With that said, design at this level is often a balancing act between coupling and redundancy. The key is to have a considered reason behind your decision to keep things in the subclasses (at the cost of redundancy) or to put them in the superclass (at the cost of coupling them).

If you have a sensible motivation that drives this decision, that sensibility will likely mean that the approach is clear and explicit, and will not cause maintenance problems. Furthermore, the decisions you make are not set in stone. Remember, you are embarking on an evolutionary process here. You can expect to change things as you learn more about the project, as the requirements expand and change, and as you have new, better ideas.

Refactoring, and adherence to your coding principles, will give you the confidence to make changes when this occurs, and your understanding of design will allow you to see the opportunities that arise for improved design that may result from the change process.

In other words, an emergent design.

Testability

One thing I have learned in recent years is that unit testing actually has great value in terms of evaluating code against these principles. As a consultant, I am usually called in when things are not going well. Companies rarely call for extra help when everything is fine, after all. Consultants add a lot of expense.

In order to come up to speed on the team's activities, goals, and current situation, I need something to work with. Asking for design and requirements documentation is usually a forlorn hope. If the team is really in trouble, how up-to-date do you suppose these documents are?

However, I have noticed that reading unit tests can be very revealing. In a way, they are a record of the expectation of the developer or tester who wrote them, and therefore can reveal a lot about what a class is supposed to do.
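
For example, a test along these lines (a hypothetical JUnit 4 sketch that assumes the Pistol and the accessor methods sketched earlier, with the test living in the same package) tells a newcomer what firing a Pistol is supposed to do, whether or not any document says so:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Each test is a small, executable statement of expected behaviour.
    public class PistolTest {

        @Test
        public void firingWithTheSafetyOffUsesOneBullet() {
            Pistol pistol = new Pistol();
            pistol.setBullets(6);
            pistol.setSafety(false);

            pistol.fire();

            assertEquals(5, pistol.getBullets());
        }

        @Test
        public void firingWithTheSafetyOnUsesNoBullets() {
            Pistol pistol = new Pistol();
            pistol.setBullets(6);
            pistol.setSafety(true);

            pistol.fire();

            assertEquals(6, pistol.getBullets());
        }
    }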

Usually, one of the first questions I ask a customer's development team is whether it is unit testing. Generally, the answer is "no".

Furthermore, when I suggest that unit testing might be a good idea, I generally encounter a lot of resistance, with the excuses that unit testing is "too hard", "too time-consuming", "frustrating", and that it does not bring enough value for the trouble.

At first I would argue with them, until I tried to add tests to their projects myself and found that it was all those things and more. This puzzled me, because tests that I wrote simultaneously with the coding were so much easier, and were clearly worth the time and effort it took to create them. It turns out that code is hard to test when it is not designed to be testable in the first place, and so adding unit tests to an existing code base is awfully difficult.
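
As one hedged illustration of what "designed to be testable" means (the classes here are my own inventions, not part of the weapon example): code that constructs its own dependencies gives a test nothing to grab hold of, while code that accepts them can be exercised in isolation.

    // Stand-in for something external and slow (a database, a web service).
    class AmmoDepot {
        int requisition(int requested) {
            // imagine a network call here; a unit test cannot avoid or replace it
            return requested;
        }
    }

    // Hard to test: the dependency is created inside the method, so a test
    // of reload() has no way to substitute a fake depot.
    class ReloadService {
        void reload(Weapon weapon) {
            AmmoDepot depot = new AmmoDepot();
            weapon.setBullets(depot.requisition(30));
        }
    }

    // Designed for testability: the dependency is passed in, so a test can
    // hand in a fake AmmoSupplier and check reload() on its own.
    interface AmmoSupplier {
        int requisition(int requested);
    }

    class TestableReloadService {
        private final AmmoSupplier supplier;

        TestableReloadService(AmmoSupplier supplier) {
            this.supplier = supplier;
        }

        void reload(Weapon weapon) {
            weapon.setBullets(supplier.requisition(30));
        }
    }

Tests written alongside the code tend to push the design toward the second shape; tests bolted on afterwards have to fight the first.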
