Architecting for IT service delivery

Service by design? Really?


Lab A few years back I was involved in a project that turned out far more interesting than I expected. The plan was to write a training course about a software development methodology. As you can see, it started from a reasonably low point in terms of interest – but it quickly evolved into a much more worthwhile exercise.

The course in question documented a Sun Microsystems internal approach known as “3DM”, or 3-dimensional methodology. For those familiar with the Rational Unified Process (RUP), it aimed to extend this into how to deploy applications in an appropriate manner to meet service criteria such as scalability, availability and so on.

In fact, like all good approaches, 3DM was not based on theory. Rather, it distilled down best practices learned by consultants in the field, around when and where to adopt clustering, load balancing, replication and failover, and other such constructs.

It’s all good stuff, and the general lesson is that good practice is out there. This isn’t the place to document the whys and wherefores, not only because the truth is ‘out there’ but also because it tends to depend on the hardware and software involved.

But surely, says the outsider, IT is going to be more about virtualised machines running modular applications on industry-standard servers? Doesn’t that mean that IT gets simpler and simpler, reducing any dependency on good design?

Sadly, no. Despite the suggestion from some quarters that IT is getting simpler and simpler, the need for skills to build reliable, scalable systems is as prevalent today as it ever was.

Good systems design always was, and still remains, a constant battle between the theoretically possible and the actually practical. Today’s IT systems can be impossibly complex, running layer upon layer of barely compatible software, linking together older and newer systems that were never meant to be linked. From the outside in, IT may look like a Ford Mondeo – standardised to the point of being impossibly dull. To the engineers working on the inside however, IT is more like a Morgan, with each individual part, each connection and configuration item custom made.

As a result we can bandy around terms like ‘failover’ regardless of their actual practicality. Failover is nominally about taking a single workload and getting it up and running on another server, but in reality there is often a complex web of dependencies between server, network and storage hardware that can be difficult to unravel, never mind replicate. Are both servers (source and target) identically configured? Do they share the same network connectivity with the storage? Is the storage itself configured correctly to support the application in question? And so on and so on.
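That chain of questions can be sketched as a simple preflight check run before attempting a failover. This is a minimal, hypothetical sketch: the attribute names, VLAN and LUN fields and the sample configurations below are all invented for illustration, not taken from any real failover product.

```python
# Hypothetical pre-failover "preflight" check. All field names and sample
# data are illustrative; a real tool would pull these from inventory and
# storage-management systems.

def preflight_checks(source: dict, target: dict) -> list:
    """Return reasons why failing over from source to target may not work."""
    problems = []
    # Are both servers (source and target) identically configured?
    for key in ("cpu_cores", "ram_gb", "os_version"):
        if source.get(key) != target.get(key):
            problems.append(
                f"{key} differs: {source.get(key)} vs {target.get(key)}")
    # Do they share the same network connectivity with the storage?
    if set(source.get("storage_vlans", [])) != set(target.get("storage_vlans", [])):
        problems.append("target does not see the same storage VLANs")
    # Is the storage itself configured to support the application?
    missing = set(source.get("mounted_luns", [])) - set(target.get("mounted_luns", []))
    if missing:
        problems.append(f"LUNs not presented to target: {sorted(missing)}")
    return problems

source = {"cpu_cores": 16, "ram_gb": 64, "os_version": "sol10",
          "storage_vlans": [101, 102], "mounted_luns": ["lun0", "lun1"]}
target = {"cpu_cores": 16, "ram_gb": 32, "os_version": "sol10",
          "storage_vlans": [101], "mounted_luns": ["lun0"]}

for problem in preflight_checks(source, target):
    print("BLOCKER:", problem)
```

Even this toy version makes the article's point: the moment source and target drift apart on any axis – compute, network path, presented storage – the "single workload, another server" story stops being simple.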

Virtualisation may help answer some of these questions of course, but only if it is considered architecturally, which brings us to the nub of the matter. Part of the challenge is that we’re not building in good service by design – it’s just not being costed into the business cases for new systems and applications, as we’ve seen in several research studies. As a result, such things as failover have to be bolted onto systems and applications after the event, rather than being built in from the start.

As with many complex problems, the temptation might be to offload the complexity onto a third party, which is one reason perhaps why the interest in hosted services is growing.

However, unless you have worked out a way to offload the more complex stuff, you might just be adding to the problem. In the outsourcing wave, many reported the issue of outsourcing the best guys, leaving the dross behind (in some cases, to run the contracts). Might we end up with a similar issue with third party hosting, in that the easier systems will migrate, leaving the complexity behind, and creating an integration challenge that now runs across the firewall?

One thing’s for sure. If we are going to achieve any state of IT nirvana any time soon, some pretty fundamental shifts are going to be required in terms of the role of good design. Clearly best practice exists, if we choose to take it up and work through the short term pain and additional costs required to get things onto a firmer footing.

Or perhaps the only realistic option is to keep going with the candle-wax and string, patching things together as they go wrong and adding new layers of technology and complexity on a regular basis. Perhaps we secretly prefer it this way – just as a Morgan may suffer the foibles of being custom built, its design assuming it will need to be tinkered with later. To change this would require a major shift in mindsets and behaviours at all levels, without which it is difficult to see how any nirvana state of service delivery could ever be achieved. If you feel any different, do tell. ®
