Squaring the service delivery circle
Marrying what they want with what you can deliver
Workshop In the last article, we considered a number of perspectives on service delivery, top to bottom, left to right.
In operational terms, however, the relationship that matters most is the one agreed between coal-face IT staff and the users they support. What’s at the heart of making this work?
Things were simpler in the old days. Go back a couple of decades and (despite the advances of client-server) most systems involved specific applications doing specific things for specific parts of the business. That didn’t mean the job of IT felt any easier, of course: negotiating terms with hard-done-by business groups was nobody’s idea of fun, whether or not those terms were enshrined in a formal Service Level Agreement (SLA).
The principles behind what makes a ‘good’ SLA remain broadly the same today as they always were. Service availability is one key characteristic – linked with the idea that, from a user’s perspective, IT should ‘just work’ whatever the context. Availability goes hand in hand with performance, in that simple access to a system is ultimately pointless if response times are too poor to make it usable. Certain systems need the highest levels of availability and performance, whereas for others it may be a case of setting expectations – for example, the system may run more slowly on a Monday morning when everyone is logging in, but it should be fine after that.
Without dwelling on SLA concepts, other elements include risk mitigation in terms of security and business continuity, data protection and compliance monitoring, and of course criteria around the IT organisation or service provider – support hours, reporting frequencies, maintenance periods, time to resolution and so on. All in all, there is nothing outdated in a ‘standard’ SLA, at least in principle.
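To make the shape of a ‘standard’ SLA concrete, the elements above can be sketched as a simple data structure. This is a minimal, illustrative sketch only – the field names, thresholds and the `ServiceLevelAgreement` class are assumptions for the example, not a formal SLA schema:

```python
from dataclasses import dataclass

@dataclass
class ServiceLevelAgreement:
    # Illustrative fields only -- real SLAs vary by organisation.
    service_name: str
    availability_target: float     # e.g. 0.999 for "three nines"
    max_response_ms: int           # performance: acceptable response time
    support_hours: str             # when the IT organisation is on hand
    time_to_resolution_hours: int  # target time to fix an incident
    maintenance_window: str        # agreed downtime for maintenance
    reporting_frequency: str       # how often service reports are issued

# A hypothetical agreement for a business-critical CRM system.
crm_sla = ServiceLevelAgreement(
    service_name="CRM",
    availability_target=0.999,
    max_response_ms=2000,
    support_hours="08:00-18:00 Mon-Fri",
    time_to_resolution_hours=8,
    maintenance_window="Sun 02:00-06:00",
    reporting_frequency="monthly",
)
```

The point of writing it down this way is that every field is something both sides can negotiate and measure – which is what distinguishes an SLA from a vague promise that IT will ‘do its best’.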
The past years have seen repeated generations of technology being adopted, integrated and not quite superseded, giving us the wonderfully complex IT environments we know and love today. Some organisations have told us how they never like to switch anything off, becoming the electronic equivalent of hoarders, with systems stacked like jam jars on the data centre racks.
Indeed, if we look at today’s IT ‘landscape’, in all of its internet-based, mobility-aware, highly distributed, home-working glory, it becomes harder to see what exactly is the service being delivered, let alone the dependencies between all the different elements. Given the increasing reliance on external service providers, from communications to hosted software, we don’t necessarily even have the ability to see some of the critical links in the chain.
Meanwhile we have those pesky users, who keep taking matters into their own hands. What we used to call the ‘BlackBerry effect’, where sales directors kitted their teams out with email without telling the IT department, has morphed into consumerisation, with individuals getting on with their jobs using facilities outside the reach of IT – conducting business deals over instant messaging or Twitter, updating core data on their (more powerful) home computers, and so on.
All the same, one group of people – the IT department – remains tasked with making sure the service gets through, and is inevitably judged on how successfully it does so.
Returning to the question, then, what is at the heart of making service delivery work? Given that it is impossible to manage everything at a deep level, even if it were all under the control of IT, the challenge becomes one of scope. We can see the ‘problem space’ as a Venn diagram, two overlapping circles, one of which contains everything IT could manage, and the other, everything business users feel they might need. The overlap concerns those elements that both sides agree require proactive management – the ‘business critical’ systems and services.
Seeing IT in this way does start to make things a little easier. The cost of service delivery is directly proportional to the size of the overlap, that is, all those systems and services that are agreed to be necessary. Defining priorities with a (potentially sceptical) user base can be challenging, of course. However, given that management comes at a cost, the debate about what goes in doesn’t have to be an emotional one – with a finite (and potentially shrinking) budget, there are hard limits on what can go in.
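The Venn-diagram view above reduces to a set intersection, and the budget argument to simple arithmetic. The service names and cost figures below are entirely hypothetical, chosen only to illustrate the reasoning:

```python
# Everything IT could manage, and everything the business feels it needs.
it_manageable = {"email", "crm", "erp", "file_share", "legacy_billing"}
business_needed = {"email", "crm", "erp", "twitter", "home_pc_sync"}

# The overlap: systems both sides agree warrant proactive management.
business_critical = it_manageable & business_needed

# Cost of service delivery is proportional to the size of the overlap.
COST_PER_SERVICE = 10_000   # hypothetical annual cost per managed service
budget = 35_000             # hypothetical management budget

overlap_cost = len(business_critical) * COST_PER_SERVICE
affordable = budget // COST_PER_SERVICE  # how many services the budget covers
```

Here three services sit in the overlap, costing 30,000 against a 35,000 budget – so the debate about adding a fourth becomes a budget question rather than an emotional one, which is exactly the point.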
Deciding what goes in and what stays out is not a one-off activity, but something that requires regular review. Consumerisation and the BlackBerry effect are part of the picture, and will become more and more the norm, in which case it is up to IT to help the business prioritise business-critical facilities, and manage them accordingly. Increasingly, we expect to see IT act as a facilitator of this kind of decision as much as the owner of traditional, operational systems management. Indeed, given the increasingly diverse pool of service provision mechanisms, its future might depend on it. ®