Don't get 2e2'd: How to survive when your IT supplier goes titsup

Why you should always see it coming

Analysis I used to know a finance director who had a favourite mantra: “Minimise fixed costs.”

The concept's a simple one: by all means use permanent staff to deal with the aspects of your business that don't change much, but where your revenue streams go up and down, think of ways of allowing the cost of servicing those revenue streams to vary in unison with the ebbs and flows.

Outsourcing is an obvious place to look, and companies all over the world are doing it. Yet in the last couple of weeks one of the UK's major service providers, 2e2, has gone spectacularly pear-shaped.

Customers have been sent into a major panic over retrieving and relocating their data and services, and difficulties in cash-flow have prompted the administrators to place a letter on 2e2's website asking data centre users to contribute a sum between £4,000 and £40,000 to keep them running.

Although big news today, this is hardly unique, and the trick to making sure your trousers don't become fast friends with your ankles is all in the preparation.

Back in the dot-com era, for instance, a cluster of startups all used a particular London-based web development house for their implementation and hosting services. This development house in turn used one of the big London data centre companies for its hosting services.

Everything became more than a little confused when the company in the middle went out of business: the hosting provider's bills stopped getting paid, and so the hoster cut off the services and refused access until someone paid the outstanding balance. Thankfully, due partly to luck but largely to diligent record-keeping, it was possible to prove to the hosting provider that the equipment belonged not to the defaulting service provider but to the end client. So they scooted in, claimed the kit, and installed it elsewhere.

Our kit! Oh God, our kiiiiiit...

Outsourcing doesn't just mean hosting, though: what about when you decide to lease kit instead of buying it? Atlantic Computers was an IT leasing company that blew up in the late 1980s. This was back in the day when IT was properly expensive, and a company I worked with leased its IBM System/38 (remember them?) from Atlantic.

Everything happened very quickly, and the upstream owner of the kit gave notice that it was coming to repossess the equipment – which, thanks to the fact that it ran our entire business's enterprise resource planning system, would have meant utter disaster. Salvation came in the lateral thinking of one of the senior managers, who simply told the owners: “You can have it, but as it contains classified defence material we'll have to destroy it before it's removed from the premises”. Not too surprisingly, a far calmer process of negotiation followed and the equipment remained.

Next, consider one of the services we all get someone else to do instead of doing it ourselves: telephony and data circuits. Not many of us run our own fibre from centre to centre, deciding instead (quite sensibly) to rent services from companies that already have thousands of miles of fibre under the street. But what happens when the telco lets you down?

Consider the case of one UK SME that relied on its internet and voice lines to keep the call centre running. One day, of course, everything stopped working, and they gazed with awe upon the facial expression of the JCB driver in the street outside as he realised what he'd just done. Again the story is only partly sad: thankfully one of the lines ran the alarm system and had a priority-fix SLA on it, so the engineer who was soon on site was plied with tea and biscuits and persuaded to re-splice all the lines, not just the one he was obliged to fix.

By failing to prepare, you are preparing to fail

The thing is, though, there is seldom an excuse for falling victim to a service provider getting it wrong or going out of business. Occasionally I'd say it's forgivable: the demise of Atlantic was, for instance, quite hard to predict, and its clients couldn't necessarily expect to see that one coming.

The 2e2 example is, however, just daft. Do these clients not have lawyers who go through the contracts asking: “What if”? And have they not said to themselves: “Our data is critical, so what happens if we lose an entire data centre”? If they've agonised over having a secondary data centre and decided they can't afford it, they're entitled to a little sympathy. If they've not considered it, though, the same isn't true.

I've had complex telecoms contracts in the past, for instance, and it's always seemed sensible to understand the entire context of the connection. Take a leased-line internet connection to an office in North London, for instance; our supplier was COLT but the last kilometre or so was provided by BT as it was off-net for COLT.

The exercise was one of risk assessment and risk acceptance: because of the need for a different upstream provider we had to accept a degraded SLA, reflecting the fact that the worst-case call-out time for a fault was the response time in our SLA with COLT plus the response time in COLT's SLA with BT.
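
To make that arithmetic concrete, here's a minimal sketch of the stacked-SLA sum in Python; the response times are illustrative assumptions, not the figures from the actual contracts:

    from datetime import timedelta

    # Hypothetical response-time commitments for each hop in the supply chain.
    sla_chain = {
        "us -> COLT": timedelta(hours=4),   # our contract with the primary supplier
        "COLT -> BT": timedelta(hours=5),   # COLT's contract for the off-net tail
    }

    # Worst case: each party only starts moving once the one above it raises the
    # fault, so the effective commitment is the sum of the hops, not the headline figure.
    worst_case = sum(sla_chain.values(), timedelta())

    for hop, response in sla_chain.items():
        print(f"{hop}: {response}")
    print(f"Effective worst-case response: {worst_case}")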

Helpfully, weighing up the stability of either company was easy, as both BT and COLT were going strong at the time. For that type of service, with the particular usage patterns and mission-criticality (not much) in our setup, it was fine; in other situations it wouldn't be and we'd have considered resilient links, multiple providers and the like.

I live and work in the Channel Islands, which makes life interesting with regard to service provision. We have three hosting providers on Jersey, so consideration of single points of failure is always fun. Say you're starting from nothing and you want a resilient data centre setup.

You could go to one provider that has two data centres in different parts of the island, and benefit from the low cost of connecting the two (they're both connected to that provider's resilient metro network, after all). Or you could go with two providers, which reduces the risk should one provider go under, but the interconnects will be more complex and expensive as you're going partially off-net. Or you could decide that having two data centres on the same island is in itself too risky, not least because the power provision into the island as a whole resembles a bit of damp string and some hamsters in a wheel, and look to Guernsey or the mainland instead.
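
If you want to make that debate with yourself explicit, a toy comparison along these lines can help; the options and their shared dependencies below are assumptions sketched from the scenario above, not a survey of any real providers:

    # Toy single-point-of-failure comparison; the dependency lists are assumptions.
    options = {
        "one provider, two Jersey sites": {
            "shared": ["provider", "Jersey power", "provider metro network"],
            "interconnect_cost": "low (on-net)",
        },
        "two providers, two Jersey sites": {
            "shared": ["Jersey power"],
            "interconnect_cost": "higher (partially off-net)",
        },
        "Jersey plus Guernsey or the mainland": {
            "shared": [],
            "interconnect_cost": "highest (off-island links)",
        },
    }

    for name, opt in options.items():
        spofs = ", ".join(opt["shared"]) or "none identified"
        print(f"{name}: shared dependencies = {spofs}; interconnect = {opt['interconnect_cost']}")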

There's no right answer in the general sense – it's very much horses for courses – but you have to have these debates with yourself and justify the end decision.

Suppliers go under, and when they do your business can suffer. If you sign up for a service without giving due consideration to what happens should one of your suppliers fail you, you're wasting your money and your time, and potentially risking your business. ®

Dave is a senior network and telecoms specialist who has spent 20 years working in academia, defence, publishing and intellectual property. Founding technical editor of Network Week and Techworld, Dave’s specialisms include design, construction and management of global telecoms networks, infrastructure and software architecture, development and testing, database design, implementation and optimization. Dave and his family live in St Helier on the island paradise of Jersey.

Expect the unexpected

I'm not going to mention names, I don't want them to come and get me. I worked for a large utility that had gone to the effort of having two centres, six miles apart, away from known aircraft routes, fault lines, rivers, volcanoes, ley lines and crop circles. They then built their billing machines across the two sites, with 100% redundancy of everything, right down to power from different grid supply points (we're talking 400,000V network tracing). As a utility, they billed millions a day at sod all margin, so they needed the cash flow to pay suppliers quickly; we used to say that ten days without billing meant even the banks would stop lending money.

I was a grad given a project to check the business continuity plan, what was expected to be another pointless exercise they ran every year to keep the useless grads busy for a month. They were in for a shock. A factory that was halfway between our sites and used to process nothing in particular had changed hands, so I wrote to them asking what they did for business continuity and whether we could learn anything; it seemed like something to do in the month I had to write the report.

The start of paragraph two of their reply said it all: "As a processor of chemicals who have a statutory 10-mile exclusion zone should there be a confirmed leak..."

Anonymous Coward

please please please

please be Capita next, please be Capita next, please be Capita next....

Anonymous Coward

Re: IT has come a long way

"Instead of datacentres in the UK, Lithuania, Bangalore, Hong Kong, Manila and Chicago we only need UK and Hong Kong."

Sorry, fail. Given the various shenanigans of governments around the world, you need to separate your data centres by jurisdiction to be secure. (Loosely translated: you need to keep them out of US clutches.)

Anonymous Coward

Last mile.......

You'd be surprised how inbred the telco industry is. I work for a very large global telco and we often get third-party tails from Cable & Wireless; you'd be surprised (or not) how often they are provided by BT (or COLT, etc).

Also, our own internal sales teams don't seem to understand that when a customer says they want geographic diversity, there is no point having two separate circuits going between four different sites if your on-net portion is on the same fibres! I have had to point out that they really should get a third-party supplier in some of the corners of our more far-flung empire (the ex-USSR springs to mind) but they still don't get it!!


It can be the simple things ...

Some years ago I managed to wangle a place on a Business Continuity course, and very enlightening it was too. The guy doing the course wasn't a trainer by trade; he actually did "BC stuff" for a living, and as a result had some wonderful tales.

One he told us was how a number of businesses were locked out of their premises for a week after a lorry caught a phone line and pulled it down. How, one might ask, did that happen? Well, who hasn't heard "The Gasman Cometh" by Flanders and Swann? http://www.youtube.com/watch?v=zyeMFSzPgGc

Well, after the lorry pulled down the overhead cable, BT decided the line would be better underground, so started digging. When they hit the water main, the water people were called in to deal with it. As the water people dug a hole to access the water main, they hit the gas main.

Everyone was evacuated, and by the time it was all sorted they'd been out of their premises for a week!

And in response to the comments about resilient routing, he mentioned that as well. It's of particular interest to people responsible for emergency call centres, and apparently even they can struggle to find out what's really happening. In one example he cited, a new centre was built with diverse routing to two separate exchanges. Only it turned out that neither exchange was actually an exchange as we'd imagine it: both were in fact just satellites off one big exchange, and so a single point of failure.
