Surprise! Thanks to the cloud, you've got a hybrid infrastructure

Don't be bitten by the differences


Hybrid IT infrastructures are rapidly becoming the norm. Even if there isn’t a conscious decision to adopt a hybrid of on-premises/cloud networks and servers (for instance, on-premises servers replicating near-real-time to failover partners in the cloud), the adoption of cloud apps is making many setups hybrid by default, even if not by design.

First of all, this is a good thing. All of my recent experience has had an element of cloud hooked into the company infrastructure (perhaps it’s not an accident after all that I find myself on the judging panel of the UK Cloud Awards), and it has been an overwhelmingly positive experience to bring the cloud elements into the organisations.

Cloud backup has brought immediate benefits with regard to the replica living off-site, for example, and moving email to the cloud has brought the benefit of reclaiming hundreds of gigabytes of storage for more useful things than storing photographs of the marketing manager’s weekend barbecue.

You do, however, have to do the job properly. It’s tempting to take the path of least resistance to the cloud, but a quick deployment is often offset by an increased effort in managing, maintaining and securing the result.

Reporting

The first thing to remember is that any automated reporting you’ve set up over the years will suddenly not be there by default when you replace a system – unless you’ve done a complete like-for-like replacement (which is unlikely).

One of the stranger effects appears when you migrate users in batches to the new system: the existing reporting dwindles gradually over time unless you have had the foresight to map the current data flows carefully and replicate them in the cloud.

Incidentally, if a new system cannot produce reporting flows in the same format as the old one, it is often easier to adopt some kind of transformation engine to massage them into the required form than it is to change the reporting tool itself. (Oh, and I’ve even seen instances where a legacy, scary-to-touch reporting tool had been configured years previously to accommodate the buggy exports from the source system, and the developers had to write a transform that took the new system’s correct output and put the bugs back in.)
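As a sketch of what such a transformation engine might look like – the field names, the four-character department column and the GB-to-MB quirk are all invented for illustration, not taken from any real product – a few lines of Python with the standard csv module will often do the massaging:

```python
import csv
import io

def transform_report(new_csv: str) -> str:
    """Reshape a (hypothetical) new system's CSV export into the legacy
    layout a downstream reporting tool expects. All column names and
    conversions here are illustrative."""
    reader = csv.DictReader(io.StringIO(new_csv))
    out = io.StringIO()
    # The legacy tool expects exactly these columns, in this order.
    writer = csv.DictWriter(out, fieldnames=["USER_ID", "DEPT", "USAGE_MB"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "USER_ID": row["userId"].upper(),           # legacy IDs were upper-case
            "DEPT": row["department"][:4],              # legacy column was 4 chars wide
            "USAGE_MB": str(int(float(row["usageGb"]) * 1024)),  # GB -> MB
        })
    return out.getvalue()
```

The point is that the transform sits between the systems, so neither the new export nor the scary legacy reporting tool needs to be touched.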

Get a full understanding

Heard the one where the cloud-based backup turned out to be a useless pile of ones and zeros? Never, ever assume that the service:

  1. Does what you think it says on the tin; or
  2. Does out of the box what the old one did after years of tweaks.

If you want a cloud backup that gives you 30 daily restore points, read the spec and make sure it does – and test it to be completely sure. Likewise if the cloud-based trouble-ticketing system “integrates with your directory service,” ask what this means: I’ve come across some where it basically means installing a proprietary widget that runs daily, hoovers user information out of Active Directory (AD), and uploads it to the cloud service. Few terms in business applications are truly unambiguous, so define what you want and make sure the service fits the requirements (or at least that you’re happy to live with the gap between the requirements and the reality).
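Testing that claim needn’t be onerous. A minimal sketch – assuming the service can at least report the dates of the restore points it holds, which is something you should verify in its API or console – is simply a diff between the days you wanted and the days you got:

```python
from datetime import date, timedelta

def missing_restore_points(restore_dates, days=30, today=None):
    """Given the restore-point dates a backup service actually reports,
    return the days within the last `days` that have no point at all.
    An empty list means the service really does what the tin says."""
    today = today or date.today()
    have = set(restore_dates)
    wanted = [today - timedelta(days=n) for n in range(1, days + 1)]
    return [d for d in wanted if d not in have]
```

Run it on a schedule and alert on a non-empty result, and you’ve turned “read the spec” into “test it to be completely sure”.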

Directory integration

Following on from that last example, directory service integration (which generally means AD) is an absolute must. The biggest pain in the backside I’ve ever come across (one of many, it must be said) is where people put in systems with their own internal user authentication databases. It’s a security and support nightmare: it’s one more system for the service desk to create new users on, and one more system for them to forget to remove people from when they leave. Which brings us neatly on to …

Accessibility

Cloud systems are… well, in the cloud. Which means on the internet. They’re not sitting inside your corporate, on-premises network behind a nice resilient pair of NAT firewalls, with access restricted to (say) VPN connectivity or similar.

One of the benefits of the cloud is that the users are no longer shackled by remote access mechanisms which are often clunky and long-winded to use; the downside is that if one of your cloud services has its own internal user database and leavers aren’t rigorously removed upon departure, there’s a good chance they still have access to some of your data.
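If you are stuck with a service that keeps its own user database, the least you can do is audit it regularly against the directory. A minimal sketch – it assumes you can export a list of login names from both sides, which most services and AD tooling allow one way or another – is a simple set difference:

```python
def orphaned_accounts(directory_users, service_users):
    """Accounts that exist on a cloud service but not in the corporate
    directory: prime candidates for leavers who were never removed.
    Both arguments are iterables of login names; the comparison is
    case-insensitive because AD logins usually are."""
    directory = {u.lower() for u in directory_users}
    return sorted(u for u in service_users if u.lower() not in directory)
```

Anything the function returns is either a service account somebody forgot to document, or a leaver who can still read your data.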

Maybe you used two-factor authentication on your legacy services: can you do the same in the cloud? Sometimes it’s a lot harder because you don’t have control over the service provider’s authentication integration; at the very least you may have to switch to a 2FA service of their choice, not the one you’re used to. You can secure cloud services, but by default they’re generally more accessible than on-premises ones.
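It helps, when evaluating a provider’s 2FA options, to know how little is actually going on under the bonnet. Standard TOTP (RFC 6238, built on the HOTP algorithm of RFC 4226) with the common defaults – SHA-1, six digits, 30-second time step – boils down to a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 over the counter, dynamically
    truncated to a short decimal code."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # low nibble picks the slice
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """TOTP per RFC 6238: HOTP with the counter derived from the clock."""
    return hotp(secret, int(time.time()) // period)
```

Any provider offering standard TOTP will therefore interoperate with the authenticator apps your users already have; it’s the proprietary schemes that lock you into the provider’s choice of 2FA service.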

Low-level access

With all of these new services you’re taking on, it’s easy to fall into the trap of assuming that the service provider will support them. And they will, to an extent: it’s now their problem to keep the operating system and app software up to date. But they’re not going to figure out for a user why the service is unavailable if, say, your office internet connection has turned up its toes.

So you still need the service desk to understand the apps, how they work, how you connect to them, and how to diagnose at least first-line and perhaps also second-line issues. You also need to accept that they generally won’t have access to many of the low-level diagnostic functions they’re used to with on-premises setups and so you need to mitigate what you can and accept what you can’t mitigate.

There’s no running up a quick Wireshark to trace the traffic hitting the mail server, for example. So look back through your ticketing system at the calls you’ve had for the systems you’re clouding, and ask yourself how you’d have dealt with them had those systems not been on-premises. After all, unlike in the financial services world, past performance often is a predictor of future behaviour.
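That trawl through the ticket history can be semi-automated. A rough sketch – the ticket fields and the list of “low-level” diagnostic methods are invented for illustration, and you’d substitute whatever your ticketing system actually exports – just tallies how many past fixes relied on access you won’t have in the cloud:

```python
from collections import Counter

# Diagnostic methods that need low-level, on-premises access (illustrative).
LOW_LEVEL = {"packet capture", "server logon", "local event log"}

def diagnosis_gap(tickets, system):
    """For one system, count how many historic tickets were resolved with
    methods needing on-prem access, versus the total for that system.
    Tickets are dicts with (hypothetical) 'system' and 'method' keys."""
    methods = Counter(t["method"] for t in tickets if t["system"] == system)
    gap = sum(n for method, n in methods.items() if method in LOW_LEVEL)
    return gap, sum(methods.values())
```

A high ratio for a system you’re about to move is your cue to find cloud-side substitutes (provider status APIs, exported logs) before the migration, not after.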

The add-ons

The final consideration is that you’ve undoubtedly built an armoury of favourite tools over the years: not just diagnostic tools but add-ons and plug-ins for systems, many of which may well have been forgotten, their functionality simply assumed by the current users and support team to be part of the standard platform. And this brings us full circle to where we started: if you’re moving to a hybrid infrastructure, that’s fine. But do yourself a favour: define what you want it to do, so you can make sure that’s what it does.

Because the one thing it’s guaranteed not to do is behave just like the old one. ®
