Hybrid cloud: Define what it is, then decide what you want

Choose a provider carefully, think about what you need

Back-ending the cloud with on-prem

Cloud-based web sites are popular because they fit the model perfectly: you can spin servers up and down as demand ebbs and flows by putting them behind software-based load balancers that know how many back-end servers are running at any time and pass traffic only to the active ones.
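
As a rough illustration of what that load-balancing layer is doing (none of this is tied to a particular provider; the addresses and the /healthz path are made up), the sketch below checks each back-end before sending a request its way and only ever picks from the servers that respond:

    import random
    import urllib.request

    # Hypothetical pool of back-end web servers; a real cloud load balancer
    # learns this list from the provider's API as instances come and go.
    BACKENDS = [
        "http://10.0.1.10:8080",
        "http://10.0.1.11:8080",
        "http://10.0.1.12:8080",
    ]

    def healthy(backend, timeout=1.0):
        """Treat a back-end as active only if its health endpoint answers."""
        try:
            with urllib.request.urlopen(backend + "/healthz", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def pick_backend():
        """Pass traffic only to the servers that are actually up."""
        active = [b for b in BACKENDS if healthy(b)]
        if not active:
            raise RuntimeError("no healthy back-end servers")
        return random.choice(active)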

It's very rare to have a web installation in the cloud with the back-end database at the other end of a WAN or Internet link – the performance simply wouldn't be there.

What you probably will do, though, is keep a modest subset of your data (the bit the website needs to get at) on storage in the cloud, with some kind of cache layer providing a jump-off to the on-prem storage for fetching and storing items that aren't used so frequently.
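
A minimal sketch of that cache layer might look like the following; the 15-minute TTL and the idea of passing in a fetch function are my assumptions rather than any particular product's API:

    import time

    CACHE_TTL = 15 * 60   # keep items in the cloud-side copy for 15 minutes (arbitrary)
    _cloud_cache = {}     # key -> (value, time fetched)

    def get_item(key, fetch_from_on_prem):
        """Serve from the cloud-side copy where possible; only go back across
        the WAN (via the fetch_from_on_prem callable) for items that aren't
        held, or have gone stale, in the cloud."""
        entry = _cloud_cache.get(key)
        if entry and time.time() - entry[1] < CACHE_TTL:
            return entry[0]
        value = fetch_from_on_prem(key)
        _cloud_cache[key] = (value, time.time())
        return value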

I've run plenty of web-based services that serve 95 per cent of their functions from data held alongside the website (product and customer data, primarily), with only five per cent calling back to “mother” – for instance when someone actually expresses an interest in a product and it needs to check availability in real time.
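
In code, that split amounts to something like the sketch below; the two helper functions are purely illustrative stand-ins for the cloud-side database and the on-prem availability service:

    def product_page(product_id, get_product_from_cloud_db,
                     check_availability_on_prem, wants_stock_check=False):
        """The common (95 per cent) case touches only the cloud-side data;
        the rare (five per cent) case makes one live call back to 'mother'."""
        product = get_product_from_cloud_db(product_id)
        if wants_stock_check:
            product["in_stock"] = check_availability_on_prem(product_id)
        return product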

Processing in the cloud

Something people keep talking about more and more is doing heavy-duty processing in the cloud. There are two common reasons for this:

  • They have infrequent requirements for big data crunching tasks that take lots of CPU – quarter-end analytical software runs, for example – so they take advantage of pay-as-you-go cloud processing
  • They need vast amounts of processing against a challenging deadline and have algorithms that parallelise well – so by running 200 processors they can do a week's processing in about an hour
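
The second case relies on the work splitting cleanly into independent chunks. The sketch below fans a job out over local processes; the pattern is the same in the cloud, except the workers are pay-by-the-hour instances rather than cores in the box under your desk (the chunking and the square-summing workload are just placeholders):

    from concurrent.futures import ProcessPoolExecutor

    def crunch(chunk):
        # Stand-in for the real analytical work on one slice of the data.
        return sum(x * x for x in chunk)

    def run_job(chunks, workers=8):
        # Fan the chunks out across the workers and gather the results;
        # more workers means a roughly proportionate cut in wall-clock time,
        # provided the algorithm really does parallelise well.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(crunch, chunks))

    if __name__ == "__main__":
        data = [list(range(i, i + 10_000)) for i in range(0, 1_000_000, 10_000)]
        print(sum(run_job(data)))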

In both cases you can mount on-premise filestores from the servers in the cloud, or you can look to one of the growing number of storage virtualisation products that let you present your global storage setup as a virtual layer – so your cloud-based servers simply mount volumes as if they were in the cabinet next to them, even when they're far away over the wide area.
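
By way of illustration only (the export path, address and mount options below are mine, and a storage virtualisation product would present its volumes in its own way), mounting an on-prem NFS export from a cloud server over the site-to-site link looks like this:

    import subprocess
    from pathlib import Path

    ON_PREM_EXPORT = "10.20.0.5:/exports/archive"   # hypothetical on-prem filer
    MOUNT_POINT = Path("/mnt/archive")

    def mount_on_prem_archive():
        """Mount the on-prem export so applications on the cloud server can
        read it as an ordinary local path, even though it sits across the WAN."""
        MOUNT_POINT.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["mount", "-t", "nfs", "-o", "ro,soft,timeo=600",
             ON_PREM_EXPORT, str(MOUNT_POINT)],
            check=True,
        )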

Connectivity between locations

To connect two locations at an operational level (meaning for the servers and apps to talk to each other) you need a technology that's supported at both ends. In all but the most extreme cases this will be an IPSec VPN tunnel.

Just because a VPN is the lowest common denominator doesn't mean it's a bad thing to use. A few years ago I ran the connection between one of my offices and a data centre a few hundred miles away over a 40Mbit/s IPSec tunnel, and the users didn't even realise the servers weren't in the on-site comms room.

IPSec's great because it's dead easy to do – particularly if you're using Amazon's cloud and you have an on-premise router from a popular vendor like Cisco or Juniper, as Amazon will generate the configuration text for you to paste into the router (though most other providers have perfectly usable wizards too).
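
As a rough sketch of the Amazon side of that (all the IDs and addresses here are placeholders, and the vendor-specific text you paste into a Cisco or Juniper box comes from the console's download-configuration step), the boto3 calls look something like this:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Tell AWS about the on-premise router (its public IP and ASN).
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
    )["CustomerGateway"]

    # Create a virtual private gateway and attach it to the VPC the servers live in.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0",
                           VpnGatewayId=vgw["VpnGatewayId"])

    # Create the IPSec VPN connection itself; the response carries the tunnel
    # details (endpoints and pre-shared keys) needed to configure the far end.
    vpn = ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
        Type="ipsec.1",
        Options={"StaticRoutesOnly": True},
    )["VpnConnection"]

    print(vpn["CustomerGatewayConfiguration"])  # XML description of the tunnels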

If you're made of money or have very high-end requirements, some cloud providers will let you hook into them with a dedicated fixed link, though it's pricey. Given that IPSec can run at proper speeds, you should think twice before jumping.

There is one exception in this latter case, though: in some (admittedly niche) locations you may well be able to hook your premises and cloud installation via a metropolitan link of some kind.

So where I live in the Channel Islands, for example, one of the main providers can hook your office and their cloud installation together over their native MPLS network, and another can do the same with its VPLS installation: in both cases they're effectively extending the network of the data centre where the cloud services live into the customer's premises.
