Dumping gear in the public cloud: It's about ease of use, stupid

Look at the numbers - co-location might work out cheaper

Sysadmin blog Public cloud computing has finally started to make sense to me. A recent conversation with a fellow sysadmin had me rocking back and forth in a corner muttering "that's illogical".

When I emerged from my nervous breakdown I realised that capitalising on the irrationality of the decision-making process within most companies is what makes public cloud computing financially viable.

For certain niche applications, cloud computing makes perfect sense. "You can spin up your workloads and then spin them down when you don't need them" is the traditional line of tripe trotted out by the faithful.

The problem is that you can't actually do this in the real world: the overwhelming majority of companies have quite a few workloads that aren't particularly dynamic. We have these lovely legacy static workloads that sit there and make the meter tick by.

Most companies absolutely do have non-production instances that could be spun down. The enterprise sysadmins I've spoken to reckon that many dev and test environments could be turned off approximately 50 per cent of the time. Given that there are typically three non-production environments for every production environment, this could legitimately be a set of workloads that would do well in the cloud.
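To see why that three-to-one ratio matters, here is a back-of-the-envelope model. Every figure in it is a made-up assumption for illustration, not a quote from any provider:

```python
# Hypothetical figures -- swap in real quotes before drawing any conclusions.
HOURS_PER_MONTH = 730      # average hours in a month
CLOUD_RATE = 0.50          # assumed $/hour per environment instance
N_NONPROD = 3              # three non-production environments per production one
DUTY_CYCLE = 0.5           # the rough estimate above: off ~50 per cent of the time

always_on = N_NONPROD * CLOUD_RATE * HOURS_PER_MONTH
spun_down = always_on * DUTY_CYCLE

print(f"Always-on:      ${always_on:,.2f}/month")
print(f"With spin-down: ${spun_down:,.2f}/month")
# The saving only materialises if someone (or something) actually does the spinning.
```

With these invented numbers the non-production fleet halves from $1,095 to $547.50 a month, which is exactly the kind of saving the cloud pitch relies on.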

While that is certainly worth consideration, it only really works if it's implemented properly. Even if you can spin some workloads up and down enough to make hosting them in the public cloud cheaper than hosting them locally, do you know how to automate that? If you don't – or can't – automate some or all of those workloads, are you going to remember to spin them up as needed? What if you get sick?
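That automation question can be sketched as a small scheduled job. Everything here is hypothetical – the office hours, the environment name and the start/stop hooks are assumptions, not any real provider's API – you would wire the hooks to your own cloud's CLI or SDK:

```python
from datetime import datetime, time

# Assumed working day; adjust to taste. Weekday office hours only gives
# roughly the kind of duty cycle discussed above.
OFFICE_START = time(7, 0)
OFFICE_END = time(19, 0)

def should_be_running(now: datetime) -> bool:
    """Decide whether a dev/test environment should be up right now."""
    if now.weekday() >= 5:          # Saturday or Sunday
        return False
    return OFFICE_START <= now.time() < OFFICE_END

def reconcile(env_name: str, now: datetime, start, stop) -> str:
    """Call the provider-specific start/stop hooks (hypothetical callables)."""
    if should_be_running(now):
        start(env_name)
        return "started"
    stop(env_name)
    return "stopped"

if __name__ == "__main__":
    # Stub hooks standing in for real provider calls; run this from cron
    # or a scheduler so nobody has to remember to do it by hand.
    reconcile("dev-01", datetime.now(),
              start=lambda e: print(f"spin up {e}"),
              stop=lambda e: print(f"spin down {e}"))
```

The point is less the code than the discipline: if no scheduler owns the decision, the human who "remembers" to spin things down is the single point of failure.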

For the majority of workloads proposed to be placed in the public cloud, I always seem to be able to design a cheaper local alternative fairly easily. This often applies even to the one workload for which cloud computing is arguably best suited: outsourcing your disaster recovery (DR) setup.

Colocation is still a thing

When I talk about DR with most businesses – big or small – they have a binary view of the world. They see the options as either building their own DR site, or using a public cloud provider. Somewhere in the past five years we seem to have collectively forgotten that a vast range of alternative options exist.

The first and most obvious option is simple colocation. There are any number of data centres in the world that will rent you anything from a few Us of rack space to several racks' worth for peanuts. Or, at least, "peanuts" when compared to the cost of public cloud computing or rolling your own secondary data centre.

In addition to traditional unmanaged colocation, most colocation providers will offer you up dedicated servers. Here they pay the initial capital cost of the hardware and lease it to you along with the rack space. There's also fully managed hosting available for both "you own the hardware" and "you lease the hardware" options.

In almost all cases these colocated solutions are cheaper than a public cloud provider for DR – and DR is the only bulk public cloud workload for which I've been able to come close to making the finances work for businesses smaller than a 1,000-seat enterprise. (Okay, dev and test can be worth it under some circumstances as well.)

So how is it that so many businesses choose the public cloud? As the debate unfolded I began to realise that the viability of the public cloud has nothing to do with the viability of the economic arguments and everything to do with politics.
