Cloud pricing begins to take hold inside the firewall
Say goodbye to predictable pricing, and hello to (cheap) uncertainty
Analysis The growth in cloud computing is starting to affect the way technology companies charge and license their on-premises products, which could save IT pros money but also make budget planning much more difficult.
Though public clouds are not the right model for many companies, the interest the technology has drawn from techies to journalists to CFOs has put the cloud way of doing things at the forefront of people's minds, and led to a subtle but important change in how software and hardware are being sold.
We're talking here about capacity-based licensing, and the related fracturing of monolithic software suites into individual components to be bought and sold.
Asigra's announcement on Wednesday that it is splitting its recovery licensing away from its backup licensing, and charging companies according to usage, is symbolic of a larger shift occurring in the industry as companies try to slice and dice their products into discrete packages that can be sold on to punters.
Though these pricing models have been available for decades, the rise of software-as-a-service pricing for cloud applications has made the approach more influential, broadened the categories it can appear in, and trained IT buyers to balk less at the confusion it can introduce.
Microsoft has been doing this by making its System Center on-premises software more modular and selling a larger range of add-ons. Similarly, HP's "CloudSystem" appliances have adopted a pay-per-use model for storage: you can buy a large appliance but only pay for the storage capacity you use, then "burst" into more drives at the click of a button and pay for the privilege.
The benefit of this type of model is that it lets companies shrink their costs in the short term, while still being able to scale up either capability or usage in the future. It also lets them select (and pay for) only the components of software that they actually use, and then charges them according to usage of their own resources.
The problem is that it makes budgetary planning a nightmare: if software costs x this year, but its price can be affected by y factors outside my control – such as a customer needing to perform more restores (Asigra), back up more data (Dell NetVault), or an uptick in capacity due to gaining new clients (CloudSystem, and others) – then working out future expenditures can be tricky.
Though it allows businesses to potentially reduce the stuff they classify as a capital outlay, it can do weird things to rolling operational expenditure, which can prompt awkward questions from the bosses when you tell them the cost of restores has doubled for the next year due to some unforeseen usage this quarter.
It also means IT pros could have a harder time getting a big enough budget for their needs, as their typical costs hover around a low level but they'll need to ask for a larger budget in case of an unforeseen need to scale up use of their tech. To which the financial controllers may ask, "Why don't we just give you the smaller budget and we can allocate resources in case you overspend?" That would put IT pros in the awkward position of having less budget than before, and make them look bad by causing them to ask for money if one of their customers or another business unit unexpectedly needs to use more kit.
With companies such as Amazon Web Services driving a wedge between traditional channel sellers and punters, and more companies butting up against capacity or usage-based pricing, attitudes among the beancounters are likely to change – but we're not there yet.
One sysadmin for a small Canadian IT consultancy told us at this week's Asigra summit in Canada that losing predictable pricing put them in a tough position with customers. Another IT bod from a St Louis consultancy said his engineers billed by the hour for restores, and his company would lose margin by having to absorb variable pricing.
The best way to protect budgets from this trend is to carefully model your expected operating usage of your software or gear, and then whack on a margin to allow for unexpected events. But for overworked and under-appreciated IT pros, this prospect may seem like a bitter pill to swallow. ®
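For the curious, the model-then-add-a-margin approach boils down to back-of-the-envelope arithmetic. The sketch below is purely illustrative – the function, figures, and per-unit price are our own assumptions, not any vendor's actual pricing:

```python
# Illustrative budget model for capacity-priced software.
# All names and numbers here are assumptions for the example.

def budget_estimate(base_units, unit_cost, growth_rate, margin=0.25):
    """Estimate next year's spend under metered pricing.

    base_units  -- expected usage (e.g. TB backed up, restore jobs)
    unit_cost   -- notional price per unit under the metered model
    growth_rate -- anticipated year-on-year usage growth (0.10 = 10%)
    margin      -- safety buffer for unforeseen spikes in usage
    """
    expected = base_units * (1 + growth_rate) * unit_cost
    return expected * (1 + margin)

# A shop backing up 50 TB at a notional $30/TB, expecting 20% growth,
# budgets with a 25% buffer for surprise restores or new clients:
print(budget_estimate(50, 30, 0.20))  # 2250.0
```

The margin parameter is the crux: set it too low and an unexpected restore season blows the budget; too high and the beancounters ask why you're hoarding.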