How the cloud could conquer the world in 2010
Escape from the belly of the Fail Whale
Comment The cloud was one of the big topics of 2009, with a broad swath of technologies and offerings coming to market from vendors of all shapes and sizes.
There was much excitement, but there was an equal amount of consternation about security, reliability, and adoption trends.
And while we're still not exactly sure what the cloud is - sometimes it's a website that hosts your social data, other times it's an application delivered through a web browser, and still other times it's full-scale multi-tenant IT infrastructure - there is something meaningful in there somewhere.
Technologies such as virtualization that underlie Amazon EC2, along with open source software like memcached, used heavily by Facebook and other large web shops, have been important enablers of cloud services.
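The memcached pattern in question is cache-aside: check the cache first, fall back to the database on a miss, and populate the cache on the way out. A minimal sketch of the idea, with a plain dict standing in for a memcached client and a hypothetical lookup function in place of a real database:

```python
cache = {}  # stand-in for a memcached client (real clients add TTLs, eviction, etc.)

def expensive_db_lookup(user_id):
    # Hypothetical stand-in; in practice this would be a slow database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    value = cache.get(key)       # 1. try the cache first
    if value is None:
        value = expensive_db_lookup(user_id)  # 2. miss: hit the database
        cache[key] = value       # 3. populate the cache for next time
    return value

first = get_user(42)   # cache miss: goes to the "database"
second = get_user(42)  # cache hit: served from memory
```

The payoff is that repeated reads never touch the database, which is what lets a site the size of Facebook serve most requests out of RAM.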
What remains to be seen, though, is if enterprises will adhere to similar design principles or just stick with the same old architecture.
Service providers tend to define "cloud" in whatever way lets them leave work undone while still looking attractive to enterprise customers. One of the biggest leftovers is the kind of always-on reliability and management tooling that business customers take as a given.
Salesforce.com, Amazon Web Services (AWS), Microsoft's Azure, Facebook, and Twitter all had downtime of one kind or another this year, during which users and customers sat clueless, wondering when the services might come back up. The opacity of the cloud and the communication styles of the various providers factor heavily into how users view and trust cloud services.
Uptime - we've heard of it
Salesforce and AWS, both of which provide services to businesses (and occasionally consumers), managed to keep their uptime in the 99.99 per cent realm in 2009. Twitter's Fail Whale, however, came to symbolize the company's inability to keep up with its users' demand to report on every important moment of their days. Microsoft's Azure service experienced a whopping 22 hours of downtime in 2009 - with nary an explanation posted on its site.
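For context, "four nines" is a tiny downtime budget, and the arithmetic behind the figures above is simple enough to sketch (these are the rates quoted in this piece, plugged into the standard availability formula):

```python
# Convert an availability percentage into an annual downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

four_nines = downtime_minutes_per_year(99.99)  # roughly 52.6 minutes a year

# Azure's reported 22 hours of downtime, expressed as availability:
azure_minutes = 22 * 60
azure_availability = 100 * (1 - azure_minutes / MINUTES_PER_YEAR)  # roughly 99.75 per cent
```

By this arithmetic, 22 hours of downtime is roughly 25 times a four-nines budget.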
If you have users on a service that goes south - regardless of whether that service is alpha, beta, or production - they're entitled to know if and when it will be functioning again. Just because the service is in the cloud doesn't mean basic IT tenets should be thrown out the window.
There are myriad technical reasons why these service outages occur. It could be anything from poorly designed architecture to network issues to human error. Until customers get more visibility into cloud services via management tools and transparency from the services themselves, the cloud won't reach its potential as a core IT element.
When it comes to winning over business users, one factor in the cloud's favor may be that many of the hardware and software companies they know and use are already pushing cloud services. This familiarity could provide a degree of comfort, and it should make switching between on-site and cloud simpler from a purchase and migration perspective.
Many thought that the cloud would usher in an open source utopia in which free and open source (FOSS) companies would finally stumble upon workable business models. Some FOSS vendors found that hosting their apps made them money, but more found their applications becoming, in effect, even more free: users quickly deployed them to EC2 - and paid Amazon - while still paying nothing for the software itself.
Instead, it's proprietary applications from enterprise mainstays such as Oracle and IBM that may turn out to be the big winners. The big vendors simply adjusted their licensing strategies to offer their applications on an on-demand or subscription basis.
AWS, for example, now offers EC2 instances for which the software licenses are included in the per-hour rate for server instances. This means that users who want to run Windows applications don't have to deal with dreaded Windows licensing - instead, they simply request a machine and use it while Amazon deals with paying Microsoft.
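The model folds the license fee into the hourly meter. With purely illustrative rates (hypothetical figures, not Amazon's actual prices), the effect on a monthly bill looks like this:

```python
# Illustrative, hypothetical rates - not AWS's actual prices.
LINUX_RATE = 0.10     # $/hour for a bare instance
WINDOWS_RATE = 0.125  # $/hour with the Windows license folded in

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(rate_per_hour, hours=HOURS_PER_MONTH):
    return rate_per_hour * hours

# The difference is the effective per-month license cost Amazon passes through.
license_premium = monthly_cost(WINDOWS_RATE) - monthly_cost(LINUX_RATE)
```

The customer never negotiates a Windows license; the premium simply shows up in the hourly rate, and it stops accruing the moment the instance is shut down.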
Ultimately, cloud principles and their underlying technologies need to make their way into the enterprise to fulfill their promise. When we start to consider internal or private clouds, however, it once again brings up the question of "what is a cloud?" Internal or enterprise clouds are still new territory, but the theory is to apply the same principles that internet-based providers have proven.
Sooner or later it will be easy enough to build your own internal compute cloud, regardless of whether you want it virtualized or cross-border or whatever. As with anything else, developers will eventually solve the problems.
2009 produced some big leaps in cloud adoption, with IT folk starting to feel more comfortable about using offsite services and analyst firms such as Gartner attaching revenue numbers to the services.
There is little doubt that we'll see changes to the technology and a huge variety of business models. What remains to be seen is whether the cloud - along with all of its benefits and encumbrances - will move from an IT accessory to an IT mainstay. ®