Feeds

Amazon thinks Cloud will conquer Man by 2010

Like a mainframe but closer to angels

Structure 08 Amazon CTO Werner Vogels believes that cloud computing will be commonplace within two years.

Before delivering his Wednesday morning keynote at Structure 08 here in San Francisco, Vogels looked ahead to a future incarnation of this cloud-obsessed mini-conference. "At Structure 10, the whole discussion will be different," he said. "All the things that now seem new will be established."

Of course, that's pretty close to what you'd expect this tech-minded Dutchman to say. The world's largest online retailer offers a conspicuous cloud known as Amazon Web Services, selling online processing power, storage, and other distributed tools to application developers across the globe.

Why is the world's largest online retailer in the cloud computing biz? After building a massive cloud for its own apps, Amazon realized others could benefit as well - while paying the company some extra coin.

Back in 2001, when Amazon needed new ways of accommodating its ever-growing online operation, HP convinced Werner Vogels and crew to buy a mainframe-like system. But then Amazon decided that going forward was a better idea than going backward. After 12 months, the company ditched the hulking box and transformed its site - in Vogels' words - "from a single app into a platform."

This meant adopting a unified model for the literally hundreds of software tools that play into each page of Amazon.com. "We had all these shared pieces of software that needed to work together, and these became bottlenecks. Constructing one piece of shared software that needs to interact with all the others is just a nightmare," Vogels explained. "So we developed a model where Amazon could be way more agile in terms of being able to build and try out new pieces of software without impacting everyone else."

In short, Amazon put a common business logic layer between its myriad apps and all its back-end data. "We put an API around the business logic and the only way you could interact with the data would be through the business logic. No direct database connections were allowed.

"Slowly, we moved all of our services to this model. And we became a platform not only for Amazon but our partners as well." That includes Target (which helped inspire the platform), the NBA, and Marks & Spencer.
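The pattern Vogels describes - an API wrapped around the business logic, with direct database connections forbidden - can be sketched roughly as follows. This is a hypothetical illustration, not Amazon's code; the `CatalogService` class and its methods are invented for the example.

```python
# A minimal sketch (hypothetical, not Amazon's code) of the pattern Vogels
# describes: all data access goes through a business-logic API, and no
# direct database connections are allowed from consuming apps.

import sqlite3

class CatalogService:
    """Business-logic layer: the only component allowed to touch the DB."""

    def __init__(self):
        # Internal connection - callers never see or use this directly.
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, price REAL)")

    def add_product(self, sku, price):
        if price < 0:
            raise ValueError("price must be non-negative")  # a business rule
        self._db.execute("INSERT INTO products VALUES (?, ?)", (sku, price))

    def get_price(self, sku):
        row = self._db.execute(
            "SELECT price FROM products WHERE sku = ?", (sku,)).fetchone()
        return row[0] if row else None

# A storefront app consumes only the API; it holds no database handle of
# its own, so new apps can be built without entangling them in shared schemas.
service = CatalogService()
service.add_product("B000EXAMPLE", 19.99)
print(service.get_price("B000EXAMPLE"))  # prints 19.99
```

Because every consumer goes through the same service interface, the schema behind it can change without breaking the "hundreds of software tools" that render each page - which is the agility Vogels is describing.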

But this was just step one. Amazon also realized that its engineers were spending far too much time ensuring their services were always backed by the proper hardware resources.

"We reviewed all the different teams that were working on the different pieces of our services," Vogels said. "Each of those teams were spending 70 per cent of their time on infrastructure tasks...Each of those teams were doing the same things, learning the same lessons over and over again."

So the company virtualized its infrastructure, switching to a model where these teams could grab resources on demand. That's right, it embraced good old fashioned utility computing. Or grid computing. Or distributed computing.
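Whatever the label, the core move is the same: teams lease resources from a shared pool on demand and hand them back when done, rather than racking their own hardware. A toy sketch of that idea (the `InstancePool` class is invented for illustration, not an AWS interface):

```python
# A toy sketch (hypothetical, not AWS code) of utility computing:
# teams acquire instances from a shared virtualized pool on demand
# and release them back when finished.

class InstancePool:
    def __init__(self, capacity):
        self.free = [f"node-{i}" for i in range(capacity)]
        self.leased = {}  # team name -> list of nodes it currently holds

    def acquire(self, team):
        """Hand the team a node on demand, if one is free."""
        if not self.free:
            raise RuntimeError("pool exhausted")
        node = self.free.pop()
        self.leased.setdefault(team, []).append(node)
        return node

    def release(self, team, node):
        """Return a node to the shared pool for other teams to use."""
        self.leased[team].remove(node)
        self.free.append(node)

pool = InstancePool(capacity=4)
web_node = pool.acquire("web-team")   # grab capacity when needed...
pool.release("web-team", web_node)    # ...and give it back afterwards
```

The point of the abstraction is that no team spends its 70 per cent on provisioning; capacity is a shared, metered utility.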

Whatever you want to call it, Amazon then opened this virtualized infrastructure to world+dog. This includes everything from the Amazon Elastic Compute Cloud (EC2), which offers processing power, and Amazon Simple Storage Service (S3), which serves up disk space, to Amazon SimpleDB, its cloudified database platform.

According to Vogels, 370,000 developers have registered for Amazon Web Services since their debut in 2002, and the company now consumes more bandwidth serving those developers than it does serving its e-commerce operation.

That's still a long way from commonplace. But Google, Salesforce, and others - so many others - are pushing this idea just as hard. Later in the day, Vogels' 2010 prediction was echoed by Sun CTO Greg Papadopoulos. Within two years, Papadopoulos said, 50 per cent of all enterprise apps would run in the cloud. We shall see. ®

Update

Greg Papadopoulos' handlers have gotten in touch to clarify his statement. "Greg predicted that the majority of 'system volume' would go to high-performance computing, Software-as-a-Service, and web-serving infrastructure - in aggregate. He's basically saying that greater than 50 percent of systems (i.e. processors / nodes) will be going to those three use cases combined."
