Google gets AGILE to increase IaaS cloud efficiency

You will be assimilated, task rationalised

Google and North Carolina State University researchers have worked out how to instrument cloud infrastructure to the point where they can predict future demand 68 per cent better than before, giving other cloud providers a primer on how to get the most out of their IT gear.

The system was outlined in an academic paper, "AGILE: Elastic distributed resource scaling for Infrastructure-as-a-Service", which was released on Wednesday at a USENIX conference in California.

AGILE lets cloud operators predict future resource demands for workloads through wavelet analysis, using telemetry from across the cloud stack to look at an application's resource utilisation and then make a prediction about its likely future resource use. The provider then uses this information to spin up VMs in advance of demand, letting it avoid downtime.
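The wavelet idea is to split a utilisation trace into coarse and fine time scales and forecast each separately. As a loose illustration only (our sketch, not the researchers' actual algorithm; the function names and the burst heuristic are assumptions), here is a minimal Python version using a one-level Haar transform:

```python
import numpy as np

def haar_decompose(signal):
    """One level of a Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / 2.0
    detail = (s[0::2] - s[1::2]) / 2.0
    return approx, detail

def predict_demand(history, horizon=2):
    """Crude multi-scale forecast: extrapolate the coarse
    (approximation) series linearly for medium-term drift, and add
    recent detail magnitude as head-room for short-term bursts."""
    approx, detail = haar_decompose(history)
    t = np.arange(len(approx))
    slope, intercept = np.polyfit(t, approx, 1)
    # The approximation series runs at half the sampling rate.
    future_t = len(approx) - 1 + horizon / 2.0
    trend = slope * future_t + intercept
    burst = np.abs(detail[-4:]).mean()
    return trend + burst

# e.g. a CPU-utilisation trace that is ramping up
history = [20, 22, 25, 24, 30, 33, 38, 41]
print(round(predict_demand(history), 1))  # estimate above the last sample
```

A real predictor would use multiple decomposition levels and continuously validated per-scale models; this only shows the shape of the idea.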

The system works like a road-building machine for the mammoth car that is an IaaS cloud, spinning up just enough infrastructure ahead to avoid downtime, but not so much that it has a big stretch of allocated resources with no usage.

Though some of our beloved commentards may scoff at this and point out that such auto-workload assigning features have been available on mainframes for decades, Google's approach involves the use of low-cost commodity hardware at a hitherto unparalleled scale, and wraps in predictive elements made possible by various design choices made by the giant.

While AGILE can deliver something functionally similar to what was found on old big iron, it does so on systems that cost less, that run in a geographically distributed manner, and that don't need any knowledge of an application's specifics to assign resources to it.

"AGILE can predict resource demands over the medium-term with up to 68% higher accuracy than existing schemes," the researchers, aided by Google infrastructure whiz John Wilkes, write. "AGILE can efficiently handle dynamic application workloads given target service level objective violation rates, reducing both penalties and user dissatisfaction."

[Figure: AGILE's wavelet analysis tech lets it predict workloads]

AGILE works via a slave agent that monitors the resource use of servers running inside local KVM virtual machines and feeds this data to the AGILE master, which predicts future demand via wavelet analysis (pictured) and automatically adds or subtracts servers from each application.

The system can make good predictions when looking ahead one or two minutes, which gives cloud providers time to clone or spin up new virtual machines to handle workload growth. The AGILE slave imposes less than 1 per cent CPU overhead per server, making it lightweight enough to be deployed widely.

Armed with this data, AGILE can then go about creating new VMs. It does this either by taking a snapshot and using it as the basis for a new server ("cold cloning"); by taking a snapshot and loading in post-snapshot data via demand paging to create a seamless cut-over ("post-copy live cloning"); or, most impressively, by "pre-copy live cloning", which starts a new server only after nearly all the information from the copied VM has been transferred, providing both good performance and current data.

"To minimize the impact of cloning a VM to meet a predicted performance demand, AGILE copies memory at a rate that completes the clone just before the new VM is needed," the researchers write. "AGILE performs continuous prediction validation to detect false alarms and cancels unnecessary cloning for maintaining low resource cost."

When Google published the MapReduce and Google File System papers in the early 2000s, they spawned the Hadoop ecosystem. We think it's likely that systems like AGILE will reappear in other clouds. ®

Bootnote

An earlier version of this article indicated that AGILE is being run within Google's infrastructure, when in fact it was developed outside Google's cloud by researchers at NCSU, using a combination of their own data and telemetry from Google. We regret the error.
