Google chucks $757m at data center empire

60% surge in secret hardware cash

Google's capital expenditures – the amount of money forked into its worldwide network of data centers – reached $757m in the third quarter, their highest level since early 2008, when the company was erecting at least three new data center facilities in the US.

In fact, the first quarter of 2008 was the only quarter in the company's history when it spent more on its data centers – $842m – than in the three months ending September 30. We know that the company is currently building a new data center in Finland – a newspaper-destruction metaphor sitting on the site of an abandoned paper mill – but there are presumably other new projects underway as well.

Google did not immediately respond to an inquiry seeking an explanation for the steep rise in spending. The company likes to keep quiet about the location and design of its data centers. In April 2009, it at long last lifted the curtain on its famously modular data center design – but it only showed bits and pieces of its very first modular facility, which had been built four years earlier.

The epic ad broker now owns at least 37 data centers across the globe, including the unfinished facility in Hamina, Finland. But over the past three years, the Finland facility has been the only new data center revealed to the public. In the summer of 2007, Google announced it was building a trio of new data centers in the US – in Goose Creek, South Carolina; Pryor, Oklahoma; and Council Bluffs, Iowa – and its spending reached unprecedented levels in the first quarter of the following year. But then the bottom fell out of the US economy.

The company went from spending $842m in the first quarter of 2008 to a mere $139m in the second quarter of 2009, and along the way, it delayed construction of the Oklahoma facility. Spending has slowly increased since the middle of 2009, but it took a rather large leap during this last quarter, jumping 60 per cent, from $476m to $757m.

Most likely, the Finland data center is responsible for some of the increase – the facility is slated to cost $260m, including the $52m purchase of the paper mill – but this can't account for it all.

Data Center Knowledge has a nice graph showing the ups and downs of Google's capital expenditures.

Mountain View has said that in order to roll out Google Instant – a new version of its search engine that serves up results pages as you type – it increased the capacity of its back-end. But it also downplayed the extent of this extra capacity, putting more emphasis on its efforts to design Google Instant in a way that minimizes the need for added servers.

"One solution would have been to simply invest in a tremendous increase in server capacity, but we wanted to find smarter ways to solve the problem," reads a blog post from distinguished engineer Ben Gomes. "We did increase our back-end capacity, but we also pursued a variety of strategies to efficiently address the incredible demand from Google Instant."

During the press event in San Francisco announcing the service, one engineer said that the Google Instant servers keep track of what data the browser already has and what data is already being gathered by other servers, and that the company had improved its caching system for the roll-out of the service. At one point, Gomes indicated the new caching system is related to Google Caffeine, the new search index software infrastructure that rolled out across the company's data centers earlier this year.
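Neither Gomes nor the unnamed engineer gave implementation details, but the two tricks described – diffing results against what the browser already holds, and piggybacking on identical fetches already in flight elsewhere – are standard caching fare. Here's a minimal, hypothetical Python sketch of both ideas together; CoalescingCache, fetch_fn, and everything else in it are our inventions for illustration, not Google's code:

    import threading

    class CoalescingCache:
        def __init__(self, fetch_fn):
            self._fetch = fetch_fn      # the expensive back-end call
            self._cache = {}            # query prefix -> full result list
            self._inflight = {}         # query prefix -> Event marking a fetch in progress
            self._lock = threading.Lock()

        def lookup(self, prefix, client_has=frozenset()):
            owner = False
            with self._lock:
                if prefix in self._cache:
                    # Cache hit: ship only what the browser doesn't already hold
                    return [r for r in self._cache[prefix] if r not in client_has]
                event = self._inflight.get(prefix)
                if event is None:
                    # Nobody is fetching this prefix yet, so this caller will
                    event = self._inflight[prefix] = threading.Event()
                    owner = True
            if owner:
                results = self._fetch(prefix)   # exactly one back-end hit per prefix
                with self._lock:
                    self._cache[prefix] = results
                    del self._inflight[prefix]
                event.set()                     # wake any callers who piggybacked
            else:
                event.wait()                    # reuse the fetch already in flight
                with self._lock:
                    results = self._cache[prefix]
            return [r for r in results if r not in client_has]

The payoff of the coalescing half is that a burst of keystrokes from millions of users produces many identical prefix queries at nearly the same instant, and answering each distinct prefix once – rather than once per user – is what keeps the server bill down. A quick exercise of the sketch:

    cache = CoalescingCache(lambda q: [q + " news", q + " maps", q + " stock"])
    print(cache.lookup("goog"))                            # triggers the one back-end fetch
    print(cache.lookup("goog", client_has={"goog maps"}))  # cache hit, diffed against the browser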

Eventually, Google plans to expand its network across "100s to 1000s" of locations around the world. So, if it hasn't already started on data center number 38, it soon will. ®
