Google seeks interwebs speed boost with TCP tweak

10 lines of code deliver '12 per cent jolt'

Google vice president of engineering Urs Hölzle has warned that unless we update the internet's underlying protocols, any improvements to network bandwidth will be wasted.

"It's very clear that the network speed itself will increase," Hölzle said today during a keynote speech at the internet-infrastructure obsessed Velocity conference in Santa Clara, California. "It's conceivable that [in the next several years] the average network speed worldwide will grow by a factor of three, from 1.8Mbps to 5.4Mbps. However, if you don't fix the protocols, we will not be able to exploit that extra bandwidth."

According to Google's internal tests, the average webpage is 320KB. With the user's average bandwidth at 1.8Mbps, Hölzle says, load times should be around 1.4 seconds. But in reality, according to Google tests, the average load time is closer to 5 seconds. The problem, Hölzle reckons, is not the network. The problem is the protocols – as well as the browser.
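Hölzle's arithmetic is easy enough to check. Here's a back-of-the-envelope version in Python – idealized, counting raw transfer time only and ignoring round trips, DNS lookups, and connection setup:

    # Back-of-the-envelope check of Hölzle's figures: raw transfer
    # time only, ignoring round trips, DNS, and connection setup.
    page_kilobytes = 320    # Google's average webpage size
    bandwidth_mbps = 1.8    # Google's average user bandwidth

    megabits = page_kilobytes * 8 / 1000
    print(f"{megabits / bandwidth_mbps:.2f} s")  # prints ~1.42 s

The gap between that theoretical 1.4 seconds and the 5 seconds users actually see is the overhead Hölzle is pointing at.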

Famously, Mountain View is working to improve browser speeds with Google Chrome, whose revamped JavaScript engine turned the market on its head when it arrived in 2008. Hölzle couldn't help replaying that well-over-the-top video in which Chrome outraces a flying potato, and in predictable fashion, he boasted that Chrome's arrival has pushed the likes of Microsoft and Mozilla to significantly improve the speed of their own browsers.

But Google – if you hadn't noticed – is pushing speed in all sorts of other areas as well. Hölzle says the company's goal is to achieve 100 millisecond load times on Chrome, and this will only come with improvements to the net's underlying protocols.

"We want you to be able to get from one page to another as quickly as you turn the page on a book," he says.

Simply by making "some very modest changes" to the aging TCP protocol, Google has been able to boost the speed of its image search engine by 18 per cent, without any changes to the site itself. On average, the company believes, such TCP tweaks can provide a 12 per cent speed boost. Google has published a paper on its TCP work, available here (PDF). According to Hölzle, this update – which involves increasing TCP's initial congestion window – would involve a change of about 10 lines of code.
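Why the initial congestion window matters: under TCP slow start, a sender begins with a small window of segments and roughly doubles it each round trip, so a larger starting window shaves round trips off every short transfer. The Python sketch below is an idealized model – fixed 1460-byte segments, no losses, no delayed ACKs – not real kernel behaviour, but it shows the shape of the saving:

    # Idealized slow start: how many round trips to deliver a page?
    # The window sizes 3 and 10 follow Google's proposal to raise the
    # initial congestion window; everything else is a simplification.
    SEGMENT_BYTES = 1460  # typical TCP maximum segment size

    def round_trips(payload_bytes, init_cwnd_segments):
        cwnd, sent, rtts = init_cwnd_segments, 0, 0
        while sent * SEGMENT_BYTES < payload_bytes:
            sent += cwnd   # send one window's worth per round trip
            cwnd *= 2      # slow start doubles the window each RTT
            rtts += 1
        return rtts

    page = 320 * 1024  # Google's average page, in bytes
    for iw in (3, 10):
        print(f"initcwnd={iw:2}: {round_trips(page, iw)} round trips")

Under those assumptions, the 320KB average page needs seven round trips with a three-segment window but only five with ten – and on a 100ms link, two round trips is 200ms.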

Meanwhile, as previously announced, Google is developing a new application protocol it calls SPDY, pronounced, yes, "speedy." The project is meant to reduce web latency via improvements like multiplexed streams, request prioritization, and HTTP header compression. In the past, Google has said that with SPDY, it sees "up to" a 55 per cent improvement when downloading the web's top 25 sites over simulated home connections, and according to Hölzle, the protocol can reduce packet count by 40 per cent and byte count by 15 per cent.

SPDY creates a session between the HTTP application layer and the TCP transport layer. It is not an HTTP replacement, though it uses an HTTP-like request-response setup.

"SPDY replaces some parts of HTTP, but mostly augments it," reads a Google FAQ. "At the highest level of the application layer, the request-response protocol remains the same. SPDY still uses HTTP methods, headers, and other semantics. But SPDY overrides other parts of the protocol, such as connection management and data transfer formats."

According to Hölzle, on low-bandwidth links, headers are "surprisingly costly." Headers alone, he says, can cost more than a second of latency. But with SPDY's header compression, Google has seen a latency reduction of 85 per cent – an improvement of 45 to 1142 ms in page load times.
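Header compression is less exotic than it sounds: HTTP headers are repetitive text, and SPDY runs them through zlib – with a preset dictionary and a compression context that persists across requests in the real protocol. A rough illustration with a made-up header block and stock zlib settings:

    # Rough illustration: typical HTTP headers are repetitive text
    # and compress well. Real SPDY uses zlib with a preset dictionary
    # and a shared context across requests, which squeezes repeated
    # headers far harder than this one-shot example does.
    import zlib

    headers = (
        b"GET /images?q=velocity HTTP/1.1\r\n"
        b"Host: www.google.com\r\n"
        b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) Chrome/5.0\r\n"
        b"Accept: text/html,application/xhtml+xml,application/xml\r\n"
        b"Accept-Encoding: gzip,deflate\r\n"
        b"Cookie: PREF=ID=1234567890abcdef\r\n\r\n"
    )
    packed = zlib.compress(headers)
    saved = 100 * (1 - len(packed) / len(headers))
    print(f"{len(headers)} -> {len(packed)} bytes ({saved:.0f}% smaller)")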

Hölzle also points to Google's efforts to improve DNS – the company now runs its own public DNS service, and it has proposed changes to the protocol, hoping to improve the way it maps web users to particular data centers – and he trumpets Mountain View's work to improve the secure sockets layer (SSL) protocol.
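Pointing a resolver at that public DNS service takes only a few lines. This sketch assumes the third-party dnspython package (its resolve() call arrived in version 2.0):

    # Resolving a name through Google Public DNS at 8.8.8.8.
    # Assumes dnspython 2.0+ (pip install dnspython).
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]  # Google Public DNS

    for record in resolver.resolve("www.google.com", "A"):
        print(record.address)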

Of course, as it seeks to update the net's protocols, Google is pushing for added bandwidth as well. As Hölzle mentions, the company is working to test 1Gbps fiber networks in certain American cities – though it says it has no intention of joining the last-mile business.

In any event, Mountain View is obsessed with speed. After all, at Google, a faster web translates to more cash. According to Hölzle, Google co-founder Larry Page tells his product managers that speed is a product's most important feature. Everything else is secondary. ®
