Google Apache mod speeds into outside world
Fastsoft on Gooroids
Though Google is loath to open source the foundations of its massive online infrastructure, it does share code for tools at the edge of its network, tools designed to accelerate the actual delivery of web stuff. Mountain View doesn't want Facebook rebuilding the Googlenet. But it is intent on accelerating the web as a whole. Ultimately, this juices the Google bottom line.
In June 2009, Google unveiled SPDY, an application layer protocol designed to improve the speed of the existing HTTP protocol, and last fall, it open sourced something it rather inelegantly calls mod_pagespeed, an Apache web server module that accelerates page delivery by optimizing content as it is served. It remains to be seen how widely either will be used, but both are making at least some headway outside the Googleplex.
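SPDY's headline trick is multiplexing: many request/response streams share a single TCP connection, with each message carried in a frame tagged by a stream ID. The sketch below is illustrative only – it is not the real SPDY wire format, and the frame layout is invented – but it shows the basic idea of interleaving several logical exchanges on one connection.

```python
import struct

# Toy SPDY-style framing (invented layout, NOT the actual SPDY wire format):
# each frame is a 4-byte stream ID, a 4-byte payload length, then the payload.

def encode_frame(stream_id: int, payload: bytes) -> bytes:
    """Wrap one logical message in a frame tagged with its stream ID."""
    return struct.pack("!II", stream_id, len(payload)) + payload

def decode_frames(data: bytes):
    """Split a byte stream back into (stream_id, payload) pairs."""
    frames = []
    offset = 0
    while offset < len(data):
        stream_id, length = struct.unpack_from("!II", data, offset)
        offset += 8
        frames.append((stream_id, data[offset:offset + length]))
        offset += length
    return frames

# Three logical requests interleaved on one connection.
wire = b"".join([
    encode_frame(1, b"GET /index.html"),
    encode_frame(3, b"GET /style.css"),
    encode_frame(5, b"GET /logo.png"),
])
```

With plain HTTP/1.x, those three requests would each queue behind the one before them (or open separate connections); framing lets the receiver demultiplex them independently.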
SPDY has turned up inside multiple operations – including a website acceleration service known as Strangeloop and the Israel-based content delivery network Cotendo – and mod_pagespeed is finding its own way as well.
FastSoft – a Pasadena, California company offering hardware appliances and software that speed the delivery of beefy internet content – is working to roll mod_pagespeed into its existing technologies. This outfit has built a prototype that pairs mod_pagespeed with its own software on the same server, and according to chief technology officer Cheng Jin, the aim is to "jointly optimize" the two tools and release them as a single product.
Based on work that originated with Jin and mentor Steven Low at the California Institute of Technology, FastSoft's existing software seeks to optimize the operation of good old TCP solely from the server side. In essence, the technology takes algorithms that economists have traditionally used to better allocate physical resources among human beings and applies them to internet applications trying to share bandwidth among client machines.
"If you look at the network resource allocation and resource sharing problem, it's very similar to an existing problem in economics, where you have a fixed resource to be shared among multiple people," Jin tells The Register. "With TCP, you have all these application users sharing a limited resource: bandwidth."
With raw TCP, he explains, end user machines operate in their own self-interest, without really considering the behavior of other machines. FastSoft seeks to accelerate content delivery by encouraging users to pull in the same direction.
"You have all these parties, individual application users, trying to figure out how many resources they're going to take, and each one is optimizing based on his own objective or utility, rather than optimizing for some global objective," Jin says. "What we do is look for a reasonably fair allocation of bandwidth for all users."
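The classic formulation of the "global objective" Jin describes is network utility maximization: give each flow a (possibly weighted) logarithmic utility and maximize the sum subject to link capacity. This is an illustrative textbook example, not FastSoft's actual algorithm; on a single bottleneck link, maximizing sum(w_i * log(x_i)) with sum(x_i) <= capacity has a closed-form answer, a weighted split of the link:

```python
# Proportionally fair bandwidth allocation on one bottleneck link
# (illustrative sketch of the economics-style allocation described above,
# not FastSoft's implementation). Maximizing sum(w_i * log(x_i)) subject
# to sum(x_i) <= capacity yields x_i = capacity * w_i / sum(w).

def proportional_fair_shares(weights, capacity):
    """Return each flow's rate under weighted proportional fairness."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

# Three flows sharing a 90 Mbit/s link; the third flow is weighted double.
shares = proportional_fair_shares([1, 1, 2], 90.0)
```

The greedy alternative – every flow grabbing as much as it can – is exactly the "own objective" behavior Jin contrasts this with; the log utility penalizes starving any one flow, which is what makes the split "reasonably fair."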
The mod_pagespeed tool is a natural complement to this effort. Google's open source Apache module optimizes page caching, minimizes client-server round trips, and reduces payload size for websites. It rejigs which objects go into a page and the content of each object. Once this is done, FastSoft can work to facilitate the actual transfer. "mod_pagespeed deals with how the content is put together," Jin says. "Then our job is to overcome issues such as packet loss, latency, and congestion."
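To make the round-trip point concrete, here is a toy sketch of one kind of rewrite mod_pagespeed performs: combining several small stylesheets into a single resource so the browser makes one fetch instead of many. This is illustrative only; `combine_css` is an invented helper, not mod_pagespeed's API.

```python
# Illustrative sketch (not mod_pagespeed's actual code) of one rewrite it
# performs: combining small CSS files into one, trading several
# client-server round trips for a single larger fetch.

def combine_css(files: dict) -> str:
    """files maps filename -> CSS text; returns one concatenated sheet."""
    parts = []
    for name, css in sorted(files.items()):
        # Keep a provenance comment so the combined sheet stays debuggable.
        parts.append(f"/* from {name} */\n{css.strip()}")
    return "\n".join(parts)

sheets = {
    "reset.css": "body { margin: 0; }",
    "theme.css": "h1 { color: navy; }",
}
combined = combine_css(sheets)
```

A page referencing the combined sheet makes one HTTP request where it previously made two; the real module applies the same thinking to JavaScript, images, and caching headers.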
When mod_pagespeed was first unveiled, Google said it was working with web hosting service GoDaddy to run the technology with customer websites, and Cotendo, the Israeli CDN, has also embraced the module, alongside SPDY. Ultimately, FastSoft will take the technology to CDNs as well. The company recently inked a deal to supply appliances for the Los Angeles-based content delivery network NetDNA.
According to Jin, FastSoft's mod_pagespeed prototype was built independently of Google, though the two have since discussed the project. So, yes, there are at least some cases where Google is happy to open source its back-end work – and chat about it too. But the big secrets are still secrets. ®
There was a very good discussion on slashdot (<http://tech.slashdot.org/story/11/04/11/1448259/Google-Cuts-Chrome-Page-Load-Times-In-Half-w-SPDY>; see in particular this thread: <http://tech.slashdot.org/comments.pl?sid=2078504&cid=35782366>) by people who knew what they were talking about, and SPDY did not come off well.
If it is a push protocol, how will it affect my local copy of squid, or indeed any HTTP caching service?
If it is meant to save bandwidth, what takes up the largest amount of bandwidth on an average web page? (Perhaps Flash adverts?)
How does pushing content affect my ability to control the content I see? I don't want adverts unblockably pushed at me, which I suspect is the point. (Please note that by blocking the majority of adverts, as I currently do, I actually free up bandwidth for other users, so this is a good thing, no?)
Why doesn't it, as the slashdot thread above asked without getting a reply, try to fix HTTP's supposed problems (pipelining was one named by a Google tech) rather than work around them?
Failing that, why not use existing technology for multiplexing (which is supposed to be one of the core features of SPDY), like BEEP?
Just out of interest, did you contact the IETF to try to work with them?
Or other browser makers?
In summary: what is the business case for SPDY, and what is the technical case for SPDY?
read the /. thread
A tech involved in SPDY development did respond, but not satisfactorily, or not at all, when the questions got tough.
And yes, heads should roll if they develop stuff which cannot be justified. That's called bad management.
Google failed to respond to a slashdot thread?
Yea gods! Someone should contact the person at google responsible for responding to sensible slashdot comments immediately! Heads WILL roll!