Google's SPDY blamed for slowing HTTP 2.0 development

Working group contemplates packing it in and moving to HTTP 3.0

The HTTP 2.0 working group appears to be in crisis, with work on integrating Google's SPDY HTTP-boosting protocol blamed for taking the project off the rails.

The accusation comes from prominent FreeBSD developer Poul-Henning Kamp, whose post to the working group's mailing list calls for work on the protocol to be abandoned.

Kamp blames SPDY for the problems: while the working group first thought adopting the protocol would mean “we get HTTP/2.0 almost for free”, closer examination of Google's code has revealed “numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them.”

The Internet Engineering Task Force included SPDY in the draft HTTP 2.0 spec in December 2012, presumably because the cunning client-server interactions it offers represented a chance to improve web server performance. Such a supposition is not unreasonable as Google eats its own dogfood with SPDY: the ad-slinger's Chrome browser can use it and Google servers respond in kind when it is detected.
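For a sense of how that detection works in practice, here is a minimal, hypothetical sketch (not from the article) of a client asking a TLS server which application protocol it is prepared to speak. Chrome and Google's servers settled on SPDY through a negotiation of this kind (originally via the NPN TLS extension, later ALPN); the hostname and protocol labels below are illustrative assumptions, and a modern server will likely just answer with plain HTTP/1.1.

```python
# Sketch only: probe which application protocol a TLS server selects via ALPN.
# The protocol labels offered and the example hostname are assumptions, not
# anything specified in the article.
import socket
import ssl


def negotiated_protocol(host, port=443):
    ctx = ssl.create_default_context()
    # Offer SPDY and HTTP/1.1; the server picks one during the TLS handshake.
    ctx.set_alpn_protocols(["spdy/3.1", "http/1.1"])
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # Returns the server's choice, or None if it ignored ALPN.
            return tls.selected_alpn_protocol()


if __name__ == "__main__":
    print(negotiated_protocol("www.google.com"))
```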

Kamp writes that, in his opinion, the working group has “wasted a lot of time and effort trying to goldplate over the warts and mistakes” in SPDY. The result, he argues, isn't worth completing, never mind releasing, because it won't properly address the issues a next-generation HTTP standard usefully should.

He therefore calls for the HTTP 2.0 process to be abandoned in favour of a newly-defined HTTP 3.0 project, because “Now even the WG chair publically admits that the result is a qualified fiasco and that we will have to replace it with something better 'sooner'.”

The chair Kamp refers to is Mark Nottingham, who, in a post of his own, calls for rapid action to get HTTP 2.0 ready. Evidence of him declaring the development process a fiasco is harder to come by, although the working group's mailing list is a little more fractious than others in your correspondent's experience.

The working group meets in New York City next week to review its progress. The meeting could be a fun one to observe! ®
