Google researchers propose fix for ailing SSL system
Changes would overhaul net's foundation of trust
Security researchers from Google have proposed an overhaul to improve the security of the Secure Sockets Layer encryption protocol that millions of websites use to protect communications against eavesdropping and counterfeiting.
The changes are designed to fix a structural flaw that allows any one of the more than 600 bodies authorized to issue valid digital certificates to generate a website credential without the permission of the underlying domain name holder. The dire consequences of fraudulently issued certificates were underscored in late August when hackers pierced the defenses of Netherlands-based DigiNotar and minted bogus certificates for Google and other high-profile websites. One of the fraudulent credentials, for Google mail, was used to snoop on as many as 300,000 users, most of them from Iran.
Under changes proposed on Tuesday by Google security researchers Ben Laurie and Adam Langley (PDF here), all certificate authorities would be required to publish the cryptographic details of every website certificate to a publicly accessible log that's been cryptographically signed to guarantee its accuracy. The overhaul, they said, is designed to make it impossible – or at least much more difficult – for certificates to be issued without the knowledge of the domain name holder.
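The core of the proposal is an append-only, cryptographically verifiable log of issued certificates. A minimal sketch of that idea (not the authors' actual design, which uses more sophisticated structures) is a hash chain, where each entry commits to everything before it, so a published record cannot be quietly altered or removed:

```python
import hashlib

class AppendOnlyLog:
    """Toy append-only certificate log; entries are hash-chained so any
    retroactive tampering invalidates every later entry. Certificate
    records here are placeholder strings, not real certificates."""

    def __init__(self):
        self.entries = []          # list of (cert_record, entry_hash)
        self.head = b"\x00" * 32   # hash state of the empty log

    def append(self, cert_record: str) -> str:
        # Each entry's hash covers the previous head, chaining the log.
        entry_hash = hashlib.sha256(self.head + cert_record.encode()).digest()
        self.entries.append((cert_record, entry_hash))
        self.head = entry_hash
        return entry_hash.hex()

    def verify(self) -> bool:
        # Recompute the chain from scratch and compare at every step.
        head = b"\x00" * 32
        for record, stored in self.entries:
            head = hashlib.sha256(head + record.encode()).digest()
            if head != stored:
                return False
        return head == self.head

log = AppendOnlyLog()
log.append("CN=www.example.com, issuer=Example CA, serial=01")
log.append("CN=mail.example.com, issuer=Example CA, serial=02")
assert log.verify()

# Rewriting an already-published entry breaks verification.
log.entries[0] = ("CN=attacker.example, issuer=Rogue CA, serial=99",
                  log.entries[0][1])
assert not log.verify()
```

A domain holder (or anyone else) monitoring such a log could spot a certificate issued for their domain that they never requested, which is the property the proposal is after.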
“We believe that this design will have a significant, positive impact on an important part of internet security and that it's deployable,” Langley wrote in a blog post. “We also believe that any design that shares those two properties ends up looking a lot like it.” Some of the ideas overlap with recommendations recently published by the Electronic Frontier Foundation for improving the security of SSL.
While few disagree that SSL in its current form is hopelessly broken, finding agreement on a way to fix the fragile certificate authority infrastructure has proven to be elusive. Indeed, within hours of Laurie and Langley's plan going public, critics were already saying it was unworkable. Among the complaints was the critique that it would require the divulging of information considered to be proprietary in the fiercely competitive market for SSL certificates.
“I assume that CAs wouldn't agree to provide their entire customer data to the public (and competition),” Eddy Nigg, COO and CTO of StartCom, the Israeli-based operator of StartSSL, told The Register. He held out a voluntary set of baseline requirements recently adopted by the CA/Browser Forum as a more effective fix. Members of the forum hope to make the requirements mandatory for all CAs.
Nigg also said that Laurie and Langley's proposal could place significant technical burdens on website operators and browser makers. One or more authorities would have to be established to compile the lists around the clock and make them available to millions of users each time they access an SSL-protected page, and both activities would require considerable bandwidth and processing resources to be done properly.
“If browsers would have to ping this data upon every first connection per day per site, this would require lots of resources,” Nigg said. “This is something Google might be able to do, but not that many other entities will have those capabilities and interest.”
Google to the rescue
Google proposes another technical solution that can only be implemented by Google hosting massive amounts of data for customers to interact with in revealing ways. This one is even better than Google's whitespace WiFi solution, where all access points must query a database (run by Google & friends) for unused frequencies using their GPS coordinates.
DNS sec says "Hi!"
This seems like a problem DNSSEC would solve to everyone's satisfaction. Not only would it make it harder to attack local TLS connections by pushing rogue DNS servers to clients via DHCP, it would also enable a far more sensible answer to the question "Did the owners of the domain authorise the CA to issue this cert?"
If your DNS records can be authenticated, you can then tie the certificate to the domain by having a unique key pair tied to the domain through (say) a key fingerprint in a DNS record. You could then sign SSL certificates issued to you with your domain certificate and the browser could verify that chain of trust as well as the chain to a trusted CA.
Attackers would then have to subvert both a CA and your authoritative DNS simultaneously to make any headway against your security through this route. Maybe not impossible, but certainly far harder than doing only one of them.
On the "this is theoretical bullshit" side of the coin, it would require DNS sec to be ubiquitous enough to just outright reject non-authenticated DNS records. If the attacker could simply substitute regular old unsigned DNS without the browser throwing a fit, then that leg of the security would fall apart, though if the user had gone to the site before the client could at least cache the level of security it is used to and flag it up to the user if the site has had a downgrade. In any event, my idea seems much less outlandish than requiring CAs to run what amounts to yet another DNS system for looking up certs.
What about having an SSL entry as part of the DNS record?
I'm sure that people must have discussed this before (and so it might be fundamentally flawed in one way or another), but at least to achieve a partial fix against bogus CAs issuing certs for your domain, why not include some form of a TXT entry in the DNS record that identifies a CA (or CAs) that you have authorised to issue certs for your domain? Same kind of thing as an SPF entry, only for SSL?
That way, whenever a browser etc. sees an SSL cert, it can identify the CA on the cert and do a DNS lookup to confirm that it is authorised. If it's not authorised at the DNS level then the SSL cert is rejected.
Heck, if the TXT field allows a sufficiently long entry you could even store a key hash.
Obviously, this would be open to DNS poisoning attacks (or, say, the ISP intentionally tweaking the DNS records it serves up), but could it at least act as one layer of defence?
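The SPF-style policy check sketched above is simple to express. The `v=ssl1 ca=...` syntax below is invented purely for illustration, and DNS is again simulated with a dict (a comparable mechanism was later standardised as the CAA DNS record in RFC 6844):

```python
# Simulated TXT records: the domain owner lists the CAs authorised to
# issue certificates for the domain, SPF-style.
TXT = {
    "example.com": "v=ssl1 ca=ExampleCA ca=BackupCA",
}

def issuer_authorised(domain: str, issuer: str) -> bool:
    """Accept a certificate only if its issuing CA appears in the
    domain's published policy record (if one exists)."""
    record = TXT.get(domain, "")
    if not record.startswith("v=ssl1"):
        # No policy published: accept any trusted CA, as browsers do today.
        return True
    allowed = {part[len("ca="):]
               for part in record.split() if part.startswith("ca=")}
    return issuer in allowed

assert issuer_authorised("example.com", "ExampleCA")
assert not issuer_authorised("example.com", "DigiNotar")   # not on the list
assert issuer_authorised("other.com", "AnyCA")             # no record published
```

As the comment notes, the check is only as trustworthy as the DNS answer itself, so without DNSSEC it adds a hurdle rather than a guarantee.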