Security

Dan Kaminsky is an expert on DNS security – and he's saying: Patch right God damn now

Glibc bug – dubbed Skeleton Key – could persist in caches

Exclusive Dan Kaminsky, the man who could have broken DNS but fixed it instead, is warning that the glibc bug found by Red Hat and Google could be much worse than anyone has predicted.

"I've seen a lot of bugs, but this bug was written in May 2008, right at the end of my own patching effort on DNS," Kaminsky told The Register on Friday night, referring to his research into DNS insecurity that year. "I'm busy fixing one bug and someone writes another. It took a decade to fix my flaw and I thought we'd got better than this."

After an intensive day or so of research, Kaminsky reckons it's possible for poisoned DNS lookups exploiting getaddrinfo()'s CVE-2015-7547 bug to persist in caches. What does that mean?

Well, let's say your Linux email client tries to fetch an image embedded in an email from evildomain.com. Your client, like a ton of other open-source software, uses glibc to look up evildomain.com and resolve it to a numeric internet IP address.
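That lookup path is worth seeing in code. The sketch below shows the typical way an application calls glibc's getaddrinfo() – requesting AF_UNSPEC, i.e. both IPv4 and IPv6 answers, which is the dual A/AAAA query path where CVE-2015-7547's stack buffer overflow lives. The hostname and port here are illustrative; the point is that almost every networked Linux program runs code shaped like this.

```c
/* Minimal sketch: how an ordinary client reaches the vulnerable glibc
 * code path. CVE-2015-7547 sits in getaddrinfo()'s handling of parallel
 * A/AAAA DNS queries, reached when callers ask for AF_UNSPEC -- which
 * most do. Hostname "localhost" is a stand-in for any looked-up name. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;     /* IPv4 + IPv6: the dual-query path */
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo("localhost", "80", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "lookup failed: %s\n", gai_strerror(rc));
        return 1;
    }
    /* In unpatched glibc, a malicious name server answering this query
     * with an oversized (over 2048-byte) reply overflows a stack buffer
     * inside the resolver; a patched library rejects it safely. */
    printf("resolved ok\n");
    freeaddrinfo(res);
    return 0;
}
```

In other words, the attack surface isn't "DNS software" – it's any program that resolves a name through glibc.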

You, the hacker, have set up evildomain.com's DNS name servers to send back an overly long DNS reply that exploits a buffer overflow bug in glibc in your victim's software. But your victim is using Comcast's DNS servers, or Google's DNS systems, or OpenDNS's services, to look up evildomain.com, so your malicious payload has to be forwarded through several systems before it reaches the vulnerable computer.

According to Kaminsky, it is possible, maybe, for this booby-trapped reply to traverse these caches and reach the victim's PC, and exploit the hole in glibc to ultimately execute malware on the machine. But here's the kicker: let's say the attack doesn't work, but the payload lingers in the ISP's DNS cache.

The next time the victim's machine looks up evildomain.com, it'll get the payload again. On the fourth or tenth try, the exploit may well work – it may hit a sweet spot allowing it to bypass the operating system's security mechanisms, such as ASLR and non-executable stacks, and gain remote code execution.

The key thing here is cache traversal: Kaminsky believes it's possible for malicious payloads to linger in caches, which (say) JavaScript running in browsers could exploit, firing off thousands of requests a second, until the payload hits its target.

According to Kaminsky, it's going to be a slow burn problem that could dog the internet for years to come, given current patching practices.

"This CVE is easily the most difficult to scope bug I’ve ever worked on, despite it being in a domain I am intimately familiar with," he wrote in his analysis.

"The trivial defenses against cache traversal are easily bypassable; the obvious attacks that would generate cache traversal are trivially defeated. What we are left with is a morass of maybe’s, with the consequences being remarkably dire (even my bug did not yield direct code execution)."

Kaminsky said that 99 per cent of exploitable scenarios require cache traversal, but that isn't a tough barrier in the long, or even medium, term. Right now we've got a situation where every server running glibc needs to be patched, and fast – even if it means server downtime.

This comes after Yahoo! security engineers managed to exploit CVE-2015-7547 to gain remote code execution on an Apache PHP server setup. In short: update, reboot, survive.

This isn't a Shellshock or Heartbleed vulnerability, of course, but it exposes a wider flaw in the way the community is writing its code. Buffer overflows in 2016 are an embarrassment. ®
