
The value of vulnerabilities

To disclose or not to disclose? That is the question

There is value in finding vulnerabilities. Yet many people believe that a vulnerability doesn't exist until it is disclosed to the public. We know that vulnerabilities need to be disclosed, but what role do vendors play in making these issues public?

One of the things I really love about information security is the large number of different technologies involved. With personal computers alone, there are all sorts of architectures, operating systems, devices, and protocols to learn about. There's never a shortage of information to digest. It's hard to maintain a balance between knowing a little bit about everything, and understanding some specific things at a very deep level.

When it comes to vulnerabilities, there is a wide spectrum of understanding associated with them. At a high level, that understanding may simply be about which technologies are affected and what the results of exploitation are. At the lowest level, by contrast, a researcher can explore the vulnerability in gruesome detail - how exactly the vulnerable code was found, and how the issue can be exploited. At this level, there's even a big difference between knowing how to exploit a vulnerability and actually exploiting it. And of course there is some middle ground between the high-level view and the view a researcher might have. This middle ground might enable people to implement technical mitigations for the issue, and to understand the vulnerability at a level deep enough to pinpoint and protect against its attack vector, even if they don't understand the intricate technical details themselves.

Some vulnerabilities have a very small gap between these levels, such as the case of a simple SQL injection issue. Here, someone with a very high level understanding of the issue would probably not have too much trouble figuring out or learning how to exploit it. On the other hand, there are vulnerabilities where the gap between these two levels is immense. The Symantec Firewall DNS parsing kernel stack overflow of 2004 is a great example of that. Exploiting this vulnerability was something that only a select group of people would have been able to accomplish in a reasonable amount of time.
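
To make the SQL injection example concrete, here is a minimal sketch in Python, assuming an invented "users" table and login routine (none of this comes from the column itself). It shows why the gap is so small: anyone who understands the flaw at a high level can see both the attack and the fix.

    import sqlite3

    # Hypothetical setup for the sketch: a throwaway in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'admin', 'secret')")

    # Deliberately vulnerable: user input is concatenated straight into the SQL text.
    def login_vulnerable(username, password):
        query = ("SELECT id FROM users WHERE username = '" + username +
                 "' AND password = '" + password + "'")
        return conn.execute(query).fetchone() is not None

    # The classic bypass: the injected quote plus OR '1'='1' makes the WHERE clause
    # true for every row, so no valid password is needed.
    print(login_vulnerable("admin", "' OR '1'='1"))   # True - authentication bypassed

    # The fix is just as easy to grasp: let the driver bind the values as parameters.
    def login_safe(username, password):
        query = "SELECT id FROM users WHERE username = ? AND password = ?"
        return conn.execute(query, (username, password)).fetchone() is not None

    print(login_safe("admin", "' OR '1'='1"))         # False - the injection is inert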

Before I digress too much further, let me just say that I find vulnerabilities to be fascinating little things. Each one is unique, and each one has its own subset of knowledge requirements to fully understand it.

Where do vulnerabilities come from?

Although this might sound like a simple question, the answer isn't always simple. There are two schools of thought about where vulnerabilities come from, so I'll discuss each as we explore further.

Most public vulnerabilities are disclosed by a security researcher, more often than not on a major security-related mailing list such as Bugtraq. A security researcher can be an employee of a corporation, a full-time independent researcher, or even an audit-by-night researcher who simply glances through code in his spare time. In some cases, the person who discovers a vulnerability may have done so purely by accident. In most cases, though, discovering and researching a vulnerability in its entirety is a fairly intensive process that can involve many hours of skilled work.

Now, for whatever reason, the public disclosure of a vulnerability is often considered to coincide with its very existence. Even the often-used term "zero-day" seems to imply that an undisclosed vulnerability doesn't really exist yet. This belief is a mistake that too many people make. It's as if people are under the impression that these vulnerabilities don't actually pose any sort of threat until they're publicly disclosed. If a vulnerability is discovered in the proverbial forest, and no one hears of it, then people think it isn't really a vulnerability, so to speak.

The process of "responsible disclosure" requires security researchers to sit on information until vendors have released patches for it. In the past, we've even seen hostility between vendors and security researchers, who hold two very different opinions on disclosing this information. Vendors want time to fix the problem, which can be a pretty involved process. Researchers who disclose to vendors are clearly looking to have the issues addressed, even if that isn't their primary reason for disclosing. And although these sound like compatible goals, conflict often arises.

Currently, it would seem that the security industry as a whole acknowledges vulnerabilities only on their disclosure date. In some cases, these issues are reported to vendors weeks, months, or even years before disclosure happens. Nothing guarantees exclusivity during that window, and I think it would be pretty naive to believe that the person reporting the issue is the only one aware of its existence. That in itself is pretty frightening if you think about it.
