
Responsible bug disclosure by corporate fiat

Who should decide?

I must have a masochistic streak. Nothing else could explain why I occasionally argue in this space that people should act responsibly when disclosing holes in software. If I even hint that the doctrine of full disclosure has limits, the reaction is overwhelming. Among other things, I've been called a Microsoft lackey, a fascist, and "just a plain dolt." You'd think I was criticizing CISSPs.

Most of the negative feedback seems to stem from the belief that I'm opposed to full disclosure. In fact, I'm not.

But I believe that it's time for the security community to develop a broadly supported model for disclosing security vulnerabilities. This model should ultimately result in full disclosure of every security hole in every application. Just not all at once.

Without such a consensus, the computer security industry resembles nothing so much as a country without traffic laws, or even agreement on which side of the road to drive. Such situations end in either formal laws or informal consensus, but they inevitably end.

My preference is for an informal consensus, but one strongly backed by informal community sanctions against violators. It should be enough to know who is breaking the rules, and to refuse to do business with them until they mend their ways. I see no need to involve the law and the attendant bickering among lawyers. If we can get our act together, we can keep ourselves out of court.

How can this consensus develop? The IETF has decided that it is not the appropriate forum for non-technical best practices; therefore, the Christey-Wysopal "Responsible Disclosure" draft was formally withdrawn from consideration. This was indeed unfortunate: few organizations are in as good a position to create consensus as the IETF. As Steven Christey, one of the authors of the draft, wrote me, "The IETF was unique, we haven't found any good equivalent."

Christey suggests that the newly formed Organization for Internet Safety (OIS) may play a role in promoting responsible disclosure. The group, announced last month, currently consists of founding members @stake, BindView, Caldera International, Foundstone, Guardent, ISS, Microsoft, NAI, Oracle, SGI, and Symantec. [Symantec publishes SecurityFocus Online - ed.]

But I worry that the OIS's exclusion of individual security researchers, and the consequent reliance on corporate security interests, may create too much ill will in the community for it to succeed.

I believe any attempt at consensus building should begin with careful consideration of a proposal's viability as a community standard. It should accept the ultimate necessity of full disclosure, while accounting for the responsibilities of all parties and the limitations of the process.

Daring Exploits

Ominously, the OIS is opposed to the release of actual exploit code. That's bad: ultimately, the code will be developed, and will be shared among the members of the cracker underground. If upstanding members of the community are denied access to live exploits, they will be the only ones to suffer.

The OIS FAQ reasons that "it is unethical to intentionally make one person more vulnerable than another", but preventing the public release of exploits does just that. Furthermore, as the OIS acknowledges, there are legitimate uses for such code, and it should be publicly available at a suitable time.

The ultimate disclosure of every vulnerability, with full details, is unavoidable. In the most extreme case, ambitious hackers or crackers can reverse-engineer individual patches to reveal the original hole. There's no stopping it.

But full disclosure is more than a technical fact: it is also a practical necessity.

It may be difficult to remember, but there was a time when the standard vendor response to a security problem was deny, deny, deny. The vendor would simply claim that no problem existed, even as it was exploited by unprincipled attackers. Full disclosure originated as a response to vendor stonewalling: in the presence of a publicly available exploit, developers could not plausibly claim that no problems existed.

That said, security researchers should not proceed directly from discovery to release of a full-blown exploit.

There must be a period of time during which vendors can verify the existence of the vulnerability and develop a patch or workaround for the problem. The point of full disclosure is to inform system administrators and force a vendor response. If vendors respond willingly, immediate full disclosure can cause more problems than it solves.

Therefore, a responsible security researcher should first contact the vendor. If, after a suitable period of time (say, one or two weeks), the vendor has not acknowledged receipt of the report, public disclosure is acceptable. Such rapid disclosure would encourage vendors to work with the security community rather than deny its existence.

After acknowledging receipt of the report, the vendor should verify the existence of the problem. If the problem cannot be duplicated, the vendor should communicate further with the researcher until both agree on the existence or nonexistence of the hole. Should the two not come to an agreement after a reasonable time, the researcher should feel free to report the hole publicly. After all, if the hole does not exist, the community will eventually reject the report and the researcher's reputation will suffer.

Once the hole has been reproduced, the vendor should develop a fix or workaround for the problem. This should be done rapidly, and the vendor should preferably report its progress to the researcher. If the process is not complete after a reasonable period of time (four to six weeks), the vendor may request that the researcher hold off on a public report of the bug. At that point, whether to wait on the vendor or to publish is at the researcher's discretion, and the vendor should not be surprised to see a public report of the hole.
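To put that schedule in concrete terms, here is a minimal sketch, in Python, of how a researcher might track the milestones described above. The two-week and six-week deadlines come from the rough figures suggested in this column; the function, its parameters, and the status messages are purely illustrative, not a prescribed tool.

    from datetime import date, timedelta
    from typing import Optional

    # Illustrative deadlines only; the column suggests ranges, not hard rules.
    ACK_DEADLINE = timedelta(weeks=2)   # vendor should acknowledge receipt
    FIX_DEADLINE = timedelta(weeks=6)   # vendor should have a fix or workaround

    def disclosure_status(reported: date,
                          acknowledged: Optional[date],
                          fixed: Optional[date],
                          today: date) -> str:
        """Rough judgement on whether public disclosure is reasonable yet."""
        if acknowledged is None:
            if today - reported > ACK_DEADLINE:
                return "no acknowledgement: public disclosure is acceptable"
            return "waiting for the vendor to acknowledge the report"
        if fixed is None:
            if today - reported > FIX_DEADLINE:
                return "no fix yet: disclosure is at the researcher's discretion"
            return "vendor is working on a fix; hold the report for now"
        return "fixed: coordinate an advisory that credits the researcher"

    # Example: reported June 1st, acknowledged promptly, still unfixed in mid-August.
    print(disclosure_status(date(2002, 6, 1), date(2002, 6, 5), None,
                            date(2002, 8, 15)))

The point of the sketch is simply that the proposal reduces to a handful of dates and deadlines that anyone can track; the judgement calls remain human ones.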

Credit Where Due

When the vendor does produce an advisory regarding the vulnerability and its fix or workaround, the researcher who discovered the problem should be prominently credited, along with the date that the hole was reported. This information is important to the proper functioning of the security community: reliable, responsible researchers should be rewarded for both their ability and their responsible participation in the larger community. The first step towards reward is acknowledgement.

If all parties act responsibly, every major security hole can be acknowledged and repaired in less than two months. After that point, an exploit for the hole will likely circulate within the hacker underground, even if no exploit is released by the researcher. Best practices at this point suggest rapid, but not instantaneous, installation of vendor packages.

Steve Beattie and others have a paper entitled "Timing the Application of Security Patches for Optimal Uptime", which they'll present at the LISA 2002 conference in Philadelphia this December. I'm hoping this paper can provide a practical guideline for how much time should elapse between the release of a patch and the public release of a working exploit by the researcher.

Full disclosure has its limits. The most serious of these limits is the public's patience and attention span. If both vendors and researchers can agree that a particular problem is minor, release of a fix could reasonably wait until a point release or security rollup patch. At such time, it could be reported as one of a series of simultaneous fixes, with further details available for the curious. Should any such minor vulnerability later prove a serious problem, a researcher or vendor should release a higher-level advisory emphasizing the importance of that particular update.

This procedure would allow problems to be disclosed in a manner proportionate to their seriousness. It would preserve the limited attention of the ultimate audience for these reports, system administrators' ability to secure their systems, and the reputations of researchers and vendors alike. All it takes is some time, some patience, and a willingness to work together.

The alternative is legislation, duelling lawyers, perhaps even criminal prosecutions. And no amount of name-calling will fix that.

© 2002 SecurityFocus.com, all rights reserved.
