Exploit Code on Trial
Security pros gathering at a Stanford University Law School conference on responsible vulnerability disclosure Saturday harmonized on the principle that vendors should be privately notified of holes in their products, and given at least some time to produce a patch before any public disclosure is made. But there was pronounced disagreement on the question of whether researchers should publicly release proof-of-concept code to demonstrate a vulnerability.
UK-based security researcher David Litchfield, of NGS Software, said he publicly swore off the practice after an exploit he released to demonstrate a hole in Microsoft's SQL Server became the template for January's grotesquely virulent Slammer worm. At Saturday's conference, held by the university's Center for Internet and Society, Litchfield said he wrestled with the moral issues for some time. "At the end of the day, part of my stuff, which was intended to educate, did something nefarious, and I don't want to be a part of that," said Litchfield, a prolific bug-finder.
That kind of soul-searching is music to Microsoft's ears. The disclosure standards promulgated by the Organization for Internet Safety, an industry effort founded by Microsoft and a handful of large security companies, require researchers to withhold any exploits from the public for at least 30 days following the first public advisory on a bug. But Redmond would like to see researchers abstain entirely, said Steve Lipner, the software-maker's director of security engineering strategy. "We prefer that finders wait before releasing exploit code, or, better, don't release exploit code," he said. "It's something where ... we're trying to ask for cooperation, instead of something that we're trying to mandate or dictate."
California-based security vendor eEye and the Polish white hat hacker group LSD -- both prodigious exploit publishers in the past -- have taken to withholding proof-of-concept code when disclosing serious security holes.
Len Sassaman, security architect at the e-privacy company Anonymizer, said the attitude shift endangers an important part of the Internet's healing cycle when a new vulnerability is discovered. "If the researchers are discouraged from releasing working exploit code... we lose a valuable tool there," he said. "We don't get the proof-of-concept code, we don't get the motivation to create the patch on the vendor side, and to implement it on the user side."
Suppressing exploits also threatens to strip security research of the rigor of serious scientific inquiry, said Matt Blaze, a researcher at AT&T Laboratories. And network defenders sometimes use proof-of-concept code to evaluate techniques to prevent a compromise, to help detect exploitation of a new vulnerability, and to test that a patch actually works. Conference attendee Warren Stramiello, a network administrator at the Georgia Tech Research Institute, challenged Microsoft's Lipner to come up with a way to do all of that without the help of working code. Lipner countered that exploits aren't as useful to white hats as they're made out to be. "The set of users that would use exploit code to protect themselves... is probably much smaller than the set of people who would be put at risk by it," Lipner said.
Of course, black hat hackers have shown that they're perfectly capable of writing their own exploits. Even the Slammer worm's author demonstrated enough skill to have written the worm from scratch, without Litchfield's help. "If anything," said Litchfield, "I saved him 20 minutes."