Linux security self-censorship ominous

Alan Cox - traitor, or Felten understudy?

October was a bad month for proponents of full disclosure. First, Microsoft's Scott Culp argued in an essay that security researchers shouldn't reveal the nature of security holes in software. Then Culp may have found an unexpected ally in his war against full disclosure: Linux's second-in-command, Alan Cox.

Cox's decision to delete security-related material from the Linux kernel changelog seems almost to honor Culp's request that we suppress information useful to attackers.

While at least some of the security changes made in the 2.2.20 Linux kernel prerelease have already been discussed elsewhere, Cox claims that describing them could violate the same anti-circumvention provisions of the Digital Millennium Copyright Act (DMCA) used to prosecute Russian programmer Dmitri Sklyarov, and cited by Professor Felten in his initial decision not to publish a paper describing weaknesses in SDMI.

Cox may be making a broader political statement by his decision, but it could have unintended consequences. If Cox's self-censorship is taken as precedent by other developers, exploit researchers who choose to publish their code may become more vulnerable to prosecution.

Not only will those developers appear conspicuous in contrast to Cox, but opponents of full disclosure could argue that Cox's decision reflects a broad understanding of the limitations imposed by the DMCA, and that security researchers who take a different route are willfully flouting those restrictions.

While I believe there may be unintended consequences to Cox's decision, I don't doubt his sincerity.

Many in the community complain that Cox is just trying to make a point about the DMCA, and is hurting U.S.-based Linux developers in the process. But the Felten and Sklyarov cases demonstrate that developers are in genuine legal peril. Is it likely that Cox or Linux kernel overlord Torvalds would be prosecuted for posting an accurate changelog? Absolutely not. Is it certain that they would not be prosecuted? No.

Regardless of his position on the DMCA, Alan Cox says he is in favor of full disclosure when a vendor is not responsive, or if knowledge of a vulnerability is already widespread in the computer underground. "Just waiting for vendors sadly doesn't work," he wrote me in an email.

Which is all the more reason he should be wary of inadvertently supporting the efforts of Microsoft and other enemies of disclosure.

Elias Levy wrote an eloquent rebuttal to the Microsoft essay. But I'd like to zero in on one particularly egregious claim Culp makes in his argument: that an administrator "doesn't need to know how a vulnerability works in order to understand how to protect against it."

On smaller or more tightly controlled networks, it may be true that full disclosure does not directly serve the needs of system administrators. But network administrators at medium and large sites must have access to exploit code in order to ensure the security of their networks. Unless one administrator has access to every single device on his or her network, there are times when the only way to test for a vulnerability is to attempt an exploit against a server.

Although commercial tools are available that scan for vulnerabilities, the lag time between development of the exploit and the next periodic update to security scanning packages is too long for many enterprises. In checking for vulnerable systems, speed is of the utmost importance.

In some cases, running a live exploit may be the only way to root out all vulnerable systems on a network with widely dispersed controls.
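To make that concrete, here is a minimal sketch of what such a check looks like in practice. Everything specific in it is a made-up placeholder rather than a real exploit: the port, the probe payload, the "vulnerable" response signature, and the addresses are all hypothetical, and the script merely sends the triggering input to each host and inspects the reply.

    #!/usr/bin/env python3
    # Illustrative only: sweep a list of hosts with a harmless proof-of-concept
    # probe for a hypothetical, made-up service flaw. The port, payload, and
    # "vulnerable" response signature are placeholders, not from any real advisory.
    import socket

    PORT = 7777                                   # hypothetical service port
    PROBE = b"CHECK " + b"A" * 600 + b"\r\n"      # input said to trigger the (made-up) flaw
    VULN_SIGNATURE = b"500 internal error"        # reply a patched server would never send

    def appears_vulnerable(host, timeout=5.0):
        """Return True if the host answers the probe the way an unpatched server does."""
        try:
            with socket.create_connection((host, PORT), timeout=timeout) as sock:
                sock.sendall(PROBE)
                reply = sock.recv(1024)
        except OSError:
            return False          # host down, port filtered, or service not running
        return VULN_SIGNATURE in reply.lower()

    if __name__ == "__main__":
        # In practice the host list would come from an inventory or a quick port sweep.
        for host in ["192.0.2.10", "192.0.2.11", "192.0.2.12"]:
            print(host, "VULNERABLE" if appears_vulnerable(host) else "ok/unreachable")

The particular script matters less than its shape: without the published details of how a flaw is triggered, there is nothing meaningful to send, and the next scanner update may be weeks away.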

Of course, administrators shouldn't run an exploit unless it's authorized by a policy formally approved by management, and even then should run it only under close supervision from a manager. Otherwise, they risk being fired or prosecuted.

Even with management approval, attempting an exploit against one's own network is a technique of last resort, and can be dangerous in the best of circumstances. Some exploits have been trojaned to give the original author of the code a back door onto the system. Worse, on a production server, a successful or partially successful use of an exploit can crash the machine, causing an outage or even data loss.

Despite these risks, Culp's arrogant assumption that he knows what system administrators need in order to do their job is astounding. The idea that any one vendor will look out for users' best interests has not been borne out by the history of the industry, nor will a responsible system administrator rely on such an assertion.

Support by industry for the DMCA, and repeated attempts to suppress full disclosure of security vulnerabilities, are further evidence that users need to look out for themselves. That's one of the reasons Linux, with its open source ethic, has always been such a great choice for security. Let's hope it stays that way.

© 2001 SecurityFocus.com, all rights reserved.

Jon Lasser is the author of Think Unix (2000, Que), an introduction to Linux and Unix for power users.
