Just how buggy is Firefox?

Code analysis row after tool identifies 611 defects

Security researchers who ran a code analysis of the popular open source browser Firefox using automated tools have discovered scores of potential defects and security vulnerabilities, despite concluding that the software is generally well written.

A former Mozilla developer has criticised the methodology of the analysis and said it provides little help in unearthing real security bugs.

Several versions of the software were put through their paces by Klocwork's Adam Harrison using the company's K7 analysis tool. The analysis, which culminated in an examination of Firefox version 1.5.0.6, unearthed 611 defects and 71 potential security bugs.

A large number of these flaws resulted from the code not checking for null after memory was allocated or reallocated. Memory management issues accounted for the next highest defect count (141 flaws). Failure to check the execution path of code also frequently cropped up as a potential error.
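The most common pattern is straightforward to picture. As an illustration only (a minimal hypothetical C fragment, not code taken from the Firefox source), dereferencing the result of malloc() or realloc() without first testing it for NULL turns an allocation failure into a crash:

    #include <stdlib.h>
    #include <string.h>

    /* Flagged pattern: the malloc() result is used without a NULL check,
       so an allocation failure becomes a NULL-pointer dereference. */
    char *copy_unchecked(const char *src)
    {
        char *dst = malloc(strlen(src) + 1);
        strcpy(dst, src);   /* crashes here if malloc() returned NULL */
        return dst;
    }

    /* What such checkers want to see: test the allocation before use. */
    char *copy_checked(const char *src)
    {
        char *dst = malloc(strlen(src) + 1);
        if (dst == NULL)
            return NULL;    /* report the failure to the caller */
        strcpy(dst, src);
        return dst;
    }

Static analysis tools flag the first form because there is an execution path, however rare, on which the unchecked pointer is dereferenced.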

Firefox developers have been sent the analysis results, which Harrison concedes are preliminary. "Only someone with in-depth knowledge and background of the Firefox code could judge the danger of a particular security vulnerability," he writes.

It's unclear how many, if any, of the potential defects identified by Klocwork's tool are exploitable, which is the most important consideration.

Neither Microsoft nor Opera has released the proprietary code for its browser for similar analysis, so no comparisons can be drawn.

Alec Fleet, a former developer on the Mozilla Project, said that running code analysis tools has some benefit, but he criticised Klocwork's conclusions as incomplete and potentially misleading.

"To claim that there are 611 known, specific, real defects is just wrong. With most of these tools the signal to noise ratio is very high," he writes.

"This is not to say there aren't 141 other legitimate memory management defects lurking, but it takes a deeper (human) understanding of the codebase, as well as testing of actual codepaths in use, to flush them out. To spend smart developers' time going over long reports of machine-generated lint would be a waste," Fleet adds.

Harrison defended the quality of his analysis against these criticisms. "Although this analysis was automated, the level of analysis is more sophisticated than a traditional lint-type tool. In this particular analysis we reviewed the entire results to verify the correctness of the defects... [but] as with any analysis only the developers can be the final judge on the severity of these problems," he said. ®
