Researchers look to predict software flaws
Want to know how many flaws will be in the next version of a software product? Using historical data, researchers at Colorado State University are attempting to build models that predict the number of flaws in a particular operating system or application.
In an analysis to be presented at a secure computing conference in September, three researchers used monthly flaw tallies for the two most popular web servers - the Apache Software Foundation's Apache web server and Microsoft's Internet Information Services (IIS) server - to test their models for predicting the number of vulnerabilities that will be found in a given code base.
The goal is not to help software developers create defect-free software - which may be so unlikely as to be impossible - but to give them the tools to determine where they need to concentrate their efforts, said Yashwant Malaiya, professor of computer science at Colorado State University and one of the authors of the paper on the analysis.
"The possible reasons that vulnerabilities arise are much smaller than the reasons for the number of defects, so it should be possible to reduce the number of vulnerabilities," Malaiya said. "It would never be possible to reduce the issues to zero, but it should be possible to reduce it to a much smaller number."
The research could be another tool for developers in the fight to improve programmers' security savvy and reduce the number of flaws that open up consumers and companies to attack. While the number of vulnerabilities found annually had leveled off in recent years, web applications boosted the number of flaws found in 2005.
Moreover, the advent of data-breach notification laws has forced companies, universities and government agencies to tell citizens when a security incident has put their information in peril. The resulting picture painted by numerous breach notifications has not been heartening.
The latest research focuses on fitting an S-shaped curve to monthly vulnerability data, positing that a limited installed base and little familiarity with new software limit the discovery of vulnerabilities in a just-released application, while exhaustion of the low-hanging fruit makes finding vulnerabilities in older products more difficult.
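As a rough illustration of the idea - not the researchers' actual model or data - the following sketch fits a simple three-parameter logistic (S-shaped) curve to a hypothetical series of cumulative monthly vulnerability counts, using a coarse grid search in place of a proper nonlinear least-squares solver. The saturation parameter B is the kind of quantity such a model would project: the total number of vulnerabilities eventually expected in the code base.

```python
import math

def logistic(t, B, r, t0):
    """Cumulative vulnerabilities at month t; the curve saturates at B."""
    return B / (1.0 + math.exp(-r * (t - t0)))

# Hypothetical cumulative monthly vulnerability tallies for one product
data = [2, 4, 8, 15, 25, 37, 48, 56, 61, 64, 66, 67]
months = list(range(len(data)))

def sse(params):
    """Sum of squared errors between the model and the observed tallies."""
    B, r, t0 = params
    return sum((logistic(t, B, r, t0) - y) ** 2 for t, y in zip(months, data))

# Coarse grid search over plausible parameters
# (a real analysis would use a nonlinear least-squares fitter)
best = min(((B, r, t0)
            for B in range(50, 101)
            for r in (0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0)
            for t0 in (3, 4, 5, 6, 7)),
           key=sse)

print(f"estimated saturation: ~{best[0]} total vulnerabilities")
```

With the slow-fast-slow shape of the data, the fitted saturation level lands near the tail of the observed counts, which is exactly the "how many flaws remain to be found" figure the researchers are after.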
The researchers found that the number of vulnerabilities found in Windows 95, Windows NT and Red Hat Linux 7.1 fit their model quite well, as do those found in the Apache and IIS web servers between 1995 and the present. The web server analysis, which will be discussed in the September paper, suggests that IIS has reached a saturation point, with a lower rate of vulnerabilities discovered than Apache. Moreover, that analysis found that the S-curve relationship holds for broad classes of vulnerabilities, such as input validation errors, race conditions, and design errors.
Some software developers believe that such research could allow product managers to make better decisions about when a software program is ready to be shipped and how many vulnerabilities will likely be found.
"There isn't an engineering manager that wouldn't love to know the number of vulnerabilities they should expect to have after pushing out a product," said Ben Chelf, chief technology officer for Coverity, a maker of source-code analysis tools that can be used to detect potential software flaws. "A VP of engineering can, on the release date, say, 'We expect to find 50 more security issues in this code'. That helps mitigate cost and risk."
Yet, the researchers' predictions have been hit or miss, even with a large margin of error of 25 per cent. A paper released in January 2006 predicted that the number of flaws found in Windows 98 would saturate between 45 and 75; at the time, data from the National Vulnerability Database showed that 66 vulnerabilities had been found, but that number has continued to increase to 91 as of July.
However, the researchers' prediction for Windows 2000 has apparently been accurate: The current number of vulnerabilities for the operating system is 305, just within the 294-to-490 range given in the computer scientists' paper.
Whether the models become more accurate may depend on getting better data on the number of software flaws discovered after development. The models used for prediction of future vulnerabilities assume that defect density - the number of software flaws per 1,000 lines of code - remains the same between software versions.
It's not an unreasonable assumption: Historically, the researchers found that a company's programming teams tend not to get better, making the same number of mistakes in one version of software as the next, said CSU's Malaiya.
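Under that assumption, the projection itself is simple arithmetic. The sketch below (with invented figures - the function name and numbers are illustrative, not from the paper) carries the prior version's vulnerability density forward to a larger code base:

```python
def projected_vulnerabilities(prior_vulns, prior_kloc, new_kloc):
    """Project flaws in a new version by assuming the prior version's
    vulnerability density (flaws per 1,000 lines of code) carries over."""
    density = prior_vulns / prior_kloc  # flaws per KLOC
    return density * new_kloc

# Hypothetical: 60 flaws found in a 15,000-KLOC release;
# the next release grows to 20,000 KLOC
print(projected_vulnerabilities(60, 15000, 20000))  # roughly 80 projected flaws
```

If programming teams really do make mistakes at a constant rate, as the researchers' historical data suggests, a projection like this gives a first-order estimate before any post-release data exists.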
However, such observations use data that predates the increasing use of static code analysis software and initiatives among software makers, such as Microsoft, to improve the security of their products.
Some security experts doubt whether the model will ever be able to produce more than a rough estimate of the number of vulnerabilities likely to be found in a particular application.
The prediction of the number of vulnerabilities from general trend data may gloss over too many important details to be of real value, said Gerhard Eschelbeck, chief technology officer for anti-spyware firm Webroot Software.
"This is a little bit like predicting the next earthquake," Eschelbeck said. "It's a valuable area of research but it may not, in the end, be practical."
Because vulnerability researchers' interest in finding flaws in a particular product can be fickle, general trends could be swamped by other variables.
In July, for example, Microsoft's Internet Explorer browser will likely see an uncharacteristically large number of vulnerabilities found because one researcher has decided to release a bug each day of the month. Market forces could also throw off the models, since a handful of companies now pay for previously unknown flaws, a situation that could cause researchers to stay interested in older operating systems.
Moreover, the discovery of less serious flaws is far less important than critical vulnerabilities that could lead to remote compromises, Eschelbeck said.
"It is not just about the number, but about the severity," he said. "Just the pure number does not mean a lot without the context."
If such limitations could be overcome, the ability to predict the future number of software flaws could have big benefits, said Brian Chess, chief scientist with source-code analysis tool maker Fortify. For example, the assumption that vulnerabilities will always be present in software may suggest a better strategy for dealing with the issues. Developers can choose to put their resources into finding the more serious issues, he said.
"If you accept that flaws can't be gotten rid of, you can decide which mistakes you are going to make and which ones are not acceptable," Chess said. "Even though you cannot predict which lines of code will have the vulnerabilities, you can push the actual class of vulnerabilities one way or another."
In the end, even if the research does not produce accurate predictions, accepting that you will have security problems and learning to deal with the aftermath of releasing a software product are important lessons, he said.
"The next thing you build will have security problems just like the last thing you did, but let's make sure that when we have a vulnerability, we can deal with it," Chess said. "I think that is an evolution in the way that people think about building security into their software."
This article originally appeared in SecurityFocus.
Copyright © 2006, SecurityFocus