Stay focused on fuzzy tests, warn security experts
Stop when you get that warm feeling
The idea of throwing random test data at a program to see if it cracks has been around in one form or another since the beginning of software development. A formalized approach called fuzzing, based on Professor Barton Miller's work at the University of Wisconsin in the late 1980s, is undergoing a revival as a means of testing the security of applications.
Devised as a way to test Unix systems, fuzzing - or fault-injection testing - has benefited from the explosion in web development, with browser rivals Microsoft and Mozilla recently enthusing about the technique. There has been a proliferation of tools, and late last year saw the publication of the Sulley framework, which automates attacks for testers.
No surprise, then, fuzzing is a hot topic at this week's RSA conference in San Francisco, California, where the security community will give their take on using this technique to protect your applications. Their view: don't rely exclusively on fuzzing.
"Fuzzing has been around a while - but we are seeing it become much higher profile now. Everyone wants it, although they don't necessarily understand it," Michael Eddington, principal security consultant at Leviathan Security, told Reg Dev ahead of his RSA presentation.
Eddington hopes to give RSA attendees a better grasp of fuzzing. The top line is fuzzing needs to be factored into the development lifecycle along with other security tests. "The advantage of fuzzing is that it gets round the problem of making assumptions in testing - it stops us being too smart and missing the obvious," Eddington said.
"Potentially any crash you get with fuzzing could turn out to be a security issue. So you need to include it in the lifecycle and probably re-use it several times. But it is only one of the tests you need, alongside other techniques such as code review and static analysis."
"Fuzzing is useful for finding bugs in bad code. The number-one mistake application developers make in testing is that they expect data to arrive in a certain order and fuzzing can get round this. But the trick is to know when to stop fuzzing and how to move on to other techniques such as static analysis," he said.
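The ordering mistake described above can be sketched in a few lines of Python. This is a minimal illustration, not a real fuzzer: the parser, field names, and inputs are all hypothetical, chosen only to show how feeding fields in an unexpected order triggers crashes that ordered test data would never find.

```python
import random

def parse_record(text):
    """Toy parser that assumes 'name' always arrives before 'age':
    exactly the kind of ordering assumption fuzzing exposes."""
    lines = text.split("\n")
    name = lines[0].split(":")[1]        # fails if line 0 isn't 'name:...'
    age = int(lines[1].split(":")[1])    # fails if line 1 isn't 'age:<int>'
    return name, age

def fuzz(parser, rounds=1000, seed=0):
    """Feed the parser randomly ordered and malformed inputs,
    counting unexpected exceptions (potential security bugs)."""
    rng = random.Random(seed)
    fields = ["name:alice", "age:30", "", "age:xx", "name"]
    crashes = 0
    for _ in range(rounds):
        # Pick a random subset of fields in a random order.
        sample = rng.sample(fields, k=rng.randint(1, len(fields)))
        try:
            parser("\n".join(sample))
        except Exception:
            crashes += 1
    return crashes
```

A run of `fuzz(parse_record)` reports a large number of crashes, each of which would need triage to decide whether it is exploitable or merely a robustness bug.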
Chess advocates established code-coverage metrics - such as statement coverage - to work out when fuzzing has done its job. "Once the code-coverage metric has flattened out, you know that it's time to move on to other test methods. It's important to find the balance between dynamic-testing techniques like fuzzing and static analysis," Chess said.®
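The stopping rule Chess describes can be sketched as a simple plateau check. This is an illustrative sketch only: the coverage figures are invented, and in practice they would come from a coverage tool (such as coverage.py or gcov) run after each fuzzing batch.

```python
def coverage_plateaued(history, window=5, tol=0.0):
    """Return True once statement coverage has stopped improving:
    the last `window` fuzzing batches gained no more than `tol`.
    `history` is a list of coverage fractions, one per batch."""
    if len(history) < window + 1:
        return False
    recent = history[-(window + 1):]
    return recent[-1] - recent[0] <= tol

# Hypothetical coverage readings: the metric climbs, then flattens,
# signalling it is time to switch to static analysis and code review.
runs = [0.40, 0.55, 0.63, 0.68, 0.70, 0.70, 0.70, 0.70, 0.70, 0.70]
```

With these numbers, `coverage_plateaued(runs)` is true after the flat stretch, while the early, still-climbing prefix of the history is not flagged.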