Stay focused on fuzzy tests, warn security experts
Stop when you get that warm feeling
RSA The idea of throwing random test data at a program to see if it cracks has been around in one form or another since the beginning of software development. A formalized approach called fuzzing, based on Professor Barton Miller's work at the University of Wisconsin in the late 1980s, is undergoing a revival as a means of testing the security of applications.
Devised as a way to test Unix systems, fuzzing - or fault-injection testing - has benefited from the explosion in web development, with browser rivals Microsoft and Mozilla recently enthusing about the technique. Tools have proliferated, and late last year saw publication of the Sulley framework, which automates attacks by testers.
No surprise, then, that fuzzing is a hot topic at this week's RSA conference in San Francisco, California, where the security community will give its take on using the technique to protect your applications. Their view: don't rely exclusively on fuzzing.
"Fuzzing has been around a while - but we are seeing it becoming much higher profile now. Everyone wants it, although they don't necessarily understand it," Michael Eddington, principal security consultant at Leviathan Security, told Reg Dev ahead of his RSA presentation.
Eddington hopes to give RSA attendees a better grasp of fuzzing. The top line is that fuzzing needs to be factored into the development lifecycle along with other security tests. "The advantage of fuzzing is that it gets round the problem of making assumptions in testing - it stops us being too smart and missing the obvious," Eddington said.
"Potentially any crash you get with fuzzing could turn out to be a security issue. So you need to include it in the lifecycle and probably re-use it several times. But it is only one of the tests you need, alongside other techniques such as code review and static analysis."
"Fuzzing is useful for finding bugs in bad code. The number-one mistake application developers make in testing is that they expect data to arrive in a certain order, and fuzzing can get round this. But the trick is to know when to stop fuzzing and how to move on to other techniques such as static analysis," he said.
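The point about developers assuming data arrives in a certain shape and order is the classic fuzzing win. A minimal sketch of the idea, using a hypothetical, deliberately naive parser (`parse_record` and its tag/length layout are invented for illustration, not from the article):

```python
import random

def parse_record(data: bytes) -> dict:
    # Hypothetical parser that naively assumes well-formed input:
    # a 1-byte tag, a 1-byte length, then exactly that many payload bytes.
    tag = data[0]
    length = data[1]
    payload = data[2:2 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return {"tag": tag, "length": length, "payload": payload}

def fuzz(trials: int = 1000, seed: int = 0) -> int:
    # Throw random byte strings at the parser and count crashes:
    # unexpected exceptions, as opposed to the ValueError it raises on purpose.
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 8)))
        try:
            parse_record(data)
        except ValueError:
            pass            # anticipated rejection - not a bug
        except Exception:
            crashes += 1    # e.g. IndexError on inputs shorter than 2 bytes
    return crashes

print(fuzz())
```

Even this crude random generator quickly finds the unchecked assumption (inputs shorter than two bytes raise an `IndexError` the author never anticipated), which is exactly the class of defect Eddington describes - and, as he notes, each such crash then needs triage to decide whether it is security-relevant.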
Chess advocates established code-coverage metrics - such as statement coverage - to work out when fuzzing has done its job. "Once the code-coverage metric has flattened out, you know that it's time to move on to other test methods. It's important to find the balance between dynamic-testing techniques like fuzzing and static analysis," Chess said.®
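Chess's stopping rule can be sketched mechanically. Assuming you can sample a statement-coverage percentage after each fuzzing batch (the readings below are invented for illustration, and the `window`/`epsilon` thresholds are arbitrary choices, not from the article):

```python
def coverage_plateaued(history, window=3, epsilon=0.5):
    # Stop fuzzing when statement coverage (in percent) has grown by no
    # more than `epsilon` points over the last `window` measurements.
    if len(history) < window + 1:
        return False
    return history[-1] - history[-1 - window] <= epsilon

# Illustrative coverage readings after successive fuzzing batches:
# rapid early gains, then diminishing returns.
readings = [12.0, 31.5, 48.0, 57.0, 57.1, 57.3, 57.3]
print(coverage_plateaued(readings))
```

When the check fires, the fuzzer has stopped reaching new statements, and - per Chess - the remaining budget is better spent on static analysis and code review.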
Fuzzing is quite good fun
Yeah, it has been around for quite some time - it is really just automated testing on all the different input levels.
But it allows you to bring in some rather esoteric computer science techniques: genetic algorithms can be useful, libraries of old exploits can be abstracted, and detection of compromise can be honed.
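The genetic-algorithm angle the commenter mentions can be illustrated with a toy evolutionary fuzzer. Everything here is a hypothetical sketch: the `fitness` function stands in for real feedback (a production fuzzer would use code coverage or crash signals), and the "FUZZ" magic header it rewards is invented for the example:

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    # Mutation operator: flip a bit, insert a byte, or delete a byte.
    buf = bytearray(data)
    op = rng.randrange(3)
    pos = rng.randrange(len(buf)) if buf else 0
    if op == 0 and buf:
        buf[pos] ^= 1 << rng.randrange(8)
    elif op == 1:
        buf.insert(pos, rng.randrange(256))
    elif buf:
        del buf[pos]
    return bytes(buf)

def fitness(data: bytes) -> int:
    # Stand-in for real feedback such as coverage: reward inputs that
    # match more of a hypothetical "FUZZ" magic header, prefix-wise.
    score = 0
    for a, b in zip(data, b"FUZZ"):
        if a != b:
            break
        score += 1
    return score

def evolve(generations=200, pop_size=20, seed=1):
    # Tiny genetic loop: keep the fittest half, mutate survivors to refill.
    rng = random.Random(seed)
    pop = [bytes(rng.randrange(256) for _ in range(4)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[: pop_size // 2]
        pop += [mutate(rng.choice(pop), rng) for _ in range(pop_size - len(pop))]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The selection-plus-mutation loop is the essence of the GA approach: inputs that get deeper into the target's parsing logic survive and breed, which is how evolutionary fuzzers steer random data toward the interesting paths.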
With improvements in computing speed and parallel computing, it becomes more powerful day by day.
The Fuzzing book is OK, but I do think they hold back a bit, and some of their early conclusions are more rules of thumb just waiting to be broken - still an excellent read.
Obviously the logical move on from fuzzing is back to ideas such as Z and formal specification, where the program has to be mathematically proven to work - though oddly, few ever want to pay for that style of work. Maybe fuzzing will make that side of things more appealing.