Stay focused on fuzzy tests, warn security experts

Stop when you get that warm feeling

RSA The idea of throwing random test data at a program to see if it cracks has been around in one form or another since the beginning of software development. A formalized approach called fuzzing, based on Professor Barton Miller's work at the University of Wisconsin in the late 1980s, is undergoing a revival as a means of testing the security of applications.
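
To make that idea concrete, here is a minimal sketch of a dumb fuzzer in Python: it throws random bytes at a target program's stdin and watches for crashes. The target binary name ./parse_input, the iteration count and the timeout are illustrative, not part of any tool mentioned in this article.

import random
import subprocess

def random_bytes(max_len=1024):
    """Produce a blob of random bytes -- the 'random test data' of classic fuzzing."""
    return bytes(random.getrandbits(8) for _ in range(random.randint(0, max_len)))

def fuzz_once(target_cmd):
    """Feed one random input to the target on stdin and report whether it crashed."""
    data = random_bytes()
    try:
        proc = subprocess.run(target_cmd, input=data, capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        print(f"hang on {len(data)}-byte input")
        return data
    # On POSIX a negative return code means the process died on a signal
    # (e.g. SIGSEGV), which is exactly the kind of failure fuzzing hunts for.
    if proc.returncode < 0:
        print(f"crash (signal {-proc.returncode}) on {len(data)}-byte input")
        return data
    return None

if __name__ == "__main__":
    for _ in range(1000):
        fuzz_once(["./parse_input"])   # hypothetical program under test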

Devised as a way to test Unix systems, fuzzing - or fault-injection testing - has benefited from the explosion in web development, with browser rivals Microsoft and Mozilla recently enthusing about the technique. There has been a proliferation of tools, and late last year saw the publication of the Sulley framework, which automates attack generation for testers.

No surprise, then, that fuzzing is a hot topic at this week's RSA conference in San Francisco, California, where the security community will give their take on using this technique to protect your applications. Their view: don't rely exclusively on fuzzing.

"Fuzzing has been a round a while - but we are seeing it becoming much higher profile now. Everyone wants it although they don't necessarily understand it," principal security consultant for Leviathan Security Michael Eddington told Reg Dev ahead of his RSA presentation.

Eddington hopes to give RSA attendees a better grasp of fuzzing. The top line is that fuzzing needs to be factored into the development lifecycle along with other security tests. "The advantage of fuzzing is that it gets round the problem of making assumptions in testing - it stops us being too smart and missing the obvious," Eddington said.

"Potentially any crash you get with fuzzing could turn out to be a security issue. So you need to include it in the lifecycle and probably re-use it several times. But it is only one of the tests you need along side other techniques such as code review and static analysis."

Brian Chess, chief scientist at Fortify Software and also speaking at RSA, warned there are scenarios where fuzz testing can become counterproductive.

"Fuzzing is useful for finding bugs in bad code. The number-one mistake application developers make in testing is that they expect data to arrive in a certain order and fuzzing can get round this. But the trick is to know when to stop fuzzing and how to move on to other techniques such as static analysis," he said.

Chess advocates using established code-coverage metrics - such as statement coverage - to work out when fuzzing has done its job. "Once the code-coverage metric has flattened out, you know it's time to move on to other test methods. It's important to find the balance between dynamic-testing techniques like fuzzing and static analysis," Chess said.®
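
Chess's stopping rule can be approximated in a few lines. The sketch below uses Python's sys.settrace as a stand-in for a proper statement-coverage tool: it keeps fuzzing while new lines of the code under test are still being reached, and stops once a run of cases adds nothing new. The run_one_case callback and the patience threshold are illustrative.

import sys

def fuzz_until_coverage_flattens(run_one_case, patience=500):
    """Keep fuzzing while statement coverage is still growing; stop once
    `patience` consecutive cases reach no new lines of the code under test."""
    seen = set()                      # (filename, line number) pairs executed so far

    def tracer(frame, event, arg):
        if event == "line":
            seen.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer

    stale = 0
    while stale < patience:
        before = len(seen)
        sys.settrace(tracer)
        run_one_case()                # one fuzz iteration against the code under test
        sys.settrace(None)
        stale = 0 if len(seen) > before else stale + 1
    return len(seen)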
