Redmond security guru explains IE vuln miss
The one that got away
A Microsoft insider has posted an explanation of the firm's failure to spot a critical flaw in Internet Explorer, which obliged it to publish an out-of-sequence patch earlier this month.
Michael Howard, a principal security program manager at the software giant, explains that the flaw cropped up in a blind spot developers weren't trained to scour for potential bugs. Human error is always a factor in developing secure code, and fuzzing tools can sometimes help unearth such errors. Unfortunately, in this case, testing tools weren't up to the job either.
Howard explained that the flaw involved a "time-of-check-time-of-use" bug in how Internet Explorer handles data binding objects. "Memory-related [time-of-check-time-of-use, or TOCTOU] bugs are hard to find through code review," Howard writes in a post to Microsoft's Security Development Lifecycle blog. "We teach TOCTOU issues, and we teach memory corruption issues, and issues with using freed memory blocks; but we do not teach memory-related TOCTOU issues."
Automated tools that throw a range of test data at applications in order to look for problems also came unstuck, he adds.
"In theory, fuzz testing could find this bug, but today there is no fuzz test case for this code. Triggering the bug would require a fuzzing tool that builds data streams with multiple data binding constructs with the same identifier. Random (or dumb) fuzzing payloads of this data type would probably not trigger the bug, however."
Microsoft's security testers plan to update their testing methodology in order to look more closely for the class of vulnerability exploited by the recent IE flaw. Howard's technically literate post goes on to explain how defences built into Vista and Server 2008 mitigated the bug. The post, which provides coding examples, illustrates the inherent problems of security testing, an issue developers well away from Redmond are obliged to grapple with every day. ®
It is a fairly obscure bug, but worrying in that a) it exists, and has done for gods-know-how-long, and b) that they managed to miss it with their testing and QA procedures.
Also, Abstruse - that's now my word of the day!
Oh my gawd...
That's their security GURU????
That really explains why MS really know jack about security and their internal testing tools are worth jack too (by their own admission)
I'd expect a company who has the biggest market share on the desktop out there to have sophisticated test tools - fuzzing and otherwise. Clearly security isn't very high on the Agenda at M$
I've seen this practice
This is simple to understand - MS doesn't know what it is doing, simple really. This practice is rather common. The underlying problem:
1. The design team doesn't fully understand what they want. They gave a fuzzy outline and objectives, without giving all the requirements to the team doing the coding.
2. The coders don't know what they are doing. They may do their best based on what they understood of the partial requirements. Plus some of those so-called "developers" really cannot code anything these days.
3. There is no/little documentation, and the test team doesn't really test properly.
To "fix" the problem, the design/review team must know what they want and what they look for. This requires the team to understand the technology, environment, and the will to do some work, rather than sit there and saying "I want these, these and that, now write it".