Top tip? Sprinkle bugs into your code to throw off robo-vuln scanners
Also: hide aeroplanes from enemy fighters by blowing their wings off mid-flight
Miscreants and researchers are using automation to help them find exploitable flaws in your code. Some boffins at New York University in the US have a solution to this, and it's a new take on "security through obscurity".
Here it is: add more bugs to your software to throw the automatic scanners off the scent of really scary blunders. We already know what you're probably thinking: "On a bad day, I get software that's more bug than code, and you want more bugs?" – but bear with us.
The researchers – Zhenghao Hu, Yu Hu, and Brendan Dolan-Gavitt – only want the "right" kind of bug added to software: something that isn't exploitable, at worst crashes the program, and will reliably show up if someone bug-scans the software.
And they want thousands of these bugs, if possible, so as to gum up the black-hat business model by making it expensive to work out which bugs are "real".
The aim of what they call "chaff bugs" (metaphorically drawn from an aircraft tossing out foil chaff to confuse enemy radar) is to give what looks like a huge attack surface to a black hat, and send them on an exploit wild goose chase.
Their reasoning stems from the typical attack pipeline, which they describe in their arXiv paper, "Chaff Bugs: Deterring Attackers by Making Software Buggier": find bugs, triage them (that is, identify those that offer a possible attack), develop an exploit, and deploy it.
The "chaff bugs" are designed to escalate the cost of the triage and exploit-development steps. "Rather than eliminating bugs, we propose instead to increase the number of bugs in the program by injecting large numbers of chaff bugs that can be triggered by attacker-controlled input," the paper stated.
With the right constraints and the right automation, the researchers claimed they could put a lot of likely-looking bugs into their targets (the Nginx server, the Linux file utility, and the libFLAC codec library), as shown in the table below.
When they fuzzed their bug-infested software with the American Fuzzy Lop fuzzer and triaged the resulting crashes, "all of our chaff bugs were considered EXPLOITABLE or PROBABLY EXPLOITABLE", meaning a bug-hunter would have to do a lot of manual work to eliminate false leads.
How to constrain bugs
The hard part is to introduce a bug that doesn't offer a real exploit. The paper noted that chaff bugs have to be constrained, so they only appear in conditions that aren't exploitable and "will only, at worst, crash the program".
If you can achieve that, while at the same time making it hard for an attacker to quickly triage a chaff bug as non-exploitable, "we can greatly increase the amount of effort required to obtain a working exploit by simply adding so many non-exploitable bugs that any bug found by an attacker is overwhelmingly likely to be non-exploitable".
In the paper, the researchers concentrated on two classes of bug: stack buffer overflows and heap buffer overflows – both of which are rich pickings for attackers, because they're so often exploitable.
However, they're also classes of bugs the researchers reckon they can control, so that the chaff bug isn't exploitable: "Because the stack layout of a function is determined at compile time, we can control what data will be overwritten when the overflow occurs, which gives us an opportunity to ensure the overflow will not be exploitable."
That's because there are two important properties that the "defender" can control: the target of the overflow, and the value that's written during the overflow.
Their target is a "variable that is unused by the program", and the value is "constrained so that it can only take on safe values at the site of the overflow".
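That combination of a dead target variable and a clamped value can be sketched in miniature. The snippet below is a hypothetical illustration, not code from the paper's injector: the struct, field names, and mask are invented, but the mechanism matches the description above – a genuine one-byte off-by-one whose only possible victim is a field the program never reads, and whose written value is constrained to a harmless range.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical chaff bug: the struct pins the layout so a one-byte
 * overflow of `buf` provably lands in `unused`, a field nothing else
 * reads, and the spilled byte is masked to "safe" values only. */
struct frame {
    char buf[8];            /* legitimate copy destination          */
    unsigned char unused;   /* dead variable: the overflow's target */
};

/* All-char members, so no padding is expected between buf and unused. */
_Static_assert(offsetof(struct frame, unused) == 8, "unexpected padding");

/* Copies up to 9 bytes of attacker-controlled input into an 8-byte
 * buffer -- a real off-by-one -- but the ninth byte is constrained
 * and can only land in the unused field. */
void copy_with_chaff(struct frame *f, const char *input, size_t len)
{
    unsigned char *dst = (unsigned char *)f;   /* buf sits at offset 0 */
    for (size_t i = 0; i < len && i < sizeof f->buf + 1; i++) {
        unsigned char c = (unsigned char)input[i];
        if (i >= sizeof f->buf)
            c &= 0x0f;     /* constrain the overflowed value to 0..15  */
        dst[i] = c;        /* i == 8 writes f->unused, nothing else    */
    }
}
```

To an automated scanner tracking memory writes, the out-of-bounds store looks like a promising find; by construction, though, it can neither corrupt live data nor steer control flow, so an attacker only learns that after manual triage.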
If you only needed to add a few bugs to the software, you'd do it by hand, but for chaff bugs to overwhelm "real" bugs, you need lots of them – so the researchers turned to a tool called LAVA for help.
Developed in a collaboration between New York University, MIT Lincoln Laboratory, and Northeastern University in Massachusetts, LAVA is a bug-injection system originally built to seed programs with synthetic bugs as ground truth for benchmarking fuzzers and other bug-finding tech. For this research, that same injection machinery was repurposed to plant chaff bugs in code at scale. ®