'Tamper evident' CPU warns of malicious backdoors
Like shrink wrap for your microprocessor
Scientists have devised a chip design to ensure microprocessors haven't been surreptitiously equipped with malicious backdoors that could be used to siphon sensitive information or receive instructions from adversaries.
The on-chip engines at the heart of these "tamper evident microprocessors" are the computer equivalent of cellophane shrink wrap or aluminum seals that flag food or drug packages that have been opened by someone other than the consumer. They're designed to monitor operations flowing through a CPU for signs its microcode has been altered by malicious insiders during the design cycle.
The design, to be made public next week at the 31st IEEE Symposium on Security & Privacy, comes as an investigation by Engineering & Technology magazine found that at least five percent of the global electronics supply chain includes counterfeit elements that could "cause critical failure or can put an individual's data at risk," according to The Inquirer. While most of that appears to come from grey-market profiteers, analysts have long fretted that bogus routers and microprocessors could pose a threat to national security.
"The root of trust in all software systems rests on microprocessors because all software is executed by a microprocessor," the scientists, from Columbia University's computer science department, wrote in their paper describing the design. "If the microprocessor cannot be trusted, no security guarantees can be provided by the system."
At the heart of their proposal are two engines hardwired into a processor that continuously monitor chip communications for anomalies. One of the engines, dubbed TrustNet, sends an alert whenever a unit executes more or fewer instructions than expected. A second, called DataWatch, inspects the data flowing through the chip for signs the CPU has been maliciously modified.
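The counting idea behind TrustNet can be sketched in a few lines. This is an illustrative model, not the paper's actual hardware design: the class name, the predictor/reactor framing, and the per-cycle interface are assumptions made for the sake of the example. The point is that a unit upstream of the monitored stage predicts how many events should occur, a unit downstream reports how many actually did, and any mismatch in either direction raises an alarm.

```python
# Hedged sketch of a TrustNet-style invariant monitor. In hardware this
# would be a small comparator circuit; here it is modeled in Python.

class TrustNetMonitor:
    """Compares predicted vs observed event counts for one pipeline link."""

    def __init__(self, name):
        self.name = name

    def check_cycle(self, predicted_events, observed_events):
        """Return True if the cycle is clean, False if tampering is suspected."""
        if observed_events != predicted_events:
            # More events than predicted suggests an emitter backdoor
            # (extra hidden instructions); fewer suggests a corrupter.
            print(f"[{self.name}] ALARM: predicted {predicted_events}, "
                  f"observed {observed_events}")
            return False
        return True

monitor = TrustNetMonitor("decode->execute")
monitor.check_cycle(predicted_events=1, observed_events=1)  # clean cycle
monitor.check_cycle(predicted_events=1, observed_events=2)  # raises an alarm
```

Note that a check this simple catches only count-visible tampering; attacks that preserve event counts are what DataWatch's data-level checks are aimed at.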
The engines are built to detect a variety of potential threats such as emitter backdoors, which typically append instructions to a processor's normal batch of communications so that data is copied to "shadow addresses" that can later be accessed by the attackers. They're also built to flag corrupter backdoors that subtly alter microarchitectural operations.
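To see why an emitter backdoor is detectable by counting, consider a toy model of a load instruction. The function name, the shadow-address scheme, and the access representation below are all illustrative assumptions, not details from the paper: a clean load generates exactly one memory access, while the backdoored variant silently appends a second access that copies the data to an attacker-readable shadow address, which changes the observable event count.

```python
# Hedged sketch: why an emitter backdoor is count-visible. The shadow
# address scheme here is purely illustrative.

def memory_accesses(instruction, backdoored=False):
    """Return the list of memory accesses a load instruction generates."""
    accesses = [("read", instruction["addr"])]
    if backdoored:
        # The emitter backdoor duplicates the value to a shadow address
        # the attacker can read back later.
        accesses.append(("write", 0xDEAD0000 | instruction["addr"]))
    return accesses

load = {"op": "load", "addr": 0x1234}
print(len(memory_accesses(load)))                   # one access: clean
print(len(memory_accesses(load, backdoored=True)))  # two accesses: flagged
```

A TrustNet-style counter expecting one memory access per load would flag the second case immediately, which is why the researchers report catching all emitter attacks.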
The defenses are premised on the assumption that the backdoors would be installed by insiders working in a single sub-unit of a design team. While a well-funded nation state could afford to buy out an entire team, the more likely scenario is that adversaries would be much more limited, the researchers said. The insiders would add the hidden instructions during the RTL, or register transfer level, phase of design, which involves writing the microcode that controls a chip's functions.
The scientists demonstrated the design on a simplified OpenSPARC T2 processor from Sun Microsystems and got promising results. The engines caught every emitter and control-corrupter attack they tested, with no false positives. The performance overhead was negligible, they said, and the added hardware amounted to less than 3 KB of storage per processor core.
The scientists are Adam Waksman and Simha Sethumadhavan. A PDF of their paper is here. ®
As a consumer I have no way to check whether my "secured" hardware actually contains these features, or whether those features are themselves secure and don't add a backdoor where previously there wasn't one. The concern is with a rogue design team adding microcode, but what is to stop it being added by the TrustNet/DataWatch team under government or other influence?
About that root of trust thing...
>"The root of trust in all software systems rests on microprocessors because all software is executed by a microprocessor"
Well yeah, but then again, the reason nobody's yet bothered to backdoor a CPU in practice is that we can't trust the software, or the environment, or the network, or the users, or in fact anything at all, so it hasn't been worth the effort. This is the same signature-based blacklisting approach that the virus and rootkit arms race has so comprehensively proven can never win. AV is a win in general despite that, because preventing 99 per cent of infections is still worthwhile even if a targeted one sneaks under the radar anyway. But here you're trying to detect a single targeted attack, and it only has to get past the radar once to end up in every chip printed from the design. In those circumstances it's always going to be a customised attack every time anyway, and this just locks down a couple of possible approaches to backdooring a chip without doing anything to address the million others.
Quis custodiet ipsos custodes?