Researchers find backdoor in milspec silicon
Claim world's first finding of secret features in chips
A pair of security researchers claim to have found a back door in a commercial field-programmable gate array (FPGA) marketed as a secure tool for military applications.
The FPGA in question is the Actel ProASIC3, a device its manufacturer Microsemi recommends for “portable, consumer, industrial, communications and medical applications with commercial and industrial temperature devices,” but which also comes in models boasting “specialized screening for automotive and military systems.”
Sergei Skorobogatov, a researcher at the University of Cambridge, and Christopher Woods of London's Quo Vadis Labs have released a draft paper (PDF) describing a method whereby attackers can “disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device.”
The pair chose the ProASIC3 for their tests because, they say, it is very widely used, boasts superior security and is known to have military users. Those qualities, the pair say, made it an ideal subject for a back door hunt.
The pair used Actel's own analysis tools and the Joint Test Action Group (JTAG) interface to analyse the silicon. That analysis yielded undocumented features, thanks to the discovery of what the draft paper calls “command field and data registers.”
The pair also applied differential power analysis (DPA), a method of analysing variations in electrical activity that hint at tasks being performed in silicon, and “Pipeline Emission Analysis (PEA)” to probe the device “in an attempt to better understand the functionality of each unknown command.” Just how PEA does so is not clear: the draft paper says PEA was developed by the “sponsor” of the research, but that entity is not revealed. Even the footnote describing the technique has been redacted so it reads “Removed to comply with anonymity requirement for submission”.
But the paper hints PEA is a more sensitive version of DPA, describing it as follows:
“The outstanding sensitivity of the PEA is owed to many factors. One of which is the bandwidth of the analysed signal, which for DPA, stands at 200 MHz while in PEA at only 20 kHz.”
PEA seems to have done the trick, yielding evidence of a passkey that allows control of many features in the FPGA.
“Further investigation,” the paper says, “revealed that this is a backdoor function with the key capable of unlocking many of the undocumented functions, including IP access and reprogramming of secure memory.”
The paper is clearly marked as a draft and Skorobogatov promises to detail the exploit fully at the 2012 Workshop on Cryptographic Hardware and Embedded Systems in Belgium.
One imagines the presentation will be rather well attended. ®
PEA, an interesting description
One poster on Slashdot proffers the following explanation, found at http://it.slashdot.org/comments.pl?sid=2879043&cid=40137395:
FPGAs commonly protect user code with encryption: an encryption engine is included in the silicon, to which the user has limited access via crypto keys with which to encrypt the code that is installed in ROM/Flash.
A number of attacks are known against microcontrollers/FPGAs that secure code with encryption - notably differential power analysis (DPA), which works by connecting a current probe to the chip and collecting measurements of energy consumption as the device performs an authentication operation. By carefully measuring power traces over thousands of authentication operations, statistical analysis can reveal clues about the internal secret keys, potentially allowing recovery of the key within useful periods of time (minutes to hours).
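The statistical step described above can be illustrated with a toy correlation-style DPA: simulate noisy power traces that leak the Hamming weight of an S-box output, then find the key guess whose predicted leakage correlates best with the measurements. Everything here (the random S-box, the noise level, the trace count) is an illustrative assumption, not the technique used against the ProASIC3.

```python
# Toy DPA sketch: recover one key byte from simulated noisy power traces
# by correlating a Hamming-weight leakage model against the measurements.
import random

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)  # stand-in nonlinear S-box (not a real cipher's)

def hw(x):
    """Hamming weight: number of set bits, a common leakage model."""
    return bin(x).count("1")

SECRET_KEY = 0x3C
N = 2000  # number of "authentication operations" recorded
plaintexts = [random.randrange(256) for _ in range(N)]
# Simulated traces: leakage is the HW of the S-box output, plus noise
traces = [hw(SBOX[p ^ SECRET_KEY]) + random.gauss(0, 2.0) for p in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def recover_key(pts, trs):
    # For each key guess, predict the leakage and correlate it with the
    # traces; the correct guess gives the strongest correlation.
    return max(range(256),
               key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in pts], trs)))

print(hex(recover_key(plaintexts, traces)))  # recovers 0x3c
```

With only a few thousand traces the correct key stands well clear of the wrong guesses; the countermeasures described in the next paragraph work by shrinking that gap until the statistics become useless.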
These secure FPGAs contain a heavily obfuscated hardware crypto-engine, with lots of techniques to obstruct DPA (deliberately unstable clocks, heavy on-chip RC power filtering, random delay stages in the pipeline, multiple "dummy" circuits so that an operation which would normally require fewer transistors than an alternative has its transistor count increased, etc.). The idea is that these countermeasures reduce the DPA signal and increase the amount of noise, making recovery of useful statistics impractical. In their papers, this group admit that the PA3 FPGAs are completely impervious to DPA, with no statistical clues obtained even after weeks of testing.
This group have developed a new technique which they call PEA which is a much more sensitive technique. It involves extracting the FPGA die, and mapping the circuits on it - e.g. using high-resolution infra-red thermography during device operation to identify "interesting" parts of the die by heat production under certain tasks - e.g. caches, crypto pipelines, etc. Having identified interesting areas of the die, an infra-red microscope with photon counter is focused on the relevant circuit area. As it happens, transistors glow when switched, emitting approx 0.001 photons per switching operation. The signal from the photon counter is therefore analogous to the DPA signal, but with a much, much stronger signal-to-noise ratio, allowing statistical analysis with far fewer tries. The group claim the ability to extract the keys from such a secure FPGA in a few minutes of probing with authentication requests.
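A rough way to see why per-circuit photon counting beats whole-chip power measurement is that the counter watches only the probed circuit, so key-dependent activity is not buried in the switching noise of the rest of the die, and integrating over many repeated operations makes even a tiny emission rate statistically visible. The 0.001-photons-per-switch figure is taken from the comment above; the switch counts and repeat count below are purely illustrative assumptions.

```python
# Sketch of photon-counting side-channel sensitivity: a key-dependent
# difference in transistor switching becomes clearly measurable once
# enough operations are integrated, despite ~0.001 photons per switch.
import random

random.seed(0)
PHOTONS_PER_SWITCH = 0.001  # figure quoted in the Slashdot comment

def photon_count(switch_events, repeats):
    # Each switching event emits a photon with tiny probability;
    # the counter integrates over many repeated operations.
    return sum(1 for _ in range(switch_events * repeats)
               if random.random() < PHOTONS_PER_SWITCH)

# Hypothetical leakage: a key bit of 1 causes 120 transistor switches
# in the probed circuit per operation, a 0 causes 100.
def measure(bit, repeats=20000):
    return photon_count(120 if bit else 100, repeats)

ones = measure(1)    # expect roughly 2400 photons
zeros = measure(0)   # expect roughly 2000 photons
print(ones > zeros)
```

The expected gap (hundreds of photons) dwarfs the Poisson noise (tens of photons), which is the "much, much stronger signal-to-noise ratio" the comment describes.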
The researchers claim to have found the backdoor by fuzzing the debug/programming interface and finding an undocumented command that appeared to trigger a cryptographic authentication. By using their PEA technique against this command, they were able to extract the authentication key, open the backdoor, and directly manipulate protected parameters of the chip.
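The fuzzing step described above can be sketched as stepping through the interface's instruction space and flagging opcodes whose response differs from the usual pattern for undocumented commands. The device below is a mock (the opcode values, responses and `shift_ir` interface are all hypothetical); real work would drive TDI/TDO through actual JTAG hardware.

```python
# Hedged sketch of interface fuzzing: sweep the opcode space and flag
# undocumented opcodes whose response deviates from the common pattern.
from collections import Counter

DOCUMENTED = {0x00, 0x01, 0x02, 0x0F}  # hypothetical documented opcodes

def mock_device(opcode):
    # Mock: undocumented opcode 0x0B returns an unusual status word,
    # mimicking a hidden command that triggers an authentication step.
    if opcode == 0x0B:
        return 0xA5A5
    return 0xFFFF if opcode in DOCUMENTED else 0x0000

def fuzz(shift_ir, documented, width=8):
    # Collect responses for every undocumented opcode; most behave like
    # a default/bypass instruction, so outliers are "interesting".
    responses = {op: shift_ir(op) for op in range(1 << width)
                 if op not in documented}
    common = Counter(responses.values()).most_common(1)[0][0]
    return [op for op, resp in responses.items() if resp != common]

print([hex(op) for op in fuzz(mock_device, DOCUMENTED)])  # ['0xb']
```

An anomalous opcode found this way is only a lead; per the account above, it took PEA against that command to extract the key behind it.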