Flash drives dangerously hard to purge of sensitive data
When secure wiping isn't
In research that has important findings for banks, businesses and security buffs everywhere, scientists have found that computer files stored on solid state drives are sometimes impossible to delete using traditional disk-erasure techniques.
Even when the next-generation storage devices show that files have been deleted, as much as 75 percent of the data contained in them may still reside on the flash-based drives, according to the research, which is being presented this week at the Usenix FAST 11 conference in California. In some cases, the SSDs, or solid-state drives, incorrectly indicate the files have been "securely erased" even though duplicate files remain in secondary locations.
The difficulty of reliably wiping SSDs stems from their radically different internal design. Traditional ATA and SCSI hard drives employ magnetic materials to write contents to a physical location that's known as the LBA, or logical block address. SSDs, by contrast, store data in flash memory chips and employ an FTL, or flash translation layer, to manage the contents. When data is modified, the FTL frequently writes new files to a different location and updates its map to reflect the change.
In the process, left-over data from the old file – which the authors refer to as digital remnants – remains on the drive.
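The out-of-place write behaviour described above can be sketched in a few lines. This is a toy model with hypothetical names, not any vendor's real firmware; it only illustrates why remapping leaves stale copies behind:

```python
# Toy flash translation layer (FTL): illustrative only, hypothetical names.

class ToyFTL:
    """Maps logical block addresses (LBAs) to physical flash pages."""

    def __init__(self, num_pages=8):
        self.pages = [None] * num_pages   # raw physical flash pages
        self.lba_map = {}                 # LBA -> current physical page
        self.next_free = 0                # naive out-of-place allocator

    def write(self, lba, data):
        # Flash can't be rewritten in place, so the FTL writes the new
        # version to a fresh page and merely updates its map. The old
        # page is NOT erased -- it becomes a stale "digital remnant".
        page = self.next_free
        self.next_free += 1
        self.pages[page] = data
        self.lba_map[lba] = page

ftl = ToyFTL()
ftl.write(42, b"secret v1")
ftl.write(42, b"overwritten")        # host believes LBA 42 was replaced

print(ftl.pages[ftl.lba_map[42]])    # what the host sees: b'overwritten'
print(b"secret v1" in ftl.pages)     # True -- remnant still in raw flash
```

Reading through the logical interface shows only the new data, but a scan of the raw pages still turns up the old copy.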
“These differences between hard drives and SSDs potentially lead to a dangerous disconnect between user expectations and the drive's actual behavior,” the scientists, from the University of California at San Diego, wrote in a 13-page paper. “An SSD's owner might apply a hard drive-centric sanitization technique under the misguided belief that it will render the data essentially irrecoverable. In truth, data may remain on the drive and require only moderate sophistication to extract.”
Indeed, the researchers found that as much as 67 percent of data stored in a file remained even after it was deleted from an SSD using the secure erase feature offered by Apple's Mac OS X. Other overwrite operations – which securely delete files by repeatedly rewriting the data stored in a particular disk location – failed by similarly large margins when used to erase a single file on an SSD. Pseudorandom Data operations, for instance, allowed as much as 75 percent of data to remain, while the British HMG IS5 technique allowed as much as 58 percent.
Singling out one or more files to be erased is the only sanitization technique that allows the disk on which the data is stored to continue being used. And yet the researchers found that all single-file overwrite techniques failed to remove all digital remnants, even when the procedure was accompanied by disk defragmenting, which rearranges the remaining data in the file system.
“Our data shows that overwriting is ineffective and that the 'erase procedures provided by the manufacturer' may not work properly in all cases,” the paper warns.
Whole-disk wiping techniques fared only slightly better with SSD media. In the most extreme case, one unnamed SSD model still stored 1 percent of its 1 GB of data even after 20 sequential overwrite passes on the entire device. Other drives were able to securely purge their contents after two passes, but most of them required from 58 hours to 121 hours for a single pass, making the technique unviable in most settings.
The researchers also found serious failures when subjecting SSD media to degaussing, in which a drive's low-level formatting is destroyed. Because degaussing attacks magnetism-based features of disks, it is ineffective when applied to next-generation storage devices. “In all cases, the data remained intact,” the researchers wrote.
The researchers found the most effective way to sanitize data on SSDs was to use devices that encrypted their contents. Wiping happens by deleting the encryption keys from what's known as the key store, effectively ensuring that the data will remain encrypted forever.
“The danger, however, is that it relies on the controller to properly sanitize the internal storage location that holds the encryption key and any other derived values that might be useful in cryptanalysis,” the researchers wrote. “Given the bugs we found in some implementations of secure erase commands, it is unduly optimistic to assume that SSD vendors will properly sanitize the key store. Furthermore, there is no way to verify that erasure has occurred (e.g., by dismantling the drive).”
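The crypto-erase idea can be sketched as follows. Assume everything written to flash is encrypted under a key held only in the drive's key store; sanitization then means destroying the key rather than overwriting every page. The hashlib-based XOR keystream below is a toy stand-in for the drive's real cipher, and all names are illustrative:

```python
# Sketch of crypto-erase under stated assumptions; toy cipher, not a real one.
import hashlib

def keystream_xor(key, block_id, data):
    # Derive a per-block keystream from the key; XOR is its own inverse,
    # so the same function encrypts and decrypts.
    stream = hashlib.sha256(key + block_id.to_bytes(4, "big")).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

key_store = {"media_key": b"drive-internal-secret"}

# User data lands on flash only in encrypted form.
flash = {7: keystream_xor(key_store["media_key"], 7, b"confidential")}

# Crypto-erase: delete the key instead of overwriting every page.
del key_store["media_key"]

# The ciphertext remnants are still physically present on the flash ...
print(flash[7] != b"confidential")   # True
# ... but without the key they can no longer be decrypted.
```

The researchers' caveat maps directly onto this sketch: the scheme is only as good as the controller's guarantee that the deleted key (and anything derived from it) is truly gone, and that cannot be verified from outside.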
The findings were recorded by writing files with identifiable patterns to SSDs and then using a field-programmable gate array device to search for the fingerprint after using secure erasure techniques to delete the files. The researchers' device cost about $1,000, but “a simpler, microcontroller-based version would cost as little as $200, and would require only a moderate amount of technical skill to construct,” they said.
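The verification idea is simple enough to sketch: write a recognizable marker, run the erase procedure, then search a raw dump of the entire flash for the fingerprint. A bytes scan stands in for the researchers' FPGA here, and the names are hypothetical:

```python
# Sketch of fingerprint-based remnant detection; illustrative names only.

FINGERPRINT = b"\xde\xad\xbe\xef" * 4   # known marker written before erasure

def remnants_found(raw_dump: bytes) -> bool:
    # FPGA-style search: look for the marker anywhere in the raw flash,
    # including pages the host can no longer address.
    return FINGERPRINT in raw_dump

# Simulated dump where the "erase" cleared the live copy but a stale
# out-of-place copy survives deeper in the flash.
dump = b"\x00" * 64 + FINGERPRINT + b"\x00" * 64
print(remnants_found(dump))   # True -- sanitization failed
```

If the marker turns up anywhere in the raw image after a supposedly secure erase, the sanitization technique has failed.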
Right now, SSDs are most often encountered in USB thumb drives, and it's not unusual for them to hold as much as 32 GB of data. An increasing number of laptops ship by default with SSDs installed as the primary storage mechanism. Flash storage underpins the vast majority of smartphones, as well.
A PDF of the paper is here. ®
We're so serious about security...
"... and then taking them home ..."
This is the part you didn't tell the customers about, right?
One way around this problem...
... is to physically destroy the devices, which is what the *really* paranoid do. Granted, this wouldn't work out too well for companies that send their old kit to a company that refurbishes and resells the stuff ('asset recovery' is the usual name given to this process), as they tend to like having working storage devices to sell along with the equipment in question.
Interesting article otherwise – this is actually ammunition for not having corporates buy SSDs at this time.
Wear level(l)ing? De-gaussing????????
You got through that whole article without referring to wear levelling (and so does the paper), which is presumably a major part of the problem.

If the OS repeatedly writes to what the OS thinks is logical block 42, it won't always end up in the same physical block of flash memory, because any given block of flash has a limited lifetime - a limited number of write cycles. Because of that, the SSD includes a flash controller that implements a "wear levelling" layer that attempts to ensure that any given physical block of flash memory does not get more than its fair share of writes, by mapping between logical blocks and physical blocks.

If that made no sense, fair enough, look it up elsewhere, where you will hopefully also find words that explain how SSDs manage to present disk-like block sizes that aren't the same as the inherent SSD block size, and how SSDs have more internal blocks than they offer the host, for bad block replacement just like on a real hard drive.
So when this magic file erase software thinks it is erasing a specific file, it overwrites what it thinks are the required logical blocks, which courtesy of wear leveling etc are not the physical blocks where the original data was actually written.
Given that, if you read the whole "disk" from start to finish it is entirely possible courtesy of wear levelling etc that you will find pieces of the data that you wrote earlier are still accessible. They won't be where you expect them, but unless you correctly overwrite the whole disk from start to end (possibly including replacement blocks which aren't directly user-accessible) there is a risk that data may leak.
Can I have my ticket to California now please? I only need a couple of minutes and then I can go to the beach, if that's OK.
[The idea that there's any practical value in analog-hacking these things, as with supercooled DRAM... just don't, OK]
"subjecting SSD media to degaussing, in which a drive's low-level formatting is destroyed."
You cannot be serious? Shirley? What kind of idiot expects degaussing to have any effect on a flash-based storage device?
Secure burning of an SSD probably erases it.