Original URL: http://www.theregister.co.uk/2007/03/31/bill_softky_lie_detectors/

Will there ever be a real 'Lie Detector'?

Polygraph Pollyannas

By Bill Softky

Posted in Science, 31st March 2007 10:02 GMT

Column Lie detectors figure prominently in the sauciest dramas, like espionage and murder, but they deeply polarize opinion. They pit pro-polygraph groups like the CIA, the Department of Energy and police forces against America's National Academy of Sciences, much of the FBI, and now the US Congressional Research Service. The agencies in favor of lie detectors keep their supporting data secret or obfuscated. The critics have marshaled much better arguments.

The critics have collected countless earnest references on the site antipolygraph.org, including an amusing 1941 screed on "How to Beat the Lie Detector" and an elegant essay in Science Magazine. My favorite: a letter by the convicted CIA double-agent Aldrich Ames - written from prison! - with the authority of someone who kept his traitorous career intact by successfully beating polygraphs time and time again: "Like most junk science that just won't die... because of the usefulness or profit their practitioners enjoy, the polygraph stays with us," he wrote.

So it's clear the old lie detector technology is bunk, pure and simple. Will there ever be a new technology which does in fact detect lies? No, and here's why.

Cheating the system

First, the problem with hiding your lies: the "false negative" problem. Ordinary polygraphs measure simple things like breathing, blood pressure, and skin electricity; presumably, when you lie you get tense, and glitches in those measurements give you away. But the problem is that those signals are different for everyone. And since there is no common "lying blood pressure" or "lying skin resistance," the test needs to be calibrated for you individually: someone needs to determine your own personal *difference* between "lying" and "truth-telling" measurements.

That's the rub: a diligent or practiced liar can beat the system by ensuring that there isn't such a difference. For example, he might try suppressing the "lying" indicators, by learning to calm himself or breathe naturally during a lie. Or he might boost the baseline indicators, when he knows he is being calibrated for "truth-telling": by making a sharp inhalation, surreptitiously poking himself, or even deliberately lying or blurting out an uncomfortable truth.
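The calibration logic described above can be reduced to a toy model. The sketch below is purely illustrative (all thresholds and readings are hypothetical numbers, not real polygraph units): the test flags deception only when a reading spikes above the subject's own calibrated baseline, so inflating the baseline during calibration erases the gap.

```python
# Toy model of polygraph calibration (all numbers hypothetical).
# The test flags a lie only when a reading exceeds the subject's own
# calibrated "truth-telling" baseline by some margin.

def flags_as_lie(reading, baseline, margin=10.0):
    """Flag a reading as deceptive if it spikes well above the baseline."""
    return reading - baseline > margin

# Honest calibration: a calm baseline makes the stress spike stand out.
calm_baseline = 70.0           # e.g. resting skin-conductance reading
stressed_reading = 95.0        # spike while lying
print(flags_as_lie(stressed_reading, calm_baseline))      # caught

# A practiced liar inflates the baseline during calibration
# (sharp inhalation, surreptitious pain, deliberate lies), so the
# very same stressed reading no longer stands out.
inflated_baseline = 92.0
print(flags_as_lie(stressed_reading, inflated_baseline))  # beats the test
```

The point of the sketch is that the test has no absolute threshold, only a relative one, so anything that corrupts the baseline corrupts the verdict.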

These particular tricks have been known to beat polygraphs for decades, but the principles still apply to any kind of physiological measurements, because human biology varies so strongly. Your reporter knows this variability firsthand, having once worked on a fancy but doomed technology to measure blood pressure in sick people.

Basically, anyone determined to beat a polygraph (or presumably any other kind of lie detector) can game the system by practicing ways to screw up the testing methodology. But that merely makes the test less useful at catching bad guys. What about good guys?

False positives

Such tests can ruin innocent lives when a "false positive" comes up. Part of the problem is that tests which measure stress, like polygraphs, also tend to *induce* stress, since the consequences of failure can be as drastic as unemployment or prison. The biggest problem here is in sheer numbers: you have to sacrifice a lot of innocent people to catch every bad guy.

Even if you take at face value the self-interested American Polygraph Association's own reliability numbers - they quote 92 per cent accuracy, without explaining its exact meaning - that would wrongly "fail" roughly 80 people out of every thousand tested. Even 99 per cent accuracy would tend to produce far more wrongly-ruined careers than rightfully-caught evildoers, at least for rare crimes like terrorism, murder, and spying. Sadly, no one seems to have hard numbers on just how bad the false-positive problem is... or if they do, they're secret.
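The arithmetic behind that claim is simple base-rate math. Here is a quick sketch: the 92 per cent figure is the APA's, while the one-spy-in-a-thousand rate is an illustrative assumption, not a real statistic.

```python
# Base-rate arithmetic for lie-detector screening (illustrative).
# Assume the test is 92% accurate in both directions, and that
# 1 in 1,000 people screened is actually a spy (hypothetical rate).

population = 1000
accuracy = 0.92
spy_rate = 1 / 1000

spies = population * spy_rate                 # 1 real spy
innocents = population - spies                # 999 innocent people

true_positives = spies * accuracy             # spies correctly caught
false_positives = innocents * (1 - accuracy)  # innocents wrongly "failed"

print(f"Caught spies:     {true_positives:.2f}")
print(f"Ruined innocents: {false_positives:.2f}")
```

At these assumed rates the test ruins roughly 80 innocent careers for every spy it catches; the rarer the crime, the worse that ratio gets, which is why even 99 per cent accuracy doesn't rescue the method.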

But perhaps a new, brain-related technology will solve these problems?

One contender is "Brain Fingerprinting", which claims to use brain waves to measure the familiarity of information: Was someone exposed to this information before, regardless of its emotional salience? Tiny electrical signals on the scalp (brain waves) evidently reverberate in a slightly different pattern if you see a familiar vs. an unfamiliar image. Here are two improvements upon polygraphs: the signal comes straight from the brain, rather than from secondary physiological markers, and it claims to deal with more neutral familiarity and "knowledge" of experience rather than the stress of lying about it (although the information to be tested must be suddenly flashed on a screen for the technique to work).

The inventor and chief promoter, Lawrence Farwell, has sound academic credentials, a handful of refereed publications, a US Senator's testimonial, and has helped reverse a murder conviction. But it will take a far more ambitious research program than his to confirm that his methods measure "evidence stored in the brain". Measuring whether this works is at least as hard (and important) as measuring whether a heart drug works, and that kind of research program costs hundreds of millions of dollars.

Mind magnet

An even sexier technology is brain-imaging. In particular, "functional magnetic resonance imaging" - with the subject enclosed by a giant liquid-helium-cooled magnet - is a method for showing not just what your brain looks like, but which parts work harder (leading to colorful brain-pictures with glowing red spots indicating processes like "concentration" or "arousal").

One boffin, Dr. Scott Faro of Philadelphia, has found a handful of regions which seem to glow a bit more when a volunteer is lying than when truth-telling. His claims are both more scientific and more circumspect.

"We have just begun to understand the potential of fMRI in studying deceptive behavior," he says.

That caution is encouraging, and not just because a single interrogation costs thousands of dollars in a gadget costing millions. Like any biometric method, lie detection has an uphill fight to bring its false-positive rate low enough to justify its expense and consequences.

But it's also much harder and riskier than other biometrics: you can't verify or refute a lie detection as easily as a retinal scan, and you can't measure how well people might game it or react under stress. And of course, countless careers can be permanently ruined by your mistakes.

The present brand of lie detection still hasn't proved itself scientifically in seventy years of trying, so it should be shelved before it derails even more careers or mistakenly vets even more spies. The new methods may be better, but we should test them as carefully as we do drugs before we give them an equivalent chance to do serious damage. ®

Bill Softky has worked on dozens of science and technology projects, from the deep paradoxes of nerve cells to automatically debugging Windows source code. He hopes someday to reverse-engineer the software architecture of mammalian learning, and meanwhile works as Chief Algorithmist at an internet advertising startup.