Emergent cheese-sandwich detector enlisted in War on Terror
How not to win
Technology on Trial
It's very bad luck for USA Today that on the very same day it reported the profound failure of the FBI's digital and computer analysis systems in the Madrid bombings, it published a column suggesting that just such technologies could prevent such attacks in future. Uncritical gee-whizz columns about new technology are nothing new, but this one, by Kevin Maney, could be the most ill-timed of its kind.
As we reported this week, the Spanish authorities discovered a bag of explosives, with a set of fingerprints, a week before the Madrid bombings in March that killed almost 200 people and injured 1,800 more. Unable to find a fingerprint match, they appealed to the FBI, which promptly found a "100 per cent" match and arrested an Oregon lawyer and ex-US serviceman. So convinced were agents that they had their man that they persuaded the Spanish authorities to look no further. In fact, the FBI's suspect had nothing to do with the bombings.
If the Feds had examined the original fingerprint - rather than a poor digital copy - they would not have believed it was "100 per cent positive" and, perhaps, the fiasco might have been averted. The FBI was further convinced by a computer-generated network profile that placed former Army officer Brandon Mayfield at the center of the conspiracy. Mayfield had converted to Islam in the 1980s and had represented, in a child custody case, a man who was later sentenced on terror charges.
Both pieces of digital evidence fall apart once human judgement is introduced: Mayfield, for example, had never been to Spain. Yet the FBI treated the machine's logic as superior to its own human detective skills and intuitions. Since computers have no intelligence and perform no magic, they should of course be used with great circumspection. One of the less controversial declarations in Nicholas "Does IT Matter?" Carr's new book is that "as the strategic value of the technology fades, the skill with which it is used on a day-to-day basis may well become even more important to a company's success."
But the dominant tone of technology marketing is the opposite: fewer humans, and less skillful humans, will be needed as the tools become cleverer. Nowhere is this more apparent than in USA Today's breathless summary of the CIA's technology investments. For example,
The CIA invested in Tacit Knowledge Systems. The Palo Alto, Calif., company's software could scan all of every agent's outgoing e-mail, looking for clusters of words that tell the system what and who each agent seems to know.
It doesn't "read" the e-mail for content, insists Tacit CEO David Gilmour — it's just trying to get to know the user better, "like a really smart personal assistant," he says.
Tacit could then recognize that the Albuquerque agent needs cheese sandwich-related information, and it would know the Berlin agent seems to have cheese sandwich expertise. Tacit's system could then tell the Berlin agent that she might want to get in touch with the Albuquerque agent. "We help the good guys find each other," Gilmour says.
Actually, it should help cheese sandwiches find each other. You'll note the boilerplate disclaimer by Tacit's CEO: the software that does the reading isn't really doing any reading - a similar argument was made by Google recently over its Gmail snoopbot. It isn't reading, it's simply rubbing up against your leg, like a friendly kitten.
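The expertise-matching trick Tacit describes is, mechanically, not much more than counting words. A toy sketch (entirely hypothetical - the agents, messages, and functions here are invented for illustration, not Tacit's actual product) shows the idea: profile each user by the word frequencies in their outgoing mail, then point an asker at whoever mentions the topic most.

```python
from collections import Counter

def profile(messages):
    """Bag-of-words profile built from an agent's outgoing e-mail."""
    words = Counter()
    for msg in messages:
        words.update(w.lower().strip(".,") for w in msg.split())
    return words

# Invented example outboxes, keyed by agent.
outbox = {
    "berlin":      ["The cheese sandwich supplier met our contact again",
                    "cheese sandwich shipments resumed"],
    "albuquerque": ["Requesting background on local suspects"],
}
profiles = {agent: profile(msgs) for agent, msgs in outbox.items()}

def likely_expert(topic_words):
    # Score each agent by how often the topic words appear in their mail.
    return max(profiles, key=lambda a: sum(profiles[a][w] for w in topic_words))

print(likely_expert(["cheese", "sandwich"]))  # berlin
```

Note what the "really smart personal assistant" never does here: it never asks whether "cheese sandwich" means lunch or a codename. It counts tokens, which is precisely the point about pattern-matching without context.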
30 degrees of guilt
There's worse to come. We discover,
Systems Research & Development (SRD) created something called Non-Obvious Relationship Awareness. That sounds like a New Age marriage-counseling technique. But it is actually a technology for sorting through vast amounts of information to find the tiniest hints of collusion.
Tiniest hints? An unfortunate phrase, given Mr Mayfield's treatment. In the New York Times story titled Spain Had Doubts Before U.S. Held Lawyer in Madrid Blasts, Ibrahim Hooper, spokesman for the Council on American-Islamic Relations, points out that "it becomes the whole Kevin Bacon game — no Muslim is more than six degrees away from terrorism." SRD's software, Maney tells us, can find relationships "at up to 30 degrees of separation", which proves that, if nothing else, the CIA will have a new definition of tenuous.
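Finding relationships at N degrees of separation is trivially mechanizable: it is a breadth-first search over a contact graph. A toy sketch (the graph and names are invented, and this is not SRD's system) makes the "tenuous" point concrete:

```python
from collections import deque

# Invented contact graph; an edge means "has some recorded link to".
contacts = {
    "lawyer":  {"client"},
    "client":  {"convict"},
    "convict": {"suspect"},
    "suspect": set(),
}

def degrees_apart(graph, start, target):
    """Breadth-first search: fewest hops from start to target, or None."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

print(degrees_apart(contacts, "lawyer", "suspect"))  # 3
```

In any realistically dense graph, a 30-hop radius sweeps in almost everyone: each hop multiplies the reachable set, so the search is guaranteed to surface "relationships" whether or not they mean anything.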
The backlash against social networking software like Friendster and Orkut is a consequence of how poorly such software represents real social relationships. Friends and strangers are given the same weight, and users often find themselves receiving large quantities of unwanted email. (And that's only the start of the trouble: Orkut, you'll be pleased to discover, allows you to enjoy just seven categories of humor.)
We know what these problems are, and they can't be wished away. Who's to blame?
Partly it's the inadequacy of the computer researchers themselves, who have become very adept at recognizing patterns but not at placing them in any kind of meaningful context. You can't have escaped the excitable chatter about "memes" - a reductive way of looking at the world that strips ideas of their psychological or historical contexts. In meme-world, we're simply dumb transmitters for ideas, fashions, or scientific theories, which choose us rather than the other way around. So people who get excited about "memes" aren't interested in why a piece of information belongs in a particular context. It's a fun exercise, but it doesn't get us very far. Employing a cultural dweeb-detector before allowing people to write software may be too draconian, but we certainly need researchers who recognize the limits of their explorations.
(Looking over DARPA's catalog of robot ant armies and self-healing minefields, you could conclude that its researchers aren't up to the task either, being more interested in making silly toys.)
And equally, we could be a lot more critical when confronted with dishonest technology marketing. The problem is partly teleological: we've been numbed into thinking that technology always improves and generally makes things better. Technological innovations "emerge", so they must be good, OK? In fact, our models are very primitive. The linear march of progress is a faith, and it explains why technology marketeers are permitted such a sunny, optimistic tone. If technology were met with the same skepticism that greets medical innovations, we wouldn't have such a problem.
The FBI's blunders reflect a change in criminal investigation procedures since computers began to play a significant part in detective work. Policing now involves aggregating vast silos of digital information in the belief that some clever software robot can be unleashed later to make sense of it all. Intuition and common sense have been correspondingly downgraded, and that's a real loss.
Most technology disasters, such as ERP overruns and commercial security compromises, don't seem to affect us very much: we pay a price, but it's indirect. In dealing with the threat of terror, a technology disaster carries a very high price - a human price. How do we start to fix it? ®
Related stories
FBI apology for Madrid bomb fingerprint fiasco
TIA lives? Report lists US gov 'dataveillance' activities
Invisible GIs to heal selves, leap tall building with nanotech
US puts on pair of robotrousers
Boffins buff bugging bugs
A back door to Poindexter's Orwellian dream
Meet the transhumanists behind the Pentagon terror casino
The self-healing, self-hopping landmine