Why phishing catches punters
Hook, line and sinker
Occasionally a criminal is so, well, clever that you have to admire him even as you wish he'd spend the rest of his life in jail.
Take Arnold Rothstein, for instance, one of the kingpins of organised crime in New York City during and before Prohibition. The "Great Brain", as he was known, was more than likely behind the infamous Black Sox scandal, in which the 1919 World Series was fixed in favour of the Cincinnati Reds. He is also widely credited with inventing the floating crap game immortalised in Guys and Dolls.
Like some character out of a Damon Runyon story, Rothstein's "office" was outside of Lindy's Restaurant, at Broadway and 49th Street, and he associated with gangsters whose names still trip off the tongue three-quarters of a century later: Meyer Lansky, Legs Diamond, Lucky Luciano, Dutch Schultz. When it comes to colourful, clever criminals, Rothstein is at the top of the heap.
And then, on the other end of the scale, today we have the phishers. Scumbags of the web, phishers vomit out emails to as many millions of people as they can possibly reach, hoping that a tiny few will respond to their fraudulent request to update their account information at PayPal, eBay, or Citibank (or just about any other bank you can imagine). This is an enormous problem, and it's not getting any better. I recently read a fascinating study that shows just why that's the case.
If you haven't read Why Phishing Works (850Kb PDF) - written by Rachna Dhamija, J D Tygar, and Marti Hearst - stop what you're doing now and go get it (or at the very least, read a short summary of what it offers).
In just 10 pages, your eyes will be opened to just how much of a problem the public - and the security people tasked with protecting them - really face. I knew it was bad, but I had no idea it was this bad.
Basically, the researchers sat a variety of folks down and had them use some websites. Some were fakes created by the team, and some were not. After watching what the participants did with the websites, the researchers quizzed the users about the motivations for their behaviours. The results are eye-opening, to say the least. Here are some of the scarier things I learned.
Think that cues in the browser will help? Forget it.
When Firefox 1.0 came out, I thought it was a major benefit that the background color of the address bar changed to gold when you were on a site using HTTPS. "How cool!" I remember saying to a friend, "In addition to the gold lock, the entire address bar is gold too. That'll make it even more obvious to people that they're on a secure site!" And that was in addition to the other three indicators that Firefox provides. How utterly naive of me.
In the study by Dhamija et al, 23 per cent of the users didn't even look at cues provided by the web browser, such as the address or status bars. Many had no idea what the padlock icon means; in fact, one participant confidently asserted that the padlock indicates that the website can't set cookies.
Instead of browser cues, these people look at the web page itself. Does it "look" and "feel" right? Are there VeriSign logos on the page? How about animations? Does it seem authoritative? In some cases, the padlock icon on the web page itself was enough to convince some that the site was safe, more so than if the padlock was in the browser's chrome.
URLs don't work with everyone either
Some users pay attention to the fact that the address bar changes as they travel through a website, but they don't really have the foggiest idea what the URL itself means. This extends to HTTPS as well. IP addresses do raise alarms, however...although the users don't really know what those are. They just find numbers suspicious.
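Part of the trouble is that what the browser treats as the hostname is often not what a casual reader sees in the URL. A quick sketch in Python (my own illustration, not anything from the study) shows the classic userinfo trick, where everything before an "@" is just a throwaway username:

```python
from urllib.parse import urlsplit

def real_host(url):
    """Return the hostname a browser would actually connect to."""
    return urlsplit(url).hostname

# Everything before the '@' is parsed as a username, not the host.
print(real_host("http://www.paypal.com@203.0.113.5/login"))  # -> 203.0.113.5
print(real_host("https://www.bankofthewest.com/"))           # -> www.bankofthewest.com
```

A user who scans a URL left to right sees "www.paypal.com" and stops reading; the browser, meanwhile, dutifully connects to the numeric address after the "@".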
Users fixate on the weirdest things
The site that fooled all but one participant in the study was for Bank of the West (that's a link to the real website ... or is it?). On that site was a cute animated video of a bear. Evidently that tickled a number of the users, who reloaded the page several times to see the animated bear. In fact, some of the participants said that the animation was proof that the site was legit, since it would take too much effort to copy it!
The ordinary folks in the study also figured that if a site has ads on it, then that increases the likelihood that it's not a fake. Likewise, the presence of a favicon (the little icon that appears in the address bar to the left of the URL) was deemed indicative of a site that was not out to steal your money and identity. Amazing what people glom onto.
It's incredibly easy to fool people
I was astonished to read - which again shows my naiveté - that some of the people tested in the study were not only unaware of the term "phishing," but were also surprised that anyone would even engage in such criminal behaviour in the first place. In the face of such ignorance, it's no surprise that phishing works.
Others might be aware of phishing, but either ignored the various cues provided by web browsers or were unsure how to use them. This isn't exactly surprising when you consider they were asked by the browser to "accept this certificate temporarily for this session". Would your uncle or grandma know what a "certificate" is? How about a "session"? Didn't think so.
Even the more sophisticated users were largely fooled by the fake www.bankofthevvest.com site. Take a look at that URL again. See it? Instead of "west", the researchers used "vvest," with two vs. This fooled 91 per cent of the participants. Even if you look at the address bar regularly, and pay attention to the links you click, I could see how that would pass right by.
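For the curious, spotting a near miss like "vvest" for "west" is a classic edit-distance problem. Here's a rough sketch in Python - the whitelist, the crude domain extraction, and the threshold of two edits are all my own illustrative choices, not anything a real browser ships:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical whitelist of domains worth protecting.
KNOWN_DOMAINS = {"bankofthewest.com", "paypal.com", "ebay.com"}

def looks_like_spoof(host):
    """Return the known domain this host nearly matches, if any."""
    domain = ".".join(host.lower().split(".")[-2:])  # crude registrable-domain guess
    for known in KNOWN_DOMAINS:
        if 0 < edit_distance(domain, known) <= 2:
            return known
    return None

print(looks_like_spoof("www.bankofthevvest.com"))  # -> bankofthewest.com
```

An exact match scores zero and passes; "bankofthevvest.com" sits two edits away from the real thing and gets flagged. Real anti-phishing tools have to be far cleverer than this (homoglyphs, subdomain games, internationalised domain names), but the core idea is the same.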
Users are confident that they're right
Damn Interesting is a blog that posts something every day or so about things that are, well, usually pretty damn interesting. In March it ran a post titled "Unskilled and Unaware of It", which showed that those who lack knowledge or skill at something not only don't realise it, they also think they're far better than they actually are!
The more incompetent someone is in a particular area, the less qualified that person is to assess anyone's skill in that space, including their own. People who fail to recognise that they have performed poorly are left assuming that they have performed well. As a result, the incompetent tend to grossly overestimate their skills and abilities.
These assertions were certainly borne out by the phishing study, which found that the participants were almost always very confident of their abilities to tell a fake site from a real one...even when they were grotesquely incorrect. And remember, that includes those folks who never look at the address bar to even see if they're on an HTTPS site. Doesn't exactly improve your confidence, does it?
Worse things are coming
Computer Science professor John Aycock and his student Nathan Friess recently published a warning about the coming threat of "spam zombies from outer space". The title is straight out of something directed by Ed Wood, but the concept isn't nearly as funny.
These new zombies will mine corpora of email they find on infected machines, using this data to automatically forge and send improved, convincing spam to others.
The next generation of spam could be sent from your friends' and colleagues' email addresses - and even mimic patterns that mark their messages as their own (such as common abbreviations, misspellings, capitalisation, and personal signatures) - making you more likely to click on a web link or open an attachment.
Couple this with the statements made by the phishing study participants that they "regularly" follow links sent to them in emails from friends, co-workers, and employers, and we can easily see disaster looming.
What can we do?
At this point, I honestly feel pretty befuddled. Education is a piece of the solution, but how do we do that in the most effective manner? The browser and the web have gotten increasingly complicated over the past decade, so that your average user now has quite a lot to learn before we can feel comfortable turning him loose on the wild 'n' woolly web. Maybe too much, in many cases.
Clearly, using more popup warnings isn't the answer, and the study bears this out: when confronted with a browser warning about a self-signed cert, well over half the users immediately clicked on OK to remove the warning without reading it. And adding additional warnings into the browser's chrome - more icons, more address bar colours, and so on - won't help when a substantial number of users never even look in those areas.
Should we just build web browsers so that they simply do not allow users to visit dangerous or questionable sites? There are already a number of initiatives in place that seek to create a central database of bad sites that software programs can reference; for instance, the next version of Firefox uses one maintained by Google (a service also provided by the Google toolbar for Firefox), while IE 7 will use one run by Microsoft.
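The lookup side of such a scheme is trivial; the hard parts are keeping the database fresh and getting it to every browser. A toy sketch in Python, with a made-up local blocklist standing in for the Google- and Microsoft-run services:

```python
from urllib.parse import urlsplit

# Hypothetical blocklist. The real services ship regularly updated
# (and often hashed) databases, not a hand-written set like this.
BAD_HOSTS = {"www.bankofthevvest.com", "paypal-secure-update.example"}

def is_blocked(url):
    """Return True if the URL's host appears on the blocklist."""
    return urlsplit(url).hostname in BAD_HOSTS

print(is_blocked("http://www.bankofthevvest.com/login"))  # -> True
print(is_blocked("https://www.bankofthewest.com/"))       # -> False
```

The real design question isn't the lookup, it's what the browser does on a hit - and, as noted below, whether the user can simply click past it.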
Anti-phishing warnings are on by default in the upcoming versions of both browsers, which is good, but they both default to a warning message that can be quickly clicked past by the user. Maybe that shouldn't be allowed, or at least be made a lot more difficult to circumvent.
I know a lot of you are going to kick and holler about that, but if you're reading this, you're by definition different from the vast majority of users out there. Answer me this truthfully: do you really trust Aunt Sally or Steve in Accounting or your kid sister Brooke to carefully read an anti-phishing warning, ponder the ramifications, and then make a wise choice? If you answer in the affirmative, then you haven't read Why Phishing Works. Go read it, and you may change your mind.
But what about you? Do you have any ideas? Let's see if we can't come up with some ways to fix this problem...or at least lessen the likelihood that others will be fooled. That way we can get back to dealing with criminals that have just a touch of panache about them, like Arnold Rothstein and his ilk. Certainly we should wish a pox on both their houses, but better a Rothstein than the plague of phishers we see today.
Copyright © 2006, SecurityFocus
This article originally appeared in SecurityFocus.
Scott Granneman teaches at Washington University in St Louis, consults for WebSanity, and writes for SecurityFocus and Linux Magazine. His latest book, Hacking Knoppix, is in stores now.