Original URL: https://www.theregister.com/2006/06/21/phishing_with_rachna_dhamija/

Phishing with Rachna Dhamija

The human factor

By Federico Biancuzzi

Posted in Security, 21st June 2006 08:47 GMT

Interview Federico Biancuzzi interviews Rachna Dhamija, co-author of the paper "Why Phishing Works" and creator of Dynamic Security Skins. They discuss the human factor, how easy it is to recreate a credible browser window out of images, the new anti-phishing features planned for upcoming versions of popular browsers, and the power of letting a user personalise his interface.

Could you introduce yourself?

I'm a postdoctoral fellow at the Centre for Research on Computation and Society at Harvard University. I teach a computer science course on Privacy and Security Usability, which tackles one of the most challenging problems in computer security: the human factor. Before that I was a PhD student at UC Berkeley, and before that I worked on electronic commerce privacy and security at CyberCash.

Recently you co-authored an experiment to understand how and why phishing works. What did you learn?

We wanted to understand why phishing attacks work. We conducted a usability study where we showed 22 participants 20 websites and asked them to determine which ones were fraudulent, and why. We found that the best phishing website fooled 90 per cent of participants.

We discovered that existing security cues are ineffective, for three reasons:

  1. The indicators are ignored (23 per cent of participants in our study did not look at the address bar, status bar, or any SSL indicators).
  2. The indicators are misunderstood. For example, one regular Firefox user told me that he thought the yellow background in the address bar was an aesthetic design choice of the website designer (he didn't realise that it was a security signal presented by the browser). Other users thought the SSL lock icon indicated whether a website could set cookies.
  3. The security indicators are trivial to spoof. Many users can't distinguish between an actual SSL indicator in the browser frame and a spoofed image of that indicator that appears in the content of a webpage. For example, if you display a popup window with no address bar, and then add an image of an address bar at the top with the correct URL and SSL indicators and an image of the status bar at the bottom with all the right indicators, most users will think it is legitimate (see the sketch after this list). This attack fooled more than 80 per cent of participants.
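
To make the third point concrete, here is a minimal sketch of the image-based chrome spoof as browser-side TypeScript. The file names and layout are illustrative assumptions, not code from the study: the page simply injects screenshots of an address bar and a status bar, so everything the user reads as browser chrome is really page content.

```typescript
// A minimal sketch of the image-based chrome spoof described in point 3.
// The image file names are hypothetical; in practice an attacker would use
// screenshots of a real browser showing the victim site's URL and SSL lock.
function injectFakeChrome(doc: Document): void {
  const addressBar = doc.createElement("img");
  addressBar.src = "fake-address-bar.png"; // screenshot with correct URL and lock icon
  addressBar.style.cssText = "position: fixed; top: 0; left: 0; width: 100%;";

  const statusBar = doc.createElement("img");
  statusBar.src = "fake-status-bar.png"; // screenshot with SSL indicators
  statusBar.style.cssText = "position: fixed; bottom: 0; left: 0; width: 100%;";

  doc.body.prepend(addressBar);
  doc.body.append(statusBar);
}

injectFakeChrome(document);
```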

We also found that popup warnings are ineffective. When presented with a browser warning of a self-signed certificate, 15 out of 22 participants proceeded to click OK (to accept the certificate) without reading the warning. Finally, participants were vulnerable across the board - in our study, neither education, age, sex, previous experience, nor hours of computer use showed a statistically significant correlation with vulnerability to phishing.

How does the detection rate of your test compare to that of real users?

Our participant population was highly educated, consisting of staff and students at a university. The minimum level of education was a bachelor's degree. Our population was also more knowledgeable than average, because they were told that spoofed websites were in the test set. They were also more motivated than the average user would be, because their task in the study was to identify websites as legitimate or not. For these reasons, we would expect that the spoof detection rate in our study would be higher than it would be in real life. However, any spoofs that fooled our participants would also be likely to fool real users.

Did these spoofing methods rely on OS- or browser-dependent bugs (or "features")?

In our study, we didn't take advantage of any of the many bugs and vulnerabilities that allow spoofing in browsers (such as the IDN spoofing vulnerability). We only used very simple attacks that are easy for attackers to craft today, even if we assume that users are using secure, up-to-date and fully patched browsers. If we had taken advantage of bugs and vulnerabilities, we expect that the spoofing rate would have been even higher.

How important are default settings for complex topics such as crypto configuration?

Choosing the appropriate default settings is a critical aspect of privacy and security design, whether it is for cookie policies or crypto configuration. Most users do not change the default settings. In our usability study, we used the default browser settings in Firefox, and we took advantage of some of those defaults in crafting attacks. For example, Firefox forces all popup windows to display only a small portion of the chrome (the status bar) by default. This allowed us to insert a false address bar and false status bar with security indicators, and the majority of participants in our study were fooled into thinking that this was a legitimate webpage, rather than a fraudulent pop-up. The next version of Firefox may force the address bar to also be displayed by default, which should help more users notice this type of spoofing attack.
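
For illustration, a popup like the one just described can be opened with an ordinary window.open call. The URL below is a hypothetical attacker page, and the feature flags reflect what browsers of that era honoured by default:

```typescript
// Sketch of the attack setup: open a popup with the chrome suppressed.
// Under Firefox's defaults at the time, only the status bar remained,
// leaving the rest of the window free for the fake bars sketched earlier.
window.open(
  "https://attacker.example/spoof.html", // hypothetical phishing page
  "_blank",
  "width=800,height=600,location=no,toolbar=no,menubar=no"
);
```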

We also tested the condition where a browser encounters a self-signed cert. Currently the default setting in most browsers is to pop up a modal warning dialog with some options. We found that most users accepted the default option ("Accept this certificate for this session"), and they proceeded to visit the website. IE7 will introduce some new warning notice designs to address this problem. They plan to block known phishing pages by default (such as by showing an inline error web page instead - this page displays a warning and allows the user to click a link to proceed). For suspicious sites or sites with certificate errors, they will color the address bar yellow and drop down a warning from the address bar. Only time (and usability studies!) will show whether users also learn to ignore these warnings, just as they have with pop-up warnings.

Are you currently working on other tests about how phishing works?

Currently, I'm working on other techniques to prevent phishing in conjunction with security skins. For example, in a security usability class I taught this semester at Harvard, we conducted a usability study which showed that simply presenting a user's history information (for example, "you've been to this website many times" or "you've never submitted this form before") can significantly increase a user's ability to detect a spoofed website and reduce their vulnerability to phishing attacks. Another area I've been investigating is techniques to help users recover from errors and to identify when errors are real and when they are simulated. Many attacks rely on users not being able to make this distinction.
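
As a rough illustration of the history idea (this is not the code from the class study), a browser extension could keep a per-site record and phrase a warning from it. The data structure and messages below are assumptions:

```typescript
// A minimal sketch of a history-based security indicator. The visit store is
// hypothetical; a real extension would read the browser's own history.
interface SiteRecord {
  visits: number;
  formsSubmitted: number;
}

const siteHistory = new Map<string, SiteRecord>(); // hostname -> record

function historyMessage(url: string): string {
  const host = new URL(url).hostname;
  const record = siteHistory.get(host);
  if (!record) return "You have never been to this website before.";
  if (record.formsSubmitted === 0) {
    return "You have never submitted this form before.";
  }
  return `You've been to this website ${record.visits} times.`;
}
```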

You presented the project called Dynamic Security Skins (DSS) nearly one year ago. Do you think the main idea behind it is still valid after your tests?

I think that our usability study shows how easy it is to spoof security indicators, and how hard it is for users to distinguish legitimate security indicators from those that have been spoofed. Dynamic Security Skins is a proposal that starts from the assumption that any static security indicator can easily be copied by an attacker. Instead, we propose that users create their own customised security indicators that are hard for an attacker to predict. Our usability study also shows that indicators placed in the periphery or outside of the user's focus of attention (such as the SSL lock icon in the status bar) may be ignored entirely by some users. DSS places the security indicator (a secret image) at the point of password entry, so the user cannot ignore it.

DSS adds a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user's personal image, it is safe for the user to enter his password.
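
A minimal sketch of that trusted-window idea follows. The function and selectors are made up for illustration; the actual prototype was built in Mozilla XUL, not page script:

```typescript
// Sketch: overlay the user's secret image across the password entry box.
// A spoofed login window cannot predict the image, so its absence is a cue.
function renderTrustedLogin(form: HTMLFormElement, secretImageUrl: string): void {
  const passwordField = form.querySelector<HTMLInputElement>('input[type="password"]');
  if (!passwordField) throw new Error("no password field in trusted window");
  passwordField.style.backgroundImage = `url("${secretImageUrl}")`; // user's chosen photo
  passwordField.style.backgroundSize = "cover";
}
```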

We also propose a way for the server to generate a unique abstract image for each user and each transaction. This image is used to create a "skin" that automatically customises the browser window or the user interface elements in the content of a webpage. The user's browser can independently compute the same image that it expects to receive from the server. To verify the server, the user only has to visually verify that the images match. With DSS, the user has to recognise only one image and remember one password, no matter how many servers he interacts with. In contrast, other shared secret schemes require users to save a different image with each server.

What gave you the intuition to use computer-generated graphics to help users distinguish valid websites from fake ones?

There are two types of images that can be used in this approach. The first type is a real image (a photograph). This is the secret image that users choose when setting up their browser and must then recognise before entering their password. There is a large body of cognitive science literature showing that humans are very good at recognising images they have seen before. Our user studies showed that participants really enjoyed this recognition task, especially if they could choose their own images.

We also experimented with randomly generated images. In previous work, we proposed using the Random Art algorithm as a way to automatically generate images for a graphical password scheme called Deja Vu (which I developed with Adrian Perrig). Random Art has a nice property: it takes a bit string as input and generates a random abstract image. Given the image, it should be hard to determine the input string.
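
The following toy generator shows the flavour of that approach. It is not Perrig's Random Art implementation, just a sketch under the same contract: a bit string (here reduced to a numeric seed) goes in, and a deterministic abstract image comes out.

```typescript
// Toy Random Art-style generator: a seed deterministically builds a random
// expression tree over (x, y); evaluating it per pixel gives an abstract image.
type Expr = (x: number, y: number) => number; // values stay in [-1, 1]

// Small deterministic PRNG (mulberry32) so the same seed gives the same image.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Recursively build a random expression from operators that preserve [-1, 1].
function randomExpr(rand: () => number, depth: number): Expr {
  if (depth === 0) return rand() < 0.5 ? (x, _y) => x : (_x, y) => y;
  const a = randomExpr(rand, depth - 1);
  const b = randomExpr(rand, depth - 1);
  switch (Math.floor(rand() * 3)) {
    case 0:  return (x, y) => Math.sin(Math.PI * a(x, y));
    case 1:  return (x, y) => a(x, y) * b(x, y);
    default: return (x, y) => (a(x, y) + b(x, y)) / 2;
  }
}

// One expression per colour channel; pixel (x, y) in [-1, 1]^2 maps to RGB.
function makePixelFn(seed: number): (x: number, y: number) => [number, number, number] {
  const rand = mulberry32(seed);
  const channels = [randomExpr(rand, 6), randomExpr(rand, 6), randomExpr(rand, 6)];
  const toByte = (v: number) => Math.round(((v + 1) / 2) * 255);
  return (x, y) => [
    toByte(channels[0](x, y)),
    toByte(channels[1](x, y)),
    toByte(channels[2](x, y)),
  ];
}
```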

With security skins, we were trying to solve not user authentication, but the reverse problem - server authentication. I was looking for a way to convey to a user that his client and the server had successfully negotiated a protocol, that they had mutually authenticated each other and agreed on the same key. One way to do this would be to display a message like "Server X is authenticated", or to display a binary indicator, like a closed or open lock. The problem is that any static indicator can be easily copied by an attacker. Instead, we allow the server and the user's browser to each generate an abstract image. If the authentication is successful, the two images will match. This image can change with each authentication. If it is captured, it can't be replayed by an attacker and it won't reveal anything useful about the user's password.
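
Here is a sketch of the matching-image idea, with the key derivation reduced to a bare HMAC for illustration. DSS itself builds on a verifier-based authentication protocol; the function and parameter names below are assumptions, not the DSS design.

```typescript
import { createHmac } from "node:crypto";

// Assume both endpoints hold the same session key after a successful mutual
// authentication; each derives the same per-transaction image seed from it.
function imageSeed(sessionKey: Buffer, transactionId: string): Buffer {
  return createHmac("sha256", sessionKey).update(transactionId).digest();
}

// Each side feeds the seed into the same deterministic generator (e.g. the
// Random Art-style sketch above). If authentication succeeded, the two
// pictures match; a captured image can't be replayed because the seed changes
// with every transaction, and it reveals nothing about the user's password.
```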

When do you plan to release the Security Skins plugin?

Currently, we have a prototype of the interface developed in Mozilla XUL, which we are improving based on feedback from our studies. Mozilla turned out to be a good prototyping tool, allowing us to rapidly iterate through interface ideas. A number of organisations have expressed interest in adopting security skins, and we have started development of an extension that can be released to the public. So stay tuned!

Should we expect to solve the problem by working on just one level, either human or technological?

No, I think the solution to phishing will require advances on both levels. However, our study suggests that a different approach is needed in the design of security systems. Rather than approaching the problem solely from a traditional cryptography-based framework (what can we secure?), we have to take into account what humans do well and what they do not do well.

Do you think that so-called Web 2.0 features (Ajax in particular) could make the situation worse by providing phishers with the ability to launch complex applications from a web page?

JavaScript and Ajax definitely allow attackers to create better attacks. They make it possible to simulate every element of a web browser. However, Ajax also allows more interesting web applications and security interfaces to be developed. Instead of blaming specific development techniques, I think we need to change our design philosophy. We should assume that every interface we develop will be spoofed. The only thing an attacker can't simulate is an interface he can't predict. This is the principle that DSS relies on. We should make it easy for users to personalise their interfaces. Look at how popular screensavers, ringtones, and application skins are - users clearly enjoy the ability to personalise their interfaces. We can take advantage of this fact to build spoof-resistant interfaces.

This article originally appeared in Security Focus.

Copyright © 2006, SecurityFocus

Federico Biancuzzi is a freelancer; in addition to SecurityFocus, he also writes for ONLamp, LinuxDevCenter, and NewsForge.