And what if you're at work or connected to the virtual private network? Your browser can go after the corporate portal. Do you have single sign-on? That means you're logged into every web application on your intranet, and your renegade browser can go after any of them.
Using CSRF, an attacker can attack all of these targets and can do just about anything you can do through your browser. All these attacks can be done remotely and basically anonymously.
OK, so what can you do to protect yourself? First, don't stay logged into websites. You have to actually hit the log-out button, not just close the browser. Next, stop CSRF from getting to your critical websites by using a separate browser to access them. Companies are increasingly using separate browsers for accessing intranet applications and the internet - more should follow suit.
If your web application is attacked by a CSRF, all you'll see is normal transactions being performed by authenticated and authorised users. There won't be any way to tell that the user didn't actually execute the transaction. Probably the only way you'll find out that you have a CSRF problem is when users start complaining about phantom transactions on their account. The attacker can cover their tracks easily by removing the attack once it has worked.
Taken alone, CSRF attacks are simple and powerful. However, attackers frequently use CSRF in conjunction with cross-site scripting (XSS). Together, these two techniques allow attackers to invade a victim's browser and execute malicious programs using the credentials of the sites the user is logged into.
This combination is devastating, and I'm frankly surprised that a cross-application CSRF-XSS worm hasn't already been developed.
The best solution to CSRF is to require a random token in each business function in your application. You can generate the random token when the user logs in and store it in their session. When you generate links and forms, simply add it to the URL or put it in a hidden form field. For example:
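A minimal sketch of the idea, in Python: the function and field names here (`create_session`, `csrf_token`) are illustrative, not a specific framework's API.

```python
import secrets

def create_session(user_id, session_store):
    """On login, generate a random token and store it in the user's session."""
    token = secrets.token_hex(16)  # 32 hex characters, unguessable
    session_store[user_id] = {"csrf_token": token}
    return token

def render_form(action_url, token):
    """Embed the session's token as a hidden field in every generated form."""
    return (
        f'<form action="{action_url}" method="POST">\n'
        f'  <input type="hidden" name="csrf_token" value="{token}">\n'
        f'  <input type="submit" value="Transfer">\n'
        f'</form>'
    )

sessions = {}
token = create_session("bob", sessions)
html = render_form("/transfer", token)
```

Because the token lives only in the user's session and in pages the server itself generated, a page on another site has no way to know what value to submit.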
Requests that show up without the right token are forged and you can reject them. If you want to add protection without modifying code, the OWASP CSRFGuard is a filter that sits in front of your application and adds token support.
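The server-side check itself is small. The sketch below is an assumed shape for such a check, not the actual OWASP CSRFGuard code:

```python
import hmac

def is_request_valid(session, form_data):
    """Reject state-changing requests whose token doesn't match the session's."""
    expected = session.get("csrf_token")
    submitted = form_data.get("csrf_token", "")
    # compare_digest is a constant-time comparison, avoiding timing leaks
    return expected is not None and hmac.compare_digest(expected, submitted)

session = {"csrf_token": "d4c0ffee"}  # illustrative token value
print(is_request_valid(session, {"csrf_token": "d4c0ffee"}))  # genuine request
print(is_request_valid(session, {}))                          # forged: no token
```

Run such a check before every business function (or in a filter in front of the application, as CSRFGuard does) and forged requests are turned away before they do any work.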
Whatever steps you take to protect yourself - whether it's the physical act of using different browsers or a token-based approach with the OWASP filter - make sure you do something, and soon. It will be difficult to roll out protection against forged requests once an attack has started.
Jeff Williams is the founder and CEO of Aspect Security and the volunteer chair of the Open Web Application Security Project. His latest project is the Enterprise Security API, a free and open set of foundational security building blocks for developers.
Still Smells Like Phish
"The html part of the malware open a 1x1 pixel iframe to http://SiteA..."
Fairly tricky, as drawing from the history is quite problematic in this case, and how else would Site A's URL be available to Site B?
This is where it breaks down, for me. Site A's cookies would be unavailable to Site B at all times, so unless the user is inputting data into a Site B page (phishing), how would said data migrate? Not from the iframe to the iframe ... that data's held on Site A's servers and in the cookie. Unless Site B can cause Site A to give up that info, it can't get and use it to send that "malware request" through the iframe.
From Site A to Site A is where the protection is.
From Site B to Site A doesn't accomplish this hack without data from Site A.
How is Site B getting that data? Not from Site A's cookie.
Are we talking about XHR's open() method? If so, how's the username and password getting in there without reading a cookie? Does it use getAllResponseHeaders()? Then somehow Site B has to send a request to Site A first in order to observe those headers ... without credentials.
Gotta be a phish. No?
csrf is subtle
Nothing technical about it. It can be coded either way. The server has all the same information. With broadband, conventional page reloading is not the tedium it once was with dial-up. I'm in agreement about the slick interface for an app feel. There are some very good ones out there. For the discussion of the article, however, the former does not demand JS, the latter cannot function without it. And JS opens up the prospect of CSRF...
It has nothing to do with phishing nor input validation. It is impersonation and forged requests. My website can be perfect but can still be the target of attacks from another site with XSS holes. But (and it's a big but) the requests are from my customers. I cannot distinguish real requests from forged ones unless I make user access such a pain that nobody wants to visit. With CSRF you'll realise there is almost no easy way to defend against it /and/ keep all your users. Hence the doom and gloom of the article.
To recap, it happens like this. We have four agents in this scenario.
- Bob, at home, using his browser.
- SiteA, perfect, with no XSS holes or anything. Perhaps a bank.
- SiteB, an unrelated site that Sid can plant content on.
- Sid, the bad guy, who wants to rip off Bob's account at SiteA.
(1) Sid puts malware on SiteB.
(2) Bob visits SiteA and logs into his account. SiteA sends Bob a cookie (a random token or nonce) so that further requests from his browser do not require him to re-enter his password for every requested page (a convenience feature).
(3) While still browsing SiteA, Bob opens up a new tab and surfs into SiteB.
(4) SiteB's page quietly makes Bob's browser send a request to SiteA - a hidden form, image tag or iframe will do - and the browser attaches Bob's SiteA cookie automatically.
(5) SiteA receives this request. It is coming from Bob's browser at his IP address along with Bob's cookie for SiteA. The server checks the cookie, sees that it's a valid cookie for Bob's current session and executes the request. Fill in the blanks of what the request could be. SiteA logs the request, IP address etc.
(6) Much later, Bob complains to SiteA that his bank account is now empty. SiteA examines the logs, sees that it was Bob who made the request and tells him tough bananas.
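The page in step (4) can be tiny. Below is a sketch of what Sid might host on SiteB, held as a Python string; every name and URL is hypothetical. When Bob's browser renders it, the form auto-submits to SiteA and the browser attaches Bob's SiteA cookie, so the request in step (5) looks exactly like one Bob made himself.

```python
# Hypothetical attack page for SiteB. The target URL and field names are
# invented for illustration; a real attacker would copy them from SiteA's
# genuine transfer form.
ATTACK_PAGE = """\
<html>
  <body onload="document.forms[0].submit()">
    <form action="https://sitea.example/transfer" method="POST">
      <input type="hidden" name="to_account" value="sid-123">
      <input type="hidden" name="amount" value="10000">
    </form>
  </body>
</html>
"""
```

Note there is no credential theft anywhere: the page never reads SiteA's cookie, it merely causes the browser to use it.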
While it is up to SiteA to do all they can to thwart this (they don't want customers being ripped off) you need to be aware of what might be going on while browsing. If you browse one site at a time, logout and delete cookies before going somewhere else then leaving JS on is just fine. If you like to have many tabs open (I do) then you need to make the necessary security adjustments. You could keep one browser strictly for online banking.
A long post, but I hope it clears up why this is important. It's not that I hate JS (or Web 2.0); it's that it's too dangerous to just give it free rein for no really good reason.
This *is* a simple phishing issue? While I appreciate amanfromMars' response, it does little to answer my question. Beyond strenuous validation of user input, what's left to do? It really seems like a lot of fuss over nothing, as long as user input is validated.
I mean, it's my form working with processing on my server ... I don't care if someone tries to host the page in an iframe on their site ... their site still isn't going to be able to read my server's sessions or my server's cookies. I fail to see how the user's credentials could be snagged without (a) a successful phishing excursion and (b) the user then entering said info into the phished form.
In the absence of (a), this doesn't work. Am I right?
If the phishing is unsuccessful, then this is a non-issue, right?