ModSecurity 2.0 hits the streets
Ivan Ristic explains what's hot about the new release
Interview ModSecurity is an open source web application firewall that runs as an Apache module, and version 2.0 offers many new features and improvements. Federico Biancuzzi interviewed Ivan Ristic to discuss the new logging system, event tracking and correlation, filtering AJAX or AFLAX applications, and just-in-time patching for closed source applications.
Could you introduce yourself?
Ivan Ristic: I am a web application security specialist and have been referred to as a web application firewall guy. In truth, I have many diverse interests (most of them related to technology) but I tend to deal with only one at a time. We live in exciting times when there is so much to do; wherever you look there is room for improvement.
My background is in software development and I have spent significant time architecting software systems. However, over the last couple of years I became focused exclusively on security. Today I am probably best known for my work on ModSecurity, which is an open source web application firewall, and my book, Apache Security, which was published by O'Reilly in 2005.
As a result of the recent acquisition of ModSecurity by Breach Security, I moved to work for them as their Chief Evangelist. My job will mainly be working on ModSecurity (which Breach Security are going to continue to develop as an open source product), along with extending Breach Security's web application security products and promoting web application firewalls in general.
I am also involved with the Open Web Application Security Project and the Web Application Security Consortium. These are two organisations with similar goals - to increase awareness of web application security issues - but different ideas about how to get there. I am very glad to be involved with both.
How did you start the project?
Ivan Ristic: It all started back in 2002. Back then the awareness of web application security issues wasn't nearly as high as it is today. I realised there were many vulnerable web applications out there and wondered if there was something I could do to help secure them. I also understood that educating the market about this problem was going to take some time and that a solution was needed now. At the same time, I realised from my experience as a programmer that even when you understand web security it is difficult to produce secure code, especially when working under the pressure so common in today's software development projects. [editor's note: Ivan wrote an Infocus technical article on ModSecurity back in 2003]
So, I thought it would be very useful to have something sitting in front of a web application to screen the requests as they come in. I now realise that I was thinking about a web application firewall but I did not know it at the time. I understood my security world well, but at the time I was not up-to-date with the state of the art in security (such as security products, papers, etc). If similar products existed, as a programmer I wasn't aware of them at that time.
I had to make my biggest decision at the beginning of the project: do I make it a standalone program (a reverse proxy), a Snort plug-in, or an Apache plug-in? Those were my three choices.
I did not like the reverse proxy idea because I would be forced to spend a lot of effort dealing with raw HTTP, which was not very exciting. The Snort plug-in approach did not appeal to me because of Snort's orientation toward the lower network layers, and there was (and is) the issue of it being unable to see through SSL.
With Apache, however, I could dive straight in and start achieving my goals. Since ModSecurity can be embedded into Apache, the audience is much bigger. Not everyone can afford to put a reverse-proxy in front of their web server(s) (and you need two to avoid a central point of failure). Over the years I actually spent a lot of time wrestling with Apache and its APIs (especially in Apache 1.3.x) because my requirements were not typical. Apache 2.x has a much better architecture, but I was working with it at a time when it was pretty new and there was no documentation to speak of.
Today ModSecurity is recognised as the world's most widely deployed web application firewall. It is referred to as the "Swiss-army knife" of web security and people are using it for a wide range of functions, from web application monitoring to web intrusion detection and prevention. It works especially well for what I call "just-in-time" patching, where you provide a temporary fix from outside the vulnerable web application.
What's new in ModSecurity 2.0?
Ivan Ristic: The 2.0 version of ModSecurity is very important because it is a complete rewrite. It's the next generation code. Since its inception in late 2002 ModSecurity has been based on the same code base. I am pleased that the original architecture lasted for several years but it is now time to move on. The new architecture builds on top of everything I have learned during the past four years with ModSecurity 1.9.x and opens the door to many exciting new features.
One such feature is portability. ModSecurity 1.9.x is an Apache-only product. I have always wanted to port it to [Microsoft's] Internet Information Server but the same tight integration with Apache that enabled me to start working very fast at the beginning prevented me from doing the port later on. It took a complete rewrite to pull ModSecurity out of Apache and into a portable code base. So I am expecting an IIS/ISA version to be available fairly soon.
Feature-wise, there are many improvements and I don't really know where to start. Some of the major improvements include:
- Five processing phases (where there were only two in 1.9.x). These are: request headers, request body, response headers, response body, and logging. Those users who wanted to do things at the earliest possible moment can do them now.
- Per-rule transformation options (previously normalisation was implicit and hard-coded). Many new transformation functions were added.
- Transaction variables. This can be used to store pieces of data, create a transaction anomaly score, and so on.
- Data persistence (can be configured any way you want although most people will want to use this feature to track IP addresses, application sessions, and application users).
- Support for anomaly scoring and basic event correlation (counters can be automatically decreased over time; variables can be expired).
- Support for web applications and session IDs.
- Regular Expression back-references (allows one to create custom variables using transaction content).
- There are now many functions that can be applied to the variables (where previously one could only use regular expressions).
- XML support (parsing, validation, XPath).
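A few of these features can be sketched with hypothetical 2.0 rules (the directives are from the 2.0 rule language, but the patterns shown here are made up for illustration):

```apache
# Phase 1 (request headers): switch the request body processor to
# XML for XML payloads, so parsing and XPath rules become possible.
SecRule REQUEST_HEADERS:Content-Type "^text/xml" \
    "phase:1,t:lowercase,nolog,pass,ctl:requestBodyProcessor=XML"

# Phase 2 (request body): per-rule transformation functions replace
# the implicit, hard-coded normalisation of 1.9.x.
SecRule ARGS "<script" \
    "phase:2,t:lowercase,t:urlDecodeUni,deny,log,msg:'XSS attempt'"
```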
More than ever, ModSecurity is a precision tool: it does not stand in your way but allows you to do what you want, when you want it. There is very little, if anything, happening implicitly. And I think that is a good thing. I really like the fact that ModSecurity does not give you anything by default. There are so many tools that give the illusion of security along with simplicity, but they don't tell you that there are circumstances when they don't work as expected. So we end up with the majority of users not knowing about the weak aspects. But the bad guys know that stuff well and use it regularly.
On the other hand, I received feedback that ModSecurity is becoming more difficult to use. I think a correct way to address this problem is through a console. I believe in the separation of concerns: the core engine must address the security issues and the GUI should address the ease of use. The ModSecurity Community Console will be available soon and will begin to address this problem. [editor's note: the ModSecurity Console v1.0 was just released]
What's new in the logging system?
Ivan Ristic: For those wishing to use ModSecurity as a logging tool there are two innovations. First is a special processing phase, intended for placement of rules that determine whether or not to log the transaction. In general, two decisions need to be made: 1) whether or not to log the transaction and 2) which parts of the transaction to log. A typical example of the latter is when you don't want to log the response bodies by default, but you choose to log the response body of a transaction that has suspicious content in the response body itself.
One problem with logging comes from the presence of sensitive data in the request payloads. Luckily, ModSecurity can now be configured to remove request parameters, request headers, and response headers from the logs. This can be achieved by naming the fields you wish to remove in advance, but it is also possible to automatically detect sensitive content (for example, using a regular expression to detect credit card numbers in parameters).
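Both logging innovations can be sketched with a few rules (the patterns are simplified for illustration):

```apache
# Never write these fields to the audit log in clear text.
SecAction "phase:2,nolog,pass,sanitiseArg:password,\
sanitiseRequestHeader:Authorization"

# Automatically sanitise any parameter whose value looks like a
# credit card number (regex deliberately simplified).
SecRule ARGS "@rx \d{13,16}" "phase:2,nolog,pass,sanitiseMatched"

# Log the response body (audit log part E) only for transactions
# whose response looks suspicious.
SecRule RESPONSE_BODY "fatal error" \
    "phase:4,pass,log,ctl:auditLogParts=+E"
```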
It is now possible to use variables. What type of cool things can we use them for?
Ivan Ristic: The addition of custom variables in ModSecurity v2.0 (along with a number of related improvements) marks a shift toward providing a generic tool that you can use in almost any way you like. Variables can be created using variable expansion or regular expression sub-expressions. Special support exists for counters, which can be created, incremented, and decremented. They can also be configured to expire or decrease in value over time. With all these changes ModSecurity essentially now provides a very simple programming language designed to deal with HTTP. The ModSecurity Rule Language simply grew organically over time within the constraints of the Apache configuration file.
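As a small sketch of variable creation from a regular expression sub-expression (assuming response body access is enabled; the pattern is made up for illustration):

```apache
# Capture a regex sub-expression into TX.1 and store it in a custom
# transaction variable for use by later rules.
SecRule RESPONSE_BODY "@rx Session ID: (\w+)" \
    "phase:4,pass,nolog,capture,setvar:tx.sessionid=%{TX.1}"
```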
In practical terms, the addition of variables allows you to move from the "all-or-nothing" type of rules (where a rule can only issue warnings or reject transactions) to a more sensible anomaly-based approach. This increases your options substantially. The all-or-nothing approach works well when you want to prevent exploitation of known problems or enforce positive security, but it does not work equally well for negative security style detection. For the latter it is much better to establish a per-transaction anomaly score and have a multitude of rules that will contribute to it. Then, at the end of your rule set, you can simply test the anomaly score and decide what to do with the transaction: reject it if the score is too large or just issue a warning for a significant but not too large value.
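The anomaly-based approach might look like this (scores and patterns are arbitrary examples, not a recommended rule set):

```apache
# Individual rules contribute to a per-transaction score instead of
# blocking outright.
SecRule ARGS "union select" \
    "phase:2,t:lowercase,pass,nolog,setvar:tx.score=+5"
SecRule ARGS "<script" \
    "phase:2,t:lowercase,pass,nolog,setvar:tx.score=+5"

# At the end of the rule set, act on the accumulated score.
SecRule TX:SCORE "@gt 10" "phase:2,deny,log,msg:'Anomaly score exceeded'"
SecRule TX:SCORE "@gt 5"  "phase:2,pass,log,msg:'Suspicious transaction'"
```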
What level of events tracking and correlation can we achieve?
Ivan Ristic: The data persistence features enable ModSecurity to understand higher level abstraction concepts, such as IP address, application, session, and user. You are no longer forced to look at only one transaction at a time, which is what the HTTP protocol forces you to do. Now it is possible to put a series of requests into the same group, using some piece of data to identify which group each request belongs to. This can be trivial, as it is with IP addresses, but it can also be challenging - for example, if you want to figure out the application session ID, which can be in one of several commonly used places.
From one point of view, data persistence allows you to employ anomaly detection on a wider scale: per IP address, application, session, user, and so on. Probably the best example is detection of brute force attacks; you could start counting the number of failed authentication attempts from one IP address. Once the number increases over the threshold you could decide to block the offending IP address or, even better, simply slow down subsequent authentication attempts.
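The brute-force example might be sketched like this (the login URL, failure status, threshold, and expiry time are all assumptions for illustration):

```apache
# Track state per client IP address across transactions.
SecAction "phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR}"

# Count failed logins (assuming the application answers a failed
# attempt at /login with a 401) and forget them after 10 minutes.
SecRule REQUEST_URI "^/login" "phase:5,chain,nolog,pass"
SecRule RESPONSE_STATUS "^401$" \
    "setvar:ip.failed_logins=+1,expirevar:ip.failed_logins=600"

# Block the address once the threshold is crossed.
SecRule IP:FAILED_LOGINS "@gt 5" "phase:1,deny,log,msg:'Brute force'"
```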
The point is that you are not looking for a single suspicious action any more - you are using counters to look for anomalies. Other examples include looking for IP addresses with too many failed requests (too many 404 responses typically point to web application scanner activity), enforcing session inactivity timeouts, session duration timeouts, and so on.
ModSecurity v2.0 users are free to create their own storage keys, meaning they can do with their data whatever they want. For example, once you decide an IP address is suspicious you could place it on the "IP addresses to watch" list and start logging every transaction performed, even for requests that come several days, weeks, or months later. My favorite is the ability to move an entire session or user account to a honeypot system, where you can record their actions in an environment in which they cannot do you any harm.
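The watch-list idea could be sketched as follows (the trigger pattern and the 30-day expiry are made up for illustration):

```apache
# Track state per client IP address across transactions.
SecAction "phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR}"

# Put an address on the watch list for 30 days when it trips a rule.
SecRule REQUEST_URI "@rx /etc/passwd" \
    "phase:1,pass,log,setvar:ip.suspicious=1,expirevar:ip.suspicious=2592000"

# Fully audit every later transaction from a watched address.
SecRule IP:SUSPICIOUS "@eq 1" "phase:1,pass,nolog,ctl:auditEngine=On"
```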
How can ModSecurity be used to stop attacks?
Ivan Ristic: I think we first need to cover the difference between a positive security model and a negative security model. With a negative security model you are trying to figure out whether there is something potentially dangerous in the traffic. This is typically how network intrusion detection systems work. This approach is easy to start working with but it's fairly difficult to design a foolproof rule set. Negative security rules can be written for generic web application attacks and also for known problems and exploits.
With the positive security model you only ensure that incoming data is safe to use, which is a much easier task than trying to detect bad stuff. For example, if you have a variable "userid" and it's an integer, then you only need a rule that ensures the content of the variable is always an integer. While more secure, the positive security model is also more difficult to deploy because it needs to be custom configured for the specific application being protected.
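The "userid" example translates into a single positive-security rule (the parameter name is taken from the example above):

```apache
# Positive model: userid must be an integer; reject everything else.
SecRule ARGS:userid "!@rx ^[0-9]+$" \
    "phase:2,deny,log,msg:'Invalid userid'"
```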
Another challenge lies in the fact that a positive security model is actually a duplication of the application business logic. Hence whenever the application changes the model must be updated. Some web application firewalls deal with these issues by learning the traffic and creating the positive model automatically.
ModSecurity supports both security models but it does not provide any learning facilities just yet.
What can we do to filter AJAX or AFLAX applications?
Ivan Ristic: From one point of view, AJAX applications are no different from your "normal" web applications. In both cases we have client devices (typically browsers) communicating with servers via HTTP. This means that most negative-security approaches used by web application firewalls continue to work. Rules and signatures designed to detect (and prevent) common web application attacks (SQL injection, cross-site scripting, and so on) will continue to work.
But there is a significant challenge involved with using web application firewalls to protect AJAX applications where the communication between clients and servers takes place over a custom protocol transported inside HTTP. Browsers can use only two types of encoding to transport parameters: application/x-www-form-urlencoded and multipart/form-data. The competitive advantage of web application firewalls comes from the fact that they understand these encodings and thus can drill into individual parameters.
However, I think the situation is going to improve soon. AJAX applications are just starting to be deployed so there is a lot of variety. Eventually most developers will want to use well-established AJAX libraries for their applications. They will in turn use communication protocols that are either documented or can be reverse-engineered, and web application firewalls will be able to more accurately learn the application's behavior.
What type of protection can we create to fight SQL injection attacks?
Ivan Ristic: In many cases, web application firewalls prevent SQL injection attacks but the protection is not always foolproof. Since SQL injection is a technique and not a specific attack, there are many ways of performing a SQL injection. It is possible to write a very good rule set for protection but it's not possible to guarantee that it will work 100 per cent.
The best solution is combining a positive security model of the application with the negative security of rule sets. The positive security can significantly reduce the possible false positives with SQL injection attacks and provide the best protection, since SQL injections are not normally part of acceptable application behavior.
Sometimes we hear about a bug in an application (perhaps closed-source) that we still have to run in production while we wait for a patch. Could we use ModSecurity to patch its I/O in the meantime?
Ivan Ristic: That's a very good question. External patching (also called "just-in-time patching" and "virtual patching") is one of the biggest advantages of web application firewalls. Since WAFs can intercept requests, check whether they are safe or not and simply drop offending requests, they are the easiest way to patch a known vulnerability in a web application.
Fixing vulnerabilities in web applications always requires time. Organisations rarely have access to an application's source code and are at the vendor's mercy while waiting for a patch. Even if they have access to the code, implementing a patch in development takes time.
So what we have is a window of opportunity for the attacker to exploit. The beauty of web application firewalls is that they can fix this problem externally. A fix for a specific vulnerability is usually very easy to design. In most cases it can be done in less than 15 minutes. The end result is that the organisation is back in control of their fate and the risk is significantly reduced.
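A virtual patch is usually just one or two rules. As a hypothetical example (the script name, parameter, and vulnerability are invented for illustration): suppose a closed-source application's showfile.php is known to allow path traversal through its "template" parameter. Until the vendor ships a fix, the parameter can be constrained externally:

```apache
<LocationMatch "/showfile\.php$">
    # Virtual patch: only allow safe characters in the vulnerable
    # parameter, closing the traversal vector from outside the app.
    SecRule ARGS:template "!@rx ^[a-zA-Z0-9_-]+$" \
        "phase:2,deny,log,msg:'Virtual patch: showfile.php traversal'"
</LocationMatch>
```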
Recently we have been reading about a lot of exploits against browsers. Could we use ModSecurity to filter the data our webserver is sending to them?
Ivan Ristic: Ultimately, however, client-side security is a problem that can only be partially addressed on the server side. There are issues of trust that are not technical problems.
This article originally appeared in Security Focus.
Copyright © 2006, SecurityFocus
Federico Biancuzzi is a freelancer. In addition to SecurityFocus he also writes for ONLamp, LinuxDevCenter, and NewsForge.