ModSecurity 2.0 hits the streets

Ivan Ristic explains what's hot about the new release

What's new in the logging system?

Ivan Ristic: For those wishing to use ModSecurity as a logging tool there are two innovations. The first is a special processing phase, intended for rules that determine whether or not to log the transaction. In general, two decisions need to be made: 1) whether or not to log the transaction at all, and 2) which parts of the transaction to log. A typical example of the latter is when you don't log response bodies by default, but choose to log the response body of a transaction whose content looks suspicious.
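
A minimal sketch of what such a rule might look like in the phase 5 (logging) processing phase; the detection pattern is purely illustrative (audit log part E is the response body):

```apache
# Log headers and request bodies by default, but not response bodies
SecAuditEngine RelevantOnly
SecAuditLogParts ABCFHZ

# In the logging phase, if the response body contains something suspicious,
# add part E (the response body) to the audit log for this transaction only
SecRule RESPONSE_BODY "@rx (?i:fatal error|stack trace)" "phase:5,pass,nolog,ctl:auditLogParts=+E"
```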

One problem with logging comes from the presence of sensitive data in request payloads. Luckily, ModSecurity can now be configured to remove request parameters, request headers, and response headers from the logs. You can achieve this by naming the fields you wish to remove in advance, but it is also possible to detect sensitive content automatically (for example, using a regular expression to detect credit card numbers in parameters).
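
A sketch of both approaches using the sanitisation actions; the credit card pattern here is deliberately simplified for illustration:

```apache
# Always mask these well-known sensitive fields before they reach the logs
SecAction "phase:5,pass,nolog,sanitiseArg:password,sanitiseRequestHeader:Authorization"

# Mask any parameter whose value looks like a credit card number
# (naive pattern, for illustration only)
SecRule ARGS "@rx \d{13,16}" "phase:5,pass,nolog,sanitiseMatched"
```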

It is now possible to use variables. What type of cool things can we use them for?

Ivan Ristic: The addition of custom variables in ModSecurity v2.0 (along with a number of related improvements) marks a shift toward providing a generic tool that you can use in almost any way you like. Variables can be created using variable expansion or regular expression sub-expressions. Special support exists for counters, which can be created, incremented, and decremented. They can also be configured to expire or decrease in value over time. With all these changes ModSecurity essentially now provides a very simple programming language designed to deal with HTTP. The ModSecurity Rule Language simply grew organically over time within the constraints of the Apache configuration file.
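
A sketch of the counter support described above; the collection and variable names are illustrative (counters that expire or decay live in a persistent collection such as the per-IP one):

```apache
# Persistent counters need a collection; initialize one per client IP
SecAction "phase:1,pass,nolog,initcol:ip=%{REMOTE_ADDR}"

# Create or increment a counter
SecRule REQUEST_URI "@rx /admin" "phase:1,pass,nolog,setvar:ip.admin_hits=+1"

# expirevar removes a variable after N seconds;
# deprecatevar decreases it by N points every M seconds (N/M)
SecAction "phase:1,pass,nolog,expirevar:ip.admin_hits=3600,deprecatevar:ip.admin_hits=1/60"
```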

I don't want the rule language to grow more complex than it already is. So as a future development direction I am considering embedding an entire JavaScript interpreter into ModSecurity. The idea is to use the rule language to handle the simple cases, falling back to JavaScript for very complex scenarios or for cases where the rule language is simply not feasible. Another path, especially for those who care deeply about performance, is to write plug-ins. This is already possible with version 2.0, but the opportunities will increase with future releases.

In practical terms, the addition of variables allows you to move from the "all-or-nothing" type of rules (where a rule can only issue warnings or reject transactions) to a more sensible anomaly-based approach. This increases your options substantially. The all-or-nothing approach works well when you want to prevent exploitation of known problems or enforce positive security, but it does not work equally well for negative security style detection. For the latter it is much better to establish a per-transaction anomaly score and have a multitude of rules that contribute to it. Then, at the end of your rule set, you can simply test the anomaly score and decide what to do with the transaction: reject it if the score is too high, or just issue a warning if it is significant but below the rejection threshold.
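
The anomaly-scoring approach might be sketched like this; the individual scores, thresholds, and detection patterns are illustrative assumptions:

```apache
# Individual detection rules only contribute to a transaction-local score
SecRule ARGS "@rx <script" "phase:2,pass,nolog,setvar:tx.anomaly_score=+10"
SecRule REQUEST_URI "@rx \.\./" "phase:2,pass,nolog,setvar:tx.anomaly_score=+5"

# At the end of the rule set, act on the accumulated score
SecRule TX:ANOMALY_SCORE "@gt 20" "phase:2,deny,status:403,log,msg:'Anomaly score too high'"
SecRule TX:ANOMALY_SCORE "@gt 10" "phase:2,pass,log,msg:'Suspicious transaction'"
```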

What level of events tracking and correlation can we achieve?

Ivan Ristic: The data persistence features enable ModSecurity to understand higher-level abstractions, such as IP address, application, session, and user. You are no longer forced to look at only one transaction at a time, which is what the HTTP protocol forces you to do. Now it is possible to put a series of requests into the same group, using some piece of information to identify the group each request belongs to. This can be trivial, as it is with IP addresses, but it can also be challenging, for example, if you want to figure out the application session ID, which can be in one of several commonly used places.
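
A sketch of how a persistent collection ties requests together, here keyed trivially by client IP address (the SecDataDir path is an assumption):

```apache
# Persistent collections need somewhere to live on disk
SecDataDir /var/cache/modsecurity

# Group all requests from the same client under one per-IP collection;
# session and user collections work analogously via initcol:session / initcol:user
SecAction "phase:1,pass,nolog,initcol:ip=%{REMOTE_ADDR}"
```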

From one point of view, data persistence allows you to employ anomaly detection on a wider scale: per IP address, application, session, user, and so on. Probably the best example is the detection of brute force attacks: you could count the number of failed authentication attempts from one IP address. Once the number rises above a threshold you could decide to block the offending IP address or, even better, simply slow down subsequent authentication attempts.
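
The brute force example might be sketched like this; the /login path, the 401 status code, and the thresholds are assumptions for illustration:

```apache
SecAction "phase:1,pass,nolog,initcol:ip=%{REMOTE_ADDR}"

# Count failed authentication attempts (here: 401 responses to /login),
# forgetting them after ten minutes
SecRule RESPONSE_STATUS "@streq 401" "phase:5,pass,nolog,chain"
SecRule REQUEST_URI "@beginsWith /login" "setvar:ip.failed_logins=+1,expirevar:ip.failed_logins=600"

# Block the address once the threshold is exceeded; the pause action
# (milliseconds) could be used instead of deny to merely slow attempts down
SecRule IP:FAILED_LOGINS "@gt 10" "phase:1,deny,status:403,log,msg:'Possible brute force attack'"
```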

The point is that you are not looking for a single suspicious action any more - you are using counters to look for anomalies. Other examples include looking for IP addresses with too many failed requests (too many 404 responses typically point to web application scanner activity), enforcing session inactivity timeouts, session duration timeouts, and so on.
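
The 404 example follows the same counting pattern; the window and threshold are illustrative:

```apache
SecAction "phase:1,pass,nolog,initcol:ip=%{REMOTE_ADDR}"

# Too many 404s in a short window typically indicates a web application scanner
SecRule RESPONSE_STATUS "@streq 404" "phase:5,pass,nolog,setvar:ip.notfound=+1,expirevar:ip.notfound=60"
SecRule IP:NOTFOUND "@gt 30" "phase:1,deny,status:403,log,msg:'Web application scanner suspected'"
```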

ModSecurity v2.0 users are free to create their own storage keys, meaning they can do with their data whatever they want. For example, once you decide an IP address is suspicious you could place it on an "IP addresses to watch" list and start logging every transaction it performs, even for requests that come days, weeks, or months later. My favorite is the ability to move an entire session or user account to a honeypot system, where you can record the attacker's actions in an environment where they cannot do you any harm.
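
A sketch of the watch list idea: flag the address once, then force full audit logging for its later requests. The flag name, trigger pattern, and 30-day expiry are illustrative assumptions:

```apache
SecAction "phase:1,pass,nolog,initcol:ip=%{REMOTE_ADDR}"

# Somewhere in the rule set: mark the address as one to watch for 30 days
SecRule ARGS "@rx union\s+select" "phase:2,pass,log,setvar:ip.watch=1,expirevar:ip.watch=2592000"

# From then on, log every transaction this address performs
SecRule IP:WATCH "@eq 1" "phase:1,pass,nolog,ctl:auditEngine=On"
```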
