How secure are your applications?
Locking the stable door before the horse bolts
Let’s be blunt. The fine heritage of application development has not traditionally incorporated the pre-emptive creation of secure code, i.e. programs that are built from the ground up to be secure.
There are a number of potential reasons for this – not least that in the old days, before every system was connected (either directly or indirectly) to some kind of network, a certain code of conduct was assumed between developers, operations staff and users, that nobody would try to break anything. This ‘club rules’ spirit continues even now, despite repeated proof that such mindsets are, with the best will in the world, outdated.
Of course there are plenty of examples to the contrary. Military systems have long had to take security into account, and Commercial Licensed Evaluation Facilities (CLEFs) existed over a decade ago, whose task it was to try to break into bespoke applications using a variety of penetration testing techniques. These days, in the UK we have such certification schemes as CHECK and CREST, which are very much a continuation of this theme.
But we are still far from the situation where a ‘secure application’ is seen as the norm rather than the exception. For a recent example of security not being baked in from the outset, we need look no further than Spotify; and to be sure, there will be plenty of internal examples that are quietly swept under the carpet.
It would be too easy to have an alarmist rant at this point about the scale of the threat, the naivety of the people involved, the absolute need to respond to the issues right now – but that’s not really the point, as change is in the air anyway. There are a number of reasons for this, which (as usual) boil down to a combination of regulatory changes (e.g. PCI DSS) drawing attention to the risks, vendors getting their acts together in terms of tooling, and the community at large warming to the idea of addressing the problem.
In a recent conversation with Tim Orchard at security consulting firm Activity, in answer to an open question, I was told that “We are definitely seeing a rise in demand for services around secure application delivery.” While the will may be there, the knowledge levels are patchy – “Some organisations are better informed than others,” said Tim. This lack of understanding translates to a lack of will to build security in at the outset of a development project. Of course, it’s not just security that gets short shrift – we saw a similar factor at play when we looked at availability requirements (or the lack of them).
It would be great to think that all security problems could in some way be magicked away through the use of security tools from the likes of IBM, HP, Fortify, Secerno or Qualys. Some of these tools help developers spot security weaknesses in code, whereas others look for run-time vulnerabilities. While there is undoubtedly a place for tools, they can only go so far – a common complaint is the generation of false positives, which then mask real issues when they arise.
Perhaps there really is no substitute for human intervention. “Tools are never as good as a manual pen tester,” Tim Orchard told me, “particularly when it comes to application logic flaws.” While he clearly had a vested interest in saying so, I know from my own experience that he probably had a point. The issue is one of money – of course, we’d all love to get some top-notch experts in, but in many cases the funding just isn’t there.
So, what to do? The answer probably lies in facing up to security as early as possible in the application lifecycle. Ultimately, security is a business issue – combining reputational risk and financial risk – and by considering applications in this context, it becomes more straightforward to identify what might go wrong and what should be dealt with as a priority.
Funding will always be an issue. But engendering a more security-conscious mindset doesn’t have to be that expensive: for instance, there are many free security tools out there, either built into development suites (e.g. Microsoft Team System) or downloadable from the Web. Security tools vendors would of course say that free tools are no substitute for what they offer, but they are certainly better than doing nothing at all.
Jon Collins is a panel member on the live and interactive Regcast "Jump start your Application Security initiatives". This goes out at 6PM BST/1PM EST/10AM PST on July 21.
It was drummed into me as "it's all very well trying to make your software idiot proof, but the problem is that the world keeps creating bigger and better idiots".
Re: Anonymous Coward @ 10:54
I'm with you. I've been coding since the early 80s, and the assumption has always been that users are morons and there is no such thing as common sense, so you have to verify EVERY piece of data you allow into your system.
I can't remember who said it but I think "Idiot Proof? Idiots are surprisingly resourceful!" sums it up quite well.
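The "verify EVERY piece of data" point above can be sketched in code. This is a minimal illustration, not from the article: the function names and rules (a hypothetical username format and age range) are assumptions chosen for the example; the principle is to allowlist what you expect and reject everything else.

```python
import re

# Allowlist pattern: only letters, digits and underscore, 3-20 chars.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw: str) -> str:
    """Return the username only if it matches the allowlist."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def validate_age(raw: str) -> int:
    """Parse an age field from untrusted text, enforcing a sane range."""
    try:
        age = int(raw)
    except ValueError:
        raise ValueError("age must be a number") from None
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age
```

The idea is that nothing from the outside world reaches the rest of the system without passing through a checkpoint like this, however resourceful the idiot on the other end turns out to be.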
Let's not forget the tool vendors' part in this...
One snag is the way that some tool vendors keep releasing significantly different toolset upgrades. Although I'm a fan of Microsoft's development tools/environments/languages etc., and most development shops can take these changes in their stride, organisations that do, or commission, a significant amount of internal development end up with problems. Not only is there the continual cost of training, but the underlying framework on which a given application relies often becomes obsolete in fairly short order.
Given that most organisations don't refactor their own apps EVER, but do try to keep abreast of new developments, they inevitably end up with a load of mismatched, soon-to-be-legacy liabilities.
This breeds two evils. Firstly, even if the code doesn't have holes in it, the underlying frameworks in use probably will for the first couple of years. More importantly, end users will start to use "work-arounds" that may involve all sorts of spreadsheet and MS Access nastiness – to say nothing of exposing old databases that were designed for internal use to the Internet, with huge attention to graphics but absolutely none to their basic suitability and security.
When I see these adverts promising "manageable code", "long term support", and all the other nonsense designed to tempt IT managers to part with their inadequate budgets, I say a small prayer that the ads' target market is a completely unreconstructed bunch of cynics who have to get wet before they'll believe it really is raining.
Windows carries a huge amount of bloat to ensure backwards compatibility, for perfectly sensible reasons - Couldn't tool vendors make a few more compromises here too?
Change isn't always good.