
Security damn well IS a dirty word, actually

Wash your mouth out with TLS 1.2

Sysadmin blog An interesting feature popped up on Ars Technica recently; website journo Nate Anderson discusses how he learned to crack passwords.

The feature is good; good enough for me to flag it up despite that journalistic competition thing*. That said, the feature gently nudges – but does not explore – a few important points that are increasingly critical to any serious discussion about IT security.

In his feature, Nate describes himself as having learned to become a "script kiddie." While I won't dispute the nomenclature, reading the feature left me with the impression that he felt the tool chosen was an important part of what separates the script kiddie from more well-versed malefactors.

The difference between a script kiddie and a decent cracker isn't the tool used. It is the time taken to understand how a tool works, why it works that way, what its limitations are and – ultimately – the effort made to increase the tool's efficiency and/or likelihood of success. Nate may have started his journey out as a script kiddie, but I suspect he's put far more thought into this than most script kiddies do. Were he to pursue this "addictive" line of investigation for a few more months, he'd be well on his way to what – in the 80s – we called a cracker.

The terms have been diluted over the years. A cracker was someone who put a lot of time and effort into breaking digital locks. It required a fair amount of knowledge to accomplish, but was still a focused pursuit. A hacker – to use the old-school terminology – would take this same iterative, experimental approach to hardware. They would see software and hardware as two parts of a single whole.

For an old-school hacker the goal was to learn. The reward was solving another puzzle. These people still exist today, though they are increasingly driven underground as curiosity itself seems to be rapidly becoming illegal.

Wave your hands

Google's Self Driving Car: a security problem waiting to happen?

Computers are not magic. It is simultaneously a simple truth and the hardest element of their operation to intuitively grasp. There are so many layers between today's users and the underlying transistor logic that the operation of computers legitimately seems like magic, even to those who've spent a lifetime in the field. (Be rational all you want, printers were sent from hell to make us miserable.)

The problem with computers today – as with yesteryear – is the abstraction of these operating fundamentals from the usage of the device. Despite evolving existing interfaces, periodically reinventing the wheel and even changing form factors, we are actually pretty bad at abstracting away the underlying flaws of computer design such that end users don't need to know how the widget works.

If you don't know how the widget works, you are ultimately going to be vulnerable to some security flaw you didn't even know existed. Despite this, computers keep proliferating; the growth in deployments looks exponential, with no end in sight. Computers are in everything from our cars to our phones, and soon our watches and even our eyeglasses. If we can't secure the mess we have today, what hope can we possibly have of locking down the much-hyped internet of things?

It's dead, Jim

Anderson correctly highlights that the fragility of passwords is frightening. Password cracking software is shocking in its ease of use. What should be more frightening – but hasn't sunk in yet for most – is the ease with which virtually every other security mechanism we employ can also be compromised.
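
To make the point concrete, here is a minimal sketch – not the tooling Anderson used – of a dictionary attack against unsalted MD5 hashes in Python. The input file names are hypothetical, and real crackers such as hashcat or John the Ripper add GPU acceleration and mangling rules on top of this basic loop; the sketch only shows why fast, unsalted hashes fall over so easily.

import hashlib

def crack(hash_file, wordlist_file):
    # Load the target hashes (one lowercase hex MD5 digest per line).
    with open(hash_file) as f:
        targets = {line.strip().lower() for line in f if line.strip()}
    recovered = {}
    # Hash every candidate word and see if it matches a target.
    with open(wordlist_file, encoding="utf-8", errors="ignore") as f:
        for word in f:
            candidate = word.rstrip("\n")
            digest = hashlib.md5(candidate.encode()).hexdigest()
            if digest in targets:
                recovered[digest] = candidate
    return recovered

if __name__ == "__main__":
    for digest, password in crack("hashes.txt", "wordlist.txt").items():
        print(digest, "->", password)

Even on a laptop CPU, a loop like this chews through millions of candidates a minute; the dedicated tools are orders of magnitude faster again.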

From encryption at rest (via RAM grabs, amongst others) to SSL/TLS (via, apparently, everything) on to nearly every other storage and transmission mechanism we've invented: the IT industry seems to birth crypto mechanisms that are really only practically secure for a few years – a decade at best.

More frustrating than this is that we do generate solutions to known vulnerabilities on a regular basis. In many cases they simply remain unimplemented. Consider the shocking lack of support for DNSSEC, or the fact that amongst the mainstream browsers TLS 1.1 is only enabled by default in Safari and Chrome while TLS 1.2 isn't enabled by default on iOS devices. (There's a good discussion on why here.)
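
For the curious, this is roughly how you can check what a given server will actually negotiate – a sketch using Python's ssl module (3.7 or later), with the host name purely an example. It pins the client-side floor at TLS 1.2, so anything older is refused outright.

import socket
import ssl

def negotiated_tls_version(host, port=443):
    context = ssl.create_default_context()
    # Refuse to speak anything older than TLS 1.2 on the client side.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

if __name__ == "__main__":
    print(negotiated_tls_version("www.example.com"))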

The economies of most nations depend on the security and trustworthiness of these authentication mechanisms and yet the implementation of newer techniques is constantly held back. The multinationals making the gear we use circle each other and growl; each is looking to exploit the weaknesses that affect us all to their individual advantage.

Ultimately, I don't think education alone will help here. Keeping one step ahead on the cryptography front has to be combined with a UX that abstracts the "hard stuff" away from end users. As much as I'd love to teach 7 billion people proper password hygiene, I suspect this isn't the correct path.
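
As a sketch of what "abstracting the hard stuff away" might look like, here's a toy passphrase generator – the wordlist file is hypothetical – that does the hygiene for the user instead of expecting them to learn it.

import secrets

def passphrase(wordlist_file, words=5):
    # One dictionary word per line; join a random handful with hyphens.
    with open(wordlist_file, encoding="utf-8") as f:
        candidates = [w.strip() for w in f if w.strip()]
    return "-".join(secrets.choice(candidates) for _ in range(words))

if __name__ == "__main__":
    print(passphrase("wordlist.txt"))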

Defeating security mechanisms is a challenging puzzle that offers wealth to those who accomplish it. Creating new security mechanisms – or fixing old ones – is hard and few are willing to engage in the activity unless a clear monetary advantage can be gained. We need a fundamental rethink regarding the economics of IT security. The market as it stands today isn't delivering. That failure promises to be a problem for us all. ®

*Though we'll put the link down here, eh, Trevor - Ed.
