Q: Why do defenders keep losing to smaller cyberwarriors?
A: 'Ant smarts' not 'asymmetry'
Forget everything you've read on The Reg or anywhere else about wars that target computer networks, power grids and other essential electronic infrastructure because it's loaded with fallacies, a prominent security consultant said Wednesday.
Contrary to conventional wisdom, the damage from cyberwar can kill people, and those who wage it aren't as anonymous as most security experts and military advisers claim, Dave Aitel, CEO of Immunity, said during a talk at the 20th Usenix Security Symposium in San Francisco. But the biggest myth of all, he submitted, is the idea that cyberwar inevitably favors the attacker, allowing people with modest means to inflict disproportionate mayhem on much larger opponents.
“People assume the current asymmetricness, the current offense-seems-to-be-winning feature of the internet, is built in and it's not,” he said during his 90-minute talk, which was titled “The Three Cyber-war Fallacies.” “This is a danger for attackers as well, because attackers can get lulled into a false sense of security. You have the advantage because you got lucky and the current field is on your side, but that changes quickly.”
Aitel said that contrary to the oft-repeated claim that cyberwar is “non-kinetic,” its effects can be every bit as physical and devastating as a bomb blowing up a bridge. He held up the Stuxnet worm's sabotage of centrifuges used in Iran's uranium enrichment program as a prime example.
“People miss the message of Stuxnet, which wasn't: 'I blew up your nukes, I'm cool,'” he said. “The real message was: 'I can take out any factory you have at any time I choose.' That's a much scarier message.”
He also tried to debunk the idea that cyberwar differs from conventional war in that it's often impossible to identify the party waging an attack. Attribution in more traditional war theaters is often tough too, and security researchers routinely find plenty of clues about those behind sophisticated threats, he explained.
But Aitel saved his most biting criticism for the notion, advanced by a chorus of renowned security analysts – he named Bruce Schneier and Deputy Secretary of Defense William Lynn – that the fundamental characteristics of the internet and critical infrastructure allow a handful of well-trained operatives to wage guerrilla warfare campaigns against much larger adversaries.
He also criticized a keynote given at last year's Black Hat security conference in which Retired US General Michael Hayden referred to critical systems as “Poland on the web, invaded from the west on even-numbered centuries, invaded from the east on odd-numbered centuries.”
“None of this is inherent in the cyber domain,” Aitel said. “You don't have to be dumb, but just because you are doesn't mean you have to do it in the future.”
He held up the so-called “smart meters” being deployed by the state of California as one example of the kind of dumb behavior exhibited by defenders of critical infrastructure.
“The unfortunate thing is that price pressure means you're going to [build meters] with chips that cost about a half cent,” he said. “So the smartness has to be very very minimal. We're talking about ant-level smarts at best, and that doesn't lend itself to security.”
At times, Aitel's talk seemed to digress into asides that undermined his premise. The ability of attackers half a world away to take out any factory they choose whenever they want only seems to strengthen the point that cyberwarfare is indeed asymmetrical. And he blithely referred to Google's Chrome OS as “a much more secure platform,” despite recent research showing it's vulnerable to many of the same devastating attacks that have plagued websites for a decade.
Aitel seemed to approach his speech (slides to which are here) as a series of Zen koans for the security set that was geared more toward making them grasp new ideas than providing them with practical observations about how people defending critical systems should operate differently.
But when pressed to do just that during a question and answer session, he offered some pithy advice.
“Very few corporations have the organization structure now which will say: 'All new things we buy we run through a security team. If it doesn't meet our mark, the security team can neg it.' That's a very painful thing to do for an organization, but you have to have that if you're going to move forward in any secure way.”
“Obviously, the defenders can make the decision not to run this crap, and it's a very easy one to make.” ®
"The ability of attackers half a world away to take out any factory they choose whenever they want only seems to strengthen the point that cyberwarfare is indeed asymmetrical."
This technically does not put his two statements in conflict, though it certainly doesn't help them either. The point of the asymmetry statement was that the asymmetry is not inherent to the internet and network security. There's no real reason that a handful of well-trained "operatives" should be able to take out a facility guarded by hundreds of similarly trained "operatives." The current reality is that it does appear this way, but more often it seems like a couple of experienced crackers manage to take out a poorly secured facility, which shouldn't be too surprising really.
The Stuxnet attack also doesn't really contradict the asymmetry argument, because what asymmetry existed in that attack actually favored the attacker to begin with. Every major report on the likely attackers concluded that it was probably carried out by a large, experienced and well-funded organization, and the defenders never showed much ability to fend off such a well-organized attack. The asymmetry here looks more like a giant swatting a gnat.
A lot of the problem with computer & network security is that we have just learned how to build small forts and checkpoints for these computers. They seem to work well enough as near as we can tell, but the bad guys just keep smashing their way in anyway. Given time, we'll learn how to make huge nasty castles to defend our information and resources and the people trying to get in will find their work much harder, though still not impossible. However, at some point we may also need to figure out how to convince people to store their information in the safer castles, rather than their homemade forts.
Point: Most "industrial" systems shouldn't be world read/writeable.
And I mean at the bit-level, not a more human readable format.
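On a Unix-ish system that point is easy to act on. Here's a minimal sketch of stripping group and world access from a data file an industrial collector might write; the filename is an assumption for illustration, and real deployments would set this at file creation via the umask rather than after the fact.

```python
import os
import stat

# Hypothetical example: a log file written by a plant historian process.
path = "plant_historian.log"
open(path, "w").close()

# Strip all group and world permission bits: equivalent to `chmod 600`.
# Only the owning user can now read or write the file.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode & (stat.S_IRWXG | stat.S_IRWXO) == 0  # not world read/writeable
```

The same idea scales up: default-deny, then grant the one process or group that genuinely needs access.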
Security starts from the ground and works its way up. Security also needs proper nutrition ... if your system's roots don't have the proper fertilizer, your crop is going to decompose. There is a reason for the term "bit-rot".
The main problem with computer and network security, in my mind, is that the folks running the computers and networks have absolutely zero clue as to the underlying details of what is going on at the bit-level.
I blame the "ease of use" myth, first popularized by Apple, then picked up by Microsoft, and now Canonical is playing the same bogus card.
Cyberwar: your worst enemies are your own people
They just aren't paranoid enough. They insist (despite all the education, procedures, regulations, warnings and threats of dismissal) on loading unapproved software or data onto supposedly secure computers. They take confidential information away on laptops or thumb drives - and then lose it. They don't bother to encrypt data they move around. They divulge passwords. They use company computers for personal entertainment and they leave them unattended with their work screens unsecured.
The biggest problem is that everything that goes on with computers is intangible. They never get to see the data that's so important and therefore disregard it. Even in cases where data is in physical form, such as paper, they STILL manage to treat it with such slapdash attitudes that it gets lost, left on trains or thrown away where anyone (who wanted it) could easily find it.
Hell, people don't even bother to cover their own tracks and delete emails that could land them, personally, in chokey.
I suppose the problem is that staff just aren't punished enough for their transgressions. Maybe that's because these systems aren't rigorously monitored and security protocols enforced: "Hey, Jim. I noticed you logged in to the central control machine yesterday without clearance. You know that's a sackable offence - pack your bags and this nice gentleman will escort you to the door." What we need for our secure and critical systems is the same sort of controls that banks have to prevent their staff sampling the product. It won't catch all offenders, but it should at least give us a better chance of repelling the invaders.
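The "I noticed you logged in without clearance" check above is the sort of thing a few lines of script can automate. A minimal sketch, assuming a made-up log format and user names purely for illustration: compare each login on a sensitive host against a clearance list and flag the rest.

```python
# Users cleared to access the central control machine (assumed names).
CLEARED = {"alice", "bob"}

# Assumed log format: "<date> <time> LOGIN <user> <host>".
auth_log = [
    "2011-08-10 09:14 LOGIN alice central-control",
    "2011-08-10 11:02 LOGIN jim central-control",
    "2011-08-10 13:45 LOGIN bob central-control",
]

def unauthorized_logins(lines, cleared):
    """Return users who logged in without being on the cleared list."""
    flagged = []
    for line in lines:
        user = line.split()[3]  # fourth field is the user name
        if user not in cleared:
            flagged.append(user)
    return flagged

print(unauthorized_logins(auth_log, CLEARED))  # -> ['jim']
```

None of this catches a determined insider, but it makes the "pack your bags" conversation possible at all, which is the commenter's point.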