Why the FBI’s 'new Internet' is a dumb idea
Behaviour is the disease, insecurity is the symptom
The FBI’s Shawn Henry says the world needs a second Internet for critical systems – apparently never having been told what a “private network” is when you don’t prefix it with the word “virtual” – and the idea is taking off in other quarters.
Here’s why it’s a dumb idea: it won’t work.
It’s not just that the easiest defences are the cheapest ones – as promulgated by Australia’s Defence Signals Directorate and now endorsed by the SANS Institute.
However, that’s a big part of it: if people can’t be trusted to apply patches and block obvious holes, how does creating a new, vastly expensive, probably intrusive (one idea doing the rounds is the registration of all machines) network change things? All it does is carry the same insecurities, vulnerabilities and slack practices onto a new network, which everybody will hail as “secure” right up until the moment it’s penetrated.
And penetrated it will be.
It seems like everybody’s forgotten that Stuxnet wasn’t an Internet-borne attack. It was carried on a USB key: the kind of attack vector that will still exist on Henry’s proposed secure Internet.
Not only that: the kind of private networks that do exist – say, electricity utilities’ extensive in-house fibre, to pick an example – become vulnerable not because they’re directly connected to the Internet, but because somewhere in a large organization, there are likely to be machines that exist on both the public and private networks.
They will still exist: it’s simply not feasible that any network of millions of machines will be entirely free of all possible bridges to other networks.
It seems to me that the Shawn Henry proposal is a recipe for tossing billions of dollars against walls the world over, and for creating a user base that believes itself secure and becomes even more cack-handed and complacent about actually protecting itself.
The real reason a “secure Internet” wouldn’t work is because, as the DSD and the SANS Institute have illustrated so efficiently, the problem is behavioural, not technical.
I’m going to propose an idea: use price signals to encourage the behaviour we want.
I believe – without the benefit of a single minute’s proper research, so I guess I’m handing some enterprising youngster a PhD outline on a plate here – that I can borrow an expression from the world of economics, the mis-pricing of risk, to explain what I mean.
How to price the risk?
When a lender puts the wrong price on their risk, they suffer a loss (OK, OK, or they get bailed out by already cash-strapped governments who don’t want the whole system to come crashing down around their ears).
The price of risk in computer security looks smaller than the price of security. It’s easy to add up the cost of security: firewalls plus servers plus IDS plus staff plus antivirus plus this fabulous quantum crypto kit …
However, until a breach actually occurs, the cost of risk is pretty much zero – you can’t predict the financial impact of a breach on any particular system until after the fact; and doing nothing is free until the sky falls in and someone’s dropped your customer list into Pastebin.
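To see what putting a price on an unrealised risk might look like, here is a minimal sketch of the standard actuarial-style calculation, annualised loss expectancy: the expected cost of one incident multiplied by the expected number of incidents per year. Every figure below is invented purely for illustration.

```python
def annualised_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = SLE * ARO: expected yearly cost of a risk that may never fire."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical: a breach costing $4m, expected roughly once a decade.
ale = annualised_loss_expectancy(4_000_000, 0.1)
print(ale)  # 400000.0 per year -- no longer "pretty much zero"
```

The point of the exercise isn’t the made-up numbers; it’s that the moment a figure like this appears on a balance sheet, “doing nothing” stops looking free.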
There is a group of people who are experienced in assessing the likely cost of something that hasn’t yet happened: actuaries.
Rather than trying to mandate technologies and network architectures and all the things that don’t help if the behaviour is wrong, why not look at the most effective way to encourage good behaviour – such as, for example, mandating “breach insurance” for all corporate and government computer systems connected to the Internet?
Today, someone deciding to connect internal System A to Internet-connected System B is encouraged to look at the business opportunity, and discount the risk. Someone deciding to replace an internal network with Internet services is encouraged to look at the savings, and discount the risk. Only when something goes wrong – the Sony PlayStation Network hack, for example – do we get an assessment of the cost involved.
Because there is no balance-sheet price on risking a computer system, many or most of the people holding the purse strings begrudge the cost of securing it.
But if there’s a real price associated with a risk, then security gets a business case: “your premium will be $2 million, or $800,000 if you satisfy our security auditors.” Or even “we will never insure this system to be exposed to the Internet. If you must run it, you must do so on a private network.”
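The business case falls out as simple arithmetic. Taking the premium figures above, and inventing a cost for satisfying the auditors (the $500,000 below is an assumption, not from the article), the comparison looks like this:

```python
# Premiums from the hypothetical quote above; audit cost is invented.
premium_unaudited = 2_000_000          # insure the system as-is
premium_audited = 800_000              # insure it after passing the audit
audit_and_hardening_cost = 500_000     # assumed one-off cost of compliance

saving = premium_unaudited - (premium_audited + audit_and_hardening_cost)
print(saving)  # 700000
```

Under those (made-up) numbers, hardening the system saves $700,000 a year – security spending finally has a line on the balance sheet to argue against.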
It’s not a complete solution. But it’s better than seeking truckloads of cash to try and replicate the Internet. ®