Out of the Slammer
Something needs to change
Opinion: With the Slammer worm, network security becomes literally a matter of life and death. Where do we go from here, asks Tim Mullen?
Three hundred and seventy-six bytes.
That's all there was to "Slammer," 376 bytes. When you think about it, it's amazing that a piece of code could have wreaked such havoc on the Internet and caused such widespread system failure -- at about the size of two paragraphs of this column.
What is even more amazing is that this worm was a success in the first place. Many layers of our security model had to fail for Slammer, or "Sapphire" as eEye dubbed it, to make it as far as it did: Not only did there need to be a large installation of un-patched SQL/MSDE boxes in use, but they also needed to be reachable over the Net on UDP 1434.
As usual, the worm story starts and ends with the unpatched, Internet-accessible system. Every major worm we have seen exploited a known vulnerability in a service, be it Nimda, Slapper, Slammer or Code Red. Code Red was supposed to be a wake-up call, but it is obvious that many hit "snooze," rolled over, and went back to sleep.
And this time, it cost us.
General Internet congestion is always expected for a worm like this, but the peripheral effects of Slammer caught many by surprise. They caught me by surprise. Financial institutions and government bodies were affected by this worm. I was skeptical of mainstream media reports of Slammer's infestation of a 911 emergency response system, so I contacted the reportedly hard-hit Bellevue, WA center directly. The conversation was sobering. According to an operator in the dispatch center, the worm forced them to switch to manual systems. If a non-trivial emergency event had occurred during this period -- a car pileup or a major fire or explosion -- there would have been a "most definite" risk to human life due to process delays and system unavailability.
According to this operator, someone could have died.
Like many others, I had taken the threat of "cyber-terrorism" with a large grain of salt. But where the interdependencies of multiple systems connected to the Internet make it possible for a worm to shut down normal operations of an emergency dispatch center by accident, it does make me wonder what could happen if someone launched a coordinated attack on purpose.
So, what needs to change?
Obviously, we need more secure software in the first place -- and not just from Microsoft; all software manufacturers ship vulnerable software. Resist the urge to blame Redmond for this one. You didn't buy SQL Server because it is "secure"; you bought it because it is a kickass database engine that allows you to more easily and efficiently manage business processes and data so you can be more profitable. Like all software, it is just a tool, and we are responsible for making sure it is properly used and properly secured in our environment.
I don't like it, you don't like it, but the truth is we will be stuck with the "install now, patch later" model for quite some time, and we need to accept that and get good at it.
While we still need a better way to manage patches and updates, that is only part of the solution. Slammer showed us that worms can be as much of an infrastructure problem as they can be for Internet-facing systems. Remember, this was not just a server worm: it also infected workstations running un-patched versions of MSDE. And we all know that when it comes to riding on the Patch Train, desktop software is seated in the caboose.
That is why the concept of "least privilege" is so important. Had more firewalls been configured to "deny all and allow only needed" services as opposed to the "open it up and block what you know is bad" model, this worm would not have been a problem. (On that note, I was surprised to see that Slammer did not set the source port to 53 in order to look like a DNS response and slip through even more firewalls. Small favors.)
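To make the distinction concrete, here is a minimal sketch of the "deny all, allow only needed" model as an iptables ruleset (a hypothetical illustration assuming a Linux packet filter; the allowed ports are examples, not a recommendation). Under a default-deny policy, Slammer's UDP 1434 probes are dropped without anyone having to know the port is dangerous:

```
*filter
# Default policies: drop everything inbound that is not explicitly allowed
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow return traffic for connections this host initiated
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Explicitly allow only the services the business requires (examples)
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 25 -j ACCEPT
# No rule for UDP 1434: the worm's traffic falls through to the DROP policy
COMMIT
```

The "block what you know is bad" model inverts this -- an ACCEPT policy with a blacklist -- and fails the moment an attacker uses a port nobody thought to block.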
You can bet that your security people know this. The problem is that most management teams don't give IT the resources it needs to do its job, or the power to set and enforce policy when it comes to securing the services your business units dictate must be available.
So, as it is with most issues, this all comes down to people and money. If you want your systems to be secure, you need to get the right people, you need to let them do their jobs, and you need to be prepared to pay for it.
This will become evident when security exposures start making businesses lose customers. I just hope we figure it all out before someone loses something much more precious.
Timothy M. Mullen is CIO and Chief Software Architect for AnchorIS.Com, a developer of secure, enterprise-based accounting software.