Overcoming ‘Security By Good Intentions’
The Redmondian Law of Unintended (But Accept It Anyway) Consequences
Opinion Last week Microsoft announced plans to revise the process it uses to provide patches that fix problems with its software. While IT executives around the world may be swooning in gratitude at this latest demonstration of 'Trustworthy Computing' in action, those in the real world of IT, such as system administrators, network engineers, and security staff - in other words, the "doers with a clue" - have little to rejoice about with this latest news from Redmond.
By now, anyone with a Windows computer knows that hardly a week passes without a software patch/hotfix/update issued by Microsoft to fix a problem in its products. For security professionals and system administrators alike, the number of alerts and advisories pertaining to a new Microsoft software problem showing up in our e-mail inboxes almost matches the number of e-mail offers for miracle drugs promising to increase the size of certain body parts overnight.
I've never been a big fan of Microsoft's product update process. In fact, there are times when I believe it's better not to install a Microsoft patch, since applying a patch for one problem tends to create numerous new ones - an ongoing cycle that I've dubbed the Redmondian Law of Unintended (But Accept It Anyway) Consequences. Anyone who suffered through the Windows NT Service Pack fiasco over the years knows what I'm talking about, especially since it's difficult, if not impossible, to remove a patch or service pack (or fully trust it's been removed) without a complete re-install of the operating system.
As a result, Windows users must hedge their bets: do they install a patch to fix today's problem now but risk creating newer ones costing additional time and labor to fix tomorrow? Or should they forgo the patch and, as US Homeland Security Circus-Master Tom Ridge says, "stay alert for suspicious [system] activity but go about their normal [computing] activities?"
Certainly, all operating systems require patches now and then. But the key difference is that trusting such patches is easier when users have access to the system internals and can see exactly what a patch affects. The closed nature of some operating systems means that users (especially home users without dedicated test equipment) must base their "trust" in a patch on how it behaves after installation, instead of beforehand. In other words, roll the dice and pray for the best.
Understandably, those charged with Windows system administration face an endless barrage of vendor alerts and are challenged with not only implementing the fixes they deem necessary but responding to the unforeseen problems such fixes may create once deployed. It's truly a Catch-22 situation. And, while it's easy to blame system administrators for allegedly being complacent in their duties - and some certainly are, no doubt - I believe the majority of blame and responsibility falls on Microsoft's own practices.
If Microsoft really wants to improve its product security, and provide a demonstrable example of truly 'Trustworthy' computing, it needs to stop perpetuating the illusion of its commitment to security and do something truly effective toward that noble and much needed goal.
As such, I humbly offer a few suggestions:
First, Microsoft needs to ensure that its product updates - hotfixes, patches, and service packs - do not break existing system installations when applied. This includes preventing updates from modifying network (or application) settings, network shares, and other software (or software dependencies) on the system, whether from Microsoft or a third party. If such breakage is truly unavoidable, it must be disclosed in the README.TXT file or another easily located, hard-to-ignore place. Further, installing or updating applications should not modify parts of the operating system, user settings, or data. For example, if a user does not want Visual Basic Scripting (VBS) support when installing Microsoft Office, VBS should not mysteriously appear on his system after installing anything else from Microsoft in the future. The user, not Microsoft, must be the sole authority for determining what will (or will not) be installed on his computer, and how such systems - and applications - are configured.
Second, any - and I mean any - patches or product updates must be removable. If the user finds a problem created by a newly-applied update, he must be confident that he can "roll back" the system to its pre-patch configuration and not be forced to rebuild the system from scratch. This capability should be an unconditional, required feature of patches or product updates. (Reportedly, Microsoft is working on this feature.)
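The "roll back" requirement boils down to one discipline: preserve the pre-patch state of everything you touch before you touch it. Here is a minimal, purely illustrative sketch in Python of that idea - all names here are hypothetical, and a real OS patcher would also have to cover registry and configuration state, dependencies, and atomic failure handling:

```python
import shutil
from pathlib import Path

def apply_patch(target: Path, new_content: str, backup_dir: Path) -> None:
    """Replace target's contents, saving the original so the change is reversible."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    # Preserve the pre-patch version (contents and metadata) before overwriting.
    shutil.copy2(target, backup_dir / target.name)
    target.write_text(new_content)

def rollback_patch(target: Path, backup_dir: Path) -> None:
    """Restore target to its pre-patch contents from the saved backup."""
    backup = backup_dir / target.name
    shutil.copy2(backup, target)
    backup.unlink()  # the backup is consumed by the rollback
```

The point of the sketch is only that reversibility must be designed in up front: if the installer does not record the old state at patch time, no amount of after-the-fact cleverness can reliably reconstruct it.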
Third, patches to fix security- or critical operational-related problems must be released separately from product updates. Being forced to first update the entire operating system - including Solitaire and Notepad - to fix a buffer overflow vulnerability in Internet Explorer is absolutely the wrong approach to critical patch management. Using the patching process to force users to update their base systems to a more current product level may be convenient for Microsoft, but it arbitrarily imposes an unnecessary burden on system administrators, to say nothing of injecting potential new interoperability concerns into their computing environments.
Fourth, security and operational-related patches must not change the software license terms of the base system. As a Register article noted last year, Microsoft released a critical security update for the Windows Media Player and included - some would say quietly slipped in - a revised software license for the product that essentially granted Microsoft the user's consent to modify system settings at any time, such as to update Digital Restrictions Management technologies. Of course, users didn't have to accept the software license agreement that popped up when installing the patch, but declining it meant they didn't receive the needed security updates - something I described in an article last year as a form of implied extortion. Security and operational-related fixes should never come with monopoly-affirming strings attached, especially in a world where few read the fine print of software licenses.
Fifth, and perhaps most importantly, Microsoft must place less emphasis on patch management and concentrate on releasing quality software code. If the underlying code is properly developed and effectively tested prior to release (both for real-world operability and security), there might not be the need for a much-ballyhooed streamlined patch management process, since it's likely there won't be as many recurring problems in the first place. Get it right at the start, and there will be fewer problems down the road.
The bottom line is that no matter how "easy" or "streamlined" Microsoft makes its software patching process, and no matter how effectively the company's commitment to product security is promoted by public relations firms and the media, if these underlying problems aren't resolved, system administrators still won't rush to install the latest product patch - or will do so only after extended testing and evaluation, during which time they run the risk of being exploited. Either way, Windows-based networks will remain vulnerable, and the software giant's vaunted 'Trustworthy Computing' platform will fail to achieve its lofty aspirations despite the formidable roadmap established for that goal.
Recently, Scott Charney, Microsoft's Security Strategist, told the TECH*ED audience that for Microsoft, "an ounce of prevention is worth a pound of cure." Time will tell if Microsoft can live up to this age-old truism as it embraces its new product philosophy of "secure by design, secure by default, secure in deployment and communications."
I wish them well.
© 2003 Richard Forno. All Rights Reserved.