Sysadmins: Step away from the Big Mac. No more Heartbleed-style 2am patch dashes

6 steps to a saner patching regime

Patching is a necessary evil for network administrators. Unfortunately, an awful lot of them have been burning not only the midnight oil, but also the weekend oil, to keep up with patches for the likes of Heartbleed and Shellshock.

The bad news is that this is only the start. As software vendors move towards a more appliance-based approach, upgrades become that bit more difficult. Companies will find themselves running tens of appliance VMs, almost all of them Linux-based. Black boxes, if you will.

Each vendor may have a different update process. Some big players demand you redeploy an entire virtual appliance to patch it, making support that bit more time-consuming. Sometimes the updates don't even work and you have to jump through several dozen hoops to get your data moved on to a new bug-fixed platform.

Patching costs a lot of resources, time and money. How can you do it efficiently and accurately?

Every site and situation is different, but how businesses implement patching depends a lot on the size of the company. The bigger a company gets, the more red tape it accumulates and the slower it moves.

This means the costs of rolling out a patch increase significantly due to the overheads incurred, both technical and non-technical. Each progression on this path from small to large environments increases the cost and complexity of patching exponentially. How can administrators manage this issue and costs at the same time?

Surprisingly, one of the bigger issues with larger vendors is the time scale between vulnerability identification and general patch availability. Without naming names, critical patch timescales have been known to stretch into several weeks for some vendors affected by Heartbleed and similar. Unfortunately, bugging them and escalating on a daily basis (assuming you have the clout) only has so much of an impact. They like to take their own sweet time. Once they arrive, what’s the plan?

How not to do it

My first job as a network administrator for a small single site with approximately one hundred users gives some insight into how patching used to happen within smaller companies. Patches were deployed once a month, by hand.

The company was too cheap to buy SMS or any other patching infrastructure product, so it cost them a few hundred in overtime once a month (usually) for me to roll round the offices and server room, installing the latest patches whilst stuffing my face with McDonald's finest.

Testing was as simple as trying the desired patches on the IT administrators' machines or low-level servers for a week before rolling the patches out. Test infrastructure was something only larger companies had. The only other prep required was a catch-all email 48 hours in advance informing all the users that systems would potentially be unavailable for the best part of the weekend. Change control and contingency plans weren't even a thought beyond a decent back-up. Fortunately, I never got bitten by the "OMG Noooooezzz" patch of death that truly busted a machine.

Doing it properly

The key aspect is forward planning. Patches are going to be needed; they are not optional (for decent sysadmins, at least). Below is a list of steps that can help admins get on top of the nightmare that is Patch Tuesday (and every other vendor's equivalent).

1. Create a good, tested patch process, documenting the how and the where. Document what needs to happen for a patch deployment to be deemed successful. Include any paperwork or representations to change meetings that need to be made. Once the process exists and has been debugged of any issues, it can serve as a template for how to deploy patches in the future. This first step ensures patches are deployed consistently and uniformly across the environment.

2. Ask if this patch affects you. This may seem like an obvious question, but not all bugs will affect all users. If the bug is in a service you don't use or haven't even installed, there is little point in deploying the patch; the rough check sketched below shows one way to answer the question quickly.
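
Step two lends itself to a bit of automation. As a rough, non-authoritative sketch (assuming a Debian-style box with dpkg, and using placeholder package names and "fixed in" versions rather than real advisory data), a few lines of Python can tell you at a glance whether a vulnerable package is even installed:

#!/usr/bin/env python3
"""Rough sketch: does a given advisory even apply to this box?

The package names and 'fixed in' versions below are illustrative
placeholders, not advisory data -- always check the vendor bulletin."""

import subprocess

# Hypothetical watch list: package -> version that carries the fix.
FIXED_VERSIONS = {
    "openssl": "1.0.1g",   # e.g. a Heartbleed-era fix
    "bash": "4.3-7",       # e.g. a Shellshock-era build
}


def installed_version(package):
    """Return the installed version via dpkg-query (Debian/Ubuntu), or None."""
    try:
        out = subprocess.run(
            ["dpkg-query", "-W", "-f", "${Version}", package],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip() or None
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None  # not installed, or not a dpkg-based system


for pkg, fixed in FIXED_VERSIONS.items():
    current = installed_version(pkg)
    if current is None:
        print(f"{pkg}: not installed - this patch does not apply here")
    else:
        # Naive string comparison; a real check should use the distro's own
        # version rules (e.g. 'dpkg --compare-versions').
        status = "review needed" if current < fixed else "at or beyond the fix"
        print(f"{pkg}: installed {current}, fixed in {fixed} -> {status}")

If the package isn't present, that advisory drops off the list for that host; if it is, you know you have work to do. A production version would use the distro's own version-comparison rules rather than a naive string compare, and would run across your estate rather than one box.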
