CIO

8 ways your patch management policy is broken

These eight common patching mistakes get in the way of effective risk mitigation. Here's how to fix them.

Failing to patch your software and devices appropriately has been a top reason organizations get compromised, and has been for three decades. In some years, a single unpatched application like Sun Java was responsible for 90% of all successful web exploits. Unpatched software is clearly a risk that needs to be mitigated effectively.

So, it’s surprising that most organizations don’t do patch management effectively even though they think they do. Here are some of the common ways patch management policy is broken.

1. Not patching the right things

The number one patching problem is not patching the highest-risk applications first. You’ll find hundreds to thousands of things that need patching in almost any environment, but a handful of software types are attacked far more than everything else. Those need to be patched first, best and quickest.

On client workstations, the following four types of software are attacked the most:

  • Internet browser add-ins
  • Internet browsers
  • Operating systems
  • Productivity applications (e.g., Office applications)

On servers, the following types of software are attacked the most:

  • Web server software
  • Database server software
  • Operating systems
  • Remote server management software

These classes of software make up less than 5% of all software vulnerabilities, but more importantly, unless there is an active exploit in the wild, you usually don’t have to worry about a given vulnerability. Decades of data show that unless public exploit code exists “in the wild,” a vulnerability is unlikely to be exploited, and only about 2% of all publicly announced vulnerabilities ever end up with exploit code in the wild.
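
To make the triage concrete, here is a minimal Python sketch of that prioritization. It assumes you can export your scanner findings as CVE IDs and pull a feed of known-exploited vulnerabilities such as CISA’s KEV catalog; the URL, JSON field names and example findings below are assumptions, not references to any particular product.

    # Prioritization sketch: findings whose CVEs appear in a known-exploited
    # feed go to the front of the patching queue. The KEV URL and field
    # names are assumptions based on CISA's published JSON format.
    import json
    import urllib.request

    KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
               "known_exploited_vulnerabilities.json")

    def load_known_exploited(url=KEV_URL):
        """Return the set of CVE IDs with exploit code known to be in the wild."""
        with urllib.request.urlopen(url) as resp:
            return {item["cveID"] for item in json.load(resp).get("vulnerabilities", [])}

    def prioritize(findings, known_exploited):
        """Sort scanner findings so actively exploited CVEs come first."""
        return sorted(findings, key=lambda f: f["cve"] not in known_exploited)

    # Hypothetical scanner export.
    findings = [
        {"cve": "CVE-2023-0001", "host": "web01", "software": "web server"},
        {"cve": "CVE-2023-0002", "host": "pc042", "software": "browser add-in"},
    ]
    for f in prioritize(findings, load_known_exploited()):
        print(f["host"], f["cve"], f["software"])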

Solution: Patch the software most likely to be exploited first, best and quickest.

2. Too focused on patch rate

I have rarely visited a customer site (and I have visited hundreds) that did not tell me that they have some incredible patching rate, like 99%. I have never visited a customer site that had a single device fully patched and I have never scanned a device that didn’t contain a critical vulnerability. Why the big disconnect?

What a “99% patch rate” usually means is that they are patching 99% of Microsoft applications on most of their devices, and even that is rarely true. If I check to see if they have any vulnerable remote management software or vulnerable versions of internet browser add-in programs, the answer is usually yes. Sometimes I’ll find five different versions of the same program and none of them are correctly patched.

More importantly, the 1% that isn’t patched represents the highest risk vulnerabilities. Does saying you have a 99% patch rate mean anything for overall security risk if you have nearly a 0% patch rate on the stuff that is most likely to be exploited? No, and yet that scenario accurately describes what I see in most environments.
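
As a rough illustration, the Python sketch below computes the overall patch rate and the patch rate restricted to vulnerabilities with in-the-wild exploits side by side. The field names and data are hypothetical; the point is that a 99% overall rate can coexist with a 0% rate on the vulnerabilities attackers actually use.

    # Metric sketch: overall patch rate vs. patch rate on actively exploited
    # vulnerabilities. Field names and the example data are made up.
    def patch_rate(findings):
        return sum(f["patched"] for f in findings) / len(findings) if findings else 1.0

    def report(findings):
        exploited = [f for f in findings if f["exploited_in_wild"]]
        print(f"Overall patch rate:               {patch_rate(findings):.0%}")
        print(f"Exploited-in-the-wild patch rate: {patch_rate(exploited):.0%}")

    # 99 patched, low-risk findings and one unpatched, actively exploited one.
    findings = [{"cve": f"CVE-2024-{i:04}", "patched": True, "exploited_in_wild": False}
                for i in range(99)]
    findings.append({"cve": "CVE-2024-9999", "patched": False, "exploited_in_wild": True})
    report(findings)   # prints 99% overall, 0% on what actually matters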

Solution: Don’t worry about reporting overall patching success rates. Tell me how well you patch the vulnerabilities most likely to be exploited.

3. Not patching fast enough

All compliance guides say to patch critical vulnerabilities in a timely manner, whatever that means. What it should mean is that you patch them within a couple of days, a week at most. I understand the need for many people to wait a day or three to see if a just-released patch has some serious bug in it, but I run across organizations with written policies to patch within a month. That’s crazy.

In a day when the latest patches are turned into wormable exploits within minutes of release, you can’t wait a month to patch a critical component, especially one of the most attacked components. If you rely on “inline patches” (signatures that block attempts to exploit those unpatched vulnerabilities), deploy them immediately.
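
One simple way to hold yourself to that window is an automated report that flags any actively exploited, still-unpatched finding more than a week after its patch shipped. The field names and example data in the sketch below are assumptions about how you track findings.

    # SLA sketch: flag high-risk findings still unpatched past a one-week deadline.
    from datetime import date, timedelta

    MAX_AGE = timedelta(days=7)

    def overdue(findings, today=None):
        today = today or date.today()
        return [f for f in findings
                if f["exploited_in_wild"] and not f["patched"]
                and today - f["patch_released"] > MAX_AGE]

    findings = [{"cve": "CVE-2024-1111", "host": "db01", "patched": False,
                 "exploited_in_wild": True, "patch_released": date(2024, 5, 1)}]
    for f in overdue(findings, today=date(2024, 5, 20)):
        print(f"OVERDUE: {f['host']} {f['cve']}")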

Solution: Patch the components most likely to be attacked within a week.

4. Not clear who’s responsible for patching

It is the rare organization where one person or team is responsible for all patching. Usually, one person or team handles a large part of it, but someone else is responsible for patching devices, another for application servers, another for database servers, and so on.

I rarely find an organization that isn’t missing lots of patches across lots of its computers. When I ask why that is happening, they start pointing fingers. “I’m in charge of user workstations, but not the servers,” or “I’m not allowed to touch such-and-such servers,” or “the DNS administrators have decided not to patch that right now because it breaks yada-yada.” The excuses fly as fast as the finger-pointing. The only problem is that you’ve got lots of unpatched things and no one taking responsibility for patching them.

Solution: Make one person/department solely responsible for all patching.

5. Patches not tested before deploying

Yes, patching will break some things. That’s no reason not to patch quickly, but anyone who has rolled out a patch only to have it crash the device it was deployed on is forever burned. No one gets a raise for crashing a server, even if it was due to installing a security patch. So, test. And by “test” I mean do something…anything.

The conventional wisdom is that all patches should be tested across broad swathes of different device types and configurations before deployment. Only after thorough and complete testing are patches allowed to be deployed. That’s great, if you actually do it.

Most companies deploy patches without a single bit of testing. That’s just setting yourself up for a critical failure on the day you need it least. Instead of making patch testing a binary thing (i.e., you either do it or you don’t), do at least some testing before a wide-scale roll-out.

Define ahead of time which of your non-critical servers, user workstations and devices will be your full-time guinea pigs, and use them when it comes time to roll out patches. Roll the patches out to those production test servers and users a day or two after they come out. Wait a day or two to see if they cause any problems, and if not, deploy more widely, but not to everything else at once. Do production deployments in multi-day waves, but quickly enough that everything is deployed within a week. Start small and then spread out.
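
One way to write that schedule down is a simple set of deployment rings. The ring names, day offsets and example date in the sketch below are assumptions you would adapt to your own environment and patch tooling.

    # Rollout sketch: staged waves starting with designated guinea pigs and
    # finishing within a week of the patch's release.
    from datetime import date, timedelta

    RINGS = [
        ("pilot: designated test servers and volunteer workstations", 0),
        ("wave 1: non-critical production", 2),
        ("wave 2: remaining workstations", 4),
        ("wave 3: critical servers", 6),
    ]

    def schedule(patch_release):
        for ring, offset in RINGS:
            print(f"{patch_release + timedelta(days=offset)}: deploy to {ring}")

    schedule(date(2024, 6, 11))   # example release date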

Again, don’t make testing a binary choice. If you can’t do it completely and right, at least do some testing. And have a good plan to back out of the patches in case one of them causes big problems.

Solution: Test patches before doing wide-scale production deployments and have a back-out plan in case a patch causes problems.

6. Patch management team has no authority

Every good patch management leader I talk to complains about having all the responsibility (if something successfully attacks a device they haven’t patched yet) and none of the authority to force the stakeholders of those devices to patch properly. For instance, when unpatched Sun/Oracle Java was responsible for 90% of all successful web exploits, most patch managers told me they couldn’t patch it because doing so broke too many legitimate programs. That, paired with the fact that Java was also the most widely installed program after the operating system, led hackers to target it the most.

It is not acceptable to do nothing when you find an unpatched critical software program with public exploit code in the wild. You can do a lot of things, but doing nothing is not one of them. Anytime I’ve heard of a program breaking because of a new patch, it has almost always been because a programmer did something they were not supposed to. Make sure your developers (or your vendor’s developers) aren’t being lazy and causing you patch management issues.

If you can’t patch a program, consider one of the following alternatives (roughly triaged in the sketch after this list):

  • Removing it if not needed
  • Removing the unpatched device from the network, or strongly isolating it, if possible
  • Using software to block threats that could exploit the unpatched vulnerability
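
The sketch below turns that list into rough decision logic. The inputs and wording are assumptions meant to illustrate the triage, not a prescription for any particular tool.

    # Compensating-control sketch: pick a fallback when a patch can't be applied.
    def fallback(software_needed, can_isolate, has_inline_blocking):
        if not software_needed:
            return "Remove the software entirely."
        if can_isolate:
            return "Take the device off the network or move it to a restricted segment."
        if has_inline_blocking:
            return "Deploy signatures that block exploitation of the vulnerability."
        return "Escalate: formally accept the risk or force the patch through."

    print(fallback(software_needed=True, can_isolate=False, has_inline_blocking=True))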

Solution: Take alternative actions to mitigate risk when you are prevented from deploying a patch.

7. Vulnerabilities patched once and forgotten

Patching is not an install-it-and-forget-it problem. Patch management is not about buying a product that claims to patch everything perfectly every time. That patch management product does not exist. Patch management is about effective risk management and keeping a finger on the pulse of what is and isn’t being exploited in the wild.

Solution: Put a sophisticated risk manager in charge of your patch management program.

8. Patch managers’ incentives misaligned

Lastly, most patch management leaders, if they are specifically incentivized on patch management at all, are ranked on what percentage of all software programs they patch in a timely manner. I can tell you the answer. It’s 99%. It’s always 99%, and that 99% says nothing about your true risk profile.

Instead, incentivize patch managers by how well and quickly they patch the most attacked programs. If the number of unpatched software programs exploited in your environment goes down versus a previous time period, and no critical attacks have taken place because of unpatched software, that should be considered success. I want to salute that patch management leader, because everything else is just lying with statistics.

Solution: Make sure a patch manager’s incentives are aligned with true risk reduction and not an arbitrary overall patching percentage.

Patch management is all about risk management. By following these recommendations, you can decrease cybersecurity risk by patching the right stuff better and faster.