14 Faults With Your Vulnerability Management Program You Don't Know About


One of the biggest security headaches for organizations is dealing with technical vulnerabilities on a recurring basis. While some organizations have a well-mapped-out process for managing vulnerabilities, others still rely on the “Whack-A-Mole” method, dealing with vulnerabilities at random as they appear.

The issue addressed by this article is that some organizations believe they have a well-mapped-out process for handling vulnerabilities, yet are getting it wrong in certain areas. Fourteen common faults in vulnerability management programs are highlighted below, in no particular order:

1 – Absence of an automated asset inventory: Having an asset inventory is more or less the first step in technically implementing a Vulnerability Management Program, as you can’t fix vulnerabilities on assets you don’t know you have. How do you accurately account for assets newly joined to the network? How do you dispose of assets that are no longer part of your network? How do you identify rogue assets?

An automated asset inventory tool answers all these questions by immediately adding newly created assets, eliminating disposed assets, and alerting on rogue assets. This is far better than a manually updated inventory, which leaves room for the aforementioned blind spots.
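As a rough illustration of the reconciliation logic such a tool performs, here is a minimal sketch in Python. The host names, the `reconcile` function, and the data are all hypothetical; a real tool would feed this from discovery scans and the inventory database.

```python
# Reconcile hosts discovered on the network against the recorded inventory.
# Hosts seen but not recorded are potential rogue assets; recorded hosts
# no longer seen are candidates for disposal.

def reconcile(discovered: set, inventory: set) -> dict:
    return {
        "new_or_rogue": sorted(discovered - inventory),  # alert and triage
        "stale": sorted(inventory - discovered),         # verify and retire
        "tracked": sorted(discovered & inventory),       # scan as usual
    }

result = reconcile(
    discovered={"ws-101", "ws-102", "printer-9"},
    inventory={"ws-101", "ws-102", "db-old-3"},
)
print(result["new_or_rogue"])  # ['printer-9']
print(result["stale"])         # ['db-old-3']
```

The point of automating this diff is that it runs continuously, so the blind spots above surface as alerts instead of accumulating silently.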

2 – Not running authenticated scans: While not strictly a fault on its own, unauthenticated scans fail to account for vulnerabilities that can only be observed with authenticated access. I hold the view that unauthenticated scans give a false sense of security. Imagine an attacker who has obtained user-level credentials to a workstation that was “fully” patched based on the results of an unauthenticated scan. The attacker then scans for privilege escalation vulnerabilities and finds that one does indeed exist, because unauthenticated scans in most cases would not discover it. You know the rest of the story. Authenticated scans give more detailed, thorough results to work with.

3 – Improper assignment of responsibilities: It’s wrong to assume that the members of the vulnerability management team are also responsible for patching and remediation. At minimum, the following roles should be mapped out clearly for accountability purposes:

  • System Administrator: Mostly deals with installing OS-based patches
  • Software Developer/DevOps: Mostly deals with installing application dependency patches (e.g. Java, .NET) and creating fixes for application-related vulnerabilities
  • Network Engineer: Deals with vulnerabilities discovered in network devices, e.g. routers

That said, the responsibility for fixing vulnerabilities never lies solely with the vulnerability management team; it is distributed across various other teams.
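The role mapping above is easy to encode so that every finding lands with an owning team automatically. This sketch is illustrative; the category names and the fallback owner are assumptions, not part of any particular scanner’s API.

```python
# Route each finding to the owning team based on the affected asset's
# category (mapping mirrors the roles listed above).

OWNERS = {
    "os": "System Administrator",
    "application": "Software Developer/DevOps",
    "network_device": "Network Engineer",
}

def assign_owner(finding: dict) -> str:
    # Anything uncategorized falls back to the VM team for triage.
    return OWNERS.get(finding["category"], "Vulnerability Management (triage)")

print(assign_owner({"id": "V-1", "category": "network_device"}))  # Network Engineer
```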

4 – Enabling interactive logon for authenticated scan credentials: Authenticated scans make use of service accounts. There is usually an option to allow the credentials to be used interactively, i.e. to log on via RDP to Windows servers. The risk crystallizes when someone gets hold of these privileged credentials and uses them to carry out further attacks. Interactive logon should be disabled for accounts used in authenticated scans.

5 – Not prioritizing remediation action: This right here is a faulty fault (laughs out loud). There are too many vulnerabilities to fix in too little time, so priority should be based on asset criticality first, then the severity and exploitability of the vulnerabilities. Imagine fixing medium-severity vulnerabilities on less important servers first just to reduce the vulnerability count, while a couple of critical, exploitable vulnerabilities sit on critically rated assets. The impact of these vulnerabilities being exploited should be considered before undertaking remediation.

There are a couple of vulnerability prioritization tools that can easily help solve this problem.

6 – Performing aggressive scans: Non-technical management teams will readily spit fire and question the value of a vulnerability management program if a scan happens to take down a server. Scanners can be configured to run less intrusively without causing downtime; consider this configuration before running any scans.

7 – Not documenting processes and procedures: It’s quite difficult to standardize a vulnerability management program if the recommended processes and procedures aren’t contained in a document. In worse cases, the documents are not distributed among teams for perusal and understanding. Anything not documented is as good as non-existent.

8 – Not testing patches before deployment: Ever installed a patch and broken an application afterwards? Unverified patches can cause massive downtime in an enterprise. There should be a defined process for verifying patches in a test environment whose configuration mirrors production. This way, downtime in the production environment is kept to a minimum.

9 – Not tracking progress with metrics: How do you confirm that your vulnerability management program is achieving its objectives? How do you confirm that the various responsible teams are putting in the required effort? Running a vulnerability management program without metrics largely defeats the purpose, as you can’t determine its effectiveness. Use metrics like the number of unique vulnerabilities detected, the number of exploitable vulnerabilities detected, percentage reduction in exploitable vulnerabilities, time to patch, and time to detect vulnerabilities.
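Two of the metrics mentioned above are simple enough to compute directly from scan records. The record structure below is an assumption for illustration:

```python
from datetime import date

# Mean time to patch (in days) and percentage reduction in exploitable
# vulnerabilities between two scans.

def mean_time_to_patch(records: list) -> float:
    deltas = [(r["patched"] - r["detected"]).days for r in records]
    return sum(deltas) / len(deltas)

def pct_reduction(before: int, after: int) -> float:
    return (before - after) / before * 100

records = [
    {"detected": date(2021, 3, 1), "patched": date(2021, 3, 11)},
    {"detected": date(2021, 3, 5), "patched": date(2021, 3, 25)},
]
print(mean_time_to_patch(records))  # 15.0 days
print(pct_reduction(40, 10))        # 75.0 percent
```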

10 – Absence of automation: Repetitive processes can be tiring and boring for analysts, and automating most of these tasks saves a lot of time. It’s easier to schedule scans to run at a particular date and time than to remember to run them manually on that same schedule.
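The scheduling decision itself is trivial to express, which is part of why leaving it to humans is wasteful. A minimal sketch (real scanners have built-in schedulers; the function and values here are illustrative):

```python
from datetime import datetime, timedelta

# Decide whether a recurring scan is due, given the last run time
# and the configured interval.

def scan_due(last_run: datetime, interval: timedelta, now: datetime) -> bool:
    return now - last_run >= interval

now = datetime(2021, 3, 22, 9, 0)
print(scan_due(datetime(2021, 3, 15, 9, 0), timedelta(days=7), now))  # True
print(scan_due(datetime(2021, 3, 20, 9, 0), timedelta(days=7), now))  # False
```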

Patching is also a great headache. If it isn’t automated with tools like SOAR platforms and certified patch management tools, time and effort are likely to be wasted.

11 – Writing complex reports: After scanning assets for vulnerabilities, some analysts send the raw scan results to the patch team. Others rewrite the reports in a way the various patch teams don’t really understand.

It makes sense to rewrite vulnerability scan reports in the simplest form possible, preferably as a table with fields like Vulnerability, Description, Severity, Impact, Affected Assets, and Remediation Steps. Don’t forget to include a dashboard for metrics monitoring in the report.
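Producing that table from structured findings takes only a few lines with Python’s standard `csv` module. The finding data below is made up for illustration:

```python
import csv
import io

# Flatten findings into the simple tabular form suggested above.
FIELDS = ["Vulnerability", "Description", "Severity", "Impact",
          "Affected Assets", "Remediation Steps"]

def to_csv(findings: list) -> str:
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(findings)
    return buf.getvalue()

report = to_csv([{
    "Vulnerability": "CVE-2021-0001",
    "Description": "SMB signing not required",
    "Severity": "Medium",
    "Impact": "Man-in-the-middle attacks",
    "Affected Assets": "ws-101; ws-102",
    "Remediation Steps": "Enable SMB signing via Group Policy",
}])
print(report)
```

A CSV like this drops straight into a spreadsheet or dashboard, which is usually all a patch team needs.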

12 – Manually managing scan credentials: Doing so is cumbersome and less effective. There will be credentials for assets on different domains as well as standalone devices like Linux servers and network devices. It makes more sense to integrate the vulnerability scanner with a credential manager, as it’s much more secure and effective.
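The shape of that integration is that the scanner fetches secrets at scan time instead of storing them in its own configuration. The sketch below is hedged: `fetch_credential` and the in-memory `VAULT` dict stand in for a real vault client (for example, a call to HashiCorp Vault’s HTTP API); none of the names are from an actual scanner product.

```python
# Hypothetical stand-in for a credential manager backend.
VAULT = {"linux/scan-svc": ("scan-svc", "s3cret")}

def fetch_credential(path: str) -> tuple:
    # A real implementation would call the vault's API over TLS here.
    return VAULT[path]

def start_scan(target: str, credential_path: str) -> str:
    user, _password = fetch_credential(credential_path)  # never written to disk
    return f"scanning {target} as {user}"

print(start_scan("srv-42", "linux/scan-svc"))  # scanning srv-42 as scan-svc
```

Because the credential never lives in the scanner’s config, rotating it in the vault rotates it everywhere at once.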

13 – Lack of management buy-in: If executive management doesn’t support your vulnerability management program, there’s very little chance of recording great success, as there’s no one from “above” to hold anyone accountable for exceptions. In fact, there might not even be a vulnerability management program to start with. Executive management support is ultimately necessary for the success of a vulnerability management program.

14 – Absence of a risk register for previously waived vulnerabilities: Imagine a scenario where the vulnerability management team has identified critical, exploitable vulnerabilities on a core production server. The business owner provides justification for not fixing them, claiming that downtime as brief as 120 seconds would threaten the organization’s existence, and that a virtual patching solution already prevents exploitation of all the identified vulnerabilities on that server. Management accepts the risk and moves on but fails to document it.

Risks like this should be documented and reviewed periodically, e.g. annually, as technology changes at breakneck speed. The virtual patching solution may no longer be effective at preventing exploitation once the threat landscape has changed.
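A risk register that supports this review cycle can be queried mechanically for overdue entries. The entries and the one-year review window below are illustrative assumptions:

```python
from datetime import date

# Flag waived vulnerabilities whose annual review date has passed.

def overdue(register: list, today: date) -> list:
    return [r["id"] for r in register if (today - r["accepted"]).days > 365]

register = [
    {"id": "RISK-7", "accepted": date(2019, 6, 1)},   # e.g. a virtual-patching waiver
    {"id": "RISK-9", "accepted": date(2021, 1, 15)},
]
print(overdue(register, date(2021, 3, 22)))  # ['RISK-7']
```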

Again, anything that isn’t documented is as good as non-existent.
