Key Takeaways
- Due to the high volume of vulnerabilities that exist in large production infrastructures, differentiating between vulnerability and exploitability allows teams to focus on the most dangerous vulnerabilities first.
- A good first step in any security program is to gain comprehensive visibility into the security of application code across the entire CI/CD pipeline. This allows attacks to be detected and mitigated earlier.
- When ranking vulnerabilities in terms of their exploitability, each vulnerability should be assessed within the context of your own application and infrastructure, not just by its CVSS score.
- Use filtering technology to limit the effectiveness of reconnaissance activities, which are commonly used by attackers in preparation for an exploit.
- Monitoring internal networks, hosts, and workloads for both Indicators of Attack (IoA) and Indicators of Compromise (IoC) can help catch any attacks that have gotten past your organization's defenses.
Open source software underpins the large majority of internet-facing applications. The availability, accessibility, and quality of such projects allow enterprises to innovate and succeed. It’s a great example of a public good, and something that should be celebrated and protected.
The ubiquity of open source means that any vulnerabilities discovered have far-reaching impact. Attackers see an enormous opportunity, and large numbers of enterprise and other users have to respond quickly to identify instances of the vulnerable software in applications they develop and in third-party applications and components they use.
The reality is that software vulnerabilities are common. How can security professionals assess the risk posed by vulnerabilities and best focus their organization's efforts on fixing those vulnerabilities that matter most?
Build comprehensive visibility – you can’t secure what you can’t see
Security teams are responsible for the integrity of the entire application, including all the open source components and third-party dependencies that were not written by the enterprise's developers. Much work has been done to improve the security of the software development process and to track dependencies with "shift left" initiatives and SBOMs (software bills of materials), so that code can be shipped to production with high confidence in its security. But when a vulnerability is published, how can you quickly identify where it is present in deployed code that is already running in production? A first step in any security program is to gain comprehensive visibility into the security of application code across the entire CI/CD pipeline, from build all the way through production, and across all application and infrastructure modalities, including running containers, Kubernetes, cloud providers, VMs, and/or bare metal. Eliminate your blind spots so that you can detect and mitigate attacks early.
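As one small piece of that visibility, a filesystem sweep can locate copies of a known-vulnerable library across mounted container images or host volumes. The sketch below is a hypothetical, simplified check for vulnerable Log4j 2.x jars by filename; real scanners also inspect package metadata and bundled ("shaded") jars, so treat this as illustrative only.

```python
import os
import re

# Matches log4j-core 2.x jars by filename and captures the minor version.
# Filename matching is an assumption for illustration; production scanners
# use SBOMs and package manifests, not just file names.
VULNERABLE = re.compile(r"log4j-core-2\.(\d+)\.\d+\.jar$")

def find_vulnerable_jars(root):
    """Walk a directory tree and return paths to log4j-core 2.x jars
    below the 2.17 line (affected by CVE-2021-44228 and its variants)."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            m = VULNERABLE.search(name)
            if m and int(m.group(1)) < 17:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Running a sweep like this across every node and image registry is what turns "we think we patched Log4j" into an inventory you can verify.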
Focus on what matters most: exploitability vs vulnerability
After gaining full visibility, it’s not uncommon for organizations to see tens of thousands of vulnerabilities across large production infrastructures. However, a list of theoretical vulnerabilities is of little practical use. Of all the vulnerabilities an enterprise could spend time fixing, it's important to identify which are the most critical to the security of the application and therefore must be fixed first.
To be able to determine this, it's important to understand the difference between a vulnerability, which is a weakness in deployed software that could be exploited by attackers for a particular result, and exploitability, which indicates the presence of an attack path that can be leveraged by an attacker to achieve a tangible gain.
Vulnerabilities that require high-privilege, local access in order to exploit are generally of lesser concern because an attack path would be difficult to achieve for a remote attacker (if a bad actor has already achieved privilege escalation on a local host, they have a great many opportunities to gain further control). Of higher concern are vulnerabilities that can be triggered by, for example, remote network traffic that would generally not be filtered by firewall devices, and which are present on hosts that routinely receive traffic directly from untrusted, internet sources.
Assess and rank potential exploits
When assessing a vulnerability to rank it by its exploitability, and therefore the priority in which to fix it, you’ll want to consider some or all of the following criteria:
- The severity of the vulnerability: CVSS (Common Vulnerability Scoring System) scores provide a good baseline of a vulnerability’s severity to enable you to compare vulnerabilities. However, CVSS scores do not consider the context of your own application and infrastructure, which is a gap you’ll need to shore up for the most accurate information.
- The attack vector – network vs local system access: Network accessible vulnerabilities often form the first step of an attack, and local system access vulnerabilities come into play once an attacker has gained a foothold within the app. This means you need to immediately seal off any network attack paths leading to your most exploitable vulnerable services and simultaneously look for signals of attack behavior on nodes and take corrective action.
- Proximity to the attack surface: Is there an attack path that provides a viable route by which an attacker can reach and exploit the vulnerability? When considering attack paths, make sure to consider ways an attacker could bypass firewalls, load balancers, proxies, and other hops, and address any exposure there, while your developers work on updating, testing, and redeploying the vulnerable applications.
- Presence of a network connection: Although all vulnerabilities that can be reached from external sources are a concern, vulnerabilities on applications that routinely handle general network connections, evidenced by current connections, are of the highest concern. These are the vulnerabilities that an attacker is most likely to discover using reconnaissance (recon) techniques.
The key here is to add runtime context to vulnerability data so that you can identify your most exploitable vulnerabilities, and therefore your list of which ones to patch first because they present the greatest danger to the security of your application.
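The criteria above can be sketched as a simple scoring function that weights a CVSS base score with runtime context. The field names and weights below are illustrative assumptions, not a standard formula; the point is that the same CVSS score can rank very differently once attack vector, reachability, and live network activity are factored in.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float               # CVSS base score, 0.0 - 10.0
    network_vector: bool      # attack vector is network (vs local access)
    reachable: bool           # an attack path reaches the workload
    active_connections: bool  # workload currently handles external traffic

def exploitability_score(f: Finding) -> float:
    """Weight the CVSS baseline by runtime context. Multipliers are
    illustrative: network-reachable, actively-connected services rise
    to the top; local-only findings sink."""
    score = f.cvss
    score *= 1.5 if f.network_vector else 0.5
    score *= 1.5 if f.reachable else 0.7
    score *= 1.3 if f.active_connections else 1.0
    return round(score, 1)

def prioritize(findings):
    """Return findings ordered by contextual exploitability, worst first."""
    return sorted(findings, key=exploitability_score, reverse=True)
```

With weights like these, a CVSS 7.5 flaw on an internet-facing service with live connections outranks a CVSS 9.8 flaw that requires local access on an unreachable host, which is exactly the reordering that pure CVSS triage misses.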
Consider using tools like open source ThreatMapper to help you identify your most exploitable vulnerabilities. You should run tools like this repeatedly over time as conditions change to focus your security efforts where they are needed most.
Limit recon activities
An attacker will typically follow an established playbook, using tactics and techniques documented by MITRE ATT&CK. These tactics follow models such as the Cyber Kill Chain and begin with recon activities and then move to an initial exploit. The initial exploit generally aims to gain limited local control over a beachhead system. From the beachhead, the attacker has a great many options to explore, escalate privileges, install persistent control systems, and investigate adjacent systems in order to spread laterally and locate the greater prize.
To limit the effectiveness of recon activities, start with identifying the attack paths that an attacker might take to reach a vulnerable component. In a belt-and-braces fashion, ensure that each of these attack paths is secured with filtering technology:
- WAF to capture and drop known recon traffic
- Protocol and source-based filtering to limit the clients who can access that path
- Additional application-level filtering to:
  - Ensure transactions are authenticated
  - For API traffic, ensure transactions originate from a trusted client implementation
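The layered checks above can be illustrated with a minimal admission function, assuming a WAF has already dropped known recon signatures upstream. The trusted ranges, parameter names, and policy are hypothetical; real deployments would derive them from identity providers and network policy, not hardcoded lists.

```python
import ipaddress

# Illustrative allowlist: an internal range plus an example partner range
# (203.0.113.0/24 is a documentation-reserved block, used here as a stand-in).
TRUSTED_SOURCES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def admit(source_ip: str, authenticated: bool, client_attested: bool) -> bool:
    """Apply the filters in order: source-based filtering, then
    authentication, then trusted-client attestation for API traffic."""
    addr = ipaddress.ip_address(source_ip)
    if not any(addr in net for net in TRUSTED_SOURCES):
        return False              # protocol/source-based filtering
    if not authenticated:
        return False              # transactions must be authenticated
    return client_attested        # API calls only from trusted client builds
```

Ordering the checks cheapest-first means recon probes from untrusted sources are dropped before they ever exercise authentication or application logic.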
ThreatMapper open source visualizes the attack paths leading to your most exploitable vulnerabilities so you can determine how best to seal them off.
Scour for “Indicators of Attack” and “Indicators of Compromise”
Despite best efforts to secure the attack surface and limit visibility to attackers, exploits can still happen for a variety of reasons – zero-day attacks, deliberate attempts to compromise the supply chain, lack of visibility into Shadow IT and other unmanaged assets, and so on. CVEs are published through the NVD at a rate of about 50 per day, so the risk of a new vulnerability being found in production is significant.
Another critical line of defense, therefore, is monitoring internal networks, hosts, and workloads for both Indicators of Attack (IoA) and Indicators of Compromise (IoC).
IoA can include exploratory, recon traffic from unusual sources, or unusual network traffic that may indicate the presence of command-and-control (C2) channels, remote telemetry, or exfiltration attempts. IoC are on-host indications that something is amiss and an attacker has gained a foothold, including unusual process behavior, file system access, or file system modification.
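One concrete IoC check is a file-integrity baseline: record digests of sensitive paths (binaries, cron entries, shell profiles) and alert when they change unexpectedly. This is a minimal sketch with hypothetical helper names; dedicated tools add event streaming, kernel-level hooks, and tamper protection for the baseline itself.

```python
import hashlib

def snapshot(paths):
    """Record SHA-256 digests of the monitored files."""
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def modified_files(baseline, paths):
    """Return paths whose content no longer matches the baseline -
    a candidate IoC worth correlating with other signals."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline.get(p)]
```

On its own a changed file proves nothing, which is why the correlation step described below matters: a modified binary plus unusual outbound traffic from the same host is a far stronger signal than either alone.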
It’s worth establishing a “red team” function that routinely explores applications and can determine the signals of attack and their impacts as they apply to your individual organization. Look to enterprise tools to help you automate and manage the huge volumes of IoA and IoC events generated from enterprise systems, including minimizing false positives, storing events for later analysis and, most importantly, correlating events to gain understanding of attack signatures and how those attacks progress against your applications. Armed with this knowledge, you can then deploy targeted, surgical countermeasures to block recon or attack traffic from internal or external sources, and/or to quarantine compromised workloads.
Conclusion
Log4j is another reminder that vulnerabilities are inevitable. But this shouldn't deter organizations from using open source code as a driver for innovation and other worthwhile goals. By gaining comprehensive visibility into application traffic across all infrastructure modalities, incorporating strategies to assess and prioritize vulnerabilities based on their risk of exploit, and maintaining constant vigilance in the hunt for traces of attacks, security leaders can guide their organizations through mitigating the risks associated with Log4j and the next big vulnerabilities.