Wrong priorities, increased risk

Setting priorities is hard, and cybersecurity (like almost everything else in life) is a prioritization problem. Beyond treating cybersecurity as a business priority and setting up a risk mitigation strategy (with a clear vision, goals, and action plan), the importance of granularity, i.e., the level of detail, is often overlooked, which can lead to failed execution. For instance, you could have an objective to maintain a secure infrastructure through a specific set of actions that includes a vulnerability management program for rapid identification and remediation of vulnerabilities. However, most IT teams struggle to understand which specific vulnerabilities are most likely to be targeted, or what their potential impact on the environment would be, and consequently cannot accurately determine which vulnerabilities and assets require prioritized mitigation.

This basically means that defining priorities at the strategic plan level is much easier than at the implementation level. Vulnerabilities, cyber threats, protective technologies, asset monitoring, awareness capabilities, measurement metrics, etc. are all important aspects of dealing with risk where the execution details can be quite challenging: selecting the most relevant vulnerabilities, the most appropriate protective defenses, or the most pertinent cyber threats and adversaries. Obviously, there is no single answer here. Threat modeling is one process through which those priorities can be deduced. Another important process for dealing with the prioritization issue, in my opinion, is the quantification of the attack surface. These two types of analysis are complementary: threat modeling is centered around evolving threats and attack paths, while attack surface analysis is more broadly concerned with the attack avenues into the system.

Attack surface analysis for better prioritization

So you have your strategic goals, target outcomes, and a nice action plan. But when you get to the implementation stage, you are faced with choices, which are essentially security/resources tradeoffs between vulnerabilities, defenses, threats, etc. To make those choices, you need clear visibility into your environment, and that’s where mapping and quantifying the attack surface can play a role.

Most descriptions of the attack surface come down to defining relevant properties of the system that are potential items of interest to an attacker. These properties, or units of reasoning about the attack surface, are generally considered within a set of dimensions or themes, which represent the focus and interpretation under which the attack surface is measured. In this systematic literature review (SLR) on the use of the attack surface phrase, the authors give a sampling of existing definitions and identify six themes covering all of the interpretations of the attack surface in the literature: methods, adversaries, flows, features, barriers, and reachable vulnerabilities.

The reachable vulnerabilities theme focuses on the exposure associated with known vulnerabilities, and it dominates the way security professionals describe the security of their environments nowadays. The abovementioned SLR found, however, that the methods and adversaries themes are the most prevalent in the literature, and that the vulnerabilities theme is one of the least cited, along with the barriers theme, which focuses on the security controls an attacker must overcome to breach a system.

Regardless of the themes under which the attack surface is defined (it can be a combination of the above), they all have one thing in common: they frame the attack surface in terms of attack opportunities, i.e., characteristics of the system that are potential security risks and would require further review, testing, or protection. Using these entry (and exit) points into (and from) the system, defenders can identify high-risk areas in their environment. This can be based on historical incident/breach data or broader threat intelligence reports. For instance, according to Verizon’s 2020 Data Breach Investigations Report, two thirds of breaches featured hacking or error actions, and 80 percent of hacking actions involved either brute force or stolen credentials. Based on such figures, the focus would be on strengthening remote services and further reviewing access control efforts, for instance. This kind of analysis should help clarify where the environment is most exposed to attack and prioritize mitigation and compensating controls.

Finally, the good news is that this process can be automated to a certain extent, provided that the defender has access to extensive scan data of the environment (e.g., the services, technologies, and functionalities of the system) and to some proxy (context!) data representing the prevalence of the possible attack vectors in incidents and breaches, ideally specific to the environment under assessment or otherwise global. I believe attack surface analysis is a great tool for prioritizing and finding a security/resources balance in risk management.
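To make this concrete, here is a minimal sketch of what such automation could look like, assuming two hypothetical inputs: scan results listing each asset's exposed services, and proxy prevalence weights for attack vectors derived from incident/breach data. All names, mappings, and numbers below are illustrative placeholders under those assumptions, not real statistics or a definitive implementation.

```python
from collections import defaultdict

# Hypothetical prevalence weights per attack vector (0..1), e.g. a proxy
# built from incident/breach statistics relevant to the environment.
VECTOR_PREVALENCE = {
    "stolen_credentials": 0.8,
    "brute_force": 0.6,
    "web_app_exploit": 0.5,
    "phishing": 0.4,
    "misconfiguration": 0.3,
}

# Hypothetical mapping from exposed services to the attack vectors they
# open up; in practice, building this mapping is the context-specific part.
SERVICE_VECTORS = {
    "rdp": ["brute_force", "stolen_credentials"],
    "ssh": ["brute_force", "stolen_credentials"],
    "http": ["web_app_exploit", "misconfiguration"],
    "smtp": ["phishing"],
}

def score_assets(scan_results):
    """Rank assets by the summed prevalence of the attack vectors that
    their exposed services contribute to the attack surface."""
    scores = defaultdict(float)
    for asset, services in scan_results.items():
        for service in services:
            for vector in SERVICE_VECTORS.get(service, []):
                scores[asset] += VECTOR_PREVALENCE.get(vector, 0.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example scan output (hypothetical): asset -> exposed services.
scan = {
    "vpn-gateway": ["ssh", "http"],
    "mail-relay": ["smtp"],
    "jump-host": ["rdp"],
}

for asset, score in score_assets(scan):
    print(f"{asset}: {score:.1f}")
```

In a real environment, the hard part is the mapping from exposed services and features to the attack vectors they enable; that is exactly where the proxy/context data mentioned above comes in.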
