How to Leverage Threat and Attack Intelligence in your Risk Assessments
Risk assessment methodologies were, by and large, built before much of the information we have today was available. We need to take advantage of the latest advances in threat intelligence and attack intelligence to make security risk assessments more valuable and better aligned with real life.
“What the hell do you know about TCAP?”
Based on my last blog post, you'll know that I believe FAIR [1] has some great qualities that are well thought out and reasoned, and we have built our risk assessment in line with many of its principles. However, one of the issues I have with vanilla FAIR is the variable (one of at least ten, depending on how you look at the framework) called "Threat Capability", or TCAP. The TCAP variable is intended to rank an adversary's ability to successfully exploit an existing vulnerability, which in turn contributes to a breach event with associated losses. At face value this seems perfectly reasonable: in order for a vulnerability to be exploited by a threat actor, that actor must be talented enough to successfully perform the exploit. However, this logic falls apart on several levels, and the root of my problem with the FAIR TCAP variable runs slightly deeper, into risk assessments as a whole.
Most Risk Assessments haven’t kept up
Organizations at certain maturity levels that invest in risk assessments, and/or have regulatory requirements for them, know that the majority of firms recommended to perform those assessments lean toward the accounting/audit side of the house. I do not find anything specifically wrong with this in general. However, when you start looking at risk from a FAIR perspective and begin trying to gauge Threat Capability, a huge hole in this process becomes very evident.
Large enterprise organizations, whether on the consulting side of risk assessment or not, are very likely to have separate business units for Governance, Risk, and Compliance (GRC), which is where the risk assessment teams typically live. In many organizations, the technical security testing teams (e.g., pentest, red, and purple teams) do not live in the same GRC silo, and the majority of large organizations I've worked with have difficulty sharing data between their siloed business units. Yet the technical security teams responsible for simulating adversary activity are the ones best equipped to calculate TCAP; that is their job. Siloed risk assessment teams are likely just guessing.
If you’re good, is a breach event more or less likely?
Interpretation of the TCAP variable is the other issue I have with the vanilla FAIR framework. Let's say that your risk assessment report describes a scenario in which the Threat Capability variable is set to "High". What does that actually mean, and how do you interpret it? Does the fact that the potential attacker's skill level is high mean that the loss event is less likely to occur because there are fewer of those skilled individuals across the globe? This is how FAIR treats the TCAP variable. Or does it mean the loss event is more likely to occur because of the stronger skillset the threat agent may possess? That would depend on the actual complexity of the attack, which isn't part of the default FAIR risk calculator.
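To make the vanilla treatment concrete, here is a minimal sketch of the Open FAIR vulnerability derivation, where Vulnerability is the probability that Threat Capability exceeds Resistance Strength. The percentile bands, distributions, and all numbers below are my own illustration, not output from any official FAIR tool:

```python
import random

# Minimal Monte Carlo sketch: in Open FAIR, Vulnerability is the
# probability that Threat Capability (TCAP) exceeds Resistance
# Strength (RS), both expressed as percentiles of the overall threat
# population. Triangular distributions stand in for the PERT curves
# most FAIR tools use; every number here is illustrative.

def sample(lo: float, mode: float, hi: float) -> float:
    return random.triangular(lo, hi, mode)

def vulnerability(tcap: tuple, rs: tuple, trials: int = 100_000) -> float:
    hits = sum(sample(*tcap) > sample(*rs) for _ in range(trials))
    return hits / trials

# A "High" TCAP threat community against a moderately strong control:
tcap_high = (70, 85, 98)     # min / most likely / max percentile capability
rs_moderate = (50, 65, 80)   # control resists roughly 65% of actors

print(f"Vulnerability: {vulnerability(tcap_high, rs_moderate):.0%}")
```

Triangular sampling is just a convenient stand-in here; the point is the structure of the comparison: TCAP enters the calculation only as a head-to-head contest against the control's strength.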
Given the situation at hand, without any additional data and/or variables, FAIR has it right: treat the breach event as less likely, given that there may not be a large population of actors at that skill level. However, there are a couple of additional variables that must factor into an assessment of risk, and FAIR is not taking them into consideration. The two variables that I strongly believe need to be added to the Loss Event Frequency side of the FAIR risk assessment framework are Attack Complexity and Motivation.
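As a hedged sketch of what that extension might look like, the two variables can be applied as multipliers on the frequency side of the calculation. The factor tables, names, and numbers below are my own illustration and not part of FAIR:

```python
# Illustrative-only modifiers for the two proposed variables. A more
# complex attack shrinks the capable population; a more motivated
# actor attempts the attack more often and persists longer.

COMPLEXITY_FACTOR = {
    "low": 1.0,      # public, automated exploit
    "medium": 0.4,
    "high": 0.1,     # custom exploit chain, deep expertise required
}

MOTIVATION_FACTOR = {
    "low": 0.25,     # opportunistic, moves on quickly
    "medium": 1.0,
    "high": 2.0,     # determined actor keeps trying
}

def adjusted_lef(tef: float, vuln: float,
                 complexity: str, motivation: str) -> float:
    """Loss Event Frequency = Threat Event Frequency x Vulnerability,
    scaled by the proposed Attack Complexity and Motivation variables."""
    return (tef
            * COMPLEXITY_FACTOR[complexity]
            * MOTIVATION_FACTOR[motivation]
            * vuln)

# 12 attempts/year, 45% vulnerable, complex attack, motivated actor:
print(f"{adjusted_lef(12, 0.45, 'high', 'high'):.2f} loss events/year")
```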
Threat Actor Motive and Motivation (Level)
It is important to distinguish between adversary Motive and Motivation Level. Motive itself is more of a classification of the different reasons for "bad" behavior. I'm a fan of the Structured Threat Information eXpression (STIX) [2] language for cyber threat intelligence. The framework is in flux and currently has some very different adversary motive definitions between its version 1.1 and 2.0 drafts. Setting those differences aside (which leaves an exercise for you, dear reader: determine which of the STIX-defined adversary motives you prefer, if any), some examples of adversary motives include "opportunistic", "ideology", "financial/organized crime", and "organizational gain".
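For reference, STIX 2.0 carries motive on the Threat Actor object via its motivation fields. Below is a minimal sketch written as a Python dict mirroring the JSON serialization; the identifier and timestamps are made up, and the motivation values are drawn from the spec's attack-motivation open vocabulary:

```python
import json

# A minimal STIX 2.0 Threat Actor object expressed as a Python dict
# (STIX itself is JSON). The id and timestamps are invented for
# illustration only.

threat_actor = {
    "type": "threat-actor",
    "id": "threat-actor--8e2e2d2b-17d4-4cbf-938f-98ee46b3cd3f",
    "created": "2018-05-01T00:00:00.000Z",
    "modified": "2018-05-01T00:00:00.000Z",
    "name": "Example Crime Syndicate",
    "labels": ["crime-syndicate"],
    "primary_motivation": "organizational-gain",
    "secondary_motivations": ["personal-gain"],
}

print(json.dumps(threat_actor, indent=2))
```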
Motivation level is the "how much do I really want your stuff" ranking. This variable is, in my mind, one of the most underrated risk variables around. Motivation level is directly dependent on a number of standard business factors, such as time, effort, and cost, so much so that it makes complete sense to look at this variable in terms of adversary Return on Investment [3]. Data theft is a business, and every business has a budget. If your security controls can delay a threat actor long enough that their actions no longer turn a profit, there is a good probability they will move on to find lower-hanging fruit. Obviously, there are exceptions to this equation. But history continues to show us that spending money to secure against and prevent low-probability/high-impact events does not, in the end, stop a highly motivated actor from achieving their intended business outcomes.
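As a back-of-the-napkin illustration of that budget argument (every figure below is invented), consider how control-imposed delay flips the attacker's return on investment:

```python
# Toy adversary ROI model: profit relative to the cost of mounting
# the attack. All figures are invented for illustration.

def attacker_roi(payoff: float, hourly_rate: float,
                 hours: float, tooling: float) -> float:
    cost = hourly_rate * hours + tooling
    return (payoff - cost) / cost

# Controls that stretch a 40-hour intrusion into a 400-hour one can
# push ROI negative -- and push the actor toward lower-hanging fruit.
for hours in (40, 400):
    roi = attacker_roi(payoff=25_000, hourly_rate=75,
                       hours=hours, tooling=2_000)
    print(f"{hours:>3} hours of effort -> ROI {roi:+.0%}")
```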
Using Advanced Technical Intelligence for Attack Complexity
On the technical testing side, strong adversary simulation teams will be able to provide an Attack Complexity metric that gauges how difficult it was to successfully complete a given attack path. The higher the complexity of the attack, and the greater the expertise required to complete it, the less likely a loss event becomes, because the talent pool capable of executing that attack is substantially smaller than the talent pool required to, say, load a Metasploit module and type run. Conversely, the more public and automated an exploit is, the greater the likelihood of successful exploitation.
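One way a simulation team might roll such a metric up (a sketch only; the rubric, step names, and weights are invented, not a FAIR or TrustedSec standard) is to grade each step of the tested attack path and let the hardest mandatory step gate the whole path:

```python
# Invented rubric: grade each step of a tested attack path by the
# expertise it demanded, then rate the path by its hardest step,
# since every link in the chain has to be executed.

STEP_DIFFICULTY = {
    "public-metasploit-module": 1,  # load the module and type "run"
    "modified-public-exploit": 2,
    "custom-tooling-required": 3,
    "novel-exploit-chain": 4,
}

RATING = {1: "low", 2: "medium", 3: "high", 4: "very high"}

def attack_complexity(steps: list) -> str:
    return RATING[max(STEP_DIFFICULTY[s] for s in steps)]

path = ["public-metasploit-module", "custom-tooling-required"]
print(attack_complexity(path))  # -> "high"
```

Gating on the hardest step keeps the rating honest: an attack path is only as accessible as its most demanding link, no matter how trivial the rest of the chain is.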
As technology advances, traditional risk assessments that fail to incorporate the valuable contributions of researchers, penetration testers, and the advanced security community are becoming less valuable. There is a clear need to use threat intelligence for adversary analysis and attack intelligence for adversary simulation; otherwise, traditional risk assessments will not meet the needs of their organizations.
References
1. https://theartofservicelab.s3.amazonaws.com/All%20Toolkits/The%20Information%20risk%20management%20Toolkit/Act%20-%20Recommended%20Reading/Risk%20Management%20Insight.pdf
2. https://oasis-open.github.io/cti-documentation/
3. https://www.rsaconference.com/writable/presentations/file_upload/grc-303_corman_etue.pdf