TrustedSec - Ensuring Risk Assessments have a (Business) Impact

Risk is a term that gets thrown around quite a bit, and like its distant cousin “pentest”, it has a tendency to be used to describe many very different things.

There are many “standard” Risk formulas out in the world today that typically include some combination of the terms Asset, Threat, and Vulnerability. Some of these formulas are taught by very reputable learning institutions such as SANS/GIAC and ISC2. There are a number of guideline frameworks out there intended to define the expected requirements of both risk assessments and entire risk management programs. Risk itself can be calculated quantitatively, qualitatively, and even semi-quantitatively.
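
To make that concrete, here is a minimal sketch of how such an Asset/Threat/Vulnerability formula is often operationalized in a qualitative assessment. The three-point scale, the multiplication, and the rating thresholds are illustrative assumptions, not values prescribed by any particular framework.

```python
# A minimal sketch of the classic qualitative calculation: likelihood is
# derived from threat and vulnerability ratings, then combined with asset
# impact. The three-point scale and the rating thresholds are illustrative
# assumptions, not values prescribed by any framework.

QUALITATIVE_SCALE = {"low": 1, "medium": 2, "high": 3}

def qualitative_risk(threat: str, vulnerability: str, asset_impact: str) -> str:
    """Combine threat, vulnerability, and asset impact into a coarse risk rating."""
    likelihood = QUALITATIVE_SCALE[threat] * QUALITATIVE_SCALE[vulnerability]
    score = likelihood * QUALITATIVE_SCALE[asset_impact]
    if score >= 18:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(qualitative_risk("high", "medium", "high"))  # -> high
```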

“We can rebuild him; we have the technology”

I’ve been playing with different risk formulas, frameworks, and calculators for over two decades and, to be clear before moving on, I do not have a tribal affinity to any one method over another. The methods that have prospered in the real world have done so because they are at least good at what they were intended to do: communicate risks. But they each have strengths and weaknesses. So the objective here is to come up with a hybrid methodology of sorts, one that best accomplishes the most important reason risk is calculated in the first place: effectively communicating to the business the ramifications of its business choices so it can choose to take as much risk as needed to be as innovative and competitive in its markets as possible.

Current Frameworks

At a high level there are a number of risk assessment frameworks and guidelines, including, but not limited to, ISO 31000:2018 [1] for enterprise risk, ISO 27005 [2] for security risk, NIST 800-30 [3] (a sub-component of the NIST risk management framework, 800-39), OCTAVE [4], and FAIR [5]. ISO, NIST, and OCTAVE each spell out best-practice components and associated requirements for an organization’s risk assessment program, as well as individual process flows that typically include steps such as identifying assets, identifying vulnerabilities and threats, and identifying and mitigating risks. FAIR is a bit different from the other frameworks in that it not only presents a process flow for a risk assessment, it also serves as a quantitative risk calculator. What I like about FAIR is its use of Threat Event Frequency (likelihood) and Probable Loss Magnitude (impact) in its final risk calculations.
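
As a rough illustration of that frequency-times-magnitude idea, the sketch below reduces the calculation to a single point estimate. Real FAIR analyses derive Loss Event Frequency from Threat Event Frequency and vulnerability and work with calibrated ranges and Monte Carlo simulation, so treat this strictly as a simplification with placeholder numbers.

```python
# A simplified, FAIR-flavored point estimate: annualized risk as loss event
# frequency (events per year) multiplied by probable loss magnitude (dollars
# per event). The figures are placeholders.

def annualized_loss_exposure(loss_event_frequency: float,
                             probable_loss_magnitude: float) -> float:
    """Expected loss in dollars per year."""
    return loss_event_frequency * probable_loss_magnitude

# e.g., an event expected roughly once every two years, costing ~$250,000
print(f"${annualized_loss_exposure(0.5, 250_000):,.0f} per year")  # $125,000 per year
```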

There is a reason this explanation of the frameworks is a little light on content: there are gobs of papers and presentations on them out there already that I highly recommend reviewing. The base issue I have historically had with the majority of these frameworks is their application and perceived value to the business. A pie chart with percentages of high/medium/low risk ratings goes about as far as a pie chart of high/medium/low vulnerability rankings in an executive board meeting. The quantitative-based FAIR starts speaking the right executive language, but the added time investment in many instances isn’t necessarily justified. And while most of the frameworks have a significant amount of overlap, there are also plenty of gaps between them.

What, Who and How

The initial challenge in assembling a hybrid risk assessment standard was dumbing it down while still providing value to the business, and ensuring that the framework would also satisfy the risk assessment requirements of the relevant regulatory frameworks. What ended up making the most sense was creating a conversation that moves through these high-level subjects:

  • “What is your business’ important stuff?”
  • “Who is out there motivated/talented enough to steal/disrupt your stuff?”
  • “How can that stuff be stolen/disrupted?”

The devil is in the details, but this simple conversation ends up covering all of the variable components required across the majority of the frameworks. The result is a solid risk derivation that includes the organization’s financial loss threshold, based on its financial metrics, as well as our additional variables of Motivation Level and Attack Complexity, which are missing from most of the current frameworks, FAIR in particular. We’ll be discussing these variables in a follow-on blog.

The conversation that occurs in the “What” phase is important in both qualitative and quantitative risk assessments, and it enables a very easy transition from qualitative to semi-quantitative without the added overhead that most quantitative assessments require.

The “How” portion of this conversation is all about some level of technical adversary simulation (Pentest/Red/Purple, etc.). However, when applied to a risk assessment, the real value of that engagement is directed by data gathered in the “What” and “Who” portions: an understanding of the organization’s top critical systems and the associated asset inventory (targets), the actor groups currently active against the organization’s vertical (adversary motivation), and the tactics and specific techniques these groups are actively using in their documented (and discovered) campaigns.

It is important to note that attribution is hard. Very hard. There are many very talented individuals and organizations that attempt to document and track these actor groups and profile their activities, but it is still a dark art, much like the cyber-insurance world, where there isn’t yet enough historical data to adequately and accurately cover organizations against loss from information breaches and/or disruption. Your mileage is going to vary.

The MITRE ATT&CK [6] project is an excellent starting point for researching some of the known actor groups, the verticals they target, and some of their known techniques. The 2017 and 2018 Verizon Data Breach Investigations Reports [7] provide some great analysis broken down by vertical that can also be used when profiling the likely motivated actors.
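
As an example of what the hand-off described in the next paragraph might look like, the sketch below packages the “What” and “Who” results into a simple profile keyed to ATT&CK group and technique IDs. The structure and every value in it are hypothetical placeholders, not an actual threat intelligence assessment.

```python
# A hypothetical way to package "What"/"Who" results for the attack team,
# keyed to MITRE ATT&CK group and technique IDs. Every value here is an
# illustrative placeholder, not a real assessment of any organization.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AdversaryProfile:
    group_id: str                                         # ATT&CK group ID
    name: str
    targeted_verticals: List[str] = field(default_factory=list)
    techniques: List[str] = field(default_factory=list)   # ATT&CK technique IDs

@dataclass
class EngagementScope:
    critical_assets: List[str]                  # from the "What" conversation
    likely_adversaries: List[AdversaryProfile]  # from the "Who" conversation

scope = EngagementScope(
    critical_assets=["payment processing cluster", "customer PII database"],
    likely_adversaries=[
        AdversaryProfile(
            group_id="G0007",
            name="APT28",
            targeted_verticals=["government", "defense"],
            techniques=["T1566", "T1078"],  # phishing, valid accounts
        ),
    ],
)

for adversary in scope.likely_adversaries:
    print(f"{adversary.name}: test {adversary.techniques} against {scope.critical_assets}")
```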

If your Risk Assessment team is different from your technical Adversary Simulation team, the target and technique data can now be handed off to the attack team to ensure that their simulation targets the critical components of the business with the techniques of known actor groups, adding another layer of value to the risk assessment. Once the attack team has completed its testing, we have the “How” portion of the assessment and, more importantly for the risk assessment, an additional risk variable in the form of Attack Complexity (which I’ll discuss in the follow-on blog) that is intrinsically tied to Adversary Sophistication. Both variables need to be considered in the final risk calculations.
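
To illustrate how those added variables could feed the math, here is a hedged sketch in which Motivation Level and Adversary Sophistication push likelihood up while Attack Complexity pushes it down. The ordinal scales and weighting are illustrative assumptions, not the exact calculator covered in the follow-on blog.

```python
# A hedged sketch of how Motivation Level, Adversary Sophistication, and
# Attack Complexity could modulate a base loss event frequency: motivated,
# sophisticated actors facing a low-complexity attack path increase the
# likelihood, while a high-complexity path against an unsophisticated actor
# decreases it. Scales and weighting are illustrative assumptions.

SCALE = {"low": 1, "medium": 2, "high": 3}

def adjusted_frequency(base_frequency: float, motivation: str,
                       sophistication: str, attack_complexity: str) -> float:
    """Scale a base loss event frequency by adversary and attack-path factors."""
    modifier = (SCALE[motivation] * SCALE[sophistication]) / (SCALE[attack_complexity] * 3)
    return base_frequency * modifier

print(adjusted_frequency(0.5, "high", "high", "low"))  # 0.5 * (9 / 3) = 1.5
print(adjusted_frequency(0.5, "low", "low", "high"))   # 0.5 * (1 / 9) ~= 0.06
```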

Risk Analysis

With the “What”, “Who”, and “How” phases complete, we can now run the results through our risk calculators to determine the final likelihood and business impact, which together constitute risk. We ended up going with a modified version of the FAIR calculator for qualitative and semi-quantitative assessments in addition to quantitative assessments, which is what FAIR was originally designed for. Having the same underlying calculator for all three types of risk assessments improved our efficiency while providing added value to the executive teams, especially in the semi-quantitative assessment, where we illustrate not only current risk levels but also post-mitigation risk levels based on the adversary simulation’s mitigation recommendations. The data-gathering differences only appear in the “What” phase, where different questionnaires are used to calculate quantitative and qualitative loss magnitude values to plug into the final calculator.
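
A minimal sketch of that current vs. post-mitigation comparison, reusing the frequency-times-magnitude calculation from earlier with placeholder figures:

```python
# The same FAIR-style calculation run twice: once with observed inputs and
# once with the loss event frequency expected after the adversary
# simulation's mitigation recommendations are applied. All figures are
# placeholders, not data from a real engagement.

def annualized_exposure(loss_event_frequency: float, loss_magnitude: float) -> float:
    return loss_event_frequency * loss_magnitude

current = annualized_exposure(loss_event_frequency=1.2, loss_magnitude=400_000)
post_mitigation = annualized_exposure(loss_event_frequency=0.3, loss_magnitude=400_000)

print(f"Current exposure:         ${current:,.0f}/year")          # $480,000/year
print(f"Post-mitigation exposure: ${post_mitigation:,.0f}/year")  # $120,000/year
```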

We found a need to incorporate added variables (Motivation Level, Adversary Sophistication, Attack Complexity) into a hybrid mix of several risk frameworks. These variables contribute to better Loss Event Frequency results and yield more accurate risk results, which in turn gives organizations a better view into their actual risk landscape and allows them to make more accurate, risk-appropriate business decisions. The ability to innovate is directly tied to taking risks, and a more correct risk assessment naturally gives a business the ability to decide which risks are worth taking, in addition to helping chip away at the age-old notion that security hinders business growth.

 

References

  1. https://www.iso.org/standard/65694.html
  2. https://www.iso.org/standard/56742.html
  3. https://csrc.nist.gov/publications/detail/sp/800-30/archive/2002-07-01
  4. https://resources.sei.cmu.edu/library/asset-view.cfm?assetID=51546
  5. https://theartofservicelab.s3.amazonaws.com/All%20Toolkits/The%20Information%20risk%20management%20Toolkit/Act%20-%20Recommended%20Reading/Risk%20Management%20Insight.pdf
  6. https://attack.mitre.org/wiki/Main_Page
  7. https://www.verizonenterprise.com/verizon-insights-lab/dbir/

 

