Updates to the Open FAIR™ Body of Knowledge, Part 2

By John Linford, Forum Director, The Open Group Security Forum and Open Trusted Technology Forum

The Open Group Security Forum is thrilled to announce the publication of an update to the Open FAIR™ Body of Knowledge (BoK). The Open FAIR BoK comprises The Open Group Risk Taxonomy (O-RT) Standard and The Open Group Risk Analysis (O-RA) Standard. The Open Group initiated a standards effort around FAIR roughly ten years ago, and these standards provide the official, open, vendor-neutral, and consensus-developed definition of FAIR.

The update to the Open FAIR BoK brings O-RA to Version 2.0 and O-RT to Version 3.0. O-RT was the document originally brought into The Open Group Security Forum; O-RA was created afterward, which in turn led to O-RT being updated to Version 2.0. As a result, there were several discrepancies and much redundancy between the documents. This time, the Security Forum made a concerted effort to update the documents side by side, removing discrepancies and eliminating redundancy as far as possible.

Although this update to the Open FAIR BoK brings both O-RA and O-RT to new versions, the content of neither document changed substantially; rather, the documents were restructured to introduce and describe the existing content more clearly.

This blog post is the second of three in a series describing updates to the Open FAIR™ Body of Knowledge. It covers the specific updates that bring O-RA to Version 2.0. The first post described the revisions made to both O-RA and O-RT for consistency between the documents. The third post will describe the specific updates that bring O-RT to Version 3.0.

Updates to The Open Group Risk Analysis (O-RA) Standard

The Open FAIR BoK Update Project Working Group made a deliberate effort to present the information in O-RA more logically. In Section 4: Risk Measurement: Modeling and Estimate, the ideas of accuracy and precision are now presented before the concepts of subjectivity and objectivity, and the section ends with the concepts of estimates and calibration. O-RA now also emphasizes making estimates that are usefully precise; in other words, an estimate is usefully precise if additional precision would not improve or change the decision being made with the information.
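To make the idea of a usefully precise estimate concrete, here is a minimal Python sketch (not taken from the standard; the values and the simple decision rule are invented for illustration). A wide and a narrow calibrated estimate of annualized loss are compared against the same decision: because both lead to the same conclusion, the wider estimate is already usefully precise.

# Illustrative only: compare a wide and a narrow estimate against the same decision.
import numpy as np

rng = np.random.default_rng(0)
control_cost = 100_000  # assumed decision rule: implement the control if expected annual loss exceeds its cost

wide_estimate = rng.triangular(150_000, 400_000, 1_000_000, 10_000)    # less precise range
narrow_estimate = rng.triangular(300_000, 400_000, 550_000, 10_000)    # more precise range

for name, estimate in [("wide", wide_estimate), ("narrow", narrow_estimate)]:
    decision = "implement control" if estimate.mean() > control_cost else "accept risk"
    print(f"{name:>6} estimate -> mean ${estimate.mean():,.0f} -> {decision}")

# Both estimates support the same decision, so the wider estimate is already usefully precise.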

The concept of “Confidence Level in the Most Likely Value” as a parameter for modeling estimates has been removed in bringing O-RA to Version 2.0. It has been replaced by the analyst’s choice of the distribution that best represents what they know about the risk factor being modeled; Open FAIR itself remains agnostic on the distribution type used.
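As a rough illustration of what that looks like in practice, the sketch below (again in Python, with made-up numbers) turns a calibrated minimum/most likely/maximum estimate into samples from a chosen distribution. The triangular distribution is simply one convenient choice; a Beta-PERT, lognormal, or other distribution could be substituted, since Open FAIR does not prescribe a distribution type.

# Illustrative only: Open FAIR does not mandate a distribution type.
import numpy as np

rng = np.random.default_rng(42)

def sample_estimate(minimum, most_likely, maximum, size=10_000):
    # Sample a calibrated (min, most likely, max) range using a triangular distribution;
    # swap in whichever distribution best represents what the analyst knows.
    return rng.triangular(minimum, most_likely, maximum, size)

# Example: Threat Event Frequency estimated at 2-10 events/year, most likely 4 (assumed values)
tef_samples = sample_estimate(2, 4, 10)
print(f"Mean Threat Event Frequency: {tef_samples.mean():.2f} events/year")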

O-RA Version 2.0 also takes inspiration from the Open FAIR™ Risk Analysis Process Guide to better define how to perform an Open FAIR risk analysis in Section 5: Risk Analysis Process and Methodology. To do this, O-RA specifies that a risk analyst must first scope the analysis by identifying a Loss Scenario (Stage 1). The Loss Scenario is the story of loss, expressed as a sentence from the perspective of the Primary Stakeholder. The Loss Scenario should include the Primary Stakeholder, the Asset, the Threat Agent/Community (identifiable using the common characteristics included in the standard), the Threat Event (including the type of Threat Event), and the Loss Event.

Figure 1: Open FAIR Loss Scenario
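For readers who find it helpful to see the scoping elements laid out, the hypothetical sketch below records the components of a Loss Scenario as a simple data structure. The field names and the example scenario are illustrative, not prescribed by O-RA.

# Hypothetical sketch: the Stage 1 scoping elements captured as a data structure.
from dataclasses import dataclass

@dataclass
class LossScenario:
    primary_stakeholder: str   # whose loss is being analyzed
    asset: str                 # what the Primary Stakeholder values and could lose
    threat_community: str      # the Threat Agent/Community acting against the Asset
    threat_event: str          # the action taken against the Asset
    threat_event_type: str     # e.g., malicious, error, failure (assumed labels)
    loss_event: str            # the resulting loss to the Primary Stakeholder

# Example, phrased as a sentence from the Primary Stakeholder's perspective:
# "As the online retailer, I lose the confidentiality of customer records when
#  external cyber criminals breach the e-commerce database."
scenario = LossScenario(
    primary_stakeholder="Online retailer",
    asset="Customer records in the e-commerce database",
    threat_community="External cyber criminals",
    threat_event="Attempted breach of the e-commerce database",
    threat_event_type="Malicious",
    loss_event="Confidentiality loss of customer records",
)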

Having defined the Loss Scenario, the risk analyst can then evaluate Loss Event Frequency (Stage 2), using a top-down approach. In other words, the risk analyst does not need to find estimates for Contact Frequency and Probability of Action unless one or both of those risk factors will be affected by a control being considered or the decision-maker requests that information.

Figure 2: Evaluate the Loss Event Frequency
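The short sketch below illustrates the top-down idea with invented numbers: Loss Event Frequency is estimated directly, and the decomposition into Threat Event Frequency and Vulnerability (or further into Contact Frequency and Probability of Action) is only used when a control or the decision-maker's question requires it. The distributions and values are assumptions for the example, not figures from the standard.

# Illustrative sketch of Stage 2: top-down estimation of Loss Event Frequency (LEF).
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Top-down: a direct calibrated estimate of LEF, in loss events per year
lef_direct = rng.triangular(0.1, 0.5, 2.0, N)

# Decompose only when needed: LEF = Threat Event Frequency x Vulnerability
tef = rng.triangular(1, 4, 12, N)                    # threat events per year
vulnerability = rng.triangular(0.05, 0.15, 0.4, N)   # probability a threat event becomes a loss event
lef_decomposed = tef * vulnerability

print(f"Direct LEF mean:     {lef_direct.mean():.2f} loss events/year")
print(f"Decomposed LEF mean: {lef_decomposed.mean():.2f} loss events/year")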

After estimating Loss Event Frequency, the risk analyst can evaluate the Loss Magnitude (Stage 3). To do this, the analyst finds estimates for the Forms of Loss from the Primary Loss and any Secondary Loss Events.

Previously, Loss Magnitude comprised Primary Loss and Secondary Loss, and the document attempted to discuss the concepts of Primary Risk and Secondary Risk. With the update to O-RA Version 2.0, the concepts of Primary Risk and Secondary Risk were removed, and the Open FAIR Taxonomy was updated so that Loss Magnitude comprises Primary Loss Magnitude and Secondary Loss. Primary Loss Magnitude is the direct consequence of a Loss Event, evaluated as the economic cost directly associated with the observed confidentiality, integrity, or availability loss of the Asset. Secondary Loss comprises the Secondary Loss Event Frequency, the conditional probability that a Primary Loss will result in a Secondary Loss (expressed as a probability, not as events/year), and the Secondary Loss Magnitude, the sum of those loss forms resulting from the reactions of Secondary Stakeholder(s) that cause additional loss(es) to the Primary Stakeholder.

Figure 3: Evaluate the Loss Magnitude
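Under the same caveat that the numbers and distributions are invented for illustration, the sketch below shows how Loss Magnitude per loss event could be simulated: Primary Loss Magnitude plus a Secondary Loss that occurs with the conditional probability given by the Secondary Loss Event Frequency.

# Illustrative sketch of Stage 3: Loss Magnitude = Primary Loss Magnitude + Secondary Loss.
import numpy as np

rng = np.random.default_rng(2)
N = 10_000

primary_loss = rng.triangular(50_000, 150_000, 500_000, N)       # direct cost of the loss event ($)
slef = 0.3                                                        # assumed P(secondary loss | primary loss)
secondary_loss = rng.triangular(100_000, 400_000, 2_000_000, N)   # e.g., fines, customer churn ($)

secondary_occurs = rng.random(N) < slef
loss_magnitude = primary_loss + np.where(secondary_occurs, secondary_loss, 0.0)

print(f"Mean Loss Magnitude per loss event: ${loss_magnitude.mean():,.0f}")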

Once the risk analyst has identified the Loss Scenario, evaluated Loss Event Frequency, and evaluated Loss Magnitude, they can derive and articulate risk (Stage 4). How the risk analyst articulates risk will depend on the information requested by the decision-maker. The update to O-RA incorporates several examples of ways to present results, whether the full distribution generated by the Monte Carlo analysis or a single-number summary, such as the average, the most likely value, a loss exceedance result, or the maximum/minimum simulated loss.

Figure 4: Decomposing a Loss Scenario
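Continuing the illustrative sketches above (all values assumed), Stage 4 can be pictured as combining the simulated Loss Event Frequency and Loss Magnitude into an annualized loss exposure and then summarizing the resulting distribution in whichever form the decision-maker has asked for, such as an average, a median, the simulated extremes, or a simple loss exceedance figure.

# Illustrative sketch of Stage 4: derive annualized loss exposure and summarize it.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000

lef = rng.triangular(0.1, 0.5, 2.0, N)                           # loss events per year
loss_magnitude = rng.triangular(50_000, 200_000, 1_500_000, N)   # $ per loss event
annualized_loss = lef * loss_magnitude                           # $ per year

threshold = 500_000  # assumed threshold for a loss exceedance statement
print(f"Average annualized loss: ${annualized_loss.mean():,.0f}")
print(f"Median annualized loss:  ${np.median(annualized_loss):,.0f}")
print(f"Minimum / maximum:       ${annualized_loss.min():,.0f} / ${annualized_loss.max():,.0f}")
print(f"P(annual loss > ${threshold:,}): {(annualized_loss > threshold).mean():.1%}")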

Depending on the purpose of the Open FAIR analysis, the analyst may also need to model the effect of controls (Stage 5). In updating O-RA to Version 2.0, the Working Group did not modify the control categories themselves; they remain avoidance controls, deterrent controls, vulnerability controls, and responsive controls, as shown below. However, two broader, higher-level categories were introduced: Loss Prevention Controls (which impact Loss Event Frequency) and Loss Mitigation Controls (which impact Loss Magnitude).

Figure 5: Model the Effect of Controls
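As a final sketch, again with made-up numbers, modeling the effect of controls amounts to re-estimating the affected factors: a Loss Prevention Control lowers Loss Event Frequency, while a Loss Mitigation Control lowers Loss Magnitude. The "with control" ranges below are assumed fresh estimates, not values derived by any formula in the standard.

# Illustrative sketch of Stage 5: compare risk with and without controls.
import numpy as np

rng = np.random.default_rng(4)
N = 10_000

# Baseline estimates (same illustrative values as the earlier sketches)
lef_baseline = rng.triangular(0.1, 0.5, 2.0, N)
lm_baseline = rng.triangular(50_000, 200_000, 1_500_000, N)

# Re-estimated factors with controls in place (assumed calibrated ranges)
lef_with_prevention = rng.triangular(0.05, 0.2, 1.0, N)            # Loss Prevention Control reduces LEF
lm_with_mitigation = rng.triangular(30_000, 120_000, 800_000, N)   # Loss Mitigation Control reduces LM

baseline_risk = lef_baseline * lm_baseline
controlled_risk = lef_with_prevention * lm_with_mitigation

print(f"Baseline mean annualized loss:   ${baseline_risk.mean():,.0f}")
print(f"Controlled mean annualized loss: ${controlled_risk.mean():,.0f}")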

The risk analyst might be doing an Open FAIR risk analysis to fit the risk analysis requirement of a risk assessment framework, such as the NIST Cybersecurity Framework. Therefore, O-RA Version 2.0 now includes a figure demonstrating how Open FAIR maps to the NIST CSF Five Functions.

Figure 6: Decomposing an Open FAIR Loss Scenario, including the Open FAIR Control Categories and the NIST CSF Five Functions

Finally, the updated version of O-RA concludes with a section on risk analysis quality considerations. Topics addressed include:

  • Documenting assumptions and rationale
  • Diminishing returns to gathering more data and/or estimating lower levels of the Open FAIR taxonomy
  • Capacity for Loss vs. Tolerance for Loss
  • Risk Qualifiers
  • Using Ordinal Scales for Analysis
  • Translating Quantitative Results into Qualitative Statements
  • Troubleshooting

Next Steps

With the updates to the Open FAIR™ Body of Knowledge, The Open Group will now ensure that the Open FAIR™ Conformance Requirements and Configuration Document are up to date before updating the Open FAIR Certification Program and Exam.

http://www.opengroup.org @theopengroup

John Linford is Forum Director of The Open Group Security Forum, known for the Open FAIR™ Risk Analysis Standard and work around Security and Zero Trust Architecture. He is also Forum Director of The Open Group Open Trusted Technology Forum (OTTF), known for the Open Trusted Technology Provider™ Standard (O-TTPS) and the Open Certified Trusted Technology Practitioner Profession (Open CTTP). John holds Master’s and Bachelor’s degrees from San Jose State University and is based in the US.