Improving Return on Security Investment: Threat Modeling and The Open Group Open FAIR™ Risk Analysis as a KPI for Agile Projects

Post 4

Simone Curzi, Principal Consultant, Microsoft

John Linford, Security Portfolio Forum Director, The Open Group

Dan Riley, Vice President & Distinguished Engineer, Data Science, Kyndryl

Ken St. Cyr, Sr. Cybersecurity Architect, Microsoft

The first three posts of this series have made plain the need to supplement ongoing threat modeling activities with quantitative risk analysis, such as the process described in The Open Group Open FAIR™ Body of Knowledge. They briefly discussed a way to incorporate Open FAIR Risk Analysis into the threat modeling process and illustrated how the results would improve return on security investment by deliberately selecting cost-effective combinations of controls. But questions remain:

  • How can quantitative risk analysis keep up with agile projects and systems?
  • How do we know whether we are doing a good job and seeing improvements?

Nowadays, many projects adopt an Agile methodology such as Scrum. The idea behind these approaches is to build the solution through an iterative process in which each iteration introduces more capabilities and refines the work of the previous ones. That way, it is much easier to get feedback from users early on and change the direction of the project, if necessary, without having to wait months or years to see the benefits.

Such approaches have enabled businesses to grow much faster and to respond dynamically to changes in their environment. The success of the model is causing organizations to rely even more on Agile methodologies to accelerate their business. This has increased the push to further accelerate the development process, which often means cutting the time spent on traditional non-functional requirements, like those linked to security. For this reason, many organizations now address security only through automated Static Application Security Testing (SAST) tools or through validation activities like penetration testing. That is something, but it means that solutions are becoming less and less secure, because significant numbers of high severity issues remain undetected, leaving plenty of opportunities for attackers.

This situation is bound to get worse, because security features have little to no visible value for the business until they are needed and used. But how do we change this perception?

The best way to address this situation is to make the value of security visible early on during the development lifecycle.

As discussed in the second post of the series, we can calculate the overall cost we may face due to potential attacks on a system by threat modeling it and then sampling the identified threats to determine the expected losses for the system as a whole. This enables us to perform a threat model after the Nth iteration, apply the methodology introduced in the second post, and estimate that we may face an overall loss for the identified threats that falls within a given range.

Such an analysis would consider the design of the system as defined during the Nth iteration, with all the functional capabilities and security controls designed at that point in time. If you stopped developing the system there, the resulting range would be the loss estimation for the following year.
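To make this more concrete, here is a minimal sketch of what the sampling step might look like, written in Python. It assumes that each threat in the model has been given calibrated minimum, most likely, and maximum estimates for Loss Event Frequency and Loss Magnitude; the threat names, the numbers, and the use of simple triangular distributions are illustrative assumptions, not part of the Open FAIR Standard or of the exact method described in the second post.

```python
import random
import statistics

# Hypothetical threats from a threat model. Each has calibrated (min, most
# likely, max) estimates for Loss Event Frequency (events/year) and Loss
# Magnitude (USD per event). Names and figures are purely illustrative.
threats = [
    {"name": "Credential stuffing on login API",  "lef": (0.5, 2, 6),   "lm": (20_000, 80_000, 400_000)},
    {"name": "SQL injection in reporting module", "lef": (0.1, 0.5, 2), "lm": (100_000, 500_000, 2_000_000)},
    {"name": "Misconfigured storage account",     "lef": (0.2, 1, 3),   "lm": (50_000, 250_000, 1_500_000)},
]

def simulate_annual_loss(threats, runs=10_000, seed=1):
    """Monte Carlo estimate of total annualized loss across all threats."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        total = 0.0
        for t in threats:
            lef_min, lef_mode, lef_max = t["lef"]
            lm_min, lm_mode, lm_max = t["lm"]
            # Draw event frequency and per-event magnitude for this run.
            events = rng.triangular(lef_min, lef_max, lef_mode)
            loss_per_event = rng.triangular(lm_min, lm_max, lm_mode)
            total += events * loss_per_event
        totals.append(total)
    totals.sort()
    return {"p10": totals[int(0.10 * runs)],
            "median": statistics.median(totals),
            "p90": totals[int(0.90 * runs)]}

print(simulate_annual_loss(threats))  # the estimated loss range for iteration N
```

The p10–p90 spread of the simulated totals is the kind of "given range" referred to above; a real analysis would typically use better-suited distributions (such as BetaPERT) and many more threats.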

If we perform such an analysis only once, we can determine whether the current risk is acceptable for the Organization, which is already useful. Unfortunately, that information may not help the development team much. They would want to know whether the situation is normal at that stage of the project, or whether they need to do something to address it.

To address this need, we must consider how the situation changes over time.

As already discussed, iterative projects are characterized by the gradual introduction of the required functionality. This means that each iteration naturally extends the attack surface: more functions and more code give attackers more opportunities to compromise your system. If you do nothing about security, overall risk will increase quickly. If you focus on security, you can contain this growth. Ideally, you should be able to reduce the risk over time by implementing security controls in parallel with the corresponding functional capabilities.

It should be clear at this point that the best way to make the value of security visible is to repeat the Open FAIR analysis over updated threat models at regular intervals and then compare the results. This means you should produce a first threat model as soon as possible. It is typically possible to do so around the second iteration, when some ideas about the system have been defined but everything is still in flux. You would then refresh and refine the threat model every iteration or every other iteration, depending on the case, and follow it with the Open FAIR analysis.

This approach will allow you to get a diagram like the following one.

Figure 1 – A possible result of the Threat Modeling + Open FAIR analysis performed on a system under development, over three consecutive iterations

The diagram above shows a situation where each iteration has a potential loss estimation that is worse than the previous one. In this situation, we could say that the team is not prioritizing security adequately, because the ranges of potential loss are trending upward.

If the situation is more or less stable, as in the following chart, we could say that it is under control and that the team is doing a good job, because it keeps the potential losses at the same level even as the complexity and attack surface of the system increase.

Figure 2 – Another chart showing the results of the Threat Modeling + Open FAIR analysis performed on a system under development, over three consecutive iterations.

A declining trend would represent the ideal case, where the team does a great job and is able to reduce overall risk despite the increasing complexity.
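To turn those charts into a KPI a team can track from sprint to sprint, the per-iteration results can be reduced to a simple trend indicator. The sketch below is a hypothetical illustration in Python: it assumes each iteration's Open FAIR analysis has been summarized by its 10th percentile, median, and 90th percentile annualized loss, and the figures and the 10% tolerance band are invented for the example.

```python
# A minimal sketch of how per-iteration results could be tracked as a KPI.
# The iteration data and the 10% tolerance are illustrative choices, not
# prescribed by Open FAIR or by the threat modeling process itself.
iterations = [
    {"iteration": 2, "p10": 150_000, "median": 400_000, "p90": 1_200_000},
    {"iteration": 3, "p10": 180_000, "median": 430_000, "p90": 1_250_000},
    {"iteration": 4, "p10": 170_000, "median": 410_000, "p90": 1_180_000},
]

def loss_trend(results, tolerance=0.10):
    """Classify the trend of median annualized loss across iterations."""
    first, last = results[0]["median"], results[-1]["median"]
    change = (last - first) / first
    if change > tolerance:
        return f"worsening (+{change:.0%}): security is not keeping pace with new features"
    if change < -tolerance:
        return f"improving ({change:.0%}): controls are reducing overall risk"
    return f"stable ({change:+.0%}): losses held level despite a growing attack surface"

print(loss_trend(iterations))
```

Which percentile you track and what tolerance counts as "stable" are choices each organization should make for itself.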

This approach provides two advantages over the approaches commonly used today. First, the teams have a clear understanding of what they need to do. Repeating Open FAIR analyses as discussed gives them a clear and intuitive indication of how well they are securing the solution. It even provides a measure of the severity of the situation, because the numbers this approach produces correspond to the estimated potential losses the organization may face due to the system under development.

Second, business decision makers have a clear reference they can use to determine what to do. For example, product owners can more easily decide which controls to implement. A million-dollar investment to implement a security control may sound like too much for most organizations; but if you can demonstrate that it would reduce the potential losses by around $50,000,000, it suddenly becomes a possibility.
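As a back-of-the-envelope illustration of that trade-off, the figures from the example can be plugged into a simple return-on-security-investment calculation. The before-and-after loss figures below are assumptions chosen only to produce the $50,000,000 reduction mentioned above; the point is the shape of the calculation, not the specific numbers.

```python
# Hypothetical figures; only the $1M control cost and the ~$50M reduction
# come from the example in the text.
control_cost = 1_000_000      # cost of implementing the security control
loss_before = 60_000_000      # estimated annualized loss without the control (assumed)
loss_after = 10_000_000       # estimated annualized loss with the control (assumed)

loss_reduction = loss_before - loss_after                # $50,000,000
rosi = (loss_reduction - control_cost) / control_cost    # return on security investment
print(f"Loss reduction: ${loss_reduction:,}; ROSI: {rosi:.0%}")  # ROSI: 4900%
```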

In general, quantitative risk analysis simply provides clear, defensible data for a broader decision. Security is instrumental to effective risk management, and risk can only be managed effectively when decisions are well informed. The ideas discussed in this series help democratize quantitative risk analysis, potentially extending its application to every system we develop.

The Open FAIR Body of Knowledge provides this risk model, and threat modeling is a widely recognized practice for analyzing an application or IT infrastructure, identifying how it may be compromised, and defining an effective approach to mitigating the potential losses. The combination of quantitative risk analysis with threat modeling practices has the potential to revolutionize how we do security, transforming it from the bitter pill you must take into a tool for optimizing the balance of business costs and risks.

The Open Group Security Forum Using Quantitative Analysis in System Threat Modeling Project is actively seeking additional participants to help develop and refine these ideas. All Silver and Academic Members of the Security Forum as well as all Gold and Platinum Members of The Open Group are entitled and welcome to participate. To learn more about joining the project or The Open Group Security Forum, please contact Forum Director John Linford at j.linford@opengroup.org.

Simone has been working in Microsoft Services for more than 20 years. For the last 10 years, he has worked mostly on Secure Architecture, Application Security, and Threat Modeling. Application Security has been his main area of interest since before he joined Microsoft: Simone was hired by Microsoft on the strength of a set of articles he wrote on Security, and specifically on Cryptography, between 1998 and 1999.

John Linford is Forum Director of The Open Group Security Forum, known for the Open FAIR™ Risk Analysis Standard and work around Security and Zero Trust Architecture. He is also Forum Director of The Open Group Open Trusted Technology Forum (OTTF), known for the Open Trusted Technology Provider™ Standard (O-TTPS) and the Open Certified Trusted Technology Practitioner Profession (Open CTTP). John holds Master’s and Bachelor’s degrees from San Jose State University, and is based in the US.

Dan Riley is a Distinguished Engineer and Vice President, Data Science at Kyndryl, where he applies Data Science to the cybersecurity domain and sponsors the Data Science profession. He also chairs The Open Group Open Data Scientist Work and is a Steering Committee Member of the Risk Forum. Dan is an Open Certified Distinguished Data Scientist and is The Open Group Certified: Open FAIR™ Foundation. He earned a Bachelor of Science from Purdue University in Interdisciplinary Mathematics and Statistics: Actuarial Science Option.

Ken St. Cyr has been an architect and senior engineer in the Microsoft technologies industry since 1997. His proven track record of results has been demonstrated in multiple globally recognized efforts. Ken’s expertise includes directory services architecture, email systems architecture, information systems consolidation, and enterprise strategy and technology roadmap development. In addition, Ken has worked across boundaries to define and meet requirements and SLAs, and has defined operational strategies to support the technologies he has designed.