Tag Archives: risk management

The Open Group Philadelphia – Day Three Highlights

By Loren K. Baynes, Director, Global Marketing Communications at The Open Group.

We are winding down Day 3 and gearing up for the next two days of training and workshops.  Today’s subject areas included TOGAF®, ArchiMate®, Risk Management, Innovation Management, Open Platform 3.0™ and Future Trends.

The objective of the Future Trends session was to discuss “emerging business and technical trends that will shape enterprise IT”, according to Dave Lounsbury, Chief Technical Officer of The Open Group.

This track also featured a presentation by Dr. William Lafontaine, VP High Performance Computing, Analytics & Cognitive Markets, IBM Research, who gave an overview of the “Global Technology Outlook 2013”.  He stated the Mega Trends are:  Growing Scale/Lower Barrier of Entry; Increasing Complexity/Yet More Consumable; Fast Pace; Contextual Overload.  Mike Walker, Strategies & Enterprise Architecture Advisor for HP, noted the key disrupters that will affect our future are the business of IT, technology itself, expectation of consumers and globalization.

The session concluded with an in-depth Q&A with Bill, Dave, Mike (as shown below) and Allen Brown, CEO of The Open Group. [Photo: Philly Day 3]

Other sessions included presentations by TJ Virdi (Senior Enterprise Architect, Boeing) on Innovation Management, Jack Jones (President, CXOWARE, Inc.) on Risk Management and Stephen Bennett (Executive Principal, Oracle) on Big Data.

A special thanks goes to our many sponsors during this dynamic conference: Windstream, Architecting the Enterprise, Metaplexity, BIZZdesign, Corso, Avolution, CXOWARE, Penn State – Online Program in Enterprise Architecture, and Association of Enterprise Architects.

Stay tuned for post-conference proceedings to be posted soon!  See you at our conference in London, October 21-24.


Filed under ArchiMate®, Conference, Cybersecurity, Data management, Enterprise Architecture, Enterprise Transformation, Open Platform 3.0, RISK Management, Security Architecture, Standards, TOGAF®

Open Group Panel Explores Changing Field of Risk Management and Analysis in the Era of Big Data

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Panel Explores Changing Field of Risk Management and Analysis in Era of Big Data

This is a transcript of a sponsored podcast discussion on the threats from and promise of Big Data in securing enterprise information assets, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference itself is focusing on Big Data and the transformation we need to embrace today.

We’re here now with a panel of experts to explore new trends and solutions in the area of risk management and analysis. We’ll learn how large enterprises are delivering risk assessments and risk analysis, and we’ll see how Big Data can be both a source of risk to protect against and a tool for better understanding and mitigating risks.

With that, please join me in welcoming our panel. We’re here with Jack Freund, PhD, the Information Security Risk Assessment Manager at TIAA-CREF. Welcome, Jack.

Jack Freund: Hello Dana, how are you?

Gardner: I’m great. Glad you could join us.

We are also here with Jack Jones, Principal of CXOWARE. He has more than nine years of experience as a Chief Information Security Officer and is the inventor of the Factor Analysis of Information Risk (FAIR) framework. Welcome, Jack.

Jack Jones: Thank you.

Gardner: And we’re also here with Jim Hietala, Vice President, Security for The Open Group. Welcome, Jim.

Jim Hietala: Thanks, Dana.

Gardner: All right, let’s start out with looking at this from a position of trends. Why is the issue of risk analysis so prominent now? What’s different from, say, five years ago? And we’ll start with you, Jack Jones.

Jones: The information security industry has struggled with getting the attention of and support from management and businesses for a long time, and it has finally come around to the fact that the executives care about loss exposure — the likelihood of bad things happening and how bad those things are likely to be.

It’s only when we speak of those issues in terms of risk that we make sense to those executives. And once we do that, we begin to gain some credibility and traction in terms of getting things done.

Gardner: So we really need to talk about this in the terms that a business executive would appreciate, not necessarily an IT executive.

Effects on business

Jones: Absolutely. They’re tired of hearing about vulnerabilities, hackers, and that sort of thing. It’s only when we can talk in terms of the effect on the business that it makes sense to them.

Gardner: Jack Freund, I should also point out that you have more than 14 years in enterprise IT experience. You’re a visiting professor at DeVry University and you chair a risk-management subcommittee for ISACA? Is that correct?

Freund: ISACA, yes.

Gardner: And do you agree?

Freund: The problem that we have as a profession, and I think it’s a big problem, is that we have allowed ourselves to escape the natural trend that other IT professionals have already followed.

There was a time, years ago, when you could code in the basement, and nobody cared much about what you were doing. But now, largely speaking, developers and systems administrators are very focused on meeting the goals of the organization.

Security has been allowed to miss that boat a little. We have been allowed to hide behind this aura of a protector and of an alerter of terrible things that could happen, without really tying ourselves to the problem that the organizations are facing and how can we help them succeed in what they’re doing.

Gardner: Jim Hietala, how do you see things that are different now than a few years ago when it comes to risk assessment?

Hietala: There are certainly changes on the threat side of the landscape. Five years ago, you didn’t really have hacktivism or this notion of an advanced persistent threat (APT).

That highly skilled attacker taking aim at governments and large organizations didn’t really exist, or didn’t exist to the degree it does today. So that has changed.

You also have big changes to the IT platform landscape, all of which bring new risks that organizations need to really think about. The mobility trend, the Cloud trend, the big-data trend that we are talking about today, all of those things bring new risk to the organization.

As Jack Jones mentioned, business executives don’t want to hear about, “I’ve got 15 vulnerabilities in the mobility part of my organization.” They want to understand what’s the risk of bad things happening because of mobility, what we’re doing about it, and what’s happening to risk over time?

So it’s a combination of changes in the threats and attackers, as well as changes to the IT landscape, that requires us to take a different look at how we measure and present risk to the business.

Gardner: Because we’re at a big-data conference, do you share my perception, Jack Jones, that Big Data can be a source of risk and vulnerability, but that the analytics and the business intelligence (BI) tools we’re employing with Big Data can also be used to alert you to risks and provide a strong tool for better understanding your true risk setting or environment?

Crown jewels

Jones: You are absolutely right. You think of Big Data and, by definition, it’s where your crown jewels, and everything that leads to the crown jewels from an information perspective, are going to be found. It’s like one-stop shopping for the bad guy, if you want to look at it in that context. It definitely needs to be protected. The architecture surrounding it, and its integration across a lot of different platforms, can be leveraged and will probably result in a complex landscape to try to secure.

There are a lot of ways into that data, but if you can leverage that same Big Data architecture as an approach to information security, with log data and other threat and vulnerability data, you should be able to make some significant gains in terms of how well-informed your analyses and your decisions are, based on that data.

Gardner: Jack Freund, do you share that? How does Big Data fit into your understanding of the evolving arena of risk assessment and analysis?

Freund: If we fast-forward five years, and this is even true today, a lot of people on the cutting edge of Big Data will tell you the problem isn’t so much pulling everything together and figuring out what it can do. They are going to tell you that the problem is what we do once we figure out everything that we have. This is the problem that we have traditionally had on a much smaller scale in information security. When everything is important, nothing is important.

Gardner: To follow up on that, where do you see the gaps in risk analysis in large organizations? In other words, what parts of organizations aren’t being assessed for risk and should be?

Freund: The big problem that exists today in the way risk assessments are done is the focus on labels. We want to quickly address the low, medium, and high things and know where they are. But there are inherent problems in the way that we think about those labels, without doing any of the analysis legwork.

What’s really missing is that true analysis. If the system goes offline, do we lose money? If the system becomes compromised, what are the cost-accounting things that will happen that allow us to figure out how much money we’re going to lose?

That analysis work is largely missing. That’s the gap. The prevailing assumption is that if a control is not in place, then there’s a risk that must be addressed in some fashion. So we end up with these very long lists of horrible, terrible things that can be done to us in all sorts of different ways, without any relevance to the overall business of the organization.

Every day, our organizations are out there selling products and offering services, which is, in and of itself, a risky venture. So tying what we do from an information security perspective to that is critical, not just for the success of the organization, but for the success of our profession.

Gardner: So we can safely say that large companies are probably pretty good at a cost-benefit analysis, or they wouldn’t be successful. Now, I guess we need to ask them to take that a step further and do a cost-risk analysis, but in business terms, being mindful that their IT systems might be a much larger part of that than they had once considered. Is that fair, Jack?

Risk implications

Jones: Businesses have been making these decisions, chasing the opportunity, but generally without any clear understanding of the risk implications, at least from the information security perspective. They’ll have us in the corner screaming and throwing red flags, talking about vulnerabilities and threats from one thing or another.

But, we come to the table with red, yellow, and green indicators, and on the other side of the table, they’ve got numbers: “Here is what we expect to earn in revenue from this initiative.” And the information security people are saying it’s crazy. How do you normalize the quantitative revenue gain versus red, yellow, and green?

Gardner: Jim Hietala, do you see it in the same red, yellow, green terms, or are there other frameworks or standard methodologies that The Open Group is looking at to make this a bit more of a science?

Hietala: Probably four years ago, we published what we call the Risk Taxonomy Standard, which is based upon FAIR, the risk analysis framework that Jack Jones invented. So, we’re big believers in bringing that level of precision to doing risk analysis. Having just gone through training for FAIR myself, as part of the standards effort that we’re doing around certification, I can say that it really brings a level of precision and a depth of analysis to risk analysis that has frequently been lacking in IT security and risk management.

Gardner: We’ve talked about how organizations need to be mindful that their risks are higher and different than in the past and we’ve talked about how standardization and methodologies are important, helping them better understand this from a business perspective, instead of just a technology perspective.

But, I’m curious about a cultural and organizational perspective. Whose job should this fall under? Who is wearing the white hat in the company and can rally the forces of good and get all the bad things managed? Is this a single person’s job, a cultural mission, an organizational mission? How do you make this work in the enterprise in a real-world way? Let’s go to you, Jack Freund.

Freund: The profession of IT risk management is changing. That profession will have to sit between the business and information security inclusive of all the other IT functions that make that happen.

In order to be successful sitting between these two groups, you have to be able to speak the language of both of those groups. You have to be able to understand profit and loss and capital expenditure on the business side. On the IT risk side, you have to be technical enough to do all those sorts of things.

But I think the sum total of those two things is probably only about 50 percent of the job of IT risk management today. The other 50 percent is communication. Finding ways to translate that language and to understand the needs and concerns of each side of that relationship is really the job of IT risk management.

To answer your question, I think it’s absolutely the job of IT risk management to do that. From my own experience with the FAIR framework, I can say that FAIR is the Rosetta Stone for speaking between those two groups.

Necessary tools

It gives you the tools necessary to speak in the insurance and risk terms that the business appreciates. And it gives you the ability to be as technical, and just as nerdy, if you will, as you need to be in order to talk to IT security and the other IT functions, to make sure everybody is on the same page and everyone feels that their concerns are represented in the risk-assessment functions that are happening.

Gardner: Jack Jones, can you add to that?

Jones: I agree with what Jack said wholeheartedly. I would add, though, that integration or adoption of something like this is a lot easier the higher up in the organization you go.

Traditionally, CFOs are the ones whose necks are most clearly on the line for risk-related issues within most organizations. At least in my experience, if you get their ear on this and present the information security data analyses to them, they jump on board, they drive it through the organization, and it’s just brain-dead easy.

If you try to drive it up through the ranks, where maybe you get an enthusiastic supporter in the information security organization, especially below the CISO level, who tries a grassroots sort of effort to bring it in, it’s a tougher thing. It can still work. I’ve seen it work very well, but it’s a longer row to hoe.

Gardner: There has been a lot of research, including studies and surveys, on data breaches. What are some of the best sources, or maybe not-so-good sources, for actually measuring this? How do you know if you’re doing it right? How do you know if you’re moving from yellow to green, instead of to red? To you, Jack Freund.

Freund: There are a couple of things in that question. The first is that there’s an inherent assumption in a lot of organizations that we need to move from yellow to green, and that may not be the case. So, becoming very knowledgeable about the risk posture and the risk tolerance of the organization is key.

That’s part of the official mindset of IT security. When you graduate an information security person today, they are minted knowing that there are a lot of bad things out there, and their goal in life is to reduce them. But, that may not be the case. The case may very well be that things are okay now, but we have bigger fish to fry over here that we’re going to focus on. So, that’s one thing.

The second thing, and it’s a very good question, is how do we know that we’re getting better? How do we trend that over time? Overall, measuring that value for the organization has to show a reduction of risk, or at least a reduction of risk to the risk-tolerance levels of the organization.

Calculating and understanding that requires something that I always phrase as we have to become comfortable with uncertainty. When you are talking about risk in general, you’re talking about forward-looking statements about things that may or may not happen. So, becoming comfortable with the fact that they may or may not happen means that when you measure them today, you have to be willing to be a little bit squishy in how you’re representing that.

In FAIR and in other academic works, they talk about using ranges to do that. So, things like high, medium, and low could instead be represented in terms of a minimum, maximum, and most likely. And that tends to be very, very effective. People can respond to that fairly well.
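To make that concrete, here is a minimal sketch in Python of carrying a risk factor as a range rather than a label. The PERT-style weighted mean is one common way to summarize such an estimate; the field names and numbers are invented for illustration and are not taken from FAIR or the discussion above.

    # Illustrative only: representing a risk factor as a calibrated range
    # (minimum, most likely, maximum) rather than a high/medium/low label.
    estimate = {"minimum": 2, "most_likely": 5, "maximum": 12}  # loss events/year

    # PERT-style weighted mean: one common single-number summary of a range.
    pert_mean = (estimate["minimum"]
                 + 4 * estimate["most_likely"]
                 + estimate["maximum"]) / 6
    print(f"point summary of the range: {pert_mean:.1f} events/year")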

Gathering data

Jones: With regard to the data sources, there are a lot of people out there doing these sorts of studies, gathering data. The problem that’s hamstringing that effort is the lack of a common set of definitions, nomenclature, and even taxonomy around the problem itself.

You will have one study that defines threat, vulnerability, or whatever differently from some other study, and so the data can’t be normalized. It really harms the utility of it. I see data out there and I think, “That looks like it could be really useful.” But, I hesitate to use it, because they don’t publish their definitions, their approach, and how they went after it.

There’s just so much superficial thinking in the profession on this that, once we dig under the covers, too often I run into stuff that just can’t be defended. It doesn’t make sense, and therefore the data can’t be used. It’s an unfortunate situation.

I do think we’re heading in a positive direction. FAIR can provide a normalizing structure for that sort of thing. The VERIS framework, which, by the way, is also derived in part from FAIR, has also gained real traction in terms of the quality of the research they have done and the data they’re generating. We’re headed in the right direction, but we’ve got a long way to go.

Gardner: Jim Hietala, we’re seemingly looking at this on a company-by-company basis. But, is there a vertical industry slice or industry-wide slice where we could look at what’s happening to everyone and put some standard understanding, or measurement around what’s going on in the overall market, maybe by region, maybe by country?

Hietala: There are some industry-specific initiatives and what’s really needed, as Jack Jones mentioned, are common definitions for things like breach, exposure, loss, all those, so that the data sources from one organization can be used in another, and so forth. I think about the financial services industry. I know that there is some information sharing through an organization called the FS-ISAC about what’s happening to financial services organizations in terms of attacks, loss, and those sorts of things.

There’s an opportunity for that on a vertical-by-vertical basis. But, like Jack said, there is a long way to go on that. Some industries, healthcare for instance, are so far from that, it’s ridiculous. In the US here, the HIPAA security rule says you must do a risk assessment. So, hospitals will do an annual risk assessment, stick the binder on the shelf, and not think much about information security in between those annual risk assessments. That’s a generalization, but various industries are at different places on a continuum of maturity in their risk management approaches.

Gardner: As we get better with having a common understanding of the terms and the measurements and we share more data, let’s go back to this notion of how to communicate this effectively to those people that can use it and exercise change management as a result. That could be the CFO, the CEO, what have you, depending on the organization.

Do you have any examples? Can we look to an organization that’s done this right, examine their practices, the way they’ve communicated it, and some of the tools they’ve used, and say, “Aha, they’re headed in the right direction; maybe we could follow a little bit”? Let’s start with you, Jack Freund.

Freund: I have worked and consulted for various organizations that have done risk management at different levels. The ones that have embraced FAIR tend to be the ones that overall feel that risk is an integral part of their business strategy. And I can give a couple of examples of scenarios that have played out that I think have been successful in the way they have been communicated.

Coming to terms

The key to keep in mind is that when you’re a security professional, you’re trained to feel like you need results. But, the results for the IT risk management professional are different. The results are “I’ve communicated this effectively, so I am done.” And then whatever the results are, are the results that needed to be. And that’s a really hard thing to come to terms with.

I’ve been involved in large-scale efforts to assess risk for a Cloud venture. We needed to move virtually every confidential record that we have to the Cloud in order to be competitive with the rest of our industry. If our competitors are finding ways to utilize the Cloud before us, we can lose out. So, we need to find a way to do that, and to be secure and compliant with all the laws and regulations and such.

Through that scenario, one of the things that came out was that key ownership became really, really important. We had the opportunity to look at the various control structures, and we analyzed them using FAIR. What we ended up with was sort of a long-tail risk. Most people will probably do their job right over a long enough period of time. But, over that same long period of time, the odds of somebody making a mistake not in your favor are fairly high, though not so significant that you can’t make the move.

But, the problem became that the loss side, the side that typically gets ignored with traditional risk-assessment methodologies, was so significant that the organization needed to make some judgment around that, and they needed to have a sense of what we needed to do in order to minimize that.

That became a big point of discussion for us, and it drove the conversation away from “bad things could happen.” We didn’t bury the lead. The lead was that this is the most important thing to this organization in this particular scenario.

So, let’s talk about things we can do. Are we comfortable with it? Do we need to make any sort of changes? What are some control opportunities? How much do they cost? This is a significantly more productive conversation than just, “Here is a bunch of bad things that could happen. I’m going to cross my arms and say no.”

Gardner: Jack Jones, examples at work?

Jones: In an organization that I’ve been working with recently, the board of directors said they wanted a quantitative view of information security risk. They just weren’t happy with red, yellow, green. So, they came to us, and there were really two things that drove them there. One was that they were looking at cyber insurance. They wanted to know how much cyber insurance they should take out, and how do you figure that out when you’ve got a red, yellow, green scale?

They were able to do a series of analyses on a population of the scenarios that they thought were relevant in their world, get an aggregate view of their annualized loss exposure, and make a better informed decision about that particular problem.

Gardner: I’m curious how prevalent cyber insurance is, and whether that is going to be a leveling effect in the industry, where people speak a common language, the equivalent of actuarial tables, but for enterprise security and cyber security?

Jones: One would dream and hope, but at this point, what I’ve seen out there in terms of the basis on which insurance companies are setting their premiums is essentially the same old “risk assessment” stuff that the industry has been doing poorly for years. It’s not based on data or any real analysis per se, at least in what I’ve run into. What they do is set their premiums high to buffer themselves and typically cover as few things as possible. The question of how much value that provides the customers becomes a problem.

Looking to the future

Gardner: We’re coming up on our time limit. So, let’s quickly look to the future. Is there such a thing as risk management as a service? Can we outsource this? Is there a way in which moving more of IT into Cloud or hybrid models would mitigate risk, because the Cloud provider would standardize? Then, many players in that environment, those who were buying those services, would be under that same umbrella? Let’s start with you, Jim Hietala. What’s the future of this, and what do the Cloud trends bring to the table?

Hietala: I’d start with a maxim that comes out of the financial services industry, which is that you can outsource the function, but you still own the risk. That’s an unfortunate reality. You can throw things out into the Cloud, but it doesn’t absolve you from understanding your risk and then doing things to manage it or transfer it, if there’s insurance, or whatever the case may be.

That’s just a reality. Organizations in the risky world we live in are going to have to get more serious about doing effective risk analysis. From The Open Group standpoint, we see this as an opportunity area.

As I mentioned, we’ve standardized the taxonomy piece of FAIR. And we really see an opportunity around the profession going forward to help the risk-analysis community by further standardizing FAIR and launching a certification program for a FAIR-certified risk analyst. That’s in demand from large organizations that are looking for evidence that people understand how to apply FAIR and use it in doing risk analyses.

Gardner: Jack Freund, looking into your crystal ball, how do you see this discipline evolving?

Freund: I always try to consider things as they exist within other systems. Risk is a system of systems. There are a series of pressures that are applied, and a series of levers that are thrown in order to release that sort of pressure.

Risk will always be owned by the organization that is offering that service. If we decide at some point that we can move to the Cloud and all these other things, we need to look to the legal system. There is a series of pressures that they are going to apply, and who is going to own that, and how that plays itself out.

If we look to the Europeans and the way that they’re managing risk and compliance, they’re still as strict about things as we in the United States think they may be, but there’s still a lot of leeway in the way many of the laws are written. You’re still being asked to do things that are reasonable. You’re still being asked to do things that are standard for your industry. But, we’d still like the ability to know what that is, and I don’t think that’s going to go away anytime soon.

Judgment calls

We’re still going to have to make judgment calls. We’re still going to have to do 100 things with a budget for 10 things. Whenever that happens, you have to make a judgment call. What’s the most important thing that I care about? And that’s why risk management exists, because there’s a certain series of things that we have to deal with. We don’t have the resources to do them all, and I don’t think that’s going to change over time. Regardless of whether the landscape changes, that’s the one that remains true.

Gardner: The last word to you, Jack Jones. It sounds as if we’re continuing down the path of being mostly reactive. Is there anything you can see on the horizon that would perhaps tip the scales, so that the risk management and analysis practitioners can really become proactive and head things off before they become a big problem?

Jones: If we were to take a snapshot at any given point in time of an organization’s loss exposure, how much risk they have right then, that’s a lagging indicator of the decisions they’ve made in the past, and their ability to execute against those decisions.

We can do some great root-cause analysis around that and ask how we got there. But, we can also turn that coin around and ask how good we are at making well-informed decisions and then executing against them, then asking what that implies from a risk perspective downstream.

If we understand the relationship between our current state and past and future states, and we have those linkages defined, especially if we have an analytic framework underneath it, we can do some marvelous what-if analysis.

What if this variable changed in our landscape? Let’s run a few thousand Monte Carlo simulations against that and see what comes up. What does that look like? Well, then let’s change this other variable and then see which combination of dials, when we turn them, make us most robust to change in our landscape.
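As a rough sketch of the kind of what-if analysis described here, the Python below runs a few thousand Monte Carlo trials of annualized loss under a baseline scenario and an improved-control scenario. It is purely illustrative: the triangular distributions, ranges, and dollar figures are invented, and no FAIR tooling is assumed.

    import random

    def simulate_annual_loss(event_freq, loss_per_event, trials=5000):
        """Monte Carlo sketch: sample annualized loss from triangular ranges.

        event_freq and loss_per_event are (minimum, most_likely, maximum).
        """
        results = []
        for _ in range(trials):
            lo, mode, hi = event_freq
            events = random.triangular(lo, hi, mode)      # loss events/year
            lo, mode, hi = loss_per_event
            loss = random.triangular(lo, hi, mode)        # dollars per event
            results.append(events * loss)
        return sorted(results)

    baseline = simulate_annual_loss((2, 5, 12), (10_000, 50_000, 400_000))
    improved = simulate_annual_loss((1, 2, 6), (10_000, 50_000, 400_000))

    for name, run in (("baseline", baseline), ("with new control", improved)):
        mean = sum(run) / len(run)
        p90 = run[int(0.9 * len(run))]
        print(f"{name}: mean ~${mean:,.0f}/yr, 90th percentile ~${p90:,.0f}/yr")

Comparing the two output lines is the "turn this dial" exercise: the difference between the distributions, not a single point value, shows what the change buys.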

But again, we can’t begin to get there, until we have this foundational set of definitions, frameworks, and such to do that sort of analysis. That’s what we’re doing with FAIR, but without some sort of framework like that, there’s no way you can get there.

Gardner: I am afraid we’ll have to leave it there. We’ve been talking with a panel of experts on how new trends and solutions are emerging in the area of risk management and analysis. And we’ve seen how new tools for communication and using Big Data to understand risks are also being brought to the table.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference in Newport Beach, California. I’d like to thank our panel: Jack Freund, PhD, Information Security Risk Assessment Manager at TIAA-CREF. Thanks so much Jack.

Freund: Thank you, Dana.

Gardner: We’ve also been speaking with Jack Jones, Principal at CXOWARE.

Jones: Thank you. Thank you, pleasure to be here.

Gardner: And last, Jim Hietala, the Vice President for Security at The Open Group. Thanks.

Hietala: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions; your host and moderator through these thought leadership interviews. Thanks again for listening and come back next time.


Filed under Security Architecture

Improving Signal-to-Noise in Risk Management

By Jack Jones, CXOWARE

One of the most important responsibilities of the information security professional (or any IT professional, for that matter) is to help management make well-informed decisions. Unfortunately, this has been an elusive objective when it comes to risk. Although we’re great at identifying control deficiencies, and we can talk all day long about the various threats we face, we have historically had a poor track record when it comes to risk. There are a number of reasons for this, but in this article I’ll focus on just one — definition.

You’ve probably heard the old adage, “You can’t manage what you can’t measure.”  Well, I’d add to that by saying, “You can’t measure what you haven’t defined.” The unfortunate fact is that the information security profession has been inconsistent in how it defines and uses the term “risk.” Ask a number of professionals to define the term, and you will get a variety of definitions.

Besides inconsistency, another problem regarding the term “risk” is that many of the common definitions don’t fit the information security problem space or simply aren’t practical. For example, the ISO27000 standard defines risk as, “the effect of uncertainty on objectives.” What does that mean? Fortunately (or perhaps unfortunately), I must not be the only one with that reaction because the ISO standard goes on to define “effect,” “uncertainty,” and “objectives,” as follows:

  • Effect: A deviation from the expected — positive and/or negative
  • Uncertainty: The state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence or likelihood
  • Objectives: Can have different aspects (such as financial, health and safety, information security, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process)

NOTE: Their definition for “objectives” doesn’t appear to be a definition at all, but rather an example.

Although I understand, conceptually, the point this definition is getting at, my first concern is practical in nature. As a Chief Information Security Officer (CISO), I invariably have more to do than I have resources to apply. Therefore, I must prioritize; prioritization requires comparison, and comparison requires measurement. It isn’t clear to me how “uncertainty regarding deviation from the expected (positive and/or negative) that might affect my organization’s objectives” can be applied to measure, and thus compare and prioritize, the issues I’m responsible for dealing with.

This is just an example though, and I don’t mean to pick on ISO because much of their work is stellar. I could have chosen any of several definitions in our industry and expressed varied concerns.

In my experience, information security is about managing how often loss takes place, and how much loss will be realized when/if it occurs. That is our profession’s value proposition, and it’s what management cares about. Consequently, whatever definition we use needs to align with this purpose.

The Open Group’s Risk Taxonomy (shown below), based on Factor Analysis of Information Risk (FAIR), helps to solve this problem by providing a clear and practical definition for risk. In this taxonomy, Risk is defined as, “the probable frequency and probable magnitude of future loss.”

[Figure: The Open Group Risk Taxonomy]

The elements below risk in the taxonomy form a Bayesian network that models risk factors and acts as a framework for critically evaluating risk. This framework has been evolving for more than a decade now and is helping information security professionals across many industries understand, measure, communicate and manage risk more effectively.
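To give a feel for the shape of that decomposition, here is a compressed sketch of the taxonomy’s top layer in Python. The factor names follow the standard, but the single-point arithmetic and the values are a simplification; a full FAIR analysis works with calibrated ranges and distributions rather than the invented point values used here.

    # Simplified sketch of the top of the Risk Taxonomy:
    # risk = probable frequency x probable magnitude of future loss.
    threat_event_frequency = 10   # attempted events/year (illustrative)
    vulnerability = 0.3           # probability an attempt becomes a loss event
    loss_magnitude = 50_000       # probable loss per event, in dollars

    loss_event_frequency = threat_event_frequency * vulnerability   # ~3/year
    annualized_loss_exposure = loss_event_frequency * loss_magnitude
    print(f"annualized loss exposure: ${annualized_loss_exposure:,.0f}")  # $150,000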

In the communications context, you have to have a very clear understanding of what constitutes signal before you can effectively and reliably filter it out from noise. The Open Group’s Risk Taxonomy gives us an important foundation for achieving a much clearer signal.

I will be discussing this topic in more detail next week at The Open Group Conference in Newport Beach. For more information on my session or the conference, visit: http://www.opengroup.org/newportbeach2013.

Jack Jones has been employed in technology for the past twenty-nine years, and has specialized in information security and risk management for twenty-two years. During this time, he’s worked in the United States military, government intelligence, and consulting, as well as the financial and insurance industries. Jack has over nine years of experience as a CISO, with five of those years at a Fortune 100 financial services company. His work there was recognized in 2006 when he received the ISSA Excellence in the Field of Security Practices award at that year’s RSA conference. In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012 he was honored with the CSO Compass award for leadership in risk management. He is also the author and creator of the Factor Analysis of Information Risk (FAIR) framework.


Filed under Cybersecurity

Operational Resilience through Managing External Dependencies

By Ian Dobson & Jim Hietala, The Open Group

These days, organizations are rarely self-contained. Businesses collaborate through partnerships and close links with suppliers and customers. Outsourcing services and business processes, including into Cloud Computing, means that key operations that an organization depends on are often fulfilled outside their control.

The challenge here is how to manage the dependencies your operations have on factors that are outside your control. The goal is to perform your risk management so it optimizes your operational success through being resilient against external dependencies.

The Open Group’s Dependency Modeling (O-DM) standard specifies how to construct a dependency model to manage risk and build trust over organizational dependencies between enterprises – and between operational divisions within a large organization. The standard involves constructing a model of the operations necessary for an organization’s success, including the dependencies that can affect each operation. Then, applying quantitative risk sensitivities to each dependency reveals those operations that have highest exposure to risk of not being successful, informing business decision-makers where investment in reducing their organization’s exposure to external risks will result in best return.
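Purely as an illustration of that idea, and not the standard’s own notation, the sketch below builds a toy dependency model in Python and ranks operations by a crude exposure score derived from each dependency’s assumed failure likelihood and sensitivity; all names and weights are invented.

    # Toy dependency model in the spirit of O-DM (not the standard's notation).
    # Each operation depends on external factors, each with an assumed yearly
    # failure probability and a sensitivity weight for that operation.
    operations = {
        "order fulfilment": [
            ("logistics partner", 0.10, 0.9),   # (dependency, p_fail, sensitivity)
            ("payment gateway", 0.02, 1.0),
        ],
        "customer support": [
            ("outsourced call centre", 0.15, 0.6),
        ],
    }

    def exposure(deps):
        """Probability-weighted sensitivity: a crude per-operation risk score."""
        return sum(p_fail * sens for _, p_fail, sens in deps)

    # Highest-exposure operations first: where risk-reduction investment pays off.
    for op in sorted(operations, key=lambda o: exposure(operations[o]), reverse=True):
        print(f"{op}: exposure score {exposure(operations[op]):.2f}")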

O-DM helps you to plan for success through operational resilience, assured business continuity, and effective new controls and contingencies, enabling you to:

  • Cut costs without losing capability
  • Make the most of tight budgets
  • Build a resilient supply chain
  • Lead programs and projects to success
  • Measure, understand and manage risk from outsourcing relationships and supply chains
  • Deliver complex event analysis

The O-DM analytical process facilitates organizational agility by allowing you to easily adjust and evolve your organization’s operations model, and produces rapid results to illustrate how reducing the sensitivity of your dependencies improves your operational resilience. O-DM also allows you to drill as deep as you need to go to reveal your organization’s operational dependencies.

Training on the development of operational dependency models conforming to the O-DM standard is available, as are software computation tools that automate speedy delivery of actionable results in graphic formats to facilitate informed business decision-making.

The O-DM standard represents a significant addition to our existing Open Group Risk Management publications.

The O-DM standard may be accessed here.

Ian Dobson is the director of the Security Forum and the Jericho Forum for The Open Group, coordinating and facilitating the members to achieve their goals in our challenging information security world.  In the Security Forum, his focus is on supporting development of open standards and guides on security architectures and management of risk and security, while in the Jericho Forum he works with members to anticipate the requirements for the security solutions we will need in future.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.


Filed under Cybersecurity, Security Architecture

Creation of a strategy for the consumption and management of Cloud Services in the TOGAF® Preliminary Phase

By Serge Thorn, Architecting the Enterprise

In an article on my blog, “Cloud Computing requires Enterprise Architecture and TOGAF 9 can show the way”, I described the need to define a strategy as an additional step in the TOGAF 9 Preliminary Phase. This article describes in more detail what the content of such a document could be: specifically, the governance activities related to the Consumption and Management of Cloud Services.

Before deciding to switch over to Cloud Computing, companies should first fully understand the concepts and implications of an internal IT investment versus buying this as a service. There are different approaches that may have to be considered at an enterprise level when Cloud Computing is under consideration: Public Clouds vs. Private Clouds vs. Hybrid Clouds. Although many people already know what the differences are, below are summaries of the various models:

  • A public Cloud is one in which the consumer of Cloud services and the provider of Cloud services exist in separate enterprises. The ownership of the assets used to deliver Cloud services remains with the provider
  • A private Cloud is one in which both the consumer of Cloud services and the provider of those services exist within the same enterprise. The ownership of the Cloud assets resides within the same enterprise providing and consuming Cloud services. It is really a description of a highly virtualized, on-premise data center that is behaving as if it were that of a public Cloud provider
  • A hybrid Cloud combines multiple elements of public and private Cloud, including any combination of providers and consumers

Once the major Business stakeholders understand the concepts, some initial decisions may have to be made and included in that document. The same may also apply to the various Cloud Computing categorisations.

The categories the enterprise may be interested in, as they relate to existing problems, can already be included as a section in the document.

Quality Management

There is a need for a system for evaluating performance, whether in the delivery of Cloud services or in the quality of products provided to consumers or customers. This may include:

  • Test planning and test asset management, from business requirements to defects
  • Project governance and release decisions based on standards such as PRINCE2/PMI and ITIL
  • Data quality control: all data uploaded to a Cloud Computing service provider must fit the requirements of the provider, and those requirements should be detailed and provided by the provider
  • Detailed and documented Business Processes as defined in ISO 9001:
    • Systematically defining the activities necessary to obtain a desired result
    • Establishing clear responsibility and accountability for managing key activities
    • Analyzing and measuring the capability of key activities
    • Identifying the interfaces of key activities within and between the functions of the organization
    • Focusing on the factors such as resources, methods, and materials that will improve key activities of the organization
    • Evaluating risks, consequences and impacts of activities on customers, suppliers and other interested parties

Security Management

This would address and document specific topics such as:

  • Eliminating the need to constantly reconfigure static security infrastructure for a dynamic computing environment
  • Defining how services are able to securely connect and reliably communicate with internal IT services and other public services
  • Penetration security checks
  • How Security Management, System Management, and Network Management teams monitor security and availability

Semantic Management

The amount of unstructured electronic information in an enterprise environment is growing rapidly. Business people have to collaborate to reconcile their heterogeneous metadata and then apply the derived business semantic patterns to establish alignment between the underlying data structures. The way this will be handled may also be included.

IT Service Management (ITIL)

IT Service Management or IT Operations teams will have to address many new challenges due to the Cloud. This will need to be addressed for some specific processes such as:

  • Incident Management
    • The Cloud provider must ensure that all outages or exceptions to normal operations are resolved as quickly as possible, while capturing all of the details of the actions taken and communicating them to the customer.
  • Change Management
    • Strict change management practices must be adhered to and all changes implemented during approved maintenance windows must be tracked, monitored, and validated.
  • Configuration Management (Service Asset and…)
    • Companies that have a CMDB must provide it to the Cloud providers, with detailed descriptions of the relationships between configuration items (CIs)
    • CI relationships help change and incident managers determine whether a modification to one service may impact several other related services and the components of those services
    • This provides more visibility into the Cloud environment, allowing consumers and providers to make more informed decisions, not only when preparing for a change but also when diagnosing incidents and problems
  • Problem Management
    • The Cloud provider needs to identify the root cause analysis in case of problems

  • Service Level Management
    • Service Level Agreements (or Underpinning Contracts) must be transparent and accessible to the end users. The business representatives should be negotiating these agreements. They will need to effectively negotiate commercial, technical, and legal terms. It will be important to establish concrete, measurable Service Level Agreements (SLAs). Without these, and an effective means for verifying compliance, the damage from poor service levels will only be exacerbated
  • Vendor Management
    • Relationship between a vendor and their customers changes
    • Contractual arrangements
  • Capacity Management and Availability Management
    • Reporting on performance

Other activities must be documented, such as:

Monitoring

  • Monitoring will be a very important activity and should be described in the Strategy document. The assets and infrastructure that make up the Cloud service are not within the enterprise. They are owned by the Cloud providers, which will most likely focus on maximizing their revenue, not necessarily on optimizing the performance and availability of the enterprise’s services. Establishing sound monitoring practices for the Cloud services from the outset will bring significant benefits in the long term. Outsourcing delivery of a service does not necessarily imply that we can outsource the monitoring of that service. Besides, today very few Cloud providers offer any form of service level monitoring to their customers. Quite often, they are providing the Cloud service but not proving that they are providing that service.
  • The resource usage and consumption must be monitored and managed in order to support strategic decision making
  • Whenever possible, the Cloud providers should furnish the relevant tools for management and reporting and take away the onerous tasks of patch management, version upgrades, high availability, disaster recovery and the like. This obviously will impact IT Service Continuity for the enterprise.
  • Service Measurement, Service Reporting and Service Improvement processes must be considered

Consumption and costs

  • Service usage (when and how) should be tracked to determine the intrinsic value that the service is providing to the Business, and IT can also use this information to compute the Return On Investment for its Cloud Computing initiatives and related services. This is related to the IT Financial Management process.

Risk Management

The TOGAF 9 risk management method should be considered to address the various associated risks, such as:

  • Ownership, Cost, Scope, Provider relationship, Complexity, Contractual, Client acceptance, etc.
  • Other risks should also be considered, such as Usability, Security (obviously…) and Interoperability

Asset Management and License Management

When various Cloud approaches are considered (services on-premise or via the Cloud), hardware and software license management should be defined to ensure companies can meet their governance and contractual requirements.

Transactions

Ensuring the safety of confidential data is a mission-critical aspect of the business. Cloud Computing gives businesses concerns over the lack of control that they will have over company data, and it does not enable them to monitor the processes used to organize the information.

Being able to manage the transactions in the Cloud is vital and Business transaction safety should be considered (recording, tracking, alerts, electronic signatures, etc…).

There may be other aspects, which should be integrated in this Strategy document that may vary according to the level of maturity of the enterprise or existing best practices in use.

When considering Cloud Computing, the Preliminary Phase will include, in the definition of the Architecture Governance Framework, most of the touch-points with other processes as described above. At completion, touch-points and impacts should be clearly understood and agreed by all relevant stakeholders.

This article has previously appeared in Serge Thorn’s personal blog.

Cloud will be a topic of discussion at The Open Group Conference, London, May 9-13. Join us for best practices, case studies and the future of information security, presented by preeminent thought leaders in the industry.

Serge Thorn is CIO of Architecting the Enterprise.  He has worked in the IT Industry for over 25 years, in a variety of roles, which include; Development and Systems Design, Project Management, Business Analysis, IT Operations, IT Management, IT Strategy, Research and Innovation, IT Governance, Architecture and Service Management (ITIL). He has more than 20 years of experience in Banking and Finance and 5 years of experience in the Pharmaceuticals industry. Among various roles, he has been responsible for the Architecture team in an international bank, where he gained wide experience in the deployment and management of information systems in Private Banking, Wealth Management, and also in IT architecture domains such as the Internet, dealing rooms, inter-banking networks, and Middle and Back-office. He then took charge of IT Research and Innovation (a function which consisted of motivating, encouraging creativity, and innovation in the IT Units), with a mission to help to deploy a TOGAF based Enterprise Architecture, taking into account the company IT Governance Framework. He also chaired the Enterprise Architecture Governance worldwide program, integrating the IT Innovation initiative in order to identify new business capabilities that were creating and sustaining competitive advantage for his organization. Serge has been a regular speaker at various conferences, including those by The Open Group. His topics have included, “IT Service Management and Enterprise Architecture”, “IT Governance”, “SOA and Service Management”, and “Innovation”. Serge has also written several articles and whitepapers for different magazines (Pharma Asia, Open Source Magazine). He is the Chairman of the itSMF (IT Service Management forum) Swiss chapter and is based in Geneva, Switzerland.


Filed under Cloud/SOA, TOGAF®

Security Forum Completes Third & Final Phase of Risk Management Project: Cookbook for ISO/IEC 27005:2005

By Jim Hietala, The Open Group

The Open Group Security Forum recently completed the last phase of our major risk management initiative with the publication of the Cookbook for ISO/IEC 27005:2005. The Cookbook is the culmination of the work the members of the Security Forum have undertaken over the past two and a half years — a comprehensive initiative aimed at eliminating widespread industry confusion about risk management among risk managers, security and IT professionals, as well as business managers.

The new Cookbook for ISO/IEC 27005:2005 is meant to be a “recipe” of sorts, providing a detailed description of how to apply The Open Group’s FAIR (Factor Analysis of Information Risk) Risk Taxonomy Standard to any other risk management framework to help improve the consistency and accuracy of the resulting framework. By following the “cookbook” example in the guide, risk technology practitioners can apply the example with significantly beneficial outcomes when using other frameworks of their choice.

We created the guide for anyone tasked with selecting, performing, evaluating, or developing a risk assessment methodology, including all stakeholders responsible for anything risk related, such as business managers, information security/risk management professionals, auditors, and regulators (both policy-makers and law-makers).

The initiative started in the summer of 2008 with Phase 1, the Risk Taxonomy Standard, which is based on the FAIR methodology and specifies a standard definition and taxonomy for information security risk and how to apply it to perform risk assessments. A year later, we completed the second phase and published a technical guide entitled Requirements for Risk Assessment Methodologies, which describes key risk assessment traits, provides advice on quantitative versus qualitative measurements, and addresses the need for senior management involvement. The Cookbook completes our project.

As we wrap up our work on this initiative and look at the current state of security, with escalating cyber threats, growing risks around mobile computing, and evolving government regulations, I can say with confidence that we have met our goals in creating comprehensive and needed guidance and standards in the area of risk analysis.

Looking ahead at the rest of 2011, The Open Group Security Forum has an active pipeline of projects to address the increasing risk and compliance concerns facing IT departments across organizations today. Be on the lookout for the publication of the ISM3 standard, revised Enterprise Security Architecture Guide, and ACEML standard in the late spring/early summer months!

An IT security industry veteran, Jim Hietala is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.


Filed under Cybersecurity

PODCAST: Impact of Security Issues on Doing Business in 2011 And Beyond

By Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-The Open Group Conference Cyber Security Panel

The following is the transcript of a sponsored podcast panel discussion on how enterprises need to change their thinking to face cyber threats, from The Open Group Conference, San Diego 2011.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion in conjunction with The Open Group Conference, held in San Diego in the week of February 7, 2011. We’ve assembled a panel to examine the business risk around cyber security threats.

Looking back over the past few years, it seems like threats are only getting worse. We’ve had the Stuxnet worm, the WikiLeaks affair, China-originated attacks against Google and others, and the recent Egypt Internet blackout. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

But, are cyber security dangers, in fact, getting much worse, or are our perceptions at odds with what is really important in terms of security? In any event, how can businesses best protect themselves from the next round of risks, especially as Cloud, mobile, and social media activities increase? How can architecting for security become effective and pervasive? We’ll pose these and other serious questions to our panel to deeply examine the cyber business risks and ways to head them off.

Please join me now in welcoming our panel, we’re here with Jim Hietala, the Vice President of Security at The Open Group. Welcome back, Jim.

Jim Hietala: Hi, Dana. Good to be with you.

Gardner: And, we’re here with Mary Ann Mezzapelle, Chief Technologist in the CTO’s Office at HP. Welcome.

Mary Ann Mezzapelle: Thank you, Dana.

Gardner: We’re also here with Jim Stikeleather, Chief Innovation Officer at Dell Services. Welcome, Jim.

Jim Stikeleather: Thank you, Dana. Glad to be here.

Gardner: As I mentioned, there have been a lot of things in the news about security. I’m wondering, what are the real risks that are worth being worried about? What should you be staying up late at night thinking about, Jim?

Stikeleather: Pretty much everything, at this time. One of the things that you’re seeing is a combination of factors. When people are talking about the break-ins, you’re seeing more people actually having discussions of what’s happened and what’s not happening. You’re seeing a new variety of the types of break-ins, the type of exposures that people are experiencing. You’re also seeing more organization and sophistication on the part of the people who are actually breaking in.

The other piece of the puzzle has been that legal and regulatory bodies step in and say, “You are now responsible for it.” Therefore, people are paying a lot more attention to it. So, it’s a combination of all these factors that are keeping people up right now.

Gardner: Is it correct, Mary Ann, to say that it’s not just a risk for certain applications or certain aspects of technology, but it’s really a business-level risk?

Key component

Mezzapelle: That’s one of the key components that we like to emphasize. It’s about empowering the business, and each business is going to be different.

If you’re talking about a Department of Defense (DoD) military implementation, that’s going to be different than a manufacturing concern. So it’s important that you balance the risk, the cost, and the usability to make sure it empowers the business.

Gardner: How about complexity, Jim Hietala? Is that sort of an underlying current here? We now think about the myriad mobile devices, moving applications to a new tier, native apps for different platforms, more social interactions that are encouraging collaboration. This is good, but just creates more things for IT and security people to be aware of. So how about complexity? Is that really part of our main issue?

Hietala: It’s a big part of the challenge, with changes like you have mentioned on the client side, with mobile devices gaining more power, more ability to access information and store information, and cloud. On the other side, we’ve got a lot more complexity in the IT environment, and much bigger challenges for the folks who are tasked for securing things.

Gardner: Just to get a sense of how bad things are, Jim Stikeleather, on a scale of 1 to 10 — with 1 being you’re safe and sound and you can sleep well, and 10 being all the walls of your business are crumbling and you’re losing everything — where are we?

Stikeleather: Basically, it depends on who you are and where you are in the process. A major issue in cyber security right now is that we’ve never been able to construct an intelligent return on investment (ROI) for cyber security.

There are two parts to that. One, we've never been truly able to gauge how big the risk really is. So, for one person it may be a 2, and for most people it's probably a 5 or a 6. Some people may be sitting there at a 10. But you need to be able to gauge the magnitude of the risk, and we've never done a good job of saying what exactly the exposure would be if the actual event took place. It's the calculation of those two that tells you how much you should be able to invest in order to protect yourself.

So, I’m not really sure it’s a sense of exposure the people have, as people don’t have a sense of risk management — where am I in this continuum and how much should I invest actually to protect myself from that?

We’re starting to see a little bit of a sea change, because starting with HIPAA-HITECH in 2009, for the first time, regulatory bodies and legislatures have put criminal penalties on companies who have exposures and break-ins associated with them.

So we’re no longer talking about ROI. We’re starting to talk about risk of incarceration , and that changes the game a little bit. You’re beginning to see more and more companies do more in the security space — for example, having a Sarbanes-Oxley event notification to take place.

The answer to the question is that it really depends, and you almost can’t tell, as you look at each individual situation.

Gardner: Mary Ann, it seems like assessment then becomes super-important. In order to assess your situation, you can start to then plan for how to ameliorate it and/or create a strategy to improve, and particularly be ready for the unknown unknowns that are perhaps coming down the pike. When it comes to assessment, what would you recommend for your clients?

Comprehensive view

Mezzapelle: First of all, we need to make sure that they have a comprehensive view. In some cases, it might be a portfolio approach, which is new to most people in the security area. Some of my enterprise customers have more than 150 different security products that they're trying to integrate.

Their issue is around complexity, integration, and just knowing their environment — what level they're at, what they are and aren't protecting, and how that ties to the business. Are you protecting the most important asset? Is it your intellectual property (IP)? Is it your secret sauce recipe? Is it your financial data? Is it your transactions being available 24/7?

And, to Jim’s point, that makes a difference depending on what organization you’re in. It takes some discipline to go back to that InfoSec framework and make sure that you have that foundation in place, to make sure you’re putting your investments in the right way.

Stikeleather: One other piece of it is that it requires an increased amount of business knowledge on the part of the IT group and the security group, to be able to assess where my IP is, which data is my most valuable, and where to put the emphasis.

One of the things that people get confused about is, depending upon which analyst report you read, most data is lost by insiders, most data is lost from external hacking, or most data is lost through email. It really depends. Most IP is lost through email and social media activities. Most data, based upon a recent Verizon study, is being lost by external break-ins.

We’ve kind of always have the one-size-fits-all mindset about security. When you move from just “I’m doing security” to “I’m doing risk mitigation and risk management,” then you have to start doing portfolio and investment analysis in making those kinds of trade-offs.

That’s one of the reasons we have so much complexity in the environment, because every time something happens, we go out, we buy any tool to protect against that one thing, as opposed to trying to say, “Here are my staggered differences and here’s how I’m going to protect what is important to me and accept the fact nothing is perfect and some things I’m going to lose.”

Gardner: Perhaps part of assessing where you are is to look at how things have changed. Jim Hietala, thinking about where we were three or four years ago, what is fundamentally different about how people are approaching security, and about the threats they're facing, from just a few years ago?

Hietala: One of the big changes I've observed is that if you go back a number of years, the sorts of cyber threats that were out there came from curious teenagers and the like. Today, you've got profit-motivated individuals who have perpetrated distributed denial-of-service attacks to extort money. Now, they've gotten more sophisticated and are dropping Trojan horses on CFOs' machines to try to exfiltrate passwords and logins to bank accounts.

We had a case that popped up in our newspaper in Colorado, where a title company lost a million dollars' worth of mortgage money — loans that were in the process of funding. All of a sudden, five homeowners were faced with paying two mortgages, because there was no insurance against that.

When you read through the details of what happened, it was clearly a Trojan horse that had been put on this company's system. Somebody was able to walk off with a million dollars' worth of these people's money.

State-sponsored acts

So you’ve got profit-motivated individuals on the one side, and you’ve also got some things happening from another part of the world that look like they’re state-sponsored, grabbing corporate IP and defense industry and government sites. So, the motivation of the attackers has fundamentally changed and the threat really seems pretty pervasive at this point.

Gardner: Pervasive threat. Is that how you see it, Jim Stikeleather?

Stikeleather: I agree. The threat is pervasive. The only secure computer in the world right now is the one that's turned off in a closet, and that's the nature of it. You have to make decisions about what you're putting online and where you're putting it. It's a big concern that if we don't get better with security, we run the risk of people losing trust in the Internet and trust in the web.

When that happens, we're going to see some really significant global economic concerns. If you think about our economy, it's structured around the way the Internet operates today. If people lose trust in the transactions that are flying across it, then we're all going to be in a pretty bad world of hurt.

Gardner: All right, well I am duly scared. Let’s think about what we can start doing about this. How should organizations rethink security? And is that perhaps the way to do this, Mary Ann? If you say, “Things have changed. I have to change, not only in how we do things tactically, but really at that high level strategic level,” how do you rethink security properly now?

Mezzapelle: It comes back to one of the bottom lines about empowering the business. Jim talked about having that balance. It means that not only do the IT people need to know more about the business, but the business needs to start taking ownership of the security of its own assets, because it is the one that's going to have to bear the loss, whether it's data, financial, or whatever.

They need to really understand what that means, but we as IT professionals need to be able to explain what it means, because it's not common sense. We need to connect the dots and we need to have metrics. We need to look at it from an overall threat point of view, and it will be different based on what company you're a part of.

You need to have your own threat model, who you think the major actors would be and how you prioritize your money, because it’s an unending bucket that you can pour money into. You need to prioritize.
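As a toy illustration of that prioritization, the sketch below ranks a hypothetical threat model by expected loss so the budget goes to the largest exposures first. The actors, likelihoods, and impacts are invented for the example, not anything Mezzapelle specified.

```python
# Hypothetical threat model: rank threat actors by expected loss
# (likelihood x impact) and fund mitigations from the top down.
# All actors and figures are invented for illustration.

threats = [
    {"actor": "organized crime", "likelihood": 0.30, "impact": 5_000_000},
    {"actor": "insider",         "likelihood": 0.10, "impact": 3_000_000},
    {"actor": "hacktivist",      "likelihood": 0.20, "impact": 500_000},
]

# Sort descending by expected loss; spend against the biggest numbers first.
for t in sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    expected_loss = t["likelihood"] * t["impact"]
    print(f'{t["actor"]:>16}: expected annual loss ${expected_loss:,.0f}')
```

Even a crude ranking like this turns "an unending bucket that you can pour money into" into an ordered list with a defensible cutoff.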

Gardner: How would this align with your other technology and business innovation activities? If you’re perhaps transforming your business, if you’re taking more of a focus at the process level, if you’re engaged with enterprise architecture and business architecture, is security a sideline, is it central, does it come first? How do you organize what’s already fairly complex in security with these other larger initiatives?

Mezzapelle: The way that we’ve done that is this is we’ve had a multi-pronged approach. We communicate and educate the software developers, so that they start taking ownership for security in their software products, and that we make sure that that gets integrated into every part of portfolio.

The other part is to have that reference architecture, so that there are common services available to the other services as they're being delivered, and so that we can, if not control it, at least manage it from a central place.

You were asking about how to pay for it. It's like Transformation 101. Most organizations spend about 80 percent of their budget on operations, so they really need to look at their operational spend and reduce that cost to be able to fund the innovation part.

Getting benchmarks

That reduction may not come from security; you may not be spending enough on security. There are several organizations that will give you some kind of benchmark for what other organizations in your particular industry are spending, whether it's 2 percent on the low end for manufacturing up to 10-12 percent for financial institutions.

That can give you a guideline as to where you should start trying to move to. Sometimes, if you can use automation within your other IT service environment, for example, that might free up the cost to fuel that innovation.
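As a back-of-the-envelope illustration of how those benchmarks translate into budget targets: the IT-budget figure below is a placeholder, and reading the percentages as shares of overall IT spend is my assumption, not something stated in the discussion.

```python
# Translate the cited industry security-spend benchmarks into dollar ranges.
# The 2% and 10-12% figures come from the discussion above; the $50M IT
# budget is an invented placeholder.

it_budget = 50_000_000  # hypothetical annual IT spend

benchmarks = {
    "manufacturing (low end)": (0.02, 0.02),
    "financial institutions":  (0.10, 0.12),
}

for industry, (low, high) in benchmarks.items():
    print(f"{industry}: ${it_budget * low:,.0f} to ${it_budget * high:,.0f}")
```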

Stikeleather: Mary Ann makes a really good point. The starting point is really architecture. We're actually at a tipping point in the security space, and it comes from what's taking place in the legal and regulatory environments, with more and more laws being applied to privacy, IP, jurisdictional data location, and a whole series of things that the regulators and the lawyers are putting on us.

One of the things I ask people when we talk to them is: what is the one application that every company in the world has outsourced? They think about it for a minute, and they all say payroll. Nobody does their own payroll anymore. Even the largest companies don't do their own payroll. It's not because it's difficult to run payroll. It's because you can't afford all of the lawyers and accountants necessary to keep up with all of the jurisdictional rules and regulations for every place you operate in.

Data itself is beginning to fall under those types of constraints. In a lot of cases, it’s medical data. For example, Massachusetts just passed a major privacy law. PCI is being extended to anybody who takes credit cards.

The security issue is now also a data governance and compliance issue as well. So, because all these adjacencies are coming together, it’s a good opportunity to sit down and architect with a risk management framework. How am I going to deal with all of this information?

Plus, you have additional funding capability now, because with compliance violations you can actually identify the ROI of avoiding them. The real key to me is people stepping back and saying, "What is my business architecture? What is my risk profile associated with it? What's the value associated with that information? Now, engineer my systems to follow that."

Mezzapelle: You need to be careful that you don't equate compliance with security. There are a lot of organizations that are good at compliance checking, but that doesn't mean they're really protecting against their most vulnerable areas or their largest threats. That's just a word of caution: you need to make sure that you are protecting the right assets.

Gardner: It’s a cliché, but people, process, and technology are also very important here. It seems to me that governance would be an overriding feature of bringing those into some alignment.

Jim Hietala, how should organizations approach these issues with a governance mindset? That is to say, following procedures, enforcing those procedures, reviewing them, and then putting in place the means by which security becomes, in fact, part and parcel of doing business?

Risk management

Hietala: I guess I’d go back to the risk management issue. That’s something that I think organizations frequently miss. There tends to be a lot of tactical security spending based upon the latest widget, the latest perceived threat — buy something, implement it, and solve the problem.

Taking a step back from that, and really understanding what the risks to your business are and what the impacts of bad things happening would be, is doing a proper risk analysis. Risk assessment is what ought to drive decision-making around security. That's a fundamental thing that gets lost a lot in organizations that are trying to grapple with their security problems.

Gardner: Jim Stikeleather, any thoughts about governance as an important aspect to this?

Stikeleather: Governance is a critical aspect. The other piece of it is education. There's an interesting fiction in both law and finance: the fiction of the reasonable, rational, prudent man. If you've done everything a reasonable, rational, and prudent person would have done, then you are not culpable for whatever the event was.

I don’t think we’ve done a good job of educating our users, the business, and even some of the technologists on what the threats are, and what are reasonable, rational, and prudent things to do. One of my favorite things are the companies that make you change your password every month and you can’t repeat a password for 16 or 24 times. The end result is that you get as this little thing stuck on the notebook telling them exactly what the password is.

So, it’s governance, but it’s also education on top of governance. We teach our kids not to cross the street in the middle of the road and don’t talk to strangers. Well, we haven’t quite created that same thing for cyberspace. Governance plus education may even be more important than the technological solutions.

Gardner: One push-back on that is that the rate of change is so rapid and the nature of the risks so dynamic. How does one educate? How do you keep up with that?

Stikeleather: I don’t think that it’s necessary. The technical details of the risks are changing rapidly, but the nature of the risk themselves, the higher level of the taxonomy, is not changing all that much.

If you just introduce safe practices, so to speak, then you're protected up until someone comes up with a totally new way of doing things, and there really hasn't been a lot of that. Everything has been about knowing that you don't put certain data on the system, or if you do, that data is always encrypted. At the level of deep technical detail, yes, things change rapidly. At the level at which a person would exercise caution, I don't think any of that has changed in the last ten years.

Gardner: We’ve now entered into the realm of behaviors and it strikes me also that it’s quite important and across the board. There are behaviors at different levels of the organization. Some of them can be good for ameliorating risk and others would be very bad and prolonged. How do you incentivize people? How do you get them to change their behavior when it comes to security, Mary Ann?

Mezzapelle: The key is to make it personalized to them or their job, and part of that is the education as Jim talked about. You also show them how it becomes a part of their job.

Experts don’t know

I have a little bit different view: it's so complex that even security professionals don't always know what the reasonable, right thing to do is. So, I think it's very unreasonable for us to expect that of our business users, or consumers, or, as I like to say, my mom. I use her as a use case quite a lot: what would she do, how would she react, and would she recognize what happened when she clicked on "Yes, I want to download that antivirus program," which just happened to be a virus?

Part of it is the awareness so that you keep it in front of them, but you also have to make it a part of their job, so they can see that it’s a part of the culture. I also think it’s a responsibility of the leadership to not just talk about security, but make it evident in their planning, in their discussions, and in their viewpoints, so that it’s not just something that they talk about but ignore operationally.

Gardner: One other area I want to touch on is the notion of cloud computing, doing more outsourced services, finding a variety of different models that extend beyond your enterprise facilities and resources.

There’s quite a bit of back and forth about, is cloud better for security or worse for security? Can I impose more of these automation and behavioral benefits if I have a cloud provider or a single throat to choke, or is this something that opens up? I’ve got a sneaking suspicion I am going to hear “It depends” here, Jim Stikeleather, but I am going to go with you anyway. Cloud: I can’t live with it, can’t live without it. How does it work?

Stikeleather: You’re right, it depends. I can argue both sides of the equation. On one side, I’ve argued that cloud can be much more secure. If you think about it, and I will pick on Google, Google can expend a lot more on security than any other company in the world, probably more than the federal government will spend on security. The amount of investment does not necessarily tie to a quality of investment, but one would hope that they will have a more secure environment than a regular company will have.

On the flip side, there are more tantalizing targets, so they're going to draw more sophisticated attacks. I've also argued that you have a statistical probability of break-in. If somebody is trying to break into Google, and you're on Google running Google Apps or something like that, the probability of them getting your specific information is much less than if they attack XYZ enterprise directly. If they break in there, they're going to get your stuff.
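Here is a rough sketch of that statistical argument. All probabilities and tenant counts are invented purely for illustration; they are not claims about any real provider.

```python
# Invented numbers only: compare the chance that *your* data is taken when
# you are one tenant on a large shared platform versus standing alone.

p_provider_breach = 0.5        # assumed chance the big provider is breached at all
tenants_hit_per_breach = 1_000 # assumed tenants whose data is taken per breach
total_tenants = 1_000_000      # assumed tenants on the platform

# Chance a provider breach actually touches your slice of the platform:
p_you_on_cloud = p_provider_breach * (tenants_hit_per_breach / total_tenants)

p_direct_attack = 0.05         # assumed chance attackers breach you directly

print(f"On the shared platform: {p_you_on_cloud:.4%}")  # 0.0500%
print(f"Standing alone:         {p_direct_attack:.2%}") # 5.00%
```

The dilution effect only holds under these assumptions, of course; a targeted attacker who follows you onto the platform changes the arithmetic entirely.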

Recently I was meeting with a lot of NASA CIOs and they think that the cloud is actually probably a little bit more secure than what they can do individually. On the other side of the coin it depends on the vendor. I’ve always admired astronauts, because they’re sitting on top of this explosive device built by the lowest-cost provider. I’ve always thought that took more bravery than anybody could think of. So the other piece of that puzzle is how much is the cloud provider actually providing in terms of security.

You have to do your due diligence, like with everything else in the world. I believe, as we move forward, cloud is going to give us an opportunity to reinvent how we do security.

I’ve often argued that a lot of what we are doing in security today is fighting the last war, as opposed to fighting the current war. Cloud is going to introduce some new techniques and new capabilities. You’ll see more systemic approaches, because somebody like Google can’t afford to put in 150 different types of security. They will put one more integrated. They will put in, to Mary Ann’s point, the control panels and everything that we haven’t seen before.

So, you’ll see better security there. However, in the interim, a lot of the software-as-a-service (SaaS) providers, some of the simpler platform-as-a-service (PaaS) providers haven’t made that kind of investment. You’re probably not as secured in those environments.

Gardner: Mary Ann, do you also see cloud as a catalyst to a better security either from technology process or implementation?

Lowers the barrier

Mezzapelle: For the small and medium size business it offers the opportunity to be more secure, because they don’t necessarily have the maturity of processes and tools to be able to address those kinds of things. So, it lowers that barrier to entry for being secure.

For enterprise customers, cloud solutions need to develop and mature more. They may want to go with a hybrid solution right now, where they have more control, the ability to audit, and more influence over things in specialized contracts, which are not usually the business model for cloud providers.

I would disagree with Jim in some aspects. Just because there is a large provider on the Internet that’s creating a cloud service, security may not have been the key guiding principle in developing a low-cost or free product. So, size doesn’t always mean secure.

You have to know about it, and that’s where the sophistication of the business user comes in, because cloud is being bought by the business user, not by the IT people. That’s another component that we need to make sure gets incorporated into the thinking.

Stikeleather: I'm going to reinforce what Mary Ann said. What's going on in the cloud space is almost a recreation of the late '70s and early '80s, when PCs came into organizations. It's the businesspeople that are acquiring the cloud services, and that again reinforces the concept of governance and education. They need to know what it is they're buying.

I absolutely agree with Mary. I didn’t mean to imply size means more security, but I do think that the expectation, especially for small and medium size businesses, is they will get a more secure environment than they can produce for themselves.

Gardner: Jim Hietala, we’re hearing a lot about frameworks, and governance, and automation. Perhaps even labeling individuals with responsibility for security and we are dealing with some changeable dynamics that move to cloud and issues around cyber security in general, threats from all over. What is The Open Group doing? It sounds like a huge opportunity for you to bring some clarity and structure to how this is approached from a professional perspective, as well as a process and framework perspective?

Hietala: It is a big opportunity. There are a number of different groups within The Open Group doing work in various areas. The Jericho Forum is tackling identity issues as they relate to cloud computing. There will be some new work coming out of them over the next few months that lays out some of the tough issues there and presents some approaches to those problems.

We also have the Trusted Technology Forum (TTF) and the Trusted Technology Provider Framework (TTPF) that are being announced here at this conference. They're looking at supply chain issues related to IT hardware and software products at the vendor level. It's very much an industry-driven initiative and will benefit government buyers, as well as large enterprises, in terms of providing some assurance that the products they're procuring are secure, good commercial products.

Also in the Security Forum, we have a lot of work going on in security architecture and information security management. There are a number of projects aimed at practitioners, providing them the guidance they need to do a better job of securing, whether it's a traditional enterprise IT environment, the cloud, and so forth. Our Cloud Computing Work Group is doing work on a cloud security reference architecture. So, there are a number of different security activities going on in The Open Group related to all this.

Gardner: What have you seen in the field in terms of the development of what we could call the security professional? We've seen the Chief Security Officer, but is there a certification aspect to identifying people as qualified to step in and take on some of these issues?

Certification programs

Hietala: There are a number of certification programs for security professionals out there. There was legislation proposed, I think last year, that would have put some requirements at the federal level around certification of individuals. But the industry is fairly well served by the existing certifications. You've got the CISSP, you've got a number of certifications from SANS and GIAC that get fairly specialized, and there are lots of opportunities today for people to go out and get certified, improving their expertise in a given topic.

Gardner: My last question will go to you on this same issue of certification. If you’re on the business side and you recognize these risks and you want to bring in the right personnel, what would you look for? Is there a higher level of certification or experience? How do you know when you’ve got a strategic thinker on security, Mary Ann?

Mezzapelle: There are the ones Jim talked about, CISSP and CSSLP from (ISC)2, and there's also the CISM, or Certified Information Security Manager, which comes at it from an audit point of view. But I don't think there's a certification that's going to tell you that someone is a strategic thinker. I started out as a technologist, but it's that translation to the business, and it's that strategic planning, applied to a particular area and really brought back to the fundamentals.

Gardner: Does this become then part of enterprise architecture (EA)?

Mezzapelle: It is a part of EA, and, as Jim talked about, we've done some work in The Open Group with the Information Security Management model to extend other business frameworks, like ITIL, into the security space with a little more specificity.

Gardner: Last word to you, Jim Stikeleather, on this issue of how do you get the right people in the job and is this something that should be part and parcel with the enterprise or business architect?

Stikeleather: I absolutely agree with what Mary Ann said. It's like a CPA. You can hire a CPA and they know certain things, but that doesn't guarantee you've got a businessperson. That's where we are with security certifications as well. They give you a comfort level that the fundamental knowledge of the issues and the techniques is there, but you still need someone who has experience.

At the end of the day it’s the incorporation of everything into EA, because you can’t bolt on security. It just doesn’t work. That’s the situation we’re in now. You have to think in terms of the framework of the information that the company is going to use, how it’s going to use it, the value that’s associated with it, and that’s the definition of EA.

Gardner: Well, great. We have been discussing the business risk around cyber security threats and how to perhaps position yourself to do a better job and anticipate some of the changes in the field. I’d like to thank our panelists. We have been joined by Jim Hietala, Vice President of Security for The Open Group. Thank you, Jim.

Hietala: Thank you, Dana.

Gardner: Mary Ann Mezzapelle, Chief Technologist in the Office of the CTO for HP. Thank you.

Mezzapelle: Thanks, Dana.

Gardner: And lastly, Jim Stikeleather, Chief Innovation Officer at Dell Services. Thank you.

Stikeleather: Thank you, Dana.

Gardner: This is Dana Gardner. You've been listening to a sponsored BriefingsDirect podcast in conjunction with The Open Group Conference here in San Diego, the week of February 7, 2011. Thanks to everyone for joining, and come back next time.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.

Comments Off

Filed under Cybersecurity

Open Group conference next week focuses on role and impact of enterprise architecture amid shifting sands for IT and business

By Dana Gardner, Interarbor Solutions

Republished from his blog, BriefingsDirect, originally published Feb. 2, 2011

Next week’s The Open Group Conference in San Diego comes at an important time in the evolution of IT and business. And it’s not too late to attend the conference, especially if you’re looking for an escape from the snow and ice.

From Feb. 7 through 9 at the Marriott San Diego Mission Valley, the 2011 conference is organized around three key themes: architecting cyber security; enterprise architecture (EA) and business transformation; and the business and financial impact of cloud computing. CloudCamp San Diego will be held in conjunction with the conference on Wednesday, Feb. 9. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Registration is open to both members and non-members of The Open Group. For more information, or to register for the conference in San Diego, please visit: http://www.opengroup.org/sandiego2011/register.htm. Registration is free for members of the press and industry analysts.

The Open Group is a vendor- and technology-neutral consortium, whose vision of Boundaryless Information Flow™ will enable access to integrated information within and between enterprises based on open standards and global interoperability.

I’ve found these conferences over the past five years an invaluable venue for meeting and collaborating with CIOs, enterprise architects, standards stewards and thought leaders on enterprise issues. It’s one of the few times when the mix of technology, governance and business interests mingle well for mutual benefit.

The Security Practitioners Conference, being held on Feb. 7, provides guidelines on how to build trusted solutions, how to take into account government and legal considerations, and how to connect architecture with information security management. Confirmed speakers include James Stikeleather, chief innovation officer, Dell Services; Bruce McConnell, cybersecurity counselor, National Protection and Programs Directorate, U.S. Department of Homeland Security; and Ben Calloni, Lockheed Martin Fellow, Software Security, Lockheed Martin Corp.

Change management processes requiring an advanced, dynamic and resilient EA structure will be discussed in detail during The Enterprise Architecture Practitioners Conference on Feb. 8. The Cloud Computing track, on Feb. 9, includes sessions on the business and financial impact of cloud computing; cloud security; and how to architect for the cloud — with confirmed speakers Steve Else, CEO, EA Principals; Pete Joodi, distinguished engineer, IBM; and Paul Simmonds, security consultant, the Jericho Forum.

General conference keynote presentation speakers include Dawn Meyerriecks, assistant director of National Intelligence for Acquisition, Technology and Facilities, Office of the Director of National Intelligence; David Mihelcic, CTO, the U.S. Defense Information Systems Agency; and Jeff Scott, senior analyst, Forrester Research.

I’ll be moderating an on-stage panel on Wednesday on the considerations that must be made when choosing a cloud solution — custom or “shrink-wrapped” — and whether different forms of cloud computing are appropriate for different industry sectors. The tension between plain cloud offerings and enterprise demands for customization is bound to build, and we’ll work to find a better path to resolution.

I’ll also be hosting and producing a set of BriefingsDirect podcasts at the conference, on such topics as the future of EA groups, EA maturity and future roles, security risk management, and on the new Trusted Technology Forum (OTTF) established in December. Look for those podcasts, blog summaries and transcripts here over the next few days and weeks.

For the first time, The Open Group Photo Contest will encourage the members and attendees to socialize, collaborate and share during Open Group conferences, as well as document and share their favorite experiences. Categories include best photo on the conference floor, best photo of San Diego, and best photo of the conference outing (dinner aboard the USS Midway in San Diego Harbor). The winner of each category will receive a $125 Amazon gift card. The winners will be announced on Monday, Feb. 14 via social media communities.

It’s not too late to join in, or to plan to look for the events and presentations online. Registration is open to both members and non-members of The Open Group. For more information, or to register for the conference in San Diego please visit:http://www.opengroup.org/sandiego2011/register.htm. Registration is free for members of the press and industry analysts.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.

2 Comments

Filed under Uncategorized

What’s the future of information security?

Today, Jan. 28, is Data Privacy Day around the world. While it’s meant to bring attention to personal privacy, it’s also a good time to think about organizational and global challenges relating to data security.

What is your organization’s primary cybersecurity challenge? Take our poll below, and read on to learn about some of The Open Group’s resources for security professionals.

The Open Group has several active working groups and forums dealing with various areas of information security. If your organization is in need of guidance or fresh thinking on information security challenges, we invite you to check out some of these security resources (all of which may be accessed at no charge):

  • The Open Group Jericho Forum®. Many useful guidance documents on topics including the Jericho Commandments (design principles), de-perimeterization, cloud security, secure collaboration, and identity management are available on The Open Group website.
  • Many of the Jericho Forum® members share their thoughts on a blog hosted by Computerworld UK.
  • The Open Group Security Forum: Access a series of documents on the topic of risk management published by the Security Forum over the past couple of years. These include the Risk Management Taxonomy Technical Standard, Requirements for Risk Assessment Methodologies, and the FAIR / ISO 27005 Cookbook. These and other useful publications may be accessed by searching for subject = security on our website’s publications page.

Cybersecurity will be a major topic at The Open Group Conference, San Diego, Feb. 7-11. Join us for plenary sessions on security, security-themed tracks, best practices, case studies and the future of information security, presented by preeminent thought leaders in the industry.

Comments Off

Filed under Cybersecurity, Information security

The Trusted Technology Forum: Best practices for securing the global technology supply chain

By Mary Ann Davidson, Oracle

Hello, I am Mary Ann Davidson. I am the Chief Security Officer for Oracle, and I want to talk about The Open Group Trusted Technology Provider Framework (O-TTPF). What, you may ask, is that? The Trusted Technology Forum (OTTF) is an effort within The Open Group to develop a body of practices related to software and hardware manufacturing — the O-TTPF — that will address procurers' supply chain risk management concerns.

That’s a mouthful, isn’t it? Putting it in layman’s terms, if you are an entity purchasing hardware and software for mission-critical systems, you want to know that your supplier has reasonable practices as to how they build and maintain their products that addresses specific (and I would argue narrow, more on which below) supply chain risks. The supplier ought to be doing “reasonable and prudent” practices to mitigate those risks and to be able to tell their buyers, “here is what I did.” Better industry practices related to supply chain risks with more transparency to buyers are both, in general, good things.

Real-world solutions

One of the things I particularly appreciate is that the O-TTPF is being developed by, among others, actual builders of software and hardware. So many of the “supply chain risk frameworks” I’ve seen to date appear to have been developed by people who have no actual software development and/or hardware manufacturing expertise. I think we all know that even well-intended and smart people without direct subject matter experience who want to “solve a problem” will often not solve the right problem, or will mandate remedies that may be ineffective, expensive and lack the always-needed dose of “real world pragmatism.”  In my opinion, an ounce of “pragmatic and implementable” beats a pound of “in a perfect world with perfect information and unlimited resources” any day of the week.

I know this from my own program management office in software assurance. When my team develops good ideas to improve software, we always vet them by our security leads in development, to try to achieve consensus and buy-in in some key areas:

  • Are our ideas good?
  • Can they be implemented?  Specifically, is our proposal the best way to solve the stated problem?
  • Given the differences in development organizations and differences in technology, is there a body of good practices that development can draw from rather than require a single practice for everyone?

That last point is a key one. There is almost never a single "best practice" that everybody on the planet should adhere to in almost any area of life. The reality is that there are often a number of ways to get to a positive outcome, and the nature of business – particularly, the competitiveness and innovation that enable business – depends on flexibility. The OTTF is outcomes-focused and "body of practice" oriented, because there is no single best way to build hardware and software, and there is no single, monolithic supply chain risk management practice that will work for everybody or is appropriate for everybody.

BakingIt’s perhaps a stretch, but consider baking a pie. There is – last time I checked – no International Organization for Standardization (ISO) standard for how to bake a cherry pie (and God forbid there ever is one). Some people cream butter and sugar together before adding flour. Other people dump everything in a food processor. (I buy pre-made piecrusts and skip this step.) Some people add a little liqueur to the cherries for a kick, other people just open a can of cherries and dump it in the piecrust. There are no standards organization smack downs over two-crust vs. one-crust pies, and whether to use a crumble on the top or a pastry crust to constitute a “standards-compliant cherry pie.” Pie consumers want to know that the baker used reasonable ingredients – piecrust and cherries – that none of the ingredients were bad and that the baker didn’t allow any errant flies to wander into the dough or the filling. But the buyer should not be specifying exactly how the baker makes the pie or exactly how they keep flies out of the pie (or they can bake it themselves). The only thing that prescribing a single “best” way to bake a cherry pie will lead to is a chronic shortage of really good cherry pies and a glut of tasteless and mediocre ones.

Building on standards

Another positive aspect of the O-TTPF is that it is intended to build upon and incorporate existing standards – such as the international Common Criteria – rather than replace them. Incorporating and referring to existing standards is important because supply chain risk is not the same thing as software assurance — though they are related. For example, many companies evaluate one or more products, but not all products they produce. Therefore, even to the extent their CC evaluations incorporate a validation of the "security of the software development environment," it is related to a product, and not necessarily to the overall corporate development environment. More importantly, one of the best things about the Common Criteria is that it is an existing ISO standard (ISO/IEC 15408:2005) and, thanks to the Common Criteria recognition arrangement (CCRA), a vendor can do a single evaluation accepted in many countries. Having to reevaluate the same product in multiple locations – or having to do a "supply chain certification" that covers the same sorts of areas that the CC covers – would be wasteful and expensive. The O-TTPF builds on but does not replace existing standards.

Another positive: the focus I see on "solving the right problems." Too many supply chain risk discussions fail to define "supply chain risk" and, in particular, define every possible concern with a product as a supply chain risk. (If I buy a car that turns out to be a lemon, is it a supply chain risk problem? Or just a "lemon"?) For example, consider a system integrator who took a bunch of components and glued them together without delivering the resultant system in a locked-down configuration. The weak configuration is not, per se, a supply chain risk, though arguably it is poor security practice, and I'd also say it's a weak software assurance practice. With regard to OTTF, we defined "supply chain attack" as (paraphrased) an attempt to deliberately subvert the manufacturing process, rather than exploiting defects that happened to be in the product. Every product has defects, some are security defects, and some of those are caused by coding errors. That is profoundly different from someone putting a back door in code. The former is a software assurance problem; the latter is a supply chain attack.

Why does this matter? Because supply chain risk – real supply chain risk, not every single concern either a vendor or a customer could have about a product – needs focus to be able to address the concern. As has been said about priorities, if everything is priority number one, then nothing is. In particular, if everything is "a supply chain risk," then we can't focus our efforts and home in on a reasonable, achievable, practical and implementable set – "set" meaning "multiple avenues that lead to positive outcomes" – of practices that can lead to better supply chain practices for all, and a higher degree of confidence among purchasers.

Considering the nature of the challenges that OTTF is trying to address, and the nature of the challenges our industry faces, I am pleased that Oracle is participating in the OTTF. I look forward to working with peers – and consumers of technology – to help improve everyone's supply chain risk management practices and the confidence of consumers of our technologies.

Mary Ann Davidson is the Chief Security Officer at Oracle Corporation, responsible for Oracle product security, as well as security evaluations, assessments and incident handling. She has been named one of Information Security's top five "Women of Vision," is a Fed100 award recipient from Federal Computer Week, and was recently named to the Information Systems Security Association Hall of Fame. She has testified on the issue of cybersecurity multiple times to the US Congress. Ms. Davidson has a B.S.M.E. from the University of Virginia and an M.B.A. from the Wharton School of the University of Pennsylvania. She has also served as a commissioned officer in the U.S. Navy Civil Engineer Corps. She is active in The Open Group Trusted Technology Forum and writes a blog at Oracle.

6 Comments

Filed under Cybersecurity, Supply chain risk