
The Open Group Baltimore 2015 Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

The Open Group Baltimore 2015, Enabling Boundaryless Information Flow™, July 20-23, was held at the beautiful Hyatt Regency Inner Harbor. Over 300 attendees from 16 countries, including China, Japan, the Netherlands and Brazil, attended this agenda-packed event.

The event kicked off on July 20th with a warm Open Group welcome by Allen Brown, President and CEO of The Open Group. The first plenary speaker was Bruce McConnell, Senior VP, EastWest Institute, whose presentation, “Global Cooperation in Cyberspace”, gave a behind-the-scenes look at global cybersecurity issues. Bruce focused on US-China cyber cooperation, major threats and what the US is doing about them.

Allen then welcomed Christopher Davis, Professor of Information Systems, University of South Florida, to The Open Group Governing Board as an Elected Customer Member Representative. Chris also serves as Chair of The Open Group IT4IT™ Forum.

The plenary continued with a joint presentation, “Can Cyber Insurance Be Linked to Assurance”, by Larry Clinton, President & CEO, Internet Security Alliance, and Dan Reddy, Adjunct Faculty, Quinsigamond Community College, MA. The speakers emphasized that cybersecurity is not simply an IT issue. They stated that there are currently 15 billion mobile devices and that there will be 50 billion within 5 years. Organizations and governments need to prepare for new vulnerabilities and the explosion of the Internet of Things (IoT).

The plenary culminated with a panel “US Government Initiatives for Securing the Global Supply Chain”. Panelists were Donald Davidson, Chief, Lifecycle Risk Management, DoD CIO for Cybersecurity, Angela Smith, Senior Technical Advisor, General Services Administration (GSA) and Matthew Scholl, Deputy Division Chief, NIST. The panel was moderated by Dave Lounsbury, CTO and VP, Services, The Open Group. They discussed the importance and benefits of ensuring product integrity of hardware, software and services being incorporated into government enterprise capabilities and critical infrastructure. Government and industry must look at supply chain, processes, best practices, standards and people.

All sessions concluded with Q&A moderated by Allen Brown and Jim Hietala, VP, Business Development and Security, The Open Group.

Afternoon tracks (11 presentations) covered various topics, including Information & Data Architecture and EA & Business Transformation. The Risk, Dependability and Trusted Technology theme also continued. Jack Daniel, Strategist, Tenable Network Security, shared “The Evolution of Vulnerability Management”. Michele Goetz, Principal Analyst at Forrester Research, presented “Harness the Composable Data Layer to Survive the Digital Tsunami”, a session aimed at helping data professionals understand how Composable Data Layers set digital and the Internet of Things up for success.

The evening featured a Partner Pavilion and Networking Reception. The Open Group Forums and Partners hosted short presentations and demonstrations while guests enjoyed the reception. Areas of focus were Enterprise Architecture, Healthcare, Security, Future Airborne Capability Environment (FACE™), IT4IT™ and Open Platform 3.0™.

Exhibitors in attendance were Esterel Technologies, Wind River, RTI and SimVentions.

Partner Pavilion – The Open Group Open Platform 3.0™

On July 21, Allen Brown began the plenary with the great news that Huawei has become a Platinum Member of The Open Group. Huawei joins our other Platinum Members Capgemini, HP, IBM, Philips and Oracle.

Allen Brown, Trevor Cheung, Chris Forde

Trevor Cheung, VP Strategy & Architecture Practice, Huawei Global Services, will be joining The Open Group Governing Board. Trevor posed the question, “What can we do to combine The Open Group and IT aspects to make a customer experience transformation?” His presentation, entitled “The Value of Industry Standardization in Promoting ICT Innovation”, addressed the “ROADS Experience”. ROADS is an acronym for Real Time, On-Demand, All Online, DIY and Social, qualities that need to be defined across all industries. Trevor also discussed bridging the gap: the importance of combining Customer Experience (customer needs, strategy, business needs) and Enterprise Architecture (business outcomes, strategies, systems, process innovation). EA plays a key role in digital transformation.

Allen then presented The Open Group Forum updates. He shared roadmaps which include schedules of snapshots, reviews, standards, and publications/white papers.

Allen also provided a sneak peek of results from our recent survey on TOGAF®, an Open Group standard. TOGAF® 9 is currently available in 15 different languages.

Next speaker was Jason Uppal, Chief Architect and CEO, iCareQuality, on “Enterprise Architecture Practice Beyond Models”. Jason emphasized that the goal is “Zero Patient Harm” and stressed the importance of Open CA Certification. He also noted that Enterprise Architects play many roles, and that those roles are always changing.

Joanne MacGregor, IT Trainer and Psychologist, Real IRM Solutions, gave a very interesting presentation entitled “You can Lead a Horse to Water… Managing the Human Aspects of Change in EA Implementations”. Joanne discussed managing, implementing and maintaining change, and shared an in-depth analysis of the psychology of change.

“Outcome Driven Government and the Movement Towards Agility in Architecture” was presented by David Chesebrough, President, Association for Enterprise Information (AFEI). “IT Transformation reshapes business models, lean startups, web business challenges and even traditional organizations”, stated David.

Questions from attendees were addressed after each session.

In parallel with the plenary was the Healthcare Interoperability Day. Speakers from a wide range of Healthcare industry organizations, such as ONC, AMIA and Healthway, shared their views and vision on how IT can improve the quality and efficiency of the Healthcare enterprise.

Before the plenary ended, Allen made another announcement: he is stepping down as President and CEO in April 2016, after more than 20 years with The Open Group, including the last 17 as CEO. After conducting a process to choose his successor, The Open Group Governing Board has selected Steve Nunn as his replacement; Steve will assume the role in November of this year. Steve is the current COO of The Open Group and CEO of the Association of Enterprise Architects. Please see the press release here.

Steve Nunn, Allen Brown

Afternoon track topics comprised EA Practice & Professional Development and Open Platform 3.0™.

After a very informative and productive day of sessions, workshops and presentations, event guests were treated to a dinner aboard the USS Constellation, just a few minutes’ walk from the hotel. The USS Constellation, constructed in 1854, is a sloop-of-war, the second US Navy ship to carry the name, and is designated a National Historic Landmark.

USS Constellation

On Wednesday, July 22, tracks continued: TOGAF® 9 Case Studies and Standard, EA & Capability Training, Knowledge Architecture and IT4IT™ – Managing the Business of IT.

Thursday consisted of members-only meetings, which are closed sessions.

A special “thank you” goes to our sponsors and exhibitors: Avolution, SNA Technologies, BiZZdesign, Van Haren Publishing, AFEI and AEA.

Check out all the Twitter conversation about the event – @theopengroup #ogBWI

Event proceedings for all members and event attendees can be found here.

Hope to see you at The Open Group Edinburgh 2015 October 19-22! Please register here.

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog, media relations and social media. Loren has over 20 years’ experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.


Filed under Accreditations, Boundaryless Information Flow™, Cybersecurity, Enterprise Architecture, Enterprise Transformation, Healthcare, Internet of Things, Interoperability, Open CA, Open Platform 3.0, Security, Security Architecture, The Open Group Baltimore 2015, TOGAF®

Securing Business Operations and Critical Infrastructure: Trusted Technology, Procurement Paradigms, Cyber Insurance

Following is the transcript of an Open Group discussion on ways to address supply chain risk in the information technology sector marketplace.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Sponsor: The Open Group

Dana Gardner: Hello, and welcome to a special Thought Leadership Panel Discussion, coming to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator as we explore ways to address supply chain risk in the information technology sector market.

We’ll specifically examine how The Open Group Trusted Technology Forum (OTTF) standards and accreditation activities are enhancing the security of global supply chains and improving the integrity of openly available IT products and components.

We’ll also learn how the age-old practice of insurance is coming to bear on the problem of IT supply-chain risk, and how, by leveraging insurance models, the specter of supply-chain disruption and security breaches may be significantly reduced.

To update us on the work of the OTTF and explain the workings and benefits of supply-chain insurance, we’re joined by our panel of experts. Please join me in welcoming Sally Long, Director of The Open Group Trusted Technology Forum. Welcome, Sally.

Sally Long: Thank you.

Gardner: We’re also here with Andras Szakal, Vice President and Chief Technology Officer for IBM U.S. Federal and Chairman of The Open Group Trusted Technology Forum. Welcome back, Andras.

Andras Szakal: Thank you for having me.

Gardner: And Bob Dix joins us. He is Vice President of Global Government Affairs and Public Policy for Juniper Networks and is a member of The Open Group Trusted Technology Forum. Welcome, Bob.

Bob Dix: Thank you for the invitation. Glad to be here.

Gardner: Lastly, we are joined by Dan Reddy, Supply Chain Assurance Specialist, college instructor and Lead of The Open Group Trusted Technology Forum Global Outreach and Standards Harmonization Work Group. Thanks for being with us, Dan.

Dan Reddy: Glad to be here, Dana.

Gardner: Sally, let’s start with you. Why don’t we just get a quick update on The Open Group Trusted Technology Forum (OTTF) and the supply-chain accreditation process generally? What has been going on?

O-TTPS standard

Long: For some of you who might not have heard of the O-TTPS, which is the standard, it’s called The Open Trusted Technology Provider™ Standard. The effort started with an initiative in 2009, a roundtable discussion with the U.S. government and several ICT vendors, on how to identify trustworthy commercial off-the-shelf (COTS) information and communication technology (ICT), basically driven by the fact that governments were moving away from high-assurance customized solutions and more and more toward using COTS ICT.

That ad-hoc group formed the OTTF under The Open Group and proceeded to deliver a standard and an accreditation program.

The standard really provides a set of best practices to be used throughout the COTS ICT product life cycle. That’s both during in-house development, as well as with outsourced development and manufacturing, including the best practices to use for security in the supply chain, encompassing all phases from design to disposal.

Just to bring you up to speed on some of the milestones we’ve had: we released the 1.0 version of the standard in 2013, launched our accreditation program to help assure conformance to the standard in February 2014, and then in July we released the 1.1 version of the standard. We have now submitted that version to ISO for approval as a publicly available specification (PAS), which is a fast track for ISO.

The PAS is a process for adopting standards developed in other standards development organizations (SDOs), and the O-TTPS has passed the draft ISO ballot. Now, it’s coming up for final ballot.

That should bring folks up to speed, Dana, and let them know where we are today.

Gardner: Is there anything in particular at The Open Group Conference in Baltimore, coming up in July, that pertains to these activities? Is this something that’s going to be more than just discussed? Is there something of a milestone nature here too?

Long: Monday, July 20, is the Cyber Security Day of the Baltimore Conference. We’re going to be meeting in the plenary with many of the U.S. government officials from NIST, GSA, and the Department of Homeland Security. So there is going to be a big plenary discussion on cyber security and supply chain.

We’ll also be meeting separately as a member forum, but the whole open track on Monday will be devoted to cyber security and supply chain security.

The one milestone that might coincide is that we’re publishing our Chinese translation version of the standard 1.1 and we might be announcing that then. I think that’s about it, Dana.

OTTF background

Gardner: Andras, for the benefit of our listeners and readers who might be new to this concept, perhaps you could fill us in on the background on the types of problems that OTTF and the initiatives and standards are designed to solve. What’s the problem that we need to address here?

Szakal: That’s a great question. We realized, over the last 5 to 10 years, that the traditional supply-chain management and supply-chain integrity practices (ensuring the integrity of the delivery of a product to the end customer, ensuring that it wasn’t tampered with, and effectively managing our suppliers to ensure they provided us with quality components) really had expanded as a result of the adoption of technology and the pervasive growth of technology in all aspects of manufacturing, but especially as IT has expanded into the Internet of Things, critical infrastructure and mobile technologies, and now obviously cloud and big data.

And as we manufacture those IT products we have to recognize that now we’re in a global environment, and manufacturing and sourcing of components occurs worldwide. In some cases, some of these components are even open source or freely available. We’re concerned, obviously, about the lineage, but also the practices of how these products are manufactured from a secure engineering perspective, as well as the supply-chain integrity and supply-chain security practices.

What we’ve recognized here is that the traditional life cycle of supply-chain security and integrity has expanded to include everything from the design aspects of the product, through sustainment and managing that product over a period of time, from cradle to grave, and on to disposal of the product, to ensure that those components, if they were hardware-based, don’t end up recycled in a way that poses a threat to our customers.

Gardner: So it’s as much a lifecycle as it is a procurement issue.

Szakal: Absolutely. When you talk about procurement, you’re talking about lifecycle and about mitigating risks to those two different aspects from sourcing and from manufacturing.

So from the customer’s perspective, they need to be considering how they actually apply techniques to ensure that they are sourcing from authorized channels, that they are also applying the same techniques that we use for secure engineering when they are doing the integration of their IT infrastructure.

But from a development perspective, it’s ensuring that we’re applying secure engineering techniques, that we have a well-defined baseline for our life cycle, and that we’re controlling our assets effectively. We understand who our partners are and we’re able to score them and ensure that we’re tracking their integrity and that we’re applying new techniques around secure engineering, like threat analysis and risk analysis to the supply chain.

We’re understanding the current risk landscape and applying techniques like vulnerability analysis and runtime protection techniques that would allow us to mitigate many of these risks as we build out our products and manufacture them.

It goes all the way through sustainment. You probably recognize now, as most people would, that your product is no longer a shrink-wrapped product that you get, install, and live with for a year or two before you update it. It’s constantly being updated. So ensuring that the integrity and delivery of that update is consistent with the principles that we are trying to espouse is also really important.

Collaborative effort

Gardner: And to that point, no product stands alone. It’s really the result of a collaborative effort, a very complex set of systems coming together. Not only are standards necessary, but cooperation among all the players in that ecosystem becomes necessary.

Dan Reddy, how have we done in terms of getting mutual assurance across a supply chain that all the participants are willing to take part? It seems to me that, if there is a weak link, everyone would benefit by shoring that up. So how do we go beyond the standards? How are we getting cooperation, getting all the parties interested in contributing and being part of this?

Reddy: First of all, it’s an evolutionary process, and we’re still in the early days of fully communicating what the best practices are, what the standards are, and getting people to understand how that relates to their place in the supply chain.

Certainly, the supplier community would benefit by following some common practices so they don’t wind up answering customized survey questions from all of their customers.

That’s what’s happening today. It’s pretty much a one-off situation, where each customer says, “I need to protect my supply chain. Let me go find out what all of my suppliers are doing.” The real benefit here is to have the common language of the requirements in our standard and a way to measure it.

So there should be an incentive for the suppliers to take a look at that and say, “I’m tired of answering these individual survey questions. Maybe if I just document my best practices, I can avoid some of the effort that goes along with that individual approach.”

Everyone needs to understand that value proposition across the supply chain. Part of what we’re trying to do with the Baltimore conference is to talk to some thought leaders and continue to get the word out about the value proposition here.

Gardner: Bob Dix, the government in the U.S., and of course across the globe, all the governments, are major purchasers of technology and also have a great stake in security and low risk. What’s been driving some of the government activities? Of course, they’re also interested in using off-the-shelf technology and cutting costs. So what’s the role that governments can play in driving some of these activities around the OTTF?

Risk management

Dix: This issue of supply chain assurance and cyber security is all about risk management, and it’s a shared responsibility. For too long I think that the government has had a tendency to want to point a finger at the private sector as not sufficiently attending to this matter.

The fact is, Dana, that many in the private sector make substantial investments in their product integrity program, as Andras was talking about, from product conception, to delivery, to disposal. What’s really important is that when that investment is made and when companies apply the standard the OTTF has put forward, it’s incumbent upon the government to do their part in purchasing from authorized and trusted sources.

In today’s world, we still have a culture, pervasive across the government acquisition community, where decision-making on procurements is often driven by cost and schedule, and product authenticity, assurance, and security are not necessarily part of that equation. It’s driven in many cases by budgets and other considerations, but nonetheless, we must change that culture to include authenticity and assurance as part of the decision-making process.

The result of focusing on cost and schedule is that those acquisitions are often made from untrusted and unauthorized sources, which raises the risk of acquiring counterfeit, tainted, or even malicious equipment.

Part of the work of the OTTF is to present to all stakeholders, in industry and government alike, that there is a process that can be uniform, as has been stated by Sally and Dan as well, that can be applied in an environment to raise the bar of authenticity, security, and assurance to improve upon that risk management approach.

Gardner: Sally, we’ve talked about where you’re standing in terms of some progress in your development around these standards and activities. We’ve heard about the challenges and the need for improvement.

Before we talk about this really interesting concept of insurance that would come to bear on perhaps encouraging standardization and giving people more ways to reduce their risk and adhere to best practices, what do you expect to see in a few years? If things go well and if this is adopted widely and embraced in true good practices, what’s the result? What do we expect to see as an improvement?

What I am trying to get at here is that if there’s a really interesting golden nugget to shoot for, a golden ring to grab for, what is that we can accomplish by doing this well?

Powerful impact

Long: The most important and significant aspect of the accreditation program is when you look at the holistic nature of the program and how it could have a very powerful impact if it’s widely adopted.

The idea of an accreditation program is that a provider gets accredited for conforming to the best practices. A provider that can get accredited could be an integrator, an OEM, the component suppliers of hardware and software that provide the components to the OEM, and the value-add resellers and distributors.

Every important constituent in that supply chain could be accredited. So not only from a business perspective is it important for governments and commercial customers to look on the Accreditation Registry and see who has been accredited for the integrators they want to work with or for the OEMs they want to work with, but it’s also important and beneficial for OEMs to be able to look at that register and say, “These component suppliers are accredited. So I’ll work with them as business partners.” It’s the same for value-add resellers and distributors.

It builds in these real business-market incentives to make the concept work, and in the end, of course, the ultimate goal of having a more secure supply chain and more products with integrity will be achieved.

To me, that is one of the most important aspects that we can reach for, especially if we reach out internationally. What we’re starting to see internationally is that localized requirements are cropping up in different countries. What that’s going to mean is that vendors need to meet those different requirements, increasing their cost, and sometimes even there will end up being trade barriers.

Back to what Dan and Bob were saying, we need to look at this global standard and accreditation program that already exists. It’s not in development; we’ve been working on it for five years with consensus from many, many of the major players in the industry and government. So urging global adoption of what already exists and what could work holistically is really an important objective for our next couple of years.

Gardner: It certainly sounds like a win, win, win if everyone can participate, have visibility, and get designated as having followed through on those principles. But as you know and as you mentioned, it’s the marketplace. Economics often drives business behavior. So in addition to a standards process and the definitions being available, what is it about this notion of insurance that might be a parallel market force that would help encourage better practices and ultimately move more companies in this direction?

Let’s start with Dan. Explain to me how cyber insurance, as it pertains to the supply chain, would work?

Early stages

Reddy: It’s an interesting question. The cyber insurance industry is still in the early stages, even though it goes back to the ’70s, where crime insurance started applying to outsiders gaining physical access to computer systems. You didn’t really see the advent of hacker insurance policies until the late ’90s. Then, starting in 2000, some of the first forms of cyber insurance covering first and third party started to appear.

What we’re seeing today is primarily related to the breaches that we hear about in the paper every day, where some organization has been compromised, and sensitive information, like credit card information, is exposed for thousands of customers. The remediation is geared toward the companies that have to pay the claim and sign people up for identity protection. It’s pretty cut and dried. That’s the wave that the insurance industry is riding right now.

What I see is that as attacks get to be more sophisticated and potentially include attacks on the supply chain, it’s going to represent a whole new area for cyber insurance. Having consistent ways to address supplier-related risk, as well as the other infrastructure related risks that go beyond simple data breach, is going to be where the marketplace has to make an adjustment. Standardization is critical there.

Gardner: Andras, how does this work in conjunction with OTTF? Would insurance companies begin their risk assessment by making sure that participants in the supply chain are already adhering to your standards and seeking accreditation? Then, maybe they would have premiums that would reflect the diligence that companies extend into their supply chains. Maybe you could just explain to me, not just the insurance, but how it would work in conjunction with OTTF, maybe to each’s mutual benefit.

Szakal: You made a really great point earlier about the economic element that would drive compliance. For us in IBM, the economic element is the ability to prove that we’re providing the right assurance that is being specified in the requests for proposals (RFPs), not only in the federal sector, but outside the federal sector in critical infrastructure and finance. We continue to win those opportunities, and that’s driven our compliance, as well as the government policy aspect worldwide.

But from an insurance point of view, insurance comes in two forms. I buy policy insurance in a case where there are risks that are out of my control, and I apply protective measures that are under my control. So in the case of the supply chain, the OTTF is a set of practices that help you gain control and lower the risk of threat in the manufacturing process.

The question is, do you buy a policy, and what’s the balance here between a cyber threat that is in your control and those aspects of supply-chain security which are out of your control? This is with the understanding that there isn’t an infinite amount of resources or revenue that you can allocate to both of these aspects.

There’s going to have to be a balance, and it really is going to be case by case, with respect to customers and manufacturers, as to whether you cover the potential loss of intellectual property (IP) with insurance versus applying controls. Those resources are better applied where you actually have control, versus policies that are protecting you against things that are out of your control.

For example, you might buy a policy for providing code to a third party, which has high value IP to manufacture a component. You have to share that information with that third-party supplier to actually manufacture that component as part of the overarching product, but with the realization that if that third party is somehow hacked or intruded on and that IP is stolen, you have lost some significant amount of value. That will be an area where insurance would be applicable.

What’s working

Gardner: Bob Dix, if insurance comes to bear in conjunction with standards like what the OTTF is developing in supply chain assurance, it seems to me that the insurance providers themselves would be in a position of gathering information for their actuarial decisions and could be a clearing house for what’s working and what isn’t working.

It would be in their best interest to then share that back into the marketplace in order to reduce the risk. That’s a market-driven, data-driven approach that could benefit everyone. Do you see the advent of insurance as a benefit or accelerant to improvement here?

Dix: It’s a tool. This is a conversation that’s been going on in the community for quite some time: the lack of actuarial data for catastrophic losses produced by cyber events is impacting some of the rate setting and premium setting by insurance companies, and that has continued to be a challenge.

But from an incentive standpoint, it’s just like in your home. If you have an alarm system, if you have a fence, if you take other kinds of protective measures, your homeowners or liability insurance may get a reduction in premium for those actions that you have taken.

As an incentive, the opportunity to have an insurance policy to either transfer or buy down risk can be driven by the type of controls that you have in your environment. The standard that the OTTF has put forward provides guidance about how best to accomplish that. So, there is an opportunity to leverage, as an incentive, the reduction in premiums for insurance to transfer or buy down risk.
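
To make the “transfer or buy down risk” idea concrete, here is a purely hypothetical illustration; the figures are invented for this example and do not come from the discussion. A common way to quantify exposure is annualized loss expectancy:

Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE) × Annualized Rate of Occurrence (ARO)

Suppose a supply-chain incident would cost an organization roughly $2,000,000 (SLE) and is expected about once every four years (ARO = 0.25), giving an ALE of $500,000. Investing in controls of the kind the O-TTPS describes might halve the expected frequency, buying the risk down to an ALE of $250,000, and an insurer that recognizes those controls could then cover much of the remaining exposure at a lower premium, transferring part of what is left.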

Gardner: It’s interesting, Sally, that the insurance industry could benefit from OTTF, and by having more insurance available in the marketplace, it could encourage more participation and make the standard even more applicable and valuable. So it’s interesting to see over time how that plays out.

Any thoughts or comments on the relationship between what you are doing at OTTF and The Open Group and what the private insurance industry is moving toward?

Long: I agree with what everyone has said. It’s an up-and-coming field, and there is a lot more focus on it. I hear at every conference I go to, there is a lot more research on cyber security insurance. There is a place for the O-TTPS in terms of buying down risk, as Bob was mentioning.

The other thing that’s interesting is the NIST Cybersecurity Framework. That whole paradigm started out with the fact that there would be incentives for those that followed the NIST Cybersecurity Framework – that incentive piece became very hard to pull together, and still is. To my knowledge, there are no incentives yet associated with it. But insurance was one of the ideas they talked about for incentivizing adopters of the CSF.

The other thing that is interesting, which I think will come out of one of the presentations that Dan and Larry Clinton will be giving at our Baltimore Conference, is that insurers are looking for simplicity. They don’t want to go into a client’s environment and have them prove that they are doing all of these things required of them, or have them fill out a long checklist.

That’s why, in terms of simplicity, asking for O-TTPS-accredited providers, or lowering their rates based on that accreditation, would be a very straightforward approach, but again, that’s not here yet. As Bob said, it’s been talked about a lot for a long time, but I think it is coming to the fore.

Market of interest

Gardner: Dan Reddy, back to you. When there is generally a large addressable market of interest in a product or service, there often rises a commercial means to satisfy that. How can enterprises, the people who are consuming these products, encourage acceptance of these standards, perhaps push for a stronger insurance capability in the marketplace, or also get involved with some of these standards and practices that we have been talking about?

If you’re a publicly traded company, you would want to reduce your exposure and be able to claim accreditation and insurance as well. Let’s look at this from the perspective of the enterprise. What should and could they be doing to improve on this?

Reddy: I want to link back to what Sally said about the NIST Cyber Security Framework. What’s been very useful in publishing the Framework is that it gives enterprises a way to talk about their overall operational risk in a consistent fashion.

I was at one of the workshops sponsored by NIST where enterprises that had adopted it talked about what they were doing internally in their own enterprises in changing their practices, improving their security, and using the language of the framework to address that.

Yet, when they talked about one aspect of their risk, their supplier risk, they were trying to send the NIST Cybersecurity Framework risk questions to their suppliers, and those questions aren’t really sufficient. They’re interesting. You care about the enterprise of your supplier, but you really care about the products of your supplier.

So one of the things that the OTTF did is look at the requirements in our standard related to suppliers and link them specifically to the same operational areas that were included in the NIST Cybersecurity Framework.

This gives the standard enterprise looking at risk, trying to do standard things, a way to use the language of our requirements in the standard and the accreditation program as a form of measurement to see how that aspect of supplier risk would be addressed.

But remember, cyber insurance is more than just the risk of suppliers. It’s the risk at the enterprise level. But the attacks are going to change over time, and we’ll go beyond the simple breaches. That’s where the added complexity will be needed.

Gardner: Andras, any suggestions for how enterprises, suppliers, vendors, systems integrators, and now, of course, the cloud services providers, should get involved? Where can they go for more information? What can they do to become part of the solution on this?

International forum

Szakal: Well, they can always become a member of the Trusted Technology Forum, where we have an international forum.

Gardner: I thought you might say that.

Szakal: That’s an obvious one, right? But there are a couple of places where you can go to learn more about this challenge.

One is certainly our website. Download the framework, which is a compendium of best practices that we gathered, as a result of a lot of hard work, by sharing in an open, penalty-free environment all of the best practices that the major vendors are employing to mitigate the risks of counterfeit and maliciously tainted products, as well as other supply-chain risks. I think that’s a good start, understanding the standard.

Then, it’s looking at how you might measure the standard against what your practices are currently using the accreditation criteria that we have established.

Other places would be NIST. I believe that it’s Special Publication 800-161 that is the current pending standard for protecting supply-chain security. There are several really good reports that the Defense Science Board and other organizations have produced in the past within the federal government space. There are plenty of materials out there, and a lot of discussion about the challenges.

But I think the only place where you really find solutions, or at least one of the only places that I have seen is in the TTF, embedded in the standard as a set of practices that are very practical to implement.

Gardner: Sally, the same question to you. Where can people go to get involved? What should they perhaps do to get started?

Long: I’d reiterate what Andras said. I’d also point them toward the accreditation website, which is www.opengroup.org/accreditation/o-ttps. And on that accreditation site you can see the policy, standard and supporting docs. We publicize our assessment procedures so you have a good idea of what the assessment process will entail.

The program is based on evidence of conformance as well as a warranty from the applicant. So the assessment procedures being public will allow any organizations thinking about getting accredited to know exactly what they need to do.

As always, we would appreciate any new members, because we’ll be evolving the standard and the accreditation program, and it is done by consensus. So if you want a say in that, whether our standard needs to be stronger, weaker, broader, etc., join the forum and help us evolve it.

Impact on business

Gardner: Dan Reddy, when we think about managing these issues, often it falls on the shoulders of IT and their security apparatus, the Chief Information Security Officer perhaps. But it seems that the impact on business is growing. So should other people in the enterprise be thinking about this? I am thinking about procurement or the governance, risk and compliance folks. Who else, other than IT and their security apparatus, should be involved in mitigating the risks around IT supply-chain activity?

Reddy: You’re right that the old model of everything falls on IT is expanding, and now you see issues of enterprise risk and supply chain risk making it up to the boards of directors, who are asking tough questions. That’s one reason why boards look at cyber insurance as a way to mitigate some of the risk that they can’t control.

They’re asking tough questions all the way around, and I think acquisition people do need to understand what are the right questions to ask of technology providers.

To me, this comes back to scalability. This one-off approach of everyone asking questions of each of their vendors just isn’t going to make it. The advantage that we have here is that we have a consistent standard, built by consensus, freely available, and it’s measurable.

There are a lot of other good documents that talk about supply chain risk and secure engineering, but you can’t get a third-party assessment in a straightforward method, and I think that’s going to be appealing over time.

Gardner: Bob Dix, last word to you. What do you see happening in the area of government affairs and public policy around these issues? What should we hope for or expect from different governments in creating an atmosphere that improves risk across supply chain?

Dix: A couple things have to happen, Dana. First, we have got to quit blaming victims when we have breaches and compromises and start looking at solutions. The government has a tendency in the United States and in other countries around the world, to look at legislating and trying to pass regulatory measures that impose requirements on industry without a full understanding of what industry is already doing.

In this particular example, the government has had a tendency to take an approach that excludes vendors from being able to participate in federal procurement activities based on a risk level that they determine.

The really great thing about the work of the OTTF and the standard that’s being produced is it allows a different way to look at it and instead look at those that are accredited as having met the standard and being able to provide a higher assurance level of authenticity and security around the products and services that they deliver. I think that’s a much more productive approach.

Working together

And from a standpoint of public policy, this example of the great work being done by industry and government working together globally to deliver the standard gives governments a basis to think about it a little differently.

Instead of just focusing on who they want to exclude, let’s look at who actually is delivering the value and meeting the requirements to be a trusted provider. That’s a different approach and it’s one that we are very proud of in terms of the work of The Open Group and we will continue to work that going forward.

Gardner: Excellent. I’m afraid we will have to leave it there. We’ve been exploring ways to address supply chain risk in the information technology sector marketplace, and we’ve seen how The Open Group Trusted Technology Forum standards and accreditation activities are enhancing the security of global supply chains and improving the integrity of openly available IT products and components. And we have also learned how the age-old practice of insurance is coming to bear on the problem of IT supply-chain risk.

This special BriefingsDirect Thought Leadership Panel Discussion comes to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. It’s not too late to register on The Open Group’s website or to follow the proceedings online and via Twitter and other social media during the week of the presentation.

So a big thank you to our guests. We’ve been joined today by Sally Long, Director of The Open Group Trusted Technology Forum. Thanks so much, Sally.

Long: Thank you, Dana.

Gardner: And a big thank you to Andras Szakal, Vice President and Chief Technology Officer for IBM U.S. Federal and Chairman of The Open Group Trusted Technology Forum. Thank you, Andras.

Szakal: Thank you very much for having us and come join the TTF. We can use all the help we can get.

Gardner: Great. A big thank you too to Bob Dix, Vice President of Global Government Affairs & Public Policy for Juniper Networks and a member of The Open Group Trusted Technology Forum. Thanks, Bob.

Dix: Appreciate the invitation. I look forward to joining you again.

Gardner: And lastly, thank you to Dan Reddy, Supply Chain Assurance Specialist, college instructor and Lead of The Open Group Trusted Technology Forum Global Outreach and Standards Harmonization Work Group. I appreciate your input, Dan.

Reddy: Glad to be here.

Gardner: And lastly, a big thank you to our audience for joining us at the special Open Group sponsored Thought Leadership Panel Discussion.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for these Open Group discussions associated with the Baltimore Conference. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Sponsor: The Open Group

Join the conversation @theopengroup #ogchat #ogBWI

Transcript of a Briefings Direct discussion on ways to address supply chain risk in the information technology sector marketplace. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.



Filed under Cybersecurity, OTTF, Supply chain risk, The Open Group Baltimore 2015

Global Cooperation and Cybersecurity: A Q&A with Bruce McConnell

By The Open Group

Cyber threats are becoming an increasingly critical issue for both companies and governments. The recent disclosure that the U.S. Office of Personnel Management had been hacked is proof that it’s not just private industry that is vulnerable to attack. In order to address the problems that countries and industry face, there must be more global cooperation in terms of what behaviors are acceptable and unacceptable in cyberspace.

Bruce McConnell is Senior Vice President of the EastWest Institute (EWI) and is responsible for its global cooperation in cyberspace initiative. Bruce served in the U.S. Department of Homeland Security as Deputy Under Secretary for Cybersecurity, where he was responsible for ensuring the cybersecurity of all federal civilian agencies and helping the owners and operators of the most critical U.S. infrastructure protect themselves from cyber threats. We recently spoke with him in advance of The Open Group Baltimore event about the threats facing government and businesses today, the need for better global cooperation in cyberspace and the role that standards can play in helping to foster that cooperation.

In your role as Deputy Under Secretary for Cybersecurity in the Obama Administration, you were responsible for protecting U.S. infrastructure from cyber threats. In your estimation, what are the most serious threats in cyberspace today?

User error. I say that because a lot of people these days like to talk about these really scary sounding cyber threats, like some nation state or terrorist group that is going to take down the grid or turn off Wall Street, and I think we spend too much time focusing on the threat and less time focusing on other aspects of the risk equation.

The three elements of risk are threats, vulnerability and consequences. A lot of what needs to be done is to reduce vulnerability. Part of what EWI is working on is promoting the availability of more secure information and communications technology, so that buyers and users can start with an infrastructure that is actually defensible, as opposed to the infrastructure we have today, which is very difficult to defend. We figure that, yes, there are threats, and yes, there are potential consequences, but one of the places where we need more work in particular is reducing vulnerabilities.
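
A common shorthand for the relationship Bruce describes, offered here as an illustrative formulation rather than a quote from the interview, treats risk as the product of its three elements:

Risk ≈ Threat × Vulnerability × Consequence

Reducing any one factor lowers the overall risk, which is why EWI concentrates on the vulnerability term, promoting more defensible infrastructure, even when the threat itself cannot be eliminated.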

EWI is also working on reducing threats and consequences by working with countries to, for example, agree that certain key assets, such as core Internet infrastructure or financial services markets and clearinghouses should not be attacked by anybody. You have to work all aspects of the equation.

What steps can be taken by governments or businesses to better shore up the infrastructure from cyber threats?

One of the things that has been missing is a signal from the marketplace that it wants more secure technology. There’s been complacency for a long time and denial that this is really a problem, and the increasing visibility of these high-profile attacks, like those on Target, Sony, JP Morgan Chase and others, is getting companies at the most senior level—in the C-Suite and in the Boardroom—to start paying attention and asking questions of their IT team: ‘How are we protecting ourselves?’ ‘Are we going to be the next ones?’ Because there are two kinds of companies in the U.S.—those that have been hacked and those that don’t know they’ve been hacked.

One of the things EWI has been working on with The Open Group and some of the large IT companies is a set of questions that buyers of IT could ask suppliers about what they do to make sure their products are secure—how they are paying attention to their supply chain, who’s responsible for security at their organization, etc. We think that companies and the government—from the standpoint of education, not regulation—can do more to send signals to the marketplace and suppliers so that they offer more secure technology. In the past customers haven’t been willing to pay more for security—it does cost more. I think that’s changing, but we need to give them tools to be able to ask that question in a smart way.

With respect to government specifically, I think one of the great things the U.S government has done recently is coming out with a Cybersecurity Framework, which was developed mostly by the private sector. NIST, of course, acted as the facilitator, but there’s a lot of uptake there that we’re seeing in terms of companies and sectors—like the financial services sector—adopting and adapting it. It has raised the level of security inside corporations. Insurance carriers are starting to use it as the basis for underwriting insurance policies. It’s not mandatory but it’s a good guidepost, and I think it will become a standard of care.

Why has there been that level of complacency for so long?

I think it’s two things, and they’re both cultural.

One is that the IT community inside companies has not been able to communicate effectively to senior management regarding the nature of the threat or the degree of risk. They don’t speak the same language. When the CFO comes into the CEO’s office and talks about foreign exchange exposure or the General Counsel comes in and speaks about reputational risk, they’re speaking a language that most CEOs can understand. But when the IT guy comes in and talks about Trojans and botnets, he’s speaking a foreign language. There’s been a tendency for that message to not be expressed in business terms that the CEO can understand or be able to quantify and think about as a risk. But it’s a risk just like any of those other risks—foreign exchange risk, competitive risk, natural disasters, cyber attacks. I think that’s changing now, and some companies are pulling the Chief Information Security Officer out from under the CIO and having them report to the Chief Risk Officer, whether it’s the General Counsel or the CFO. That puts them in a different position, and then it can be positioned against other risks and managed in a different way. It’s not a technology problem, it’s as much a human problem—it’s about training employees, it’s about background checks on systems administrators.

The second piece is that it’s invisible. Unlike a hurricane or fire, where you can see the damage, the damage from a cyber attack is invisible. When I was at Homeland Security, we said, ‘What’s it going to take for people to wake up? Well, something really bad will have to happen.’ And something really bad is happening all the time. There’s billions of dollars of financial fraud and theft, there’s theft of intellectual property, the theft of identities—there’s lots of bad things happening but they’re kind of invisible. People don’t react to something they can’t see, we react to the threats that we can see. I think that there’s just a conceptual gap that security professionals haven’t figured out how to convert into something tangible.

How much difference is there anymore in the threats that governments are facing as opposed to businesses? Are these things converging more?

We certainly saw the Office of Personnel Management got the same kind of breaches that Target got: people’s personal data. In the intellectual property area, attackers steal from both businesses and governments. Fraud is probably more directed at businesses and banks just because they handle the money, although some of the IRS data will probably be used to perpetrate fraud. Certainly the government has some systems that are of higher value to society than any single corporate system, but if the core Internet infrastructure, which is owned and run by companies, went down, that would be bad for everybody.

I think the threats are converging also in the sense that attackers are always looking for high-value targets so both governments and companies these days have high-value targets. And they use similar tactics—what we saw was that one family of malware would be used to attack government systems and a slightly different version of that family would be used to attack commercial systems. It was the same kind of malware, and maybe the same perpetrators.

Your session at The Open Group Baltimore event is focused on global cooperation in cyberspace. Where does global cooperation in cyberspace stand today, and why is it important to have that cooperation?

It’s in the spirit of the Baltimore event—Boundaryless Information Flow™. The Internet is a global phenomenon and not a great respecter of national boundaries. The information and technology we all use comes from all over the world. From a security and management standpoint, this is not something that any single government can manage on its own. In order to allow for the boundaryless movement of information in a secure way, governments have to work together to put the right policies and incentives in place. That includes cooperating on catching and investigating cyber criminals. It involves the matter of ensuring buyers can get the best, most secure technology no matter where it is manufactured. It involves cooperating on the types of behavior that are unacceptable in cyberspace. Even reaching agreement on what institutions can be used to manage this global resource is crucial because there’s no real governance of the Internet—it’s still run on an ad hoc basis. That’s been great, but the Internet is becoming too important to be left to everybody’s good will. I’ll cover these issues in more depth in Baltimore.

Who is working on these issues right now and what kind of things are they doing? Who are the “allies” in trying to put together global cooperation initiatives?

There are a lot of different coalitions of people working together. They range from a group called the United Nations Group of Governmental Experts, which by the time of the Baltimore conference will have conducted the fourth in a series of meetings over a two-year period to discuss norms of behavior in cyberspace, along the lines of what kinds of behaviors nation states should not engage in vis-à-vis cyberattacks. That’s a case where you have a U.N.-based organization and 20 countries or so working together to try to come up with some agreements in that area. Certainly EWI’s work is supported primarily by companies, both U.S. and foreign companies. We bring a broad multi-stakeholder group of people together from countries, companies and non-profit organizations from all the major cyber powers, whether they are national cyber powers like China, Russia, the U.S., Germany and India, or corporate cyber powers like Microsoft and Huawei Technologies, because in the Internet, companies are important. There are a lot of different activities going on to find ways of cooperating and to increasingly recognize the seriousness of the problem.

In terms of better cooperation, what are some of the issues that need to be addressed first and how can those things be better accomplished?

There are so many things to work on. Despite efforts, the state of cooperation isn’t great. There’s a lot of rhetoric being applied and countries are leveling charges and accusing each other of attacking them. Whether or not those charges are true, this is not the way to build trust and cooperation. One of the first things that governments really need to do if they want to cooperate with each other is tone down the rhetoric. They need to sit down, listen to each other and try to understand where the other one’s coming from rather than just trading charges in public. That’s the first thing.

There’s also a reflection of the lack of trust between the major cyber powers these days. How do you build trust? You build trust by working together on easy projects first, and then working your way up to more difficult topics. EWI has been promoting conversations between governments about how to respond if there’s a server in one country that’s been captured by a bot and is attacking machines in another country. You have to say, ‘Could you take a look at that?’ But what are the procedures for reducing the impact of an incident in one country caused by malware coming from a server in another country? This assumes, of course, that the country itself is not doing it deliberately. In a lot of these attacks people are spoofing servers so it looks like they’re coming from one place but it’s actually originating someplace else. Maybe if we can get governments cooperating on mutual assistance in incident response, it would help build confidence and trust that we could work on larger issues.

As the Internet becomes increasingly more crucial to businesses and government and there are more attacks out there, will this necessitate a position or department that needs to be a bridge between state departments and technology? Do you envision a role for someone to be a negotiator in that area and is that a diplomatic or technological position or both?

Most of the major national powers have cyber ambassadors. The German Foreign Office has a cyber ambassador, the Chinese have one. The U.S. has a cyber coordinator, the French have a cyber ambassador and the British just named a new cyber ambassador. States are recognizing there is a role for the foreign ministry to play in this area. It’s not just a diplomatic conversation.

There are also global forums where countries, companies and NGOs get together to talk about these things. EWI hosts one every year – this year it’s in New York, September 9-10. I think there are a lot of places where the conversations are happening. That gets to a different question: At some point do we need more structure in the way these issues are managed on a global basis? There’s a big debate right now just on the topic of the assignment of Internet names and numbers as the U.S. lets go of its contract with ICANN—who’s going to take that on, what’s it going to look like? Is it going to be a multi-stakeholder body that involves companies sitting at the table or is it going to be only governments?

Do you see a role for technology standards in helping to foster better cooperation in cyberspace? What role can they play?

Absolutely. In the work we're doing to help companies ask for more secure products, we're referencing a lot of different standards, including those The Open Group and the Trusted Technology Forum have been developing. Those kinds of technical standards are critical to getting everyone on a level playing field in terms of being able to measure how secure products are, and to having a conversation that's fact-based instead of brochure-based. There's a lot of work to be done, but they're going to be critical to the implementation of any of these larger cooperative agreements. There's a lot of exciting work going on.

Join the conversation @theopengroup #ogchat #ogBWI

*********

Beginning in 2009, Bruce McConnell provided programmatic and policy leadership to the cybersecurity mission at the U.S. Department of Homeland Security. He became Deputy Under Secretary for Cybersecurity in 2013, responsible for ensuring the cybersecurity of all federal civilian agencies and for helping the owners and operators of the most critical U.S. infrastructure protect themselves from growing cyber threats. During his tenure, McConnell was instrumental in building the national and international credibility of DHS as a trustworthy partner that relies on transparency and collaboration to protect privacy and enhance security.

Before DHS, McConnell served on the Obama-Biden Presidential Transition Team, working on open government and technology issues. From 2000-2008 he created, built, and sold McConnell International and Government Futures, boutique consultancies that provided strategic and tactical advice to clients in technology, business and government markets. From 2005-2008, he served on the Commission on Cybersecurity for the 44th Presidency.

From 1999-2000, McConnell was Director of the International Y2K Cooperation Center, sponsored by the United Nations and the World Bank, where he coordinated regional and global preparations of governments and critical private sector organizations to successfully defeat the Y2K bug.

McConnell was Chief of Information Policy and Technology in the U.S. Office of Management and Budget from 1993-1999, where he led the government-industry team that reformed U.S. encryption export policy, created an information security strategy for government agencies, redirected government technology procurement and management along commercial lines, and extended the presumption of open government information onto the Internet.

McConnell is also a senior advisor at the Center for Strategic and International Studies. He received a Master of Public Administration from the Evans School for Public Policy at the University of Washington, where he maintains a faculty affiliation, and a Bachelor of Science from Stanford University.

 


Filed under Cybersecurity, RISK Management, the open group, The Open Group Baltimore 2015

Using Risk Management Standards: A Q&A with Ben Tomhave, Security Architect and Former Gartner Analyst

By The Open Group

IT Risk Management is currently in a state of flux with many organizations today unsure not only how to best assess risk but also how to place it within the context of their business. Ben Tomhave, a Security Architect and former Gartner analyst, will be speaking at The Open Group Baltimore on July 20 on “The Strengths and Limitations of Risk Management Standards.”

We recently caught up with Tomhave pre-conference to discuss the pros and cons of today’s Risk Management standards, the issues that organizations are facing when it comes to Risk Management and how they can better use existing standards to their advantage.

How would you describe the state of Risk Management and Risk Management standards today?

The topic of my talk is really the state of standards for Security and Risk Management. There's a handful of significant standards out there today, ranging from some of the work at The Open Group to NIST and the ISO 27000 series, etc. The problem with most of those is that they don't necessarily provide a prescriptive level of guidance for how to go about performing or structuring Risk Management within an organization. If you look at ISO 31000, for example, it provides a general guideline for how to structure an overall Risk Management approach or program, but it's not designed to be directly implementable. You can then look at something like ISO 27005, which provides a bit more detail, but for the most part these are fairly high-level guides to some of the key components; they don't get to the point of how you should be doing Risk Management.

In contrast, one can look at something like the Open FAIR standard from The Open Group, and that gets a bit more prescriptive and directly implementable, but even then there’s a fair amount of scoping and education that needs to go on. So the short answer to the question is, there’s no shortage of documented guidance out there, but there are, however, still a lot of open-ended questions and a lot of misunderstanding about how to use these.

What are some of the limitations that are hindering risk standards then and what needs to be added?

I don't think it's necessarily a matter of needing to fix or change the standards themselves. Where we're at is a fairly prototypical stage: we have guidance on how to get started and how to structure things, but we don't necessarily have a really good understanding across the industry about how to best make use of it. Complicating things further is an open question about just how much we need to be doing: how much value can we get from these, and do we need to adopt some of these practices? If you look at all of the organizations that have had major breaches over the past few years, all of them, presumably, were doing some form of risk management—probably qualitative Risk Management—and yet they still had all these breaches anyway. Inevitably, they were compliant with any number of security standards along the way, too, and yet bad things happen. The issues lie less with the standards themselves than with how organizations are using them.

Last fall The Open Group fielded an IT Risk Management survey that found that many organizations are struggling to understand and create business value for Risk Management. What you’re saying really echoes those results. How much of this has to do with problems within organizations themselves and not having a better understanding of Risk Management?

I think that's definitely the case. A lot of organizations are making bad decisions in many areas right now, and they don't know why, or aren't even aware they're making bad decisions, until it's too late. As an industry we've got this compliance problem where you can do a lot of work and demonstrate completion or compliance with checklists and still be compromised, still have massive data breaches. I think there's a significant cognitive dissonance that exists, and I think it's because we're still in a significant transitional period overall.

Security should really never have been a standalone industry or a standalone environment. Security should have just been one of those attributes of the operating system or operating environments from the outset. Unfortunately, because of the dynamic nature of IT (and we're still going through what I refer to as this Digital Industrial Revolution that's been going on for 40-50 years), everything's changing every day. That will be the case until we hit a stasis point that we can stabilize around and grow a generation that's truly native with these practices and approaches and with the tools and technologies underlying this stuff.

An analogy would be to look at Telecom. Look at Telecom in the 1800s, when they were running telegraph poles and stringing lines along railroad tracks. You could just climb a pole, put a couple of alligator clips on there, and suddenly you could send and receive messages, too, using the same wires. Now we have buried lines, and we have much greater integrity of those systems. We generally know when we've lost integrity on those systems, for the most part. It took 100 years to get there. We're less than halfway there with the Internet, and things are a lot more complicated. And the ability of an attacker, one single person spending all their time going after a resource or a target, is a type of asymmetric threat we haven't really thought about or engineered our environments for over time.

I think it’s definitely challenging. But ultimately Risk Management practices are about making better decisions. How do we put the right amount of time and energy into making these decisions and providing better information and better data around those decisions? That’s always going to be a hard question to answer. Thinking about where the standards really could stand to improve, it’s helping organizations, helping people, understand the answer to that core question—which is, how much time and energy do I have to put into this decision?

When I did my graduate work at George Washington University, a number of years ago, one of the courses we had to take went through decision management as a discipline. We would run through things like decision trees. I went back to the executives at the company where I was working and asked them, 'How often do you use decision trees to make your investment decisions?' And they just looked at me funny and said, 'Gosh, we haven't heard of or thought about decision trees since grad school.' In many ways, a lot of the formal Risk Management stuff that we talk about and drill into—especially when you get into the quantitative risk discussions—goes down the same route. It's great academically, it's great in theory, but it's not the kind of thing where on a daily basis you need to pull it out and use it for every single decision or every single discussion. Which, by the way, is where the FAIR taxonomy within Open FAIR provides an interesting and very valuable breakdown point. There are many cases where just using the taxonomy to break down a problem and think about it a little bit is more than sufficient, and you don't have to go to the next step of populating it with actual quantitative estimates for a FAIR risk analysis. You can use it qualitatively and improve the overall quality and defensibility of your decisions.
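
For readers unfamiliar with that breakdown, the following is a minimal sketch of what using the FAIR taxonomy qualitatively might look like. It borrows only the taxonomy's top-level factors (threat event frequency, vulnerability, primary and secondary loss); the scenario, labels and field names are invented for illustration and are not part of the Open FAIR standard.

```python
# A minimal, hypothetical sketch of a qualitative FAIR-style breakdown:
# decompose a scenario into the factors the taxonomy names and record a
# coarse label for each, without committing to quantitative estimates.
from dataclasses import dataclass, field

@dataclass
class QualitativeFairBreakdown:
    scenario: str
    threat_event_frequency: str = "unrated"   # how often attempts occur
    vulnerability: str = "unrated"            # likelihood an attempt succeeds
    primary_loss: str = "unrated"             # direct loss to the organization
    secondary_loss: str = "unrated"           # fallout: fines, reputation, response costs
    notes: list = field(default_factory=list)

    def summary(self) -> str:
        return (f"{self.scenario}: TEF={self.threat_event_frequency}, "
                f"Vuln={self.vulnerability}, PL={self.primary_loss}, "
                f"SL={self.secondary_loss}")

# Use the decomposition to structure a discussion, not to compute a number.
breach = QualitativeFairBreakdown(
    scenario="Credential stuffing against the customer portal",
    threat_event_frequency="high",      # constant automated attempts observed
    vulnerability="moderate",           # MFA rollout only partially complete
    primary_loss="low",                 # little sensitive data behind the portal
    secondary_loss="moderate",          # notification and support costs
)
breach.notes.append("Decision: finish MFA rollout before investing elsewhere.")
print(breach.summary())
```

Even this much structure forces the conversation onto specific, defensible factors, which is the point Tomhave makes about improving decision quality without a full quantitative analysis.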

How mature are most organizations in their understanding of risk today, and what are some of the core reasons they’re having such a difficult time with Risk Management?

The answer to that question varies to a degree by industry. Industries like financial services just seem to deal with this stuff better for the most part, but then if you look at the multibillion-dollar write-offs at JP Morgan Chase, you think maybe they don't understand risk after all. I think for the most part most large enterprises have at least some people in the organization who have a nominal understanding of Risk Management and risk assessment and how that factors into making good decisions.

That doesn't mean that everything's perfect. Look at the large enterprises that had major breaches in 2013 and 2014, and clearly you can look at those and say, 'Gosh, you guys didn't make very good decisions.' Home Depot is a good example, or even the NSA with the Snowden stuff. In both cases, they knew they had an exposure, they had done a reasonable job of risk management, they just didn't move fast enough with their remediation. They just didn't get things in place soon enough to make a meaningful difference.

For the most part, larger enterprises or organizations will have better facilities and capabilities around Risk Management, but they may have challenges with velocity in terms of being able to put issues to rest in a timely fashion. Now move down to other sectors: look at retail, which continues to have issues with cardholder data, and that's where the card brands are asserting themselves more aggressively. Look at healthcare. Healthcare organizations, for one thing, simply don't have the budget or the control to make a lot of changes, and they're well behind the curve in terms of protecting patient records and data. Then look at other spaces like SMBs, which make up more than 90 percent of U.S. employer firms, or look at the education space, where they simply will never have the kinds of resources to do everything that's expected of them.

I think we have a significant challenge here – a lot of these organizations will never have the resources to have adequate Risk Management in-house, and they will always be tremendously resource-constrained, preventing them from doing all that they really need to do. The challenge for them is, how do we provide answers or tools or methods to them that they can then use that don’t require a lot of expertise but can guide them toward making better decisions overall even if the decision is ‘Why are we doing any of this IT stuff at all when we can simply be outsourcing this to a service that specializes in my industry or specializes in my SMB business size that can take on some of the risk for me that I wasn’t even aware of?’

It ends up being a very basic educational awareness problem in many regards, and many of these organizations don’t seem to be fully aware of the type of exposure and legal liability that they’re carrying at any given point in time.

One of the other IT Risk Management Survey findings was that where the Risk Management function sits in organizations is pretty inconsistent—sometimes IT, sometimes risk, sometimes security—is that part of the problem too?

Yes and no—it’s a hard question to answer directly because we have to drill in on what kind of Risk Management we’re talking about. Because there’s enterprise Risk Management reporting up to a CFO or CEO, and one could argue that the CEO is doing Risk Management.

One of the problems that we historically run into, especially from a bottom-up perspective, is a lot of IT Risk Management people or IT Risk Management professionals or folks from the audit world have mistakenly thought that everything should boil down to a single, myopic view of ‘What is risk?’ And yet it’s really not how executives run organizations. Your chief exec, your board, your CFO, they’re not looking at performance on a single number every day. They’re looking at a portfolio of risk and how different factors are balancing out against everything. So it’s really important for folks in Op Risk Management and IT Risk Management to really truly understand and make sure that they’re providing a portfolio view up the chain that adequately represents the state of the business, which typically will represent multiple lines of business, multiple systems, multiple environments, things like that.

I think one of the biggest challenges we run into is an ill-conceived desire to provide value that ends up oversimplified. We end up hyper-aggregating results and data, and suddenly everything boils down to a stoplight: IT today is either red, yellow or green. That's not particularly informative, and it doesn't help you make better decisions. How can I make better investment decisions around IT systems if all I know is that today things are yellow? I think it comes back to the educational awareness topic. Maybe people aren't always best placed within organizations, but really it's more about how they're representing the data and whether they're getting it into the right format that's most accessible to that audience.

What should organizations look for in choosing risk standards?

I usually get a variety of questions, and they're all about risk assessment—'Oh, we need to do risk assessment' and 'We hear about this quant risk assessment thing that sounds really cool, where do we get started?' Inevitably, it comes down to: what does your actual Risk Management process look like? Do you actually have a context for making decisions, understanding the business context, etc.? And the answer more often than not is no, there is no actual Risk Management process. I think where people can really leverage the standards is in understanding what the overall Risk Management process looks like, or can look like, and in constructing that: making sure they identify the right stakeholders and then starting to drill down to specifics around impact analysis, actual risk analysis, remediation and recovery. All of these are important components, but they have to exist within the broader context, and that broader context has to functionally plug into the organization in a meaningful, measurable manner. I think that's really where a lot of the confusion ends up occurring. 'Hey, I went to this conference, I heard about this great thing, how do I make use of it?' People may go through certification training, but if they don't know how to go back to their organization and put that into practice, not just on a small-scale decision basis but actually plugging it into a larger Risk Management process, it will never really demonstrate a lot of value.

The other piece of the puzzle that goes along with this, too, is you can’t just take these standards and implement them verbatim; they’re not designed to do that. You have to spend some time understanding the organization, the culture of the organization and what will work best for that organization. You have to really get to know people and use these things to really drive conversations rather than hoping that one of these risk assessments results will have some meaningful impact at some point.

How can organizations get more value from Risk Management and risk standards?

Starting with the latter first, the value of the Risk Management standards is that you don't have to start from scratch; you don't have to reinvent the wheel. There are, in fact, very consistent and well-conceived approaches to structuring Risk Management programs and conducting risk assessment and analysis. That's where the power of the standards comes from: they establish a template or guideline for setting things up.

The challenge of course is that you have to have it well-grounded within the organization. In order to get value from a Risk Management program, it has to be part of daily operations. You have to plug it into things like procurement cycles and other similar decision cycles so that people aren't just making gut decisions based on whatever their existing biases are.

One of my favorite examples is password complexity requirements. If you look back at the 'best practice' standards requirements over the years, going all the way back to the Orange Book in the 80s or the Rainbow Series, which came out of the federal government, they tell you 'oh, you have to have 8-character passwords and they have to have upper case, lower case, numbers, special characters, etc.' The funny thing is that while that was probably true in 1985, it is probably less true today. When we actually do risk analysis to look at the problem and understand the actual scenario we're trying to guard against, password complexity ends up causing more problems than it solves, because what we're really protecting against is a brute-force attack against a log-in interface or guessability on a log-in interface. Or maybe we're trying to protect against a password database being compromised and getting decrypted. Well, password complexity has nothing to do with how that data is protected in storage. So why would we look at something like password complexity requirements as some sort of control against compromise of a database that may or may not be encrypted?
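
To make the point concrete, here is some back-of-the-envelope arithmetic (a hypothetical sketch, not taken from the interview): for online guessing, the size of the search space is dominated by password length rather than by character-class rules, and no complexity rule helps once the password database itself is stolen and poorly protected.

```python
# Rough keyspace comparison; the figures are illustrative only.
import math

def keyspace(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

scenarios = {
    "8 chars, lowercase only":             keyspace(26, 8),
    "8 chars, upper+lower+digits+symbols": keyspace(94, 8),
    "16 chars, lowercase only":            keyspace(26, 16),
}

for name, space in scenarios.items():
    print(f"{name:38s} ~2^{math.log2(space):.0f} possibilities")

# Against an online log-in interface, rate limiting and lockout shrink an
# attacker's effective guess budget far more than any complexity rule does,
# and none of this protects a stolen, weakly protected password database.
```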

This is where Risk Management practices come into play because you can use Risk Management and risk assessment techniques to look at a given scenario—whether it be technology decisions or security control decisions, administrative or technical controls—we can look at this and say what exactly are we trying to protect against, what problem are we trying to solve? And then based on our understanding of that scenario, let’s look at the options that we can apply to achieve an appropriate degree of protection for the organization.

That ultimately is what we should be trying to achieve with Risk Management. Unfortunately, that’s usually not what we see implemented. A lot of the time, what’s described as risk management is really just an extension of audit practices and issuing a bunch of surveys, questionnaires, asking a lot of questions but never really putting it into a proper business context. Then we see a lot of bad practices applied, and we start seeing a lot of math-magical practices come in where we take categorical data—high, medium, low, more or less, what’s the impact to the business? A lot, a little—we take these categorical labels and suddenly start assigning numerical values to them and doing arithmetic calculations on them, and this is a complete violation of statistical principles. You shouldn’t be doing that at all. By definition, you don’t do arithmetic on categorical data, and yet that’s what a lot of these alleged Risk Management and risk assessment programs are doing.
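
A small, hypothetical illustration of the 'math-magical' problem described above: once ordinal labels are mapped to numbers, the arithmetic result depends entirely on an arbitrary mapping, so the output looks precise while meaning nothing in particular.

```python
# Five qualitative answers, scored under two equally arbitrary encodings.
ratings = ["high", "medium", "low", "low", "medium"]

mapping_a = {"low": 1, "medium": 2, "high": 3}
mapping_b = {"low": 1, "medium": 5, "high": 25}   # just as "valid" an encoding

avg_a = sum(mapping_a[r] for r in ratings) / len(ratings)
avg_b = sum(mapping_b[r] for r in ratings) / len(ratings)

print(f"Average risk under mapping A: {avg_a:.2f}")   # 1.80
print(f"Average risk under mapping B: {avg_b:.2f}")   # 7.40

# Same five answers, wildly different "average risk", because the numeric
# spacing between low, medium and high was never defined. Categorical data
# supports counting and ordering, not arithmetic.
```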

I think Risk Management gets a bad rap as a result of these poor practices. Conducting a survey, asking questions is not a risk assessment. A risk assessment is taking a scenario, looking at the business impact analysis for that scenario, looking at the risk tolerance, what the risk capacity is for that scenario, and then looking at what the potential threats and weaknesses are within that scenario that could negatively impact the business. That’s a risk assessment. Asking people a bunch of questions about ‘Do you have passwords? Do you use complex passwords? Have you hardened the server? Are there third party people involved?’ That’s interesting information but it’s not usually reflective of the risk state and ultimately we want to find out what the risk state is.

How do you best determine that risk state?

If you look at any of the standards—and again this is where the standards do provide some value—if you look at what a Risk Management process is and the steps that are involved in it, take for example ISO 31000—step one is establishing context, which includes establishing potential business impact or business importance, business priority for applications and data, also what the risk tolerance, risk capacity is for a given scenario. That’s your first step. Then the risk assessment step is taking that data and doing additional analysis around that scenario.

In the technical context, that’s looking at how secure is this environment, what’s the exposure of the system, who has access to it, how is the data stored or protected? From that analysis, you can complete the assessment by saying ‘Given that this is a high value asset, there’s sensitive data in here, but maybe that data is strongly encrypted and access controls have multiple layers of defense, etc., the relative risk here of a compromise or attack being successful is fairly low.’ Or ‘We did this assessment, and we found in the application that we could retrieve data even though it was supposedly stored in an encrypted state, so we could end up with a high risk statement around the business impact, we’re looking at material loss,’ or something like that.
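
As a rough illustration of that sequence, consider the sketch below: context is established first (business importance, risk tolerance), and the assessment then reads the technical findings against that context. The data model, field names and wording are invented for illustration; ISO 31000 describes a process, not a schema.

```python
# Hypothetical sketch: context setting first, then assessment against it.
from dataclasses import dataclass

@dataclass
class ScenarioContext:
    asset: str
    business_impact: str      # e.g. "material loss", "minor disruption"
    risk_tolerance: str       # what the business is willing to absorb

@dataclass
class Assessment:
    context: ScenarioContext
    data_encrypted: bool
    layered_access_controls: bool

    def statement(self) -> str:
        if self.data_encrypted and self.layered_access_controls:
            return (f"{self.context.asset}: high-value asset, but controls are strong; "
                    f"relative risk of a successful compromise is low.")
        return (f"{self.context.asset}: exposure exceeds tolerance "
                f"({self.context.risk_tolerance}); potential {self.context.business_impact}.")

ctx = ScenarioContext(asset="Customer records service",
                      business_impact="material loss",
                      risk_tolerance="no exposure of plaintext customer data")
print(Assessment(ctx, data_encrypted=False, layered_access_controls=True).statement())
```

Without the context record, the same technical findings would support no business conclusion at all, which is the point made in the next paragraph about never skipping context setting.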

Pulling all of these pieces together is really key, and most importantly, you cannot skip over context setting. If you don’t ever do context setting, and establish the business importance, nothing else ends up mattering. Just because a system has a vulnerability doesn’t mean that it’s a material risk to the business. And you can’t even know that unless you establish the context.

In terms of getting started, leveraging the standards makes a lot of sense, but not from a perspective of this is a compliance check list that I’m going to use verbatim. You have to use it as a structured process, you have to get some training and get educated on how these things work and then what requirements you have to meet and then do what makes sense for the organizational role. At the end of the day, there’s no Easy Button for these things, you have to invest some time and energy and build something that makes sense and is functional for your organization.

To download the IT Risk Management survey summary, please click here.

Former Gartner analyst Ben Tomhave (MS, CISSP) is Security Architect for a leading online education organization where he is putting theories into practice. He holds a Master of Science in Engineering Management (Information Security Management concentration) from The George Washington University, and is a member and former co-chair of the American Bar Association Information Security Committee, senior member of ISSA, former board member of the Northern Virginia OWASP chapter, and member and former board member for the Society of Information Risk Analysts. He is a published author and an experienced public speaker, including recent speaking engagements with the RSA Conference, the ISSA International Conference, Secure360, RVAsec, RMISC, and several Gartner events.

Join the conversation! @theopengroup #ogchat #ogBWI


Filed under Cybersecurity, RISK Management, Security, Security Architecture, Standards, The Open Group Baltimore 2015, Uncategorized

The Open Group Madrid 2015 – Day Two Highlights

By The Open Group

On Tuesday, April 21, Allen Brown, President & CEO of The Open Group, began the plenary by presenting highlights of the work going on in The Open Group Forums. The Open Group is approaching 500 memberships in 40 countries.

Big Data & Open Platform 3.0™ – a Big Deal for Open Standards

Ron Tolido, Senior Vice President of Capgemini's group CTO network and Open Group Board Member, discussed the digital platform as the “fuel” of enterprise transformation today, citing a study published in the book “Leading Digital.” The DNA of companies that successfully achieve transformation has the following factors:

  • There is no escaping mastering digital technology – this is an essential part of leading transformation. CEO leadership is a success factor.
  • You need a sustainable technology platform embraced by both the business and technical functions.

Mastering digital transformation shows a payoff in financial results, both from the standpoint of efficient revenue generation and maintaining and growing market share. The building blocks of digital capability are:

  • Customer Experience
  • Operations
  • New business models

Security technology must move from being a constraint or “passion killer” to being a driver for digital transformation. Data handling must change its model – the old structured and siloed approach to managing data no longer works, resulting in business units bypassing or ignoring the “single source” data repository. He recommended the “Business Data Lake” as an approach to overcoming this, and suggested it should be considered as an open standard as part of the work of the Open Platform 3.0 Forum.

In the Q&A session, Ron suggested establishing hands-on labs to help people embrace digital transformation, and presented DataOps as an analogy to DevOps for business data.

Challengers in the Digital Era

Mariano Arnaiz, Chief Information Officer of the CESCE Group, presented CESCE's experience in facing the challenges of:

  • Changing regulation
  • Changing consumer expectations
  • Changing technology
  • Changing competition and market entrants based on new technology

The digital era represents a new language for many businesses, which CESCE faced during the financial crisis of 2008. They chose the “path less traveled” of becoming a data-driven company, using data and analytics to improve business insight, predict behavior and act on it. CESCE receives over 8,000 risk analysis requests per day; using analytics, over 85% are answered in real time, where it used to take more than 20 days. Using analytics has given them unique competitive products, such as variable pricing and targeted credit risk coverage, while reducing their loss ratio.

To drive transformation, the CIO must move beyond IT service supporting the business to helping drive business process improvement. Aligning IT to business is no longer enough for EA – EA must also help align business to transformational technology.

In the Q&A, Mariano said that the approach of using analytics and simulation for financial risk modeling could be applied to some cybersecurity risk analysis cases.

Architecting the Internet of Things

Kary Främling, CEO of the Finnish company ControlThings and Professor of Practice in Building Information Modeling (BIM) at Aalto University, Finland, gave a history of the Internet of Things (IoT), the standards landscape, issues on security in IoT, and real-world examples.

IoT today is characterized by an increasing number of sensors and devices, each pushing large amounts of data to its own silo, with communication limited to its own network. Gaining benefit from IoT requires standards that take a systems view: horizontal integration among IoT devices and sensors, data collected as and when needed, and two-way data flows between trusted entities, within a vision of Closed-Loop Lifecycle Management. These standards are being developed in The Open Group Open Platform 3.0 Forum's IoT work stream; published standards such as the Open Messaging Interface (O-MI) and Open Data Format (O-DF) allow discovery and interoperability of sensors using open protocols, similar to the way HTTP and HTML enable interoperability on the Web.
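
As a purely hypothetical illustration of that "collect data as and when needed" model (this is not the O-MI/O-DF wire format; the endpoint, payload shape and field names below are invented), a consumer queries a gateway on demand instead of each vendor's devices pushing everything into that vendor's silo:

```python
# Hypothetical on-demand read from an IoT gateway; illustrative only.
import json
import urllib.request

def read_info_item(gateway_url: str, object_id: str, item: str) -> float:
    """Ask a (hypothetical) gateway for the current value of one item."""
    query = json.dumps({"read": {"object": object_id, "item": item}}).encode()
    req = urllib.request.Request(gateway_url, data=query,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return float(json.load(resp)["value"])

# Any authorized consumer can pull from any participating device on demand,
# which is the horizontal integration the open standards aim to enable.
# temperature = read_info_item("https://gateway.example/api", "ColdStorage-3", "Temperature")
```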

Kary addressed the issues of security and privacy in IoT, noting this is an opportunity for The Open Group to use our EA and Security work to assess these issues at the scale IoT will bring.



Filed under big data, Boundaryless Information Flow™, Cybersecurity, Enterprise Architecture, Internet of Things

Survey Shows Organizations Are Experiencing an Identity Crisis When it Comes to IT Risk Management

By Jim Hietala, VP, Business Development & Security, The Open Group

Last fall, The Open Group Security Forum fielded its first IT Risk Management Survey in conjunction with the Society of Information Risk Analysts (SIRA) and CXOWARE. The purpose of the survey was to better understand how mature organizations are when it comes to IT Risk Management today. The survey also aimed to discover which Risk Management frameworks are currently most prevalent within organizations and how successful those frameworks are in measuring and managing risk.

The survey consisted of an online questionnaire with both multiple-choice and open-text questions, and explored a number of different parameters regarding the principles, frameworks and processes organizations are using to manage risk. The sample included more than 100 information technology and security executives, professionals, analysts and architects who have some responsibility for risk management, as well as full-time risk management professionals, within their respective organizations.

Considering the fragmented state of security within most organizations today, it should not come as much of a surprise that the primary survey finding is that many organizations today are experiencing what might be called an identity crisis when it comes to IT Risk Management. Although many of the organizations surveyed generally believe their Risk Management teams and efforts are providing value to their organizations, they are also experiencing considerable difficulty when it comes to understanding, demonstrating and creating business value for those efforts.

This is likely due to the lack of a common definition for risk relative to IT Risk Management, in particular, as well as the resulting difficulty in communicating the value of something organizations are struggling to clearly define. In addition, the IT Risk Management teams among the companies surveyed do not have much visibility within their organizations and the departments to which they report are inconsistent across the organizations surveyed, with some reporting to senior management and others reporting to IT or to Risk Managers.

Today, Risk Management is becoming increasingly important for IT departments. With the increased digitalization of business and data becoming ever more valuable, companies of all shapes and sizes must begin looking to apply risk management principles to their IT infrastructure in order to guard against the potentially negative financial, competitive and reputational loss that data breaches may bring. A myriad of high-profile breaches at large retailers, financial services firms, entertainment companies and government agencies over the past couple of years serve as frightening examples of what can—and will—happen to more and more companies if they fail to better assess their vulnerability to risk.

This IT Risk Management survey essentially serves as a benchmark for the state of IT Risk Management today. When it comes to IT risk, the ways and means to manage it are still emerging, and IT Risk Management programs are still in the nascent stages within most organizations. We believe that there is not only a lot of room for growth within the discipline of IT Risk Management but are optimistic that organizations will continue to mature in this area as they learn to better understand and prove their intrinsic value within their organizations.

The full survey summary can be viewed here. We recommend that those interested in Risk Management review the full summary as there are a number of deeper observations explored there that look at the value risk teams believe they are providing to their organizations and the level of maturity of those organizations.

Jim Hietala, Open FAIR, CISSP, GSEC, is Vice President, Business Development and Security for The Open Group, where he manages the business team, as well as Security and Risk Management programs and standards activities. He has participated in the development of several industry standards including O-ISM3, O-ESA, O-RT (Risk Taxonomy Standard), O-RA (Risk Analysis Standard), and O-ACEML. He also led the development of compliance and audit guidance for the Cloud Security Alliance v2 publication.

Jim is a frequent speaker at industry conferences. He has participated in the SANS Analyst/Expert program, having written several research white papers and participated in several webcasts for SANS. He has also published numerous articles on information security, risk management, and compliance topics in publications including CSO, The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

An IT security industry veteran, he has held leadership roles at several IT security vendors.

Jim holds a B.S. in Marketing from Southern Illinois University.

Join the conversation @theopengroup #ogchat #ogSecurity



Filed under Cybersecurity, Enterprise Transformation, Information security, IT, RISK Management, Security, Security Architecture, Uncategorized

Risk, Security and the Internet of Things: Madrid 2015 Preview

By Jim Hietala, Vice President, Business Development & Security, The Open Group

The Internet of Things (IoT) is a fast evolving phenomenon. From smartphones and tablets to connected cars and industrial control systems, the number of IoT devices is continuing to explode. In fact, according to a report by Cisco, the number of connected devices is set to reach 30 billion in 2020, creating a $19 trillion opportunity for businesses around the world.

However as this technology grows, it’s important to consider the potential risks that IoT could introduce to the enterprise and even to society. To put it simply, not much is being done at the moment in terms of IoT security.

The risks brought about by IoT aren't restricted to industries handling highly sensitive personal data, such as Healthcare. Look at industries like energy, transport, manufacturing and mining, which are all starting to report benefits of IoT ranging from faster time to market to better equipment efficiency and improved productivity. In any industrial setting, if high-value IoT data that gives an organization a competitive advantage were to leave the company, it could have serious consequences.

Arguably, many vendors producing IoT-enabled devices are not taking risk or basic security mechanisms into account. Vendors are putting Internet Protocol (IP) connectivity onto devices without any consideration of how to properly secure them. It's fair to say there are currently more problems than solutions.

This is happening, and it’s happening fast. As IoT technology continues to race way ahead, security standards are trying to catch up. Currently, there isn’t a consensus around the right way to secure the vast number of connected devices.

It’s important that we as an industry get to grips with IoT Security and start to apply a common sense strategy as soon as possible. That’s why we want people to start thinking about the risks and where best practices are lacking, a key issue we’ll be discussing at The Open Group Madrid 2015.

We’ll be exploring the implications of IoT from the standpoint of Security and Risk, looking at the areas where work will need to be done and where The Open Group Security Forum can help. What are the burning issues in each vertical industry – from retail to Healthcare – and what is the best way to identify the key IoT-enabled assets that need securing?

As organizations start to permit IoT-enabled equipment, whether it’s connected cars or factory equipment, IT departments need to consider the Security requirements of those networks. From a Security Architecture point of view, it’s vital that organizations do everything in their power to ensure they meet customers’ needs.

Registration for The Open Group Madrid 2015 is open now and available to members and non-members.  Please visit here.

Jim Hietala, Open FAIR, CISSP, GSEC, is Vice President, Business Development and Security for The Open Group, where he manages the business team, as well as Security and Risk Management programs and standards activities. He has participated in the development of several industry standards including O-ISM3, O-ESA, O-RT (Risk Taxonomy Standard), O-RA (Risk Analysis Standard), and O-ACEML. He also led the development of compliance and audit guidance for the Cloud Security Alliance v2 publication.

Jim is a frequent speaker at industry conferences. He has participated in the SANS Analyst/Expert program, having written several research white papers and participated in several webcasts for SANS. He has also published numerous articles on information security, risk management, and compliance topics in publications including CSO, The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

An IT security industry veteran, he has held leadership roles at several IT security vendors.

Jim holds a B.S. in Marketing from Southern Illinois University.

Join the conversation @theopengroup #ogchat #ogMAD


Filed under Information security, Internet of Things, RISK Management, Security, Security Architecture, Uncategorized