
The Open Group Baltimore 2015 Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

The Open Group Baltimore 2015, Enabling Boundaryless Information Flow™, July 20-23, was held at the beautiful Hyatt Regency Inner Harbor. Over 300 attendees from 16 countries, including China, Japan, the Netherlands and Brazil, attended this agenda-packed event.

The event kicked off on July 20th with a warm Open Group welcome by Allen Brown, President and CEO of The Open Group. The first plenary speaker was Bruce McConnell, Senior VP, EastWest Institute, whose presentation, “Global Cooperation in Cyberspace”, gave a behind-the-scenes look at global cybersecurity issues. Bruce focused on US-China cyber cooperation, major threats, and what the US is doing about them.

Allen then welcomed Christopher Davis, Professor of Information Systems, University of South Florida, to The Open Group Governing Board as an Elected Customer Member Representative. Chris also serves as Chair of The Open Group IT4IT™ Forum.

The plenary continued with a joint presentation, “Can Cyber Insurance Be Linked to Assurance”, by Larry Clinton, President & CEO, Internet Security Alliance, and Dan Reddy, Adjunct Faculty, Quinsigamond Community College, MA. The speakers emphasized that cybersecurity is not simply an IT issue. They stated there are currently 15 billion mobile devices and there will be 50 billion within 5 years. Organizations and governments need to prepare for new vulnerabilities and the explosion of the Internet of Things (IoT).

The plenary culminated with a panel, “US Government Initiatives for Securing the Global Supply Chain”. Panelists were Donald Davidson, Chief, Lifecycle Risk Management, DoD CIO for Cybersecurity; Angela Smith, Senior Technical Advisor, General Services Administration (GSA); and Matthew Scholl, Deputy Division Chief, NIST. The panel was moderated by Dave Lounsbury, CTO and VP, Services, The Open Group. They discussed the importance and benefits of ensuring product integrity of hardware, software and services being incorporated into government enterprise capabilities and critical infrastructure. Government and industry must look at supply chain, processes, best practices, standards and people.

All sessions concluded with Q&A moderated by Allen Brown and Jim Hietala, VP, Business Development and Security, The Open Group.

Afternoon tracks (11 presentations) covered various topics, including Information & Data Architecture and EA & Business Transformation. The Risk, Dependability and Trusted Technology theme also continued. Jack Daniel, Strategist, Tenable Network Security, shared “The Evolution of Vulnerability Management”. Michele Goetz, Principal Analyst at Forrester Research, presented “Harness the Composable Data Layer to Survive the Digital Tsunami”, a session aimed at helping data professionals understand how composable data layers set digital business and the Internet of Things up for success.

The evening featured a Partner Pavilion and Networking Reception. The Open Group Forums and Partners hosted short presentations and demonstrations while guests also enjoyed the reception. Areas focused on were Enterprise Architecture, Healthcare, Security, Future Airborne Capability Environment (FACE™), IT4IT™ and Open Platform 3.0™.

Exhibitors in attendance were Esterel Technologies, Wind River, RTI and SimVentions.

Partner Pavilion – The Open Group Open Platform 3.0™

On July 21, Allen Brown began the plenary with the great news that Huawei has become a Platinum Member of The Open Group. Huawei joins our other Platinum Members Capgemini, HP, IBM, Philips and Oracle.

Allen Brown, Trevor Cheung, Chris Forde

Trevor Cheung, VP Strategy & Architecture Practice, Huawei Global Services, will be joining The Open Group Governing Board. Trevor posed the question, “What can we do to combine The Open Group and IT aspects to make a customer experience transformation?” His presentation, entitled “The Value of Industry Standardization in Promoting ICT Innovation”, addressed the “ROADS Experience”. ROADS is an acronym for Real Time, On-Demand, All Online, DIY, Social: experiences that need to be defined across all industries. Trevor also discussed bridging the gap, emphasizing the importance of combining Customer Experience (customer needs, strategy, business needs) and Enterprise Architecture (business outcomes, strategies, systems, process innovation). EA plays a key role in digital transformation.

Allen then presented The Open Group Forum updates. He shared roadmaps which include schedules of snapshots, reviews, standards, and publications/white papers.

Allen also provided a sneak peek of results from our recent survey on TOGAF®, an Open Group standard. TOGAF® 9 is currently available in 15 different languages.

Next speaker was Jason Uppal, Chief Architect and CEO, iCareQuality, on “Enterprise Architecture Practice Beyond Models”. Jason emphasized that the goal is “Zero Patient Harm” and stressed the importance of Open CA Certification. He also noted that Enterprise Architects play many roles, and that those roles are always changing.

Joanne MacGregor, IT Trainer and Psychologist, Real IRM Solutions, gave a very interesting presentation entitled “You can Lead a Horse to Water… Managing the Human Aspects of Change in EA Implementations”. Joanne discussed managing, implementing, and maintaining change, and shared an in-depth analysis of the psychology of change.

“Outcome Driven Government and the Movement Towards Agility in Architecture” was presented by David Chesebrough, President, Association for Enterprise Information (AFEI). “IT Transformation reshapes business models, lean startups, web business challenges and even traditional organizations,” stated David.

Questions from attendees were addressed after each session.

In parallel with the plenary was the Healthcare Interoperability Day. Speakers from a wide range of healthcare industry organizations, such as ONC, AMIA and Healtheway, shared their views and vision on how IT can improve the quality and efficiency of the healthcare enterprise.

Before the plenary ended, Allen made another announcement: he is stepping down as President and CEO in April 2016, after more than 20 years with The Open Group, including the last 17 as CEO. After conducting a process to choose his successor, The Open Group Governing Board has selected Steve Nunn as his replacement; Steve will assume the role in November of this year. Steve is the current COO of The Open Group and CEO of the Association of Enterprise Architects. Please see the press release here.

Steve Nunn, Allen Brown

Afternoon track topics comprised EA Practice & Professional Development and Open Platform 3.0™.

After a very informative and productive day of sessions, workshops and presentations, event guests were treated to a dinner aboard the USS Constellation, just a few minutes’ walk from the hotel. The USS Constellation, constructed in 1854, is a sloop-of-war, the second US Navy ship to carry the name, and is designated a National Historic Landmark.

USS Constellation

On Wednesday, July 22, tracks continued: TOGAF® 9 Case Studies and Standard, EA & Capability Training, Knowledge Architecture and IT4IT™ – Managing the Business of IT.

Thursday consisted of members-only meetings, which are closed sessions.

A special “thank you” goes to our sponsors and exhibitors: Avolution, SNA Technologies, BiZZdesign, Van Haren Publishing, AFEI and AEA.

Check out all the Twitter conversation about the event – @theopengroup #ogBWI

Event proceedings for all members and event attendees can be found here.

Hope to see you at The Open Group Edinburgh 2015 October 19-22! Please register here.

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog, media relations and social media. Loren has over 20 years’ experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.


Securing Business Operations and Critical Infrastructure: Trusted Technology, Procurement Paradigms, Cyber Insurance

Following is the transcript of an Open Group discussion on ways to address supply chain risk in the information technology sector marketplace.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Sponsor: The Open Group

Dana Gardner: Hello, and welcome to a special Thought Leadership Panel Discussion, coming to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator as we explore ways to address supply chain risk in the information technology sector market.

We’ll specifically examine how The Open Group Trusted Technology Forum (OTTF) standards and accreditation activities are enhancing the security of global supply chains and improving the integrity of openly available IT products and components.

We’ll also learn how the age-old practice of insurance is coming to bear on the problem of IT supply-chain risk, and how, by leveraging insurance models, the specter of supply-chain disruption and security breaches may be significantly reduced.

To update us on the work of the OTTF and explain the workings and benefits of supply-chain insurance, we’re joined by our panel of experts. Please join me in welcoming Sally Long, Director of The Open Group Trusted Technology Forum. Welcome, Sally.

Sally Long: Thank you.

Gardner: We’re also here with Andras Szakal, Vice President and Chief Technology Officer for IBM U.S. Federal and Chairman of The Open Group Trusted Technology Forum. Welcome back, Andras.

Andras Szakal: Thank you for having me.

Gardner: And Bob Dix joins us. He is Vice President of Global Government Affairs and Public Policy for Juniper Networks and is a member of The Open Group Trusted Technology Forum. Welcome, Bob.

Bob Dix: Thank you for the invitation. Glad to be here.

Gardner: Lastly, we are joined by Dan Reddy, Supply Chain Assurance Specialist, college instructor and Lead of The Open Group Trusted Technology Forum Global Outreach and Standards Harmonization Work Group. Thanks for being with us, Dan.

Dan Reddy: Glad to be here, Dana.

Gardner: Sally, let’s start with you. Why don’t we just get a quick update on The Open Group Trusted Technology Forum (OTTF) and the supply-chain accreditation process generally? What has been going on?

O-TTPS standard

Long: For some of you who might not have heard of the O-TTPS, which is the standard, it’s called The Open Trusted Technology Provider™ Standard. The effort started with an initiative in 2009, a roundtable discussion with the U.S. government and several ICT vendors, on how to identify trustworthy commercial off-the-shelf (COTS) information and communication technology (ICT), basically driven by the fact that governments were moving away from high-assurance customized solutions and more and more using COTS ICT.

That ad-hoc group then formed the OTTF under The Open Group and proceeded to deliver a standard and an accreditation program.

The standard really provides a set of best practices to be used throughout the COTS ICT product life cycle. That covers both in-house development and outsourced development and manufacturing, and it includes the best practices to use for security in the supply chain, encompassing all phases from design to disposal.

Just to bring you up to speed on some of the milestones that we’ve had: we released our 1.0 version of the standard in 2013, launched our accreditation program to help assure conformance to the standard in February 2014, and then in July, we released our 1.1 version of the standard. We have now submitted that version to ISO for approval as a publicly available specification (PAS), which is a fast track for ISO.

The PAS is a process for adopting standards developed in other standards development organizations (SDOs), and the O-TTPS has passed the draft ISO ballot. Now, it’s coming up for final ballot.

That should bring folks up to speed, Dana, and let them know where we are today.

Gardner: Is there anything in particular at The Open Group Conference in Baltimore, coming up in July, that pertains to these activities? Is this something that’s going to be more than just discussed? Is there something of a milestone nature here too?

Long: Monday, July 20, is the Cyber Security Day of the Baltimore Conference. We’re going to be meeting in the plenary with many of the U.S. government officials from NIST, GSA, and the Department of Homeland Security. So there is going to be a big plenary discussion on cyber security and supply chain.

We’ll also be meeting separately as a member forum, but the whole open track on Monday will be devoted to cyber security and supply chain security.

The one milestone that might coincide is that we’re publishing our Chinese translation version of the standard 1.1 and we might be announcing that then. I think that’s about it, Dana.

OTTF background

Gardner: Andras, for the benefit of our listeners and readers who might be new to this concept, perhaps you could fill us in on the background on the types of problems that OTTF and the initiatives and standards are designed to solve. What’s the problem that we need to address here?

Szakal: That’s a great question. Over the last 5 to 10 years, we realized that traditional supply-chain management and supply-chain integrity practices, where we ensured the integrity of the delivery of a product to the end customer, ensured that it wasn’t tampered with, and effectively managed our suppliers so they provided us with quality components, had really expanded as a result of the adoption and pervasive growth of technology in all aspects of manufacturing, especially as IT has expanded into the Internet of Things, critical infrastructure and mobile technologies, and now obviously cloud and big data.

And as we manufacture those IT products we have to recognize that now we’re in a global environment, and manufacturing and sourcing of components occurs worldwide. In some cases, some of these components are even open source or freely available. We’re concerned, obviously, about the lineage, but also the practices of how these products are manufactured from a secure engineering perspective, as well as the supply-chain integrity and supply-chain security practices.

What we’ve recognized here is that the traditional life cycle of supply-chain security and integrity has expanded, all the way from the design aspects of the product, through sustainment and managing that product over a period of time, from cradle to grave, to disposal of the product, ensuring that hardware-based components don’t end up recycled in a way that poses a threat to our customers.

Gardner: So it’s as much a lifecycle as it is a procurement issue.

Szakal: Absolutely. When you talk about procurement, you’re talking about lifecycle and about mitigating risks to those two different aspects from sourcing and from manufacturing.

So from the customer’s perspective, they need to be considering how they actually apply techniques to ensure that they are sourcing from authorized channels, that they are also applying the same techniques that we use for secure engineering when they are doing the integration of their IT infrastructure.

But from a development perspective, it’s ensuring that we’re applying secure engineering techniques, that we have a well-defined baseline for our life cycle, and that we’re controlling our assets effectively. We understand who our partners are and we’re able to score them and ensure that we’re tracking their integrity and that we’re applying new techniques around secure engineering, like threat analysis and risk analysis to the supply chain.

We’re coming to understand the current risk landscape and applying techniques like vulnerability analysis and runtime protection that allow us to mitigate many of these risks as we build out our products and manufacture them.

It goes all the way through sustainment. You probably recognize now, as most people would, that your products are no longer shrink-wrap products that you get, install, and live with for a year or two before you update them. They’re constantly being updated. So ensuring that the integrity and delivery of each update is consistent with the principles we are trying to espouse is also really important.
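
Szakal’s earlier point about scoring partners and tracking their integrity can be made concrete with a small sketch. The attributes, weights, and thresholds below are purely hypothetical illustrations, not anything defined by the O-TTPS:

```python
# Hypothetical supplier-integrity scoring sketch. The attributes, weights,
# and thresholds are illustrative only; the O-TTPS defines best-practice
# requirements, not this scoring model.

WEIGHTS = {
    "secure_engineering": 0.30,   # follows a defined secure-engineering baseline
    "authorized_channels": 0.25,  # sources components from authorized channels
    "asset_control": 0.20,        # controls and tracks development assets
    "vuln_response": 0.15,        # has a vulnerability analysis/response process
    "update_integrity": 0.10,     # signs and verifies product updates
}

def score_supplier(attributes: dict) -> float:
    """Combine per-practice ratings (0.0-1.0) into a weighted score."""
    return sum(WEIGHTS[name] * attributes.get(name, 0.0) for name in WEIGHTS)

def triage(score: float) -> str:
    """Map a score onto a simple approve/review/escalate decision."""
    if score >= 0.8:
        return "approved"
    if score >= 0.5:
        return "review: request conformance evidence"
    return "escalate: do not source until practices improve"

supplier = {"secure_engineering": 0.9, "authorized_channels": 1.0,
            "asset_control": 0.7, "vuln_response": 0.6, "update_integrity": 0.8}
s = score_supplier(supplier)
print(f"score={s:.2f} -> {triage(s)}")  # score=0.83 -> approved
```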

Collaborative effort

Gardner: And to that point, no product stands alone. It’s really the result of a collaborative effort, a very complex set of systems coming together. Not only are standards necessary, but cooperation among all the players in that ecosystem becomes necessary.

Dan Reddy, how have we done in terms of getting mutual assurance across a supply chain that all the participants are willing to take part? It seems to me that, if there is a weak link, everyone would benefit by shoring that up. So how do we go beyond the standards? How do we get cooperation, and get all the parties interested in contributing and being part of this?

Reddy: First of all, it’s an evolutionary process, and we’re still in the early days of fully communicating what the best practices are, what the standards are, and getting people to understand how that relates to their place in the supply chain.

Certainly, the supplier community would benefit by following some common practices so they don’t wind up answering customized survey questions from all of their customers.

That’s what’s happening today. It’s pretty much a one-off situation, where each customer says, “I need to protect my supply chain. Let me go find out what all of my suppliers are doing.” The real benefit here is to have the common language of the requirements in our standard and a way to measure it.

So there should be an incentive for the suppliers to take a look at that and say, “I’m tired of answering these individual survey questions. Maybe if I just document my best practices, I can avoid some of the effort that goes along with that individual approach.”

Everyone needs to understand that value proposition across the supply chain. Part of what we’re trying to do with the Baltimore conference is to talk to some thought leaders and continue to get the word out about the value proposition here.

Gardner: Bob Dix, the government in the U.S., and of course across the globe, all the governments, are major purchasers of technology and also have a great stake in security and low risk. What’s been driving some of the government activities? Of course, they’re also interested in using off-the-shelf technology and cutting costs. So what’s the role that governments can play in driving some of these activities around the OTTF?

Risk management

Dix: This issue of supply chain assurance and cyber security is all about risk management, and it’s a shared responsibility. For too long I think that the government has had a tendency to want to point a finger at the private sector as not sufficiently attending to this matter.

The fact is, Dana, that many in the private sector make substantial investments in their product integrity program, as Andras was talking about, from product conception, to delivery, to disposal. What’s really important is that when that investment is made and when companies apply the standard the OTTF has put forward, it’s incumbent upon the government to do their part in purchasing from authorized and trusted sources.

In today’s world, we still have a culture that’s pervasive across the government acquisition community, where decision-making on procurements is often driven by cost and schedule, and product authenticity, assurance, and security are not necessarily part of that equation. It’s driven in many cases by budgets and other considerations, but nonetheless, we must change that culture to include authenticity and assurance as part of the decision-making process.

The result of focusing on cost and schedule is often those acquisitions are made from untrusted and unauthorized sources, which raises the risk of acquiring counterfeit, tainted, or even malicious equipment.

Part of the work of the OTTF is to present to all stakeholders, in industry and government alike, that there is a process that can be uniform, as has been stated by Sally and Dan as well, that can be applied in an environment to raise the bar of authenticity, security, and assurance to improve upon that risk management approach.

Gardner: Sally, we’ve talked about where you’re standing in terms of some progress in your development around these standards and activities. We’ve heard about the challenges and the need for improvement.

Before we talk about this really interesting concept of insurance that would come to bear on perhaps encouraging standardization and giving people more ways to reduce their risk and adhere to best practices, what do you expect to see in a few years? If things go well and if this is adopted widely and embraced in true good practices, what’s the result? What do we expect to see as an improvement?

What I am trying to get at here is that if there’s a really interesting golden nugget to shoot for, a golden ring to grab for, what is that we can accomplish by doing this well?

Powerful impact

Long: The most important and significant aspect of the accreditation program is when you look at the holistic nature of the program and how it could have a very powerful impact if it’s widely adopted.

The idea of an accreditation program is that a provider gets accredited for conforming to the best practices. A provider that can get accredited could be an integrator, an OEM, the component suppliers of hardware and software that provide the components to the OEM, and the value-add resellers and distributors.

Every important constituent in that supply chain could be accredited. So not only from a business perspective is it important for governments and commercial customers to look on the Accreditation Registry and see who has been accredited for the integrators they want to work with or for the OEMs they want to work with, but it’s also important and beneficial for OEMs to be able to look at that register and say, “These component suppliers are accredited. So I’ll work with them as business partners.” It’s the same for value-add resellers and distributors.

It builds in these real business-market incentives to make the concept work, and in the end, of course, the ultimate goal of having a more secure supply chain and more products with integrity will be achieved.

To me, that is one of the most important aspects that we can reach for, especially if we reach out internationally. What we’re starting to see internationally is that localized requirements are cropping up in different countries. What that’s going to mean is that vendors need to meet those different requirements, increasing their cost, and sometimes even there will end up being trade barriers.

Back to what Dan and Bob were saying, we need to look at this global standard and accreditation program that already exists. It’s not in development; we’ve been working on it for five years with consensus from many, many of the major players in the industry and government. So urging global adoption of what already exists and what could work holistically is really an important objective for our next couple of years.
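
To see how the tier-by-tier idea Long describes might look, here is a toy model of checking a supply chain against an accreditation register. The real O-TTPS Accreditation Register is a list published by The Open Group; the organizations and the data structure below are invented purely for illustration:

```python
# Illustrative model of checking every constituent of a supply chain
# against an accreditation register. All names here are hypothetical.

ACCREDITED = {  # organization -> role, as it might appear on a register
    "ExampleOEM": "OEM",
    "ExampleChipCo": "component supplier",
    "ExampleDistro": "distributor",
}

def unaccredited_links(chain: list[str]) -> list[str]:
    """Return the constituents in a supply chain that are not on the register."""
    return [org for org in chain if org not in ACCREDITED]

chain = ["ExampleChipCo", "ExampleOEM", "ExampleVAR", "ExampleDistro"]
gaps = unaccredited_links(chain)
print("weak links:", gaps or "none - every tier is accredited")
# -> weak links: ['ExampleVAR']
```

The business incentive Long describes falls out directly: each tier wants to be on the register so the tier above it can keep its own chain free of weak links.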

Gardner: It certainly sounds like a win-win-win if everyone can participate, have visibility, and get designated as having followed through on those principles. But as you know and as you mentioned, it’s the marketplace. Economics often drives business behavior. So in addition to a standards process and the definitions being available, what is it about this notion of insurance that might be a parallel market force that would help encourage better practices and ultimately move more companies in this direction?

Let’s start with Dan. Explain to me how cyber insurance, as it pertains to the supply chain, would work?

Early stages

Reddy: It’s an interesting question. The cyber insurance industry is still in its early stages, even though it goes back to the ’70s, when crime insurance started to apply to outsiders gaining physical access to computer systems. You didn’t really see the advent of hacker insurance policies until the late ’90s. Then, starting in 2000, some of the first forms of cyber insurance covering first and third parties started to appear.

What we’re seeing today is primarily related to the breaches that we hear about in the paper every day, where some organization has been compromised, and sensitive information, like credit card information, is exposed for thousands of customers. The remediation is geared toward the companies that have to pay the claim and sign people up for identity protection. It’s pretty cut and dried. That’s the wave that the insurance industry is riding right now.

What I see is that as attacks get to be more sophisticated and potentially include attacks on the supply chain, it’s going to represent a whole new area for cyber insurance. Having consistent ways to address supplier-related risk, as well as the other infrastructure related risks that go beyond simple data breach, is going to be where the marketplace has to make an adjustment. Standardization is critical there.

Gardner: Andras, how does this work in conjunction with OTTF? Would insurance companies begin their risk assessment by making sure that participants in the supply chain are already adhering to your standards and seeking accreditation? Then, maybe they would have premiums that would reflect the diligence that companies extend into their supply chains. Maybe you could just explain to me, not just the insurance, but how it would work in conjunction with OTTF, perhaps to their mutual benefit.

Szakal: You made a really great point earlier about the economic element that would drive compliance. For us in IBM, the economic element is the ability to prove that we’re providing the right assurance that is being specified in the requests for proposals (RFPs), not only in the federal sector, but outside the federal sector in critical infrastructure and finance. We continue to win those opportunities, and that’s driven our compliance, as well as the government policy aspect worldwide.

But from an insurance point of view, insurance comes in two forms. I buy policy insurance in a case where there are risks that are out of my control, and I apply protective measures that are under my control. So in the case of the supply chain, the OTTF is a set of practices that help you gain control and lower the risk of threat in the manufacturing process.

The question is, do you buy a policy, and what’s the balance here between a cyber threat that is in your control and those aspects of supply-chain security that are out of your control? This is with the understanding that there isn’t an infinite amount of resources or revenue that you can allocate to both of these aspects.

There’s going to have to be a balance, and it really is going to be case by case, with respect to customers and manufacturers, as to when you cover the potential loss of intellectual property (IP) with insurance versus applying controls. Those resources are better applied where you actually have control, versus buying policies that protect you against things that are out of your control.

For example, you might buy a policy for providing code, which has high-value IP, to a third party to manufacture a component. You have to share that information with that third-party supplier to actually manufacture the component as part of the overarching product, but with the realization that if that third party is somehow hacked or intruded on and that IP is stolen, you have lost a significant amount of value. That will be an area where insurance would be applicable.
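
A rough, back-of-the-envelope sketch can make this balance concrete. Every figure below is hypothetical:

```python
# Hypothetical expected-loss arithmetic for the controls-versus-policy
# balance Szakal describes. All numbers are invented for illustration.

ip_value = 5_000_000      # value of IP shared with a third-party manufacturer
p_controllable = 0.03     # annual breach probability you can reduce with practices
p_uncontrollable = 0.01   # residual probability outside your control
controls_cost = 50_000    # annual cost of supply-chain security practices
controls_effect = 0.8     # fraction of controllable risk the practices remove
premium = 75_000          # annual premium for a policy covering the IP loss

# Apply controls where you have control:
residual_controlled = p_controllable * (1 - controls_effect) * ip_value   # 30,000
# Expected loss from what you cannot control:
expected_uncontrolled = p_uncontrollable * ip_value                       # 50,000

print(f"controls: spend {controls_cost:,} to cut expected loss "
      f"{p_controllable * ip_value:,.0f} -> {residual_controlled:,.0f}")
print(f"policy: pay {premium:,} to cap a possible {ip_value:,} loss "
      f"(expected {expected_uncontrolled:,.0f})")
```

In this made-up example, controls are clearly worth their cost on the controllable risk, while the policy costs more than the expected uncontrollable loss; a risk-averse firm might still buy it to cap the catastrophic tail, which is exactly the case-by-case judgment Szakal describes.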

What’s working

Gardner: Bob Dix, if insurance comes to bear in conjunction with standards like what the OTTF is developing in supply chain assurance, it seems to me that the insurance providers themselves would be in a position of gathering information for their actuarial decisions and could be a clearing house for what’s working and what isn’t working.

It would be in their best interest to then share that back into the marketplace in order to reduce the risk. That’s a market-driven, data-driven approach that could benefit everyone. Do you see the advent of insurance as a benefit or accelerant to improvement here?

Dix: It’s a tool. This is a conversation that’s been going on in the community for quite some time: the lack of actuarial data for catastrophic losses produced by cyber events is impacting rate setting and premium setting by insurance companies, and that has continued to be a challenge.

But from an incentive standpoint, it’s just like in your home. If you have an alarm system, if you have a fence, if you take other kinds of protective measures, your homeowner’s or liability insurance may get a reduction in premium for those actions you have taken.

As an incentive, the opportunity to have an insurance policy to either transfer or buy down risk can be driven by the type of controls that you have in your environment. The standard that the OTTF has put forward provides guidance about how best to accomplish that. So, there is an opportunity to leverage, as an incentive, the reduction in premiums for insurance to transfer or buy down risk.

Gardner: It’s interesting, Sally, that the insurance industry could benefit from OTTF, and by having more insurance available in the marketplace, it could encourage more participation and make the standard even more applicable and valuable. So it’s interesting to see over time how that plays out.

Any thoughts or comments on the relationship between what you are doing at OTTF and The Open Group and what the private insurance industry is moving toward?

Long: I agree with what everyone has said. It’s an up-and-coming field, and there is a lot more focus on it; at every conference I go to, I hear about more research on cybersecurity insurance. There is a place for the O-TTPS in terms of buying down risk, as Bob was mentioning.

The other thing that’s interesting is the NIST Cybersecurity Framework. That whole paradigm started out with the idea that there would be incentives for those that followed the NIST Cybersecurity Framework, but that incentive piece became very hard to pull together, and it still is. To my knowledge, there are no incentives yet associated with it. But insurance was one of the ideas they talked about for incentivizing adopters of the CSF.

The other thing, which I think came out of one of the presentations that Dan and Larry Clinton will be giving at our Baltimore Conference, is that insurers are looking for simplicity. They don’t want to go into a client’s environment and have them prove that they are doing all of the things required of them or fill out a long checklist.

That’s why asking for O-TTPS-accredited providers, or lowering their rates based on that accreditation, would be a very simple approach, but again, it’s not here yet. As Bob said, it’s been talked about a lot for a long time, but I think it is coming to the fore.

Market of interest

Gardner: Dan Reddy, back to you. When there is generally a large addressable market of interest in a product or service, there often rises a commercial means to satisfy that. How can enterprises, the people who are consuming these products, encourage acceptance of these standards, perhaps push for a stronger insurance capability in the marketplace, or also get involved with some of these standards and practices that we have been talking about?

If you’re a publicly traded company, you would want to reduce your exposure and be able to claim accreditation and insurance as well. Let’s look at this from the perspective of the enterprise. What should and could they be doing to improve on this?

Reddy: I want to link back to what Sally said about the NIST Cyber Security Framework. What’s been very useful in publishing the Framework is that it gives enterprises a way to talk about their overall operational risk in a consistent fashion.

I was at one of the workshops sponsored by NIST where enterprises that had adopted it talked about what they were doing internally in their own enterprises in changing their practices, improving their security, and using the language of the framework to address that.

Yet, when they talked about one aspect of their risk, their supplier risk, they were trying to send the NIST Cybersecurity Framework risk questions to their suppliers, and those questions aren’t really sufficient. They’re interesting. You care about the enterprise of your supplier, but you really care about the products of your supplier.

So one of the things that the OTTF did is look at the requirements in our standard related to suppliers and link them specifically to the same operational areas that were included in the NIST Cybersecurity Framework.

This gives an enterprise looking at risk, trying to do standard things, a way to use the language of the requirements in our standard and the accreditation program as a form of measurement for how that aspect of supplier risk is addressed.
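
As a toy illustration of that linkage, the sketch below maps invented supplier requirements onto the five NIST CSF functions (Identify, Protect, Detect, Respond, Recover). The requirement IDs and the mapping itself are placeholders, not the Forum’s actual mapping:

```python
# Illustrative linkage of product-level supplier requirements to the
# enterprise-level NIST CSF functions. The requirement IDs and this
# mapping are invented placeholders for illustration only.

REQ_TO_CSF = {
    "SC_1 authorized sourcing channels": ["Identify", "Protect"],
    "SC_2 malware detection before shipment": ["Detect"],
    "SE_1 secure engineering baseline": ["Protect"],
    "SE_2 vulnerability analysis and response": ["Respond", "Recover"],
}

def csf_coverage(satisfied: set[str]) -> dict[str, bool]:
    """Which CSF functions does a supplier's conformance evidence touch?"""
    functions = {"Identify", "Protect", "Detect", "Respond", "Recover"}
    covered = {f for req in satisfied for f in REQ_TO_CSF.get(req, [])}
    return {f: (f in covered) for f in sorted(functions)}

evidence = {"SC_1 authorized sourcing channels", "SE_1 secure engineering baseline"}
print(csf_coverage(evidence))
# {'Detect': False, 'Identify': True, 'Protect': True, 'Recover': False, 'Respond': False}
```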

But remember, cyber insurance is more than just the risk of suppliers. It’s the risk at the enterprise level. But the attacks are going to change over time, and we’ll go beyond the simple breaches. That’s where the added complexity will be needed.

Gardner: Andras, any suggestions for how enterprises, suppliers, vendors, systems integrators, and now, of course, the cloud services providers, should get involved? Where can they go for more information? What can they do to become part of the solution on this?

International forum

Szakal: Well, they can always become a member of the Trusted Technology Forum, which is an international forum.

Gardner: I thought you might say that.

Szakal: That’s an obvious one, right? But there are a couple of places where you can go to learn more about this challenge.

One is certainly our website. Download the framework, a compendium of best practices that we gathered through a lot of hard work, sharing in an open, penalty-free environment all of the best practices that the major vendors are employing to mitigate the risks of counterfeit and maliciously tainted products, as well as other supply-chain risks. That’s a good start: understanding the standard.

Then, it’s looking at how you might measure your current practices against the standard, using the accreditation criteria that we have established.

Other places would be NIST. I believe SP 800-161 is the current pending standard for protecting supply-chain security. There are several really good reports that the Defense Science Board and other organizations have produced in the past within the federal government space. There are plenty of materials out there, and a lot of discussion about the challenges.

But I think the only place where you really find solutions, or at least one of the only places I have seen, is in the TTF, embedded in the standard as a set of practices that are very practical to implement.

Gardner: Sally, the same question to you. Where can people go to get involved? What should they perhaps do to get started?

Long: I’d reiterate what Andras said. I’d also point them toward the accreditation website, which is www.opengroup.org/accreditation/o-ttps. And on that accreditation site you can see the policy, standard and supporting docs. We publicize our assessment procedures so you have a good idea of what the assessment process will entail.

The program is based on evidence of conformance as well as a warranty from the applicant. So the assessment procedures being public will allow any organizations thinking about getting accredited to know exactly what they need to do.

As always, we would appreciate any new members, because we’ll be evolving the standard and the accreditation program, and it is done by consensus. So if you want a say in that, whether our standard needs to be stronger, weaker, broader, etc., join the forum and help us evolve it.

Impact on business

Gardner: Dan Reddy, when we think about managing these issues, often it falls on the shoulders of IT and their security apparatus, the Chief Information Security Officer perhaps. But it seems that the impact on business is growing. So should other people in the enterprise be thinking about this? I am thinking about procurement or the governance risk and compliance folks. Who else should be involved other than IT in their security apparatus in mitigating the risks as far as IT supply chain activity?

Reddy: You’re right that the old model of everything falls on IT is expanding, and now you see issues of enterprise risk and supply chain risk making it up to the boards of directors, who are asking tough questions. That’s one reason why boards look at cyber insurance as a way to mitigate some of the risk that they can’t control.

They’re asking tough questions all the way around, and I think acquisition people do need to understand what are the right questions to ask of technology providers.

To me, this comes back to scalability. This one-off approach of everyone asking questions of each of their vendors just isn’t going to make it. The advantage that we have here is that we have a consistent standard, built by consensus, freely available, and it’s measurable.

There are a lot of other good documents that talk about supply-chain risk and secure engineering, but you can’t get a third-party assessment against them in a straightforward way, and I think that’s going to be appealing over time.

Gardner: Bob Dix, last word to you. What do you see happening in the area of government affairs and public policy around these issues? What should we hope for or expect from different governments in creating an atmosphere that improves risk across supply chain?

Dix: A couple things have to happen, Dana. First, we have got to quit blaming victims when we have breaches and compromises and start looking at solutions. The government has a tendency in the United States and in other countries around the world, to look at legislating and trying to pass regulatory measures that impose requirements on industry without a full understanding of what industry is already doing.

In this particular example, the government has had a tendency to take an approach that excludes vendors from being able to participate in federal procurement activities based on a risk level that they determine.

The really great thing about the work of the OTTF and the standard that’s being produced is that it allows a different way to look at it: instead, look at those that are accredited as having met the standard and as being able to provide a higher assurance level of authenticity and security around the products and services they deliver. I think that’s a much more productive approach.

Working together

And from a standpoint of public policy, this example of the great work that’s being done by industry and government working together globally to deliver the standard gives governments a basis to think about it a little differently.

Instead of just focusing on who they want to exclude, let’s look at who actually is delivering the value and meeting the requirements to be a trusted provider. That’s a different approach, one that we are very proud of in terms of the work of The Open Group, and we will continue to work on it going forward.

Gardner: Excellent. I’m afraid we will have to leave it there. We’ve been exploring ways to address supply chain risk in the information technology sector marketplace, and we’ve seen how The Open Group Trusted Technology Forum standards and accreditation activities are enhancing the security of global supply chains and improving the integrity of openly available IT products and components. And we have also learned how the age-old practice of insurance is coming to bear on the problem of IT supply-chain risk.

This special BriefingsDirect Thought Leadership Panel Discussion comes to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. It’s not too late to register on The Open Group’s website or to follow the proceedings online and via Twitter and other social media during the week of the presentation.

So a big thank you to our guests. We’ve been joined today by Sally Long, Director of The Open Group Trusted Technology Forum. Thanks so much, Sally.

Long: Thank you, Dana.

Gardner: And a big thank you to Andras Szakal, Vice President and Chief Technology Officer for IBM U.S. Federal and Chairman of The Open Group Trusted Technology Forum. Thank you, Andras.

Szakal: Thank you very much for having us and come join the TTF. We can use all the help we can get.

Gardner: Great. A big thank you too to Bob Dix, Vice President of Global Government Affairs & Public Policy for Juniper Networks and a member of The Open Group Trusted Technology Forum. Thanks, Bob.

Dix: Appreciate the invitation. I look forward to joining you again.

Gardner: And lastly, thank you to Dan Reddy, Supply Chain Assurance Specialist, college instructor and Lead of The Open Group Trusted Technology Forum Global Outreach and Standards Harmonization Work Group. I appreciate your input, Dan.

Reddy: Glad to be here.

Gardner: And lastly, a big thank you to our audience for joining us at the special Open Group sponsored Thought Leadership Panel Discussion.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for these Open Group discussions associated with the Baltimore Conference (Register Here). Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Sponsor: The Open Group

Join the conversation @theopengroup #ogchat #ogBWI

Transcript of a Briefings Direct discussion on ways to address supply chain risk in the information technology sector marketplace. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Cybersecurity Standards: The Open Group Explores Security and Ways to Assure Safer Supply Chains

Following is a transcript of part of the proceedings from The Open Group San Diego 2015 in February.

The following presentations and panel discussion, which together examine the need and outlook for Cybersecurity standards amid supply chains, are provided by moderator Dave Lounsbury, Chief Technology Officer, The Open Group; Mary Ann Davidson, Chief Security Officer, Oracle; Dr. Ron Ross, Fellow of the National Institute of Standards and Technology (NIST), and Jim Hietala, Vice President of Security for The Open Group.

Here are some excerpts:

Dave Lounsbury: Mary Ann Davidson is responsible for Oracle Software Security Assurance and represents Oracle on the Board of Directors for the Information Technology Information Sharing and Analysis Center, and on the international Board of the ISSA.

Dr. Ron Ross leads the Federal Information Security Management Act Implementation Project. It sounds like a big job to fulfill, developing the security standards and guidelines for the federal government.

This session is going to look at the cybersecurity and supply chain landscape from a standards perspective. So Ron and Mary Ann, thank you very much.

Ron Ross: All of us are part of the technology explosion and revolution that we have been experiencing for the last couple of decades.

I would like to have you leave today with a couple of major points, at least from my presentation, things that we have observed in cybersecurity for the last 25 years: where we are today and where I think we might need to go in the future. There is no right or wrong answer to this problem of cybersecurity. It’s probably one of the most difficult and challenging sets of problems we could ever experience.

In our great country, we work on what I call the essential partnership. It’s a combination of government, industry, and academia all working together. We have the greatest technology producers, not just in this country, but around the world, who are producing some fantastic things to which we are all “addicted.” I think we have an addiction to the technology.

Some of the problems we’re going to experience going forward in cybersecurity aren’t just going to be technology problems. They’re going to be cultural problems and organizational problems. The key issue is how we organize ourselves, what our risk tolerance is, how we are going to be able to accomplish all of our critical missions and business operations that Dawn talked about this morning, and do so in a world that’s fairly dangerous. We have to protect ourselves.

Movie App

I think I can sum it up. I was at a movie. I don’t go to movies very often anymore, but about a month ago, I went to a movie. I was sitting there waiting for the main movie to start, and they were going through all the coming attractions. Then they came on the PA and they said that there is an app you can download. I’m not sure you have ever seen this before, but it tells you for that particular movie when is the optimal time to go to the restroom during the movie.

I bring this up because that’s a metaphor for where we are today. We are consumed. There are great companies out there, producing great technologies. We’re buying it up faster than you can shake a stick at it, and we are developing the most complicated IT infrastructure ever.

So when I look at this problem, I look at it from a scientist’s point of view, an engineering point of view. I’m saying to myself, knowing what I know about what it takes — and I don’t even use the word “secure” anymore, because I don’t think we can ever get there with the current complexity — how do we build the most secure systems we can and manage risk in the world that we live in?

In the Army, we used to have a saying. You go to war with the army that you have, not the army that you want. We’ve heard about all the technology advances, and we’re going to be buying stuff, commercial stuff, and we’re going to have to put it together into systems. Whether it’s the Internet of Things (IoT) or cyber-physical convergence, it all goes back to some fairly simple things.

The IoT and all this stuff that we’re talking about today really gets back to computers. That’s the common denominator. They’re everywhere. This morning, we talked about your automobile having more compute power than Apollo 11. In your toaster, your refrigerator, your building, the control of the temperature, industrial control systems in power plants, manufacturing plants, financial institutions, the common denominator is the computer, driven by firmware and software.

When you look at the complexity of the things that we’re building today, we’ve gone past the time when we can actually understand what we have and how to secure it.

That’s one of the things that we’re going to do at NIST this year and beyond. We’ve been working in the FISMA world forever, it seems, and we have a whole set of standards. And that’s the theme of today: how can standards help you build a more secure enterprise?

The answer is that we have tons of standards out there and we have lots of stuff, whether it’s on the federal side with 800-53 or the Risk Management Framework, or all the great things that are going on in the standards world, with The Open Group, or ISO, pick your favorite standard.

The real question is how we use those standards effectively to change the current outlook and what we are experiencing today because of this complexity. The adversary has a significant advantage in this world because of complexity. They really can pick the time, the place, and the type of attack, because the attack surface is so large when you talk about not just the individual products but the systems built from them.

We have many great companies, just in this country and around the world, that are doing a lot to make those products more secure. But then they get into the engineering process and put them together in a system, and that really is an unsolved problem. We call it the Composability Problem: I can have a trusted product here and one here, but what is the combination of those two when you put them together in the systems context? We haven’t solved that problem yet, and it’s getting more complicated every day.

Continuous Monitoring

For the hard problems, we in the federal government do a lot of stuff in continuous monitoring. We’re going around counting our boxes and we are patching stuff and we are configuring our components. That’s loosely called cyber hygiene. It’s very important to be able to do all that and do it quickly and efficiently to make your systems as secure as they need to be.

But even the security controls in our control catalog, 800-53, when you get into the technical controls — I’m talking about access control mechanisms, identification, authentication, encryption, and audit — those things are buried in the hardware, the software, the firmware, and the applications.

Most of our federal customers can’t even see those. So when I ask them if they have all their access controls in place, they can nod their head yes, but they can’t really prove that in a meaningful way.
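
What would proving it look like? Purely as a toy illustration, not how any real product implements these mechanisms, here is an access-control check that both enforces a policy and emits tamper-evident audit evidence at the same time:

```python
# Toy illustration of a technical control that produces its own audit
# evidence: an access check that enforces a hypothetical policy and
# writes a hash-chained log record. Real products bury these mechanisms
# in hardware, firmware, and platform software, as Ross notes.

import hashlib, json, time

POLICY = {"alice": {"read"}, "bob": {"read", "write"}}  # hypothetical policy
AUDIT_LOG = []

def check_access(user: str, action: str) -> bool:
    allowed = action in POLICY.get(user, set())
    record = {"ts": time.time(), "user": user, "action": action, "allowed": allowed}
    # Chain each record to the previous one so tampering is detectable.
    prev = AUDIT_LOG[-1]["digest"] if AUDIT_LOG else ""
    payload = prev + json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    AUDIT_LOG.append(record)
    return allowed

check_access("alice", "write")  # denied, and the denial itself is evidence
print(AUDIT_LOG[-1]["allowed"], AUDIT_LOG[-1]["digest"][:12])
```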

So we have to rely on industry to make sure those mechanisms, those functions, are employed within the component products that we then will put together using some engineering process.

This is the below-the-waterline problem I talk about. We’re in some kind of digital denial today, because below the water line, most consumers are looking at their smartphones, their tablets, and all their apps — that’s why I used that movie example — and they’re not really thinking about those vulnerabilities, because they can’t see them, until it affects them personally.

I had to get three new credit cards last year. I shop at Home Depot and Target, and JPMorgan Chase is our federal credit card. That’s not a pain point for me because I’m indemnified. Even if there are fraudulent charges, I don’t get hit for those.

If your identity is stolen, that’s a personal pain point. We haven’t reached that national pain point yet. We talk about all of the security stuff that we do, and we do a lot of it, but if you really want to effect change, you’re going to start to hear more at this conference about assurance, trustworthiness, and resiliency. That’s the world that we want to build, and we are not there today.

That’s the essence of where I am hoping we are going to go. It’s these three areas: software assurance, systems security engineering, and supply-chain risk management.

My colleague Jon Boyens is here today and he is the author, along with a very talented team of coauthors, of the NIST 800-161 document. That’s the supply chain risk document.

It’s going to work hand-in-hand with another publication that we’re still working on, the 800-160 document. We are taking an IEEE and ISO standard, 15288, and we’re trying to infuse security into that standard. They are coming out with the update of that standard this year. We’re trying to infuse security into every step of the lifecycle.

Wrong Reasons

The reason why we are not having a lot of success on the cybersecurity front today is that security ends up being applied either too late or by the wrong people for the wrong reasons.

I’ll give you one example. In the federal government, we have a huge catalog of security controls, and they are allocated into different baselines: low, moderate, and high. So you will pick a baseline, you will tailor, and you’ll come to the system owner or the authorizing official and say, “These are all the controls that NIST says we have to do.” Well, the mission business owner was never involved in that discussion.

One of the things we are going to do with the new document is focus on the software and systems engineering process from the stakeholders at the start, all the way through requirements, analysis, definition, design, development, implementation, operation, and sustainment, to disposal. Critical things are going to happen at every one of those places in the lifecycle.

The beauty of that process is that you involve the stakeholders early. So when those security controls are actually selected they can be traced back to a specific security requirement, which is part of a larger set of requirements that support that mission or business operation, and now you have the stakeholders involved in the process.

Up to this point in time, security operates in its own vacuum. It’s in the little office down the hall, and we go down there whenever there’s a problem. But unless and until security gets integrated, and we disappear as our own discipline, we need to become part of the Enterprise Architecture, whether it’s TOGAF® or whatever architecture construct you are following; the systems engineering process; the system development lifecycle; and acquisition and procurement.

Unless we have our stakeholders at those tables to influence things, we are going to continue to deploy systems that are largely indefensible, not against all cyber attacks, but against the high-end attacks.

We have to do a better job getting at the C-Suite, and I tried to capture the five essential areas that this discussion has to revolve around. The acronym is TACIT: Threat, Assets, Complexity, Integration, and Trustworthiness. It just happens to be a happy coincidence that it fit into an acronym. It’s basically looking at the threat, how you configure your assets, and how you categorize your assets with regard to criticality.

How complex is the system you’re building? Are you managing and trying to reduce that complexity, and integrating security across the entire set of business practices within the organization? Then the last component, which really ties into The Open Group and the projects that were described in the first session, is the trustworthiness piece.

Are we building products and systems that are, number one, more penetration-resistant to cyber attacks; and, number two, since we know we can’t stop all attacks (we can never reduce complexity to where we thought we could two or three decades ago), are we building the essential resiliency into those systems? Even when the adversary comes to the boundary and the malware starts to work, how far does it spread, and what can it do?

That’s the key question. You try to limit the time on target for the adversary, and that can be done very, very easily with good architectural and good engineering solutions. That’s my message for 2015 and beyond, at least from a lot of things at NIST. We’re going to start focusing on the architecture and the engineering: how to really affect things at the ground level.

Processes are Important

Now, we will always have the people, the processes, and the technologies, this whole ecosystem that we have to deal with, and you’re always going to have to worry about the sys admin who goes bad and dumps all the stuff that you don’t want dumped on the Internet. But that’s part of the system process. Processes are very important because they give us structure, discipline, and the ability to communicate with our partners.

I was talking to Rob Martin from MITRE. He’s working on a lot of important projects there with the CWEs and CVEs. They give you the ability to communicate a level of trustworthiness and assurance so that other people can have that dialogue, because without that, we’re not going to be communicating with each other. We’re not going to trust each other, and having that common understanding is critical. Frameworks provide that common dialogue of security controls in a common process: how we build things, and what level of risk we are willing to accept in that whole process.

These slides, which will be available, go very briefly into the five areas. Understanding the modern threat today is critical because, even if you don't have access to classified threat data, there's a lot of great data out there in the Symantec and Verizon reports, and there's open-source threat information available.

If you haven't had a chance to do that, I know the folks who work on the high-assurance stuff in The Open Group RT&ES Forum look at that material a lot, because they're building a capability that is intended to stop some of those types of threats.

The other thing about assets is that we don't do a very good job of criticality analysis. In other words, most of our systems are running, processing, storing, and transmitting data, and we're not segregating the critical data into its own domain where necessary.

I know that's hard to do sometimes. People say, "I've got to have all this stuff ready to go 24×7." But when you look at some of the really bad breaches we have had over the last several years, establishing a domain for critical data makes sense: that domain can be less complex, which means you can better defend it, and you can invest more resources into defending the things that are the most critical.

I used a very simple example of a safe deposit box. I can't get all my stuff into the safe deposit box, so I have to make decisions. I put important papers in there, maybe a coin collection, whatever. I have locks on the front door of my house, but they're not strong enough to stop some of those bad guys out there. So I make those decisions. I put it in the bank, and it goes in a vault. It's a pain in the butt to go down there and get the stuff out, but it gives me more assurance, greater trustworthiness. That's an example of the decisions we have to be able to make.
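The same triage can be sketched in a few lines. This is only an illustration of the idea (the asset names, the criticality categories, and the two-domain split are assumptions, not anything the talk or 800-161 prescribes):

```python
# Sketch: segregate assets by criticality (categories and names are illustrative).
ASSETS = [
    {"name": "payroll-db",     "criticality": "high"},
    {"name": "public-website", "criticality": "low"},
    {"name": "design-docs",    "criticality": "high"},
    {"name": "cafeteria-menu", "criticality": "low"},
]

def assign_domain(asset: dict) -> str:
    # High-criticality assets go to a smaller, less complex, better-defended
    # domain (the "safe deposit box"); everything else stays in the general
    # enterprise domain (the "front-door lock").
    return "restricted-domain" if asset["criticality"] == "high" else "general-domain"

for asset in ASSETS:
    print(f'{asset["name"]:15} -> {assign_domain(asset)}')
```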

Complexity is something that's going to be very difficult to address because of our penchant for bringing in new technologies. Make no mistake about it, these are great technologies. They are compelling. They are making us more efficient. They are allowing us to do things we never imagined, like finding out the optimal time to go to the restroom during a movie. Who could have imagined we could do that a decade ago?

But for every one of our customers out there, the kinds of things we're talking about fly below their radar. When you download 100 apps onto your smartphone, people in general, even the good folks in cybersecurity, have no idea where those apps are coming from, what their pedigree is, whether they have been tested or evaluated at all, or whether they are running on a trusted operating system.

Ultimately, that's what this business is all about, and that's what 800-161 is all about. It's about the lifecycle of the entire stack, from applications, to middleware, to operating systems, to firmware, to integrated circuits, including the supply chain.

The adversary is all over that stack. They have figured out how to compromise our firmware, so we had to come up with firmware integrity controls in our control catalog. That's the world we live in today.

Managing Complexity

I was smiling this morning during the discussion about the DNI, the Director of National Intelligence, building their cloud, and whether that's going to go to the public cloud or not. I think Dawn is probably right, you probably won't see that going to the public cloud anytime soon, but cloud computing gives us an opportunity to manage complexity. You can figure out what you want to send to the public cloud.

Cloud providers do a good job of deploying controls through the FedRAMP program, and they've got a business model that depends on protecting their customers' assets. So that's built into their business model, and they do a lot of great things out there to try to protect that information.

Then, for whatever stays behind in your enterprise, you can start to employ some of the architectural constructs that you’ll see here at this conference, some of the security engineering constructs that we’re going to talk about in 800-160, and you can better defend what stays behind within your organization.

So cloud is a way to reduce that complexity. Enterprise Architecture, TOGAF®, an Open Group standard, all of those architectural things allow you to bring discipline and structure to thinking about what you're building: how to protect it, how much that protection is going to cost, and whether it's worth it. That is the essence of good security. It's not about running around with a barrel full of security controls or ISO 27000, saying, hey, you've got to do all this stuff or the sky is going to fall. Those days are over.

Integration, as we talked about, is also hard. We are working with stovepipes today. Enterprise Architects typically don't talk to security people. Acquisition folks, in most cases, don't talk to security people.

I see it every day. You see RFPs go out with a whole long list of requirements, and then, when it comes to security, they say the system or the product being bought must be FISMA compliant. They know that's a law and they know they have to do that, but they really don't give the industry or the potential contractors any specificity as to what they need to do to bring that product or system to the state where it needs to be.

And so it's all about expectations. I believe our industry, whether it's here or overseas, wherever these great companies operate, the one thing we can be sure of is that they want to please their customers. So maybe the message I'm going to send every day is that we have to be more informed consumers. We have to ask for things that we know we need.

It's like the automobile. When I first started driving, a long time ago, 40 years ago, cars just had seatbelts. There were no airbags and no steel-reinforced doors. Then, at some point, you could actually buy an airbag as an option. Fast-forward to today, and every car has airbags, seatbelts, and steel-reinforced doors. They come as part of the basic product. We don't have to ask for them, but as consumers we know they're there, and they're important to us.

We have to start to look at the IT business in the same way, just like when we cross a bridge or fly in an airplane. All of you who flew here in airplanes and came across bridges had confidence in those structures. Why? Because they are built with good scientific and engineering practices.

So least functionality and least privilege are foundational concepts in our world of cybersecurity. But you really can't look at a smartphone or a tablet and talk about least functionality anymore, at least not if you are running that movie app and want all of that capability.

The last point about trustworthiness is that we have four decades of best practices in trusted systems development. That effort failed 30 years ago: we had the vision of trusted operating systems back then, but the technology and the pace of development far outstripped our ability to actually achieve it.

Increasingly Difficult

We talked about a kernel-based operating system having 2,000, 3,000, 4,000, 5,000 lines of code and being highly trusted. Well, those concepts are still in place. It’s just that now the operating systems are 50 million lines of code, and so it becomes increasingly difficult.

And this is the key thing. As a society, we’re going to have to figure out, going forward, with all this great technology, what kind of world do we want to have for ourselves and our grandchildren? Because with all this technology, as good as it is, if we can’t provide a basis of security and privacy that customers can feel comfortable with, then at some point this party is going to stop.

I don't know when that time is going to come, but I call it the national pain point in this age of digital denial. We will come to that steady state. We just haven't had enough time yet to get to that balance point, but I'm sure we will.

I talked about the essential partnership because I don't think we can solve any problem without a collaborative approach, and that's why I call it the essential partnership: government, industry, and academia.

Certainly all of the innovation, or most of the innovation, comes from our great industry. Academia is critical, because the companies like Oracle or Microsoft want to hire students who have been educated in what I call the STEM disciplines: Science, Technology, Engineering — whether it’s “double e” or computer science — and Mathematics. They need those folks to be able to build the kind of products that have the capabilities, function-wise, and also are trusted.

And government plays some role — maybe some leadership, maybe a bully pulpit, cheerleading where we can — bringing things together. But the bottom line is that we have to work together, and I believe that we’ll do that. And when that happens I think all of us will be able to sit in that movie and fire up that app about the restroom and feel good that it’s secure.

Mary Ann Davidson: I guess I'm preaching to the converted, if I can use a religious example without offending somebody. One of the questions you asked is, why do we even have standards in this area? Of course, some of them exist for technical reasons. Crypto, it turns out, is easy for even very smart people to get wrong. Unfortunately, we have had reason to find that out.

So there is technical correctness. Another reason would be interoperability, to get things to work together in a more secure manner. I've worked in this industry long enough to remember the first SSL implementation, woo-hoo, and then it turned out 40 bits wasn't really 40 bits, because it wasn't random enough, shall we say.

Trustworthiness. There's an ISO standard for that, the Common Criteria. We talk about what it means to have secure software, what types of threats it addresses, and how you prove that it does what you say it does. There are standards for that, which helps everybody. It certainly helps buyers understand a little bit more about what they're getting.

No Best Practices

And last, but not least, the reason "best practices" is in quotes is that there actually are no best practices. Why do I say that, and why am I seeing furrowed brows back there? First of all, lawyers don't like the term in contracts, because then if you are not doing the exact thing, you get sued.

There are good practices and there are worst practices. There typically isn’t one thing that everyone can do exactly the same way that’s going to be the best practice. So that’s why that’s in quotation marks.

Generally speaking, I do think standards can be a force for good in the universe, particularly in cybersecurity, but they are not always a force for good, depending on other factors.

And what is the ecosystem? Well, we have a lot of people. We have standards makers, the people who work on them. Some of them are people who review things; NIST, for example, is very good, which I appreciate, about putting drafts out and taking comments, as opposed to saying, "Here it is, take it or leave it." That's actually a very constructive dialogue, which I believe a lot of people appreciate. I know that I do.

Sometimes there are mandators. You'll get an RFP that says, "Verily, thou shalt comply with this, lest thou be an infidel in the security realm." And that can be positive. It can be the leading edge of getting people to do something good that, in many cases, they should do anyway.

Then there are implementers, who have to take this, decipher it, and figure out how they are going to do it, and there are the people who make sure that you actually did what you said you were going to do.

And last, but not least, there are weaponizers. What do I mean by that? We all know who they are. They are people who will try to develop a standard and then get it mandated. Except it isn't really a standard; it's something they came up with, which might be very good, but mandating it hands them regulatory capture.

And we need to be aware of those people. I like the Oracle database. I have to say that, right? There are a lot of other good databases out there. But suppose I went in and said, purely objectively speaking, that everybody should standardize on the Oracle database, because it's the most secure. Well, nice work if I can get it.

Is that in everybody else’s interest? Probably not. You get better products in something that is not a monopoly market. Competition is good.

So I have an MBA, or had one in a prior life, and they used to talk in the marketing class about the three Ps of marketing. Don’t know what they are anymore; it’s been a while. So I thought I would come up with Four Ps of a Benevolent Standard, which are Problem Statement, Precise Language, Pragmatic Solutions, and Prescriptive Minimization.

Economic Analysis

And the reason I say this is that one of the discussions I have to have a lot of times, particularly with people in the government, and I'm not saying this in any pejorative way, so please don't take it that way, is about the importance of economic analysis, because nobody can do everything.

It's being able to say: I can't boil the ocean, because you are going to boil everything else in it, but I can do these things. And if I do these things, it's very clear what I am trying to do and very clear what the benefit is. We've analyzed it, and it's probably something everybody can do. Then we can get to better.

Better is better than omnibus. Omnibus is something everybody gets thrown under if you make something too big. Sorry, I had to say that.

So, Problem Statement: why is this important? You would think it's obvious, except that it isn't, because so often in the discussions I have with people I find myself asking: tell me what problem you are worried about. What are you trying to accomplish? If you don't tell me that, then we're going to be all over the map. You say "potato" and I say "potahto," and the chorus of that song is "let's call the whole thing off."

I use supply chain as an example, because this one is all over the map. Bad quality? Well, buying a crappy product is a risk of doing business. It's not, per se, a supply chain risk. I'm not saying it's not important, but it's certainly not a cyber-specific supply chain risk.

Bad security: well, that’s important, but again, that’s a business risk.

Backdoor bogeyman: this is the popular one. How do I know you didn’t put a backdoor in there? Well, you can’t actually, and that’s not a solvable problem.

Assurance, supply chain shutdown: yeah, I would like to know that a critical parts supplier isn’t going to go out of business. So these are all important, but they are all different problems.

So you have to say what you're worried about, and it can't be all of the above. Almost every business has some supplier of some sort, even if it's just healthcare. If you're not careful how you define this, you will be trying to cover 100 percent of any entity's business operations. And that's not appropriate.

Use cases are really important, because you may have a Problem Statement but no use case. I'll give you one, and this is not to ding NIST in any way, shape, or form, but I just read this. It's the Cryptographic Key Management System draft. The only reason I cite this as an example is that I couldn't actually find a use case in there.

So whatever the merits of that draft: are you trying to develop a super-secret key management system for government, for very sensitive cryptographic things you are building from scratch? Or are you trying to define a key management system that has to be used for things like TLS, or for any encryption that any commercial product does? Because that's way out of scope.

Without that, what are you worried about? What's also going to happen is that somebody is going to cite this in an RFP, and it's going to be, "Are you compliant with bladdy-blah?" And you have no idea whether it should even apply.

Problem Statement

So that Problem Statement is really important, because without that, you can’t have that dialogue in groups like this. Well, what are we trying to accomplish? What are we worried about? What are the worst problems to solve?

Precise Language is also very important. Why? Because it turns out everybody speaks a slightly different language, even if we all speak some dialect of geek. Take, for example, "vulnerability."

If you say vulnerability to my vulnerability handling team, they think of that as a security vulnerability that’s caused by a defect in software.

But I've seen it used to include, well, you didn't configure the product properly. I don't know what that is, but it's not a vulnerability, at least not to a vendor. You implemented a policy incorrectly: it might lead to a vulnerability, but it isn't one. So you see where I am going with this. If you don't have language that defines the same thing very crisply, you read something, you go off and do it, and you realize you solved the wrong problem.

I am very fortunate: one of my colleagues from Oracle works on our hardware, and I also saw a presentation by people in that group at the Cryptographic Conference in November. They talked about how much trouble we got into because "module" means a very different thing to a hardware person than it does to somebody trying to certify it. This is a huge problem, because again, you say "potato," I say "potahto." It's not the same thing to everybody, so it needs to be very precisely defined.

Scope is also important. I don't know why I have to say this so often, and I'm sure it gets kind of tiresome to the recipients, but COTS isn't GOTS. Commercial software is not government software, and it's actually globally developed. That's the only way you get commercial software that is feature-rich and revs frequently: we have access to global talent.

It's not designed for all threat environments. It can certainly be better, and I think most people are moving toward better software, most likely because we're getting beaten up by hackers and then by our customers, and because it's good business. But there is no commercial market for high-assurance software or hardware, and that's really important, because there is only so much you can do to move the market.

So even a big customer like the U.S. government, important as it is in the market for a lot of people, is not big enough to move the marketplace on its own, and so you are limited by the business dynamic.

So that's important: you can get to better. I tell people, "Okay, anybody here have a Volkswagen? Is it an MRAP vehicle? No, it's not, is it? You bought a Volkswagen and you got a Volkswagen. You can't take a Volkswagen, drive it around the streets, and expect it to perform like an MRAP vehicle. Even a good system integrator cannot sprinkle pixie dust over that Volkswagen and turn it into an MRAP vehicle. Those are very different threat environments."

Why would you think commercial software and hardware are different? They're not. It's exactly the same thing. You might have a really good Volkswagen, and it's great for commuting, but it is never going to perform in an IED environment. It wasn't designed for that, and there is nothing you can do to make it perform in that environment.

Pragmatism

Pragmatism: I really wish anybody working on any standard would do some economic analysis, because economics rules the world. Even if something is a really good idea, time, money, and people, particularly qualified security people, are constrained resources.

So if you make people do something that looks good on paper but is really time-consuming, the opportunity cost is too high. Opportunity cost means asking: what is the value of something else you could do with those resources that would either cost less or deliver higher benefit? If you don't do that analysis, you have people saying, "Hey, that's a great idea. Wow, that's great too. I'd like that." It's like asking your kid, "Do you want candy? Do you want new toys? Do you want more footballs?" instead of saying, "Hey, you have 50 bucks, what are you going to do with it?"

And then there are unintended consequences, because if you make this too complex, you just have fewer suppliers. People will simply say, "I'm just not going to bid, because it's impossible." I'm going to give you three examples, and again, I'm trying to be respectful here. This is not to dis anybody who worked on these; in some cases, these things have been modified in subsequent revisions, which I really appreciate. But they are examples of needing to ask: what were you asking for in the first place?

I think this was in an early version of NISTIR 7622, and it has since been excised. There was a requirement that the purchaser wanted to be notified of personnel changes involving maintenance. Okay, what does that mean?

I know what I think they wanted, which is, if you are outsourcing the human resources for the Defense Department and you move the whole thing to “Hackistan,” obviously they would want to be notified. I got that, but that’s not what it said.

So I look at that and say, we have at least 5,000 products at Oracle, and billions and billions of lines of code. Every day, somebody checks out a transaction, gets some code, and does some work on it, and they didn't write it in the first place.

So am I going to tweet all of that to somebody? What's that going to do for you? Plus you have things like the German Workers Council. Are we going to tell the US Government that Jurgen worked on this line of code? Oh no, that's not going to happen.

So what was it you were worried about? Because that is not sustainable: tweeting people 10,000 times a day with code changes is just going to consume a lot of resources.

Another one, from an early version of something they were trying to do: they wanted to know, for each phase of development of each project, how many foreigners worked on it. What's a foreigner? Is it a Green Card holder? Is it someone who has a dual passport? What is that going to do for you?

Now, again, if you had super-custom code for some intelligence application, I can understand there might be cases in which that would matter. But general-purpose software is not one of them. As I said, I can give you that information; we're a big company and we've got lots of resources. A smaller company probably can't. And again, what will it do for you? Because I am taking resources I could be using on something much more valuable and putting them on something really silly.

Last, but not least, and again, with respect, I think I know why this was in there. It might have been the secure engineering draft standard, which has many good parts to it.

Root Cause Analysis

I think vendors will probably understand this one pretty quickly: Root Cause Analysis. The draft said that if you have a vulnerability, one of the first things you should do is Root Cause Analysis. But if you're a vendor and you have a CVSS 10 security vulnerability in a product that's being exploited, what do you think the first thing you are going to do is?

Get a patch into your customers' hands, or a workaround? Yeah, that's probably the number one priority. Root Cause Analysis, particularly for really nasty security bugs, is also really important. CVSS 0, who cares? But for a 9 or 10, you should be doing that root cause analysis.

I've got a better example. We have a technology called Java. Maybe you've heard of it. We put a lot of work into fixing Java. One of the things we did was not only Root Cause Analysis for CVSS 9 and higher; those have to go in front of my boss, and every Java developer had to sit through that briefing. How did this happen?

Last but not least, we looked for other similar instances, not just the root cause: how did that get in there, how do we avoid it, and where else does this problem exist? I am not saying this to make us look good; I'm saying it for the analytics. What are you really trying to solve here? Root Cause Analysis is important, but it's important in context. If I have to do it for everything, it's probably not the best use of a scarce resource.
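That severity-driven policy can be sketched as a tiny triage rule. The CVSS 9.0 threshold comes from the talk; the field names and the specific set of actions are illustrative assumptions:

```python
# Sketch of a severity-driven root cause analysis (RCA) policy.
# The CVSS >= 9.0 threshold is from the talk; the rest is illustrative.
RCA_THRESHOLD = 9.0

def triage(cvss_score: float) -> dict:
    return {
        "ship_fix_first": True,  # a patch or workaround is always priority one
        "root_cause_analysis": cvss_score >= RCA_THRESHOLD,
        "search_for_similar_instances": cvss_score >= RCA_THRESHOLD,
        "executive_briefing": cvss_score >= RCA_THRESHOLD,
    }

print(triage(9.8))  # nasty bug: RCA, variant hunting, and a briefing
print(triage(3.1))  # low severity: fix it, but don't burn scarce RCA resources
```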

My last point is to minimize prescriptiveness, within limits. For example, some people in here probably know how to bake, or maybe you have made a pie. There is no one right way to bake a cherry pie. Some people go down to Ralphs, get a frozen Marie Callender's out of the freezer, stick it in the oven, and they've got a pretty good cherry pie.

Some people make everything from scratch. Some people use a prepared pie crust and they do something special with the cherries they picked off their tree, but there is no one way to do that that is going to work for everybody.

There are best practices for some things. For example, I can say truthfully that good development practice is not: number one, just start coding; number two, it compiles without too many errors on the base platform; then ship it. That is not good development practice.

If you mandate too much, it will stifle innovation and it won't work for people. Plus, as I mentioned, you will have an opportunity cost if I'm spending resources doing something somebody says I have to do when there is a more innovative way of doing it.

We don't have a single development methodology at Oracle, mostly because of acquisitions. We buy a great company, and we don't tell them, "You know that agile thing you are doing? That's so last year. You have to do waterfall." That's not going to work very well. But there are good practices even within those different methodologies.

Allowing for different hows is really important. Static analysis is one of them. I think static analysis is industry practice now, and people should be doing it. Third-party static analysis, though, is really bad. I have been opining about this all morning.

Third-party Analysis

Let's just say I have a large customer, whom I won't name, that used a third-party static analysis service. They broke their license agreement with us, and they heard a lot about that from us. Worse, they gave us a report that included vulnerabilities from one of our competitors. I don't want to know about those, right? I can't fix them. I did tell my competitor, "You should know this report exists, because I'm sure you want to analyze it."

Here's the worst part: how many of the vulnerabilities the third party found do you think had any merit? Running the tool is nothing; analyzing the results is everything. That customer and the vendor wasted the time of one of our best security leads trying to make sure there was no "there" there, and there wasn't.

So again, last but not least: government can use its purchasing power in a lot of very good ways, but realize that regulatory requirements are probably going to lag actual practice. You could be specifying buggy-whip standards when the reality is that nobody uses buggy whips anymore. It's not always about the standard, particularly if you are using resources in a less than optimal way.

One of the things I like about The Open Group is that here we have actual practitioners. This is one of the best forums I have seen, because there are people who have actual subject matter expertise to bring to the table, which is so important in saying what is going to work and can be effective.

The last thing I am going to say is a nice thank-you to the people in The Open Group Trusted Technology Forum (OTTF), because I appreciate the caliber of my colleagues, and also Sally Long. They talk about this type of effort as herding cats, and at least with me, it's probably like herding a snarly cat. I can be very snarly. I'm sure you can pick up on that.

So I truly appreciate the professionalism, the focus, and the targeting: taking on a well-chosen slice of the supply-chain problem and making it better, not boiling the ocean, but staying very focused and targeted, with very high-caliber participation. So thank you to my colleagues, and particularly thank you to Sally. That's it; I will turn it over to others.

Jim Hietala: We do, we have a few questions from the audience. Here's the first one, and both of you should feel free to chime in on it. Something you brought up, Dr. Ross: building security in, looking at software and systems engineering processes. How do you bring industry along in terms of commercial off-the-shelf products and services, especially when you look at things like IoT, where we have IP interfaces grafted onto all sorts of devices?

Ross: As Mary Ann was saying before, the strength of any standard is really its implementability out there. When we talk about, in particular, the engineering standard, the 15288 extension, if we do that correctly, then every organization out there that is already using, let's say, a security development lifecycle like 27034 (you can pick your favorite standard) should be able to see those activities reflected in the different lanes of the 15288 processes.

This is a very important point that I got from Mary Ann’s discussion. We have to win the hearts and minds and be able to reflect things in a disciplined and structured process that doesn’t take people off their current game. If they’re doing good work, we should be able to reflect that good work and say, “I’m doing these activities whether it’s SDL, and this is how it would map to those activities that we are trying to find in the 15288.”

And that can apply to the IoT. Again, it goes back to the computer, whether it’s Oracle database or a Microsoft operating system. It’s all about the code and the discipline and structure of building that software and integrating it into a system. This is where we can really bring together industry, academia, and government and actually do something that we all agree on.

Different Take

Davidson: I would have a slightly different take on this, and I know mine is not a lone voice crying in the wilderness. My concern about the IoT goes back to things I learned in business school about financial market theory, which unfortunately were borne out in 2008.

There are certain types of risks you can mitigate. If I cross a busy street, I'm worried about getting hit by a car, so I look both ways. I can mitigate that. You can't mitigate systemic risk; it means that you have created a fragile system. That is the problem with the IoT, and it is a problem that no amount of engineering will solve.

If it's not a problem, why aren't we giving nuclear weapons IP addresses? Okay, I am not making this up. The Air Force thought about that at one point. You're laughing. Okay, Armageddon, there is an app for that.

That's the problem. I know this is going to happen anyway, whether or not I approve of it, but I really wish that people could look at this not just in terms of how many of these devices there are and what a great opportunity it is, but in terms of what systemic risk we are creating by doing this.

My house is not connected to the Internet directly, and I do not want somebody to shut my appliances off, shut down my refrigerator, lock it so that I can't get into it, or use it for launching an attack. Those are the discussions we should be having, at least as much as how we make sure that the people designing these things have a clue.

Hietala: The next question is, how do customers and practitioners value the cost of security? And then a related question: what can global companies do to get C-Suite attention and investment on cybersecurity, that whole ROI value discussion?

Davidson: I know they value it, because nobody calls me up and says, "I am bored this week. Don't you have more security patches for me to apply?" That's actually true. We know what it costs us to produce a lot of these patches, and given the amount of resources we spend on that, I would much rather be putting them into building something new and innovative, where we could charge money for it and provide more value to customers.

So it’s cost avoidance, number one; number two more people have an IT backbone. They understand the value of having it be reliable. Probably one of the reasons people are moving to clouds is that it’s hard to maintain all these and hard to find the right people to maintain them. But also I do have more customers asking us now about our security practices, which is be careful what you wish for

I said this 10 years ago: people should be demanding to know what we're doing. Now I am going to spend a lot of time answering RFPs, but that's good. These people are aware of this. They're running their business on our stuff, and they want to know what kind of care we're taking to make sure we're protecting their data and their mission-critical applications as if they were ours.

Difficult Question

Ross: The ROI question is very difficult with regard to security. I think this goes back to what I said earlier: the sooner we get security out of its stovepipe and integrated as just part of the best practices we follow every day, the better, whether it's in the development work at a company, or in our enterprises as part of mainstream organizational management things like the SDLC, or in any engineering work within the organization, or with the Enterprise Architecture group involved. That integration makes security less of a "hey, I am special" thing and more of just a part of the way we do business.

So customers are looking for reliability and dependability. They rely on this great bed of IT products, systems, and services, and they're not always focused on the security aspects. They just want to make sure it works, and that if there is an attack and the malware goes creeping through their system, they can be as protected as they need to be. Sometimes that flies way below their radar.

So it’s got to be a systemic process and an organizational transformation. I think we have to go through it, and we are not quite there just yet.

Davidson: Yeah, and you really do have to bake it in. I have a team of — I’ve got three more headcount, hoo-hoo — 45 people, but we have about 1,600 people in development whose jobs are to be security points of contact and security leads. They’re the boots on the ground who implement our program, because I don’t want to have an organization that peers over everybody’s shoulder to make sure they are writing good code. It’s not cost-effective, not a good way to do it. It’s cultural.

One of the ways that you do that is seeding those people in the organization, so they become the boots on the ground and they have authority to do things, because you’re not going to succeed otherwise.

Going back to Java, that was the first discussion I had with one of the executives: this is a cultural thing. Everybody needs to feel that he or she is personally responsible for security, not just those 10 or 20 people over there, whoever the security weenies are. It's got to be everybody, and when you can do that, you really start to see change in how things happen. Everybody is not going to be a security expert, but everybody has some responsibility for security.

Transcript available here.

Transcript of part of the proceedings from The Open Group San Diego 2015 in February. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Join the conversation! @theopengroup #ogchat


Comments Off on Cybersecurity Standards: The Open Group Explores Security and Ways to Assure Safer Supply Chains

Filed under Cloud, Cloud/SOA, Conference, Cybersecurity, Enterprise Architecture, Information security, Internet of Things, IT, OTTF, RISK Management, Security, Standards, TOGAF®, Uncategorized

Using the Open FAIR Body of Knowledge with Other Open Group Standards

By Jim Hietala, VP Security, and Andrew Josey, Director of Standards, The Open Group

This is the third in our four-part blog series introducing the Open FAIR Body of Knowledge. In this blog, we look at how the Open FAIR Body of Knowledge can be used with other Open Group standards.

The Open FAIR Body of Knowledge provides a model with which to decompose, analyze, and measure risk. Risk analysis and management is a horizontal enterprise capability that is common to many aspects of running a business. Risk management in most organizations exists at a high level as Enterprise Risk Management, and it exists in specialized parts of the business such as project risk management and IT security risk management. Because the proper analysis of risk is a fundamental requirement for different areas of Enterprise Architecture (EA), and for IT system operation, the Open FAIR Body of Knowledge can be used to support several other Open Group standards and frameworks.
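To make the decomposition concrete: Open FAIR analyzes risk in terms of Loss Event Frequency (LEF) and Loss Magnitude (LM). The sketch below shows one way that decomposition might drive a simple Monte Carlo estimate of annualized loss exposure. The uniform distributions and the dollar figures are illustrative assumptions, not something the O-RT or O-RA standards prescribe:

```python
import random

# Minimal sketch of the Open FAIR decomposition: risk is analyzed in terms of
# Loss Event Frequency (LEF) and Loss Magnitude (LM). All numbers below are
# invented for illustration; O-RA describes the full analysis process.
def simulate_annual_loss(lef_min, lef_max, lm_min, lm_max, trials=10_000):
    total = 0.0
    for _ in range(trials):
        events = random.uniform(lef_min, lef_max)        # loss events per year
        loss_per_event = random.uniform(lm_min, lm_max)  # magnitude per event
        total += events * loss_per_event
    return total / trials  # mean annualized loss exposure

# Hypothetical scenario: 0.1 to 2 loss events/year, $50k to $500k per event.
ale = simulate_annual_loss(0.1, 2, 50_000, 500_000)
print(f"Estimated annualized loss exposure: ${ale:,.0f}")
```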

The TOGAF® Framework

In the TOGAF 9.1 standard, Risk Management is described in Part III: ADM Guidelines and Techniques. Open FAIR can be used to help improve the measurement of various types of Risk, including IT Security Risk, Project Risk, Operational Risk, and other forms of Risk. Open FAIR can help to improve architecture governance through improved, consistent risk analysis and better Risk Management. Risk Management is described in the TOGAF framework as a necessary capability in building an EA practice. Use of the Open FAIR Body of Knowledge as part of an EA risk management capability will help to produce risk analysis results that are accurate and defensible, and that are more easily communicated to senior management and to stakeholders.

O-ISM3

The Open Information Security Management Maturity Model (O-ISM3) is a process-oriented approach to building an Information Security Management System (ISMS). Risk Management as a business function exists to identify risk to the organization, and in the context of O-ISM3, information security risk. Open FAIR complements the implementation of an O-ISM3-based ISMS by providing more accurate analysis of risk, which the ISMS can then be designed to address.

O-ESA

The Open Enterprise Security Architecture (O-ESA) from The Open Group describes a framework and template for policy-driven security architecture. O-ESA (in Sections 2.2 and 3.5.2) describes risk management as a governance principle in developing an enterprise security architecture. Open FAIR supports the objectives described in O-ESA by providing a consistent taxonomy for decomposing and measuring risk. Open FAIR can also be used to evaluate the cost and benefit, in terms of risk reduction, of various potential mitigating security controls.
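As a rough illustration of that cost/benefit evaluation, the sketch below compares hypothetical controls by the risk reduction they buy, net of their cost. The figures and the simple net-benefit formula are assumptions for illustration, not something O-ESA or Open FAIR mandates:

```python
# Sketch: compare candidate controls by (risk reduction) - (annual cost).
# All figures are hypothetical.
BASELINE_ALE = 400_000  # annualized loss exposure with no new control, in $

controls = [
    {"name": "network segmentation", "residual_ale": 150_000, "annual_cost": 80_000},
    {"name": "extra monitoring",     "residual_ale": 300_000, "annual_cost": 20_000},
]

for c in controls:
    net_benefit = (BASELINE_ALE - c["residual_ale"]) - c["annual_cost"]
    print(f'{c["name"]:22} net annual benefit: ${net_benefit:,}')
# Here segmentation wins ($170,000 vs. $80,000), despite its higher price tag.
```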

O-TTPS

The O-TTPS standard, developed by The Open Group Trusted Technology Forum, provides a set of guidelines, recommendations, and requirements that help assure against maliciously tainted and counterfeit products throughout commercial off-the-shelf (COTS) information and communication technology (ICT) product lifecycles. The O-TTPS standard includes requirements to manage risk in the supply chain (SC_RSM). Specific requirements in the Risk Management section of O-TTPS include identifying, assessing, and prioritizing risk from the supply chain. The use of the Open FAIR taxonomy and risk analysis method can improve these areas of risk management.

The ArchiMate® Modeling Language

The ArchiMate modeling language, as described in the ArchiMate Specification, can be used to model Enterprise Architectures. The ArchiMate Forum is also considering extensions to the ArchiMate language to include modeling security and risk. Basing this risk modeling on the Risk Taxonomy (O-RT) standard will help to ensure that the relationships between the elements that create risk are consistently understood and applied to enterprise security and risk models.

O-DA

The O-DA standard (Dependability Through Assuredness™), developed by The Open Group Real-time and Embedded Systems Forum, provides the framework needed to create dependable system architectures. The requirements process used in O-DA requires that risk be analyzed before dependability requirements are developed. Open FAIR can help to create a solid risk analysis upon which to build dependability requirements.

In the final installment of this blog series, we will look at the Open FAIR certification for people program.

The Open FAIR Body of Knowledge consists of the following Open Group standards:

  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

These can be downloaded from The Open Group publications catalog at http://www.opengroup.org/bookstore/catalog.

Our other publications include a Pocket Guide and a Certification Study Guide.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

 

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.1, IEEE Std 1003.1, 2013 Edition (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.

 

Comments Off on Using the Open FAIR Body of Knowledge with Other Open Group Standards

Filed under Uncategorized, Enterprise Architecture, Cybersecurity, TOGAF®, ArchiMate®, Standards, O-TTF, OTTF, RISK Management, real-time and embedded systems, O-TTPS, Security

Global Open Trusted Technology Provider™ Standard

By Sally Long, Forum Director, Open Trusted Technology Forum, The Open Group

A First Line of Defense in Protecting Critical Infrastructure – A Technical Solution that can Help Address a Geo-political Issue

The challenges associated with Cybersecurity and critical infrastructure, which include the security of global supply chains, are enormous. After working almost exclusively on supply chain security issues for the past 5 years, I am still amazed at the number of perspectives that need to be brought to bear on this issue.

Recently I had the opportunity to participate in a virtual panel sponsored by InfoSecurity Magazine entitled: “Protecting Critical Infrastructure: Developing a Framework to Mitigate Cybersecurity Risks and Build Resilience”. The session was recorded and the link can be found at the end of this blog.

The panelists were:

  • Jonathan Pollet, Founder, Executive Director Red Tiger Security
  • Ernie Hayden, Executive Consultant, Securicon LLC
  • Sean Paul McGurk, Global Managing Principal, Critical Infrastructure Protection Cybersecurity, Verizon
  • Sally Long, Director, The Open Group Trusted Technology Forum

One perspective I brought to the discussion was the importance of product integrity and the security of ICT global supply chains as a first line of defense to mitigate vulnerabilities that can lead to maliciously tainted and counterfeit products. This first line of defense must not be ignored when discussing how to prevent damage to critical infrastructure and the horrific consequences that can ensue.

The other perspective I highlighted was that securing global supply chains is both a technical and a global geo-political issue. And that addressing the technical perspective in a vendor-neutral and country-neutral manner can have a positive effect on diminishing the geo-political issues as well.

The technical perspective is driven by the simple fact that almost everything has a global supply chain; virtually nothing is built by just one company or in just one country. In order for products to have integrity and their supply chains to be secure, all constituents in the chain must follow best practices for security, both in-house and in their supply chains.

The related but separate geo-political perspective, driven by a desire to protect against malicious attackers and a lack of trust of and from nation-states, is pushing many countries to consider approaches that are disconcerting, to put it mildly. This is not just a US issue; every country is concerned about securing its critical infrastructure and the underlying supply chains. Unfortunately, we are beginning to see attempts to address these global concerns through local solutions (i.e., country-specific and disparate requirements that raise the toll on suppliers and could set up barriers to trade).

The point is that an international technical solution (e.g. a standard and accreditation program for all constituents in global supply chains), which all countries can adopt, helps address the geo-political issues by having a common standard and common conformance requirements, raising all boats on the river toward becoming trusted suppliers.

To illustrate the point, I provided some insight into a technical solution from The Open Group Trusted Technology Forum. The Open Group announced the release of the Open Trusted Technology Provider™ Standard (O-TTPS) – Mitigating Maliciously Tainted and Counterfeit Products, a standard of best practices that addresses product integrity and supply chain security throughout a product's life cycle (from design through disposal). In February 2014, The Open Group announced the O-TTPS Accreditation Program, which enables a technology provider (e.g., integrator, OEM, hardware or software component supplier, or reseller) that conforms to the standard to be accredited, positioning them on the public accreditation registry so they can be identified as an Open Trusted Technology Provider™.

Establishing a global standard and accreditation program like the O-TTPS, a program that helps mitigate the risk of maliciously tainted and counterfeit products being integrated into critical infrastructure, and one that is already available to any technology provider in any country, whether they are based in the US, China, Germany, India, Brazil, or anywhere else in the world, is most certainly a step in the right direction.

For a varied set of perspectives and opinions from critical infrastructure and supply chain subject matter experts, you can view the recording at the following link. Please note that you may need to log in to the InfoSecurity website for access:

http://view6.workcast.net/?pak=1316915596199100&cpak=9135816490522516

To learn more about the Open Trusted Technology Provider Standard and Accreditation Program, please visit the OTTF site: http://www.opengroup.org/subjectareas/trusted-technology

Sally Long is the Director of The Open Group Trusted Technology Forum (OTTF). She has managed customer-supplier forums and collaborative development projects for over twenty years. She was the release engineering section manager for all multi-vendor collaborative technology development projects at The Open Software Foundation (OSF) in Cambridge, Massachusetts. Following the merger of the OSF and X/Open under The Open Group, she served as director for multiple forums in The Open Group. Sally has a Bachelor of Science degree in Electrical Engineering from Northeastern University in Boston, Massachusetts.

Contact:  s.long@opengroup.org; @sallyannlong

Comments Off on Global Open Trusted Technology Provider™ Standard

Filed under COTS, Cybersecurity, O-TTF, O-TTPS, OTTF, Security, Standards, supply chain, Supply chain risk, Uncategorized

Why Technology Must Move Toward Dependability through Assuredness™

By Allen Brown, President and CEO, The Open Group

In early December, a technical problem at the U.K.’s central air traffic control center in Swanwick, England caused significant delays that were felt at airports throughout Britain and Ireland, also affecting flights in and out of the U.K. from Europe to the U.S. At Heathrow—one of the world’s largest airports—alone, there were a reported 228 cancellations, affecting 15 percent of the 1,300 daily flights flying to and from the airport. With a ripple effect that also disturbed flight schedules at airports in Birmingham, Dublin, Edinburgh, Gatwick, Glasgow and Manchester, the British National Air Traffic Services (NATS) were reported to have handled 20 percent fewer flights that day as a result of the glitch.

According to The Register, the problem was caused when a touch-screen telephone system that allows air traffic controllers to talk to each other failed to update during what should have been a routine shift change from the night to daytime system. According to news reports, the NATS system is the largest of its kind in Europe, containing more than a million lines of code. It took the engineering and manufacturing teams nearly a day to fix the problem. As a result of the snafu, Irish airline Ryanair even went so far as to call on Britain’s Civil Aviation Authority to intervene to prevent further delays and to make sure better contingency efforts are in place to prevent such failures happening again.

Increasingly complex systems

As businesses have come to rely more and more on technology, the systems used to keep operations running smoothly from day to day have become not only larger but increasingly complex. We are long past the days when a single mainframe was used to handle a few batch calculations.

Today, large global organizations, in particular, have systems that are spread across multiple centers of technical operations, often scattered in various locations throughout the globe. And with industries also becoming more inter-related, even individual company systems are often connected to larger extended networks, such as when trading firms are connected to stock exchanges or, as was the case with the Swanwick failure, airlines are affected by NATS’ network problems. Often, when systems become so large that they are part of even larger interconnected systems, the boundaries of the entire system are no longer always known.

The Open Group’s vision for Boundaryless Information Flow™ has never been closer to fruition than it is today. Systems have become increasingly open out of necessity because commerce takes place on a more global scale than ever before. This is a good thing. But as these systems have grown in size and complexity, there is more at stake when they fail than ever before.

The ripple effect felt when technical problems shut down major commercial systems cuts far, wide and deep. Problems such as what happened at Swanwick can affect the entire extended system. In this case, NATS, for example, suffers from damage to its reputation for maintaining good air traffic control procedures. The airlines suffer in terms of cancelled flights, travel vouchers that must be given out and angry passengers blasting them on social media. The software manufacturers and architects of the system are blamed for shoddy planning and for not having the foresight to prevent failures. And so on and so on.

Looking for blame

When large technical failures happen, stakeholders, customers, the public and now governments are beginning to look for accountability, for someone to assign blame. When the Obamacare website didn't operate as expected, the U.S. Congress went looking for blame and jobs were lost. In the NATS fiasco, Ryanair asked for the government to intervene. Risk.net has reported that after the Royal Bank of Scotland experienced a batch processing glitch last summer, the U.K. Financial Services Authority wrote to large banks in the U.K. requesting that they identify the people in their organizations responsible for business continuity. And when U.S. trading company Knight Capital lost $440 million in 40 minutes after a trading software upgrade failed in August, U.S. Securities and Exchange Commission Chairman Mary Schapiro was quoted in the same article as stating: "If there is a financial loss to be incurred, it is the firm committing the error that should suffer that loss, not its customers or other investors. That more than anything sends a wake-up call to the entire industry."

As governments, in particular, look to lay blame for IT failures, companies—and individuals—will no longer be safe from the consequences of these failures. And it won’t just be reputations that are lost. Lawsuits may ensue. Fines will be levied. Jobs will be lost. Today’s organizations are at risk, and that risk must be addressed.

Avoiding catastrophic failure through assuredness

As any IT person or Enterprise Architect well knows, completely preventing system failure is impossible. But mitigating system failure is not. Increasingly, the task of keeping systems from failing, rather than just keeping them up and running, will be the job of CTOs and Enterprise Architects.

When systems grow to a level of massive complexity that encompasses everything from old legacy hardware to Cloud infrastructures to worldwide data centers, how can we make sure those systems are reliable, highly available, secure and maintain optimal information flow while still operating at a maximum level that is cost effective?

In August, The Open Group introduced the first industry standard to address the risks associated with large complex systems, the Dependability through Assuredness™ (O-DA) Framework. This new standard is meant to help organizations both determine system risk and help prevent failure as much as possible.

O-DA provides guidelines to make sure large, complex, boundaryless systems run according to the requirements set out for them while also providing contingencies for minimizing damage when stoppage occurs. O-DA can be used as a standalone or in conjunction with an existing architecture development method (ADM) such as the TOGAF® ADM.

O-DA encompasses lessons learned within a number of The Open Group’s forums and work groups—it borrows from the work of the Security Forum’s Dependency Modeling (O-DM) and Risk Taxonomy (O-RT) standards and also from work done within the Open Group Trusted Technology Forum and the Real-Time and Embedded Systems Forums. Much of the work on this standard was completed thanks to the efforts of The Open Group Japan and its members.

This standard addresses the issue of responsibility for technical failures by providing a model for accountability throughout any large system. Accountability is at the core of O-DA because without accountability there is no way to create dependability or assuredness. The standard is also meant to address and account for the constant change that most organizations experience on a daily basis. The two underlying principles within the standard provide models for both a change accommodation cycle and a failure response cycle. Each cycle, in turn, provides instructions for creating a dependable and adaptable architecture, providing accountability for it along the way.
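As a rough illustration only (O-DA specifies these cycles as process guidance, not as code), the two cycles can be pictured as a simple control loop: accommodate change while the system meets its requirements, and switch to failure response when it does not. The function and state names below are assumptions for illustration:

```python
# Illustrative control flow for O-DA's two cycles (not from the standard's text).
def dependability_loop(system_ok: bool, change_requested: bool) -> str:
    if not system_ok:
        # Failure response cycle: detect, contain, recover, and account
        # for what went wrong.
        return "failure-response-cycle"
    if change_requested:
        # Change accommodation cycle: re-agree requirements, update the
        # architecture, and re-establish accountability before deploying.
        return "change-accommodation-cycle"
    return "normal-operation"

print(dependability_loop(system_ok=True,  change_requested=True))
print(dependability_loop(system_ok=False, change_requested=False))
```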


Ultimately, O-DA will help organizations identify potential anomalies and create contingencies for dealing with problems before or as they happen. The more organizations can do to build dependability into large, complex systems, the fewer technical disasters should occur. As systems continue to grow and their boundaries continue to blur, assuredness through dependability and accountability will be an integral part of managing complex systems into the future.

Allen Brown

Allen Brown is President and CEO, The Open Group – a global consortium that enables the achievement of business objectives through IT standards.  For over 14 years Allen has been responsible for driving The Open Group’s strategic plan and day-to-day operations, including extending its reach into new global markets, such as China, the Middle East, South Africa and India. In addition, he was instrumental in the creation of the AEA, which was formed to increase job opportunities for all of its members and elevate their market value by advancing professional excellence.

Comments Off on Why Technology Must Move Toward Dependability through Assuredness™

Filed under Dependability through Assuredness™, Standards

New Accreditation Program – Raises the Bar for Securing Global Supply Chains

By Sally Long, Director of The Open Group Trusted Technology Forum (OTTF)™

In April 2013, The Open Group announced the release of the Open Trusted Technology Provider™ Standard (O-TTPS) 1.0 – Mitigating Maliciously Tainted and Counterfeit Products. Now we are announcing the O-TTPS Accreditation Program, launched on February 3, 2014, which enables organizations that conform to the standard to be accredited as Open Trusted Technology Providers™.

The O-TTPS, a standard of The Open Group, provides a set of guidelines, recommendations and requirements that help protect against maliciously tainted and counterfeit products throughout the commercial off-the-shelf (COTS) information and communication technology (ICT) product lifecycle. The standard includes best practices for every phase of that lifecycle: design, sourcing, build, fulfillment, distribution, sustainment and disposal, thus enhancing the integrity of COTS ICT products and the security of their global supply chains.

This accreditation program is one of the first of its kind, providing accreditation for conformance to a standard that couples product integrity with supply chain security.

The standard and the accreditation program are the result of a four-year collaboration between government, third-party evaluators and some of industry’s most mature and respected providers, who came together to share the integrity and security practices they use in-house and with their own supply chains.

Applying for O-TTPS Accreditation

When the OTTF started this initiative, one of its many mantras was “raise all boats.” The objective was to raise the security bar across the full spectrum of the supply chain: from small component suppliers, to the providers who include those components in their products, to the integrators who incorporate those providers’ products into customers’ systems.

The O-TTPS Accreditation Program is open to all component suppliers, providers and integrators. The holistic potential of this program, as illustrated in the diagram below, should not be underestimated, but it will take a concerted effort to reach and encourage all constituents in the supply chain to become involved.

The importance of mitigating the risk of maliciously tainted and counterfeit products

The focus on mitigating the risks of tainted and counterfeit products by increasing the security of the supply chain is critical in today’s global economy. Virtually nothing is made from a single source.

COTS ICT supply chains are complex. A single product can be composed of hundreds of components from multiple suppliers in many different parts of the world, and providers may change their component suppliers frequently depending on the going rate for a particular component. If bad things happen along the supply chain, such as counterfeit components being substituted for authentic ones, maliciously tainted code being inserted, or the double hammer of maliciously tainted counterfeit parts, then terrible things can happen when that product is installed at a customer site.

With the threat of tainted and counterfeit technology products posing a major risk to global organizations, it is increasingly important for those organizations to take what steps they can to mitigate these risks. The O-TTPS Accreditation Program is one of those steps. Can an accreditation program completely eliminate the risk of tainted and counterfeit components? No!  Does it reduce the risk? Absolutely!
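To make that risk concrete, here is a small, hypothetical sketch; the product, components and suppliers below are invented for illustration. It walks a product’s bill of materials and flags any component whose supplier is not on a trusted-provider list, which is essentially the visibility a customer gains from a public registry of accredited providers.

```python
# Hypothetical illustration of supply chain provenance checking; all data
# and names below are invented. It flags components sourced from suppliers
# that do not appear on a trusted-provider list.

bill_of_materials = {
    "router-x1": [
        ("network-chip", "SupplierA"),
        ("firmware-image", "SupplierB"),
        ("power-module", "SupplierC"),
    ],
}

trusted_providers = {"SupplierA", "SupplierB"}  # e.g., an accreditation registry

def untrusted_components(product: str) -> list:
    """Return (component, supplier) pairs whose supplier is not trusted."""
    return [
        (component, supplier)
        for component, supplier in bill_of_materials.get(product, [])
        if supplier not in trusted_providers
    ]

print(untrusted_components("router-x1"))  # [('power-module', 'SupplierC')]
```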

How the Accreditation Program works

The Open Group, with over 25 years’ experience managing vendor- and technology-neutral certification programs, will assume the role of Accreditation Authority over the entire program. Additionally, the program will use third-party assessors to evaluate conformance to the O-TTPS requirements.

Companies seeking accreditation will declare their Scope of Accreditation, which means they can choose to be accredited for conforming to the O-TTPS standard and adhering to its best practice requirements across their entire enterprise, within a specific product line or business unit, or within an individual product. Organizations applying for accreditation are then required to provide evidence of conformance for each of the O-TTPS requirements, demonstrating that they have the processes in place to secure both in-house development and their supply chains across the entire COTS ICT product lifecycle. O-TTPS accredited organizations will then be able to identify themselves as Open Trusted Technology Providers™ and will become part of a public registry of trusted providers.
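As a rough sketch of the information such an application captures, consider the following; the field names and requirement identifiers are invented for illustration, since the program defines its own forms and evidence requirements.

```python
# Rough, invented sketch of an accreditation application: the declared
# Scope of Accreditation plus evidence mapped to each requirement. Field
# names and requirement identifiers are hypothetical, not the program's.

from dataclasses import dataclass, field

VALID_SCOPES = {"enterprise", "business_unit", "product_line", "product"}

@dataclass
class AccreditationApplication:
    organization: str
    scope_type: str                               # one of VALID_SCOPES
    scope_name: str                               # e.g., a product line name
    evidence: dict = field(default_factory=dict)  # requirement id -> evidence

    def is_complete(self, required_ids: set) -> bool:
        """Complete when the declared scope is valid and every required
        requirement has supporting evidence attached."""
        return (self.scope_type in VALID_SCOPES
                and required_ids <= set(self.evidence))

app = AccreditationApplication("ExampleCorp", "product_line", "Edge Routers")
app.evidence["REQ-01"] = "secure engineering process records"
print(app.is_complete({"REQ-01", "REQ-02"}))  # False: evidence for REQ-02 missing
```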

The Open Group has also instituted the O-TTPS Recognized Assessor Program, which assures that Recognized Assessor companies meet certain criteria as assessor organizations, and that their individual assessors meet an additional set of criteria and have passed the O-TTPS Assessor exam, before they can be assigned to an O-TTPS assessment. The Open Group will operate this program, grant O-TTPS Recognized Assessor certificates and list the qualifying organizations on a public registry of recognized assessor companies.

Efforts to increase awareness of the program

The Open Group understands that to achieve global uptake we need to reach out to other countries across the globe for market adoption, as well as to other standards groups for harmonization. The OTTF has a very active outreach and harmonization work group, and the Forum is increasingly being recognized for its efforts. A number of prominent U.S. government agencies, including the Government Accountability Office (GAO) and NASA, have recognized the standard as an important supply chain security effort. Dave Lounsbury, CTO of The Open Group, has testified before Congress on the value of this initiative from the industry-government partnership perspective. The Open Group has also met with President Obama’s Cybersecurity Coordinators (past and present) to apprise them of our work. We continue to work closely with NIST in the context of the Cybersecurity Framework, which recognizes the supply chain as a critical area for its next version, and the OTTF work is acknowledged in NIST Special Publication 800-161. We have liaisons with ISO and are working internally on mapping our standard and accreditation program to the Common Criteria. The O-TTPS has also been discussed with government agencies in China, India, Japan and the UK.

The initial version of the standard and the accreditation program are just the beginning. OTTF members will continue to evolve both, producing new versions that refine existing requirements, introduce new ones and cover additional threats. The outreach and harmonization efforts will also continue to strengthen, so that we can reach the holistic potential of Open Trusted Technology Providers™ throughout all global supply chains.

For more details on the O-TTPS Accreditation Program, to apply for accreditation, or to learn more about becoming an O-TTPS Recognized Assessor, visit the O-TTPS Accreditation page.

For more information on The Open Group Trusted Technology Forum please visit the OTTF Home Page.

The O-TTPS standard and the O-TTPS Accreditation Policy are freely available from the Trusted Technology Section of The Open Group Bookstore.

For information on joining the OTTF membership please contact Mike Hickey – m.hickey@opengroup.org

Sally Long is the Director of The Open Group Trusted Technology Forum (OTTF). She has managed customer-supplier forums and collaborative development projects for over twenty years. She was the release engineering section manager for all multi-vendor collaborative technology development projects at The Open Software Foundation (OSF) in Cambridge, Massachusetts. Following the merger of the OSF and X/Open under The Open Group, she served as director for multiple forums within The Open Group. Sally has a Bachelor of Science degree in Electrical Engineering from Northeastern University in Boston, Massachusetts.

Comments Off on New Accreditation Program – Raises the Bar for Securing Global Supply Chains

Filed under Cybersecurity, OTTF, Supply chain risk