
The Open Group Trusted Technology Forum is Leading the Way to Securing Global IT Supply Chains

By Dana Gardner, Interarbor Solutions

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on Enterprise Architecture (EA), enterprise transformation, and securing global supply chains.

We’re joined in advance by some of the main speakers at the conference to examine the latest efforts to make global supply chains for technology providers more secure, verified, and therefore trusted. We’ll examine the advancement of The Open Group Trusted Technology Forum (OTTF) to gain an update on the effort’s achievements, and to learn more about how technology suppliers and buyers can expect to benefit.

The expert panel consists of Dave Lounsbury, Chief Technical Officer at The Open Group; Dan Reddy, Senior Consultant Product Manager in the Product Security Office at EMC Corp.; Andras Szakal, Vice President and Chief Technology Officer at IBM’s U.S. Federal Group, and also the Chair of the OTTF, and Edna Conway, Chief Security Strategist for Global Supply Chain at Cisco. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why is this an important issue, and why is there a sense of urgency in the markets?

Lounsbury: The Open Group has a vision of boundaryless information flow, and that necessarily involves interoperability. But interoperability doesn’t have the effect that you want, unless you can also trust the information that you’re getting, as it flows through the system.

Therefore, it’s necessary that you be able to trust all of the links in the chain that you use to deliver your information. One thing that everybody who watches the news would acknowledge is that the threat landscape has changed. As systems become more and more interoperable, we get more and more attacks on the system.

As the value that flows through the system increases, there’s a lot more interest in cyber crime. Unfortunately, in our world, there’s now the issue of state-sponsored incursions in cyberspace, whether officially state-sponsored or not, but politically motivated ones certainly.

So there is an increasing awareness on the part of government and industry that we must protect the supply chain, both through increasing technical security measures, which are handled in lots of places, and in making sure that the vendors and consumers of components in the supply chain are using proper methodologies to make sure that there are no vulnerabilities in their components.

I’ll note that the demand we’re hearing is increasingly for work on standards in security. That’s top of everybody’s mind these days.

Reddy: One of the things that we’re addressing is the supply chain item that was part of the Comprehensive National Cybersecurity Initiative (CNCI), which spans the work of two presidents. Initiative 11 was to develop a multi-pronged approach to global supply chain risk management. That really started the conversation, especially in the federal government as to how private industry and government should work together to address the risks there.

In the OTTF, we’ve tried to create a clear, measurable way to address supply chain risk. It’s been really hard to even talk about supply chain risk, because you have to start with getting common agreement about what the supply chain is, and then talk about how to deal with risk by following best practices.

Szakal: One of the observations that I’ve made over the last couple of years is that this group of individuals, who are now part of this standards forum, have grown in their ability to collaborate, define, and rise to the challenges, and work together to solve the problem.

Standards process

Technology supply chain security and integrity are not necessarily a set of requirements or an initiative that had been taken on by standards committees or standards groups up to this point. The people who are participating in this aren’t your traditional IT standards gurus. They had to learn the standards process. They had to understand how to approach the standardization of best practices, which is how we approach solving this problem.

It’s sharing information. It’s opening up across the industry to share best practices on how to secure the supply chain and how to ensure its overall integrity. Our goal has been to develop a framework of best practices and then ultimately take those codified best practices and instantiate them into a standard, which we can then assess providers against. It’s a big effort, but I think we’re making tremendous progress.

Gardner: Because The Open Group Conference is taking place in Washington, D.C., what’s the current perception in the U.S. Government about this in terms of its role?

Szakal: The government has always taken a prominent role, at least to help focus the attention of the industry.

Now that they’ve corralled the industry and gotten us moving in the right direction, in many ways we’ve fought through many of the intricate, complex technology supply chain issues, and we’re ahead of some of the thinking of folks outside this group, because the industry lives these challenges and understands the state of the art. Some of the best minds in the industry are focused on this, and we’ve applied significant internal resources across our membership to work on this challenge.

So the government is very interested in it. We’ve had collaborations all the way from the White House across the Department of Defense (DoD) and within the Department of Homeland Security (DHS), and we have members from the government space in NASA and DoD.

It’s very much a collaborative effort, and I’m hoping that it can continue to be so and be utilized as a standard that the government can point to, instead of coming up with their own policies and practices that may actually not work as well as those defined by the industry.

Conway: Our colleagues on the public side of the public-private partnership that is addressing supply-chain integrity have recognized that we need to do it together.

More importantly, you need only listen to a statement, which I know has often been quoted but is worth noting again, from EU Commissioner Algirdas Semeta. He recently said that, in a globalized world, no country can secure the supply chain in isolation. He recognized that, again quoting, national supply chains are ineffective and too costly unless they’re supported by enhanced international cooperation.

Mindful focus

The one thing that we bring to bear here is a mindful focus on the fact that we need a public-private partnership to comprehensively address supply chain integrity in the information and communications technology industry internationally. That has been very important in our focus. We want to be a one-stop shop of best practices that the world can look to, so that we continue to benefit from commercial technology, which sells globally and is frequently built once, or on a limited basis.

Combining that international focus and the public-private partnership is something that’s really coming home to roost in everyone’s minds right now, as we see security value migrating away from an end point and looking comprehensively at the product lifecycle or the global supply chain.

Lounsbury: I had the honor of testifying before the U.S. House Energy and Commerce Subcommittee on Oversight and Investigations, on the view from within the U.S. Government on IT security.

It was very gratifying to see that the government does recognize this problem. We had witnesses in from the DoD and Department of Energy (DoE). I was there because I was one of the two voices from industry that the government wants to tap into to get the industry’s best practices into the government.

It was even more gratifying to see that the concerns raised in the hearings were exactly the ones that the OTTF is pursuing. How do you validate a long and complex global supply chain in the face of a very wide threat environment, recognizing that it can’t be done by any single country? Also, it really needs to be not a process that you apply at a single point, but a standard that raises the security bar for all the participants in your supply chain.

So it was really good to know that we were on track, and that the U.S. Government, the European governments (as we’ve heard from Edna), and I suspect all world governments are looking at exactly how to tap into this industry activity.

Gardner: Where are we in the progression of the OTTF?

Lounsbury: In the last 18 months, there has been a tremendous amount of progress. The thing I’ll highlight is that early in 2012, the OTTF published a snapshot of the standard. A snapshot is what The Open Group uses to give a preview of what we expect the standard will contain. It has fleshed out two areas, one on tainted products and one on counterfeit products: the standards and best practices needed to secure a supply chain against those two vulnerabilities.

So that’s out there. People can take a look at that document. Of course, we would welcome their feedback on it. We think other people have good answers too. Also, if they want to start using that as guidance for how they should shape their own practices, then that would be available to them.

Normative guidance

That’s the top development topic inside the OTTF itself. Of course, in parallel with that, we’re continuing to engage in an outreach process and talking to government agencies that have a stake in securing the supply chain, whether it’s part of government policy or other forms of steering the government to making sure they are making the right decisions. In terms of exactly where we are, I’ll defer to Edna and Andras on the top priority in the group.

Gardner: Edna, what’s been going on at OTTF and where do things stand?

Conway: We decided that this was, in fact, a comprehensive effort that was going to grow over time and change as the challenges change. We began by looking at two primary areas, counterfeit and taint, in the communications technology arena. In doing so, we first identified a set of best practices, which you referenced briefly inside of that snapshot.

Where we are today is adding the diligence, and extracting the knowledge and experience from the broad spectrum of participants in the OTTF, to establish a set of rigorous conformance criteria. These allow a balance between flexibility in how one goes about showing compliance with those best practices, while also assuring the end customer that there is sufficient rigor to ensure that certain requirements are met meticulously, but most importantly comprehensively.

We have a practice right now where we’re going through each and every requirement or best practice and thinking through the broad spectrum of the development stage of the lifecycle, as well as the end-to-end nodes of the supply chain itself.

This is to ensure that there are requirements that would establish conformance that could be pointed to, by both those who would seek accreditation to this international standard, as well as those who would rely on that accreditation as the imprimatur of some higher degree of trustworthiness in the products and solutions that are being afforded to them, when they select an OTTF accredited provider.

Gardner: Andras, I’m curious where in an organization like IBM these issues are most enforceable. Where within the private sector does the knowledge and the expertise reside?

Szakal: Speaking for IBM, we recently celebrated our 100th anniversary in 2011. We’ve had a little more time than some folks to come up with a robust engineering and development process, which harkens back to the IBM 701 and the beginning of the modern computing era.

Integrated process

We have what we call the integrated product development process (IPD), which all products follow, and that includes hardware and software. And we have a very robust quality assurance team, the QSE team, which ensures that folks are following the practices that are called out. Within each line of business, there exist specific requirements that apply more directly to the architecture of a particular product offering.

For example, the hardware group obviously has additional standards that it has to follow during the course of development that are specific to hardware development and the associated supply chain, and that is true of the software team as well.

The product development teams are integrated with the supply chain folks, and we have what we call the Secure Engineering Framework, of which I was an author, and the Secure Engineering Initiative, which we have continued to evolve for quite some time now, to ensure that we are effectively engineering and sourcing components, and that we’re following these Open Trusted Technology Provider Standard (O-TTPS) best practices.

In fact, the work that we’ve done here in the OTTF has helped to ensure that we’re focused in all of the same areas that Edna’s team is with Cisco, because we’ve shared our best practices across all of the members here in the OTTF, and it gives us a great view into what others are doing, and helps us ensure that we’re following the most effective industry best practices.

Gardner: Dan, at EMC, is the Product Security Office something similar to what Andras explained for how IBM operates? Perhaps you could just give us a sense of how it’s done there?

Reddy: At EMC, our Product Security Office houses the enabling expertise to define how to build products securely. We’re interested in building that in as soon as possible, throughout the entire lifecycle. We work with all of our product teams to measure where they are and to help them define their path forward, as they look at each of the releases of their products. And we’ve done a lot of work in sharing our practices within the industry.

One of the things this standard does for us, especially in the area of dealing with the supply chain, is give us a way to communicate what our practices are to our customers. Customers are looking for that kind of assurance, and rather than having one-by-one conversations with customers about what our practices are for a particular organization, this would allow us to demonstrate measurement and conformance against a standard to our own customers.

Also, as we flip it around and take a look at our own suppliers, we want to be able to encourage suppliers, which may be small suppliers, to conform to a standard, as we go and select who will be our authorized suppliers.

Gardner: Dave, what would you suggest for those various suppliers around the globe to begin the process?

Publications catalog

Lounsbury: Obviously, the thing I would recommend right off is to go to The Open Group website, go to the publications catalog, and download the snapshot of the OTTF standard. That gives a good overview of the two areas of best practices, for protection from tainted and counterfeit products, that we’ve mentioned on the call here.

That’s the starting point, but of course, the reason it’s very important for the commercial world to lead this is that commercial vendors face commercial market pressures and have to respond to threats quickly. So the other part of this is how to stay involved and how to stay up to date.

And of course, the two ways that The Open Group offers to let people do that are to come to our quarterly conferences, where we do regular presentations on this topic. In fact, the Washington meeting is themed on supply chain security.

Of course, the best way to do it is to actually be in the room as these standards evolve to meet the current and changing threat environment. So joining The Open Group and joining the OTTF is absolutely the best way to be on the cutting edge of what’s happening, and to take advantage of the great information you get from the companies represented on this call, who have invested years and years, as Andras said, in developing their own best practices and learning from them.

Gardner: Edna, what’s on the short list of next OTTF priorities?

Conway: You’ve heard us talk about CNCI, and the fact that cybersecurity is on everyone’s minds today. So while taint embodies that to some degree, we probably need to think about partnering in a more comprehensive way under the resiliency and risk umbrella that you heard Dan talk about and really think about embedding security into a resilient supply chain or a resilient enterprise approach.

In fact, to give that some forethought, we have invited to the upcoming conference a colleague with whom I’ve worked for a number of years, a leading expert in enterprise resiliency and supply chain resiliency, to join us and share his thoughts.

He is a professor at MIT, and his name is Yossi Sheffi. Dr. Sheffi will be with us. It’s from that kind of information sharing, as we think in a more comprehensive way, that we begin to gather the expertise that not only resides today globally in different pockets, whether it be academia, government, or private enterprise, but also to think about what the next generation is going to look like.

Resiliency, as it was known five years ago, is nothing like supply chain resiliency today, and where we want to take it into the future. You need only look at the U.S. National Strategy for Global Supply Chain Security to understand that. When it was announced in January of this year at Davos by DHS Secretary Napolitano, she made it quite clear that we’re now putting security at the forefront, and resiliency is a part of that security endeavor.

So that mindset is a change, given the reliance ubiquitously on communications, for everything, everywhere, at all times — not only critical infrastructure, but private enterprise, as well as all of us on a daily basis today. Our communications infrastructure is essential to us.

Thinking about resiliency

Given that security has taken top ranking, we’re probably at the beginning of this stage of thinking about resiliency. It’s not just about continuity of supply, and not just about prevention of the kinds of cyber incidents that we’re worried about, but also about being cognizant of the nation-state or personal concerns that arise from parties engaging in malicious activity, whether for political, religious, or other reasons.

Or, as you know, some of them are just interested in seeing whether or not they can challenge the system, and that causes loss of productivity and a loss of time. In some cases, there are devastating negative impacts to infrastructure.

Szakal: There’s another area, too, that I am highly focused on but have kind of set aside, and that’s the continued development and formalization of the framework itself: continuing to collect best practices from the industry, and providing some method by which vendors can submit and externalize those best practices. So those are a couple of areas that I think would keep me busy for the next 12 months easily.

Gardner: What do IT vendor companies gain if they do this properly?

Secure by Design

Szakal: Especially now, in this day and age, any time you approach security as part of the lifecycle, what we at IBM call Secure by Design, you’re going to be ahead of the market in some ways. You’re going to be in a better place. All of the best practices that we’ve defined are additive in effect. However, the very nature of technology as it exists today means it will probably be another 50 or so years before we see a perfect security paradigm in the way that we all think about it.

So the researchers are going to be ahead of all of the providers in many ways in identifying security flaws and helping us to remediate those practices. That’s part of what we’re doing here, trying to make sure that we continue to keep these practices up to date and relevant to the entire lifecycle of commercial off-the-shelf technology (COTS) development.

So that’s important, but you also have to be realistic about the best practices as they exist today. The bar is going to move as we address future challenges.

************

For more information on The Open Group’s upcoming conference in Washington, D.C., please visit: http://www.opengroup.org/dc2012

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.


Learn How Enterprise Architects Can Better Relate TOGAF and DoDAF to Bring Best IT Practices to Defense Contracts

By Dana Gardner, Interarbor Solutions

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on Enterprise Architecture (EA), enterprise transformation, and securing global supply chains.

We’re joined by one of the main speakers at the July 16 conference, Chris Armstrong, President of Armstrong Process Group, to examine how governments in particular are using various frameworks to improve their architectural planning and IT implementations.

Armstrong is an internationally recognized thought leader in EA, formal modeling, process improvement, systems and software engineering, requirements management, and iterative and agile development.

He represents the Armstrong Process Group at The Open Group, the Object Management Group (OMG), and the Eclipse Foundation. Armstrong also co-chairs The Open Group Architecture Framework (TOGAF®) and Model Driven Architecture (MDA) process modeling efforts, as well as the TOGAF 9 Tool Certification program, all at The Open Group.

At the conference, Armstrong will examine the use of TOGAF 9 to deliver Department of Defense (DoD) Architecture Framework or DoDAF 2 capabilities. And in doing so, we’ll discuss how to use TOGAF architecture development methods to drive the development and use of DoDAF 2 architectures for delivering new mission and program capabilities. His presentation will also be Livestreamed free from The Open Group Conference. The full podcast can be found here.

Here are some excerpts:

Gardner: TOGAF and DoDAF, where have they been? Where are they going? And why do they need to relate to one another more these days?

Armstrong: TOGAF [forms] a set of essential components for establishing and operating an EA capability within an organization. And it contains three of the four key components of any EA.

First, there’s the method by which EA work is done, including how it touches other lifecycles within the organization and how it’s governed and managed. Then, there’s a skills framework that talks about the skills and experience that individual practitioners must have in order to participate in the EA work. Then, there’s a taxonomy framework that describes the semantics and form of the deliverables and the knowledge that the EA function is trying to manage.

One-stop shop

One of the great things that TOGAF has going for it is that, on the one hand, it’s designed to be a one-stop shop, namely providing everything that an end-user organization might need to establish an EA practice. But it does acknowledge that there are other components, predominantly the various taxonomies and reference models, that various end-user organizations may want to substitute or augment.

It turns out that TOGAF has a nice synergy with other taxonomies, such as DoDAF, as it provides the backdrop for how to establish the overall EA capability, how to exploit it, and put it into practice to deliver new business capabilities.

Frameworks, such as DoDAF, focus predominantly on the taxonomy, mainly the kinds of things we’re keeping track of, the semantics relationships, and perhaps some formalism on how they’re structured. There’s a little bit of method guidance within DoDAF, but not a lot. So we see the marriage of the two as a natural synergy.

Gardner: So their complementary natures allow for more particulars on the defense side, while TOGAF overall looks at the implementation method and the skills for how this works best. Is this something new, or are we just learning to do it better?

Armstrong: I think we’re seeing the state of the industry advance, and we’re looking at having governments, both in the United States and abroad, embrace global industry standards for EA work. Historically, particularly in the U.S. Government, a lot of defense agencies and their contractors have often focused on a minimalistic compliance perspective with respect to DoDAF: in order to get paid for this work, or be authorized to do it, one of the requirements is that we must produce DoDAF.

People are doing that because they’ve been commanded to do it. We’re seeing a new level of awareness. There’s some synergy with what’s going on in the DoDAF space, particularly as it relates to migrating from DoDAF 1.5 to DoDAF 2.

Agencies need some method and technique guidance on exactly how to come up with the particular viewpoints that are going to be most relevant, and how to exploit what DoDAF has to offer in a way that advances the business, as opposed to solely being conformant or compliant.

Gardner: Have there been hurdles, perhaps cultural ones, because of the landscape of these different companies and their inability to have that boundaryless interaction? What’s been the hurdle? What’s prevented this from being more beneficial at that higher level?

Armstrong: Probably overall organizational and practitioner maturity. There certainly are a lot of very skilled organizations and individuals out there. However, we’re trying to get them all lined up with the best practice for establishing an EA capability, and then operating it and using it to business strategic advantage, something that TOGAF defines very nicely and which the DoDAF taxonomy and work products fold into very effectively.

Gardner: Help me understand, Chris. Is the discussion that you’ll be delivering on July 16 primarily for TOGAF people to better understand how to implement vis-à-vis DoDAF? Is it the other direction? Or is it a two-way street?

Two-way street

Armstrong: It’s a two-way street. One of the big things that particularly the DoD space has going for it is that there’s quite a bit of maturity in the notion of formally specified models, as DoDAF describes them, and the various views that DoDAF includes.

We’d like to think that, because of that maturity, the general TOGAF community can glean a lot of benefit from the experience they’ve had: what it takes to capture these architecture descriptions, and some of the finer points about managing those assets. People within the general TOGAF community are always looking for case studies and best practices that demonstrate that what other people are doing is something they can do as well.

We also think that the federal agency community has a lot to glean from this. Again, we’re trying to get some convergence on standard methods and techniques, so that agencies can more easily have resources join their teams and immediately be productive and add value to their projects, because they’re all based on a standard EA method and framework.

One of the major changes between DoDAF 1 and DoDAF 2 is the focus on fitness for purpose. In the past, a lot of organizations felt that it was their obligation to describe all the architecture viewpoints that DoDAF suggests, without necessarily taking a step back and asking, “Why would I want to do that?”

So it’s trying to make the agencies think more critically about how they can be the most agile: namely, what’s the least amount of architecture description we can invest in that has the greatest possible value? Organizations now have the discretion to determine what fitness for purpose is.

Then, there’s the whole idea in DoDAF 2 that the architecture is supposed to be capability-driven. That is, you’re not just describing architecture because you have some tools that happen to be DoDAF-conforming; there is a new business capability that you’re trying to inject into the organization through capability-based transformation, which is going to involve people, process, and tools.

One of the nice things that TOGAF’s architecture development method has to offer is a well-defined set of activities and best practices for deciding how you determine what those capabilities are and how you engage your stakeholders to really help collect the requirements for what fit for purpose means.

Gardner: As with the private sector, it seems that everyone needs to move faster. I see you’ve been working on agile development. With organizations like the OMG and Eclipse, does doing this well, bringing the best of TOGAF and DoDAF together, enable greater agility and speed when it comes to completing a project?

Different perspectives

Armstrong: Absolutely. When you talk about what agile means to the general community, you may get a lot of different perspectives and a lot of different answers. Ultimately, we at APG feel that agility is fundamentally about how well your organization responds to change.

If you take a step back, that’s really what we think is the fundamental litmus test of the goodness of an architecture. Whether it’s an EA, a segment architecture, or a system architecture, the architects need to think carefully about what things are almost certainly going to happen in the near future. I need to anticipate them and be able to work them into my architecture in such a way that when those changes occur, the architecture can respond in a timely, relevant fashion.

We feel that, while a lot of people think agile is just a pseudonym for not planning, not making commitments, and going around in circles forever (we call that chaos, another five-letter word), agile in our experience really demands rigor and discipline.

Of course, a lot of the culture of the DoD brings that rigor and discipline, but so does the experience that community has had, in particular, with formally modeling architecture descriptions. That sets up those government agencies to act agilely much more readily than others.

Gardner: Do you know of anyone that has done it successfully or is in the process? Even if you can’t name them, perhaps you can describe how something like this works?

Armstrong: First, there has been some great work done by the MITRE organization through its collaboration with The Open Group. They’ve written a white paper that talks about which DoDAF deliverables are likely to be useful in specific architecture development method activities. We’re going to be using that as a foundation for the talk we’ll be giving at the conference in July.

The biggest thing that TOGAF has to offer is that a nascent organization that’s jumping into the DoDAF space may just look at it from an initial compliance perspective, saying, “We have to create an AV-1, and an OV-1, and a SvcV-5,” and so on.

Providing guidance

TOGAF will provide the guidance for what is EA. Why should I care? What kind of people do I need within my organization? What kind of skills do they need? What kind of professional certification might be appropriate to get all of the participants up on the same page, so that when we’re talking about EA, we’re all using the same language?

TOGAF also, of course, has a great emphasis on architecture governance and suggests that immediately, when you’re first propping up your EA capability, you need to put into your plan how you’re going to operate and maintain these architectural assets, once they’ve been produced, so that you can exploit them in some reuse strategy moving forward.

So, the preliminary phase of the TOGAF architecture development method provides those agencies best practices on how to get going with EA, including exactly how an organization is going to exploit what the DoDAF taxonomy framework has to offer.

Then, once an organization or a contractor is charged with doing some DoDAF work, because of a new program or a new capability, they would immediately begin executing Phase A: Architecture Vision, and follow the best practices that TOGAF has to offer.

Just what is that capability that we’re trying to describe? Who are the key stakeholders, and what are their concerns? What are their business objectives and requirements? What constraints are we going to be placed under?

Part of that is to create a high-level description of the current, or baseline, architecture and of the future target state, so that all parties have at least a coarse-grained idea of where we are right now and what our vision is of where we want to be.

Because this is really a high-level requirements and scoping set of activities, we expect that it's going to be somewhat ambiguous. As the project unfolds, the team will discover details that may cause some adjustment to that final target.

Internalize best practices

So, we're seeing defense contractors internalize some of these best practices and really prepare for the future, so that they can win the greatest amount of business, respond as rapidly and appropriately as possible, and exploit these best practices to effect greater business transformation across their enterprises.

Gardner: We mentioned that your discussion of these issues on July 16 will be livestreamed for free, but you're also doing some pre-conference and post-conference activities, such as webinars. Tell us how this is all coming together and, for those who are interested, how they can take advantage of all of it.

Armstrong: We're certainly very privileged that The Open Group has offered us this opportunity to share this content with the community. On Monday, June 25, we'll be delivering a webinar that focuses on architecture change management in the DoDAF space, particularly how an organization migrates from DoDAF 1 to DoDAF 2.

I’ll be joined by a couple of other people from APG, David Rice, one of our Principal Enterprise Architects who is a member of the DoDAF 2 Working Group, as well as J.D. Baker, who is the Co-chair of the OMG’s Analysis and Design Taskforce, and a member of the Unified Profile for DoDAF and MODAF (UPDM) work group, a specification from the OMG.

We’ll be talking about things that organizations need to think about as they migrate from DoDAF 1 to DoDAF 2. We’ll be focusing on some of the key points of the DoDAF 2 meta-model, namely the rearrangement of the architecture viewpoints and the architecture partitions and how that maps from the classical DoDAF 1.5 viewpoint, as well as focusing on this notion of capability-driven architectures and fitness for purpose.

We also have the great privilege after the conference to be delivering a follow-up webinar on implementation methods and techniques around advanced DoDAF architectures. Particularly, we’re going to take a closer look at something that some people may be interested in, namely tool interoperability and how the DoDAF meta-model offers that through what’s called the Physical Exchange Specification (PES).

We’ll be taking a look a little bit more closely at this UPDM thing I just mentioned, focusing on how we can use formal modeling languages based on OMG standards, such as UML, SysML, BPMN, and SoaML, to do very formal architectural modeling.

One of the big challenges with EA is, at the end of the day, EA comes up with a set of policies, principles, assets, and best practices that talk about how the organization needs to operate and realize new solutions within that new framework. If EA doesn’t have a hand-off to the delivery method, namely systems engineering and solution delivery, then none of this architecture stuff makes a bit of a difference.

Driving the realization

We're going to be talking a little bit about how DoDAF-based architecture descriptions and TOGAF would drive the realization of those capabilities through traditional systems engineering and software development methods.

************

For more information on The Open Group’s upcoming conference in Washington, D.C., please visit: http://www.opengroup.org/dc2012

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.


Filed under Conference, Enterprise Architecture, Enterprise Transformation, TOGAF®

Corporate Data, Supply Chains Remain Vulnerable to Cyber Crime Attacks, Says Open Group Conference Speaker

By Dana Gardner, Interarbor Solutions 

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on how security impacts the Enterprise Architecture, enterprise transformation, and global supply chain activities in organizations, both large and small.

We're now joined on the security front by one of the main speakers at the conference, Joel Brenner, the author of America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare.

Joel is a former Senior Counsel at the National Security Agency (NSA), where he advised on legal and policy issues relating to network security. Mr. Brenner currently practices law in Washington at Cooley LLP, specializing in cyber security. Registration remains open for The Open Group Conference in Washington, DC beginning July 16.

Previously, he served as the National Counterintelligence Executive in the Office of the Director of National Intelligence, and as the NSA’s Inspector General. He is a graduate of University of Wisconsin–Madison, the London School of Economics, and Harvard Law School. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Your book came out last September and it affirmed this notion that the United States, or at least open Western cultures and societies, are particularly vulnerable to being infiltrated, if you will, from cybercrime, espionage, and dirty corporate tricks.

Why are we particularly vulnerable, when we should be most adept at using cyber activities to our advantage?

Brenner: Let's make a distinction here between the political-military espionage that's gone on since pre-biblical times and the economic espionage that's going on now and, in many cases, has nothing at all to do with military, defense, or political issues.

The other stuff has been going on forever, but what we’ve seen in the last 15 or so years is a relentless espionage attack on private companies for reasons having nothing to do with political-military affairs or defense.

So the countries that are adept at cyber, but whose economies are relatively undeveloped compared to ours, are at a big advantage, because they’re not very lucrative targets for this kind of thing, and we are. Russia, for example, is paradoxical. While it has one of the most educated populations in the world and is deeply cultured, it has never been able to produce a commercially viable computer chip.

Not entrepreneurial

We’re not going to Russia to steal advanced technology. We’re not going to China to steal advanced technology. They’re good at engineering and they’re good at production, but so far, they have not been good at making themselves into an entrepreneurial culture.

That’s one just very cynical reason why we don’t do economic espionage against the people who are mainly attacking us, which are China, Russia, and Iran. I say attack in the espionage sense.

The other reason is that you’re stealing intellectual property when you’re doing economic espionage. It’s a bedrock proposition of American economics and political strategy around the world to defend the legal regime that protects intellectual property. So we don’t do that kind of espionage. Political-military stuff we’re real good at.

Gardner: Wouldn’t our defense rise to the occasion? Why hasn’t it?

Brenner: The answer has a lot to do with the nature of the Internet and its history. The Internet, as some of your listeners will know, was developed starting in the late ’60s by the predecessor of the Defense Advanced Research Projects Agency (DARPA), a brilliant operation which produced a lot of cool science over the years.

It was developed for a very limited purpose, to allow the collaboration of geographically dispersed scientists who worked under contract in various universities with the Defense Department’s own scientists. It was bringing dispersed brainpower to bear.

It was a brilliant idea, and the people who invented this, if you talk to them today, lament the fact that they didn’t build a security layer into it. They thought about it. But it wasn’t going to be used for anything else but this limited purpose in a trusted environment, so why go to the expense and aggravation of building a lot of security into it?

Until 1992, it was against the law to use the Internet for commercial purposes. Dana, this is just amazing to realize. That’s 20 years ago, a twinkling of an eye in the history of a country’s commerce. That means that 20 years ago, nobody was doing anything commercial on the Internet. Ten years ago, what were you doing on the Internet, Dana? Buying a book for the first time or something like that? That’s what I was doing, and a newspaper.

In the intervening decade, we've turned this sort of Swiss-cheese, cool network, which has brought us dramatic productivity and pleasure, into the backbone of virtually everything we do.

International finance, personal finance, command and control of military, manufacturing controls, the controls in our critical infrastructure, all of our communications, virtually all of our activities are either on the Internet or exposed to the Internet. And it’s the same Internet that was Swiss cheese 20 years ago and it’s Swiss cheese now. It’s easy to spoof identities on it.

So this gives a natural and profound advantage to attack on this network over defense. That’s why we’re in the predicament we’re in.

Both directions

Gardner: Let’s also look at this notion of supply chain, because corporations aren’t just islands unto themselves. A business is really a compendium of other businesses, products, services, best practices, methodologies, and intellectual property that come together to create a value add of some kind. It’s not just attacking the end point, where that value is extended into the market. It’s perhaps attacking anywhere along that value chain.

What are the implications for this notion of the ecosystem vulnerability versus the enterprise vulnerability?

Brenner: Well, the supply chain problem really is rather daunting for many businesses, because supply chains are global now, and it means that finished products have a tremendous number of elements. For example, this software: where was it written? Maybe it was written in Russia — or maybe somewhere in Ohio or in Nevada, but by whom? We don't know.

There are two fundamentally different issues for the supply chain, depending on the company. One is counterfeiting. That's a bad problem. Somebody is trying to substitute shoddy goods under your name or the name of somebody you thought you could trust. That degrades performance and presents serious liability problems as a result.

The other problem is the intentional hooking, or compromising, of software or chips to do things that they’re not meant to do, such as allow backdoors and so on in systems, so that they can be attacked later. That’s a big problem for military and for the intelligence services all around the world.

The reason we have the problem is that nobody knows how to vet a computer chip or software to see that it won't do these squirrelly things. We can test that stuff to make sure it will do what it's supposed to do, but nobody knows how to test a computer chip or two million lines of software reliably to be sure that it won't also do certain things we don't want it to do.

You can put it in a sandbox or a virtual environment and you can test it for a lot of things, but you can't test it for everything. It's just impossible. In hardware and software, it is the strategic supply chain problem now. That's why we have it.
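A toy sketch (not from the interview, and with entirely hypothetical names) illustrates why black-box testing can't rule out hidden behavior: a component can pass every functional test you sample while still carrying a trigger-activated backdoor.

```python
import random

def checksum(data: bytes) -> int:
    """Looks like an honest checksum routine to any functional test..."""
    if data == b"\xde\xad\xbe\xef\x13\x37":   # hidden trigger value
        return -1                              # covert misbehavior on one rare input
    return sum(data) % 65536

# A test suite that samples the input space essentially never finds the
# trigger: 100,000 random 6-byte inputs out of 256**6 possibilities.
random.seed(0)
for _ in range(100_000):
    sample = bytes(random.randrange(256) for _ in range(6))
    assert checksum(sample) == sum(sample) % 65536  # passes for every sampled input
```

This is the "sandbox" limitation in miniature: testing confirms what the code does on the inputs you try, not what it refrains from doing on the inputs you don't.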

If you have a worldwide supply chain, you have to have a worldwide supply chain management system. This is hard and it means getting very specific. It includes not only managing a production process, but also the shipment process. A lot of squirrelly things happen on loading docks, and you have to have a way not to bring perfect security to that — that’s impossible — but to make it really harder to attack your supply chain.

Notion of cost

Gardner: So many organizations today, given the economy and the lagging growth, have looked to lowest cost procedures, processes, suppliers, materials, and aren’t factoring in the risk and the associated cost around these security issues. Do people need to reevaluate cost in the supply chain by factoring in what the true risks are that we’re discussing?

Brenner: Yes, but of course, when the CEO and the CFO get together and start to figure this stuff out, they look at the return on investment (ROI) of additional security. It’s very hard to be quantitatively persuasive about that. That’s one reason why you may see some kinds of production coming back into the United States. How one evaluates that risk depends on the business you’re in and how much risk you can tolerate.

This is a problem not just for really sensitive hardware and software, special kinds of operations, or sensitive activities, but also for garden-variety things.

Gardner: We’ve seen other aspects of commerce in which we can’t lock down the process. We can’t know all the information, but what we can do is offer deterrence, perhaps in the form of legal recourse, if something goes wrong, if in fact, decisions were made that countered the contracts or were against certain laws or trade practices.

Brenner: For a couple of years now, I’ve struggled with the question why it is that liability hasn’t played a bigger role in bringing more cyber security to our environment, and there are a number of reasons.

We’ve created liability for the loss of personal information, so you can quantify that risk. You have a statute that says there’s a minimum damage of $500 or $1,000 per person whose identifiable information you lose. You add up the number of files in the breach and how much the lawyers and the forensic guys cost and you come up with a calculation of what these things cost.

But when it comes to the loss of intellectual property by a company that depends on that intellectual property, you have a business risk, not a legal risk. You don't have much of a legal risk at this point.

You may have a shareholder suit issue, but there hasn’t been an awful lot of that kind of litigation so far. So I don’t know. I’m not sure that’s quite the question you were asking me, Dana.

Gardner: My follow on to that was going to be where would you go to sue across borders anyway? Is there an über-regulatory or legal structure across borders to target things like supply chain, counterfeit, cyber espionage, or mistreatment of business practice?

Depends on the borders

Brenner: It depends on the borders you’re talking about. The Europeans have a highly developed legal and liability system. You can bring actions in European courts. So it depends what borders you mean.

If you’re talking about the border of Russia, you have very different legal issues. China has different legal issues, different from Russia, as well from Iran. There are an increasing number of cases where actions are being brought in China successfully for breaches of intellectual property rights. But you wouldn’t say that was the case in Nigeria. You wouldn’t say that was the case in a number of other countries where we’ve had a lot of cybercrime originating from.

So there’s no one solution here. You have to think in terms of all kinds of layered defenses. There are legal actions you can take sometimes, but the fundamental problem we’re dealing with is this inherently porous Swiss-cheesy system. In the long run, we’re going to have to begin thinking about the gradual reengineering of the way the Internet works, or else this basic dynamic, in which lawbreakers have advantage over law-abiding people, is not going to go away.

Think about what’s happened in cyber defenses over the last 10 years and how little they’ve evolved — even 20 years for that matter. They almost all require us to know the attack mode or the sequence of code in order to catch it. And we get better at that, but that’s a leapfrog business. That’s fundamentally the way we do it.

Whether we do it at the perimeter, inside, or even outside before the attack gets to the perimeter, that’s what we’re looking for — stuff we’ve already seen. That’s a very poor strategy for doing security, but that’s where we are. It hasn’t changed much in quite a long time and it’s probably not going to.

Gardner: Why is that the case? Is this not a perfect opportunity for a business-government partnership to come together and re-architect the Internet at least for certain types of business activities, permit a two-tier approach, and add different levels of security into that? Why hasn’t it gone anywhere?

Brenner: What I think you’re saying is different tiers or segments. We’re talking about the Balkanization of the Internet. I think that’s going to happen as more companies demand a higher level of protection, but this again is a cost-benefit analysis. You’re going to see even more Balkanization of the Internet as you see countries like Russia and China, with some success, imposing more controls over what can be said and done on the Internet. That’s not going to be acceptable to us.

Gardner: We’ve seen a lot with Cloud Computing and more businesses starting to go to third-party Cloud providers for their applications, services, data storage, even integration to other business services and so forth.

More secure

If there's a limited number, or at least a finite number, of Cloud providers, and they can institute the proper security and take advantage of certain networks within networks, then wouldn't that hypothetically make a Cloud approach more secure and better managed than the every-man-for-himself situation we have now in enterprises and small to medium-sized businesses (SMBs)?

Brenner: I think the short answer is yes. The SMBs will achieve greater security by basically contracting it out to what are called Cloud providers. That's because managing the patching of vulnerabilities, encryption, and other aspects is beyond what most small businesses and many medium-sized businesses can do, are willing to do, or can do cost-effectively.

For big businesses in the Cloud, it just depends on how good the big businesses’ own management of IT is as to whether it’s an improvement or not. But there are some problems with the Cloud.

People talk about security, but there are different aspects of it. You and I have been talking just now about security meaning the ability to prevent somebody from stealing or corrupting your information. But availability is another aspect of security. By definition, putting everything in one remote place reduces robustness, because if you lose that connection, you lose everything.

Consequently, it seems to me that backup issues are really critical for people who are going to the Cloud. Are you going to rely on your Cloud provider to provide the backup? Are you going to rely on the Cloud provider to provide all of your backup? Are you going to go to a second Cloud provider? Are you going to keep some information copied in-house?

What would happen if your information is good, but you can’t get to it? That means you can’t get to anything anymore. So that’s another aspect of security people need to think through.
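The backup questions Brenner raises can be sketched as a simple redundancy pattern. The provider classes below are hypothetical stand-ins, not real cloud SDKs; the point is only that availability survives as long as any one independent copy remains reachable.

```python
from dataclasses import dataclass, field

@dataclass
class BackupTarget:
    """Hypothetical storage target: a cloud provider or an in-house copy."""
    name: str
    available: bool = True
    store: dict = field(default_factory=dict)

    def put(self, key: str, blob: bytes):
        self.store[key] = blob

    def get(self, key: str) -> bytes:
        if not self.available:
            raise ConnectionError(self.name + " unreachable")
        return self.store[key]

def backup(key: str, blob: bytes, targets):
    """Write the same object to every independent target."""
    for t in targets:
        t.put(key, blob)

def restore(key: str, targets):
    """Fall back across targets; data is lost only if ALL copies are down."""
    for t in targets:
        try:
            return t.get(key)
        except (ConnectionError, KeyError):
            continue
    raise RuntimeError("no reachable copy of " + key)

primary   = BackupTarget("cloud-provider-a")
secondary = BackupTarget("cloud-provider-b")
local     = BackupTarget("on-premises")

backup("ledger.db", b"...data...", [primary, secondary, local])
primary.available = False   # the outage scenario described above
print(restore("ledger.db", [primary, secondary, local]))  # still recoverable
```

Relying on one provider for both the data and its backup collapses the fallback chain to a single link, which is exactly the robustness problem described above.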

Gardner: How do you know you’re doing the right thing? How do you know that you’re protecting? How do you know that you’ve gone far enough to ameliorate the risk?

Brenner: This is really hard. If somebody steals your car tonight, Dana, you go out to the curb or the garage in the morning, and you know it’s not there. You know it’s been stolen.

When somebody steals your algorithms, your formulas, or your secret processes, you’ve still got them. You don’t know they’re gone, until three or four years later, when somebody in Central China or Siberia is opening a factory and selling stuff into your market that you thought you were going to be selling — and that’s your stuff. Then maybe you go back and realize, “Oh, that incident three or four years ago, maybe that’s when that happened, maybe that’s when I lost it.”

What’s going out

So you don’t even know necessarily when things have been stolen. Most companies don’t do a good job. They’re so busy trying to find out what’s coming into their network, they’re not looking at what’s going out.

That’s one reason the stuff is hard to measure. Another is that ROI is very tough. On the other hand, there are lots of things where business people have to make important judgments in the face of risks and opportunities they can’t quantify, but we do it.

We’re right to want data whenever we can get it, because data generally means we can make better decisions. But we make decisions about investment in R&D all the time without knowing what the ROI is going to be and we certainly don’t know what the return on a particular R&D expenditure is going to be. But we make that, because people are convinced that if they don’t make it, they’ll fall behind and they’ll be selling yesterday’s products tomorrow.

Why is it that we have a bias toward that kind of risk, when it comes to opportunity, but not when it comes to defense? I think we need to be candid about our own biases in that regard, but I don’t have a satisfactory answer to your question, and nobody else does either. This is one where we can’t quantify that answer.

Gardner: It sounds as if people need to have a healthy dose of paranoia to tide them over across these areas. Is that a fair assessment?

Brenner: Well, let’s say skepticism. People need to understand, without actually being paranoid, that life is not always what it seems. There are people who are trying to steal things from us all the time, and we need to protect ourselves.

In many companies, you don’t see a willingness to do that, but that varies a great deal from company to company. Things are not always what they seem. That is not how we Americans approach life. We are trusting folks, which is why this is a great country to do business in and live in. But we’re having our pockets picked and it’s time we understood that.

Gardner: And, as we pointed out earlier, this picking of pockets is not just on our block, but could be any of our suppliers, partners, or other players in our ecosystem. If their pockets get picked, it ends up being our problem too.

Brenner: Yeah, I described this risk in my book, America the Vulnerable, at great length, and in my practice here at Cooley, I deal with this every day. I find myself, Dana, giving briefings to businesspeople that 5, 10, or 20 years ago, you wouldn't have given to anybody who wasn't a diplomat or a military person going outside the country. Now this kind of cyber pilferage is an aspect of daily commercial life, I'm sorry to say.

************



Filed under Cloud, Cybersecurity, Supply chain risk

Open Group Security Gurus Dissect the Cloud: Higher or Lower Risk?

By Dana Gardner, Interarbor Solutions

For some, any move to the Cloud — at least the public Cloud — means a higher risk for security.

For others, relying more on a public Cloud provider means better security. There’s more of a concentrated and comprehensive focus on security best practices that are perhaps better implemented and monitored centrally in the major public Clouds.

And so which is it? Is Cloud a positive or negative when it comes to cyber security? And what of hybrid models that combine public and private Cloud activities, how is security impacted in those cases?

We posed these and other questions to a panel of security experts at last week’s Open Group Conference in San Francisco to deeply examine how Cloud and security come together — for better or worse.

The panel: Jim Hietala, Vice President of Security for The Open Group; Stuart Boardman, Senior Business Consultant at KPN, where he co-leads the Enterprise Architecture Practice as well as the Cloud Computing Solutions Group; Dave Gilmour, an Associate at Metaplexity Associates and a Director at PreterLex Ltd., and Mary Ann Mezzapelle, Strategist for Enterprise Services and Chief Technologist for Security Services at HP.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Is this notion of going outside the firewall fundamentally a good or bad thing when it comes to security?

Hietala: It can be either. Talking to security people in large companies, frequently what I hear is that with adoption of some of those services, their policy is either let’s try and block that until we get a grip on how to do it right, or let’s establish a policy that says we just don’t use certain kinds of Cloud services. Data I see says that that’s really a failed strategy. Adoption is happening whether they embrace it or not.

The real issue is how you do that in a planned, strategic way, as opposed to letting services like Dropbox and other kinds of Cloud collaboration services just happen. So it's really about applying some forethought to how we do this the right way, picking the right services that meet your security objectives, and going from there.

Gardner: Is Cloud Computing good or bad for security purposes?

Boardman: It’s simply a fact, and it’s something that we need to learn to live with.

What I’ve noticed through my own work is a lot of enterprise security policies were written before we had Cloud, but when we had private web applications that you might call Cloud these days, and the policies tend to be directed toward staff’s private use of the Cloud.

Then you run into problems, because you read something in policy — and if you interpret that as meaning Cloud, it means you can’t do it. And if you say it’s not Cloud, then you haven’t got any policy about it at all. Enterprises need to sit down and think, “What would it mean to us to make use of Cloud services and to ask as well, what are we likely to do with Cloud services?”

Gardner: Dave, is there an added impetus for Cloud providers to be somewhat more secure than enterprises?

Gilmour: It depends on the enterprise that they’re actually supplying to. If you’re in a heavily regulated industry, you have a different view of what levels of security you need and want, and therefore what you’re going to impose contractually on your Cloud supplier. That means that the different Cloud suppliers are going to have to attack different industries with different levels of security arrangements.

The problem there is that the penalty regimes are always going to say, "Well, if the security lapses, you're going to get off with two months of not paying" or something like that. That kind of attitude isn't going to work with this kind of security.

What I don’t understand is exactly how secure Cloud provision is going to be enabled and governed under tight regimes like that.

An opportunity

Gardner: Jim, we’ve seen in the public sector that governments are recognizing that Cloud models could be a benefit to them. They can reduce redundancy. They can control and standardize. They’re putting in place some definitions, implementation standards, and so forth. Is the vanguard of correct Cloud Computing with security in mind being managed by governments at this point?

Hietala: I’d say that they’re at the forefront. Some of these shared government services, where they stand up Cloud and make it available to lots of different departments in a government, have the ability to do what they want from a security standpoint, not relying on a public provider, and get it right from their perspective and meet their requirements. They then take that consistent service out to lots of departments that may not have had the resources to get IT security right, when they were doing it themselves. So I think you can make a case for that.

Gardner: Stuart, being involved with standards activities yourself, does moving to the Cloud provide a better environment for managing, maintaining, instilling, and improving on standards than enterprise by enterprise by enterprise? As I say, we’re looking at a larger pool and therefore that strikes me as possibly being a better place to invoke and manage standards.

Boardman: Dana, that’s a really good point, and I do agree. Also, in the security field, we have an advantage in the sense that there are quite a lot of standards out there to deal with interoperability, exchange of policy, exchange of credentials, which we can use. If we adopt those, then we’ve got a much better chance of getting those standards used widely in the Cloud world than in an individual enterprise, with an individual supplier, where it’s not negotiation, but “you use my API, and it looks like this.”

Having said that, there are a lot of well-known Cloud providers who do not currently support those standards and they need a strong commercial reason to do it. So it’s going to be a question of the balance. Will we get enough specific weight of people who are using it to force the others to come on board? And I have no idea what the answer to that is.

Gardner: We’ve also seen that cooperation is an important aspect of security, knowing what’s going on on other people’s networks, being able to share information about what the threats are, remediation, working to move quickly and comprehensively when there are security issues across different networks.

Is that a case, Dave, where having a Cloud environment is a benefit? That is to say more sharing about what’s happening across networks for many companies that are clients or customers of a Cloud provider rather than perhaps spotty sharing when it comes to company by company?

Gilmour: There is something to be said for that, Dana. Part of the issue, though, is that companies are individually responsible for their data. They’re individually responsible to a regulator or to their clients for their data. The question then becomes that as soon as you start to share a certain aspect of the security, you’re de facto sharing the weaknesses as well as the strengths.

So it’s a two-edged sword. One of the problems we have is that until we mature a little bit more, we won’t be able to actually see which side is the sharpest.

Gardner: So our premise that Cloud is good and bad for security is holding up, but I’m wondering whether the same things that make you a risk in a private setting — poor adherence to standards, no good governance, too many technologies that are not being measured and controlled, not instilling good behavior in your employees and then enforcing it — wouldn’t be the same either way. Is it really Cloud or not Cloud, or is it good security practices or not good security practices? Mary Ann?

No accountability

Mezzapelle: You’re right. It’s a little bit of “garbage in, garbage out.” You have to have the basic things in place in your enterprise — the policies, the governance cycle, the audit, and the tracking — because none of it matters if you don’t measure it and track it, and if there is no business accountability.

David said it — each individual company is responsible for its own security, but I would say that it’s the business owner that’s responsible for the security, because they’re the ones that ultimately have to answer that question for themselves in their own business environment: “Is it enough for what I have to get done? Is the agility more important than the flexibility in getting to some systems or the accessibility for other people, as it is with some of the ubiquitous computing?”

So you’re right. If it’s an ugly situation within your enterprise, it’s going to get worse when you do outsourcing, out-tasking, or anything else you want to call it within the Cloud environment. One of the things that we say is that organizations not only need to know their technology, but they have to get better at relationship management: understanding who their partners are, and being able to negotiate and manage that effectively through a series of relationships, not just transactions.

Gardner: If data, and sharing data, is so important, it strikes me that the Cloud component is going to be part of that, especially if we’re dealing with business processes across organizations: doing joins, comparing and contrasting data, crunching it and sharing it, making data actually part of the business, a revenue-generating activity. All of that seems prominent and likely.

So to you, Stuart, what is the issue now with data in the Cloud? Is it good, bad, or just the same double-edged sword, and it just depends how you manage and do it?

Boardman: Dana, I don’t know whether we really want to be putting our data in the Cloud, so much as putting the access to our data into the Cloud. There are all kinds of issues you’re going to run up against as soon as you start putting your source information out into the Cloud, not least privacy and that kind of thing.

A bunch of APIs

What you can do is simply say, “What information do I have that might be interesting to people?” If it’s a private Cloud in a large organization, how can I make that available to share elsewhere in the organization? Or maybe it’s really going out into public. What a government, for example, can be thinking about is making information services available — not just what you go and get from them that they’ve already published, but “this is the information”: a bunch of APIs, if you like. I prefer to call them data services, and to make those available.

So, if you do it properly, you have a layer of security in front of your data. You’re not letting people come in and do joins across all your tables; you’re providing information. That does require you then to engage your users in what it is that they want and what they want to do. Maybe there are people out there who want to take a bit of your information and a bit of somebody else’s and mash it together to provide added value. That’s great. Let’s go for that and not try to answer every possible question in advance.
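
The data-services pattern Boardman describes — a security layer in front of the data, publishing curated views rather than raw tables — can be sketched in a few lines. This is a minimal illustration, not any particular product’s API; all names and data are hypothetical:

```python
# Minimal sketch of a data-service layer: consumers get named,
# curated views of the data, never direct access to the tables.
RAW_CUSTOMERS = [
    {"id": 1, "name": "Alice", "email": "alice@example.com", "ltv": 1200},
    {"id": 2, "name": "Bob", "email": "bob@example.com", "ltv": 300},
]

# Each published data service declares exactly which fields it exposes.
SERVICES = {
    "customer_directory": {"fields": ["id", "name"]},
    "marketing_segments": {"fields": ["id", "ltv"]},
}

def query_service(service_name: str):
    """Return only the whitelisted fields for a published data service."""
    spec = SERVICES.get(service_name)
    if spec is None:
        raise PermissionError(f"No such data service: {service_name}")
    return [{f: row[f] for f in spec["fields"]} for row in RAW_CUSTOMERS]

# A consumer can mash up the directory with other data sources, but can
# never join against fields that were not published (e.g. email).
print(query_service("customer_directory"))
```

The point of the pattern is that the owner decides up front what information each service exposes, instead of answering every possible query against the raw store.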

Gardner: Dave, do you agree with that, or do you think that there is a place in the Cloud for some data?

Gilmour: There’s definitely a place in the Cloud for some data. I get the impression that something like the insurance industry is going to come out of this, where you’ll have a secondary Cloud. You’ll have secondary providers who will provide to the front-end providers. They might do things like archiving and that sort of thing.

Now, if you have that situation where your contractual relationship is two steps away, then you have to be very confident and certain of your Cloud partner, and it therefore has to encompass a very strong level of governance.

The other issue you have is that you’ve got the intersection of your governance requirements with those of the Cloud provider. Therefore you have to have a really strongly — and I hate to use the word — architected set of interfaces, so that you can understand how that governance is actually going to operate.

Gardner: Wouldn’t data perhaps be safer in the Cloud than on a poorly managed network?

Mezzapelle: There is data in the Cloud, and there will continue to be data in the Cloud, whether you want it there or not. The best organizations are going to start understanding that they can’t control it with that perimeter-like approach that we’ve been talking about getting away from for the last five or seven years.

So what we want to talk about is data-centric security, where you understand, based on role or context, who is going to access the information and for what reason. I think there is a better opportunity for services like storage, whether it’s for archiving or for near-term use.

There are also other services that you don’t want to have to pay for 12 months out of the year, but that you might need independently. For instance, when you’re running a marketing campaign, you already share your data with some of your marketing partners. Or if you’re doing your payroll, you’re sharing that data through some of the national providers.

Data in different places

So there already is a lot of data in a lot of different places, whether you want Cloud or not, but the context is, it’s not in your perimeter, under your direct control, all of the time. The better you get at managing it wherever it is specific to the context, the better off you will be.
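
The data-centric security Mezzapelle describes attaches the access decision to the data itself — its classification, plus who is asking and in what context — rather than to a network perimeter. A minimal sketch, with an entirely hypothetical policy and attribute set:

```python
# Sketch of a data-centric access check: the decision depends on the
# data's classification plus the requester's role and context, not on
# which network perimeter the data happens to sit behind.
POLICY = {
    # classification: roles allowed, and whether access from an
    # untrusted network is acceptable
    "public":       {"roles": {"employee", "partner", "anonymous"}, "untrusted_net": True},
    "confidential": {"roles": {"employee", "partner"},              "untrusted_net": False},
    "restricted":   {"roles": {"employee"},                         "untrusted_net": False},
}

def can_access(classification: str, role: str, on_trusted_network: bool) -> bool:
    """Allow access only when both the role and the network context
    satisfy the policy attached to the data's classification."""
    rule = POLICY[classification]
    if role not in rule["roles"]:
        return False
    if not on_trusted_network and not rule["untrusted_net"]:
        return False
    return True

# Same data, different answers depending on role and context.
print(can_access("confidential", "partner", on_trusted_network=True))   # True
print(can_access("confidential", "partner", on_trusted_network=False))  # False
```

Real implementations add many more attributes (time, device health, purpose of use), but the shape of the decision — data classification joined with requester context — is the same.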

Hietala: It’s a slippery slope [when it comes to customer data]. That’s the most dangerous data to stick out in a Cloud service, if you ask me. If it’s personally identifiable information, then you get the privacy concerns that Stuart talked about. So to the extent you’re looking at putting that kind of data in a Cloud, look at the Cloud service and try to determine whether you can apply some encryption and sensible security controls, to ensure that if that data gets loose, you’re not ending up in the headlines of The Wall Street Journal.
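
One concrete control in the spirit of Hietala’s point is to pseudonymize personally identifiable fields before they leave your perimeter, so that a leak of the Cloud copy exposes no raw identities. This is a minimal sketch using keyed hashing from the Python standard library; key management is deliberately simplified and all values are illustrative:

```python
import hmac
import hashlib

# The key stays on-premise; only pseudonymized records go to the Cloud.
SECRET_KEY = b"keep-this-out-of-the-cloud"  # illustrative only

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash. The same input always
    yields the same token (so analytics joins still work), but the
    original cannot be recovered from the token without brute force."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer": "alice@example.com", "purchase_total": 125.00}
cloud_safe = {
    "customer": pseudonymize(record["customer"]),  # identity replaced by a stable token
    "purchase_total": record["purchase_total"],
}
print(cloud_safe)
```

Pseudonymization is one option among several (full encryption, tokenization with a reversible on-premise lookup table); which one fits depends on whether the Cloud side ever needs the real values back.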

Gardner: Dave, you said there will be different levels on a regulatory basis for security. Wouldn’t that also play with data? Wouldn’t there be different types of data and therefore a spectrum of security and availability to that data?

Gilmour: You’re right. If we come back to Facebook as an example, Facebook data, even if it’s data about our known customers, is stuff that they have put out there of their own will. The data that they give us, they have given to us for a purpose, and it is not for us then to distribute that data or make it available elsewhere. The fact that it may be the same data is not relevant to the discussion.

Three-dimensional solution

That’s where I think we are going to end up with not just one layer or two layers. We’re going to end up with a sort of three-dimensional solution space. We’re going to work out exactly which chunk we’re going to handle in which way. There will be significant areas where these things cross over.

The other thing we shouldn’t forget is that data includes our software, and that’s something that people forget. Software nowadays is out in the Cloud, under current ways of running things, and you don’t even always know where it’s executing. So if you don’t know where your software is executing, how do you know where your data is?

It’s going to have to be handled one way or another, and I think it’s going to be one of these things where it’s going to be shades of gray, because it cannot be black and white. The question is going to be: what’s the threshold shade of gray that’s acceptable?

Gardner: Mary Ann, to this notion of the different layers of security for different types of data, is there anything happening in the market that you’re aware of that’s already moving in that direction?

Mezzapelle: The experience that I have is mostly in some of the business frameworks for particular industries, like healthcare and what it takes to comply with the HIPAA regulation, or in the financial services industry, or in consumer products where you have to comply with the PCI regulations.

There has continued to be an issue around information lifecycle management, which is categorizing your data. Within a company, you might have had a document that you coded private, confidential, top secret, or whatever. So you might have had three or four levels for a document.

You’ve already talked about how complex it’s going to be as you move into trying to understand, not only for that data, that the name Mary Ann Mezzapelle happens to be in five or six different business systems, over 100 instances around the world.

That’s the importance of something like an Enterprise Architecture that can help you understand that you’re not just talking about the technology components, but the information: what it means, and how it is prioritized or critical to the business, which sometimes comes up in a business continuity plan from a system point of view. That’s where I’ve advised clients on where they might start looking at how they connect the business criticality with a piece of information.

One last thing. Those regulations don’t necessarily mean that you’re secure. They make for good basic health, but that doesn’t mean you’re ultimately protected. You have to do a risk assessment based on your own environment, the bad actors you expect, and the priorities based on that.

Leaving security to the end

Boardman: I just wanted to pick up here, because Mary Ann spoke about Enterprise Architecture. One of my bugbears — and I call myself an enterprise architect — is that we have a terrible habit of leaving security to the end. We don’t architect security into our Enterprise Architecture; it’s a techie thing, and we’ll fix it at the back. There are also people in the security world who are techies, and they think they’ll do it that way as well.

I don’t know how long ago it was published, but there was an activity to look at bringing the SABSA Methodology from security together with TOGAF®. There was a white paper published a few weeks ago.

The Open Group has been doing some really good work on bringing security right in to the process of EA.

Hietala: In the next version of TOGAF, work on which has already started, there will be a whole emphasis on making sure that security is better represented in some of the TOGAF guidance. That’s ongoing work here at The Open Group.

Gardner: As I listen, it sounds as if the in-the-Cloud versus out-of-the-Cloud security continuum is perhaps the wrong way to look at it. If you have a lifecycle approach to services and to data, then you’ll have a way to approach data uses for certain instances and certain requirements, and that would then apply across a variety of private, public, and hybrid Clouds.

Is that where we need to go — a lifecycle approach to services and data that would accommodate any number of different scenarios in terms of hosting, access, and availability? The Cloud seems inevitable. So what we really need to focus on are the services and the data.

Boardman: That’s part of it. That needs to be tied in with the risk-based approach. So if we have done that, we can then pick up on that information and we can look at a concrete situation, what have we got here, what do we want to do with it. We can then compare that information. We can assess our risk based on what we have done around the lifecycle. We can understand specifically what we might be thinking about putting where and come up with a sensible risk approach.

You may come to the conclusion in some cases that the risk is too high and the mitigation too expensive. In others, you may say, no, because we understand our information and we understand the risk situation, we can live with that, it’s fine.

Gardner: It sounds as if we are coming at this as an underwriter for an insurance company. Is that the way to look at it?

Current risk

Gilmour: That’s eminently sensible. You have the mortality tables, you have the current risk, and you just work the two together and work out the premium. That’s probably a very good paradigm to guide us intellectually in how we should approach the problem.

Mezzapelle: One of the problems is that we don’t have those actuarial tables yet. That’s a little bit of an issue for a lot of people when they talk about, “I’ve got $100 to spend on security. Where am I going to spend it this year? Am I going to spend it on firewalls? Am I going to spend it on information lifecycle management assessment? What am I going to spend it on?” Some of the research that we’ve been doing at HP is trying to turn that into something that’s more of a statistic.

So, when you have a particular project that does a certain kind of security implementation, you can see what the business return on it is and how it actually lowers risk. We found that it’s better to spend your money on getting a better system to patch your systems than it is to do some other kind of content filtering or something like that.
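
The “where do I spend my $100” question is, in effect, a ranking of candidate controls by expected loss reduction per dollar. A toy illustration of that calculation, with entirely invented figures, using annual loss expectancy (ALE = incident probability × impact):

```python
# Toy model: rank candidate security investments by how much expected
# annual loss they remove per dollar spent. All figures are invented.
controls = [
    # (name, cost, incident probability before, after, impact if it happens)
    ("patch management",  40, 0.30, 0.10, 500),
    ("content filtering", 40, 0.10, 0.08, 500),
    ("staff training",    20, 0.30, 0.25, 500),
]

def risk_reduction_per_dollar(cost, p_before, p_after, impact):
    """Reduction in annual loss expectancy per unit of spend."""
    ale_before = p_before * impact
    ale_after = p_after * impact
    return (ale_before - ale_after) / cost

# Best value-for-money control first.
ranked = sorted(controls,
                key=lambda c: risk_reduction_per_dollar(*c[1:]),
                reverse=True)
for name, *_ in ranked:
    print(name)
```

With these made-up numbers, patch management comes out on top, which mirrors the finding Mezzapelle cites; the hard part in practice is getting trustworthy before/after probabilities, which is exactly the missing actuarial data she describes.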

Gardner: Perhaps what we need is the equivalent of an Underwriters Laboratories (UL) for permeable organizational IT assets, where the security stamp of approval comes in high or low. Then you could get your insurance insight — maybe something for The Open Group to look into. Any thoughts about how standards and a consortium approach would come into that?

Hietala: I don’t know about the UL for all security things. That sounds like a risky proposition.

Gardner: It could be fairly popular and remunerative.

Hietala: It could.

Mezzapelle: An unending job.

Hietala: I will say we have one active project in the Security Forum that is looking at trying to allow organizations to measure and understand risk dependencies that they inherit from other organizations.

So if I’m outsourcing a function to XYZ corporation, I want to be able to measure what risk I’m inheriting from them by virtue of their doing some IT processing for me. It could be a Cloud provider, or it could be somebody doing a business process for me. So there’s work going on there.
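
Measuring inherited risk across outsourcing relationships can be sketched as composing failure probabilities along the dependency chain. A toy example, assuming independent failures — a strong simplification that real supply chains rarely satisfy — with invented figures throughout:

```python
# Toy model of inherited risk: if my process depends on a chain of
# suppliers, my probability of a security failure includes theirs.
# Assumes independence, which real supply chains rarely satisfy.
def combined_failure_probability(own_p: float, supplier_ps: list) -> float:
    """P(at least one link fails) = 1 - product of all survival probabilities."""
    survival = 1.0 - own_p
    for p in supplier_ps:
        survival *= (1.0 - p)
    return 1.0 - survival

# My own annual incident probability is 2%; my Cloud provider's is 5%,
# and a payroll processor's is 3% (all figures invented).
p = combined_failure_probability(0.02, [0.05, 0.03])
print(round(p, 4))  # noticeably higher than any single link's probability
```

Even in this naive form, the model makes the panel’s point visible: each supplier you add pushes your effective risk above the risk of any single link, which is why measuring those dependencies matters.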

I heard just last week about an NSF-funded project here in the U.S. to do the same sort of thing: trying to measure risk in a predictable way. So there are things going on out there.

Gardner: We have to wrap up, I’m afraid, but Stuart, it seems as if currently it’s the larger public Cloud providers, the likes of Amazon and Google among others, that might be playing the role of all of these entities we are talking about. They are their own self-insurer. They are their own underwriter. They are their own risk assessor, like a UL. Do you think that’s going to continue to be the case?

Boardman: No, I think that as Cloud adoption increases, you will have a greater weight of consumer organizations who will need to do that themselves. It’s not just a question of responsibility, but also of accountability. At the end of the day, you’re always accountable for the data that you hold. It doesn’t matter where you put it or how many other parties it gets subcontracted out to.

The weight will change

So there’s a need to have that, and as the adoption increases, there’s less fear and more, “Let’s do something about it.” Then, I think the weight will change.

Plus, of course, there are other parties coming into this world, the world that Amazon has created. I’d imagine that HP is probably one of them as well, but all the big names in IT are moving in here, and I suspect that for those companies there’s a differentiator in knowing how to do this properly, given their history of enterprise involvement.

So yeah, I think it will change. That’s no offense to Amazon, etc. I just think that the balance is going to change.

Gilmour: Yes. I think that’s how it has to go. The question that then arises is, who is going to police the policeman, and how is that going to happen? Every company is going to be using the Cloud. Even the Cloud suppliers are using the Cloud. So how is it going to work? It’s one of these ever-decreasing circles.

Mezzapelle: At this point, I think it’s going to be more evolution than revolution, but I’m also one of the people who’ve been in that part of the business — IT services — for the last 20 years and have seen it morph in a little bit different way.

Stuart is right that there’s going to be a convergence of the consumer-driven, cloud-based model, which Amazon and Google represent, with an enterprise approach that corporations like HP are representing. It’s somewhere in the middle where we can bring the service level commitments, the options for security, the options for other things that make it more reliable and risk-averse for large corporations to take advantage of it.

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

