The Open Group Edinburgh—The State of Boundaryless Information Flow™ Today

By The Open Group

This year marks the 20th anniversary of the first version of TOGAF®, an Open Group standard, and the publication of “The Boundaryless Organization,” a book that defined how companies should think about creating more open, flexible and engaging organizations. We recently sat down with The Open Group President and CEO Allen Brown and Ron Ashkenas, Senior Partner at Schaffer Consulting and one of the authors of “The Boundaryless Organization,” to get a perspective on Boundaryless Information Flow™ and where the concept stands today. Brown and Ashkenas presented their perspectives on this topic at The Open Group Edinburgh event on Oct. 20.

In the early 1990s, former GE CEO Jack Welch challenged his team to create what he called a “boundaryless organization”—an organization where the barriers between employees, departments, customers and suppliers had been broken down. He also suggested that in the 21st Century, all organizations would need to move in this direction.

Based on the early experience of GE, and a number of other companies, the first book on the subject, “The Boundaryless Organization,” was published in 1995. This was the same year that The Open Group released the first version of the TOGAF® standard, which provided an architectural framework to help companies achieve interoperability by providing a structure for interconnecting legacy IT systems. Seven years later, The Open Group adopted the concept of Boundaryless Information Flow™—achieved through global interoperability in a secure, reliable and timely manner—as the ongoing vision and mission for the organization. According to Allen Brown, President and CEO of The Open Group, that vision has sustained The Open Group over the years and continues to do so as the technology industry faces unprecedented and rapid change.

Brown’s definition of Boundaryless Information Flow is rooted in the notion of permeability. Ron Ashkenas, a co-author of “The Boundaryless Organization,” emphasizes that organizations still need boundaries—without some boundaries they would become “dis-organized.” But like the cell membranes in the human body, those boundaries need to be sufficiently permeable so that information can easily flow back and forth in real time, without being distorted, fragmented or blocked.

In that context, Brown believes that learning to be boundaryless is more important than ever for organizations, even though many of the boundaries that existed in 1995 no longer exist and the technical ability to share information around the world will continue to evolve. What often holds organizations back, however, says Ashkenas, are the social and cultural patterns within organizations, not the technology.

“We have a tendency to protect information in different parts of the organization,” says Ashkenas. “Different functions, locations, and business units want their own version of ‘the truth’ rather than being held accountable to a common standard. This problem becomes even more acute across companies in a globally connected ecosystem and world. So despite our technological capabilities, we still end up with different systems, different information available at different times. We don’t have the common true north. The need to be more boundaryless is still there. In fact it’s greater now, even though we have the capability to do it.”

Although the technical capabilities for Boundaryless Information Flow are largely here, the larger issue is getting people to agree and collaborate on how to do things. As Ashkenas explains, “It’s not just the technical challenges, it’s also cultural challenges, and the two have to go hand-in-hand.”

What’s more, collaboration is not just an issue of getting individuals to change, but of making changes at much larger levels on a much larger scale. Not only are boundaries now blurring within organizations, they’re blurring between institutions and across global ecosystems, which may include anything from apps, devices and technologies to companies, countries and cultures.

Ashkenas says that’s where standards, such as those being created by The Open Group, can help make a difference. He says, “Standards used to come after technology. Now they need to come before the changes and weave together some of the ecosystem partners. I think that’s one of the exciting new horizons for The Open Group and its members—they can make a fundamental difference in the next few years.”

Brown agrees. He says there are two major forces currently shaping how The Open Group will continue to advance the Boundaryless Information Flow vision. One is the need for standards that support the IT function’s shift from an “inside-out” to an “outside-in” model, fueled by a combination of boundaryless thinking and the convergence of social, mobile, Cloud, the Internet of Things and Big Data. The other is the need to move away from IT strategies that are merely derived from business strategies and react to the business agenda, an approach that leads to redundancy and latency in delivering new solutions. Instead, IT must recognize technology as an increasingly powerful driver of business opportunity and be managed as a business in its own right.

For example, twenty years ago a standard might lag behind a technology. Once companies no longer needed to compete on the technology, Brown says, they would standardize. With things like Open Platform 3.0™ and the need to manage the business of IT (IT4IT™) quickly changing the business landscape, now standards need to be at the forefront, along with technology development, so that companies have guidance on how to navigate a more boundaryless world while maintaining security and reliability.

“This is only going to get more and more exciting and more and more interesting,” Brown says.

How boundaryless are we?

Just how boundaryless are we today? Ashkenas says a lot has been accomplished in the past 20 years. Years ago, he says, people in most organizations would have thought that Boundaryless Information Flow was either not achievable or they would have shrugged their shoulders and ignored the concept. Today there’s a strong acceptance of the need for it. In fact, a recent survey of The Open Group members found that 65 percent of those surveyed see boundarylessness as a positive thing within their organization. And while most organizations are making strides toward boundarylessness, only a minority (15 percent) of those surveyed felt that Boundaryless Information Flow would be impossible to achieve in a large international organization such as theirs.

According to Brown and Ashkenas, the next horizon for many organizations will be to make information truly accessible in real time for all stakeholders. Ashkenas says in most organizations the information people need is not always available when they need it, whether due to different systems, cultural constraints or even time zones. The next step will be to provide managers with real-time, anywhere access to all the information they need. IT can play a bigger part in providing people with a “one truth” view of information, he says.

Another critical—but potentially difficult—next step is to try to get people to agree on how to make boundarylessness work across ecosystems. Achieving this will be a challenge because ways of doing things—even standards development—will need to adapt to different cultures in order for them to ultimately work. What makes sense in the U.S. or Europe from a business standpoint may not make sense in China or India or Brazil, for example.

“What are the fundamentals that have to be accepted by everyone and where is there room for customization to local cultures?” asks Ashkenas. “Figuring out the difference between the two will be critical in the coming years.”

Brown and Ashkenas say that we can expect technical innovations to evolve at greater speed and with greater effectiveness in the coming years. This is another reason why Enterprise Architecture and standards development will be critical for helping organizations transform themselves and adapt as boundaries blur even more.

As Brown notes, the reason that the architecture discipline and standards such as TOGAF arose 20 years ago was exactly because organizations were beginning to move toward boundarylessness and they needed a way to figure out how to frame those environments and make them work together. Before then, when IT departments were implementing different siloed systems for different functions across organizations, they had no inkling that someday people might need to share that information across systems or departments, let alone organizations.

“It never crossed our minds that we’d need to add people or get information from disparate systems and put them together to make some sense. It wasn’t information, it was just data. The only way in any complex organization that you can start weaving this together and see how they join together is to have some sort of architecture, an overarching view of the enterprise. Complex organizations have that view and say ‘this is how that information comes together and this is how it’s going to be designed.’ We couldn’t have gotten there without Enterprise Architecture in large organizations,” he says.

In the end, the limitations to Boundaryless Information Flow will largely be organizational and cultural, not a question of technology. Technologically, boundarylessness is largely achievable. The question for organizations, Brown says, is whether they’ll be able to adjust to the changes that technology brings.

“The limitations are in the organization and the culture. Can they make the change? Can they absorb it all? Can they adapt?” he says.


Strategic Planning – Ideas to Delivery

By Martin Owen, CEO, Corso

Most organizations operate at a fast pace of change. Businesses are constantly evaluating market demands and enacting change to drive growth and develop a competitive edge.

These market demands come from a broad range of sources and include economic changes, market trends, regulations, technology improvements and resource management. Knowing where a demand originated, whether it is important and whether it is worth acting on can be difficult.

We look at how innovation, Enterprise Architecture and successful project delivery need to be intertwined and traceable.

In the past, managing the path from ideation to the delivery of innovation has either not been done at all or has been attempted in organizational silos, leading to disconnects. This in turn results in change not being implemented properly, or in a focus on the wrong type of change.

How Does an Organization Successfully Embrace Change?

Many companies start with campaigns and ideation. They run challenges and solicit ideas from within and outside of their walls. Ideas are then prioritized and evaluated. Sometimes prototypes are built and tested, but what happens next?

Many organizations turn to the blueprints or roadmaps generated by their enterprise architectures, IT architectures and/or business process architectures for answers. They evaluate how a new idea and its supporting technology, such as SOA or enterprise resource planning (ERP), fit into the broader architecture. They manage their technology portfolio by looking at their IT infrastructure needs.

Organizations often form program management boards to evaluate ideas, initiatives and their costs. In reality, these evaluations are based on lightweight business cases without the broader context. Organizations don’t have a comprehensive understanding of what systems, processes and resources they have, what they are being used for, how much they cost and how regulations affect them. Projects are delivered and viewed on a project-by-project basis without regard to the bigger picture. Enterprise, technology and process-related decisions are made within the flux of change and without access to the real knowledge contained within the organization or in the marketplace. IT is often in the hot seat of this type of decision-making.

Challenges of IT Planning

IT planning takes place in reaction to, and in anticipation of, these market demands and initiatives. There may be a need for a new CRM or accounting system, or a new application for manufacturing or product development. While IT planning should be part of a broader enterprise architecture or market analysis, IT involvement in technology investments often comes near the end of the strategic planning process and without proper access to enterprise or market data.

The following questions illustrate the competing demands found within the typical IT environment:

How can we manage the prioritization of business-, architectural- and project-driven initiatives?

Stakeholders place a large number of both tactical and strategic requirements on IT. IT is required to offer different technology investment options, but is often constrained by a competition for resources.

How do we balance enterprise architecture’s role with IT portfolio management?

Enterprise architecture provides a high-level view of the risks and benefits of a project and its alignment with future goals. It can illustrate project complexities and the impact of change. Future-state architectures and transition plans can be used to define investment portfolio content. At the same time, portfolio management provides a detailed perspective on development and implementation. Balancing these often-competing viewpoints can be tricky.

How well are application lifecycles being managed?

Application management requires a product/service/asset view over time. Well-managed application lifecycles demand a process of continuous releases, especially when time to market is key. The higher level view required by portfolio management provides a broader perspective of how all assets work together. Balancing application lifecycle demands against a broader portfolio framework can present an inherent conflict about priorities and a struggle for resources.

How do we manage the numerous and often conflicting governance requirements across the delivery process?

As many organizations move to small-team agile development, coordinating the various application development projects becomes more difficult. Managing the development process using waterfall methods can shorten schedules but can also increase the chance of errors and a disconnect with broader portfolio and enterprise goals.

How do we address different lifecycles and tribes in the organization?

Lifecycles such as innovation management, enterprise architecture, business process management and solution delivery are all necessary but are not harmonized across the enterprise. The connection among these lifecycles is important to the effective delivery of initiatives and to understanding the impact of change.

From the enterprise view down through innovation management, portfolio management, application lifecycle management and agile development, these represent competing IT viewpoints that can come together using an ideas-to-delivery framework.

Agile Development and DevOps

A key component of the drive from ideas to delivery is how strategic planning and the delivery of software are related, or, more directly, the relevance of Agile Enterprise Architecture to DevOps.

DevOps, a fusion of “development” and “operations”, is a term that has been around since the end of the last decade and originates from the Agile development movement. In practical terms, it integrates development and operations teams to improve collaboration and productivity by automating infrastructure and workflows and by continuously measuring application performance.

The drivers behind the approach are the competing needs to incorporate new products into production while maintaining 99.9% uptime for customers, all in an agile manner.

To understand the increase in complexity, we need to look at how new features and functions are applied to the delivery of software. The world of mobile apps, middleware and cloud deployment has reduced release cycles to weeks rather than months, with an emphasis on delivering incremental change. Previously, a business release might ship every few months as a series of modules, hopefully still relevant to the business goals.

The shorter continuous delivery lifecycle will help organizations:

  • Achieve shorter releases through incremental delivery, enabling faster innovation.
  • Be more responsive to business needs through improved collaboration, better quality and more frequent releases.
  • Manage the number of applications impacted by a business release by allowing local variants for a global business and continuous delivery within releases.

The DevOps approach achieves this by providing an environment that:

  • Minimizes software delivery batch sizes to increase flexibility and enable continuous feedback, as every team delivers features to production as they are completed.
  • Replaces the notion of projects with release trains, which minimizes batch waiting time and reduces lead times and waste.
  • Shifts from central planning to decentralized execution with a pull philosophy, minimizing batch transaction cost and improving efficiency.
  • Makes DevOps economically feasible through test virtualization, build automation and automated release management. Batches are prioritized and sequenced to maximize business value: select the right batches, sequence them in the right order, guide the implementation, track execution and adjust plans as needed (a simple prioritization sketch follows this list).
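
As a rough illustration of the “prioritize and sequence batches to maximize business value” point above, the sketch below ranks candidate release batches by a simple value-to-cost ratio so that small, high-value work is delivered first. The field names, scores and the ratio itself are illustrative assumptions, not part of any DevOps or release-train standard.

```python
# Illustrative only: rank candidate release batches by a simple
# value-to-cost ratio so the highest-value, lowest-effort work is
# sequenced first. Fields and weights are hypothetical.

from dataclasses import dataclass


@dataclass
class Batch:
    name: str
    business_value: float  # relative value of delivering this batch
    effort: float          # relative size/cost of the batch


def sequence_batches(batches: list[Batch]) -> list[Batch]:
    """Order batches so that small, high-value items are delivered first."""
    return sorted(batches, key=lambda b: b.business_value / b.effort, reverse=True)


if __name__ == "__main__":
    backlog = [
        Batch("Mobile payment feature", business_value=8, effort=5),
        Batch("Reporting dashboard", business_value=5, effort=2),
        Batch("Legacy data migration", business_value=3, effort=8),
    ]
    for b in sequence_batches(backlog):
        print(f"{b.name}: ratio={b.business_value / b.effort:.2f}")
```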

Figure 1: DevOps lifecycle

Thus far we have only looked at the delivery aspects, so how does this approach integrate with an enterprise architecture view?

To understand this, we need to look more closely at the strategic planning lifecycle. Figure 2 shows how it supports an ‘ideas to delivery’ framework.

Figure 2: The strategic planning lifecycle

You can see here the high-level relationship between the strategy and goals of an organization and the projects that deliver the change to meet those goals. The enterprise architecture provides the model to govern the delivery of projects in line with these goals.

However, any model that is built must be “just enough EA” to provide the right level of analysis, as discussed in previous sections of this book regarding the use of Kanban to drive change. The Agile EA model is then one that provides enough analysis to plan which projects should be undertaken while also ensuring full architectural governance over delivery. The latter is achieved by connecting to the tools used in the Agile space.

Figure 3: Detailed view of the strategic planning lifecycle

There are a number of tools that can be used within DevOps. One example is the IBM toolset, which uses open standards to link to other products within the overall lifecycle. This approach integrates the Agile enterprise architecture process with the Agile development process, connects project delivery with effective governance of the project lifecycle, and ensures that even when the software delivery process is agile, the link to goals and associated business needs is maintained.

To achieve this goal, a number of internal processes must interoperate. This is a significant challenge, but one that can be met by building an internal center of excellence, starting small and growing a working environment.

The Strategic Planning Lifecycle Summary

The organization begins by revisiting its corporate vision and strategy. What things will differentiate the organization from its competitors in five years? What value propositions will it offer customers to create that differentiation? The organization can create a series of campaigns or challenges to solicit new ideas and requirements for its vision and strategy.

The ideas and requirements are rationalized into a value proposition that can be examined in more detail.

The company can look at what resources it needs to have on both the business side and the IT side to deliver the capabilities needed to realize the value propositions. For example, a superior customer experience might demand better internet interactions and new applications, processes, and infrastructure on which to run. Once the needs are understood, they are compared to what the organization already has. The transition planning determines how the gaps will be addressed.

An enterprise architecture is a living thing with a lifecycle of its own. Figure 3 shows the ongoing EA processes. With the strategy and transition plan in place, EA execution begins. The transition plan provides input to project prioritization and planning, since projects aligned with the transition plan are typically prioritized over those that are not. This determines which projects are funded and enter, or continue into, the DevOps stage. As solutions are developed, enterprise architecture assets such as models, building blocks, rules, patterns, constraints and guidelines are used and followed. Where the standard assets aren’t suitable for a project, exceptions are requested from the governance board. These exceptions are tracked carefully. Where assets are frequently the subject of exception requests, they must be examined to see if they really are suitable for the organization.
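
As a small, hedged illustration of the “track exceptions and re-examine frequently excepted assets” step, the sketch below counts exception requests per architecture asset and flags those that cross a review threshold. The threshold, the asset names and the shape of the log are assumptions for illustration only.

```python
# Illustrative only: flag EA assets (standards, patterns, building blocks)
# whose exception requests exceed a review threshold, suggesting the asset
# itself may no longer suit the organization. Data and threshold are
# hypothetical.

from collections import Counter


def assets_needing_review(exception_log: list[str], threshold: int = 3) -> list[str]:
    counts = Counter(exception_log)
    return [asset for asset, n in counts.items() if n >= threshold]


exception_log = [
    "legacy-integration-pattern",
    "legacy-integration-pattern",
    "password-policy-building-block",
    "legacy-integration-pattern",
]
print(assets_needing_review(exception_log))  # ['legacy-integration-pattern']
```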

If we’re not doing things the way we said we wanted them done, then we must ask if our target architectures are still correct. This helps keep the EA current and useful.

Periodic updates to the organization’s vision and strategy require a reassessment of the to-be state of the enterprise architecture. This typically results in another look at how the organization will differentiate itself in five years, what value propositions it will offer, the capabilities and resources needed, and so on. Then the transition plan is examined to see if it is still moving us in the right direction. If not, it is updated.

Figure 3 separates the organization’s strategy and vision, the enterprise architecture lifecycle components, and solution development and delivery. Some argue that the strategy and vision are part of the EA, while others argue against this. Both views are valid, since they simply depend on how you look at the process. If the CEO’s office is responsible for the vision and strategy and the reporting chain is responsible for its execution, then separating it from the EA makes sense. In practice, the top of the reporting chain participates in the vision and strategy exercise and is encouraged to “own” it, at least from an execution perspective. In that case, it might be fair to consider it part of the EA. Or you can say it drives the EA. The categorization isn’t as important as understanding how the vision and strategy interact with the EA, or the rest of the EA, however you see it.

Note that the overall goal here is to have traceability from ideas and initiatives all the way through to strategic delivery, with clear feedback from delivery assets back to the ideas and requirements from which they were initiated.

Martin Owen, CEO, Corso, has held executive, senior management and technical positions at IBM, Telelogic and Popkin. He has been instrumental in driving forward the product management of enterprise architecture, portfolio management and asset management tooling.

Martin is also active with industry standards bodies and was the driver behind the first Business Process Modeling Notation (BPMN) standard.

Martin has led the ArchiMate® and UML mapping initiatives at The Open Group and is part of the capability-based planning standards team.

Martin is responsible for strategy, products and direction at Corso.


Mac OS X El Capitan Achieves UNIX® Certification

By The Open Group

The Open Group, an international vendor- and technology-neutral consortium, has announced that Apple, Inc. has achieved UNIX® certification for its latest operating system – Mac OS X version 10.11 known as “El Capitan.”

El Capitan was announced on September 29, 2015, following its registration as conforming to The Open Group UNIX® 03 standard on September 7, 2015.

The UNIX® trademark is owned and managed by The Open Group, with the trademark licensed exclusively to identify operating systems that have passed the tests identifying that they conform to The Single UNIX Specification, a standard of The Open Group. UNIX certified operating systems are trusted for mission critical applications because they are powerful and robust, they have a small footprint and are inherently more secure and more stable than the alternatives.

Mac OS X is the most widely used UNIX desktop operating system, and Apple’s installed base is now over 80 million users. Its commitment to the UNIX standard as a platform enables wide portability of applications between compliant and compatible operating systems.


The Open Group to Host Event in Edinburgh in October

By The Open Group

The Open Group, the vendor-neutral IT consortium, is hosting its latest event in Edinburgh on October 19-22, 2015. The Open Group Edinburgh 2015 will focus on the challenge of architecting a Boundaryless Organization in an age of new technology and changing trends. The event will look at this issue in more depth during individual sessions on October 19 and 20 covering Cybersecurity, Risk, the Internet of Things and Enterprise Architecture.

Key speakers at the event include:

  • Allen Brown, President & CEO, The Open Group
  • Steve Cole, CIO, BAE Systems Submarine Solutions
  • John Wilcock, Head of Operations Transformation, BAE Systems Submarine Solutions
  • Mark Orsborn, Director, IoT EMEA, Salesforce
  • Heather Kreger, CTO, International Standards, IBM
  • Kevin Coyle, Solution Architect, the Student Loans Company
  • Rob Akershoek, Solution Architect, Shell International

Full details on the range of track speakers at the event can be found here.

The event, which is being held at the Edinburgh International Conference Centre, provides attendees with the opportunity to participate in Forums and Work Groups to help develop the next generation of certifications and standards. Attendees will be able to learn from the experiences of their peers and gain valuable knowledge from other relevant industry experts, including a detailed case study of how BAE Systems Submarine Solutions introduced an Enterprise Architecture approach to transform its business operations.

Attendees will also be able to hear about the upcoming launch of the new IT4IT™ Reference Architecture v2.0 Standard and hear from Mary Jarrett, IT4IT Manager at Shell, on “Why Shell is Adopting an Open Standard for Managing IT”.

Additional key topics of discussion at the Edinburgh event include:

  • An architected approach to Business Transformation in the manufacturing sector
  • The Boundaryless Organization and Boundaryless Information Flow™, and their relevance to information technology executives today
  • The role of Enterprise Architecture in Business Transformation, especially transformations driven by emerging and disruptive technologies
  • Risk, Dependability & Trusted Technology: the Cybersecurity connection and securing the global supply chain
  • Open Platform 3.0™ – Social, Mobile, Big Data & Analytics, Cloud and SOA – how organizations can achieve business objectives by adopting new technologies and processes as part of Business Transformation management principles
  • IT4IT™ – A new operating model to manage the business of IT, providing prescriptive guidance on how to design, procure and implement the functionality required to run IT
  • Enabling healthcare interoperability
  • Developing better interoperability and communication across organizational boundaries and pursuing global standards for Enterprise Architecture that are highly relevant to all industries

The Open Group will be hosting a networking event at Edinburgh Castle on Tuesday October 20, with an evening of traditional Scottish food and entertainment.

Registration for The Open Group Edinburgh 2015 is now open to members and non-members and can be found here.


The Open Trusted Technology Provider™ Standard (O-TTPS) Approved as ISO/IEC International Standard

Doing More to Secure IT Products and their Global Supply Chains

By Sally Long, The Open Group Trusted Technology Forum Director

As the Director of The Open Group Trusted Technology Forum, I am thrilled to share the news that The Open Trusted Technology Provider™ Standard – Mitigating Maliciously Tainted and Counterfeit Products (O-TTPS), Version 1.1, has been approved as an ISO/IEC International Standard (ISO/IEC 20243:2015).

It is one of the first standards aimed at assuring both the integrity of commercial off-the-shelf (COTS) information and communication technology (ICT) products and the security of their supply chains.

The standard defines a set of best practices for COTS ICT providers to use to mitigate the risk of maliciously tainted and counterfeit components from being incorporated into each phase of a product’s lifecycle. This encompasses design, sourcing, build, fulfilment, distribution, sustainment, and disposal. The best practices apply to in-house development, outsourced development and manufacturing, and to global supply chains.

The ISO/IEC standard will be published in the coming weeks. In advance of the ISO/IEC 20243 publication, The Open Group edition of the standard, technically identical to the ISO/IEC approved edition, is freely available here.

The standardization effort is the result of a collaboration in The Open Group Trusted Technology Forum (OTTF) between government, third-party evaluators and some of the industry’s most mature and respected providers, who came together as members and, over a period of five years, shared and built on their practices for integrity and security, including those used in-house and those used with their own supply chains. From these, they created a set of best practices that were standardized through The Open Group consensus review process as the O-TTPS. That standard was then submitted to the ISO/IEC JTC1 process for Publicly Available Specifications (PAS), where it was recently approved.

The Open Group has also developed an O-TTPS Accreditation Program to recognize Open Trusted Technology Providers who conform to the standard and adhere to best practices across their entire enterprise, within a specific product line or business unit, or within an individual product. Accreditation is applicable to all ICT providers in the chain: OEMs, integrators, hardware and software component suppliers, value-add distributors, and resellers.

While The Open Group assumes the role of the Accreditation Authority over the entire program, it also uses third-party assessors to assess conformance to the O-TTPS requirements. The Accreditation Program and the Assessment Procedures are publicly available here. The Open Group is also considering submitting the O-TTPS Assessment Procedures to the ISO/IEC JTC1 PAS process.

This international approval comes none too soon, given that the global threat landscape continues to change dramatically and cyber attacks – which have long targeted governments and big business – are growing in sophistication and prominence. We saw this most clearly with the Sony hack late last year. Despite successes using more longstanding hacking methods, maliciously intentioned cyber criminals are looking at new ways to cause damage and are increasingly targeting the technology supply chain as a potentially profitable avenue. In such a transitional environment, it is worth reviewing again why IT products and their supply chains are so vulnerable and what can be done to secure them in the face of numerous challenges.

Risk lies in complexity

Information technology supply chains depend upon complex and interrelated networks of component suppliers across a wide range of global partners. Suppliers deliver parts to OEMs or component integrators, who build products from them and in turn offer those products to customers directly or to system integrators, who integrate them with products from multiple providers at a customer site. This complexity leaves ample opportunity for malicious components to enter the supply chain and create vulnerabilities that can potentially be exploited.

As a result, organizations now need assurances that they are buying from trusted technology providers who follow best practices every step of the way. This means that they not only follow secure development and engineering practices in-house while developing their own software and hardware pieces, but also that they are following best practices to secure their supply chains. Modern cyber criminals go through strenuous efforts to identify any sort of vulnerability that can be exploited for malicious gain and the supply chain is no different.

Untracked malicious behavior and counterfeit components

Tainted products introduced into the supply chain pose significant risk to organizations because altered products introduce the possibility of untracked malicious behavior. A compromised electrical component or piece of software that lies dormant and undetected within an organization could cause tremendous damage if activated externally. Customers, including governments, are moving away from building their own high-assurance, customized systems and toward the use of commercial off-the-shelf (COTS) information and communication technology (ICT), typically because it is better, cheaper and more reliable. But a maliciously tainted COTS ICT product, once connected or incorporated, poses a significant security threat. For example, it could allow unauthorized access to sensitive corporate data, including intellectual property, or allow hackers to take control of the organization’s network. Perhaps the most concerning element of the whole scenario is the amount of damage that such destructive hardware or software could inflict on safety- or mission-critical systems.

Like maliciously tainted components, counterfeit products can also cause significant damage to customers and providers resulting in failed or inferior products, revenue and brand equity loss, and disclosure of intellectual property. Although fakes have plagued manufacturers and suppliers for many years, globalization has greatly increased the number of out-sourced components and the number of links in every supply chain, and with that comes increased risk of tainted or counterfeit parts making it into operational environments. Consider the consequences if a faulty component was to fail in a government, financial or safety critical system or if it was also maliciously tainted for the sole purpose of causing widespread catastrophic damage.

Global solution for a global problem – the relevance of international standards

One of the emerging challenges is the rise of local demands on IT providers related to cybersecurity and IT supply chains. Despite technology supply chains being global in nature, more and more local solutions are cropping up to address some of the issues mentioned earlier, resulting in multiple countries with different policies that include disparate and variable requirements related to cybersecurity and supply chains. Some are competing local standards, but many are local solutions generated by governmental policies that dictate which countries to buy from and which not to. The supply chain has become a nationally charged issue that requires the creation of a level playing field regardless of where a company is based. Competition should be based on the quality, integrity and security of products and processes, not on where the products were developed, manufactured or assembled.

Having transparent criteria through global international standards like the recently approved O-TTPS (ISO/IEC 20243), and objective assessments like the O-TTPS Accreditation Program that help assure conformance to those standards, is critical both to raise the bar on global suppliers and to provide equal opportunity (vendor-neutral and country-neutral) for all constituents in the chain to reach that bar, regardless of locale.

The approval by ISO/IEC of this universal product integrity and supply chain security standard is an important next step in the continued battle to secure ICT products and protect the environments in which they operate. Suppliers should explore what they need to do to conform to the standard, and buyers should consider encouraging conformance by requiring it in their RFPs. By adhering to relevant international standards and demonstrating conformance, technology providers and component suppliers around the world will have a powerful tool for combating current and future cyber attacks on our critical infrastructure, our governments, our business enterprises and even the COTS ICT we have in our homes. This is truly a universal problem that we can begin to solve through adoption of, and adherence to, international standards.

Sally Long is the Director of The Open Group Trusted Technology Forum (OTTF). She has managed customer-supplier forums and collaborative development projects for over twenty years. She was the release engineering section manager for all multi-vendor collaborative technology development projects at The Open Software Foundation (OSF) in Cambridge, Massachusetts. Following the merger of the OSF and X/Open under The Open Group, she served as director for multiple forums in The Open Group. Sally has a Bachelor of Science degree in Electrical Engineering from Northeastern University in Boston, Massachusetts.

Contact: @sallyannlong


Using Risk Management Standards: A Q&A with Ben Tomhave, Security Architect and Former Gartner Analyst

By The Open Group

IT Risk Management is currently in a state of flux with many organizations today unsure not only how to best assess risk but also how to place it within the context of their business. Ben Tomhave, a Security Architect and former Gartner analyst, will be speaking at The Open Group Baltimore on July 20 on “The Strengths and Limitations of Risk Management Standards.”

We recently caught up with Tomhave pre-conference to discuss the pros and cons of today’s Risk Management standards, the issues that organizations are facing when it comes to Risk Management and how they can better use existing standards to their advantage.

How would you describe the state of Risk Management and Risk Management standards today?

The topic of my talk is really on the state of standards for Security and Risk Management. There’s a handful of significant standards out there today, varying from some of the work at The Open Group to NIST and the ISO 27000 series, etc. The problem with most of those is that they don’t necessarily provide a prescriptive level of guidance for how to go about performing or structuring risk management within an organization. If you look at ISO 31000 for example, it provides a general guideline for how to structure an overall Risk Management approach or program but it’s not designed to be directly implementable. You can then look at something like ISO 27005 that provides a bit more detail, but for the most part these are fairly high-level guides on some of the key components; they don’t get to the point of how you should be doing Risk Management.

In contrast, one can look at something like the Open FAIR standard from The Open Group, and that gets a bit more prescriptive and directly implementable, but even then there’s a fair amount of scoping and education that needs to go on. So the short answer to the question is, there’s no shortage of documented guidance out there, but there are, however, still a lot of open-ended questions and a lot of misunderstanding about how to use these.

What are some of the limitations that are hindering risk standards then and what needs to be added?

I don’t think it’s necessarily a matter of needing to fix or change the standards themselves, I think where we’re at is that we’re still at a fairly prototypical stage where we have guidance as to how to get started and how to structure things but we don’t necessarily have really good understanding across the industry about how to best make use of it. Complicating things further is an open question about just how much we need to be doing, how much value can we get from these, do we need to adopt some of these practices? If you look at all of the organizations that have had major breaches over the past few years, all of them, presumably, were doing some form of risk management—probably qualitative Risk Management—and yet they still had all these breaches anyway. Inevitably, they were compliant with any number of security standards along the way, too, and yet bad things happen. We have a lot of issues with how organizations are using standards less than with the standards themselves.

Last fall The Open Group fielded an IT Risk Management survey that found that many organizations are struggling to understand and create business value for Risk Management. What you’re saying really echoes those results. How much of this has to do with problems within organizations themselves and not having a better understanding of Risk Management?

I think that’s definitely the case. A lot of organizations are making bad decisions in many areas right now, and they don’t know why or aren’t even aware and are making bad decisions up until the point it’s too late. As an industry we’ve got this compliance problem where you can do a lot of work and demonstrate completion or compliance with check lists and still be compromised, still have massive data breaches. I think there’s a significant cognitive dissonance that exists, and I think it’s because we’re still in a significant transitional period overall.

Security should really have never been a standalone industry or a standalone environment. Security should have just been one of those attributes of the operating system or operating environments from the outset. Unfortunately, because of the dynamic nature of IT (and we’re still going through what I refer to as this Digital Industrial Revolution that’s been going on for 40-50 years), everything’s changing everyday. That will be the case until we hit a stasis point that we can stabilize around and grow a generation that’s truly native with practices and approaches and with the tools and technologies underlying this stuff.

An analogy would be to look at Telecom. Look at Telecom in the 1800s when they were running telegraph poles and running lines along railroad tracks. You could just climb a pole, put a couple alligator clips on there and suddenly you could send and receive messages, too, using the same wires. Now we have buried lines, we have much greater integrity of those systems. We generally know when we’ve lost integrity on those systems for the most part. It took 100 years to get there. So we’re less than half that way with the Internet and things are a lot more complicated, and the ability of an attacker, one single person spending all their time to go after a resource or a target, that type of asymmetric threat is just something that we haven’t really thought about and engineered our environments for over time.

I think it’s definitely challenging. But ultimately Risk Management practices are about making better decisions. How do we put the right amount of time and energy into making these decisions and providing better information and better data around those decisions? That’s always going to be a hard question to answer. Thinking about where the standards really could stand to improve, it’s helping organizations, helping people, understand the answer to that core question—which is, how much time and energy do I have to put into this decision?

When I did my graduate work at George Washington University, a number of years ago, one of the courses we had to take went through decision management as a discipline. We would run through things like decision trees. I went back to the executives at the company that I was working at and asked them, ‘How often do you use decision trees to make your investment decisions?’ And they just looked at me funny and said, ‘Gosh, we haven’t heard of or thought about decision trees since grad school.’ In many ways, a lot of the formal Risk Management stuff that we talk about and drill into—especially when you get into the quantitative risk discussions—a lot of that goes down the same route. It’s great academically, it’s great in theory, but it’s not the kind of thing where on a daily basis you need to pull it out and use it for every single decision or every single discussion. Which, by the way, is where the FAIR taxonomy within Open FAIR provides an interesting and very valuable breakdown point. There are many cases where just using the taxonomy to break down a problem and think about it a little bit is more than sufficient, and you don’t have to go the next step of populating it with the actual quantitative estimates and do the quantitative estimations for a FAIR risk analysis. You can use it qualitatively and improve the overall quality and defensibility of your decisions.
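
To make the contrast between qualitative and quantitative use of the taxonomy concrete, here is a minimal Monte Carlo sketch in the spirit of an Open FAIR-style analysis: risk is treated as loss event frequency combined with loss magnitude, each estimated as a range rather than a single number. The ranges, the triangular distribution and the percentiles reported are illustrative assumptions, not values or methods prescribed by the Open FAIR standard.

```python
# Minimal, illustrative sketch of a quantitative risk estimate: simulate
# annualized loss exposure from ranges for loss event frequency (LEF) and
# loss magnitude (LM). All figures and the triangular distribution are
# hypothetical, not taken from the Open FAIR standard.

import random


def simulate_annual_loss(trials: int = 10_000) -> list[float]:
    losses = []
    for _ in range(trials):
        # Loss event frequency per year: low=0.1, high=4.0, most likely=0.5
        lef = random.triangular(0.1, 4.0, 0.5)
        # Loss magnitude per event in dollars: low=10k, high=500k, most likely=60k
        lm = random.triangular(10_000, 500_000, 60_000)
        losses.append(lef * lm)
    return losses


if __name__ == "__main__":
    results = sorted(simulate_annual_loss())
    n = len(results)
    print(f"Median annual loss exposure: ${results[n // 2]:,.0f}")
    print(f"90th percentile:             ${results[int(n * 0.9)]:,.0f}")
```

Even without the simulation step, simply writing down the frequency and magnitude ranges forces the kind of scenario thinking Tomhave describes, which is often enough to improve the defensibility of a decision.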

How mature are most organizations in their understanding of risk today, and what are some of the core reasons they’re having such a difficult time with Risk Management?

The answer to that question varies to a degree by industry. Industries like financial services just seem to deal with this stuff better for the most part, but then if you look at the multibillion-dollar write-offs at JP Morgan Chase, you think maybe they don’t understand risk after all. I think for the most part most large enterprises have at least some people in the organization who have a nominal understanding of Risk Management and risk assessment and how that factors into making good decisions.

That doesn’t mean that everything’s perfect. Look at the large enterprises that had major breaches in 2014 and 2013 and clearly you can look at those and say ‘Gosh, you guys didn’t make very good decisions.’ Home Depot is a good example or even the NSA with the Snowden stuff. In both cases, they knew they had an exposure, they had done a reasonable job of risk management, they just didn’t move fast enough with their remediation. They just didn’t get stuff in place soon enough to make a meaningful difference.

For the most part, larger enterprises or organizations will have better facilities and capabilities around risk management, but they may have challenges with velocity in terms of being able to put to rest issues in a timely fashion. Now slip down to different sectors and you look at retail, they continue to have issues with cardholder data and that’s where the card brands are asserting themselves more aggressively. Look at healthcare. Healthcare organizations, for one thing, simply don’t have the budget or the control to make a lot of changes, and they’re well behind the curve in terms of protecting patient records and data. Then look at other spaces like SMBs, which make up more than 90 percent of U.S. employer firms, or look at the education space where they simply will never have the kinds of resources to do everything that’s expected of them.

I think we have a significant challenge here – a lot of these organizations will never have the resources to have adequate Risk Management in-house, and they will always be tremendously resource-constrained, preventing them from doing all that they really need to do. The challenge for them is, how do we provide answers or tools or methods to them that they can then use that don’t require a lot of expertise but can guide them toward making better decisions overall even if the decision is ‘Why are we doing any of this IT stuff at all when we can simply be outsourcing this to a service that specializes in my industry or specializes in my SMB business size that can take on some of the risk for me that I wasn’t even aware of?’

It ends up being a very basic educational awareness problem in many regards, and many of these organizations don’t seem to be fully aware of the type of exposure and legal liability that they’re carrying at any given point in time.

One of the other IT Risk Management Survey findings was that where the Risk Management function sits in organizations is pretty inconsistent—sometimes IT, sometimes risk, sometimes security—is that part of the problem too?

Yes and no—it’s a hard question to answer directly because we have to drill in on what kind of Risk Management we’re talking about. Because there’s enterprise Risk Management reporting up to a CFO or CEO, and one could argue that the CEO is doing Risk Management.

One of the problems that we historically run into, especially from a bottom-up perspective, is a lot of IT Risk Management people or IT Risk Management professionals or folks from the audit world have mistakenly thought that everything should boil down to a single, myopic view of ‘What is risk?’ And yet it’s really not how executives run organizations. Your chief exec, your board, your CFO, they’re not looking at performance on a single number every day. They’re looking at a portfolio of risk and how different factors are balancing out against everything. So it’s really important for folks in Op Risk Management and IT Risk Management to really truly understand and make sure that they’re providing a portfolio view up the chain that adequately represents the state of the business, which typically will represent multiple lines of business, multiple systems, multiple environments, things like that.

I think one of the biggest challenges we run into is just in an ill-conceived desire to provide value that’s oversimplified. We end up hyper-aggregating results and data, and suddenly everything boils down to a stop light that IT today is either red, yellow or green. That’s not really particularly informative, and it doesn’t help you make better decisions. How can I make better investment decisions around IT systems if all I know is that today things are yellow? I think it comes back to the educational awareness topic. Maybe people aren’t always best placed within organizations but really it’s more about how they’re representing the data and whether they’re getting it into the right format that’s most accessible to that audience.

What should organizations look for in choosing risk standards?

I usually get a variety of questions and they’re all about risk assessment—‘Oh, we need to do risk assessment’ and ‘We hear about this quant risk assessment thing that sounds really cool, where do we get started?’ Inevitably, it comes down to, what’s your actual Risk Management process look like? Do you actually have a context for making decisions, understanding the business context, etc.? And the answer more often than not is no, there is no actual Risk Management process. I think really where people can leverage the standards is understanding what the overall risk management process looks like or can look like and in constructing that, making sure they identify the right stakeholders overall and then start to drill down to specifics around impact analysis, actual risk analysis around remediation and recovery. All of these are important components but they have to exist within the broader context and that broader context has to functionally plug into the organization in a meaningful, measurable manner. I think that’s really where a lot of the confusion ends up occurring. ‘Hey I went to this conference, I heard about this great thing, how do I make use of it?’ People may go through certification training but if they don’t know how to go back to their organization and put that into practice not just on a small-scale decision basis, but actually going in and plugging it into a larger Risk Management process, it will never really demonstrate a lot of value.

The other piece of the puzzle that goes along with this, too, is you can’t just take these standards and implement them verbatim; they’re not designed to do that. You have to spend some time understanding the organization, the culture of the organization and what will work best for that organization. You have to really get to know people and use these things to really drive conversations rather than hoping that one of these risk assessments results will have some meaningful impact at some point.

How can organizations get more value from Risk Management and risk standards?

Starting with the latter first, the value of the Risk Management standards is that you don’t have to start from scratch or reinvent the wheel. There are, in fact, very consistent and well-conceived approaches to structuring Risk Management programs and conducting risk assessment and analysis. That’s where the power of the standards comes from: establishing a template or guideline for these things.

The challenge of course is you have to have it well-grounded within the organization. In order to get value from a Risk Management program, it has to be part of daily operations. You have to plug it into things like procurement cycles and other similar types of decision cycles so that people aren’t just making gut decisions based off whatever their existing biases are.

One of my favorite examples is password complexity requirements. If you look back at the ‘best practice’ standards requirements over the years, going all the way back to the Orange Book in the 80s or the Rainbow Series which came out of the federal government, they tell you ‘oh, you have to have 8-character passwords and they have to have upper case, lower, numbers, special characters, etc.’ The funny thing is that while that was probably true in 1985, that is probably less true today. When we actually do risk analysis to look at the problem, and understand what the actual scenario is that we’re trying to guard against, password complexity ends up causing more problems than it solves because what we’re really protecting against is a brute force attack against a log-in interface or guessability on a log-in interface. Or maybe we’re trying to protect against a password database being compromised and getting decrypted. Well, password complexity has nothing to do with solving how that data is protected in storage. So why would we look at something like password complexity requirements as some sort of control against compromise of a database that may or may not be encrypted?
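
A quick back-of-the-envelope calculation shows why the scenario matters. The keyspace arithmetic below (illustrative figures only) is relevant to a brute-force or guessing attack against a login interface; it says nothing about whether a stolen password database was hashed or encrypted properly, which is the other scenario Tomhave describes.

```python
# Back-of-the-envelope keyspace arithmetic (illustrative figures only).
# Complexity rules change the size of the search space for brute-force
# guessing; they do nothing to protect a password database that was
# stored poorly in the first place.

def keyspace(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length


# 8 characters drawn from ~95 printable ASCII characters
complex_8 = keyspace(95, 8)
# 12 characters drawn from lowercase letters only
simple_12 = keyspace(26, 12)

print(f"8-char complex password keyspace: {complex_8:.2e}")
print(f"12-char lowercase-only keyspace:  {simple_12:.2e}")
```

With these illustrative numbers the longer, simpler password actually has the larger search space, which is exactly the kind of result a scenario-driven analysis surfaces and a checklist requirement hides.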

This is where Risk Management practices come into play because you can use Risk Management and risk assessment techniques to look at a given scenario—whether it be technology decisions or security control decisions, administrative or technical controls—we can look at this and say what exactly are we trying to protect against, what problem are we trying to solve? And then based on our understanding of that scenario, let’s look at the options that we can apply to achieve an appropriate degree of protection for the organization.

That ultimately is what we should be trying to achieve with Risk Management. Unfortunately, that’s usually not what we see implemented. A lot of the time, what’s described as risk management is really just an extension of audit practices and issuing a bunch of surveys, questionnaires, asking a lot of questions but never really putting it into a proper business context. Then we see a lot of bad practices applied, and we start seeing a lot of math-magical practices come in where we take categorical data—high, medium, low, more or less, what’s the impact to the business? A lot, a little—we take these categorical labels and suddenly start assigning numerical values to them and doing arithmetic calculations on them, and this is a complete violation of statistical principles. You shouldn’t be doing that at all. By definition, you don’t do arithmetic on categorical data, and yet that’s what a lot of these alleged Risk Management and risk assessment programs are doing.
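
The point about categorical data can be shown in a few lines. Once “high/medium/low” labels are mapped to numbers, nothing stops an analyst from averaging them, but the resulting figure has no defensible meaning. The mapping below is a deliberately bad example of the practice being criticized.

```python
# Deliberately bad example: arithmetic on categorical (ordinal) labels.
# Mapping "low/medium/high" to 1/2/3 and averaging produces a number,
# but the intervals between labels are not known to be equal, so the
# mean is not statistically meaningful.

ratings = ["high", "low", "low", "medium"]
scale = {"low": 1, "medium": 2, "high": 3}

scores = [scale[r] for r in ratings]
bogus_average = sum(scores) / len(scores)
print(f"'Average risk' = {bogus_average}")  # 1.75 -- looks precise, means little
```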

I think Risk Management gets a bad rap as a result of these poor practices. Conducting a survey, asking questions is not a risk assessment. A risk assessment is taking a scenario, looking at the business impact analysis for that scenario, looking at the risk tolerance, what the risk capacity is for that scenario, and then looking at what the potential threats and weaknesses are within that scenario that could negatively impact the business. That’s a risk assessment. Asking people a bunch of questions about ‘Do you have passwords? Do you use complex passwords? Have you hardened the server? Are there third party people involved?’ That’s interesting information but it’s not usually reflective of the risk state and ultimately we want to find out what the risk state is.

How do you best determine that risk state?

If you look at any of the standards—and again, this is where the standards do provide some value—and at what a Risk Management process is and the steps involved in it, take ISO 31000 for example: step one is establishing context, which includes establishing the potential business impact or business importance, the business priority for applications and data, and the risk tolerance and risk capacity for a given scenario. That's your first step. The risk assessment step then takes that data and does additional analysis around the scenario.

In the technical context, that means looking at how secure the environment is, what the system's exposure is, who has access to it, and how the data is stored and protected. From that analysis, you can complete the assessment by saying, 'Given that this is a high-value asset and there's sensitive data in here, but that data is strongly encrypted and the access controls have multiple layers of defense, the relative risk of a compromise or attack being successful is fairly low.' Or, 'We did this assessment and found that the application could retrieve data even though it was supposedly stored in an encrypted state, so we end up with a high-risk statement around the business impact: we're looking at material loss,' or something like that.
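A minimal sketch of that two-step flow, using invented field names and a deliberately crude rating rule (nothing here is taken from ISO 31000 itself or from the interview), might separate context setting from the assessment that follows it:

    from dataclasses import dataclass

    @dataclass
    class Context:
        # Step one: establish context before assessing anything (hypothetical fields).
        asset: str
        business_impact: str    # e.g. "material financial loss" or "minor inconvenience"
        risk_tolerance: str     # how much of that impact the business will accept

    @dataclass
    class Assessment:
        # Step two: analysis of the scenario against the established context.
        data_encrypted: bool
        layered_access_controls: bool
        internet_exposed: bool

    def risk_statement(ctx: Context, a: Assessment) -> str:
        # Deliberately crude rating rule -- a real assessment weighs far more factors.
        weaknesses = sum([not a.data_encrypted, not a.layered_access_controls, a.internet_exposed])
        if ctx.business_impact == "material financial loss" and weaknesses >= 2:
            return f"HIGH risk to {ctx.asset}: impact is material and key protections are missing"
        if weaknesses == 0:
            return f"LOW risk to {ctx.asset}: high-value asset, but strongly protected"
        return f"MODERATE risk to {ctx.asset}: review against tolerance '{ctx.risk_tolerance}'"

    ctx = Context("customer database", "material financial loss", "low appetite for data loss")
    print(risk_statement(ctx, Assessment(data_encrypted=True,
                                         layered_access_controls=True,
                                         internet_exposed=False)))
    print(risk_statement(ctx, Assessment(data_encrypted=False,
                                         layered_access_controls=False,
                                         internet_exposed=True)))

The same asset and the same context produce a low or a high risk statement depending on what the assessment finds, which is exactly why the context has to be established first.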

Pulling all of these pieces together is really key, and most importantly, you cannot skip over context setting. If you never do context setting and establish the business importance, nothing else ends up mattering. Just because a system has a vulnerability doesn't mean it's a material risk to the business. And you can't even know that unless you establish the context.

In terms of getting started, leveraging the standards makes a lot of sense, but not from the perspective of 'this is a compliance checklist that I'm going to use verbatim.' You have to use them as a structured process, get some training and education on how these things work and on the requirements you have to meet, and then do what makes sense for your organizational role. At the end of the day, there's no Easy Button for these things; you have to invest some time and energy and build something that makes sense and is functional for your organization.

To download the IT Risk Management survey summary, please click here.

Former Gartner analyst Ben Tomhave (MS, CISSP) is Security Architect for a leading online education organization, where he is putting theories into practice. He holds a Master of Science in Engineering Management (Information Security Management concentration) from The George Washington University, and is a member and former co-chair of the American Bar Association Information Security Committee, a senior member of ISSA, a former board member of the Northern Virginia OWASP chapter, and a member and former board member of the Society of Information Risk Analysts. He is a published author and an experienced public speaker, with recent speaking engagements including the RSA Conference, the ISSA International Conference, Secure360, RVAsec, RMISC, and several Gartner events.

Join the conversation! @theopengroup #ogchat #ogBWI


Filed under Cybersecurity, RISK Management, Security, Security Architecture, Standards, The Open Group Baltimore 2015, Uncategorized

The Open Group Madrid 2015 – Day One Highlights

By The Open Group

On Monday, April 20, Allen Brown, President & CEO of The Open Group, welcomed 150 attendees to the Enabling Boundaryless Information Flow™ summit held at the Madrid Eurobuilding Hotel.  Following are highlights from the plenary:

The Digital Transformation of the Public Administration of Spain – Domingo Javier Molina Moscoso

Domingo Molina, the first Spanish national CIO, said that governments must transform digitally to meet public expectations, stay nationally competitive, and control costs – the common transformation theme of doing more with less. Spain's CORA commission studied what commercial businesses did and saw the need for an ICT platform as part of the reform, along with coordination and centralization of ICT decision making across agencies.

Three Projects:

  • Telecom consolidation – €125M savings, reduction in infrastructure and vendors
  • Reduction in number of data centers
  • Standardizing and strengthening the security platform for central administration – only possible because of the telecom consolidation.

The Future: Increasing use of mobile, social networks, and online commercial services such as banking – these are the expectations of young people. The administration must therefore be at the forefront of providing digital services to citizens. They have set a transformation target of citizens being able to interact digitally with all government services by 2020.


  • Q: Any use of formal methods for transformation, such as EA? A: They looked at other countries and saw models such as outsourcing; they are taking a combined approach of reusing their own experts and externalizing.
  • Q: How difficult has it been to achieve savings in Europe, given labor laws? A: The model is to re-assign people to higher-value tasks.
  • Q: How do you measure progress? A: Each unit has its own ERP for IT governance, with no unified reporting; the CIO requests and consolidates the data. They are working on a common IT tool to do this.

An Enterprise Transformation Approach for Today’s Digital Business – Fernando García Velasco

Computing has moved from tabulating systems to the internet and is now moving into an era of the “third platform” of Cloud, Analytics, Mobile and Social (CAMS) and cognitive computing. This creates a “perfect storm” for disruption of enterprise IT delivery.

  • 58% say SMAC will reduce barriers to entry
  • 69% say it will increase competition
  • 41% expect this competition to come from outside traditional market players

These trends are being collected and consolidated in The Open Group Open Platform 3.0™ standard.

He sees the transformation happening in three ways:

  1. Top-down – a transformation view
  2. Meet in the middle: Achieving innovation through EA
  3. Bottom-up: the normal drive for incremental improvement

Gartner: EA is the discipline for leading enterprise response to disruptive forces. IDC: EA is mandatory for managing transformation to third platform.

EA Challenges & Evolution – a Younger Perspective

Steve Nunn, COO of The Open Group and CEO of the Association of Enterprise Architects (AEA), noted that the AEA is leading the development of EA as a profession and held this session to recognize the younger voices joining it. He introduced the panelists: Juan Abal, Itziar Leguinazabal, Mario Gómez Velasco, Daniel Aguado Pérez, Ignacio Macias Jareño.

The panelists talked about their journey as EAs, noting that their training focused on development with little exposure to EA or Computer Science concepts. Schools aren’t currently very interested in teaching EA, so it is hard to get a start. Steve Nunn noted the question of how to enter EA as a profession is a worldwide concern. The panelists said they started looking at EA as a way of gaining a wider perspective of the development or administrative projects they were working on. Mentoring is important, and there is a challenge in learning about the business side when coming from a technical world. Juan Abal said such guidance and mentoring by senior architects is one of the benefits the AEA chapter offers.

Q: What advice would you give to someone entering into the EA career? A: If you are starting from a CS or engineering perspective, you need to start learning about the business. Gain a deep knowledge of your industry. Expect a lot of hard work, but it will have the reward of having more impact on decisions. Q: EA is really about business and strategy. Does the AEA have a strategy for making the market aware of this? A: The Spanish AEA chapter focuses on communicating that EA is a mix, and that EAs need to develop business skills. It is a concern that young architects are focused on IT aspects of EA, and how they can be shown the path to understand the business side.

Q: Should EA be part of the IT program or the CS program in schools? A: Around the world we have seen a history of architects coming from IT, and only a few universities have specific EA programs; some offer it at the postgraduate level. The AEA is trying globally to raise awareness of the need for EA education. Continuing education as part of a career development path is a good way to manage the breadth of skills a good EA needs; organizations should also be aware of the levels of The Open Group Open CA certification.

Q: If EA is connected to business, should EAs be specialized to the vertical sector, or should EA be business agnostic? A: Core EA skills are industry-agnostic, and these need to be supplemented by industry-specific reference models. Methodology, Industry knowledge and interpersonal skills are all critical, and these are developed over time.

Q: Do you use EA tools in your job? A: Not really – the experience to use complex tools comes over time.

Q: Are telecom companies adopting EA? A: Telecom companies are adopting standard reference architectures. This sector has not made much progress in EA, though it is critical for transformation in the current market. Time pressure in a changing market is also a barrier.

Q: Is EA being grown in-house or outsourced? A: We are seeing increased uptake among end-user companies in using EA to achieve transformation – this is happening across sectors and is a big opportunity in Spain right now.

Join the conversation! @theopengroup #ogMAD


Filed under Uncategorized, Enterprise Architecture, Standards, Professional Development, Open Platform 3.0, Boundaryless Information Flow™, Internet of Things