
The UNIX®-Based Cloud

By Harry Foxwell, PhD, Principal Consultant, Oracle®

Oracle® Solaris continues to evolve as the foundation for critical private cloud implementations. As the premier UNIX® operating system in the IT industry, certified against The Open Group’s exacting standards for enterprise-level operating systems, Solaris 11 enables Oracle customers and partners to provide the elasticity, security, scalability, and stability that today’s demanding Cloud Computing environments require.

As Chris Riggin, Enterprise Architect at Verizon, said at last fall’s Oracle OpenWorld, the cloud services enabled by Solaris provide the massive scaling Verizon needs to serve its 135 million customers and 180,000 employees, to speed service delivery, and to maintain its competitive edge. Using Solaris’ and SPARC’s innovative virtualization technologies and Oracle-supported OpenStack, Verizon serves both customers and employees with a UNIX-based cloud infrastructure that delivers enhanced agility, superior performance, easy maintainability, and effective cost control.

Solaris has continually led the evolution of UNIX as the primary choice for enterprise computing. Oracle’s leadership on The Open Group Governing Board ensures that UNIX will maintain and extend its prominent role in cloud computing.

UNIX® is a Registered Trademark of The Open Group.
Oracle® Solaris is a Registered Trademark of Oracle Corporation.


Harry Foxwell is a principal consultant at Oracle’s Public Sector division in the Washington, DC area, where he is responsible for solutions consulting and customer education on cloud computing, operating systems, and virtualization technologies. Harry has worked for Sun Microsystems, now part of Oracle, since 1995. Prior to that, he worked as a UNIX and Internet specialist for Digital Equipment Corporation; he has worked with UNIX systems since 1979 and with Linux systems since 1995.

Harry is coauthor of two Sun BluePrints: “Slicing and Dicing Servers: A Guide to Virtualization and Containment Technologies” (Sun BluePrints Online, October 2005) and “The Sun BluePrints Guide to Solaris Containers: Virtualization in the Solaris Operating System” (Sun BluePrints Online, October 2006). He coauthored the book Pro OpenSolaris (Apress, 2009) and blogs about cloud computing at http://blogs.oracle.com/drcloud/.

He earned his doctorate in information technology in 2003 from George Mason University (Fairfax, VA), and has since taught graduate courses there in operating systems, computer architecture and security, and electronic commerce.

Harry is a Vietnam veteran; he served as a platoon sergeant in the US Army’s 1st Infantry Division in 1968-1969. He was awarded an Air Medal and a Bronze Star. He is also an amateur astronomer and contributing member of the Northern Virginia Astronomy Club. In addition, Harry is a USA Table Tennis (USATT) member and competitive table tennis player. He is also a US Soccer Federation (USSF) soccer referee.

For additional information about Harry, please visit his home page: http://cs.gmu.edu/~hfoxwell.

 


The Open Group to Hold Next Event in San Francisco

The Open Group, the vendor-neutral IT consortium, is hosting its next event in San Francisco January 25-28. The Open Group San Francisco 2016 will focus on how Enterprise Architecture is empowering companies to build better systems by architecting for digital business strategies. The event will go into depth on this topic through various individual sessions and keynotes.

Some of the many topics of discussion at the event include Business Architecture; how to architect systems using tools and frameworks such as TOGAF® and ArchiMate® (both Open Group standards); Social, Mobile, Analytics and Cloud (SMAC); Risk Management and Cybersecurity; Business Transformation; Professional Development; and improving the security and dependability of IT systems, including the global supply chain on which they rely.

Key speakers at the event, taking place at San Francisco’s Marriott Union Square, include:

  • Steve Nunn, President & CEO, The Open Group
  • Trevor Cheung, VP Strategy and Architecture Practice, Huawei Global Services
  • Jeff Matthews, Director of Venture Strategy and Research, Space Frontier Foundation
  • Ajit Gaddam, Chief Security Architect, Visa
  • Eric Cohen, Chief Enterprise Architect, Thales
  • Heather Kreger, Distinguished Engineer, CTO International Standards, IBM

Full details on the range of track speakers at the event can be found here.

There will also be the inaugural TOGAF® User Group meeting, taking place on January 25. Facilitated breakout sessions will bring together key stakeholders and users to share best practices and information and to learn from each other.

Other subject areas at the three day event will include:

  • Open Platform 3.0™ – The Customer Experience and Digital Business
  • IT4IT – Managing the Business of IT. Case study presentations and a vendor panel to discuss the release of The Open Group IT4IT Reference Architecture Version 2.0 standard
    • Plus deep dive presentations into the four streams of the IT Value Chain along with the latest information on the IT4IT training and certification program.
  • EA & Business Transformation – Understand what role EA, as currently practiced, plays in Business Transformation, especially transformations driven by emerging and disruptive technologies.
  • Risk, Dependability & Trusted Technology – The cybersecurity connection – securing the global supply chain.
  • Enabling Healthcare
  • TOGAF® 9 and ArchiMate® – Case studies and the harmonization of the standards.
  • Understand how to develop better interoperability & communication across organizational boundaries and pursue global standards for Enterprise Architecture that are highly relevant to all industries.

Registration for The Open Group San Francisco is now open, is available to members and non-members alike, and can be found here.

Join the conversation @theopengroup #ogSFO


Driving Digital Transformation Using Enterprise Architecture

By Sunil Kr. Singh, Senior Architecture and Digital Consultant at TATA Consultancy Services

Driving Digital Transformation Using Enterprise Architecture as a Problem-Solving Tool Set

If I stood before an audience and opened with, “Enterprise Architecture is required for driving the Digital Transformation of an organization”, I suspect I would be talking to an empty hall within 30 seconds. However, I believe the topic is worth the effort; too many transformations are underway. You might be wondering what I have to say.

Changes are happening rapidly

Business Transformation is becoming the normal playing ground for everyone! It is happening more frequently and far more rapidly. It does not end there; the shrinking window to catch up makes the challenge even greater. As this Digital Tsunami hits us, adopting and developing a standardized approach to implementing and executing digital transformation initiatives is important for success. The key is to develop the competency to be agile and incremental in a very dynamic environment.

Consumerization and commoditization of products and services, driven by innovation, knowledge sharing, collaboration, and crowd-driven mechanics, is driving rapid evolution of the business landscape. The desire to use information in better ways was always there; however, the cost and the scale at which it is possible now were confined to books and labs even a decade ago. If I still have you here and everything sounds familiar, you might be starting to wonder what is so special about Digital Transformation. That is the right question, and I would encourage you to ask it many times as you take up the Digital Transformation journey!

I strongly believe that transformation itself is an old subject for you. Business has been engaged in transformation for a long time, driving it by formulating new business strategies. The same is true for Information Technology (IT) departments; they have moved from mainframes to distributed systems, and from independent web applications to portals to mobile applications. We are all seasoned soldiers of transformation! Still…

One of the biggest causes of the butterflies is the uncertainty around how big a force the Tsunami is. As we see business domains collapse, we wonder what we should do now. Shall we act, or watch and catch the next wave? And which waves should we catch? There is no abating of waves!

Too-frequent disruption in business models

The driver for Google Compare is unprecedented! Who can become a car manufacturer? Alternatively, who wants to play in the card payment market? Every establishment looks like a house of cards, waiting to be blown away and rebuilt by the Digital Tsunami.

What is special here is the change in the pattern of “Transformation” when the prefix “Digital” gets attached. It is no longer IT for Business; it is technology-enabled business, literally! The basics of the marketplace, of how one gets the 4Ps together to generate value, are changing, and with them come newer Business Models. That is where the critical differentiation comes in. This drives a couple of thoughts: (a) business gurus need to understand information and technology; (b) technical gurus need to understand business. It is no longer a question of business and IT alignment; it is a question of a merger, and of what the mix looks like!

Everyone understands this and understands that change is unavoidable. However, they are also apprehensive of repeating “past failures to transform”. Though plenty of transformation experience exists, it has also taught the Knights that it was never easy, and this time the target itself is fuzzy.

Nevertheless, with tons of questions in mind, everyone is queuing up for a makeover! The key question for the image makeover gurus: what makeover tools are at their disposal?

EA is the short answer. Nevertheless, not everyone is doing EA, so how can one explain the success stories that are out there? I am sure there are plenty of charismatic individual leaders who pull these off in godly ways. However, the challenge starts when they try to convey their ideas to others. Our reaction is always, “She or he doesn’t get all the challenges!” or “The devil is in the detail!” Neither do we get what they are trying to drive us toward. The friction is huge, and more often than not companies are stuck here, losing their agility! Is there anything that can break the stalemate?

In this situation, the toolset that will help is the set of tools around Enterprise Architecture (EA). I can see jaws drop: “What?” “We’ll never be able to transform if we let Enterprise Architecture drive the show!” Let us set aside the people aspects. The toolset tries to present a merged image of business and IT, which is the need of the hour. I agree with the challenges that the industry has been experiencing with EA; however, there is a lot of potential in this practice. On the other hand, EA needs to mature as well. This is the symbiotic opportunity! I would like to hear about options other than EA for driving Digital Transformation.

The point I am going to make here is simple. The challenge in front of business and IT leaders is to move quickly and deliver continuous business value through incremental adoption of change. The opportunity for the transformation team is to use a set of tools around EA to let the leaders achieve their goals.

Below I have picked three focus areas where the EA practice and its tool set can be valuable in enabling Digital Transformation.

  1. Unified View:

As we are all experiencing, at any given point in time there are multiple strategies in execution in different areas of the organization. For example, as is commonly observed these days, while one team is creating a 360-degree view of its partners, another team may be engaged in various phases of IT system reengineering. I need not get into the details of how they influence each other!

The above phenomenon is almost like solving a Rubik’s cube. When we try to align one side, the arrangement on another side breaks. The different sides of the Rubik’s cube are like different areas of the organization, or different initiatives. Enterprise Architecture handles these explicitly through Views. Case in point: during an eGovernance initiative to reorganize IT systems and processes, the organization had to start a parallel initiative to modernize the data center. It did not end there; the government was also planning to enable an unprecedented amount of self-service for the public. Different business departments were driving these, and the IT teams were in silos. The result was a no-brainer: multiple starts and stops, overshooting both budget and timeline!

Now let us look at the Digital Transformation situation. In most contemporary organizations, cyber security initiatives, digital initiatives, core system modernizations, and a few innovation initiatives all run in parallel.

So what does the situation on the ground look like? A typical meeting room! In a meeting of a particular program, the lead architect or a shared developer points out, “Oh, I know there is a security initiative going on in the data center, and that may impact our timeline.” The project manager makes a note to check this out. The subsequent situations would be familiar too. When the project manager communicates with her counterpart, neither really understands the language of the other (though they are speaking the same language, be it English, German, or Hindi). They decide to keep the initiatives separate so that each one can go live. The result? The organization now has two different security gateways!

The above paragraph describes an imaginary situation; however, we can all recollect many similar ones. When these different teams or their representatives get into conversations, they may not have the structures in front of them to understand the possible impacts. It may sound obvious; however, the devil is in the details, and the details are in the different jargon or lingo of each initiative.

EA tries to solve exactly this problem and drive the organization forward. There are many tools, for example Vision, Business Motivation Models, Business Capability Models, Business Services Models, Business Process Models, IT Services Models, and Technology Models, which help sustain the dialogue. Stakeholders within the enterprise will understand the impact of an initiative when they understand the behaviour of the target state, and it is possible to explain the behaviour when there is a good structure to depict and define it.
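To make the Views idea concrete, here is a minimal, purely illustrative Python sketch of one shared enterprise model projected into different stakeholder views. The element names, layers, and initiatives are hypothetical; real EA tools model far richer structures and relationships.

from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    layer: str        # e.g., "business", "application", "technology"
    initiative: str   # the initiative that owns or changes the element

MODEL = [
    Element("Customer Onboarding", "business", "digital-experience"),
    Element("Identity Gateway", "application", "security"),
    Element("Data Center Network", "technology", "dc-modernization"),
    Element("Self-Service Portal", "application", "digital-experience"),
]

def view(model, layer=None, initiative=None):
    """Project the shared model onto one stakeholder's concern."""
    return [e for e in model
            if (layer is None or e.layer == layer)
            and (initiative is None or e.initiative == initiative)]

# The security architect and the digital program see different slices of
# the same model, so impacts surface early instead of hiding in silos.
print(view(MODEL, initiative="security"))
print(view(MODEL, layer="application"))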

There is a classical problem here: whether to focus on the forest and ignore the trees, or to look at an individual tree and ignore the forest. In reality one needs to do both! The tools mentioned above help orchestrate between these different perspectives. They provide a mechanism to do it in a relatively easy way. I say relatively because nothing is easy if one does not put in the effort to build competency around it.

Let us consider the area under Digital Transformation that is most widely in the vocabulary today: Digital Experience. It touches almost every part of the enterprise. This initiative may directly affect process simplification and improvement initiatives that are already underway to drive Operational Excellence. The organization typically gets into a chicken-and-egg scenario, and this results in losing momentum over how to resolve the issues. Instead of trying to tie everything together, the EA tools help to create building blocks. These building blocks are implemented independently. They are then moved to operations independently and, like magic, it works.

One way to let initiatives move independently while remaining confident of their effectiveness is through the use of an Architecture Contract.

It is important to understand what the expected outcome is. For example, in the case of “Customer Digital Experience” the question would be: is it a pure Information Technology initiative, or does it influence the Business Architecture and Business Model? This is a decisive moment for understanding whether the changes are just to leverage new technology capabilities like Mobile, Wearables, or Big Data. For all the right reasons, the initiative may be just that. In that case the recommendation would be to run it as a typical IT program; please do not boil the ocean by putting it under the “Digital Transformation” banner. However, if an organization is really looking to change the business playing field, then adopting EA practices will help immensely.

  2. Enterprise Architecture Tools:

For Digital Transformation, the Business Architecture, Technology Architecture, and Information Architecture Views, and the various tools related to them, are the pillars of Enterprise Architecture. In fact, understanding the Business Capabilities and being able to map the impact of the Digital Forces onto those capabilities will be critical to the success of the outcome.

However, a few other areas of the Enterprise Architecture practice help in navigating the entire EA effort when one is trying to solve the problem of Digital Transformation. For now, I will venture into these EA tools; I may return to applying Business Architecture, Information Architecture, and Technology Architecture tools and practices to Digital Transformation in a later article.

You might be wondering why I am ignoring the pillars. The pillars are something we have to work through anyway; however, getting them in place requires other vehicles, and I often find that teams struggle with those. For instance, Business Capabilities are going to be the pillars of Business Architecture for driving the Digital Transformation work; however, teams often struggle to find out which business motivations are going to affect the existing capabilities.

Now let us go through a few of the tools here.

To find out what is required to realize the Digital Strategy – If the organization has developed a Digital Strategy, that is a big achievement. However, it is not the end of the journey. We have all been in situations where it takes months to decide on next steps and years to see the strategy come to bloom. One may like to review a few common reasons why strategies fail.

A tool that can help untangle the different aspects of what you are trying to achieve through the Digital Strategy is the Business Motivation Model (BMM). ArchiMate® supports creating a very effective abridged version of the BMM. The BMM can help in identifying the next set of activities by helping you create a model that relates requirements to goals to stakeholders. This quickly lets one see the next steps and the value each brings in moving toward the desired target.
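As a rough illustration of that traceability, the sketch below represents an abridged motivation model in Python: stakeholders hold drivers, drivers motivate goals, and goals are realized by requirements. The relation names and sample entries are invented for the example and are not ArchiMate or BMM syntax.

MOTIVATION = {
    ("CMO", "holds"): ["Customers expect digital self-service"],
    ("Customers expect digital self-service", "motivates"): [
        "Reduce onboarding time to one day",
    ],
    ("Reduce onboarding time to one day", "realized_by"): [
        "Online identity verification",
        "Automated account provisioning",
    ],
}

def trace(node, relation):
    """Follow one relation from a node; returns an empty list if none."""
    return MOTIVATION.get((node, relation), [])

# Walk from a stakeholder down to concrete requirements -- the "next set
# of activities" that the model is meant to reveal.
for driver in trace("CMO", "holds"):
    for goal in trace(driver, "motivates"):
        for requirement in trace(goal, "realized_by"):
            print(driver, "->", goal, "->", requirement)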

EA Methodology – The idea is to move incrementally. Fail with an idea faster, so that one can learn faster and apply the learning for success sooner! It is desirable to take incremental steps, modifying the existing Business Model using Business Architecture while keeping the IT Architecture aligned through the iterations and intermediate steps.

The ADM of TOGAF®, an Open Group standard, is a good place to start; other frameworks such as DoDAF and FEA, and the methodologies around them, can help enrich the ADM. The important point to consider while iterating through the ADM, or even evaluating it, is the kind of customization required for the enterprise in scope. However, focus on maturing the methodology incrementally.

Stakeholder Management: who is impacted and how – In a complex engagement like implementing Digital Transformation, stakeholder management can be challenging. Understanding the stakeholders’ goals and drivers can be daunting. Besides, understanding the real need and what it means under the applicable constraints can be confusing. I have seen organizations stuck tackling stakeholders, unable to come out of the labyrinth for months or even years. There are tools within ArchiMate to lay down the stakeholders and connect them to their drivers, assessments, goals, and requirements. Other tools, independently or through ArchiMate extensions, help do the same.

It is a good idea to lay down stakeholders at multiple levels: the overall Digital Organization level, the Program level, and the various initiative/project levels. An interaction model among these will help one understand the various Enterprise Architecture Views required to meet the objectives of different stakeholders.

What does the enterprise want to achieve during the incremental initiative: EA Vision – This is a critical and tricky part. Until now, the Digital Strategy work has mapped the Business Strategy to a clear Business Vision and mapped tactics to realize the Business Strategy. Sometimes each of the tactics may entail an EA Vision for the cycle (there may also be multiple EA cycles for one EA Vision; a pyramid of visions is the theme). I have seen organizations running big transformation exercises where not all stakeholders clearly understand all the different aspects; there is a lack of EA Vision, or there is no well-developed structure beyond word of mouth and slides. The recommendation is to lay down the EA Vision as a subset of the organizational vision; however, the alignment needs to be made clear by following a well-defined approach.

Make the EA Vision clear; however, it need not be something insurmountable to achieve within a given period. The EA Vision is not a blue-sky dream that takes one to the top of the mountain! It is a pragmatic value proposition that the organization is trying to achieve.

What do the milestones on the road look like: Roadmap – The recommendation is to execute the roadmapping activities under the EA initiative of the Digital Transformation. This allows creating the right alignment from the business perspective and helps bind all the stakeholders to a common cause. There are a significant number of examples where large programs have surprised their stakeholders with the outcome, in a negative way. The Digital Transformation journey will be far more difficult without the right level of EA effort.

Can we do it better next time: Housekeeping – A significant part of the EA assets and activities that exist in the literature today, and that are most popular, revolve around housekeeping. One of them is the EA repository. This is extremely important; however, practitioners should recognize it as housekeeping and position the activities around the repository appropriately. Spending a significant amount of effort on housekeeping while one is still trying to build the house would not do justice to the time and effort spent.

Nevertheless, this would be a good time to start with a clean EA repository and begin populating it with the artifacts being produced. Then, in a parallel or later thread, start tying things together. The benefit of this approach is avoiding dilution of the focus on using EA as a problem-solving tool while keeping the accelerated momentum of the transformation going.

The EA repository can be helpful for managing business assets, especially those focused on Information Technology (we are discussing technology-enabled business transformation). Business Capability creation, impact analysis on business capabilities, and visibility for key stakeholders all receive a boost through traceability and reusability.
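As a simple illustration of that traceability and reusability, the following Python sketch files artifacts against initiatives and the business capabilities they touch, so impact questions can be answered later. All names here are invented for the example.

from collections import defaultdict

class EARepository:
    def __init__(self):
        self._by_capability = defaultdict(list)

    def add(self, artifact, initiative, capabilities):
        """File an artifact produced by an initiative against capabilities."""
        for capability in capabilities:
            self._by_capability[capability].append((initiative, artifact))

    def impacts(self, capability):
        """Which initiatives and artifacts touch this business capability?"""
        return self._by_capability[capability]

repo = EARepository()
repo.add("Target-state payments architecture", "digital-experience", ["Payments"])
repo.add("PCI control matrix", "security", ["Payments", "Identity"])

# A stakeholder asking "what is changing around Payments?" sees both
# initiatives, instead of discovering the clash in a meeting room.
print(repo.impacts("Payments"))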

It may appear that the transformation team would adopt the tools and techniques mentioned above even without the EA umbrella. The point is that instead of performing these activities in Business or IT silos at different points in time, EA can bring them all together. It helps the organization make efficient progress. For example, it creates “Views” for different areas of concern or focus, allowing different groups to visualize their respective stakes and impacts as different initiatives run in parallel. These initiatives are large and are transforming the DNA of the organization; it is important to understand the impact and be able to steer accordingly.

  3. Enterprise Agility:

Why is Enterprise “Agility” important in the context of Digital Transformation? Is it just because agility is the fad these days? I believe it is the environment, which has become very dynamic for all the reasons mentioned above. At the same time, agility is enabled by the information and tools now available. We have moved quite far since the days of Mr. Ford’s era of “Any customer can have a car painted any colour that he wants as long as it is black.” In a very dynamic environment, the ability to iterate is important. The points shared so far help to achieve agility and make incremental progress. The main pillars of incremental transformation are:

  1. The ability to have a single coherent view, even though multiple threads run independently
  2. The ability to conceptualize and initiate multiple iterations of Enterprise Architecture, driven by a vision (or a pyramid of visions)
  3. A strong Enterprise Architecture repository, so that every iteration and every independent thread contributes to the common goal; this does not mean the repository is the most important thing, only one important element

As one moves through the transformation, it is imperative to have a clear vision of what one wants to achieve. It is then necessary to break it down (architectural decomposition) into smaller achievable chunks and iteratively implement those chunks. Approaches other than EA would fail to maintain the stability of the enterprise system after each viable iteration. This means that at every point during the transformation, the business should function seamlessly across transformed and existing business and IT functions; there should be a seamless flow of information across all business functions. Moreover, the business benefits should be clear and measurable during each iteration.

Summing it up: if technical initiatives with Big Data, Machine Learning, Artificial Intelligence, Cloud, Mobility, or Social do not affect the business (apart from the adoption of new Information Services or Technology Services), then EA may not be required. However, if one wants to change business functions by leveraging digital tools, or would have to change them because of Digital Forces, then EA is the best vehicle to board for the journey of transformation!

The risk of not taking an architecture-centric approach is that it becomes too complex to handle the different variables that can influence the net outcome of Digital Transformation. Immediate success can soon wane into an unmanageable mess of different organizations, departments, roles, systems, and information. There are too many variables for a few individuals to relate, communicate, and track as they change.

The promise of Digital in the business space is the capability to use information, move incrementally, and continuously optimize. Transforming enterprises (large or small) incrementally is not an easy affair, as we have all realized and experienced! Thus, without a tool set that helps ease the transformation, the cost of technology and its rapid evolution will be difficult to manage.

During the whole journey of transformation, EA can produce tangible outputs. The organization can refer back to these outputs at any point in time to understand the rationale for failure or success. Organizations not mature in implementing strategies often grapple with the outcome if it is not a great success. Their success seems to depend too much on a binary notion of success or failure, even though business is continuous. There is plenty of opportunity to avoid the binary result and follow a path of incremental change.

Sunil Kr. Singh is a Senior Architecture and Digital Consultant at TATA Consultancy Services. He has more than 16 years of experience with Information Technology-driven transformation and developing IT systems for business solutions. He has a wide range of hands-on experience: establishing Enterprise Architecture practices, streamlining IT and business processes, and developing, designing, and architecting business systems.
https://ca.linkedin.com/in/sunilsingh1

The opinions expressed in this article/presentation are those of the author; no organization that the author is affiliated with or works for is associated with these views.

Join the conversation @theopengroup #ogchat


IT4IT™ Reference Architecture Version 2.0, an Open Group Standard

By The Open Group

1 Title/Current Version

IT4IT™ Reference Architecture Version 2.0, an Open Group Standard

2 The Basics

The Open Group IT4IT Reference Architecture standard comprises a reference architecture and a value chain-based operating model for managing the business of IT.

The IT Value Chain

The IT Value Chain has four value streams supported by a reference architecture to drive efficiency and agility. The four value streams are:

  • Strategy to Portfolio
  • Requirement to Deploy
  • Request to Fulfill
  • Detect to Correct

Each IT Value Stream is centered on a key aspect of the service model, the essential data objects (information model), and functional components (functional model) that support it. Together, the four value streams play a vital role in helping IT control the service model as it advances through its lifecycle.
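Purely as an illustration of that idea, here is a minimal Python sketch of a single service record advancing stream by stream, with each value stream contributing data to the service model. The value stream names come from the standard; the data object names and the code itself are simplified assumptions, not part of IT4IT.

from enum import Enum

class ValueStream(Enum):
    S2P = "Strategy to Portfolio"
    R2D = "Requirement to Deploy"
    R2F = "Request to Fulfill"
    D2C = "Detect to Correct"

# Illustrative data objects each stream contributes to the service model.
CONTRIBUTES = {
    ValueStream.S2P: ["conceptual service", "portfolio entry"],
    ValueStream.R2D: ["service design", "release package"],
    ValueStream.R2F: ["catalog item", "subscription"],
    ValueStream.D2C: ["incident record", "change request"],
}

def lifecycle(service_name):
    """Trace a service's record as it advances through the value streams."""
    record = {"service": service_name}
    for stream in ValueStream:       # Enum preserves definition order
        record[stream.value] = CONTRIBUTES[stream]
    return record

print(lifecycle("payroll-service"))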

The IT4IT Reference Architecture

  • Provides prescriptive guidance on the specification of and interaction with a consistent service model backbone (common data model/context)
  • Supports real-world use-cases driven by the Digital Economy (e.g., Cloud-sourcing, Agile, DevOps, and service brokering)
  • Embraces and complements existing process frameworks and methodologies (e.g., ITIL®, COBIT®, SAFe, and TOGAF®) by taking a data-focused implementation model perspective, essentially specifying an information model across the entire value chain

3 Summary

The IT4IT Reference Architecture standard consists of the value chain and a three-layer reference architecture. Level 1 is shown below.

[Figure: The IT4IT Reference Architecture, Level 1]

The IT4IT Reference Architecture provides prescriptive, holistic guidance for the implementation of IT management capabilities for today’s digital enterprise. It is positioned as a peer to comparable reference architectures such as NRF/ARTS, the TMF Framework (aka eTOM), ACORD, BIAN, and other such guidance.

Together, the four value streams play a vital role in helping IT control the service model as it advances through its lifecycle:

[Figure: The four value streams of the IT Value Chain]

The Strategy to Portfolio (S2P) Value Stream:

  • Provides the strategy to balance and broker your portfolio
  • Provides a unified viewpoint across PMO, enterprise architecture, and service portfolio
  • Improves data quality for decision-making
  • Provides KPIs and roadmaps to improve business communication

The Requirement to Deploy (R2D) Value Stream:

  • Provides a framework for creating, modifying, or sourcing a service
  • Supports agile and traditional development methodologies
  • Enables visibility of the quality, utility, schedule, and cost of the services you deliver
  • Defines continuous integration and deployment control points

The Request to Fulfill (R2F) Value Stream:

  • Helps your IT organization transition to a service broker model
  • Presents a single catalog with items from multiple supplier catalogs
  • Efficiently manages subscriptions and total cost of service
  • Manages and measures fulfillments across multiple suppliers

The Detect to Correct (D2C) Value Stream:

  • Brings together IT service operations to enhance results and efficiency
  • Enables end-to-end visibility using a shared configuration model
  • Identifies issues before they affect users
  • Reduces the mean time to repair

4 Target Audience

The target audience for the standard consists of:

  • IT executives
  • IT process analysts
  • Architects tasked with “business of IT” questions
  • Development and operations managers
  • Consultants and trainers active in the IT industry

5 Scope

The Open Group IT4IT standard is focused on defining, sourcing, consuming, and managing IT services by looking holistically at the entire IT Value Chain. While existing frameworks and standards have placed their main emphasis on process, this standard is process-agnostic, focused instead on the data needed to manage a service through its lifecycle. It then describes the functional components (software) that are required to produce and consume that data. Once these components are integrated, a system-of-record fabric for IT management is created that ensures full visibility and traceability of the service from cradle to grave.

IT4IT is neutral with respect to development and delivery models. It is intended to support Agile as well as waterfall approaches, and lean Kanban process approaches as well as fully elaborated IT service management process models.

The IT4IT Reference Architecture relates to TOGAF®, ArchiMate®, and ITIL® as shown below:

[Figure: How the IT4IT Reference Architecture relates to TOGAF®, ArchiMate®, and ITIL®]

6 Relevant Website

For further details on the IT4IT Reference Architecture standard, visit www.opengroup.org/IT4IT.


The Open Group Edinburgh—The State of Boundaryless Information Flow™ Today

By The Open Group

This year marks the 20th anniversary of the first version of TOGAF®, an Open Group standard, and the publication of “The Boundaryless Organization,” a book that defined how companies should think about creating more open, flexible and engaging organizations. We recently sat down with The Open Group President and CEO Allen Brown and Ron Ashkenas, Senior Partner at Schaffer Consulting and one of the authors of “The Boundaryless Organization,” to get a perspective on Boundaryless Information Flow™ and where the concept stands today. Brown and Ashkenas presented their perspectives on this topic at The Open Group Edinburgh event on Oct. 20.

In the early 1990s, former GE CEO Jack Welch challenged his team to create what he called a “boundaryless organization”—an organization where the barriers between employees, departments, customers and suppliers had been broken down. He also suggested that in the 21st Century, all organizations would need to move in this direction.

Based on the early experience of GE, and a number of other companies, the first book on the subject, “The Boundaryless Organization,” was published in 1995. This was the same year that The Open Group released the first version of the TOGAF® standard, which provided an architectural framework to help companies achieve interoperability by providing a structure for interconnecting legacy IT systems. Seven years later, The Open Group adopted the concept of Boundaryless Information Flow™—achieved through global interoperability in a secure, reliable and timely manner—as the ongoing vision and mission for the organization. According to Allen Brown, President and CEO of The Open Group, that vision has sustained The Open Group over the years and continues to do so as the technology industry faces unprecedented and rapid change.

Brown’s definition of Boundaryless Information Flow is rooted in the notion of permeability. Ron Ashkenas, a co-author of “The Boundaryless Organization,” emphasizes that organizations still need boundaries; without some boundaries they would become “dis-organized.” But like the cell walls in the human body, those boundaries need to be sufficiently permeable so that information can easily flow back and forth in real time, without being distorted, fragmented or blocked.

In that context, Brown believes that learning to be boundaryless today is more important than ever for organizations, despite the fact that many of the boundaries that existed in 1995 no longer exist, and the technical ability to share information around the world will continue to evolve. What often holds organizations back, however, says Ashkenas, are the social and cultural patterns within organizations, not the technology.

“We have a tendency to protect information in different parts of the organization,” says Ashkenas. “Different functions, locations, and business units want their own version of ‘the truth’ rather than being held accountable to a common standard. This problem becomes even more acute across companies in a globally connected ecosystem and world. So despite our technological capabilities, we still end up with different systems, different information available at different times. We don’t have the common true north. The need to be more boundaryless is still there.  In fact it’s greater now, even though we have the capability to do it.”

Although the technical capabilities for Boundaryless Information Flow are largely here, the larger issue is getting people to agree and collaborate on how to do things. As Ashkenas explains, “It’s not just the technical challenges, it’s also cultural challenges, and the two have to go hand-in-hand.”

What’s more, collaboration is not just an issue of getting individuals to change, but of making changes at much larger levels on a much larger scale. Not only are boundaries now blurring within organizations, they’re blurring between institutions and across global ecosystems, which may include anything from apps, devices and technologies to companies, countries and cultures.

Ashkenas says that’s where standards, such as those being created by The Open Group, can help make a difference.  He says, “Standards used to come after technology. Now they need to come before the changes and weave together some of the ecosystem partners. I think that’s one of the exciting new horizons for The Open Group and its members—they can make a fundamental difference in the next few years.”

Brown agrees. He says that there are two major forces currently shaping how The Open Group will continue to pursue the Boundaryless Information Flow vision. One is the need for standards to address the changed perspective required of the IT function, from an “inside-out” to an “outside-in” model, fueled by a combination of boundaryless thinking and the convergence of social, mobile, Cloud, the Internet of Things and Big Data. The other is the need to move away from IT strategies that are merely derived from business strategies and react to the business agenda, which leads to redundancy and latency in delivering new solutions. Instead, IT must recognize that technology increasingly drives business opportunity and that IT must be managed as a business in its own right.

For example, twenty years ago a standard might lag behind a technology. Once companies no longer needed to compete on the technology, Brown says, they would standardize. With things like Open Platform 3.0™ and the need to manage the business of IT (IT4IT™) quickly changing the business landscape, now standards need to be at the forefront, along with technology development, so that companies have guidance on how to navigate a more boundaryless world while maintaining security and reliability.

“This is only going to get more and more exciting and more and more interesting,” Brown says.

How boundaryless are we?

Just how boundaryless are we today? Ashkenas says a lot has been accomplished in the past 20 years. Years ago, he says, people in most organizations would have thought that Boundaryless Information Flow was either not achievable, or they would have shrugged their shoulders and ignored the concept. Today there is strong acceptance of the need for it. In fact, a recent survey of The Open Group members found that 65 percent of those surveyed view boundarylessness as a positive thing within their organization. And while most organizations are making strides toward boundarylessness, only a minority, 15 percent, of those surveyed felt that Boundaryless Information Flow would be impossible to achieve in a large international organization such as theirs.

According to Brown and Ashkenas, the next horizon for many organizations will be to truly make information flow more accessible in real-time for all stakeholders. Ashkenas says in most organizations the information people need is not always available when people need it, whether this is due to different systems, cultural constraints or even time zones. The next step will be to provide managers real-time, anywhere access to all the information they need. IT can help play a bigger part in providing people a “one truth” view of information, he says.

Another critical—but potentially difficult—next step is to try to get people to agree on how to make boundarylessness work across ecosystems. Achieving this will be a challenge because ways of doing things—even standards development—will need to adapt to different cultures in order for them to ultimately work. What makes sense in the U.S. or Europe from a business standpoint may not make sense in China or India or Brazil, for example.

“What are the fundamentals that have to be accepted by everyone and where is there room for customization to local cultures?” asks Ashkenas. “Figuring out the difference between the two will be critical in the coming years.”

Brown and Ashkenas say that we can expect technical innovations to evolve at greater speed and with greater effectiveness in the coming years. This is another reason why Enterprise Architecture and standards development will be critical for helping organizations transform themselves and adapt as boundaries blur even more.

As Brown notes, the reason that the architecture discipline and standards such as TOGAF arose 20 years ago was exactly because organizations were beginning to move toward boundarylessness and they needed a way to figure out how to frame those environments and make them work together. Before then, when IT departments were implementing different siloed systems for different functions across organizations, they had no inkling that someday people might need to share that information across systems or departments, let alone organizations.

“It never crossed our minds that we’d need to add people or get information from disparate systems and put them together to make some sense. It wasn’t information, it was just data. The only way in any complex organization that you can start weaving this together and see how they join together is to have some sort of architecture, an overarching view of the enterprise. Complex organizations have that view and say ‘this is how that information comes together and this is how it’s going to be designed.’ We couldn’t have gotten there without Enterprise Architecture in large organizations,” he says.

In the end, the limitations to Boundaryless Information Flow will largely be organizational and cultural, not a question of technology. Technologically boundarylessness is largely achievable. The question for organizations, Brown says, is whether they’ll be able to adjust to the changes that technology brings.

“The limitations are in the organization and the culture. Can they make the change? Can they absorb it all? Can they adapt?” he says.


A Tale of Two IT Departments, or How Governance is Essential in the Hybrid Cloud and Bimodal IT Era

Transcript of an Open Group discussion/podcast on the role of Cloud Governance and Enterprise Architecture and how they work together in the era of increasingly fragmented IT.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Sponsor: The Open Group

Dana Gardner: Hello, and welcome to a special Thought Leadership Panel Discussion, coming to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator as we examine the role that Cloud Governance and Enterprise Architecture play in an era of increasingly fragmented IT.

Not only are IT organizations dealing with so-called shadow IT and myriad proof-of-concept affairs, there is now a strong rationale for fostering what Gartner calls Bimodal IT. There’s a strong case to be made for exploiting the strengths of several different flavors of IT, except that — at the same time — businesses are asking IT in total to be faster, better, and cheaper.

The topic before us today is how to allow for the benefits of Bimodal IT or even Multimodal IT, but without IT fragmentation leading to a fractured and even broken business.

Here to update us on the work of The Open Group Cloud Governance initiatives and working groups and to further explore the ways that companies can better manage and thrive with hybrid IT are our guests. We’re here today with Dr. Chris Harding, Director for Interoperability and Cloud Computing Forum Director at The Open Group. Welcome, Chris.

Dr. Chris Harding: Thank you, Dana. It’s great to be here.

Gardner: We’re also here with David Janson, Executive IT Architect and Business Solutions Professional with the IBM Industry Solutions Team for Central and Eastern Europe and a leading contributor to The Open Group Cloud Governance Project. Welcome, David.

David Janson: Thank you. Glad to be here.

Gardner: Lastly, we’re here with Nadhan, HP Distinguished Technologist and Cloud Advisor and Co-Chairman of The Open Group Cloud Governance Project. Welcome, Nadhan.

Nadhan: Thank you, Dana. It’s a pleasure to be here.

IT trends

Gardner: Before we get into an update on The Open Group Cloud Governance Initiatives, let me note that in many ways, over the past decades, IT has always been somewhat fragmented. Very few companies have been able to keep all their IT oars rowing in the same direction, if you will. But today things seem to be changing so rapidly that we have to acknowledge that some degree of disparate IT methods is necessary. We might even think of old IT and new IT, and this may even be desirable.

But what are the trends that are driving this need for a Multimodal IT? What’s accelerating the need for different types of IT, and how can we think about retaining a common governance, and even a frameworks-driven enterprise architecture umbrella, over these IT elements?

Nadhan: Basically, the change that we’re going through is really driven by the business. Business today has much more rapid access to the services that IT has traditionally provided. Business has a need to react to its own customers in a much more agile manner than they were traditionally used to.

We now have to react to demands where we’re talking days and weeks instead of months and years. Businesses today have a choice. Business units are no longer dependent on the traditional IT to avail themselves of the services provided. Instead, they can go out and use the services that are available external to the enterprise.

To a great extent, the advent of social media has also resulted in direct feedback on sentiment from external customers that businesses need to react to. That is actually changing the timelines. It is requiring IT to be delivered at the pace of business. And the very definition of IT is undergoing a change, where we need to have the right paradigm, the right technology, and the right solution for the right business function and therefore the right application.

Since the choices have increased with the new style of IT, the manner in which you pair them up, matching solutions with problems, has also changed significantly. With more choices come more such pairs, more decisions about which solution is right for which problem. That’s really what has caused the change that we’re going through.

A change of this magnitude requires governance that builds on the traditional governance that was always in play, requiring elements like cloud to have governance that is more specific to cloud solutions, across the whole lifecycle of cloud solution deployment.

Gardner: David, do you agree that this seems to be a natural evolution, based on business requirements, that we basically spin out different types of IT within the same organization to address some of these issues around agility? Or is this perhaps a bad thing, something that’s unnatural and should be avoided?

Janson: In many ways, this follows a repeating pattern we’ve seen with other kinds of transformations in business and IT. Not to diminish the specifics about what we’re looking at today, but I think there are some repeating patterns here.

There are new disruptive events that compete with the status quo. Those things that have been optimized, proven, and settled into sort of a consistent groove can compete with each other. Excitement about the new value that can be produced by new approaches generates momentum, and so far this actually sounds like a healthy state of vitality.

Good governance

However, one of the challenges is that the excitement potentially can lead to overlooking other important factors, and that’s where I think good governance practices can help.

For example, governance helps remind people about important durable principles that should be guiding their decisions, important considerations that we don’t want to forget or under-appreciate as we roll through stages of change and transformation.

At the same time, governance practices need to evolve so that they can adapt to new things that fit into the governance framework. What are those things, and how do we govern them? So governance needs to evolve at the same time.

There is a pattern here with some specific things that are new today, but there is a repeating pattern as well, something we can learn from.

Gardner: Chris Harding, is there a built-in capability with cloud governance that anticipates some of these issues around different styles or flavors or even velocity of IT innovation that can then allow for that innovation and experimentation, but then keep it all under the same umbrella with a common management and visibility?

Harding: There are a number of forces at play here, and there are three separate trends that we’ve seen, or at least that I have observed, in discussions with members within The Open Group that relate to this.

The first is one that Nadhan mentioned, the possibility of outsourcing IT. I remember a members’ meeting a few years ago, when one of our members, who worked for a company that was starting a cloud brokerage activity, happened to mention that two major clients were going to do away with their IT departments completely and just go for cloud brokerage. You could see the jaws drop around the table, particularly among the representatives from corporate IT departments.

Of course, cloud brokers haven’t taken over from corporate IT, but there has been that trend towards things moving out of the enterprise to bring in IT services from elsewhere.

That’s all very well to do that, but from a governance perspective, you may have an easy life if you outsource all of your IT to a broker somewhere, but if you fail to comply with regulations, the broker won’t go to jail; you will go to jail.

So you need to make sure that you retain control at the governance level over what is happening from the point of view of compliance. You probably also want to make sure that your architecture principles are followed and retain governance control to enable that to happen. That’s the first trend and the governance implication of it.

In response to that, a second trend that we see is that IT departments have reacted often by becoming quite like brokers themselves — providing services, maybe providing hybrid cloud services or private cloud services within the enterprise, or maybe sourcing cloud services from outside. So that’s a way that IT has moved in the past and maybe still is moving.

Third trend

The third trend that we’re seeing in some cases is that multi-discipline teams within line of business divisions, including both business people and technical people, address the business problems. This is the way that some companies are addressing the need to be on top of the technology in order to innovate at a business level. That is an interesting and, I think, a very healthy development.

So maybe, yes, we are seeing a bimodal splitting in IT between the traditional IT and the more flexible and agile IT, but maybe you could say that that second part belongs really in the line of business departments, rather than in the IT departments. That’s at least how I see it.

Nadhan: I’d like to build on a point that David made earlier about repeating patterns. I can relate to that very well within The Open Group, speaking about the Cloud Governance Project. Truth be told, as we continue to evolve the content in cloud governance, some of the seeding content actually came from the SOA Governance Project that The Open Group worked on a few years back. So the point David made about the repeating patterns resonates very well with that particular case in mind.

Gardner: So we’ve been through this before. When there is change and disruption, sometimes it’s required for a new version of methodologies and best practices to emerge, perhaps even associated with specific technologies. Then, over time, we see that folded back in to IT in general, or maybe it’s pushed back out into the business, as Chris alluded to.

My question, though, is how we make sure that these don’t become disruptive and negative influences over time. Maybe governance and enterprise architecture principles can prevent that. So is there something about the cloud governance, which I think really anticipates a hybrid model, particularly a cloud hybrid model, that would be germane and appropriate for a hybrid IT environment?

David Janson, is there a cloud governance benefit in managing hybrid IT?

Janson: There most definitely is. I tend to think that hybrid IT is probably where we’re headed. I don’t think this is avoidable. My editorial comment upon that is that’s an unavoidable direction we’re going in. Part of the reason I say that is I think there’s a repeating pattern here of new approaches, new ways of doing things, coming into the picture.

And then some balancing acts goes on, where people look at more traditional ways versus the new approaches people are talking about, and eventually they look at the strengths and weaknesses of both.

There’s going to be some disruption, but that’s not necessarily bad. That’s how we drive change and transformation. What we’re really talking about is making sure the amount of disruption is not so counterproductive that it actually moves things backward instead of forward.

I don’t mind a little bit of disruption. The governance processes that we’re talking about, good governance practices, have an overall life cycle that things move through. If there is a way to apply governance, as you work through that life cycle, at each point, you’re looking at the particular decision points and actions that are going to happen, and make sure that those decisions and actions are well-informed.

We sometimes say that governance helps us do the right things right. So governance helps people know what the right things are, and then the right way to do those things.

Bimodal IT

Also, we can measure how well people are actually adapting to those “right things” to do. What’s “right” can vary over time, because we have disruptive change. The Bimodal IT we’re talking about is one example.

Within a narrower time frame in the process lifecycle, there are points that evolve across that time frame that have particular decisions and actions. Governance makes sure that people are well informed, as they’re rolling through that, about important things they shouldn’t forget. It’s very easy to forget key things and optimize for only one factor, and governance helps people remember that.

Also, we check to see whether we’re getting the benefits that people expected, coming back around afterward to see whether we accomplished what we thought we would or got off in the wrong direction. So it’s a bit like a steering mechanism or a feedback mechanism, in that it helps keep the car on the road rather than going off onto the soft shoulder. Did we overlook something important? Governance is key to making this all successful.

Gardner: Let’s return to The Open Group’s upcoming conference on July 20 in Baltimore and also learn a bit more about what the Cloud Governance Project has been up to. I think that will help us better understand how cloud governance relates to these hybrid IT issues that we’ve been discussing.

Nadhan, you are the co-chairman of the Cloud Governance Project. Tell us about what to expect in Baltimore with the concepts of Boundaryless Information Flow™, and then also perhaps an update on what the Cloud Governance Project has been up to.

Nadhan: Absolutely, Dana. When the Cloud Governance Project started, the first question we challenged ourselves with was: what is it and why do we need it, especially given that SOA governance, architecture governance, IT governance, and enterprise governance in general are all out there with frameworks? We actually mapped out the landscape of different standards and then identified the niche, or domain, that cloud governance addresses.

After that, we went through and identified the top five principles that matter for cloud governance to be done right. One of the obvious ones is that cloud is a business decision, and the governance exercise should keep in mind whether going to the cloud is the right business decision, rather than just jumping on the bandwagon. Those are some examples of the foundational principles that drive how cloud governance must be established and exercised.

Subsequent to that, we defined a lifecycle for cloud governance, and we have gone through the process of detailing it by identifying and decoupling the governance process from the process that is actually governed.

So there is this concept of process pairs, where we’ve identified key processes, key process pairs, whether it be planning, architecture, reusing a cloud service, subscribing to it, unsubscribing, retiring, and so on. These are some of the defining milestones in the life cycle.

We’ve actually put together a template for identifying and detailing these process pairs. The template has an outline of the process that is being governed, the key phases that the governance goes through, the desirable business outcomes that we would expect from cloud governance, as well as the associated metrics and the key roles.
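
To make that template concrete, here is a minimal sketch of how one such process pair might be captured as structured data. The field names follow the template elements Nadhan lists; the governed process, phases, outcomes, metrics, and roles shown are illustrative assumptions, not content from the actual Open Group template.

```python
# A minimal sketch of one cloud governance "process pair" entry as
# structured data. Field names mirror the template elements described
# above; the specific values are invented for illustration.

process_pair = {
    "governed_process": "Subscribe to a cloud service",
    "governance_phases": ["plan", "approve", "monitor", "review"],
    "desired_business_outcomes": [
        "Service subscription aligns with business strategy",
        "Costs remain within the approved budget",
    ],
    "metrics": {
        "time_to_approve_days": 10,        # target approval turnaround
        "monthly_spend_variance_pct": 5,   # tolerated budget deviation
    },
    "key_roles": ["business owner", "enterprise architect", "cloud broker"],
}

def outcome_summary(pair: dict) -> str:
    """Render a one-line summary of a process pair for a governance report."""
    return (f"{pair['governed_process']}: "
            f"{len(pair['governance_phases'])} governance phases, "
            f"{len(pair['metrics'])} tracked metrics")

print(outcome_summary(process_pair))
```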

Real-life solution

The Cloud Governance Framework actually details each one. Where we are right now is working through a real-life solution. It could be a hypothetical or an actual business scenario, but the idea is to help the reader digest the concepts in the context of a scenario where such governance is exercised. That’s where we are on the Cloud Governance Project.

Let me take the opportunity to invite everyone to be part of the project and help continue it by subscribing to the cloud governance mailing list within The Open Group.

Gardner: Thank you. Chris Harding, just for the benefit of our readers and listeners who might not be that familiar with The Open Group, perhaps you could give us a very quick overview of The Open Group — its mission, its charter, what we could expect at the Baltimore conference, and why people should get involved, either directly by attending, or following it on social media or the other avenues that The Open Group provides on its website?

Harding: Thank you, Dana. The Open Group is a vendor-neutral consortium whose vision is Boundaryless Information Flow. That is to say, the idea that information should be available to people within an enterprise, or indeed within an ecosystem of enterprises, as and when needed, not locked away in silos.

We hold our main conferences quarterly, four times a year, and also regional conferences in various parts of the world in between, where we discuss a variety of topics.

In fact, the main topics for the conference that we will be holding in July in Baltimore are enterprise architecture and risk and security. Architecture and security are two of the key things for which The Open Group is known. Enterprise Architecture, particularly the TOGAF® framework, is perhaps what The Open Group is best known for.

We’ve been active in a number of other areas, and risk and security is one. We also have started a new vertical activity on healthcare, and there will be a track on that at the Baltimore conference.

There will be tracks on other topics too, including four sessions on Open Platform 3.0™. Open Platform 3.0 is The Open Group initiative to address how enterprises can gain value from new technologies, including cloud computing, social computing, mobile computing, big data analysis, and the Internet of Things.

We’ll have a number of presentations related to that. These will include, in fact, a perspective on cloud governance, although that will not necessarily reflect what is happening in the Cloud Governance Project. Until an Open Group standard is published, there is no official Open Group position on the topic, and members will present their views at conferences. So we’re including a presentation on that.

Lifecycle governance

There is also a presentation on another interesting governance topic, which is on Information Lifecycle Governance. We have a panel session on the business context for Open Platform 3.0 and a number of other presentations on particular topics, for example, relating to the new technologies that Open Platform 3.0 will help enterprises to use.

There’s always a lot going on at Open Group conferences, and that’s a brief flavor of what will happen at this one.

Gardner: Thank you. And I’d just add that there is more available at The Open Group website, opengroup.org.

Going back to one thing you mentioned about publishing a standard, and I’ll throw this out to any of our guests today: is there a roadmap that we could look to in order to anticipate the next steps or milestones in the Cloud Governance Project? When would such a standard emerge and when might we expect it?

Nadhan: As I said earlier, the next step is to identify the business scenario and apply it. I’m expecting, with the right level of participation, that it will take another quarter, after which it would go through the internal review with The Open Group and the company reviews for the publication of the standard. Assuming we have that in another quarter, Chris, could you please weigh in on what it usually takes, on average, for those reviews before it gets published?

Harding: You could add on another quarter. It shouldn’t actually take that long, but we do have a thorough review process. All members of The Open Group are invited to participate. The document is posted for comment for, I would think, four weeks, after which we review the comments and decide what action actually needs to be taken.

Certainly, it could take only two months to complete the overall publication of the standard from the draft being completed, but it’s safer to say about a quarter.

Gardner: So a really important working document could be available in the second half of 2015. Let’s now go back to why a cloud governance document and approach is important when we consider the implications of Bimodal or Multimodal IT.

One of the things that Gartner says is that Bimodal IT projects require new project management styles. They didn’t say project management products. They didn’t say downloads or services from a cloud provider. We’re talking about styles.

So it seems to me that, in order to prevent the good aspects of Bimodal IT from being overridden by the negative impacts of chaos and the lack of coordination, we’re talking not about a product or a download, but about something that a working group and a standards approach like the Cloud Governance Project can accommodate.

David, why is it that you can’t buy this in a box or download it as a product? What is it that we need to look at in terms of governance across Bimodal IT, and why is that appropriate for a style? Maybe the IT people need to think differently, rather than trying to accomplish this through technology alone?

First question

Janson: When I think of anything like a tool or a piece of software, the first question I tend to have is what is that helping me do, because the tool itself generally is not the be-all and end-all of this. What process is this going to help me carry out?

So, before I would think about tools, I want to step back and think about what changes to project-related processes the new approaches require. Then, secondly, think about how tools can help me speed up, automate, or make those processes a little more reliable.

It’s easy to think of a tool that has some process-related aspects embedded in it as a kind of magic wand that’s going to automatically make everything work well, but it’s the processes that the tool enables that are really the important decision. The tools then simply help carry those processes out more effectively, more reliably, and more consistently.

We’ve always seen an evolution in the processes we use in developing solutions, as well as in the tools. New technology requires tools to adapt. As the processes we use get more agile, we want to be more incremental and see rapid turnarounds in how we’re developing things. Tools need to evolve with that.

But from a governance standpoint, I’d really start out by challenging the idea of the change itself: if we’re going to make a change, how do we know that it’s really an appropriate one? How do we differentiate this change from just reinventing the wheel? Is this an innovation that really makes a difference and isn’t just change for the sake of change?

Governance helps people challenge their thinking and make sure that it’s actually a worthwhile step to take to make those adaptations in project-related processes.

Once you’ve settled on some decisions about evolving those processes, then you can start looking for tools that help you automate, accelerate, and make those processes consistent and more reliable.

I tend to start with the process and think of the technology second, rather than the other way around. Governance can help remind people of the principles we want to think about: are you putting the cart before the horse? It helps people challenge their thinking a little bit to be sure they’re really going in the right direction.

Gardner: Of course, a lot of what you just mentioned pertains to enterprise architecture generally as well.

Nadhan, when we think about Bimodal or Multimodal IT, this to me is going to vary greatly from company to company, given their legacy, their existing style, and their rate of adoption of cloud or other software as a service (SaaS), agile, or DevOps types of methods. So this isn’t something that’s going to be cookie-cutter. It really needs to be looked at company by company and timeline by timeline.

Is this a vehicle for professional services, for management consulting, more than for IT and product? What is the relationship between cloud governance, Bimodal IT, and professional services?

Delineating systems

Nadhan: It’s a great question, Dana. Let me characterize Bimodal IT slightly differently before answering it. Another way to look at Bimodal IT, where we are today, is by delineating systems of record and systems of engagement.

In traditional IT, typically, we’re looking at the systems of record. Systems of engagement, with social media and so on, are the live interactions; those are the continuously evolving, growing-by-the-second systems, which results in the need for big data, security, and definitely the cloud.

The coexistence of both of these paradigms requires the right move to the cloud for the right reason. So even for the systems of record, some, if not most, do need to be transformed to the cloud, but that doesn’t mean all systems of engagement eventually get transformed to the cloud.

There are good reasons why you may actually want to leave certain systems of engagement the way they are. The art really is in combining the historical data that the systems of record have with the continual influx of data that comes through the live channels of social media, and then using the right level of predictive analytics to extract information.

I said a lot in there just to characterize Bimodal IT slightly differently, making the point that what really is at play, Dana, is a new style of thinking. It’s a new style of addressing problems that have been around for a while.

But a new way to address the same problems, new solutions, and a new way of coming up with solution models will address the business problems at hand. That requires an external perspective. That requires service providers and consulting professionals who have worked with multiple customers, perhaps other customers in the same industry and in other industries, with a healthy dose of innovation.

That’s where this is a new opportunity for professional services to work with the CxOs, the enterprise architects, and the CIOs to exercise the right business decision with the right level of governance.

Because of the challenges with the coexistence of both systems of record and systems of engagement and harvesting the right information to make the right business decision, there is a significant opportunity for consulting services to be provided to enterprises today.

Drilling down

Gardner: Before we close off I wanted to just drill down on one thing, Nadhan, that you brought up, which is that ability to measure and know and then analyze and compare.

One of the things that we’ve seen with IT developing over the past several years as well is that the big data capabilities have been applied to all the information coming out of IT systems so that we can develop a steady state and understand those systems of record, how they are performing, and compare and contrast in ways that we couldn’t have before.

So for our last topic today, David Janson, how important is that measuring capability in a governance context for organizations that want to pursue Bimodal IT but keep it governed and keep it from spinning out of control? What should they be thinking about putting in place in terms of big data, analytics, measurement, and visibility capabilities?

Janson: That’s a really good question. One aspect of this is that, when I talk with people about the ideas around governance, it’s not unusual that the first idea people have about governance is the compliance or policing aspect it can play. That sounds like interference, sand in the gears, but it really should be the other way around.

A governance framework should actually make it very clear how people should be doing things, what’s expected as the result at the end, and how things are checked and measured across time at early stages and later stages, so that people are very clear about how things are carried out and what they are expected to do. So, if someone does use a governance-compliance process to see if things are working right, there is no surprise, there is no slowdown. They actually know how to quickly move through that.

Good governance has communicated that well enough, so that people should actually move faster rather than slower. In other words, there should be no surprises.

Measuring things is very important, because if you haven’t established the objectives that you’re after and some metrics to help you determine whether you’re meeting them, then governance is kind of an empty suit, so to speak. You express some ideas that you want to achieve, but you have no way of answering the question of how we know whether it is doing what we want it to do. Metrics are very important around this.

We capture metrics within processes. Then, for the end result, is it actually producing the effects people want? That’s pretty important.

One of the things that we have built into the Cloud Governance Framework is some idea about what are the outcomes and the metrics that each of these process pairs should have in mind. It helps to answer the question, how do we know? How do we know if something is doing what we expect? That’s very, very essential.

Gardner: I’m afraid we’ll have to leave it there. We’ve been examining the role of cloud governance and enterprise architecture and how they work together in the era of increasingly fragmented IT. And we’ve seen how The Open Group Cloud Governance initiatives and working groups can help deliver the benefits of Bimodal IT without IT fragmentation leading to fractured or broken business processes around technology and innovation.

This special Thought Leadership Panel Discussion comes to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. And it’s not too late to register on The Open Group’s website or to follow the proceedings online and via social media such as Twitter, LinkedIn and Facebook.

So, thank you to our guests today. We’ve been joined by Dr. Chris Harding, Director for Interoperability and Cloud Computing Forum Director at The Open Group; David Janson, Executive IT Architect and Business Solutions Professional with the IBM Industry Solutions Team for Central and Eastern Europe and a leading contributor to The Open Group Cloud Governance Project; and Nadhan, HP Distinguished Technologist and Cloud Advisor and Co-Chairman of The Open Group Cloud Governance Project.

And a big thank you, too, to our audience for joining this special Open Group-sponsored discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this thought leadership panel discussion series. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android.

Sponsor: The Open Group

Transcript of an Open Group discussion/podcast on the role of Cloud Governance and Enterprise Architecture and how they work together in the era of increasingly fragmented IT. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Join the conversation! @theopengroup #ogchat #ogBWI

Comments Off on A Tale of Two IT Departments, or How Governance is Essential in the Hybrid Cloud and Bimodal IT Era

Filed under Accreditations, Boundaryless Information Flow™, Cloud, Cloud Governance, Interoperability, IoT, The Open Group Baltimore 2015

Cybersecurity Standards: The Open Group Explores Security and Ways to Assure Safer Supply Chains

Following is a transcript of part of the proceedings from The Open Group San Diego 2015 in February.

The following presentations and panel discussion, which together examine the need and outlook for Cybersecurity standards amid supply chains, are provided by moderator Dave Lounsbury, Chief Technology Officer, The Open Group; Mary Ann Davidson, Chief Security Officer, Oracle; Dr. Ron Ross, Fellow of the National Institute of Standards and Technology (NIST); and Jim Hietala, Vice President of Security, The Open Group.

Here are some excerpts:

Dave Lounsbury: Mary Ann Davidson is responsible for Oracle Software Security Assurance and represents Oracle on the Board of Directors for the Information Technology Information Sharing and Analysis Center, and on the international Board of the ISSA.

Dr. Ron Ross leads the Federal Information Security Management Act Implementation Project. It sounds like a big job to fulfill, developing the security standards and guidelines for the federal government.

This session is going to look at the cybersecurity and supply chain landscape from a standards perspective. So Ron and Mary Ann, thank you very much.

Ron Ross: All of us are part of the technology explosion and revolution that we have been experiencing for the last couple of decades.

I would like to have you leave today with a couple of major points, at least from my presentation, things that we have observed in cybersecurity for the last 25 years: where we are today and where I think we might need to go in the future. There is no right or wrong answer to this problem of cybersecurity. It’s probably one of the most difficult and challenging sets of problems we could ever experience.

In our great country, we work on what I call the essential partnership. It’s a combination of government, industry, and academia all working together. We have the greatest technology producers, not just in this country, but around the world, who are producing some fantastic things to which we are all “addicted.” I think we have an addiction to the technology.

Some of the problems we’re going to experience going forward in cybersecurity aren’t just going to be technology problems. They’re going to be cultural problems and organizational problems. The key issue is how we organize ourselves, what our risk tolerance is, how we are going to be able to accomplish all of our critical missions and business operations that Dawn talked about this morning, and do so in a world that’s fairly dangerous. We have to protect ourselves.

Movie App

I think I can sum it up. I was at a movie. I don’t go to movies very often anymore, but about a month ago, I went to a movie. I was sitting there waiting for the main movie to start, and they were going through all the coming attractions. Then they came on the PA and they said that there is an app you can download. I’m not sure you have ever seen this before, but it tells you for that particular movie when is the optimal time to go to the restroom during the movie.

I bring this up because that’s a metaphor for where we are today. We are consumed. There are great companies out there, producing great technologies. We’re buying it up faster than you can shake a stick at it, and we are developing the most complicated IT infrastructure ever.

So when I look at this problem, I look at it from a scientist’s point of view, an engineering point of view. Knowing what I know about what it takes, I don’t even use the word “secure” anymore, because I don’t think we can ever get there with the current complexity. The goal is to build the most secure systems we can and be able to manage risk in the world that we live in.

In the Army, we used to have a saying. You go to war with the army that you have, not the army that you want. We’ve heard about all the technology advances, and we’re going to be buying stuff, commercial stuff, and we’re going to have to put it together into systems. Whether it’s the Internet of Things (IoT) or cyber-physical convergence, it all goes back to some fairly simple things.

The IoT and all this stuff that we’re talking about today really gets back to computers. That’s the common denominator. They’re everywhere. This morning, we talked about your automobile having more compute power than Apollo 11. In your toaster, your refrigerator, your building, the control of the temperature, industrial control systems in power plants, manufacturing plants, financial institutions, the common denominator is the computer, driven by firmware and software.

When you look at the complexity of the things that we’re building today, we’ve gone past the time when we can actually understand what we have and how to secure it.

That’s one of the things that we’re going to do at NIST this year and beyond. We’ve been working in the FISMA world forever, it seems, and we have a whole set of standards, and that’s the theme of today: how can standards help you build a more secure enterprise?

The answer is that we have tons of standards out there and we have lots of stuff, whether it’s on the federal side with 800-53 or the Risk Management Framework, or all the great things that are going on in the standards world with The Open Group, or ISO, pick your favorite standard.

The real question is how we use those standards effectively to change the current outlook and what we are experiencing today because of this complexity. The adversary has a significant advantage in this world because of complexity. They really can pick the time, the place, and the type of attack, because the attack surface is so large when you talk about not just the individual products.

We have many great companies, just in this country and around the world, that are doing a lot to make those products more secure. But then they get into the engineering process and put them together in a system, and that really is an unsolved problem. We call it the Composability Problem. I can have a trusted product here and one here, but what is the combination of those two when you put them together in the systems context? We haven’t solved that problem yet, and it’s getting more complicated every day.

Continuous Monitoring

For the hard problems, we in the federal government do a lot of stuff in continuous monitoring. We’re going around counting our boxes and we are patching stuff and we are configuring our components. That’s loosely called cyber hygiene. It’s very important to be able to do all that and do it quickly and efficiently to make your systems as secure as they need to be.
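
To make that hygiene loop concrete, here is a minimal sketch of the kind of check that counting boxes and patching implies: flag inventory entries running below a required patch level. The hosts, package names, and required versions are invented for illustration; real continuous monitoring tooling does far more than this.

```python
# A minimal sketch of a "cyber hygiene" check: enumerate assets, then
# flag components running below a required patch level. Inventory and
# required versions are illustrative assumptions.

REQUIRED_VERSIONS = {"openssl": (3, 0, 14), "bash": (5, 2, 0)}

inventory = [
    {"host": "web-01", "package": "openssl", "version": (3, 0, 8)},
    {"host": "web-01", "package": "bash", "version": (5, 2, 15)},
    {"host": "db-01", "package": "openssl", "version": (3, 0, 14)},
]

def unpatched(items):
    """Yield inventory entries whose version is below the required minimum."""
    for item in items:
        required = REQUIRED_VERSIONS.get(item["package"])
        if required and item["version"] < required:
            yield item

for item in unpatched(inventory):
    print(f"{item['host']}: {item['package']} {item['version']} "
          f"< required {REQUIRED_VERSIONS[item['package']]}")
```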

But even with the security controls in our control catalog, 800-53, when you get into the technical controls (I’m talking about access control mechanisms, identification, authentication, encryption, and audit), those things are buried in the hardware, the software, the firmware, and the applications.

Most of our federal customers can’t even see those. So when I ask them if they have all their access controls in place, they can nod their head yes, but they can’t really prove that in a meaningful way.

So we have to rely on industry to make sure those mechanisms, those functions, are employed within the component products that we then will put together using some engineering process.

This is the below-the-waterline problem I talk about. We’re in some kind of digital denial today, because below the water line, most consumers are looking at their smartphones, their tablets, and all their apps (that’s why I used that movie example), and they’re not really thinking about those vulnerabilities, because they can’t see them, until they affect them personally.

I had to get three new credit cards last year. I shop at Home Depot and Target, and JPMorgan Chase is our federal credit card. That’s not a pain point for me because I’m indemnified. Even if there are fraudulent charges, I don’t get hit for those.

If your identity is stolen, that’s a personal pain point. We haven’t reached that national pain point yet. We talk about all of the security stuff that we do, and we do a lot of it, but if you really want to effect change, you’re going to start to hear more at this conference about assurance, trustworthiness, and resiliency. That’s the world that we want to build, and we are not there today.

That’s the essence of where I am hoping we are going to go. It’s these three areas: software assurance, systems security engineering, and supply-chain risk management.

My colleague Jon Boyens is here today and he is the author, along with a very talented team of coauthors, of the NIST 800-161 document. That’s the supply chain risk document.

It’s going to work hand-in-hand with another publication that we’re still working on, the 800-160 document. We are taking an IEEE and ISO standard, 15288, and we’re trying to infuse security into it. They are coming out with an update of that standard this year. We’re trying to infuse security into every step of the lifecycle.

Wrong Reasons

The reason why we are not having a lot of success on the cybersecurity front today is that security ends up being addressed either too late or by the wrong people for the wrong reasons.

I’ll give you one example. In the federal government, we have a huge catalog of security controls, and they are allocated into different baselines: low, moderate, and high. So you will pick a baseline, you will tailor it, and you’ll come to the system owner or the authorizing official and say, “These are all the controls that NIST says we have to do.” Well, the mission or business owner was never involved in that discussion.

One of the things we are going to do with the new document is focus on the software and systems engineering process, starting with the stakeholders and moving all the way through requirements analysis, definition, design, development, implementation, operation, and sustainment, all the way to disposal. Critical things are going to happen at every one of those places in the lifecycle.

The beauty of that process is that you involve the stakeholders early. So when those security controls are actually selected they can be traced back to a specific security requirement, which is part of a larger set of requirements that support that mission or business operation, and now you have the stakeholders involved in the process.
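
As a minimal sketch of that traceability idea: each selected control points back to a stakeholder requirement, which in turn supports a mission or business operation. SC-7 (Boundary Protection) and AC-2 (Account Management) are real NIST SP 800-53 control identifiers; the requirement IDs, text, and missions here are invented for illustration.

```python
# A minimal sketch of control-to-requirement traceability: every
# selected security control maps back to a stakeholder requirement,
# which in turn supports a mission. Requirement content is invented.

requirements = {
    "SR-1": {"text": "Limit external connections to the payment network",
             "mission": "Process customer payments"},
    "SR-2": {"text": "Remove departed employees' access within 24 hours",
             "mission": "Protect customer data"},
}

selected_controls = {
    "SC-7": "SR-1",   # Boundary Protection -> SR-1
    "AC-2": "SR-2",   # Account Management  -> SR-2
}

# Trace every control back to the stakeholder requirement it supports.
for control, req_id in selected_controls.items():
    req = requirements[req_id]
    print(f"{control} <- {req_id}: {req['text']} (mission: {req['mission']})")
```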

Up to this point in time, security has operated in its own vacuum. It’s in the little office down the hall, and we go down there whenever there’s a problem. But unless and until security gets integrated and we disappear as our own discipline, we need to be part of the Enterprise Architecture, whether it’s TOGAF® or whatever architecture construct you are following; the systems engineering process; the system development lifecycle; and acquisition and procurement.

Unless we have our stakeholders at those tables to influence things, we are going to continue to deploy systems that are largely indefensible, not against all cyber attacks, but against the high-end attacks.

We have to do a better job getting to the C-suite, and I tried to capture the five essential areas that this discussion has to revolve around. The acronym is TACIT, and it just happens to be a happy coincidence that it fit into an acronym. It’s basically looking at the threat, how you configure your assets, and how you categorize your assets with regard to criticality.

How complex is the system you’re building? Are you managing that complexity and trying to reduce it? Are you integrating security across the entire set of business practices within the organization? Then there is the last component, which really ties into The Open Group and the things you’re doing here with all the projects described in the first session: the trustworthiness piece.

Are we building products and systems that are, number one, more penetration-resistant to cyber attacks? And number two, since we know we can’t stop all attacks, because we can never reduce complexity to where we thought we could two or three decades ago, are we building the essential resiliency into the system? Even when the adversary comes to the boundary and the malware starts to work, how far does it spread, and what can it do?

That’s the key question. You try to limit the time on target for the adversary, and that can be done very, very easily with good architectural and good engineering solutions. That’s my message for 2015 and beyond, at least from a lot of things at NIST. We’re going to start focusing on the architecture and the engineering: how to really affect things at the ground level.

Processes are Important

Now, we will always have the people, the processes, and the technologies, this whole ecosystem that we have to deal with, and you’re always going to have to worry about the sys admin who goes bad and dumps all the stuff that you don’t want dumped on the Internet. But that’s part of the system process. Processes are very important because they give us structure, discipline, and the ability to communicate with our partners.

I was talking to Rob Martin from MITRE. He’s working on a lot of important projects there with CWEs and CVEs. Those give you the ability to communicate a level of trustworthiness and assurance so that other people can have that dialogue, because without that, we’re not going to be communicating with each other. We’re not going to trust each other, and having that common understanding is critical. Frameworks provide that common dialogue of security controls and a common process: how we build things, and what level of risk we are willing to accept in that whole process.

These slides, and they’ll be available, go very briefly into the five areas. Understanding the modern threat today is critical because, even if you don’t have access to classified threat data, there’s a lot of great data out there with Symantec and Verizon reports, and there’s open-source threat information available.

If you haven’t had a chance to do that, I know the folks who work on the high-assurance stuff in The Open Group RT&ES look at that material a lot, because they’re building a capability that is intended to stop some of those types of threats.

The other thing about assets is that we don’t do a very good job of criticality analysis. In other words, most of our systems are running, processing, storing, and transmitting data and we’re not segregating the critical data into its own domain where necessary.

I know that’s hard to do sometimes. People say, “I’ve got to have all this stuff ready to go 24×7.” But when you look at some of the really bad breaches that we have had over the last several years, establishing a domain for critical data, where that domain can be less complex, means you can better defend it, and then you can invest more resources into defending the things that are most critical.

I used a very simple example of a safe deposit box. I can’t get all my stuff into the safe deposit box, so I have to make decisions. I put important papers in there, maybe a coin collection, whatever. I have locks on the front door of my house, but they’re not strong enough to stop some of those bad guys out there. So I make those decisions. I put it in the bank, and it goes in a vault. It’s a pain in the butt to go down there and get the stuff out, but it gives me more assurance, greater trustworthiness. That’s an example of the decisions we have to be able to make.

Complexity is something that’s going to be very difficult to address because of our penchant for bringing in new technologies. Make no mistake about it, these are great technologies. They are compelling. They are making us more efficient. They are allowing us to do things we never imagined, like finding out the optimal time to go to the restroom during a movie. I mean, who could have imagined we could do that a decade ago?

But for every one of our customers out there, the kinds of things we’re talking about fly below their radar. When you download 100 apps on your smartphone, people in general, even the good folks in cybersecurity, have no idea where those apps are coming from, what their pedigree is, whether they have been tested at all, whether they have been evaluated, or whether they are running on a trusted operating system.

Ultimately, that’s what this business is all about, and that’s what 800-161 is all about. It’s about a lifecycle of the entire stack from applications, to middleware, to operating systems, to firmware, to integrated circuits, to include the supply chain.

The adversary is all over that stack. They now figure out how to compromise our firmware, so we have to come up with firmware integrity controls in our control catalog, and that’s the world we live in today.
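
As one small illustration of what a firmware integrity control can involve, here is a minimal sketch that hashes a firmware image and compares it against a known-good digest in constant time. The file path and expected digest are placeholders; real firmware integrity mechanisms (signed updates, measured boot) are considerably richer than a hash check.

```python
# A minimal sketch of a firmware integrity check: hash the image and
# compare against a digest recorded at provisioning time. Path and
# digest are placeholder assumptions.

import hashlib
import hmac

KNOWN_GOOD_SHA256 = "9f2b..."  # placeholder: digest recorded at provisioning

def firmware_digest(path: str) -> str:
    """Compute the SHA-256 digest of a firmware image, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def firmware_is_intact(path: str) -> bool:
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(firmware_digest(path), KNOWN_GOOD_SHA256)

if __name__ == "__main__":
    print(firmware_is_intact("/lib/firmware/device.bin"))  # placeholder path
```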

Managing Complexity

I was smiling this morning when we talked about the DNI, the Director of National Intelligence, building their cloud, and whether that’s going to go to the public cloud or not. I think Dawn is probably right; you probably won’t see that going to the public cloud anytime soon, but cloud computing gives us an opportunity to manage complexity. You can figure out what you want to send to the public cloud.

The public cloud providers do a good job through the FedRAMP program of deploying controls, and they’ve got a business model in which it’s important to make sure they protect their customers’ assets. So that’s built into their business model, and they do a lot of great things to try to protect that information.

Then, for whatever stays behind in your enterprise, you can start to employ some of the architectural constructs that you’ll see here at this conference, some of the security engineering constructs that we’re going to talk about in 800-160, and you can better defend what stays behind within your organization.

So cloud is a way to reduce that complexity. Enterprise Architecture, TOGAF®, an Open Group standard, all of those architectural things allow you to bring discipline and structure to thinking about what you’re building: how to protect it, how much it’s going to cost, and whether it’s worth it. That is the essence of good security. It’s not about running around with a barrel full of security controls or ISO 27000, saying, hey, you’ve got to do all this stuff or the sky is going to fall; those days are over.

Integration we talked about. This is also hard. We are working with stovepipes today. Enterprise Architects typically don’t talk to security people. Acquisition folks, in most cases, don’t talk to security people.

I see it every day. You see RFPs go out with a whole long list of requirements, and then, when it comes to security, they say the system or the product being bought must be FISMA compliant. They know that’s a law and they know they have to do that, but they really don’t give the industry or the potential contractors any specificity as to what they need to do to bring that product or system to the state where it needs to be.

And so it’s all about expectations. I believe for our industry, whether it’s here or overseas, wherever these great companies operate, the one thing we can be sure of is that they want to please their customers. So maybe the message I’m going to send every day is that we have to be more informed consumers. We have to ask for things that we know we need.

It’s like going back to the automobile. When I first started driving a long time ago, 40 years ago, cars just had seatbelts. There were no airbags and no steel-reinforced doors. Then, at some point, you could actually buy an airbag as an option. Fast-forward to today, and every car has airbags, seatbelts, and steel-reinforced doors. They come as part of the basic product. We don’t have to ask for them, but as consumers we know they’re there, and they’re important to us.

We have to start to look at the IT business in the same way, just like when we cross a bridge or fly in an airplane. All of you who flew here in airplanes and came across bridges had confidence in those structures. Why? Because they are built with good scientific and engineering practices.

So least functionality and least privilege are kind of foundational concepts in our world of cybersecurity. But you really can’t look at a smartphone or a tablet and talk about least functionality anymore, at least not if you are running that movie app and want to have all of that capability.

The last point about trustworthiness is that we have four decades of best practices in trusted systems development. It failed 30 years ago because we had the vision back then of trusted operating systems, but the technology and the development far outstripped our ability to actually achieve that.

Increasingly Difficult

We talked about a kernel-based operating system having 2,000, 3,000, 4,000, 5,000 lines of code and being highly trusted. Well, those concepts are still in place. It’s just that now the operating systems are 50 million lines of code, and so it becomes increasingly difficult.

And this is the key thing. As a society, we’re going to have to figure out, going forward, with all this great technology, what kind of world do we want to have for ourselves and our grandchildren? Because with all this technology, as good as it is, if we can’t provide a basis of security and privacy that customers can feel comfortable with, then at some point this party is going to stop.

I don’t know when that time is going to come, but I call it the national pain point in this digital denial. We will come to that steady state. We just haven’t had enough time yet to get to that balance point, but I’m sure we will.

I talked about the essential partnership, but I don’t think we can solve any problem without a collaborative approach, and that’s why I use the essential partnership: government, industry, and academia.

Certainly all of the innovation, or most of the innovation, comes from our great industry. Academia is critical, because the companies like Oracle or Microsoft want to hire students who have been educated in what I call the STEM disciplines: Science, Technology, Engineering — whether it’s “double e” or computer science — and Mathematics. They need those folks to be able to build the kind of products that have the capabilities, function-wise, and also are trusted.

And government plays some role — maybe some leadership, maybe a bully pulpit, cheerleading where we can — bringing things together. But the bottom line is that we have to work together, and I believe that we’ll do that. And when that happens I think all of us will be able to sit in that movie and fire up that app about the restroom and feel good that it’s secure.

Mary Ann Davidson: I guess I’m preaching to the converted, if I can use a religious example without offending somebody. One of the questions you asked is, why do we even have standards in this area? Of course, some of them exist for technical reasons. Crypto, it turns out, is easy for even very smart people to get wrong. Unfortunately, we have had reason to find that out.

So there is technical correctness. Another reason would be interoperability, to get things to work together in a more secure manner. I’ve worked in this industry long enough to remember the first SSL implementation, woo-hoo, and then it turned out 40 bits wasn’t really 40 bits, because it wasn’t random enough, shall we say.
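
The pitfall she alludes to is worth making concrete: key material drawn from a non-cryptographic PRNG can carry far less effective entropy than its bit length suggests. A minimal Python sketch, with illustrative key sizes:

```python
# Key material from a non-cryptographic PRNG can be far weaker than its
# bit length suggests. In Python, `random` is a seedable Mersenne
# Twister, unsuitable for keys; `secrets` draws from the OS CSPRNG.

import random
import secrets

# Wrong: reproducible given the seed, so a "128-bit" key may be
# guessable from a small search over likely seeds (e.g., timestamps).
random.seed(1424563200)  # an attacker can enumerate plausible seeds
weak_key = random.getrandbits(128).to_bytes(16, "big")

# Better: 16 bytes from the operating system's CSPRNG.
strong_key = secrets.token_bytes(16)

print(weak_key.hex(), strong_key.hex())
```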

Trustworthiness. ISO has a standard for that, the Common Criteria. We talk about what it means to have secure software, what types of threats it addresses, and how you prove that it does what you say it does. There are standards for that, which helps. It helps everybody. It certainly helps buyers understand a little bit more about what they’re getting.

No Best Practices

And last, but not least, and the reason it’s in quotes, “best practices,” is because there actually are no best practices. Why do I say that? I am seeing furrowed brows back there. First of all, lawyers don’t like them in contracts, because then, if you are not doing the exact thing, you get sued.

There are good practices and there are worst practices. There typically isn’t one thing that everyone can do exactly the same way that’s going to be the best practice. So that’s why that’s in quotation marks.

Generally speaking, I do think standards in general can be a force for good in the universe, particularly in cybersecurity, but they are not always a force for good, depending on other factors.

And what is the ecosystem? Well, we have a lot of people. We have standards makers, the people who work on them. Some of them are people who review things. NIST, for example, is very good, which I appreciate, about putting drafts out and taking comments, as opposed to saying, “Here it is, take it or leave it.” That’s actually a very constructive dialogue, which I believe a lot of people appreciate. I know that I do.

Sometimes there are mandators. You’ll get an RFP that says, “Verily, thou shalt comply with this, lest thee be an infidel in the security realm.” And that can be positive. It can be the leading edge of getting people to do something good that, in many cases, they should do anyway.

There are implementers, who have to take this, decipher it, and figure out why they are doing it, and people who make sure that you actually did what you said you were going to do.

And last, but not least, there are weaponizers. What do I mean by that? We all know who they are. They are people who will try to develop a standard and then get it mandated. Actually, it isn’t a standard. It’s something they came up with, which might be very good, but it’s handing them regulatory capture.

And we need to be aware of those people. I like the Oracle database. I have to say that, right? There are a lot of other good databases out there. If I went in and said, purely objectively speaking, everybody should standardize on the Oracle database, because it’s the most secure. Well, nice work if I can get it.

Is that in everybody else’s interest? Probably not. You get better products in something that is not a monopoly market. Competition is good.

So I have an MBA, or had one in a prior life, and they used to talk in the marketing class about the three Ps of marketing. Don’t know what they are anymore; it’s been a while. So I thought I would come up with Four Ps of a Benevolent Standard, which are Problem Statement, Precise Language, Pragmatic Solutions, and Prescriptive Minimization.

Economic Analysis

And the reason I say this is that it’s the kind of discussion I have to have a lot of times, particularly with people in the government. I’m not saying this in any pejorative way, so please don’t take it that way. It’s the importance of economic analysis, because nobody can do everything.

So it’s being able to say: I can’t boil the ocean, because you are going to boil everything else in it, but I can do these things. If I do these things, it’s very clear what I am trying to do. It’s very clear what the benefit is. We’ve analyzed it, and it’s probably something everybody can do. Then, we can get to better.

Better is better than omnibus. Omnibus is something everybody gets thrown under if you make something too big. Sorry, I had to say that.

So, Problem Statement: why is this important? You would think, “It’s obvious, Mary Ann,” except that it isn’t, because so often in the discussions I have with people, I have to ask: what problem are you worried about? What are you trying to accomplish? If you don’t tell me that, then we’re going to be all over the map. You say potato and I say “potahto,” and the chorus of that song is, “let’s call the whole thing off.”

I use supply chain as an example, because this one is all over the map. Bad quality? Well, buying a crappy product is a risk of doing business. It’s not, per se, a supply chain risk. I’m not saying it’s not important, but it’s certainly not a cyber-specific supply chain risk.

Bad security: well, that’s important, but again, that’s a business risk.

Backdoor bogeyman: this is the popular one. How do I know you didn’t put a backdoor in there? Well, you can’t actually, and that’s not a solvable problem.

Assurance, supply chain shutdown: yeah, I would like to know that a critical parts supplier isn’t going to go out of business. So these are all important, but they are all different problems.

So you have to say what you’re worried about, and it can’t be all of the above. Almost every business has some supplier of some sort, even if it’s just healthcare. If you’re not careful how you define this, you will be trying to define 100 percent of any entity’s business operations. And that’s not appropriate.

Use cases are really important, because you may have a Problem Statement. I’ll give you one, and this is not to ding NIST in any way, shape, or form, but I just read this. It’s the Cryptographic Key Management System draft. The only reason I cite this as an example is that I couldn’t actually find a use case in there.

So whatever the merits of that draft, is it saying you are trying to develop a super-secret key management system for government, very sensitive cryptographic things you are building from scratch? Or are you trying to define a key management system that has to be used for things like TLS or any encryption that any commercial product does? Because that’s way out of scope.

So without that, what are you worried about? And also, what’s going to happen is that somebody is going to cite this in an RFP, and it’s going to be, “Are you compliant with bladdy-blah?” And you have no idea whether that should even apply.

Problem Statement

So that Problem Statement is really important, because without that, you can’t have that dialogue in groups like this. Well, what are we trying to accomplish? What are we worried about? What are the worst problems to solve?

Precise Language is also very important. Why? Because it turns out everybody speaks a slightly different language, even if we all speak some dialect of geek, and that is, for example, a vulnerability.

If you say vulnerability to my vulnerability handling team, they think of that as a security vulnerability that’s caused by a defect in software.

But I’ve seen it used to include, well, you didn’t configure the product properly. I don’t know what that is, but it’s not a vulnerability, at least not to a vendor. You implemented a policy incorrectly. It might lead to a vulnerability, but it isn’t one. So you see where I am going with this. If you don’t have language that defines the same thing very crisply, you read something, go off and do it, and realize you solved the wrong problem.

I am very fortunate. One of my colleagues from Oracle works on our hardware, and I saw a presentation by people in that group at the Cryptographic Conference in November. They talked about how much trouble we got into because, if you say “module” to a hardware person, it’s a very different thing from what it means to somebody trying to certify it. This is a huge problem, because again, you say potato, I say “potahto.” It’s not the same thing to everybody. So it needs to be very precisely defined.

Scope is also important. I don’t know why I have to say this a lot, and I am sure it gets kind of tiresome to the recipients: COTS isn’t GOTS. Commercial software is not government software, and it’s actually globally developed. That’s the only way you get commercial software that is feature-rich and released frequently. We have access to global talent.

It’s not designed for all threat environments. It can certainly be better, and I think most people are moving towards better software, most likely because we’re getting beaten up by hackers and then by our customers, and it’s good business. But there is no commercial market for high-assurance software or hardware, and that’s really important, because there is only so much you can do to move the market.

Even a big customer like the U.S. government, important as it is in the market for a lot of people, is not big enough to move the marketplace on its own, and so you are limited by the business dynamic.

So that’s important: you can get to better. I tell people, “Okay, anybody here have a Volkswagen? Is it an MRAP vehicle? No, it’s not, is it?” You bought a Volkswagen and you got a Volkswagen. You can’t take a Volkswagen, drive it around the streets, and expect it to perform like an MRAP vehicle. Even a system integrator, a good one, cannot sprinkle pixie dust over that Volkswagen and turn it into an MRAP vehicle. Those are very different threat environments.

Why do you think commercial software and hardware are different? They’re not. It’s exactly the same thing. You might have a really good Volkswagen, and it’s great for commuting, but it is never going to perform in an IED environment. It wasn’t designed for that, and there is nothing you can do to make it perform in that environment.

Pragmatism

Pragmatism: I really wish anybody working on any standard would do some economic analysis, because economics rules the world. Even if something is a really good idea, time, money, and people, particularly qualified security people, are constrained resources.

So if you make people do something that looks good on paper but is really time-consuming, the opportunity cost is too high. That means asking what the value is of something else you could do with those resources that would either cost less or deliver higher benefit. If you don’t do that analysis, then you have people saying, “Hey, that’s a great idea. Wow, that’s great too. I’d like that.” It’s like asking your kid, “Do you want candy? Do you want new toys? Do you want more footballs?” instead of saying, “Hey, you have 50 bucks, what are you going to do with it?”

And then there are unintended consequences, because if you make this too complex, you just have fewer suppliers. People will simply say, “I’m just not going to bid, because it’s impossible.” I’m going to give you three examples, and again, I’m trying to be respectful here. This is not to diss anybody who worked on these. In some cases, these things have been modified in subsequent revisions, which I really appreciate. But they are examples of: what were you asking for in the first place?

I think this was in an early version of NISTIR 7622 and has since been excised. There was a requirement that the purchaser be notified of personnel changes involving maintenance. Okay, what does that mean?

I know what I think they wanted, which is, if you are outsourcing the human resources for the Defense Department and you move the whole thing to “Hackistan,” obviously they would want to be notified. I got that, but that’s not what it said.

So I look at that and say, we have at least 5,000 products at Oracle. We have billions and billions of lines of code, and every day somebody checks out code in a transaction, does some work on it, and they didn’t write it in the first place.

So am I going to tweet all that to somebody? What’s that going to do for you? Plus, you have things like the German Workers Council. We are going to tell the US Government that Jurgen worked on this line of code? Oh no, that’s not going to happen.

So what was it you were worried about? Because that is not sustainable; tweeting people 10,000 times a day with code changes is just going to consume a lot of resources.

Another one, in an early version of something they were trying to do: they wanted to know, for each phase of development for each project, how many foreigners worked on it. What’s a foreigner? Is it a Green Card holder? Is it someone who has a dual passport? What is that going to do for you?

Now again, if you had super-custom code for some intelligence application, I can understand there might be cases in which that would matter. But general-purpose software is not one of them. As I said, I can give you that information. We’re a big company and we’ve got lots of resources. A smaller company probably can’t. Again, what will that do for you? Because I am taking resources I could be using on something much more valuable and putting them on something really silly.

Last, but not least, and again, with respect, I think I know why this was in there. It might have been the secure engineering draft standard that you came up with that has many good parts to it.

Root Cause Analysis

I think vendors will probably understand this pretty quickly: Root Cause Analysis. If you have a vulnerability, one of the first things you should use is Root Cause Analysis. If you’re a vendor and you have a CVSS 10 security vulnerability in a product that’s being exploited, what do you think is the first thing you are going to do?

Get a patch or a workaround into your customers’ hands? Yeah, that’s probably the number one priority. But Root Cause Analysis, particularly for really nasty security bugs, is also really important. For a CVSS 0, who cares? But for a 9 or 10, you should be doing that kind of analysis.

I’ve got a better one. We have a technology called Java. Maybe you’ve heard of it. We put a lot of work into fixing Java. One of the things we did was Root Cause Analysis for everything CVSS 9 and higher; those had to go in front of my boss, and every Java developer had to sit through that briefing. How did this happen?

Last but not least, we looked for other similar instances, not just the root cause: how did that get in there, how do we avoid it, and where else does this problem exist? I am not saying this to make us look good; I’m saying it for the analytics. What are you really trying to solve here? Root Cause Analysis is important, but it’s important in context. If I have to do it for everything, it’s probably not the best use of a scarce resource.
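
A minimal sketch of that severity-gated triage, with an invented threshold and invented findings (Oracle’s actual process is described only in outline above):

```python
# A minimal sketch of severity-gated root cause analysis: patch
# everything, but reserve full RCA (and the search for similar
# instances) for the most severe findings. Threshold and records
# are illustrative.

RCA_THRESHOLD = 9.0  # CVSS score at or above which full RCA is required

findings = [
    {"id": "BUG-101", "cvss": 9.8, "component": "auth"},
    {"id": "BUG-102", "cvss": 4.3, "component": "ui"},
    {"id": "BUG-103", "cvss": 9.1, "component": "parser"},
]

def triage(finding: dict) -> str:
    """Every finding gets a fix; only high-severity ones get full RCA."""
    if finding["cvss"] >= RCA_THRESHOLD:
        return "patch + root cause analysis + scan for similar instances"
    return "patch in normal release cycle"

for f in findings:
    print(f["id"], "->", triage(f))
```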

My last point is to minimize prescriptiveness, within limits. For example, some people in here probably know how to bake, or maybe you have made a pie. There is no one right way to bake a cherry pie. Some people go down to Ralphs, get a frozen Marie Callender’s out of the freezer, stick it in the oven, and they’ve got a pretty good cherry pie.

Some people make everything from scratch. Some people use a prepared pie crust and they do something special with the cherries they picked off their tree, but there is no one way to do that that is going to work for everybody.

There is best practice for some things, though. For example, I can say truthfully that a good development practice would not be: number one, just start coding; number two, it compiles without too many errors on the base platform; and then ship it. That is not good development practice.

If you mandate too much, it will stifle innovation and it won’t work for people. Plus, as I mentioned, you will have an opportunity cost: if I’m spending time on something just because somebody says I have to, I can’t spend it on a more innovative way of doing it.

We don’t have a single development methodology at Oracle, mostly because of acquisitions. We buy a great company, and we don’t tell them, “You know, that agile thing you are doing? That’s so last year. You have to do waterfall.” That’s not going to work very well. But there are good practices even within those different methodologies.

Allowing for different hows is really important. Static analysis is one of them. I think static analysis is pretty much industry practice now, and people should be doing it, but third-party static analysis is really bad. I was opining about this just this morning.
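As a rough illustration of allowing for different hows, here is a hedged sketch of a build gate that consumes findings from whatever static analyzer a team already uses, normalized to (rule, severity) pairs, and blocks only above a threshold. The run_analyzer stand-in, the severity names, and the gate policy are all assumptions:

```python
# A sketch of a tool-agnostic static analysis gate; nothing here is a
# real analyzer's API. Nonzero exit status would fail a CI build.
import sys

RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}
GATE = "high"  # assumed policy threshold

def run_analyzer(path: str) -> list[tuple[str, str]]:
    # Hypothetical stand-in: a real gate would invoke the team's chosen
    # tool here and normalize its output. Sample findings for illustration.
    return [("unchecked-return-value", "low"), ("sql-injection", "high")]

def gate(findings: list[tuple[str, str]]) -> int:
    blocking = [f for f in findings if RANK[f[1]] >= RANK[GATE]]
    for rule, severity in blocking:
        print(f"BLOCKING: {rule} ({severity})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(run_analyzer(sys.argv[1] if len(sys.argv) > 1 else ".")))
```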

Third-party Analysis

Let’s just say I have a large customer, whom I won’t name, who used a third-party static analysis service. They broke their license agreement with us, and they’re getting a lot of grief from us for it. Worse, they gave us a report that included vulnerabilities from one of our competitors’ products. I don’t want to know about those, right? I can’t fix them. I did tell my competitor, “You should know this report exists, because I’m sure you want to analyze it.”

Here’s the worst part: how many of those vulnerabilities the third party found do you think had any merit? Running the tool is nothing; analyzing the results is everything. That customer and the vendor wasted the time of one of our best security leads trying to make sure there was no there there, and there wasn’t.

So again, and last but not least: governments can use their purchasing power in a lot of very good ways, but realize that regulatory requirements are probably going to lag actual practice. You could be specifying buggy-whip standards when the reality is that nobody uses buggy whips anymore. It’s not always about the standard, particularly if you are using resources in a less than optimal way.

One of the things I like about The Open Group is that here we have actual practitioners. This is one of the best forums I have seen, because there are people who have actual subject matter expertise to bring to the table, which is so important in saying what is going to work and can be effective.

The last thing I am going to say is a nice thank you to the people in The Open Group Trusted Technology Forum (OTTF), because I appreciate the caliber of my colleagues, and also to Sally Long. They talk about this type of effort as herding cats, and at least for me, it’s probably like herding a snarly cat. I can be very snarly; I’m sure you can pick up on that.

So I truly appreciate the professionalism, the focus, and the targeting: taking on a good slice of the supply-chain problem and making it better, not boiling the ocean, but staying focused and targeted, with very high-caliber participation. So thank you to my colleagues, and particularly thank you to Sally. That’s it; I will turn it over to others.

Jim Hietala: We do, we have a few questions from the audience. The first one is something you brought up, Dr. Ross, and both of you should feel free to chime in on it: building security in, looking at software and systems engineering processes. How do you bring industry along in terms of commercial off-the-shelf products and services, especially when you look at things like the IoT, where we have IP interfaces grafted onto all sorts of devices?

Ross: As Mary Ann was saying before, the strength of any standard is really its implementability out there. When we talk about the engineering standard in particular, the 15288 extension, if we do that correctly, then every organization that’s already using, let’s say, a security development lifecycle such as 27034 (you can pick your favorite standard) should be able to see those activities reflected in the different lanes of the 15288 processes.

This is a very important point that I got from Mary Ann’s discussion. We have to win hearts and minds and be able to reflect things in a disciplined and structured process that doesn’t take people off their current game. If they’re doing good work, we should be able to reflect that good work and say, “I’m doing these activities, whether it’s an SDL or something else, and this is how they map to the activities we are trying to define in 15288.”

And that can apply to the IoT. Again, it goes back to the code, whether it’s an Oracle database or a Microsoft operating system. It’s all about the code, and the discipline and structure of building that software and integrating it into a system. This is where we can really bring together industry, academia, and government and actually do something that we all agree on.
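As a rough illustration of that mapping idea, here is a minimal sketch that records the SDL activities an organization already performs against the 15288 process lanes they might be reflected in. The activity names and the mapping itself are assumptions for illustration, not taken from either standard:

```python
# An illustrative sketch: evidence existing SDL activities against
# assumed 15288 process lanes. The mapping is invented for illustration.
SDL_TO_15288 = {
    "threat modeling":     "Risk Management process",
    "static analysis":     "Verification process",
    "secure code review":  "Implementation process",
    "penetration testing": "Validation process",
    "patch response":      "Maintenance process",
}

def lanes_covered(activities: set[str]) -> set[str]:
    """Which process lanes are evidenced by activities already performed."""
    return {SDL_TO_15288[a] for a in activities if a in SDL_TO_15288}

print(lanes_covered({"threat modeling", "static analysis"}))
```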

Different Take

Davidson: I would have a slightly different take on this, and I know this is not just a voice crying in the wilderness. My concern about the IoT goes back to things I learned in business school about financial market theory, which unfortunately was borne out in 2008.

There are certain types of risk you can mitigate. If I cross a busy street, I’m worried about getting hit by a car; I can look both ways, and I can mitigate that. You can’t mitigate systemic risk; it means you have created a fragile system. That is the problem with the IoT, and it is a problem that no amount of engineering will solve.

If it’s not a problem, why aren’t we giving nuclear weapons IP addresses? I am not making this up; the Air Force thought about that at one point. You’re laughing. Okay: Armageddon, there’s an app for that.

That’s the problem. I know this is going to happen anyway, whether or not I approve of it, but I really wish people would look at this not just in terms of how many of these devices there will be and what a great opportunity it is, but in terms of the systemic risk we are creating by doing this.

My house is not connected to the Internet directly, and I do not want somebody to shut my appliances off, or shut down my refrigerator, or lock it so that I can’t get into it, or use it for launching an attack. Those are the discussions we should be having, at least as much as how we make sure the people designing these things have a clue.

Hietala: The next question is: how do customers and practitioners value the cost of security? And a related question: what can global companies do to get C-suite attention and investment in cybersecurity, that whole ROI and value discussion?

Davidson: I know they value it, because nobody calls me up and says, “I am bored this week. Don’t you have more security patches for me to apply?” That’s actually true. We know what it costs us to produce a lot of these patches, and the amount of resources we spend on that matters; I would much rather be putting them into building something new and innovative, where we could charge money for it and provide more value to customers.

So it’s cost avoidance, number one. Number two, more people have an IT backbone, and they understand the value of having it be reliable. Probably one of the reasons people are moving to clouds is that it’s hard to maintain all these systems and hard to find the right people to maintain them. But also, I do have more customers asking us now about our security practices, which is a case of be careful what you wish for.

I said this 10 years ago: people should be demanding to know what we’re doing. Now I am going to spend a lot of time answering RFPs, but that’s good. These people are aware of this. They’re running their businesses on our stuff, and they want to know what kind of care we’re taking to protect their data and their mission-critical applications as if they were ours.

Difficult Question

Ross: The ROI question is very difficult with regard to security, and I think this goes back to what I said earlier. The sooner we get security out of its stovepipe and integrated as just part of the best practices we follow every day, the better, whether that’s in a company’s development work, in our enterprises as part of mainstream organizational management such as the SDLC, in any engineering work within the organization, or wherever the Enterprise Architecture group is involved. That integration makes security less of a “hey, I am special” activity and more of just a part of the way we do business.

So customers are looking for reliability and dependability. They rely on this great bed of IT products, systems, and services, and they’re not always focused on the security aspects. They just want to make sure it works, and that if there is an attack and malware goes creeping through their system, they are as protected as they need to be. Sometimes that flies way below their radar.

So it’s got to be a systemic process and an organizational transformation. I think we have to go through it, and we are not quite there just yet.

Davidson: Yeah, and you really do have to bake it in. I have a team of 45 people (I’ve got three more headcount, hoo-hoo), but we have about 1,600 people in development whose jobs include being security points of contact and security leads. They’re the boots on the ground who implement our program, because I don’t want an organization that peers over everybody’s shoulder to make sure they are writing good code. That’s not cost-effective and not a good way to do it. It’s cultural.

One of the ways that you do that is seeding those people in the organization, so they become the boots on the ground and they have authority to do things, because you’re not going to succeed otherwise.

Going back to Java, the first discussion I had with one of the executives was that this is a cultural thing. Everybody needs to feel that he or she is personally responsible for security, not just those 10 or 20 people, whoever the security weenies are. It’s got to be everybody, and when you do that, you really see change in how things happen. Not everybody is going to be a security expert, but everybody has some responsibility for security.

Transcript available here.

Transcript of part of the proceedings from The Open Group San Diego 2015 in February. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Join the conversation! @theopengroup #ogchat


Comments Off on Cybersecurity Standards: The Open Group Explores Security and Ways to Assure Safer Supply Chains

Filed under Cloud, Cloud/SOA, Conference, Cybersecurity, Enterprise Architecture, Information security, Internet of Things, IT, OTTF, RISK Management, Security, Standards, TOGAF®, Uncategorized