Category Archives: Data management

The Open Group San Diego Panel Explores Synergy Among Major Frameworks in Enterprise Architecture

Following is a transcript of part of the proceedings from The Open Group San Diego event in February – a panel discussion on The Synergy of Enterprise Architecture frameworks.

The following panel, which examines the synergy among the major Enterprise Architecture frameworks, consists of moderator Allen Brown, President and Chief Executive Officer, The Open Group; Iver Band, an Enterprise Architect at Cambia Health Solutions; Dr. Beryl Bellman, Academic Director, FEAC Institute; John Zachman, Chairman and CEO of Zachman International, and originator of the Zachman Framework; and Chris Forde, General Manager, Asia and Pacific Region and Vice President, Enterprise Architecture, The Open Group.

Here are some excerpts:

Iver Band: As an Enterprise Architect at Cambia Health Solutions, I have been working with the ArchiMate® Language for over four years now, both using it and helping develop it in the ArchiMate® Forum. As soon as I discovered it in late 2010, I could immediately see, as an Enterprise Architect, how it filled an important gap.

It’s very interesting to see the perspective of John Zachman. I am going to briefly present how the ArchiMate Language allows you to fully support enterprise architecture using The Zachman Framework.

What is the ArchiMate Language? Well, it’s a language we use for building understanding across disciplines in an organization and for communicating and managing change. It’s a graphical notation with formal semantics.

It’s a framework that describes and relates the business, application, and technology layers of an enterprise, and it has extensions for modeling motivation, which includes business strategy, external factors affecting the organization, and requirements for putting them all together and for showing them from different stakeholder perspectives.

You can show conflicting stakeholder perspectives, and even politics. I’ve used it to model organizational politics that were preventing a project from going forward.

It has a rich set of techniques in its viewpoint mechanism for visualizing and analyzing what’s going on in your enterprise. Those viewpoints are tailored to different stakeholders.  And, of course, ArchiMate®, like TOGAF®, is an open standard managed by The Open Group.

A taste of ArchiMate

This is just a taste of ArchiMate for people who haven’t seen it before. This is actually excerpted from the presentation my colleague Chris McCurdy and I are doing at this conference on Guiding Agile Solution Delivery with the ArchiMate Language.

What this shows is the Business and Application Layers of ArchiMate. It shows a business process at the top, with each process represented by a symbol. At the next layer down, in yellow, it shows a data model of business objects.

Below that, it shows a data model actually realized by the application, the actual data that’s being processed.

Below that, it shows an application collaboration, a set of applications working together, that reads and writes that data and realizes the business data model that our business processes use.

All in all, it presents a vision of an integrated project management toolset for a particular SDLC that uses the phases that you see across the top.
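As a rough textual sketch of the layered structure just described (ArchiMate itself is a graphical notation, so this is only an approximation, and all element names are invented for illustration):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Element:
        name: str      # illustrative element name
        layer: str     # "Business" or "Application"
        concept: str   # ArchiMate concept type

    process = Element("Manage Project", "Business", "Business Process")
    plan    = Element("Project Plan", "Business", "Business Object")
    record  = Element("Project Plan Record", "Application", "Data Object")
    toolset = Element("PM Toolset", "Application", "Application Collaboration")

    # (source, relationship, target) triples mirroring the diagram:
    relations = [
        (process, "accesses", plan),    # the business process uses the business data model
        (record,  "realizes", plan),    # application data realizes the business object
        (toolset, "accesses", record),  # the collaboration reads and writes that data
    ]

    for src, rel, dst in relations:
        print(f"{src.concept} '{src.name}' --{rel}--> {dst.concept} '{dst.name}'")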

In our presentation tomorrow, we are going to dissect this model: how you would build it and how you would develop it in an agile environment.

I have done some analysis of The Zachman Framework, comparing it to the ArchiMate Language. What’s really clear is that ArchiMate supports enterprise architecture with The Zachman Framework. You see a rendering of The Zachman Framework and then you see a rendering of the components of the ArchiMate Language. You see the Business Layer, the Application Layer, the Technology Layer, its ability to express information, behavior, and structure, and then the Motivation and Implementation and Migration extensions.

So how does it support it? Well, there are two key things here. The first is that ArchiMate models answer the questions that are posed by The Zachman Framework columns.

What: for Inventory. We are basically talking about what is in the organization. There are Business and Data Objects, Products, Contracts, Value, and Meaning.

How: for Process. We can model Business Processes and Functions, and we can model Flow and Triggering Relationships between them.

Where: for the Distribution of our assets. We can model Locations, Devices, and Networks, depending on how you define Location within a network or within a geography.

Who: for Responsibility. We can model Business Actors, Collaborations, and Roles.

When: for Timing. We have Business Events, Plateaus of system evolution (relatively stable system states), and Triggering Relationships.

Why: for Motivation. We have a rich Motivation extension: Stakeholders, Drivers, Assessments, Principles, Goals, Requirements, and so on, and we show how those different components influence and realize each other.
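Condensed into a lookup table, the column-to-concept mapping just described might look like the following sketch (a summary of the speaker’s points in Python form, not an official crosswalk, with concept lists abbreviated):

    # Zachman column -> ArchiMate concepts, as described above (abbreviated).
    zachman_to_archimate = {
        "What (Inventory)":     ["Business Object", "Data Object", "Product", "Contract", "Value", "Meaning"],
        "How (Process)":        ["Business Process", "Business Function", "Flow", "Triggering"],
        "Where (Distribution)": ["Location", "Device", "Network"],
        "Who (Responsibility)": ["Business Actor", "Business Collaboration", "Business Role"],
        "When (Timing)":        ["Business Event", "Plateau", "Triggering"],
        "Why (Motivation)":     ["Stakeholder", "Driver", "Assessment", "Principle", "Goal", "Requirement"],
    }

    for column, concepts in zachman_to_archimate.items():
        print(f"{column}: {', '.join(concepts)}")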

Zachman perspectives

Finally, ArchiMate models express The Zachman Row Perspectives. For the contextual or boundary perspective, where Scope Lists are required, we can make catalogs of ArchiMate concepts. ArchiMate has broad tool support, and although it is a graphical language, in a repository-based tool you can very easily take lists of concepts, as I do regularly, and put them in catalog or matrix form. So it’s easy to come up with those Scope Lists.

Secondly, for the Conceptual area, the Business Model, we have a rich set of Business Layer Viewpoints that focus on the top of the diagram that I showed you: Business Processes, Actors, Collaborations, Interfaces, and the Business Services that are brought to market.

Then, at the Logical level, for the System Model, we have a rich set of Application Layer Viewpoints, as well as Viewpoints that show how Applications use Infrastructure.

For Physical, we have an Infrastructure Layer, which can be used to model any type of Infrastructure: Hosting, Network, Storage, Virtualization, Distribution, and Failover. All those types of things can be modeled.

And for Configuration and Instantiation, the Application and Technology Layer Viewpoints are available, particularly the more detailed ones. Also important are the mappings to standard design languages such as BPMN, UML, and ERD, which are straightforward for experienced modelers. We also have a white paper on using the ArchiMate language with UML. Thank you.

Dr. Beryl Bellman: I have been doing enterprise architecture, and what you might call pre-enterprise architecture work, for quite a long time, probably about 30 years, and I first met John Zachman well over 20 years ago.

In addition to being an enterprise architect, I am also a university professor at California State University, Los Angeles. My focus there is on organizational communications. While being a professor, I have always been involved in contract consulting for companies like Digital Equipment Corporation, ASK, AT&T, NCR, and then Ptech.

About 15 years ago, a colleague of mine and I founded the FEAC Institute. The initial name for that was the Federal Enterprise Architecture Certification Institute, and then we changed it to Federated. It actually goes by both names.

The business driver for that was the Clinger–Cohen Act of 1996, which mandated that all federal agencies must have an enterprise architecture.

Then, around 2000, they began to enforce that regulation. My business partner at that time, Felix Rausch, and I felt that we needed some certification in how to go about doing and meeting those requirements, both for the federal agencies and the Department of Defense. And so that’s when we created the FEAC Institute.

Beginning of FEAC

In our first course, we had the Executive Office of the President, US Department of Fed, which I believe was the first department of the federal government that was hit by OMB, which held up their budget for not having an enterprise architecture on file. So they were pretty desperate, and that was the beginning of FEAC.

Since that time, a lot of people have come in from the commercial world and from international areas. The idea of FEAC was that you start off by learning how to do enterprise architecture. In a lot of programs, including TOGAF, you sort of have to already know a little bit about enterprise architecture; it’s the hermeneutic circle: you have to know what it is in order to learn it.

In FEAC we took the position that you want to provide training and education in how to do enterprise architecture that will take you, in a matter of three months, from a beginning state to being able to take full responsibility for enterprise architecture work. It’s associated with the California State University system, and you can get, if you so desire, 12 graduate academic units in Engineering Management that can be applied toward a degree, or you can get continuing education units.

So that’s how we began that. Then, a couple of years ago, my business partner decided he wanted to retire, and fortunately there was this guy named John Zachman, who will never retire. He’s a lot younger than all of us in this room, right? So he purchased the FEAC Institute.

I still maintain a relationship with it as Academic Director; my responsibilities are primarily as a liaison to the universities. My colleague, Cort Coghill, is the Academic Coordinator of the FEAC Institute.

FEAC is an organization that also incorporates a lot of the training and education programs of Zachman International, which includes managing the FEAC TOGAF courses, as well as the Zachman certified courses, which we will tell you more about.

I’m just a little bit surprised by this idea of the panel, the way we are constructed here, because I didn’t have a presentation. I’m doing it off the top of my head, as you can see. I was told we were supposed to have a panel discussion about the synergies of enterprise architecture. So I prepared in my mind the synergies between the different enterprise architectures that are out there.

For that, I just want to make a strong point. I want to talk about synergy rather than a bifurcation with “TOGAF” and “Zachman” standing on opposite sides. The statement that has been made earlier this morning and throughout the meeting is “TOGAF and.”

Likewise, we have Zachman, and it’s not “Zachman or”; it’s “Zachman and.” Zachman provides that ontology, as John talks about it, in his periodic table of basic elements of primitives through which we can constitute any enterprise architecture. If you attempt to build an architecture out of composites, inventing composites and just modeling, you’re just getting a snapshot in time, and you don’t really have an enterprise architecture that is able to adapt and change. You might have a picture of it, but that’s all you really have.

That’s the power of The Zachman Framework. Hopefully, most of you will attend our demonstration this afternoon and a workshop where we are actually going to have people work on building primitives and looking at the relationship of primitives to composites with a case study.

Getting lost

On the other side of that, Schekkerman wrote something about the forest of architectural frameworks and getting lost in that. There are a lot of enterprise architectural frameworks out there.

I’m not counting TOGAF, because TOGAF has its own architecture content metamodel, with its own artifacts, but it does not require one to use those artifacts. They suggest that you can use DoDAF. You can use MODAF. You can use commercial ones like NCR’s GITP. You can use any one.

Those are basically the competing models. Some of them are commercial-based, where organizations have put their own proprietary stamps and names (sometimes the wrong names) on the artifacts, and others want to give it their own take.

I’m more familiar nowadays with the governmental sectors. For example, FEAF, the Federal Enterprise Architecture Framework, Version 2. Are you familiar with that? Just go on the Internet and type in FEAF v2. Since Scott Bernard has been the head (he is the Chief Architect for the US Government at OMB), he has developed a model of enterprise architecture that he calls the Architecture Cube Model, which is an iteration off of John’s, but he is pursuing a cube form rather than a triangle form.

Also, for him, enterprise architecture fits into his FEAF-II, because at the top level he has the strategic plans of an organization.

It goes down through different layers, but then, at one point, it drops off and becomes not only a solution but gets into the manufacturing of the solution. He has a whole series of artifacts that pertain to these different layers, but at the lower levels you have a computer wiring closet diagram model, which is a little bit more detailed than what we would consider to be at the level of enterprise architecture.

Then you have the MODAF, the DoDAF, and all of these other ones, where a lot of those compete with each other more on the basis of political choices.

With MODAF, the British obviously don’t want to use DoDAF; they have their own, but the two are very similar to each other. One view, the acquisition view, differs from the project view, but they do the same things. You can define them in terms of each other.

Then there is the Canadian one, NAF, and all that, and they are very similar. Now we’re trying to develop UPDM, a unified profile across MODAF, DoDAF, and NAF, which is still in its planning stages. So we are moving toward a more integrated system.

Allen Brown: Let’s move on to some of the questions that folks are interested in. Moving away from what the frameworks are, there is a question here: how does enterprise architecture take advantage of the impact of new emerging technologies like social, mobile, analytics, cloud, and so on?

Bidirectional change

John A. Zachman: The change can take place in the enterprise either from the top, where we change the context of the enterprise, or from the bottom, where we change the technologies.

So technology is expressed in the context of the enterprise, what I would call Row 4, and that’s the physical domain. And it’s the same way in any other architecture: building architecture, airplane architecture, or anything. You can implement the as-designed logic in different technologies.

Whatever the technology is, I made the observation that you want to engineer for flexibility. You separate the independent variables. So you separate the logic at Row 3 from the physics of Row 4, and then you can change Row 4 without changing Row 3. Basically that’s the idea, so you can accommodate whatever the emerging technologies are.

Bellman: I would just continue with that. I agree with John. Thinking about the synergy between the different architectures, basically every enterprise architecture contains, or should contain, considerations of those primitives. Then it’s a matter of which one a customer wants, which one a customer feels comfortable with. Basically, as long as you have those primitives defined, you can essentially use any architecture. That constitutes the synergy between the architectures.

Band: I agree with what’s been said. It’s also true that one of the jobs of an enterprise architect is to establish a view of the organization that can be used to promote understanding and to communicate and manage change. Cloud-based systems are generally based on metadata, and the major platforms, like Salesforce.com for example, publish their data models and their APIs.

So I think there’s going to be a new generation of systems that provide a continuously synchronized, real-time view of what’s going on in the enterprise. Architectural models will still show where things need to go, and architects will still do analyses, but we will be using cloud, big data, and even sensor technologies to understand the current state of the enterprise.

Bellman: In DoDAF 2.0, when it initially came out, I think about six years ago, they have a services view and a systems view. And one of the points they make within the context, not as a footnote, is that they expect the systems view to sort of disappear and a cloud view to take its place. So I think you are right on that.

Chris Forde: The way I interpreted the question was: how does an EA or architecture approach help you manage disruptive things? If you accept the idea that enterprise architecture actually is a management discipline, it’s going to help you ask the right questions to understand what you are dealing with, where it should be positioned, and what the risks and value proposition are around those particular things, whether that’s the Internet of Things, cloud computing, or all of these types of activities.

So, going back to the core of what Terry’s presentation was about, it is a decision-making framework with informed questions to help you understand what you should be doing to mitigate the risk, take advantage of the innovation, and deploy the particular thing in a way that’s useful to your business. That’s the way I read the question.

Impact of sensors

Band: Just to reinforce what Chris says, as an enterprise architect in healthcare, one of the things that I am looking at very closely is the evaluation of the impact of health sensor technology. Gartner Group says that by 2020, the average lifespan in a developed country will be increased by six months due to mobile health monitoring.

And so there are vast changes in the whole healthcare delivery system, of which my company is at the center as a major healthcare payer and investor in all sorts of healthcare companies. I use enterprise architecture techniques to begin to understand the impact of that and show the opportunities to our health insurance business.

Brown: If you think about social and mobile and you look at the entire enterprise architecture, you are starting to expand that beyond the limits of the organization, aren’t you? You’re starting to look at not just the organization and its ecosystem, your business partners, but also the impact of bringing mobile devices into the organization, of managers doing things on their own with cloud that weren’t part of the architecture. You have the relationship with consumers out there who are using social and mobile. How do you capture all of that in enterprise architecture?

Forde: Allen, if I had the answer to that question I would form my own business and I would go sell it.

Back in the day, when I was working in large organizations, we were talking about the extended enterprise, that kind of ecosystem view of things. And at that time the issue was more problematic. We knew we were in an extended ecosystem, but we didn’t really have the technologies that effectively supported it.

The types of technologies that are available today, the ones that The Open Group has white papers about — cloud computing, the Internet of Things, this sort of stuff — architectures can help you classify those things. And the technologies that are being deployed can help you track them, and they can help you track them not as documents of the instance, but of the thing in real time that is talking to you about what its state is, and what its future state will be, and then you have to manage that information in vast quantities.

So an architecture can help you within your enterprise understand those things and it can help you connect to other enterprises or other information sources to allow you to make sense of all those things. But again, it’s a question of scoping, filtering, making sense, and abstracting — that key phrase that John pointed out earlier, of abstracting this stuff up to a level that is comprehensible and not overwhelming.

Brown: So Iver, at Cambia Health, you must have this kind of problem now, mustn’t you?

Provide value

Band: That’s exactly what I am doing. I am figuring out what will be the impact of certain technologies and how our businesses can use them to differentiate and provide value.

In fact, I was just on a call this morning with JeffSTAT, because the whole ecosystem is changing, and we know that healthcare is really changing. The current model is not financially sustainable, and there is also a tremendous amount of waste in our healthcare system today. The executives of our company say that about a third of the $2.7 trillion and rising spent on healthcare in the US doesn’t do anyone any good.

There’s a tremendous amount of IT investment in that, and it requires architecture to tie it all together. It has to do with everything from the logic with which we edit claims to the follow-up we provide for people with particularly dangerous, and consequently expensive, diseases. So there is just a tremendous amount going through an enterprise architecture, and it’s necessary to have a coherent narrative of what the organization needs to do.

Bellman: One thing we all need to keep in mind is even more dynamic than that, if you believe even a little bit of Kurzweil’s possibilities. Are people familiar with Ray Kurzweil’s ‘The Singularity Is Near’? Around 2037 will be the singularity between computers and human beings.

He argues that the rate of change is not linear but exponential, so in a sense you will never catch up, but you need an architecture to manage that.

Zachman: The way we deal with complexity is through classification. I suggest that there is more than one way to classify things. One is one-dimensional classification (taxonomy or hierarchy; in effect, decomposition), and that’s really helpful for manufacturing. Engineering takes the standpoint of a two-dimensional classification, where we have classified things so that they are normalized: one fact in one place.

Then, if you have the problems identified, you can postulate several technology changes, or several other changes, and simulate their various implications.

The whole reason why I do architecture has to do with change. You deal with extreme complexity and then you have to accommodate extreme change. There is no other way to deal with it. Humanity, for thousands of years, has not been able to figure out a better way to deal with complexity and change other than architecture.

Forde: Maybe we shouldn’t apply architecture to some things.

For example, maybe the technology or the opportunity is so new that we need a decision-making framework that says: you know what, let’s not try to figure all of this out and control it in advance, okay? Let’s let it run and see what happens, and then, when it’s at the appropriate point for architecture, let’s apply it. This is a more organic view of the way nature and life work than the enterprise view of it.

So what I am saying is that architecture is not irrelevant in that context. It’s actually part of the decision-making framework to decide not to architect something at this point in time, because it’s inappropriate to do so.

Funding and budgeting

Band: Yeah, I agree with that wholeheartedly. At Cambia Health Solutions, we are a completely agile shop. All the technology development is on the same sprint cycle, and we have three-week sprints, but we also have certain things that are still annual and waterfall, like funding and budgeting.

We live in a tension. People say: well, what are you going to do, what budget do you need? But at the same time, I haven’t figured everything out. So I am constantly living in that gap between what I need to meet a certain milestone to get my project funded and what I need to do to go forward. Obviously, in a fully agile organization, all those things would be fluid. But then there’s financial reporting, which would also have to be fluid. So there are barriers to that.

For instance, the Scaled Agile Framework, which I think is a fascinating thing, has a very clear place for enterprise architecture. As Chris said, you don’t want to do too much of it in advance. I am constantly bridging the gap between how I can visualize what’s going to happen a year out and how I can give the development teams what they need for the sprint. So I am always living in that paradox.

Bellman: The Gartner Group, not too long ago, came up with the concept of emergent enterprise architecture, and that is what we are dealing with. Enterprises don’t exist like buildings. A building is an object, but an enterprise is a group of human beings communicating with one another.

As the very famous organizational psychologist Karl Weick once pointed out, “The effective organization is garrulous, clumsy, superstitious, hypocritical, monstrous, octopoid, wandering, and grouchy.” Why? Because an organization is continually adapting, continually changing in response to the changing business and technological landscape.

To expect anything other than that is not to have a realistic view of the enterprise. It is a continually emerging phenomenon. So, in a sense, I would not contest the concept of having an architecture; architecting is always worthwhile. But an enterprise is an organic phenomenon, and to deal with that we need to understand and have an architecture for organic phenomena that change and rapidly adapt.

Brown: Chris, where you were going follows the lines of what great companies do, right?

There is a great book published about 30 years ago called ‘In Search of Excellence.’ If you haven’t read it, I suggest you do. It was written by Peters and Waterman, and Tom Peters has tried ever since to recreate something with that magic. One of the lessons learned about what great companies do is what they called simultaneous loose-tight properties. You let some things be very tightly controlled, and other things, as you are suggesting, you let flourish and see where they go before you actually box them in. So that’s a good thought.

So what do we think, as a panel, about evolving TOGAF to become an engineering methodology as well as a manufacturing methodology?

Zachman: I really think it’s a good idea.

Brown: Chris, do you have any thoughts on that?

Interesting proposal

Forde: I think it’s an interesting proposal, and I think we need to look at it fairly seriously. The Open Group approach is: don’t lock people into a specific way of thinking. But we also advocate a disciplined approach to doing things, so I would suspect that we are going to be exploring John’s proposal pretty seriously.

Brown: You mentioned in your talk that a decision-making process to govern IT investments is a precondition, and the question that comes in is: how about other types of investments, including facilities, inventory, and acquisitions?

Forde: The wording of the presentation was very specific. Most organizations have a process or decision-making framework, on an annual or quarterly basis, whatever the cycles are, for allocation of funding to do X, Y, or Z. So the implication wasn’t that IT is the only space where it would be applied.

However, the question is: how effective is that decision-making framework? In many organizations, the IT function is essentially an enterprise-wide activity that supports the financial activities, the plant activities, these sorts of things. So you have the P&Ls from those things flowing in some way into the funding that comes to the IT organization.

The question is, when there are multiple complexities in an organization (multiple departments with independent P&Ls), they are funding IT activities in a way that may or may not be optimized. For the architects, in my view, one of the avenues for success is inserting yourself into that planning cycle and influencing how that spend goes, because normally the architecture team does not have direct control over the spend.

Over time, you gradually improve the enterprise’s ability to optimize and make effective the funding it applies to IT in support of the rest of the business.

Zachman: Yeah, I was just wondering if I could make an observation.

Band: I agree. I think the battle to control shadow IT has been permanently lost. We are in a technology-obsessed society. Every department wants to control some technology and even develop it to meet its needs. There are some controls that you do have, and we do have some, but we have core health insurance businesses that are nearly 100 years old.

Cambia is constantly investing in and acquiring new companies that are transforming healthcare. Cambia has over 100 million customers all across the country, even though our original business was a set of regional health plans.

Build relationships

You can’t possibly rationalize everything from the center. It is incumbent upon the architects, especially the senior ones, to build relationships with the people in these organizations and make sure everything is synergistic.

Many years ago, there was a senior architect I asked what he did, and he said, “Well, I’m just the glue. I go to a lot of meetings.” There are deliverables and deadlines too, but part of the job is consistently building relationships and noticing things, so that when it is time to make a decision or someone needs something, it gets done right.

Zachman: I was in London when Bank of America got bought by NationsBank, and it was touted as the biggest banking merger in the history of the banking industry.

Actually, it wasn’t a merger; it was an acquisition. NationsBank acquired Bank of America and then changed its name to Bank of America. A London paper observed that the headline you always see is, “The biggest merger in the history of the industry.” The headline you never see is, “This merger didn’t work.”

The cost of integrating the two enterprises exceeded the value of the acquisition. Therefore, we’re going to have to break this thing up into pieces and sell off the pieces as surreptitiously as possible, buried in accounting notes someplace or other, so nobody will notice. You never see that article. You only see the one about the biggest merger.

If I were the CEO and my strategy was to grow by acquisition, I would get really interested in enterprise architecture, because you have to be able to anticipate the cost of integration if you want to merge two enterprises. In fact, you’re changing the scope of the enterprise. I have talked a little bit about the role of models, but you are changing the scope, and as soon as you change the scope, you’re going to be faced with an integration issue.

Therefore you have to make a choice: scrap and rework. There is no way, after the fact, to integrate parts that don’t fit together. So you’re going to be faced with a decision about whether you want to scrap and rework or not. I would get really interested in enterprise architecture, because that’s what you really want to know before you make the expenditure. Once you acquire, you’ve obviously already spent all the money. So now you’ve got a problem.

Once again, if I were the CEO and I wanted to grow by merger or acquisition, I would get really interested in enterprise architecture.

Cultural issues

Bellman: One of the big problems we are addressing here is also the cultural and political problems of organizations or enterprises. You could have the best-designed system, but if people and politics don’t agree, there are going to be these kinds of conflicts.

One of my favorite consulting projects was with NCR, which was dealing with Hyundai and Samsung and trying to get them together on a joint project. They kept fighting with each other over knowledge management, technology transfer, and knowledge transfer. My role was to do an architecture of that whole process.

It was called RIAC, the Research Institute in Computer Technology. On one side of the table you had Hyundai and Samsung; on the other side of the table you had NCR. They were throwing PowerPoint slides back and forth at each other. The software we used at that time was METIS, and with it we modeled all the processes, everything that was involved.

Samsung said it was like being hit with a 2×4. Rather than tossing out slides, I was demonstrating it: here are the relationships, and I was able to show that it really works. To me that was a real demonstration that architecture can even overcome some of the politics and cultural differences within enterprises.

Brown: I want to take one more question. I think this comes from a concern that we have raised in some people’s minds today, in that we are talking about all these different frameworks and ontologies. So there are really two questions.

The first asks what each of the frameworks lacks: what are the key elements that are missing? That leads on to the second question, which is probably the key one: isn’t needing to understand all these enterprise architecture frameworks a complex exercise for a practitioner?

Band: My job is not about understanding frameworks. I have been doing enterprise and solution architecture for quite a while, at HP, at The Standard, a diversified financial services company, and now at a health insurance and health solutions company, and it’s really about communicating and understanding in a way that’s useful to your stakeholders.

The frameworks are about creating a shared understanding of what we have and where we are going, and they are just a set of tools in your toolbox, tools that most people don’t understand.

So the idea is not to understand everything, but to get a set of tools, just like a mechanic would, that you carry around and use all the time. For instance, there are certain types of ArchiMate views that I use when I am in a group. I will draw an ArchiMate business process view along with the application services those processes use: what are the business processes you need, and what are the exposed application behaviors that they need to consume?

I have had that discussion with people on the business side and in IT, and we drew those diagrams. That’s a useful tool: it works for me, it works for the people around me, it works in my culture. But there is no understanding all the frameworks unless that’s your field of study. They are all missing the exact thing you need for a particular interaction, but most likely there is something in there that you can base the next critical interaction on.

Six questions

Zachman: I spent most of my life thinking about my framework. There are six questions you have to answer to have a complete description of whatever it is you want to describe: what, how, where, who, when, and why. So that’s complete.

Interestingly enough, the philosophers have also established six transformations, the transformation of an idea into an instantiation, so that’s a complete set, and I did not invent either one of these: the six interrogatives or the six stages of transformation. That framework has to, by definition, accommodate any fact that’s relevant to the existence of the object, the enterprise. Therefore any fact has to be classifiable in that structure.

My framework is complete in that regard. For many years, I would have been reluctant to make a categorical statement, but we have exercised this, and there is no anomaly; I can’t find an anomaly. Therefore I have a high level of confidence that you can classify any fact in that context.

There is one periodic table. There are n different compound manufacturing processes. You can manufacture anything out of the periodic table. That metaphor is really helpful. There’s one enterprise architecture framework ontology. I happened to stumble across, by accident, the ontology for classifying all of the facts relevant to an enterprise.

I wish I could tell you that I was so smart and understood all of these things at the beginning, but I knew nothing about this; I just happened to stumble across it. The framework fell on my desk one day and I saw the pattern. All I did was put enterprise names on the same pattern for the descriptive representation of anything. You’ve heard me tell quite a bit of the story this afternoon. In terms of completeness, I think my framework is complete. I can find no anomalies, and you can classify anything relative to that framework.

And I agree with Iver that there are n different tools you might want to use. You don’t have to know everything about every framework. Whatever the tool is that you need to deal with, out of the context of the periodic table metaphor, the ontological construct of The Zachman Framework, you can accommodate whatever artifacts the tool creates.

You don’t have to analyze every tool; use whatever tool is necessary. If you want to do business architecture, you can create whatever the business architecture manifestation is. If you want to know what DoDAF is, you can create the DoDAF artifacts. You can create any composite, just as you can create any compound from the periodic table. It’s the same idea.

I wouldn’t spend my life trying to understand all these frameworks. You have to operate the enterprise and manage the enterprise, and whatever tool is required to do what you need to do, use it. There is something good about everything, and nothing necessarily does everything.

So use the tool that’s appropriate, and then you can create whatever composite constructs are required by that tool out of the primitive components of the framework. I wouldn’t try to understand all the frameworks.

What’s missing

Forde: On a daily basis at these conferences, there is a line of people coming to tell me what’s missing from TOGAF. Recently we conducted a survey through the Association of Enterprise Architects about what people needed to see. Basically the responses came back as: please give us more guidance that’s specific to my situation, a recipe for how to solve world hunger, or something like that. We are not in the role of providing that level of prescriptive detail.

The value side of the equation is the flexibility of the framework, to a certain degree, to allow many different industries and many different practitioners to drive value for their business out of using that particular tool.

So some people will find value in the content metamodel in the TOGAF framework and its other components, but if you are not happy with that, if it doesn’t meet your need, flip over to The Zachman Framework, or vice versa.

John made it very clear earlier that the value in the framework that he has built throughout his career, and that has been used repeatedly around the world, is its rigor and its comprehensiveness. But he also made very clear that it’s not a method. There is nothing in there to tell you how to go do it.

So you could criticize The Zachman Framework for a lack of method or you could spend your time talking about the value of it as a very useful tool to get X, Y, and Z done.

From a practitioner’s standpoint, what one practitioner does is interesting and has value, but if you have a practice of between 200 and 400 architects, you don’t want everybody running around like a loose cannon doing it their own way, in my opinion. As a practice manager or leader, you need something that makes those resources very, very effective. And when you are in a practice of that size, you probably have a handful of people trying to figure out how the frameworks come together, but most of the practitioners are tasked with taking what the organization says is its best practice and executing on it.

We are looking at improving the level of guidance provided by the TOGAF material: the standard, plus guidance about how to handle specific scenarios.

For example, how to jumpstart an architecture practice, how to build a secure architecture, how to do business architecture well. Those are the kinds of things we have had feedback on, and we are working on them around that particular specification.

Brown: So if you are employed by the US Department of Defense and you are an enterprise architect, you would be required to use DoDAF, because of the views it provides. But people like Terry Blevins, who worked in the DoD for many years, used TOGAF to populate DoDAF. TOGAF is a method, and the method is its great strength.

If you want to have more information on that, there are a number of white papers on our website about using TOGAF with DoDAF, TOGAF with COBIT, TOGAF with Zachman, TOGAF with everything else.

Forde: TOGAF with frameworks, TOGAF with everything: the thing to look at is that the ecosystem of information around these frameworks is where the value proposition really is. If you are trying to bootstrap your architecture practice inside your organization, the framework itself is of interest, but applied use, driving to the value proposition for your business function, is the critical area to focus on.

Transcript available here.

Join the conversation @theopengroup #ogchat


Filed under ArchiMate®, big data, Business Architecture, Cloud, Data management, Enterprise Architecture, Enterprise Transformation, Standards, TOGAF®, Uncategorized

The Open Group London 2014 – Day One Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

On a crisp October Monday in London yesterday, The Open Group hosted the first day of its event at Central Methodist Hall, Westminster. Almost 200 attendees from 32 countries explored how to “Empower Your Business; Enabling Boundaryless Information Flow™”.

Just across the way from another landmark in the form of Westminster Abbey, the day began with a welcome from Allen Brown, President and CEO of The Open Group, before Magnus Lindkvist, the Swedish trendspotter and futurologist, began his keynote on “Competition and Creation in Globulent Times”.

In a very thought-provoking talk, Magnus pondered on how quickly the world now moves, declaring that we now live in a 47 hour world, where trends can spread quicker than ever before. Magnus argued that this was a result of an R&D process – rip off and duplicate, rather than organic innovation occurring in multiple places.

Magnus went on to consider the history of civilization which he described as “nothing, nothing, a little bit, then everything” as well as providing a comparison of vertical and horizontal growth. Magnus posited that while we are currently seeing a lot of horizontal growth globally (the replication of the same activity), there is very little vertical growth, or what he described as “magic”. Magnus argued that in business we are seeing companies less able to create as they are focusing so heavily on simply competing.

To counter this, Magnus told attendees that they should do the following in their day-to-day work:

  • Look for secrets – Whether it be for a certain skill or a piece of expertise that is as yet undiscovered but which could reap significant benefit
  • Experiment – Ensure that there is a place for experimentation within your organization, while practicing it yourself as well
  • Recycle failures – It’s not always the idea that is wrong, but the implementation, which you can try over and over again
  • Be patient and persistent – Give new ideas time and the good ones will eventually succeed

Following this session was the long anticipated launch of The Open Group IT4IT™ Forum, with Christopher Davis from the University of South Florida detailing the genesis of the group before handing over to Georg Bock from HP Software who talked about the Reference Architecture at the heart of the IT4IT Forum.

Hans Van Kesteren, VP & CIO of Global Functions at Shell, then went into detail about how his company has helped to drive the growth of the IT4IT Forum. Starting with an in-depth background to the company’s IT function, Hans described how as a provider of IT on a mass scale, the changing technology landscape has had a significant impact on Shell and the way it manages IT. He described how the introduction of the IT4IT Forum will help his organization and others like it to adapt to the convergence of technologies, allowing for a more dynamic yet structured IT department.

Subsequently Daniel Benton, Global Managing Director of IT Strategy at Accenture, and Georg Bock, Senior Director IT Management Software Portfolio Strategy at HP, provided their vision for the IT4IT Forum before a session where the speakers took questions from the floor. Those individuals heavily involved in the establishment of the IT4IT Forum received particular thanks from attendees for their efforts, as you can see in the accompanying picture.

In their entirety, the various presentations from the IT4IT Forum members provided a compelling vision for the future of the group. Watch this space for further developments now that it has been launched.

The Open Group IT4IT™ Forum Founding Members

In the afternoon, the sessions were split into tracks illustrating the breadth of the material that The Open Group covers. On Monday this provided an opportunity for a range of speakers to present to attendees on topics from the architecture of banking to shaping business transformation. Key presenters included Thomas Obitz, Senior Manager, FSO Advisory Performance Improvement, EY, UK and Dr. Daniel Simon, Managing Partner, Scape Consulting, Germany.

The plenary and many of the track presentations are available at livestream.com.

The day concluded with an evening drinks reception within Central Hall Westminster, where attendees had the opportunity to catch up with acquaintances old and new. More to come on day two!

Join the conversation – @theopengroup #ogLON

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog and media relations. Loren has over 20 years’ experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.


Filed under architecture, Boundaryless Information Flow™, Business Architecture, Conference, Data management, Enterprise Architecture, Enterprise Transformation, Open Platform 3.0, Professional Development, Standards, Uncategorized

Open FAIR Blog Series – Five Reasons You Should Use the Open FAIR Body of Knowledge

By Jim Hietala, VP, Security and Andrew Josey, Director of Standards, The Open Group

This is the second in our blog series introducing the Open FAIR Body of Knowledge.

In this blog, we provide 5 reasons why you should use the Open FAIR Body of Knowledge for Risk Analysis:

1. Emphasis on Risk

Often the emphasis in risk analyses is placed on security threats and controls, without due consideration of impact. For example, we have a firewall protecting all our customer information – but what if the firewall is breached and the customer information is stolen or changed? Risk analysis using Open FAIR evaluates both the probability that bad things will happen and the impact if they do happen. By using the Open FAIR Body of Knowledge, the analyst measures and communicates the risk, which is what management cares about.

2. Logical and Rational Framework

The Open FAIR Body of Knowledge provides a framework that explains the how and why of risk analysis. It improves consistency in undertaking analyses.

3. Quantitative

It’s easy to measure things without considering the risk context – for example, the systems should be maintained in full patch compliance – but what does that mean in terms of loss frequency or the magnitude of loss? The Open FAIR taxonomy and method provide the basis for meaningful metrics.
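As a minimal illustration of what such a metric can look like (this collapses the full Open FAIR taxonomy to its two top-level factors, and the figures are invented):

    # Annualized loss exposure = loss event frequency x probable loss magnitude,
    # the two top-level Open FAIR factors. All figures are hypothetical.
    loss_event_frequency = 0.25       # expected loss events per year
    probable_loss_magnitude = 80_000  # expected loss per event, in dollars

    annualized_exposure = loss_event_frequency * probable_loss_magnitude
    print(f"Annualized loss exposure: ${annualized_exposure:,.0f}")  # $20,000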

4. Flexible

Open FAIR can be used at different levels of abstraction to match the need, the available resources, and available data.

5. Rigorous

There is often a lack of rigor in risk analysis: statements are made such as: “that new application is high risk, we could lose millions …” with no formal rationale to support them. The Open FAIR risk analysis method provides a more rigorous approach that helps to reduce gaps and analyst bias. It improves the ability to defend conclusions and recommendations.

In our next blog, we will look at how the Open FAIR Body of Knowledge can be used with other Open Group standards.

The Open FAIR Body of Knowledge consists of the following Open Group standards:

  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

These can be downloaded from The Open Group publications catalog at http://www.opengroup.org/bookstore/catalog.

Our other publications include a Pocket Guide and a Certification Study Guide.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

 

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.


Filed under Data management, digital technologies, Information security, Open FAIR Certification, RISK Management, Security, Uncategorized

Open FAIR Blog Series – An Introduction to Risk Analysis and the Open FAIR Body of Knowledge

By Jim Hietala, VP, Security and Andrew Josey, Director of Standards, The Open Group

This is the first in a four-part series of blogs introducing the Open FAIR Body of Knowledge. In this first blog, we look at what the Open FAIR Body of Knowledge provides, and why a taxonomy is needed for Risk Analysis.

An Introduction to Risk Analysis and the Open FAIR Body of Knowledge

The Open FAIR Body of Knowledge provides a taxonomy and method for understanding, analyzing and measuring information risk. It allows organizations to:

  • Speak in one language concerning their risk using the standard taxonomy and terminology, and communicate risk effectively to senior management
  • Consistently study and apply risk analysis principles to any object or asset
  • View organizational risk in total
  • Challenge and defend risk decisions
  • Compare risk mitigation options

What does FAIR stand for?

FAIR is an acronym for Factor Analysis of Information Risk.

Risk Analysis: The Need for an Accurate Model and Taxonomy

Organizations seeking to analyze and manage risk encounter some common challenges. Put simply, it is difficult to make sense of risk without having a common understanding of both the factors that (taken together) contribute to risk, and the relationships between those factors. The Open FAIR Body of Knowledge provides such a taxonomy.

Here’s an example that will help to illustrate why a standard taxonomy is important. Let’s assume that you are an information security risk analyst tasked with determining how much risk your company is exposed to from a “lost or stolen laptop” scenario. The degree of risk that the organization experiences in such a scenario will vary widely depending on a number of key factors. To even start to approach an analysis of the risk posed by this scenario to your organization, you will need to answer a number of questions, such as:

  • Whose laptop is this?
  • What data resides on this laptop?
  • How and where did the laptop get lost or stolen?
  • What security measures were in place to protect the data on the laptop?
  • How strong were the security controls?

The level of risk to your organization will vary widely based upon the answers to these questions. The degree of overall organizational risk posed by lost laptops must also include an estimation of the frequency of occurrence of lost or stolen laptops across the organization.

At one extreme, suppose the laptop belonged to your CTO, who had IP stored on it in the form of engineering plans for a revolutionary product in a significant new market. If the laptop was unprotected in terms of security controls, and it was stolen while he was on a business trip to a country known for state-sponsored hacking and IP theft, then there is likely to be significant risk to your organization. At the other extreme, suppose the laptop belonged to a junior salesperson a few days into their job, it contained no customer or prospect lists, and it was lost at a security checkpoint at an airport. In this scenario, there's likely to be much less risk. Or consider a laptop used by the head of sales for the organization, who has downloaded Personally Identifiable Information (PII) on customers from the CRM system in order to do sales analysis, and has his or her laptop stolen. In this case, there could be Primary Loss to the organization, and there might also be Secondary Losses associated with reactions by the individuals whose data is compromised.

The Open FAIR Body of Knowledge is designed to help you to ask the right questions to determine the asset at risk (is it the laptop itself, or the data?), the magnitude of loss, the skill level and motivations of the attacker, the resistance strength of any security controls in place, the frequency of occurrence of the threat and of an actual loss event, and other factors that contribute to the overall level of risk for any specific risk scenario.
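To make that concrete, here is a rough sketch of how an analyst might turn answers to those questions into numbers. It collapses the Open FAIR taxonomy to loss event frequency and loss magnitude (primary plus secondary), and every range below is a hypothetical input that a real analysis would have to calibrate:

    import random

    def estimate(low, mode, high):
        # Triangular distributions stand in for calibrated expert estimates.
        return random.triangular(low, high, mode)

    def annual_loss():
        lef = estimate(0.1, 0.5, 2.0)               # loss events (laptops lost with data exposed) per year
        primary = estimate(1_000, 20_000, 200_000)  # per-event cost: replacement, response, notification
        secondary = estimate(0, 5_000, 1_000_000)   # stakeholder reactions: fines, churn, lawsuits
        return lef * (primary + secondary)

    losses = sorted(annual_loss() for _ in range(10_000))
    print("Median annual loss exposure:", round(losses[5_000]))
    print("90th percentile:", round(losses[9_000]))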

In our next blog in this series, we will consider 5 reasons why you should use The Open FAIR Body of Knowledge for Risk Analysis.

The Open FAIR Body of Knowledge consists of the following Open Group standards:

  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

These can be downloaded from The Open Group publications catalog at http://www.opengroup.org/bookstore/catalog.

Our other publications include a Pocket Guide and a Certification Study Guide.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

 

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.

The Open Group Panel: Internet of Things – Opportunities and Obstacles

Below is the transcript of The Open Group podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data.

Listen to the podcast.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with The Open Group Boston 2014 conference, held July 21 in Boston.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow.

We’re going to now specifically delve into the Internet of Things with a panel of experts. The conference has examined how Open Platform 3.0™ leverages the combined impacts of cloud, big data, mobile, and social. But to each of these now we can add a new cresting wave of complexity and scale as we consider the rapid explosion of new devices, sensors, and myriad endpoints that will be connected using internet protocols, standards and architectural frameworks.

This means more data, more cloud connectivity and management, and an additional tier of “things” that are going to be part of the mobile edge — and extending that mobile edge ever deeper into even our own bodies.

When we think about inputs to these social networks — that’s going to increase as well. Not only will people be tweeting; your devices could very well tweet, too, using social networks to communicate. Perhaps your toaster will soon be sending you a tweet about your English muffins being ready each morning.

The Internet of Things is more than the “things” – it means a higher order of software platforms. For example, if we are going to operate data centers with new dexterity thanks to software-defined networking (SDN) and storage (SDS) — indeed, the entire data center being software-defined (SDDC) — then why not a software-defined automobile, or factory floor, or hospital operating room — or even a software-defined city block or neighborhood?

And so how does this all actually work? Does it easily spin out of control? Or does it remain under proper management and governance? Do we have unknown unknowns about what to expect with this new level of complexity, scale, and volume of input devices?

Will architectures arise that support the numbers involved, ensure interoperability, and provide governance for the Internet of Things — rather than just letting each type of device do its own thing?

To help answer some of these questions, The Open Group assembled a distinguished panel to explore the practical implications and limits of the Internet of Things. So please join me in welcoming Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC, and a primary representative to the Industrial Internet Consortium; Penelope Gordon, Emerging Technology Strategist at 1Plug Corporation; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM; and Dave Lounsbury, Chief Technology Officer at The Open Group.

Jean-Francois, we have heard about this notion of “cities as platforms,” and I think the public sector might offer us some opportunity to look at what is going to happen with the Internet of Things, and then extrapolate from that to understand what might happen in the private sector.

Hypothetically, the public sector has a lot to gain. It doesn’t have to operate within the same confines of commercial market development, the profit motive, and that sort of thing. Tell us a little bit about the opportunity in the public sector for smart cities.

Jean-Francois Barsoum: It’s immense. The first thing I want to do is link to something that Marshall Van Alstyne (Professor at Boston University and Researcher at MIT) had talked about, because I was thinking about his way of approaching platforms and thinking about how cities represent an example of that.

You don’t have customers; you have citizens. Cities are starting to see themselves as platforms, as ways to communicate with their customers, their citizens, to get information from them and to communicate back to them. But the complexity with cities is that, as good a platform as they could be, they’re relatively rigid. They’re legislated into existence and what they’re responsible for is written into law. It’s not really a market.

Chris Harding (Forum Director of The Open Group Open Platform 3.0) earlier mentioned, for example, water and traffic management. Cities could benefit greatly by managing traffic a lot better.

Part of the issue is that you might have a state or provincial government that looks after highways. You might have the central part of the city that looks after arterial networks. You might have a borough that would look after residential streets, and these different platforms end up not talking to each other.

They gather their own data. They put in their own widgets to collect information that concerns them, but do not necessarily share with their neighbor. One of the conditions that Marshall said would favor the emergence of a platform had to do with how much overlap there would be in your constituents and your customers. In this case, there’s perfect overlap. It’s the same citizen, but they have to carry an Android and an iPhone, despite the fact it is not the best way of dealing with the situation.

The complexities are proportional to the amount of benefit you could get if you could solve them.

Gardner: So more interoperability issues?

Barsoum: Yes.

More hurdles

Gardner: More hurdles, and when you say commensurate, you’re saying that the opportunity is huge, but the hurdles are huge and we’re not quite sure how this is going to unfold.

Barsoum: That’s right.

Gardner: Let’s go to an area where the opportunity outstrips the challenge: manufacturing. Said, what is the opportunity for the software-defined factory floor to realize huge efficiencies and apply algorithmic benefits to how management occurs across the domains of supply chain, distribution, and logistics? It seems to me that this is a no-brainer. It’s such an opportunity that the solution must be found.

Said Tabet: When it comes to manufacturing, the opportunities are probably much bigger. It’s where we can see a lot of progress that has already been made, and work is still going on. There are two ways to look at it.

One is the internal side of it, where you have improvements of business processes. For example, similar to what Jean-Francois said, in a lot of the larger companies that have factories all around the world, you’ll see such improvements at the individual factory level. You still have those silos at that level.

Now with this new technology, with this connectedness, those improvements are going to be made across factories, and there’s a learning aspect to it in terms of trying to manage that data. In fact, they do a better job. We still have to deal with interoperability, of course, and additional issues that could be jurisdictional, etc.

However, there is that learning that allows them to improve their processes across factories. Maintenance is one of them, as well as creating new products, and connecting better with their customers. We can see a lot of examples in the marketplace. I won’t mention names, but there are lots of them out there with the large manufacturers.

Gardner: We’ve had just-in-time manufacturing and lean processes for quite some time, trying to compress the supply chain and distribution networks, but these haven’t necessarily been done through public networks, the internet, or standardized approaches.

But if we’re to benefit, we’re going to need to be able to be platform companies, not just product companies. How do you go from being a proprietary set of manufacturing protocols and approaches to this wider, standardized interoperability architecture?

Tabet: That’s a very good question, because now we’re talking about that connection to the customer. With the airline and the jet engine manufacturer, for example, when the plane lands and there has been some monitoring of the activity during the whole flight, at that moment, they’ll get that data made available. There could be improvements and maybe solutions available as soon as the plane lands.

Interoperability

That requires interoperability. It requires Open Platform 3.0, for example. If you don’t have open platforms, then you’ll deal with the same hurdles in terms of proprietary technologies and integration in a silo-based manner.

Gardner: Penelope, you’ve been writing about the obstacles to decision-making that might become apparent as big data becomes more prolific and people try to capture all the data about all the processes and analyze it. That’s a little bit of a departure from the way we’ve made decisions in organizations, public and private, in the past.

Of course, one of the bigger tenets of Internet of Things is all this great data that will be available to us from so many different points. Is there a conundrum of some sort? Is there an unknown obstacle for how we, as organizations and individuals, can deal with that data? Is this going to be chaos, or is this going to be all the promises many organizations have led us to believe around big data in the Internet of Things?

Penelope Gordon: It’s something that has just been accelerated. This is not a new problem in terms of the decision-making styles not matching the inputs that are being provided into the decision-making process.

Former US President Bill Clinton was known for delaying making decisions. He’s a head-type decision-maker, and so he would always want more data and more data. That just gets into a never-ending loop, because as people collect data for him, there is always more data that you can collect, particularly on the quantitative side. Whereas, if it is distilled down and presented very succinctly and then balanced with the qualitative, that allows intuition to come to the fore, and you can make optimal decisions in that fashion.

Conversely, if you have someone who is a heart-type or gut-type decision-maker and you present them with a lot of data, their first response is to ignore the data. It’s just too much for them to take in. Then you end up going completely with whatever you feel is correct, whatever your instinct says is the correct decision. If you’re talking about strategic decisions, where you’re making a decision that’s going to influence your direction five years down the road, that could be a very wrong decision to make, a very expensive decision, and, as you said, it could be chaos.

It brings to mind Dr. Seuss’s The Cat in the Hat, with Thing One and Thing Two. So, as we talk about the Internet of Things, we need to keep in mind that we need some sort of structure to tie this back to, and an understanding of what we are trying to do with these things.

Gardner: Openness is important, and governance is essential. Then, we can start moving toward higher-order business platform benefits. But, so far, our panel has been a little bit cynical. We’ve heard that the opportunity and the challenges are commensurate in the public sector and that in manufacturing we’re moving into a whole new area of interoperability, when we think about reaching out to customers and having a boundary that is managed between internal processes and external communications.

And we’ve heard that an overload of data could become a very serious problem and that we might not get benefits from big data through the Internet of Things, but perhaps even stumble and have less quality of decisions.

So, Dave Lounsbury of The Open Group, will the same level of standardization work? Do we need a new type of standards approach, a different type of framework, or is this a natural course from what we have done in the past?

Different level

Dave Lounsbury: We need to look at the problem at a different level than we institutionally think about an interoperability problem. The Internet of Things is riding two very powerful waves. One is Moore’s Law: these sensors, actuators, and network components get smaller and smaller. Now we can put Ethernet in a light switch, a tag, or something like that.

The other is Metcalfe’s Law, which says that the value of all this connectivity goes up with the square of the number of connected points, and that applies both to the connection of the things and, more importantly, to the connection of the data.
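
As a rough illustration of that quadratic effect, the short sketch below counts the distinct pairwise connections among n endpoints; the endpoint counts are arbitrary examples, not forecasts.

```python
# Metcalfe's Law illustrated: potential pairwise connections grow with n^2,
# while the number of endpoints grows only linearly. Counts are arbitrary.

for n in [10, 100, 1_000, 10_000]:
    potential_links = n * (n - 1) // 2  # distinct pairs among n endpoints
    print(f"{n:>6} endpoints -> {potential_links:>12,} potential connections")
```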

The trouble is, as we have said, that there’s so much data here. The question is how you manage it and how you keep control over it, so that you actually get business value from it. That’s going to require this new concept of a platform, not only to connect the data, but to aggregate it, correlate it, as you said, and present it in ways that people can make decisions however they want.

Also, because of the raw volume, we have to start thinking about machine agency. We have to think about the system actually making the routine decisions or giving advice to the humans who are actually doing it. Those are important parts of the solution beyond just a simple “How do we connect all the stuff together?”

Gardner: We might need a higher order of intelligence, now that we have reached this border of what we can do with our conventional approaches to data, information, and process.

Thinking about where this works best first in order to then understand where it might end up later, I was intrigued again this morning by Professor Van Alstyne. He mentioned that in healthcare, we should expect major battles, that there is a turf element to this, that the organization, entity or even commercial corporation that controls and manages certain types of information and access to that information might have some very serious platform benefits.

The openness element now is something to look at, and I’ll come back to the public sector. Is there a degree of openness that we could legislate or regulate to require enough control to prevent the next generation of lock-in, which might not be to a platform but to access to data, information, and endpoints? Where in the public sector might we look to a leadership position to establish needed openness, and not just interoperability?

Barsoum: I’m not even sure where to start answering that question. To take healthcare as an example, I certainly didn’t write the bible on healthcare IT systems and if someone did write that, I think they really need to publish it quickly.

We have a single-payer system in Canada, and you would think that would be relatively easy to manage. There is one entity that manages paying the doctors, and everybody gets covered the same way. Therefore, the data should be easily shared among all the players and it should be easy for you to go from your doctor, to your oncologist, to whomever, and maybe to your pharmacy, so that everybody has access to this same information.

We don’t have that and we’re nowhere near having that. If I look to other areas in the public sector, areas where we’re beginning to solve the problem are ones where we face a crisis, and so we need to address that crisis rapidly.

Possibility of improvement

In the transportation infrastructure, we’re getting to the point where the infrastructure we have just doesn’t meet the needs. There’s a constraint in terms of money, and we can’t put much more money into the structure. Then, there are new technologies coming in. Chris had talked about driverless cars earlier. They’re essentially throwing a wrench into the works, or maybe offering the possibility of improvement.

On any given piece of infrastructure, you could fit twice as many driverless cars as cars with human drivers in them. Given that set of circumstances, the governments are going to find they have no choice but to share data in order to be able to manage those. Are there cases where we could go ahead of a crisis in order to manage it? I certainly hope so.

Gardner: How about allowing some of the natural forces of marketplaces, behavior, groups, maybe even chaos theory, where if sufficient openness is maintained there will be some kind of a pattern that will emerge? We need to let this go through its paces, but if we have artificial barriers, that might be thwarted or power could go to places that we would regret later.

Barsoum: I agree. People often focus on structure: the governance doesn’t work, so we should find some way to change the governance of transportation. London has done a very good job of that. They’ve created something called Transport for London that manages everything related to transportation. It doesn’t matter if it’s taxis, bicycles, pedestrians, boats, cargo trains, or whatever; they manage it.

You could do that, but it requires a lot of political effort. The other way to go about doing it is saying, “I’m not going to mess with the structures. I’m just going to require you to open and share all your data.” So, you’re creating a new environment where the governance, the structures, don’t really matter so much anymore. Everybody shares the same data.

Gardner: Said, to the private sector example of manufacturing, you still want to have a global fabric of manufacturing capabilities. This is requiring many partners to work in concert, but with a vast new amount of data and new potential for efficiency.

How do you expect that openness will emerge in the manufacturing sector? How will interoperability play when you don’t have to wait for legislation, but you do need to have cooperation and openness nonetheless?

Tabet: It comes back to the question you asked Dave about standards. I’ll just give you some examples. For example, in the automotive industry, there have been some activities in Europe around specific standards for communication.

The Europeans came to the US and started to have discussions, and the Japanese have interest, as do the Chinese. Because there is a common interest in creating these new models from a business standpoint, these challenges have to be dealt with together.

Managing complexity

When we talk about the amounts of data, what we now call big data, and what we are going to see in about five years or so, you can’t even imagine it. How do we manage that complexity, which is multidimensional? We talked about this sort of platform and, further, the capability and the data that will be there. From that point of view, openness is the only way to go.

There’s no way that we can stay away from it and still be able to work in silos in that new environment. There are lots of things that we take for granted today. I invite some of you to go back and read articles from 10 years ago that tried to predict the future of technology in the 21st century. Look at your smartphones. Adoption is there, because the business models are there, and we can see that progress moving forward.

Collaboration is a must, because the problem is multidimensional. It’s not just manufacturing, like jet engines, car manufacturing, or agriculture, where you have very specific areas. They really have to work with their customers and the customers of their customers.

Gardner: Dave, I have a question for both you and Penelope. I’ve seen some instances where there has been a cooperative endeavor for accessing data, but then making it available as a service, whether it’s an API, a data set, access to a data library, or even an analytics application set. The Ocean Observatories Initiative is one example: it has created a sensor network across the oceans and makes the resulting data available.

Do you expect to see an intermediary organizational level that gets between the sensors and the consumers, or even the controllers of the processes? Is there a model inherent in that that we might look to, something like that cooperative data structure, which in some ways creates structure and governance but also allows for freedom? It’s a sort of entity that we don’t have yet in many organizations or ecosystems, and that needs to evolve.

Lounsbury: We’re already seeing that in the marketplace. If you look at the commercial and social Internet of Things area, we’re starting to see intermediaries or brokers cropping up that will connect the silo of my Android ecosystem to the ecosystem of package tracking or something like that. There are dozens and dozens of these cropping up.

In fact, you now see APIs even into the silo of what you might consider a proprietary system, and what people are doing is building a layer on top of those APIs that intermediates the data.

This is happening on a point-to-point basis now, but you can easily see the path forward. That’s going to expand to large amounts of data that people will share through a third party. I can see this being a whole new emerging market much as what Google did for search. You could see that happening for the Internet of Things.

Gardner: Penelope, do you have any thoughts about how that would work? Is there a mutually assured benefit that would allow people to want to participate and cooperate with that third entity? Should they have governance and rules about good practices, best practices for that intermediary organization? Any thoughts about how data can be managed in this sort of hierarchical model?

Nothing new

Gordon: First, I’ll contradict that a little bit. To me, a lot of this is nothing new, particularly coming from a marketing-strategy perspective with business intelligence (BI). Having various types of intermediaries who are not only collecting the data, but then doing what we call data hygiene, synthesis, and even correlation of the data, has been around for a long time.

It was interesting, when I looked at a recent listing of the big-data companies, that some notable companies were excluded from that list — companies like Nielsen. Nielsen’s been collecting data for a long time. Harte-Hanks is another one that collects a tremendous amount of information and sells it to companies.

That leads into another part of it. We’re seeing an increasing amount of opportunity that involves taking public sources of data and then providing synthesis on them. What remains to be seen is how much of that output is going to be provided for “free”, as opposed to “fee”. We’re going to see a lot more companies figuring out creative ways of extracting more value out of data and then charging directly for it, rather than using it as an indirect way of generating traffic.

Gardner: We’ve seen examples of how this has been in place. Does it scale, and does the governance, or lack of governance, that might be in the market now sustain us through the transition into Platform 3.0 and the Internet of Things?

Gordon: That aspect is the lead-in to “you get what you pay for”. If you’re using a free source of data, you don’t have any guarantee that it comes from authoritative sources. Often, what we’re getting now is something somebody put in a blog post, which then gets referenced elsewhere, but with nothing to go back to. It’s a shaky supply chain for data.

You need to think about the data supply and that is where the governance comes in. Having standards is going to increasingly become important, unless we really address a lot of the data illiteracy that we have. A lot of people do not understand how to analyze data.

One aspect of that is that a lot of people expect we have to do full-population surveys, as opposed to representative sampling, which gives much more accurate and much more cost-effective collection of data. That’s just one example, and we do need a lot more in governance and standards.
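
To illustrate the sampling point, here is a minimal sketch; the synthetic “population” of readings is an assumption made up purely for the example.

```python
import random

# Representative sampling vs. a full-population survey: a 0.1% random sample
# estimates the population mean closely at a fraction of the collection cost.
# The synthetic population below is invented purely for illustration.

random.seed(42)
population = [random.gauss(100, 15) for _ in range(1_000_000)]

full_mean = sum(population) / len(population)   # the "full population survey"
sample = random.sample(population, 1_000)       # a 0.1% representative sample
sample_mean = sum(sample) / len(sample)

print(f"full-population mean:   {full_mean:.2f}")
print(f"1,000-item sample mean: {sample_mean:.2f}")  # typically within about half a unit
```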

Gardner: What would you like to see changed most in order for the benefits and rewards of the Internet of Things to develop and overcome the drawbacks, the risks, the downside? What, in your opinion, would you like to see happen to make this a positive, rapid outcome? Let’s start with you Jean-Francois.

Barsoum: There are things that I have seen cities start to do now. There are a couple of examples: Philadelphia is one, and Barcelona does this too. Rather than doing the typical request for proposal (RFP), where they say, “This is the kind of solution we’re looking for, and here are our parameters. Can you tell us how much it is going to cost to build?” they come to you with the problem and say, “Here is the problem I want to fix. Here are my priorities, and you’re at liberty to decide how best to fix the problem, but tell us how much that would cost.”

If you do that and combine it with access to the public data that is available — if the public sector opens up its data — you end up with a very powerful combination that liberates a lot of creativity. You can create a lot of new business models. We need to see much more of that. That’s where I would start.

More education

Tabet: I agree with Jean-Francois on that. What I’d like to add is that we need to push this a little further. We need more education, to your point earlier, around the data and the capabilities.

We need these platforms that we can leverage a little bit further with the analytics, with machine learning, and with all of these capabilities that are out there. We have to also remember, when we talk about the Internet of Things, it is things talking to each other.

So it is not just human-to-machine communication; machine-to-machine automation will go further than that, and we need more innovation and more work in this area, particularly more activity from governments. We’ve seen some, but it is a little bit frail from that point of view right now.

Gardner: Dave Lounsbury, thoughts about what needs to happen in order to keep this on the tracks?

Lounsbury: We’ve touched on a lot of them already. Thank you for mentioning the machine-to-machine part, because there are plenty of projections showing that it’s going to be the dominant form of Internet communication, probably within the next four years.

So we need to start thinking about that and moving beyond our traditional models of humans talking through interfaces to a set of services. We need to identify the building blocks of capability that you need to manage, not only for the information flow and the skilled people who will produce it, but also for the machine-to-machine interactions.

Gordon: I’d like to see not so much focus on data management, but focus on what the data is managing and helping us to do. Focusing on the machine-to-machine communication and the devices is great, but the focus should not be on the devices or the machines; it should be on what they can accomplish by communicating, what you can accomplish with the devices, and then reverse-engineer from that.

Gardner: Let’s go to some questions from the audience. The first one asks about the higher order of intelligence we mentioned earlier. It could be artificial intelligence, perhaps, but they ask whether that’s really the issue. Is the nature of the data substantially different, or are we just creating more of the same, so that it is a storage, plumbing, and processing problem? What, if anything, is lacking in our current analytics capabilities that is holding us back from exploiting the Internet of Things?

Gordon: I’ve definitely seen that. That has a lot to do with not setting your decision objectives and your decision criteria ahead of time so that you end up collecting a whole bunch of data, and the important data gets lost in the mix. There is a term “data smog.”

Most important

The solution is to figure out, before you go collecting data, what data is most important to you. If you can’t collect certain kinds of data that are important to you directly, then think about how to indirectly collect that data and how to get proxies. But don’t try to go and collect all the data for that. Narrow in on what is going to be most important and most representative of what you’re trying to accomplish.

Gardner: Does anyone want to add to this idea of understanding what current analytics capabilities are lacking, if we have to adopt and absorb the Internet of Things?

Barsoum: There is one element around projection into the future. We’ve been very good at analyzing historical information to understand what’s been happening in the past. We need to become better at projecting into the future, and obviously we’ve been doing that for some time already.

But so many variables are changing. Just to take the driverless car as an example. We’ve been collecting data from loop detectors, radar detectors, and even Bluetooth antennas to understand how traffic moves in the city. But we need to think harder about what that means and how we understand the city of tomorrow is going to work. That requires more thinking about the data, a little bit like what Penelope mentioned, how we interpret that, and how we push that out into the future.

Lounsbury: I have to agree with both. It’s not about statistics. We can use historical data. It helps with a lot of things, but one of the major issues we still deal with today is the question of semantics, the meaning of the data. This goes back to your point, Penelope, around the relevance and the context of that information – how you get what you need when you need it, so you can make the right decisions.

Gardner: Our last question from the audience goes back to Jean-Francois’s comments about the Canadian healthcare system. I imagine it applies to almost any healthcare system around the world. But it asks why interoperability is so difficult to achieve, when we have the power of the purse, that is the market. We also supposedly have the power of the legislation and regulation. You would think between one or the other or both that interoperability, because the stakes are so high, would happen. What’s holding it up?

Barsoum: There are a couple of reasons. One, in the particular case of healthcare, is privacy, but that is one that you could see arising elsewhere. As soon as you talk about interoperability in the health sector, people start wondering where their data is going to go, how accessible it is going to be, and to whom.

You need to put a certain number of controls over top of that. What is happening in parallel is that you have people who own some data, who believe they have some power from owning that data, and that they will lose that power if they share it. That can come from doctors, hospitals, anywhere.

So there’s a certain amount of change management you have to get beyond. Everybody has to focus on the welfare of the patient. They have to understand that that has to be the priority, but you also have to understand the welfare of the different stakeholders in the system and make sure that you do not forget about them, because if you do, they will find some way to slow you down.

Use of an ecosystem

Lounsbury: To me, that’s a perfect example of what Marshall Van Alstyne talked about this morning. It’s the change from focus on product to a focus on an ecosystem. Healthcare traditionally has been very focused on a doctor providing product to patient, or a caregiver providing a product to a patient. Now, we’re actually starting to see that the only way we’re able to do this is through use of an ecosystem.

That’s a hard transition. It’s a business-model transition. I will put in a plug here for The Open Group Healthcare vertical, which is looking at that from an architecture perspective. I see that our Forum Director Jason Lee is over here, so if you want to explore that more, please see him.

Gardner: I’m afraid we will have to leave it there. We’ve been discussing the practical implications of the Internet of Things and how it is now set to add a new dimension to Open Platform 3.0 and Boundaryless Information Flow.

We’ve heard how new thinking about interoperability will be needed to extract the value and orchestrate order out of the chaos of such vast new scales of inputs and whole new categories of information.

So with that, a big thank you to our guests: Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC; Penelope Gordon, Emerging Technology Strategist at 1Plug Corp.; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM, and Dave Lounsbury, Chief Technology Officer at The Open Group.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow at The Open Group Conference, recently held in Boston. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript.

Transcript of The Open Group podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2014. All rights reserved.

Q&A with Marshall Van Alstyne, Professor, Boston University School of Management, and Research Scientist, MIT Center for Digital Business

By The Open Group

The word “platform” has become a nearly ubiquitous term in the tech and business worlds these days. From “Platform as a Service” (PaaS) to IDC’s Third Platform to The Open Group Open Platform 3.0™ Forum, the concept of platforms and building technology frames and applications on top of them has become the next “big thing.”

Although the technology industry tends to conceive of “platforms” as the vehicle that is driving trends such as mobile, social networking, the Cloud and Big Data, Marshall Van Alstyne, Professor at Boston University’s School of Management and a Research Scientist at the MIT Center for Digital Business, believes that the radical shifts that platforms bring are not just technological.

We spoke with Van Alstyne prior to The Open Group Boston 2014, where he presented a keynote, about platforms, how they have shifted traditional business models and how they are impacting industries everywhere.

The title of your session at the Boston conference was “Platform Shift – How New Open Business Models are Changing the Shape of Industry.” How would you define both “platform” and “open business model”?

I think of “platform” as a combination of two things. One, a set of standards or components that folks can take up and use for production of goods and services. The second thing is the rules of play, or the governance model – who has the ability to participate, how do you resolve conflict, and how do you divide up the royalty streams, or who gets what? You can think of it as the two components of the platform—the open standard together with the governance model. The technologists usually get the technology portion of it, and the economists usually get the governance and legal portions of it, but you really need both of them to understand what a ‘platform’ is.

What is the platform allowing then and how is that different from a regular business model?

The platform allows third parties to conduct business using system resources so they can actually meet and exchange goods across the platform. Wonderful examples of that include AirBnB where you can rent rooms or you can post rooms, or eBay, where you can sell goods or exchange goods, or iTunes where you can go find music, videos, apps and games provided by others, or Amazon where third parties are even allowed to set up shop on top of Amazon. They have moved to a business model where they can take control of the books in addition to allowing third parties to sell their own books and music and products and services through the Amazon platform. So by opening it up to allow third parties to participate, you facilitate exchange and grow a market by helping that exchange.

How does this relate to the concept the technology industry is defining as the “third platform”?

I think of it slightly differently. The tech industry uses mobile and social and cloud and data to characterize it. In some sense this view offers those as the attributes that characterize platforms or the knowledge base that enable platforms. But we would add to that the economic forces that actually shape platforms. What we want to do is give you some of the strategic tools, the incentives, the rules that will actually help you control their trajectory by helping you improve who participates and then measure and improve the value they contribute to the platform. So a full ecosystem view is not just the technology and the data, it also measures the value and how you divide that value. The rules of play really become important.

I think the “third platform” offers marvelous concepts and attributes but you also need to add the economics to it: Why do you participate, who gets what portions of the value, and who ultimately owns control.

Who does control the platform then?

A platform has multiple parts. Determining who controls what part is the art and design of the governance model. You have to set up control in the right way to motivate people to participate. But before we get to that, let’s go back and complete the idea of what’s an ‘open platform.’

To define an open platform, consider both the right of access and the right to manipulate platform resources, then consider granting those rights to four different parties. One is the user—can they access one another, can they access data, can they access system resources? Another group is developers—can they manipulate system resources, can they add new features to it, can they sell through the platform? The third group is the platform providers. You often think of them as those folks that facilitate access across the platform. To give you an example, iTunes is a single monolithic store, so the provider is simply Apple, but Android, in contrast, allows multiple providers, so there’s a Samsung Android store, an LG Android store, a Google Android store; there’s even an Amazon version that uses a different version of Android. So that platform has multiple providers, each with rights to access users. The fourth group is the party that controls the underlying property rights, who owns the IP. The ability to modify the underlying standard, and also the rights of access for other parties, is the bottom-most layer.

So to answer the question of what is ‘open,’ you have to consider the rights of access of all four groups—the users, developers, the providers and IP rights holders, or sponsors, underneath.

Popping back up a level, we’re trying to motivate different parties to participate in the ecosystem. So what do you give the users? Usually it’s some kind of value. What do you give developers? Usually it’s some set of SDKs and APIs, but also some level of royalties. It’s fascinating. If you look back historically, Amazon initially tried a publishing-style royalty where they took 70% and gave the minority 30% back to developers. They found that didn’t fly very well, and they had to fall back to an app-store or software-style royalty, where they take a lower percentage. Apple, for example, takes 30 percent, and Amazon is now close to that. You see royalties ranging anywhere from a few percent—an example is credit cards—all the way up to iStockphoto, where they take roughly 70 percent. That’s an extremely high rate, and one that I don’t recommend. We were just contracting for designs at 99Designs, and they take a 20 percent cut. That’s probably more realistic, but lower might perhaps be even better—you can create a stronger network effect if that’s the case.
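
The arithmetic behind those take rates is simple but worth seeing side by side. The sketch below is illustrative only: the percentages echo the examples above, and the sale price is a made-up figure.

```python
# How the platform's take rate determines what a producer keeps of each sale.
# Percentages echo the examples above; the $9.99 price is a made-up figure.

sale_price = 9.99

for platform, take_rate in [
    ("credit-card-style", 0.03),
    ("99Designs-style", 0.20),
    ("app-store-style", 0.30),
    ("iStockphoto-style", 0.70),
]:
    producer_keeps = sale_price * (1 - take_rate)
    print(f"{platform:>18}: platform keeps {take_rate:.0%}, producer keeps ${producer_keeps:.2f}")
```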

Again, the real question of control is how you motivate third parties to participate and add value. If you are allowing them to use resources to create value and keep a lot of that value, then they’re more motivated to participate, to invest, to bring their resources to your platform. If you take most of the value they create, they won’t participate. They won’t add value. One of the biggest challenges for open platforms—what you might call the ‘Field of Dreams’ approach—is that most folks open their platform and assume ‘if you build it, they will come,’ but you really need to reward them to do so. Why would they want to come build with you? There are numerous instances of platforms that opened but where no developer chose to add value—the ecosystem is too small. You have to solve the chicken-and-egg problem: if you don’t have users, developers don’t want to build for you, but if you don’t have developer apps, then why do users participate? So you’ve got a huge feedback problem. And that’s where the economics become critical: you must solve the chicken-and-egg problem to build and roll out platforms.

It’s not just a technology question; it’s also an economics and rewards question.

Then who is controlling the platform?

The answer depends on the type of platform. Giving different groups a different set of rights creates different types of platform. Consider the four different parties: users, developers, providers, and sponsors. At one extreme, the Apple Mac platform of the 1980s reserved most rights for development, for producing hardware (the provider layer), and for modifying the IP (the sponsor layer) all to Apple. Apple controlled the platform and it remained closed. In contrast, Microsoft relaxed platform control in specific ways. It licensed to multiple providers, enabling Dell, HP, Compaq and others to sell the platform. It gave developers rights of access to SDKs and APIs, enabling them to extend the platform. These control choices gave Microsoft more than six times the number of developers and more than twenty times the market share of Apple at the high point of Microsoft’s dominance of desktop operating systems. Microsoft gave up some control in order to create a more inclusive platform and a much bigger market.

Control is not a single concept. There are many different control rights you can grant to different parties. For example, you often want to give users an ability to control their own data. You often want to give developers intellectual property rights for the apps that they create and often over the data that their users create. You may want to give them some protections against platform misappropriation. Developers resent it if you take their ideas. So if the platform sees a really clever app that’s been built on top of its platform, what’s the guarantee that the platform simply doesn’t take it or build a competing app? You need to protect your developers in that case. Same thing’s true of the platform provider—what guarantees do they provide users for the quality of content provided on their ecosystem? For example, the Android ecosystem is much more open than the iPhone ecosystem, which means you have more folks offering stores. Simultaneously, that means that there are more viruses and more malware in Android, so what rights and guarantees do you require of the platform providers to protect the users in order that they want to participate? And then at the bottom, what rights do other participants have to control the direction of the platform growth? In the Visa model, for example, there are multiple member banks that help to influence the general direction of that credit card standard. Usually the most successful platforms have a single IP rights holder, but there are several examples that have multiple IP rights holders.

So, in the end control defines the platform as much as the platform defines control.

What is the “secret” of the Internet-driven marketplace? Is that indeed the platform?

The secret is that, in effect, the goal of the platform is to increase transaction volume and value. If you can do that—and we can give you techniques for doing it—then you can create massive scale. Increasing the transaction value and transaction volume across your platform means that the owner of the platform doesn’t have to be the sole source of content and new ideas provided on the platform. If the platform owner is the only source of value, then the owner is also the bottleneck. The goal is to consummate matches between producers and consumers of value. You want to help users find the content, find the resources, find the other people that they want to meet across your platform. In Apple’s case, you’re helping them find the music, the video, the games, and the apps that they want. In AirBnB’s case, you’re helping them find the rooms that they want, or Uber, you’re helping them find a driver. On Amazon, the book recommendations help you find the content that you want. In all the truly successful platforms, the owner of the platform is not providing all of that value. They’re enabling third parties to add that value, and that’s one reason why The Open Group’s ideas are so important—you need open systems for this to happen.

What’s wrong with current linear business models? Why is a network-driven approach superior?

The fundamental reason why the linear business model no longer works is that it does not manage network effects. Network effects allow you to build platforms where users attract other users, and you get feedback that grows your system. As more users join your platform, more developers join your platform, which attracts more users, which attracts more developers. You can see it on any of the major platforms. This is also true of Google. As advertisers use Google Search, the algorithms get better, people find the content that they want, so more advertisers use it. As more drivers join Uber, more people are happier passengers, which attracts more drivers. The more merchants accept Visa, the more consumers are willing to carry it, which attracts more merchants, which attracts more consumers. You get positive feedback.
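
A toy simulation can make that feedback loop visible. This is not Van Alstyne’s model; the growth coefficients are arbitrary assumptions chosen only to show how cross-side attraction compounds over time.

```python
# Toy cross-side network effect: users attract developers and developers
# attract users each period. Both coefficients are arbitrary assumptions.

users, developers = 1_000.0, 10.0
for period in range(1, 11):
    new_users = 2.0 * developers    # each developer's output draws in some users
    new_developers = 0.01 * users   # a larger audience draws in more developers
    users += new_users
    developers += new_developers
    print(f"period {period:>2}: {users:>8,.0f} users, {developers:>6,.1f} developers")
```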

The consequence of that is that you tend to get market concentration—you get winner-take-all markets. That’s where platforms dominate. So you have a few large firms within a given category, whether this is rides or books or hotels or auctions. Further, once you get network effects changing your business model, the linear insights into pricing, inventory management, innovation, and strategy break down.

When you have these multi-sided markets, pricing breaks down because you often price differently to one side than to the other, because one side attracts the other. Inventory management practices break down because you’re selling inventory that you don’t even own. Your R&D strategies break down because now you’re motivating innovation and research outside the boundaries of the firm, as opposed to inside the internal R&D group. And your strategies break down because you’re not just looking for cost leadership or product differentiation; now you’re looking to shape the network effects as you create barriers to entry.

One of the things that I really want to argue strenuously is that in markets where platforms will emerge, platforms beat product every time. So the platform business model will inevitably beat the linear, product-based business model. Because you’re harnessing new forces in order to develop a different kind of business model.

Think of it the following way: imagine that value is growing as users consume your product. Think of any of the major platforms: as more folks use Google, search gets better; recommendations improve on Amazon; and it gets easier to find a ride on Uber, so more folks want to be on there. It is easier to scale network effects outside your business than inside your business. There are simply more people outside than inside. The moment that happens, the locus of control, the locus of innovation, moves from inside the firm to outside the firm. So the rules change. Pricing changes, your innovation strategies change, your inventory policies change, your R&D changes. You’re now managing resources outside the firm, rather than inside, in order to capture scale. This is different from the traditional industrial supply economies of scale.

Old systems are giving way to new systems. It’s not that the whole system breaks down; it’s simply that you’re looking to manage network effects and new business models. Another way to see this is that previously you were managing capital. In the industrial era, you were managing steel, you were managing large amounts of finance in banking, you were managing auto parts—huge supply economies of scale. In telecommunications, you were managing infrastructure. Now, you’re managing communities, and these are managed outside the firm. The value that’s been created at Facebook or WhatsApp or Instagram or any of the new acquisitions—it’s not the capital that’s critical, it’s the communities that are critical, and these are built outside the firm.

There is a lot of talk in the industry about the Nexus of Forces as Gartner calls it, or Third Platform (IDC). The Open Group calls it Open Platform 3.0. Your concept goes well beyond technology—how does Open Platform 3.0 enable new business models?

Those are the enablers—they’re, shall we say, necessary, but they’re not sufficient. You really must harness the economic forces in addition to those enablers—mobile, social, Cloud, data. You must manage communities outside the firm; that’s the mobile and the social element of it. But this also involves designing governance and setting incentives. How are you capturing users outside the organization, how are they contributing, how are they being motivated to participate, why are they spreading your products to their peers? The Cloud allows it to scale—so Instagram, WhatsApp, and others scale. Data allows you to “consummate the match.” You use that data to help people find what they need, to add value, so all of those things are the enablers. Then you have to harness the economics of the enablers to encourage people to do the right thing. You can see the correct intuition if you simply ask what happens if all you offer is a Cloud service and nothing more. Why will anyone use it? What’s the value to that system? If you open APIs to it, again, if you don’t have a user base, why are developers going to contribute? Developers want to reach users. Users want valuable functionality.

You must manage the motives and the value-add on the platform. New business models come from orchestrating not just the technology but also the third party sources of value. One of the biggest challenges is to grow these businesses from scratch—you’ve got the cold start chicken and egg problem. You don’t have network effects if you don’t have a user base, if you don’t have users, you don’t have network effects.

Do companies need to transform themselves into a “business platform” to succeed in this new marketplace? Are there industries immune to this shift?

There is a continuum of companies that are going to be affected. It starts at one end with companies that are highly information-intense—anything that’s an information-intensive business will be dramatically affected, and anything that’s a community- or fashion-based business will be dramatically affected. Those include companies involved in media and news, songs, music, video; all of those are going to be the canaries in the coal mine that see this first. Moving farther along will be those industries that require some sort of certification—law, medicine, and education—those, too, will be platformized, so the services industries will become platforms. Farther down are the ones that are heavily capital-intensive, where control of physical capital is paramount; those include trains and oil rigs and telecommunications infrastructure. Eventually those will be affected by platform business models to the extent that data helps them gain efficiencies or add value, but they will in some sense be the last to be affected. Look for the businesses where the cost side is shrinking in proportion to the delivery of value and where the network effects are rising as a proportional increase in value. Those forces will help you predict which industries will be transformed.

How can Enterprise Architecture be a part of this and how do open standards play a role?

The second part of that question is actually much easier. How do open standards play a role? Open standards make it much easier for third parties to attach and incorporate technology and features such that they can in turn add value. Open standards are essential to that happening. You do need to ask who controls those standards—is it completely open, or is it a proprietary standard, one that is published but not manipulable by a third party?

There are several things that Enterprise Architects need to do. One of these is to design modular components that are swappable, so that as better systems become available, the better systems can be swapped in. The second element will be to watch for components of value that should be absorbed into the platform itself. As an example, in operating systems, web browsing has effectively been absorbed into the platform, and streaming has been absorbed into the platform, so architects need to be aware of how that actually works. A third thing they need to do is talk to the legal team to see where third parties’ property rights can be protected so that they invest. One of the biggest mistakes that firms make is to simply assume that because they own the platform, because they have the rights of control, they can do what they please. If they do that, they risk alienating their ecosystems. So they should talk to their potential developers to incorporate developer concerns. One of my favorite examples is the Intel Architecture Lab, which has done a beautiful job of articulating the voices of developers in its own architectural plans. A fourth thing that can be done is an idea borrowed from SAP, which builds its Enterprise Architecture around an articulated 18-24 month roadmap: they say these are the features that are coming, so you can anticipate and build on those. It also gives you an idea of which features will be safe to build on, so you won’t lose the value you’ve created.

What can companies do to begin opening their business models and more easily architect that?

What they should do is consider the four groups articulated earlier: the users, the providers, the developers, and the sponsors; each serves a different role. Firms need to understand what their own role will be in order that they can open and architect the other roles within their ecosystem. They’ll also need to choose what levels of exclusivity they need to give their ecosystem partners in different slices of the business. They should also figure out which of those components they prefer to offer themselves as unique competencies, and where they need to seek third-party assistance, whether in new ideas, new resources, or even new marketplaces. Those factors will help guide businesses toward different kinds of partnerships, and they’ll have to be open to those kinds of partners. In particular, they should think about where they are most likely to be missing ideas or missing opportunities. Those technical and business areas should be opened so that third parties can take advantage of those opportunities and add value.

 

Professor Van Alstyne is one of the leading experts in network business models. He conducts research on information economics, covering such topics as communications markets, the economics of networks, intellectual property, social effects of technology, and productivity effects of information. As co-developer of the concept of “two sided networks” he has been a major contributor to the theory of network effects, a set of ideas now taught in more than 50 business schools worldwide.

Awards include two patents, National Science Foundation IOC, SGER, SBIR, iCorp and Career Awards, and six best paper awards. Articles or commentary have appeared in Science, Nature, Management Science, Harvard Business Review, Strategic Management Journal, The New York Times, and The Wall Street Journal.

Filed under architecture, Cloud, Conference, Data management, digital technologies, Enterprise Architecture, Governance, Open Platform 3.0, Standards, Uncategorized

Discussing Enterprise Decision-Making with Penelope Everall Gordon

By The Open Group

Most enterprises today are in the process of jumping onto the Big Data bandwagon. The promise of Big Data, as we’re told, is that if every company collects as much data as they can—about everything from their customers to sales transactions to their social media feeds—executives will have “the data they need” to make important decisions that can make or break the company. Not collecting and using your data, as the conventional wisdom has it, can have deadly consequences for any business.

As is often the case with industry trends, the hype around Big Data contains both a fair amount of truth and a fair amount of fuzz. The problem is that within most organizations, that conventional wisdom about the power of data for decision-making is usually just the tip of the iceberg when it comes to how and why organizational decisions are made.

According to Penelope Gordon, a consultant for 1Plug Corporation who was recently a Cloud Strategist at Verizon and was formerly a Service Product Strategist at IBM, that’s why big “D” (Data) needs to be put back into the context of enterprise decision-making. Gordon, who spoke at The Open Group Boston 2014, in the session titled “Putting the D Back in Decision” with Jean-Francois Barsoum of IBM, argues that a focus on collecting a lot of data has the potential to get in the way of making quality decisions. This is, in part, due to the overabundance of data that’s being collected under the assumption that you never know where there’s gold to be mined in your data, so if you don’t have all of it at hand, you may just miss something.

Gordon says that assuming the data will make decisions obvious also ignores the fact that decisions are ultimately made by people, and people usually make decisions based on their own biases. According to Gordon, there are three natural decision-making styles, corresponding to different personality types: heart, head, and gut. The greater the amount of data, the less likely it is to balance a person’s natural decision-making style.

Head types, Gordon says, naturally make decisions based on quantitative evidence. But head types often put off making a decision until more data can be collected, wanting more and more data so that they can make the best decision based on the facts. She cites former President Bill Clinton as a classic example of this type: during his presidency, he was famous for putting off decision-making in favor of gathering more and more data first, she says. Relying solely on quantitative data can also mean missing the other important inputs to an optimal decision, the qualitative (heart) or instinctive (gut) factors. Conversely, a gut type presented with too much data will likely just end up ignoring the data and acting on instinct, much like former President George W. Bush, Gordon says.

Gordon believes part of the reason that data and decisions are more disconnected than one might think is that IT and Marketing departments have become overly enamored with what technology can offer. These data providers need to step back and first examine the decision objectives, as well as the governance behind those decisions. Without understanding the organization’s decision-making processes and the dynamics of the decision-makers, it is difficult to make optimal and effective strategic recommendations, she says, because you don’t have the full picture of what the stakeholders will or will not accept in terms of a recommendation, data or no data.

Ideally, Gordon says, you want to get to a point where you can get to the best decision outcome possible by first figuring out the personal and organizational dynamics driving decisions within the organization, shifting the focus from the data to the decision for which the data is an input.

“…what you’re trying to do is get the optimal outcome, so your focus needs to be on the outcome, so when you’re collecting the data and assessing the data, you also need to be thinking about ‘how am I going to present this data in a way that it is going to be impactful in improving the decision outcomes?’ And that’s where the governance comes into play,” she said.

Governance is of particular importance now, Gordon says, because decisions are increasingly being made by individual departments, as when departments buy their own cloud-enabled services like sales force automation. In that case, an organization needs to have a roadmap in place, with compensation that incents decision-makers to adhere to that roadmap, and decision criteria for buying decisions, she said.

Gordon recommends that companies put in place 3-5 top criteria for each decision that needs to be made, so that they can ensure the decision objectives are met. This distillation of the metrics gives decision-makers a more comprehensible picture of their data, so that their decisions become neither too subjective nor disconnected from the data. Lower-level metrics can be nested under each of those top-level criteria to support a more nuanced valuation. For example, an organization that needs to find good partner candidates should score and rank potential partners (preferably in tiers) using decision criteria based on the characteristics of its most attractive partner, rather than simply assuming that the companies with the best reputations or biggest brands will be the best; doing so lets it identify the optimal candidates expeditiously.
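As a rough illustration of what such criteria-based scoring might look like in practice, here is a minimal sketch in Python. The criteria, weights, tier thresholds, and candidate scores are all invented for illustration and are not from Gordon’s talk.

```python
# Hypothetical sketch of criteria-based partner scoring: 3-5 weighted
# top-level criteria, with candidates ranked into tiers rather than
# compared on brand or reputation alone.

CRITERIA = {                       # weights sum to 1.0; values invented
    "strategic_fit": 0.35,
    "technical_capability": 0.30,
    "financial_stability": 0.20,
    "cultural_alignment": 0.15,
}

def score(candidate: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (each on a 0-10 scale)."""
    return sum(candidate[c] * w for c, w in CRITERIA.items())

def tier(total: float) -> str:
    """Bucket candidates into tiers instead of relying on a single rank."""
    if total >= 8.0:
        return "Tier 1"
    if total >= 6.0:
        return "Tier 2"
    return "Tier 3"

candidates = {
    "BigBrandCo":  {"strategic_fit": 5, "technical_capability": 6,
                    "financial_stability": 9, "cultural_alignment": 4},
    "NicheExpert": {"strategic_fit": 9, "technical_capability": 8,
                    "financial_stability": 6, "cultural_alignment": 8},
}

for name, scores in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    total = score(scores)
    print(f"{name}: {total:.2f} ({tier(total)})")
```

Note how, in this invented example, the bigger brand lands in a lower tier than the niche partner that better fits the stated criteria, which is precisely the trap Gordon warns against.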

One of the reasons that companies have become so focused on collecting and storing data, rather than on making better decisions, Gordon believes, is that business decisions have become inherently more risky: the required size of investment is increasing in tandem with the time to return, and time to return is a key determinant of risk. Data helps people feel that they are making competent decisions, but in reality it does little to reduce risk.

“If you’ve got lots of data, then the thinking is, ‘well, I did the best that I could because I got all of this data.’ People are worried that they might miss something,” she said. “But that’s where I’m trying to come around and say, ‘yeah, but going and collecting more data, if you’ve got somebody like President Clinton, you’re just feeding into their tendency to put off making decisions. If you’ve got somebody like President Bush and you’re feeding into their tendency to ignore it, then there may be some really good information, good recommendations they’re ignoring.’”

Gordon also says that having all the possible data to work with usually isn’t necessary; generally a representative sample will do. For example, she says the U.S. Census Bureau takes the approach of trying to count every citizen; consequently, certain populations are chronically undercounted and inaccuracies pass undetected. The Canadian census, on the other hand, uses representative samples and thus tends to be much more accurate, and much less expensive to conduct. Organizations should also think about how they can find representative or “proxy” data in cases where collecting data that directly addresses a top-level decision criterion isn’t really practical.
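To illustrate the sampling point, here is a minimal sketch in Python with a wholly invented “population” of customer satisfaction scores, showing how a small stratified (representative) sample can estimate a population statistic at a fraction of the collection cost. The segment names, sizes, and score distributions are assumptions made for this example only.

```python
import random

random.seed(7)

# Hypothetical population: 100,000 satisfaction scores across three
# customer segments of different sizes (all numbers invented).
population = {
    "enterprise": [random.gauss(7.5, 1.0) for _ in range(10_000)],
    "mid_market": [random.gauss(6.8, 1.2) for _ in range(30_000)],
    "consumer":   [random.gauss(6.0, 1.5) for _ in range(60_000)],
}

def stratified_sample_mean(pop: dict[str, list[float]], rate: float = 0.01) -> float:
    """Sample each segment in proportion to its size, then pool the draws."""
    draws: list[float] = []
    for segment in pop.values():
        draws.extend(random.sample(segment, int(len(segment) * rate)))
    return sum(draws) / len(draws)

full_mean = sum(sum(scores) for scores in population.values()) / 100_000
print(f"Full population mean:  {full_mean:.3f}")
print(f"1% stratified sample:  {stratified_sample_mean(population):.3f}")
```

Stratifying by segment keeps the sample representative: each group is drawn in proportion to its share of the population, which is what guards against the chronic undercounting Gordon describes.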

To begin shifting the focus from collecting data inputs to improving decision outcomes, Gordon recommends clearly stating the decision objectives for each major decision and then identifying and defining the 3-5 criteria that are most important for achieving them. She also recommends ensuring that there is sufficient governance and a process in place for making decisions, including mechanisms for measuring both the performance of the decision-making process and the outcomes resulting from its execution. In addition, companies need to consider whether their decisions are made in a centralized or decentralized manner and adapt their decision governance accordingly.
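One lightweight way to operationalize that recommendation, sketched here in Python, is a decision record that forces each major decision to state its objective, its 3-5 criteria, and the metrics by which its outcome will later be judged. The structure and every field name are hypothetical, invented for illustration rather than taken from Gordon’s talk.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Hypothetical record making a decision's objective, criteria, and outcomes auditable."""
    objective: str
    criteria: list[str]                 # the 3-5 top-level criteria
    decided_on: date
    decision_style: str                 # "head", "heart", or "gut"
    outcome_metrics: dict[str, float] = field(default_factory=dict)

    def __post_init__(self) -> None:
        # Enforce the 3-5 criteria rule at the point of record creation.
        if not 3 <= len(self.criteria) <= 5:
            raise ValueError("use 3-5 top-level criteria per decision")

record = DecisionRecord(
    objective="Select a sales force automation vendor",
    criteria=["total cost of ownership", "integration effort", "vendor viability"],
    decided_on=date(2014, 7, 22),
    decision_style="head",
)
record.outcome_metrics["adoption_rate_6mo"] = 0.72  # measured after execution
```

Because each record names both the criteria used and the outcome metrics observed, the decision-making process itself becomes measurable, whether decisions are made centrally or by individual departments.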

One way that Enterprise Architects can help encourage better decision-making within the organizations where they work is by helping to develop that governance, rather than just providing data or data architectures, Gordon says. They should help stakeholders identify and define the important decision criteria, determine when analyzing the full population rather than a representative sample is justified, recognize better methods for data analysis, and form decision recommendations based on that analysis. By gauging the appropriate blend of quantitative and qualitative data for a particular decision-maker, an Architect can moderate gut types’ reliance on instinct and stimulate head and heart types’ intuition, thereby producing an optimally balanced decision. Architects should help lead and facilitate execution of the decision process, as well as help determine how data is presented within organizations, in order to support the recommendations with the highest potential for meeting the decision objectives.

Join the conversation – #ogchat

Penelope Gordon recently led the expansion of the channel and service packaging strategies for Verizon’s cloud network products. Previously she was an IBM Strategist and Product Manager, bringing emerging technologies such as predictive analytics to market. She helped to develop one of the world’s first public clouds.

Filed under architecture, Conference, Data management, Enterprise Architecture, Governance, Professional Development, Uncategorized