The Open Group San Diego Panel Explores Synergy Among Major Frameworks in Enterprise Architecture

Following is a transcript of part of the proceedings from The Open Group San Diego event in February – a panel discussion on The Synergy of Enterprise Architecture frameworks.

The following panel, which examines the synergy among the major Enterprise Architecture frameworks, consists of moderator Allen Brown, President and Chief Executive Officer, The Open Group; Iver Band, an Enterprise Architect at Cambia Health Solutions; Dr. Beryl Bellman, Academic Director, FEAC Institute; John Zachman, Chairman and CEO of Zachman International, and originator of the Zachman Framework; and Chris Forde, General Manager, Asia and Pacific Region and Vice President, Enterprise Architecture, The Open Group.

Here are some excerpts:

Iver Band: As an Enterprise Architect at Cambia Health Solutions, I have been working with the ArchiMate® Language for over four years now, both working with it and working on it in the ArchiMate® Forum. As soon as I discovered it in late 2010, I could immediately see, as an Enterprise Architect, how it filled an important gap.

It’s very interesting to hear John Zachman’s perspective. I will briefly present how the ArchiMate Language allows you to fully support enterprise architecture using The Zachman Framework.

What is the ArchiMate Language? It’s a language we use for building understanding across disciplines in an organization and for communicating and managing change. It’s a graphical notation with formal semantics.

It’s a framework that describes and relates the business, application, and technology layers of an enterprise, and it has extensions for modelling motivation, which includes business strategy, external factors affecting the organization, and requirements, putting them all together and showing them from different stakeholder perspectives.

You can show conflicting stakeholder perspectives, and even politics. I’ve used it to model organizational politics that were preventing a project from going forward.

It has a rich set of techniques in its viewpoint mechanism for visualizing and analyzing what’s going on in your enterprise. Those viewpoints are tailored to different stakeholders.  And, of course, ArchiMate®, like TOGAF®, is an open standard managed by The Open Group.

Taste of ArchiMate

This is just a taste of ArchiMate for people who haven’t seen it before. This is actually excerpted from the presentation my colleague Chris McCurdy and I are doing at this conference on Guiding Agile Solution Delivery with the ArchiMate Language.

What this shows is the Business and Application Layers of ArchiMate. It shows a business process at the top, with each process represented by a symbol. Then, at the next layer, in yellow, it shows a data model of business objects.

Below that, it shows a data model actually realized by the application, the actual data that’s being processed.

Below that, it shows an application collaboration, a set of applications working together, that reads and writes that data and realizes the business data model that our business processes use.

All in all, it presents a vision of an integrated project management toolset for a particular SDLC that uses the phases that you see across the top.

We are going to dissect this model, how you would build it, and how you would develop it in an agile environment in our presentation tomorrow.
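The layered structure just described (a business process, the business objects it uses, the data objects that realize them, and an application collaboration that reads and writes the data) can be sketched as plain data. This is an illustrative sketch only, with hypothetical element names; it is not ArchiMate tooling, just the idea of layers connected by realization and access relationships.

```python
# Illustrative sketch only: a plain-data rendering of the layered model
# described above. The element names are hypothetical examples.

model = {
    "business_process": "Manage Project",                    # Business Layer behavior
    "business_objects": ["Project Plan", "Task"],            # business data model
    "data_objects": ["project_plan_record", "task_record"],  # realized by applications
    "application_collaboration": ["Planner App", "Tracker App"],
}

# ArchiMate-style relationships: data objects *realize* business objects,
# and applications *access* (read/write) the data objects.
relationships = [
    ("project_plan_record", "realizes", "Project Plan"),
    ("task_record", "realizes", "Task"),
    ("Planner App", "accesses", "project_plan_record"),
    ("Tracker App", "accesses", "task_record"),
]

def realized_business_objects(rels):
    """Return the business objects that some data object realizes."""
    return {target for _, kind, target in rels if kind == "realizes"}

print(sorted(realized_business_objects(relationships)))  # ['Project Plan', 'Task']
```

Even a toy structure like this makes the point of the diagram: each layer is connected to the next by explicit, queryable relationships rather than by pictures alone.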

I have done some analysis of The Zachman Framework, comparing it to the ArchiMate Language. What’s really clear is that ArchiMate supports enterprise architecture with The Zachman Framework. You see a rendering of The Zachman Framework and then you see a rendering of the components of the ArchiMate Language. You see the Business Layer, the Application Layer, the Technology Layer, its ability to express information, behavior, and structure, and then the Motivation and Implementation and Migration extensions.

So how does it support it? Well, there are two key things here. The first is that ArchiMate models answer the questions that are posed by The Zachman Framework columns.

What: Inventory. We are basically talking about what is in the organization. There are Business and Data Objects, Products, Contracts, Value, and Meaning.

How: Process. We can model Business Processes and Functions, and the Flow and Triggering Relationships between them.

Where: Distribution of our assets. We can model Locations, Devices, and Networks, depending on how you define Location within a network or within a geography.

Who: Responsibility. We can model it with Business Actors, Collaborations, and Roles.

When: Timing. We have Business Events, Plateaus of system evolution (relatively stable system states), and Triggering Relationships.

Why: Motivation. We have a rich Motivation extension, with Stakeholders, Drivers, Assessments, Principles, Goals, Requirements, and so on, and we can show how those components influence and realize each other.
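The column mapping just walked through can be tabulated as data. The sketch below is a rough illustration under stated assumptions: the concept lists are abridged, and the lookup helper is a hypothetical convenience, not part of the ArchiMate or Zachman standards.

```python
# Illustrative, abridged mapping of Zachman interrogatives (columns)
# to ArchiMate concepts, as described in the talk.

zachman_to_archimate = {
    "What (Inventory)": ["Business Object", "Data Object", "Product",
                         "Contract", "Value", "Meaning"],
    "How (Process)": ["Business Process", "Business Function",
                      "Flow Relationship", "Triggering Relationship"],
    "Where (Distribution)": ["Location", "Device", "Network"],
    "Who (Responsibility)": ["Business Actor", "Business Collaboration",
                             "Business Role"],
    "When (Timing)": ["Business Event", "Plateau",
                      "Triggering Relationship"],
    "Why (Motivation)": ["Stakeholder", "Driver", "Assessment",
                         "Principle", "Goal", "Requirement"],
}

def concepts_for(question):
    """Look up the ArchiMate concepts answering a Zachman interrogative."""
    for column, concepts in zachman_to_archimate.items():
        if question.lower() in column.lower():
            return concepts
    return []

print(concepts_for("where"))  # ['Location', 'Device', 'Network']
```

The point of the table is simply that every Zachman column has at least one ArchiMate concept that answers it.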

Zachman perspectives

Finally, ArchiMate models express The Zachman Row Perspectives. For the contextual, or boundary, perspective, where Scope Lists are required, we can make catalogs of ArchiMate concepts. ArchiMate has broad tool support, and although it is a graphical language, in a repository-based tool you can very easily take lists of concepts, as I do regularly, and put them in catalog or matrix form. So it’s easy to come up with those Scope Lists.

Secondly, for the Conceptual area, the Business Model, we have a rich set of Business Layer Viewpoints that focus on the top of the diagram I showed you: Business Processes, Actors, Collaborations, Interfaces, and the Business Services that are brought to market.

Then, at the Logical level, for the System Model, we have a rich set of Application Layer Viewpoints, including Viewpoints that show how Applications use Infrastructure.

For Physical, we have an Infrastructure Layer, which can be used to model any type of Infrastructure: Hosting, Network, Storage, Virtualization, Distribution, and Failover. All those types of things can be modeled.

And for Configuration and Instantiation, the Application and Technology Layer Viewpoints are available, particularly the more detailed ones. Also important are the mappings to standard design languages such as BPMN, UML, and ERD, which are straightforward for experienced modelers. We also have a white paper on using the ArchiMate Language with UML. Thank you.

Dr. Beryl Bellman: I have been doing enterprise architecture, including what you would call pre-enterprise-architecture work, for quite a long time, probably about 30 years, and I first met John Zachman well over 20 years ago.

In addition to being an enterprise architect, I am also a University Professor at California State University, Los Angeles. My focus there is on Organizational Communications. While being a professor, I have always done contract consulting for companies like Digital Equipment Corporation, ASK, AT&T, NCR, and then Ptech.

About 15 years ago, a colleague of mine and I founded the FEAC Institute. The initial name for that was the Federal Enterprise Architecture Certification Institute, and then we changed it to Federated. It actually goes by both names.

The business driver of that was the Clinger–Cohen Act of 1996, which mandated that all federal agencies must have an enterprise architecture.

And then, around 2000, they began to enforce that regulation. My business partner at that time, Felix Rausch, and I felt that we needed some certification in how to go about doing and meeting those requirements, both for the federal agencies and for the Department of Defense. And so that’s when we created the FEAC Institute.

Beginning of FEAC

In our first course, we had the Executive Office of the President and the US Department of Fed, which I believe was the first department of the federal government hit by OMB, which held up its budget for not having an enterprise architecture on file. So they were pretty desperate, and that was the beginning of FEAC.

Since that time, a lot of people have come in from the commercial world and from international areas. The idea of FEAC was that you start off by learning how to do enterprise architecture. In a lot of programs, including TOGAF, you sort of have to already know a little bit about enterprise architecture, the hermeneutical circle: you have to know it in order to learn it.

In FEAC, our position was that you want to provide training and education in how to do enterprise architecture that takes you from a beginning state to being able to take full responsibility for enterprise architecture work in a matter of three months. It’s associated with the California State University system, and, if you so desire, you can get 12 graduate academic units in Engineering Management that can be applied toward a degree, or you can get continuing education units.

So that’s how we began that. Then, a couple of years ago, my business partner decided he wanted to retire, and fortunately there was this guy named John Zachman, who will never retire. He’s a lot younger than all of us in this room, right? So he purchased the FEAC Institute.

I still maintain a relationship with it as Academic Director, in which primarily my responsibilities are as a liaison to the universities. My colleague, Cort Coghill, is sort of the Academic Coordinator of the FEAC Institute.

FEAC is an organization that also incorporates a lot of the training and education programs of Zachman International, which includes managing the FEAC TOGAF courses, as well as the Zachman certified courses, which we will tell you more about.

I’m just a little bit surprised by this idea, the panel, the way we are constructed here, because I didn’t have a presentation. I’m doing it off the top of my head, as you can see. I was told we were supposed to have a panel discussion about the synergies of enterprise architecture, so I prepared in my mind the synergies between the different enterprise architectures that are out there.

On that, I just want to make a strong point. When I talk about synergy, there is a bifurcation: on one side people treat it as "TOGAF or Zachman," whereas the statement that has been made earlier this morning and throughout the meeting is "TOGAF and."

Likewise, we have Zachman, and it’s not "Zachman or"; it’s "Zachman and." Zachman provides the ontology, as John talks about it, his periodic table of basic primitive elements through which we can constitute any enterprise architecture. If you attempt to build an architecture out of composites, inventing composites and just modeling, you’re just getting a snapshot in time, and you don’t really have an enterprise architecture that is able to adapt and change. You might have a picture of it, but that’s all you really have.

That’s the power of The Zachman Framework. Hopefully, most of you will attend our demonstration this afternoon and a workshop where we are actually going to have people work with building primitives and looking at the relationship of primitives, the composites with a case study.

Getting lost

On the other side of that, Schekkerman wrote something about the forest of architectural frameworks and getting lost in that. There are a lot of enterprise architectural frameworks out there.

I’m not counting TOGAF, because TOGAF has its own architecture content metamodel, with its own artifacts, but it does not require you to use those artifacts. They suggest that you can use DoDAF. You can use MODAF. You can use commercial ones like NCR’s GITP. You can use any one.

Those are basically the competing models. Some of them are commercial, where organizations put their own proprietary stamp on the artifacts and their names, sometimes the wrong names, and others put their own take on them.

I’m more familiar nowadays with the governmental sectors. For example, there is FEAF, the Federal Enterprise Architecture Framework, Version 2. Are you familiar with that? Just go on the Internet and type in FEAF v2. Scott Bernard, the Chief Architect for the US Government at OMB, has developed a model of enterprise architecture he calls the Architecture Cube Model, which is an iteration off of John’s, but pursuing a cube form rather than a triangle form.

Also, for him, enterprise architecture fits into his FEAF-II, because at the top level he has the strategic plans of an organization.

It goes down through different layers, but then, at one point, it drops off and becomes not only a solution but the manufacturing of the solution. He has a whole series of artifacts that pertain to these different layers, but at the lower levels you have a computer-wiring-closet diagram model, which is a little more detailed than what we would consider to be at the level of enterprise architecture.

Then you have MODAF, DoDAF, and all of these other ones, where a lot of them compete with each other more on the basis of political choices.

With MODAF, the British obviously don’t want to use DoDAF; they have their own, but the two are very similar. One view, the acquisition view, differs from the project view, but they do the same things. You can define them in terms of each other.

Then there is the Canadian one, NAF, and all that, and they are very similar. Now we’re trying to develop UPDM, a unified profile for DoDAF, MODAF, and NAF, which is still in its planning stages. So we are moving toward a more integrated system.

Allen Brown: Let’s move on to some of the questions that folks are interested in. Moving away from what the frameworks are, there is a question here: how does enterprise architecture take advantage of the impact of new emerging technologies like social, mobile, analytics, cloud, and so on?

Bidirectional change

John Zachman: Change can take place in the enterprise either from the top, where we change the context of the enterprise, or from the bottom, where we change the technologies.

So technology is expressed in the context of the enterprise, in what I would call Row 4, and that’s the physical domain. And it’s the same in any other discipline: building architecture, airplane architecture, or anything else. You can implement the as-designed logic in different technologies.

Whatever the technology is, I made the observation that you want to engineer for flexibility. You separate the independent variables. So you separate the logic at Row 3 from the physics at Row 4, and then you can change Row 4 without changing Row 3. Basically that’s the idea, so you can accommodate whatever the emerging technologies are.
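A minimal sketch of that separation, under the assumption that "logic" and "technology" map to an interface and its interchangeable implementations; the class and function names here are hypothetical illustrations, not Zachman artifacts.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Row 4 (physics/technology): how records are actually persisted."""
    @abstractmethod
    def save(self, record: dict) -> None: ...

class InMemoryStorage(Storage):
    """One interchangeable technology choice."""
    def __init__(self):
        self.records = []

    def save(self, record: dict) -> None:
        self.records.append(record)

def register_customer(storage: Storage, name: str) -> None:
    """Row 3 (logic): unchanged no matter which Storage technology is used."""
    storage.save({"type": "customer", "name": name})

# Swapping InMemoryStorage for, say, a cloud-backed Storage subclass would
# not require touching register_customer at all.
store = InMemoryStorage()
register_customer(store, "Acme Co")
print(store.records)  # [{'type': 'customer', 'name': 'Acme Co'}]
```

Because the independent variables are separated, a new emerging technology only requires a new Storage implementation, never a change to the logic.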

Bellman: I would just continue with that. I agree with John. Thinking about the synergy between the different architectures, basically every enterprise architecture contains, or should contain, considerations of those primitives. Then it’s a matter of which one a customer wants, which one a customer feels comfortable with. As long as you have those primitives defined, you can essentially use any architecture. That constitutes the synergy between the architectures.

Band: I agree with what’s been said. It’s also true that one of the jobs of an enterprise architect is to establish a view of the organization that can be used to promote understanding and to communicate and manage change. Cloud-based systems are generally based on metadata, and the major platforms, like Salesforce.com for example, publish their data models and their APIs.

So I think there’s going to be a new generation of systems that provide a continuously synchronized, real-time view of what’s going on in the enterprise. Architectural models will still describe the future, where things need to go, and we will still do analyses, but we will be using cloud, big data, and even sensor technologies to understand the state of the enterprise.

Bellman: In DoDAF 2.0, when it initially came out, I think six years ago or so, they have a services view and a systems view. And one of the points they make within the context, not as a footnote, is that they expect the systems view to sort of disappear, with a cloud view taking its place. So I think you are right on that.

Chris Forde: The way I interpreted the question was: how does an enterprise architecture approach help you manage disruptive things? If you accept the idea that enterprise architecture is actually a management discipline, it’s going to help you ask the right questions to understand what you are dealing with, where it should be positioned, and what the risks and the value proposition are around those particular things, whether that’s the Internet of Things, cloud computing, or any of these types of activities.

So, going back to the core of what Terry’s presentation was about: it is a decision-making framework with informed questions to help you understand what you should be doing to mitigate the risk, take advantage of the innovation, and deploy the particular thing in a way that’s useful to your business. That’s the way I read the question.

Impact of sensors

Band: Just to reinforce what Chris said: as an enterprise architect in healthcare, one of the things I am looking at very closely is evaluating the impact of health sensor technology. Gartner Group says that by 2020, the average lifespan in a developed country will have increased by six months due to mobile health monitoring.

And so there are vast changes in the whole healthcare delivery system, of which my company is at the center as a major healthcare payer and investor in all sorts of healthcare companies. I use enterprise architecture techniques to begin to understand the impact of that and show the opportunities to our health insurance business.

Brown: If you think about social and mobile and you look at the entire enterprise architecture, now you are starting to expand that beyond the limits of the organization, aren’t you? You’re starting to look at, not just the organization and the ecosystem, your business partners, but you are also looking at the impact of bringing mobile devices into the organization, of managers doing things on their own with cloud that wasn’t part of the architecture. You have got the relationship with consumers out there that are using social and mobile. How do you capture all of that in enterprise architecture?

Forde: Allen, if I had the answer to that question I would form my own business and I would go sell it.

Back in the day, when I was working in large organizations, we were talking about the extended enterprise, that kind of ecosystem view of things. And at that time the issue was more problematic. We knew we were in an extended ecosystem, but we didn’t really have the technologies that effectively supported it.

The types of technologies that are available today, the ones that The Open Group has white papers about — cloud computing, the Internet of Things, this sort of stuff — architectures can help you classify those things. And the technologies that are being deployed can help you track them, and they can help you track them not as documents of the instance, but of the thing in real time that is talking to you about what its state is, and what its future state will be, and then you have to manage that information in vast quantities.

So an architecture can help you within your enterprise understand those things and it can help you connect to other enterprises or other information sources to allow you to make sense of all those things. But again, it’s a question of scoping, filtering, making sense, and abstracting — that key phrase that John pointed out earlier, of abstracting this stuff up to a level that is comprehensible and not overwhelming.

Brown: So Iver, at Cambia Health, you must have this kind of problem now, mustn’t you?

Provide value

Band: That’s exactly what I am doing. I am figuring out what will be the impact of certain technologies and how our businesses can use them to differentiate and provide value.

In fact, I was just on a call this morning with JeffSTAT, because the whole ecosystem is changing, and we know that healthcare is really changing. The current model is not financially sustainable, and there is also a tremendous amount of waste in our healthcare system today. The executives of our company say that about a third of the $2.7 trillion and rising spent on healthcare in the US doesn’t do anyone any good.

There’s a tremendous amount of IT investment in that, and that requires architecture to tie it all together. It has to do with everything from the logic with which we edit claims to the follow-up we provide for people with particularly dangerous, and consequently expensive, diseases. So there is just a tremendous amount going on, and enterprise architecture is necessary to have a coherent narrative of what the organization needs to do.

Bellman: One thing we all need to keep in mind is that it’s even more dynamic than that, if you believe even a little bit of Kurzweil’s possibilities. Are people familiar with Ray Kurzweil’s ‘The Singularity Is Near’? Around 2037 will be the singularity between computers and human beings.

He argues that the rate of change is not linear but exponential, so in a sense you will never catch up, but you need an architecture to manage that.

Zachman: The way we deal with complexity is through classification, and I suggest that there is more than one way to classify things. One is one-dimensional classification: a taxonomy or hierarchy, in effect a decomposition. That’s really helpful for manufacturing. From an engineering standpoint, you want a two-dimensional classification, where things are classified so that they are normalized: one fact in one place.

Then, if you have the problems identified, you can postulate several technology changes, or several changes of any kind, and simulate their various implications.

The whole reason why I do architecture has to do with change. You deal with extreme complexity and then you have to accommodate extreme change. There is no other way to deal with it. Humanity, for thousands of years, has not been able to figure out a better way to deal with complexity and change other than architecture.

Forde: Maybe we shouldn’t apply architecture to some things.

For example, maybe the technology or the opportunity is so new that we need a decision-making framework that says: you know what, let’s not try to figure all of this out and control the stuff in advance. Let’s let it run and see what happens, and then, when it’s at the appropriate point for architecture, let’s apply it. This is a more organic view of the way nature and life work than the typical enterprise view.

So what I am saying is that architecture is not irrelevant in that context. It’s actually part of the decision-making framework to decide not to architect something at a point in time because it’s inappropriate to do so.

Funding and budgeting

Band: Yeah, I agree wholeheartedly. At Cambia Health Solutions, we are a completely agile shop. All the technology development is on the same sprint cycle, and we have three-week sprints, but we also have certain things that are still annual and waterfall, like funding and budgeting.

We live in a tension. People ask, what are you going to do, what budget do you need, but at the same time I haven’t figured everything out. So I am constantly living in that gap: what do I need to meet a certain milestone to get my project funded, and what do I need to do to go forward? Obviously, in a fully agile organization, all those things would be fluid. But then there’s financial reporting, which would also have to be fluid. So there are barriers to that.

For instance, the Scaled Agile Framework, which I think is a fascinating thing, has a very clear place for enterprise architecture. As Chris said, you don’t want to do too much of it in advance. I am constantly bridging the gap between how I can visualize what’s going to happen a year out and how I can give the development teams what they need for the sprint. So I am always living in that paradox.

Bellman: The Gartner Group, not too long ago, came up with the concept of emergent enterprise architecture, and that is what we are dealing with. Enterprises don’t exist like buildings. A building is an object, but an enterprise is a group of human beings communicating with one another.

As the very famous organizational psychologist Karl Weick once pointed out, “The effective organization is garrulous, clumsy, superstitious, hypocritical, monstrous, octopoid, wandering, and grouchy.” Why? Because an organization is continually adapting and changing in response to the changing business and technological landscape.

To expect anything other than that is not to have a realistic view of the enterprise. It is a continually emerging phenomenon. So I would not contest the concept of having an architecture; architecting is always worthwhile. But an enterprise is an organic phenomenon, and to deal with that we need to understand, and have an architecture for, organic phenomena that change and rapidly adapt.

Brown: Chris, where you were going follows the lines of what great companies do, right?

There is a great book published about 30 years ago called ‘In Search of Excellence.’ If you haven’t read it, I suggest you do. It was written by Peters and Waterman, and Tom Peters has tried ever since to recreate something with that magic. One of the lessons learned was that what great companies do is something called simultaneous loose-tight properties. You keep some things very tightly controlled and, as Chris is suggesting, let other things flourish and see where they go before you box them in. So that’s a good thought.

So what do we think, as a panel, about evolving TOGAF to become an engineering methodology as well as a manufacturing methodology?

Zachman: I really think it’s a good idea.

Brown: Chris, do you have any thoughts on that?

Interesting proposal

Forde: I think it’s an interesting proposal, and I think we need to look at it fairly seriously. The Open Group approach is not to lock people into a specific way of thinking, but we also advocate a disciplined approach to doing things. So I would suspect that we are going to be exploring John’s proposal pretty seriously.

Brown: You mentioned in your talk that a decision-making process to govern IT investments is a precondition, and the question that comes in is: what about other types of investments, including facilities, inventory, and acquisitions?

Forde: The wording of the presentation was very specific. Most organizations have a process or decision-making framework, on an annual or quarterly basis or whatever the cycles are, for allocating funding to do X, Y, or Z. So the implication wasn’t that IT is the only space where it would be applied.

However, the question is: how effective is that decision-making framework? In a lot of organizations, the IT function is essentially an enterprise-wide activity supporting the financial activities, the plant activities, these sorts of things. So you have the P&Ls from those areas flowing in some way into the funding that comes to the IT organization.

The question is, when there are multiple complexities in an organization, multiple departments with independent P&Ls, they may be funding IT activities in a way that may or may not be optimized. For architects, in my view, one of the avenues for success is inserting yourself into that planning cycle and influencing how that spend goes, because normally the architecture team does not have direct control over the spend.

Over time, you gradually improve the enterprise’s ability to optimize, and make effective, the funding it applies to IT in support of the rest of the business.

Zachman: Yeah, I just want to make an observation.

Band: I agree. I think the battle to control shadow IT has been permanently lost. We are in a technology-obsessed society. Every department wants to control some technology and even develop it to its needs. There are some controls that you do have, and we do have some, but we have core health insurance businesses that are nearly 100 years old.

Cambia is constantly investing in and acquiring new companies that are transforming healthcare. Cambia has over 100 million customers all across the country, even though our original business was a set of regional health plans.

Build relationships

You can’t possibly rationalize everything that everyone wants to pay for. It is incumbent upon the architects, especially the senior ones, to build relationships with the people in these organizations and make sure everything is synergistic.

Many years ago, there was a senior architect. I asked him what he did, and he said, “Well, I’m just the glue. I go to a lot of meetings.” There are deliverables and deadlines too, but part of the job is consistently building relationships and noticing things, so that when it’s time to make a decision, or someone needs something, it gets done right.

Zachman: I was in London when Bank of America got bought by NationsBank, and it was touted as the biggest banking merger in the history of the banking industry.

Actually, it wasn’t a merger; it was an acquisition. NationsBank acquired Bank of America and then changed its name to Bank of America. A London paper observed that the headline you always see is, “The biggest merger in the history of the industry.” The headline you never see is, “This merger didn’t work.”

The cost of integrating the two enterprises exceeded the value of the acquisition; therefore, we’re going to have to break this thing up and sell off the pieces as surreptitiously as possible, so nobody will notice what we buried in the accounting notes someplace or other. You never see that article. You only see the one about the biggest merger.

If I were the CEO and my strategy was to grow by acquisition, I would get really interested in enterprise architecture, because you have to be able to anticipate the cost of integration if you want to merge two enterprises. In effect, you’re changing the scope of the enterprise. I have talked a little bit about the role of models, but you are changing the scope, and as soon as you change the scope, you’re going to be faced with an integration issue.

Therefore you have to make a choice about scrap and rework. There is no way, after the fact, to integrate parts that don’t fit together, so you’re going to be faced with a decision about whether you want to scrap and rework or not. I would get really interested in enterprise architecture, because that’s what you really want to know before you make the expenditure. Once you acquire, you’ve obviously already spent the money. So now you’ve got a problem.

Once again, if I were the CEO and I wanted to grow by merger or acquisition, I would get really interested in enterprise architecture.

Cultural issues

Beryl Bellman: One of the big problems we are addressing here is also the cultural and political problems of organizations or enterprises. You could have the best-designed system, and if the people and the politics don’t agree, there are going to be these kinds of conflicts.

One of my favorite consulting projects involved NCR, which was working with Hyundai and Samsung, trying to get them together on a joint project. They kept fighting with each other over knowledge management, technology transfer, and knowledge transfer. My role was to create an architecture of that whole process.

It was called RIAC, the Research Institute in Computer Technology. On one side of the table, you had Hyundai and Samsung. On the other side of the table, you had NCR. They were throwing PowerPoint slides back and forth at each other. The modeling software we used at that time was METIS, and with it we modeled all the processes, everything that was involved.

Samsung said it was like being hit with a 2×4. Rather than tossing out slides, I was demonstrating it: here are the relationships, and here is how it really works. To me that was a real demonstration that modeling can overcome even some of the politics and cultural differences within enterprises.

Brown: I want to ask one more question. I think it reflects a concern that we have raised in some people’s minds today: we are talking about all these different frameworks and ontologies, and that raises two questions.

The first asks what each of the frameworks lacks, what key elements are missing. That leads on to the second, which is probably the key one we are looking at: isn’t needing to understand all these enterprise architecture frameworks a complex exercise for a practitioner?

Band: My job is not about understanding frameworks. I have been doing enterprise solution architecture for quite a while: at HP, at a diversified financial services company, and now at a health insurance and health solutions company. It’s really about communicating and understanding in a way that’s useful to your stakeholders.

The frameworks are about creating shared understanding of what we have and where we are going, and they are just a set of tools that you have in your toolbox, tools that most people don’t understand.

So the idea is not to understand everything, but to carry around a set of tools, just like a mechanic would, that you use all the time. For instance, there are certain types of ArchiMate views that I use when I am in a group. I will draw an ArchiMate business process view together with the application services those processes use: what are the business processes you need, and what are the exposed application behaviors that they need to consume?

I have had that discussion with people on the business side and in IT, and we drew those diagrams together. That’s a useful tool: it works for me, it works for the people around me, it works in my culture. But there is no understanding all the frameworks unless that’s your field of study. Each is missing the exact thing you need for a particular interaction, but most likely there is something in there that you can base the next critical interaction on.

Six questions

Zachman: I spent most of my life thinking about my framework. There are six questions you have to answer to have a complete description of whatever it is you are describing: what, how, where, who, when, and why. So that’s complete.

Interestingly enough, the philosophers have also established six stages of transformation, the transformation of an idea into an instantiation, so that’s a complete set too. I did not invent either one of these, the six interrogatives or the six stages of transformation. That framework has to, by definition, accommodate any fact that’s relevant to the existence of the object, the enterprise. Therefore any fact has to be classifiable in that structure.

My framework is complete in that regard. For many years, I would have been reluctant to make a categorical statement, but we exercised this, and there is no anomaly. I can’t find an anomaly. Therefore I have a high level of confidence that you can classify any fact in that context.

There is one periodic table. There are n different compound manufacturing processes. You can manufacture anything out of the periodic table. That metaphor is really helpful. There’s one enterprise architecture framework ontology. I happened to stumble across, by accident, the ontology for classifying all of the facts relevant to an enterprise.

I wish I could tell you that I was so smart and understood all of these things at the beginning, but I knew nothing about this; I just happened to stumble across it. The framework fell on my desk one day and I saw the pattern. All I did was put enterprise names on the same pattern for the descriptive representation of anything. You’ve heard me tell quite a bit of the story this afternoon. In terms of completeness, I think my framework is complete. I can find no anomalies, and you can classify anything relative to that framework.

And I agree with Iver that there are n different tools you might want to use. You don’t have to know everything about every framework. Whatever the tool is that you need to work with, the ontological construct of The Zachman Framework, in the context of the periodic table metaphor, can accommodate whatever artifacts the tool creates.

You don’t have to analyze every tool. Whatever tool is necessary: if you want to do business architecture, you can create whatever the business architecture manifestation is; if you want to know what DoDAF is, you can create the DoDAF artifacts. You can create any composite, just as you can create any compound from the periodic table. It’s the same idea.

I wouldn’t spend my life trying to understand all these frameworks. You have to operate the enterprise, you have to manage the enterprise, and whatever tool is required to do what you need to do, use it. There is something good about everything, and nothing necessarily does everything.

So use the tool that’s appropriate, and then you can create whatever composite constructs that tool requires out of the primitive components of the framework. I wouldn’t try to understand all the frameworks.
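As an illustrative aside, the complete-classification idea Zachman describes, six interrogatives crossed with six rows of transformation, can be sketched as a simple grid in code. This is an editorial sketch only; the row labels below are paraphrases for illustration, not the official framework cell names.

```python
# Illustrative sketch (not the official Zachman Framework artifact):
# a 6x6 grid crossing the six interrogatives with six perspectives,
# in which any fact about an enterprise is meant to find exactly one cell.

INTERROGATIVES = ["what", "how", "where", "who", "when", "why"]

# Row names paraphrased for illustration; not official framework labels.
PERSPECTIVES = [
    "executive (scope)",
    "business management (concepts)",
    "architect (logic)",
    "engineer (physics)",
    "technician (components)",
    "enterprise (instances)",
]

def classify(fact: str, interrogative: str, perspective: str) -> tuple:
    """Place a fact into one cell of the grid; reject anything outside it."""
    if interrogative not in INTERROGATIVES:
        raise ValueError(f"unknown interrogative: {interrogative}")
    if perspective not in PERSPECTIVES:
        raise ValueError(f"unknown perspective: {perspective}")
    return (interrogative, perspective, fact)

cell = classify("customer order entity", "what", "architect (logic)")
print(cell[0], "/", cell[1])
```

The point of the sketch is the completeness claim: every fact lands in some cell, and anything that fits no cell is rejected rather than silently accommodated.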

What’s missing

Forde: On a daily basis there is a line of people at these conferences coming to tell me what’s missing from TOGAF. Recently we conducted a survey through the Association of Enterprise Architects about what people needed to see. Basically what came back was: please give us more guidance that’s specific to my situation, a recipe for how to solve world hunger, or something like that. We are not in the role of providing that level of prescriptive detail.

The value side of the equation is the flexibility of the framework, which to a certain degree allows many different industries and many different practitioners to drive value for their business out of using that particular tool.

So some people will find value in the content metamodel in the TOGAF Framework and the other components of it, but if you are not happy with that, if it doesn’t meet your need, flip over to The Zachman Framework or vice versa.

John made it very clear earlier that the value in the framework he has built throughout his career, and that has been used repeatedly around the world, is its rigor and its comprehensiveness. But he also made very clear that it’s not a method: there is nothing in there to tell you how to go do it.

So you could criticize The Zachman Framework for a lack of method or you could spend your time talking about the value of it as a very useful tool to get X, Y, and Z done.

From a practitioner’s standpoint, what one practitioner does is interesting and has value, but if you have a practice of between 200 and 400 architects, you don’t want everybody running around like a loose cannon doing it their own way, in my opinion. As a practice manager or leader you need something that makes those resources very, very effective. And when you are in a practice of that size, you probably have a handful of people trying to figure out how the frameworks come together, but most of the practitioners are tasked with taking what the organization says is their best practice and executing on it.

We are looking at improving the level of guidance provided by the TOGAF material: the standard itself, and guidance about how to handle specific scenarios.

For example, how to jumpstart an architecture practice, how to build a secure architecture, how to do business architecture well. Those are the kinds of things we have had feedback on, and we are working on them around that particular specification.

Brown: So if you are employed by the US Department of Defense, you would be required to use DoDAF if you are an enterprise architect, because of the views it provides. But people like Terri Blevins, who worked in the DoD for many years, used TOGAF to populate DoDAF. TOGAF is a method, and the method is its great strength.

If you want to have more information on that, there are a number of white papers on our website about using TOGAF with DoDAF, TOGAF with COBIT, TOGAF with Zachman, TOGAF with everything else.

Forde: TOGAF with frameworks, TOGAF with buy-in: the thing to look at is that the ecosystem of information around these frameworks is where the value proposition really is. If you are trying to bootstrap your architecture practice, the framework itself is of interest, but applied use, driving to the value proposition for your business function, is the critical area to focus on.

The panel, which examined the synergy among the major EA frameworks, consisted of moderator Allen Brown, President and Chief Executive Officer, The Open Group; Iver Band, an Enterprise Architect, Cambia Health Solutions; Dr. Beryl Bellman, Academic Director, FEAC Institute; John Zachman, Chairman and CEO of Zachman International, and originator of the Zachman Framework; and Chris Forde, General Manager, Asia and Pacific Region and Vice President, Enterprise Architecture, The Open Group.

Transcript available here.

Join the conversation @theopengroup #ogchat



Filed under ArchiMate®, big data, Business Architecture, Cloud, Data management, Enterprise Architecture, Enterprise Transformation, Standards, TOGAF®, Uncategorized

A World Without IT4IT: Why It’s Time to Run IT Like a Business

By Dave Lounsbury, CTO, The Open Group

IT departments today are under enormous pressure. In the digital world, businesses have become dependent on IT to help them remain competitive. However, traditional IT departments have their roots in skills such as development or operations and have not been set up to handle a business and technology environment that is trying to rapidly adapt to a constantly changing marketplace. As a result, many IT departments today may be headed for a crisis.

At one time, IT departments led technology adoption in support of business. Once a new technology was created—departmental servers, for instance—it took a relatively long time before businesses took advantage of it and even longer before they became dependent on the technology. But once a business did adopt the technology, it became subject to business rules—expectations and parameters for reliability, maintenance and upgrades that kept the technology up to date and allowed the business it supported to keep up with the market.

As IT became more entrenched in organizations throughout the 1980s and 1990s, IT systems increased in size and scope as technology companies fought to keep pace with market forces. In large enterprises, in particular, IT’s function became to maintain large infrastructures, requiring small armies of IT workers to sustain them.

A number of forces have combined to change all that. Today, most businesses do their business operations digitally—what Constellation Research analyst Andy Mulholland calls “Front Office Digital Business.” Technology-as-a-service models have changed how the technologies and applications are delivered and supported, with support and upgrades coming from outsourced vendors, not in-house staff. With Cloud models, an IT department may not even be necessary. Entrepreneurs can spin up a company with a swipe of a credit card and have all the technology they need at their fingertips, hosted remotely in the Cloud.

The Gulf between IT and Business

Although the gap between IT and business is closing, the gulf in how IT is run still remains. In structure, most IT departments today remain close to their technology roots. This is, in part, because IT departments are still run by technologists and engineers whose primary skills lie in the challenge (and excitement) of creating new technologies. Not every skilled engineer makes a good businessperson, but in most organizations, people who are good at their jobs often get promoted into management whether or not they are ready to manage. The Peter Principle is a problem that hinders many organizations, not just IT departments.

What has happened is that IT departments have not traditionally been run as if they were a business. Good business models for how IT should be run have been piecemeal or slow to develop—despite IT’s role in how the rest of the business is run. Although some standards have been developed as guides for how different parts of IT should be run (COBIT for governance, ITIL for service management, TOGAF®, an Open Group standard, for architecture), no overarching standard has been developed that encompasses how to holistically manage all of IT, from systems administration to development to management through governance and, of course, staffing. For all its advances, IT has yet to become a well-oiled business machine.

The business—and technological—climate today is not the same as it was when companies took three years to do a software upgrade. Everything in today’s climate happens nearly instantaneously. “Convergence” technologies like Cloud Computing, Big Data, social media, mobile and the Internet of Things are changing the nature of IT. New technical skills and methodologies are emerging every day, as well. Although languages such as Java or C may remain the top programming languages, new languages like Pig or Hive are emerging every day, as are new approaches to development, such as Scrum, Agile or DevOps.

The Consequences of IT Business as Usual

With these various forces facing IT, departments will either need to change, adopting a model where IT is managed more effectively, or face an impending chaos that ends up hindering their organizations.

Without an effective management model for IT, companies won’t be able to mobilize quickly for a digital age. Even something as simple as an inability to utilize data could result in problems such as investing in a product prototype that customers aren’t interested in. Those are mistakes most companies can’t afford to make these days.

Having an umbrella view of what all of IT does also allows the department to make better decisions. With technology and development trends changing so quickly, how do you know what will fit your organization’s business goals? You want to take advantage of the trends or technologies that make sense for the company and leave behind those that don’t.

For example, in DevOps, one of the core concepts is to bring the development phase into closer alignment with releasing and operating the software. You need to know your business’s operating model to determine whether this approach will actually work or not. Having a sense of that also allows IT to make decisions about whether it’s wise to invest in training or hiring staff skilled in those methods or buying new technologies that will allow you to adopt the model.

Not having that management view can leave companies subject to the whims of technological evolution and also to current IT fads. If you don’t know what’s valuable to your business, you run the risk of chasing every new fad that comes along. There’s nothing worse—as the IT guy—than being the person who comes to the management meeting each month saying you’re trying yet another new approach to solve a problem that never seems to get solved. Business people won’t respond to that and will wonder if you know what you’re doing. IT needs to be decisive and choose wisely.

These issues not only affect the IT department but also trickle up to business operations. Ineffective IT shops will not know when to invest in the right technologies, and they may miss out on working with new technologies that could benefit the business. Without a framework to plan how technology fits into the business, you could end up in the position of having great IT bows and arrows, but when you walk out into the competitive world, you get machine-gunned.

The other side is cost and efficiency: if the entire IT shop isn’t running smoothly, you end up spending too much money on problems, which in turn takes money away from other parts of the business that could keep the organization competitive. Failing to manage IT can lead to competitive loss across numerous areas within a business.

A New Business Model

To help prevent the consequences that may result if IT isn’t run more like a business, industry leaders such as Accenture; Achmea; AT&T; HP IT; ING Bank; Munich RE; PwC; Royal Dutch Shell; and University of South Florida recently formed a consortium to address how to better run the business of IT. With billions of dollars invested in IT each year, these companies realized their investments must be made wisely and show governable results in order to succeed.

The result of their efforts is The Open Group IT4IT™ Forum, which released a Snapshot of its proposed Reference Architecture for running IT more like a business this past November. The Reference Architecture is meant to serve as an operating model for IT, providing the “missing link” that previous IT-function-specific models have failed to address. The model allows IT to achieve the same level of business discipline, predictability and efficiency as other business functions.

The Snapshot includes a four-phase Value Chain for IT that provides both an operating model for an IT business and outlines how value can be added at every stage of the IT process. In addition to providing suggested best practices for delivery, the Snapshot includes technical models for the IT tools that organizations can use, whether for systems monitoring, release monitoring or IT point solutions. Providing guidance around IT tools will allow these tools to become more interoperable so that they can exchange information at the right place at the right time. In addition, it will allow for better control of information flow between various parts of the business through the IT shop, thus saving IT departments the time and hassle of aggregating tools or cobbling together their own tools and solutions. Staffing guidance models are also included in the Reference Architecture.

Why IT4IT now? Digitalization cannot be held back, particularly in an era of Cloud, Big Data and an impending Internet of Things. An IT4IT Reference Architecture provides more than just best practices for IT—it puts IT in the context of a business model that allows IT to be a contributing part of an enterprise, providing a roadmap for digital businesses to compete and thrive for years to come.

Join the conversation! @theopengroup #ogchat

David Lounsbury is Chief Technical Officer (CTO) and Vice President, Services for The Open Group. As CTO, he ensures that The Open Group’s people and IT resources are effectively used to implement the organization’s strategy and mission. As VP of Services, David leads the delivery of The Open Group’s proven processes for collaboration and certification, both within the organization and in support of third-party consortia.

David holds a degree in Electrical Engineering from Worcester Polytechnic Institute, and is holder of three U.S. patents.


Filed under Cloud, digital technologies, Enterprise Transformation, Internet of Things, IT, IT4IT, TOGAF, TOGAF®

The Open Group Executive Round Table Event at Mumbai

By Bala Peddigari, Head – HiTech TEG and Innovation Management, Tata Consultancy Services Limited

The Open Group organized an Executive Round Table event at Taj Lands End in Mumbai on November 12, 2014. The goal was to brief industry executives on how The Open Group can help promote Enterprise Architecture within the organization, and how it helps to stay relevant to the Indian context in realizing and bringing in positive change. Executives from the Government of Maharashtra, Reserve Bank of India, NSDL, Indian Naval Service, SVC Bank, Vodafone, SP Jain Institute, Welingkar Institute of Management, VSIT, Media Lab Asia, Association of Enterprise Architects (AEA), Computer Society of India and others were present.

James de Raeve, Vice President, Certification of The Open Group, introduced The Open Group to the executives and explained the positive impact it is creating in driving Enterprise Architecture. He noted that most of the EA functions, Work Groups and Forums are driven by the participating companies and the Architects associated with them. James also noted that India ranks fourth in TOGAF® certification and that Bangalore is second only to London. He discussed the newest Forum, The Open Group IT4IT™ Forum, and its objective to solve some key business problems and build a Reference Architecture for managing the business of IT. The mission of The Open Group IT4IT Forum is to develop, evolve and drive the adoption of the vendor-neutral IT4IT Reference Architecture.

Rajesh Aggarwal, Principal Secretary IT, Government of Maharashtra, attended the Round Table and shared his view on how Enterprise Architecture can help key Government initiatives drive citizen-centric change. One example he used was the pension policy for senior citizens, who must show up at the bank every November to identify themselves and obtain a Life Certificate in order to continue receiving their pension, a process that can be simplified through IT. He drew an analogy to making a phone call to have pizza delivered from Pizza Hut or consumer goods from Flipkart; similarly, his vision is Smart and Digital Governance, where citizens can call and get government services at their door.


Jason Uppal, Chief Architect (Open CA Level 3 Certified), QR Systems in Canada, presented a session on “Digital Economy and Enterprise Architecture”. Jason emphasized the need for Enterprise Architecture and why, in the networked and digital economy, you need intent, not money, to drive change. He also shared his thoughts on the tools for this new game: Industrial Engineering and Enterprise Architecture, focused on improving performance capabilities across the value chain. Jason explained how EA can help in building capability in the organization, defining the value chain by leveraging EA capabilities, and transforming enterprise capabilities to apply those strategies. The key performance indicators of Enterprise Architecture can be measured through staff engagement, time and cost, project efficiency, capability effectiveness, and information quality, which together reflect the maturity of Enterprise Architecture in the organization. During his talk, Jason used many analogies from his own experience where Enterprise Architecture simplified and brought about much transformation in healthcare. Jason shared the example of Carlos Ghosn, who manages three companies worth $140 billion USD; the key to Ghosn’s success, he explained, is to protect his change-agents and provide them the platform and opportunity to experiment. Enterprise Architecture is all about the people who make it happen and create impact.

The heart of the overall Executive Round Table Event was a panel session on “Enterprise Architecture in India Context”. Panelists were Jason Uppal, Rakhi Gupta from TCS and myself who shared perspectives on the following questions:

  1. Enterprise Architecture and Agile – Do they complement?
  2. How are CIOs seeing Enterprise Architecture when compared to other CXOs?
  3. I have downloaded TOGAF, what should I do next?
  4. How is Enterprise Architecture envisioned in the next 5 years?
  5. How can Enterprise Architecture help the “Make in India” initiative?
  6. Should Enterprise Architecture have a course in academics for students?

I explained how Enterprise Architecture is relevant in academics and how it can lay the roots for building agile systems that respond quickly to change. I also shared my perspective on how Enterprise Architecture can build on strengths while covering weaknesses, and how TOGAF applies to and benefits the context of India’s future economy. Jason explained the changing dynamics in the education system toward a query-based, find-and-use approach to learning. Rakhi shared her thoughts based on her experience with the Department of Posts transformation, which kept a citizen-centric Enterprise Architecture approach.

Overall, the event created a positive wave of understanding of the importance of Enterprise Architecture and of applying TOGAF knowledge consistently to pave the road for the future. The event was well organized by Abraham Koshy and team, with good support from the CSI Mumbai and AEA Mumbai chapters.

Bala Prasad Peddigari has worked with Tata Consultancy Services Limited for over 15 years. Bala practices Enterprise Architecture and evangelizes platform solutions, performance and scalable architectures and Cloud technology initiatives within TCS. He heads the Technology Excellence Group for the HiTech Vertical. Bala drives the architecture and technology community initiatives within TCS through coaching, mentoring and grooming techniques.

Bala has a Masters in Computer Applications from University College of Engineering, Osmania. He is an Open Group Master IT Certified Architect and serves as a Board Member in The Open Group Certifying Authority. He received accolades for his cloud architectural strengths and published his papers in IEEE.  Bala is a regular speaker in Open Group and technology events and is a member of The Open Group Open Platform 3.0™.


Filed under Accreditations, architecture, Certifications, Cloud, Conference, Enterprise Architecture, Open CA, Open CITS, Open Platform 3.0, Standards, TOGAF, TOGAF®

The Emergence of the Third Platform

By Andras Szakal, Vice President and Chief Technology Officer, IBM U.S. Federal

By 2015 there will be more than 5.6 billion personal devices in use around the world. Personal mobile computing, business systems, e-commerce, smart devices and social media are generating an astounding 2.5 billion gigabytes of data per day. Non-mobile, network-enabled intelligent devices, often referred to as the Internet of Things (IoT), are poised to explode to over 1 trillion devices by 2015.

Rapid innovation and astounding growth in smart devices is driving new business opportunities and enterprise solutions. Many of these new opportunities and solutions are based on deep insight gained through analysis of the vast amount of data being generated.

The expansive growth of personal and pervasive computing power continues to drive innovation that is giving rise to a new class of systems and a pivot to a new generation of computing platform. Over the last fifty years, two generations of computing platform have dominated the business and consumer landscape. The first generation was dominated by the monolithic mainframe, while distributed computing and the Internet characterized the second generation. Cloud computing, Big Data/Analytics, the Internet of Things (IoT), mobile computing and even social media are the core disruptive technologies that are now converging at the crossroads of the emergence of a third generation of computing platform.

This will require new approaches to enterprise and business integration and interoperability. Industry bodies like The Open Group must help guide customers through the transition by facilitating customer requirements, documenting best practices, establishing integration standards and transforming the current approach to Enterprise Architecture to adapt to the change in how organizations will build, use and deploy the emerging third generation of computing platform.

Enterprise Computing Platforms

An enterprise computing platform provides the underlying infrastructure and operating environment necessary to support business interactions. Enterprise systems are often composed of complex application interactions necessary to support business processes, customer interactions, and partner integration. These interactions, coupled with the underlying operating environment, define an enterprise systems architecture.

The hallmark of successful enterprise systems architecture is a standardized and stable systems platform. This is an underlying operating environment that is stable, supports interoperability, and is based on repeatable patterns.

Enterprise platforms have evolved from the monolithic mainframes of the 1960s and 1970s through the advent of the distributed systems in the 1980s. The mainframe-based architecture represented the first true enterprise operating platform, referred to henceforth as the First Platform. The middleware-based distributed systems that followed and ushered in the dawn of the Internet represented the second iteration of platform architecture, referred to as the Second Platform.

While the creation of the Internet and the advent of web-based e-commerce are of historical significance, the underlying platform was still predominantly based on distributed architectures and therefore is not recognized as a distinct change in platform architecture. However, Internet-based e-commerce and service-based computing considerably contributed to the evolution toward the next distinct version of the enterprise platform. This Third Platform will support the next iteration of enterprise systems, which will be born out of multiple simultaneous and less obvious disruptive technology shifts.

The Convergence of Disruptive Technologies

The emergence of the third generation of enterprise platforms is manifested at the crossroads of four distinct, almost simultaneous, disruptive technology shifts: cloud computing, mobile computing, big data-based analytics and the IoT. The use of applications based on these technologies, such as social media and business-driven insight systems, has contributed to both the convergence and the rate of adoption.

These technologies are dramatically changing how enterprise systems are architected, how customers interact with business, and the rate and pace of development and deployment across the enterprise. This is forcing vendors, businesses, and governments to shift their systems architectures to accommodate integrated services that leverage cloud infrastructure, while integrating mobile solutions and supporting the analysis of the vast amount of data being generated by mobile solutions and social media. All this is happening while maintaining the integrity of the evolving business capabilities, processes, and transactions that require integration with business systems such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM).

Cloud computing and the continued commoditization of computer storage are key facilitating elements of this convergence. Cloud computing lowers the complexity of enterprise computing through virtualization and automated infrastructure provisioning, while solid-state and software-based Internet storage has made big data practical and affordable. Cloud computing solutions continue to evolve and offer innovative services like Platform as a Service (PaaS)-based development environments that integrate directly with big data solutions. Higher density, cloud-based and solid-state storage continue to lower the cost and complexity of storage and big data solutions.

The emergence of the smartphone and enterprise mobile computing is a key impetus for the emergence of big data solutions and an explosion of innovative storage technologies. The modern mobile platform, with all its rich applications, device sensors, and access to social networks, is almost single-handedly responsible for the explosion of data and the resulting rush to provide solutions to analyze and act on the insight contained in the vast ocean of personalized information. In turn, this phenomenon has created a big data market ecosystem based on the premise that open data is the new natural resource.

The emergence of sensor-enabled smartphones has foreshadowed the potential value of making everyday devices interconnected and intelligent by adding network-based sensors that allow devices to enhance their performance by interacting with their environment, and through collaboration with other devices and enterprise systems in the IoT. For example, equipment manufacturers are using sensors to gain insight into the condition of fielded equipment. This approach reduces the mean time to failure and pinpoints manufacturing quality issues and potential design flaws. The system of sensors also integrates with the manufacturer’s internal supply chain systems to identify needed parts and optimize the distribution process. In turn, the customer benefits by avoiding equipment downtime through scheduling maintenance before a part fails.

Over time, the IoT will require an operating environment for devices that integrates with existing enterprise business systems. But this will require that smart devices effectively integrate with cloud-based enterprise business systems, the enterprise customer engagement systems, as well as the underlying big data infrastructure responsible for gleaning insight into the data this vast network of sensors will generate. While each of these disruptive technology shifts has evolved separately, they share a natural affinity for interaction, collaboration, and enterprise integration that can be used to optimize an enterprise’s business processes.

Evolving Enterprise Business Systems

Existing enterprise systems (ERP, CRM, Supply Chain, Logistics, etc.) are still essential to the foundation of a business or government and form Systems of Record (SoR) that embody core business capabilities and the authoritative processes based on master data records. The characteristics of SoR are:

  • Encompass core business functions
  • Transactional in nature
  • Based on structured databases
  • Authoritative source of information (master data records)
  • Access is regulated
  • Changes follow a rigorous governance process.

Mobile systems, social media platforms, and Enterprise Market Management (EMM) solutions form another class of systems called Systems of Engagement (SoE). Their characteristics are:

  • Interact with end-users through open collaborative interfaces (mobile, social media, etc.)
  • High percentage of unstructured information
  • Personalized to end-user preferences
  • Context-based analytical business rules and processing
  • Access is open and collaborative
  • Evolves quickly and according to the needs of the users.

The emergence of the IoT is embodied in a new class of system, Systems of Sensors (SoS), which includes pervasive computing and control. Their characteristics are:

  • Based on autonomous network-enabled devices
  • Devices that use sensors to collect information about the environment
  • Interconnected with other devices or enterprise engagement systems
  • Changing behavior based on intelligent algorithms and environmental feedback
  • Developed through formal product engineering process
  • Updates to device firmware follow a continuous lifecycle.

The Third Platform

The Third Platform is a convergence of cloud computing, big data solutions, mobile systems and the IoT integrated into the existing enterprise business systems.


Figure 1: The Three Classes of Systems within the Third Platform

The successful implementation and deployment of enterprise SoR has been embodied in best practices, methods, frameworks, and techniques that have been distilled into enterprise architecture. The same level of rigor and pattern-based best practices will be required to ensure the success of solutions based on Third Platform technologies. Enterprise architecture methods and models need to evolve to include guidance, governance, and design patterns for implementing business solutions that span the different classes of system.

The Third Platform builds upon many of the concepts that originated with Service-Oriented Architecture (SOA) and came to prominence in the closing years of the Second Platform era. The rise of the Third Platform provides the technology and environment to enable greater maturity of service integration within an enterprise.

The Open Group Service Integration Maturity Model (OSIMM) standard[1] provides a way in which an organization can assess its level of service integration maturity. Adoption of the Third Platform inherently addresses many of the attributes necessary to achieve the highest levels of service integration maturity defined by OSIMM. It will enable new types of application architecture that can support dynamically reconfigurable business and infrastructure services across a wide variety of devices (SoS), internal systems (SoR), and user engagement platforms (SoE).

Solution Development

These new architectures and the underlying technologies will require adjustments to how organizations approach enterprise IT governance, to lower the barrier to entry for implementing and integrating the technologies. Current adoption requires extensive expertise to implement, integrate, deploy, and maintain the systems. First movers have shown the rest of the industry the realm of the possible, and have reaped the rewards of the early adopter.

The influence of cloud and mobile-based technologies has changed the way in which solutions will be developed, delivered, and maintained. SoE-based solutions interact directly with customers and business partners, which necessitates a continuous delivery of content and function to align with the enterprise business strategy.

Most cloud-based services employ a roll-forward test and delivery model. A roll-forward model allows an organization to address functional inadequacies and defects in almost real-time, with minimal service interruptions. The integration and automation of development and deployment tools and processes reduces the risk of human error and increases visibility into quality. In many cases, end-users are not even aware of updates and patch deployments.
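The roll-forward model described above can be sketched as a simple deployment loop: each release goes live in turn, and a defect found in production is remedied by shipping the next version rather than rolling back. The version numbers and health check below are hypothetical, invented purely for illustration.

```python
# Minimal sketch of a roll-forward delivery loop (hypothetical service and
# health check; a real pipeline would integrate CI/CD and monitoring tools).

def health_check(version: str) -> bool:
    """Pretend health probe: in this sketch, version 1.0.1 has a defect."""
    return version != "1.0.1"

def roll_forward(releases):
    """Deploy each release in order; a failed check is remedied by the
    NEXT release, never by rolling back to the previous one."""
    deployed = None
    for version in releases:
        deployed = version  # push the new version live
        if health_check(version):
            print(f"{version}: healthy")
        else:
            print(f"{version}: defect detected -> fix ships in next release")
    return deployed

# The defective 1.0.1 is superseded by 1.0.2 with minimal interruption.
final = roll_forward(["1.0.0", "1.0.1", "1.0.2"])
print("running:", final)
```

Note how the model never reverts: end-users simply see the next, corrected version, which is why many are unaware of updates and patch deployments at all.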

This new approach to development, deployment, and operations is referred to as DevOps – combining development and operations tools, governance, and techniques into a single tool set and management practice. This allows the business to dictate not only the requirements, but also the rate and pace of change, aligned to the needs of the enterprise.

[1] The Open Group Service Integration Maturity Model (OSIMM), Open Group Standard (C117), published by The Open Group, November 2011; refer to: www.opengroup.org/bookstore/catalog/c117.htm


Figure 2: DevOps: The Third Platform Solution Lifecycle

The characteristics of an agile DevOps approach are:

  • Harmonization of resources and practices between development and IT operations
  • Automation and integration of the development and deployment processes
  • Alignment of governance practices to holistically address development and operations with business needs
  • Optimization of the DevOps process through continuous feedback and metrics.

In contrast to SoE, SoR have a slower velocity of delivery. Such systems are typically released on fixed, pre-planned release schedules. Their inherent stability of features and capabilities necessitates a more structured and formal development approach, which traditionally equates to fewer releases over time. Furthermore, the impact changes to SoR have on core business functionality limits the magnitude and rate of change an organization is able to tolerate. But the emergence of the Third Platform will continue to put pressure on these core business systems to become more agile and flexible in order to adapt to the magnitude of events and information generated by mobile computing and the IoT.

As the technologies of the Third Platform coalesce, organizations will need to adopt hybrid development and delivery models based on agile DevOps techniques that are tuned appropriately to the class of system (SoR, SoE, or SoS) and aligned with an acceptable rate of change.

DevOps is a key attribute of the Third Platform that will shift the fundamental management structure of the IT department. The Third Platform will usher in an era where one monolithic IT department is no longer necessary or even feasible. The line between business function and IT delivery will be imperceptible as this new platform evolves. The lines of business will become intertwined with the enterprise IT functions, ultimately leading to the IT department and business capability becoming synonymous. The recent emergence of the Enterprise Market Management organizations is an example where the marketing capabilities and the IT delivery systems are managed by a single executive – the Enterprise Marketing Officer.

The Challenge

The emergence of a new enterprise computing platform will usher in opportunity and challenge for businesses and governments that have invested in the previous generation of computing platforms. Organizations will be required to invest in both expertise and technologies to adopt the Third Platform. Vendors are already offering cloud-based Platform as a Service (PaaS) solutions that will provide integrated support for developing applications across the three evolving classes of systems – SoS, SoR, and SoE. These new development platforms will continue to evolve and give rise to new application architectures that were unfathomable just a few years ago. The emergence of the Third Platform is sure to spawn an entirely new class of dynamically reconfigurable intelligent applications and devices where applications reprogram their behavior based on the dynamics of their environment.

Almost certainly this shift will result in infrastructure and analytical capacity that will facilitate the emergence of cognitive computing which, in turn, will automate the very process of deep analysis and, ultimately, evolve the enterprise platform into the next generation of computing. This shift will require new approaches, standards and techniques for ensuring the integrity of an organization’s business architecture, enterprise architecture and IT systems architectures.

To effectively embrace the Third Platform, organizations will need to ensure that they have the capability to deliver boundaryless systems through integrated services composed of components that span the three classes of systems. This is where communities like The Open Group can help to document architectural patterns that support agile DevOps principles and tooling as the Third Platform evolves.

Technical standardization of the Third Platform has only just begun; for example, standardization of the cloud infrastructure has only recently crystallized around OpenStack. Mobile computing platform standardization remains fragmented across many vendor offerings, even with the support of rigid developer ecosystems and open-sourced runtime environments. The standardization and enterprise support for SoS is still nascent but underway within groups like the AllSeen Alliance and The Open Group’s QLM workgroup.

Call to Action

The rate and pace of innovation, standardization, and adoption of Third Platform technologies is astonishing, but needs the guidance and input of the practitioner community. It is incumbent upon industry communities like The Open Group to address the gaps between traditional Enterprise Architecture and an approach that scales to the Internet timescales being imposed by the adoption of the Third Platform.

The question is not whether Third Platform technologies will dominate the IT landscape, but rather how quickly this pivot will occur. Along the way, the industry must apply open standards processes to guard against fragmentation into multiple incompatible technology platforms.

The Open Group has launched a new forum to address these issues. The Open Group Open Platform 3.0™ Forum is intended to provide a vendor-neutral environment where members share knowledge and collaborate to develop standards and best practices necessary to help guide the evolution of Third Platform technologies and solutions. The Open Platform 3.0 Forum will provide a place where organizations can help illuminate their challenges in adopting Third Platform technologies. The Open Platform 3.0 Forum will help coordinate standards activities that span existing Open Group Forums and ensure a coordinated approach to Third Platform standardization and development of best practices.

Innovation itself is not enough to ensure the value and viability of the emerging platform. The Open Group can play a unique role through its focus on Boundaryless Information Flow™ to facilitate the creation of best practices and integration techniques across the layers of the platform architecture.

Andras Szakal, VP and CTO, IBM U.S. Federal, is responsible for IBM’s industry solution technology strategy in support of the U.S. Federal customer. Andras was appointed IBM Distinguished Engineer and Director of IBM’s Federal Software Architecture team in 2005. He is an Open Group Distinguished Certified IT Architect, IBM Certified SOA Solution Designer and a Certified Secure Software Lifecycle Professional (CSSLP). Andras holds undergraduate degrees in Biology and Computer Science and a Master’s degree in Computer Science from James Madison University. He has been a driving force behind IBM’s adoption of government IT standards as a member of the IBM Software Group Government Standards Strategy Team and the IBM Corporate Security Executive Board focused on secure development and cybersecurity. Andras represents the IBM Software Group on the Board of Directors of The Open Group and currently holds the Chair of The Open Group Certified Architect (Open CA) Work Group. More recently, he was appointed chair of The Open Group Trusted Technology Forum and leads the development of The Open Trusted Technology Provider Framework.



The Open Group London 2014 Preview: A Conversation with RTI’s Stan Schneider about the Internet of Things and Healthcare

By The Open Group

RTI is a Silicon Valley-based messaging and communications company focused on helping to bring the Industrial Internet of Things (IoT) to fruition. Recently named “The Most Influential Industrial Internet of Things Company” by Appinions and published in Forbes, RTI’s EMEA Manager Bettina Swynnerton will be discussing the impact that the IoT and connected medical devices will have on hospital environments and the Healthcare industry at The Open Group London October 20-23. We spoke to RTI CEO Stan Schneider in advance of the event about the Industrial IoT and the areas where he sees Healthcare being impacted the most by connected devices.

Earlier this year, industry research firm Gartner declared the Internet of Things (IoT) to be the most hyped technology around, having reached the pinnacle of the firm’s famed “Hype Cycle.”

Despite the hype around consumer IoT applications—from FitBits to Nest thermostats to fashionably placed “wearables” that may begin to appear in everything from jewelry to handbags to kids’ backpacks—Stan Schneider, CEO of IoT communications platform company RTI, says that 90 percent of what we’re hearing about the IoT is not where the real value will lie. Most of the media coverage and hype is about the “Consumer” IoT, like Google Glass or sensors in refrigerators that tell you when the milk’s gone bad. However, most of the real value of the IoT will be realized in what GE has coined the “Industrial Internet”—applications working behind the scenes to keep industrial systems operating more efficiently, says Schneider.

“In reality, 90 percent of the real value of the IoT will be in industrial applications such as energy systems, manufacturing advances, transportation or medical systems,” Schneider says.

However, the reality today is that the IoT is quite new. As Schneider points out, most companies are still trying to figure out what their IoT strategy should be. There isn’t that much active building of real systems at this point.

“Most companies, at the moment, are just trying to figure out what the Internet of Things is. I can do a webinar on ‘What is the Internet of Things?’ or ‘What is the Industrial Internet of Things?’ and get hundreds and hundreds of people showing up, most of whom don’t have any idea. That’s where most companies are. But there are several leading companies that very much have strategies, and there are a few that are even executing their strategies,” he said. According to Schneider, these companies include GE, which he says has a 700+ person team currently dedicated to building its Industrial IoT platform, as well as companies such as Siemens and Audi, which already have some applications working.

For its part, RTI is actively involved in trying to help define how the Industrial Internet will work and how companies can take disparate devices and make them work with one another. “We’re a nuts-and-bolts, make-it-work type of company,” Schneider notes. As such, openness and standards are critical not only to RTI’s work but to the success of the Industrial IoT in general, says Schneider. RTI is currently involved in as many as 15 different industry standards initiatives.

IoT Drivers in Healthcare

Although RTI is involved in IoT initiatives in many industries, from manufacturing to the military, Healthcare is one of the company’s main areas of focus. For instance, RTI is working with GE Healthcare on the software for its CAT scanner machines. GE chose RTI’s DDS (data distribution service) product because it will let GE standardize on a single communications platform across product lines.

Schneider says there are three big drivers that are changing the medical landscape when it comes to connectivity: the evolution of standalone systems into distributed systems, the connection of devices to improve patient outcomes, and the replacement of dedicated wiring with networks.

The first driver is that medical devices that have been standalone devices for years are now being built on new distributed architectures. This gives practitioners and patients easier access to the technology they need.

For example, RTI customer BK Medical, a medical device manufacturer based in Denmark, is in the process of changing their ultrasound product architecture. They are moving from a single-user physical system to a wirelessly connected distributed design. Images will now be generated in and distributed by the Cloud, thus saving significant hardware costs while making the systems more accessible.

According to Schneider, ultrasound machine architecture hasn’t really changed in the last 30 or 40 years. Today’s ultrasound machines are still wheeled in on a cart. That cart contains a wired transducer, image processing hardware or software, and a monitor. If someone wants to keep an image—for example, images of a fetus in utero—they carry out physical media. Years ago it was a Polaroid picture; today the images are saved to CDs and handed to the patient.

In contrast, BK’s new systems will be completely distributed, Schneider says. Doctors will be able to carry a transducer that looks more like a cellphone with them throughout the hospital. A wireless connection will upload the imaging data into the cloud for image calculation. With a distributed scenario, only one image processing system may be needed for a hospital or clinic. It can even be kept in the cloud off-site. Both patients and caregivers can access images on any display, wherever they are. This kind of architecture makes the systems much cheaper and far more efficient, Schneider says. The days of the wheeled-in cart are numbered.

The second IoT driver in Healthcare is connecting medical devices together to improve patient outcomes. Most hospital devices today are completely independent and standalone. So, if a patient is hooked up to multiple monitors, the only thing that really “connects” those devices today is a piece of paper at the end of a hospital bed that shows how each should be functioning. Nurses are supposed to check these devices on an hourly basis to make sure they’re working correctly and the patient is ok.

Schneider says this approach is error-ridden. First, the nurse may be too busy to do a good job checking the devices. Worse, any number of things can set off alarms whether there’s something wrong with the patient or not. As anyone who has ever visited a friend or relative in the hospital can attest, alarms are going off constantly, making it difficult to determine when someone is really in distress. In fact, one of the biggest problems in hospital settings today, Schneider says, is a phenomenon known as “alarm fatigue.” Single devices simply can’t reliably tell if there’s some minor glitch in the data or if the patient is in real trouble. Thus, 80% of all device alarms in hospitals are turned off. Meaningless alarms fatigue personnel, so they either ignore or turn off the alarms…and people can die.

To deal with this problem, new technologies are being created that will connect devices together on a network. Multiple devices can then work in tandem to determine when something is really wrong. If the machines are networked, alarms can be set to go off only when multiple distress indicators are present rather than just one. For example, if oxygen levels drop on both an oxygen monitor on someone’s finger and on a respiration monitor, the alarm is much more likely to signal a real patient problem than if only one source shows a problem. Schneider says the algorithms to fix these problems are reasonably well understood; the barrier is the lack of networking to tie all of these machines together.
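The correlation logic described here can be sketched in a few lines: an alarm fires only when independent monitors agree that the patient is in distress. The monitor names and thresholds below are invented for illustration and are not drawn from any real device.

```python
# Sketch of networked alarm correlation: alarm only when multiple
# independent monitors indicate distress (thresholds are illustrative).

DISTRESS_THRESHOLDS = {
    "fingertip_spo2": 90,    # % oxygen saturation; below this is a distress sign
    "respiration_spo2": 90,
}

def in_distress(readings: dict) -> bool:
    """Require at least two independent monitors to agree before alarming,
    so a glitch on a single sensor does not wake the whole ward."""
    low_signals = sum(
        1 for monitor, value in readings.items()
        if value < DISTRESS_THRESHOLDS[monitor]
    )
    return low_signals >= 2

# One noisy sensor: no alarm. Both monitors low: alarm.
print(in_distress({"fingertip_spo2": 85, "respiration_spo2": 97}))  # False
print(in_distress({"fingertip_spo2": 85, "respiration_spo2": 86}))  # True
```

The point is not the arithmetic but the prerequisite: none of this is possible until the monitors share a network, which is exactly the barrier Schneider identifies.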

The third area of change in the industrial medical Internet is the transition to networked systems from dedicated wired designs. Surgical operating rooms offer a good example. Today’s operating room is a maze of wires connecting screens, computers, and video. Videos, for instance, come from dynamic x-ray imaging systems, from ultrasound navigation probes and from tiny cameras embedded in surgical instruments. Today, these systems are connected via HDMI or other specialized cables. These cables are hard to reconfigure. Worse, they’re difficult to sterilize, Schneider says. Thus, the surgical theater is hard to configure, clean and maintain.

In the future, the mesh of special wires can be replaced by a single, high-speed networking bus. Networks make the systems easier to configure and integrate, easier to use and accessible remotely. A single, easy-to-sterilize optical network cable can replace hundreds of wires. As wireless gets faster, even that cable can be removed.

“By changing these systems from a mesh of TV-cables to a networked data bus, you really change the way the whole system is integrated,” he said. “It’s much more flexible, maintainable and sharable outside the room. Surgical systems will be fundamentally changed by the Industrial IoT.”

IoT Challenges for Healthcare

Schneider says there are numerous challenges facing the integration of the IoT into existing Healthcare systems—from technical challenges to standards and, of course, security and privacy. But one of the biggest challenges facing the industry, he believes, is plain old fear. In particular, there is a lot of fear within the industry of choosing the wrong path and, in effect, “walking off a cliff” by heading in the wrong direction. Getting beyond that fear and taking risks, he says, will be necessary to move the industry forward.

In a practical sense, the other thing currently holding back integration is the sheer number of connected devices currently being used in medicine, he says. Manufacturers each have their own systems and obviously have a vested interest in keeping their equipment in hospitals, so many have been reluctant to become standards-compliant and push interoperability forward, Schneider says.

This is, of course, not just a Healthcare issue. “We see it in every single industry we’re in. It’s a real problem,” he said.

Legacy systems are also a problematic area. “You can’t just go into a Kaiser Permanente and rip out $2 billion worth of equipment,” he says. Integrating new systems with existing technology is a process of incremental change that takes time and vested leadership, says Schneider.

Cloud Integration a Driver

Although many of these technologies are not yet very mature, Schneider believes that the fundamental industry driver is Cloud integration. In Schneider’s view, the Industrial Internet is ultimately a systems problem. As with the ultrasound machine example from BK Medical, it’s not that an existing ultrasound machine doesn’t work just fine today, Schneider says, it’s that it could work better.

“Look what you can do if you connect it to the Cloud—you can distribute it, you can make it cheaper, you can make it better, you can make it faster, you can make it more available, you can connect it to the patient at home. It’s a huge system problem. The real overwhelming striking value of the Industrial Internet really happens when you’re not just talking about the hospital but you’re talking about the Cloud and hooking up with practitioners, patients, hospitals, home care and health records. You have to be able to integrate the whole thing together to get that ultimate value. While there are many point cases that are compelling all by themselves, realizing the vision requires getting the whole system running. A truly connected system is a ways out, but it’s exciting.”

Open Standards

Schneider also says that openness is absolutely critical for these systems to ultimately work. Just as agreeing on a standard for HTTP running over the Internet Protocol (IP) drove the Web, a new device-appropriate protocol will be necessary for the Internet of Things to work. Consensus will be necessary, he says, so that systems can talk to each other and connectivity will work. The Industrial Internet will push that out to the Cloud and beyond, he says.

“One of my favorite quotes is from IBM,” he says. “IBM said, ‘It’s not a new Internet, it’s a new Web.’” By that, they mean that the industry needs new, machine-centric protocols to run over the same Internet hardware and base IP protocol, Schneider said.

Schneider believes that this new web will eventually evolve to become the new architecture for most companies. However, for now, particularly in hospitals, it’s the “things” that need to be integrated into systems and overall architectures.

One example where this level of connectivity will make a huge difference, he says, is in predictive maintenance. Once a system can “sense” or predict that a machine may fail or that a part needs to be replaced, there will be a huge economic impact and cost savings. For instance, he said Siemens uses acoustic sensors to monitor the state of its wind generators. By placing sensors next to the bearings in the machine, they can literally “listen” for squeaky wheels and thus figure out whether a turbine may soon need repair. These analytics let them know when a bearing must be replaced before the turbine shuts down. Of course, the infrastructure will need to connect all of these “things” to each other and to the cloud first. So, there will need to be a lot of system-level changes in architectures.
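The bearing-monitoring idea can be sketched as a simple energy threshold: compute the root-mean-square amplitude of a window of acoustic samples and flag the bearing when it rises well above a healthy baseline. The sample values, baseline, and flagging policy below are invented for illustration and are not Siemens’ actual analytics.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an acoustic/vibration sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def needs_maintenance(samples, baseline_rms, factor=2.0):
    """Flag a bearing when its vibration energy exceeds the healthy
    baseline by a chosen factor (an illustrative policy only)."""
    return rms(samples) > factor * baseline_rms

healthy = [0.1, -0.2, 0.15, -0.1]   # quiet bearing
worn    = [0.9, -1.1, 1.0, -0.8]    # the "squeaky wheel"
baseline = rms(healthy)
print(needs_maintenance(healthy, baseline))  # False
print(needs_maintenance(worn, baseline))     # True
```

Real deployments would replace the fixed factor with learned models and stream the sensor data over the kind of networked infrastructure Schneider describes, but the structure is the same: continuous measurement, a baseline, and a decision made before the turbine shuts down.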

Standards, of course, will be key to getting these architectures to work together. Schneider believes standards development for the IoT will need to be tackled from both horizontal and vertical standpoints. Both generic communication standards and industry-specific standards, like how to integrate an operating room, must evolve.

“We are a firm believer in open standards as a way to build consensus and make things actually work. It’s absolutely critical,” he said.

Stan Schneider is CEO at Real-Time Innovations (RTI), the Industrial Internet of Things communications platform company. RTI is the largest embedded middleware vendor and has an extensive footprint in all areas of the Industrial Internet, including Energy, Medical, Automotive, Transportation, Defense, and Industrial Control. Stan has published over 50 papers in both academic and industry press. He speaks widely at events and conferences on topics ranging from networked medical devices for patient safety and the future of connected cars to the role of the DDS standard in the IoT, the evolution of power systems, and the various IoT protocols. Before RTI, Stan managed a large Stanford robotics laboratory, led an embedded communications software team and built data acquisition systems for automotive impact testing. Stan completed his PhD in Electrical Engineering and Computer Science at Stanford University, and holds a BS and MS from the University of Michigan. He is a graduate of Stanford’s Advanced Management College.

 



Business Benefit from Public Data

By Dr. Chris Harding, Director for Interoperability, The Open Group

Public bodies worldwide are making a wealth of information available, and encouraging its commercial exploitation. This sounds like a bonanza for the private sector at the public expense, but entrepreneurs are holding back. A healthy market for products and services that use public-sector information would provide real benefits for everyone. What can we do to bring it about?

Why Governments Give Away Data

The EU directive of 2003 on the reuse of public sector information encourages the Member States to make as much information available for reuse as possible. This directive was revised and strengthened in 2013. The U.S. Open Government Directive of 2009 provides similar encouragement, requiring US government agencies to post at least three high-value data sets online and register them on its data.gov portal. Other countries have taken similar measures to make public data publicly available.

Why are governments doing this? There are two main reasons.

One is that it improves the societies that they serve and the governments themselves. Free availability of information about society and government makes people more effective citizens and makes government more efficient. It illuminates discussion of civic issues, and points a searchlight at corruption.

The second reason is that it has a positive effect on the wealth of nations and their citizens. The EU directive highlights the ability of European companies to exploit the potential of public-sector information, and contribute to economic growth and job creation. Information is not just the currency of democracy. It is also the lubricant of a successful economy.

Success Stories

There are some big success stories.

If you drive a car, you probably use satellite navigation to find your way about, and this may use public-sector information. In the UK, for example, map data that can be used by sat-nav systems is supplied for commercial use by a government agency, the Ordnance Survey.

When you order something over the web for delivery to your house, you often enter a postal code and see most of the address auto-completed by the website. Postcode databases are maintained by national postal authorities, which are generally either government departments or regulated private corporations, and made available by them for commercial use. Here, the information is not directly supporting a market, but is contributing to the sale of a range of unrelated products and services.

The data may not be free. There are commercial arrangements for supply of map and postcode data. But it is available, and is the basis for profitable products and for features that make products more competitive.

The Bonanza that Isn’t

These successes are, so far, few in number. The economic benefits of open government data could be huge. The McKinsey Global Institute estimates a potential of between 3 and 5 trillion dollars annually. Yet the direct impact of Open Data on the EU economy in 2010, seven years after the directive was issued, is estimated by Capgemini at only about 1% of that, although the EU accounts for nearly a quarter of world GDP.

The business benefits to be gained from using map and postcode data are obvious. There are other kinds of public sector data, where the business benefits may be substantial, but they are not easy to see. For example, data is or could be available about public transport schedules and availability, about population densities, characteristics and trends, and about real estate and land use. These are all areas that support substantial business activity, but businesses in these areas seldom make use of public sector information today.

Where are the Products?

Why are entrepreneurs not creating these potentially profitable products and services? There is one obvious reason. The data they are interested in is not always available and, where it is available, it is provided in different ways, and comes in different formats. Instead of a single large market, the entrepreneur sees a number of small markets, none of which is worth tackling. For example, the market for an application that plans public transport journeys across a single town is not big enough to justify substantial investment in product development. An application that could plan journeys across any town in Europe would certainly be worthwhile, but is not possible unless all the towns make this data available in a common format.
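For transit data, such a common format does in fact exist: the General Transit Feed Specification (GTFS), a set of plain CSV files (stops.txt, routes.txt, trips.txt and so on) that a growing number of transit agencies publish. A brief sketch of why a shared format matters, merging two hypothetical GTFS-style stops.txt feeds from different towns into one dataset with a single parser:

```python
import csv
import io

# Two hypothetical GTFS-style stops.txt feeds from different towns.
# Because both use the same columns, one parser handles them all.
town_a = """stop_id,stop_name,stop_lat,stop_lon
A1,Market Square,52.2053,0.1218
A2,Rail Station,52.1940,0.1370"""

town_b = """stop_id,stop_name,stop_lat,stop_lon
B1,Old Town Hall,48.1374,11.5755"""

def load_stops(feed_text, town):
    """Parse one stops.txt feed, tagging each stop with its town."""
    return [dict(row, town=town) for row in csv.DictReader(io.StringIO(feed_text))]

all_stops = load_stops(town_a, "Town A") + load_stops(town_b, "Town B")
print(len(all_stops))  # → 3
```

If each town instead published its schedule in its own layout (or as PDF), the developer would need a bespoke importer per town, and the Europe-wide journey planner would stay uneconomic.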

Public sector information providers often do not know what value their data has, or understand its applications. Working within tight budgets, they cannot afford to spend large amounts of effort on assembling and publishing data that will not be used. They follow the directives but, without common guidelines, they simply publish whatever is readily to hand, in whatever form it happens to be.

The data that could support viable products is not available everywhere and, where it is available, it comes in different formats. (One that is often used is PDF, which is particularly difficult to process as an information source.) The result is that the cost of product development is high, and the expected return is low.

Where is the Market?

There is a second reason why entrepreneurs hesitate. The shape of the market is unclear. In a mature market, everyone knows who the key players are, understands their motivations, and can predict to some extent how they will behave. The market for products and services based on public sector information is still taking shape. No one is even sure what kinds of organization will take part, or what they will do. How far, for example, will public-sector bodies go in providing free applications? Can large corporations buy future dominance with loss-leader products? Will some unknown company become an overnight success, like Facebook? With these unknowns, the risks are very high.

Finding the Answers

Public sector information providers and standards bodies are tackling these problems. The Open Group participates in SHARE-PSI, the European network for the exchange of experience and ideas around implementing open data policies in the public sector. The experience gained by SHARE-PSI will be used by the World Wide Web Consortium as a basis for standards and guidelines for publication of public sector information. These standards and guidelines may be used, not just by the public sector, but by not-for-profit bodies and even commercial corporations, many of which have information that they want to make freely available.

The Open Group is making a key contribution by helping to map the shape of the market. It is using the Business Scenario technique from its well-known Enterprise Architecture methodology TOGAF® to identify the kinds of organization that will take part, and their objectives and concerns.

There will be a preview of this on October 22 at The Open Group event in London which will feature a workshop session on Open Public Sector Data. This workshop will look at how Open Data can help business, present a draft of the Business Scenario, and take input from participants to help develop its conclusions.

The developed Business Scenario will be presented at the SHARE-PSI workshop in Lisbon on December 3-4. The theme of this workshop is encouraging open data usage by commercial developers. It will bring a wide variety of stakeholders together to discuss and build the relationship between the public and private sectors. It will also address, through collaboration with the EU LAPSI project, the legal framework for use of open public sector data.

Benefit from Participation!

If you are thinking about publishing or using public-sector data, you can benefit from these workshops by gaining an insight into the way that the market is developing. In the long term, you can influence the common standards and guidelines that are being developed. In the short term, you can find out what is happening and network with others who are interested.

The social and commercial benefits of open public-sector data are not being realized today. They can be realized through a healthy market in products and services that process the data and make it useful to citizens. That market will emerge when public bodies and businesses clearly understand the roles that they can play. Now is the time to develop that understanding and begin to profit from it.

Register for The Open Group London 2014 event at http://www.opengroup.org/london2014/registration.

Find out how to participate in the Lisbon SHARE-PSI workshop at http://www.w3.org/2013/share-psi/workshop/lisbon/#Participation

 

Chris HardingDr. Chris Harding is Director for Interoperability at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0™ Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.


Filed under big data, Cloud, digital technologies, Enterprise Architecture, Open Platform 3.0, TOGAF®, Uncategorized

The Open Group London 2014: Open Platform 3.0™ Panel Preview with Capgemini’s Ron Tolido

By The Open Group

The third wave of platform technologies is poised to revolutionize how companies do business not only for the next few years but for years to come. At The Open Group London event in October, Open Group CTO Dave Lounsbury will be hosting a panel discussion on how The Open Group Open Platform 3.0™ will affect Enterprise Architectures. Panel speakers include IBM Vice President and CTO of U.S. Federal IMT Andras Szakal and Capgemini Senior Vice President and CTO for Application Services Ron Tolido.

We spoke with Tolido in advance of the event about the progress companies are making in implementing third platform technologies, the challenges facing the industry as Open Platform 3.0 evolves and the call to action he envisions for The Open Group as these technologies take hold in the marketplace.

Below is a transcript of that conversation.

From my perspective, we have to realize: What is the call to action that we should have for ourselves? If we look at the mission of Boundaryless Information Flow™ and the need for open standards to accommodate that, what exactly can The Open Group and any general open standards do to facilitate this next wave in IT? I think it’s nothing less than a revolution. The first platform was the mainframe, the second platform was the PC and now the third platform is anything beyond the PC, so all sorts of different devices, sensors and ways to access information, to deploy solutions and to connect. What does it mean in terms of Boundaryless Information Flow and what is the role of open standards to make that platform succeed and help companies to thrive in such a new world?

That’s the type of call to action I’m envisioning. And I believe there are very few Forums or Work Groups within The Open Group that are not affected by this notion of the third platform. Firstly, I believe an important part of the Open Platform 3.0 Forum’s mission will be to analyze, to understand, the impacts of the third platform, of all those different areas that we’re evolving currently in The Open Group, and, if you like, orchestrate them a bit or be a catalyst in all the working groups and forums.

In a blog you wrote this summer for Capgemini’s CTO Blog, you cited third platform technologies as being responsible for a renewed interest in IT as an enabler of business growth. What is it about the third platform that is driving that interest?

It’s the same type of revolution as we’ve seen with the PC, which was the second platform. A lot of people in business units—through the PC and client/server technologies and Windows and all of these different things—realized that they could create solutions of a whole new order. The second platform meant many more applications, many more uses, much more business value to be achieved and less direct dependence on the central IT department. I think we’re seeing a very similar evolution right now, but the essence of the move is not that it moves us even further away from central IT but that it puts the power of technology right in the business. It’s much easier to create solutions. Nowadays, there are many more channels that are so close to the business that it takes business people to understand them. This also explains why business people like the third platform so much—it’s the Cloud, it’s mobile, social, it’s big data; all of these are waves that bring technology closer to the business, and are easy to use with very apparent business value that we haven’t seen before, certainly not in the PC era. So we’re seeing a next wave, almost a revolution in terms of how easy it is to create solutions and how widely spread these solutions can be. Because again, as with the PC, it’s many more applications yet again and many more potential uses that can be connected through these applications, so that’s the very nature of the revolution. So what people say to me these days on the business side is ‘We love IT, it’s just these bloody IT people that are the problem.’

Due to the complexities of building the next wave of platform computing, do you think that we may hit a point of fatigue as companies begin to tackle everything that is involved in creating that platform and making it work together?

The way I see it, that’s still the work of the IT community and the Enterprise Architect and the platform designer. It’s the very nature of the platform that it’s attractive to use, not to build. The very nature of the platform is to connect to it and launch from it, but building the platform is an entirely different story. I think it requires platform designers and Enterprise Architects, if you like, and people to do the plumbing and do the architecting and the design underneath. But the real nature of the platform is to use it and to build upon it rather than to create it. So the happy view is that the “business people” don’t have to construct this.

I do believe, by the way, that many of the people in The Open Group will be on the side of the builders. They’re supposed to like complexity and like reducing it, so if we do it right the users of the platform will not notice this effort. It’s the same with the Cloud—the problem with the Cloud nowadays is that many people are tempted to run their own clouds, their own technologies, and before they know it, they only have additional complexity on their agenda, rather than reduced complexity. It’s the same with the third platform—it’s a foundation which is almost a no-brainer to do business upon, for the next generation of business models. But if we do it wrong, we only have additional complexity on our hands, and we give IT a bad name yet again. We don’t want to do that.

What are Capgemini customers struggling with the most in terms of adopting these new technologies and putting together an Open Platform 3.0?

It’s not always good to look at history, but if you look at the emergence of the second platform, the PC, of course there were years in which central IT said ‘nobody needs a PC, we can do it all on the mainframe.’ They just didn’t believe it, and business people just started to do it themselves. And for years, we created a mess as a result of it, and we’re still picking up some of the pieces of that situation. The question for IT people, in particular, is to understand how to find this new rhythm, how to adopt the dynamics of this third platform while dealing with all the complexity of the legacy platform that’s already there. I think The Open Group will be very critical in accelerating the creation of such a platform: what exactly should be in the third platform, what type of services should you be developing, how would these services interact, and could we create a set of open standards that the industry could align to so that we don’t have to do too much work in integrating all that stuff? If we, as The Open Group, can create that industry momentum, that, at least, would narrow the gap between business and IT that we currently see. Right now IT’s very clearly not able to deliver on the promise because they have their hands full with surviving the existing IT landscape, so unless they do something about simplifying it on the one hand and bridging that old world with the new one, they might still be very unpopular in the forthcoming years. That’s not what you want as an IT person—you want to enable business and new business. But I don’t think we’ve been very effective with that for the past ten years as an industry in general, so that’s a big thing that we have to deal with, bridging the old world with the new world. But anything we can do to accelerate and simplify that job from The Open Group would be great, and I think that’s the very essence of where our actions would be.

What are some of the things that The Open Group, in particular, can do to help affect these changes?

To me it’s still in the evangelization phase. Sooner or later people have to buy it and say ‘We get it, we want it, give me access to the third platform.’ Then the question will be how to accelerate building such an actual platform. So the big question is: What does such a platform look like? What types of services would you find on such a platform? For example, mobility services, data services, integration services, management services, development services, all of that. What would that look like in a typical Platform 3.0? Maybe even define a catalog of services that you would find in the platform. Then, of course, if you could use such a catalog or shopping list, if you like, to reach out to the technology suppliers of this world and convince them to pick that up and gear around these definitions—that would facilitate such a platform. Also maybe the architectural roadmap—so what would an architecture look like and what would be the typical five ways of getting there? We have to start with your local situation, so probably also several design cases would be helpful, so there’s an architectural dimension here.
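One way to make such a catalog concrete is to model it as structured data that suppliers, architects and certification bodies can all check against. A purely illustrative sketch, using the service categories Tolido names (the individual entries are hypothetical, not an Open Group definition):

```python
# Hypothetical Platform 3.0 service catalog. The category names follow
# the interview; the entries under each are illustrative only.
CATALOG = {
    "mobility services": ["device registration", "push notification"],
    "data services": ["stream ingestion", "analytics query"],
    "integration services": ["API gateway", "event bus"],
    "management services": ["monitoring", "identity and access"],
    "development services": ["CI/CD pipeline", "sandbox provisioning"],
}

def find_category(service):
    """Return the catalog category that offers a given service, if any."""
    for category, services in CATALOG.items():
        if service in services:
            return category
    return None

print(find_category("event bus"))  # → integration services
```

Even a simple shared list like this gives technology suppliers something to "gear around", which is the point Tolido makes about using the catalog as a shopping list when reaching out to vendors.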

Also, in terms of competencies, what type of competencies will we need in the near future to be able to supply these types of services to the business? That’s, again, very new—in this case, IT Specialist Certification and Architect Certification. These groups also need to think about what are the new competencies inherent in the third platform and how does it affect things like certification criteria and competency profiles?

In other areas, if you look at TOGAF®, an Open Group standard, is it really still suitable in the fast-paced world of the third platform, or do we need a third platform version of TOGAF? With Security, for example, there are so many users, so many connections, and the activities of the former Jericho Forum seem like child’s play compared to what you will see around the third platform, so there’s no Forum or Work Group that’s not affected by the emergence of Open Platform 3.0.

With Open Platform 3.0 touching pretty much every aspect of technology and The Open Group, how do you tackle that? Do you have just an umbrella group for everything or look at it through the lens of TOGAF or security or the IT Specialist? How do you attack something so large?

It’s exactly what you just said. It’s fundamentally my belief that we need to do both of these two things. First, we need a catalyst forum, which I would argue is the Open Platform 3.0 Forum, which would be the catalyst platform, the orchestration platform if you like, that would do the overall definitions, the call to action. They’ve already been doing the business scenarios—they set the scene. Then it would be up to this Forum to reach out to all the other Forums and Work Groups to discuss impact and make sure it stays aligned, so here we have an orchestration function of the Open Platform 3.0 Forum. Then, very obviously, all the other Work Groups and Forums need to pick it up and do their own stuff because you cannot aspire to do all of this with one and the same forum because it’s so wide, it’s so diverse. You need to do both.

The Open Platform 3.0 Forum has been working for a year and a half now. What are some of the things the Forum has accomplished thus far?

They’ve been particularly working on some of the key definitions and some of the business scenarios. I would say in order to create an awareness of Open Platform 3.0 in terms of the business value and the definitions, they’ve done a very good job. Next, there needs to be a call to action to get everybody mobilized and setting tangible steps toward the Platform 3.0. I think that’s currently where we are, so that’s good timing, I believe, in terms of what the forum has achieved so far.

Returning to the mission of The Open Group, given all of the awareness we have created, what does it all mean in terms of Boundaryless Information Flow and how does it affect the Forums and Work Groups in The Open Group? That’s what we need to do now.

What are some of the biggest challenges that you see facing adoption of Open Platform 3.0 and standards for that platform?

They are relatively immature technologies. For example, with the Cloud you see a lot of players, a lot of technology providers being quite reluctant to standardize. Some of them are very open about it and are like ‘Right now we are in a niche, and we’re having a lot of fun ourselves, so why open it up right now?’ What would move things forward is more pressure from the business side saying ‘We want to use your technology but only if you align with some of these emerging standards.’ That would do it, or certainly help. This, of course, is what makes The Open Group so powerful: it includes not only technology providers, but also businesses, the enterprises involved and end users of technology. If they worked together and created something to mobilize technology providers, that would certainly be a breakthrough, but these are immature technologies and, as I said, with some of these technology providers, it seems more important to them to be a niche player for now and create their own market rather than standardizing on something that their competitors could be on as well.

So this is a sign of a relatively immature industry because every industry that starts to mature around certain topics begins to work around open standards. The more mature we grow in mastering the understanding of the Open Platform 3.0, the more you will see the need for standards arise. It’s all a matter of timing so it’s not so strange that in the past year and a half it’s been very difficult to even discuss standards in this area. But I think we’re entering that era really soon, so it seems to be good timing to discuss it. That’s one important limiting area; I think the providers are not necessarily waiting for it or committed to it.

Secondly, of course, this is a whole next generation of technologies. With all new generations of technologies there are always generation gaps and people in denial or who just don’t feel up to picking it up again or maybe they lack the energy to pick up a new wave of technology and they’re like ‘Why can’t I stay in what I’ve mastered?’ All very understandable. I would call that a very typical IT generation gap that occurs when we see the next generation of IT emerge—sooner or later you get a generation gap, as well. Which has nothing to do with physical age, by the way.

With all these technologies converging so quickly, that gap is going to have to close quickly this time around isn’t it?

Well, there are still mainframes around, so you could argue that there will be two or even three speeds of IT sooner or later. A very stable, robust and predictable legacy environment could even be the first platform that’s more mainframe-oriented, like you see today. A second wave would be that PC workstation, client/server, Internet-based IT landscape, and it has a certain base and certain dynamics. Then you have this third phase, which is the new platform, that is more dynamic and volatile and much more diverse. You could argue that there might be within an organization multiple speeds of IT, multiple speeds of architectures, multi-speed solutioning, and why not choose your own speed?

It probably takes a decade or more to really move forward for many enterprises.

It’s not going as quickly as the Gartners of this world typically think it is—in practice we all know it takes longer. So I don’t see any reason why certain people wouldn’t deliberately choose to stay in second gear rather than move to third, simply because they think it’s challenging to be there. That seems perfectly sound to me, and it would bring companies a lot of work for many years.

That’s an interesting concept because start-ups can easily begin on a new platform but if you’re a company that has been around for a long time and you have existing legacy systems from the mainframe or PC era, those are things that you have to maintain. How do you tackle that as well?

That’s a given in big enterprises. Not everybody can be a disruptive start up. Maybe we all think that we should be like that but it’s not the case in real life. In real life, we have to deal with enterprise systems and enterprise processes and all of them might be very vulnerable to this new wave of challenges. Certainly enterprises can be disruptive themselves if they do it right, but there are always different dynamics, and, as I said, we still have mainframes, as well, even though we declared their end quite some time ago. The same will happen, of course, to PC-based IT landscapes. It will take a very long time and will take very skilled hands and minds to keep it going and to simplify.

Having said that, you could argue that some new players in the market obviously have the advantage of not having to deal with that and could possibly benefit from a first-mover advantage where existing enterprises have to juggle several balls at the same time. Maybe that’s more difficult, but of course enterprises are enterprises for a good reason—they are big and holistic and mighty, and they might be able to do things that start-ups simply can’t do. But it’s a very unpredictable world, as we all realize, and the third platform brings a lot of disruptiveness.

What’s your perspective on how the Internet of Things will affect all of this?

It’s part of the third platform of course, and it’s something Andras Szakal will be addressing as well. There’s much more coming, both on the input side, where essentially everything is becoming a sensor, even your wallpaper or paint, and on the output side, in terms of devices that we use to communicate or get information—smart things that whisper in your ears or whatever we’ll have in the coming years. All of this is clearly part of the Platform 3.0 wave as we move away from the PC and the workstation, and there’s a whole bunch of new technologies around to replace them. The Internet of Things is clearly part of it, and we’ll need open standards as well because there are so many different things and devices, and if you don’t create the right standards and platform services to deal with them, it will be a mess. It’s an integral part of the Platform 3.0 wave that we’re seeing.

What is the Open Platform 3.0 Forum going to be working on over the next few months?

Understanding what this Open Platform 3.0 actually means—I think the work we’ve seen so far in the Forum really sets the scene in terms of what it is, and the definitions are maturing. Andras will be adding his notion of the Internet of Things and looking at definitions of what exactly it is. Many people already intuitively have an image of it.

The second will be how we deliver value to the business—so the business scenarios are a crucial thing to consider to see how applicable they are, how relevant they are to enterprises. The next thing to do will pertain to work that still needs to be done in The Open Group, as well. What would a new Open Platform 3.0 architecture look like? What are the platform services? What are the ones we can start working on right now? What are the most important business scenarios and what are the platform services that they will require? So architectural impacts, skills impacts, security impacts—as I said, there are very few areas in IT that are not touched by it. Even the new IT4IT Forum that will be launched in October, which is all about methodologies and lifecycle, will need to consider Agile, DevOps-related methodologies because that’s the rhythm and the pace that we’ve got to expect in this third platform. So the rhythm of the working group is definitions, business scenarios, and then you start thinking about what the platform consists of and what type of services you need to create to support it, and hopefully by then we’ll have some open standards to help accelerate that thinking and help enterprises set a course for themselves. That’s our mission as The Open Group: to help facilitate that.

Tolido-RonRon Tolido is Senior Vice President and Chief Technology Officer of Application Services Continental Europe, Capgemini. He is also a Director on the board of The Open Group and blogger for Capgemini’s multiple award-winning CTO blog, as well as the lead author of Capgemini’s TechnoVision and the global Application Landscape Reports. As a noted Digital Transformation ambassador, Tolido speaks and writes about IT strategy, innovation, applications and architecture. Based in the Netherlands, Mr. Tolido currently takes interest in apps rationalization, Cloud, enterprise mobility, the power of open, Slow Tech, process technologies, the Internet of Things, Design Thinking and – above all – radical simplification.

 

 


Filed under architecture, Boundaryless Information Flow™, Certifications, Cloud, digital technologies, Enterprise Architecture, Future Technologies, Information security, Internet of Things, Open Platform 3.0, Security, Service Oriented Architecture, Standards, TOGAF®, Uncategorized