
Now is the Time for Third Generation Enterprise Architecture Methods

By Erwin Oord, Principal Consultant Enterprise Architecture and Managing Partner at Netherlands-based ArchiXL Consultancy

The Enterprise Architecture methods in common use today have been around for a long time. Although they have made a strong contribution to the development of the architecture discipline, they have reached the limits of what they can do. It is time to make a leap forward, and for that we need a new generation of architecture methods. What characterizes the architecture methods of this new generation?

Architects currently working with methods like TOGAF®, an Open Group standard, DYA or IAF might not realize it, but these methods stem from the early days of the architecture discipline. DYA originated in 2001, and the first version of TOGAF dates back as far as 1995! Of course, these architecture methods are not dinosaurs that forgot to go extinct. TOGAF continues to produce new versions that are the result of lively discussion at The Open Group.

But an architecture method is like a car model. With annual facelifts you can adjust to the latest fashion, but you cannot hide the fact that the basic product reflects the spirit of the time in which it was developed. Car models, including those of the better car brands, reach their end after a decade or so. The automotive industry is used to this and knows that this cycle requires high investments, but also brings new opportunities. Enterprise Architecture is no different!

Let’s take a look back in history. The notion of Enterprise Architecture emerged in the mid-eighties. In that period, people like Zachman discovered that systems development models together create a coherent view of the enterprise. Thus arose the first architecture frameworks. This was the first generation of architecture methods, although a “method” as such was barely recognizable.

The need for a repeatable process to develop and use architecture models emerged in the nineties. This is the time when the famous TOGAF Architecture Development Method came about, later followed by the concept of the strategic dialogue in DYA. This process-oriented approach to Enterprise Architecture was a great leap forward. We can therefore speak of a second generation of architecture methods.

The shocking discovery is that not much has happened since. Of course, methods have evolved with the addition of reference models and modeling techniques. The underlying content frameworks have improved, and now include architecture principles and implementation aspects. But all this is merely facelifting. We are still working with basic designs that date back more than a decade.

In order to make a leap forward again, we must escape the current process orientation. Instead of focusing on a fixed process to develop and use architecture, we must focus on the results of architecture. But that is only possible when we realize that architecture is not a process in itself but an aspect of the overall change process in an organization. After all, governments and companies are constantly changing. An architecture method should therefore not be self-contained, but should be fully integrated into the change process.

A third-generation architecture method has no fixed processes but focuses on essential architecture tasks, and integrates these tasks into the change methodology the organization already uses. It provides a limited set of clearly defined architecture products that can be used directly in the change process. And it recognizes clearly defined roles that, depending on the situation, can be assigned to the right stakeholders, which is certainly not always the Enterprise Architect. The key to a third-generation Enterprise Architecture method is not the method itself but the way it is integrated into the organization.

Erwin Oord, Principal Consultant Enterprise Architecture and Managing Partner at Netherlands-based ArchiXL consultancy, has rich experience in applying and customising Enterprise Architecture methods in both public sector and business organisations. As co-author of a successful (Dutch) guide on selecting appropriate architecture methods, he is frequently asked to set up architecture practices or to advance architecture maturity in organisations. In his assignments, he focuses on the effective integration of architecture with business and organisational change management.


Filed under Enterprise Architecture, Standards, TOGAF®, Uncategorized

Using The Open Group Standards – O-ISM3 with TOGAF®

By Jose Salamanca, UST Global, and Vicente Aceituno, Inovement

In order to prevent duplication of work and maximize the value provided by the Enterprise Architecture and Information Security disciplines, it is necessary to find ways to communicate and to take advantage of each other’s work. We have been examining the relationship between O-ISM3 and TOGAF®, both Open Group standards, and have found that, terminology differences aside, there are quite a number of ways to use these two standards together. We’d like to share our findings with The Open Group’s audience of Enterprise Architects, IT professionals, and Security Architects in this article.

Any ISMS (Information Security Management System) manager needs to understand what the Security needs of the business are, how IT can cater for those needs, and how Information Security can contribute the most with the fewest resources possible. Conversely, Enterprise Architects are challenged to build Security into the architectures deployed in the business in such a way that Security operations can be managed effectively.

There are parts of Enterprise Architecture that make the process of understanding the dependencies between the business and IT pretty straightforward. For example:

  • The TOGAF® 9 document “Business Principles – Goals – Drivers” will help inform the O-ISM3 practitioner about what the business is about; in other words, what needs to be protected.
  • The TOGAF 9 Architecture Definition document contains the Application, Technology and Data Domains, as well as the Business Domain. Because a TOGAF service is a subdivision of an application used by one or several business functions, the O-ISM3 practitioner will be able to understand the needs of the business, developed and expressed as O-ISM3 Security objectives and Security targets, by interviewing the business process owners (found in the TOGAF Architecture Definition).
  • To determine how prepared applications are to meet those Security objectives and Security targets, the O-ISM3 practitioner can interview the owner (found in the TOGAF Application Portfolio Catalog) of each application.
  • To check the location of the Components (parts of the application from the point of view of IT), which can have licensing and privacy protection implications, the O-ISM3 practitioner can interview the data owners (found in the TOGAF Architecture Definition) of each application.
  • To check the different Roles of use of an application, which will direct how access control is designed and operated, the O-ISM3 practitioner can interview the business process owners (found in the TOGAF Architecture Definition).
  • To understand how Components depend on each other, which has far-reaching implications for Security and business continuity, the O-ISM3 practitioner can examine the TOGAF Logical Application Components Map.

Conversely, TOGAF practitioners can find Security constraints, which are equivalent to O-ISM3 Security objectives (documented in the “TOGAF 9 Architecture Vision” and “Data Landscape”), in the O-ISM3 documents TSP-031 Information Security Targets and TSP-032 Information Requirements and Classification.

The Application Portfolio artifact in TOGAF is especially suitable for documenting how applications are categorized from the point of view of Security. This categorization makes it possible to prioritize how they are protected.

The Security requirements created in O-ISM3, namely Security objectives and Security targets, should be included in the document “Requirements TOGAF 9 Template – Architecture Requirements Specification”, which contains all of the requirements, constraints, and assumptions.
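To make the traceability concrete, here is a minimal sketch in Python of the artifact-to-artifact mapping described above, kept as plain data so it could be loaded into whatever repository or spreadsheet a team already uses. The field names and labels are illustrative, not normative terms from either standard.

# A sketch of the TOGAF-to-O-ISM3 traceability described in this article.
# All keys and labels are illustrative names, not normative terms.
ARTIFACT_MAP = [
    {"o_ism3_need": "what the business is about (what to protect)",
     "togaf_source": "Business Principles - Goals - Drivers",
     "who_to_ask": "business leadership"},
    {"o_ism3_need": "Security objectives and Security targets",
     "togaf_source": "Architecture Definition (Business Domain)",
     "who_to_ask": "business process owners"},
    {"o_ism3_need": "application readiness to meet Security targets",
     "togaf_source": "Application Portfolio Catalog",
     "who_to_ask": "application owners"},
    {"o_ism3_need": "Component dependencies (Security and continuity impact)",
     "togaf_source": "Logical Application Components Map",
     "who_to_ask": "architecture repository"},
]

def togaf_sources(keyword: str) -> list:
    """Return the TOGAF artifacts whose mapped O-ISM3 need mentions the keyword."""
    return [row["togaf_source"] for row in ARTIFACT_MAP
            if keyword.lower() in row["o_ism3_need"].lower()]

print(togaf_sources("security"))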

What are your views and experiences of aligning your ISMS and Enterprise Architecture methods? We’d love to hear your thoughts.

 

José Salamanca is Regional Head of Solutions & Services at UST Global Spain. He is certified in TOGAF® 9, as a Project Management Professional (PMP®), and in EFQM®. José also holds an Executive MBA from the European Business School (Spain) and earned his BSc at the Universidad Complutense de Madrid. He is Vice President of the Spanish chapter of the Association of Enterprise Architects and a Master Teacher at the Universidad Antonio de Nebrija in Madrid. José has built his professional career with repeated successes in Europe and the Middle East.

 

 

Vicente Aceituno is the principal author of O-ISM3 and an experienced Information Security Manager and Consultant with a broad background in the outsourcing of security services and in research. His focus is information security outsourcing and management, and related fields like metrics and certification of ISMSs. Vicente is President of the Spanish chapter of the Information Systems Security Association, a member of The Open Group Security Forum Steering Committee, Secretary of the Spanish chapter of the Association of Enterprise Architects, and an ISMS Forum member.


Filed under Enterprise Architecture, Enterprise Transformation, Information security, Security, Security Architecture, Standards, TOGAF®, Uncategorized

The Internet of Things is the New Media

By Dave Lounsbury, Chief Technical Officer, The Open Group

A tip of the hat to @artbourbon for pointing out the article “Principles for Open Innovation and Open Leadingship” by Peter Vander Auwera, which led to a TED Talk by Joi Ito with his “Nine Principles of the Media Lab”. Something in this presentation struck me:

“Media is plural for Medium, Medium is something in which you can express yourself. The medium was hardware, screens, robots, etc. Now the medium is society, ecosystem, journalism,… Our work looks more like social science.”

Great changes in society often go hand-in-hand with advances in communications, which in turn are tied to improvements in the scale or portability of media. Think of the printing press, television, or even the development of paint in tubes, which allowed impressionist painters to get out of their studios to paint water lilies and wheat fields.


We are seeing a similar advance in the next generation of the Internet. Traditionally, humans interact with computer systems and networks through visual media, like screens of varying sizes and printed material. However, this is changing: sensors and actuators are shrinking in size and price, and there has been an explosion of devices, new services and applications that network these together into larger systems to increase their value through Metcalfe’s law. We interact with the actions of these sensors not just with our eyes, but with other senses as well; a simple example is the feeling of warmth as your house adjusts its temperature as you arrive home.
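As a back-of-the-envelope illustration of why networking devices together multiplies their value: Metcalfe’s law, in its usual statement, says a network of n nodes has n(n-1)/2 potential pairwise connections, so its potential value grows roughly with the square of the number of connected devices. A few lines of Python make the effect visible:

# Metcalfe's law, roughly: value tracks the number of potential
# pairwise connections, n*(n-1)/2, which grows with n squared.
def potential_connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} devices -> {potential_connections(n):>7} potential connections")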

These devices, and the platforms that orchestrate their interactions, are the media in which the next generation of the Internet will be painted. We call it the Internet of Things today, or maybe the Internet of Everything, but in the long run it will just be the Internet. The expression of connectivity through sensors and devices will soon become as commonplace as social media is today.

Join the conversation! @theopengroup #ogchat

David is Chief Technical Officer (CTO) and Vice President, Services for The Open Group. As CTO, he ensures that The Open Group’s people and IT resources are effectively used to implement the organization’s strategy and mission. As VP of Services, David leads the delivery of The Open Group’s proven processes for collaboration and certification, both within the organization and in support of third-party consortia.

David holds a degree in Electrical Engineering from Worcester Polytechnic Institute, and is holder of three U.S. patents.


Filed under digital technologies, Future Technologies, Internet of Things, Open Platform 3.0, Uncategorized

Q&A with Marshall Van Alstyne, Professor, Boston University School of Management and Research Scientist MIT Center for Digital Business

By The Open Group

The word “platform” has become a nearly ubiquitous term in the tech and business worlds these days. From “Platform as a Service” (PaaS) to IDC’s Third Platform to The Open Group Open Platform 3.0™ Forum, the concept of platforms and building technology frames and applications on top of them has become the next “big thing.”

Although the technology industry tends to conceive of “platforms” as the vehicle that is driving trends such as mobile, social networking, the Cloud and Big Data, Marshall Van Alstyne, Professor at Boston University’s School of Management and a Research Scientist at the MIT Center for Digital Business, believes that the radical shifts that platforms bring are not just technological.

We spoke with Van Alstyne prior to The Open Group Boston 2014, where he presented a keynote, about platforms, how they have shifted traditional business models and how they are impacting industries everywhere.

The title of your session at the Boston conference was “Platform Shift – How New Open Business Models are Changing the Shape of Industry.” How would you define both “platform” and “open business model”?

I think of “platform” as a combination of two things. One, a set of standards or components that folks can take up and use for production of goods and services. The second thing is the rules of play, or the governance model – who has the ability to participate, how do you resolve conflict, and how do you divide up the royalty streams, or who gets what? You can think of it as the two components of the platform—the open standard together with the governance model. The technologists usually get the technology portion of it, and the economists usually get the governance and legal portions of it, but you really need both of them to understand what a ‘platform’ is.

What is the platform allowing then and how is that different from a regular business model?

The platform allows third parties to conduct business using system resources so they can actually meet and exchange goods across the platform. Wonderful examples of that include AirBnB where you can rent rooms or you can post rooms, or eBay, where you can sell goods or exchange goods, or iTunes where you can go find music, videos, apps and games provided by others, or Amazon where third parties are even allowed to set up shop on top of Amazon. They have moved to a business model where they can take control of the books in addition to allowing third parties to sell their own books and music and products and services through the Amazon platform. So by opening it up to allow third parties to participate, you facilitate exchange and grow a market by helping that exchange.

How does this relate to the concept the technology industry is defining as the “third platform”?

I think of it slightly differently. The tech industry uses mobile and social and cloud and data to characterize it. In some sense, this view offers those as the attributes that characterize platforms, or the knowledge base that enables platforms. But we would add to that the economic forces that actually shape platforms. What we want to do is give you some of the strategic tools, the incentives, the rules that will actually help you control their trajectory by helping you improve who participates and then measure and improve the value they contribute to the platform. So a full ecosystem view is not just the technology and the data; it also measures the value and how you divide that value. The rules of play really become important.

I think the “third platform” offers marvelous concepts and attributes but you also need to add the economics to it: Why do you participate, who gets what portions of the value, and who ultimately owns control.

Who does control the platform then?

A platform has multiple parts. Determining who controls what part is the art and design of the governance model. You have to set up control in the right way to motivate people to participate. But before we get to that, let’s go back and complete the idea of what’s an ‘open platform.’

To define an open platform, consider both the right of access and the right to manipulate platform resources, then consider granting those rights to four different parties. One is the users—can they access one another, can they access data, can they access system resources? Another group is developers—can they manipulate system resources, can they add new features to it, can they sell through the platform? The third group is the platform providers. You often think of them as the folks that facilitate access across the platform. To give you an example, iTunes is a single monolithic store, so the provider is simply Apple, but Android, in contrast, allows multiple providers, so there’s a Samsung Android store, an LTC Android store, a Google Android store, and there’s even an Amazon version that uses a different version of Android. So that platform has multiple providers, each with rights to access users. The fourth group is the party that controls the underlying property rights, who owns the IP. The ability to modify the underlying standard, and also the rights of access for other parties, is the bottom-most layer.

So to answer the question of what is ‘open,’ you have to consider the rights of access of all four groups—the users, developers, the providers and IP rights holders, or sponsors, underneath.

Popping back up a level, we’re trying to motivate different parties to participate in the ecosystem. So what do you give the users? Usually it’s some kind of value. What do you give developers? Usually it’s some set of SDKs and APIs, but also some level of royalties. It’s fascinating. If you look back historically, Amazon initially tried a publishing-style royalty where they took 70 percent and gave only 30 percent back to developers. They found that didn’t fly very well and they had to fall back to the app store or software-style royalty, where they’re taking a lower percentage. I think Apple, for example, takes 30 percent, and Amazon is now close to that. You see ranges of royalties going anywhere from a few percent—an example is credit cards—all the way up to iStockphoto, where they take roughly 70 percent. That’s an extremely high rate, and one that I don’t recommend. We were just contracting for designs at 99designs and they take a 20 percent cut. That’s probably more realistic, but lower might perhaps be even better—you can create stronger network effects if that’s the case.

Again, the real question of control is how you motivate third parties to participate and add value. If you are allowing them to use resources to create value and keep a lot of that value, then they’re more motivated to participate, to invest, to bring their resources to your platform. If you take most of the value they create, they won’t participate. They won’t add value. One of the biggest challenges for open platforms—what you might call the ‘Field of Dreams’ approach—is that most folks open their platform and assume ‘if you build it, they will come,’ but you really need to reward them to do so. Why would they want to come build with you? There are numerous instances of platforms that opened but no developer chose to add value—the ecosystem was too small. You have to solve the chicken-and-egg problem: if you don’t have users, developers don’t want to build for you, but if you don’t have developer apps, then why do users participate? So you’ve got a huge feedback problem. And that is where the economics become critical; you must solve the chicken-and-egg problem to build and roll out platforms.

It’s not just a technology question; it’s also an economics and rewards question.

Then who is controlling the platform?

The answer depends on the type of platform. Giving different groups different sets of rights creates different types of platform. Consider the four different parties: users, developers, providers, and sponsors. At one extreme, the Apple Mac platform of the 1980s reserved most rights to Apple itself: development, hardware production (the provider layer), and modification of the IP (the sponsor layer). Apple controlled the platform and it remained closed. In contrast, Microsoft relaxed platform control in specific ways. It licensed to multiple providers, enabling Dell, HP, Compaq and others to sell the platform. It gave developers rights of access to SDKs and APIs, enabling them to extend the platform. These control choices gave Microsoft more than six times the number of developers and more than twenty times the market share of Apple at the high point of Microsoft’s dominance of desktop operating systems. Microsoft gave up some control in order to create a more inclusive platform and a much bigger market.

Control is not a single concept. There are many different control rights you can grant to different parties. For example, you often want to give users the ability to control their own data. You often want to give developers intellectual property rights for the apps that they create, and often over the data that their users create. You may want to give them some protections against platform misappropriation. Developers resent it if you take their ideas. So if the platform sees a really clever app that’s been built on top of it, what’s the guarantee that the platform doesn’t simply take it or build a competing app? You need to protect your developers in that case. The same thing’s true of the platform provider—what guarantees do they provide users for the quality of content on their ecosystem? For example, the Android ecosystem is much more open than the iPhone ecosystem, which means you have more folks offering stores. Simultaneously, that means that there are more viruses and more malware in Android, so what rights and guarantees do you require of the platform providers to protect the users so that they want to participate? And then at the bottom, what rights do other participants have to control the direction of the platform’s growth? In the Visa model, for example, there are multiple member banks that help to influence the general direction of that credit card standard. Usually the most successful platforms have a single IP rights holder, but there are several examples of platforms that have multiple IP rights holders.

So, in the end control defines the platform as much as the platform defines control.

What is the “secret” of the Internet-driven marketplace? Is that indeed the platform?

The secret is that, in effect, the goal of the platform is to increase transaction volume and value. If you can do that—and we can give you techniques for doing it—then you can create massive scale. Increasing the transaction value and transaction volume across your platform means that the owner of the platform doesn’t have to be the sole source of content and new ideas provided on the platform. If the platform owner is the only source of value, then the owner is also the bottleneck. The goal is to consummate matches between producers and consumers of value. You want to help users find the content, find the resources, find the other people that they want to meet across your platform. In Apple’s case, you’re helping them find the music, the video, the games, and the apps that they want. In AirBnB’s case, you’re helping them find the rooms that they want; on Uber, you’re helping them find a driver. On Amazon, the book recommendations help you find the content that you want. In all the truly successful platforms, the owner of the platform is not providing all of that value. They’re enabling third parties to add that value, and that’s one reason why The Open Group’s ideas are so important—you need open systems for this to happen.

What’s wrong with current linear business models? Why is a network-driven approach superior?

The fundamental reason why the linear business model no longer works is that it does not manage network effects. Network effects allow you to build platforms where users attract other users and you get feedback that grows your system. As more users join your platform, more developers join your platform, which attracts more users, which attracts more developers. You can see it on any of the major platforms. This is also true of Google. As advertisers use Google Search, the algorithms get better, people find the content that they want, so more advertisers use it. As more drivers join Uber, more people are happier passengers, which attracts more drivers. The more merchants accept Visa, the more consumers are willing to carry it, which attracts more merchants, which attracts more consumers. You get positive feedback.

The consequence of that is that you tend to get market concentration—you get winner-take-all markets. That’s where platforms dominate. So you have a few large firms within a given category, whether this is rides or books or hotels or auctions. Further, once network effects change your business model, the linear insights into pricing, inventory management, innovation and strategy break down.

When you have these multi-sided markets, pricing breaks down because you often price differently to one side than another, since one side attracts the other. Inventory management practices break down because you’re selling inventory that you don’t even own. Your R&D strategies break down because now you’re motivating innovation and research outside the boundaries of the firm, as opposed to inside the internal R&D group. And your strategies break down because you’re not just looking for cost leadership or product differentiation; now you’re looking to shape the network effects as you create barriers to entry.

One of the things that I really want to argue strenuously is that in markets where platforms will emerge, platforms beat product every time. So the platform business model will inevitably beat the linear, product-based business model. Because you’re harnessing new forces in order to develop a different kind of business model.

Think of it the following way: imagine that value is growing as users consume your product. Think of any of the major platforms: as more folks use Google, search gets better; the more recommendations improve on Amazon and the easier it is to find a ride on Uber, the more folks want to be on there. It is easier to scale network effects outside your business than inside your business. There are simply more people outside than inside. The moment that happens, the locus of control, the locus of innovation, moves from inside the firm to outside the firm. So the rules change. Pricing changes, your innovation strategies change, your inventory policies change, your R&D changes. You’re now managing resources outside the firm, rather than inside, in order to capture scale. This is different from the traditional industrial supply economies of scale.

Old systems are giving way to new systems. It’s not that the whole system breaks down; it’s simply that you’re looking to manage network effects and manage new business models. Another way to see this is that previously you were managing capital. In the industrial era, you were managing steel, you were managing large amounts of finance in banking, you were managing auto parts—huge supply economies of scale. In telecommunications, you were managing infrastructure. Now, you’re managing communities, and these are managed outside the firm. As for the value that’s been created at Facebook or WhatsApp or Instagram or any of the new acquisitions, it’s not the capital that’s critical, it’s the communities that are critical, and these are built outside the firm.

There is a lot of talk in the industry about the Nexus of Forces as Gartner calls it, or Third Platform (IDC). The Open Group calls it Open Platform 3.0. Your concept goes well beyond technology—how does Open Platform 3.0 enable new business models?

Those are the enablers—they’re, shall we say, necessary, but they’re not sufficient. You really must harness the economic forces in addition to those enablers—mobile, social, Cloud, data. You must manage communities outside the firm; that’s the mobile and the social element of it. But this also involves designing governance and setting incentives. How are you capturing users outside the organization, how are they contributing, how are they being motivated to participate, why are they spreading your products to their peers? The Cloud allows it to scale—so Instagram and WhatsApp and others scale. Data allows you to “consummate the match.” You use that data to help people find what they need, to add value, so all of those things are the enablers. Then you have to harness the economics of the enablers to encourage people to do the right thing. You can see the correct intuition if you simply ask what happens if all you offer is a Cloud service and nothing more. Why will anyone use it? What’s the value to that system? If you open APIs to it, again, if you don’t have a user base, why are developers going to contribute? Developers want to reach users. Users want valuable functionality.

You must manage the motives and the value-add on the platform. New business models come from orchestrating not just the technology but also the third-party sources of value. One of the biggest challenges is to grow these businesses from scratch—you’ve got the cold-start chicken-and-egg problem: you don’t have network effects without a user base, and without users you don’t have network effects.

Do companies need to transform themselves into a “business platform” to succeed in this new marketplace? Are there industries immune to this shift?

There is a continuum of companies that are going to be affected. It starts at one end with companies that are highly information-intense—anything that’s an information-intensive business will be dramatically affected, and anything that’s a community- or fashion-based business will be dramatically affected. Those include companies involved in media and news, songs, music, video; all of those are going to be the canaries in the coal mine that see this first. Moving farther along will be those industries that require some sort of certification—those include law and medicine and education—and those, too, will be platformized, so the services industries will become platforms. Farther down are the ones that are heavily capital-intensive, where control of physical capital is paramount; those include trains and oil rigs and telecommunications infrastructure. Eventually those will be affected by platform business models to the extent that data helps them gain efficiencies or add value, but they will in some sense be the last to be affected. Look for the businesses where the cost side is shrinking in proportion to the delivery of value and where the network effects are rising as a proportional increase in value. Those forces will help you predict which industries will be transformed.

How can Enterprise Architecture be a part of this and how do open standards play a role?

The second part of that question is actually much easier. How do open standards play a role? Open standards make it much easier for third parties to attach and incorporate technology and features such that they can in turn add value. Open standards are essential to that happening. You do need to ask the question as to who controls those standards—is it completely open, or is it a proprietary standard that is published but not manipulable by a third party?

There are at least four things that Enterprise Architects need to do. One of these is to design modular components that are swappable, so that as better systems become available, the better systems can be swapped in. The second is to watch for components of value that should be absorbed into the platform itself. As an example, in operating systems, web browsing has effectively been absorbed into the platform, and streaming has been absorbed into the platform, so architects need to be aware of how that actually works. A third thing they need to do is talk to the legal team to see where third parties’ property rights can be protected so that those third parties invest. One of the biggest mistakes that firms make is to simply assume that because they own the platform, because they have the rights of control, they can do what they please. If they do that, they risk alienating their ecosystems. So they should talk to their potential developers to incorporate developer concerns. One of my favorite examples is the Intel Architecture Lab, which has done a beautiful job of articulating the voices of developers in its own architectural plans. A fourth thing that can be done is an idea borrowed from SAP, which builds its Enterprise Architecture by articulating an 18-24 month roadmap that says these are the features that are coming, so you can anticipate and build on those. It also gives you an idea of which features will be safe to build on, so you won’t lose the value you’ve created.

What can companies do to begin opening their business models and more easily architect that?

What they should do is consider the four groups articulated earlier—the users, the providers, the developers and the sponsors—each of which serves a different role. Firms need to understand what their own role will be so that they can open and architect the other roles within their ecosystem. They’ll also need to choose what levels of exclusivity to give their ecosystem partners in different slices of the business. They should also figure out which components they prefer to offer themselves as unique competencies and where they need to seek third-party assistance, whether in new ideas, new resources or even new marketplaces. Those factors will help guide businesses toward different kinds of partnerships, and they’ll have to be open to those kinds of partners. In particular, they should think about where they are most likely to be missing ideas or missing opportunities. Those technical and business areas should be opened so that third parties can take advantage of those opportunities and add value.

 

Professor Van Alstyne is one of the leading experts in network business models. He conducts research on information economics, covering such topics as communications markets, the economics of networks, intellectual property, social effects of technology, and productivity effects of information. As co-developer of the concept of “two-sided networks,” he has been a major contributor to the theory of network effects, a set of ideas now taught in more than 50 business schools worldwide.

Awards include two patents, National Science Foundation IOC, SGER, SBIR, iCorp and Career Awards, and six best paper awards. Articles or commentary have appeared in Science, Nature, Management Science, Harvard Business Review, Strategic Management Journal, The New York Times, and The Wall Street Journal.


Filed under architecture, Cloud, Conference, Data management, digital technologies, Enterprise Architecture, Governance, Open Platform 3.0, Standards, Uncategorized

Discussing Enterprise Decision-Making with Penelope Everall Gordon

By The Open Group

Most enterprises today are in the process of jumping onto the Big Data bandwagon. The promise of Big Data, as we’re told, is that if every company collects as much data as they can—about everything from their customers to sales transactions to their social media feeds—executives will have “the data they need” to make important decisions that can make or break the company. Not collecting and using your data, as the conventional wisdom has it, can have deadly consequences for any business.

As is often the case with industry trends, the hype around Big Data contains both a fair amount of truth and a fair amount of fuzz. The problem is that within most organizations, that conventional wisdom about the power of data for decision-making is usually just the tip of the iceberg when it comes to how and why organizational decisions are made.

According to Penelope Gordon, a consultant for 1Plug Corporation who was recently a Cloud Strategist at Verizon and was formerly a Service Product Strategist at IBM, that’s why big “D” (Data) needs to be put back into the context of enterprise decision-making. Gordon, who spoke at The Open Group Boston 2014, in the session titled “Putting the D Back in Decision” with Jean-Francois Barsoum of IBM, argues that a focus on collecting a lot of data has the potential to get in the way of making quality decisions. This is, in part, due to the overabundance of data that’s being collected under the assumption that you never know where there’s gold to be mined in your data, so if you don’t have all of it at hand, you may just miss something.

Gordon says that assuming the data will make decisions obvious also ignores the fact that decisions are ultimately made by people—and people usually make decisions based on their own biases. According to Gordon, there are three natural decision-making styles—heart, head and gut—corresponding to different personality types; the greater the amount of data, the less likely it is to balance a person’s natural decision-making style.

Head types, Gordon says, naturally make decisions based on quantitative evidence. But head types frequently put off making a decision until more data can be collected, wanting more and more data so that they can make the best decision based on the facts. She cites former President Bill Clinton as a classic example of this type: during his presidency, he was famous for putting off decision-making in favor of gathering more and more data before making the decision, she says. Relying solely on quantitative data also means you may miss other important factors in making optimal decisions, factors based on either heart (qualitative evidence) or instinct. Conversely, a gut type presented with too much data will likely just end up ignoring the data and acting on instinct, much like former President George W. Bush, Gordon says.

Gordon believes part of the reason that data and decisions are more disconnected than one might think is because IT and Marketing departments have become overly enamored with what technology can offer. These data providers need to step back and first examine the decision objectives as well as the governance behind those decisions. Without understanding the organization’s decision-making processes and the dynamics of the decision-makers, it can be difficult to make optimal and effective strategic recommendations, she says, because you don’t have the full picture of what the stakeholders will or will not accept in terms of a recommendation, data or no data.

Ideally, Gordon says, you want to get to a point where you can get to the best decision outcome possible by first figuring out the personal and organizational dynamics driving decisions within the organization, shifting the focus from the data to the decision for which the data is an input.

“…what you’re trying to do is get the optimal outcome, so your focus needs to be on the outcome, so when you’re collecting the data and assessing the data, you also need to be thinking about ‘how am I going to present this data in a way that it is going to be impactful in improving the decision outcomes?’ And that’s where the governance comes into play,” she said.

Governance is of particular importance now, Gordon says, because decisions are increasingly being made by individual departments, such as when departments buy their own cloud-enabled services, such as sales force automation. In that case, an organization needs to have a roadmap in place with compensation to incent decision-makers to adhere to that roadmap and decision criteria for buying decisions, she said.

Gordon recommends that companies put in place three to five top criteria for each decision that needs to be made, so that they can ensure the decision objectives are met. This distillation of the metrics gives decision-makers a more comprehensible picture of their data so that their decisions don’t become either too subjective or disconnected from the data. Lower-level metrics can be used underneath each of those top-level criteria to facilitate a more nuanced valuation. For example, if an organization that needs to find good partner candidates scores and ranks potential partners (preferably in tiers) using decision criteria based on the characteristics of the most attractive partner, rather than just assuming that companies with the best reputation or biggest brands will be the best, it will expeditiously identify the optimal partner candidates.
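As a concrete illustration of that tiered, criteria-based approach, here is a minimal Python sketch. The criteria, weights, tier cut-offs and candidate ratings are hypothetical placeholders; a real exercise would derive them from the stated decision objectives.

# Hypothetical top-level criteria (3-5 of them), weighted to sum to 1.0.
CRITERIA = {
    "strategic_fit": 0.40,
    "delivery_capability": 0.35,
    "cultural_alignment": 0.25,
}

def score(ratings: dict) -> float:
    """Weighted score from per-criterion ratings on a 1-5 scale."""
    return sum(ratings[c] * w for c, w in CRITERIA.items())

def tier(s: float) -> str:
    """Rank candidates in tiers rather than by raw score alone."""
    return "A" if s >= 4.0 else ("B" if s >= 3.0 else "C")

partners = {
    "Acme Corp": {"strategic_fit": 5, "delivery_capability": 4, "cultural_alignment": 3},
    "Globex":    {"strategic_fit": 3, "delivery_capability": 5, "cultural_alignment": 4},
}

for name, ratings in sorted(partners.items(), key=lambda kv: -score(kv[1])):
    s = score(ratings)
    print(f"{name}: score={s:.2f}, tier {tier(s)}")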

One of the reasons that companies have gotten so concerned with collecting and storing data rather than just making better decisions, Gordon believes, is that business decisions have become inherently more risky. The required size of investment is increasing in tandem with an increase in the time to return; time to return is a key determinant of risk. Data helps people feel like they are making competent decisions but in reality does little to reduce risk.

“If you’ve got lots of data, then the thinking is, ‘well, I did the best that I could because I got all of this data.’ People are worried that they might miss something,” she said. “But that’s where I’m trying to come around and say, ‘yeah, but going and collecting more data, if you’ve got somebody like President Clinton, you’re just feeding into their tendency to put off making decisions. If you’ve got somebody like President Bush and you’re feeding into their tendency to ignore it, then there may be some really good information, good recommendations they’re ignoring.’”

Gordon also says that having all the data possible to work with isn’t usually necessary—generally a representative sample will do. For example, she says the U.S. Census Bureau takes the approach of trying to count every citizen; consequently, certain populations are chronically undercounted and inaccuracies pass undetected. The Canadian census, on the other hand, uses representative samples and thus tends to be much more accurate—and much less expensive to conduct. Organizations should also think about how they can find representative or “proxy” data in cases where collecting data that directly addresses a top-level decision criterion isn’t really practical.

To begin shifting the focus from collecting data inputs to improving decision outcomes, Gordon recommends clearly stating the decision objectives for each major decision and then identifying and defining the 3-5 criteria that are most important for achieving the decision objectives. She also recommends ensuring that there is sufficient governance and a process in place for making decisions including mechanisms for measuring the performance of the decision-making process and the outcomes resulting from the execution of that process. In addition, companies need to consider whether their decisions are made in a centralized or decentralized manner and then adapt decision governance accordingly.

One way that Enterprise Architects can help to encourage better decision-making within the organizations in which they work is to help in developing that governance rather than just providing data or data architectures, Gordon says. They should help stakeholders identify and define the important decision criteria, determine when full population rather than representative sampling is justified, recognize better methods for data analysis, and form decision recommendations based on that analysis. By gauging the appropriate blend of quantitative and qualitative data for a particular decision maker, an Architect can moderate gut types’ reliance on instinct and stimulate head and heart types’ intuition – thereby producing an optimally balanced decision. Architects should help lead and facilitate execution of the decision process, as well as help determine how data is presented within organizations in order to support the recommendations with the highest potential for meeting the decision objectives.

Join the conversation – #ogchat

Penelope Gordon recently led the expansion of the channel and service packaging strategies for Verizon’s cloud network products. Previously she was an IBM Strategist and Product Manager bringing emerging technologies such as predictive analytics to market. She helped to develop one of the world’s first public clouds.

 

 


Filed under architecture, Conference, Data management, Enterprise Architecture, Governance, Professional Development, Uncategorized

Case Study – ArchiMate®, An Open Group Standard: Public Research Centre Henri Tudor and Centre Hospitalier de Luxembourg

By The Open Group

The Public Research Centre Henri Tudor is an institute of applied research aimed at reinforcing the innovation capacity of organizations and companies and providing support for national policies and international recognition of Luxembourg’s scientific community. Its activities include applied and experimental research; doctoral research; the development of tools, methods, labels, certifications and standards; technological assistance; consulting and watch services; and knowledge and competency transfer. Its main technological domains are advanced materials, environmental technologies, Healthcare, and information and communication technologies, as well as business organization and management. The Centre applies its competencies across a number of industries, including Healthcare, industrial manufacturing, mobile, transportation and financial services, among others.

In 2012, the Centre Hospitalier de Luxembourg allowed Tudor to experiment with an access rights management system modeled using ArchiMate®, an Open Group standard. This model was tested by CRP Tudor to confirm the approach used by the hospital’s management to grant employees, nurses and doctors permission to access patient records.

Background

The Centre Hospitalier de Luxembourg is a public hospital that focuses on severe pathologies, medical and surgical emergencies and palliative care. The hospital also has an academic research arm. It employs a staff of approximately 2,000, including physicians, medical specialists, nurses, specialized employees and administrative staff. On average, the hospital performs more than 450,000 outpatient services, 30,000 inpatient services and more than 60,000 adult and pediatric emergency services per year.

Unlike many hospitals throughout the world, the Centre Hospitalier de Luxembourg is open and accessible 24 hours a day, seven days a week. Staff need access to patient records at any time, day or night, on weekdays and weekends alike. In addition, the Grand Duchy of Luxembourg has a system in which, in each of the country’s three regions, weekend medical emergencies are allocated to a single hospital. In other words, every two weeks one hospital within a given region is responsible for all incoming medical emergencies on its assigned weekend, which affects patient volume and activity.

Access rights management

As organizations have become not only increasingly global but also increasingly digital, access rights management has become a critical component of keeping institutional information secure so that it does not fall into the wrong hands. Managing access to internal information is a critical component of every company’s security strategy, but it is particularly important for organizations that deal with sensitive information about consumers, or in the case of the Centre Hospitalier de Luxembourg, patients.

Modeling an access rights management system was important for the hospital for a number of reasons. First, European privacy laws dictate that only the people who require information regarding patient medical files should be allowed access to those files. Although privacy laws may restrict access to patient records, a rights management system must be flexible enough to grant access to the correct individuals when necessary.

In the case of a hospital such as the Centre Hospitalier de Luxembourg, access to information may be critical to the life of the patient. For instance, if a patient is admitted to the emergency room, the emergency room physician will be better able to treat the patient if he or she can access the patient’s records, even without being the patient’s primary care physician. Admitting personnel may also need access to records at the time of admittance. A successful access rights management system must therefore strike a balance between restricting information and providing flexible access as necessary, giving the right access at the right time without placing an administrative burden on doctors or staff.

The project

Prior to the experiment in which the Public Research Centre Henri Tudor tested this access rights management model, the Centre Hospitalier de Luxembourg had not experienced any problems with its information sharing system. However, its access rights were still being managed by a primarily paper-based system. As part of the scope of the project, the hospital was also looking to become compliant with existing privacy laws. Developing an access rights management model was intended to close the gap within the hospital between restricting access to patient information overall and providing new rights, as necessary, that would allow employees to do their work without endangering patient lives. From a technical perspective, the access rights management system needed not only to work in conjunction with existing applications used within the hospital, such as the ERP system, but also to support rights management at the business layer.

Most current access rights management systems grant information access to individuals based on a combination of the functional requirements necessary for employees to do their jobs and governance rights, which provide the protections that keep the organization and its information safe and secure. What many existing approaches fail to take into account is that most access control models and rights engineering methods don’t adequately represent both sides of this equation. As a result, determining the correct level of access for different employees within an organization can be difficult.

Modeling access rights management

Within the Centre Hospitalier de Luxembourg, employee access rights were defined based on individual job responsibilities and job descriptions. To best determine how to grant access rights across a hospital, the Public Research Centre Henri Tudor needed to create a system that could take these responsibilities into account, rather than rely on functional or governance requirements alone.

To create an access rights management model that would work with the hospital’s existing processes and ERP software, the Public Research Centre Henri Tudor first needed to come up with a way to model responsibility requirements instead of just functional or governance requirements. According to Christophe Feltus, Research Engineer at the Public Research Centre, defining a new approach based on actor or employee responsibilities was the first step in creating a new model for the hospital.
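To make the idea concrete, here is a minimal Python sketch of responsibility-driven access: the core notion only, not the ReMMo metamodel itself. All roles, responsibilities and permissions below are illustrative placeholders.

# Permissions are derived from responsibilities rather than attached
# directly to a job title; assigning a responsibility grants its rights.
RESPONSIBILITY_PERMISSIONS = {
    "treat patient":   {"read medical record", "write medical record"},
    "admit patient":   {"read administrative record", "create admission"},
    "schedule visits": {"read administrative record"},
}

def permissions_for(responsibilities: set) -> set:
    """Union of the permissions implied by each assigned responsibility."""
    granted = set()
    for r in responsibilities:
        granted |= RESPONSIBILITY_PERMISSIONS.get(r, set())
    return granted

# An ER physician treating an incoming patient gets record access through
# the "treat patient" responsibility, without a standing per-person grant.
print(permissions_for({"treat patient"}))
print(permissions_for({"admit patient", "schedule visits"}))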

Although existing architecture modeling languages provide views for many different types of stakeholders within organizations—from executives to IT and project managers—no modeling language had previously been used to develop a view dedicated to access rights management, Feltus says. As such, that view needed to be created and modeled anew for this project.

To develop this new view, the Public Research Centre needed to find an architecture modeling language that was flexible enough to accommodate such an extension. After evaluating three separate modeling languages, they chose ArchiMate®, an Open Group Standard and open and independent modeling language, to help them visualize the relationships among the hospital’s various employees in an unambiguous way.

Much like architectural drawings are used in building architecture to describe the various aspects of construction and building use, ArchiMate provides a common language for describing how to construct business processes, organizational structures, information flows, IT systems and technical infrastructures. By providing a common language and visual representation of systems, ArchiMate helps stakeholders within organizations design, assess and communicate how decisions and changes within business domains will affect the organization.

According to Feltus, ArchiMate provided a well-formalized language for the Public Research Centre to portray the architecture needed to model the access rights management system it wanted to propose for the Centre Hospitalier. Because ArchiMate is a flexible and open language, it also provided an extension mechanism that could accommodate the responsibility modeling language (ReMMo) that the engineering team had developed for the hospital.

In addition to providing the tools and extensions necessary for the engineering team to properly model the hospital’s access rights system, the Public Research Centre also chose ArchiMate because it is an open and vendor-neutral modeling language. As a publicly funded institution, it was important that the Public Research Centre avoid vendor-specific tools that would lock it into a potentially costly cycle of constant version upgrades.

“What was very interesting [about ArchiMate] was that it was an open and independent solution. This is very important for us. As a public company, it’s preferable not to use private solutions. This was something very important,” said Feltus.

Feltus notes that using ArchiMate to model the access rights project was also a relatively easy and intuitive process. “It was rather easy,” Feltus said. “The concepts are clear and recommendations are well done, so it was easy to explore the framework.” The most challenging part of the project was selecting which extension mechanism would best portray the design and model they wanted to use.

Results

After developing the access rights model using ArchiMate, the responsibility metamodel was presented to the hospital’s IT staff by the Public Research Centre Henri Tudor. The Public Research Centre team believes that the responsibility model created using ArchiMate allows for better alignment between the hospital’s business processes defined at the business layer with their IT applications being run at the application layer. The team also believes the model could both enhance provisioning of access rights to employees and improve the hospital’s performance. For example, using the proposed responsibility model, the team found that some employees in the reception department had been assigned more permissions than they required in practice. Comparing the research findings with the reality on the ground at the hospital has shown the Public Research Centre team that ArchiMate is an effective tool for modeling and determining both responsibilities and access rights within organizations.

Due to the ease of use and success the Public Research Centre Henri Tudor experienced in using ArchiMate to create the responsibility model and the access rights management system for the hospital, Tudor also intends to continue to use ArchiMate for other public and private research projects as appropriate.

Follow The Open Group @theopengroup, #ogchat and/or let us know your thoughts on the blog here.

 


Filed under ArchiMate®, Healthcare, Standards, Uncategorized

The Open Group Boston 2014 – Day Two Highlights

By Loren K. Baynes, Director, Global Marketing Communications

Enabling Boundaryless Information Flow™ continued in Boston on Tuesday, July 22. Allen Brown, CEO and President of The Open Group, welcomed attendees with an overview of the organization’s second quarter results.

The Open Group membership is at 459 organizations in 39 countries, including 16 new membership agreements in 2Q 2014.

Membership value is highlighted by the collaboration Open Group members experience. For example, over 4,000 individuals attended Open Group events (physically and virtually, whether at member meetings, webinars, podcasts or tweet jams). The Open Group website had more than 1 million page views, and over 105,000 publication items were downloaded by members in 80 countries.

Brown also shared highlights from The Open Group Forums, which featured status updates on many upcoming white papers, snapshots, reference models and standards, as well as individual Forum roadmaps. The Forums are busy developing and reviewing projects such as the next version of TOGAF®, an Open Group standard, an ArchiMate® white paper, The Open Group Healthcare Forum charter and treatise, Standard MILS™ APIs and Open FAIR. Many publications are translated into multiple languages, including Chinese and Portuguese. Also, a new Forum will be announced in the third quarter at The Open Group London 2014, so stay tuned for that launch news!

Our first keynote of the day was Making Health Addictive by Joseph Kvedar, MD, Partners HealthCare, Center for Connected Health.

Dr. Kvedar described how Healthcare delivery is changing, with mobile technology being a big part. Other factors pushing changes are reimbursement paradigms and caregivers being paid to be more efficient and interested in keeping people healthy and out of hospitals. The goal of Healthcare providers is to integrate care into the day-to-day lives of patients. Healthcare also aims for better technologies and architecture.

Mobile is a game-changer in Healthcare because people are “always on and connected”. Mobile technology allows for in-the-moment messaging, the ability to capture health data (GPS, accelerometer, etc.) and the display of information in real time as needed. Bottom line: smartphones are addictive, which makes them excellent tools for communication and engagement.

But there is a need to understand and address the implications of automating Healthcare: security, privacy, accountability, economics.

The plenary continued with Proteus Duxbury, CTO, Connect for Health Colorado, who presented From Build to Run at the Colorado Health Insurance Exchange – Achieving Long-term Sustainability through Better Architecture.

Duxbury stated that the keys to his organization’s success are the leadership and team’s shared vision; a flexible vendor that stays agile amid rapidly changing regulatory requirements; and a COTS solution that required minimal customization and custom development, along with a resilient architecture and strong security. Connect for Health faces many challenges, including budget constraints, regulation and operating in a “fish bowl”. Yet the organization is on track with its three-year ‘build to run’ roadmap, stabilizing its foundation and gaining efficiencies.

During the Q&A with Allen Brown following each presentation, both speakers emphasized the need for standards, architecture and data security.

Allen Brown and Proteus Duxbury

During the afternoon, track sessions consisted of Healthcare, Enterprise Architecture (EA) & Business Value, Service-Oriented Architecture (SOA), Security & Risk Management, Professional Development and ArchiMate Tutorials. Chris Armstrong, President, Armstrong Process Group, Inc. discussed Architecture Value Chain and Capability Model. Laura Heritage, Principal Solution Architect / Enterprise API Platform, SOA Software, presented Protecting your APIs from Threats and Hacks.

The evening culminated with a reception at the historic Old South Meeting House, where the Boston Tea Party began in 1773.


Networking Reception at Old South Meeting House

A special thank you to our sponsors and exhibitors at The Open Group Boston 2014: BiZZdesign, Black Duck, Corso, Good e-Learning, Orbus and AEA.

Join the conversation #ogBOS!

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog and media relations. Loren has over 20 years’ experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.


Filed under Accreditations, Boundaryless Information Flow™, Business Architecture, COTS, Data management, Enterprise Architecture, Enterprise Transformation, Healthcare, Information security, Open FAIR Certification, OTTF, RISK Management, Service Oriented Architecture, Standards, Uncategorized