Q&A with Marshall Van Alstyne, Professor, Boston University School of Management and Research Scientist MIT Center for Digital Business

By The Open Group

The word “platform” has become a nearly ubiquitous term in the tech and business worlds these days. From “Platform as a Service” (PaaS) to IDC’s Third Platform to The Open Group Open Platform 3.0™ Forum, the concept of platforms and building technology frames and applications on top of them has become the next “big thing.”

Although the technology industry tends to conceive of “platforms” as the vehicle that is driving trends such as mobile, social networking, the Cloud and Big Data, Marshall Van Alstyne, Professor at Boston University’s School of Management and a Research Scientist at the MIT Center for Digital Business, believes that the radical shifts that platforms bring are not just technological.

We spoke with Van Alstyne prior to The Open Group Boston 2014, where he presented a keynote, about platforms, how they have shifted traditional business models and how they are impacting industries everywhere.

The title of your session at the Boston conference was “Platform Shift – How New Open Business Models are Changing the Shape of Industry.” How would you define both “platform” and “open business model”?

I think of “platform” as a combination of two things. One, a set of standards or components that folks can take up and use for production of goods and services. The second thing is the rules of play, or the governance model – who has the ability to participate, how do you resolve conflict, and how do you divide up the royalty streams, or who gets what? You can think of it as the two components of the platform—the open standard together with the governance model. The technologists usually get the technology portion of it, and the economists usually get the governance and legal portions of it, but you really need both of them to understand what a ‘platform’ is.

What is the platform allowing then and how is that different from a regular business model?

The platform allows third parties to conduct business using system resources so they can actually meet and exchange goods across the platform. Wonderful examples of that include AirBnB where you can rent rooms or you can post rooms, or eBay, where you can sell goods or exchange goods, or iTunes where you can go find music, videos, apps and games provided by others, or Amazon where third parties are even allowed to set up shop on top of Amazon. They have moved to a business model where they can take control of the books in addition to allowing third parties to sell their own books and music and products and services through the Amazon platform. So by opening it up to allow third parties to participate, you facilitate exchange and grow a market by helping that exchange.

How does this relate to the “third platform” concept that the technology industry is defining?

I think of it slightly differently. The tech industry uses mobile, social, cloud and data to characterize it. In some sense, this view offers those as the attributes that characterize platforms, or the knowledge base that enables platforms. But we would add to that the economic forces that actually shape platforms. What we want to do is give you some of the strategic tools, the incentives, the rules that will actually help you control their trajectory by helping you improve who participates and then measure and improve the value they contribute to the platform. So a full ecosystem view is not just the technology and the data; it also measures the value and how you divide that value. The rules of play really become important.

I think the “third platform” offers marvelous concepts and attributes, but you also need to add the economics to it: Why do you participate? Who gets what portions of the value? And who ultimately owns control?

Who does control the platform then?

A platform has multiple parts. Determining who controls what part is the art and design of the governance model. You have to set up control in the right way to motivate people to participate. But before we get to that, let’s go back and complete the idea of what’s an ‘open platform.’

To define an open platform, consider both the right of access and the right to manipulate platform resources, then consider granting those rights to four different parties. One is the users—can they access one another, can they access data, can they access system resources? Another group is the developers—can they manipulate system resources, can they add new features, can they sell through the platform? The third group is the platform providers. You often think of them as the folks that facilitate access across the platform. To give you an example, iTunes is a single monolithic store, so the provider is simply Apple; Android, in contrast, allows multiple providers, so there’s a Samsung Android store, an LTC Android store, a Google Android store—there’s even an Amazon version that uses a different version of Android. So that platform has multiple providers, each with rights to access users. The fourth group is the party that controls the underlying property rights—who owns the IP. The ability to modify the underlying standard, and to set the rights of access for other parties, is the bottom-most layer.

So to answer the question of what is ‘open,’ you have to consider the rights of access of all four groups—the users, developers, the providers and IP rights holders, or sponsors, underneath.

Popping back up a level, we’re trying to motivate different parties to participate in the ecosystem. So what do you give the users? Usually it’s some kind of value. What do you give developers? Usually it’s some set of SDKs and APIs, but also some level of royalties. It’s fascinating. If you look back historically, Amazon initially tried a publishing-style royalty where it took 70% and gave just 30% back to developers. Amazon found that didn’t fly very well and had to fall back to the app-store or software-style royalty, where the platform takes a lower percentage. I think Apple, for example, takes 30 percent, and Amazon is now close to that. You see royalties ranging anywhere from a few percent—an example is credit cards—all the way up to iStockphoto, where they take roughly 70 percent. That’s an extremely high rate, and one that I don’t recommend. We were just contracting for designs at 99designs, and they take a 20 percent cut. That’s probably more realistic, but lower might perhaps be even better—you can create stronger network effects if that’s the case.
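
To make those take rates concrete, here is a minimal sketch of the arithmetic, assuming a hypothetical $9.99 sale; the rates are the ones quoted above and the labels are purely illustrative.

```python
# Illustrative only: how different platform take rates change what a
# developer keeps per sale. The price and labels are hypothetical.
PRICE = 9.99

take_rates = {
    "publishing-style royalty (early Amazon)": 0.70,
    "app-store-style royalty (e.g., Apple)": 0.30,
    "design-marketplace cut (e.g., 99designs)": 0.20,
    "payment-card-style fee": 0.03,
}

for model, rate in take_rates.items():
    developer_keeps = PRICE * (1 - rate)
    print(f"{model}: platform keeps {rate:.0%}, developer keeps ${developer_keeps:.2f}")
```

The comparison makes the incentive point visible: at a 70 percent take rate the developer keeps about $3.00 of a $9.99 sale, while at 30 percent they keep about $7.00, more than twice as much reason to build on the platform.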

Again, the real question of control is how you motivate third parties to participate and add value. If you allow them to use resources to create value and keep a lot of that value, then they’re more motivated to participate, to invest, to bring their resources to your platform. If you take most of the value they create, they won’t participate and they won’t add value. One of the biggest challenges for open platforms—what you might call the ‘Field of Dreams’ approach—is that most folks open their platform and assume ‘if you build it, they will come,’ but you really need to reward them to do so. Why would they want to come build with you? There are numerous instances of platforms that opened but where no developer chose to add value—the ecosystem was too small. You have to solve the chicken-and-egg problem: if you don’t have users, developers don’t want to build for you, but if you don’t have developer apps, why do users participate? So you’ve got a huge feedback problem. That is where the economics become critical; you must solve the chicken-and-egg problem to build and roll out platforms.

It’s not just a technology question; it’s also an economics and rewards question.

Then who is controlling the platform?

The answer depends on the type of platform. Giving different groups different sets of rights creates different types of platform. Consider the four different parties: users, developers, providers, and sponsors. At one extreme, the Apple Mac platform of the 1980s reserved most rights to Apple itself: the rights to develop software, to produce hardware (the provider layer), and to modify the IP (the sponsor layer). Apple controlled the platform and it remained closed. In contrast, Microsoft relaxed platform control in specific ways. It licensed to multiple providers, enabling Dell, HP, Compaq and others to sell the platform. It gave developers rights of access to SDKs and APIs, enabling them to extend the platform. These control choices gave Microsoft more than six times the number of developers and more than twenty times the market share of Apple at the high point of Microsoft’s dominance of desktop operating systems. Microsoft gave up some control in order to create a more inclusive platform and a much bigger market.

Control is not a single concept. There are many different control rights you can grant to different parties. For example, you often want to give users the ability to control their own data. You often want to give developers intellectual property rights for the apps that they create, and often over the data that their users create. You may want to give them some protections against platform misappropriation, because developers resent it if you take their ideas. If the platform owner sees a really clever app built on top of its platform, what’s the guarantee that the owner doesn’t simply take it or build a competing app? You need to protect your developers in that case. The same thing is true of the platform provider—what guarantees do they give users about the quality of content provided on their ecosystem? For example, the Android ecosystem is much more open than the iPhone ecosystem, which means you have more folks offering stores. Simultaneously, that means there are more viruses and more malware in Android, so what rights and guarantees do you require of the platform providers to protect the users so that they want to participate? And then at the bottom, what rights do other participants have to control the direction of the platform’s growth? In the Visa model, for example, there are multiple member banks that help to influence the general direction of that credit card standard. Usually the most successful platforms have a single IP rights holder, but there are several examples of platforms that have multiple IP rights holders.

So, in the end control defines the platform as much as the platform defines control.

What is the “secret” of the Internet-driven marketplace? Is that indeed the platform?

The secret is that, in effect, the goal of the platform is to increase transaction volume and value. If you can do that—and we can give you techniques for doing it—then you can create massive scale. Increasing the transaction value and transaction volume across your platform means that the owner of the platform doesn’t have to be the sole source of content and new ideas provided on the platform. If the platform owner is the only source of value, then the owner is also the bottleneck. The goal is to consummate matches between producers and consumers of value. You want to help users find the content, find the resources, find the other people that they want to meet across your platform. In Apple’s case, you’re helping them find the music, the video, the games, and the apps that they want. In AirBnB’s case, you’re helping them find the rooms that they want; with Uber, you’re helping them find a driver. On Amazon, the book recommendations help you find the content that you want. In all the truly successful platforms, the owner of the platform is not providing all of that value. They’re enabling third parties to add that value, and that’s one reason why The Open Group’s ideas are so important—you need open systems for this to happen.

What’s wrong with current linear business models? Why is a network-driven approach superior?

The fundamental reason why the linear business model no longer works is that it does not manage network effects. Network effects allow you to build platforms where users attract other users and you get feedback that grows your system. As more users join your platform, more developers join your platform, which attracts more users, which attracts more developers. You can see it on any of the major platforms. This is also true of Google. As advertisers use Google Search, the algorithms get better, people find the content that they want, so more advertisers use it. As more drivers join Uber, passengers are happier, which attracts more drivers. The more merchants accept Visa, the more consumers are willing to carry it, which attracts more merchants, which attracts more consumers. You get positive feedback.
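
The feedback loop described above is easy to see in a toy simulation. This is only a sketch with made-up attraction rates, not a model of any real platform.

```python
# Toy two-sided network-effects simulation: users attract developers,
# developers attract users. All rates are assumptions for illustration.
users, developers = 1_000, 10

for month in range(1, 7):
    new_developers = int(0.002 * users)  # assumed: 2 new developers per 1,000 users
    new_users = 20 * developers          # assumed: each developer's apps draw 20 users
    developers += new_developers
    users += new_users
    print(f"month {month}: {users:,} users, {developers:,} developers")
```

Even with these small coupling rates, each side’s growth compounds the other’s, which is the positive feedback behind the winner-take-all concentration discussed next.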

The consequence of that is that you tend to get market concentration—you get winner-take-all markets. That’s where platforms dominate. So you have a few large firms within a given category, whether that category is rides or books or hotels or auctions. Further, once network effects change your business model, the linear insights into pricing, inventory management, innovation, and strategy break down.

When you have these multi-sided markets, pricing breaks down because you often price differently to one side than to the other, because one side attracts the other. Inventory management practices break down because you’re selling inventory that you don’t even own. Your R&D strategies break down because now you’re motivating innovation and research outside the boundaries of the firm, as opposed to inside the internal R&D group. And your strategies break down because you’re not just looking for cost leadership or product differentiation; now you’re looking to shape the network effects as you create barriers to entry.

One of the things that I really want to argue strenuously is that in markets where platforms will emerge, platforms beat products every time. So the platform business model will inevitably beat the linear, product-based business model, because you’re harnessing new forces in order to develop a different kind of business model.

Think of it the following way: imagine that value grows as users consume your product. Think of any of the major platforms: as more folks use Google, search gets better; recommendations improve on Amazon; it is easier to find a ride on Uber; so more folks want to be on there. It is easier to scale network effects outside your business than inside your business—there are simply more people outside than inside. The moment that happens, the locus of control, the locus of innovation, moves from inside the firm to outside the firm. So the rules change. Pricing changes, your innovation strategies change, your inventory policies change, your R&D changes. You’re now managing resources outside the firm, rather than inside, in order to capture scale. This is different from the traditional industrial supply economies of scale.

Old systems are giving way to new systems. It’s not that the whole system breaks down; it’s simply that you’re looking to manage network effects and manage new business models. Another way to see this is that previously you were managing capital. In the industrial era, you were managing steel, you were managing large amounts of finance in banking, you were managing auto parts—huge supply economies of scale. In telecommunications, you were managing infrastructure. Now, you’re managing communities, and these are managed outside the firm. For the value that’s been created at Facebook or WhatsApp or Instagram or any of the new acquisitions, it’s not the capital that’s critical, it’s the communities that are critical, and these are built outside the firm.

There is a lot of talk in the industry about the Nexus of Forces as Gartner calls it, or Third Platform (IDC). The Open Group calls it Open Platform 3.0. Your concept goes well beyond technology—how does Open Platform 3.0 enable new business models?

Those are the enablers—they’re, shall we say, necessary, but they’re not sufficient. You really must harness the economic forces in addition to those enablers—mobile, social, Cloud, data. You must manage communities outside the firm; that’s the mobile and the social element of it. But this also involves designing governance and setting incentives. How are you capturing users outside the organization, how are they contributing, how are they being motivated to participate, why are they spreading your products to their peers? The Cloud allows it to scale—so Instagram and WhatsApp and others scale. Data allows you to “consummate the match.” You use that data to help people find what they need, to add value, so all of those things are the enablers. Then you have to harness the economics of the enablers to encourage people to do the right thing. You can see the correct intuition if you simply ask what happens if all you offer is a Cloud service and nothing more. Why will anyone use it? What’s the value of that system? If you open APIs to it, again, if you don’t have a user base, why are developers going to contribute? Developers want to reach users. Users want valuable functionality.

You must manage the motives and the value-add on the platform. New business models come from orchestrating not just the technology but also the third-party sources of value. One of the biggest challenges is to grow these businesses from scratch—you’ve got the cold-start chicken-and-egg problem: if you don’t have a user base, you don’t have network effects.

Do companies need to transform themselves into a “business platform” to succeed in this new marketplace? Are there industries immune to this shift?

There is a continuum of companies that are going to be affected. It starts at one end with companies that are highly information-intense—anything that’s an information-intensive business will be dramatically affected, and anything that’s a community- or fashion-based business will be dramatically affected. Those include companies involved in media and news, songs, music, and video; all of those are going to be the canaries in the coal mine that see this first. Moving farther along will be those industries that require some sort of certification—law, medicine and education—which will also be platformized, so the services industries will become platforms. Farther down are the ones that are heavily capital-intensive, where control of physical capital is paramount—trains, oil rigs, telecommunications infrastructure. Eventually those will be affected by platform business models to the extent that data helps them gain efficiencies or add value, but they will in some sense be the last to be affected. Look for the businesses where the cost side is shrinking in proportion to the delivery of value and where the network effects are rising as a proportional increase in value. Those forces will help you predict which industries will be transformed.

How can Enterprise Architecture be a part of this and how do open standards play a role?

The second part of that question is actually much easier. How do open standards play a role? Open standards make it much easier for third parties to attach and incorporate technology and features such that they can in turn add value. Open standards are essential to that happening. You do need to ask who controls those standards—is it completely open, or is it a proprietary standard, one that is published but not manipulable by a third party?

There are several things that Enterprise Architects need to do. One is to design modular components that are swappable, so that as better systems become available, they can be swapped in. The second is to watch for components of value that should be absorbed into the platform itself; in operating systems, for example, web browsing and streaming have effectively been absorbed into the platform, and architects need to be aware of how that works. A third thing is to talk to the legal team to see where third parties’ property rights can be protected so that those parties invest. One of the biggest mistakes that firms make is to assume that because they own the platform, because they have the rights of control, they can do what they please. If they do that, they risk alienating their ecosystems. So they should talk to their potential developers and incorporate developer concerns. One of my favorite examples is the Intel Architecture Lab, which has done a beautiful job of articulating the voices of developers in its own architectural plans. A fourth thing, an idea borrowed from the way SAP builds Enterprise Architecture, is to articulate an 18-24 month roadmap that says these are the features that are coming, so that partners can anticipate and build on them. It also gives them an idea of which features will be safe to build on, so they won’t lose the value they’ve created.

What can companies do to begin opening their business models and more easily architect that?

They should consider the four groups articulated earlier—the users, the providers, the developers and the sponsors—each of which serves a different role. Firms need to understand what their own role will be so that they can open and architect the other roles within their ecosystem. They’ll also need to choose what levels of exclusivity to give their ecosystem partners in different slices of the business. They should also figure out which components they prefer to offer themselves as unique competencies and where they need to seek third-party assistance, whether in new ideas, new resources, or even new marketplaces. Those factors will help guide businesses toward different kinds of partnerships, and they’ll have to be open to those kinds of partners. In particular, they should think about where they are most likely to be missing ideas or missing opportunities. Those technical and business areas should be opened so that third parties can take advantage of those opportunities and add value.

 

Professor Van Alstyne is one of the leading experts in network business models. He conducts research on information economics, covering such topics as communications markets, the economics of networks, intellectual property, social effects of technology, and productivity effects of information. As co-developer of the concept of “two-sided networks” he has been a major contributor to the theory of network effects, a set of ideas now taught in more than 50 business schools worldwide.

Awards include two patents, National Science Foundation IOC, SGER, SBIR, iCorp and Career Awards, and six best paper awards. Articles or commentary have appeared in Science, Nature, Management Science, Harvard Business Review, Strategic Management Journal, The New York Times, and The Wall Street Journal.


New Health Data Deluges Require Secure Information Flow Enablement Via Standards, Says The Open Group’s New Healthcare Director

By The Open Group

Below is the transcript of The Open Group podcast on how new devices and practices have the potential to expand the information available to Healthcare providers and facilities.

Listen to the podcast here.

Dana Gardner: Hello, and welcome to a special BriefingsDirect Thought Leadership Interview coming to you in conjunction with The Open Group’s upcoming event, Enabling Boundaryless Information Flow™ July 21-22, 2014 in Boston.

Gardner: I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator for the series of discussions from the conference on Boundaryless Information Flow, Open Platform 3.0™, Healthcare, and Security issues.

One area of special interest is the Healthcare arena, and Boston is a hotbed of innovation and adaptation for how technology, Enterprise Architecture, and standards can improve the communication and collaboration among Healthcare ecosystem players.

And so, we’re joined by a new Forum Director at The Open Group to learn how an expected continued deluge of data and information about patients, providers, outcomes, and efficiencies is pushing the Healthcare industry to rapid change.

With that, please join me now in welcoming our guest. We’re here with Jason Lee, Healthcare and Security Forums Director at The Open Group. Welcome, Jason.

Jason Lee: Thank you so much, Dana. Good to be here.

Gardner: Great to have you. I’m looking forward to the Boston conference and want to remind our listeners and readers that it’s not too late to sign up. You can learn more at http://www.opengroup.org.

Jason, let’s start by talking about the relationship between Boundaryless Information Flow, which is a major theme of the conference, and healthcare. Healthcare perhaps is the killer application for Boundaryless Information Flow.

Lee: Interesting, I haven’t heard it referred to that way, but healthcare is 17 percent of the US economy. It’s upwards of $3 trillion. The costs of healthcare are a problem, not just in the United States, but all over the world, and there are a great number of inefficiencies in the way we practice healthcare.

We don’t necessarily intend to be inefficient, but there are so many places and people involved in healthcare, it’s very difficult to get them to speak the same language. It’s almost as if you’re in a large house with lots of different rooms, and every room you walk into they speak a different language. To get information to flow from one room to the other requires some active efforts and that’s what we’re undertaking here at The Open Group.

Gardner: What is it about the current collaboration approaches that don’t work? Obviously, healthcare has been around for a long time and there have been different players involved. What’s the hurdle? What prevents a nice, seamless, easy flow and collaboration in information that gets better outcomes? What’s the holdup?

Lee: There are many ways to answer that question, because there are many barriers. Perhaps the simplest is the transformation of healthcare from a paper-based industry to a digital industry. Everyone has walked into an office, looked behind the people at the front desk, and seen file upon file and row upon row of folders, information that’s kept in a written format.

When there’s been movement toward digitizing that information, not everyone has used the same system. It’s almost like trains running on a different gauge track. Obviously if the track going east to west is a different gauge than going north to south, then trains aren’t going to be able to travel on those same tracks. In the same way, healthcare information does not flow easily from one office to another or from one provider to another.

Gardner: So not only do we have disparate strategies for collecting and communicating health data, but we’re also seeing much larger amounts of data coming from a variety of new and different places. Some of them now even involve sensors inside of patients themselves or devices that people will wear. So is the data deluge, the volume, also an issue here?

Lee: Certainly. I heard recently that an integrated health plan, which has multiple hospitals involved, contains more elements of data than the Library of Congress. As information is collected at multiple points in time, over a relatively short period of time, you really do have a data deluge. Figuring out how to find your way through all the data and look at the most relevant for the patient is a great challenge.

Gardner: I suppose the bad news is that there is this deluge of data, but it’s also good news, because more data means more opportunity for analysis, a better ability to predict and determine best practices, and also provide overall lower costs with better patient care.

So it seems like the stakes are rather high here to get this right, to not just crumble under a volume or an avalanche of data, but to master it, because it’s perhaps the future. The solution is somewhere in there too.

Lee: No question about it. At The Open Group, our focus is on solutions. We, like others, put a great deal of effort into describing the problems, but figuring out how to bring IT technologies to bear on business problems, how to encourage different parts of organizations to speak to one another and across organizations to speak the same language, and how to operate using common standards and language: that’s really what we’re all about.

And it is, in a large sense, part of the process of helping to bring healthcare into the 21st Century. A number of industries are a couple of decades ahead of healthcare in the way they use large datasets — big data, some people refer to it as. I’m talking about companies like big department stores and large online retailers. They really have stepped up to the plate and are using that deluge of data in ways that are very beneficial to them, and healthcare can do the same. We’re just not quite at the same level of evolution.

Gardner: And to your point, the stakes are so much higher. Retail is, of course, a big deal in the economy, but as you pointed out, healthcare is such a much larger segment and portion. So just making modest improvements in communication, collaboration, or data analysis can reap huge rewards.

Lee: Absolutely true. There is the cost side of things, but there is also the quality side. So there are many ways in which healthcare can improve through standardization and coordinated development, using modern technology that cannot just reduce cost, but improve quality at the same time.

Gardner: I’d like to get into a few of the hotter trends, but before we do, it seems that The Open Group has recognized the importance here by devoting the entire second day of its conference in Boston, July 22, to Healthcare.

Maybe you could give us a brief overview of what participants, and even those who come in online and view recorded sessions of the conference at http://new.livestream.com/opengroup should expect? What’s going to go on July 22nd?

Lee: We have a packed day. We’re very excited to have Dr. Joe Kvedar, a physician at Partners HealthCare and Founding Director of the Center for Connected Health, as our first plenary speaker. The title of his presentation is “Making Health Additive.” Dr. Kvedar is a widely respected expert on mobile health, which is currently the Healthcare Forum’s top work priority. As mobile medical devices become ever more available and diversified, they will enable consumers to know more about their own health and wellness. A great deal of potentially useful health data will be generated. How this information can be used, not just by consumers but also by the healthcare establishment that takes care of them as patients, will become a question of increasing importance. It will become an area where standards development and The Open Group can be very helpful.

Our second plenary speaker, Proteus Duxbury, Chief Technology Officer at Connect for Health Colorado, will discuss a major feature of the Affordable Care Act: the health insurance exchanges, which are designed to bring health insurance to tens of millions of people who previously did not have access to it. Mr. Duxbury is going to talk about how Enterprise Architecture, which is really about getting to solutions by helping the IT folks talk to the business folks and vice versa, has helped the State of Colorado develop its Health Insurance Exchange.

After the plenaries, we will break into three tracks, one of which is Healthcare-focused. In this track there will be three presentations, all of which discuss how Enterprise Architecture and the approach to Boundaryless Information Flow can help healthcare and healthcare decision-makers become more effective and efficient.

One presentation will focus on the transformation of care delivery at the Visiting Nurse Service of New York. Another will address stewarding healthcare transformation using Enterprise Architecture, focusing on one of our Platinum members, Oracle, and a company called Intelligent Medical Objects, and how they’re working together in a productive way, bringing IT and healthcare decision-making together.

Then, the final presentation in this track will focus on the development of an Enterprise Architecture-based solution at an insurance company. The payers, or the insurers–the big companies that are responsible for paying bills and collecting premiums–have a very important role in the healthcare system that extends beyond administration of benefits. Yet, payers are not always recognized for their key responsibilities and capabilities in the area of clinical improvements and cost improvements.

With the increase in payer data brought on in large part by the adoption of a new coding system, the ICD-10, which will come online this year, there will be a huge amount of additional data, including clinical data, that becomes available. At The Open Group, we consider payers—health insurance companies (some of which are integrated with providers)—very important stakeholders in the big picture.

In the afternoon, we’re going to switch gears a bit and have a speaker talk about the challenges, the barriers, the “pain points” in introducing new technology into healthcare systems. The focus will return to remote or mobile medical devices and the predictable but challenging barriers to getting newly generated health information to flow to doctors’ offices and into patients’ records, electronic health records, and hospitals’ data-keeping and data-sharing systems.

We’ll have a panel of experts that responds to these pain points, these challenges, and then we’ll draw heavily from the audience, who we believe will be very, very helpful, because they bring a great deal of expertise in guiding us in our work. So we’re very much looking forward to the afternoon as well.

Gardner: It’s really interesting. A couple of these different plenaries and discussions in the afternoon come back to this user-generated data. Jason, we really seem to be on the cusp of a whole new level of information that people will be able to develop from themselves through their lifestyle, new devices that are connected.

We hear from folks like Apple, Samsung, Google, and Microsoft. They’re all pulling together information and making it easier for people to not only monitor their exercise, but their diet, and maybe even start to use sensors to keep track of blood sugar levels, for example.

In fact, a new Flurry Analytics survey showed a 62 percent increase in the use of health and fitness applications over the last six months on popular mobile devices. This compares to a 33 percent increase in other applications in general. So there’s an 87 percent faster uptick in the use of health and fitness applications (62/33 ≈ 1.9).

Tell me a little bit how you see this factoring in. Is this a mixed blessing? Will so much data generated from people in addition to the electronic medical records, for example, be a bad thing? Is this going to be a garbage in, garbage out, or is this something that could potentially be a game-changer in terms of how people react to their own data and then bring more data into the interactions they have with care providers?

Lee: It’s always a challenge to predict what the market is going to do, but I think that’s a remarkable statistic that you cited. My prediction is that the increased volume of person-generated data from mobile health devices is going to be a game-changer. This view also reflects how the Healthcare Forum members (which include members from Capgemini, Philips, IBM, Oracle and HP) view the future.

The commercial demand for mobile medical devices—things that can be worn, embedded, or swallowed, as in pills, as you mentioned—keeps growing. The software and the applications that will be developed to be used with these devices are going to grow by leaps and bounds. As you say, there are big players getting involved. Already some of the pedometer-type devices that measure the number of steps taken in a day have captured the interest of many, many people. Even David Sedaris, serious guy that he is, was writing about it recently in ‘The New Yorker’.

What we will find is that many of the health indicators that we used to have to go to the doctor or nurse or lab to get information on will become available to us through these remote devices.

There will be a question, of course, as to the reliability and validity of the information—to your point about garbage in, garbage out—but I think standards development will help here. This, again, is where The Open Group comes in. We might also see the FDA exercising its role in ensuring safety here, as well as other organizations, in determining which devices are reliable.

The Open Group is working in the area of mobile data and information systems that are developed around them, and their ability to (a) talk to one another and (b) talk to the data devices/infrastructure used in doctors’ offices and in hospitals. This is called interoperability and it’s certainly lacking in the country.

There are already problems around interoperability and connectivity of information in the healthcare establishment as it is now. When patients and consumers start collecting their own data, and the patient is put at the center of the nexus of healthcare, then the question becomes how does that information that patients collect get back to the doctor/clinician in ways in which the data can be trusted and where the data are helpful?

After all, if a patient is wearing a medical device, there is the opportunity to collect data, about blood sugar level let’s say, throughout the day. And this is really taking healthcare outside of the four walls of the clinic and bringing information to bear that can be very, very useful to clinicians and beneficial to patients.
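
To illustrate the kind of common representation that interoperability requires, here is a sketch of what a device-generated blood-sugar reading might look like in a shared, machine-readable form. The field names are invented for this example and are not drawn from any actual healthcare standard.

```python
# Hypothetical example of a device-generated reading in a shared format.
# All field names are illustrative, not from any real standard.
import json
from datetime import datetime, timezone

reading = {
    "patient_id": "example-patient-001",
    "device": "wearable-glucose-monitor",
    "metric": "blood_glucose",
    "value": 5.4,
    "unit": "mmol/L",
    "recorded_at": datetime(2014, 7, 22, 9, 30, tzinfo=timezone.utc).isoformat(),
}

print(json.dumps(reading, indent=2))
```

If every device and every clinic system agreed on a representation like this, a reading taken outside the clinic could flow to the clinician without bespoke integration work for each vendor.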

In short, the rapid market dynamic in mobile medical devices and in the software and hardware that facilitates interoperability begs for standards-based solutions that reduce costs and improve quality, and all of which puts the patient at the center. This is The Open Group’s Healthcare Forum’s sweet spot.

Gardner: It seems to me a real potential game-changer as well, and that something like Boundaryless Information Flow and standards will play an essential role. Because one of the big question marks with many of the ailments in a modern society has to do with lifestyle and behavior.

So often, the providers of the care only really have the patient’s responses to questions, but imagine having a trove of data at their disposal, a 360-degree view of the patient to then further the cause of understanding what’s really going on, on a day-to-day basis.

But then, it’s also having a two-way street, being able to deliver perhaps in an automated fashion reinforcements and incentives, information back to the patient in real-time about behavior and lifestyles. So it strikes me as something quite promising, and I look forward to hearing more about it at the Boston conference.

Any other thoughts on this issue about patient flow of data, not just among and between providers and payers, for example, or providers in an ecosystem of care, but with the patient as the center of it all, as you said?

Lee: As more mobile medical devices come to the market, we’ll find that consumers own multiple types of devices, at least some of which collect multiple types of data. So even for the patient at the center of their own healthcare information collection, there can be barriers to having one device talk to another. If a patient wants to keep their own personal health record, there may be difficulties in bringing all that information into one place.

So the interoperability issue, the need for standards, guidelines, and voluntary consensus among stakeholders about how information is represented becomes an issue, not just between patients and their providers, but for individual consumers as well.

Gardner: And also the cloud providers. There will be a variety of large organizations with cloud-modeled services, and they are going to need to be, in some fashion, brought together, so that a complete 360-degree view of the patient is available when needed. It’s going to be an interesting time.

Of course, we’ve also looked at many other industries and tried to have a cloud synergy, a cloud-of-clouds approach to data and also the transaction. So it’s interesting how what’s going on in multiple industries is common, but it strikes me that, again, the scale and the impact of the healthcare industry makes it a leader now, and perhaps a driver for some of these long overdue structured and standardized activities.

Lee: It could become a leader. There is no question about it. Moreover, there is a lot Healthcare can learn from other companies, from mistakes that other companies have made, from lessons they have learned, from best practices they have developed (both on the content and process side). And there are issues, around security in particular, where Healthcare will be at the leading edge in trying to figure out how much is enough, how much is too much, and what kinds of solutions work.

There’s a great future ahead here. It’s not going to be without bumps in the road, but organizations like The Open Group are designed and experienced to help multiple stakeholders come together and have the conversations that they need to have in order to push forward and solve some of these problems.

Gardner: Well, great. I’m sure there will be a lot more about how to actually implement some of those activities at the conference. Again, that’s going to be in Boston, beginning on July 21, 2014.

We’ll have to leave it there. We’re about out of time. We’ve been talking with a new Director at The Open Group to learn how an expected continued deluge of data and information about patients and providers, outcomes and efficiencies is pushing the Healthcare industry to rapid change. And, as we’ve heard, that might very well spill over into other industries as well.

So we’ve seen how innovation and adaptation around technology, Enterprise Architecture and standards can improve the communication and collaboration among Healthcare ecosystem players.

It’s not too late to register for The Open Group Boston 2014 (http://www.opengroup.org/boston2014) and join the conversation via Twitter #ogchat #ogBOS, where you will be able to learn more about Boundaryless Information Flow, Open Platform 3.0, Healthcare and other relevant topics.

So a big thank you to our guest. We’ve been joined by Jason Lee, Healthcare and Security Forums Director at The Open Group. Thanks so much, Jason.

Lee: Thank you very much.

The Open Group Open Platform 3.0™ Starts to Take Shape

By Dr. Chris Harding, Director for Interoperability, The Open Group

The Open Group published a White Paper on Open Platform 3.0™ at the start of its conference in Amsterdam in May 2014. This article, based on a presentation given at the conference, explains how the definition of the platform is beginning to emerge.

Introduction

Amsterdam is a beautiful place. Walking along the canals is like moving through a set of picture postcards. But as you look up at the houses beside the canals, and you see the cargo hoists that many of them have, you are reminded that the purpose of the arrangement was not to give pleasure to tourists. Amsterdam is a great trading city, and the canals were built as a very efficient way of moving goods around.

This is also a reminder that the primary purpose of architecture is not to look beautiful, but to deliver business value, though surprisingly, the two often seem to go together quite well.

When those canals were first thought of, it might not have been obvious that this was the right thing to do for Amsterdam. Certainly the right layout for the canal network would not be obvious. The beginning of a project is always a little uncertain, and seeing the idea begin to take shape is exciting. That is where we are with Open Platform 3.0 right now.

We started with the intention to define a platform to enable enterprises to get value from new technologies including cloud computing, social computing, mobile computing, big data, the Internet of Things, and perhaps others. We developed an Open Group business scenario to capture the business requirements. We developed a set of business use-cases to show how people are using and wanting to use those technologies. And that leads to the next step, which is to define the platform. All these new technologies and their applications sound wonderful, but what actually is Open Platform 3.0?

The Third Platform

Looking historically, the first platform was the computer operating system. A vendor-independent operating system interface was defined by the UNIX® standard. The X/Open Company and the Open Software Foundation (OSF), which later combined to form The Open Group, were created because companies everywhere were complaining that they were locked into proprietary operating systems. They wanted applications portability. X/Open specified the UNIX® operating system as a common application environment, and the value that it delivered was to prevent vendor lock-in.

The second platform is the World Wide Web. It is a common services environment, for services used by people browsing web pages or for web services used by programs. The value delivered is universal deployment and access. Any person or company anywhere can create a services-based solution and deploy it on the web, and every person or company throughout the world can access that solution.

Open Platform 3.0 is developing as a common architecture environment. This does not mean it is a replacement for TOGAF®. TOGAF is about how you do architecture and will continue to be used with Open Platform 3.0. Open Platform 3.0 is about what kind of architecture you will create. It will be a common environment in which enterprises can do architecture. The big business benefit that it will deliver is integrated solutions.


Figure 1: The Third Platform

With the second platform, you can develop solutions. Anyone can develop a solution based on services accessible over the World Wide Web. But independently-developed web service solutions will very rarely work together “out of the box”.

There is an increasing need for such solutions to work together. We see this need when looking at The Open Platform 3.0 technologies. People want to use these technologies together. There are solutions that use them, but they have been developed independently of each other and have to be integrated. That is why Open Platform 3.0 has to deliver a way of integrating solutions that have been developed independently.

Common Architecture Environment

The Open Group has recently published its first thoughts on Open Platform 3.0 in the Open Platform 3.0 White Paper. This lists a number of things that will eventually be in the Open Platform 3.0 standard. Many of these are common architecture artifacts that can be used in solution development. They will form a common architecture environment. They are:

  • Statement of need, objectives, and principles – this is not part of that environment of course; it says why we are creating it.
  • Definitions of key terms – clearly you must share an understanding of the key terms if you are going to develop common solutions or integrable solutions.
  • Stakeholders and their concerns – an understanding of these is an important aspect of an architecture development, and something that we need in the standard.
  • Capabilities map – this shows what the products and services that are in the platform do.
  • Basic models – these show how the platform components work with each other and with other products and services.
  • Explanation of how the models can be combined to realize solutions – this is an important point and one that the white paper does not yet start to address.
  • Standards and guidelines that govern how the products and services interoperate – these are not standards that The Open Group is likely to produce; they will almost certainly be produced by other bodies, but we need to identify the appropriate ones and, probably in some cases, coordinate with those bodies to see that they are developed.

The Open Platform 3.0 White Paper contains an initial statement of needs, objectives and principles, definitions of some key terms, a first-pass list of stakeholders and their concerns, and half a dozen basic models. The basic models are in an analysis of the business use-cases for Open Platform 3.0 that were developed earlier.

These are just starting points. The white paper is incomplete: each of the sections is incomplete in itself, and of course the white paper does not contain all the sections that will be in the standard. And it is all subject to change.

An Example Basic Model

The figure shows a basic model that could be part of the Open Platform 3.0 common architecture environment.


Figure 2: Mobile Connected Device Model

This is the Mobile Connected Device Model: one of the basic models that we identified in the snapshot. It comes up quite often in the use-cases.

The stack on the left is a mobile device. It has a user, it has apps, it has a platform which would probably be Android or iOS, it has infrastructure that supports the platform, and it is connected to the World Wide Web, because that’s part of the definition of mobile computing.

On the right you see, and this is a frequently encountered pattern, that you don’t just use your mobile device for running apps. Maybe you connect it to a printer, maybe you connect it to your headphones, maybe you connect it to somebody’s payment terminal, you can connect it to many things. You might do this through a Universal Serial Bus (USB). You might do it through Bluetooth. You might do it by Near Field Communications (NFC). You might use other kinds of local connection.

The device you connect to may be operated by yourself (e.g. if it is headphones), or by another organization (e.g. if it is a payment terminal). In the latter case you typically have a business relationship with the operator of the connected device.

That is an example of the basic models that came up in the analysis of the use-cases. It is captured in the White Paper. It is fundamental to mobile computing and is also relevant to the Internet of Things.
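
As a rough illustration, and not something taken from the White Paper, the Mobile Connected Device Model can be sketched as a data structure. All class and field names here are invented for the example.

```python
# Illustrative sketch of the Mobile Connected Device Model described above.
# Names are invented for this example, not taken from the White Paper.
from dataclasses import dataclass, field
from enum import Enum

class LocalLink(Enum):
    USB = "USB"
    BLUETOOTH = "Bluetooth"
    NFC = "NFC"

@dataclass
class ConnectedDevice:
    name: str          # e.g. "printer", "headphones", "payment terminal"
    operator: str      # yourself, or another organization you have a relationship with
    link: LocalLink

@dataclass
class MobileDevice:
    user: str
    apps: list[str]
    platform: str                    # e.g. "Android" or "iOS"
    infrastructure: str = "handset"  # supports the platform
    web_connected: bool = True       # part of the definition of mobile computing
    peripherals: list[ConnectedDevice] = field(default_factory=list)

# Example: a phone paying at a merchant's terminal over NFC.
phone = MobileDevice(
    user="Alice",
    apps=["wallet"],
    platform="Android",
    peripherals=[ConnectedDevice("payment terminal", "merchant", LocalLink.NFC)],
)
print(phone)
```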

Access to Technologies

This figure captures our understanding of the need to obtain information from the new technologies (social media, mobile devices, sensors and so on); the need to process that information, maybe on the cloud, and to manage it; and, ultimately, the need to deliver it in a form where there is analysis and reasoning that enables enterprises to take business decisions.


Figure 3: Access to Technologies

The delivery of information to improve the quality of decisions is the source of real business value.

User-Driven IT

The next figure captures a requirement that we picked up in the development of the business scenario.


Figure 4: User-Driven IT

Traditionally, you would have had the business use in the business departments of an enterprise, and pretty much everything else in the IT department. But we are seeing two big changes. One is that the business users are getting smarter, more able to use technology. The other is they want to use technology themselves, or to have business technologists closely working with them, rather than accessing it indirectly through the IT department.

The systems provisioning and management is now often done by cloud service providers, and the programming and integration and helpdesk by cloud brokers, or by an IT department that plays a broker role, rather than working in the traditional way.

The business still needs to retain responsibility for the overall architecture and for compliance. If you do something against your company’s principles, your customers will hold you responsible. It is no defense to say, “Our broker did it that way.” Similarly, if you break the law, your broker does not go to jail, you do. So those things will continue to be more associated with the business departments, even as the rest is devolved.

In short, businesses have a new way of using IT that Open Platform 3.0 must and will accommodate.

Integration of Independently-Developed Solutions

The next figure illustrates how the integration of independently developed solutions can be achieved.


Figure 5: Architecture Integration

It shows two solutions, which come from the analysis of different business use-cases. They share a common model, which makes it much easier to integrate them. That is why the Open Platform 3.0 standard will define common models for access to the new technologies.

The Open Platform 3.0 standard will have other common artifacts: architectural principles, stakeholder definitions and descriptions, and so on. Independently-developed architectures that use them can be integrated more easily.

Enterprises develop their architectures independently, but engage with other enterprises in business ecosystems that require shared solutions. Increasingly, business relationships are dynamic, and there is no time to develop an agreed ecosystem architecture from scratch. Use of the same architecture platform, with a common architecture environment including elements such as principles, stakeholder concerns, and basic models, enables the enterprise architectures to be integrated, and shared solutions to be developed quickly.

Completing the Definition

How will we complete the definition of Open Platform 3.0?

The Open Platform 3.0 Forum recently published a set of 22 business use-cases – the Nexus of Forces in Action. These use-cases show the application of Social, Mobile and Cloud Computing, Big Data, and the Internet of Things in a wide variety of business areas.


Figure 6: Business Use-Cases

The figure comes from that White Paper and shows some of those areas: multimedia, social networks, building energy management, smart appliances, financial services, medical research, and so on.

Use-Case Analysis

We have started to analyze those use-cases. This is an ArchiMate model showing how our first business use-case, The Mobile Smart Store, could be realized.


Figure 7: Use-Case Analysis

As you look at it you see common models. Outlined on the left is a basic model that is pretty much the same as the original TOGAF Technical Reference Model. The main difference is the addition of a business layer (which shows how enterprise architecture has moved in the business direction since the TRM was defined).

But you also see that the same model appears in the use-case in a different place, as outlined on the right. It appears many times throughout the business use-cases.

Finally, you can see that the Mobile Connected Device Model has appeared in this use-case (outlined in the center). It appears in other use-cases too.

As we analyze the use-cases, we find common models, as well as common principles, common stakeholders, and other artifacts.

The Development Cycle

We have a development cycle: understanding the value of the platform by considering use-cases, analyzing those use-cases to derive common features, and documenting the common features in a specification.


Figure 8: The Development Cycle

The Open Platform 3.0 White Paper represents the very first pass through that cycle. Further passes will result in further White Papers, a snapshot, and ultimately the Open Platform 3.0 standard, and no doubt more than one version of that standard.

Conclusions

Open Platform 3.0 provides a common architecture environment. This enables enterprises to derive business value from social computing, mobile computing, big data, the Internet-of-Things, and potentially other new technologies.

Cognitive computing, for example, has been suggested as another technology that Open Platform 3.0 might in due course accommodate. What would that lead to? There would be additional use-cases, which would lead to further analysis, which would no doubt identify some basic models for cognitive computing, which would be added to the platform.

Open Platform 3.0 enables enterprise IT to be user-driven. There is a revolution in the way that businesses use IT. Users are becoming smarter and more able to use technology, and want to do so directly, rather than through a separate IT department. Business departments are taking on business technologists who understand how to use technology for business purposes. Some companies are closing their IT departments and using cloud brokers instead. In other companies, the IT department is taking on a broker role, sourcing technology that business people use directly. Open Platform 3.0 will be part of that revolution.

Open Platform 3.0 will deliver the ability to integrate solutions that have been independently developed. Businesses typically exist within one or more business ecosystems. Those ecosystems are dynamic: partners join, partners leave, and businesses cannot standardize the whole architecture across the ecosystem; it would be nice to do so but, by the time it was done, the business opportunity would be gone. Integration of independently developed architectures is crucial to the world of business ecosystems and delivering value within them.

Call for Input

The platform will deliver a common architecture environment, user-driven enterprise IT, and the ability to integrate solutions that have been independently developed. The Open Platform 3.0 Forum is defining it through an iterative process of understanding the content, analyzing the use-cases, and documenting the common features. We welcome input and comments from other individuals within and outside The Open Group and from other industry bodies.

If you have comments on the way Open Platform 3.0 is developing or input on the way it should develop, please tell us! You can do so by sending mail to platform3-input@opengroup.org or share your comments on our blog.

References

The Open Platform 3.0 White Paper: https://www2.opengroup.org/ogsys/catalog/W147

The Nexus of Forces in Action: https://www2.opengroup.org/ogsys/catalog/W145

TOGAF®: http://www.opengroup.org/togaf/

Dr. Chris Harding is Director for Interoperability at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0™ Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.


The Digital Ecosystem Paradox – Learning to Move to Better Digital Design Outcomes

By Mark Skilton, Professor of Practice, Information Systems Management, Warwick Business School

Do digital technologies raise quality and improve efficiency, but at the same time drive higher costs of service, as more advanced solutions and capabilities become available that demand higher entry investment and maintenance costs?

Many new digital technologies introduce a step change in performance that would have been cost-prohibitive in previous technology generations. But in some industries the technology cost per outcome has been steadily rising.

In the healthcare market, the cost per treatment of healthcare technology was highlighted in an MIT Technology Review article (1). New drugs for treating depression, left-ventricular assist devices and implantable defibrillators may be raising the overall cost of healthcare; yet how do we value this if patient quality of life is improving and lives are being extended, while lower-cost drugs and vaccines may be enabling better overall patient outcomes?

In the smart city a similar story is unfolding, as governments and organizations seek to use digitization to drive improvements in jobs and productivity, better lifestyles, and environmental sustainability. While opportunities exist to reduce energy bills and to improve transport and office spaces, with savings of 40% to 60% in consumption and efficiency, the complexity and cost of connecting different residential, corporate office, transport and other living spaces require digital initiatives that are coordinated and managed (see the U-city experience in South Korea (2)).

These digital paradoxes represent the digital ecosystem challenge: to maximise what these new digital technologies can do to augment objects, services, places and spaces, while taking account of the size of the addressable market that these solutions can serve.

What we see is that technology can drive both the physical and digital economies, lowering the price per function of computer storage, compute, access and application technology while creating new value; conversely, efforts to drive new value are meeting with different degrees of success across industries.

Creating value in the digital economy

The digital economy is at a tipping point. A growing 30% of business is shifting online to search for and engage with consumers, markets and transactions, taking account of retail, mobile and supply-channel impacts (3); 80% of transport, real estate and hotel activity is processed through websites (4); and over 70% of companies and consumers are experiencing cyber-privacy challenges (5), (6). Meanwhile, digital media – social networks, mobile devices, sensors, and the explosion of big data and cloud computing networks – are interconnecting potentially everything everywhere, amounting to a new digital “ecosystem”.

Disruptive business models across industries and new consumer innovation are increasingly built around new digital technologies such as social media, mobility, big data, cloud computing, and the emerging Internet of Things of sensors, networks and machine intelligence (MISQ Digital Strategy Special Issue (7)).

These trends have significantly enhanced the relevance of IT and its impact on business and market value at local, regional and global scale.

With IT budgets increasingly shifting from traditional IT towards the marketing functions and business users of these digital services, there is a growing need for these technologies to work together in new, connected ways.

Driving better digital design outcomes

New digital technologies are combining in new ways to drive new value for individuals, enterprises, communities and societies. The key is in understanding the value that each of these technologies can bring individually, and the mechanisms for creating additive value when they are used appropriately and cost-effectively to drive brand, manage cyber risk, and build consumer engagement and economic growth.

Value-in-use, value in contextualization

Each digital technology has the potential to enable better contextualization of the consumer experience and the value added by providers. Each industry market has emerging combinations of technologies that can be developed to enable focused value. Examples include:

  • Social media networks – creating enhanced co-presence
  • Big data – providing unique profiling, and targeting advice and preferences in context
  • Mobility – creating location-context services and awareness
  • Cloud – enabling access to resources and services
  • Sensors – creating real-time feedback and responsiveness
  • Machine intelligence – enabling insight and higher decision quality

Together, these digital technologies can build generative effects that, when applied in context, enable higher-value outcomes in digital workspaces.

Value in Contextualization

The value is not in whether these technologies, objects, consumers or providers sit inside or outside the enterprise or market; such distinctions are out of context unless they are related to the situation and to consumer needs and wants. The issue is how to apply these technologies, and how to put the user experience and the enterprise and social environment into context, to maximise the outcomes in a specific setting from each role's perspective.

For the medical roles of patient and clinician, the aim of digitization is to use mobile devices and wearable monitoring as efficiently and effectively as possible to raise the quality of patient outcomes and to manage health service costs. Especially in developing countries and remote areas, where infrastructure and investment costs are constraints, the question is how technologies can reach patients and improve the quality of health at an effective price point.

This phenomenon is widespread and growing across all industry sectors: the connected automobile, with in-car entertainment and route-planning services; tele-health, offering remote patient-care monitoring and personalized responses; smart buildings and smart cities, optimizing energy consumption and work environments; and smart retail, where interactive product tags give instant customer mobile information feedback, in-store promotions and automated supply chains. The convergence of these technologies requires a response from all businesses.
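
To make contextualization concrete, here is a toy sketch in which location, profile and sensor signals combine to select an in-context offer. The roles, rules and data are invented for illustration only.

```python
# A toy sketch of "value in contextualization": the same technologies yield
# different value depending on role, place and real-time events.
from dataclasses import dataclass


@dataclass
class Context:
    role: str            # e.g. "shopper", "patient", "driver"
    location: str        # from the mobile device
    preferences: set     # from big-data profiling
    sensor_event: str    # real-time feedback, e.g. a product tag scan


def contextual_offer(ctx: Context) -> str:
    # Hypothetical rules: an in-store tag scan plus a known preference
    # produces a targeted promotion; otherwise fall back to product details.
    if ctx.role == "shopper" and ctx.sensor_event == "scanned_tag":
        if "organic" in ctx.preferences:
            return f"Promotion near {ctx.location}: organic range 10% off"
        return f"Product details for the item at {ctx.location}"
    return "No contextual offer"


print(contextual_offer(Context("shopper", "aisle 5", {"organic"}, "scanned_tag")))
```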

These issues are not going to go away; the statistics from analysts describe a new era of a digital industrial economy (8). What they have in common is the prediction that, over the next twenty to fifty years, demand for new digital technologies and their adoption will double or triple.

Platforming and designing better digital outcomes

Developing effective digital workspaces will be fundamental to the value and use of these technologies. There will be no absolute winners and losers as a result of the digital paradox; what is at stake is how the cost and innovation of these technologies can be leveraged to fit specific outcomes.

Understanding architecting practices will be essential in realizing the digital enterprise. Central to this is developing ways to contextualize digital technologies to enable this value for consumers and customers (Value and Worth – creating new markets in the digital economy (9)). Platforming will be a central IT strategy; we can already see it emerging in early generations of digital marketplaces, mobile app ecosystems, and cross-connecting services in health, automotive, retail and other sectors seeking to create joined-up value.

Digital technologies will enable new forms of digital workspaces to support new outcomes. By driving contextualized offers that meet and stimulate consumer behaviors and demand, a richer and more effective value experience and growth potential is possible.

The challenge ahead

The evolution of digital technologies will enable many new types of architecture and platform. How these are constructed into meaningful solutions is both the opportunity and the task ahead.

The challenge for both business and IT practitioners is to understand the practical uses and advantages, as well as the pitfalls and challenges, of these digital technologies:

  • What can be done using digital technologies to enhance customer experience, raise employee productivity, and sell more products and services?
  • Where should an enterprise position itself in a digital market to create generative, self-reinforcing positive behavior and feedback for better market branding?
  • Who are the beneficiaries of the digital economy, and what is the impact on the roles and jobs of business and IT professionals?
  • Why do enterprises and industry marketplaces need to understand the disruptive effects of these digital technologies, and how can they be leveraged for competitive advantage?
  • How can robust digital solutions be architected and designed to support the enterprise, its supply chain, and its extended consumers, customers and providers?

References

  1. http://www.technologyreview.com/news/518876/the-costly-paradox-of-health-care-technology/
  2. http://www.kyoto-smartcity.com/result_pdf/ksce2014_hwang.pdf
  3. http://www.smartinsights.com/digital-marketing-strategy/online-retail-sales-growth/
  4. http://www.statisticbrain.com/internet-travel-hotel-booking-statistics/
  5. http://www.fastcompany.com/3019097/fast-feed/63-of-americans-70-of-milennials-are-cybercrime-victims
  6. https://www.kpmg.com/Global/en/IssuesAndInsights/ArticlesPublications/Documents/cyber-crime.pdf
  7. http://www.misq.org/contents-37-2
  8. http://www.gartner.com/newsroom/id/2602817
  9. http://www2.warwick.ac.uk/fac/sci/wmg/mediacentre/wmgnews/?newsItem=094d43a23d3fbe05013d835d6d5d05c6

Digital Health

As the cost of healthcare rises, the population ages, and medical advances enable people to live longer with improved quality of life, the health sector – together with governments and private industry – is increasingly using digital technologies to manage the rising costs of healthcare while improving patient survival and quality outcomes.

Digital Health Technologies

mHealth, TeleHealth and Translation-to-Bench Health services are just some of the innovative medical technology practices creating new Connected Health Digital Ecosystems.

These systems connect mobile phones, wearable health-monitoring devices and remote emergency alerts to clinician response, and feed back into big-data research for new generations of healthcare.
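
As a minimal sketch of such a loop – the thresholds, names and routing below are hypothetical – a wearable reading can be checked against an alert rule, routed to a clinician, and retained for research:

```python
# A toy connected-health feedback loop: ingest a wearable reading, raise a
# remote alert when a vital sign crosses a threshold, and keep the reading
# for downstream big-data research. All values here are illustrative.
from dataclasses import dataclass


@dataclass
class VitalSign:
    patient_id: str
    metric: str
    value: float


RESEARCH_STORE: list[VitalSign] = []   # stand-in for a big-data platform


def notify_clinician(reading: VitalSign) -> None:
    print(f"ALERT: {reading.patient_id} {reading.metric}={reading.value}")


def ingest(reading: VitalSign, alert_threshold: float = 120.0) -> None:
    RESEARCH_STORE.append(reading)      # retained for research
    if reading.metric == "heart_rate" and reading.value > alert_threshold:
        notify_clinician(reading)       # remote emergency alert


ingest(VitalSign("patient-42", "heart_rate", 135.0))
```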

The case for digital change

UN Department of Economic and Social Affairs

“World population is projected to reach 8.92 billion in 2050 and 9.22 billion in 2075. Life expectancy is expected to range from 66 to 97 years by 2100.”

OECD Organization for Economic Cooperation and Development

The cost of healthcare is 8 to 17% of GDP in developed countries. But overall healthcare spending is falling while population growth, life expectancy and aging are increasing.

Smart cities

The desire to use digital technologies to improve buildings, reduce pollution and crime, improve transport, create employment, provide better education, and launch new business start-ups is at the core of the outcomes that drive city growth in the “Smart City” digital ecosystem.

Smart city digital technologies

Embedded sensors for building energy management, smart ID badges, and mobile apps for location-based advice and services – supporting social media communities, improved traffic planning and citizen service response – are just some of the ways digital technologies are changing the physical city into the digital metropolis hub of tomorrow.
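
As a simple illustration of the embedded-sensor idea – the set-points and interface below are hypothetical – a building energy controller might adjust its heating target according to occupancy:

```python
# A toy building-energy control step: occupancy and temperature readings
# drive the heating/cooling decision, saving energy when a space is empty.
def control_step(occupied: bool, temperature_c: float,
                 comfort_c: float = 21.0, setback_c: float = 16.0) -> str:
    """Return the set-point decision for one control cycle."""
    target = comfort_c if occupied else setback_c  # relax when unoccupied
    if temperature_c < target - 0.5:
        return f"heat to {target} C"
    if temperature_c > target + 0.5:
        return f"cool to {target} C"
    return "hold"


# An empty office at 20 C is allowed to drift down toward the setback.
print(control_step(occupied=False, temperature_c=20.0))  # -> cool to 16.0 C
```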

The case for digital change

WHO World Health Organization

“By the middle of the 21st century, the urban population will almost double globally. By 2030, 6 out of every 10 people will live in a city, and by 2050, this proportion will increase to 7 out of 10 people.”

UN Inter-governmental Panel on Climate Change IPCC

“In 2010, the building sector accounted for around 32% of final energy use, with energy demand projected to approximately double and CO2 emissions to increase by 50–150% by mid-century.”

IATA International Air Transport Association

“Airline Industry Forecast 2013-2017 show that airlines expect to see a 31% increase in passenger numbers between 2012 and 2017. By 2017 total passenger numbers are expected to rise to 3.91 billion—an increase of 930 million passengers over the 2.98 billion carried in 2012.”

Professor Mark Skilton, Professor of Practice in Information Systems Management at Warwick Business School, has over twenty years' experience in information technology and business consulting to many of the top Fortune 1000 companies across many industry sectors, working in over 25 countries at C-suite and board level to transform their operations and IT value. Mark's career has included CIO, CTO and Director roles for several FMCG, telecoms, media and engineering organizations, and most recently global strategic office roles in the big five consulting organizations, focusing on digital strategy and new multi-sourcing innovation models for the public and private sectors. He is currently a part-time Professor of Practice at Warwick Business School, UK, where he teaches outsourcing, the intervention of new digital business models, and CIO excellence practices with leading industry practitioners.

Mark's current research and industry engagement interests are in digital ecosystems and the convergence of social media networks, big data, mobility, cloud computing and the M2M Internet of Things to enable digital workspaces. This work has focused on defining new value models for digitizing products, workplaces, transport, and consumer and provider contextual services. He has spoken and published internationally on these subjects and is currently writing a book on the digital economy.

Since 2010 Mark has held international standards body roles in The Open Group, co-chairing the Cloud Computing Work Group and leading Open Platform 3.0™ initiatives and standards publications. He is active in ISO JTC 1/SC 38 distributed architecture standards and in the Hub-of-All-Things (HAT), a multi-disciplinary project funded by the Research Councils UK Digital Economy Programme. Mark is also active in cyber security forums at Warwick University, Ovum Security Summits and INFOSEC. He has spoken at the EU Commission on the Digital Ecosystems agenda and is currently an EU Commission competition judge on Smart Outsourcing Innovation.


Q&A with Allen Brown, President and CEO of The Open Group

By The Open Group

Last month, The Open Group hosted its San Francisco 2014 conference themed “Toward Boundaryless Information Flow™.” Boundaryless Information Flow has been the pillar of The Open Group’s mission since 2002 when it was adopted as the organization’s vision for Enterprise Architecture. We sat down at the conference with The Open Group President and CEO Allen Brown to discuss the industry’s progress toward that goal and the industries that could most benefit from it now as well as The Open Group’s new Dependability through Assuredness™ Standard and what the organization’s Forums are working on in 2014.

The Open Group adopted Boundaryless Information Flow as its vision in 2002, and the theme of the San Francisco Conference has been “Towards Boundaryless Information Flow.” Where do you think the industry is at this point in progressing toward that goal?

Well, it's progressing reasonably well, but the challenge is, of course, that when we established that vision back in 2002, life was a little less complex, a little bit less fast moving, a little bit less fast-paced. Although organizations are improving the way that they act in a boundaryless manner – and of course that changes by industry – some industries still have big silos and stovepipes; they still have big boundaries. But generally speaking we are moving, and everyone understands the need for information to flow in a boundaryless manner, for people to be able to access and integrate information and to provide it to the teams that need it.

One of the keynotes on Day One focused on the opportunities within the healthcare industry, and The Open Group recently started a Healthcare Forum. Do you see the healthcare industry as a test case for Boundaryless Information Flow, and why?

Healthcare is one of the verticals that we've focused on. And it is not so much a test case as an area that absolutely seems to need information to flow in a boundaryless manner, so that everyone involved – from the patient through the administrator to the medical teams – has access to the right information at the right time. We know that in many situations there are shifts of medical teams, and from one medical team to another they don't have access to the same information. Information isn't easily shared between medical doctors, hospitals and payers. What we're trying to do is to focus on the needs of the patient and improve the information flow so that you get better outcomes for the patient.

Are there other industries where this vision might be enabled sooner rather than later?

I think that we’re already making significant progress in what we call the Exploration, Mining and Minerals industry. Our EMMM™ Forum has produced an industry-wide model that is being adopted throughout that industry. We’re also looking at whether we can have an influence in the airline industry, automotive industry, manufacturing industry. There are many, many others, government and retail included.

The plenary on Day Two of the conference focused on The Open Group’s Dependability through Assuredness standard, which was released last August. Why is The Open Group looking at dependability and why is it important?

Dependability is ultimately what you need from any system. You need to be able to rely on that system to perform when needed. Systems are becoming more complex, they’re becoming bigger. We’re not just thinking about the things that arrive on the desktop, we’re thinking about systems like the barriers at subway stations or Tube stations, we’re looking at systems that operate any number of complex activities. And they bring an awful lot of things together that you have to rely upon.

Now in all of these systems, what we're trying to do is to minimize the amount of downtime, because downtime can result in financial loss or, at worst, the loss of human life, and we're trying to focus on that. What is interesting about the Dependability through Assuredness Standard is that it brings together so many other aspects of what The Open Group is working on. Obviously the architecture is at the core, so it's critical that there's an architecture. It's critical that we understand the requirements of that system. It's also critical that we understand the risks, so that fits in with the work of the Security Forum and the work that they've done on Risk Analysis and Dependency Modeling; out of the dependency modeling we can get the use cases so that we can understand where the vulnerabilities are, what action has to be taken if we identify a vulnerability, or what action needs to be taken in the event of a failure of the system. If we do that and assign accountability to people – who will do what by when in the event of an anomaly being detected or a failure happening – we can actually minimize that downtime or remove it completely.
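
As a minimal sketch of that accountability idea – not taken from the O-DA standard itself; the anomaly types, owners and actions below are invented – a mapping agreed in advance lets a response start the moment an anomaly is detected:

```python
# A toy failure-response lookup: each anticipated anomaly has a pre-agreed
# accountable owner and action, so there is no debate when it happens.
ACCOUNTABILITY = {
    "card_processing_failure": ("payments-ops", "fail over to standby cluster"),
    "gate_barrier_fault":      ("station-eng",  "open barriers, dispatch staff"),
}


def respond(anomaly: str) -> str:
    # Unanticipated anomalies still get a default owner, so accountability
    # never falls through the cracks.
    owner, action = ACCOUNTABILITY.get(anomaly, ("duty-manager", "escalate"))
    return f"{owner}: {action}"


print(respond("card_processing_failure"))  # -> payments-ops: fail over ...
```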

Now the other great thing about this is that it not only focuses on the architecture for the actual system development – as the system changes over time, requirements change, legislation that might affect it changes, external changes occur, and all of that goes into the system – but there is also another cycle within the framework that deals with failure, analyzes it and makes sure it doesn't happen again. There have been so many instances of failure recently. In the UK, for example, a bank was recently unable to process debit cards or credit cards for customers for about three or four hours, and that was probably caused by work done on a routine basis over a weekend. If Dependability through Assuredness had been in place, that could have been averted; it could have saved an awful lot of difficulty for an awful lot of people.

How does the Dependability through Assuredness Standard also move the industry toward Boundaryless Information Flow?

It's part of it. It's critical that with big systems the information has to flow. But this is not so much about the information as about how a system is going to work in a dependable manner.

Business Architecture was another featured topic in the San Francisco plenary. What role can business architecture play in enterprise transformation vis-à-vis Enterprise Architecture as a whole?

A lot of people in the industry are talking about Business Architecture right now and trying to focus on that as a separate discipline. We see it as a fundamental part of Enterprise Architecture. And, in fact, there are three legs to Enterprise Architecture: there's Business Architecture; there's the need for business analysts, who are critical to supplying the information; and then there are the solutions architects and the other architects – data, applications architects and so on – that are needed. So those three legs are needed.

We find that there are two or three different types of Business Architect. There are those who use the analysis to understand what the business is doing in order to inform the solutions architects and other architects for the development of solutions. There are those who are more integrated with the business, who can understand what is going on and provide input into how that might be improved through technology. And there are those who can go another step and say: here are the advances in the technology, and here are the opportunities for advancing the competitiveness of our organization.

What are some of the other key initiatives that The Open Group's Forums and Work Groups will be working on in 2014?

That kind of question is like when you've got an award and you've got to thank your friends, so apologies to anyone that I leave out. Let me start alphabetically with the Architecture Forum. The Architecture Forum obviously is working on the evolution of TOGAF®; they're also working on the harmonization of TOGAF with ArchiMate®, and they have a number of projects within that – Business Architecture is one of the projects going on in the Architecture space. The ArchiMate Forum is pushing ahead with ArchiMate. They've got two interesting activities going on at the moment. One is called ArchiMetals, which is going to be a sister publication to the ArchiSurance case study: where ArchiSurance provides an example of ArchiMate used in the insurance industry, ArchiMetals will show it used in a manufacturing context, so there will be a whitepaper on that, and there will be examples and artifacts that we can use. They're also working on an ArchiMate standard for interoperability of modeling tools. There are four tools that are accredited and certified by The Open Group right now, and we're looking for that interoperability to help organizations that have multiple tools, as many of them do.

Going down the alphabet, there's DirecNet. Not many people know about DirecNet, but DirecNet™ is work that we do with the U.S. Navy on standards for long-range, high-bandwidth mobile networking. Then we can go to the FACE™ Consortium, the Future Airborne Capability Environment. The FACE Consortium is working on the next version of its standard; they're working toward accreditation and a certification program, and the uptake of that through procurement is absolutely amazing – we're thrilled about that.

Healthcare we've talked about. The Open Group Trusted Technology Forum is working on how we can trust the supply chain for developed systems. They've released the Open Trusted Technology Provider™ Standard (O-TTPS) Accreditation Program, which was launched this week, and we already have one accredited vendor and two certified test labs, assessment labs. That is really exciting, because now we've got a way of helping any organization that has large complex systems developed through a global supply chain to make sure that it can trust that supply chain. And that is going to be invaluable to many industries, but also to the safety of citizens and the infrastructure of many countries. The other part of the O-TTPS work is that we are planning to move the standard toward ISO standardization shortly.

The next one moving down the list would be Open Platform 3.0™. This is a really exciting part of Boundaryless Information Flow, it really is. This is about the convergence of SOA, Cloud, Social, Mobile, the Internet of Things and Big Data; bringing all of those activities together is something critical right now that we need to focus on. In the different areas, some of our Cloud Computing standards have already gone to ISO and been adopted by ISO. We're working right now on the next products that are going to move through. We have a governance standard in process, and an ecosystem standard has recently been published. In the area of Big Data there's a whitepaper that's 25 percent complete, and there's also a lot of work on the definition of what Open Platform 3.0 is – this week the members have been working on trying to define Open Platform 3.0. One of the really interesting activities that's gone on is that the members of the Open Platform 3.0 Forum have produced something like 22 different use cases, and they're really good. They're concise and they're precise, and they cover a number of different industries, including healthcare and others. The next stage is to look at those and work on the ROI of those – the monetization, the value from those use cases – and that's really exciting; I'm looking forward to peeping at that from time to time.

The Real-Time and Embedded Systems Forum (RTES) is next. Real-Time is where we incubated the Dependability through Assuredness Framework, and that work is continuing to develop there, which is really good. The core focus of the RTES Forum is high-assurance systems; they're doing some work with ISO on that, and a lot of other work in areas such as multicore, and, of course, they have a number of EC projects in which we're partnering with others around RTES.

The Security Forum, as I mentioned earlier, has done a lot of work on risk and dependability. They not only have their standards for the Risk Taxonomy and Risk Analysis, but they've now also developed the Open FAIR Certification for People, which is based on those two standards. And we're already starting to see people being trained and certified under the Open FAIR Certification Program that the Security Forum developed.

A lot of other activities are going on. Like I said, I probably left a lot of things out, but I hope that gives you a flavor of what’s going on in The Open Group right now.

The Open Group will be hosting a summit in Amsterdam May 12-14, 2014. What can we look forward to at that conference?

In Amsterdam we have a summit that's going to bring together a lot of things; it's going to be a bigger conference than we had here. We've got a lot going on in all of our activities, and we're going to bring together top-level speakers, so we're looking forward to some interesting work during that week.


One Year Later: A Q&A Interview with Chris Harding and Dave Lounsbury about Open Platform 3.0™

By The Open Group

The Open Group launched its Open Platform 3.0™ Forum nearly one year ago at the 2013 Sydney conference. Open Platform 3.0 refers to the convergence of new and emerging technology trends such as Mobile, Social, Big Data, Cloud and the Internet of Things, as well as the new business models and system designs these trends are pushing organizations toward due to the consumerization of IT and evolving user behaviors. The Forum was created to help organizations address the architectural and structural considerations that businesses must consider to take advantage of and benefit from this evolutionary shift in how technology is used.

We sat down with The Open Group CTO Dave Lounsbury and Open Platform 3.0 Director Dr. Chris Harding at the recent San Francisco conference to catch up on the Forum’s activities and progress since launch and what they’ll be working on during 2014.

The Open Group’s Forum, Open Platform 3.0, was launched almost a year ago in April of 2013. What has the Forum been working on over the past year?

Chris Harding (CH): We launched at the Sydney conference in April of last year. What we’ve done since then first of all was to look at the requirements for the platform, and we did this using the proven TOGAF® technique of the Business Scenario. So over the course of last summer, the summer of 2013, we developed a Business Scenario capturing the requirements for Open Platform 3.0 and that was published just before The Open Group conference in October. Following that conference, the main activity that we’ve been doing is in fact furthering the requirements space. We’ve been developing analysis of use cases, so currently we have 22 different use cases that members of the forum have put together which are illustrating the use of the convergent technologies and most importantly the use of them in combination with each other.

What we’re doing here in this meeting in San Francisco is to obtain from that basis of requirements and use cases an understanding of what the platform fundamentally should be because it is our intention to produce a Snapshot definition of the platform by the end of March. So in the first year of the Forum, we hope that we will finish that year by producing a Snapshot definition of Open Platform 3.0.

Dave Lounsbury (DL): First, the roots of the Open Platform go deeper. Prior to that we had a number of work groups in the areas of Cloud, SOA and Semantic Interoperability, among others. All of those were early pieces, and what we saw at the beginning of 2013 was a coalescing of that into this concept that businesses were looking for a new platform for their operations that combined aspects of Social, Mobile, Cloud computing, Big Data and the analytics that go along with it. We saw that emerging in the marketplace, and we formed the Forum to develop that direction. The Open Group always takes an end-to-end view of any problem – we like to look at the whole ecosystem. We want to make sure that the technical standards aren't just point targets and actually address a business need.

Some of the work groups within The Open Group, such as Quantum Lifecycle Management (QLM) and Semantic Interoperability, have been brought under the umbrella of Open Platform 3.0, most notably the Cloud Work Group. How will the work of these groups continue under Platform 3.0?

CH: Some of the work already going on in The Open Group was directly or indirectly relevant to Open Platform 3.0. First and most important was the work of the Cloud Work Group, Cloud being one of the convergent technologies, and the Cloud Work Group became a part of Open Platform 3.0. Two other activities also became part of Open Platform 3.0. One of these was the Semantic Interoperability Work Group, because we recognized that Semantic Interoperability has to be an important part of how these technologies work with each other. We may not have a full definition of that in the first version of the standard – it's a notoriously difficult area – but over the course of time we hope to incorporate a Semantic Interoperability component in the Platform definition, and that may well build on the work that we've been doing with the Universal Data Element Framework, the UDEF project, which is currently undergoing a major restructuring. The key thing from the Open Platform 3.0 perspective is how the semantic convention relates to the convergence of the technologies in the platform.
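
As a toy sketch of the idea behind a common data element framework such as UDEF – the identifiers and field names below are invented for illustration – two systems that name their fields differently can interoperate by mapping both names to a shared concept identifier:

```python
# A toy shared-concept registry: each system maps its local field names to a
# common identifier, so fields can be matched without pairwise agreements.
SYSTEM_A = {"surname": "person.family-name"}    # A's local name -> concept
SYSTEM_B = {"last_name": "person.family-name"}  # B's local name -> concept


def translate(field: str, source: dict, target: dict) -> str:
    """Find the target system's local name for a source system's field."""
    concept = source[field]
    for local_name, local_concept in target.items():
        if local_concept == concept:
            return local_name
    raise KeyError(f"no field for concept {concept}")


print(translate("surname", SYSTEM_A, SYSTEM_B))  # -> last_name
```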

In terms of QLM, that became part of Open Platform 3.0 because one of the key convergent technologies is the Internet of Things, and QLM overlaps significantly with that. QLM is not about the Internet of Things as such, but it does have a strong component of understanding the way networked sensors and controls work, so that's become an important contribution to the new Forum.

DL: Like in any platform, there are going to be multiple components. In Open Platform 3.0, one of the big drivers for this change is Big Data. Big Data is very trendy, right? But where does Big Data come from? Well, it comes from increased connectivity, increased use of mobile devices, increased use of sensors – the ‘Internet of Things.’ All of these things are generating data about usage patterns: where people are, what they're doing, what they're buying, what they're interested in, and what their likes and dislikes are, creating a massive flood of data. Now the question becomes ‘how do you compute on that data?’ You need to handle that massively scalable stream of data. You need massively scalable computing underneath it; you need the ability to move large amounts of information from one place to another. When you think about the analysis of data like that, you have algorithms that do a lot of data access and have big spikes of computation as they create some model of it. If you're going to look at 10 zillion records, you don't want to buy enough computers so you can always look at 10 zillion records; you want to be able to turn that on, do your analysis and turn it back off. That's, of course, why Cloud is a critical component of Open Platform 3.0.
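
As a sketch of that ‘turn it on, do your analysis, turn it back off’ pattern – the CloudClient below is a hypothetical stand-in, not a real provider SDK – compute is provisioned only for the duration of a burst analysis:

```python
# A toy elastic-compute pattern: provision a cluster, run the analysis, and
# always release the cluster afterwards, even if the analysis fails.
from contextlib import contextmanager


class CloudClient:  # stand-in for a real provider SDK (real APIs differ)
    def provision(self, nodes: int) -> str:
        print(f"provisioned {nodes} nodes")
        return "cluster-1"

    def release(self, cluster_id: str) -> None:
        print(f"released {cluster_id}")


@contextmanager
def burst_cluster(client: CloudClient, nodes: int):
    cluster = client.provision(nodes)  # pay only while the analysis runs
    try:
        yield cluster
    finally:
        client.release(cluster)        # turn it back off, even on error


with burst_cluster(CloudClient(), nodes=100) as cluster:
    print(f"running analysis on {cluster}")
```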

Open Platform 3.0 encompasses a lot of different technologies as well as how they are converging. How do you piece apart everything that Platform 3.0 entails to begin to formulate a standard for it?

CH: I mentioned that we developed 22 use cases. The way that we’re addressing this is to look at use cases and the business and technical ecosystems that those use cases exemplify and to abstract from that some fundamental architectural patterns. These we believe will be the basis for the initial definition of the platform.

DL: That gets back to this question about how we're starting up. Again, it's The Open Group's mantra that we look at a business problem as an end-to-end problem. So what you'll see in Open Platform 3.0 is that we've done the Business Scenario to figure out what the business motivator is and what business people need to get this done, and we're fleshing that out with the details in these detailed use cases.

One of the things that we're very careful about in The Open Group is that we don't replicate what's going on in other standards bodies. If you look at what's going on in Cloud, and what continues to go on in Cloud under the Open Platform 3.0 banner, we really focused in on what business people need – the Cloud guides are about how business people really use it. We've stayed away for a long time from the bits and bytes – we're now doing a Cloud Reference Architecture – but we've also created the Cloud Ecosystem Reference Model, which was just published. That Cloud Ecosystem Reference Model, if you read through it, isn't about how bits flow around; it's about how partners interact with each other – what to look for in your Cloud partner, who are the players? When you go to use Cloud in your business, what players do you have to engage with? What are the roles on which you have to engage with them? So again, it's really that business level of guidance that The Open Group is really good at, and we liaise with other organizations in order to get technical material if we need it – or, if not, we'll create it ourselves, because we've got very competent technical people – but again, it's that balanced business approach that distinguishes The Open Group way.

Many industry pundits have said that Open Platform 3.0 is ultimately about a shift toward user-driven IT. How does that change the standards making process when most standards are ultimately put in place by technologists not necessarily end-users?

CH: It's an interesting question. I mentioned the Business Scenario that we developed over the summer – one of the key things that came out of that was that there is this shift towards a more direct use of the technologies by business users. And that is partly because it's becoming more possible. Cloud is one of the key factors that has shortened the cycle of procuring and putting IT in place to support business use, and made it more possible to manage IT directly. At the same time, [users are] becoming impatient with delay and wanting to gain the benefits of technology directly and not at arm's length through the IT department. In connection with this we're seeing phenomena such as the business technologist: the technical specialist who works with or is employed by the business department, rather than within a separate IT department, and one of whose key strengths is an understanding of the business. So that is certainly an important dimension that we're seeing, and one of the requirements for the Platform is that it should be usable in an environment where business is using IT more directly.

But that wasn't the question you asked. The question was, ‘isn't it a problem that the standards are defined by technologists?’ We don't believe it's a problem, provided that the technologists have an understanding of the business environment. That was why, in the Business Scenario activity that we conducted, one of the key inputs was a roundtable workshop with CIO-level people, and that is where a lot of our perspective on why things are changing comes from. Open Platform 3.0 certainly does have a dimension of fundamental architecture patterns, and part of that is business architecture patterns, but it also has a technical dimension, and obviously you do need the technical people to explore that dimension, though they always need to keep in mind that the technology is there to serve the business.

DL: If you actually look at trends in the marketplace about how IT is done, and in fact if you look at the last blog post that Allen [Brown] did about agile, the whole thrust of agile methodologies and their successor DevOps is to get the implementers right next to the business people and have a very tight arrangement in order to get fast iteration and really have the implementer do what the business person needs. I view consumerization not as some outside threat but as a logical extension of that trend. What's happening, in my opinion, is that people who are not technologists, who are not part of the IT department, are getting comfortable using and managing their own technology. And so they're making decisions that used to be made by the IT department years ago – or what used to be the IT department. First there was the big mainframe, and you handed in your cards at a window and you got your printout in your little cubby hole. Then the IT department bought your PC, and now we bring our own devices. There's nothing wrong with that; that's people getting comfortable with technology and making decisions. I think that's one of the reasons we need an Open Platform 3.0 approach – to develop business guidance and eventually technical standards on how we keep up with that trend. Because it's a very natural trend – people want to control the resources they need to get their job done, and if those resources are technical resources, and they're comfortable doing that, great!

Convergence and Open Platform 3.0 seem to take us closer and closer to The Open Group’s vision of Boundaryless Information Flow™.  Is Open Platform 3.0 the fulfillment of that vision?

DL: I think I'd be crazy to say that it's the endpoint of that vision. I think being able to move large amounts of data and make decisions on it is a significant step forward in Boundaryless Information Flow, but this is a two-edged sword. I talked about all that data being generated by mobile devices and sensors and retail networks and social networks and things like that. That data is growing exponentially. The number of people who can make decisions on that data is growing at best linearly, and not very quickly. So if there's all this data out there and nobody to look at it, we need to ask whether we have lowered the boundary for communications or actually raised it, by creating a pile of data that no one can climb. That's why I think a next step is, in fact, more machine-assisted analytics and predictive analytics and machine learning that will help humans digest and understand that data. That will be, I think, yet another step toward Boundaryless Information Flow. Moving bits around does not equate to information flow – it's only information when it moves from data to being information in a human's brain. Until we lower that barrier as well, we're not there. And even beyond that, there are still lots of things that can be done, in terms of breaking down human language barriers and things like that, or using social networks in more intuitive ways. I think there's a long way to go. I think this is a really important step forward, but fulfillment is too strong a word.

CH: Not in itself, I don't believe. It is a major contribution towards the vision of Boundaryless Information Flow, but it is not the complete fulfillment of that vision. Since we formulated the problem statement of Boundaryless Information Flow, there have been a number of developments that have had an impact on it and maybe helped to bring it closer. So you might think of SOA as an important enabling technology for Boundaryless Information Flow, replacing the information silos with interacting services. Now we're seeing Open Platform 3.0, which is certainly going to have a service-oriented flavor, shall we say, although it probably will not look exactly like traditional SOA. The Boundaryless Information Flow requirement was a very far-reaching problem statement. The Interoperable Business Scenario was where it was first set out, and since then we've been gradually making progress toward it. Open Platform 3.0 will bring it closer, but I'm sure there will be other things still needed to make it happen.

One of the key things for Boundaryless Information Flow is Enterprise Architecture. Within a particular enterprise, the business and IT need to be architected to enable Boundaryless Information Flow, and TOGAF is the method defined and maintained by The Open Group for how enterprises define enterprise architectures. Open Platform 3.0 will complement that by providing a ‘this is what an architecture looks like that enables the business to take advantage of these new converging technologies.’ But there will still be a need for the Enterprise Architect to put that together with the other particular factors involved in an enterprise to create an architecture for Boundaryless Information Flow within that enterprise.

When can we expect the first standard from Open Platform 3.0?

DL: Well, we published the Cloud Ecosystem Reference Guide, and again the understanding of how business partners relate in the Cloud world is a key component of Open Platform 3.0. The Forum has a roadmap, and will start publishing the case studies still in process.

The message I would say is there’s already early value in the Cloud Ecosystem Reference Model, which is a logical continuation of cloud work that had already gone on in the Work Group, but is now part of the Forum as part of Open Platform 3.0.

CH: That's always a tricky question; however, I can tell you what is planned. The intention, as I said, was to produce a Snapshot definition by the end of March and, given that we are a quarter of the way through the meeting at this conference, which is the key meeting that will define the basis for that, the progress has been good so far, so I'm optimistic. A Snapshot is not a Standard. A Snapshot is a statement of ‘this is what we are thinking and might be what it will look like,’ but it's not guaranteed in any way that the Standard will follow the Snapshot. We are intending to produce the first Standard definition of the platform in about a year's time after the Snapshot. That will give the opportunity for people, not only within The Open Group but outside it, to give us input and further understanding of the way people intend to use the platform as feedback on the Snapshot, which should be the basis for the first published standard.

For more on the Open Platform 3.0 Forum, please visit: http://www3.opengroup.org/subjectareas/platform3.0.

If you have any questions about Open Platform 3.0 or if you would like to join the new Forum, please contact Chris Harding (c.harding@opengroup.org) for queries regarding the Forum or Chris Parnell (c.parnell@opengroup.org) for queries regarding membership.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.

Dave Lounsbury is Chief Technical Officer (CTO) and Vice President, Services for The Open Group. As CTO, he ensures that The Open Group's people and IT resources are effectively used to implement the organization's strategy and mission. As VP of Services, Dave leads the delivery of The Open Group's proven processes for collaboration and certification, both within the organization and in support of third-party consortia. Dave holds a degree in Electrical Engineering from Worcester Polytechnic Institute, and is the holder of three U.S. patents.


The Open Group San Francisco 2014 – Day Two Highlights

By Loren K. Baynes, Director, Global Marketing Communications

Day two, February 4th, of The Open Group San Francisco conference kicked off with a welcome and opening remarks from Steve Nunn, COO of The Open Group and CEO of the Association of Enterprise Architects.

Nunn introduced Allen Brown, President and CEO of The Open Group, who provided highlights from The Open Group’s last quarter.  As of Q4 2013, The Open Group had 45,000 individual members in 134 countries hailing from 449 member companies in 38 countries worldwide. Ten new member companies have already joined The Open Group in 2014, and 24 members joined in the last quarter of 2013, with the first member company joining from Vietnam. In addition, 6,500 individuals attended events sponsored by The Open Group in Q4 2013 worldwide.

Brown also provided updates on The Open Group's ongoing work, including the FACE™ Consortium, the DirecNet® Waveform Standard, the Architecture Forum, the ArchiMate® Forum, the Open Platform 3.0™ Forum and the Security Forum.

Of note was the ongoing development of TOGAF® and the introduction of a three-volume work, with individual volumes outlining the TOGAF framework, guidance, and tools and techniques for the standard, as well as collaborative work that allows the ArchiMate modeling language to be used for risk management in enterprise architectures.

In addition, the Open Platform 3.0 Forum has already put together 22 business use cases outlining ROI and business value for various uses related to technology convergence. The Cloud Work Group's Cloud Reference Architecture has also been submitted to ISO for international standardization, and the Security Forum has introduced the Open FAIR risk management certification program for individuals.

The morning plenary centered on The Open Group’s Dependability through Assuredness™ (O-DA) Framework, which was released last August.

Speaking first about the framework was Dr. Mario Tokoro, Founder and Executive Advisor for Sony Computer Science Laboratories. Dr. Tokoro gave an overview of the Dependable Embedded OS project (DEOS), a large national project in Japan originally intended to strengthen the country’s embedded systems. After considerable research, the project leaders discovered they needed to consider whether large, open systems could be dependable when it came to business continuity, accountability and ensuring consistency throughout the systems’ lifecycle. Because the boundaries of large open systems are ever-changing, the project leaders knew they must put together dependability requirements that could accommodate constant change, allow for continuous service and provide continuous accountability for the systems based on consensus. As a result, they put together a framework to address both the change accommodation cycle and failure response cycles for large systems – this framework was donated to The Open Group’s Real-Time Embedded Systems Forum and released as the O-DA standard.

Dr. Tokoro's presentation was followed by a panel discussion on the O-DA standard. Moderated by Dave Lounsbury, VP and CTO of The Open Group, the panel included Dr. Tokoro; Jack Fujieda, Founder and CEO of ReGIS, Inc.; T.J. Virdi, Senior Enterprise IT Architect at Boeing; and Bill Brierly, Partner and Senior Consultant, Conexiam. The panel discussed the importance of openness for systems, reiterating the conference theme of boundaries, and the realities of having standards that can ensure openness and dependability at the same time. They also discussed how the O-DA standard provides end-to-end requirements for system architectures that also account for accommodating changes within the system and accountability for it.

Lounsbury concluded the track by reiterating that assuring systems' dependability is not only fundamental to The Open Group mission of Boundaryless Information Flow™ and interoperability but also to preventing large system failures.

Tuesday’s late morning sessions were split into two tracks, with one track continuing the Dependability through Assuredness theme hosted by Joe Bergmann, Forum Chair of The Open Group’s Real-Time and Embedded Systems Forum. In this track, Fujieda and Brierly furthered the discussion of O-DA outlining the philosophy and vision of the standard, as well as providing a roadmap for the standard.

In the morning Business Innovation & Transformation track, Alan Hakimi, Consulting Executive, Microsoft presented “Zen and the Art of Enterprise Architecture: The Dynamics of Transformation in a Complex World.” Hakimi emphasized that transformation needs to focus on a holistic view of an organization’s ecosystem and motivations, economics, culture and existing systems to help foster real change. Based on Buddhist philosophy, he presented an eightfold path to transformation that can allow enterprise architects to approach transformation and discuss it with other architects and business constituents in a way that is meaningful to them and allows for complexity and balance.

This was followed by “Building the Knowledge-Based Enterprise,” a session given by Bob Weisman, Head Management Consultant for Build the Vision.

Tuesday's afternoon sessions centered on a number of topics, including Business Innovation and Transformation, Risk Management, ArchiMate, TOGAF tutorials and case studies, and Professional Development.

In the ArchiMate track, Vadim Polyakov of Inovalon, Inc., presented “Implementing an EA Practice in an Agile Enterprise,” a case study centered on how his company integrated its enterprise architecture with the principles of agile development and customized the ArchiMate framework as part of the process.

The Risk Management track featured William Estrem, President, Metaplexity Associates, and Jim May of Windsor Software discussing how the Open FAIR standard can be used in conjunction with TOGAF 9.1 to enhance risk management in organizations, in their session “Integrating Open FAIR Risk Analysis into the Enterprise Architecture Capability.” Jack Jones, President of CXOWARE, also discussed the best ways of “Communicating the Value Proposition” for cohesive enterprise architectures to business managers using risk management scenarios.

The plenary sessions and many of the track sessions from the day can be viewed on The Open Group's Livestream channel at http://new.livestream.com/opengroup.

The day culminated with dinner and a Lion Dance performance in honor of Chinese New Year performed by Leung’s White Crane Lion & Dragon Dance School of San Francisco.

We would like to express our gratitude for the support of our sponsors: BIZZDesign, Corso, Good e-Learning, I-Server and Metaplexity Associates.

O-DA standard panel discussion with Dave Lounsbury, Bill Brierly, Dr. Mario Tokoro, Jack Fujieda and TJ Virdi
