The Open Group Panel: Internet of Things – Opportunities and Obstacles

Below is the transcript of The Open Group podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data.

Listen to the podcast.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with the recent The Open Group Boston 2014 conference, held July 21 in Boston.

I’m Dana Gardner, principal analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow.

We’re going to now specifically delve into the Internet of Things with a panel of experts. The conference has examined how Open Platform 3.0™ leverages the combined impacts of cloud, big data, mobile, and social. But to each of these we can now add a new cresting wave of complexity and scale as we consider the rapid explosion of new devices, sensors, and myriad endpoints that will be connected using internet protocols, standards, and architectural frameworks.

This means more data, more cloud connectivity and management, and an additional tier of “things” that are going to be part of the mobile edge — and extending that mobile edge ever deeper into even our own bodies.

When we think about inputs to these social networks — that’s going to increase as well. Not only will people be tweeting, your device could very well tweet, too — using social networks to communicate. Perhaps your toaster will soon be sending you a tweet about your English muffins being ready each morning.

The Internet of Things is more than the “things” – it means a higher order of software platforms. For example, if we are going to operate data centers with new dexterity thanks to software-defined networking (SDN) and storage (SDS) — indeed the entire data center being software-defined (SDDC) — then why not a software-defined automobile, or factory floor, or hospital operating room — or even a software-defined city block or neighborhood?

And so how does this all actually work? Does it easily spin out of control? Or does it remain under proper management and governance? Do we have unknown unknowns about what to expect with this new level of complexity, scale, and volume of input devices?

Will architectures arise that support the numbers involved, ensure interoperability, and provide governance for the Internet of Things — rather than just letting each type of device do its own thing?

To help answer some of these questions, The Open Group assembled a distinguished panel to explore the practical implications and limits of the Internet of Things. So please join me in welcoming Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC, and a primary representative to the Industrial Internet Consortium; Penelope Gordon, Emerging Technology Strategist at 1Plug Corporation; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM, and Dave Lounsbury, Chief Technical Officer at The Open Group.

Jean-Francois, we have heard about this notion of “cities as platforms,” and I think the public sector might offer us some opportunity to look at what is going to happen with the Internet of Things, and then extrapolate from that to understand what might happen in the private sector.

Hypothetically, the public sector has a lot to gain. It doesn’t have to operate within the same confines of commercial market development, the profit motive, and that sort of thing. Tell us a little bit about what the opportunity is in the public sector for smart cities.

Jean-Francois Barsoum: It’s immense. The first thing I want to do is link to something that Marshall Van Alstyne (Professor at Boston University and Researcher at MIT) had talked about, because I was thinking about his way of approaching platforms and thinking about how cities represent an example of that.

You don’t have customers; you have citizens. Cities are starting to see themselves as platforms, as ways to communicate with their customers, their citizens, to get information from them and to communicate back to them. But the complexity with cities is that, as good a platform as they could be, they’re relatively rigid. They’re legislated into existence, and what they’re responsible for is written into law. It’s not really a market.

Chris Harding (Forum Director of The Open Group Open Platform 3.0) earlier mentioned, for example, water and traffic management. Cities could benefit greatly by managing traffic a lot better.

Part of the issue is that you might have a state or provincial government that looks after highways. You might have the central part of the city that looks after arterial networks. You might have a borough that would look after residential streets, and these different platforms end up not talking to each other.

They gather their own data. They put in their own widgets to collect information that concerns them, but do not necessarily share it with their neighbor. One of the conditions that Marshall said would favor the emergence of a platform had to do with how much overlap there would be between your constituents and your customers. In this case, there’s perfect overlap. It’s the same citizen, but it’s as if they have to carry an Android and an iPhone, despite the fact that it is not the best way of dealing with the situation.

The complexities are proportional to the amount of benefit you could get if you could solve them.

Gardner: So more interoperability issues?

Barsoum: Yes.

More hurdles

Gardner: More hurdles, and when you say commensurate, you’re saying that the opportunity is huge, but the hurdles are huge and we’re not quite sure how this is going to unfold.

Barsoum: That’s right.

Gardner: Let’s go to an area where the opportunity outstrips the challenge: manufacturing. Said, what is the opportunity for the software-defined factory floor for realizing huge efficiencies and applying algorithmic benefits to how management occurs across the domains of supply chain, distribution, and logistics? It seems to me that this is a no-brainer. It’s such an opportunity that the solution must be found.

Said Tabet: When it comes to manufacturing, the opportunities are probably much bigger. It’s where a lot of progress has already been made, and work is still going on. There are two ways to look at it.

One is the internal side of it, where you have improvements of business processes. For example, similar to what Jean-Francois said, in a lot of the larger companies that have factories all around the world, you’ll see such improvements at the individual factory level. You still have those silos at that level.

Now with this new technology, with this connectedness, those improvements are going to be made across factories, and there’s a learning aspect to it in terms of trying to manage that data so that, in fact, they do a better job. We still have to deal with interoperability, of course, and additional issues that could be jurisdictional, etc.

However, there is that learning that allows them to improve their processes across factories. Maintenance is one of them, as well as creating new products, and connecting better with their customers. We can see a lot of examples in the marketplace. I won’t mention names, but there are lots of them out there with the large manufacturers.

Gardner: We’ve had just-in-time manufacturing and lean processes for quite some time, trying to compress the supply chain and distribution networks, but these haven’t necessarily been done through public networks, the internet, or standardized approaches.

But if we’re to benefit, we’re going to need to be able to be platform companies, not just product companies. How do you go from being a proprietary set of manufacturing protocols and approaches to this wider, standardized interoperability architecture?

Tabet: That’s a very good question, because now we’re talking about that connection to the customer. With the airline and the jet engine manufacturer, for example, when the plane lands and there has been some monitoring of the activity during the whole flight, at that moment, they’ll get that data made available. There could be improvements and maybe solutions available as soon as the plane lands.

Interoperability

That requires interoperability. It requires Platform 3.0, for example. If you don’t have open platforms, then you’ll deal with the same hurdles in terms of proprietary technologies and integration in a silo-based manner.

Gardner: Penelope, you’ve been writing about the obstacles to decision-making that might become apparent as big data becomes more prolific and people try to capture all the data about all the processes and analyze it. That’s a little bit of a departure from the way we’ve made decisions in organizations, public and private, in the past.

Of course, one of the bigger tenets of the Internet of Things is all this great data that will be available to us from so many different points. Is there a conundrum of some sort? Is there an unknown obstacle for how we, as organizations and individuals, can deal with that data? Is this going to be chaos, or is this going to be all the promises many organizations have led us to believe around big data in the Internet of Things?

Penelope Gordon: It’s something that has just been accelerated. This is not a new problem in terms of the decision-making styles not matching the inputs that are being provided into the decision-making process.

Former US President Bill Clinton was known for delaying making decisions. He’s a head-type decision-maker, and so he would always want more data and more data. That just gets into a never-ending loop, because as people collect data for him, there is always more data that you can collect, particularly on the quantitative side. Whereas, if it is distilled down and presented very succinctly and then balanced with the qualitative, that allows intuition to come to the fore, and you can make optimal decisions in that fashion.

Conversely, if you have someone who is a heart-type or gut-type decision-maker and you present them with a lot of data, their first response is to ignore the data. It’s just too much for them to take in. Then you end up going entirely with whatever you feel is correct, whatever instinct tells you is the correct decision. If you’re talking about strategic decisions, where you’re making a decision that’s going to influence your direction five years down the road, that could be a very wrong decision to make, a very expensive decision, and as you said, it could be chaos.

It just brings to my mind Dr. Seuss’s The Cat in the Hat, with Thing One and Thing Two. So, as we talk about the Internet of Things, we need to keep in mind that we need to have some sort of structure that we are tying this back to, and understand what we are trying to do with these things.

Gardner: Openness is important, and governance is essential. Then, we can start moving toward higher-order business platform benefits. But, so far, our panel has been a little bit cynical. We’ve heard that the opportunity and the challenges are commensurate in the public sector and that in manufacturing we’re moving into a whole new area of interoperability, when we think about reaching out to customers and having a boundary that is managed between internal processes and external communications.

And we’ve heard that an overload of data could become a very serious problem and that we might not get benefits from big data through the Internet of Things, but perhaps even stumble and have less quality of decisions.

So Dave Lounsbury of The Open Group, will the same level of standardization work? Do we need a new type of standards approach, a different type of framework, or is this a natural extension of the path and course of what we have done in the past?

Different level

Dave Lounsbury: We need to look at the problem at a different level than we institutionally think about an interoperability problem. The Internet of Things is riding two very powerful waves. One of them is Moore’s Law: these sensors, actuators, and networks get smaller and smaller. Now we can put Ethernet in a light switch, a tag, or something like that.

Also, Metcalfe’s Law says that the value of all this connectivity goes up with the square of the number of connected points, and that applies both to the connection of the things and, more importantly, to the connection of the data.
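As a rough, back-of-the-envelope gloss on that scaling (my illustration, not part of the discussion), Metcalfe’s Law is usually written as follows:

```latex
% Metcalfe's Law: network value grows with the square of the endpoints.
% V = value, n = number of connected points, k = a constant of proportionality.
V(n) = k \cdot \frac{n(n-1)}{2} \approx \frac{k}{2} n^{2}
% Example: going from 1,000 to 2,000 endpoints roughly quadruples the number
% of possible pairwise connections (499,500 to 1,999,000).
```

The same arithmetic applies to the data: each new sensor feed can, in principle, be correlated with every feed already connected, which is why the data side compounds even faster than the device count.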

The trouble is, as we have said, that there’s so much data here. The question is how you manage it and how you keep control over it so that you actually get business value from it. That’s going to require this new concept of a platform, not only to connect the data, but to aggregate it, correlate it, as you said, and present it in ways that people can make decisions however they want.

Also, because of the raw volume, we have to start thinking about machine agency. We have to think about the system actually making the routine decisions or giving advice to the humans who are actually doing it. Those are important parts of the solution beyond just a simple “How do we connect all the stuff together?”

Gardner: We might need a higher order of intelligence, now that we have reached this border of what we can do with our conventional approaches to data, information, and process.

Thinking about where this works best first in order to then understand where it might end up later, I was intrigued again this morning by Professor Van Alstyne. He mentioned that in healthcare, we should expect major battles, that there is a turf element to this, that the organization, entity or even commercial corporation that controls and manages certain types of information and access to that information might have some very serious platform benefits.

The openness element now is something to look at, and I’ll come back to the public sector. Is there a degree of openness that we could legislate or regulate to require enough control to prevent the next generation of lock-in, which might not be to a platform but to access to data, information, and endpoints? Where is it in the public sector that we might look to a leadership position to establish needed openness and not just interoperability?

Barsoum: I’m not even sure where to start answering that question. To take healthcare as an example, I certainly didn’t write the bible on healthcare IT systems and if someone did write that, I think they really need to publish it quickly.

We have a single-payer system in Canada, and you would think that would be relatively easy to manage. There is one entity that manages paying the doctors, and everybody gets covered the same way. Therefore, the data should be easily shared among all the players and it should be easy for you to go from your doctor, to your oncologist, to whomever, and maybe to your pharmacy, so that everybody has access to this same information.

We don’t have that and we’re nowhere near having that. If I look to other areas in the public sector, areas where we’re beginning to solve the problem are ones where we face a crisis, and so we need to address that crisis rapidly.

Possibility of improvement

In the transportation infrastructure, we’re getting to that point where the infrastructure we have just doesn’t meet the needs. There’s a constraint in terms of money, and we can’t put much more money into the structure. Then, there are new technologies that are coming in. Chris had talked about driverless cars earlier. They’re essentially throwing a wrench into the works, or maybe offering the possibility of improvement.

On any given piece of infrastructure, you could fit twice as many driverless cars as cars with human drivers in them. Given that set of circumstances, the governments are going to find they have no choice but to share data in order to be able to manage those vehicles. Are there cases where we could get ahead of a crisis in order to manage it? I certainly hope so.

Gardner: How about allowing some of the natural forces of marketplaces, behavior, groups, maybe even chaos theory, where if sufficient openness is maintained there will be some kind of a pattern that will emerge? We need to let this go through its paces, but if we have artificial barriers, that might be thwarted or power could go to places that we would regret later.

Barsoum: I agree. People often focus on structure: the governance doesn’t work, so we should find some way to change the governance of transportation. London has done a very good job of that. They’ve created something called Transport for London that manages everything related to transportation. It doesn’t matter if it’s taxis, bicycles, pedestrians, boats, cargo trains, or whatever; they manage it.

You could do that, but it requires a lot of political effort. The other way to go about doing it is saying, “I’m not going to mess with the structures. I’m just going to require you to open and share all your data.” So, you’re creating a new environment where the governance, the structures, don’t really matter so much anymore. Everybody shares the same data.

Gardner: Said, to the private sector example of manufacturing, you still want to have a global fabric of manufacturing capabilities. This is requiring many partners to work in concert, but with a vast new amount of data and new potential for efficiency.

How do you expect that openness will emerge in the manufacturing sector? How will interoperability play when you don’t have to wait for legislation, but you do need to have cooperation and openness nonetheless?

Tabet: It comes back to the question you asked Dave about standards. I’ll just give you some examples. For example, in the automotive industry, there have been some activities in Europe around specific standards for communication.

The Europeans came to the US and started to have discussions, and the Japanese have interest, as well as the Chinese. That shows that, because there is a common interest in creating these new models from a business standpoint, these challenges have to be dealt with together.

Managing complexity

When we talk about the amounts of data, what we now call big data, and what we are going to see in about five years or so, you can’t even imagine. How do we manage that complexity, which is multidimensional? We talked about this sort of platform and, further, the capability and the data that will be there. From that point of view, openness is the only way to go.

There’s no way that we can stay away from it and still be able to work in silos in that new environment. There are lots of things that we take for granted today. I invite some of you to go back and read articles from 10 years ago that try to predict the future in technology in the 21st century. Look at your smart phones. Adoption is there, because the business models are there, and we can see that progress moving forward.

Collaboration is a must, because this is multidimensional. It’s not just manufacturing like jet engines, car manufacturers, or agriculture, where you have very specific areas. They really have to work with their customers and the customers of their customers.


Gardner: Dave, I have a question for both you and Penelope. I’ve seen some instances where there has been a cooperative endeavor for accessing data, but then making it available as a service, whether it’s an API, a data set, access to a data library, or even a set of analytics applications. The Ocean Observatories Initiative is one example: it has created a sensor network across the oceans and generates data that it then makes available.

Do you think we should expect to see an intermediary organization level that gets between the sensors and the consumers or even controllers of the processes? Is there a model inherent in that that we might look to — something like that cooperative data structure that in some ways creates structure and governance, but also allows for freedom? It’s sort of an entity that we don’t have yet in many organizations or many ecosystems, and that needs to evolve.

Lounsbury: We’re already seeing that in the marketplace. If you look at the commercial and social Internet of Things area, we’re starting to see intermediaries or brokers cropping up that will connect the silo of my Android ecosystem to the ecosystem of package tracking or something like that. There are dozens and dozens of these cropping up.

In fact, you now see APIs even into a silo of what you might consider a proprietary system, and what people are doing is building a layer on top of those APIs to intermediate the data.

This is happening on a point-to-point basis now, but you can easily see the path forward. That’s going to expand to large amounts of data that people will share through a third party. I can see this being a whole new emerging market much as what Google did for search. You could see that happening for the Internet of Things.

Gardner: Penelope, do you have any thoughts about how that would work? Is there a mutually assured benefit that would allow people to want to participate and cooperate with that third entity? Should they have governance and rules about good practices, best practices for that intermediary organization? Any thoughts about how data can be managed in this sort of hierarchical model?

Nothing new

Gordon: First, I’ll contradict it a little bit. To me, a lot of this is nothing new, particularly coming from a marketing strategy perspective, with business intelligence (BI). Various types of intermediaries, who not only collect the data but then do what we call data hygiene, synthesis, and even correlation of the data, have been around for a long time.

It was interesting, when I looked at a recent listing of the big-data companies, that some notable companies were excluded from that list — companies like Nielsen. Nielsen’s been collecting data for a long time. Harte-Hanks is another one that collects a tremendous amount of information and sells that to companies.

That leads into another part of it. We’re seeing an increasing amount of opportunity that involves taking public sources of data and then providing synthesis on it. What remains to be seen is how much of the output of that is going to be provided for “free,” as opposed to “fee.” We’re going to see a lot more companies figuring out creative ways of extracting more value out of data and then charging directly for that, rather than using that as an indirect way of generating traffic.

Gardner: We’ve seen examples of how this has been in place. Does it scale, and does the governance, or lack of governance, that might be in the market now sustain us through the transition into Platform 3.0 and the Internet of Things?

Gordon: That aspect leads on from “you get what you pay for.” If you’re using a free source of data, you don’t have any guarantee that it is from authoritative sources. Often, what we’re getting now is something somebody put in a blog post, and then that will get referenced elsewhere, but there was nothing to go back to. It’s a shaky supply chain for data.

You need to think about the data supply, and that is where the governance comes in. Having standards is going to become increasingly important, unless we really address a lot of the data illiteracy that we have. A lot of people do not understand how to analyze data.

One aspect of that is that a lot of people expect that we have to do full population surveys, as opposed to representative sampling, which gives much more accurate and much more cost-effective collection of data. That’s just one example, and we do need a lot more in governance and standards.
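To make the sampling point concrete (a standard textbook approximation, not something the panel stated): for a simple random sample, the worst-case margin of error on an estimated proportion shrinks with the square root of the sample size, independent of how large the full population is.

```latex
% Worst-case (p = 0.5) margin of error at 95% confidence for a
% simple random sample of size n:
E \approx 1.96 \sqrt{\frac{0.25}{n}} \approx \frac{1}{\sqrt{n}}
% Example: n = 1,111 gives E of roughly 0.03, i.e. about plus-or-minus
% 3 percentage points, whether the population is a city or a continent.
```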

Gardner: What would you like to see changed most in order for the benefits and rewards of the Internet of Things to develop and overcome the drawbacks, the risks, the downside? What, in your opinion, would you like to see happen to make this a positive, rapid outcome? Let’s start with you, Jean-Francois.

Barsoum: There are things that I have seen cities start to do now. There are a couple of examples: Philadelphia is one, and Barcelona does this too. Rather than do the typical request for proposal (RFP), where they say, “This is the kind of solution we’re looking for, and here are our parameters. Can you tell us how much it is going to cost to build?” they come to you with the problem and say, “Here is the problem I want to fix. Here are my priorities, and you’re at liberty to decide how best to fix the problem, but tell us how much that would cost.”

If you do that and you combine it with access to the public data that is available — if public sector opens up its data — you end up with a very powerful combination that liberates a lot of creativity. You can create a lot of new business models. We need to see much more of that. That’s where I would start.

More education

Tabet: I agree with Jean-Francois on that. What I’d like to add is that I think we need to push the relation a little further. We need more education, to your point earlier, around the data and the capabilities.

We need these platforms that we can leverage a little bit further, with the analytics, with machine learning, and with all of these capabilities that are out there. We have to also remember, when we talk about the Internet of Things, that it is things talking to each other.

So it is not human-machine communication. Machine-to-machine automation will go further than that, and we need more innovation and more work in this area, particularly more activity from the governments. We’ve seen that, but it is a little bit frail from that point of view right now.

Gardner: Dave Lounsbury, thoughts about what needs to happen in order to keep this on the tracks?

Lounsbury: We’ve touched on a lot of them already. Thank you for mentioning the machine-to-machine part, because there are plenty of projections that show that it’s going to be the dominant form of Internet communication, probably within the next four years.

So we need to start thinking of that and moving beyond our traditional models of humans talking through interfaces to a set of services. We need to identify the building blocks of capability that you need to manage, not only the information flow and the skilled person that is going to produce it, but also how you manage the machine-to-machine interactions.

Gordon: I’d like to see not so much focus on data management, but focus on what the data is managing and helping us to do. Focusing on the machine-to-machine and the devices is great, but the focus should not be on the devices or on the machines; it should be on what they can accomplish by communicating, what you can accomplish with the devices, and then reverse engineering from that.

Gardner: Let’s go to some questions from the audience. The first one asks about the higher order of intelligence we mentioned earlier. It could be artificial intelligence, perhaps, but they ask whether that’s really the issue. Is the nature of the data substantially different, or are we just creating more of the same, so that it is a storage, plumbing, and processing problem? What, if anything, are we lacking in our current analytics capabilities that is holding us back from exploiting the Internet of Things?

Gordon: I’ve definitely seen that. That has a lot to do with not setting your decision objectives and your decision criteria ahead of time so that you end up collecting a whole bunch of data, and the important data gets lost in the mix. There is a term “data smog.”

Most important

The solution is to figure out, before you go collecting data, what data is most important to you. If you can’t collect certain kinds of data that are important to you directly, then think about how to indirectly collect that data and how to get proxies. But don’t try to go and collect all the data for that. Narrow in on what is going to be most important and most representative of what you’re trying to accomplish.

Gardner: Does anyone want to add to this idea of understanding what current analytics capabilities are lacking, if we have to adopt and absorb the Internet of Things?

Barsoum: There is one element around projection into the future. We’ve been very good at analyzing historical information to understand what’s been happening in the past. We need to become better at projecting into the future, and obviously we’ve been doing that for some time already.

But so many variables are changing. Just to take the driverless car as an example. We’ve been collecting data from loop detectors, radar detectors, and even Bluetooth antennas to understand how traffic moves in the city. But we need to think harder about what that means and how we understand the city of tomorrow is going to work. That requires more thinking about the data, a little bit like what Penelope mentioned, how we interpret that, and how we push that out into the future.

Lounsbury: I have to agree with both. It’s not about statistics. We can use historical data. It helps with a lot of things, but one of the major issues we still deal with today is the question of semantics, the meaning of the data. This goes back to your point, Penelope, around the relevance and the context of that information – how you get what you need when you need it, so you can make the right decisions.

Gardner: Our last question from the audience goes back to Jean-Francois’s comments about the Canadian healthcare system. I imagine it applies to almost any healthcare system around the world. But it asks why interoperability is so difficult to achieve, when we have the power of the purse, that is the market. We also supposedly have the power of the legislation and regulation. You would think between one or the other or both that interoperability, because the stakes are so high, would happen. What’s holding it up?

Barsoum: There are a couple of reasons. One, in the particular case of healthcare, is privacy, but that is one that you could see going elsewhere. As soon as you talk about interoperability in the health sector, people start wondering where their data is going to go, how accessible it is going to be, and to whom.

You need to put a certain number of controls over top of that. What is happening in parallel is that you have people who own some data, who believe they have some power from owning that data, and that they will lose that power if they share it. That can come from doctors, hospitals, anywhere.

So there’s a certain amount of change management you have to get beyond. Everybody has to focus on the welfare of the patient. They have to understand that that has to be the priority, but you also have to understand the welfare of the different stakeholders in the system and make sure that you do not forget about them, because if you forget about them, they will find some way to slow you down.

Use of an ecosystem

Lounsbury: To me, that’s a perfect example of what Marshall Van Alstyne talked about this morning. It’s the change from focus on product to a focus on an ecosystem. Healthcare traditionally has been very focused on a doctor providing product to patient, or a caregiver providing a product to a patient. Now, we’re actually starting to see that the only way we’re able to do this is through use of an ecosystem.

That’s a hard transition. It’s a business-model transition. I will put in a plug here for The Open Group Healthcare vertical, which is looking at that from an architecture perspective. I see that our Forum Director Jason Lee is over here, so if you want to explore that more, please see him.

Gardner: I’m afraid we will have to leave it there. We’ve been discussing the practical implications of the Internet of Things and how it is now set to add a new dimension to Open Platform 3.0 and Boundaryless Information Flow.

We’ve heard how new thinking about interoperability will be needed to extract the value and orchestrate order out of the chaos with such vast new scales of inputs and whole new categories of information.

So with that, a big thank you to our guests: Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC; Penelope Gordon, Emerging Technology Strategist at 1Plug Corp.; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM; and Dave Lounsbury, Chief Technical Officer at The Open Group.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow at The Open Group Conference, recently held in Boston. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript.

Transcript of The Open Group podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Q&A with Marshall Van Alstyne, Professor, Boston University School of Management, and Research Scientist, MIT Center for Digital Business

By The Open Group

The word “platform” has become a nearly ubiquitous term in the tech and business worlds these days. From “Platform as a Service” (PaaS) to IDC’s Third Platform to The Open Group Open Platform 3.0™ Forum, the concept of platforms and building technology frames and applications on top of them has become the next “big thing.”

Although the technology industry tends to conceive of “platforms” as the vehicle that is driving trends such as mobile, social networking, the Cloud and Big Data, Marshall Van Alstyne, Professor at Boston University’s School of Management and a Research Scientist at the MIT Center for Digital Business, believes that the radical shifts that platforms bring are not just technological.

We spoke with Van Alstyne prior to The Open Group Boston 2014, where he presented a keynote, about platforms, how they have shifted traditional business models and how they are impacting industries everywhere.

The title of your session at the Boston conference was “Platform Shift – How New Open Business Models are Changing the Shape of Industry.” How would you define both “platform” and “open business model”?

I think of “platform” as a combination of two things. One, a set of standards or components that folks can take up and use for production of goods and services. The second thing is the rules of play, or the governance model – who has the ability to participate, how do you resolve conflict, and how do you divide up the royalty streams, or who gets what? You can think of it as the two components of the platform—the open standard together with the governance model. The technologists usually get the technology portion of it, and the economists usually get the governance and legal portions of it, but you really need both of them to understand what a ‘platform’ is.

What is the platform allowing then and how is that different from a regular business model?

The platform allows third parties to conduct business using system resources so they can actually meet and exchange goods across the platform. Wonderful examples of that include AirBnB where you can rent rooms or you can post rooms, or eBay, where you can sell goods or exchange goods, or iTunes where you can go find music, videos, apps and games provided by others, or Amazon where third parties are even allowed to set up shop on top of Amazon. They have moved to a business model where they can take control of the books in addition to allowing third parties to sell their own books and music and products and services through the Amazon platform. So by opening it up to allow third parties to participate, you facilitate exchange and grow a market by helping that exchange.

How does this relate to the concept the technology industry is defining as the “third platform”?

I think of it slightly differently. The tech industry uses mobile and social and cloud and data to characterize it. In some sense this view offers those as the attributes that characterize platforms or the knowledge base that enable platforms. But we would add to that the economic forces that actually shape platforms. What we want to do is give you some of the strategic tools, the incentives, the rules that will actually help you control their trajectory by helping you improve who participates and then measure and improve the value they contribute to the platform. So a full ecosystem view is not just the technology and the data, it also measures the value and how you divide that value. The rules of play really become important.

I think the “third platform” offers marvelous concepts and attributes but you also need to add the economics to it: Why do you participate, who gets what portions of the value, and who ultimately owns control.

Who does control the platform then?

A platform has multiple parts. Determining who controls what part is the art and design of the governance model. You have to set up control in the right way to motivate people to participate. But before we get to that, let’s go back and complete the idea of what’s an ‘open platform.’

To define an open platform, consider both the right of access and the right to manipulate platform resources, then consider granting those rights to four different parties. One is the users—can they access one another, can they access data, can they access system resources? Another group is developers—can they manipulate system resources, can they add new features to it, can they sell through the platform? The third group is the platform providers. You often think of them as those folks that facilitate access across the platform. To give you an example, iTunes is a single monolithic store, so the provider is simply Apple, but Android, in contrast, allows multiple providers, so there’s a Samsung Android store, an LTC Android store, a Google Android store; there’s even an Amazon version that uses a different version of Android. So that platform has multiple providers, each with rights to access users. The fourth group is the party that controls the underlying property rights, who owns the IP. The ability to modify the underlying standard, and also the rights of access for other parties, is the bottom-most layer.

So to answer the question of what is ‘open,’ you have to consider the rights of access of all four groups—the users, developers, the providers and IP rights holders, or sponsors, underneath.

Popping back up a level, we’re trying to motivate different parties to participate in the ecosystem. So what do you give the users? Usually it’s some kind of value. What do you give developers? Usually it’s some set of SDKs and APIs, but also some level of royalties. It’s fascinating. If you look back historically, Amazon initially tried a publishing royalty where they took 70 percent and gave a minority 30 percent back to developers. They found that didn’t fly very well, and they had to fall back to the app store or software-style royalty, where they’re taking a lower percentage. I think Apple, for example, takes 30 percent, and Amazon is now close to that. You see ranges of royalties going anywhere from a few percent—an example is credit cards—all the way up to iStock photo, where they take roughly 70 percent. That’s an extremely high rate, and one that I don’t recommend. We were just contracting for designs at 99Designs, and they take a 20 percent cut. That’s probably more realistic, but lower might perhaps even be better—you can create stronger network effects if that’s the case.
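To put rough numbers on why the take rate matters (an illustrative calculation with invented figures, not from the interview):

```latex
% Developer proceeds on a $10 sale under two platform take rates r;
% the developer keeps (1 - r) of each transaction.
r = 0.30: \quad 10 \times (1 - 0.30) = 7.00 \\
r = 0.70: \quad 10 \times (1 - 0.70) = 3.00
```

At a 70 percent take rate, a developer must sell more than twice as many units to earn what a 30 percent rate yields, which is one way to see why very high rates thin out third-party participation.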

Again, the real question of control is how you motivate third parties to participate and add value. If you are allowing them to use resources to create value and keep a lot of that value, then they’re more motivated to participate, to invest, to bring their resources to your platform. If you take most of the value they create, they won’t participate. They won’t add value. One of the biggest challenges for open platforms—what you might call the ‘Field of Dreams’ approach—is that most folks open their platform and assume ‘if you build it, they will come,’ but you really need to reward them to do so. Why would they want to come build with you? There are numerous instances of platforms that opened but no developer chose to add value—the ecosystem is too small. You have to solve the chicken and egg problem: if you don’t have users, developers don’t want to build for you, but if you don’t have developer apps, then why do users participate? So you’ve got a huge feedback problem. That is where the economics become critical: you must solve the chicken and egg problem to build and roll out platforms.

It’s not just a technology question; it’s also an economics and rewards question.

Then who is controlling the platform?

The answer depends on the type of platform. Giving different groups a different set of rights creates different types of platform. Consider the four different parties: users, developers, providers, and sponsors. At one extreme, the Apple Mac platform of the 1980s reserved most rights for development, for producing hardware (the provider layer), and for modifying the IP (the sponsor layer) all to Apple. Apple controlled the platform and it remained closed. In contrast, Microsoft relaxed platform control in specific ways. It licensed to multiple providers, enabling Dell, HP, Compaq and others to sell the platform. It gave developers rights of access to SDKs and APIs, enabling them to extend the platform. These control choices gave Microsoft more than six times the number of developers and more than twenty times the market share of Apple at the high point of Microsoft’s dominance of desktop operating systems. Microsoft gave up some control in order to create a more inclusive platform and a much bigger market.

Control is not a single concept. There are many different control rights you can grant to different parties. For example, you often want to give users an ability to control their own data. You often want to give developers intellectual property rights for the apps that they create and often over the data that their users create. You may want to give them some protections against platform misappropriation. Developers resent it if you take their ideas. So if the platform sees a really clever app that’s been built on top of its platform, what’s the guarantee that the platform simply doesn’t take it or build a competing app? You need to protect your developers in that case. Same thing’s true of the platform provider—what guarantees do they provide users for the quality of content provided on their ecosystem? For example, the Android ecosystem is much more open than the iPhone ecosystem, which means you have more folks offering stores. Simultaneously, that means that there are more viruses and more malware in Android, so what rights and guarantees do you require of the platform providers to protect the users in order that they want to participate? And then at the bottom, what rights do other participants have to control the direction of the platform growth? In the Visa model, for example, there are multiple member banks that help to influence the general direction of that credit card standard. Usually the most successful platforms have a single IP rights holder, but there are several examples of platforms that have multiple IP rights holders.
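One way to visualize this four-party design space (a schematic sketch with invented names and values, not any real platform’s policy) is as a rights matrix that an architect could reason over:

```python
# Schematic rights matrix over the four platform parties Van Alstyne names:
# users, developers, providers, and sponsors. All names and grants here
# are illustrative assumptions; real platforms define far richer policies.

RIGHTS = {
    "user":      {"access_others": True, "add_features": False,
                  "distribute": False, "modify_standard": False},
    "developer": {"access_others": True, "add_features": True,
                  "distribute": False, "modify_standard": False},
    "provider":  {"access_others": True, "add_features": True,
                  "distribute": True,  "modify_standard": False},
    "sponsor":   {"access_others": True, "add_features": True,
                  "distribute": True,  "modify_standard": True},
}

def openness(party: str) -> float:
    """Fraction of the listed rights granted to a party; a crude proxy
    for how open the platform is toward that group."""
    grants = RIGHTS[party]
    return sum(grants.values()) / len(grants)

for party in RIGHTS:
    print(f"{party}: openness={openness(party):.2f}")
```

Flipping individual cells is exactly the design choice he describes: granting developers the distribute right, for instance, moves a platform from the closed Apple-of-the-1980s posture toward the licensed Microsoft posture.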

So, in the end control defines the platform as much as the platform defines control.

What is the “secret” of the Internet-driven marketplace? Is that indeed the platform?

The secret is that, in effect, the goal of the platform is to increase transaction volume and value. If you can do that—and we can give you techniques for doing it—then you can create massive scale. Increasing the transaction value and transaction volume across your platform means that the owner of the platform doesn’t have to be the sole source of content and new ideas provided on the platform. If the platform owner is the only source of value, then the owner is also the bottleneck. The goal is to consummate matches between producers and consumers of value. You want to help users find the content, find the resources, find the other people that they want to meet across your platform. In Apple’s case, you’re helping them find the music, the video, the games, and the apps that they want. In AirBnB’s case, you’re helping them find the rooms that they want; in Uber’s case, you’re helping them find a driver. On Amazon, the book recommendations help you find the content that you want. In all the truly successful platforms, the owner of the platform is not providing all of that value. They’re enabling third parties to add that value, and that’s one reason why The Open Group’s ideas are so important—you need open systems for this to happen.

What’s wrong with current linear business models? Why is a network-driven approach superior?

The fundamental reason why the linear business model no longer works is that it does not manage network effects. Network effects allow you to build platforms where users attract other users and you get feedback that grows your system. As more users join your platform, more developers join your platform, which attracts more users, which attracts more developers. You can see it on any of the major platforms. This is also true of Google. As advertisers use Google Search, the algorithms get better, people find the content that they want, so more advertisers use it. As more drivers join Uber, more people are happier passengers, which attracts more drivers. The more merchants accept Visa, the more consumers are willing to carry it, which attracts more merchants, which attracts more consumers. You get positive feedback.
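A minimal sketch of that feedback loop (a toy model with invented coefficients, not anything from the interview): each side of the market grows in proportion to the size of the other side, so growth compounds rather than staying linear.

```python
# Toy two-sided network-effect model: users attract developers and
# developers attract users. The coefficients a and b are illustrative
# assumptions, not measurements.

def simulate(users: float, developers: float,
             a: float = 0.05, b: float = 0.01, steps: int = 10) -> None:
    """Each period, user growth is proportional to the developer base,
    and developer growth is proportional to the user base."""
    for t in range(steps):
        new_users = a * developers        # developers' apps pull in users
        new_developers = b * users        # user demand pulls in developers
        users += new_users
        developers += new_developers
        print(f"t={t + 1}: users={users:,.0f}, developers={developers:,.0f}")

# Example: a small seed community compounds on both sides.
simulate(users=10_000, developers=100)
```

The compounding in this sketch is what drives the market concentration described next: once one platform’s loop is spinning faster than its rivals’, the gap widens on its own.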

The consequence of that is that you tend to get market concentration—you get winner-take-all markets. That’s where platforms dominate. So you have a few large firms within a given category, whether this is rides or books or hotels or auctions. Further, once you get network effects changing your business model, the linear insights into pricing, into inventory management, into innovation, into strategy break down.

When you have these multi-sided markets, pricing breaks down, because you often price differently to one side than another, because one side attracts the other. Inventory management practices break down because you’re selling inventory that you don’t even own. Your R&D strategies break down because now you’re motivating innovation and research outside the boundaries of the firm, as opposed to inside the internal R&D group. And your strategies break down because you’re not just looking for cost leadership or product differentiation; now you’re looking to shape the network effects as you create barriers to entry.

One of the things that I really want to argue strenuously is that in markets where platforms will emerge, platforms beat products every time. So the platform business model will inevitably beat the linear, product-based business model, because you’re harnessing new forces in order to develop a different kind of business model.

Think of it the following way: imagine that value is growing as users consume your product. Think of any of the major platforms: as more folks use Google, search gets better; recommendations improve on Amazon; it gets easier to find a ride on Uber; so more folks want to be on there. It is easier to scale network effects outside your business than inside your business. There are simply more people outside than inside. The moment that happens, the locus of control, the locus of innovation, moves from inside the firm to outside the firm. So the rules change. Pricing changes, your innovation strategies change, your inventory policies change, your R&D changes. You’re now managing resources outside the firm, rather than inside, in order to capture scale. This is different than the traditional industrial supply economies of scale.

Old systems are giving way to new systems. It’s not that the whole system breaks down; it’s simply that you’re looking to manage network effects and manage new business models. Another way to see this is that previously you were managing capital. In the industrial era, you were managing steel, you were managing large amounts of finance in banking, you were managing auto parts—huge supply economies of scale. In telecommunications, you were managing infrastructure. Now, you’re managing communities, and these are managed outside the firm. With the value that’s been created at Facebook or WhatsApp or Instagram or any of the new acquisitions, it’s not the capital that’s critical, it’s the communities that are critical, and these are built outside the firm.

There is a lot of talk in the industry about the Nexus of Forces as Gartner calls it, or Third Platform (IDC). The Open Group calls it Open Platform 3.0. Your concept goes well beyond technology—how does Open Platform 3.0 enable new business models?

Those are the enablers—they’re, shall we say, necessary, but they’re not sufficient. You really must harness the economic forces in addition to those enablers—mobile, social, Cloud, data. You must manage communities outside the firm; that’s the mobile and the social element of it. But this also involves designing governance and setting incentives. How are you capturing users outside the organization, how are they contributing, how are they being motivated to participate, why are they spreading your products to their peers? The Cloud allows it to scale—so Instagram and WhatsApp and others scale. Data allows you to “consummate the match.” You use that data to help people find what they need, to add value, so all of those things are the enablers. Then you have to harness the economics of the enablers to encourage people to do the right thing. You can see the correct intuition if you simply ask what happens if all you offer is a Cloud service and nothing more. Why will anyone use it? What’s the value to that system? If you open APIs to it, again, if you don’t have a user base, why are developers going to contribute? Developers want to reach users. Users want valuable functionality.

You must manage the motives and the value-add on the platform. New business models come from orchestrating not just the technology but also the third-party sources of value. One of the biggest challenges is to grow these businesses from scratch—you’ve got the cold-start chicken and egg problem: you don’t have network effects if you don’t have a user base, and if you don’t have users, you don’t have network effects.

Do companies need to transform themselves into a “business platform” to succeed in this new marketplace? Are there industries immune to this shift?

There is a continuum of companies that are going to be affected. It starts at one end with companies that are highly information intense—anything that’s an information-intensive business will be dramatically affected, and anything that’s a community- or fashion-based business will be dramatically affected. Those include companies involved in media and news, songs, music, video; all of those are going to be the canaries in the coalmine that see this first. Moving farther along will be those industries that require some sort of certification—those include law and medicine and education—and those, too, will be platformized, so the services industries will become platforms. Farther down are the ones that are heavily, heavily capital intensive, where control of physical capital is paramount; those include trains and oil rigs and telecommunications infrastructure. Eventually those will be affected by platform business models to the extent that data helps them gain efficiencies or add value, but they will in some sense be the last to be affected by platform business models. Look for the businesses where the cost side is shrinking in proportion to the delivery of value and where the network effects are rising as a proportional increase in value. Those forces will help you predict which industries will be transformed.

How can Enterprise Architecture be a part of this and how do open standards play a role?

The second part of that question is actually much easier. How do open standards play a role? Open standards make it much easier for third parties to attach and incorporate technology and features such that they can in turn add value. Open standards are essential to that happening. You do need to ask the question as to who controls those standards—is it completely open, or is it a proprietary standard, or a published standard that’s not manipulable by a third party?

There will be at least two or three different things that Enterprise Architects need to do. One of these is to design modular components that are swappable, so as better systems become available, the better systems can be swapped in. The second element will be to watch for components of value that should be absorbed into the platform itself. As an example, in operating systems, web browsing has effectively been absorbed into the platform, and streaming has been absorbed into the platform, so architects need to be aware of how that actually works. A third thing they need to do is talk to the legal team to see where it is that third parties’ property rights can be protected so that they invest. One of the biggest mistakes that firms make is to simply assume that because they own the platform, because they have the rights of control, they can do what they please. If they do that, they risk alienating their ecosystems. So they should talk to their potential developers to incorporate developer concerns. One of my favorite examples is the Intel Architecture Lab, which has done a beautiful job of articulating the voices of developers in its own architectural plans. A fourth thing that can be done is an idea borrowed from SAP, which builds Enterprise Architecture—they articulate an 18-24 month roadmap where they say these are the features that are coming, so you can anticipate and build on those. It also gives you an idea of what features will be safe to build on, so you won’t lose the value you’ve created.

What can companies do to begin opening their business models and more easily architect that?

What they should do is to consider the four groups articulated earlier—the users, the providers, the developers and the sponsors—each of which serves a different role. Firms need to understand what their own role will be in order that they can open and architect the other roles within their ecosystem. They’ll also need to choose what levels of exclusivity they need to give their ecosystem partners in a different slice of the business. They should also figure out which of those components they prefer to offer themselves as unique competencies and where they need to seek third-party assistance, either in new ideas or new resources or even new marketplaces. Those factors will help guide businesses toward different kinds of partnerships, and they’ll have to be open to those kinds of partners. In particular, they should think about where they are most likely to be missing ideas or missing opportunities. Those technical and business areas should open in order that third parties can take advantage of those opportunities and add value.


Professor Van Alstyne is one of the leading experts in network business models. He conducts research on information economics, covering such topics as communications markets, the economics of networks, intellectual property, social effects of technology, and productivity effects of information. As co-developer of the concept of “two-sided networks,” he has been a major contributor to the theory of network effects, a set of ideas now taught in more than 50 business schools worldwide.

Awards include two patents, National Science Foundation IOC, SGER, SBIR, iCorp and Career Awards, and six best paper awards. Articles or commentary have appeared in Science, Nature, Management Science, Harvard Business Review, Strategic Management Journal, The New York Times, and The Wall Street Journal.


New Health Data Deluges Require Secure Information Flow Enablement Via Standards, Says The Open Group’s New Healthcare Director

By The Open Group

Below is the transcript of The Open Group podcast on how new devices and practices have the potential to expand the information available to Healthcare providers and facilities.

Listen to the podcast here.

Dana Gardner: Hello, and welcome to a special BriefingsDirect Thought Leadership Interview coming to you in conjunction with The Open Group’s upcoming event, Enabling Boundaryless Information Flow™ July 21-22, 2014 in Boston.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator for the series of discussions from the conference on Boundaryless Information Flow, Open Platform 3.0™, Healthcare, and Security issues.

One area of special interest is the Healthcare arena, and Boston is a hotbed of innovation and adaptation for how technology, Enterprise Architecture, and standards can improve the communication and collaboration among Healthcare ecosystem players.

And so, we’re joined by a new Forum Director at The Open Group to learn how an expected continued deluge of data and information about patients, providers, outcomes, and efficiencies is pushing the Healthcare industry to rapid change.

With that, please join me now in welcoming our guest. We’re here with Jason Lee, Healthcare and Security Forums Director at The Open Group. Welcome, Jason.

Jason Lee: Thank you so much, Dana. Good to be here.

Gardner: Great to have you. I’m looking forward to the Boston conference and want to remind our listeners and readers that it’s not too late to sign up. You can learn more at http://www.opengroup.org.

Jason, let’s start by talking about the relationship between Boundaryless Information Flow, which is a major theme of the conference, and healthcare. Healthcare is perhaps the killer application for Boundaryless Information Flow.

Lee: Interesting, I haven’t heard it referred to that way, but healthcare is 17 percent of the US economy. It’s upwards of $3 trillion. The costs of healthcare are a problem, not just in the United States, but all over the world, and there are a great number of inefficiencies in the way we practice healthcare.

We don’t necessarily intend to be inefficient, but there are so many places and people involved in healthcare, it’s very difficult to get them to speak the same language. It’s almost as if you’re in a large house with lots of different rooms, and every room you walk into they speak a different language. To get information to flow from one room to the other requires some active efforts and that’s what we’re undertaking here at The Open Group.

Gardner: What is it about the current collaboration approaches that don’t work? Obviously, healthcare has been around for a long time and there have been different players involved. What’s the hurdle? What prevents a nice, seamless, easy flow and collaboration in information that gets better outcomes? What’s the holdup?

Lee: There are many ways to answer that question, because there are many barriers. Perhaps the simplest is the transformation of healthcare from a paper-based industry to a digital industry. Everyone has walked into an office, looked behind the people at the front desk, and seen file upon file and row upon row of folders, information that’s kept in a written format.

When there’s been movement toward digitizing that information, not everyone has used the same system. It’s almost like trains running on different gauge tracks. Obviously, if the track going east to west is a different gauge than the track going north to south, then trains aren’t going to be able to travel on those same tracks. In the same way, healthcare information does not flow easily from one office to another or from one provider to another.

Gardner: So not only do we have disparate strategies for collecting and communicating health data, but we’re also seeing much larger amounts of data coming from a variety of new and different places. Some of them now even involve sensors inside of patients themselves or devices that people will wear. So is the data deluge, the volume, also an issue here?

Lee: Certainly. I heard recently that an integrated health plan, which has multiple hospitals involved, contains more elements of data than the Library of Congress. As information is collected at multiple points in time, over a relatively short period, you really do have a data deluge. Figuring out how to find your way through all the data and look at what is most relevant for the patient is a great challenge.

Gardner: I suppose the bad news is that there is this deluge of data, but it’s also good news, because more data means more opportunity for analysis, a better ability to predict and determine best practices, and also provide overall lower costs with better patient care.

So it seems like the stakes are rather high here to get this right, to not just crumble under a volume or an avalanche of data, but to master it, because it’s perhaps the future. The solution is somewhere in there too.

Lee: No question about it. At The Open Group, our focus is on solutions. We, like others, put a great deal of effort into describing the problems, but our real work is figuring out how to bring IT technologies to bear on business problems, how to encourage different parts of organizations to speak to one another, how to get organizations to speak the same language across boundaries, and how to operate using common standards. That’s really what we’re all about.

And it is, in a large sense, part of the process of helping to bring healthcare into the 21st Century. A number of industries are a couple of decades ahead of healthcare in the way they use large datasets — big data, as some people refer to it. I’m talking about companies like big department stores and large online retailers. They really have stepped up to the plate and are using that deluge of data in ways that are very beneficial to them, and healthcare can do the same. We’re just not quite at the same level of evolution.

Gardner: And to your point, the stakes are so much higher. Retail is, of course, a big deal in the economy, but as you pointed out, healthcare is a much larger segment. So just making modest improvements in communication, collaboration, or data analysis can reap huge rewards.

Lee: Absolutely true. There is the cost side of things, but there is also the quality side. So there are many ways in which healthcare can improve through standardization and coordinated development, using modern technology that can not just reduce cost but improve quality at the same time.

Gardner: I’d like to get into a few of the hotter trends, but before we do, it seems that The Open Group has recognized the importance here by devoting the entire second day of its conference in Boston, July 22, to Healthcare.

Maybe you could give us a brief overview of what participants, and even those who come in online and view recorded sessions of the conference at http://new.livestream.com/opengroup, should expect? What’s going to go on July 22nd?

Lee: We have a packed day. We’re very excited to have Dr. Joe Kvedar, a physician at Partners HealthCare and Founding Director of the Center for Connected Health, as our first plenary speaker. The title of his presentation is “Making Health Additive.” Dr. Kvedar is a widely respected expert on mobile health, which is currently the Healthcare Forum’s top work priority. As mobile medical devices become ever more available and diversified, they will enable consumers to know more about their own health and wellness. A great deal of potentially useful health data will be generated. How this information can be used–not just by consumers but also by the healthcare establishment that takes care of them as patients–will become a question of increasing importance. It will become an area where standards development and The Open Group can be very helpful.

Our second plenary speaker, Proteus Duxbury, Chief Technology Officer at Connect for Health Colorado, will discuss a major feature of the Affordable Care Act–the health insurance exchanges–which are designed to bring health insurance to tens of millions of people who previously did not have access to it. Mr. Duxbury is going to talk about how Enterprise Architecture–which is really about getting to solutions by helping the IT folks talk to the business folks and vice versa–has helped the State of Colorado develop its Health Insurance Exchange.

After the plenaries, we will break up into three tracks, one of which is Healthcare-focused. In this track there will be three presentations, all of which discuss how Enterprise Architecture and the approach to Boundaryless Information Flow can help healthcare and healthcare decision-makers become more effective and efficient.

One presentation will focus on the transformation of care delivery at the Visiting Nurse Service of New York. Another will address stewarding healthcare transformation using Enterprise Architecture, focusing on one of our Platinum members, Oracle, and a company called Intelligent Medical Objects, and how they’re working together in a productive way, bringing IT and healthcare decision-making together.

Then, the final presentation in this track will focus on the development of an Enterprise Architecture-based solution at an insurance company. The payers, or the insurers–the big companies that are responsible for paying bills and collecting premiums–have a very important role in the healthcare system that extends beyond administration of benefits. Yet, payers are not always recognized for their key responsibilities and capabilities in the area of clinical improvements and cost improvements.

With the increase in payer data brought on in large part by the adoption of a new coding system–the ICD-10–which will come online this year, there will be a huge amount of additional data, including clinical data, that will become available. At The Open Group, we consider payers—health insurance companies (some of which are integrated with providers)—very important stakeholders in the big picture.

In the afternoon, we’re going to switch gears a bit and have a speaker talk about the challenges, the barriers, the “pain points” in introducing new technology into healthcare systems. The focus will return to remote or mobile medical devices and the predictable but challenging barriers to getting newly generated health information to flow to doctors’ offices and into patients’ records, electronic health records, and hospitals’ data-keeping and data-sharing systems.

We’ll have a panel of experts that responds to these pain points, these challenges, and then we’ll draw heavily from the audience, who we believe will be very, very helpful, because they bring a great deal of expertise in guiding us in our work. So we’re very much looking forward to the afternoon as well.

Gardner: It’s really interesting. A couple of these different plenaries and discussions in the afternoon come back to this user-generated data. Jason, we really seem to be on the cusp of a whole new level of information that people will be able to develop about themselves, through their lifestyles and new connected devices.

We hear from folks like Apple, Samsung, Google, and Microsoft. They’re all pulling together information and making it easier for people to monitor not only their exercise, but also their diet, and maybe even to start using sensors to keep track of blood sugar levels, for example.

In fact, a new Flurry Analytics survey showed a 62 percent increase in the use of health and fitness applications over the last six months on popular mobile devices. This compares to a 33 percent increase in other applications in general. So there’s an 87 percent faster uptick in the use of health and fitness applications.

Tell me a little bit how you see this factoring in. Is this a mixed blessing? Will so much data generated from people in addition to the electronic medical records, for example, be a bad thing? Is this going to be a garbage in, garbage out, or is this something that could potentially be a game-changer in terms of how people react to their own data and then bring more data into the interactions they have with care providers?

Lee: It’s always a challenge to predict what the market is going to do, but I think that’s a remarkable statistic that you cited. My prediction is that the increased volume of person-generated data from mobile health devices is going to be a game-changer. This view also reflects how the Healthcare Forum members (who include members from Capgemini, Philips, IBM, Oracle and HP) view the future.

The commercial demand for mobile medical devices–things that can be worn, embedded, or swallowed, as in pills, as you mentioned–is growing ever greater. The software and the applications that will be developed to be used with these devices are going to grow by leaps and bounds. As you say, there are big players getting involved. Already, some of the pedometer-type devices that measure the number of steps taken in a day have captured the interest of many, many people. Even David Sedaris, serious guy that he is, was writing about it recently in ‘The New Yorker’.

What we will find is that many of the health indicators that we used to have to go to the doctor or nurse or lab to get information on will become available to us through these remote devices.

There will be a question, of course, as to the reliability and validity of the information–to your point about garbage in, garbage out–but I think standards development will help here. This, again, is where The Open Group comes in. We might also see the FDA exercising its role in ensuring safety here, as well as other organizations, in determining which devices are reliable.

The Open Group is working in the area of mobile data and the information systems that are developed around it, and their ability to (a) talk to one another and (b) talk to the data devices and infrastructure used in doctors’ offices and in hospitals. This is called interoperability, and it’s certainly lacking in this country.

There are already problems around interoperability and connectivity of information in the healthcare establishment as it is now. When patients and consumers start collecting their own data, and the patient is put at the center of the nexus of healthcare, then the question becomes how does that information that patients collect get back to the doctor/clinician in ways in which the data can be trusted and where the data are helpful?

After all, if a patient is wearing a medical device, there is the opportunity to collect data, about blood sugar levels let’s say, throughout the day. And this is really taking healthcare outside the four walls of the clinic and bringing information to bear that can be very, very useful to clinicians and beneficial to patients.

In short, the rapid market dynamic in mobile medical devices, and in the software and hardware that facilitate interoperability, begs for standards-based solutions that reduce costs and improve quality, all of which puts the patient at the center. This is The Open Group Healthcare Forum’s sweet spot.

Gardner: It seems to me a real potential game-changer as well, one where something like Boundaryless Information Flow and standards will play an essential role, because one of the big question marks with many of the ailments of modern society has to do with lifestyle and behavior.

So often, the providers of care only really have the patient’s responses to questions, but imagine having a trove of data at their disposal, a 360-degree view of the patient, to further the cause of understanding what’s really going on, on a day-to-day basis.

But then, it’s also having a two-way street: being able to deliver, perhaps in an automated fashion, reinforcements, incentives, and information back to the patient in real time about behavior and lifestyle. So it strikes me as something quite promising, and I look forward to hearing more about it at the Boston conference.

Any other thoughts on this issue about patient flow of data, not just among and between providers and payers, for example, or providers in an ecosystem of care, but with the patient as the center of it all, as you said?

Lee: As more mobile medical devices come to the market, we’ll find that consumers own multiple types of devices, at least some of which collect multiple types of data. So even for the patient at the center of their own healthcare information collection, there can be barriers to having one device talk to another. If a patient wants to keep their own personal health record, there may be difficulties in bringing all that information into one place.

So the interoperability issue, the need for standards, guidelines, and voluntary consensus among stakeholders about how information is represented becomes an issue, not just between patients and their providers, but for individual consumers as well.
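To see why that matters in practice, here is a toy sketch of the glue code a personal health record needs today when two devices report the same measurement in different shapes. Both device formats are entirely invented; a standard common representation would make this normalization step unnecessary.

```python
# Two hypothetical glucose monitors reporting the same kind of reading
# in incompatible shapes -- the mismatch a standard would remove.
device_a = {"type": "glucose", "mg_dl": 95, "ts": "2014-07-21T08:00"}
device_b = {"kind": "blood-glucose", "value": 5.3, "unit": "mmol/L",
            "time": "2014-07-21T12:00"}

MMOL_TO_MGDL = 18.0  # standard conversion factor for glucose

def normalize(reading: dict) -> dict:
    """Map each vendor-specific shape onto one agreed representation."""
    if "mg_dl" in reading:
        return {"measure": "glucose", "mg_dl": reading["mg_dl"],
                "time": reading["ts"]}
    return {"measure": "glucose",
            "mg_dl": round(reading["value"] * MMOL_TO_MGDL, 1),
            "time": reading["time"]}

personal_health_record = [normalize(r) for r in (device_a, device_b)]
print(personal_health_record)
```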

Gardner: And also the cloud providers. There will be a variety of large organizations with cloud-modeled services, and they are going to need to be, in some fashion, brought together, so that a complete 360-degree view of the patient is available when needed. It’s going to be an interesting time.

Of course, we’ve also looked at many other industries and tried to have a cloud synergy, a cloud-of-clouds approach to data and transactions. So it’s interesting how what’s going on in multiple industries is common, but it strikes me that, again, the scale and the impact of the healthcare industry make it a leader now, and perhaps a driver for some of this long-overdue structuring and standardizing activity.

Lee: It could become a leader. There is no question about it. Moreover, there is a lot Healthcare can learn from other industries: from mistakes that other companies have made, from lessons they have learned, and from best practices they have developed (on both the content and process sides). And there are issues, around security in particular, where Healthcare will be at the leading edge in trying to figure out how much is enough, how much is too much, and what kinds of solutions work.

There’s a great future ahead here. It’s not going to be without bumps in the road, but organizations like The Open Group are designed and experienced to help multiple stakeholders come together and have the conversations that they need to have in order to push forward and solve some of these problems.

Gardner: Well, great. I’m sure there will be a lot more about how to actually implement some of those activities at the conference. Again, that’s going to be in Boston, beginning on July 21, 2014.

We’ll have to leave it there. We’re about out of time. We’ve been talking with a new Director at The Open Group to learn how an expected continued deluge of data and information about patients, providers, outcomes, and efficiencies is pushing the Healthcare industry to rapid change. And, as we’ve heard, that might very well spill over into other industries as well.

So we’ve seen how innovation and adaptation around technology, Enterprise Architecture and standards can improve the communication and collaboration among Healthcare ecosystem players.

It’s not too late to register for The Open Group Boston 2014 (http://www.opengroup.org/boston2014) and join the conversation via Twitter #ogchat #ogBOS, where you will be able to learn more about Boundaryless Information Flow, Open Platform 3.0, Healthcare and other relevant topics.

So a big thank you to our guest. We’ve been joined by Jason Lee, Healthcare and Security Forums Director at The Open Group. Thanks so much, Jason.

Lee: Thank you very much.

The Open Group Open Platform 3.0™ Starts to Take Shape

By Dr. Chris Harding, Director for Interoperability, The Open Group

The Open Group published a White Paper on Open Platform 3.0™ at the start of its conference in Amsterdam in May 2014. This article, based on a presentation given at the conference, explains how the definition of the platform is beginning to emerge.

Introduction

Amsterdam is a beautiful place. Walking along the canals is like moving through a set of picture postcards. But as you look up at the houses beside the canals, and you see the cargo hoists that many of them have, you are reminded that the purpose of the arrangement was not to give pleasure to tourists. Amsterdam is a great trading city, and the canals were built as a very efficient way of moving goods around.

This is also a reminder that the primary purpose of architecture is not to look beautiful, but to deliver business value, though surprisingly, the two often seem to go together quite well.

When those canals were first thought of, it might not have been obvious that this was the right thing to do for Amsterdam. Certainly the right layout for the canal network would not be obvious. The beginning of a project is always a little uncertain, and seeing the idea begin to take shape is exciting. That is where we are with Open Platform 3.0 right now.

We started with the intention to define a platform to enable enterprises to get value from new technologies including cloud computing, social computing, mobile computing, big data, the Internet of Things, and perhaps others. We developed an Open Group business scenario to capture the business requirements. We developed a set of business use-cases to show how people are using and wanting to use those technologies. And that leads to the next step, which is to define the platform. All these new technologies and their applications sound wonderful, but what actually is Open Platform 3.0?

The Third Platform

Looking historically, the first platform was the computer operating system. A vendor-independent operating system interface was defined by the UNIX® standard. The X/Open Company and the Open Software Foundation (OSF), which later combined to form The Open Group, were created because companies everywhere were complaining that they were locked into proprietary operating systems. They wanted applications portability. X/Open specified the UNIX® operating system as a common application environment, and the value that it delivered was to prevent vendor lock-in.

The second platform is the World Wide Web. It is a common services environment, for services used by people browsing web pages or for web services used by programs. The value delivered is universal deployment and access. Any person or company anywhere can create a services-based solution and deploy it on the web, and every person or company throughout the world can access that solution.

Open Platform 3.0 is developing as a common architecture environment. This does not mean it is a replacement for TOGAF®. TOGAF is about how you do architecture and will continue to be used with Open Platform 3.0. Open Platform 3.0 is about what kind of architecture you will create. It will be a common environment in which enterprises can do architecture. The big business benefit that it will deliver is integrated solutions.

Figure 1: The Third Platform

With the second platform, you can develop solutions. Anyone can develop a solution based on services accessible over the World Wide Web. But independently-developed web service solutions will very rarely work together “out of the box”.

There is an increasing need for such solutions to work together. We see this need when looking at The Open Platform 3.0 technologies. People want to use these technologies together. There are solutions that use them, but they have been developed independently of each other and have to be integrated. That is why Open Platform 3.0 has to deliver a way of integrating solutions that have been developed independently.

Common Architecture Environment

The Open Group has recently published its first thoughts on Open Platform 3.0 in the Open Platform 3.0 White Paper. This lists a number of things that will eventually be in the Open Platform 3.0 standard. Many of these are common architecture artifacts that can be used in solution development. They will form a common architecture environment. They are:

  • Statement of need, objectives, and principles – this is not part of that environment of course; it says why we are creating it.
  • Definitions of key terms – clearly you must share an understanding of the key terms if you are going to develop common solutions or integrable solutions.
  • Stakeholders and their concerns – an understanding of these is an important aspect of an architecture development, and something that we need in the standard.
  • Capabilities map – this shows what the products and services that are in the platform do.
  • Basic models – these show how the platform components work with each other and with other products and services.
  • Explanation of how the models can be combined to realize solutions – this is an important point and one that the white paper does not yet start to address.
  • Standards and guidelines that govern how the products and services interoperate – these are not standards that The Open Group is likely to produce; they will almost certainly be produced by other bodies, but we need to identify the appropriate ones and probably, in some cases, coordinate with the appropriate bodies to see that they are developed.

The Open Platform 3.0 White Paper contains an initial statement of needs, objectives and principles, definitions of some key terms, a first-pass list of stakeholders and their concerns, and half a dozen basic models. The basic models are in an analysis of the business use-cases for Open Platform 3.0 that were developed earlier.

These are just starting points. The white paper is incomplete: each of the sections is incomplete in itself, and of course the white paper does not contain all the sections that will be in the standard. And it is all subject to change.
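Purely as an illustration of how those artifact categories hang together, the common architecture environment could be sketched as a simple data structure. The field names below are invented for this sketch and are not the White Paper’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class CommonArchitectureEnvironment:
    """Sketch of the artifact categories listed above."""
    objectives: list[str] = field(default_factory=list)
    key_terms: dict[str, str] = field(default_factory=dict)        # term -> definition
    stakeholder_concerns: dict[str, list[str]] = field(default_factory=dict)
    capabilities: list[str] = field(default_factory=list)
    basic_models: list[str] = field(default_factory=list)
    interop_standards: list[str] = field(default_factory=list)     # mostly from other bodies

env = CommonArchitectureEnvironment(
    key_terms={"platform": "a common environment in which solutions are built"},
    basic_models=["Mobile Connected Device Model"],
)
print(env.basic_models)
```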

An Example Basic Model

The figure shows a basic model that could be part of the Open Platform 3.0 common architecture environment.

Figure 2: Mobile Connected Device Model

This is the Mobile Connected Device Model: one of the basic models that we identified in the snapshot. It comes up quite often in the use-cases.

The stack on the left is a mobile device. It has a user, it has apps, it has a platform which would probably be Android or iOS, it has infrastructure that supports the platform, and it is connected to the World Wide Web, because that’s part of the definition of mobile computing.

On the right you see, and this is a frequently encountered pattern, that you don’t just use your mobile device for running apps. Maybe you connect it to a printer, maybe you connect it to your headphones, maybe you connect it to somebody’s payment terminal; you can connect it to many things. You might do this through a Universal Serial Bus (USB). You might do it through Bluetooth. You might do it by Near Field Communication (NFC). You might use other kinds of local connection.

The device you connect to may be operated by yourself (e.g. if it is headphones), or by another organization (e.g. if it is a payment terminal). In the latter case you typically have a business relationship with the operator of the connected device.

That is an example of the basic models that came up in the analysis of the use-cases. It is captured in the White Paper. It is fundamental to mobile computing and is also relevant to the Internet of Things.
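As a purely illustrative rendering (not a normative definition from the White Paper), the two stacks and the local connection between them might be expressed in code like this:

```python
from dataclasses import dataclass
from enum import Enum

class LocalConnection(Enum):
    USB = "USB"
    BLUETOOTH = "Bluetooth"
    NFC = "NFC"

@dataclass
class MobileDevice:
    """The stack on the left: user, apps, platform, plus web connectivity."""
    user: str
    apps: list
    platform: str            # e.g. Android or iOS
    web_connected: bool = True

@dataclass
class ConnectedDevice:
    """The stack on the right: printer, headphones, payment terminal, ..."""
    kind: str
    operator: str            # yourself, or another organization

@dataclass
class Link:
    device: MobileDevice
    peer: ConnectedDevice
    via: LocalConnection

    def needs_business_relationship(self) -> bool:
        # A business relationship typically exists when the connected
        # device is operated by someone other than the device's user.
        return self.peer.operator != self.device.user

phone = MobileDevice(user="alice", apps=["payments"], platform="Android")
terminal = ConnectedDevice(kind="payment terminal", operator="RetailerX")
print(Link(phone, terminal, LocalConnection.NFC).needs_business_relationship())  # True
```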

Access to Technologies

This figure captures our understanding of the need to obtain information from the new technologies (social media, mobile devices, sensors and so on), the need to process that information, maybe in the cloud, to manage it and, ultimately, to deliver it in a form where analysis and reasoning enable enterprises to take business decisions.

Figure 3: Access to Technologies

The delivery of information to improve the quality of decisions is the source of real business value.

User-Driven IT

The next figure captures a requirement that we picked up in the development of the business scenario.

Figure 4: User-Driven IT

Traditionally, you would have had the business use in the business departments of an enterprise, and pretty much everything else in the IT department. But we are seeing two big changes. One is that the business users are getting smarter and more able to use technology. The other is that they want to use technology themselves, or to have business technologists working closely with them, rather than accessing it indirectly through the IT department.

The systems provisioning and management is now often done by cloud service providers, and the programming and integration and helpdesk by cloud brokers, or by an IT department that plays a broker role, rather than working in the traditional way.

The business still needs to retain responsibility for the overall architecture and for compliance. If you do something against your company’s principles, your customers will hold you responsible. It is no defense to say, “Our broker did it that way.” Similarly, if you break the law, your broker does not go to jail, you do. So those things will continue to be more associated with the business departments, even as the rest is devolved.

In short, businesses have a new way of using IT that Open Platform 3.0 must and will accommodate.

Integration of Independently-Developed Solutions

The next figure illustrates how the integration of independently developed solutions can be achieved.

Figure 5: Architecture Integration

It shows two solutions, which come from the analysis of different business use-cases. They share a common model, which makes it much easier to integrate them. That is why the Open Platform 3.0 standard will define common models for access to the new technologies.

The Open Platform 3.0 standard will have other common artifacts: architectural principles, stakeholder definitions and descriptions, and so on. Independently-developed architectures that use them can be integrated more easily.

Enterprises develop their architectures independently, but engage with other enterprises in business ecosystems that require shared solutions. Increasingly, business relationships are dynamic, and there is no time to develop an agreed ecosystem architecture from scratch. Use of the same architecture platform, with a common architecture environment including elements such as principles, stakeholder concerns, and basic models, enables the enterprise architectures to be integrated, and shared solutions to be developed quickly.
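A small sketch of why shared models help: if two independently developed solutions both code against the same common model, integrating them needs no bespoke adapter layer. The interfaces below are invented for illustration.

```python
from abc import ABC, abstractmethod

class SensorReadingModel(ABC):
    """A common model that both solutions agree on."""
    @abstractmethod
    def readings(self) -> list: ...

class BuildingEnergySolution(SensorReadingModel):
    """Developed by one team, for one use-case."""
    def readings(self) -> list:
        return [20.5, 21.0, 19.8]   # temperatures from building sensors

class AnalyticsSolution:
    """Developed independently, but against the same common model."""
    def average(self, source: SensorReadingModel) -> float:
        data = source.readings()
        return sum(data) / len(data)

# Integration is just composition through the shared model:
print(AnalyticsSolution().average(BuildingEnergySolution()))
```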

Completing the Definition

How will we complete the definition of Open Platform 3.0?

The Open Platform 3.0 Forum recently published a set of 22 business use-cases – the Nexus of Forces in Action. These use-cases show the application of Social, Mobile and Cloud Computing, Big Data, and the Internet of Things in a wide variety of business areas.

Figure 6: Business Use-Cases

The figure comes from that White Paper and shows some of those areas: multimedia, social networks, building energy management, smart appliances, financial services, medical research, and so on.

Use-Case Analysis

We have started to analyze those use-cases. This is an ArchiMate model showing how our first business use-case, The Mobile Smart Store, could be realized.

Figure 7: Use-Case Analysis

As you look at it you see common models. Outlined on the left is a basic model that is pretty much the same as the original TOGAF Technical Reference Model. The main difference is the addition of a business layer (which shows how enterprise architecture has moved in the business direction since the TRM was defined).

But you also see that the same model appears in the use-case in a different place, as outlined on the right. It appears many times throughout the business use-cases.

Finally, you can see that the Mobile Connected Device Model has appeared in this use-case (outlined in the center). It appears in other use-cases too.

As we analyze the use-cases, we find common models, as well as common principles, common stakeholders, and other artifacts.

The Development Cycle

We have a development cycle: understanding the value of the platform by considering use-cases, analyzing those use-cases to derive common features, and documenting the common features in a specification.

Figure 8: The Development Cycle

The Open Platform 3.0 White Paper represents the very first pass through that cycle. Further passes will result in further White Papers, a snapshot, and ultimately the Open Platform 3.0 standard, and no doubt more than one version of that standard.

Conclusions

Open Platform 3.0 provides a common architecture environment. This enables enterprises to derive business value from social computing, mobile computing, big data, the Internet-of-Things, and potentially other new technologies.

Cognitive computing, for example, has been suggested as another technology that Open Platform 3.0 might in due course accommodate. What would that lead to? There would be additional use-cases, which would lead to further analysis, which would no doubt identify some basic models for cognitive computing, which would be added to the platform.

Open Platform 3.0 enables enterprise IT to be user-driven. There is a revolution in the way that businesses use IT. Users are becoming smarter and more able to use technology, and want to do so directly, rather than through a separate IT department. Business departments are taking in business technologists who understand how to use technology for business purposes. Some companies are closing their IT departments and using cloud brokers instead. In other companies, the IT department is taking on a broker role, sourcing technology that business people use directly. Open Platform 3.0 will be part of that revolution.

Open Platform 3.0 will deliver the ability to integrate solutions that have been independently developed. Businesses typically exist within one or more business ecosystems. Those ecosystems are dynamic: partners join, partners leave, and businesses cannot standardize the whole architecture across the ecosystem; it would be nice to do so but, by the time it was done, the business opportunity would be gone. Integration of independently developed architectures is crucial to the world of business ecosystems and delivering value within them.

Call for Input

The platform will deliver a common architecture environment, user-driven enterprise IT, and the ability to integrate solutions that have been independently developed. The Open Platform 3.0 Forum is defining it through an iterative process of understanding the content, analyzing the use-cases, and documenting the common features. We welcome input and comments from other individuals within and outside The Open Group and from other industry bodies.

If you have comments on the way Open Platform 3.0 is developing or input on the way it should develop, please tell us! You can do so by sending mail to platform3-input@opengroup.org or share your comments on our blog.

References

The Open Platform 3.0 White Paper: https://www2.opengroup.org/ogsys/catalog/W147

The Nexus of Forces in Action: https://www2.opengroup.org/ogsys/catalog/W145

TOGAF®: http://www.opengroup.org/togaf/

Dr. Chris Harding is Director for Interoperability at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0™ Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.

The Onion & The Open Group Open Platform 3.0™

By Stuart Boardman, Senior Business Consultant, KPN Consulting, and Co-Chair of The Open Group Open Platform 3.0™

The onion is widely used as an analogy for complex systems – from IT systems to mystical world views.

It’s a good analogy. From the outside it’s a solid whole, but each layer you peel off reveals a new onion (new information) underneath.

And a slice through the onion looks quite different from the whole…

What (and how much) you see depends on where and how you slice it.

The Open Group Open Platform 3.0™ is like that. Use-cases for Open Platform 3.0 reveal multiple participants and technologies (Cloud Computing, Big Data Analytics, Social networks, Mobility and The Internet of Things) working together to achieve goals that vary by participant. Each participant’s goals represent a different slice through the onion.

The Ecosystem View
We commonly use the idea of peeling off layers to understand large ecosystems, which could be Open Platform 3.0 systems like the energy smart grid but could equally be the workings of a large cooperative or the transport infrastructure of a city. We want to know what is needed to keep the ecosystem healthy and what the effects could be of the actions of individuals on the whole and therefore on each other. So we start from the whole thing and work our way in.

The Service at the Centre of the Onion

If you’re the provider or consumer (or both) of an Open Platform 3.0 service, you’re primarily concerned with your slice of the onion. You want to be able to obtain and/or deliver the expected value from your service(s). You need to know as much as possible about the things that can positively or negatively affect that. So your concern is not the onion (ecosystem) as a whole but your part of it.

Right in the middle is your part of the service. The first level out from that consists of other participants with whom you have a direct relationship (contractual or otherwise). These are the organizations that deliver the services you consume directly to enable your own service.

One level out from that (level 2) are participants with whom you have no direct relationship but on whose services you are still dependent. It’s common in Platform 3.0 that your partners too will consume other services in order to deliver their services (see the use cases we have documented). You need to know as much as possible about this level, because whatever happens here can have a positive or negative effect on you.

One level further from the centre we find indirect participants who don’t necessarily deliver any part of the service but whose actions may well affect the rest. They could just be indirect materials suppliers. They could also be part of a completely different value network in which your level 1 or 2 “partners” participate. You can’t expect to understand this level in detail, but you know that how that value network performs can affect your partners’ strategy or even their very existence. The knock-on impact on your own strategy can be significant.

We can conceive of more levels but pretty soon a law of diminishing returns sets in. At each level further from your own organization you will see less detail and more variety. That in turn means that there will be fewer things you can actually know (with any certainty) and not much more that you can even guess at. That doesn’t mean that the ecosystem ends at this point. Ecosystems are potentially infinite. You just need to decide how deep you can usefully go.
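One way to picture those levels is as a graph of participants, where a participant’s level is its distance from your own service. A toy sketch, with made-up participants:

```python
from collections import deque

# Who depends directly on whom, seen from "my-service" outward.
ecosystem = {
    "my-service": ["cloud-provider", "payment-partner"],  # level 1: direct relationships
    "cloud-provider": ["datacenter-operator"],            # level 2: suppliers' suppliers
    "payment-partner": ["card-network"],
    "datacenter-operator": ["power-utility"],             # level 3: indirect participants
    "card-network": [],
    "power-utility": [],
}

def levels(start: str) -> dict:
    """Breadth-first walk: distance from the centre of the onion."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in ecosystem.get(node, []):
            if neighbour not in seen:
                seen[neighbour] = seen[node] + 1
                queue.append(neighbour)
    return seen

for participant, level in sorted(levels("my-service").items(), key=lambda kv: kv[1]):
    print(level, participant)
```

As the walk gets further from the centre, each extra hop brings in more participants about which you know less, which is exactly the diminishing-returns point made above.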

Limits of the Onion
At a certain point one hits the limits of an analogy. If everybody sees their own organization as the centre of the onion, what we actually have is a bunch of different, overlapping onions.

And you can’t actually make onions overlap, so let’s not take the analogy too literally. Just keep it in mind as we move on. Remember that our objective is to ensure the value of the service we’re delivering or consuming. What we need to know therefore is what can change that’s outside of our own control and what kind of change we might expect. At each visible level of the theoretical onion we will find these sources of variety. How certain of their behaviour we can be will vary – with a tendency to the less certain as we move further from the centre of the onion. We’ll need to decide how, if at all, we want to respond to each kind of variety.

But that will have to wait for my next blog. In the meantime, here are some ways people look at the onion.

Stuart Boardman is a Senior Business Consultant with KPN Consulting, where he leads the Enterprise Architecture practice and consults to clients on Cloud Computing, Enterprise Mobility and The Internet of Everything. He is Co-Chair of The Open Group Open Platform 3.0™ Forum and was Co-Chair of the Cloud Computing Work Group’s Security for the Cloud and SOA project, and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by KPN, the Information Security Platform (PvIB) in The Netherlands and his previous employer, CGI, as well as several Open Group white papers, guides and standards. He is a frequent speaker at conferences on the topics of Open Platform 3.0 and Identity.


One Year Later: A Q&A Interview with Chris Harding and Dave Lounsbury about Open Platform 3.0™

By The Open Group

The Open Group launched its Open Platform 3.0™ Forum nearly one year ago at the 2013 Sydney conference. Open Platform 3.0 refers to the convergence of new and emerging technology trends such as Mobile, Social, Big Data, Cloud and the Internet of Things, as well as the new business models and system designs these trends are pushing organizations toward due to the consumerization of IT and evolving user behaviors. The Forum was created to help organizations address the architectural and structural considerations that businesses must consider to take advantage of and benefit from this evolutionary shift in how technology is used.

We sat down with The Open Group CTO Dave Lounsbury and Open Platform 3.0 Director Dr. Chris Harding at the recent San Francisco conference to catch up on the Forum’s activities and progress since launch and what they’ll be working on during 2014.

The Open Group’s Forum, Open Platform 3.0, was launched almost a year ago in April of 2013. What has the Forum been working on over the past year?

Chris Harding (CH): We launched at the Sydney conference in April of last year. What we’ve done since then, first of all, was to look at the requirements for the platform, and we did this using the proven TOGAF® technique of the Business Scenario. So over the course of last summer, the summer of 2013, we developed a Business Scenario capturing the requirements for Open Platform 3.0, and that was published just before The Open Group conference in October. Following that conference, the main activity has been furthering the requirements space. We’ve been developing an analysis of use cases; currently we have 22 different use cases that members of the Forum have put together, illustrating the use of the convergent technologies and, most importantly, their use in combination with each other.

What we’re doing here in this meeting in San Francisco is to derive, from that basis of requirements and use cases, an understanding of what the platform fundamentally should be, because it is our intention to produce a Snapshot definition of the platform by the end of March. So in the first year of the Forum, we hope to finish by producing a Snapshot definition of Open Platform 3.0.

Dave Lounsbury (DL): First, the roots of Open Platform 3.0 go deeper. Previous to the Forum, we had a number of work groups in the areas of Cloud, SOA and Semantic Interoperability. All of those were early pieces, and what we saw at the beginning of 2013 was a coalescing of that into this concept that businesses were looking for a new platform for their operations that combined aspects of Social, Mobile, Cloud computing, Big Data and the analytics that go along with it. We saw that emerging in the marketplace, and we formed the Forum to develop that direction. The Open Group always takes an end-to-end view of any problem – we like to look at the whole ecosystem. We want to make sure that the technical standards aren’t just point targets and actually address a business need.

Some of the work groups within The Open Group, such as Quantum Lifecycle Management (QLM) and Semantic Interoperability, have been brought under the umbrella of Open Platform 3.0, most notably the Cloud Work Group. How will the work of these groups continue under Platform 3.0?

CH: Some of the work already going on in The Open Group was directly or indirectly relevant to Open Platform 3.0. First and most importantly, that was the work of the Cloud Work Group, Cloud being one of the convergent technologies, and the Cloud Work Group became a part of Open Platform 3.0. Two other activities also became a part of Open Platform 3.0. One of these was the Semantic Interoperability Work Group, because we recognized that Semantic Interoperability has to be an important part of how these technologies work with each other. We may not have a full definition of that in the first version of the standard – it’s a notoriously difficult area – but over the course of time, we hope to incorporate a Semantic Interoperability component in the Platform definition, and that may well build on the work that we’ve been doing with the Universal Data Element Framework, the UDEF project, which is currently undergoing a major restructuring. The key thing from the Open Platform 3.0 perspective is how the semantic convention relates to the convergence of the technologies in the platform.

In terms of QLM, that became part of Open Platform 3.0 because one of the key convergent technologies is the Internet of Things, and QLM overlaps significantly with that. QLM is not about the Internet of Things as such, but it does have a strong component of understanding the way networked sensors and controls work, so it has become an important contribution to the new Forum.

DL: Like in any platform, there are going to be multiple components. In Open Platform 3.0, one of the big drivers for this change is Big Data. Big Data is very trendy, right? But where does Big Data come from? Well, it comes from increased connectivity, increased use of mobile devices, increased use of sensors – the ‘Internet of Things.’ All of these things are generating data about usage patterns, where people are, what they’re doing, what they’re buying, what they’re interested in and what their likes and dislikes are, creating a massive flood of data. Now the question becomes ‘how do you compute on that data?’ You need to handle that massively scalable stream of data. You need massively scalable computing underneath it, and you need the ability to move large amounts of information from one place to another. When you think about the analysis of data like that, you have algorithms that do a lot of data access and have big spikes of computation as they create some model of it. If you’re going to look at 10 zillion records, you don’t want to buy enough computers so you can always look at 10 zillion records; you want to be able to turn that on, do your analysis and turn it back off. That’s, of course, why Cloud is a critical component of Open Platform 3.0.
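That turn-it-on, turn-it-off pattern is the essence of cloud elasticity. Here is a hedged sketch of the workflow; provision and release are placeholders standing in for real cloud API calls, not any particular SDK.

```python
def provision(nodes: int) -> list:
    """Placeholder for a cloud API call that spins up compute nodes."""
    return [f"node-{i}" for i in range(nodes)]

def release(cluster: list) -> None:
    """Placeholder for the corresponding teardown call."""
    print(f"released {len(cluster)} nodes")

def analyze(records: int) -> None:
    # Rent a big cluster only for the duration of the computation
    # spike, rather than owning it permanently.
    cluster = provision(nodes=max(1, records // 1_000_000))
    try:
        print(f"analyzing {records:,} records on {len(cluster)} nodes")
    finally:
        release(cluster)  # turn it back off when the analysis is done

analyze(10_000_000)
```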

Open Platform 3.0 encompasses a lot of different technologies as well as how they are converging. How do you piece apart everything that Platform 3.0 entails to begin to formulate a standard for it?

CH: I mentioned that we developed 22 use cases. The way that we’re addressing this is to look at use cases and the business and technical ecosystems that those use cases exemplify and to abstract from that some fundamental architectural patterns. These we believe will be the basis for the initial definition of the platform.

DL: That gets back to this question about how we’re starting up. Again, it’s The Open Group’s mantra that we look at a business problem as an end-to-end problem. So what you’ll see in Open Platform 3.0 is that we’ve done the Business Scenario to figure out what the business motivator is and what business people need to get this done, and we’re fleshing that out with the detail in these use cases.

One of the things that we’re very careful about in The Open Group is that we don’t replicate what’s going on in other standards bodies. If you look at what’s going on in Cloud, and what continues to go on in Cloud under the Open Platform 3.0 banner, we really focused in on what business people need, in the Cloud guides – those are about how business people really use it. We’ve stayed away for a long time from the bits and bytes – we’re now doing a Cloud Reference Architecture – but we’ve also created the Cloud Ecosystem Reference Model, which was just published. That Cloud Ecosystem Reference Model, if you read through it, isn’t about how bits flow around; it’s about how partners interact with each other – what to look for in your Cloud partner, who are the players? When you go to use Cloud in your business, what players do you have to engage with? What are the roles that you have to engage with them on? So again, it’s really that business level of guidance that The Open Group is really good at, and we do liaise with other organizations in order to get technical material if we need it – or if not, we’ll create it ourselves, because we’ve got very competent technical people – but again, it’s that balanced business approach that distinguishes The Open Group way.

Many industry pundits have said that Open Platform 3.0 is ultimately about a shift toward user-driven IT. How does that change the standards making process when most standards are ultimately put in place by technologists not necessarily end-users?

CH: It’s an interesting question. I mentioned the Business Scenario that we developed over the summer – one of the key things that came out of that was that there is this shift towards a more direct use of the technologies by business users. And that is partly because it’s becoming more possible. Cloud is one of the key factors that has shortened the cycle of procuring and putting IT in place to support business use, and made it more possible to manage IT directly. At the same time, [users are] becoming impatient with delay and wanting to gain the benefits of technology directly, not at arm’s length through the IT department. In connection with these phenomena we’re seeing the business technologist: the technical specialist who works with or is employed by the business department rather than within a separate IT department, and one of whose key strengths is an understanding of the business. So that is certainly an important dimension that we’re seeing, and one of the requirements for the Platform is that it should be usable in an environment where business is using IT more directly.

But that wasn’t the question you asked. The question was, ‘isn’t it a problem that the standards are defined by technologists?’ We don’t believe it’s a problem, provided that the technologists have an understanding of the business environment. That was why, in the Business Scenario activity that we conducted, one of the key inputs was a roundtable workshop with CIO-level people, and that is where a lot of our perspective on why things are changing comes from. Open Platform 3.0 certainly does have a dimension of fundamental architecture patterns, part of which is business architecture patterns, but it also has a technical dimension, and obviously you do need the technical people to explore that dimension, though they always need to keep in mind that the technology is there to serve the business.

DL: If you actually look at trends in the marketplace about how IT is done, and in fact if you look at the last blog post that Allen [Brown] did about agile, the whole thrust of agile methodologies and their successor, DevOps, is to get the implementers right next to the business people and have a very tight arrangement in order to get fast iteration and really have the implementer do what the business person needs. I actually view consumerization not as some outside threat but as a logical extension of that trend. What’s happening, in my opinion, is that people who are not technologists, who are not part of the IT department, are getting comfortable using and managing their own technology. And so they’re making decisions that used to be made by the IT department years ago – or what used to be the IT department. First there was the big mainframe, and you handed in your cards at a window and you got your printout in your little cubby hole. Then the IT department bought your PC, and now we bring our own devices. There’s nothing wrong with that; that’s people getting comfortable with technology and making decisions. I think that’s one of the reasons we need an Open Platform 3.0 approach – to develop business guidance and eventually technical standards on how we keep up with that trend. Because it’s a very natural trend – people want to control the resources they need to get their job done, and if those resources are technical resources, and they’re comfortable doing that, great!

Convergence and Open Platform 3.0 seem to take us closer and closer to The Open Group’s vision of Boundaryless Information Flow™.  Is Open Platform 3.0 the fulfillment of that vision?

DL: I think I’d be crazy to say that it’s the endpoint of that vision. Being able to move large amounts of data and make decisions on it is a significant step forward in Boundaryless Information Flow, but this is a two-edged sword. I talked about all that data being generated by mobile devices and sensors and retail networks and social networks and things like that. That data is growing exponentially. The number of people who can make decisions on that data is growing at best linearly, and not very quickly. So if there’s all this data out there and nobody to look at it, we need to ask whether we have lowered the boundary for communications or actually raised it by creating a pile of data that no one can climb. That’s why I think a next step is, in fact, more machine-assisted analytics and predictive analytics and machine learning that will help humans digest and understand that data. That will be, I think, yet another step toward Boundaryless Information Flow. Moving bits around does not equate to information flow – it’s only information when it moves from data to being information in a human’s brain. Until we lower that barrier as well, we’re not there. And even beyond that, there are still lots of things that can be done, in terms of breaking down human language barriers and things like that, or making social networks work in more intuitive ways. I think there’s a long way to go. This is a really important step forward, but fulfillment is too strong a word.

CH: Not in itself, I don’t believe. It is a major contribution towards the vision of Boundaryless Information Flow, but it is not the complete fulfillment of that vision. Since we formulated the problem statement of Boundaryless Information Flow, there have been a number of developments that have impacted it and maybe helped to bring it closer. So you might think of SOA as an important enabling technology for Boundaryless Information Flow, replacing the information silos with interacting services. Now we’re seeing Open Platform 3.0, which is certainly going to have a service-oriented flavor, shall we say, although it probably will not look exactly like traditional SOA. The Boundaryless Information Flow requirement was a very far-reaching problem statement. The Interoperable Business Scenario was where it was first set out, and since then we’ve been gradually making progress toward it. Open Platform 3.0 will bring it closer, but I’m sure there will be other things still needed to make it happen.

One of the key things for Boundaryless Information Flow is Enterprise Architecture. Within a particular enterprise, the business and IT need to be architected to enable Boundaryless Information Flow, and TOGAF is the method that is defined and maintained by The Open Group for how enterprises define enterprise architectures. Open Platform 3.0 will complement that by providing a ‘this is what an architecture looks like that enables the business to take advantage of these new converging technologies.’ But there will still be a need for the Enterprise Architect to put that together with the other particular factors involved in an enterprise to create an architecture for Boundaryless Information Flow within that enterprise.

When can we expect the first standard from Open Platform 3.0?

DL: Well, we published the Cloud Ecosystem Reference Guide, and the understanding of how business partners relate in the Cloud world is a key component of Open Platform 3.0. The Forum has a roadmap and will start publishing the case studies that are still in process.

The message, I would say, is that there's already early value in the Cloud Ecosystem Reference Model, which is a logical continuation of the cloud work that had already gone on in the Work Group and is now carried forward in the Open Platform 3.0 Forum.

CH: That's always a tricky question; however, I can tell you what is planned. The intention, as I said, is to produce a Snapshot definition by the end of March and, given that we are a quarter of the way through the meeting at this conference – the key meeting that will define the basis for that – progress has been good so far, so I'm optimistic. A Snapshot is not a Standard. A Snapshot is a statement of 'this is what we are thinking and what it might look like,' but it is not guaranteed in any way that the Standard will follow the Snapshot. We intend to produce the first Standard definition of the platform about a year after the Snapshot. That will give people both within and outside The Open Group the opportunity to give us input, and to deepen our understanding of how they intend to use the platform, as feedback on the Snapshot, which should be the basis for the first published Standard.

For more on the Open Platform 3.0 Forum, please visit: http://www3.opengroup.org/subjectareas/platform3.0.

If you have any questions about Open Platform 3.0 or if you would like to join the new Forum, please contact Chris Harding (c.harding@opengroup.org) for queries regarding the Forum or Chris Parnell (c.parnell@opengroup.org) for queries regarding membership.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years and is currently responsible for managing and supporting its work on interoperability, including SOA and the interoperability aspects of Cloud Computing, and the Open Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.

Dave Lounsbury is Chief Technical Officer (CTO) and Vice President, Services for The Open Group. As CTO, he ensures that The Open Group's people and IT resources are effectively used to implement the organization's strategy and mission. As VP of Services, Dave leads the delivery of The Open Group's proven processes for collaboration and certification, both within the organization and in support of third-party consortia. Dave holds a degree in Electrical Engineering from Worcester Polytechnic Institute and holds three U.S. patents.

Filed under Cloud, Cloud/SOA, Conference, Open Platform 3.0, Standards, TOGAF®

Do Androids Dream of Electric Sheep?

By Stuart Boardman, KPN

What does the apocalyptic vision of Blade Runner have to do with The Open Group’s Open Platform 3.0™ Forum?

Throughout history, from the ancient Greeks and the Talmud, through The Future Eve and Metropolis to I, Robot and Terminator, we seem to have been both fascinated and appalled by the prospect of an autonomous "being" with its own consciousness and aspirations.

But right now it’s not the machines that bother me. It’s how we try to do what we try to do with them. What we try to do is to address problems of increasingly critical economic, social and environmental importance. It bothers me because, like it or not, these problems can only be addressed by a partnership of man and (intelligent) machine and yet we seem to want to take the intelligence out of both partners.

Two recent posts that came my way via Twitter this week provoked me to write this blog. One is a GE report that looks very thoroughly, if somewhat uncritically, at what it calls the Industrial Internet. The other, by Forrester analyst Sarah Rotman Epps, appeared in Forbes under the title There Is No Internet of Things and laments the lack of interconnectedness in most "Smart" technologies.

What disturbs me about both of those pieces is the suggestion that if we sort out some interoperability and throw masses of computing power and smart algorithms at a problem, everything will be dandy.

Actually, it could just make things worse. Technically everything will work, but the results will be a matter of chance. The problem lies in the validity of the models we use, and our ability to effectively model complex problems is at best unproven. If the model is faulty and the calculation perfect, the results will be wrong. In fact, when the systems we try to model are complex or chaotic, no deterministic model can deliver correct results other than by accident. But we like deterministic models, because they make us feel like we're in control. I discussed this problem and its effects in more detail in my article on Ashby's Law of Requisite Variety. There's also an important article by Joyce Hostyn, which explains how a simplistic view of objectivity leads to (at best) biased results. "Data does not lie. It just does not (always) mean what you think it does" (Claudia Perlich, Chief Scientist at Dstillery, via CMSWire).
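A toy example makes the point about chaotic systems concrete (my illustration here, not taken from the Ashby article): the logistic map is fully deterministic, yet a starting value that is wrong in the seventh decimal place yields a completely different prediction within fifty steps. A perfect calculation on a slightly wrong model still gives a wrong answer.

```python
# The logistic map x -> r*x*(1-x) is deterministic but chaotic for r near 4.
def logistic_map(x0: float, r: float = 3.9, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two "models" that differ only by a measurement error of 0.0000001.
model_a = logistic_map(0.2000000)
model_b = logistic_map(0.2000001)
print(f"model A predicts {model_a:.4f}, model B predicts {model_b:.4f}")
# The two predictions bear no useful relation to each other.
```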

None of that detracts from the fact that developing a robot vacuum cleaner that actually "learns" the layout of a room is pretty impressive. But it doesn't mean that the robot is aware that it is a vacuum cleaner, or that it has a (single) purpose in life. And just as well: it might get upset about us continually moving the furniture and decide to get revenge by crashing into our best antique glass cabinet.

With the Internet of Things (IoT) and Big Data in particular, we’re deploying machines to carry out analyses and take decisions that can be critical for the success of some human endeavor. If the models are wrong or only sometimes right, the consequences can be disastrous for health, the environment or the economy. In my Ashby piece I showed how unexpected events can result in an otherwise good model leading to fundamentally wrong reactions. In a world where IoT and Big Data combine with Mobility (multiple device types, locations and networks) and Cloud, the level of complexity is obviously high and there’s scope for a large number of unexpected events.

If we are to manage the volume of information coming our way, and the speed with which it arrives or with which we must react, we need to harness the power of machine intelligence. In an intelligent manner. Which brings me to Cognitive Computing Systems.

On the IBM Research Cognitive Computing page I found this statement: “Far from replacing our thinking, cognitive systems will extend our cognition and free us to think more creatively.”  Cognitive Computing means allowing the computer to say “listen guys, I’m not really sure about this but here are the options”. Or even “I’ve actually never seen one of these before, so maybe you’d like to see what you can make of it”. And if the computer is really really not sure, maybe we’d better ride the storm for a while and figure out what this new thing is. Cognitive Computing means that we can, in a manner of speaking, discuss this with the computer.
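As a sketch of what that conversational behaviour might look like in code (a minimal illustration of my own, not IBM's Watson API – the labels and threshold are hypothetical), here is a classifier that returns ranked options with confidences and explicitly abstains when its best guess is weak:

```python
# Hypothetical sketch: return ranked options, or abstain when unsure.
def classify(scores: dict, floor: float = 0.5):
    """scores: raw evidence per label; floor: minimum confidence to commit."""
    total = sum(scores.values())
    ranked = sorted(((label, s / total) for label, s in scores.items()),
                    key=lambda pair: pair[1], reverse=True)
    best_label, best_confidence = ranked[0]
    if best_confidence < floor:
        # "I'm not really sure about this, but here are the options."
        return ("unsure", ranked)
    return (best_label, ranked)

print(classify({"pump failure": 8.0, "sensor drift": 1.5, "normal": 0.5}))
print(classify({"pump failure": 1.1, "sensor drift": 1.0, "normal": 0.9}))
```

The second call falls below the confidence floor, so the system hands the decision back to a human together with its ranked options, rather than silently committing to a guess – which is exactly the "discussing it with the computer" behaviour described above.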

It’s hard to say how far we are from commercially viable implementations of this technology. Watson has a few children but the family is still at the stage of applied research. But necessity is the mother of invention and, if the technologies we’re talking about in Platform 3.0 really do start collectively to take on the roles we have envisaged for them, that could just provide the necessary incentive to develop economically feasible solutions.

In the meantime, we need to put ourselves more in the centre of things: to make optimal use of the technologies we do have available to us, but not to shirk our responsibilities as intelligent human beings to use that intelligence rather than seek easy answers to wicked problems.


I’ll leave you with 3 minutes and 12 seconds of genius:
Marshall Davis Jones: “Touchscreen”


Stuart Boardman is a Senior Business Consultant with KPN, where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group's Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart has published with the Information Security Platform (PvIB) in The Netherlands and with his previous employer, CGI, and is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity.

Filed under Cloud, Cloud/SOA, Open Platform 3.0, Platform 3.0