
Professional Training Trends (Part Two): A Q&A with Chris Armstrong, Armstrong Process Group

By The Open Group

This is part two in a two-part series.

Professional development and training is a perpetually hot topic within the technology industry. After all, who doesn’t want to succeed at their job and perform better?

Ongoing education and training is particularly important for technology professionals who are already in the field. With new tech trends, programming languages and methodologies continuously popping up, most professionals can’t afford not to keep their skill sets up to date these days.

The Open Group member Chris Armstrong is well-versed in the obstacles that technology professionals face to do their jobs. President of Armstrong Process Group, Inc. (APG), Armstrong and his firm provide continuing education and certification programs for technology professionals and Enterprise Architects covering all aspects of the enterprise development lifecycle. We recently spoke with Armstrong about the needs of Architecture professionals and the skills and tools he thinks are necessary to do the job effectively today.

What are some of the tools that EAs can be using to do architecture right now?

There’s quite a bit out there. I’m kind of developing a perspective on how to lay them out across the landscape a bit better. I think there are two general classes of EA tools based on requirements, which is not necessarily the same as what is offered by commercial or open source solutions.

When you take a look at the EA Capability model and the value chain, the first two parts of it have to do with understanding and analyzing what’s going on in an enterprise. Those can be effectively implemented by what I call Enterprise Architecture content management tools, or EACM. Most of the modeling tools would fall within that categorization. Tools that we use? There’s Sparx Enterprise Architect. It’s a very effective modeling tool that covers all aspects of the architecture landscape top-to-bottom, left-to-right and it’s very affordable. Consequently, it’s one of the most popular tools in the world—I think there are upwards of 300,000 licenses active right now. There are lots of other modeling tools as well.

A lot of people find the price point for Sparx Enterprise Architect so appealing that when the investment is only $5K, $10K or $15K, instead of $100K or $250K, it's a great way to come to grips with what it means to really build models. It really helps you build those fundamental modeling skills, which are best learned via on-the-job experience in your real business domain, without having to mortgage the farm.

Then there's the other part of it, and this is where I think there needs to be a shift in emphasis to some extent. A lot of times the architect community gets caught up in modeling. We've been modeling for decades—modeling is not a new thing. Despite that—and this is just an anecdotal observation—the level of formal, rigorous modeling, at least in our client base in the U.S. market, is still very low. There are lots of Fortune 1000 organizations that have not yet invested in some of these solutions, or whose investments are fractured and not well unified. As a profession, we have a long history of modeling and I'm a big fan of it, but it sometimes seems a bit self-serving, in that a lot of times the people we model for are ourselves. It's all good from an engineering perspective—it helps us frame things up and produce views of our content that are meaningful to other stakeholders. But there's a real missed opportunity in making those assets available and useful to the rest of the organization. Because if you build a model, irrespective of how good and relevant and pertinent it is, and nobody knows about it, nobody can use it to make good decisions, and nobody can use it to accelerate their project, there's some legitimacy to the question "So how much value is this really adding?" I see a chasm between the production of Enterprise Architecture content and the ease of accessing and using that content throughout the enterprise. The consumer market for Enterprise Architecture is much larger than the provider community.

But that’s a big part of the problem, which is why I mentioned cross-training earlier–most enterprises don’t have the self-awareness that they’ve made some investment in Enterprise Architecture and then often ironically end up making redundant, duplicative investments in repositories to keep track of inventories of things that EA is already doing or could already be doing. Making EA content as easily accessible to the enterprise as going to Google and searching for it would be a monumental improvement. One of the big barriers to re-use is finding if something useful has already been created, and there’s a lot of opportunity to deliver better capability through tooling to all of the consumers throughout an enterprise.
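To make that point concrete, here is a hypothetical sketch (my illustration, not something from the interview) of the simplest form of "searching EA content like Google": a toy keyword index over architecture-repository elements, so consumers can discover whether a reusable asset already exists. The repository entries, field names and ids are invented for the example.

```python
# Toy discovery index over architecture-repository elements.
# All element ids, names and descriptions are invented for illustration.

from collections import defaultdict

repository = [
    {"id": "app-01", "name": "Customer Billing Service",
     "description": "application service for invoicing and payments"},
    {"id": "cap-07", "name": "Order Management",
     "description": "business capability covering order capture and fulfilment"},
]

def build_index(elements):
    """Map each lowercase keyword to the ids of elements that mention it."""
    index = defaultdict(set)
    for el in elements:
        for word in (el["name"] + " " + el["description"]).lower().split():
            index[word].add(el["id"])
    return index

def search(index, query):
    """Return ids matching every word in the query (simple AND search)."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

index = build_index(repository)
print(search(index, "billing"))        # the set containing "app-01"
print(search(index, "order capture"))  # the set containing "cap-07"
```

A real EACM or EALM tool would index far richer metadata and relationships, but the principle is the same: discovery is what turns architecture content into a reusable asset.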

If we move a bit further along the EA value chain to what we call “Decide and Respond,” that’s a really good place for a different class of tools. Even though there are modeling tool vendors that try to do it, we need a second class of tools for EA Lifecycle Management (EALM), which is really getting into the understanding of “architecture-in-motion”. Once architecture content has been described as the current and future state, the real $64,000 question is how do we get there? How do we build a roadmap? How do we distribute the requirements of that roadmap across multiple projects and tie that to the strategic business decisions and critical assets over time? Then there’s how do I operate all of this stuff once I build it? That’s another part of lifecycle management—not just how do I get to this future state target architecture, but how do I continue to operate and evolve it incrementally and iteratively over time?

There are some tools that are emerging in the lifecycle management space and one of them is another product we partner with—that’s a solution from HP called Enterprise Maps. From our perspective it meets all the key requirements of what we consider enterprise architecture lifecycle management.

What tools do you recommend EAs use to enhance their skillsets?

Getting back to modeling—that's a really good place to start as it relates to elevating the rigor of architecture. People are used to drawing pictures with something like Visio to graphically represent "here's how the business is arranged" or "here's how the applications landscape looks," but there's a big transition in learning how to build a model, because drawing a picture and building a model are not the same thing. The irony, though, is that to many consumers they look the same, because you often look into a model through a picture. But the skill and the experience the practitioner needs are very different. It's a completely different way of looking at the world when you start building models as opposed to solely drawing pictures.

Coming into 2015, I still see a huge opportunity to uplift that skill set, because I find a lot of people say they know how to model but haven't really had that experience. You can't simply explain it to somebody; you have to do it. It's not the be-all and end-all—it's part of the architect's toolkit—but being able to think architecturally and from a model-driven approach is a key skill set that people are going to need to keep pace with all the rapid changes going on in the marketplace right now.

I also see that there's still an opportunity to get people better educated on some formal modeling notations. We're big fans of the Unified Modeling Language, UML. I still think uptake of some of those specifications is not as prevalent as it could be. I do see architects who have some familiarity with some of these modeling standards; for example, in our TOGAF® training we cover standards on one particular slide, and many architects have only heard of one or two of them. That points to a lack of awareness of the rich family of languages that are out there and how they can be used. If a community of architects can only identify one or two modeling languages on a list of 10 or 15, that is an indirect representation of their background in modeling, in my opinion. That's anecdotal, but there's a huge opportunity to uplift architects' modeling skills.

How do you define the difference between models and pictures?

Modeling requires a theory—namely you have to postulate a theory first and then you build a model to test that theory. Picture drawing doesn’t require a theory—it just requires you to dump on a piece of paper a bunch of stuff that’s in your head. Modeling encourages more methodical approaches to framing the problem.

One of the anti-patterns I've seen in many organizations is that they get overly enthusiastic, particularly when they acquire a modeling tool. They feel like they can suddenly do all these things they've never done before, all that modeling stuff, and they end up "over-modeling" rather than modeling effectively, because one of the keys to modeling is modeling just enough; there's never enough time to build the perfect thing. In my opinion, it's about building the minimally sufficient model that's useful. And to do that, you need to take a step back. TOGAF does acknowledge this in the ADM—you need to understand who your stakeholders are and what their concerns are, and then use those concerns to frame how you look at this content. This is where you start coming up with the theory for "Why are we building a model?" Just because we have tools to build models doesn't mean we should build models with those tools. We need to understand why we're building models, because we can build infinite numbers of models forever, where none of them might be useful, and what's the point of that?

The example I give is this: there's a CFO of an organization who needs to report earnings to Wall Street for quarterly projections and needs details from the last quarter. The accounting people say, "We've got you covered, we know exactly what you need." The next day the CFO comes in and on her desk is eight feet of green bar paper. She goes out to the accountants and says, "What the heck is this?" And they say, "This is a dump of the general ledger for the first quarter. Every financial transaction you need." And she says, "Well, it's been a while since I've been a CPA, and I believe it's all in there, but there's just no way I've got time to weed through all that stuff."

There are generally accepted accounting principles where if I want to understand the relationship between revenue and expense that’s called a P&L and if I’m interested in understanding the difference between assets and liabilities that’s a balance sheet. We can think of the general ledger as the model of the finances of an organization. We need to be able to use intelligence to give people views of that model that are pertinent and help them understand things. So, the CFO says “Can you take those debits and credits in that double entry accounting system and summarize them into a one-pager called a P&L?”

The P&L would be an example of a view into a model, like a picture or diagram. The diagram comes from a model, the general ledger. So if you want to change the P&L in an accounting system you don’t change the financial statement, you change the general ledger. When you make an adjustment in your general ledger, you re-run your P&L with different content because you changed the model underneath it.

You can kind of think of it as the difference between doing accounting on register paper, like we did up until the early 21st Century, and then saying "Why don't we keep track of all the debits and credits based on a chart of accounts and then use reporting capabilities to synthesize any way of looking at the finances that we care to?" It allows a different way of thinking about the interconnectedness of things.
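The ledger analogy lends itself to a small illustrative sketch (my example, not the interviewee's): the ledger is the model, and financial statements are views regenerated from it. Account names and figures are invented.

```python
# The model: every transaction lives in the ledger.
# The view: a P&L is computed from the model, never edited directly.
# All accounts and amounts below are made up for illustration.

from collections import defaultdict

ledger = [
    {"account": "revenue", "amount": 1000},
    {"account": "expense", "amount": -400},
    {"account": "expense", "amount": -150},
]

def profit_and_loss(entries):
    """A view of the model: summarize entries into revenue vs. expense."""
    totals = defaultdict(int)
    for e in entries:
        totals[e["account"]] += e["amount"]
    return {"revenue": totals["revenue"],
            "expense": -totals["expense"],
            "net": totals["revenue"] + totals["expense"]}

print(profit_and_loss(ledger))  # net is 450

# To "change the P&L," you change the model, then re-run the view.
ledger.append({"account": "expense", "amount": -50})
print(profit_and_loss(ledger))  # net is now 400
```

Changing the view means adjusting the model and regenerating, which is exactly the relationship between a diagram and the architecture model behind it.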

What are some of the most sought after classes at APG?

Of course TOGAF certification is one of the big ones. I’d say in addition to that we do quite a bit in systems engineering, application architecture, and requirements management. Sometimes those are done in the context of solution delivery but sometimes they’re done in the context of Enterprise Architecture. There’s still a lot of opportunity in supporting Enterprise Architecture in some of the fundamentals like requirements management and effective architectural modeling.

What kinds of things should EAs look for in training courses?

I guess the big thing to look for is offerings that get you as close to practical application as possible. A lot of people start with TOGAF and that's a great way to frame the problem space. I would set the expectation—and we always do when we deliver our TOGAF training—that this will not tell you "how" to do Enterprise Architecture; there's just not enough time for that in four days. We talk about "what" Enterprise Architecture is and related emerging best practices. That needs to be followed up with "how do I actually do Enterprise Architecture modeling," "how do I actually collect EA requirements," "how do I actually do architecture trade-off analysis?" Then "how do I synthesize an architecture roadmap," "how do I put together a migration plan," and "how do I manage the lifecycle of applications in my portfolio over the long haul?" Training that gets you closer to those experiences will be the most valuable.

But a lot of this depends on the level of maturity within the organization, because in some places just getting everybody on the same page about what Enterprise Architecture means is a big victory. But I also think Enterprise Architects need to be very thoughtful about cross-training. It's something I'm trying to invest in myself: becoming more attuned to what's going on in other parts of the enterprise where Enterprise Architecture has some context but perhaps is not a known player. Getting training experiences in other places and engaging those parts of your organization to find out what problems they're trying to solve and how Enterprise Architecture might help them is essential.

One of the best ways to demonstrate that is through the organizational learning related to EA adoption. That may even be the bigger question. As individual architects, there are always opportunities for greater skill development, but organizational learning is where the real investment needs to be made, so you can answer the question "Why do I care?" One of the best ways to respond to that is to have an internal success. After a pilot project, say, "We did EA on a limited scale for a specific purpose, and look what we got out of it. How could you not want to do more of it?"

But ultimately the question should be "How can we make Enterprise Architecture indispensable? How can we create an environment where people can perform their duties more rapidly, more efficiently, more effectively and more sustainably based on Enterprise Architecture?" This is part of the problem, especially in larger organizations. In 2015, it's not really the first time people have been making investments in Enterprise Architecture; it's the second or third or fourth time, so it's a reboot. You want to make EA indispensable by supporting those critical activities, and then when the stakeholders become dependent on it, you can say, "If you like that stuff, we need you to show up with some support for EA and get some funding and resources, so we can continue to operate and sustain this capability."

What we've found is that, ironically, it's a double-edged sword. If an organization has success in standing up its Architecture capability, sustaining it and demonstrating value, there can be a snowball effect: you become a victim of your own success as people start to get wind of "Oh, I don't have to do that if the EA's already done it," or "I can align myself with a part of the business where the EA has already been done." The architecture community can get very busy—busier than they're prepared for—because of the momentum to exploit those EA investments. But at the end of the day, it's all good stuff, because the more you can show the enterprise that it's worth the investment, that it delivers value, the more likely you'll get increased funding to sustain the capability.

Chris Armstrong is president of Armstrong Process Group, Inc. and an internationally recognized thought leader and expert in iterative software development, enterprise architecture, object-oriented analysis and design, the Unified Modeling Language (UML), use case driven requirements and process improvement.

Over the past twenty years, Chris has worked to bring modern software engineering best practices to practical application at many private companies and government organizations worldwide. Chris has spoken at over 30 conferences, including The Open Group Enterprise Architecture Practitioners Conference, Software Development Expo, Rational User Conference, OMG workshops and UML World. He has been published in such outlets as Cutter IT Journal, Enterprise Development and Rational Developer Network.

Join the conversation! @theopengroup #ogchat


Filed under Business Architecture, Enterprise Architecture, Enterprise Transformation, Professional Development, Standards, TOGAF®, Uncategorized

Professional Training Trends (Part One): A Q&A with Chris Armstrong, Armstrong Process Group

By The Open Group

This is part one in a two-part series.


What are some of the latest trends you’re seeing in training today?

If I look at the kinds of things we’ve been helping people with, we definitely continue to do professional certifications like TOGAF®. It appears that the U.S. is still lagging behind Europe with penetration of TOGAF certifications. For example, the trend has been that the U.K. is number one in certifications and the U.S. is number two. Based on sheer numbers of workers, there should actually be far more people certified in the U.S., but that could be related to cultural differences in regional markets as related to certification.

Another trend we’re seeing a lot of is “How do I do this in the real world?” TOGAF intentionally does not go to the level of detail that prescribes how you really do things. Many practitioners are looking for more focused, detailed training specific to different Enterprise Architecture (EA) domains. APG does quite a bit of that with our enterprise clients to help institutionalize EA practices. There are also many tool vendors that provide tools to help accomplish EA tasks and we help with training on those.

We also find that there’s a need for balance between how much to train someone in terms of formal training vs. mentoring and coaching them. As a profession, we do a lot of classroom training, but we need to follow up more with how we’re going to apply it in the real world and in our environment with on-the-job training. Grasping the concepts in an instructor-led class isn’t the same as doing it for real, when trying to solve a problem you actually care about.

When people are interested in becoming Enterprise Architects, what kind of training should they pursue?

That's a pretty compelling question as it has to do with the state of the architecture profession, which is still in its infancy. From a milestone perspective, it's still hard to call Enterprise Architecture a "true" profession if you can't get educated on it. With other professions—attorneys or medical doctors—you can go to an accredited university and get a degree or a master's and participate in continuing education. There are some indicators that things are progressing though. Now there are master's programs in Enterprise Architecture at institutions like Penn State. We've donated some of our architecture curriculum as a gift-in-kind to the program and have a seat on their corporate advisory board. It was pretty awesome to make that kind of contribution to support and influence their program.

We talk about this in our Enterprise Architecture training to help make people aware of that milestone. However, do you think that getting a four-year degree in Computer Science or Math or Engineering and then going on to get a master's is sufficient to be a successful Enterprise Architect? Absolutely not. So if that's insufficient, we have to agree on what additional experiences individuals should have in order to become Enterprise Architects.

It seems like we need the kind of post-graduate experience of a medical doctor, where there's an internship and a residency based on on-the-ground experience in the real world with guidance from seasoned professionals. That's been the approach in most professional trades—apprentice, journeyman, master—they require on-the-job training. You become a master artisan after a certain period of time and experience. Now there are board-level certifications and some elements of a true profession, but we're just not there yet in Enterprise Architecture. Len Fehskens at the Association of Enterprise Architects (AEA) has been working on this a lot recently. I think it's still unclear what it will take to legitimize this as a profession, and while I'm not sure I know the answer, there may be some indicators to consider.

I think as Enterprise Architecture becomes more commonplace, there will be more of an expectation for it. Part of the uptake issue is that most people running today’s organizations likely have an MBA and when they got it 20, 30 or 40 years ago, EA was not recognized as a key business capability. Now that there are EA master’s programs, future MBA candidates will have been exposed to it in their education, which will remove some of the organizational barriers to adoption.

I think it will still be another 20 or 30 years for mass awareness. As more organizations become successful in showing how they have exploited Enterprise Architecture to deliver real business benefits (increased profitability and reduced risk), the call for qualified people will increase. And because of the consequences of the decisions Enterprise Architects are involved in, business leaders will want assurance that their people are qualified and have the requisite accreditation and experience that they’d expect from an attorney or doctor.

Maybe one other thing to call out—in order to overcome some of these barriers, we need to think about what kind of education we need to provide our business leaders about Enterprise Architecture so that they make the right kinds of investments. It's not just Architect education that we need, but also business leader education.

What kind of architecture skills are most in demand right now?

Business Architecture has a lot of legs right now because it's an essential part of alignment with the business. I do see some risk of bifurcation between the "traditional" EA community and the emerging Business Architecture community. The business is the enterprise, so it's critical that the EA and BA communities are unified. There is more in common amongst us than differences as professionals, and I think there's strength in numbers. And while Business Architecture seems to have good velocity right now, at the end of the day you still need to be able to support your business with IT Architecture.

There is an emerging trend I do wonder about, related to Technology Architecture, as it's known in TOGAF. Some people may also call it Infrastructure Architecture. With the evolution of cloud as a platform—and this might be just because I'm looking at it from the perspective of a start-up IT company with APG—it's becoming less and less of an issue to care as much about the technology and the infrastructure, because in many cases people are investing in platforms where that's taken care of by other people. I don't want to say we don't care at all about the technology, but many of the challenges organizations face in standardizing on technology to keep things sustainable from a cost and risk perspective may change as more and more organizations start putting things in the cloud. That could mean a lot of the investments organizations have made in technical architecture become less important.

That gap will have to be compensated for from a different perspective, though, particularly by an emerging domain some people call Integration Architecture. The same applies to Application Architecture—as many organizations move away from custom development to packaged and SaaS solutions, and as technologies and application offerings are delivered via the cloud, they may need to focus more on how those offerings are integrated with one another when deciding where to invest.

But there’s still obviously a big case for the entirety of the discipline—Enterprise Architecture—and really being able to have that clear line of sight to the business.

What are some of the options currently available for ongoing continuing education for Enterprise Architects?

The Association of Enterprise Architects (AEA) provides a lot of programs to help out with that by supplementing the ecosystem with additional content. It’s a blend between formal classroom training and conference proceedings. We’re doing a monthly webinar series with the AEA entitled “Building an Architecture Platform,” which focuses on how to establish capabilities within the organization to deliver architecture services. The topics are about real-world concerns that have to do with the problems practitioners are trying to address. Complementing professional skills development with these types of offerings is another part of the APG approach.

One of the things APG is doing, and this is a project we’re working on with others at The Open Group, is defining an Enterprise Architecture capability model. One of the things that capability model will be used for is to decide where organizations need to make investments in education. The current capability model and value chain that we have is pretty broad and has a lot of different dimensions to it. When I take a look at it and think “How do people do those things?” I see an opportunity for education and development. Once we continue to elaborate the map of things that comprise Enterprise Architecture, I think we’ll see a lot of opportunity for getting into a lot of different dimensions of how Enterprise Architecture affects an organization.

And one of the things we need to think about is how we can deliver just-in-time training to a diverse, global community very rapidly and effectively. Exploiting online learning management systems and remote coaching are some of the avenues that APG is pursuing.

Are there particular types of continuing education programs that EAs should pursue from a career development standpoint?

One of the things I've found interesting is that a number of my associates in the profession are going down the MBA path. My sense is that that represents an interest in better understanding how business executives see the enterprise from their world, and in helping frame the question "How can I best anticipate and understand where they're coming from so that I can more effectively position Enterprise Architecture at a different level?" That's cross-disciplinary training. Of course it makes a lot of sense, because at the end of the day, that's what Enterprise Architecture is all about—how to exploit the synergy that exists within an enterprise. A lot of times that means going horizontal within the organization, into places where people didn't necessarily think you had any business. Raising that awareness and understanding of the relevance of EA is a big part of it.

Another thing that's certainly driving many organizations is regulatory compliance, particularly general risk management. A lot of organizations are becoming aware that Enterprise Architecture plays an essential role in supporting that, so getting cross-training in those related disciplines would make a lot of sense. At the end of the day, those parts of an organization typically have a lot more authority, and consequently a lot more funding, than Enterprise Architecture does, because the consequences of non-conformance are very punitive—the pulling of licenses to operate, heavy fines, bad publicity. We're not quite at the point where an organization doing poorly at Enterprise Architecture becomes front-page news in The New York Times. But when someone steals 30 million cardholders' personal information, that does become headline news and the subject of regulatory punitive damages. Not to say that Enterprise Architecture is the savior of all things, but it is well accepted within the EA community that Enterprise Architecture is an essential part of building an effective governance and regulatory compliance environment.


Filed under Business Architecture, Enterprise Architecture, Professional Development, Standards, TOGAF®, Uncategorized

The Open Group London 2014: Eight Questions on Retail Architecture

By The Open Group

If there’s any vertical sector that has been experiencing constant and massive transformation in the ages of the Internet and social media, it’s the retail sector. From the ability to buy goods whenever and however you’d like (in store, online and now, through mobile devices) to customers taking to social media to express their opinions about brands and service, retailers have a lot to deal with.

Glue Reply is a UK-based consulting firm that has worked with some of Europe's largest retailers to help them plan their Enterprise Architectures and deal with the onslaught of constant technological change. Glue Reply Partner Daren Ward and Senior Consultant Richard Veryard sat down recently to answer our questions about the challenges of building architectures for the retail sector, the difficulties of seasonal business and the need to keep things simple and agile. Ward spoke at The Open Group London 2014 on October 20.

What are some of the biggest challenges facing the retail industry right now?

There are a number of well-documented challenges facing the retail sector. Retailers are facing new competitors, especially from discount chains, as well as online-only retailers such as Amazon. Retailers are also experiencing an increasing fragmentation of spend—for example, grocery customers buying smaller quantities more frequently.

At the same time, customer expectations are higher, especially across multiple channels. There is an increased intolerance of poor customer service, and people’s expectations of a prompt response are rising rapidly, especially via social media.

There is also increasing concern regarding cost. Many retailers have huge amounts invested in physical space and human resources. They can’t just keep increasing these costs; they must understand how to become more efficient and create new ways to make use of these resources.

What role is technology playing in those changes, and which technologies are forcing the most change?

New technologies are allowing us to provide shoppers with a personalized customer experience more akin to old-school service, like when the store manager knew my name, my collar size and so on. Combining technologies such as mobile and iBeacons allows us not only to reach out to our customers, but also to provide context and increase relevance.

Some retailers are becoming extremely adept in using social media. The challenge here is to link the social media with the business process, so that the customer service agent can quickly check the relevant stock position and reserve the stock before posting a response on Facebook.

Big data is becoming one of the key technology drivers. Large retailers are able to mobilize large amounts of data, both from their own operations as well as external sources. Some retailers have become highly data-driven enterprises, with the ability to make rapid adjustments to marketing campaigns and physical supply chains. As we gather more data from more devices all plugged into the Internet of Things (IoT), technology can help us make sense of this data and spot trends we didn’t realize existed.

What role can Enterprise Architecture play in helping retailers, and what can retailers gain from taking an architectural approach to their business?

One of the key themes of the digital transformation is the ability to personalize the service, to really better understand our customers and to hold a conversation with them that is meaningful. We believe there are four key foundation blocks to achieving this seamless digital transformation: the ability to change, to integrate, to drive value from data and to understand the customer journey. Core to the ability to change is a business-driven roadmap. It provides all involved with a common language, a common set of goals and a target vision. This roadmap is not a series of hurdles that must be delivered, but rather a direction of travel towards the target, allowing us to assess the impact of course corrections as we go and ensure we are still capable of arriving at our destination. This is how we create an agile environment, where tactical changes are simple course corrections continuing in the right direction of travel.

Glue Reply provides a range of architecture services to our retail clients, from capability led planning to practical development of integration solutions. For example, we produced a five-year roadmap for Sainsbury’s, which allows IT investment to combine longer-term foundation projects with short-term initiatives that can respond rapidly to customer demand.

Are there issues specific to the retail sector that are particularly challenging to deal with in creating an architecture and why?

Retail is a very seasonal business—sometimes this leaves a very small window for business improvements. This also exaggerates the differences in the business and IT lifecycles. The business strategy can change at a pace often driven by external factors, whilst elements of IT have a lifespan of many years. This is why we need a roadmap—to assess the impact of these changes and re-plan and prioritize our activities.

Are there some retailers that you think are doing a good job of handling these technology challenges? Which ones are getting it right?

Our client John Lewis has just been named ‘Omnichannel Retailer of the Year’ at the World Retail Awards 2014. They have a vision, and they can assess the impact of change. We have seen similar success at Sainsbury’s, where initiatives such as Brand Match are brought to market with real pace and quality.

How can industry standards help to support the retail industry?

Where appropriate, we have used industry standards such as the ARTS (Association for Retail Technology Standards) data model to assist our clients in creating a version that is good enough. But mostly, we use our own business reference models, which we have built up over many years of experience working with a range of different retail businesses.

What can other industries learn from how retailers are incorporating architecture into their operations?

The principle of omnichannel has a lot of relevance for other consumer-facing organizations, as does retail’s focus on loyalty. It’s not about creating a sales stampede; it’s about the brand. Apple is clearly an excellent example: people queue for hours to be the first to buy a new product, at a price that will only fall over time. Some retailers are making great use of customer data and profiling. And above all, successful retailers understand three key architectural principles that will drive success in any other sector: keep it simple, drive value and execute well.

What can retailers do to continue to best meet customer expectations into the future?

It’s no longer about the channel, it’s about the conversation. We have worked with the biggest brands in Europe, helping them deliver multichannel solutions that consider the conversation. The retailer that enables this conversation will better understand their customers’ needs and build long-term relationships.

Daren Ward is a Partner at Reply in the UK. As well as being a practicing Enterprise Architect, Daren is responsible for the development of the Strategy and Architecture business and plays a key role in driving Reply’s growth in the UK. He is committed to helping organizations drive genuine business value from IT investments, working with both commercially focused business units and IT professionals. Daren has helped establish architecture practices at many organizations. Be it enterprise, solutions, integration or information architecture, he has helped these practices deliver real business value through capability-led architecture and business-driven roadmaps.

 

Richard Veryard is a Business Architect and author, specializing in capability-led planning, systems thinking and organizational intelligence. Last year, Richard joined Glue Reply as a senior consultant in the retail sector.

 


Filed under big data, Business Architecture, digital technologies, Enterprise Architecture, Internet of Things, Uncategorized

The Open Group London 2014 – Day Two Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

Despite gusts of 70mph hitting the capital on Day Two of this year’s London event, attendees were not disheartened as October 21 kicked off with an introduction from The Open Group President and CEO Allen Brown. He provided a recap of The Open Group’s achievements over the last quarter including successful events in Bratislava, Slovakia and Kuala Lumpur, Malaysia. Allen also cited some impressive membership figures, with The Open Group now boasting 468 member organizations across 39 countries with the latest member coming from Nigeria.

Dave Lounsbury, VP and CTO at The Open Group then introduced the panel debate of the day on The Open Group Open Platform 3.0™ and Enterprise Architecture, with participants Ron Tolido, SVP and CTO, Applications Continental Europe, Capgemini; Andras Szakal, VP and CTO, IBM U.S. Federal IMT; and TJ Virdi, Senior Enterprise IT Architect, The Boeing Company.

After a discussion around the definition of Open Platform 3.0, the participants debated the potential impact of the Platform on Enterprise Architecture. Tolido noted that there has been an explosion of solutions, typically with a much shorter life cycle. While we’re not going to be able to solve every single problem with Open Platform 3.0, we can work towards that end goal by documenting its requirements and collecting suitable case studies.

Discussions then moved to the theme of machine-to-machine (M2M) learning, a key part of the Open Platform 3.0 revolution. TJ Virdi cited Gartner figures suggesting that by 2017, machines will be learning more than processing, an especially interesting notion when it comes to the manufacturing industry, according to Szakal. There are three areas in which manufacturing is affected by M2M: new business opportunities, business optimization and operational optimization. With the products themselves now effectively becoming platforms and tools for communication, they become intelligent things and attract others in turn.

Panel: Ron Tolido, Andras Szakal, TJ Virdi, Dave Lounsbury

Henry Franken, CEO at BizzDesign, went on to lead the morning session on the Pitfalls of Strategic Alignment, announcing the results of an expansive survey into the development and implementation of a strategy. Key findings from the survey include:

  • SWOT Analysis and Business Cases are the most often used strategy techniques to support the strategy process – many others, such as the Confrontation Matrix, are now rarely used
  • Organizations continue to struggle with the strategy process, and most do not see strategy development and strategy implementation as a single, intertwined strategy process
  • 64% indicated that stakeholders had conflicting priorities regarding reaching strategic goals, which can make it very difficult for a strategy to gain momentum
  • The majority of respondents believed the main constraint on strategic alignment to be the unknown impact of the strategy on employees, followed by much of the organization not understanding the strategy

The wide-ranging afternoon tracks kicked off with sessions on Risk, Enterprise in the Cloud and ArchiMate®, an Open Group standard. Key speakers included Ryan Jones of Blackthorn Technologies, Marc Walker of British Telecom, James Osborn of KPMG, Anitha Parameswaran of Unilever and Ryan Betts of VoltDB.

To take another look at the day’s plenary or track sessions, please visit The Open Group on livestream.com.

The day ended in style with an evening reception amid the Victorian architecture of the Victoria & Albert Museum, along with a private viewing of the newly opened John Constable exhibition.

Victoria & Albert Museum

A special mention must go to Terry Blevins who, after years of hard work and commitment to The Open Group, was made a Fellow at this year’s event. Many congratulations to Terry – and here’s to another successful day tomorrow.

Join the conversation! #ogchat #ogLON

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog and media relations. Loren has over 20 years’ experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.


Filed under ArchiMate®, Boundaryless Information Flow™, Business Architecture, Cloud, Enterprise Architecture, Enterprise Transformation, Internet of Things, Open Platform 3.0, Professional Development, Uncategorized

The Open Group London 2014 Preview: A Conversation with RTI’s Stan Schneider about the Internet of Things and Healthcare

By The Open Group

RTI is a Silicon Valley-based messaging and communications company focused on helping to bring the Industrial Internet of Things (IoT) to fruition. The company was recently named “The Most Influential Industrial Internet of Things Company” by Appinions in a ranking published in Forbes. RTI’s EMEA Manager Bettina Swynnerton will be discussing the impact that the IoT and connected medical devices will have on hospital environments and the Healthcare industry at The Open Group London October 20-23. We spoke to RTI CEO Stan Schneider in advance of the event about the Industrial IoT and the areas where he sees Healthcare being impacted the most by connected devices.

Earlier this year, industry research firm Gartner declared the Internet of Things (IoT) to be the most hyped technology around, having reached the pinnacle of the firm’s famed “Hype Cycle.”

Despite the hype around consumer IoT applications—from FitBits to Nest thermostats to fashionably placed “wearables” that may begin to appear in everything from jewelry to handbags to kids’ backpacks—Stan Schneider, CEO of IoT communications platform company RTI, says that 90 percent of what we’re hearing about the IoT is not where the real value will lie. Most of the media coverage and hype is about the “consumer” IoT: Google Glass, or sensors in refrigerators that tell you when the milk’s gone bad. However, most of the real value of the IoT will come from what GE has coined the “Industrial Internet”—applications working behind the scenes to keep industrial systems operating more efficiently, says Schneider.

“In reality, 90 percent of the real value of the IoT will be in industrial applications such as energy systems, manufacturing advances, transportation or medical systems,” Schneider says.

However, the reality today is that the IoT is quite new. As Schneider points out, most companies are still trying to figure out what their IoT strategy should be. There isn’t that much active building of real systems at this point.

“Most companies, at the moment, are just trying to figure out what the Internet of Things is. I can do a webinar on ‘What is the Internet of Things?’ or ‘What is the Industrial Internet of Things?’ and get hundreds and hundreds of people showing up, most of whom don’t have any idea. That’s where most companies are. But there are several leading companies that very much have strategies, and there are a few that are even executing their strategies,” he said. According to Schneider, these companies include GE, which he says has a 700+ person team currently dedicated to building its Industrial IoT platform, as well as companies such as Siemens and Audi, which already have some applications working.

For its part, RTI is actively involved in trying to help define how the Industrial Internet will work and how companies can take disparate devices and make them work with one another. “We’re a nuts-and-bolts, make-it-work type of company,” Schneider notes. As such, openness and standards are critical not only to RTI’s work but to the success of the Industrial IoT in general, says Schneider. RTI is currently involved in as many as 15 different industry standards initiatives.

IoT Drivers in Healthcare

Although RTI is involved in IoT initiatives in many industries, from manufacturing to the military, Healthcare is one of the company’s main areas of focus. For instance, RTI is working with GE Healthcare on the software for its CAT scanner machines. GE chose RTI’s DDS (Data Distribution Service) product because it lets GE standardize on a single communications platform across product lines.

Schneider says there are three big drivers that are changing the medical landscape when it comes to connectivity: the evolution of standalone systems to distributed systems, the connection of devices to improve patient outcome and the replacement of dedicated wiring with networks.

The first driver is that medical devices that have been standalone devices for years are now being built on new distributed architectures. This gives practitioners and patients easier access to the technology they need.

For example, RTI customer BK Medical, a medical device manufacturer based in Denmark, is in the process of changing their ultrasound product architecture. They are moving from a single-user physical system to a wirelessly connected distributed design. Images will now be generated in and distributed by the Cloud, thus saving significant hardware costs while making the systems more accessible.

According to Schneider, ultrasound machine architecture hasn’t really changed in the last 30 or 40 years. Today’s ultrasound machines are still wheeled in on a cart. That cart contains a wired transducer, image processing hardware or software and a monitor. If someone wants to keep an image—for example, images of fetuses in utero—they take it away on physical media. Years ago it was a Polaroid picture; today the images are saved to CDs and handed to the patient.

In contrast, BK’s new systems will be completely distributed, Schneider says. Doctors will be able to carry a transducer that looks more like a cellphone with them throughout the hospital. A wireless connection will upload the imaging data into the cloud for image calculation. With a distributed scenario, only one image processing system may be needed for a hospital or clinic. It can even be kept in the cloud off-site. Both patients and caregivers can access images on any display, wherever they are. This kind of architecture makes the systems much cheaper and far more efficient, Schneider says. The days of the wheeled-in cart are numbered.

The second IoT driver in Healthcare is connecting medical devices together to improve patient outcomes. Most hospital devices today are completely independent and standalone. So, if a patient is hooked up to multiple monitors, the only thing that really “connects” those devices today is a piece of paper at the end of a hospital bed that shows how each should be functioning. Nurses are supposed to check these devices on an hourly basis to make sure they’re working correctly and the patient is ok.

Schneider says this approach is error-ridden. First, the nurse may be too busy to check the devices thoroughly. Worse, any number of things can set off alarms whether there’s something wrong with the patient or not. As anyone who has ever visited a friend or relative in the hospital can attest, alarms are going off constantly, making it difficult to determine when someone is really in distress. In fact, one of the biggest problems in hospital settings today, Schneider says, is a phenomenon known as “alarm fatigue.” Single devices simply can’t reliably tell if there’s some minor glitch in the data or if the patient is in real trouble. Thus, 80% of all device alarms in hospitals are turned off. Meaningless alarms fatigue personnel, so they either ignore or turn off the alarms…and people can die.

To deal with this problem, new technologies are being created that will connect devices together on a network. Multiple devices can then work in tandem to determine when something is really wrong. If the machines are networked, alarms can be set to go off only when multiple distress indicators are present rather than just one. For example, if oxygen levels drop on both an oxygen monitor on someone’s finger and on a respiration monitor, the alarm is much more likely to signal a real patient problem than if only one source shows a problem. Schneider says the algorithms to fix these problems are reasonably well understood; the barrier is the lack of networking to tie all of these machines together.
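The fusion logic Schneider describes can be sketched in a few lines. This is purely an illustrative model, not any vendor’s actual algorithm; the sensor names and thresholds are hypothetical:

```python
# Illustrative sketch of multi-sensor alarm fusion: the alarm fires only
# when enough independent monitors agree the patient is in distress.
# Sensor names and threshold values here are hypothetical.

def fused_alarm(readings, thresholds, min_agreement=2):
    """Return True only if at least `min_agreement` monitors report a
    reading below their distress threshold."""
    distress_votes = sum(
        1 for name, value in readings.items()
        if value < thresholds[name]
    )
    return distress_votes >= min_agreement

thresholds = {"spo2_pct": 90, "resp_rate": 10}

# A glitch on the pulse oximeter alone does not trigger the alarm...
print(fused_alarm({"spo2_pct": 85, "resp_rate": 14}, thresholds))  # False
# ...but agreement between the oximeter and the respiration monitor does.
print(fused_alarm({"spo2_pct": 85, "resp_rate": 8}, thresholds))   # True
```

A single faulty reading is outvoted, which is exactly why this scheme reduces the false alarms that cause alarm fatigue.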

The third area of change in the industrial medical Internet is the transition to networked systems from dedicated wired designs. Surgical operating rooms offer a good example. Today’s operating room is a maze of wires connecting screens, computers, and video. Videos, for instance, come from dynamic x-ray imaging systems, from ultrasound navigation probes and from tiny cameras embedded in surgical instruments. Today, these systems are connected via HDMI or other specialized cables. These cables are hard to reconfigure. Worse, they’re difficult to sterilize, Schneider says. Thus, the surgical theater is hard to configure, clean and maintain.

In the future, the mesh of special wires can be replaced by a single, high-speed networking bus. Networks make the systems easier to configure and integrate, easier to use and accessible remotely. A single, easy-to-sterilize optical network cable can replace hundreds of wires. As wireless gets faster, even that cable can be removed.

“By changing these systems from a mesh of TV-cables to a networked data bus, you really change the way the whole system is integrated,” he said. “It’s much more flexible, maintainable and sharable outside the room. Surgical systems will be fundamentally changed by the Industrial IoT.”

IoT Challenges for Healthcare

Schneider says there are numerous challenges facing the integration of the IoT into existing Healthcare systems—from technical challenges to standards and, of course, security and privacy. But one of the biggest challenges facing the industry, he believes, is plain old fear. In particular, Schneider says, there is a lot of fear within the industry of choosing the wrong path and, in effect, “walking off a cliff.” Getting beyond that fear and taking risks, he says, will be necessary to move the industry forward.

In a practical sense, the other thing holding back integration is the sheer number of connected devices currently being used in medicine, he says. Manufacturers each have their own systems and obviously have a vested interest in keeping their equipment in hospitals, so many have been reluctant to develop standards or become standards-compliant and push interoperability forward, Schneider says.

This is, of course, not just a Healthcare issue. “We see it in every single industry we’re in. It’s a real problem,” he said.

Legacy systems are also a problematic area. “You can’t just go into a Kaiser Permanente and rip out $2 billion worth of equipment,” he says. Integrating new systems with existing technology is a process of incremental change that takes time and vested leadership, says Schneider.

Cloud Integration a Driver

Although many of these technologies are not yet very mature, Schneider believes that the fundamental industry driver is Cloud integration. In Schneider’s view, the Industrial Internet is ultimately a systems problem. As with the ultrasound machine example from BK Medical, it’s not that an existing ultrasound machine doesn’t work just fine today, Schneider says, it’s that it could work better.

“Look what you can do if you connect it to the Cloud—you can distribute it, you can make it cheaper, you can make it better, you can make it faster, you can make it more available, you can connect it to the patient at home. It’s a huge system problem. The real overwhelming striking value of the Industrial Internet really happens when you’re not just talking about the hospital but you’re talking about the Cloud and hooking up with practitioners, patients, hospitals, home care and health records. You have to be able to integrate the whole thing together to get that ultimate value. While there are many point cases that are compelling all by themselves, realizing the vision requires getting the whole system running. A truly connected system is a ways out, but it’s exciting.”

Open Standards

Schneider also says that openness is absolutely critical for these systems to ultimately work. Just as agreeing on a standard for HTTP running on the Internet Protocol (IP) drove the Web, a new device-appropriate protocol will be necessary for the Internet of Things to work. Consensus will be necessary, he says, so that systems can talk to each other and connectivity will work. The Industrial Internet will push that out to the Cloud and beyond, he says.

“One of my favorite quotes is from IBM,” he says. “IBM said, ‘It’s not a new Internet, it’s a new Web.’” By that, they mean that the industry needs new, machine-centric protocols to run over the same Internet hardware and base IP protocol, Schneider said.
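The machine-centric protocols Schneider alludes to, such as the DDS standard RTI implements, are built around topic-based publish/subscribe rather than the Web’s request/response model. The following is a minimal in-process sketch of that pattern only; it is not RTI’s actual API, and the topic names and payloads are invented for illustration:

```python
# Minimal sketch of the topic-based publish/subscribe pattern that
# data-centric machine protocols (such as DDS) formalize. Devices declare
# interest in topics; publishers and subscribers never address each other.
from collections import defaultdict

class Bus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # Every callback registered for the topic receives the sample.
        for callback in self._subscribers[topic]:
            callback(sample)

bus = Bus()
received = []
bus.subscribe("patient/42/spo2", received.append)

bus.publish("patient/42/spo2", {"pct": 97, "ts": 1700000000})
bus.publish("patient/42/resp_rate", {"rate": 12})  # no subscriber: dropped

print(received)  # [{'pct': 97, 'ts': 1700000000}]
```

Because senders and receivers are decoupled by topic, new devices can join the network without reconfiguring the ones already there, which is what makes the model attractive for integrating hospital equipment.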

Schneider believes that this new web will eventually evolve to become the new architecture for most companies. However, for now, particularly in hospitals, it’s the “things” that need to be integrated into systems and overall architectures.

One example where this level of connectivity will make a huge difference, he says, is in predictive maintenance. Once a system can “sense” or predict that a machine may fail or that a part needs to be replaced, there will be a huge economic impact and cost savings. For instance, he said Siemens uses acoustic sensors to monitor the state of its wind generators. By placing sensors next to the bearings in the machine, they can literally “listen” for squeaky wheels and thus figure out whether a turbine may soon need repair. These analytics let them know when a bearing must be replaced before the turbine shuts down. Of course, the infrastructure will first need to connect all of these “things” to each other and to the cloud. So there will need to be a lot of system-level changes in architectures.
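One simple way to “listen” for a squeaky bearing, sketched below, is to compare the RMS energy of the vibration signal against a baseline learned during healthy operation. This is an illustrative toy, not Siemens’ actual analytics; the sample values and the trip factor are assumptions:

```python
# Illustrative sketch of acoustic predictive maintenance: flag a bearing
# when the RMS energy of its vibration signal drifts well above the
# baseline recorded during normal operation. Values are hypothetical.
import math

def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

def needs_service(samples, window_size, baseline_rms, factor=3.0):
    """Return True if any window's RMS exceeds `factor` times the
    healthy baseline -- the 'squeaky wheel' the sensors listen for."""
    for i in range(0, len(samples) - window_size + 1, window_size):
        if rms(samples[i:i + window_size]) > factor * baseline_rms:
            return True
    return False

healthy = [0.1, -0.1, 0.1, -0.1]   # quiet, healthy bearing
worn = [0.1, -0.1, 0.9, -0.9]      # energy spike from a worn bearing
baseline = rms(healthy)

print(needs_service(healthy + worn, 4, baseline))  # True
```

In a real deployment the thresholding would run continuously against streaming sensor data, which is precisely why the machines need the cloud connectivity Schneider describes.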

Standards, of course, will be key to getting these architectures to work together. Schneider believes standards development for the IoT will need to be tackled from both horizontal and vertical standpoints. Both generic communication standards and industry-specific standards, such as how to integrate an operating room, must evolve.

“We are a firm believer in open standards as a way to build consensus and make things actually work. It’s absolutely critical,” he said.

Stan Schneider is CEO at Real-Time Innovations (RTI), the Industrial Internet of Things communications platform company. RTI is the largest embedded middleware vendor and has an extensive footprint in all areas of the Industrial Internet, including Energy, Medical, Automotive, Transportation, Defense, and Industrial Control. Stan has published over 50 papers in both the academic and industry press. He speaks widely at events and conferences on topics including networked medical devices for patient safety, the future of connected cars, the role of the DDS standard in the IoT, the evolution of power systems, and understanding the various IoT protocols. Before RTI, Stan managed a large Stanford robotics laboratory, led an embedded communications software team and built data acquisition systems for automotive impact testing. Stan completed his PhD in Electrical Engineering and Computer Science at Stanford University, and holds a BS and MS from the University of Michigan. He is a graduate of Stanford’s Advanced Management College.

 


Filed under architecture, Cloud, digital technologies, Enterprise Architecture, Healthcare, Internet of Things, Open Platform 3.0, Standards, Uncategorized

The Open Group London 2014: Open Platform 3.0™ Panel Preview with Capgemini’s Ron Tolido

By The Open Group

The third wave of platform technologies is poised to revolutionize how companies do business not only for the next few years but for years to come. At The Open Group London event in October, Open Group CTO Dave Lounsbury will be hosting a panel discussion on how The Open Group Open Platform 3.0™ will affect Enterprise Architectures. Panel speakers include IBM Vice President and CTO of U.S. Federal IMT Andras Szakal and Capgemini Senior Vice President and CTO for Application Services Ron Tolido.

We spoke with Tolido in advance of the event about the progress companies are making in implementing third platform technologies, the challenges facing the industry as Open Platform 3.0 evolves and the call to action he envisions for The Open Group as these technologies take hold in the marketplace.

Below is a transcript of that conversation.

From my perspective, we have to realize: What is the call to action that we should have for ourselves? If we look at the mission of Boundaryless Information Flow™ and the need for open standards to accommodate that, what exactly can The Open Group and any general open standards do to facilitate this next wave in IT? I think it’s nothing less than a revolution. The first platform was the mainframe, the second platform was the PC and now the third platform is anything beyond the PC, so all sorts of different devices, sensors and ways to access information, to deploy solutions and to connect. What does it mean in terms of Boundaryless Information Flow and what is the role of open standards to make that platform succeed and help companies to thrive in such a new world?

That’s the type of call to action I’m envisioning. And I believe there are very few Forums or Work Groups within The Open Group that are not affected by this notion of the third platform. Firstly, I believe an important part of the Open Platform 3.0 Forum’s mission will be to analyze and understand the impact of the third platform on all the different areas we’re currently evolving in The Open Group, and, if you like, to orchestrate them a bit or be a catalyst across the working groups and forums.

In a blog post you wrote this summer for Capgemini’s CTO Blog, you cited third platform technologies as being responsible for a renewed interest in IT as an enabler of business growth. What is it about the third platform that is driving that interest?

It’s the same type of revolution as we’ve seen with the PC, which was the second platform. A lot of people in business units—through the PC and client/server technologies and Windows and all of these different things—realized that they could create solutions of a whole new order. The second platform meant many more applications, many more uses, much more business value to be achieved and less direct dependence on the central IT department. I think we’re seeing a very similar evolution right now, but the essence of the move is not that it moves us even further away from central IT; it puts the power of technology right in the business. It’s much easier to create solutions. Nowadays, there are many more channels that are so close to the business that it takes business people to understand them. This also explains why business people like the third platform so much—it’s the Cloud, it’s mobile, social, it’s big data; all of these are waves that bring technology closer to the business and are easy to use, with very apparent business value that we haven’t seen before, certainly not in the PC era. So we’re seeing a next wave, almost a revolution, in terms of how easy it is to create solutions and how widely spread these solutions can be. Because again, as with the PC, it’s many more applications yet again and many more potential uses that can be connected through these applications. That’s the very nature of the revolution, and it also explains why business people like the third platform so much. So what people say to me these days on the business side is ‘We love IT, it’s just these bloody IT people that are the problem.’

Due to the complexities of building the next wave of platform computing, do you think that we may hit a point of fatigue as companies begin to tackle everything that is involved in creating that platform and making it work together?

The way I see it, that’s still the work of the IT community, the Enterprise Architect and the platform designer. The very nature of a platform is that it’s attractive to use, not to build. The very nature of the platform is to connect to it and launch from it; building the platform is an entirely different story. I think it requires platform designers and Enterprise Architects, if you like, and people to do the plumbing and the architecting and the design underneath. But the real nature of the platform is to use it and to build upon it rather than to create it. So the happy view is that the “business people” don’t have to construct this.

I do believe, by the way, that many of the people in The Open Group will be on the side of the builders. They’re supposed to like complexity and like reducing it, so if we do it right the users of the platform will not notice this effort. It’s the same with the Cloud—the problem with the Cloud nowadays is that many people are tempted to run their own clouds, their own technologies, and before they know it, they only have additional complexity on their agenda, rather than reduced, because of the Cloud. It’s the same with the third platform—it’s a foundation which is almost a no-brainer to do business upon, for the next generation of business models. But if we do it wrong, we only have additional complexity on our hands, and we give IT a bad name yet again. We don’t want to do that.

What are Capgemini customers struggling with the most in terms of adopting these new technologies and putting together an Open Platform 3.0?

What you currently see—and it’s not always good to look at history—but if you look at the emergence of the second platform, the PC, of course there were years in which central IT said ‘nobody needs a PC, we can do it all on the mainframe.’ They just didn’t believe it, and business people just started to do it themselves. And for years, we created a mess as a result of it, and we’re still picking up some of the pieces of that situation. The question for IT people, in particular, is to understand how to find this new rhythm, how to adopt the dynamics of this third platform while dealing with all the complexity of the legacy platform that’s already there. I think if we are able to accelerate creating such a platform—and I think The Open Group will be very critical there—we need to work out what exactly should be in the third platform, what types of services you should be developing, how these services would interact, and whether we could create some set of open standards that the industry could align to so that we don’t have to do too much work in integrating all that stuff. If we, as The Open Group, can create that industry momentum, that, at least, would narrow the gap between business and IT that we currently see. Right now IT’s very clearly not able to deliver on the promise because they have their hands full with surviving the existing IT landscape, so unless they do something about simplifying it on the one hand and bridging that old world with the new one, they might still be very unpopular in the forthcoming years. That’s not what you want as an IT person—you want to enable business and new business. But I don’t think we’ve been very effective with that for the past ten years as an industry in general, so that’s a big thing that we have to deal with, bridging the old world with the new world. But anything we can do from The Open Group to accelerate and simplify that job would be great, and I think that’s the very essence of where our actions should be.

What are some of the things that The Open Group, in particular, can do to help affect these changes?

To me it’s still in the evangelization phase. Sooner or later people have to buy it and say ‘We get it, we want it, give me access to the third platform.’ Then the question will be how to accelerate building such an actual platform. So the big question is: What does such a platform look like? What types of services would you find on such a platform? For example, mobility services, data services, integration services, management services, development services, all of that. What would that look like in a typical Platform 3.0? Maybe even define a catalog of services that you would find in the platform. Then, of course, you could use such a catalog, or shopping list if you like, to reach out to the technology suppliers of this world and convince them to pick that up and gear around these definitions—that would facilitate such a platform. Also maybe the architectural roadmap—what would an architecture look like, and what would be the typical five ways of getting there? You have to start with your local situation, so probably several design cases would also be helpful, so there’s an architectural dimension here.

Also, in terms of competencies, what types of competencies will we need in the near future to be able to supply these types of services to the business? That, again, is very new—in this case, for IT Specialist Certification and Architect Certification. These groups also need to think about what the new competencies inherent in the third platform are and how they affect things like certification criteria and competency profiles.

In other areas, if you look at TOGAF®, an Open Group standard, is it really still suitable in the fast-paced world of the third platform, or do we need a third platform version of TOGAF? With security, for example, there are so many users, so many connections, that the activities of the former Jericho Forum seem like child’s play compared to what you will see around the third platform. So there’s no Forum or Work Group that’s not affected by this emerging Open Platform 3.0.

With Open Platform 3.0 touching pretty much every aspect of technology and The Open Group, how do you tackle that? Do you have just an umbrella group for everything or look at it through the lens of TOGAF or security or the IT Specialist? How do you attack something so large?

It’s exactly what you just said. It’s fundamentally my belief that we need to do both of these two things. First, we need a catalyst forum, which I would argue is the Open Platform 3.0 Forum, which would be the catalyst platform, the orchestration platform if you like, that would do the overall definitions, the call to action. They’ve already been doing the business scenarios—they set the scene. Then it would be up to this Forum to reach out to all the other Forums and Work Groups to discuss impact and make sure it stays aligned, so here we have an orchestration function of the Open Platform 3.0 Forum. Then, very obviously, all the other Work Groups and Forums need to pick it up and do their own stuff because you cannot aspire to do all of this with one and the same forum because it’s so wide, it’s so diverse. You need to do both.

The Open Platform 3.0 Forum has been working for a year and a half now. What are some of the things the Forum has accomplished thus far?

They’ve been particularly working on some of the key definitions and some of the business scenarios. I would say that, in order to create awareness of Open Platform 3.0 in terms of the business value and the definitions, they’ve done a very good job. Next, there needs to be a call to action to get everybody mobilized and setting tangible steps toward Platform 3.0. I think that’s currently where we are, so that’s good timing, I believe, in terms of what the Forum has achieved so far.

Returning to the mission of The Open Group, given all of the awareness we have created, what does it all mean in terms of Boundaryless Information Flow and how does it affect the Forums and Work Groups in The Open Group? That’s what we need to do now.

What are some of the biggest challenges that you see facing adoption of Open Platform 3.0 and standards for that platform?

They are relatively immature technologies. For example, with the Cloud you see a lot of players, a lot of technology providers, being quite reluctant to standardize. Some of them are very open about it and are like ‘Right now we are in a niche, and we’re having a lot of fun ourselves, so why open it up right now?’ The movement would be more pressure from the business side saying ‘We want to use your technology, but only if you align with some of these emerging standards.’ That would do it, or certainly help. This, of course, is what makes The Open Group so powerful—it includes not only technology providers, but also businesses, the enterprises involved and end users of technology. If they work together and create something to mobilize technology providers, that would certainly be a breakthrough. But these are immature technologies and, as I said, with some of these technology providers, it seems more important to them to be a niche player for now and create their own market rather than standardizing on something that their competitors could be on as well.

So this is a sign of a relatively immature industry because every industry that starts to mature around certain topics begins to work around open standards. The more mature we grow in mastering the understanding of the Open Platform 3.0, the more you will see the need for standards arise. It’s all a matter of timing so it’s not so strange that in the past year and a half it’s been very difficult to even discuss standards in this area. But I think we’re entering that era really soon, so it seems to be good timing to discuss it. That’s one important limiting area; I think the providers are not necessarily waiting for it or committed to it.

Secondly, of course, this is a whole next generation of technologies. With all new generations of technologies there are always generation gaps and people in denial or who just don’t feel up to picking it up again or maybe they lack the energy to pick up a new wave of technology and they’re like ‘Why can’t I stay in what I’ve mastered?’ All very understandable. I would call that a very typical IT generation gap that occurs when we see the next generation of IT emerge—sooner or later you get a generation gap, as well. Which has nothing to do with physical age, by the way.

With all these technologies converging so quickly, that gap is going to have to close quickly this time around isn’t it?

Well, there are still mainframes around, so you could argue that there will be two or even three speeds of IT sooner or later. A very stable, robust and predictable legacy environment could even be the first platform that’s more mainframe-oriented, like you see today. A second wave would be that PC workstation, client/server, Internet-based IT landscape, and it has a certain base and certain dynamics. Then you have this third phase, which is the new platform, that is more dynamic and volatile and much more diverse. You could argue that there might be within an organization multiple speeds of IT, multiple speeds of architectures, multi-speed solutioning, and why not choose your own speed?

For many enterprises, it probably takes a decade or more to really move forward.

It’s not going as quickly as the Gartners of this world typically think it is—in practice we all know it takes longer. So I can see why certain people would deliberately choose to stay in second gear and not go to third gear, simply because they think it’s challenging to be there, which is perfectly sound to me, and it would give companies a lot of work for many years to come.

That’s an interesting concept because start-ups can easily begin on a new platform but if you’re a company that has been around for a long time and you have existing legacy systems from the mainframe or PC era, those are things that you have to maintain. How do you tackle that as well?

That’s a given in big enterprises. Not everybody can be a disruptive start up. Maybe we all think that we should be like that but it’s not the case in real life. In real life, we have to deal with enterprise systems and enterprise processes and all of them might be very vulnerable to this new wave of challenges. Certainly enterprises can be disruptive themselves if they do it right, but there are always different dynamics, and, as I said, we still have mainframes, as well, even though we declared their ending quite some time ago. The same will happen, of course, to PC-based IT landscapes. It will take a very long time and will take very skilled hands and minds to keep it going and to simplify.

Having said that, you could argue that some new players in the market obviously have the advantage of not having to deal with that and could possibly benefit from a first-mover advantage where existing enterprises have to juggle several balls at the same time. Maybe that’s more difficult, but of course enterprises are enterprises for a good reason—they are big and holistic and mighty, and they might be able to do things that start-ups simply can’t do. But it’s a very unpredictable world, as we all realize, and the third platform brings a lot of disruptiveness.

What’s your perspective on how the Internet of Things will affect all of this?

It’s part of the third platform of course, and it’s something Andras Szakal will be addressing as well. There’s much more coming, both on the input side—everything is becoming a sensor, essentially, to where even your wallpaper or paint is a sensor—and, on the other hand, in terms of the devices that we use to communicate or get information—smart things that whisper in your ears or whatever we’ll have in the coming years. That’s clearly part of this Platform 3.0 wave as we move away from the PC and the workstation, and there’s a whole bunch of new technologies around to replace them. The Internet of Things is clearly part of it, and we’ll need open standards as well, because there are so many different things and devices, and if you don’t create the right standards and platform services to deal with it, it will be a mess. It’s an integral part of the Platform 3.0 wave that we’re seeing.

What is the Open Platform 3.0 Forum going to be working on over the next few months?

Understanding what this Open Platform 3.0 actually means—I think the work we’ve seen so far in the Forum really sets the scene in terms of what it is, and the definitions are maturing. Andras will be adding his notion of the Internet of Things and looking at definitions of what it is exactly. Many people already intuitively have an image of it.

The second will be how we deliver value to the business—so the business scenarios are a crucial thing to consider, to see how applicable they are and how relevant they are to enterprises. The next thing to do will pertain to work that still needs to be done in The Open Group, as well. What would a new Open Platform 3.0 architecture look like? What are the platform services? What are the ones we can start working on right now? What are the most important business scenarios and what are the platform services that they will require? So architectural impacts, skills impacts, security impacts—as I said, there are very few areas in IT that are not touched by it. Even the new IT4IT Forum that will be launched in October, which is all about methodologies and lifecycle, will need to consider Agile, DevOps-related methodologies because that’s the rhythm and the pace that we’ve got to expect in this third platform. So that’s the rhythm of the working group—definitions, business scenarios, and then you start thinking about what the platform consists of and what types of services you need to create to support it, and hopefully by then we’ll have some open standards to help accelerate that thinking and help enterprises set a course for themselves. That’s our mission as The Open Group: to help facilitate that.

Ron Tolido is Senior Vice President and Chief Technology Officer of Application Services Continental Europe, Capgemini. He is also a Director on the board of The Open Group and blogger for Capgemini’s multiple award-winning CTO blog, as well as the lead author of Capgemini’s TechnoVision and the global Application Landscape Reports. As a noted Digital Transformation ambassador, Tolido speaks and writes about IT strategy, innovation, applications and architecture. Based in the Netherlands, Mr. Tolido currently takes interest in apps rationalization, Cloud, enterprise mobility, the power of open, Slow Tech, process technologies, the Internet of Things, Design Thinking and – above all – radical simplification.



The Open Group Panel: Internet of Things – Opportunities and Obstacles

Below is the transcript of The Open Group podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data.

Listen to the podcast.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series, coming to you in conjunction with the recent The Open Group Boston 2014 conference, held July 21 in Boston.

I’m Dana Gardner, principal analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow.

We’re going to now specifically delve into the Internet of Things with a panel of experts. The conference has examined how Open Platform 3.0™ leverages the combined impacts of cloud, big data, mobile, and social. But to each of these now we can add a new cresting wave of complexity and scale as we consider the rapid explosion of new devices, sensors, and myriad endpoints that will be connected using internet protocols, standards and architectural frameworks.

This means more data, more cloud connectivity and management, and an additional tier of “things” that are going to be part of the mobile edge — and extending that mobile edge ever deeper into even our own bodies.

When we think about inputs to these social networks — that’s going to increase as well. Not only will people be tweeting, your device could very well tweet, too — using social networks to communicate. Perhaps your toaster will soon be sending you a tweet about your English muffins being ready each morning.

The Internet of Things is more than the “things” – it means a higher order of software platforms. For example, if we are going to operate data centers with new dexterity thanks to software-defined networking (SDN) and storage (SDS) — indeed the entire data center being software-defined (SDDC) — then why not a software-defined automobile, or factory floor, or hospital operating room — or even a software-defined city block or neighborhood?

And so how does this all actually work? Does it easily spin out of control? Or does it remain under proper management and governance? Do we have unknown unknowns about what to expect with this new level of complexity, scale, and volume of input devices?

Will architectures arise that support the numbers involved, interoperability, and provide governance for the Internet of Things — rather than just letting each type of device do its own thing?

To help answer some of these questions, The Open Group assembled a distinguished panel to explore the practical implications and limits of the Internet of Things. So please join me in welcoming Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC, and a primary representative to the Industrial Internet Consortium; Penelope Gordon, Emerging Technology Strategist at 1Plug Corporation; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM, and Dave Lounsbury, Chief Technical Officer at The Open Group.

Jean-Francois, we have heard about this notion of “cities as platforms,” and I think the public sector might offer us some opportunity to look at what is going to happen with the Internet of Things, and then extrapolate from that to understand what might happen in the private sector.

Hypothetically, the public sector has a lot to gain. It doesn’t have to go through the same confines of a commercial market development, profit motive, and that sort of thing. Tell us a little bit about what the opportunity is in the public sector for smart cities.

Jean-Francois Barsoum: It’s immense. The first thing I want to do is link to something that Marshall Van Alstyne (Professor at Boston University and Researcher at MIT) had talked about, because I was thinking about his way of approaching platforms and thinking about how cities represent an example of that.

You don’t have customers; you have citizens. Cities are starting to see themselves as platforms, as ways to communicate with their customers, their citizens, to get information from them and to communicate back to them. But the complexity with cities is that, as good a platform as they could be, they’re relatively rigid. They’re legislated into existence and what they’re responsible for is written into law. It’s not really a market.

Chris Harding (Forum Director of The Open Group Open Platform 3.0) earlier mentioned, for example, water and traffic management. Cities could benefit greatly by managing traffic a lot better.

Part of the issue is that you might have a state or provincial government that looks after highways. You might have the central part of the city that looks after arterial networks. You might have a borough that would look after residential streets, and these different platforms end up not talking to each other.

They gather their own data. They put in their own widgets to collect information that concerns them, but do not necessarily share with their neighbor. One of the conditions that Marshall said would favor the emergence of a platform had to do with how much overlap there would be in your constituents and your customers. In this case, there’s perfect overlap. It’s the same citizen, but they have to carry an Android and an iPhone, despite the fact it is not the best way of dealing with the situation.

The complexities are proportional to the amount of benefit you could get if you could solve them.

Gardner: So more interoperability issues?

Barsoum: Yes.

More hurdles

Gardner: More hurdles, and when you say commensurate, you’re saying that the opportunity is huge, but the hurdles are huge and we’re not quite sure how this is going to unfold.

Barsoum: That’s right.

Gardner: Let’s go to an area where the opportunity outstrips the challenge: manufacturing. Said, what is the opportunity for the software-defined factory floor for recognizing huge efficiencies and applying algorithmic benefits to how management occurs across domains of supply chain, distribution, and logistics? It seems to me that this is a no-brainer. It’s such an opportunity that the solution must be found.

Said Tabet: When it comes to manufacturing, the opportunities are probably much bigger. It’s where we can see a lot of progress that has already been done and still work is going on. There are two ways to look at it.

One is the internal side of it, where you have improvements of business processes. For example, similar to what Jean-Francois said, in a lot of the larger companies that have factories all around the world, you’ll see such improvements on a factory base level. You still have those silos at that level.

Now with this new technology, with this connectedness, those improvements are going to be made across factories, and there’s a learning aspect to it in terms of trying to manage that data. In fact, they do a better job. We still have to deal with interoperability, of course, and additional issues that could be jurisdictional, etc.

However, there is that learning that allows them to improve their processes across factories. Maintenance is one of them, as well as creating new products, and connecting better with their customers. We can see a lot of examples in the marketplace. I won’t mention names, but there are lots of them out there with the large manufacturers.

Gardner: We’ve had just-in-time manufacturing and lean processes for quite some time, trying to compress the supply chain and distribution networks, but these haven’t necessarily been done through public networks, the internet, or standardized approaches.

But if we’re to benefit, we’re going to need to be able to be platform companies, not just product companies. How do you go from being a proprietary set of manufacturing protocols and approaches to this wider, standardized interoperability architecture?

Tabet: That’s a very good question, because now we’re talking about that connection to the customer. With the airline and the jet engine manufacturer, for example, when the plane lands and there has been some monitoring of the activity during the whole flight, at that moment, they’ll get that data made available. There could be improvements and maybe solutions available as soon as the plane lands.

Interoperability

That requires interoperability. It requires Platform 3.0 for example. If you don’t have open platforms, then you’ll deal with the same hurdles in terms of proprietary technologies and integration in a silo-based manner.

Gardner: Penelope, you’ve been writing about the obstacles to decision-making that might become apparent as big data becomes more prolific and people try to capture all the data about all the processes and analyze it. That’s a little bit of a departure from the way we’ve made decisions in organizations, public and private, in the past.

Of course, one of the bigger tenets of the Internet of Things is all this great data that will be available to us from so many different points. Is there a conundrum of some sort? Is there an unknown obstacle for how we, as organizations and individuals, can deal with that data? Is this going to be chaos, or is this going to deliver all the promises many organizations have led us to believe around big data and the Internet of Things?

Penelope Gordon: It’s something that has just been accelerated. This is not a new problem in terms of the decision-making styles not matching the inputs that are being provided into the decision-making process.

Former US President Bill Clinton was known for delaying making decisions. He’s a head-type decision-maker and so he would always want more data and more data. That just gets into a never-ending loop, because as people collect data for him, there is always more data that you can collect, particularly on the quantitative side. Whereas, if it is distilled down and presented very succinctly and then balanced with the qualitative, that allows intuition to come to fore, and you can make optimal decisions in that fashion.

Conversely, if you have someone who is a heart-type or gut-type decision-maker and you present them with a lot of data, their first response is to ignore the data. It’s just too much for them to take in. Then you end up completely going with whatever you feel is correct or whatever you have that instinct that it’s the correct decision. If you’re talking about strategic decisions, where you’re making a decision that’s going to influence your direction five years down the road, that could be a very wrong decision to make, a very expensive decision, and as you said, it could be chaos.

It brings to mind Dr. Seuss’s The Cat in the Hat with Thing One and Thing Two. So, as we talk about the Internet of Things, we need to keep in mind that we need to have some sort of structure that we are tying this back to, and an understanding of what we are trying to do with these things.

Gardner: Openness is important, and governance is essential. Then, we can start moving toward higher-order business platform benefits. But, so far, our panel has been a little bit cynical. We’ve heard that the opportunity and the challenges are commensurate in the public sector and that in manufacturing we’re moving into a whole new area of interoperability, when we think about reaching out to customers and having a boundary that is managed between internal processes and external communications.

And we’ve heard that an overload of data could become a very serious problem and that we might not get benefits from big data through the Internet of Things, but perhaps even stumble and have less quality of decisions.

So Dave Lounsbury of The Open Group, will the same level of standardization work? Do we need a new type of standards approach, a different type of framework, or is this a natural continuation of the path and course we have taken in the past?

Different level

Dave Lounsbury: We need to look at the problem at a different level than we institutionally think about an interoperability problem. The Internet of Things is riding two very powerful waves. One is Moore’s Law—these sensors, actuators, and networks get smaller and smaller. Now we can put Ethernet in a light switch, a tag, or something like that.

Also, Metcalfe’s Law that says that the value of all this connectivity goes up with the square of the number of connected points, and that applies to both the connection of the things but more importantly the connection of the data.
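Metcalfe’s Law, as Lounsbury invokes it, can be made concrete with a toy calculation. This sketch is my own illustration, not anything presented by the panel; it counts the potential pairwise connections among n endpoints, n(n-1)/2, which grows on the order of n²:

```python
def metcalfe_value(n: int) -> int:
    """Number of potential pairwise connections among n endpoints.

    Metcalfe's Law holds that network value scales roughly with this
    quantity, i.e. on the order of n squared.
    """
    return n * (n - 1) // 2

# Doubling the endpoints roughly quadruples the potential connections.
print(metcalfe_value(10))   # 45
print(metcalfe_value(20))   # 190
```

The same arithmetic applies whether the endpoints are devices or data sources, which is why connecting the data, and not just the things, is where the value compounds.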

The trouble is, as we have said, that there’s so much data here. The question is how do you manage it and how do you keep control over it so that you actually get business value from it. That’s going to require us to have this new concept of a platform, not only to connect the data, but to aggregate it, correlate it as you said, and present it in ways that people can make decisions however they want.

Also, because of the raw volume, we have to start thinking about machine agency. We have to think about the system actually making the routine decisions or giving advice to the humans who are actually doing it. Those are important parts of the solution beyond just a simple “How do we connect all the stuff together?”

Gardner: We might need a higher order of intelligence, now that we have reached this border of what we can do with our conventional approaches to data, information, and process.

Thinking about where this works best first in order to then understand where it might end up later, I was intrigued again this morning by Professor Van Alstyne. He mentioned that in healthcare, we should expect major battles, that there is a turf element to this, that the organization, entity or even commercial corporation that controls and manages certain types of information and access to that information might have some very serious platform benefits.

The openness element now is something to look at, and I’ll come back to the public sector. Is there a degree of openness that we could legislate or regulate to require enough control to prevent the next generation of lock-in, which might not be to a platform but to access to data, information and endpoints? Where in the public sector might we look to a leadership position to establish needed openness and not just interoperability?

Barsoum: I’m not even sure where to start answering that question. To take healthcare as an example, I certainly didn’t write the bible on healthcare IT systems and if someone did write that, I think they really need to publish it quickly.

We have a single-payer system in Canada, and you would think that would be relatively easy to manage. There is one entity that manages paying the doctors, and everybody gets covered the same way. Therefore, the data should be easily shared among all the players and it should be easy for you to go from your doctor, to your oncologist, to whomever, and maybe to your pharmacy, so that everybody has access to this same information.

We don’t have that and we’re nowhere near having that. If I look to other areas in the public sector, areas where we’re beginning to solve the problem are ones where we face a crisis, and so we need to address that crisis rapidly.

Possibility of improvement

In the transportation infrastructure, we’re getting to that point where the infrastructure we have just doesn’t meet the needs. There’s a constraint in terms of money, and we can’t put much more money into the structure. Then, there are new technologies that are coming in. Chris had talked about driverless cars earlier. They’re essentially throwing a wrench into the works or may be offering the possibility of improvement.

On any given piece of infrastructure, you could fit twice as many driverless cars as cars with human drivers in them. Given that set of circumstances, the governments are going to find they have no choice but to share data in order to be able to manage those. Are there cases where we could go ahead of a crisis in order to manage it? I certainly hope so.

Gardner: How about allowing some of the natural forces of marketplaces, behavior, groups, maybe even chaos theory? If sufficient openness is maintained, some kind of pattern will emerge. We need to let this go through its paces, but if we have artificial barriers, that might be thwarted, or power could go to places that we would regret later.

Barsoum: I agree. People often focus on structure: the governance doesn't work, so we should find some way to change the governance of transportation. London has done a very good job of that. They've created something called Transport for London that manages everything related to transportation. It doesn't matter if it's taxis, bicycles, pedestrians, boats, cargo trains, or whatever; they manage it.

You could do that, but it requires a lot of political effort. The other way to go about doing it is saying, “I’m not going to mess with the structures. I’m just going to require you to open and share all your data.” So, you’re creating a new environment where the governance, the structures, don’t really matter so much anymore. Everybody shares the same data.

Gardner: Said, turning to the private-sector example of manufacturing, you still want to have a global fabric of manufacturing capabilities. This requires many partners to work in concert, but with a vast new amount of data and new potential for efficiency.

How do you expect that openness will emerge in the manufacturing sector? How will interoperability play when you don’t have to wait for legislation, but you do need to have cooperation and openness nonetheless?

Tabet: It comes back to the question you asked Dave about standards. I’ll just give you some examples. For example, in the automotive industry, there have been some activities in Europe around specific standards for communication.

The Europeans came to the US and started to have discussions, and the Japanese have interest, as well as the Chinese. Because there is a common interest in creating these new models from a business standpoint, that shows these challenges have to be dealt with together.

Managing complexity

When we talk about the amounts of data, what we now call big data, and what we are going to see in about five years or so, you can't even imagine it. How do we manage that complexity, which is multidimensional? We talked about this sort of platform, and further, the capability and the data that will be there. From that point of view, openness is the only way to go.

There's no way that we can stay away from it and still be able to work in silos in that new environment. There are lots of things that we take for granted today. I invite some of you to go back and read articles from 10 years ago that tried to predict the future of technology in the 21st century. Look at your smartphones. Adoption is there, because the business models are there, and we can see that progress moving forward.

Collaboration is a must, because this is multidimensional. It's not just manufacturing, like jet engines, car manufacturing, or agriculture, where you have very specific areas. They really have to work with their customers and the customers of their customers.

Gardner: Dave, I have a question for both you and Penelope. I've seen some instances where there has been a cooperative endeavor for accessing data and then making it available as a service, whether it's an API, a data set, access to a data library, or even an analytics application set. The Ocean Observatories Initiative is one example: it has created a sensor network across the oceans and makes the resulting data available.

Do you expect to see an intermediary organizational level that gets between the sensors and the consumers, or even the controllers of the processes? Is there a model inherent in that we might look to, something like that cooperative data structure, that in some ways creates structure and governance but also allows for freedom? It's the sort of entity that we don't have yet in many organizations or ecosystems and that needs to evolve.

Lounsbury: We're already seeing that in the marketplace. If you look at the commercial and social Internet of Things area, we're starting to see intermediaries or brokers cropping up that will connect the silo of my Android ecosystem to the ecosystem of package tracking, or something like that. There are dozens and dozens of these cropping up.

In fact, you now see APIs even into a silo of what you might consider a proprietary system, and what people are doing is building a layer on top of those APIs that intermediates the data.

This is happening on a point-to-point basis now, but you can easily see the path forward. That’s going to expand to large amounts of data that people will share through a third party. I can see this being a whole new emerging market much as what Google did for search. You could see that happening for the Internet of Things.

Gardner: Penelope, do you have any thoughts about how that would work? Is there a mutually assured benefit that would allow people to want to participate and cooperate with that third entity? Should they have governance and rules about good practices, best practices for that intermediary organization? Any thoughts about how data can be managed in this sort of hierarchical model?

Nothing new

Gordon: First, I’ll contradict it a little bit. To me, a lot of this is nothing new, particularly coming from a marketing strategy perspective, with business intelligence (BI). Having various types of intermediaries, who are not only collecting the data, but then doing what we call data hygiene, synthesis, and even correlation of the data has been around for a long time.

It was interesting, when I looked at a recent listing of big-data companies, that some notable companies were excluded from that list, companies like Nielsen. Nielsen has been collecting data for a long time. Harte-Hanks is another one that collects a tremendous amount of information and sells it to companies.

That leads into another part of it. We're seeing an increasing amount of opportunity that involves taking public sources of data and then providing synthesis on it. What remains to be seen is how much of the output of that is going to be provided for "free", as opposed to "fee". We're going to see a lot more companies figuring out creative ways of extracting more value out of data and then charging directly for it, rather than using it as an indirect way of generating traffic.

Gardner: We've seen examples of how this has been in place. Does it scale, and does the governance, or lack of governance, that might be in the market now sustain us through the transition into Platform 3.0 and the Internet of Things?

Gordon: That aspect leads on from "you get what you pay for." If you're using a free source of data, you have no guarantee that it comes from authoritative sources. Often, what we're getting now is something somebody put in a blog post, which then gets referenced elsewhere, but there was nothing to go back to. It's a shaky supply chain for data.

You need to think about the data supply, and that is where the governance comes in. Having standards is going to become increasingly important, unless we really address a lot of the data illiteracy that we have. A lot of people do not understand how to analyze data.

One aspect of that is that a lot of people expect we have to do full-population surveys, as opposed to representative sampling, which gives much more accurate and much more cost-effective collection of data. That's just one example, and we do need a lot more in governance and standards.
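Gordon's point about sampling versus full-population collection can be illustrated with a quick sketch (the population and fault rate here are hypothetical, invented purely for illustration): a modest random sample estimates a population-wide proportion to within a couple of percentage points, at a tiny fraction of the collection cost.

```python
import random

random.seed(42)

# Hypothetical "full population": 1,000,000 device readings,
# roughly 30% of which report a fault condition.
population = [1 if random.random() < 0.30 else 0 for _ in range(1_000_000)]

# Full-population survey: exact, but you had to collect everything.
true_rate = sum(population) / len(population)

# Representative sample: 2,000 randomly chosen readings.
sample = random.sample(population, 2_000)
sample_rate = sum(sample) / len(sample)

print(f"population fault rate: {true_rate:.3f}")
print(f"sample estimate:       {sample_rate:.3f}")  # typically within ~0.02
```

The sample is 0.2% of the population, yet its estimate lands close to the true rate, which is the trade-off Gordon describes: accuracy that is good enough for the decision, at far lower collection cost.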

Gardner: What would you like to see changed most in order for the benefits and rewards of the Internet of Things to develop and overcome the drawbacks, the risks, the downside? What, in your opinion, would you like to see happen to make this a positive, rapid outcome? Let’s start with you Jean-Francois.

Barsoum: There are things that I have seen cities start to do now. There are a couple of examples: Philadelphia is one, and Barcelona does this too. Rather than doing the typical request for proposal (RFP), where they say, "This is the kind of solution we're looking for, and here are our parameters. Can you tell us how much it is going to cost to build?" they come to you with the problem and say, "Here is the problem I want to fix. Here are my priorities. You're at liberty to decide how best to fix the problem, but tell us how much that would cost."

If you do that and you combine it with access to the public data that is available — if public sector opens up its data — you end up with a very powerful combination that liberates a lot of creativity. You can create a lot of new business models. We need to see much more of that. That’s where I would start.

More education

Tabet: I agree with Jean-Francois on that. What I’d like to add is that I think we need to push the relation a little further. We need more education, to your point earlier, around the data and the capabilities.

We need these platforms that we can leverage a little bit further with the analytics, with machine learning, and with all of these capabilities that are out there. We have to also remember, when we talk about the Internet of Things, it is things talking to each other.

So it is not just human-to-machine communication. Machine-to-machine automation will go further than that, and we need more innovation and more work in this area, particularly more activity from governments. We've seen some, but it is a little bit frail right now.

Gardner: Dave Lounsbury, thoughts about what needs to happen in order to keep this on track?

Lounsbury: We've touched on a lot of them already. Thank you for mentioning the machine-to-machine part, because there are plenty of projections showing that it's going to be the dominant form of Internet communication, probably within the next four years.

So we need to start thinking about that and moving beyond our traditional models of humans talking through interfaces to a set of services. We need to identify the building blocks of capability that you need to manage, not only the information flow and the skilled person who is going to produce it, but also how you manage the machine-to-machine interactions.

Gordon: I'd like to see not so much focus on data management, but focus on what the data is managing and helping us to do. Focusing on machine-to-machine and the devices is great, but the focus should not be on the devices or the machines. It should be on what they can accomplish by communicating, what you can accomplish with the devices, and then reverse-engineer from that.

Gardner: Let's go to some questions from the audience. The first one asks about a higher order of intelligence, which we mentioned earlier. It could be artificial intelligence, perhaps, but they ask whether that's really the issue. Is the nature of the data substantially different, or are we just creating more of the same, so that it is a storage, plumbing, and processing problem? What, if anything, is lacking in our current analytics capabilities that is holding us back from exploiting the Internet of Things?

Gordon: I’ve definitely seen that. That has a lot to do with not setting your decision objectives and your decision criteria ahead of time so that you end up collecting a whole bunch of data, and the important data gets lost in the mix. There is a term “data smog.”

Most important

The solution is to figure out, before you go collecting data, what data is most important to you. If you can’t collect certain kinds of data that are important to you directly, then think about how to indirectly collect that data and how to get proxies. But don’t try to go and collect all the data for that. Narrow in on what is going to be most important and most representative of what you’re trying to accomplish.

Gardner: Does anyone want to add to this idea of understanding what current analytics capabilities are lacking, if we have to adopt and absorb the Internet of Things?

Barsoum: There is one element around projection into the future. We’ve been very good at analyzing historical information to understand what’s been happening in the past. We need to become better at projecting into the future, and obviously we’ve been doing that for some time already.

But so many variables are changing. Just to take the driverless car as an example. We’ve been collecting data from loop detectors, radar detectors, and even Bluetooth antennas to understand how traffic moves in the city. But we need to think harder about what that means and how we understand the city of tomorrow is going to work. That requires more thinking about the data, a little bit like what Penelope mentioned, how we interpret that, and how we push that out into the future.

Lounsbury: I have to agree with both. It's not about statistics. We can use historical data. It helps with a lot of things, but one of the major issues we still deal with today is the question of semantics, the meaning of the data. This goes back to your point, Penelope, around the relevance and the context of that information: how you get what you need when you need it, so you can make the right decisions.

Gardner: Our last question from the audience goes back to Jean-Francois’s comments about the Canadian healthcare system. I imagine it applies to almost any healthcare system around the world. But it asks why interoperability is so difficult to achieve, when we have the power of the purse, that is the market. We also supposedly have the power of the legislation and regulation. You would think between one or the other or both that interoperability, because the stakes are so high, would happen. What’s holding it up?

Barsoum: There are a couple of reasons. One, in the particular case of healthcare, is privacy, but that is one that you could see going elsewhere. As soon as you talk about interoperability in the health sector, people start wondering where is their data going to go and how accessible is it going to be and to whom.

You need to put a certain number of controls over top of that. What is happening in parallel is that you have people who own some data, who believe they have some power from owning that data, and that they will lose that power if they share it. That can come from doctors, hospitals, anywhere.

So there’s a certain amount of change management you have to get beyond. Everybody has to focus on the welfare of the patient. They have to understand that there has to be a priority, but you also have to understand the welfare of the different stakeholders in the system and make sure that you do not forget about them, because if you forget about them they will find some way to slow you down.

Use of an ecosystem

Lounsbury: To me, that's a perfect example of what Marshall Van Alstyne talked about this morning: the change from a focus on product to a focus on an ecosystem. Healthcare traditionally has been very focused on a doctor providing a product to a patient, or a caregiver providing a product to a patient. Now we're actually starting to see that the only way we're able to do this is through the use of an ecosystem.

That's a hard transition. It's a business-model transition. I will put in a plug here for The Open Group Healthcare vertical, which is looking at that from an architecture perspective. I see that our Forum Director Jason Lee is over here, so if you want to explore that more, please see him.

Gardner: I’m afraid we will have to leave it there. We’ve been discussing the practical implications of the Internet of Things and how it is now set to add a new dimension to Open Platform 3.0 and Boundaryless Information Flow.

We've heard how new thinking about interoperability will be needed to extract the value and orchestrate order out of the chaos, with such vast new scales of inputs and whole new categories of information.

So with that, a big thank you to our guests: Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC; Penelope Gordon, Emerging Technology Strategist at 1Plug Corp.; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM, and Dave Lounsbury, Chief Technology Officer at The Open Group.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow at The Open Group Conference, recently held in Boston. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript.

Transcript of The Open Group podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2014. All rights reserved.
