
A Tale of Two IT Departments, or How Governance is Essential in the Hybrid Cloud and Bimodal IT Era

Transcript of an Open Group discussion/podcast on the role of Cloud Governance and Enterprise Architecture and how they work together in the era of increasingly fragmented IT.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Sponsor: The Open Group

Dana Gardner: Hello, and welcome to a special Thought Leadership Panel Discussion, coming to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator as we examine the role that Cloud Governance and Enterprise Architecture play in an era of increasingly fragmented IT.

Not only are IT organizations dealing with so-called shadow IT and myriad proof-of-concept affairs, there is now a strong rationale for fostering what Gartner calls Bimodal IT. There’s a strong case to be made for exploiting the strengths of several different flavors of IT, except that — at the same time — businesses are asking IT in total to be faster, better, and cheaper.

The topic before us today is how to allow for the benefits of Bimodal IT or even Multimodal IT, but without IT fragmentation leading to a fractured and even broken business.

Here to update us on the work of The Open Group Cloud Governance initiatives and working groups and to further explore the ways that companies can better manage and thrive with hybrid IT are our guests. We’re here today with Dr. Chris Harding, Director for Interoperability and Director of the Cloud Computing Forum at The Open Group. Welcome, Chris.

Dr. Chris Harding: Thank you, Dana. It’s great to be here.

Gardner: We’re also here with David Janson, Executive IT Architect and Business Solutions Professional with the IBM Industry Solutions Team for Central and Eastern Europe and a leading contributor to The Open Group Cloud Governance Project. Welcome, David.

David Janson: Thank you. Glad to be here.

Gardner: Lastly, we’re here with Nadhan, HP Distinguished Technologist and Cloud Advisor and Co-Chairman of The Open Group Cloud Governance Project. Welcome, Nadhan.

Nadhan: Thank you, Dana. It’s a pleasure to be here.

IT trends

Gardner: Before we get into an update on The Open Group Cloud Governance Initiatives, in many ways over the past decades IT has always been somewhat fragmented. Very few companies have been able to keep all their IT oars rowing in the same direction, if you will. But today things seem to be changing so rapidly that we seem to acknowledge that some degree of disparate IT methods are necessary. We might even think of old IT and new IT, and this may even be desirable.

But what are the trends that are driving this need for a Multimodal IT? What’s accelerating the need for different types of IT, and how can we think about retaining a common governance, and even a frameworks-driven enterprise architecture umbrella, over these IT elements?

Nadhan: Basically, the change that we’re going through is really driven by the business. Business today has much more rapid access to the services that IT has traditionally provided. Business has a need to react to its own customers in a much more agile manner than it was traditionally used to.

We now have to react to demands where we’re talking days and weeks instead of months and years. Businesses today have a choice. Business units are no longer dependent on the traditional IT to avail themselves of the services provided. Instead, they can go out and use the services that are available external to the enterprise.

To a great extent, the advent of social media has also resulted in direct customer feedback on the sentiment from the external customer that businesses need to react to. That is actually changing the timelines. It is requiring IT to be delivered at the pace of business. And the very definition of IT is undergoing a change, where we need to have the right paradigm, the right technology, and the right solution for the right business function and therefore the right application.

Since the choices have increased with the new style of IT, the manner in which you pair them up, the solutions with the problems, has also significantly changed. With more choices come more such pairings of which solution is right for which problem. That’s really what has caused the change that we’re going through.

A change of this magnitude requires governance that builds on the traditional governance that was always in play, and it requires elements like cloud to have governance that is more specific to cloud-based solutions, across the whole lifecycle of cloud solution deployment.

Gardner: David, do you agree that this seems to be a natural evolution, based on business requirements, that we basically spin out different types of IT within the same organization to address some of these issues around agility? Or is this perhaps a bad thing, something that’s unnatural and should be avoided?

Janson: In many ways, this follows a repeating pattern we’ve seen with other kinds of transformations in business and IT. Not to diminish the specifics about what we’re looking at today, but I think there are some repeating patterns here.

There are new disruptive events that compete with the status quo, with the things that have been optimized, proven, and settled into a consistent groove, so the old and the new compete with each other. Excitement about the new value that can be produced by new approaches generates momentum, and so far this actually sounds like a healthy state of vitality.

Good governance

However, one of the challenges is that the excitement potentially can lead to overlooking other important factors, and that’s where I think good governance practices can help.

For example, governance helps remind people about important durable principles that should be guiding their decisions, important considerations that we don’t want to forget or under-appreciate as we roll through stages of change and transformation.

At the same time, governance practices need to evolve so that they can adapt to new things that fit into the governance framework. What are those things, and how do we govern them? So governance itself needs to evolve.

There is a pattern here with some specific things that are new today, but there is a repeating pattern as well, something we can learn from.

Gardner: Chris Harding, is there a built-in capability with cloud governance that anticipates some of these issues around different styles or flavors or even velocity of IT innovation that can then allow for that innovation and experimentation, but then keep it all under the same umbrella with a common management and visibility?

Harding: There are a number of forces at play here, and there are three separate trends that we’ve seen, or at least that I have observed, in discussions with members within The Open Group that relate to this.

The first is one that Nadhan mentioned, the possibility of outsourcing IT. I remember a member’s meeting a few years ago, when one of our members who worked for a company that was starting a cloud brokerage activity happened to mention that two major clients were going to do away with their IT departments completely and just go for cloud brokerage. You could see the jaws drop around the table, particularly with the representatives who were from company corporate IT departments.

Of course, cloud brokers haven’t taken over from corporate IT, but there has been that trend towards things moving out of the enterprise to bring in IT services from elsewhere.

That’s all very well to do that, but from a governance perspective, you may have an easy life if you outsource all of your IT to a broker somewhere, but if you fail to comply with regulations, the broker won’t go to jail; you will go to jail.

So you need to make sure that you retain control at the governance level over what is happening from the point of view of compliance. You probably also want to make sure that your architecture principles are followed and retain governance control to enable that to happen. That’s the first trend and the governance implication of it.

In response to that, a second trend that we see is that IT departments have reacted often by becoming quite like brokers themselves — providing services, maybe providing hybrid cloud services or private cloud services within the enterprise, or maybe sourcing cloud services from outside. So that’s a way that IT has moved in the past and maybe still is moving.

Third trend

The third trend that we’re seeing in some cases is that multi-discipline teams within line of business divisions, including both business people and technical people, address the business problems. This is the way that some companies are addressing the need to be on top of the technology in order to innovate at a business level. That is an interesting and, I think, a very healthy development.

So maybe, yes, we are seeing a bimodal splitting in IT between the traditional IT and the more flexible and agile IT, but maybe you could say that that second part belongs really in the line of business departments, rather than in the IT departments. That’s at least how I see it.

Nadhan: I’d like to build on a point that David made earlier about repeating patterns. I can relate to that very well within The Open Group, speaking about the Cloud Governance Project. Truth be told, as we continue to evolve the content in cloud governance, some of the seeding content actually came from the SOA Governance Project that The Open Group worked on a few years back. So the point David made about the repeating patterns resonates very well with that particular case in mind.

Gardner: So we’ve been through this before. When there is change and disruption, sometimes it’s required for a new version of methodologies and best practices to emerge, perhaps even associated with specific technologies. Then, over time, we see that folded back in to IT in general, or maybe it’s pushed back out into the business, as Chris alluded to.

My question, though, is how we make sure that these don’t become disruptive and negative influences over time. Maybe governance and enterprise architecture principles can prevent that. So is there something about the cloud governance, which I think really anticipates a hybrid model, particularly a cloud hybrid model, that would be germane and appropriate for a hybrid IT environment?

David Janson, is there a cloud governance benefit in managing hybrid IT?

Janson: There most definitely is. I tend to think that hybrid IT is probably where we’re headed; my editorial comment is that it’s an unavoidable direction we’re going in. Part of the reason I say that is I think there’s a repeating pattern here of new approaches, new ways of doing things, coming into the picture.

And then some balancing act goes on, where people look at more traditional ways versus the new approaches people are talking about, and eventually they look at the strengths and weaknesses of both.

There’s going to be some disruption, but that’s not necessarily bad. That’s how we drive change and transformation. What we’re really talking about is making sure the amount of disruption is not so counterproductive that it actually moves things backward instead of forward.

I don’t mind a little bit of disruption. The governance processes that we’re talking about, good governance practices, have an overall life cycle that things move through. As you apply governance at each point in that life cycle, you’re looking at the particular decision points and actions that are going to happen and making sure that those decisions and actions are well informed.

We sometimes say that governance helps us do the right things right. So governance helps people know what the right things are, and then the right way to do those things.

Bimodal IT

Also, we can measure how well people are actually adapting to those “right things” to do. What’s “right” can vary over time, because we have disruptive change. What we are talking about with Bimodal IT is one example.

Within a narrower time frame in the process lifecycle, there are points across that time frame that involve particular decisions and actions. Governance makes sure that people are well informed, as they’re rolling through that, about important things they shouldn’t forget. It’s very easy to forget key things and optimize for only one factor, and governance helps people remember that.

Also, we check to see whether we’re getting the benefits that people expected out of it, coming back around and looking afterward to see whether we accomplished what we thought we would or got off in the wrong direction. So it’s a bit like a steering or feedback mechanism, in that it helps keep the car on the road rather than drifting onto the soft shoulder. Did we overlook something important? Governance is key to making this all successful.

Gardner: Let’s return to The Open Group’s upcoming conference on July 20 in Baltimore and also learn a bit more about what the Cloud Governance Project has been up to. I think that will help us better understand how cloud governance relates to these hybrid IT issues that we’ve been discussing.

Nadhan, you are the co-chairman of the Cloud Governance Project. Tell us about what to expect in Baltimore with the concepts of Boundaryless Information Flow™, and then also perhaps an update on what the Cloud Governance Project has been up to.

Nadhan: Absolutely, Dana. When the Cloud Governance Project started, the first question we challenged ourselves with was, what is it and why do we need it, especially given that SOA governance, architecture governance, IT governance, enterprise governance, in general are all out there with frameworks? We actually detailed out the landscape with different standards and then identified the niche or the domain that cloud governance addresses.

After that, we went through and identified the top five principles that matter for cloud governance to be done right. Some of the obvious ones are that cloud is a business decision, and that the governance exercise should keep in mind whether it is the right business decision to go to the cloud rather than just jumping on the bandwagon. Those are just some examples of the foundational principles that drive how cloud governance must be established and exercised.

Subsequent to that, we have a lifecycle for cloud governance defined and then we have gone through the process of detailing it out by identifying and decoupling the governance process and the process that is actually governed.

So there is this concept of process pairs that we have going, where we’ve identified key processes, key process pairs, whether it be planning, architecture, reusing a cloud service, subscribing to it, unsubscribing, retiring, and so on. These are some of the defining milestones in the life cycle.

We’ve actually put together a template for identifying and detailing these process pairs, and the template has an outline of the process that is being governed, the key phases that the governance goes through, the desirable business outcomes that we would expect because of the cloud governance, as well as the associated metrics and the key roles.
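To make that template concrete, here is a minimal sketch of how one such process pair might be recorded. This is purely illustrative; the field names and example values are assumptions for this article, not content taken from the Cloud Governance work itself.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessPair:
    """Illustrative record for a Cloud Governance process pair:
    the process being governed plus how it is governed."""
    governed_process: str          # the process being governed
    governance_phases: List[str]   # key phases the governance goes through
    desired_outcomes: List[str]    # business outcomes expected from the governance
    metrics: List[str]             # how progress toward those outcomes is measured
    key_roles: List[str]           # roles involved in exercising the governance

# Hypothetical example for the "subscribe to a cloud service" milestone
subscribe_pair = ProcessPair(
    governed_process="Subscribe to a cloud service",
    governance_phases=["Evaluate", "Approve", "Monitor"],
    desired_outcomes=["Service matches a validated business need",
                      "Regulatory compliance stays with the enterprise"],
    metrics=["Time from request to approval", "Compliance exceptions per quarter"],
    key_roles=["Business owner", "Enterprise architect", "Cloud governance board"],
)
```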

Real-life solution

The Cloud Governance Framework is actually detailing each one. Where we are right now is looking at a real-life solution. It could be a hypothetical or an actual business scenario, but the idea is to help the reader digest the concepts outlined in the context of a scenario where such governance is exercised. That’s where we are on the Cloud Governance Project.

Let me take the opportunity to invite everyone to be part of the project to continue it by subscribing to the right mailing list for cloud governance within The Open Group.

Gardner: Thank you. Chris Harding, just for the benefit of our readers and listeners who might not be that familiar with The Open Group, perhaps you could give us a very quick overview of The Open Group — its mission, its charter, what we could expect at the Baltimore conference, and why people should get involved, either directly by attending, or following it on social media or the other avenues that The Open Group provides on its website?

Harding: Thank you, Dana. The Open Group is a vendor-neutral consortium whose vision is Boundaryless Information Flow. That is to say the idea that information should be available to people within an enterprise, or indeed within an ecosystem of enterprises, as and when needed, not locked away into silos.

We hold main conferences, quarterly conferences, four times a year and also regional conferences in various parts of the world in between those, and we discuss a variety of topics.

In fact, the main topics for the conference that we will be holding in July in Baltimore are enterprise architecture and risk and security. Architecture and security are two of the key things for which The Open Group is known. Enterprise Architecture, particularly with its TOGAF® Framework, is perhaps what The Open Group is best known for.

We’ve been active in a number of other areas, and risk and security is one. We also have started a new vertical activity on healthcare, and there will be a track on that at the Baltimore conference.

There will be tracks on other topics too, including four sessions on Open Platform 3.0™. Open Platform 3.0 is The Open Group initiative to address how enterprises can gain value from new technologies, including cloud computing, social computing, mobile computing, big data analysis, and the Internet of Things.

We’ll have a number of presentations related to that. These will include, in fact, a perspective on cloud governance, although that will not necessarily reflect what is happening in the Cloud Governance Project. Until an Open Group standard is published, there is no official Open Group position on the topic, and members will present their views at conferences. So we’re including a presentation on that.

Lifecycle governance

There is also a presentation on another interesting governance topic, which is on Information Lifecycle Governance. We have a panel session on the business context for Open Platform 3.0 and a number of other presentations on particular topics, for example, relating to the new technologies that Open Platform 3.0 will help enterprises to use.

There’s always a lot going on at Open Group conferences, and that’s a brief flavor of what will happen at this one.

Gardner: Thank you. And I’d just add that there is more available at The Open Group website, opengroup.org.

Going to one thing you mentioned about a standard and publishing that standard — and I’ll throw this out to any of our guests today — is there a roadmap that we could look to in order to anticipate the next steps or milestones in the Cloud Governance Project? When would such a standard emerge and when might we expect it?

Nadhan: As I said earlier, the next step is to identify the business scenario and apply it. I’m expecting, with the right level of participation, that it will take another quarter, after which it would go through the internal review with The Open Group and the company reviews for the publication of the standard. Assuming we have that in another quarter, Chris, could you please weigh in on what it usually takes, on average, for those reviews before it gets published?

Harding: You could add on another quarter. It shouldn’t actually take that long, but we do have a thorough review process. All members of The Open Group are invited to participate. The document is posted for comment for, I would think, four weeks, after which we review the comments and decide what action needs to be taken.

Certainly, it could take only two months to complete the overall publication of the standard from the draft being completed, but it’s safer to say about a quarter.

Gardner: So a real important working document could be available in the second half of 2015. Let’s now go back to why a cloud governance document and approach is important when we consider the implications of Bimodal or Multimodal IT.

One of the things that Gartner says is that Bimodal IT projects require new project management styles. They didn’t say project management products. They didn’t say downloads or services from a cloud provider. We’re talking about styles.

So it seems to me that, in order to prevent the good aspects of Bimodal IT from being overridden by the negative impacts of chaos and the lack of coordination, we’re not talking about a product or a download; we’re talking about something that a working group and a standards approach like the Cloud Governance Project can accommodate.

David, why is it that you can’t buy this in a box or download it as a product? What is it that we need to look at in terms of governance across Bimodal IT, and why is that appropriate for a style? Maybe IT people need to think differently, rather than trying to accomplish this through technology alone?

First question

Janson: When I think of anything like a tool or a piece of software, the first question I tend to have is what is that helping me do, because the tool itself generally is not the be-all and end-all of this. What process is this going to help me carry out?

So, before I would think about tools, I want to step back and think about what changes to project-related processes the new approaches require. Then, secondly, think about how tools can help me speed up, automate, or make those a little bit more reliable.

It’s an easy thing to think about a tool that may have some process-related aspects embedded in it as sort of some kind of a magic wand that’s going to automatically make everything work well, but it’s the processes that the tool could enable that are really the important decision. Then, the tools simply help to carry that out more effectively, more reliably, and more consistently.

We’ve always seen an evolution in the processes we use for developing solutions, as well as in the tools. Changes in technology require tools to adapt. And as the processes we use get more agile, we want to be more incremental and see rapid turnarounds in how we’re developing things. Tools need to evolve with that.

But I’d really start out from a governance standpoint, challenging the idea of the change itself: if we’re going to make a change, how do we know that it’s really an appropriate one, and how do we differentiate this change from just reinventing the wheel? Is this an innovation that really makes a difference and isn’t just change for the sake of change?

Governance helps people challenge their thinking and make sure that it’s actually a worthwhile step to take to make those adaptations in project-related processes.

Once you’ve settled on some decisions about evolving those processes, then we’ll start looking for tools that help you automate, accelerate, and make consistent and more reliable what those processes are.

I tend to start with the process and think of the technology second, rather than the other way around. That’s where governance can help remind people of the principles we want to think about. Are you putting the cart before the horse? It helps people challenge their thinking a little bit to be sure they’re really going in the right direction.

Gardner: Of course, a lot of what you just mentioned pertains to enterprise architecture generally as well.

Nadhan, when we think about Bimodal or Multimodal IT, this to me is going to be very variable from company to company, given their legacy, given their existing style, the rate of adoption of cloud or other software as a service (SaaS), agile, or DevOps types of methods. So this isn’t something that’s going to be a cookie-cutter. It really needs to be looked at company by company and timeline by timeline.

Is this a vehicle for professional services, for management consulting, more than for IT and product? What is the relationship between cloud governance, Bimodal IT, and professional services?

Delineating systems

Nadhan: It’s a great question Dana. Let me characterize Bimodal IT slightly differently, before answering the question. Another way to look at Bimodal IT, where we are today, is delineating systems of record and systems of engagement.

In traditional IT, typically, we’re looking at the systems of record, while the systems of engagement, with social media and so on, are in the live interaction. Those define the continuously evolving, growing-by-the-second systems of engagement, which results in the need for big data, security, and definitely the cloud and so on.

The coexistence of both of these paradigms requires the right move to the cloud for the right reason. So even though they are the systems of record, some, if not most, do need to get transformed to the cloud, but that doesn’t mean all systems of engagement eventually get transformed to the cloud.

There are good reasons why you may actually want to leave certain systems of engagement the way they are. The art really is in combining the historical data that the systems of record have with the continual influx of data that we get through the live channels of social media, and then, using the right level of predictive analytics to get information.

I said a lot in there just to characterize Bimodal IT slightly differently, making the point that what is really at play, Dana, is a new style of thinking. It’s a new style of addressing the problems that have been around for a while.

A new way to address the same problems, with new solutions and a new way of coming up with solution models, would address the business problems at hand. That requires an external perspective. It requires service providers and consulting professionals who have worked with multiple customers, perhaps other customers in the same industry, and other industries, with a healthy dose of innovation.

That’s where there is a new opportunity for professional services to work with the CxOs, the enterprise architects, and the CIOs to exercise the right business decisions with the right level of governance.

Because of the challenges with the coexistence of both systems of record and systems of engagement and harvesting the right information to make the right business decision, there is a significant opportunity for consulting services to be provided to enterprises today.

Drilling down

Gardner: Before we close off I wanted to just drill down on one thing, Nadhan, that you brought up, which is that ability to measure and know and then analyze and compare.

One of the things that we’ve seen with IT developing over the past several years as well is that the big data capabilities have been applied to all the information coming out of IT systems so that we can develop a steady state and understand those systems of record, how they are performing, and compare and contrast in ways that we couldn’t have before.

So on our last topic for today, David Janson, how important is that measuring capability in a governance context, and for organizations that want to pursue Bimodal IT but keep it governed and keep it from spinning out of control? What should they be thinking about putting in place in terms of the proper big data, analytics, measurement, and visibility apparatus and capabilities?

Janson: That’s a really good question. One aspect of this is that, when I talk with people about the ideas around governance, it’s not unusual that the first idea people have about governance is the compliance or policing aspect that governance can play. That sounds like interference, sand in the gears, but it really should be the other way around.

A governance framework should actually make it very clear how people should be doing things, what’s expected as the result at the end, and how things are checked and measured across time at early stages and later stages, so that people are very clear about how things are carried out and what they are expected to do. So, if someone does use a governance-compliance process to see if things are working right, there is no surprise, there is no slowdown. They actually know how to quickly move through that.

Good governance has communicated that well enough, so that people should actually move faster rather than slower. In other words, there should be no surprises.

Measuring things is very important, because if you haven’t established the objectives that you’re after and some metrics to help you determine whether you’re meeting those, then it’s kind of an empty suit, so to speak, with governance. You express some ideas that you want to achieve, but you have no way of knowing or answering the question of how we know if this is doing what we want to do. Metrics are very important around this.

We capture metrics within processes. Then, for the end result, is it actually producing the effects people want? That’s pretty important.

One of the things that we have built into the Cloud Governance Framework is some idea about what are the outcomes and the metrics that each of these process pairs should have in mind. It helps to answer the question, how do we know? How do we know if something is doing what we expect? That’s very, very essential.

Gardner: I am afraid we’ll have to leave it there. We’ve been examining the role of cloud governance and enterprise architecture and how they work together in the era of increasingly fragmented IT. And we’ve seen how The Open Group Cloud Governance Initiatives and Working Groups can help allow for the benefits of Bimodal IT, but without necessarily IT fragmentation leading to a fractured or broken business process around technology and innovation.

This special Thought Leadership Panel Discussion comes to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. And it’s not too late to register on The Open Group’s website or to follow the proceedings online and via social media such as Twitter, LinkedIn and Facebook.

So, thank you to our guests today. We’ve been joined by Dr. Chris Harding, Director for Interoperability and Director of the Cloud Computing Forum at The Open Group; David Janson, Executive IT Architect and Business Solutions Professional with the IBM Industry Solutions Team for Central and Eastern Europe and a leading contributor to The Open Group Cloud Governance Project; and Nadhan, HP Distinguished Technologist and Cloud Advisor and Co-Chairman of The Open Group Cloud Governance Project.

And a big thank you, too, to our audience for joining this special Open Group-sponsored discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this thought leadership panel discussion series. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android.

Sponsor: The Open Group

Transcript of an Open Group discussion/podcast on the role of Cloud Governance and Enterprise Architecture and how they work together in the era of increasingly fragmented IT. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Join the conversation! @theopengroup #ogchat #ogBWI




A World Without IT4IT: Why It’s Time to Run IT Like a Business

By Dave Lounsbury, CTO, The Open Group

IT departments today are under enormous pressure. In the digital world, businesses have become dependent on IT to help them remain competitive. However, traditional IT departments have their roots in skills such as development or operations and have not been set up to handle a business and technology environment that is trying to rapidly adapt to a constantly changing marketplace. As a result, many IT departments today may be headed for a crisis.

At one time, IT departments led technology adoption in support of business. Once a new technology was created—departmental servers, for instance—it took a relatively long time before businesses took advantage of it and even longer before they became dependent on the technology. But once a business did adopt the technology, it became subject to business rules—expectations and parameters for reliability, maintenance and upgrades that kept the technology up to date and allowed the business it supported to keep up with the market.

As IT became more entrenched in organizations throughout the 1980s and 1990s, IT systems increased in size and scope as technology companies fought to keep pace with market forces. In large enterprises, in particular, IT’s function became to maintain large infrastructures, requiring small armies of IT workers to sustain them.

A number of forces have combined to change all that. Today, most businesses do their business operations digitally—what Constellation Research analyst Andy Mulholland calls “Front Office Digital Business.” Technology-as-a-service models have changed how the technologies and applications are delivered and supported, with support and upgrades coming from outsourced vendors, not in-house staff. With Cloud models, an IT department may not even be necessary. Entrepreneurs can spin up a company with a swipe of a credit card and have all the technology they need at their fingertips, hosted remotely in the Cloud.

The Gulf between IT and Business

Although the gap between IT and business is closing, the gulf in how IT is run still remains. In structure, most IT departments today remain close to their technology roots. This is, in part, because IT departments are still run by technologists and engineers whose primary skills lie in the challenge (and excitement) of creating new technologies. Not every skilled engineer makes a good businessperson, but in most organizations, people who are good at their jobs often get promoted into management whether or not they are ready to manage. The Peter Principle is a problem that hinders many organizations, not just IT departments.

What has happened is that IT departments have not traditionally been run as if they were a business. Good business models for how IT should be run have been piecemeal or slow to develop—despite IT’s role in how the rest of the business is run. Although some standards have been developed as guides for how different parts of IT should be run (COBIT for governance, ITIL for service management, TOGAF®, an Open Group standard, for architecture), no overarching standard has been developed that encompasses how to holistically manage all of IT, from systems administration to development to management through governance and, of course, staffing. For all its advances, IT has yet to become a well-oiled business machine.

The business—and technological—climate today is not the same as it was when companies took three years to do a software upgrade. Everything in today’s climate happens nearly instantaneously. “Convergence” technologies like Cloud Computing, Big Data, social media, mobile and the Internet of Things are changing the nature of IT. New technical skills and methodologies are emerging every day as well. Although languages such as Java or C may remain the top programming languages, new languages like Pig or Hive are emerging every day, as are new approaches to development, such as Scrum, Agile or DevOps.

The Consequences of IT Business as Usual

With these various forces facing IT, departments will either need to change and adopt a model where IT is managed more effectively, or they may face impending chaos that ends up hindering their organizations.

Without an effective management model for IT, companies won’t be able to mobilize quickly for a digital age. Even something as simple as an inability to utilize data could result in problems such as investing in a product prototype that customers aren’t interested in. Those are mistakes most companies can’t afford to make these days.

Having an umbrella view of what all of IT does also allows the department to make better decisions. With technology and development trends changing so quickly, how do you know what will fit your organization’s business goals? You want to take advantage of the trends or technologies that make sense for the company and leave behind those that don’t.

For example, in DevOps, one of the core concepts is to bring the development phase into closer alignment with releasing and operating the software. You need to know your business’s operating model to determine whether this approach will actually work or not. Having a sense of that also allows IT to make decisions about whether it’s wise to invest in training or hiring staff skilled in those methods or buying new technologies that will allow you to adopt the model.

Not having that management view can leave companies subject to the whims of technological evolution and also to current IT fads. If you don’t know what’s valuable to your business, you run the risk of chasing every new fad that comes along. There’s nothing worse—as the IT guy—than being the person who comes to the management meeting each month saying you’re trying yet another new approach to solve a problem that never seems to get solved. Business people won’t respond to that and will wonder if you know what you’re doing. IT needs to be decisive and choose wisely.

These issues not only affect the IT department but also trickle up to business operations. Ineffective IT shops will not know when to invest in the correct technologies, and they may miss out on working with new technologies that could benefit the business. Without a framework to plan how technology fits into the business, you could end up in the position of having great IT bows and arrows, but when you walk out into the competitive world, you get machine-gunned.

The other side is cost and efficiency—if the entire IT shop isn’t running smoothly throughout then you end up spending too much money on problems, which in turn takes money away from other parts of the business that can keep the organization competitive. Failing to manage IT can lead to competitive loss across numerous areas within a business.

A New Business Model

To help prevent the consequences that may result if IT isn’t run more like a business, industry leaders such as Accenture, Achmea, AT&T, HP IT, ING Bank, Munich RE, PwC, Royal Dutch Shell, and the University of South Florida recently formed a consortium to address how to better run the business of IT. With billions of dollars invested in IT each year, these companies realized their investments must be made wisely and show governable results in order to succeed.

The result of their efforts is The Open Group IT4IT™ Forum, which released a Snapshot of its proposed Reference Architecture for running IT more like a business this past November. The Reference Architecture is meant to serve as an operating model for IT, providing the “missing link” that previous IT-function-specific models have failed to address. The model allows IT to achieve the same level of business discipline, predictability and efficiency as other business functions.

The Snapshot includes a four-phase Value Chain for IT that provides both an operating model for an IT business and outlines how value can be added at every stage of the IT process. In addition to providing suggested best practices for delivery, the Snapshot includes technical models for the IT tools that organizations can use, whether for systems monitoring, release monitoring or IT point solutions. Providing guidance around IT tools will allow these tools to become more interoperable so that they can exchange information at the right place at the right time. In addition, it will allow for better control of information flow between various parts of the business through the IT shop, thus saving IT departments the time and hassle of aggregating tools or cobbling together their own tools and solutions. Staffing guidance models are also included in the Reference Architecture.

Why IT4IT now? Digitalization cannot be held back, particularly in an era of Cloud, Big Data and an impending Internet of Things. An IT4IT Reference Architecture provides more than just best practices for IT—it puts IT in the context of a business model that allows IT to be a contributing part of an enterprise, providing a roadmap for digital businesses to compete and thrive for years to come.

Join the conversation! @theopengroup #ogchat

David Lounsbury is Chief Technical Officer (CTO) and Vice President, Services for The Open Group. As CTO, he ensures that The Open Group’s people and IT resources are effectively used to implement the organization’s strategy and mission. As VP of Services, David leads the delivery of The Open Group’s proven processes for collaboration and certification, both within the organization and in support of third-party consortia.

David holds a degree in Electrical Engineering from Worcester Polytechnic Institute, and is holder of three U.S. patents.



Open FAIR Certification for People Program

By Jim Hietala, VP Security, and Andrew Josey, Director of Standards, The Open Group

In this, the final installment of this Open FAIR blog series, we will look at the Open FAIR Certification for People program.

In early 2012, The Open Group Security Forum began exploring the idea of creating a certification program for Risk Analysts. Discussions with large enterprises regarding their risk analysis programs led us to the conclusion that there was a need for a professional certification program for Risk Analysts. In addition, Risk Analyst professionals and Open FAIR practitioners expressed interest in a certification program. Security and risk training organizations also expressed interest in providing training courses based upon the Open FAIR standards and Body of Knowledge.

The Open FAIR People Certification Program was designed to meet the requirements of employers and risk professionals. The certification program is a knowledge-based certification, testing candidates’ knowledge of the two standards, O-RA and O-RT. Candidates are free to acquire their knowledge through self-study or to take a course from an accredited training organization. The program currently has a single level (Foundation), with a more advanced certification level (Certified) planned for 2015.

Several resources are available from The Open Group to assist Risk Analysts preparing to sit for the exam, including the following:

  • Open FAIR Pocket Guide
  • Open FAIR Study Guide
  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

All of these can be downloaded from The Open Group publications catalog at http://www.opengroup.org/bookstore/catalog.
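As a rough illustration of the factor decomposition that the Risk Taxonomy (O-RT) standard listed above describes, the sketch below multiplies out the top-level FAIR factors. The numbers are invented for illustration only, and a real Open FAIR analysis works with calibrated ranges and distributions rather than single point estimates.

```python
# Minimal sketch of the top-level FAIR decomposition: risk is driven by how
# often loss events occur and how much each one costs. All values are assumed.
threat_event_frequency = 10.0  # threat events per year (assumption)
vulnerability = 0.3            # probability a threat event becomes a loss event (assumption)
loss_magnitude = 50_000.0      # average loss per loss event, in dollars (assumption)

loss_event_frequency = threat_event_frequency * vulnerability
annualized_loss_exposure = loss_event_frequency * loss_magnitude

print(f"Loss event frequency: {loss_event_frequency:.1f} events/year")
print(f"Annualized loss exposure: ${annualized_loss_exposure:,.0f}")
```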

For training organizations, The Open Group accredits organizations wishing to offer training courses on Open FAIR. Testing of candidates is offered through Prometric test centers worldwide.

For more information on Open FAIR certification or accreditation, please contact us at: openfair-cert-auth@opengroup.org

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

 

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.

 

 

 



The Business of Managing IT: The Open Group IT4IT™ Forum

By The Open Group

At The Open Group London 2014 event in October, the launch of The Open Group IT4IT™ Forum was announced. The goal of the new Forum is to create a Reference Architecture and standard that will allow IT departments to take a more holistic approach to managing the business of IT with continuous insight and control, enabling Boundaryless Information Flow™ across the IT Value Chain.

We recently spoke to Forum member Charlie Betz, Founder, Digital Management Academy, LLC, about the new Forum, its origins and why it’s time for IT to be managed as if it were a business in itself.

As IT has become more central to organizations, its role has changed drastically from the days when companies had one large mainframe or just a few PCs. For many organizations today, particularly large enterprises, IT is becoming a business within the business.

The problem with most IT departments, though, is that IT has never really been run as if it was a business.

In order for IT to better cope with rapid technological change and become more efficient at transitioning to the service-based model that most businesses today require, IT departments need guidance as to how the business of IT can be run. What’s at stake are things such as how to better manage IT at scale, how to understand IT as a value chain in its own right and how organizations can get better visibility into the vast amount of economic activity that’s currently characterized in organizations through technology.

The Open Group’s latest Forum aims to do just that.

The Case for IT Management

In the age of digital transformation, IT has become an integral part of how business is done. So says Charlie Betz, one of the founding members of the IT4IT Forum. From the software in your car to the supply chain that brings you your bananas, IT has become an irreplaceable component of how things work.

Quoting industry luminary Marc Andreessen, Betz says “software is eating the world.” Similarly, Betz says, IT management is actually beginning to eat management, too. Although this might seem laughable, we have become increasingly dependent on computing systems in our everyday lives. With that dependence comes significant concerns about the complexity of those systems and the potential they carry for chaotic behaviors. Therefore, he says, as technology becomes pervasive, how IT is managed will increasingly dictate how businesses are managed.

“If IT is increasing in its proportion of all product management, and all markets are increasingly dependent on managing IT, then understanding pure IT management becomes critically important not just for IT but for all business management,” Betz says.

According to Betz, the conversation about running the business of IT has been going on in the industry for a number of years under the guise of ideas such as “enterprise resource planning for IT” and the like. Ultimately, though, Betz says managing IT comes down to determining what IT’s value chain is and how to deliver on it.

Betz compares modern IT departments to atoms, cells and bits, where atoms represent hardware, including servers, data centers and networks; cells represent people; and bits are represented by software. In this analogy, these three things comprise the fundamental resources that an IT department manages. When reduced to economic terms, Betz says, what is currently lacking in most IT departments is a sense of how much things are worth, what the total costs are for acquisition and maintenance for capabilities and the supply and demand dynamics for IT services.

For example, in traditional IT management, workloads are defined by projects, tickets and also a middle ground characterized by work that is smaller than a project and larger than a ticket, Betz says. Often IT departments lack an understanding of how the three relate to each other and how they affect resources—particularly in the form of people—which becomes problematic because there is no holistic view of what the department is doing. Without that aggregate view, management is not only difficult but nearly impossible.

Betz says that to get a grasp on the whole, IT needs to take a cue from the lean management movement and first understand where the work originates and what its nature is, so activities and processes don’t continue to proliferate without being managed.

Betz believes part of the reason IT has not better managed itself to date is because the level of complexity within IT has grown so quickly. He likens it to the frog in the boiling water metaphor—if the heat is turned up incrementally, the frog doesn’t know what’s hit him until it’s too late.

“Back when you had one computer it just wasn’t a concern,” he said. “You had very few systems that you were automating. It’s not that way nowadays. You have thousands of them. The application portfolio in major enterprises—depending on how you count applications, which is not an easy question in and of itself—the range is between 5000-10,000 applications. One hundred thousand servers is not unheard of. These are massive numbers, and the complexity is unimaginable. The potential for emergent chaotic behavior is unprecedented in human technological development.”

Betz believes the reason there is a perception that IT is poorly managed is also because IT is at the cutting edge of every management question in business today. And because no one has ever dealt with systems and issues this complex before, it’s difficult to get a handle on them, which is why the time has come to create a framework for how IT can be managed.

IT4IT

The IT4IT Forum grew out of a joint initiative that was originally undertaken by Royal Dutch Shell and HP. The initiative began as a high-level user group within HP, and companies such as Accenture, Achmea, Munich RE and PwC have also been integral in pulling together the initial work that has been provided to The Open Group to create the Forum. As the group began to develop a framework, it became clear that what they were developing needed to become an open standard, Betz says, so the group turned to The Open Group.

“It was pretty clear that The Open Group was the best fit for this,” he says. “There was clearly recognition and understanding on the part of The Open Group senior staff that this was a huge opportunity. They were very positive about it from the get-go.”

Currently in development, the IT4IT standard will provide guidance and specifications for how IT departments can provide consistent end-to-end service across the IT Value Chain and lifecycle. The IT Value Chain is meant to provide a model for managing the IT services life cycle and for how those services can be brokered with enterprises. By providing IT the same level of functionality as other critical business functions (such as finance or HR), the standard enables IT to achieve better levels of predictability and efficiency.


Betz says developing a Reference Architecture for IT4IT will be helpful for IT departments because it will provide a tested model for departments to begin the process of better management. And having that model be created by a vendor-neutral consortium helps provide credibility for users because no one company is profiting from it.

“It’s the community telling itself a story of what it wants to be,” he said.

The Reference Architecture will not only include prescriptive methods for how to design, procure and implement the functionality necessary to better manage IT departments but will also include real-world use cases related to current industry trends such as Cloud-sourcing, Agile, Dev-Ops and service brokering. As an open standard, it will also be designed to work with existing industry standards that IT departments may already be using including ITIL®, CoBIT®, SAFe® and TOGAF®, an Open Group standard.

With almost 200 pages of material already developed toward a standard, Betz says the Forum released its initial Snapshot of the standard in late November. From there, the Forum will need to decide which sections should be included as normative parts of the standard. The hope is to have the first version of the IT4IT Reference Architecture standard available next summer, Betz says.

For more on The Open Group IT4IT Forum or to become a member, please visit http://www.opengroup.org/IT4IT.

 



The Open Group London 2014 – Day Two Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

Despite gusts of 70mph hitting the capital on Day Two of this year’s London event, attendees were not disheartened as October 21 kicked off with an introduction from The Open Group President and CEO Allen Brown. He provided a recap of The Open Group’s achievements over the last quarter including successful events in Bratislava, Slovakia and Kuala Lumpur, Malaysia. Allen also cited some impressive membership figures, with The Open Group now boasting 468 member organizations across 39 countries with the latest member coming from Nigeria.

Dave Lounsbury, VP and CTO at The Open Group then introduced the panel debate of the day on The Open Group Open Platform 3.0™ and Enterprise Architecture, with participants Ron Tolido, SVP and CTO, Applications Continental Europe, Capgemini; Andras Szakal, VP and CTO, IBM U.S. Federal IMT; and TJ Virdi, Senior Enterprise IT Architect, The Boeing Company.

After a discussion around the definition of Open Platform 3.0, the participants debated the potential impact of the Platform on Enterprise Architecture. Tolido noted that there has been an explosion of solutions, typically with a much shorter life cycle. While we’re not going to be able to solve every single problem with Open Platform 3.0, we can work towards that end goal by documenting its requirements and collecting suitable case studies.

Discussions then moved towards the theme of machine-to-machine (M2M) learning, a key part of the Open Platform 3.0 revolution. TJ Virdi cited Gartner figures suggesting that by 2017 machines will be learning more than processing, an especially interesting notion when it comes to the manufacturing industry, according to Szakal. There are three different areas where manufacturing is affected by M2M: new business opportunities, business optimization and operational optimization. With the products themselves now effectively becoming platforms and tools for communication, they become intelligent things and attract others in turn.

Panel: Ron Tolido, Andras Szakal, TJ Virdi, Dave Lounsbury

Henry Franken, CEO at BizzDesign, went on to lead the morning session on the Pitfalls of Strategic Alignment, announcing the results of an expansive survey into the development and implementation of a strategy. Key findings from the survey include:

  • SWOT Analysis and Business Cases are the most often used strategy techniques to support the strategy process – many others, including the Confrontation Matrix as an example, are now rarely used
  • Organizations continue to struggle with the strategy process, and most do not see strategy development and strategy implementation intertwined as a single strategy process
  • 64% indicated that stakeholders had conflicting priorities regarding reaching strategic goals, which can make it very difficult for a strategy to gain momentum
  • The majority of respondents believed the main constraint to strategic alignment to be the unknown impact of the strategy on the employees, followed by the majority of the organization not understanding the strategy

The wide-ranging afternoon tracks kicked off with sessions on Risk, Enterprise in the Cloud and ArchiMate®, an Open Group standard. Key speakers included Ryan Jones of Blackthorn Technologies, Marc Walker of British Telecom, James Osborn of KPMG, Anitha Parameswaran of Unilever and Ryan Betts of VoltDB.

To take another look at the day’s plenary or track sessions, please visit The Open Group on livestream.com.

The day ended in style with an evening reception amid the Victorian architecture of the Victoria & Albert Museum, along with a private viewing of the newly opened John Constable exhibition.

Victoria & Albert Museum

A special mention must go to Terry Blevins who, after years of hard work and commitment to The Open Group, was made a Fellow at this year’s event. Many congratulations to Terry – and here’s to another successful day tomorrow.

Join the conversation! #ogchat #ogLON

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog and media relations. Loren has over 20 years’ experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.

Comments Off on The Open Group London 2014 – Day Two Highlights

Filed under ArchiMate®, Boundaryless Information Flow™, Business Architecture, Cloud, Enterprise Architecture, Enterprise Transformation, Internet of Things, Open Platform 3.0, Professional Development, Uncategorized

The Open Group London 2014: Open Platform 3.0™ Panel Preview with Capgemini’s Ron Tolido

By The Open Group

The third wave of platform technologies is poised to revolutionize how companies do business not only for the next few years but for years to come. At The Open Group London event in October, Open Group CTO Dave Lounsbury will be hosting a panel discussion on how The Open Group Open Platform 3.0™ will affect Enterprise Architectures. Panel speakers include IBM Vice President and CTO of U.S. Federal IMT Andras Szakal and Capgemini Senior Vice President and CTO for Application Services Ron Tolido.

We spoke with Tolido in advance of the event about the progress companies are making in implementing third platform technologies, the challenges facing the industry as Open Platform 3.0 evolves and the call to action he envisions for The Open Group as these technologies take hold in the marketplace.

Below is a transcript of that conversation.

From my perspective, we have to realize: What is the call to action that we should have for ourselves? If we look at the mission of Boundaryless Information Flow™ and the need for open standards to accommodate that, what exactly can The Open Group and any general open standards do to facilitate this next wave in IT? I think it’s nothing less than a revolution. The first platform was the mainframe, the second platform was the PC and now the third platform is anything beyond the PC, so all sorts of different devices, sensors and ways to access information, to deploy solutions and to connect. What does it mean in terms of Boundaryless Information Flow and what is the role of open standards to make that platform succeed and help companies to thrive in such a new world?

That’s the type of call to action I’m envisioning. And I believe there are very few Forums or Work Groups within The Open Group that are not affected by this notion of the third platform. Firstly, I believe an important part of the Open Platform 3.0 Forum’s mission will be to analyze, to understand, the impacts of the third platform, of all those different areas that we’re evolving currently in The Open Group, and, if you like, orchestrate them a bit or be a catalyst in all the working groups and forums.

In a blog you wrote this summer for Capgemini’s CTO Blog, you cited third platform technologies as being responsible for a renewed interest in IT as an enabler of business growth. What is it about the third platform that is driving that interest?

It’s the same type of revolution as we’ve seen with the PC, which was the second platform. A lot of people in business units—through the PC and client/server technologies and Windows and all of these different things—realized that they could create solutions of a whole new order. The second platform meant many more applications, many more uses, much more business value to be achieved and less direct dependence on the central IT department. I think we’re seeing a very similar evolution right now, but the essence of the move is not that it takes us even further away from central IT; it’s that it puts the power of technology right in the business. It’s much easier to create solutions. Nowadays, there are many more channels that are so close to the business that it takes business people to understand them. This also explains why business people like the third platform so much—it’s the Cloud, it’s mobile, social, it’s big data; all of these are waves that bring technology closer to the business, and they are easy to use, with very apparent business value that we haven’t seen before, certainly not in the PC era. So we’re seeing a next wave, almost a revolution, in terms of how easy it is to create solutions and how widely spread these solutions can be. Because again, as with the PC, it’s many more applications yet again and many more potential uses that can be connected through these applications, so that’s the very nature of the revolution, and that also explains why business people like the third platform so much. So what people say to me these days on the business side is ‘We love IT, it’s just these bloody IT people that are the problem.’

Due to the complexities of building the next wave of platform computing, do you think that we may hit a point of fatigue as companies begin to tackle everything that is involved in creating that platform and making it work together?

The way I see it, that’s still the work of the IT community and the Enterprise Architect and the platform designer. It’s the very nature of the platform that it’s attractive to use, not to build. The very nature of the platform is to connect to it and launch from it, but building the platform is an entirely different story. I think it requires platform designers and Enterprise Architects, if you like, and people to do the plumbing and do the architecting and the design underneath. But the real nature of the platform is to use it and to build upon it rather than to create it. So the happy view is that the “business people” don’t have to construct this.

I do believe, by the way, that many of the people in The Open Group will be on the side of the builders. They’re supposed to like complexity and like reducing it, so if we do it right the users of the platform will not notice this effort. It’s the same with the Cloud—the problem with the Cloud nowadays is that many people are tempted to run their own clouds, their own technologies, and before they know it, they only have additional complexity on their agenda, rather than reduced, because of the Cloud. It’s the same with the third platform—it’s a foundation which is almost a no-brainer to do business upon, for the next generation of business models. But if we do it wrong, we only have additional complexity on our hands, and we give IT a bad name yet again. We don’t want to do that.

What are Capgemini customers struggling with the most in terms of adopting these new technologies and putting together an Open Platform 3.0?

What you currently see—and it’s not always good to look at history—is similar to the emergence of the second platform, the PC. Of course there were years in which central IT said ‘nobody needs a PC, we can do it all on the mainframe.’ They just didn’t believe it, and business people just started to do it themselves. And for years, we created a mess as a result of it, and we’re still picking up some of the pieces of that situation. The question for IT people, in particular, is to understand how to find this new rhythm, how to adopt the dynamics of this third platform while dealing with all the complexity of the legacy platform that’s already there. I think The Open Group will be very critical in helping to accelerate the creation of such a platform—defining what exactly should be in the third platform, what types of services you should be developing, how these services would interact, and whether we could create a set of open standards that the industry could align to, so that we don’t have to do too much work integrating all that stuff. If we, as The Open Group, can create that industry momentum, that, at least, would narrow the gap between business and IT that we currently see. Right now IT is very clearly not able to deliver on the promise because they have their hands full with surviving the existing IT landscape, so unless they do something about simplifying it on the one hand and bridging that old world with the new one on the other, they might still be very unpopular in the forthcoming years. That’s not what you want as an IT person—you want to enable business and new business. But I don’t think we’ve been very effective with that for the past ten years as an industry in general, so that’s a big thing that we have to deal with, bridging the old world with the new world. But anything we can do from The Open Group to accelerate and simplify that job would be great, and I think that’s the very essence of where our actions would be.

What are some of the things that The Open Group, in particular, can do to help affect these changes?

To me it’s still in the evangelization phase. Sooner or later people have to buy it and say ‘We get it, we want it, give me access to the third platform.’ Then the question will be how to accelerate building such an actual platform. So the big question is: What does such a platform look like? What types of services would you find on such a platform? For example, mobility services, data services, integration services, management services, development services, all of that. What would that look like in a typical Platform 3.0? Maybe even define a catalog of services that you would find in the platform. Then, of course, if you could use such a catalog or shopping list, if you like, to reach out to the technology suppliers of this world and convince them to pick that up and gear around these definitions—that would facilitate such a platform. Also maybe the architectural roadmap—so what would an architecture look like and what would be the typical five ways of getting there? We have to start with your local situation, so probably also several design cases would be helpful, so there’s an architectural dimension here.
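Purely as an illustration of the “catalog of services” idea above — not an Open Group definition, and not something from the interview — here is a minimal sketch of how such a Platform 3.0 service catalog might be captured as a simple data structure; every category and entry below is a hypothetical placeholder.

# Hypothetical sketch of a Platform 3.0 service catalog (illustrative placeholders only,
# not Open Group definitions).
from typing import Optional

platform_catalog = {
    "mobility services": ["device registration", "push notification", "offline sync"],
    "data services": ["ingestion", "storage", "catalog and search", "streaming analytics"],
    "integration services": ["API gateway", "event broker", "identity federation"],
    "management services": ["monitoring", "metering and billing", "policy enforcement"],
    "development services": ["CI/CD pipeline", "sandbox provisioning", "SDKs"],
}

def find_category(service_name: str) -> Optional[str]:
    """Return the catalog category that offers a given service, if any."""
    for category, services in platform_catalog.items():
        if service_name in services:
            return category
    return None

print(find_category("event broker"))  # -> integration services

A real catalog would of course also carry interface definitions, service levels and ownership — exactly the kind of detail an open standard could pin down.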

Also, in terms of competencies, what type of competencies will we need in the near future to be able to supply these types of services to the business? That’s, again, very new—in this case, IT Specialist Certification and Architect Certification. These groups also need to think about what are the new competencies inherent in the third platform and how does it affect things like certification criteria and competency profiles?

In other areas, if you look at TOGAF®, an Open Group standard, is it really still suitable in the fast-paced world of the third platform, or do we need a third platform version of TOGAF? With Security, for example, there are so many users, so many connections, and the activities of the former Jericho Forum seem like child’s play compared to what you will see around the third platform, so there’s no Forum or Work Group that’s not affected by the emergence of Open Platform 3.0.

With Open Platform 3.0 touching pretty much every aspect of technology and The Open Group, how do you tackle that? Do you have just an umbrella group for everything or look at it through the lens of TOGAF or security or the IT Specialist? How do you attack something so large?

It’s exactly what you just said. It’s fundamentally my belief that we need to do both of these two things. First, we need a catalyst forum, which I would argue is the Open Platform 3.0 Forum, which would be the catalyst platform, the orchestration platform if you like, that would do the overall definitions, the call to action. They’ve already been doing the business scenarios—they set the scene. Then it would be up to this Forum to reach out to all the other Forums and Work Groups to discuss impact and make sure it stays aligned, so here we have an orchestration function of the Open Platform 3.0 Forum. Then, very obviously, all the other Work Groups and Forums need to pick it up and do their own stuff because you cannot aspire to do all of this with one and the same forum because it’s so wide, it’s so diverse. You need to do both.

The Open Platform 3.0 Forum has been working for a year and a half now. What are some of the things the Forum has accomplished thus far?

They’ve been particularly working on some of the key definitions and some of the business scenarios. I would say in order to create an awareness of Open Platform 3.0 in terms of the business value and the definitions, they’ve done a very good job. Next, there needs to be a call to action to get everybody mobilized and setting tangible steps toward the Platform 3.0. I think that’s currently where we are, so that’s good timing, I believe, in terms of what the forum has achieved so far.

Returning to the mission of The Open Group, given all of the awareness we have created, what does it all mean in terms of Boundaryless Information Flow and how does it affect the Forums and Work Groups in The Open Group? That’s what we need to do now.

What are some of the biggest challenges that you see facing adoption of Open Platform 3.0 and standards for that platform?

They are relatively immature technologies. For example, with the Cloud you see a lot of players, a lot of technology providers being quite reluctant to standardize. Some of them are very open about it and are like ‘Right now we are in a niche, and we’re having a lot of fun ourselves, so why open it up right now?’ What would move things forward is more pressure from the business side saying ‘We want to use your technology but only if you align with some of these emerging standards.’ That would do it, or certainly help. This, of course, is what makes The Open Group so powerful: it brings together not only technology providers, but also businesses, the enterprises involved and end users of technology. If they work together and create something to mobilize technology providers, that would certainly be a breakthrough, but these are immature technologies and, as I said, with some of these technology providers, it seems more important to them to be a niche player for now and create their own market rather than standardizing on something that their competitors could be on as well.

So this is a sign of a relatively immature industry because every industry that starts to mature around certain topics begins to work around open standards. The more mature we grow in mastering the understanding of the Open Platform 3.0, the more you will see the need for standards arise. It’s all a matter of timing so it’s not so strange that in the past year and a half it’s been very difficult to even discuss standards in this area. But I think we’re entering that era really soon, so it seems to be good timing to discuss it. That’s one important limiting area; I think the providers are not necessarily waiting for it or committed to it.

Secondly, of course, this is a whole next generation of technologies. With all new generations of technologies there are always generation gaps and people in denial or who just don’t feel up to picking it up again or maybe they lack the energy to pick up a new wave of technology and they’re like ‘Why can’t I stay in what I’ve mastered?’ All very understandable. I would call that a very typical IT generation gap that occurs when we see the next generation of IT emerge—sooner or later you get a generation gap, as well. Which has nothing to do with physical age, by the way.

With all these technologies converging so quickly, that gap is going to have to close quickly this time around, isn’t it?

Well, there are still mainframes around, so you could argue that there will be two or even three speeds of IT sooner or later. A very stable, robust and predictable legacy environment could even be the first platform that’s more mainframe-oriented, like you see today. A second wave would be that PC workstation, client/server, Internet-based IT landscape, and it has a certain base and certain dynamics. Then you have this third phase, which is the new platform, that is more dynamic and volatile and much more diverse. You could argue that there might be within an organization multiple speeds of IT, multiple speeds of architectures, multi-speed solutioning, and why not choose your own speed?

It probably takes a decade or more to really move forward for many enterprises.

It’s not going as quickly as the Gartners of this world typically think it is—in practice we all know it takes longer. So I don’t see any reason why certain people wouldn’t deliberately choose to stay in second gear and not go to third gear, simply because they think it’s too challenging to be there—which is perfectly sound to me, and it would bring companies a lot of work for many years.

That’s an interesting concept because start-ups can easily begin on a new platform but if you’re a company that has been around for a long time and you have existing legacy systems from the mainframe or PC era, those are things that you have to maintain. How do you tackle that as well?

That’s a given in big enterprises. Not everybody can be a disruptive start-up. Maybe we all think that we should be like that, but it’s not the case in real life. In real life, we have to deal with enterprise systems and enterprise processes, and all of them might be very vulnerable to this new wave of challenges. Certainly enterprises can be disruptive themselves if they do it right, but there are always different dynamics, and, as I said, we still have mainframes, as well, even though we declared their end quite some time ago. The same will happen, of course, to PC-based IT landscapes. It will take a very long time and very skilled hands and minds to keep them going and to simplify them.

Having said that, you could argue that some new players in the market obviously have the advantage of not having to deal with that and could possibly benefit from a first-mover advantage where existing enterprises have to juggle several balls at the same time. Maybe that’s more difficult, but of course enterprises are enterprises for a good reason—they are big and holistic and mighty, and they might be able to do things that start-ups simply can’t do. But it’s a very unpredictable world, as we all realize, and the third platform brings a lot of disruptiveness.

What’s your perspective on how the Internet of Things will affect all of this?

It’s part of the third platform of course, and it’s something Andras Szakal will be addressing as well. There’s much more coming, both on the input side—essentially everything is becoming a sensor, to the point where even your wallpaper or paint is a sensor—and in terms of the devices that we use to communicate or get information, smart things that whisper in your ear or whatever we’ll have in the coming years. All of this is clearly part of the Platform 3.0 wave as we move away from the PC and the workstation, with a whole bunch of new technologies around to replace them. The Internet of Things is clearly part of it, and we’ll need open standards as well because there are so many different things and devices, and if you don’t create the right standards and platform services to deal with it, it will be a mess. It’s an integral part of the Platform 3.0 wave that we’re seeing.

What is the Open Platform 3.0 Forum going to be working on over the next few months?

Understanding what this Open Platform 3.0 actually means—I think the work we’ve seen so far in the Forum really sets the way in terms of what it is, and the definitions are growing. Andras will be adding his notion of the Internet of Things and looking at definitions of what it is exactly. Many people already intuitively have an image of it.

The second will be how we deliver value to the business—so the business scenarios are a crucial thing to consider, to see how applicable they are, how relevant they are to enterprises. The next thing to do will pertain to work that still needs to be done in The Open Group, as well. What would a new Open Platform 3.0 architecture look like? What are the platform services? What are the ones we can start working on right now? What are the most important business scenarios and what are the platform services that they will require? So architectural impacts, skills impacts, security impacts—as I said, there are very few areas in IT that are not touched by it. Even the new IT4IT Forum that will be launched in October, which is all about methodologies and lifecycle, will need to consider Agile, DevOps-related methodologies because that’s the rhythm and the pace that we’ve got to expect in this third platform. So that’s the rhythm of the working group—definitions, business scenarios, and then you start thinking about what the platform consists of and what types of services you need to create to support it, and hopefully by then we’ll have some open standards to help accelerate that thinking and help enterprises set a course for themselves. That’s our mission as The Open Group: to help facilitate that.

Ron Tolido is Senior Vice President and Chief Technology Officer of Application Services Continental Europe, Capgemini. He is also a Director on the board of The Open Group and a blogger for Capgemini’s multiple award-winning CTO blog, as well as the lead author of Capgemini’s TechnoVision and the global Application Landscape Reports. As a noted Digital Transformation ambassador, Tolido speaks and writes about IT strategy, innovation, applications and architecture. Based in the Netherlands, Mr. Tolido currently takes interest in apps rationalization, Cloud, enterprise mobility, the power of open, Slow Tech, process technologies, the Internet of Things, Design Thinking and – above all – radical simplification.


Comments Off on The Open Group London 2014: Open Platform 3.0™ Panel Preview with Capgemini’s Ron Tolido

Filed under architecture, Boundaryless Information Flow™, Certifications, Cloud, digital technologies, Enterprise Architecture, Future Technologies, Information security, Internet of Things, Open Platform 3.0, Security, Service Oriented Architecture, Standards, TOGAF®, Uncategorized

The Open Group Panel: Internet of Things – Opportunities and Obstacles

Below is the transcript of The Open Group podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data.

Listen to the podcast.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with the recent The Open Group Boston 2014 conference, held on July 21 in Boston.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow.

We’re going to now specifically delve into the Internet of Things with a panel of experts. The conference has examined how Open Platform 3.0™ leverages the combined impacts of cloud, big data, mobile, and social. But to each of these now we can add a new cresting wave of complexity and scale as we consider the rapid explosion of new devices, sensors, and myriad endpoints that will be connected using internet protocols, standards and architectural frameworks.

This means more data, more cloud connectivity and management, and an additional tier of “things” that are going to be part of the mobile edge — and extending that mobile edge ever deeper into even our own bodies.

When we think about inputs to these social networks — those are going to increase as well. Not only will people be tweeting, your devices could very well tweet, too — using social networks to communicate. Perhaps your toaster will soon be sending you a tweet about your English muffins being ready each morning.

The Internet of Things is more than the “things” – it means a higher order of software platforms. For example, if we are going to operate data centers with new dexterity thanks to software-defined networking (SDN) and storage (SDS) — indeed the entire data center being software-defined (SDDC) — then why not a software-defined automobile, or factory floor, or hospital operating room — or even a software-defined city block or neighborhood?

And so how does this all actually work? Does it easily spin out of control? Or does it remain under proper management and governance? Do we have unknown unknowns about what to expect with this new level of complexity, scale, and volume of input devices?

Will architectures arise that support the numbers involved, ensure interoperability, and provide governance for the Internet of Things — rather than just letting each type of device do its own thing?

To help answer some of these questions, The Open Group assembled a distinguished panel to explore the practical implications and limits of the Internet of Things. So please join me in welcoming Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC, and a primary representative to the Industrial Internet Consortium; Penelope Gordon, Emerging Technology Strategist at 1Plug Corporation; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM, and Dave Lounsbury, Chief Technical Officer at The Open Group.

Jean-Francois, we have heard about this notion of “cities as platforms,” and I think the public sector might offer us some opportunity to look at what is going to happen with the Internet of Things, and then extrapolate from that to understand what might happen in the private sector.

Hypothetically, the public sector has a lot to gain. It doesn’t have to operate within the same confines of commercial market development, profit motive, and that sort of thing. Tell us a little bit about what the opportunity is in the public sector for smart cities.

Jean-Francois Barsoum: It’s immense. The first thing I want to do is link to something that Marshall Van Alstyne (Professor at Boston University and Researcher at MIT) had talked about, because I was thinking about his way of approaching platforms and thinking about how cities represent an example of that.

You don’t have customers; you have citizens. Cities are starting to see themselves as platforms, as ways to communicate with their customers, their citizens, to get information from them and to communicate back to them. But the complexity with cities is that, as good a platform as they could be, they’re relatively rigid. They’re legislated into existence and what they’re responsible for is written into law. It’s not really a market.

Chris Harding (Forum Director of The Open Group Open Platform 3.0) earlier mentioned, for example, water and traffic management. Cities could benefit greatly by managing traffic a lot better.

Part of the issue is that you might have a state or provincial government that looks after highways. You might have the central part of the city that looks after arterial networks. You might have a borough that would look after residential streets, and these different platforms end up not talking to each other.

They gather their own data. They put in their own widgets to collect information that concerns them, but do not necessarily share with their neighbor. One of the conditions that Marshall said would favor the emergence of a platform had to do with how much overlap there would be in your constituents and your customers. In this case, there’s perfect overlap. It’s the same citizen, but they have to carry an Android and an iPhone, despite the fact it is not the best way of dealing with the situation.

The complexities are proportional to the amount of benefit you could get if you could solve them.

Gardner: So more interoperability issues?

Barsoum: Yes.

More hurdles

Gardner: More hurdles, and when you say commensurate, you’re saying that the opportunity is huge, but the hurdles are huge and we’re not quite sure how this is going to unfold.

Barsoum: That’s right.

Gardner: Let’s go to an area where the opportunity outstrips the challenge, manufacturing. Said, what is the opportunity for the software-defined factory floor to realize huge efficiencies and apply algorithmic benefits to how management occurs across the domains of supply chain, distribution, and logistics? It seems to me that this is a no-brainer. It’s such an opportunity that the solution must be found.

Said Tabet: When it comes to manufacturing, the opportunities are probably much bigger. It’s where a lot of progress has already been made, and work is still going on. There are two ways to look at it.

One is the internal side of it, where you have improvements of business processes. For example, similar to what Jean-Francois said, in a lot of the larger companies that have factories all around the world, you’ll see such improvements at the individual factory level. You still have those silos at that level.

Now with this new technology, with this connectedness, those improvements are going to be made across factories, and there’s a learning aspect to it in terms of trying to manage that data. In fact, they do a better job. We still have to deal with interoperability, of course, and additional issues that could be jurisdictional, etc.

However, there is that learning that allows them to improve their processes across factories. Maintenance is one of them, as well as creating new products, and connecting better with their customers. We can see a lot of examples in the marketplace. I won’t mention names, but there are lots of them out there with the large manufacturers.

Gardner: We’ve had just-in-time manufacturing and lean processes for quite some time, trying to compress the supply chain and distribution networks, but these haven’t necessarily been done through public networks, the internet, or standardized approaches.

But if we’re to benefit, we’re going to need to be able to be platform companies, not just product companies. How do you go from being a proprietary set of manufacturing protocols and approaches to this wider, standardized interoperability architecture?

Tabet: That’s a very good question, because now we’re talking about that connection to the customer. Take an airline and a jet engine manufacturer, for example: activity has been monitored during the whole flight, and the moment the plane lands, that data is made available. There could be improvements and maybe solutions available as soon as the plane lands.

Interoperability

That requires interoperability. It requires Platform 3.0 for example. If you don’t have open platforms, then you’ll deal with the same hurdles in terms of proprietary technologies and integration in a silo-based manner.

Gardner: Penelope, you’ve been writing about the obstacles to decision-making that might become apparent as big data becomes more prolific and people try to capture all the data about all the processes and analyze it. That’s a little bit of a departure from the way we’ve made decisions in organizations, public and private, in the past.

Of course, one of the bigger tenets of the Internet of Things is all this great data that will be available to us from so many different points. Is there a conundrum of some sort? Is there an unknown obstacle to how we, as organizations and individuals, can deal with that data? Is this going to be chaos, or is it going to deliver on all the promises many organizations have led us to believe about big data and the Internet of Things?

Penelope Gordon: It’s something that has just been accelerated. This is not a new problem in terms of the decision-making styles not matching the inputs that are being provided into the decision-making process.

Former US President Bill Clinton was known for delaying making decisions. He’s a head-type decision-maker, and so he would always want more data and more data. That just gets into a never-ending loop, because as people collect data for him, there is always more data that you can collect, particularly on the quantitative side. Whereas, if it is distilled down and presented very succinctly and then balanced with the qualitative, that allows intuition to come to the fore, and you can make optimal decisions in that fashion.

Conversely, if you have someone who is a heart-type or gut-type decision-maker and you present them with a lot of data, their first response is to ignore the data. It’s just too much for them to take in. Then you end up going completely with whatever you feel is correct, or with whatever your instinct says is the correct decision. If you’re talking about strategic decisions, where you’re making a decision that’s going to influence your direction five years down the road, that could be a very wrong decision to make, a very expensive decision, and as you said, it could be chaos.

It just brings to mind Dr. Seuss’s The Cat in the Hat with Thing One and Thing Two. So, as we talk about the Internet of Things, we need to keep in mind that we need some sort of structure that we are tying this back to, and an understanding of what we are trying to do with these things.

Gardner: Openness is important, and governance is essential. Then, we can start moving toward higher-order business platform benefits. But, so far, our panel has been a little bit cynical. We’ve heard that the opportunity and the challenges are commensurate in the public sector and that in manufacturing we’re moving into a whole new area of interoperability, when we think about reaching out to customers and having a boundary that is managed between internal processes and external communications.

And we’ve heard that an overload of data could become a very serious problem and that we might not get benefits from big data through the Internet of Things, but perhaps even stumble and have less quality of decisions.

So Dave Lounsbury of The Open Group, will the same level of standardization work? Do we need a new type of standards approach, a different type of framework, or is this a natural extension of the path and course we have followed in the past?

Different level

Dave Lounsbury: We need to look at the problem at a different level than the way we institutionally think about an interoperability problem. The Internet of Things is riding two very powerful waves. One is Moore’s Law: these sensors, actuators, and networks keep getting smaller and smaller. Now we can put Ethernet in a light switch, a tag, or something like that.

The other is Metcalfe’s Law, which says that the value of all this connectivity goes up with the square of the number of connected points — and that applies both to the connection of the things and, more importantly, to the connection of the data.
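As a quick back-of-the-envelope aside (with arbitrary constants, not figures from the discussion), the point is that network value grows roughly with the square of the number of connected points while the cost of adding devices tends to grow only linearly — which is why connecting the data pays off so disproportionately at scale.

# Toy illustration of Metcalfe's Law versus linear deployment cost.
# The constants are arbitrary and purely illustrative.
def network_value(n: int, value_per_link: float = 1.0) -> float:
    """Value proportional to the number of possible pairwise connections (n choose 2)."""
    return value_per_link * n * (n - 1) / 2

def deployment_cost(n: int, cost_per_node: float = 10.0) -> float:
    """Cost grows roughly linearly with the number of connected things."""
    return cost_per_node * n

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} nodes: value {network_value(n):>12,.0f}  cost {deployment_cost(n):>9,.0f}")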

The trouble is, as we have said, that there’s so much data here. The question is how you manage it and how you keep control over it so that you actually get business value from it. That’s going to require this new concept of a platform that doesn’t just connect the data, but aggregates it, correlates it, as you said, and presents it in ways that let people make decisions however they want.

Also, because of the raw volume, we have to start thinking about machine agency. We have to think about the system actually making the routine decisions or giving advice to the humans who are actually doing it. Those are important parts of the solution beyond just a simple “How do we connect all the stuff together?”
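To make the “machine agency” point concrete, here is a minimal, hypothetical sketch — the sensor names and thresholds are invented for illustration — of a system that handles routine readings itself and escalates only the exceptions to a human.

# Hypothetical sketch of machine agency: routine decisions are made automatically,
# and only exceptions are escalated to a human operator. Thresholds are invented.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float

ROUTINE_MAX_C = 70.0      # below this, just log the reading
MITIGATION_MAX_C = 90.0   # below this, the machine applies a routine mitigation itself

def decide(reading: Reading) -> str:
    """Return the action taken for a single sensor reading."""
    if reading.temperature_c < ROUTINE_MAX_C:
        return "log"
    if reading.temperature_c < MITIGATION_MAX_C:
        return "throttle equipment"       # routine decision made by the machine
    return "escalate to human"            # exception: a person decides

for r in [Reading("line-3-motor", 65.2), Reading("line-3-motor", 82.7), Reading("line-7-press", 93.1)]:
    print(r.sensor_id, "->", decide(r))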

Gardner: We might need a higher order of intelligence, now that we have reached this border of what we can do with our conventional approaches to data, information, and process.

Thinking about where this works best first in order to then understand where it might end up later, I was intrigued again this morning by Professor Van Alstyne. He mentioned that in healthcare, we should expect major battles, that there is a turf element to this, that the organization, entity or even commercial corporation that controls and manages certain types of information and access to that information might have some very serious platform benefits.

The openness element now is something to look at, and I’ll come back to the public sector. Is there a degree of openness that we could legislate or regulate to require enough control to prevent the next generation of lock-in, which might not be lock-in to a platform but to access to data, information, and endpoints? Where in the public sector might we look for a leadership position to establish needed openness and not just interoperability?

Barsoum: I’m not even sure where to start answering that question. To take healthcare as an example, I certainly didn’t write the bible on healthcare IT systems and if someone did write that, I think they really need to publish it quickly.

We have a single-payer system in Canada, and you would think that would be relatively easy to manage. There is one entity that manages paying the doctors, and everybody gets covered the same way. Therefore, the data should be easily shared among all the players and it should be easy for you to go from your doctor, to your oncologist, to whomever, and maybe to your pharmacy, so that everybody has access to this same information.

We don’t have that and we’re nowhere near having that. If I look to other areas in the public sector, areas where we’re beginning to solve the problem are ones where we face a crisis, and so we need to address that crisis rapidly.

Possibility of improvement

In the transportation infrastructure, we’re getting to that point where the infrastructure we have just doesn’t meet the needs. There’s a constraint in terms of money, and we can’t put much more money into the structure. Then, there are new technologies that are coming in. Chris had talked about driverless cars earlier. They’re essentially throwing a wrench into the works or may be offering the possibility of improvement.

On any given piece of infrastructure, you could fit twice as many driverless cars as cars with human drivers in them. Given that set of circumstances, the governments are going to find they have no choice but to share data in order to be able to manage those. Are there cases where we could go ahead of a crisis in order to manage it? I certainly hope so.

Gardner: How about allowing some of the natural forces of marketplaces, behavior, groups, maybe even chaos theory, where if sufficient openness is maintained there will be some kind of a pattern that will emerge? We need to let this go through its paces, but if we have artificial barriers, that might be thwarted or power could go to places that we would regret later.

Barsoum: I agree. People often focus on structure. So the governance doesn’t work. We should find some way to change the governance of transportation. London has done a very good job of that. They’ve created something called Transport for London that manages everything related to transportation. It doesn’t matter if it’s taxis, bicycles, pedestrians, boats, cargo trains, or whatever, they manage it.

You could do that, but it requires a lot of political effort. The other way to go about doing it is saying, “I’m not going to mess with the structures. I’m just going to require you to open and share all your data.” So, you’re creating a new environment where the governance, the structures, don’t really matter so much anymore. Everybody shares the same data.

Gardner: Said, to the private sector example of manufacturing, you still want to have a global fabric of manufacturing capabilities. This is requiring many partners to work in concert, but with a vast new amount of data and new potential for efficiency.

How do you expect that openness will emerge in the manufacturing sector? How will interoperability play when you don’t have to wait for legislation, but you do need to have cooperation and openness nonetheless?

Tabet: It comes back to the question you asked Dave about standards. I’ll just give you some examples. For example, in the automotive industry, there have been some activities in Europe around specific standards for communication.

The Europeans came to the US and started to have discussions, and the Japanese have interest, as well as the Chinese. That shows, because there is a common interest in creating these new models from a business standpoint, that these challenges have to be dealt with together.

Managing complexity

When we talk about the amounts of data — what we now call big data — what we are going to see in about five years or so, you can’t even imagine. How do we manage that complexity, which is multidimensional? We talked about this sort of platform and, further, the capability and the data that will be there. From that point of view, openness is the only way to go.

There’s no way that we can stay away from it and still be able to work in silos in that new environment. There are lots of things that we take for granted today. I invite some of you to go back and read articles from 10 years ago that try to predict the future in technology in the 21st century. Look at your smart phones. Adoption is there, because the business models are there, and we can see that progress moving forward.

Collaboration is a must, because the problem is multidimensional. It’s not just manufacturing — jet engines, car manufacturers, or agriculture, where you have very specific areas. They really have to work with their customers and the customers of their customers.


Gardner: Dave, I have a question for both you and Penelope. I’ve seen some instances where there has been a cooperative endeavor for accessing data, but then making it available as a service, whether it’s an API, a data set, access to a data library, or even an analytics application set. The Ocean Observatories Initiative is one example: it has created a sensor network across the oceans and makes the resulting data available.

Do you expect to see an intermediary organization level that gets between the sensors and the consumers or even controllers of the processes? Is there a model inherent in that that we might look to — something like that cooperative data structure that in some ways creates structure and governance, but also allows for freedom? It’s the sort of entity that we don’t have yet in many organizations or ecosystems and that needs to evolve.

Lounsbury: We’re already seeing that in the marketplace. If you look at the commercial and social Internet of Things area, we’re starting to see intermediaries or brokers cropping up that will connect the silo of my Android ecosystem to the ecosystem of package tracking or something like that. There are dozens and dozens of these cropping up.

In fact, you now see APIs even into silos of what you might consider proprietary systems, and what people are doing is building a layer on top of those APIs that intermediates the data.

This is happening on a point-to-point basis now, but you can easily see the path forward. That’s going to expand to large amounts of data that people will share through a third party. I can see this being a whole new emerging market, much like what Google did for search. You could see that happening for the Internet of Things.
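A minimal sketch of the kind of intermediation described here, assuming two hypothetical device feeds with different payload shapes; the field names and the broker interface are invented for illustration and do not correspond to any real vendor API.

# Hypothetical broker that sits between device silos and consumers, normalizing
# each silo's payload into one common record format. All fields are invented.
from typing import Any, Callable, Dict

def from_thermostat(payload: Dict[str, Any]) -> Dict[str, Any]:
    """Adapter for an invented thermostat feed: {'id': ..., 'tempF': ...}."""
    return {"device": payload["id"], "kind": "temperature",
            "value": round((payload["tempF"] - 32) * 5 / 9, 2), "unit": "C"}

def from_package_tracker(payload: Dict[str, Any]) -> Dict[str, Any]:
    """Adapter for an invented package-tracker feed: {'tag': ..., 'lat': ..., 'lon': ...}."""
    return {"device": payload["tag"], "kind": "location",
            "value": (payload["lat"], payload["lon"]), "unit": "deg"}

ADAPTERS: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]] = {
    "thermostat": from_thermostat,
    "package_tracker": from_package_tracker,
}

def broker(source: str, payload: Dict[str, Any]) -> Dict[str, Any]:
    """Normalize a silo-specific payload into the broker's common schema."""
    return ADAPTERS[source](payload)

print(broker("thermostat", {"id": "t-42", "tempF": 72.5}))
print(broker("package_tracker", {"tag": "pkg-7", "lat": 42.36, "lon": -71.06}))

The design choice is simply that consumers see one schema while each silo keeps its own; the broker is the only place that has to know about every feed.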

Gardner: Penelope, do you have any thoughts about how that would work? Is there a mutually assured benefit that would allow people to want to participate and cooperate with that third entity? Should they have governance and rules about good practices, best practices for that intermediary organization? Any thoughts about how data can be managed in this sort of hierarchical model?

Nothing new

Gordon: First, I’ll contradict it a little bit. To me, a lot of this is nothing new, particularly coming from a marketing strategy perspective, with business intelligence (BI). Having various types of intermediaries, who are not only collecting the data, but then doing what we call data hygiene, synthesis, and even correlation of the data has been around for a long time.

It was interesting, when I looked at a recent listing of big-data companies, that some notable companies were excluded from that list — companies like Nielsen. Nielsen has been collecting data for a long time. Harte-Hanks is another one that collects a tremendous amount of information and sells it to companies.

That leads into another part of it. We’re seeing an increasing amount of opportunity that involves taking public sources of data and then providing synthesis on top of them. What remains to be seen is how much of the output of that is going to be provided for “free”, as opposed to “fee”. We’re going to see a lot more companies figuring out creative ways of extracting more value out of data and then charging directly for it, rather than using it as an indirect way of generating traffic.

Gardner: We’ve seen examples of how this has worked in the past. But does it scale, and does the governance — or lack of governance — in the market now sustain us through the transition into Platform 3.0 and the Internet of Things?

Gordon: That aspect is the follow-on to “you get what you pay for”. If you’re using a free source of data, you don’t have any guarantee that it comes from authoritative sources. Often, what we’re getting now is something somebody put in a blog post, which then gets referenced elsewhere, but there is nothing to go back to. It’s a shaky supply chain for data.

You need to think about the data supply and that is where the governance comes in. Having standards is going to increasingly become important, unless we really address a lot of the data illiteracy that we have. A lot of people do not understand how to analyze data.

One aspect of that is that a lot of people assume we have to do full-population surveys, as opposed to representative sampling, which gives much more accurate and much more cost-effective collection of data. That’s just one example, and we do need a lot more in governance and standards.
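As a small illustration of the sampling point (toy numbers, not from the discussion), a modest random sample typically estimates a population statistic closely at a fraction of the collection cost of a full-population survey.

# Toy illustration: a representative random sample estimates the population mean
# closely without surveying the full population. All numbers are invented.
import random

random.seed(42)
population = [random.gauss(50.0, 10.0) for _ in range(100_000)]   # pretend survey answers

population_mean = sum(population) / len(population)
sample = random.sample(population, 1_000)                          # a 1% representative sample
sample_mean = sum(sample) / len(sample)

print(f"full-population mean: {population_mean:.2f} (100,000 responses collected)")
print(f"sample estimate:      {sample_mean:.2f} (1,000 responses collected)")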

Gardner: What would you like to see changed most in order for the benefits and rewards of the Internet of Things to develop and overcome the drawbacks, the risks, the downside? What, in your opinion, would you like to see happen to make this a positive, rapid outcome? Let’s start with you Jean-Francois.

Barsoum: There are things that I have seen cities start to do now. There are a couple of examples: Philadelphia is one, and Barcelona does this too. Rather than do the typical request for proposal (RFP), where they say, “This is the kind of solution we’re looking for, and here are our parameters. Can you tell us how much it is going to cost to build,” they come to you with the problem and they say, “Here is the problem I want to fix. Here are my priorities, and you’re at liberty to decide how best to fix the problem, but tell us how much that would cost.”

If you do that and you combine it with access to the public data that is available — if public sector opens up its data — you end up with a very powerful combination that liberates a lot of creativity. You can create a lot of new business models. We need to see much more of that. That’s where I would start.

More education

Tabet: I agree with Jean-Francois on that. What I’d like to add is that I think we need to push the relation a little further. We need more education, to your point earlier, around the data and the capabilities.

We need these platforms that we can leverage a little bit further with the analytics, with machine learning, and with all of these capabilities that are out there. We have to also remember, when we talk about the Internet of Things, it is things talking to each other.

So it is not just human-machine communication. Machine-to-machine automation will go further than that, and we need more innovation and more work in this area, particularly more activity from governments. We’ve seen some of that, but it is a little bit frail from that point of view right now.

Gardner: Dave Lounsbury, thoughts about what needs to happen in order to keep this on track?

Lounsbury: We’ve touched on a lot of them already. Thank you for mentioning the machine-to-machine part, because there are plenty of projections that show that it’s going to be the dominant form of Internet communication, probably within the next four years.

So we need to start thinking of that and moving beyond our traditional models of humans talking through interfaces to set of services. We need to identify the building blocks of capability that you need to manage, not only the information flow and the skilled person that is going to produce it, but also how you manage the machine-to-machine interactions.

Gordon: I’d like to see not so much focus on data management, but focus on what the data is helping us to do. Focusing on the machine-to-machine and the devices is great, but the focus should not be on the devices or on the machines… it should be on what they can accomplish by communicating — what you can accomplish with the devices — and then reverse-engineer from that.

Gardner: Let’s go to some questions from the audience. The first one asks about the higher order of intelligence we mentioned earlier. It could be artificial intelligence, perhaps, but they ask whether that’s really the issue. Is the nature of the data substantially different, or are we just creating more of the same, so that it is a storage, plumbing, and processing problem? What, if anything, is lacking in our current analytics capabilities that is holding us back from exploiting the Internet of Things?

Gordon: I’ve definitely seen that. That has a lot to do with not setting your decision objectives and your decision criteria ahead of time so that you end up collecting a whole bunch of data, and the important data gets lost in the mix. There is a term “data smog.”

Most important

The solution is to figure out, before you go collecting data, what data is most important to you. If you can’t collect certain kinds of data that are important to you directly, then think about how to indirectly collect that data and how to get proxies. But don’t try to go and collect all the data for that. Narrow in on what is going to be most important and most representative of what you’re trying to accomplish.

Gardner: Does anyone want to add to this idea of understanding what current analytics capabilities are lacking, if we have to adopt and absorb the Internet of Things?

Barsoum: There is one element around projection into the future. We’ve been very good at analyzing historical information to understand what’s been happening in the past. We need to become better at projecting into the future, and obviously we’ve been doing that for some time already.

But so many variables are changing. Just to take the driverless car as an example. We’ve been collecting data from loop detectors, radar detectors, and even Bluetooth antennas to understand how traffic moves in the city. But we need to think harder about what that means and how we understand the city of tomorrow is going to work. That requires more thinking about the data, a little bit like what Penelope mentioned, how we interpret that, and how we push that out into the future.

Lounsbury: I have to agree with both. It’s not about statistics. We can use historical data. It helps with a lot of things, but one of the major issues we still deal with today is the question of semantics, the meaning of the data. This goes back to your point, Penelope, around the relevance and the context of that information – how you get what you need when you need it, so you can make the right decisions.

Gardner: Our last question from the audience goes back to Jean-Francois’s comments about the Canadian healthcare system. I imagine it applies to almost any healthcare system around the world. But it asks why interoperability is so difficult to achieve, when we have the power of the purse, that is the market. We also supposedly have the power of the legislation and regulation. You would think between one or the other or both that interoperability, because the stakes are so high, would happen. What’s holding it up?

Barsoum: There are a couple of reasons. One, in the particular case of healthcare, is privacy, but that is one that you could see going elsewhere. As soon as you talk about interoperability in the health sector, people start wondering where is their data going to go and how accessible is it going to be and to whom.

You need to put a certain number of controls over top of that. What is happening in parallel is that you have people who own some data, who believe they have some power from owning that data, and that they will lose that power if they share it. That can come from doctors, hospitals, anywhere.

So there’s a certain amount of change management you have to get beyond. Everybody has to focus on the welfare of the patient. They have to understand that that has to be the priority, but you also have to understand the welfare of the different stakeholders in the system and make sure that you do not forget about them, because if you forget about them they will find some way to slow you down.

Use of an ecosystem

Lounsbury: To me, that’s a perfect example of what Marshall Van Alstyne talked about this morning. It’s the change from a focus on product to a focus on ecosystem. Healthcare traditionally has been very focused on a doctor or caregiver providing a product to a patient. Now, we’re actually starting to see that the only way we’re able to do this is through the use of an ecosystem.

That’s a hard transition. It’s a business-model transition. I will put in a plug here for The Open Group Healthcare vertical, which is looking at that from an architecture perspective. I see that our Forum Director Jason Lee is over here. So if you want to explore that more, please see him.

Gardner: I’m afraid we will have to leave it there. We’ve been discussing the practical implications of the Internet of Things and how it is now set to add a new dimension to Open Platform 3.0 and Boundaryless Information Flow.

We’ve heard how new thinking about interoperability will be needed to extract the value and bring order out of the chaos that comes with such vast new scales of inputs and whole new categories of information.

So with that, a big thank you to our guests: Said Tabet, Chief Technology Officer for Governance, Risk and Compliance Strategy at EMC; Penelope Gordon, Emerging Technology Strategist at 1Plug Corp.; Jean-Francois Barsoum, Senior Managing Consultant for Smarter Cities, Water and Transportation at IBM, and Dave Lounsbury, Chief Technology Officer at The Open Group.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these discussions on Open Platform 3.0 and Boundaryless Information Flow at The Open Group Conference, recently held in Boston. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript.

Transcript of The Open Group podcast exploring the challenges and ramifications of the Internet of Things, as machines and sensors collect vast amounts of data. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Comments Off on The Open Group Panel: Internet of Things – Opportunities and Obstacles

Filed under Boundaryless Information Flow™, Business Architecture, Cloud, Cloud/SOA, Data management, digital technologies, Enterprise Architecture, Future Technologies, Information security, Internet of Things, Interoperability, Open Platform 3.0, Service Oriented Architecture, Standards, Strategy, Supply chain risk, Uncategorized