Category Archives: Boundaryless Information Flow™

The Open Group Baltimore 2015 Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

The Open Group Baltimore 2015, Enabling Boundaryless Information Flow™, July 20-23, was held at the beautiful Hyatt Regency Inner Harbor. Over 300 attendees from 16 countries, including China, Japan, the Netherlands and Brazil, attended this agenda-packed event.

The event kicked off on July 20th with a warm Open Group welcome by Allen Brown, President and CEO of The Open Group. The first plenary speaker was Bruce McConnell, Senior VP, East West Institute, whose presentation, “Global Cooperation in Cyberspace,” gave a behind-the-scenes look at global cybersecurity issues. Bruce focused on US-China cyber cooperation, major threats and what the US is doing about them.

Allen then welcomed Christopher Davis, Professor of Information Systems, University of South Florida, to The Open Group Governing Board as an Elected Customer Member Representative. Chris also serves as Chair of The Open Group IT4IT™ Forum.

The plenary continued with a joint presentation, “Can Cyber Insurance Be Linked to Assurance”, by Larry Clinton, President & CEO, Internet Security Alliance, and Dan Reddy, Adjunct Faculty, Quinsigamond Community College, MA. The speakers emphasized that cybersecurity is not simply an IT issue. They stated there are currently 15 billion mobile devices and there will be 50 billion within 5 years. Organizations and governments need to prepare for new vulnerabilities and the explosion of the Internet of Things (IoT).

The plenary culminated with a panel “US Government Initiatives for Securing the Global Supply Chain”. Panelists were Donald Davidson, Chief, Lifecycle Risk Management, DoD CIO for Cybersecurity, Angela Smith, Senior Technical Advisor, General Services Administration (GSA) and Matthew Scholl, Deputy Division Chief, NIST. The panel was moderated by Dave Lounsbury, CTO and VP, Services, The Open Group. They discussed the importance and benefits of ensuring product integrity of hardware, software and services being incorporated into government enterprise capabilities and critical infrastructure. Government and industry must look at supply chain, processes, best practices, standards and people.

All sessions concluded with Q&A moderated by Allen Brown and Jim Hietala, VP, Business Development and Security, The Open Group.

Afternoon tracks (11 presentations) covered various topics, including Information & Data Architecture and EA & Business Transformation. The Risk, Dependability and Trusted Technology theme also continued. Jack Daniel, Strategist, Tenable Network Security, shared “The Evolution of Vulnerability Management”. Michele Goetz, Principal Analyst at Forrester Research, presented “Harness the Composable Data Layer to Survive the Digital Tsunami”. This session was aimed at helping data professionals understand how Composable Data Layers set digital initiatives and the Internet of Things up for success.

The evening featured a Partner Pavilion and Networking Reception. The Open Group Forums and Partners hosted short presentations and demonstrations while guests also enjoyed the reception. Areas focused on were Enterprise Architecture, Healthcare, Security, Future Airborne Capability Environment (FACE™), IT4IT™ and Open Platform™.

Exhibitors in attendance were Esterel Technologies, Wind River, RTI and SimVentions.

Partner Pavilion – The Open Group Open Platform 3.0™

On July 21, Allen Brown began the plenary with the great news that Huawei has become a Platinum Member of The Open Group. Huawei joins our other Platinum Members Capgemini, HP, IBM, Philips and Oracle.

Allen Brown, Trevor Cheung, Chris Forde

Trevor Cheung, VP Strategy & Architecture Practice, Huawei Global Services, will be joining The Open Group Governing Board. Trevor posed the question, “what can we do to combine The Open Group and IT aspects to make a customer experience transformation?” His presentation entitled “The Value of Industry Standardization in Promoting ICT Innovation”, addressed the “ROADS Experience”. ROADS is an acronym for Real Time, On-Demand, All Online, DIY, Social, which need to be defined across all industries. Trevor also discussed bridging the gap; the importance of combining Customer Experience (customer needs, strategy, business needs) and Enterprise Architecture (business outcome, strategies, systems, processes innovation). EA plays a key role in the digital transformation. Huawei will continue to have a global impact yet still focus on China activities.

Allen then presented The Open Group Forum updates. He shared roadmaps which include schedules of snapshots, reviews, standards, and publications/white papers.

Allen also provided a sneak peek of results from our recent survey on TOGAF®, an Open Group standard. TOGAF® 9 is currently available in 15 different languages.

Next speaker was Jason Uppal, Chief Architect and CEO, iCareQuality, on “Enterprise Architecture Practice Beyond Models”. Jason emphasized the goal is “Zero Patient Harm” and stressed the importance of Open CA Certification. He also stated that there are many roles of Enterprise Architects and they are always changing.

Joanne MacGregor, IT Trainer and Psychologist, Real IRM Solutions, gave a very interesting presentation entitled “You can Lead a Horse to Water… Managing the Human Aspects of Change in EA Implementations”. Joanne discussed managing, implementing, maintaining change and shared an in-depth analysis of the psychology of change.

“Outcome Driven Government and the Movement Towards Agility in Architecture” was presented by David Chesebrough, President, Association for Enterprise Information (AFEI). “IT Transformation reshapes business models, lean startups, web business challenges and even traditional organizations”, stated David.

Questions from attendees were addressed after each session.

In parallel with the plenary was the Healthcare Interoperability Day. Speakers from a wide range of Healthcare industry organizations, such as ONC, AMIA and Healthway shared their views and vision on how IT can improve the quality and efficiency of the Healthcare enterprise.

Before the plenary ended, Allen made another announcement. Allen is stepping down in April 2016 as President and CEO after more than 20 years with The Open Group, including the last 17 as CEO. After conducting a process to choose his successor, The Open Group Governing Board has selected Steve Nunn as his replacement, and Steve will assume the role in November of this year. Steve is the current COO of The Open Group and CEO of the Association of Enterprise Architects. Please see the press release here.

Steve Nunn, Allen Brown

Afternoon track topics comprised EA Practice & Professional Development and Open Platform 3.0™.

After a very informative and productive day of sessions, workshops and presentations, event guests were treated to a dinner aboard the USS Constellation, just a few minutes' walk from the hotel. The USS Constellation, constructed in 1854, is a sloop-of-war, the second US Navy ship to carry the name, and is designated a National Historic Landmark.

USS Constellation

On Wednesday, July 22, tracks continued: TOGAF® 9 Case Studies and Standard, EA & Capability Training, Knowledge Architecture and IT4IT™ – Managing the Business of IT.

Thursday consisted of members-only meetings which are closed sessions.

A special “thank you” goes to our sponsors and exhibitors: Avolution, SNA Technologies, BiZZdesign, Van Haren Publishing, AFEI and AEA.

Check out all the Twitter conversation about the event – @theopengroup #ogBWI

Event proceedings for all members and event attendees can be found here.

Hope to see you at The Open Group Edinburgh 2015 October 19-22! Please register here.

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog, media relations and social media. Loren has over 20 years' experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.

Leave a comment

Filed under Accreditations, Boundaryless Information Flow™, Cybersecurity, Enterprise Architecture, Enterprise Transformation, Healthcare, Internet of Things, Interoperability, Open CA, Open Platform 3.0, Security, Security Architecture, The Open Group Baltimore 2015, TOGAF®

A Tale of Two IT Departments, or How Governance is Essential in the Hybrid Cloud and Bimodal IT Era

Transcript of an Open Group discussion/podcast on the role of Cloud Governance and Enterprise Architecture and how they work together in the era of increasingly fragmented IT.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Sponsor: The Open Group

Dana Gardner: Hello, and welcome to a special Thought Leadership Panel Discussion, coming to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator as we examine the role that Cloud Governance and Enterprise Architecture play in an era of increasingly fragmented IT.

Not only are IT organizations dealing with so-called shadow IT and myriad proof-of-concept affairs, there is now a strong rationale for fostering what Gartner calls Bimodal IT. There’s a strong case to be made for exploiting the strengths of several different flavors of IT, except that — at the same time — businesses are asking IT in total to be faster, better, and cheaper.

The topic before us today is how to allow for the benefits of Bimodal IT or even Multimodal IT, but without IT fragmentation leading to a fractured and even broken business.

Here to update us on the work of The Open Group Cloud Governance initiatives and working groups and to further explore the ways that companies can better manage and thrive with hybrid IT are our guests. We’re here today with Dr. Chris Harding, Director for Interoperability and Cloud Computing Forum Director at The Open Group. Welcome, Chris.

Dr. Chris Harding: Thank you, Dana. It’s great to be here.

Gardner: We’re also here with David Janson, Executive IT Architect and Business Solutions Professional with the IBM Industry Solutions Team for Central and Eastern Europe and a leading contributor to The Open Group Cloud Governance Project. Welcome, David.

David Janson: Thank you. Glad to be here.

Gardner: Lastly, we're here with Nadhan, HP Distinguished Technologist and Cloud Advisor and Co-Chairman of The Open Group Cloud Governance Project. Welcome, Nadhan.

Nadhan: Thank you, Dana. It’s a pleasure to be here.

IT trends

Gardner: Before we get into an update on The Open Group Cloud Governance Initiatives, in many ways over the past decades IT has always been somewhat fragmented. Very few companies have been able to keep all their IT oars rowing in the same direction, if you will. But today things seem to be changing so rapidly that we seem to acknowledge that some degree of disparate IT methods are necessary. We might even think of old IT and new IT, and this may even be desirable.

But what are the trends that are driving this need for a Multimodal IT? What’s accelerating the need for different types of IT, and how can we think about retaining a common governance, and even a frameworks-driven enterprise architecture umbrella, over these IT elements?

Nadhan: Basically, the change that we’re going through is really driven by the business. Business today has much more rapid access to the services that IT has traditionally provided. Business has a need to react to its own customers in a much more agile manner than they were traditionally used to.

We now have to react to demands where we’re talking days and weeks instead of months and years. Businesses today have a choice. Business units are no longer dependent on the traditional IT to avail themselves of the services provided. Instead, they can go out and use the services that are available external to the enterprise.

To a great extent, the advent of social media has also resulted in direct customer feedback on the sentiment from the external customer that businesses need to react to. That is actually changing the timelines. It is requiring IT to be delivered at the pace of business. And the very definition of IT is undergoing a change, where we need to have the right paradigm, the right technology, and the right solution for the right business function and therefore the right application.

Since the choices have increased with the new style of IT, the manner in which you pair them up, the solutions with the problems, also has significantly changed. With more choices, come more such pairs on which solution is right for which problem. That’s really what has caused the change that we’re going through.

A change of this magnitude requires governance that builds on the traditional governance that was always in play, and requires elements like cloud to have governance that is specific to cloud solutions across the whole lifecycle of their deployment.

Gardner: David, do you agree that this seems to be a natural evolution, based on business requirements, that we basically spin out different types of IT within the same organization to address some of these issues around agility? Or is this perhaps a bad thing, something that’s unnatural and should be avoided?

Janson: In many ways, this follows a repeating pattern we’ve seen with other kinds of transformations in business and IT. Not to diminish the specifics about what we’re looking at today, but I think there are some repeating patterns here.

There are new disruptive events that compete with the status quo. Those things that have been optimized, proven, and settled into sort of a consistent groove can compete with each other. Excitement about the new value that can be produced by new approaches generates momentum, and so far this actually sounds like a healthy state of vitality.

Good governance

However, one of the challenges is that the excitement potentially can lead to overlooking other important factors, and that’s where I think good governance practices can help.

For example, governance helps remind people about important durable principles that should be guiding their decisions, important considerations that we don’t want to forget or under-appreciate as we roll through stages of change and transformation.

At the same time, governance practices need to evolve so that they can adapt to new things that fit into the governance framework. What are those things and how do we govern those? So governance needs to evolve at the same time.

There is a pattern here with some specific things that are new today, but there is a repeating pattern as well, something we can learn from.

Gardner: Chris Harding, is there a built-in capability with cloud governance that anticipates some of these issues around different styles or flavors or even velocity of IT innovation that can then allow for that innovation and experimentation, but then keep it all under the same umbrella with a common management and visibility?

Harding: There are a number of forces at play here, and there are three separate trends that we’ve seen, or at least that I have observed, in discussions with members within The Open Group that relate to this.

The first is one that Nadhan mentioned, the possibility of outsourcing IT. I remember a member’s meeting a few years ago, when one of our members who worked for a company that was starting a cloud brokerage activity happened to mention that two major clients were going to do away with their IT departments completely and just go for cloud brokerage. You could see the jaws drop around the table, particularly with the representatives who were from company corporate IT departments.

Of course, cloud brokers haven’t taken over from corporate IT, but there has been that trend towards things moving out of the enterprise to bring in IT services from elsewhere.

That’s all very well to do that, but from a governance perspective, you may have an easy life if you outsource all of your IT to a broker somewhere, but if you fail to comply with regulations, the broker won’t go to jail; you will go to jail.

So you need to make sure that you retain control at the governance level over what is happening from the point of view of compliance. You probably also want to make sure that your architecture principles are followed and retain governance control to enable that to happen. That’s the first trend and the governance implication of it.

In response to that, a second trend that we see is that IT departments have reacted often by becoming quite like brokers themselves — providing services, maybe providing hybrid cloud services or private cloud services within the enterprise, or maybe sourcing cloud services from outside. So that’s a way that IT has moved in the past and maybe still is moving.

Third trend

The third trend that we’re seeing in some cases is that multi-discipline teams within line of business divisions, including both business people and technical people, address the business problems. This is the way that some companies are addressing the need to be on top of the technology in order to innovate at a business level. That is an interesting and, I think, a very healthy development.

So maybe, yes, we are seeing a bimodal splitting in IT between the traditional IT and the more flexible and agile IT, but maybe you could say that that second part belongs really in the line of business departments, rather than in the IT departments. That’s at least how I see it.

Nadhan: I’d like to build on a point that David made earlier about repeating patterns. I can relate to that very well within The Open Group, speaking about the Cloud Governance Project. Truth be told, as we continue to evolve the content in cloud governance, some of the seeding content actually came from the SOA Governance Project that The Open Group worked on a few years back. So the point David made about the repeating patterns resonates very well with that particular case in mind.

Gardner: So we’ve been through this before. When there is change and disruption, sometimes it’s required for a new version of methodologies and best practices to emerge, perhaps even associated with specific technologies. Then, over time, we see that folded back in to IT in general, or maybe it’s pushed back out into the business, as Chris alluded to.

My question, though, is how we make sure that these don’t become disruptive and negative influences over time. Maybe governance and enterprise architecture principles can prevent that. So is there something about the cloud governance, which I think really anticipates a hybrid model, particularly a cloud hybrid model, that would be germane and appropriate for a hybrid IT environment?

David Janson, is there a cloud governance benefit in managing hybrid IT?

Janson: There most definitely is. I tend to think that hybrid IT is probably where we’re headed. I don’t think this is avoidable. My editorial comment upon that is that’s an unavoidable direction we’re going in. Part of the reason I say that is I think there’s a repeating pattern here of new approaches, new ways of doing things, coming into the picture.

And then some balancing act goes on, where people look at more traditional ways versus the new approaches people are talking about, and eventually they look at the strengths and weaknesses of both.

There’s going to be some disruption, but that’s not necessarily bad. That’s how we drive change and transformation. What we’re really talking about is making sure the amount of disruption is not so counterproductive that it actually moves things backward instead of forward.

I don’t mind a little bit of disruption. The governance processes that we’re talking about, good governance practices, have an overall life cycle that things move through. If there is a way to apply governance, as you work through that life cycle, at each point, you’re looking at the particular decision points and actions that are going to happen, and make sure that those decisions and actions are well-informed.

We sometimes say that governance helps us do the right things right. So governance helps people know what the right things are, and then the right way to do those things.

Bimodal IT

Also, we can measure how well people are actually adapting to those “right things” to do. What's “right” can vary over time, because we have disruptive change. What we're talking about with Bimodal IT is one example.

Within a narrower time frame in the process lifecycle, there are points that evolve across that time frame that have particular decisions and actions. Governance makes sure that people are well informed, as they're rolling through that, about important things they shouldn't forget. It's very easy to forget key things and optimize for only one factor, and governance helps people remember that.

Also, we check to see whether we're getting the benefits that people expected out of it, coming back around and looking afterward to see if we accomplished what we thought we would, or whether we got off in the wrong direction. So it's a bit like a steering mechanism or a feedback mechanism, in that it helps keep the car on the road rather than going off onto the soft shoulder. Did we overlook something important? Governance is key to making this all successful.

Gardner: Let’s return to The Open Group’s upcoming conference on July 20 in Baltimore and also learn a bit more about what the Cloud Governance Project has been up to. I think that will help us better understand how cloud governance relates to these hybrid IT issues that we’ve been discussing.

Nadhan, you are the co-chairman of the Cloud Governance Project. Tell us about what to expect in Baltimore with the concepts of Boundaryless Information Flow™, and then also perhaps an update on what the Cloud Governance Project has been up to.

Nadhan: Absolutely, Dana. When the Cloud Governance Project started, the first question we challenged ourselves with was, what is it and why do we need it, especially given that SOA governance, architecture governance, IT governance, enterprise governance, in general are all out there with frameworks? We actually detailed out the landscape with different standards and then identified the niche or the domain that cloud governance addresses.

After that, we went through and identified the top five principles that matter for cloud governance to be done right. Some of the obvious ones being that cloud is a business decision, and the governance exercise should keep in mind whether it is the right business decision to go to the cloud rather than just jumping on the bandwagon. Those are just some examples of the foundational principles that drive how cloud governance must be established and exercised.

Subsequent to that, we have a lifecycle for cloud governance defined and then we have gone through the process of detailing it out by identifying and decoupling the governance process and the process that is actually governed.

So there is this concept of process pairs that we have going, where we’ve identified key processes, key process pairs, whether it be the planning, the architecture, reusing cloud service, subscribing to it, unsubscribing, retiring, and so on. These are some of the defining milestones in the life cycle.

We’ve actually put together a template for identifying and detailing these process pairs, and the template has an outline of the process that is being governed, the key phases that the governance goes through, the desirable business outcomes that we would expect because of the cloud governance, as well as the associated metrics and the key roles.
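The process-pair template described above can be pictured as a simple record with those five elements. The following is a hypothetical sketch only, assuming illustrative field names and example values; it is not Open Group standard content:

```python
from dataclasses import dataclass


# Hypothetical representation of the cloud governance process-pair template.
# Field names and example values are illustrative assumptions, not taken
# from any published Open Group standard.
@dataclass
class ProcessPair:
    governed_process: str           # outline of the process being governed
    governance_phases: list[str]    # key phases the governance goes through
    business_outcomes: list[str]    # desirable outcomes expected from governance
    metrics: list[str]              # associated measurements
    key_roles: list[str]            # roles that participate in the governance


# Example: governing the "subscribe to a cloud service" milestone
subscribe = ProcessPair(
    governed_process="Subscribe to a cloud service",
    governance_phases=["Plan", "Assess", "Approve", "Monitor"],
    business_outcomes=["Right service selected for the business need"],
    metrics=["Time to approval", "Number of compliance exceptions"],
    key_roles=["Cloud governance board", "Service owner"],
)
```

Filling in one such record per lifecycle milestone (planning, architecture, reuse, subscription, retirement, and so on) mirrors how the project decouples the governance process from the process that is governed.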

Real-life solution

The Cloud Governance Framework is actually detailing each one. Where we are right now is looking at a real-life solution. The hypothetical could be an actual business scenario, but the idea is to help the reader digest the concepts outlined in the context of a scenario where such governance is exercised. That’s where we are on the Cloud Governance Project.

Let me take the opportunity to invite everyone to be part of the project to continue it by subscribing to the right mailing list for cloud governance within The Open Group.

Gardner: Thank you. Chris Harding, just for the benefit of our readers and listeners who might not be that familiar with The Open Group, perhaps you could give us a very quick overview of The Open Group — its mission, its charter, what we could expect at the Baltimore conference, and why people should get involved, either directly by attending, or following it on social media or the other avenues that The Open Group provides on its website?

Harding: Thank you, Dana. The Open Group is a vendor-neutral consortium whose vision is Boundaryless Information Flow. That is to say the idea that information should be available to people within an enterprise, or indeed within an ecosystem of enterprises, as and when needed, not locked away into silos.

We hold main conferences, quarterly conferences, four times a year and also regional conferences in various parts of the world in between those, and we discuss a variety of topics.

In fact, the main topics for the conference that we will be holding in July in Baltimore are enterprise architecture and risk and security. Architecture and security are two of the key things for which The Open Group is known. Enterprise Architecture, particularly with its TOGAF® Framework, is perhaps what The Open Group is best known for.

We’ve been active in a number of other areas, and risk and security is one. We also have started a new vertical activity on healthcare, and there will be a track on that at the Baltimore conference.

There will be tracks on other topics too, including four sessions on Open Platform 3.0™. Open Platform 3.0 is The Open Group initiative to address how enterprises can gain value from new technologies, including cloud computing, social computing, mobile computing, big data analysis, and the Internet of Things.

We’ll have a number of presentations related to that. These will include, in fact, a perspective on cloud governance, although that will not necessarily reflect what is happening in the Cloud Governance Project. Until an Open Group standard is published, there is no official Open Group position on the topic, and members will present their views at conferences. So we’re including a presentation on that.

Lifecycle governance

There is also a presentation on another interesting governance topic, which is on Information Lifecycle Governance. We have a panel session on the business context for Open Platform 3.0 and a number of other presentations on particular topics, for example, relating to the new technologies that Open Platform 3.0 will help enterprises to use.

There’s always a lot going on at Open Group conferences, and that’s a brief flavor of what will happen at this one.

Gardner: Thank you. And I’d just add that there is more available at The Open Group website, opengroup.org.

Going to one thing you mentioned about a standard and publishing that standard — and I’ll throw this out to any of our guests today — is there a roadmap that we could look to in order to anticipate the next steps or milestones in the Cloud Governance Project? When would such a standard emerge and when might we expect it?

Nadhan: As I said earlier, the next step is to identify the business scenario and apply it. I'm expecting, with the right level of participation, that it will take another quarter, after which it would go through the internal review with The Open Group and the company reviews for the publication of the standard. Assuming we have that in another quarter, Chris, could you please weigh in on what it usually takes, on average, for those reviews before it gets published?

Harding: You could add on another quarter. It shouldn’t actually take that long, but we do have a thorough review process. All members of The Open Group are invited to participate. The document is posted for comment for, I would think, four weeks, after which we review the comments and decide what actually needs to be taken.

Certainly, it could take only two months to complete the overall publication of the standard from the draft being completed, but it’s safer to say about a quarter.

Gardner: So a real important working document could be available in the second half of 2015. Let’s now go back to why a cloud governance document and approach is important when we consider the implications of Bimodal or Multimodal IT.

One of things that Gartner says is that Bimodal IT projects require new project management styles. They didn’t say project management products. They didn’t say, downloads or services from a cloud provider. We’re talking about styles.

So it seems to me that, in order to prevent the good aspects of Bimodal IT from being overridden by the negative impacts of chaos and lack of coordination, we're talking not about a product or a download, but about something that a working group and a standards approach like the Cloud Governance Project can accommodate.

David, why is it that you can’t buy this in a box or download it as a product? What is it that we need to look at in terms of governance across Bimodal IT and why is that appropriate for a style? Maybe the IT people need to think differently about accomplishing this through technology alone?

First question

Janson: When I think of anything like a tool or a piece of software, the first question I tend to have is what is that helping me do, because the tool itself generally is not the be-all and end-all of this. What process is this going to help me carry out?

So, before I would think about tools, I want to step back and think about what are the changes to project-related processes that new approaches require. Then secondly, think about how can tools help me speed up, automate, or make those a little bit more reliable?

It’s an easy thing to think about a tool that may have some process-related aspects embedded in it as sort of some kind of a magic wand that’s going to automatically make everything work well, but it’s the processes that the tool could enable that are really the important decision. Then, the tools simply help to carry that out more effectively, more reliably, and more consistently.

We’ve always seen an evolution about the processes we use in developing solutions, as well as tools. Technology requires tools to adapt. As to the processes we use, as they get more agile, we want to be more incremental, and see rapid turnarounds in how we’re developing things. Tools need to evolve with that.

But I’d really start out from a governance standpoint, thinking about challenging the idea that if we’re going to make a change, how do we know that it’s really an appropriate one and asking some questions about how we differentiate this change from just reinventing the wheel. Is this an innovation that really makes a difference and isn’t just change for the sake of change?

Governance helps people challenge their thinking and make sure that it’s actually a worthwhile step to take to make those adaptations in project-related processes.

Once you’ve settled on some decisions about evolving those processes, then we’ll start looking for tools that help you automate, accelerate, and make consistent and more reliable what those processes are.

I tend to start with the process and think of the technology second, rather than the other way around. That’s where governance can help, by reminding people of the principles we want to think about: are you putting the cart before the horse? It helps people challenge their thinking a little bit to be sure they’re really going in the right direction.

Gardner: Of course, a lot of what you just mentioned pertains to enterprise architecture generally as well.

Nadhan, when we think about Bimodal or Multimodal IT, this to me is going to be very variable from company to company, given their legacy, given their existing style, the rate of adoption of cloud or other software as a service (SaaS), agile, or DevOps types of methods. So this isn’t something that’s going to be a cookie-cutter. It really needs to be looked at company by company and timeline by timeline.

Is this a vehicle for professional services, for management consulting, more than for IT and product? What is the relationship between cloud governance, Bimodal IT, and professional services?

Delineating systems

Nadhan: It’s a great question, Dana. Let me characterize Bimodal IT slightly differently before answering the question. Another way to look at Bimodal IT, where we are today, is delineating systems of record and systems of engagement.

In traditional IT, we’re typically looking at the systems of record, while the systems of engagement, with social media and so on, are the live interaction. Those define the continuously evolving, growing-by-the-second systems of engagement, which results in the need for big data, security, and definitely the cloud and so on.

The coexistence of both of these paradigms requires the right move to the cloud for the right reason. So even though they are the systems of record, some, if not most, do need to get transformed to the cloud, but that doesn’t mean all systems of engagement eventually get transformed to the cloud.

There are good reasons why you may actually want to leave certain systems of engagement the way they are. The art really is in combining the historical data that the systems of record have with the continual influx of data that we get through the live channels of social media, and then, using the right level of predictive analytics to get information.

I said a lot in there just to characterize the Bimodal IT slightly differently, making the point that what really is at play, Dana, is a new style of thinking. It’s a new style of addressing the problems that have been around for a while.

But a new way to address the same problems—new solutions, a new way of coming up with solution models—would address the business problems at hand. That requires an external perspective. That requires service providers, consulting professionals who have worked with multiple customers, perhaps other customers in the same industry, and other industries, with a healthy dose of innovation.

That’s where this is a new opportunity for professional services to work with the CxOs, the enterprise architects, and the CIOs to exercise the right business decisions with the right level of governance.

Because of the challenges with the coexistence of both systems of record and systems of engagement and harvesting the right information to make the right business decision, there is a significant opportunity for consulting services to be provided to enterprises today.

Drilling down

Gardner: Before we close off I wanted to just drill down on one thing, Nadhan, that you brought up, which is that ability to measure and know and then analyze and compare.

One of the things that we’ve seen with IT developing over the past several years as well is that the big data capabilities have been applied to all the information coming out of IT systems so that we can develop a steady state and understand those systems of record, how they are performing, and compare and contrast in ways that we couldn’t have before.

So, on our last topic for today, David Janson: how important is that measuring capability in a governance context for organizations that want to pursue Bimodal IT but keep it governed and keep it from spinning out of control? What should they be thinking about putting in place in terms of the proper big data, analytics, measurement, and visibility apparatus and capabilities?

Janson: That’s a really good question. One aspect of this is that, when I talk with people about the ideas around governance, it’s not unusual that the first idea people have about governance is the compliance or policing aspect it can play. That sounds like interference, sand in the gears, but it really should be the other way around.

A governance framework should actually make it very clear how people should be doing things, what’s expected as the result at the end, and how things are checked and measured across time at early stages and later stages, so that people are very clear about how things are carried out and what they are expected to do. So, if someone does use a governance-compliance process to see if things are working right, there is no surprise, there is no slowdown. They actually know how to quickly move through that.

Good governance has communicated that well enough, so that people should actually move faster rather than slower. In other words, there should be no surprises.

Measuring things is very important, because if you haven’t established the objectives that you’re after and some metrics to help you determine whether you’re meeting those, then it’s kind of an empty suit, so to speak, with governance. You express some ideas that you want to achieve, but you have no way of knowing or answering the question of how we know if this is doing what we want to do. Metrics are very important around this.

We capture metrics within processes. Then, for the end result, is it actually producing the effects people want? That’s pretty important.

One of the things that we have built into the Cloud Governance Framework is some idea about what are the outcomes and the metrics that each of these process pairs should have in mind. It helps to answer the question, how do we know? How do we know if something is doing what we expect? That’s very, very essential.
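As an illustration of that idea—each governance process paired with outcome metrics, so that "how do we know?" has a concrete answer—here is a minimal sketch in Python. The metric names and targets are hypothetical and are not taken from the Cloud Governance Framework itself:

```python
from dataclasses import dataclass

@dataclass
class ProcessMetric:
    """One outcome metric attached to a governance process pair."""
    name: str
    target: float    # the objective the organization committed to
    observed: float  # what measurement actually shows

    def met(self) -> bool:
        # "How do we know?" -- compare observation to objective.
        return self.observed >= self.target

def governance_report(metrics):
    """Answer, for each metric, whether the objective is being met."""
    return {m.name: m.met() for m in metrics}

# Hypothetical metrics for two process pairs.
metrics = [
    ProcessMetric("provisioning_within_sla_pct", target=95.0, observed=97.2),
    ProcessMetric("changes_with_approval_pct", target=100.0, observed=92.5),
]
report = governance_report(metrics)
```

Without the `target` values there is nothing to compare against, which is Janson's "empty suit" point: objectives and metrics must be defined before measurement means anything.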

Gardner: I am afraid we’ll have to leave it there. We’ve been examining the role of cloud governance and enterprise architecture and how they work together in the era of increasingly fragmented IT. And we’ve seen how The Open Group Cloud Governance Initiatives and Working Groups can help allow for the benefits of Bimodal IT, but without IT fragmentation necessarily leading to a fractured or broken business process around technology and innovation.

This special Thought Leadership Panel Discussion comes to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. And it’s not too late to register on The Open Group’s website or to follow the proceedings online and via social media such as Twitter, LinkedIn and Facebook.

So, thank you to our guests today. We’ve been joined by Dr. Chris Harding, Director for Interoperability and Cloud Computing Forum Director at The Open Group; David Janson, Executive IT Architect and Business Solutions Professional with the IBM Industry Solutions Team for Central and Eastern Europe and a leading contributor to The Open Group Cloud Governance Project, and Nadhan, HP Distinguished Technologist and Cloud Advisor and Co-Chairman of The Open Group Cloud Governance Project.

And a big thank you, too, to our audience for joining this special Open Group-sponsored discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this thought leadership panel discussion series. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android.

Sponsor: The Open Group

Transcript of an Open Group discussion/podcast on the role of Cloud Governance and Enterprise Architecture and how they work together in the era of increasingly fragmented IT. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Join the conversation! @theopengroup #ogchat #ogBWI


Managing Your Vulnerabilities: A Q&A with Jack Daniel

By The Open Group

With hacks and security breaches becoming more prevalent every day, it’s incumbent on organizations to determine the areas where their systems may be vulnerable and take actions to better handle those vulnerabilities. Jack Daniel, a strategist with Tenable Network Security who has been active in securing networks and systems for more than 20 years, says that if companies start implementing vulnerability management on an incremental basis and use automation to help them, they can hopefully reach a point where they’re not constantly handling vulnerability crises.

Daniel will be speaking at The Open Group Baltimore event on July 20, presenting on “The Evolution of Vulnerability Management.” In advance of that event, we recently spoke to Daniel to get his perspective on hacker motivations, the state of vulnerability management in organizations today, the human problems that underlie security issues and why automation is key to better handling vulnerabilities.

How do you define vulnerability management?

Vulnerability detection is where this started. News would break years ago of some vulnerability, some weakness in a system—a fault in the configuration or a software bug that allows bad things to happen. We used to do a really hit-or-miss job of it; it didn’t have to be rushed at all. Depending on where you were or what you were doing, you might not be targeted—it could take months after something was released before bad people would start doing things with it. As criminals discovered there was money to be made in exploiting vulnerabilities, the attackers became motivated by more than just notoriety. The early hacker scene that was disruptive or did criminal things was largely motivated by notoriety. As people realized they could make money, it became a problem, and that’s when we turned to management.

You have to manage finding vulnerabilities, detecting vulnerabilities and resolving them, which usually means patching but not always. There are a lot of ways to resolve or mitigate without actually patching, but the management aspect is discovering all the weaknesses in your environment—and that’s a really broad brush, depending on what you’re worried about. That could be you’re not compliant with PCI if you’re taking credit cards or it could be that bad guys can steal your database full of credit card numbers or intellectual property.

It’s finding all the weaknesses in your environment, the vulnerabilities, tracking them, resolving them and then continuing to track as new ones appear to make sure old ones don’t reappear. Or if they do reappear, what in your corporate process is allowing bad things to happen over and over again? It’s continuously doing this.
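That find–track–resolve–re-check loop can be sketched as a small data structure. This is a hypothetical illustration of the lifecycle Daniel describes, not any particular product’s model; the CVE identifier and host name are made up:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    vuln_id: str          # e.g. a CVE identifier
    host: str
    status: str = "open"  # open -> resolved; may reopen
    times_seen: int = 1   # reappearance counter

class VulnTracker:
    """Track findings across repeated scans; flag reappearances."""
    def __init__(self):
        self.findings = {}

    def ingest_scan(self, results):
        """results: iterable of (vuln_id, host) pairs from one scan.

        Returns the keys of findings that were resolved but have
        reappeared -- a signal that some corporate process is letting
        the same bad thing happen over and over again.
        """
        reappeared = []
        for vuln_id, host in results:
            key = (vuln_id, host)
            f = self.findings.get(key)
            if f is None:
                self.findings[key] = Finding(vuln_id, host)
            elif f.status == "resolved":
                f.status = "open"
                f.times_seen += 1
                reappeared.append(key)
        return reappeared

    def resolve(self, vuln_id, host):
        self.findings[(vuln_id, host)].status = "resolved"
```

Each scan cycle runs `ingest_scan`; anything in its return value points at a process failure rather than a one-off patching task.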

The pace of bad things has accelerated, the motivations of the actors have forked in a couple of directions, and to do a good job of vulnerability management really requires gathering data of different qualities and being able to make assessments about it and then applying what you know to what’s the most effective use of your resources—whether it’s time or money or employees to fix what you can.

What are the primary motivations you’re seeing with hacks today?

They fall into a couple big buckets, and there are a whole bunch of them. One common one is financial—these are the people that are stealing credit cards, stealing credentials so they can do bank wire fraud, or some other way to get at money. There are a variety of financial motivators.

There are also some others, depending on who you are. There’s the so-called ‘Hacktivist,’ which used to be a thing in the early days of hacking but has now become more widespread. These are folks like the Syrian Electronic Army, or various Turkish groups that through the years have done website defacements. These people are not trying to steal money; they’re trying to embarrass you, they’re trying to promote a message. It may be, as with the Syrian Electronic Army, that they’re trying to support the ruler of whatever’s left of Syria. So there are political motivations. Anonymous did a lot of destructive things—or people calling themselves ‘Anonymous’—that’s a whole other conversation, but people do things under the banner of Anonymous as hacktivism that struck out at corporations they thought were unjust or unfair, or they did political things.

Intellectual property theft would be the third big one, I think. Generally the finger is pointed at China, but it’s unfair to say they’re the only ones stealing trade secrets. People within your own country or your own market or region are stealing trade secrets continuously, too.

Those are the three big ones—money, hacktivism and intellectual property theft. It trickles down. One of the things that has come up more often over the past few years is that people get attacked because of who they’re connected to. It’s a smaller portion of it and one that’s overlooked, but it is a message that people need to hear. For example, in the Target breach, it is claimed that the initial entry point was through the heating and air conditioning vendor’s computer systems and their access to the HVAC systems inside a Target facility, and, from there, they were able to get through. There are other stories of organizations that have been targeted because of who they do business with. That’s usually a case of trying to attack somebody that’s well-secured, where there’s not an easy way in, so you find out who does their heating and air-conditioning or who manages their remote data centers or something, and you attack those people and then come in.

How is vulnerability management different from risk management?

It’s a subset of risk management. Risk management, when done well, gives a scope of a very large picture and helps you drill down into the details, but it has to factor in things above and beyond the more technical details of what we more typically think of as vulnerability management. Certainly they work together—you have to find what’s vulnerable and then you have to make assessments as to how you’re going to address your vulnerabilities, and that ideally should be done in a risk-based manner. Because as much as reports like the Verizon Data Breach Report say you have to fix everything, the reality is that not only can we not fix everything, we can’t fix a lot of it immediately, so you really have to prioritize things. You have to have information to prioritize things, and that’s a challenge for many organizations.

Your session at The Open Group Baltimore event is on the evolution of vulnerability management—where does vulnerability management stand today and where does it need to go?

One of my opening slides sums it up—it used to be easy, and it’s not anymore. It’s like a lot of other things in security, it’s sort of a buzz phrase that’s never really taken off like it needs to at the enterprise level, which is as part of the operationalization of security. Security needs to be a component of running your organization and needs to be factored into a number of things.

The information security industry has a challenge and history of being a department in the middle and being obstructionist, which I think is well deserved. But the real challenge is to cooperate more. We have to get a lot more information, which means working well with the rest of the organization, particularly networking and systems administrators, and having conversations with them about the data and the environment and sharing what we discover as problems without being the judgmental know-it-all security people. That is our stereotype. The adversaries are often far more cooperative than we are. In a lot of criminal forums, people will be fairly supportive of other people in their community—they’ll go up to the trade-secret level and stop—but if somebody’s not cutting into their profits, rumor is these people are cooperating and collaborating.

Within an organization, you need to work cross-organizationally. Information sharing is a very real piece of it. That’s not necessarily vulnerability management, but when you step into risk analysis and how you manage your environment, knowing what vulnerabilities you have is one thing, but knowing what vulnerabilities people are actually going to do bad things to requires information sharing, and that’s an industry-wide challenge. It’s a challenge within our organizations, and outside it’s a real challenge across the enterprise, across industry, across government.

Why has that happened in the Security industry?

One is the stereotype—a lot of teams are very siloed, a lot of teams have their fiefdoms—that’s just human nature.

Another problem that everyone in security and technology faces is that we talk to all sorts of people and have all sorts of great conversations, learn amazing things, see amazing things and a lot of it is under NDA, formal or informal NDAs. And if it weren’t for friend-of-a-friend contacts a lot of information sharing would be dramatically less. A lot of the sanitized information that comes out is too sanitized to be useful. The Verizon Data Breach Report pointed out that there are similarities in attacks but they don’t line up with industry verticals as you might expect them to, so we have that challenge.

Another serious challenge we have in security, especially in the research community, is that there’s total distrust of the government. The Snowden revelations have really severely damaged the technology and security community’s faith in the government and willingness to cooperate with it. Further damaging that are the discussions about criminalizing many security tools—because the people in Congress don’t understand these things. We have a president who claims to be technologically savvy, and he is more than any before him, but he still doesn’t get it, and he’s got advisors that don’t get it. So we have a great distrust of the government, which has been earned, despite the fact that any one of us in the industry knows folks at various agencies—whether the FBI or intelligence agencies or military—who are fantastic people—brilliant, hardworking, patriotic—but the entities themselves are political entities, and that causes a lot of distrust in information sharing.

And there are just a lot of people who have the idea that they want proprietary information. This is not unique to security. There are a couple of different types of managers—there are people in organizations who strive to make themselves irreplaceable. As a manager, you’ve got to get those people out of your environment, because they’re just poisonous. There are other people who strive to make it so that they can walk away at any time and it will be a minor inconvenience for someone to pick up the notes and run. Those are the type of people you should hang onto for dear life, because they share information, they build knowledge, they build relationships. That’s just human nature. In security I don’t think there are enough people who are about building those bridges, building those communications paths, sharing what they’ve learned and trying to advance the cause. I think there are still too many who hoard information as a tool or a weapon.

Security is fundamentally a human problem amplified by technology. If you don’t address the human factors in it, you can have technological controls, but it still has to be managed by people. Human nature is a big part of what we do.

You advocate for automation to help with vulnerability management. Can automation catch the threats when hackers are becoming increasingly sophisticated and use bots themselves? Will this become a war of bot vs. bot?

A couple of points about automation. Our adversaries are using automation against us. We need to use automation to fight them, and we need to use as much automation as we can rely on to improve our situation. But at some point, we need smart people working on hard problems, and that’s not unique to security at all. The more you automate, at some point in time you have to look at whether your automation processes are improving things or not. If you’ve ever seen a big retailer or grocery store that has a person working full-time to manage the self-checkout line, that’s failed automation. That’s just one example of failed automation. Or if there’s a power or network outage at a hospital where everything is regulated and medications are regulated and then nobody can get their medications because the network’s down. Then you have patients suffering until somebody does something. They have manual systems that they have to fall back on and eventually some poor nurse has to spend an entire shift doing data entry because the systems failed so badly.

Automation doesn’t solve the problems—you have to automate the right things in the right ways, and the goal is to do the menial tasks in an automated fashion so you have to spend fewer human cycles. As a system or network administrator, you run into the same repetitive tasks over and over, and you write scripts to do them or buy a tool to automate them. The same applies here—you want to filter through as much of the data as you can, because one of the things that modern vulnerability management requires is a lot of data. It requires a ton of data, and it’s very easy to fall into an information-overload situation. Where the tools can help is by filtering it down and reducing the amount of stuff that gets put in front of people to make decisions about, and that’s challenging. It’s a balance that requires continuous tuning—you don’t want it to miss anything, so you want it to tell you everything that’s questionable, but it can’t throw too many things at you that aren’t actually problems, or people give up and ignore the problems. That was allegedly part of a couple of the major breaches last year. Alerts were triggered, but nobody paid attention because they get tens of thousands of alerts a day as opposed to one big alert. One alert is hard to ignore—40,000 alerts and you just turn it off.
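The filtering-and-rollup idea—surfacing one big alert instead of 40,000 small ones—can be illustrated with a hedged sketch; the rule names, severities, and threshold here are hypothetical:

```python
from collections import Counter

def summarize_alerts(alerts, noise_threshold=100):
    """Collapse a flood of raw alerts into per-rule summary lines.

    alerts: iterable of (rule_name, severity) tuples.
    Rules firing more than noise_threshold times are rolled up into a
    single line rather than shown individually, so one genuinely
    serious alert stands out instead of drowning in noise.
    """
    alerts = list(alerts)
    counts = Counter(rule for rule, _ in alerts)
    high = {rule for rule, sev in alerts if sev == "high"}
    summary = []
    for rule, n in counts.most_common():
        tag = "HIGH" if rule in high else "info"
        if n > noise_threshold:
            summary.append(f"[{tag}] {rule}: {n} occurrences (rolled up)")
        else:
            summary.append(f"[{tag}] {rule}: {n}")
    return summary
```

The `noise_threshold` is the knob Daniel describes as needing continuous tuning: too low and real problems get rolled up, too high and people face the 40,000-alert wall again.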

What’s the state of automated solutions today?

It’s pretty good if you tune it, but it takes maintenance. There isn’t an Easy Button, to use the Staples tagline, and anyone promising an Easy Button is probably not being honest with you. But if you understand your environment and tune the vulnerability management and patch management tools (and a lot of them are administrative tools), you can automate a lot of it and reduce the pain dramatically. It does require a couple of very hard first steps. The first step in all of it is knowing what’s in your environment, knowing what’s crucial in your environment, and understanding what you have, because if you don’t know what you’ve got, you won’t be able to defend it well. It is pretty good, but it does take a fair amount of effort to get to where you can make the best of it. Some organizations are certainly there, and some are not.

What do organizations need to consider when putting together a vulnerability management system?

One word: visibility. They need to understand that they need to be able to see and know what’s in the environment—everything that’s in their environment—and get good information on those systems. There needs to be visibility into a lot of systems that you don’t always have good visibility into. That means your mobile workforce with their laptops, that means mobile devices that are on the network, which are probably somewhere whether they belong there or not, that means understanding what’s on your network that’s not being managed actively, like Windows systems that might not be in active directory or RedHat systems that aren’t being managed by satellite or whatever systems you use to manage it.

Knowing everything that’s in the environment and its roles in the system—that’s a starting point. Then understanding what’s critical in the environment and how to prioritize that. The first step is really understanding your own environment and having visibility into the entire network—and that can extend to Cloud services if you’re using a lot of Cloud services. One of the conversations I’ve been having lately, since the latest Akamai report, was about IPv6. Most Americans are ignoring it, even at the corporate level, and a lot of folks think you can still ignore it because we’re still routing most of our traffic over the IPv4 protocol. But IPv6 is active on just about every network out there. It’s just whether or not we actively measure and monitor it. The Akamai report said something a lot of folks have been saying for years: this is really a problem. Even though the adoption is pretty low, what you see if you start monitoring for it is people communicating over IPv6, whether intentionally or unintentionally—often unintentionally, because everything’s enabled—so there’s often a whole swath of your network that people are ignoring. And you can’t have those huge blind spots in the environment, you just can’t. The vulnerability management program has to take into account that sort of overall view of the environment. Then once you’re there, you need a lot of help to solve the vulnerabilities, and that’s back to the human problem.
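At its core, the visibility point reduces to a set difference: anything observed on the wire (including IPv6 addresses) that no management system knows about is a blind spot. A minimal sketch, with made-up addresses:

```python
def find_blind_spots(discovered, managed):
    """Hosts seen on the network but absent from management inventories.

    discovered: addresses observed by active scans or passive
                monitoring, IPv4 and IPv6 alike -- an unmonitored v6
                address is itself a blind spot.
    managed:    addresses known to Active Directory, Satellite, a
                CMDB, or whatever systems manage the environment.
    """
    return sorted(set(discovered) - set(managed))
```

Anything this returns is exactly the "not being managed actively" population the paragraph above warns about, and the natural starting point for a vulnerability management program.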

What should Enterprise Architects look for in an automated solution?

It really depends on the corporate need. They need to figure out whether or not the systems they’re looking at are going to find most or all of their network and discover all of the weaknesses, and then help them prioritize those. For example, can your systems do vulnerability analysis on newly discovered systems with little or no input? Can you automate detection? Can you automate confirmation of findings somehow? Can you interact with other systems? There’s a piece, too—what does the rest of your environment look like? Are there ways into it? Does your vulnerability management system work with or understand all the things you’ve got? What if you have some unique network gear whose vulnerabilities your vulnerability management system isn’t going to tell you about? There are German companies that like to use operating systems other than Windows and garden-variety Linux distributions. Does it work in your environment, will it give you good coverage in your environment, and can it take a lot of the mundane out of it?

How can companies maintain Boundaryless Information Flow™–particularly in an era of the Internet of Things–but still manage their vulnerabilities?

The challenge is that a lot of people push back against high information flow because they can’t make sense of it; they can’t ingest the data, they can’t do anything with it. It’s the challenge of accepting and sharing a lot of information. It doesn’t matter whether it’s vulnerability management or log analysis or patch management or systems administration or backup or anything—the challenge is that networks have systems that share a lot of data, but until you add context, it’s not really information. What we’re interested in for vulnerability management is different from what your automated backup is interested in. The challenge is having systems that can share information outbound, share information inbound, and then act rationally on only that which is relevant to them. That’s a real challenge, because information overload is a problem people have been complaining about for years, and it’s accelerating at a stunning rate.

You say Internet of Things, and I get a little frustrated when people treat that as a monolith, because at one end an Internet-enabled microwave or stove has one set of challenges—they’re built on garbage commodity hardware with no maintenance ability at all. There are other things that people consider Internet of Things because they’re Internet-enabled and they’re running Windows or a more mature Linux stack that has full management, and somebody’s managing them. So there’s a huge gap between the managed IoT and the unmanaged, and the unmanaged is just adding low-power machines to environments, which will just amplify things like distributed denial of service (DDoS). As it is, a lot of consumers have home routers that are being used to attack other people and do DDoS attacks. A lot of the commercial stuff is being cleaned up, but a lot of the inexpensive home routers that people have are being used, and if those are misused or misconfigured or attacked with worms that can change their settings, everything on the network can be made to participate.

The thing with the evolution of vulnerability management is that we’re trying to drive people to a continuous monitoring situation. That’s where the federal government has gone, that’s where a lot of industries are, and it’s a challenge to go from infrequent or even frequent big scans to watching things continuously. The key is to take incremental steps. Instead of having a big massive vulnerability project every quarter or every month, the goal is to get down to where it’s part of the routine, where you’re taking small remediation measures on a daily or regular basis. There are still going to be times when Microsoft or Oracle comes out with a big patch that will require a bigger tool-up, but you’re going to need to do this continuously and reach the point where you do small pieces of the task continuously rather than one big task. The goal is to get to where you’re doing this continuously, so you’re blowing out birthday candles rather than putting out forest fires.
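The shift from quarterly fire drills to continuous, incremental remediation can be sketched as slicing a risk-prioritized backlog into small daily batches; the finding names and scores here are hypothetical:

```python
def daily_batches(findings, per_day=5):
    """Slice a prioritized backlog into small daily work packets.

    findings: list of (name, severity_score) pairs, where a higher
    score means more risk. Rather than one massive quarterly project,
    fix the top `per_day` items each day, highest risk first.
    """
    ordered = sorted(findings, key=lambda f: f[1], reverse=True)
    return [ordered[i:i + per_day] for i in range(0, len(ordered), per_day)]
```

The risk-based ordering matters as much as the batching: since not everything can be fixed immediately, each small daily packet should always contain the riskiest remaining items.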

Jack Daniel, a strategist at Tenable Network Security, has over 20 years of experience in network and system administration and security, and has worked in a variety of practitioner and management positions. A technology community activist, he supports several information security and technology organizations. Jack is a co-founder of Security BSides, serves on the boards of three Security BSides non-profit corporations, and helps organize Security BSides events. Jack is a regular, featured speaker at ShmooCon, SOURCE Boston, DEF CON, RSA and other marquee conferences. Jack is a CISSP, holds CCSK, and is a Microsoft MVP for Enterprise Security.

Join the conversation – @theopengroup #ogchat #ogBWI


The Open Group Healthcare Forum Publishes First Whitepaper and Announces New Member

By The Open Group

The Open Group Healthcare Forum has published its first whitepaper, “Enhancing Health Information Exchange with the FHIM,” which examines the Federal Health Information Model (FHIM) and its efforts to bring semantic interoperability to the Healthcare industry.

The document was developed in response to a 2014 request to the Healthcare Forum made by the Federal Health Architecture program (FHA), an E-Government Line of Business initiative managed by ONC. The Forum was asked to evaluate the FHIM and to detail its potential usefulness to the wider Healthcare ecosystem. In response, The Healthcare Forum developed a whitepaper that highlights the strengths of the FHIM and the challenges it faces. Contributors came from organizations based across the globe including HP (US), Dividend Group (Canada), Sykehuspartner (Norway), and Philips Medical Systems (Germany).

The FHIM is a key component of a multimillion dollar effort to enable data sharing across the Healthcare enterprise. It has relevance worldwide, as US federal agencies are among the leading markets for healthcare technology and processes. By identifying examples of FHIM adoption, understanding barriers to its adoption, and relating the FHIM to other major efforts to achieve Healthcare interoperability, the white paper reflects the Healthcare Forum’s support of Boundaryless Information Flow™. The Forum continues to be engaged in this important work and expects to publish new insights in the second white paper in this series, planned for late 2015. The full whitepaper is available for download here.

At the same time, The Open Group Healthcare Forum announced The Office of the National Coordinator for Health Information Technology (ONC, part of the U.S. Department of Health and Human Services) as its latest key member.

FHA Director Gail Kalbfleisch commented on the announcement, “We look forward to this membership opportunity with the Healthcare Forum, and becoming a part of the synergy that comes from collaborating with other members.”

Allen Brown, President & CEO of The Open Group, also welcomed the news: “We are delighted to welcome the ONC to The Open Group Healthcare Forum following the evaluation of the FHIM by our members. The efficient and effective flow of secure healthcare information through healthcare systems is a critical goal of all who are engaged in that industry and is core to the vision of The Open Group, which is Boundaryless Information Flow™, achieved through global interoperability in a secure, reliable and timely manner.”

Leave a comment

Filed under Boundaryless Information Flow™, Healthcare, whitepaper

The Open Group Madrid 2015 – Day Two Highlights

By The Open Group

On Tuesday, April 21, Allen Brown, President & CEO of The Open Group, began the plenary presenting highlights of the work going on in The Open Group Forums. The Open Group is approaching 500 memberships in 40 countries.

Big Data & Open Platform 3.0™ – a Big Deal for Open Standards

Ron Tolido, Senior Vice President of Capgemini’s Group CTO network and Open Group Board Member, discussed the digital platform as the “fuel” of enterprise transformation today, citing a study published in the book “Leading Digital.” The DNA of companies that successfully achieve transformation has the following factors:

  • There is no escaping mastering digital technology – this is an essential part of leading transformation, and CEO leadership is a success factor.
  • You need a sustainable technology platform embraced by both the business and technical functions.

Mastering digital transformation shows a payoff in financial results, both from the standpoint of efficient revenue generation and maintaining and growing market share. The building blocks of digital capability are:

  • Customer Experience
  • Operations
  • New business models

Security technology must move from being a constraint or “passion killer” to being a driver for digital transformation. Data handling must also change its model – the old structured and siloed approach to managing data no longer works, resulting in business units bypassing or ignoring the “single source” data repository. He recommended the “Business Data Lake” as an approach to overcoming this, and suggested it should be considered as an open standard as part of the work of the Open Platform 3.0 Forum.

In the Q&A session, Ron suggested establishing hands-on labs to help people embrace digital transformation, and proposed “DataOps” as an analogy to DevOps for business data.

Challengers in the Digital Era

Mariano Arnaiz, Chief Information Officer of the CESCE Group, presented the experiences of CESCE in facing the challenges of:

  • Changing regulation
  • Changing consumer expectations
  • Changing technology
  • Changing competition and market entrants based on new technology

The digital era represents a new language for many businesses, which CESCE faced during the financial crisis of 2008. They chose the “path less traveled” of becoming a data-driven company, using data and analytics to improve business insight, predict behavior and act on it. CESCE receives over 8,000 risk analysis requests per day; using analytics, over 85% are answered in real time, where it used to take more than 20 days. Analytics has given them unique competitive products such as variable pricing and targeted credit risk coverage while reducing their loss ratio.

To drive transformation, the CIO must move beyond delivering IT services that support the business to helping drive business process improvement. Aligning IT to the business is no longer enough for EA – EA must also help align the business to transformational technology.

In the Q&A, Mariano said that the approach of using analytics and simulation for financial risk modeling could be applied to some cybersecurity risk analysis cases.

Architecting the Internet of Things

Kary Främling, CEO of the Finnish company ControlThings and Professor of Practice in Building Information Modeling (BIM) at Aalto University, Finland, gave a history of the Internet of Things (IoT), the standards landscape, issues of security in IoT, and real-world examples.

IoT today is characterized by an increasing number of sensors and devices, each pushing large amounts of data to its own silo, with communication limited to its own network. Gaining benefit from IoT requires standards that take a systems view: horizontal integration among IoT devices and sensors, data collected as and when needed, and two-way data flows between trusted entities within a vision of Closed-Loop Lifecycle Management. These standards are being developed in The Open Group Open Platform 3.0 Forum’s IoT work stream. Published standards such as the Open Messaging Interface (O-MI) and Open Data Format (O-DF) allow discovery and interoperability of sensors using open protocols, similar to the way HTTP and HTML enable interoperability on the Web.
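To make the HTTP/HTML analogy concrete, here is a minimal, hypothetical Python sketch of building and reading an O-DF-style payload. It borrows only the broad Objects/Object/InfoItem hierarchy idea from the published standard; the element details, namespaces, and a real O-MI message envelope are omitted, so this is an illustration of the open-XML approach rather than a conformant implementation:

```python
import xml.etree.ElementTree as ET

# Build a simplified O-DF-style payload: a hierarchy of Objects,
# where each Object is identified by an <id> and carries named InfoItems.
def build_odf(sensor_id: str, readings: dict) -> str:
    objects = ET.Element("Objects")
    obj = ET.SubElement(objects, "Object")
    ET.SubElement(obj, "id").text = sensor_id
    for name, value in readings.items():
        item = ET.SubElement(obj, "InfoItem", {"name": name})
        ET.SubElement(item, "value").text = str(value)
    return ET.tostring(objects, encoding="unicode")

# Parse a payload back into a dict, the way a consuming service
# might discover any sensor's InfoItems without vendor-specific code.
def parse_odf(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    result = {}
    for obj in root.iter("Object"):
        sensor = obj.findtext("id")
        items = {i.get("name"): i.findtext("value")
                 for i in obj.iter("InfoItem")}
        result[sensor] = items
    return result

payload = build_odf("sensor-001", {"Temperature": 3.5})
print(parse_odf(payload))
# {'sensor-001': {'Temperature': '3.5'}}
```

The point of the design is the same one the paragraph makes about the Web: because the structure is plain, open XML rather than a proprietary binary format, any trusted party can discover and consume another vendor's sensor data.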

Kary addressed the issues of security and privacy in IoT, noting this is an opportunity for The Open Group to use our EA and Security work to assess these issues at the scale IoT will bring.

Comments Off on The Open Group Madrid 2015 – Day Two Highlights

Filed under big data, Boundaryless Information Flow™, Cybersecurity, Enterprise Architecture, Internet of Things

The Open Group Madrid 2015 – Day One Highlights

By The Open Group

On Monday, April 20, Allen Brown, President & CEO of The Open Group, welcomed 150 attendees to the Enabling Boundaryless Information Flow™ summit held at the Madrid Eurobuilding Hotel.  Following are highlights from the plenary:

The Digital Transformation of the Public Administration of Spain – Domingo Javier Molina Moscoso

Domingo Molina, the first Spanish national CIO, said that governments must transform digitally to meet public expectations, stay nationally competitive, and control costs – the common transformation theme of doing more with less. Their CORA commission studied what commercial businesses did, and saw the need for an ICT platform as part of the reform, along with coordination and centralization of ICT decision-making across agencies.

Three Projects:

  • Telecom consolidation – €125M savings, reduction in infrastructure and vendors
  • Reduction in number of data centers
  • Standardizing and strengthening the security platform for central administration – only possible because of the consolidation of telecom.

The Future: Increasing use of mobile, social networks, online commercial services such as banking – these are the expectations of young people. The administration must therefore be in the forefront of providing digital services to citizens. They have set a transformation target of having citizens being able to interact digitally with all government services by 2020.

Q&A:

  • Any use of formal methods for transformation, such as EA? They looked at other countries and saw models such as outsourcing. They are taking a combined approach of reusing their own experts and externalizing.
  • How difficult has it been to achieve savings in Europe, given labor laws? The model is to re-assign people to higher-value tasks.
  • How do you measure progress? Each unit has its own ERP for IT governance – no unified reporting. The CIO requests and consolidates data. They are working on a common IT tool to do this.

An Enterprise Transformation Approach for Today’s Digital Business – Fernando García Velasco

Computing has moved from tabulating systems to the internet, and is moving into the era of a “third platform” of Cloud, Analytics, Mobile and Social (CAMS) and cognitive computing. This creates a “perfect storm” for disruption of enterprise IT delivery.

  • 58% say SMAC will reduce barriers to entry
  • 69% say it will increase competition
  • 41% expect this competition to come from outside traditional market players

These trends are being collected and consolidated in The Open Group Open Platform 3.0™ standard.

He sees the transformation happening in three ways:

  1. Top-down – a transformation view
  2. Meet in the middle: Achieving innovation through EA
  3. Bottom-up: the normal drive for incremental improvement

Gartner: EA is the discipline for leading enterprise response to disruptive forces. IDC: EA is mandatory for managing transformation to third platform.

EA Challenges & Evolution – a Younger Perspective

Steve Nunn, COO of The Open Group and CEO of the Association of Enterprise Architects (AEA), noted the AEA is leading the development of EA as a profession, and is holding the session to recognize the younger voices joining the EA profession. He introduced the panelists: Juan Abal, Itziar Leguinazabal, Mario Gómez Velasco, Daniel Aguado Pérez, Ignacio Macias Jareño.

The panelists talked about their journeys as EAs, noting that their training focused on development with little exposure to EA or Computer Science concepts. Schools aren’t currently very interested in teaching EA, so it is hard to get a start. Steve Nunn noted that how to enter EA as a profession is a worldwide concern. The panelists said they started looking at EA as a way of gaining a wider perspective on the development or administrative projects they were working on. Mentoring is important, and there is a challenge in learning about the business side when coming from a technical world. Juan Abal said such guidance and mentoring by senior architects is one of the benefits the AEA chapter offers.

Q: What advice would you give to someone entering into the EA career? A: If you are starting from a CS or engineering perspective, you need to start learning about the business. Gain a deep knowledge of your industry. Expect a lot of hard work, but it will have the reward of having more impact on decisions. Q: EA is really about business and strategy. Does the AEA have a strategy for making the market aware of this? A: The Spanish AEA chapter focuses on communicating that EA is a mix, and that EAs need to develop business skills. It is a concern that young architects are focused on IT aspects of EA, and how they can be shown the path to understand the business side.

Q: Should EA be part of the IT program or the CS program in schools? A: We have seen around the world a history of architects coming from IT, and only a few universities have specific EA programs; some offer it at the postgraduate level. The AEA is trying globally to raise awareness of the need for EA education. Continuing education as part of a career development path is a good way to manage the breadth of skills a good EA needs; organizations should also be aware of the levels of The Open Group’s Open CA certification.

Q: If EA is connected to business, should EAs be specialized to the vertical sector, or should EA be business agnostic? A: Core EA skills are industry-agnostic, and these need to be supplemented by industry-specific reference models. Methodology, Industry knowledge and interpersonal skills are all critical, and these are developed over time.

Q: Do you use EA tools in your job? A: Not really – the experience to use complex tools comes over time.

Q: Are telecom companies adopting EA? A: Telecom companies are adopting standard reference architectures. This sector has not made much progress in EA, though it is critical for transformation in the current market. Time pressure in a changing market is also a barrier.

Q: Is EA being grown in-house or outsourced? A: We are seeing increased uptake among end-user companies in using EA to achieve transformation – this is happening across sectors and is a big opportunity in Spain right now.

Join the conversation! @theopengroup #ogMAD

Comments Off on The Open Group Madrid 2015 – Day One Highlights

Filed under Boundaryless Information Flow™, Enterprise Architecture, Internet of Things, Open Platform 3.0, Professional Development, Standards, Uncategorized

The Open Group Madrid Summit 2015 – An Interview with Steve Nunn

By The Open Group

The Open Group will be hosting its Spring 2015 summit in Madrid from April 20-23. Focused on Enabling Boundaryless Information Flow™, the summit will explore the increasing digitalization of business today and how Enterprise Architecture will be a critical factor in helping organizations to adapt to the changes that digitalization and rapidly evolving technologies are bringing.

In advance of the summit, we spoke to Steve Nunn, Vice President and COO of The Open Group and CEO of the Association of Enterprise Architects (AEA) about two speaking tracks he will be participating in at the event—a panel on the challenges facing young Enterprise Architects today, and a session addressing the need for Enterprise Architects to consider their personal brand when it comes to their career path.

Tell us about the panel you’ll be moderating at the Madrid Summit on EA Challenges.

The idea for the panel really came from the last meeting we had in San Diego. We had a panel of experienced Enterprise Architects, including John Zachman, giving their perspectives on the state of Enterprise Architecture and answering questions from the audience. That gave us the idea: we’ve heard from the experienced architects, so what if we also heard from younger folks in the industry, those newer to the profession than the previous panel? We decided to put together a panel of young architects, ideally local to Madrid, to get what we hope will be a different set of perspectives on what they have to deal with on a day-to-day basis, what they see as the challenges for the profession, and what’s working well and less well. In conjunction with the local Madrid chapter of the AEA, we put the panel together. I believe it’s a panel of four young architects, plus Juan Abal, the chair of the local chapter in Madrid, who helped put it together, with me moderating. The Madrid chapter of the AEA has been very helpful in putting together the summit in Madrid and with details on the ground, and we thank them for all their help.

We’ll be putting some questions together ahead of time, and there will be questions from the audience. We hope it will be a different set of perspectives from folks entering the profession and in a different geography as well, so there may be some things that are particular to practicing Enterprise Architecture in Spain which come out as well. It’s a long panel—over an hour—so, hopefully, we’ll be able to not just hit things at a cursory level, but get into more detail.

What are some of the challenges that younger Enterprise Architects are facing these days?

We’re hoping to learn what the challenges are for those individuals, and we’re also hoping to hear what they think is attracting people to the profession. That’s a part that I’m particularly interested in. In terms of what I think going in to the panel session, the thing I hear about the most from young architects in the profession is about the career path. What is the career path for Enterprise Architects? How do I get in? How do I justify the practice of Enterprise Architecture in my organization if it doesn’t exist already? And if it does exist, how do I get to be part of it?

In the case of those individuals coming out of university—what are the relevant qualifications and certifications that they might be looking at to give themselves the best shot at a career in Enterprise Architecture? I expect there will be a lot of discussion about getting into Enterprise Architecture and how you best position and equip yourself to be an Enterprise Architect.

Were there things that came out of the San Diego session that will be relevant to the Madrid panel?

There were certainly some things discussed about frameworks and the use of frameworks in Enterprise Architecture. Being an Open Group event, obviously a lot of it was around TOGAF®, an Open Group standard, and with John Zachman as part of it, naturally the Zachman Framework too. There was some discussion about looking into how the two can play more naturally together. There was less discussion about the career development aspect, by and large because, when these people started out in their careers, they weren’t Enterprise Architects because it wasn’t called that. They got into it along the way, rather than starting out with a goal to be an Enterprise Architect, so there wasn’t as much about the career aspect, but I do think that will be a big part of what will come out in Madrid.

I think where there are overlaps is the area around the value proposition for Enterprise Architecture inside an organization. That’s something that experienced architects and less experienced architects will face on a day-to-day basis in an organization that hasn’t yet bought into an Enterprise Architecture approach. The common theme is, how do you justify taking Enterprise Architecture inside an organization in a way that delivers value quickly enough for people to see that something is happening, so that it’s not just a multi-year project that will eventually produce something nicely tied up in a bow that may or may not do what they wanted because, chances are, the business need has moved on in that time anyway? It’s being able to show that Enterprise Architecture can deliver things in the short term as well as the long term. I think that’s something that’s common to architects at all stages of their careers.

You’re also doing a session on creating a personal brand in Madrid. Why is branding important for Enterprise Architects these days?

I have to say, it’s a lot of fun doing that presentation. It really is. Why is it important? I think at a time, not just for Enterprise Architects but for any of us, when our identities are out there so much now in social media—whatever it may be, Facebook, LinkedIn, other social media profiles— people get a perception of you, many times never having met you. It is important to control that perception. If you don’t do it, someone else may get a perception that you may or may not want from it. It’s really the idea of taking charge of your own brand and image and how you are perceived, what values you have, what you want to be known for, the type of organization you want to work in, the types of projects that you want to be involved in. Not all of those things happen at once, they don’t all land on a plate, but by taking more control of it in a planned way, there’s more chance of you realizing some of those goals than if you don’t. That’s really the essence of it.

The timing and particular relevance to Enterprise Architects is that, more and more, as organizations do see value in Enterprise Architecture, Enterprise Architects are getting a seat at the top table. They’re being listened to by senior management, and are sometimes playing an active role in strategy and important decisions being made in organizations. So, now more than ever, how Enterprise Architects are being perceived is important. They need to be seen to be the people that can bring together the business people and IT, who have the soft skills, being able to talk to and understand enough about different aspects of the business to get their job done. They don’t have to be experts in everything, of course, but they have to have a good enough understanding to have meaningful discussions with the people with whom they’re working. That’s why it’s crucial at this time that those who are Enterprise Architects, as we build the profession, are perceived in a positive way, and the value of that is highlighted and consistently delivered.

A lot of technologists don’t always feel comfortable with overtly marketing themselves—how do you help them get over the perception that having a personal brand is just “marketing speak?”

That’s something that we go through in the presentation. There are 11 steps that we recommend following. This goes back to an old Tom Peters article written years ago titled “The Brand Called You”. Many of us aren’t comfortable doing this and it’s hard, but it is important to force yourself to go through it so your name, your work and what you stand for are what you want them to be.

Some of the suggestions are to think of the things that you’re good at and what your strengths are, and to test those out with people that you know and trust. You can have some fun with it along the way. Think about what those strengths are, and think about what it is that you offer that differentiates you.

A big part of the personal brand concept is to help individuals differentiate themselves from everyone else in the workplace, and that’s a message that seems to resonate very well. How do you stand out from lots of other people that claim to have the same skills and similar experience to yourself? Think of what those strengths are, pick a few things that you want to be known for. Maybe it’s that you never miss a deadline, you’re great at summarizing meetings or you’re a great facilitator—I’m not suggesting you focus on one—but what combination of things do you want to be known for? Once you know what that is—one of the examples I use is, if you want to be known for being punctual, which is an important thing, make sure you are—set the alarm earlier, make sure you show up for meetings on time, then that’s one of the things you’re known for. All these things help build the personal brand, and when people think of you, they think of how they can rely on you, and think of the attributes and experience that they can get from working with you.

That’s really what it comes down to—as human beings, we all prefer to work with people we can trust. Ideally people that we like, but certainly people that we can trust and rely on. You’re far more likely to get the right doors opening for you and more widely if you’ve built a brand that you maintain, and people you work with know what you stand for and know they can rely on you. It’s going to work in your favor and help you get the opportunities that you hope for in your career.

But there’s a big fun aspect to the presentation, as well. I start the presentation looking at branding and the types of brands that people know what they stand for. I think it has scope for workshop-type sessions, as well, where people follow some of the steps and start developing their personal brands. Feedback on this presentation has been very positive because it stands out as a non-technical presentation, and people can see that they can use it privately to further their careers, or to use it with their teams within their organizations. People really seem to resonate with it.

As CEO of the Association of Enterprise Architects, what are you seeing in terms of career opportunities available for architects right now?

We are seeing a lot of demand for Enterprise Architects all over the place, not just in the U.S., but globally. One of the things we have on the AEA website is a job board and career center, and we’ve been trying to increase the number of jobs posted there and make it a useful place for our members to go when they’re considering another position, and a good place for recruiters to promote their openings. We are growing that and it’s being populated more and more. Generally, I hear that there is a lot of demand for Enterprise Architects, and the demand outweighs the supply at the moment. It’s a good time to get into the profession. It’s a good time to be making the most of the demand that’s out there in the market right now. To back that up, the latest Foote Report showed that the Open CA and TOGAF certifications were among the most valuable certifications in the IT industry. I think there is demand for certified architects, and what we’re doing in the AEA is building the professional body to the point, ultimately, where people not only want to be AEA members, but effectively need to be AEA members in order to be taken seriously in Enterprise Architecture.

We’re also seeing an increasing number of inquiries from organizations that are recruiting Enterprise Architects to check that the applicant is indeed an AEA member. So clearly that tells us that people are putting it down on their resumes as something that differentiates them. It’s good that we get these inquiries, because it shows that there is perceived value in membership.

What’s new with the AEA? What’s happening within the organization right now?

Other things we have going on are a couple of webinar series running in parallel. One is a series of 13 webinars led by Jason Uppal of QRS Systems. He’s giving one a month for 13 months—we’ve done seven or eight already. The other is a series of 10 webinars given by Chris Armstrong of the Armstrong Process Group. What they have in common is that they are tutorials, they’re educational webinars and learning opportunities, and we’re seeing the number of attendees for those increasing. It’s a value of being an AEA member to be able to participate in these webinars. Our focus is on giving more value to the members, and those are a couple of examples of how we’re doing that.

The other thing that we have introduced is a series of blogs on ‘What Enterprise Architects Need to Know About…’ We’ve covered a couple of topics like Internet of Things and Big Data—we have more planned in that series. That’s an attempt to get people thinking about the changing environment in which we’re all operating now and the technologies coming down the pike at us, and what it means for Enterprise Architects. It’s not that architects have to be an expert in everything, but they do need to know about them because they will eventually change how organizations put together their architectures.

Steve Nunn is the VP and Chief Operating Officer of The Open Group. Steve’s primary responsibility for The Open Group is to ensure the legal protection of its assets, particularly its intellectual property. This involves the development, maintenance and policing of the trademark portfolio of The Open Group, including the registered trademarks behind the Open Brand and, therefore, the various Open Group certification programs, including TOGAF®, Open CA, Open CITS, and UNIX® system certification. The licensing, protection and promotion of TOGAF also fall within his remit.

In addition, Steve is CEO of the Association of Enterprise Architects (AEA) and is focused on creating and developing the definitive professional association for enterprise architects around the globe. To achieve this, Steve is dedicated to advancing professional excellence amongst AEA’s 20,000+ members, whilst raising the status of the profession as a whole.

Steve is a lawyer by training, has an LL.B. (Hons) in Law with French, and retains a current legal practising certificate.

Join the conversation @theopengroup #ogchat #ogMAD


Comments Off on The Open Group Madrid Summit 2015 – An Interview with Steve Nunn

Filed under Boundaryless Information Flow™, Brand Marketing, Enterprise Architecture, Internet of Things, Open CA, Standards, TOGAF, TOGAF®, Uncategorized