Monthly Archives: July 2011

How strategic planning relates to Enterprise Architecture

By Serge Thorn, Architecting the Enterprise

TOGAF® often refers to Strategic Planning without specifying in detail what it consists of. This article explains why Strategic Planning and Enterprise Architecture fit together so well.

Strategic Planning means different things to different people. The one constant is its reference to Business Planning, which usually occurs annually in most companies. One of the activities in this exercise is the consideration of the portfolio of projects for the following financial year, also referred to as Project Portfolio Management (PPM). This activity may also be triggered when a company modifies its strategy or the priority of its current developments.

Drivers for Strategic Planning may include:

  • New products or services
  • A need for greater business flexibility and agility
  • Mergers and acquisitions
  • Company reorganization
  • Consolidation of manufacturing plants, lines of business, partners, information systems
  • Cost reduction
  • Risk mitigation
  • Business process management initiatives
  • Business process outsourcing
  • Facilities outsourcing or insourcing
  • Off-shoring

Strategic Planning as a process may include activities such as:

1. The definition of the mission and objectives of the enterprise

Most companies have a mission statement depicting the business vision, the purpose and value of the company and the visionary goals to address future opportunities. With that business vision, the board of the company defines the strategic (e.g. reputation, market share) and financial objectives (e.g. earnings growth, sales targets).

2. Environmental analysis

The environmental analysis may include the following activities:

  • Internal analysis of the enterprise
  • Analysis of the enterprise’s industry
  • A PEST Analysis (Political, Economic, Social, and Technological factors). It is very important that an organization considers its environment before beginning the marketing process. In fact, environmental analysis should be continuous and feed all aspects of planning, identifying the strengths and weaknesses as well as the opportunities and threats (SWOT)

3. Strategy definition

Based on the previous activities, the enterprise matches strengths to opportunities, addresses its weaknesses and external threats, and elaborates a strategic plan. This plan may then be refined at different levels in the enterprise. Below is a diagram explaining the various levels of plans.

To build that strategy, an Enterprise Strategy Model may be used to represent the enterprise’s situation accurately and realistically for both past and future views. This can be based on Business Motivation Modeling (BMM), which supports developing, communicating, and managing a Strategic Plan. Another possibility is the Business Model Canvas, which allows the company to develop and sketch out new or existing business models (refer to the work of Alexander Osterwalder).
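As an illustration only (the data structure and the example values below are hypothetical, not drawn from the article), the nine building blocks of Osterwalder’s Business Model Canvas can be captured in a simple model that an architecture team could use to sketch and compare business models:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessModelCanvas:
    """Osterwalder's nine building blocks, each held as a list of short statements."""
    customer_segments: list = field(default_factory=list)
    value_propositions: list = field(default_factory=list)
    channels: list = field(default_factory=list)
    customer_relationships: list = field(default_factory=list)
    revenue_streams: list = field(default_factory=list)
    key_resources: list = field(default_factory=list)
    key_activities: list = field(default_factory=list)
    key_partnerships: list = field(default_factory=list)
    cost_structure: list = field(default_factory=list)

# A partially filled canvas for a hypothetical business line.
canvas = BusinessModelCanvas(
    customer_segments=["Private banking clients"],
    value_propositions=["Integrated wealth reporting"],
)
print(len(BusinessModelCanvas.__dataclass_fields__))  # → 9
```

Keeping each block as plain text statements makes it easy to diff two candidate business models side by side, which is all the canvas technique really requires.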

The model’s analyses should consider important strategic variables such as customer demand expectations, pricing and elasticity, competitor behavior, emissions regulations, and future input and labor costs.

These variables are then mapped to the most important business processes (capacity, business capabilities, constraints) and to economic performance to determine the best decision for each scenario. The strategic model can be based on business processes such as customer, operation, or background processes. Scenarios can then be segmented and analyzed by customer, product portfolio, network redesign, long-term recruiting and capacity, and mergers and acquisitions to describe Segment Business Plans.

4. Strategy Implementation

The selected strategy is implemented by means of programs, projects, budgets, processes and procedures. The way in which the strategy is implemented can have a significant impact on whether it will be successful, and this is where Enterprise Architecture may have a significant role to play. Often, the people formulating the strategy are different from those implementing it. The way the strategy is communicated is a key element of the success and should be clearly explained to the different layers of management including the Enterprise Architecture team.

To support that strategy, different levels of architecture can be considered, such as strategic, segment, or capability architectures.

This diagram below illustrates different examples of new business capabilities linked to a Strategic Architecture.

It also illustrates how Strategic Architecture supports the enterprise’s vision and the strategic plan communicated to an Enterprise Architecture team.

Going to the next level allows better detailing of the various deliverables and the associated new business capabilities. The segment architecture maps perfectly to the Segment Business Plan.

5. Evaluation and monitoring

The implementation of the strategy must be monitored and adjustments made as required.

Evaluation and monitoring consists of the following steps:

  • Definition of KPIs, measurements, and metrics
  • Definition of target values for these KPIs
  • Performance of the measurements
  • Comparison of measured results to the pre-defined targets
  • Making the necessary changes
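The monitoring steps above amount to a simple compare-and-correct loop. As a sketch only (the KPI names, targets, and measured values below are hypothetical illustrations, not figures from the article), the comparison step might look like this:

```python
# Hypothetical KPI records: targets are defined up front, measurements taken later.
kpis = [
    {"name": "Projects aligned to strategic plan (%)", "target": 80, "measured": 65},
    {"name": "Time-to-market for new capabilities (weeks)", "target": 12, "measured": 10},
]

def evaluate(kpi, higher_is_better=True):
    """Compare a measured KPI value to its pre-defined target value."""
    if higher_is_better:
        ok = kpi["measured"] >= kpi["target"]
    else:
        ok = kpi["measured"] <= kpi["target"]
    return "on target" if ok else "needs corrective action"

for kpi in kpis:
    # For a duration KPI (weeks), lower is better; for alignment (%), higher is better.
    higher = "weeks" not in kpi["name"]
    print(f'{kpi["name"]}: {evaluate(kpi, higher)}')
```

The point of the sketch is the last step of the list: each comparison result either confirms the current course or flags where an adjustment to the plan, or to the architecture supporting it, is required.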

Strategic Planning and Enterprise Architecture should ensure that information systems do not operate in a vacuum. At its core, TOGAF® 9 builds on the strong set of guidelines promoted in the previous version and surrounds them with guidance on how to adopt and apply TOGAF® to the enterprise for Strategic Planning initiatives. The ADM diagram below clearly indicates the integration between the two processes.

The company’s mission and vision must be communicated to the Enterprise Architecture team, which then maps Business Capabilities to the different Business Plan levels.

Many Enterprise Architecture projects are focused at lower levels but should be aligned with Strategic Corporate Planning. Enterprise Architecture is a critical discipline and one of the Strategic Planning mechanisms for structuring an enterprise. TOGAF® 9 is without doubt an effective framework for working with stakeholders through Strategic Planning and architecture work, especially for organizations that are actively transforming themselves.

This article has previously appeared in Serge Thorn’s personal blog and appears here with his permission.

Serge Thorn is CIO of Architecting the Enterprise. He has worked in the IT industry for over 25 years in a variety of roles, including Development and Systems Design, Project Management, Business Analysis, IT Operations, IT Management, IT Strategy, Research and Innovation, IT Governance, Architecture, and Service Management (ITIL). He has more than 20 years of experience in Banking and Finance and 5 years of experience in the Pharmaceuticals industry. Among various roles, he has been responsible for the Architecture team in an international bank, where he gained wide experience in the deployment and management of information systems in Private Banking and Wealth Management, and also in IT architecture domains such as the Internet, dealing rooms, inter-banking networks, and Middle and Back-office. He then took charge of IT Research and Innovation (a function which consisted of motivating and encouraging creativity and innovation in the IT units), with a mission to help deploy a TOGAF-based Enterprise Architecture, taking into account the company’s IT Governance Framework. He also chaired the worldwide Enterprise Architecture Governance program, integrating the IT Innovation initiative in order to identify new business capabilities that were creating and sustaining competitive advantage for his organization. Serge has been a regular speaker at various conferences, including those by The Open Group. His topics have included “IT Service Management and Enterprise Architecture”, “IT Governance”, “SOA and Service Management”, and “Innovation”. Serge has also written several articles and whitepapers for different magazines (Pharma Asia, Open Source Magazine). He is the Chairman of the itSMF (IT Service Management Forum) Swiss chapter and is based in Geneva, Switzerland.


Filed under Enterprise Architecture, TOGAF®

PODCAST: Industry moves to fill gap for building trusted supply chain technology accreditation

By Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-IT Industry Looks to Open Trusted Technology Forum to Help Secure Supply Chains That Support Technology Products

The following is the transcript of a sponsored podcast panel discussion on how the OTTF is developing an accreditation process for trusted technology, in conjunction with The Open Group Conference, Austin 2011.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion in conjunction with The Open Group Conference in Austin, Texas, the week of July 18, 2011.

We’ve assembled a distinguished panel to update us on The Open Group Trusted Technology Forum, also known as the OTTF, and an accreditation process to help technology acquirers and buyers safely conduct global procurement and supply chain commerce. We’ll examine how the security risk for many companies and organizations has only grown, even as these companies form essential partnerships and integral supplier relationships. So, how can all the players in a technology ecosystem gain assurances that the other participants are adhering to best practices and taking the proper precautions?

Here to help us better understand how established standard best practices and an associated accreditation approach can help make supply chains stronger and safer is our panel. We’re here with Dave Lounsbury, the Chief Technical Officer at The Open Group. Welcome back, Dave.

Dave Lounsbury: Hello Dana. How are you?

Gardner: Great. We are also here with Steve Lipner, Senior Director of Security Engineering Strategy in the Trustworthy Computing Security Group at Microsoft. Welcome back, Steve.

Steve Lipner: Hi, Dana. Glad to be here.

Gardner: We’re here also with Joshua Brickman, Director of the Federal Certification Program Office at CA Technologies. Welcome, Joshua.

Joshua Brickman: Thanks for having me.

Gardner: And, we’re here too with Andras Szakal. He’s the Vice President and CTO of IBM’s Federal Software Group. Welcome back, Andras.

Andras Szakal: Thank you very much, Dana. I appreciate it.

Gardner: Dave, let’s start with you. We’ve heard so much lately about “hacktivism,” break-ins, and people being compromised. These are some very prominent big companies, both public and private. How important is it that we start to engage more with things like the OTTF?

No backup plan

Lounsbury: Dana, a great quote coming out of this week’s conference was that we have moved the entire world’s economy to being dependent on the Internet, without a backup plan. Anyone who looks at the world economy will see, not only are we dependent on it for exchange of value in many cases, but even information about how our daily lives are run, traffic, health information, and things like that. It’s becoming increasingly vitally important that we understand all the aspects of what it means to have trust in the chain of components that deliver that connectivity to us, not just as technologists, but as people who live in the world.

Gardner: Steve Lipner, your thoughts on how this problem seems to be only getting worse?

Lipner: Well, the attackers are becoming more determined and more visible across the Internet ecosystem. Vendors have stepped up to improve the security of their product offerings, but customers are concerned. A lot of what we’re doing in The Open Group and in the OTTF is about trying to give them additional confidence of what vendors are doing, as well as inform vendors what they should be doing.

Gardner: Joshua Brickman, this is obviously a big topic and a very large and complex area. From your perspective, what is it that the OTTF is good at? What is it focused on? What should we be looking to it for in terms of benefit in this overall security issue?

Brickman: One of the things that I really like about this group is that you have all of the leaders, everybody who is important in this space, working together with one common goal. Today, we had a discussion where one of the things we were thinking about is, whether there’s a 100 percent fail-safe solution to cyber? And there really isn’t. There is just a bar that you can set, and the question is how much do you want to make the attackers spend, before they can get over that bar? What we’re going to try to do is establish that level, and working together, I feel very encouraged that we are getting there, so far.

Gardner: Andras, we are not just trying to set the bar, but we’re also trying to enforce, or at least have clarity into, what other players in an ecosystem are doing. So that accreditation process seems to be essential.

Szakal: We’re going to develop a standard, or are in the process of developing a specification and ultimately an accreditation program, that will validate suppliers and providers against that standard. It’s focused on building trust into a technology provider organization through this accreditation program, facilitated through either one of several different delivery mechanisms that we are working on. We’re looking for this to become a global program, with global partners, as we move forward.

Gardner: It seems as if almost anyone is a potential target, and when someone decides to target you, you do seem to suffer. We’ve seen things with Booz Allen, RSA, and consumer organizations like Sony. Is this something that almost everyone needs to be more focused on? Are we at the point now where there is no such thing as turning back, Dave Lounsbury?

Global effort

Lounsbury: I think there is, and we have talked about this before. Any electronic or information system now is really built on components and software that are delivered from all around the globe. We have software that’s developed in one continent, hardware that’s developed in another, integrated in a third, and used globally. So, we really do need to have the kinds of global standards and engagement that Andras has referred to, so that there is that one bar for all to clear in order to be considered as a provider of trusted components.

Gardner: As we’ve seen, there is a weak link in any chain, and the hackers or the cyber criminals or the state sponsored organizations will look for those weak links. That’s really where we need to focus.

Lounsbury: I would agree with that. In fact, some of the other outcomes of this week’s conference have been the change in these attacks, from just nuisance attacks, to ones that are focused on monetization of cyber crimes and exfiltration of data. So the spectrum of threats is increasing a lot. More sophisticated attackers are looking for narrower and narrower attack vectors each time. So we really do need to look across the spectrum of how this IT technology gets produced in order to address it.

Gardner: Steve Lipner, it certainly seems that the technology supply chain is essential. If there is weakness there, then it’s difficult for the people who deploy those technologies to cover their bases. It seems that focusing on the technology providers, the ecosystems that support them, is a really necessary first step to taking this to a larger, either public or private, buyer side value.

Lipner: The tagline we have used for The Open Group TTF is “Build with Integrity, Buy with Confidence.” We certainly understand that customers want to have confidence in the hardware and software of the IT products that they buy. We believe that it’s up to the suppliers, working together with other members of the IT community, to identify best practices and then articulate them, so that organizations up and down the supply chain will know what they ought to be doing to ensure that customer confidence.

Gardner: Let’s take a step back and get a little bit of a sense of where this process that you are all involved with is. I know you’re all on working groups and in other ways involved in moving this forward, but it’s been about six months now since the OTTF was developed initially, and there was a white paper to explain that. Perhaps, one of you will volunteer to give us sort of a state of affairs of where things are. Then, we’d also like to hear an update about what’s been going on here in Austin. Anyone?

Szakal: Well, as the chair, I have the responsibility of keeping track of our milestones, so I’ll take that one. We completed the white paper earlier this year, in the first quarter. The white paper was visionary in nature, and it was obviously designed to help our constituents understand the goals of the OTTF. However, in order to actually make this a normative specification and design a program, around which you would have conformance and be able to measure suppliers’ conformity to that specification, we have to develop a specification with normative language.

First draft

We’re finishing that up as we speak and we are going to have a first draft here within the next month. We’re looking to have that entire specification go through company review in the fourth quarter of this year.

Simultaneously, we’ll be working on the accreditation policy and conformance criteria and evidence requirements necessary to actually have an accreditation program, while continuing to liaise with other evaluation schemes that are interested in partnering with us. In a global international environment, that’s very important, because more than one of these regimes exists, and we will have to coexist and partner with them. Over the next year, we’ll have completed the accreditation program and have begun testing of the process, probably having to make some adjustments along the way. We’re looking at sometime within the first half of 2012 for having a completed program to begin ramping up.

Gardner: Is there an update on the public sector’s, or in the U.S., the federal government’s, role in this? Are they active? Are they leading? How would you characterize the public role or where you would like to see that go?

Szakal: The Forum itself continues to liaise with the government and all of our constituents. As you know, we have several government members that are part of the TTF and they are just as important as any of the other members. We continue to provide updates to many of the governments that we are working with globally to ensure they understand the goals of the OTTF and how they can provide value synergistically with what we are doing, as we would to them.

Gardner: I’ll throw this back out to the panel. How about the activities this week at the conference? What progress or insights can you point to from that?

Brickman: We’ve been meeting for the first couple of days and we have made tremendous progress on wrapping up our framework and getting it ready for the first review. We’ve also been meeting with several government officials. I can’t say who they are, but what’s been good about it is that they’re very positive on the work that we’re doing, they support what we are doing and want to continue this discussion. It’s very much a partnership, and we do feel that it’s not just an industry-led project, since we have participation from folks who could very much be the consumers of this initiative.

Gardner: Clearly, there are a lot of stakeholders around the world, across both the public and private domains. Dave Lounsbury, what’s possible? What would we gain if this is done correctly? How would we tangibly look to improvements? I know that’s hard with security. It’s hard to point out what doesn’t happen, which is usually the result of proper planning, but how would you characterize the value of doing this all correctly say a year or two from now?

Awareness of security

Lounsbury: One of the trends we’ll see is that people are increasingly going to be making decisions about what technology to produce and who to partner with, based on more awareness of security.

A very clear possible outcome is that there will be a set of simple guidelines and ones that can be implemented by a broad spectrum of vendors, where a consumer can look and say, “These folks have followed good practices. They have baked secure engineering, secure design, and secure supply chain processes into their thing, and therefore I am more comfortable in dealing with them as a partner.”

Of course, what that means is that, not only do you end up with more confidence in your supply chain and the components for getting to that supply chain, but also it takes a little bit of work off your plate. You don’t have to invest as much in evaluating your vendors, because you can use commonly available and widely understood best practices.

From the vendor perspective, it’s helpful because we’re already seeing places where a company, like a financial services company, will go to a vendor and say, “We need to evaluate you. Here’s our checklist.” Of course, the vendor would have to deal with many different checklists in order to close the business, and this will give them some common starting point.

Of course, everybody is going to customize and build on top of what that minimum bar is, depending on what kind of business they’re in. But at least it gives everybody a common starting point, a common reference point, some common vocabulary for how they are going to talk about how they do those assessments and make those purchasing decisions.

Gardner: Steve Lipner, do you think that this is going to find its way into a lot of RFPs, beginning a sales process, looking to have a major checkbox around these issues? Is that sort of how you see this unfolding?

Lipner: If we achieve the sort of success that we are aiming for and anticipating, you’ll see requirements for the OTTF, not only in RFPs, but also potentially in government policy documents around the world, basically aiming to increase the trust of broad collections of products that countries and companies use.

Gardner: Joshua Brickman, I have to imagine that this is a living type of an activity that you never really finish. There’s always something new to be done, a type of threat that’s evolving that needs to be reacted to. Would the TTF over time take on a larger role? Do you see it expanding into larger set of requirements, even as it adjusts to the contemporary landscape?

Brickman: That’s possible. I think that we are going to try to get something achievable out there in a timeframe that’s useful and see what sticks. One of the things that will happen is that as companies start to go out and test this, as with any other standard, the 1.0 standard will evolve to something that will become more germane, and as Steve said, will hopefully be adopted worldwide.

Agile and useful

It’s absolutely possible. It could grow. I don’t think anybody wants it to become a behemoth. We want it to be agile, useful, and certainly something readable and achievable for companies that are not multinational billion dollar companies, but also companies that are just out there trying to sell their piece of the pie into the space. That’s ultimately the goal of all of us, to make sure that this is a reasonable achievement.

Lounsbury: Dana, I’d like to expand on what Joshua just said. This is another thing that has come out of our meetings this week. We’ve heard a number of times that governments, of course, feel the need to protect their infrastructure and their economies, but also have a realization that because of the rapid evolution of technology and the rapid evolution of security threats that it’s hard for them to keep up. It’s not really the right vehicle.

There really is a strong preference. The U.S. strategy on this is to let industry take the lead. One of the reasons for that is the fact that industry can evolve, in fact must evolve, at the pace of the commercial marketplace. Otherwise, they wouldn’t be in business.

So, we really do want to get that first stake in the ground and get this working, as Joshua said. But there is some expectation that, over time, the industry will drive the evolution of security practices and security policies, like the ones OTTF is developing at the pace of commercial market, so that governments won’t have to do that kind of regulation which may not keep up.

Gardner: Andras, any thoughts from your perspective on this ability to keep up in terms of market forces? How do you see the dynamic nature of this being able to be proactive instead of reactive?

Szakal: One of our goals is to ensure the viability of the specification itself by updating the best practices periodically, potentially yearly, and by including new techniques and the application of potentially new technologies, so that providers are implementing the best practices for development engineering, secure engineering, and supply chain integrity. It’s going to be very important for us to continue to evolve these best practices over a period of time and not allow them to fall into a state of static disrepair.

I’m very enthusiastic, because many of the members are very much in agreement that this is something that needs to be happening in order to actually raise the bar on the industry, as we move forward, and help the entire industry adopt the practices and then move forward in our journey to secure our critical infrastructure.

Gardner: Given that this has the potential of being a fairly rapidly evolving standard that may start really appearing in RFPs and be impactful for real world business success, how should enterprises get involved from the buy side? How should suppliers get involved from the sell side, given that this is seemingly a market driven, private enterprise driven activity?

I’ll throw this out to the crowd. What’s the responsibility from the buyers and the sellers to keep this active and to keep themselves up-to-date?

Lounsbury: Let me take the first stab at this. The reason we’ve been able to make the progress we have is that we’ve got the expertise in security from all of these major corporations and government agencies participating in the TTF. The best way to maintain that currency and maintain that drive is for people who have a problem, if you’re on the buy side or expertise from either side, to come in and participate.

Hands-on awareness

You have got the hands-on awareness of the market, and bringing that in and adding that knowledge of what is needed to the specification and helping move its evolution along is absolutely the best thing to do.

That’s our steady state, and of course the way to get started on that is to go and look at the materials. The white paper is out there. I expect we will be doing snapshots of early versions of this that would be available, so people can take a look at those. Or, come to an Open Group Conference and learn about what we are doing.

Gardner: Anyone else have a reaction to that? I’m curious. Given that we are looking to the private sector and market forces to be the drivers of this, will they also be the drivers in terms of enforcement? Is this voluntary? One would hope that market forces reward those who seek accreditation and demonstrate adhesion to the standard, and that those who don’t would suffer. Or is there a potential for more teeth and more enforcement? Again, I’ll throw this out to the panel at large.

Szakal: As vendors, we would like to see minimal regulation, and that’s simply the nature of the beast. In order for us to conduct our business and lower the cost of market entry, I think that’s important.

I think it’s important that we provide leadership within the industry to ensure that we’re following the best practices to ensure the integrity of the products that we provide. It’s through that industry leadership that we will avoid potential damaging regulations across different regional environments.

We certainly wouldn’t want to see different regulations pop up in different places globally. It makes for a very messy technology insertion opportunity for us. We’re hoping that by actually getting engaged and providing some self-regulation, we won’t see additional government or international regulation.

Lipner: One of the things that my experience has taught me is that customers are very aware these days of security, product integrity, and the importance of suppliers paying attention to those issues. Having a robust program like the TTF and the certifications that it envisions will give customers confidence, and they will pay attention to that. That will change their behavior in the market even without formal regulations.

Gardner: Joshua Brickman, any thoughts on the self-regulation benefits? If that doesn’t work, is it self-correcting? Is there a natural correction whereby, if this doesn’t work at first, a couple of highly publicized incidents at corporations that suffer for not regulating themselves properly would right that ship, so to speak?

Brickman: First of all, industry setting the standard is an idea that has been thrown around a while, and I think that it’s great to see us finally doing it in this area, because we know our stuff the best.

But as far as an incident indicating that it’s not working, I don’t think so. We’re going to try to set up a standard, whereby we’re providing public information about what our products do and what we do as far as best practices. At the end of the day the acquiring agency, or whatever, is going to have to make decisions, and they’re going to make intelligent decisions, based upon looking at folks that choose to go through this and folks that choose not to go through it.

It will continue

The bad news that continues to come out is going to continue to happen. The only thing that they’ll be able to do is to look to the companies that are the experts in this to try to help them with that, and they are going to get some of that with the companies that go through these evaluations. There’s no question about it.

At the end of the day, this accreditation program is going to shake out the products and companies that really do follow best practices for secure engineering and supply chain best practices.

Gardner: What should we expect next? As we heard, there has been a lot of activity here in Austin at the conference. We’ve got that white paper. We’re working towards more mature definitions and approaching certification and accreditation types of activities. What’s next? What milestone should we look to? Andras, this is for you.

Szakal: Around November, we’re going to be going through company review of the specification and we’ll be publishing that in the fourth quarter.

We’ll also be liaising with our government and international partners during that time and we’ll also be looking forward to several upcoming conferences within The Open Group where we conduct those activities. We’re going to solicit some of our partners to be speaking during those events on our behalf.

As we move into 2012, we’ll be working on the accreditation program, specifically the conformance criteria and the accreditation policy, and liaising again with some of our international partners on this particular issue. Hopefully we will, if all things go well and according to plan, come out of 2012 with a viable program.

Gardner: Dave Lounsbury, any further thoughts about next steps, what people should be looking for, or even where they should go for more information?

Lounsbury: Andras has covered it well. Of course, you can always learn more by going to www.opengroup.org and looking on our website for information about the OTTF. You can find drafts of all the documents that have been made public so far, and there will be our white paper and, of course, more information about how to become involved.

Gardner: Very good. We’ve been getting an update about The Open Group Trusted Technology Forum, OTTF, and seeing how this can have a major impact from a private sector perspective and perhaps head off issues about lack of trust and lack of clarity in a complex evolving technology ecosystem environment.

I’d like to thank our guests. We’ve been joined by Dave Lounsbury, Chief Technical Officer at The Open Group. Thank you, sir.

Lounsbury: Thank you, Dana.

Gardner: Steve Lipner, the Senior Director of Security Engineering Strategy in the Trustworthy Computing Security Group at Microsoft. Thank you, Steve.

Lipner: Thanks, Dana.

Gardner: Joshua Brickman, who is the Director of the Federal Certification Program Office in CA Technologies, has also joined us. Thank you.

Brickman: I enjoyed it very much.

Gardner: And Andras Szakal, Vice President and CTO of IBM’s Federal Software Group. Thank you, sir.

Szakal: It’s my pleasure. Thank you very much, Dana.

Gardner: This discussion has come to you as a sponsored podcast in conjunction with The Open Group Conference in Austin, Texas. We are here the week of July 18, 2011. I want to thank our listeners as well. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Don’t forget to come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com.

Copyright The Open Group 2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


Filed under Cybersecurity, Supply chain risk

Google+, spiral galaxies and Louisa’s bright idea

By Stuart Boardman, Getronics

Even a social media lightweight like me could hardly avoid getting caught up in the Google+ hype. It got me thinking about the rate and unpredictability of change in the web world, and the effect on large enterprises of phenomena originating in the consumer and small business market.

The concept of the enterprise is experiencing a change – maybe a radical one. The role of technology is also changing (not for the first time). New business models are developing which, whilst not technological in nature, would never have been thought of without the technology developments of the last few years. Other business models, around for a bit longer and not technological in nature, are pushing technology in a different direction. Business models themselves are subject to increasingly frequent and not always predictable change. What does this mean for the practice of enterprise architecture?

Back to Google+. A few years ago, when Web 2.0 was the buzzword and everyone conveniently forgot that the web actually started out as a vehicle for user-generated content and collaboration (sorry, had to get that off my chest), there was quite a battery of social media providers all with their own specializations: Facebook, MySpace, LinkedIn, Plaxo, Flickr and a whole bunch of sites for gamers and metal fans, etc. In Holland, where I live, we had our own, very successful variant on Facebook. Had. In the period since then there’s been increasing consolidation with Facebook developing an astonishing hegemony. I’ll admit that I assumed that’s how it would stay until a new Zuckerberg came up with a totally new game changer. But now here comes Google with a new spin on a familiar story and they look set to chew a big chunk out of the market. Perhaps even the enterprise market.

What’s this have to do with enterprises? Well, the fact is that everyone in the enterprise is out there exchanging ideas via Twitter and LinkedIn and Facebook and Google+ (and whatever specialized sites they might use) and they’re even using those media to tell the rest of the enterprise that they published something internally – because otherwise no one will notice. And then there’s co-creation, which is becoming increasingly common – even in large enterprises. So like it or not, the enterprise is being irreversibly extended out into the blogosphere. And that means that the enterprise is far more exposed to the trends and rapid shifts in the world outside its own boundaries than it has ever been before.

In the meantime, a lot of other stuff has been changing for the enterprise. Extended Enterprise, the idea that an enterprise’s business processes (some of them) are performed by third parties, who themselves are part of a broad value network, is pretty much established fact for many large and medium-sized organizations.

And there are unexpected new business models emerging. Think about app stores. I can’t see inside Steve Jobs’ head but I suspect the app store was developed to support the iPhone – not the other way around. Just like iTunes was developed to support the iPod. But now everyone has app stores (even if Apple doesn’t want them to use the name). The end result of all this has been to create a whole new market, where new entrepreneurs can develop low-cost software and sell it in bulk across multiple platforms and where those platforms could hardly exist without the app developers. I’m even using an iPhone app (also available on Android) to drive my domestic hi-fi system (from a very respectable English high-end designer – not some uber-nerd). The app strengthens the business case for the equipment and makes money for the developer. The app didn’t come with the equipment; I bought it at the app store. App stores themselves are new value propositions for their owners (Apple, etc.).

In some ways we could regard this as a commercial instantiation of the old Virtual Enterprise idea – an “enterprise” consisting of a loosely coupled, shifting alliance of unrelated legal entities. I like this recent quote from Verna Allee (@vernaallee): “Business models often assume the world revolves around our organization when we really revolve in spiral galaxy ecosystems”. Louisa Leontiades (@MoneyDecisions) is launching a web-based, social media driven consultancy, which provides a sort of app store where independent experts can sell tools and frameworks (and yes, get consultancy deals too). Brilliant. And of course all this represents a very scattered field of players, business models and solutions.

How are these developments reflected in Enterprise Architecture? In particular, what is the effect on architecture vision and the idea of a target state? I came across another interesting discussion recently. Robert Phipps (@robert_phipps) suggested in a discussion with Tom Graves (@tetradian) that an enterprise consists of many vectors, each with its own direction and velocity and each potentially colliding with and therefore affecting the direction and velocity of the others. Sounds pretty abstract, but if you accept the metaphor you can see that the target state is going to be different depending on how the various collisions work out. In a “traditional” enterprise, the power relationships between the various vectors are pretty stable and the influence of external factors limited to macro-economic effects. The metaphor is still valid but the scale of the problem much smaller (less entropy). If what I wrote above is correct, there aren’t too many “traditional” enterprises these days. Tom took the metaphor a bit further and made reference to quantum theory. That’s also interesting, because it focuses on a probabilistic situation. Architecting for uncertainty. Welcome to the real world.

That doesn’t mean there is no value in a target. You have to have some idea what you want to achieve based on what you know now. It just doesn’t need to be too prescriptive. Or, put another way, it must not be too sensitive to unpredictability. Everything (not just the technology) is likely to have changed before you get there. It certainly increases the relative importance of the first steps on the road to that target. The fewer particle/vector collisions take place within one step, the better the chance of achieving something useful. After each step we re-evaluate both target and roadmap. Iterate. Agile EA. And guess what? This is what we’re supposed to do anyway – design for change, constant delivery of value. No “wait a year and we’ll have something for you”. So if we’ve not been doing that, we’ve not been doing what the enterprise needed from us. All that’s changed is that if we don’t do it, we will become increasingly irrelevant.

Stuart Boardman is a Senior Business Consultant with Getronics Consulting where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity.


Filed under Enterprise Architecture

Twtpoll results from The Open Group Conference, Austin

The Open Group set up two informal Twitter polls this week during The Open Group Conference, Austin. If you wondered about the results, or just want to see what our Twitter followers think about some topline issues in the industry in very simple terms, see our twtpoll.com results below.

On Day One of the Conference, when the focus of the discussions was on Enterprise Architecture, we polled our Twitter followers about the profession of EA: Do you think we will see a shortage of enterprise architects within the next decade? Why or why not?

The results were split right down the middle.  A sampling of responses:

  • “Yes, if you mean good enterprise architects. No, if you are just referring to those who take the training but have no clue.”
  • “Yes, retirement of Boomers; not enough professionalization.”
  • “Yes, we probably will. EA is becoming more and more important because of fast-changing economies which request fast company change.”
  • “No: budgets, not a priority.”
  • “No. Over just one year, I can see the significant increase of the number of people who are talking EA and realizing the benefits of EA practices.”
  • “No, a majority of companies will still be focusing on short-term improvement because of ongoing current economic status, etc. EA is not a priority.”

On Day Two, while we focused on security, we queried our Twitter followers about data security protection: What type of data security do you think provides the most comprehensive protection of PII? Again, the results were split evenly into thirds.

What do you think of our informal poll results? Do you agree? Disagree? And why?

And let us know if you have thoughts on this one: Do you think SOA is essential for Cloud implementation?

Want some survey results you can really sink your teeth into? View the results of The Open Group’s State of the Industry Cloud Survey. Download the slide deck from The Open Group Bookstore, or read a previous blog post about it.

The Open Group Conference, Austin is now in member meetings. Join us in Taipei or San Francisco for our next Conferences! Hear best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.


Filed under Cloud/SOA, Cybersecurity, Enterprise Architecture

Improve Data Quality and Enable Semantic Interoperability by Adopting the UDEF

By Ron Schuldt, UDEF-IT, LLC

For many years I have been promoting UDEF as an enabler for semantic interoperability. The problem with being an early adopter of UDEF, where the benefit is semantic interoperability, is that multiple systems need to adopt UDEF before you can begin to realize the benefits. The semantic interoperability benefit is realized by leveraging the UDEF ID, which is language- and application-independent.

Within the last seven or eight months, I realized that UDEF provides an immediate benefit – specifically, when you follow the six basic steps of mapping your data to the UDEF, you improve the clarity of the name associated with the data. The UDEF name adds substantial clarity when compared to the typically cryptic names assigned to fields within an application. The garbage-in, garbage-out problem is likely heavily affected by poor names assigned to the fields. UDEF is a means for correcting the poor names issue which gives the early adopters of UDEF an immediate benefit while enabling the system for interoperability.
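The naming benefit can be sketched in a few lines. Note that the cryptic field names, the UDEF-style IDs, and the descriptive names below are invented for illustration only; real UDEF IDs and names come from the published UDEF object and property trees:

```python
# Illustrative mapping from cryptic application field names to UDEF-style
# entries. The IDs and names here are invented for this example; real UDEF
# IDs are drawn from the published UDEF object and property trees.
field_map = {
    "CUST_NM": {"udef_id": "a.2_4.1", "udef_name": "Customer.Person_Name"},
    "ORD_DT":  {"udef_id": "q.8_7.6", "udef_name": "Order.Document_Date"},
}

def clarified_name(field):
    """Return the clearer UDEF-style name for a cryptic field, if mapped."""
    entry = field_map.get(field)
    return entry["udef_name"] if entry else field

print(clarified_name("CUST_NM"))  # Customer.Person_Name
print(clarified_name("XYZ_99"))   # unmapped fields pass through unchanged
```

Even before any second system adopts the mapping, the descriptive names make the data's meaning clear to humans; the language-independent IDs then become the hook for interoperability once other systems map to the same entries.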

Semantic interoperability is one of the topics being discussed at The Open Group Conference, Austin, currently underway this week.

Ron Schuldt is a Senior Partner of UDEF-IT, LLC. He has more than twenty years of experience with national and international data standards, covering the gamut from Electronic Data Interchange (EDI) to the National Information Exchange Model (NIEM). He is Chairman of The Open Group UDEF Project.


Filed under Semantic Interoperability

The Open Group releases O-ACEML standard, automates compliance configuration

By Jim Hietala, The Open Group

The Open Group recently published the Open Automated Compliance Expert Markup Language (O-ACEML) standard. This new technical standard addresses the need to automate the process of configuring IT environments to meet compliance requirements. O-ACEML will also enable customer organizations and their auditors to streamline data gathering and reporting on compliance postures.

O-ACEML is aimed at helping organizations reduce the cost of compliance by easing manual compliance processes. The standard is an open, simple, and well-defined XML schema that allows compliance requirements to be described in machine-understandable XML, as opposed to requiring humans to interpret text from documents. The standard also allows for a remediation element, which enables multiple requirements (from different compliance regulations) to be blended into a single policy. An example of where this is needed is password length and complexity requirements, which may differ between regulations. O-ACEML allows the most secure setting to be selected and applied, enabling all of the regulations to be met or exceeded.
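The "most secure setting" blending idea can be illustrated with a small sketch. The rule names, values, and merge logic below are hypothetical illustrations of the concept, not the actual O-ACEML schema or processing rules:

```python
# Hypothetical per-regulation requirements; names and values are
# illustrative only, not actual O-ACEML content.
requirements = {
    "regulation_a": {"min_password_length": 8,  "max_password_age_days": 90},
    "regulation_b": {"min_password_length": 12, "max_password_age_days": 60},
}

def blend(requirements):
    """Merge per-regulation settings into one policy that meets or exceeds all."""
    policy = {}
    for settings in requirements.values():
        for key, value in settings.items():
            if key.startswith("min_"):
                # For minimums, the largest value is the most secure.
                policy[key] = max(policy.get(key, value), value)
            elif key.startswith("max_"):
                # For maximums, the smallest value is the most secure.
                policy[key] = min(policy.get(key, value), value)
    return policy

print(blend(requirements))
# {'min_password_length': 12, 'max_password_age_days': 60}
```

The blended policy satisfies both hypothetical regulations at once, which is the point of the remediation element: one machine-applied configuration instead of a per-regulation reconciliation done by hand.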

O-ACEML is intended to allow platform vendors and compliance management and IT-GRC providers to utilize a common language for exchanging compliance information. The existence of a single common standard will benefit platform vendors and compliance management tool vendors by reducing development costs and providing a single data interchange format. Customer organizations will benefit by reducing costs for managing compliance in complex IT environments, and by increasing effectiveness. Where previously organizations might have polled only a small but representative sample of their environment to assess compliance, a standard allowing automated compliance checking makes it feasible to survey the entire environment rather than just a small sample. Organizations publishing government compliance regulations, as well as the de facto standard compliance organizations that have emerged in many industries, will benefit from more cost-effective adoption and simpler compliance with their regulations and standards.

In terms of how O-ACEML relates to other compliance-related standards and content frameworks, it has similarities to and differences from NIST’s Security Content Automation Protocol (SCAP) and the Unified Compliance Framework (UCF). One of the main differences is that O-ACEML was architected so that a Compliance Organization could author its IT security requirements in a high-level language, without needing to understand the specific configuration commands and settings an OS or device will use to implement the requirement. A distinguishing capability of O-ACEML is that it gathers artifacts as it moves from Compliance Organization directive, to implementation on a particular device, to the result of the configuration command. The final step of this automation not only produces a computer system configured to meet or exceed the compliance requirements, it also produces an XML document from which compliance reporting can be simplified. The Open Group plans to work with NIST and the creators of the UCF to ensure interoperability and integration between O-ACEML, SCAP, and UCF.

If you have responsibility for managing compliance in your organization, or if you are a vendor whose software product involves compliance or security configuration management, we invite you to learn more about O-ACEML.

An IT security industry veteran, Jim Hietala is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.


Filed under Cybersecurity, Standards

What’s in a name? A change of name for our ITAC and ITSC professional certifications

By Steve Philp, The Open Group

With the launch of the new Open Group website this week, we have taken the opportunity to rebrand our two skills- and experience-based certification programs. The IT Architect Certification (ITAC) program has now become The Open Group Certified Architect (Open CA) program. The IT Specialist Certification (ITSC) program has now become The Open Group Certified IT Specialist (Open CITS) program.

The new website (and our new logo for that matter) places much more emphasis on the word “Open”. This is one of the reasons for changing the names from something that is not readily associated with The Open Group (i.e. ITAC) to something that is more recognizable as an Open Group certification, i.e. Open CA. However, besides the name change, no changes have been made to the way in which either program operates. For example, the Open CA program still requires candidates to submit a comprehensive certification package detailing the skills and experience gained working on architecture-related projects, followed by a rigorous peer review process.

The Open CA program still currently focuses on IT-related work. However, the architecture profession is constantly evolving and to reflect this, The Open Group will incorporate dedicated Business Architecture and Enterprise Architecture streams into the Open CA program at some point in the near future. Our members are working on defining the core skills that an architect needs to have and the specific competencies one needs for each of these three specialist areas. Therefore, going forward, applicants will be able to become an Open CA in:

  • IT Architecture
  • Business Architecture
  • Enterprise Architecture

There are approximately 3,200 individuals who are certified in our Open CA program, and by broadening the scope of the program we hope to certify many more architects. There are more than 2,300 certified IT Specialists in the Open CITS program, and many organizations around the world have identified this type of skills- and experience-based program as a necessary part of the process to develop their own internal IT profession frameworks.

Open CA and Open CITS can be used in the recruitment process and help to guarantee a consistent and quality-assured service on project proposals, procurements and service level agreements. They can also help in the assessment of individuals in specific IT domains and provide a roadmap for their future career development. You can find out more about our programs by visiting the professional certification area of our website.

Steve Philp is the Marketing Director for the Open CA and Open CITS certification programs at The Open Group. Over the past 20 years, Steve has worked predominantly in sales, marketing and general management roles within the IT training industry. Based in Reading, UK, he joined The Open Group in 2008 to promote and develop the organization’s skills- and experience-based IT certifications.


Filed under Certifications

Announcing our new website: Building awareness of The Open Group’s standards and certifications

By Patricia Donovan, The Open Group

Those who have already visited The Open Group website today may have noticed it has a new appearance. And if you haven’t, please visit it now!

Yes, we’ve refined the design and encapsulated the information accumulated over the years into an easily digestible and navigable site. But the real change is in the approach to how we use it as a business tool. In many ways, our new website is an extension of the mission we set for ourselves nearly 25 years ago: to drive the creation of Boundaryless Information Flow™ by giving people access to the information they need most, in the way they expect to find it.

You may recall that in 2010, we sent out surveys asking your opinions on what our members find to be important and what features and activities they value, as well as thoughts on compelling images, colors and other visuals. The new website, and some of the other communications you are now seeing from The Open Group, are a direct result of your input.

The new website is easier to scan, read and navigate, enabling visitors to find what they need quickly. Just as importantly, our key messages and value propositions are evident and clear. We are confident that our new web presence will improve The Open Group’s visibility and reputation as the global thought leader in the development of open, vendor-neutral standards and certifications — which will increase awareness for the valuable work done by the members who make up The Open Group Forums and Work Groups.

Additionally, the foundation has been laid to make the website a more agile, more interactive, Web 2.0 site — a tool that evolves organically, enables us to add features we were unable to offer previously, and allows us to meet your needs in real time.

I hope you will visit the new website at the same address, www.opengroup.org, and acquaint yourself with the new site. We’re quite proud of it, but we know there’s still work to do beyond today’s launch. In the coming months, we hope to continue improving the site so that it best serves you, our members.

In the meantime, please note that some of the pages you may have previously bookmarked may no longer work and will need to be bookmarked again; for a time you’ll still be able to access material on our former site. Finally, please send any web feedback to webfeedback@opengroup.org.

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the US.


Filed under Uncategorized

PODCAST: Embracing EA and TOGAF® aids companies in improving innovation, market response and governance

By Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-How to Leverage Advanced TOGAF 9 Use for Business Benefits

The following is the transcript of a sponsored podcast panel discussion on how to leverage advanced concepts in TOGAF® for business benefits, in conjunction with The Open Group Conference, Austin 2011.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion in conjunction with the latest Open Group Conference in Austin, Texas, the week of July 18, 2011. We’ve assembled a panel to examine the maturing use of TOGAF, The Open Group Architecture Framework, and how enterprise architects and business leaders are advancing and exploiting the latest version of the framework, Version 9. We’ll further explore how the full embrace of TOGAF, its principles and methodologies, is benefiting companies in their pursuit of improved innovation, responsiveness to markets, and operational governance. Is enterprise architecture (EA) joining other business transformation agents as a part of a larger and extended strategic value? How? And what exactly are the best practitioners of TOGAF getting for their efforts in terms of business achievements?

Here with us to delve into advanced use and expanded benefits of EA frameworks and what that’s doing for their user organizations is Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities for The Open Group, based in Shanghai. Welcome, Chris.

Chris Forde: Good morning, Dana.

Gardner: We’re also here with Jason Uppal. He is the Chief Architect at QR Systems, based in Toronto. Welcome, Jason.

Jason Uppal: Thank you, Dana.

Gardner: Jason, let’s cut to the quick here. We hear an awful lot about architecture. We hear about planning and methodologies, putting the right frameworks in place, but is TOGAF having an impact on the bottom line in the organizations that are employing it?

Uppal: One of the things with a framework like TOGAF is that, on the outside, it’s a framework. At the same time, when you apply it along with the other disciplines, it makes a big difference in the organization, partly because it allows the IT organization to step up to the plate within the enterprise as a whole and ask how it can actually exploit the current assets that it already has. And secondly, how it makes sure the new assets that it does bring into the organization are aligned to the business needs.

One of the examples where EA has a huge impact in many of the organizations that I have experience with is that, with key EA methods, we’re able to capture the innovation that exists in the organization and make that innovation real, as opposed to just suggestions that are thrown in a box, and nobody ever sees them.

Gardner: What is it about capturing that innovation that gets us to something we can measure in terms of an achievable bottom-line benefit?

Evolve over time

Uppal: Say you define an end-to-end process using TOGAF’s Architecture Development Method (ADM). What it does is give me a way to capture that innovation at the lowest level and then evolve it over time. The people who are part of the innovation at the beginning see their innovation or idea progressing through the organization, as the innovation gets aligned to value statements, the value statements get aligned to capabilities, the strategies, and the projects, and hence through to delivery at the end of the day.

Therefore, if I make a suggestion of some sort, that innovation or idea is seen throughout the organization through the methods like ADM, and the linkage is explicit and very visible to the people. Therefore, they feel comfortable that their ideas are going somewhere, they are just not getting stuck.

Forde: There’s an additional point here, Dana, to underscore the answer that Jason gave to your question. In the end result, what you want to see out of your architecture program is movement in the KPIs for the business, the business levers that they are expecting to be moved. Whether that is related to cost reduction or to top-line numbers or whatever, that explicit linkage through to the business levers in an architecture program is critical.

Gardner: Chris, you have a good view on the global markets and the variability of goals here. Many companies are looking to either cut cost or improve productivity. Others are looking to expand. Others are looking to manage how to keep operations afloat. There are so many different variables. How do things like the TOGAF 9 and EA have a common benefit to all of those various pursuits? What is the common denominator that makes EA so powerful?

Forde: Going back to the framework reference, what we have with TOGAF 9 is a number of assets, but primarily it’s a tool that’s available to be customized, and it’s expected to be customized.

If you come to the toolset with a problem, you need to focus the framework on the area that’s going to help you get rapid value to solving your particular problem set. So once you get into that particular space, then you can look at migrating out from that entry point, if that’s the approach, to expanding your use of the framework, the methods, the capabilities, that are implicit and explicit in the framework to address other areas. You can start at the top and work your way down through the framework, from this kind of über value proposition, right down through delivery to the departmental level or whatever. Or, you can come into the bottom, in the infrastructure layer, in IT for example, and work your way up. Or, you can come in at the middle. The question is what is impeding your company’s growth or your department’s growth, if those are the issues that are facing you.

One of the reasons that this framework is so useful in so many different dimensions is that it is a framework. It’s designed to be customized, and is applicable to many different problems.

Gardner: Back to you, Jason. When we think about a beginning effort, perhaps a crawl-walk-run approach to EA and TOGAF, the promise is that further development, advancement, understanding, and implementation will lead to larger, more strategic goals.

Let’s define what it means to get to that level of maturity. When we think about an advanced user of TOGAF, what does that mean? Then, we’ll get into how they can then leverage that to further goals. But, what do we really mean by an advanced user at this point?

Advanced user

Uppal: When we think about an advanced user, in our practice we look at it from different points of view and ask what value I’m delivering to the organization. It could very well be delivering value to a CTO in the organization. That is not to say that’s not an advanced user, because that’s strictly focused on technology.

But then, the CTO focus allows us to concentrate on the current assets that are deployed in the organization. How do you get the most out of them? So, that’s an advanced user: one who can figure out how to standardize those assets in a scalable way so that they become reusable in the organization. As we move up the food chain from a very technology-centric view to a more optimized and transformed scale, the advanced user at that point has a framework like TOGAF and all of these tools in their back pocket.

Now, depending on the stakeholder they’re working with, be that a CEO, a CFO, or a junior manager in a line of business, I can focus them on defining a specific capability that they are working towards and create transition roadmaps. Once those transition roadmaps are established, I can drive them through. An advanced user in the organization is somebody who has all these tools and frameworks available to them, but at the same time is very focused on a specific value delivery point in their scope.

One beauty of TOGAF is that, because we get to define what enterprise is and we are not told that we have to interview the CEO on day one, I can define an enterprise from a manager’s point of view or a CFO’s point of view and work within that framework. That to me is an advanced user.

Gardner: When we talk about applied architecture, what does that mean? How is it that we move from concept into execution?

Uppal: The frameworks that we have are well-thought-out frameworks. So, it moves the conversation away from the framework debate and very quickly into what we do with it. With a framework like TOGAF, I can start from an executive who has defined a business strategy, which typically is a two-page PowerPoint presentation, sometimes accompanied by Excel. That’s a good starting point for an enterprise architect. Now, I use methods like TOGAF to define the capabilities in that strategy that they are trying to optimize, where they are, and what they want to transition to.

Very creative

This is where a framework allows me to be very creative, defining the capabilities and the transition points, and giving a roadmap to get to those transitions. That is the cleverness of architecture work, and where the real skill of an architect comes in: not in defining the framework, but in defining the application of the framework to a specific business strategy.

Gardner: Jason, we mentioned that there is a great deal of variability in what different companies in different regions and industries need to accomplish. But one of the common questions I get these days is what to outsource and what to manage internally, and how to decide the boundaries between a core competency and extended outsourcing or hybrid computing models. How does applied architecture come to the rescue when this sort of question, which I think is fundamental to an enterprise, comes up?

Uppal: That's a great question. That's one of the areas where, if architects do their job well, we can help the organization move much further along. What we do in the business space, and we have done it many times with the framework, is to look at the value chain of the organization and then map it to the capabilities required.

Once we know those capabilities, then I can squarely put that question to the executives and say, "Tell me which capability you want to be the best at. Tell me which capability you want to lead the market in. And tell me which capabilities you are willing to be mediocre at, just below the industry benchmark." Once I understand which capability I want to be the best at, that's where I want to focus my energy. For the ones I am prepared to live with being mediocre, I can put another strategy into place, ask how to outsource them, and focus my outsourcing deal on cost and service.

This is opposed to having a very confused contract with the outsourcer, where one day I'm outsourcing for cost reasons and the next for growth reasons. It becomes very difficult for an organization to manage those contracts and for the vendor to provide the right support. That conversation at the beginning, getting executives to commit to which capabilities they want to be best at, is a good conversation for an enterprise architect.

My personal experience has been that if I get a call back from the executive, and they say they want to be best at every one of them, then I say, “Well, you really don’t have a clue what you are talking about. You can’t be super fast and super good at every single thing that you do.”

Gardner: So making those choices is what's critical. Some of the confusion I also hear about in the field is how to do a cost-benefit analysis of which processes to keep internal, versus sourcing them externally or through a hybrid model.

Is there something about the applied architecture and TOGAF 9 that sets up some system of record or methodology approach that allows that cost-benefit analysis of these situations to be made in advance? Is there anything that the planning process brings to the table in trying to make proper decisions about sourcing?

Capability-based planning

Uppal: Absolutely. This is where the whole capability-based planning conversation comes in. It was introduced in TOGAF 9, and we have more legs to go in developing that concept further, as we learn how best to do some of these things.

When I look at capability-based planning, I expect my executives to look at it and ask what the opportunities and threats are. What could you get out there in the industry if you had this capability in your back pocket? Don't worry first about how we are going to get it; let's decide whether it's worth getting.

Then we focus the organization on the long haul and say, if nobody in the industry has this capability and we do get it, what will it do for us? That provides us another view, a long-term view, of the organization, and of how we are going to focus our attention on capabilities.

One of the beauties of doing EA is that when we start EA from a strategic intent, that gives us a good 10-15 year view of what our business is going to be like. When we start architecture at the business strategy level, that gives us a six-month to five-year view.

Enterprise architects are very effective at holding two views of the world: a 5-, 10-, or 15-year view, and a 6-month to 3-year view. If we don't focus on the strategic intent, we'll never know what is possible, and we would always be working on what is possible within our organization, as opposed to what is possible in the industry as a whole.

Gardner: So, in a sense, you have multiple future tracks or trajectories that you can evaluate, but without a framework, without an architectural approach, you would never be able to have that set of choices before you.

Chris Forde, any thoughts on what Jason’s been saying in terms of the sourcing and cost benefits and risks analysis that go into that?

Forde: In the kinds of environments that most organizations are operating in — government, for-profit, and not-for-profit — everybody is trying to understand what they need to be good at and what their partners are very good at that they can leverage. Their choices around this are, of course, critical.

One of the things you need to consider is that if you are going to give process X out to a partner to manage and operate, whatever that process might be, what do you have to be good at in order to make that partner effective? One of the things you need to be good at is managing third parties. One of the advanced uses of EA is applying the architecture to those management processes. As things mature, you can see an effective organization managing a number of partners through an architected approach. So when we talk about what advanced users do, what I'm offering is that an advanced use of EA is its application to third-party management.

Gardner: So the emphasis is on the process, not necessarily who is executing on that process?

Framework necessary

Forde: Correct, because you need a framework. Think about what most major Fortune 500 companies in the United States do. They have multiple, multiple IT partners for application development and potentially for operations. They split the network out. They split the desktop out. This creates an amazing degree of complexity around multiple contracts. If you have an integrator, that’s great, but how do you manage the integrator?

There's a whole slew of complex problems. What we've learned over the years is that we tend to think of the original idea of "outsourcing," or whatever term is used, in the abstract, as one activity, when in fact it might involve anywhere from 5-25 partners. Coordinating that complexity is a major issue for organizations, and taking an architected approach to that problem is an advanced use of EA.

Gardner: So stated another way, Jason, the process is important, but the management of processes is perhaps your most important core competency. Is that fair, and how does EA support that need for a core competency of managing processes across multiple organizations?

Uppal: That's absolutely correct. Chris is right. For example, one organization we worked with decided on two capabilities, one of which they wanted to be very, very good at.

We worked with a large concrete manufacturer in the northern part of the country. If you're a concrete manufacturer, your biggest cost is the cement. If you can optimize the cement, substituting products with chemicals while getting the same performance, you can actually get a much higher return and margin on the same concrete.

In this organization, the concrete manufacturing process itself was the core competency. That had to be kept in-house. The infrastructure is essential to making the concrete, but it wasn't the core competency of the organization, so it could be outsourced. This organization had to build a process for managing the outsourcer and, at the same time, a capability and a process for becoming the best concrete manufacturer. Those two essential capabilities were identified.

An EA framework like TOGAF actually allows you to build both of those capabilities, because it doesn't care which one you're building. It just says, okay, I have a capability to build, and here is a set of instructions for how you do it. The rest is the cleverness of the architect — how he uses his tools to actually define the best possible solutions.

Gardner: Of course, it’s not just enough to identify and implement myriad sourcing or complex sourcing activities, but you need to monitor and have an operational governance oversight capability as well. Is there something in TOGAF 9 specifically that lends itself to taking this into the operational and then creating ongoing efficiencies as a result?

Uppal: Absolutely. In the ADM, when we get to implementation governance, post-implementation governance, and value realization, the question is how we actually manage the architecture over its life. This is one of the areas where TOGAF 9 has done a considerably good job, and we've still got a long way to go in how we actually monitor what value is being realized.

Very explicit

Our governance model is very explicit about who does what and when, and how you monitor it. We have extended this conversation using TOGAF 9 many times. At the end, when the capability is deployed, the initial value statement that was created in the business architecture is given back to the executive who asked for that capability.

We say, "These are the benefits of the capabilities you signed off on at the beginning. Now you've got the capability. We are going to pass this into strategic planning, because it is going to be the baseline and starting point for next year's plan." So governance is not just monitoring; it's also asking whether we actually got the business results we anticipated.

Gardner: Another area that’s of great interest to me nowadays is looking at the IT organization as they pursue things like Cloud, software as a service (SaaS), and hybrid models. Do they gather a core competency at how to manage these multiple partners, as Chris pointed out, or does another part of the company that may have been dealing with outsourcing at a business process level teach the IT department how to do this?

Any sense from either of our panelists on whether IT becomes a leader or a laggard in how to manage these relationships, and how important is managing the IT element of that in the long run? Let’s start with you, Jason.

Uppal: It depends on the industry IT is in. For example, if you're an organization that is very engineering-focused, engineers have a lot more experience managing outsourcing deals than IT organizations do. In that case, engineering leads this conversation.

But most organizations are service-oriented, where engineering has not been a primary discipline and IT has a lot of experience managing outside contracts. In that case, the whole cloud conversation becomes a very effective conversation within the IT organization.

When we think about cloud, we have actually done cloud before. This is not a new thing, except that before we looked at it from a hosting point of view and a SaaS point of view. Now, cloud goes much further, where an entire capability is provided to you. Not only is the infrastructure shared, but the entire industry's knowledge is in that capability. This is becoming a very popular thing, and rightfully so, not because it's a sexy thing to have. In healthcare, especially in countries with socialized, non-monopolized healthcare, they are sharing this knowledge in the cloud with all the hospitals. It's becoming a very productive thing, and enterprise architects are driving it, because we're thinking of capabilities, not components.

Gardner: Chris Forde, similar question. How do you see the role of IT shifting or changing as a result of the need to manage more processes across multiple sources?

Forde: It's an interesting question. I tend to agree with the earlier part of Jason's response. I'm not disagreeing with any of it, actually, but the point he made is that it's an "it depends" answer.

IT interaction

Under normal circumstances, IT organizations are very good at interacting with other technology areas of the business. From what I've seen in the organizations I've dealt with, they typically see slices of business processes rather than the end-to-end process. Even within IT organizations, because of the size of many organizations, you typically have some division of responsibilities. As for Jason's emphasis on capabilities and business processes, of course capabilities and processes transcend functional areas in an organization.

To the extent that a business unit or a business area owns a process end to end, they may well be better positioned to manage the BPO type of things. If there's a heavy technology orientation around the process outsourcing, then you will see the IT organization being involved to one extent or another.

The real question is, where is the most effective knowledge, skill, and experience around managing these outsourcing capabilities? It may be in the IT organization or it may be in the business unit, but you have to assess where that is.

That's one of the functions of the architecture approach. You need to assess what is going to make you successful in this. If what you need happens to be in the IT organization, then go with that. If it is more effective in the business unit, then go with that. And perhaps the answer is that you need to combine, or create a new functional organization, for the specific purpose of meeting that activity and outsourcing need.

I’m hedging a little bit, Dana, in saying that it depends.

Gardner: It certainly raises some very interesting issues. At the same time that we're seeing this big question mark around sourcing and how to do that well, we're also in a period where more organizations are becoming data-driven and looking to have deeper, more accessible, and real-time analytics applied to their business decisions. Just as with sourcing, IT also has an integral role in this, having been perhaps the architects or implementers of warehousing, data marts, and business intelligence (BI).

Back to you Jason. As we enter into a phase where organizations are also trying to measure and get scientific and data-driven about their decisions, how does IT, and more importantly, how does TOGAF and EA come to help them do that?

Uppal: We have a number of experiences like that, Dana. One is a financial services organization whose entire function is managing some 100-plus billion dollars' worth of assets. In that kind of organization, the whole decision-making process is based on the data that they get. And 95 percent of the data is not within the organization; it is vendor data that they're getting from outside.

So in that kind of conversation, we look and say that the organization needs a capability to manage data. Once we define that capability, we start putting metrics on it. What does this capability need to be able to do?

In this particular example, we put a metric on it and said that the data gets identified in the morning, by the afternoon we bring it into the organization, and by the end of the day we get rid of it. That's how fast the data has to be procured, brought into the organization, transformed, and delivered to the end user. That end user decides whether we will ever look at the data again.
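That end-of-day lifecycle can be read as a set of capability metrics with hard time targets, which is one way an architect can make the capability measurable. As a purely illustrative sketch (the class, stage names, and thresholds below are invented for this example, not drawn from any real system):

```python
from dataclasses import dataclass

@dataclass
class CapabilityMetric:
    """One stage of the data-management capability with a hard time target."""
    name: str
    target_hours: float  # maximum elapsed hours from data identification

    def met_by(self, actual_hours: float) -> bool:
        return actual_hours <= self.target_hours

# Illustrative lifecycle targets, measured from morning identification
metrics = [
    CapabilityMetric("ingest", 4.0),   # brought into the organization by afternoon
    CapabilityMetric("deliver", 8.0),  # transformed and delivered to the end user
    CapabilityMetric("retire", 10.0),  # retired by end of day
]

# One observed run of the pipeline
observed = {"ingest": 3.5, "deliver": 7.0, "retire": 9.5}

gaps = [m.name for m in metrics if not m.met_by(observed[m.name])]
print("capability met" if not gaps else f"gaps in: {gaps}")  # prints "capability met"
```

Checking observed runs against targets like these is what turns "we need fast data management" into a capability an executive can sign off on and a roadmap can be built toward.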

Data capability

Having that speed of data management capability in the organization is key. This is one of the areas where architects can take a look and say: this is the capability you need, and now I can give you a roadmap to get to that capability.

Gardner: Chris Forde, how do you see the need for a data-driven enterprise coincide with IT and EA?

Forde: For most, if not all, companies, information and data are critical to their operation and planning activities, on a day-to-day basis, month-to-month, annually, and over longer time spans. So the information needs of a company are absolutely critical in any architected approach to solutioning or value-add activities.

I don’t think I would accept the assumption that the IT department is best-placed to understand what those information needs are. The IT organization may be well-placed to provide input into what technologies could be applied to those problems, but if the information needs are normally being applied to business problems, as opposed to technology problems, I would suggest that it is probably the business units that are best-placed to decide what their information needs are and how best to apply them.

The technologist’s role, at least in the model I’m suggesting, is to be supportive in that and deliver the right technology, at the right time, for the right purpose.

Gardner: Then, how would a well-advanced applied architecture methodology and framework help those business units attain their information needs, but also be in a position to exploit IT’s helping hand when needed?

Forde: It’s mostly providing the context to frame the problem in a way that it can be addressed, chunked down to reasonable delivery timeframes, and then marshaling the resources to bring that to reality.

From a pure framework and applied methodology standpoint, if you’re coming at it from an idealized situation, you’re going to be doing it from a strategic business need and you’re going to be talking to the business units about what their capability and functional needs are. And at that time, you’re really in the place of what business processes they’re dealing with and what information they need in order to accomplish what the particular set of goals is.

This is way in advance of any particular technology choice being made. That’s the idealized situation, but that’s typically what most frameworks, and in particular, the TOGAF 9 Framework from The Open Group, would go for.

Gardner: We’re just beginning these conversations about advanced concepts in EA and there are going to be quite a bit more offerings and feedback and collaboration around this subject at The Open Group Conference in Austin. Perhaps before we sign off, Jason, you can give us a quick encapsulation of what you will be discussing in terms of your presentation at the conference.

Uppal: One of the things that we've been looking at from the industry's point of view is that this conversation around the frameworks is a done deal now, because everybody has accepted that we have good-enough frameworks. We're moving to the next phase: what we do with these frameworks.

In our future conferences, we’re going to be addressing that and saying what people are specifically doing with these frameworks, not to debate the framework itself, but the application of it.

Continuous planning

In Austin, we'll be looking at how we're using the TOGAF framework to improve ongoing annual business and IT planning. We have a specific example where we looked at an organization that was doing once-a-year planning. That was not very effective for the organization. They wanted to change to continuous planning, which means planning that happens throughout the year.

We identified four or five very specific, measurable goals that the program had, such as accuracy of the plan, business goals achieved by the plan, time and cost to manage and govern the plan, and stakeholder satisfaction. Those are the areas where we are defining how a TOGAF-like framework can be applied to solve a specific problem like enterprise planning and governance.

That's something we will be bringing to our conference in Austin, at an event held on the Sunday. In the future, we'll be doing a lot more of these specific applications of a framework like TOGAF to unique sets of problems that are very tangible and quickly resonate with executives, not just in IT, but across the entire organization.

Forde: Can I follow along with a little bit of a plug here, Dana?

Gardner: Certainly.

Forde: Jason is going to be talking as a senior architect on the applied side of TOGAF on the Sunday. For the Monday plenary, this is basically the rundown: we have David Baker, a Principal from PricewaterhouseCoopers, talking about business-driven architecture for strategic transformations.

Following that, Tim Barnes, the Chief Architect at Devon Energy out of Canada, covering what they are doing from an EA perspective with their organization.

Then, we're going to wrap up the morning with Mike Walker, the Principal Architect for EA Strategy and Architecture at Microsoft, talking about moving from IT Architecture to Enterprise Architecture.

This is a very powerful lineup of people addressing this business focus in EA and the application of it for strategic transformations, which I think are issues that many, many organizations are struggling with.

Gardner: Looking at, again, the question I started us off with, how do TOGAF and EA affect the bottom line? We’ve heard about how it affects the implementation for business transformation processes. We’ve talked about operational governance. We looked at how sourcing, business process management and implementation, and ongoing refinement are impacted. We also got into data and how analytics and information sharing are affected. Then, as Jason just mentioned, planning and strategy as a core function across a variety of different types of business problems.

So, I don’t think we can in any way say that there’s a minor impact on the bottom line from this. Last word to you, Jason.

Uppal: This is a time now for the enterprise architects to really step up to the plate and be accountable for real performance influence on the organization’s bottom line.

If we can exploit assets better than we do today, improve our planning programs, and commit to very measurable and unambiguous performance indicators, that is a huge step forward for enterprise architects: moving away from technology and frameworks to real problems that resonate with executives and align business and IT.

Gardner: Well, great. You’ve been listening to a sponsored podcast discussion in conjunction with The Open Group Conference in Austin, Texas, the week of July 18, 2011.

I would like to thank our guests. We have been joined by Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities for The Open Group. Thanks, Chris.

Forde: Thanks, Dana.

Gardner: And also Jason Uppal. He is the Chief Architect at QR Systems. Thank you, Jason.

Uppal: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for joining, and come back next time.

Jason Uppal will be presenting “Advanced Concepts in Applying TOGAF 9” at The Open Group Conference, Austin, July 18-22. Join us for best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Copyright The Open Group 2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


Filed under Enterprise Architecture, TOGAF®

EA Fundamentalism

By Stuart Boardman, Getronics

It’s an unfortunate fact that when times get tough, the tough, rather than get going, tend to pick on other people. What we see is that most formal and informal groups tend to turn inwards and splinter into factions, each possessing the only true gospel. When times are good, we’re all too busy doing what we actually want to do to want to waste time sniping at other folks.

Maybe this isn’t the reason but it strikes me that in the EA blogosphere at the moment (e.g. the EA group on LinkedIn) every discussion seems to deteriorate into debate about what the proper definition of EA is (guess how many different “right answers” there are) or which of TOGAF® or Zachman or <insert your favourite framework here> is the (only) correct framework or why all of them are totally wrong, or worse still, what the correct interpretation of the minutiae of some aspect of framework X might be.

Perhaps the only comfort we can draw from the current lack of proper recognition of EA by the business is the fact that the Zachmanites are actually not firing bullets at the Rheinlanders (or some other tribe). Apart from the occasional character assassination, it’s all reasonably civilized. There’s just not enough to lose. But this sort of inward looking debate gets us nowhere.

I use TOGAF® . If you use another framework that’s better suited to your purpose, I don’t have a problem with that. I use it as a framework to help me think. That’s what frameworks are for. A good framework doesn’t exclude the possibility that you use other guidance and insights to address areas it doesn’t cover. For example, I make a lot of use of the Business Model Canvas from Osterwalder and Pigneur and I draw ideas from folks like Tom Graves (who in turn has specialized the Business Model Canvas to EA). A framework (and any good methodology) is not a cookbook. If you understand what it tries to achieve, you can adapt it to fit each practical situation. You can leave the salt out. You can even leave the meat out! There are some reasonable criticisms of TOGAF® from within and outside The Open Group. But I can use TOGAF® with those in mind. And I do. One of the things I like about The Open Group is that it’s open to change – and always working on it. So the combination of The Open Group and TOGAF® and the awareness of important things coming from other directions provides me with an environment that, on the one hand, encourages rigour, and on the other, constantly challenges my assumptions.

It’s not unusual in my work that I liaise with other people officially called Enterprise Architects. Some of these folks think EA is only about IT. Some of them think it’s only about abstractions. I also work with Business Architects and Business Process Architects and Business Strategists and Requirements Engineers and… I could go on for a very long time indeed. All of these people have definitions of their own scope and responsibilities, which overlap quite enough to allow not just for fundamentalism but also serious turf wars. Just as out there in the real world, the fundamentalists and those who define their identity by what they are not are the ones who start wars which everyone loses.

The good news is that just about enough of the time enough of these folks are happy to look at what we are all trying to achieve and who can bring what to the party and will work together to produce a result that justifies our existence. And every time that happens I learn new things – things that will make me a better Enterprise Architect. So if I get noticeably irritated by the religious disputes and respond a bit unreasonably in web forum debates, I hope you’ll forgive me. I don’t like war.

By the way, credit for the “fundamentalism” analogy goes to my friend and former colleague, François Belanger. Thanks François.

Enterprise Architecture will be a major topic of discussion at The Open Group Conference, Austin, July 18-22. Join us for best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Stuart Boardman is a Senior Business Consultant with Getronics Consulting where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity.


Filed under Enterprise Architecture, TOGAF®

Why standards in information technology are critical

By Mark Skilton, Capgemini

See the next article in Mark’s series on standards here.

Information technology as an industry is at the center of communications and the exchange of information, and increasingly, fully digitized products and services. Its span of influence and control is enabled by protocols, syntax, and nomenclatures that are defined and known between consumers and providers. The Internet is a testament to the HTTP, TCP/IP, HTML, URL, MAC, and XML standards that have become universal languages enabling its very existence. These “universal common standards” are an example of a homogeneous, all-pervasive standard that enables the construction and use of resources and connections built on them.

These “building blocks” are a necessary foundation for more advanced languages and exchange interactions. It can be argued that with every new technology advance, a new language is needed to express and drive that advance. Prior to the Internet, earlier standards such as timeshare mainframes, virtual memory, the ISA chip architecture, and fiber optics established scale and increasing capacity to effect simple to more complex tasks. There simply were no universal protocol-based standards that could support a huge network of wired and wireless communications. Commercial-scale computing was locked and limited inside mainframes and PCs.

With federated, distributed computing standards, all that changed. The client-server era enabled cluster intranets and peer-to-peer networks. Email exchange, web access, and database access evolved to span a number of computers and to connect groups of computers together for shared resource services. A web browser running as a client program on the user's computer enables access to information on any web server in the world. So standards come and go, and evolve in cycles as existing technology matures and new technologies and capabilities emerge, much like the cycles of innovation described in “The Machine That Changed the World” by James Womack (1990), “Clockspeed” by Charles Fine (1999), and “The Innovator's Dilemma” by Clayton Christensen (1997).

The challenge is to position standards, and policies for using those standards, in a way that establishes and enables products, services, and markets to be created or developed. The Open Group does just that.

Mark Skilton will be presenting on “Building A Cloud Computing Roadmap View To Your Enterprise Planning” at The Open Group Conference, Austin, July 18-22. Join us for best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing, which achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include developing new Cloud Computing Model standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models. He contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.


Filed under Standards