Monthly Archives: February 2011

Cloud security and risk management

by Varad G. Varadarajan, Cognizant Technology Solutions

Are you ready to move to the Cloud?

Risk management and cost control are two key issues facing CIOs and CTOs today. Both these issues come into play in Cloud Computing, and present an interesting dilemma for IT leaders at large corporations.

The elastic nature of the Cloud, the conversion of Capex to Opex and the managed security infrastructure provided by the Cloud service provider make it very attractive for hosting applications. However, there are a number of security and privacy issues that companies need to grapple with before moving to the Cloud.
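To put a rough number on the Capex-to-Opex argument, here is a minimal break-even sketch. All of the figures (hardware price, amortization period, hourly rate, usage hours) are hypothetical, chosen only to illustrate why elastic, pay-per-use pricing favors bursty workloads:

```python
# Hypothetical break-even comparison: owned servers (Capex) vs. cloud (Opex).
# All figures are illustrative assumptions, not real vendor prices.

def on_prem_monthly_cost(purchase_price, amortization_months, monthly_opex):
    """Amortized hardware cost plus power/space/admin overhead."""
    return purchase_price / amortization_months + monthly_opex

def cloud_monthly_cost(hourly_rate, hours_used):
    """Pay-as-you-go: cost scales with actual usage."""
    return hourly_rate * hours_used

# A server costs the same every month whether busy or idle...
own = on_prem_monthly_cost(purchase_price=9000, amortization_months=36, monthly_opex=150)

# ...while a cloud instance is billed only for the hours a workload actually needs.
burst = cloud_monthly_cost(hourly_rate=0.50, hours_used=200)   # spiky workload
steady = cloud_monthly_cost(hourly_rate=0.50, hours_used=720)  # always-on workload

print(f"on-prem: ${own:.2f}/mo, cloud bursty: ${burst:.2f}/mo, cloud steady: ${steady:.2f}/mo")
```

Note how the savings concentrate in the bursty case: the closer a workload runs to 24x7, the more the cloud’s pay-per-use advantage erodes.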

For example, multi-tenancy and virtualization are great technologies for lowering the cost of hosting applications, and service providers are eager to use them. However, these technologies also pose grave security risks because companies operate in a shared infrastructure that offers very little isolation. They greatly increase the attack surface, which is a hacker’s dream come true.

Using multiple service providers on the Cloud is great for providing redundancy, connecting providers in a supply chain or handling spikes in services via Cloud bursts. However, managing identities across multiple providers is a challenge.  Making sure data does not accidentally cross trust boundaries is another difficult problem.
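The trust-boundary problem can be illustrated with a small sketch. The provider names and classification levels below are invented for illustration; a real deployment would tie this kind of policy check into its data governance tooling:

```python
# Illustrative sketch: prevent data from accidentally crossing trust
# boundaries between cloud providers. Provider names and classification
# labels are invented for this example.

# Highest data classification each (hypothetical) provider is trusted to hold.
PROVIDER_CLEARANCE = {
    "provider-a": "confidential",  # contracted, audited provider
    "provider-b": "internal",      # burst capacity only
    "provider-c": "public",        # CDN / static content
}

# Classification levels, ordered from least to most sensitive.
LEVELS = ["public", "internal", "confidential"]

def transfer_allowed(classification, destination):
    """A dataset may move only to a provider cleared for its level or above."""
    clearance = PROVIDER_CLEARANCE.get(destination)
    if clearance is None:
        return False  # unknown provider: deny by default
    return LEVELS.index(classification) <= LEVELS.index(clearance)

assert transfer_allowed("public", "provider-c")
assert not transfer_allowed("confidential", "provider-b")  # would cross a trust boundary
assert not transfer_allowed("internal", "provider-x")      # unknown destination
```

The deny-by-default branch matters: in a multi-provider supply chain, the dangerous transfers are usually the ones nobody thought to classify.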

Likewise, there are many challenges in the areas of:

  • Choosing the right service / delivery model (and its security implications)
  • Key management and distribution
  • Governance and Compliance of the service provider
  • Vendor lock-in
  • Data privacy (e.g. regulations governing the offshore-ability of data)
  • Residual risks

In my presentation at The Open Group India Conference next week, I will discuss these and many other interesting challenges facing CIOs regarding Cloud adoption. I will present a five step approach that enterprises can use to select assets, assess risks, map them to service providers and manage the risks through contract negotiation, SLAs and regular monitoring.

Cloud Computing will be a topic of discussion at The Open Group India Conference in Chennai (March 7), Hyderabad (March 9) and Pune (March 11). Join us for best practices and case studies in the areas of Enterprise Architecture, Security, Cloud and Certification, presented by preeminent thought leaders in the industry.

Varad is a senior IT professional with 22 years of experience in Technology Management, Practice Development, Business Consulting, Architecture, Software Development and Entrepreneurship. He has led consulting assignments in IT Transformation, Architecture, and IT Strategy/Blueprinting at global companies across a broad range of industries and domains. He holds an MBA (Stern School of Business, New York), an M.S. in Computer Science (G.W.U./Stanford, California) and a B.Tech (IIT, India).



PODCAST: Cloud Computing panel forecasts transition phase for Enterprise Architecture

By Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-Open Group Cloud Panel Forecasts Transition Phase for Enterprise IT

The following is the transcript of a sponsored podcast panel discussion on newly emerging Cloud models and their impact on business and government, from The Open Group Conference, San Diego 2011.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

We now present a sponsored podcast discussion coming to you live from The Open Group 2011 Conference in San Diego. We’re here the week of February 7, and we have assembled a distinguished panel to examine the expectation of new types of cloud models and perhaps cloud specialization requirements emerging quite soon.

By now, we’re all familiar with the taxonomy around public cloud, private cloud, software as a service (SaaS), platform as a service (PaaS), and my favorite, infrastructure as a service (IaaS), but we thought we would do you all an additional service and examine, firstly, where these general types of cloud models are actually gaining use and allegiance, and we’ll look at vertical industries and types of companies that are leaping ahead with cloud, as we now define it. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Then, second, we’re going to look at why one-size-fits-all cloud services may not fit so well in a highly fragmented, customized, heterogeneous, and specialized IT world.

How many of the cloud services that come with a true price benefit, usually at scale and cheap, will be able to replace what is actually on the ground in many complex and unique enterprise IT organizations?

What’s more, we’ll look at the need for cloud specialization, based on geographic and regional requirements, as well as based on the size of these user organizations, which of course can vary from 5 to 50,000 seats. Can a few types of cloud work for all of them?

Please join me now in welcoming our panel. Here to help us better understand the quest for “fit for purpose” cloud balance and to predict, at least for some time, the considerable mismatch between enterprise cloud wants and cloud provider offerings we’re here with Penelope Gordon, the cofounder of 1Plug Corporation, based in San Francisco. Welcome, Penelope.

Penelope Gordon: Thank you.

Gardner: We’re also here with Mark Skilton. He is the Director of Portfolio and Solutions in the Global Infrastructure Services with Capgemini in London. Thank you for coming, Mark.

Mark Skilton: Thank you.

Gardner: Ed Harrington joins us. He is the Principal Consultant in Virginia for the UK-based Architecting the Enterprise organization. Thank you, Ed.

Ed Harrington: Thank you.

Gardner: Tom Plunkett is joining us. He is a Senior Solution Consultant with Oracle in Huntsville, Alabama.

Tom Plunkett: Thank you, Dana.

Gardner: And lastly, we’re here with TJ Virdi. He is Computing Architect in the CAS IT System Architecture Group at Boeing based in Seattle. Welcome.

TJ Virdi: Thank you.

Gardner: Let me go first to you, Mark Skilton. One size fits all has rarely worked in IT. If it has, it has been limited in its scope and, most often, leads to an additional level of engagement to make it work with what’s already there. Why should cloud be any different?

Three areas

Skilton: Well, Dana, from personal experience, there are probably three areas of adaptation of cloud into businesses. For sure, there are horizontal common services to which, what you call, the homogeneous cloud solution could be applied common to a number of business units or operations across a market.

But, we’re increasingly starting to see the need for customization to meet the vertical competitive needs of a company or the decisions within that large company. So, differentiation and business models are still there; they are still in play in the platform cloud, as they were in the pre-cloud era.

But, the key thing is that we’re seeing a different kind of potential in what a business can do now with cloud — a more elastic, explosive expansion and contraction of a business model. Fundamentally, we’re seeing the operating model of the business changing, and the industry can change, using cloud technology.

So, there are two things going on: the business and the technologies are both changing because of the cloud.

Gardner: Well, for us to understand where it fits best, and perhaps not so good, is to look at where it’s already working. Ed, you talked about the federal government. They seem to be going like gangbusters in the cloud. Why so?

Harrington: Perceived cost savings, primarily. The (US) federal government has done some analysis. In particular, the General Services Administration (GSA), has done some considerable analysis on what they think they can save by going to, in their case, a public cloud model for email and collaboration services. They’ve issued a $6.7 million contract to Unisys as the systems integrator, with Google being the cloud services supplier.

So, the debate over the benefits of cloud, versus the risks associated with cloud, is still going on quite heatedly.

Gardner: How about some other verticals? Where is this working? We’ve seen in some pharma, health-care, and research environments, which have a lot of elasticity, it makes sense, given that they have very variable loads. Any other suggestions on where this works, Tom?

Plunkett: You mentioned variable workloads. Another place where we are seeing a lot of customers approach cloud is when they are starting a new project, because then they don’t have to migrate from existing infrastructure. Instead, everything is brand new. That’s the other place where we see a lot of customers looking at cloud: greenfields.

Gardner: TJ, any verticals that you are aware of? What are you seeing that’s working now?

Virdi: It’s probably not related to any one vertical market. I think what we are really looking for is speed to put new products into the market or evolve the products that we already have, how to optimize business operations, and how to reduce cost. These needs run parallel across vertical industries, where all these things are probably going to work as a cloud solution.

Gardner: We’ve heard the application of “core and context” to applications, but maybe there is an application of core and context to cloud computing, whereby there’s not so much core and lot more context. Is that what you’re saying so far?

Unstructured data

Virdi: In a sense, you would have to measure not only structured documents or structured data, but unstructured data as well. How to measure that and create a new product or solution is the really cool thing you would be looking for in the cloud. And, it has proved pretty easy to put a new solution into the market. So, speed is also the big thing there.

Gardner: Penelope, use cases or verticals where this is working so far?

Gordon: One example in talking about core and context is when you look in retail. You can have two retailers like a Walmart or a Costco, where they’re competing in the same general space, but are differentiating in different areas.

Walmart is really differentiating on the supply chain, and so it’s not a good candidate for public cloud computing solutions. We did discuss that it might possibly be a candidate for private cloud computing.

But that’s really where they’re going to invest in the differentiating, as opposed to a Costco, where it makes more sense for them to invest in their relationship with their customers and their relationship with their employees. They’re going to put more emphasis on those business processes, and they might be more inclined to outsource some of the aspects of their supply chain.

A specific example within retail is pricing optimization. A lot of grocery stores need to do pricing optimization checks once a quarter, or perhaps once a year in some of their areas. It doesn’t make sense for smaller grocery store chains to have that kind of IT capability in house. So, that’s a really great candidate, when you are looking at a particular vertical business process, to outsource to a cloud provider who has specific industry domain expertise.

Gardner: So for small and medium businesses (SMBs) that would be more core for them than others.

Gordon: Right. That’s an example, though, where you’re talking about what I would say is a particular vertical business process. Then, you’re talking about a monetization strategy on the part of the provider, where they are looking more at a niche strategy, rather than a commodity play where they are doing a horizontal infrastructure platform.

Gardner: Ed, you had a thought?

Harrington: Yeah, and it’s along the SMB dimension. We’re seeing a lot of cloud uptake in the small businesses. I work for a 50-person company. We have one “sort of” IT person and we do virtually everything in the cloud. We’ve got people in Australia and Canada, here in the States, headquartered in the UK, and we use cloud services for virtually everything across that. I’m associated with a number of other small companies and we are seeing big uptake of cloud services.

Gardner: Allow me to be a little bit of a skeptic, because I’m seeing these reports from analyst firms on the tens of billions of dollars in potential cloud market share and double-digit growth rates for the next several years. Is this going to come from just peripheral application context activities, mostly SMBs? What about the core in the enterprises? Does anybody have an example of where cloud is being used in either of those?

Skilton: In the telecom sector, which is very IT intensive, I’m seeing the emergence of their core business of delivering service to a large end user or multiple end user channels, using what I call cloud brokering.

Front-end cloud

So, if where you’re going with your question is that, certainly in the telecom sector we’re seeing the emergence of front end cloud, customer relationship management (CRM) type systems and also sort of back-end content delivery engines using cloud.

The fundamental shift away from the service-oriented architecture (SOA) era is that we’re seeing more business-driven self-service, more deployment of services as a business model, which is a big difference in the shift to the cloud. Particularly in telco, we’re seeing almost an explosion in that particular sector.

Gordon: A lot of companies don’t even necessarily realize that they’re using cloud services, particularly when you talk about SaaS. There are a number of SaaS solutions that are becoming more and more ubiquitous. If you look at large enterprise company recruiting sites, often you will see Taleo down at the bottom. Taleo is a SaaS offering. So, that’s a cloud solution, but it’s just not necessarily thought of in that context.

Gardner: Right. Tom?

Plunkett: Another place we’re seeing a lot of growth with regards to private clouds is actually on the defense side. The Defense Department is looking at private clouds, but they also have to deal with this core and context issue. We’re in San Diego today. The requirements for a shipboard system are very different from the land-based systems.

Ships have to deal with narrow bandwidth and going disconnected. They also have to deal with coalition partners, or perhaps they are providing humanitarian assistance and dealing with organizations we wouldn’t normally consider military. So, they have to deal with a lot of information assurance issues, and they have completely different governance concerns from the ones we normally think about for public clouds.

Gardner: However, in the last year or two, the assumption has been that this is something that’s going to impact every enterprise, and everybody should get ready. Yet, I’m hearing mostly that this is creeping in through packaged applications on an on-demand basis, through SMBs, and through greenfield organizations, perhaps where high elasticity is a requirement.

What would be necessary for these cloud providers to be able to bring more of the core applications the large enterprises are looking for? What’s the new set of requirements? As I pointed out, we have had a general category of SaaS and development, elasticity, a handful of infrastructure services. What’s the next set of requirements that’s going to make it palatable for these core activities and these large enterprises to start doing this? Let me start with you, Penelope.

Gordon: It’s an interesting question, and it was something that we were discussing in a session yesterday afternoon. There was a gentleman from a large telecommunications company, and from his perspective, trust was a big issue. To him, part of it was just the immaturity of the market, specifically in terms of what the new style of cloud is and its branding. Some of the aspects of cloud have been around for quite some time.

Look at Linux adoption as an analogy. A lot of companies started adopting Linux, but it was for peripheral applications and peripheral services, some web services that weren’t business critical. It didn’t really get into the core enterprise until much later.

We’re seeing some of that with cloud. It’s just a much bigger issue with cloud, especially as you start looking at providers wanting to move up the food chain and provide greater value. This means that they have to have more industry knowledge and more specialization. It becomes more difficult for large enterprises to trust a vendor to have that kind of knowledge.

No governance

Another aspect of what came up in the afternoon is that, at this point, while we talk about public cloud specifically, it’s not the same as saying it’s a public utility. We talk about “public utility,” but there is no governance, at this point, to say, “Here is certification that these companies have been tested to meet certain delivery standards.” Until that exists, it’s going to be difficult for some enterprises to get over that trust issue.

Gardner: Assuming that the trust and security issues are worked out over time, that experience leads to action, it leads to trust, it leads to adoption, and we have already seen that with SaaS applications. We’ve certainly seen it with the federal government, as Ed pointed out earlier.

Let’s just put that aside as one of the requirements that’s already on the drawing board and that we probably can put a checkmark next to at some point. What’s next? What about customization? What about heterogeneity? What about some of these other issues that are typical in IT, Mark Skilton?

Skilton: One of the under-played areas is PaaS. We hear about lock-in of technology caused by the use of the cloud, either through putting too much data in or through customization of parameters, so that you lose the elastic features of that cloud.

As to your question about what vendors or providers need to do more of to help the customer use the cloud, the two things we’re seeing are: one, more of an appliance strategy, where clients can buy modular capabilities, so the licensing and solutioning issues are more contained. The client can look at it in a modular, appliance sort of way. Think of it as cloud in a box.

The second thing we need to be seeing is much more offering of transition services, transformation services, to accelerate the use of the cloud in a safe way, and I think that’s something that we need to really push hard to do. There’s a great quote from a client: “It’s not the destination, it’s the journey to the cloud that I need to see.”

Gardner: You mentioned PaaS. We haven’t seen too much yet with a full mature offering of the full continuum of PaaS to IaaS. That’s one where new application development activities and new integration activities would be built of, for, and by the cloud and coordinated between the dev and the ops, with the ops being any number of cloud models — on-premises, off-premises, co-lo, multi-tenancy, and so forth.

So what about that? Is that another requirement, that there is continuity between the PaaS and the infrastructure and deployment, Tom?

Plunkett: We’re getting there. PaaS is going to be a real requirement going forward, simply because that’s going to provide us the flexibility to reach some of those core applications that we were talking about before. The further you get away from the context, the more you’re focusing on what the business is really focused on, and that’s going to be the core, which is going to require effective PaaS.

Gardner: TJ.

More regulatory

Virdi: I want to second that, but at the same time, we’re looking at regulatory issues and other kinds of licensing and configuration issues as well. Those also make it a little easier to use the cloud. You don’t really have to buy; you can go on demand. You need to make your licenses a little more flexible, in such a way that you can just put the product or business solution into the market, test the water, and then go further from there.

Gardner: Penelope, where do you see any benefit of having a coordinated or integrated platform and development test and deploy functions? Is that going to bring this to a more core usage in large enterprises?

Gordon: It depends. I see a lot more of the buying of cloud moving out to the non-IT line-of-business executives. If that accelerates, there is going to be less and less focus on the underlying platform. Companies are really separating now what is differentiating and core to their business from the rest of it.

There’s going to be less emphasis on, “Let’s do our scale development on a platform level” and more, “Let’s really seek out those vendors that are going to enable us to effectively integrate, so we don’t have to do double entry of data between different solutions. Let’s look out for the solutions that allow us to apply the governance and that effectively let us tailor our experience with these solutions in a way that doesn’t impinge upon the provider’s ability to deliver in a cost effective fashion.”

That’s going to become much more important. So, a lot of the development onus is going to be on the providers, rather than on the actual buyers.

Gardner: Now, this is interesting. On one hand, we have non-IT people, business people, specifying, acquiring, and using cloud services. On the other hand, we’re perhaps going to see more PaaS: new application development, be it custom or more of a SaaS-type offering that’s brought in with a certain level of adjustment and integration. But these are going on without necessarily any coordination. At some point, they are going to come together. It’s inevitable, another “integration mess,” perhaps.

Mark Skilton, is that what you see, that we have not just one cloud approach but multiple approaches and then some need to rationalize?

Skilton: There are two key points. There’s a missing architecture practice that needs to be there, which is a workload analysis, so that you design applications to fit specific infrastructure containers, and you’ve got a bridge between the application service and the infrastructure service. There needs to be a piece of work by enterprise architects that starts to bring that together as a deliberate design for applications to be able to operate in the cloud, and the PaaS platform is a perfect environment for that.

The second thing is that there’s a lack of policy management in terms of technical governance, and because of the lack of understanding, there needs to be more of a matching exercise going on. The key thing is that that needs to evolve.

Part of the work we’re doing in The Open Group with the Cloud Computing Work Group is to develop new standards and methodologies that bridge those gaps between infrastructure, PaaS, platform development, and SaaS.

Gardner: We already have the Trusted Technology Forum. Maybe soon we’ll see an open trusted cloud technology forum.

Skilton: I hope so.

Gardner: Ed Harrington, you mentioned earlier that the role of the enterprise architect is going to benefit from cloud. Do you see what we just described in terms of dual tracks, multiple inception points, heterogeneity, perhaps overlap and redundancy? Is that where the enterprise architect flourishes?

Shadow IT

Harrington: I think we talked about line-of-business management getting involved in acquiring cloud services. If you think we’ve got this thing called “shadow IT” today, wait a few years. We’re going to have a huge problem with shadow IT.

From the architect’s perspective, there’s a lot to be involved with and a lot to play with, as I said in my talk. There’s an awful lot of analysis to be done — what is the value that the proposed cloud solution is going to supply to the organization in business terms, versus the risk associated with it? Enterprise architects deal with change, and that’s what we’re talking about. We’re talking about change, and change will inherently involve risk.

Gardner: TJ.

Virdi: All these business decisions are going to be coming upstream, and business executives need to be more aware of how cloud could be utilized as a delivery model. The enterprise architects, and others with a technical background, need to educate them and drive them to make the right decisions and choose the proper solutions.

It has an impact on how you want to use the cloud, as well as how you get out of it, in case you want to move to a different cloud vendor or provider. All those things come into play upstream rather than downstream.

Gardner: We all seem to be resigned to this world of, “Well, here we go again. We’re going to sit back and wait for all these different cloud things to happen. Then, we’ll come in, like the sheriff on the white horse, and try to rationalize.” Why not try to rationalize now before we get to that point? What could be done from an architecture standpoint to head off mass confusion around cloud? Let me start at one end and go down the other. Tom?

Plunkett: One word: governance. We talked about the importance of governance increasing as the IT industry went into SOA. Well, cloud is going to make it even more important. Governance throughout the lifecycle, not just at the end, not just at deployment, but from the very beginning.

Gardner: TJ.

Virdi: In addition to governance, you probably have to figure out how you plan to adopt the cloud. You don’t want to start with a Big Bang. You want to start in incremental steps, small steps, and test out what you really want to do. If that works, then go do the other things after that.

Gardner: Penelope, how about following the money? Doesn’t where the money flows in and out of organizations tend to have a powerful impact on motivating people or getting them moving towards governance or not?

Gordon: I agree, and toward that end, enterprise architects need to break out of the idea of focusing on the boundary between IT and the business, and talk to the business in business terms.

One way of doing that, which I have seen to be effective, is to look at it from the standpoint of portfolio management. Where you were familiar with financial portfolio management, now you are looking at a service portfolio, as well as looking at your overall business and all of your business processes as a portfolio. How can you optimize at a macro level for your portfolio of all the investment decisions you’re making, and how the various processes and services are enabled? Then, it comes down to, as you said, a money issue.

Gardner: Perhaps one way to head off what we seem to think is an inevitable cloud chaos situation is to invoke more shared services, get people to consume services and think about how to pay for them along the way, regardless of where they come from and regardless of who specified them. So back to SOA, back to ITIL, back to the blocking and tackling that’s just good enterprise architecture. Anything to add to that, Mark?

Not more of the same

Skilton: I think it’s a mistake to just describe this as more of the same. ITIL, in my view, needs to change to take into account self-service dynamics. ITIL is a provider-side service management process; it’s something that you do to people. Cloud changes that direction to the other way, and I think that’s something that needs to be done.

Also, fundamentally the data center and network strategies need to be in place to adopt cloud. From my experience, the data center transformation or refurbishment strategies or next generation networks tend to be done as a separate exercise from the applications area. So a strong, strong recommendation from me would be to drive a clear cloud route map to your data center.

Gardner: So, perhaps a regulating effect on the self-selection of cloud services would be that the network isn’t designed for it and it’s not going to help.

Skilton: Exactly.

Gardner: That’s one way to govern your cloud. Ed Harrington, any other further thoughts on working towards a cloud future without the pitfalls?

Harrington: Again, the governance, certification of some sort. I’m not in favor of regulation, but I am in favor of some sort of third party certification of services that consumers can rely upon safely. But, I will go back to what I said earlier. It’s a combination of governance, treating the cloud services as services per se, and enterprise architecture.

Gardner: What about the notion that was brought up earlier about private clouds being an important on-ramp to this? If I were a public cloud provider, I would do my market research on what’s going on in the private clouds, because I think they are going to be incubators to what might then become hybrid and ultimately a full-fledged third-party public cloud providing assets and services.

What can we learn from looking at what’s going on with private cloud now, seemingly a lot of trying to reduce cost and energy consumption, but what does that tell us about what we should expect in the next few years? Again, let’s start with you, Tom.

Plunkett: What we’re seeing with private cloud is that it’s actually impacting governance, because one of the things that you look at with private cloud is chargeback between different internal customers. This is forcing these organizations to deal with complex money and business issues that they don’t really like to deal with.

Nowadays, it’s mostly vertical applications, where you’ve got one owner who is paying for everything. Now, we’re actually going back to, as we were talking about earlier, dealing with some of the tricky issues of SOA.
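The chargeback mechanics described above can be sketched in a few lines; the departments, rates, and usage figures here are hypothetical:

```python
# Minimal usage-based chargeback sketch for a private cloud.
# Department names, unit rates, and usage figures are hypothetical.

RATES = {"cpu_hours": 0.05, "gb_stored": 0.10, "gb_transferred": 0.02}

usage = {
    "finance":     {"cpu_hours": 4000, "gb_stored": 500,  "gb_transferred": 120},
    "engineering": {"cpu_hours": 9000, "gb_stored": 2000, "gb_transferred": 800},
}

def monthly_chargeback(usage_by_dept, rates):
    """Bill each internal customer for the resources it actually consumed."""
    return {
        dept: round(sum(rates[metric] * amount for metric, amount in metrics.items()), 2)
        for dept, metrics in usage_by_dept.items()
    }

bills = monthly_chargeback(usage, RATES)
print(bills)
```

Even a toy model like this surfaces the governance questions Tom raises: someone has to own the rate card, meter the usage, and settle disputes between internal customers.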

Gardner: TJ, private cloud as an incubator. What we should expect?

Securing your data

Virdi: Configuration and change management — how we are adapting to them in the private cloud and supporting different customer segments is really the key. This could be utilized in the public cloud too, as could how you are securing your information, data, and business knowledge. How you want to secure that is key, and that’s why the private cloud is there. If we can adapt or mimic the same kinds of controls in the public cloud, maybe we’ll have more adoption in the public cloud too.

Gardner: Penelope, any thoughts on that, the private to public transition?

Gordon: I also look at it in a little different way. For example, in the U.S., you have the National Security Agency (NSA). For a lot of what you would think of as their non-differentiating processes, for example payroll, they can’t use ADP. They can’t use that SaaS for payroll, because they can’t allow the identities of their employees to become publicly known.

Anything that involves their employee data and all the rest of the information within the agency has to be kept within a private cloud. But, they’re actively looking at private cloud solutions for some of the other benefits of cloud.

In one sense, I look at it and say that private cloud adoption to me tells a provider that this is an area that’s not a candidate for a public-cloud solution. But, private clouds could also be another channel for public cloud providers to be able to better monetize what they’re doing, rather than just focusing on public cloud solutions.

Gardner: So, then, you’re saying this is a two-way street. Just as we could foresee someone architecting a good private cloud and then looking to take that out to someone else’s infrastructure, you’re saying there is a lot of public services that for regulatory or other reasons might then need to come back in and be privatized or kept within the walls. Interesting.

Mark Skilton, any thoughts on this public-private tension and/or benefit?

Skilton: I asked an IT service director what it was like running a cloud service for the account. This is a guy who had previously been running hosting and management, with many years of experience.

The surprising thing was that he was quite shocked that the disciplines he previously had for escalating errors and doing planned maintenance, monitoring, billing, and charging back to the customer were fundamentally changing, because it all had to be done more in real time. You have to fix before it fails. You can’t just wait for it to fail. You have to have a much more disciplined approach to running a private cloud.

The lessons that we’re learning in running private clouds for our clients is the need to have a much more of a running-IT-as-a-business ethos and approach. We find that if customers try to do it themselves, either they may find that difficult, because they are used to buying that as a service, or they have to change their enterprise architecture and support service disciplines to operate the cloud.

Gardner: Perhaps yet another way to offset the potential for cloud chaos in the future is to develop the core competencies within the private-cloud environment, and do it sooner rather than later? This is where you can cut your teeth or get your chops (any number of metaphors come to mind), but this sounds like a priority. Would you agree with that, Ed? Is coming up with a private-cloud capability important?

Harrington: It’s important, and it’s probably going to dominate for the foreseeable future, especially in areas that organizations view as core. They view them as core, because they believe they provide some sort of competitive advantage or, as Penelope was saying, security reasons. ADP’s a good idea. ADP could go into NSA and set up a private cloud using ADP and NSA. I think is a really good thing.

Trust a big issue

But, I also think that trust is still a big issue and it’s going to come down to trust. It’s going to take a lot of work to have anything that is perceived by a major organization as core and providing differentiation to move to other than a private cloud.

Gardner: TJ.

Virdi: Private clouds actually allow you to make your business more modular. Your capability is going to be a little bit more modular, and interoperability testing could happen in the private cloud. Then you can actually use those same kinds of modular functions in the public cloud, and work with other commercial off-the-shelf (COTS) vendors that can package these as new holistic solutions.

Gardner: Does anyone consider the impact of mergers and acquisitions on this? We’re seeing the economy pick up, at least in some markets, and we’re certainly seeing globalization, a very powerful trend with us still. We can probably assume, if you’re a big company, that you’re going to get bigger through some sort of merger and acquisition activity. Does a cloud strategy ameliorate the pain and suffering of integration in these business mergers, Tom?

Plunkett: Well, not to speak on behalf of Oracle, but we’ve gone through a few mergers and acquisitions recently, and I do believe that having a cloud environment internally helps quite a bit. Specifically, TJ made the earlier point about modularity. Well, when we’re looking at modules, they’re easier to integrate. It’s easier to recompose services, and all the benefits of SOA really.

Gardner: TJ, mergers and acquisitions in cloud.

Virdi: It really helps. At the same time, we were talking about legal and regulatory compliance. The EU and Japan require you to keep personally identifiable information (PII) within their geographical areas. Cloud could provide a way to manage those requirements without having to host everything where you run your own business.

Gardner: Penelope, any thoughts, or maybe even on a slightly different subject, of being able to grow rapidly vis-à-vis cloud experience and expertise and having architects that understand it?

Gordon: Some of this comes back to the discussions we were having about the extra discipline that comes into play if you are going to effectively consume and provide cloud services: you become much more rigorous about your change management and your configuration management, and you then apply that out at a larger process level.

So, if you define certain capabilities within the business in a much more modular fashion, then, when you go through that growth and add on people, you have documented procedures and processes. It’s much easier to bring someone in and say, “You’re going to be a product manager, and that job role is fungible across the business.”

That kind of thinking, the cloud constructs applied up at a business architecture level, enables a kind of business expansion that we are looking at.

Gardner: Mark Skilton, thoughts about being able to manage growth, mergers and acquisitions, even general business agility vis-à-vis more cloud capabilities.

Skilton: Right now, I’m involved in merging in a cloud company that we bought last year in May, and I would say yes and no. The no point is that I’m trying to bundle this service that we acquired in each product and with which we could add competitive advantage to the services that we are offering. I’ve had a problem with trying to bundle that into our existing portfolio. I’ve got to work out how they will fit and deploy in our own cloud. So, that’s still a complexity problem.

Faster launch

But, the upside is that I can bundle the service that we acquired, because we wanted to get that additional capability, and rework it using design techniques for cloud computing. We can then launch that bundle of new services faster into the market.

It’s kind of a mixed blessing with cloud. With our own cloud services, we acquire these new companies, but we still have the same IT integration problem to then exploit that capability we’ve acquired.

Gardner: That might be a perfect example of where cloud is or isn’t. When you run into the issue of complexity and integration, it doesn’t compute, so to speak.

Skilton: It’s not plug and play yet, unfortunately.

Gardner: Ed, what do you think about this growth opportunity, mergers and acquisitions, a good thing or bad thing?

Harrington: It’s a challenge. I think, as Mark presented it, it’s got two sides. It depends a lot on how close the organizations are, how close their service portfolios are, to what degree has each of the organizations adapted the cloud, and is that going to cause conflict as well. So I think there is potential.

Skilton: Each organization in the commercial sector can have different standards, and then you still have that interoperability problem to translate into a benefit: the post-merger integration issue.

Gardner: We’ve been discussing the practical requirements of various cloud computing models, looking at core and context issues where cloud models would work, where they wouldn’t. And, we have been thinking about how we might want to head off the potential mixed bag of cloud models in our organizations and what we can do now to make the path better, but perhaps also make our organizations more agile, service oriented, and able to absorb things like rapid growth and mergers.

I’d like to thank you all for joining and certainly want to thank our guests. This is a sponsored podcast discussion coming to you from The Open Group’s 2011 Conference in San Diego. We’re here the week of February 7, 2011. A big thank you now to Penelope Gordon, cofounder of 1Plug Corporation. Thanks.

Gordon: Thank you.

Gardner: Mark Skilton, Director of Portfolio and Solutions in the Global Infrastructure Services with Capgemini. Thank you, Mark.

Skilton: Thank you very much.

Gardner: Ed Harrington, Principal Consultant in Virginia for the UK-based Architecting the Enterprise.

Harrington: Thank you, Dana.

Gardner: Tom Plunkett, Senior Solution Consultant with Oracle. Thank you.

Plunkett: Thank you, Dana.

Gardner: TJ Virdi, the Computing Architect in the CAS IT System Architecture group at Boeing.

Virdi: Thank you.

Gardner: I’m Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for joining, and come back next time.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Service-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


Filed under Cloud/SOA, Enterprise Architecture

The Business Case for Enterprise Architecture

By Balasubramanian Somasundaram, Honeywell Technology Solutions Ltd.

Well, contrary to this blog post's title, I am not going to talk about the finer details of preparing a business case for an Enterprise Architecture initiative. Rather, I am going to talk about what makes the client ask for a business case in the first place.

Here is a little background…

Statistics assert that only 5% of companies practice Enterprise Architecture. And most of them are successful leaders in their businesses, not just IT.

When I attended Zachman’s conference last year, I was surprised to see Zachman being cynical about the realization of EA in the industry. He, in fact, went on to add that it may take 10-20 years to see EA truly alive in companies.

I am also closely watching some of the Enterprise Architects' blogs. I don't see conviction in blog posts titled 'Enterprise is a Joke', 'Enterprise Architects do only PowerPoint presentations', 'There are not enough skilled architects', etc.

In the recent past, when I was evangelizing EA among the top IT leadership, I often got questions on ‘short-term quick hits that can be achieved by EA’. That’s a tough one to answer!

Now the question is: why is there a lack of faith in IT?

And many of us know the answer: because the teams often fail to deliver, despite spending a lot of cash, effort and energy. The harsh reality is that IT does not believe that it can deliver something significant, valuable and comprehensive.

If IT doesn’t believe in itself, how can we expect business to believe in us, to treat us like partners and not as order takers?

Now, getting to metrics… I happened to read a revealing Datamonitor whitepaper on the EDS site. Though the intent of the paper is to analyze maintenance issues versus adopting new innovations in existing applications, I found something very relevant and interesting to our topic of discussion here.

Some of the observations are:

  • IT departments that are overwhelmed by application maintenance do not see the benefit of planning.
  • Datamonitor believes that the skepticism of these overwhelmed decision makers can be largely attributed to a sense of 'hopelessness' or 'burnout' over formalized IT strategies.
  • Such decision makers are operating in a state of survival rather than one of enthusiastic optimism.
  • IT departments see the value of planning primarily in the 'build' phase and not in the 'run' phase. They don't really care much about the 'lifecycle' of those applications in the 'planning' phase.
  • This compounds the maintenance complexity and inhibits the company from embarking on new initiatives, creating a vicious cycle.

What a resounding observation!

As someone said, adopting EA is like a lifestyle change – like following a fitness regimen. And that cannot be realized without discipline and commitment to change! The problem is not with EA but the way we look at it!

Balasubramanian Somasundaram is an Enterprise Architect with Honeywell Technology Solutions Ltd, Bangalore, a division of Honeywell Inc, USA. Bala has been with Honeywell Technology Solutions for the past five years and has contributed in several technology roles. His current responsibilities include Architecture/Technology Planning and Governance, Solution Architecture Definition for business-critical programs, and Technical oversight/Review for programs delivered from the Honeywell IT India center. With more than 12 years of experience in the IT services industry, Bala has worked with a variety of technologies with a focus on IT architecture practice. His current interests include Enterprise Architecture, Cloud Computing and Mobile Applications. He periodically writes about emerging technology trends that impact the Enterprise IT space on his blog. Bala holds a Master of Science in Computer Science from MKU University, India.


Filed under Enterprise Architecture

PODCAST: Impact of Security Issues on Doing Business in 2011 And Beyond

By Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-The Open Group Conference Cyber Security Panel

The following is the transcript of a sponsored podcast panel discussion on how enterprises need to change their thinking to face cyber threats, from The Open Group Conference, San Diego 2011.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion in conjunction with The Open Group Conference, held in San Diego in the week of February 7, 2011. We’ve assembled a panel to examine the business risk around cyber security threats.

Looking back over the past few years, it seems as if threats are only getting worse. We've had the Stuxnet worm, the WikiLeaks affair, China-originating attacks against Google and others, and the recent Egypt Internet blackout. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

But are cyber security dangers, in fact, getting much worse, or are perceptions simply at odds with what is really important in terms of security? In any event, how can businesses best protect themselves from the next round of risks, especially as cloud, mobile, and social media activities increase? How can architecting for security become effective and pervasive? We'll pose these and other serious questions to our panel to deeply examine the cyber business risks and ways to head them off.

Please join me now in welcoming our panel, we’re here with Jim Hietala, the Vice President of Security at The Open Group. Welcome back, Jim.

Jim Hietala: Hi, Dana. Good to be with you.

Gardner: And, we’re here with Mary Ann Mezzapelle, Chief Technologist in the CTO’s Office at HP. Welcome.

Mary Ann Mezzapelle: Thank you, Dana.

Gardner: We’re also here with Jim Stikeleather, Chief Innovation Officer at Dell Services. Welcome, Jim.

Jim Stikeleather: Thank you, Dana. Glad to be here.

Gardner: As I mentioned, there have been a lot of things in the news about security. I’m wondering, what are the real risks that are worth being worried about? What should you be staying up late at night thinking about, Jim?

Stikeleather: Pretty much everything, at this time. One of the things that you're seeing is a combination of factors. When people are talking about the break-ins, you're seeing more people actually having discussions of what's happened and what's not happening. You're seeing a new variety in the types of break-ins and the types of exposures that people are experiencing. You're also seeing more organization and sophistication on the part of the people who are actually breaking in.

The other piece of the puzzle has been that legal and regulatory bodies step in and say, “You are now responsible for it.” Therefore, people are paying a lot more attention to it. So, it’s a combination of all these factors that are keeping people up right now.

Gardner: Is it correct, Mary Ann, to say that it’s not just a risk for certain applications or certain aspects of technology, but it’s really a business-level risk?

Key component

Mezzapelle: That’s one of the key components that we like to emphasize. It’s about empowering the business, and each business is going to be different.

If you’re talking about a Department of Defense (DoD) military implementation, that’s going to be different than a manufacturing concern. So it’s important that you balance the risk, the cost, and the usability to make sure it empowers the business.

Gardner: How about complexity, Jim Hietala? Is that sort of an underlying current here? We now think about the myriad mobile devices, moving applications to a new tier, native apps for different platforms, more social interactions that are encouraging collaboration. This is good, but just creates more things for IT and security people to be aware of. So how about complexity? Is that really part of our main issue?

Hietala: It’s a big part of the challenge, with changes like you have mentioned on the client side, with mobile devices gaining more power, more ability to access information and store information, and cloud. On the other side, we’ve got a lot more complexity in the IT environment, and much bigger challenges for the folks who are tasked for securing things.

Gardner: Just to get a sense of how bad things are, Jim Stikeleather, on a scale of 1 to 10 — with 1 being you’re safe and sound and you can sleep well, and 10 being all the walls of your business are crumbling and you’re losing everything — where are we?

Stikeleather: Basically, it depends on who you are and where you are in the process. A major issue in cyber security right now is that we’ve never been able to construct an intelligent return on investment (ROI) for cyber security.

There are two parts to that. One, we've never been truly able to gauge how big the risk really is. For one person it may be a 2, and for most people it's probably a 5 or a 6. Some people may be sitting there at a 10. But you need to be able to gauge the magnitude of the risk. And, two, we've never done a good job of saying what exactly the exposure would be if the actual event took place. It's the calculation of those two that tells you how much you should invest in order to protect yourself.

So, I’m not really sure it’s a sense of exposure the people have, as people don’t have a sense of risk management — where am I in this continuum and how much should I invest actually to protect myself from that?

We’re starting to see a little bit of a sea change, because starting with HIPAA-HITECH in 2009, for the first time, regulatory bodies and legislatures have put criminal penalties on companies who have exposures and break-ins associated with them.

So we’re no longer talking about ROI. We’re starting to talk about risk of incarceration , and that changes the game a little bit. You’re beginning to see more and more companies do more in the security space — for example, having a Sarbanes-Oxley event notification to take place.

The answer to the question is that it really depends, and you almost can’t tell, as you look at each individual situation.

Gardner: Mary Ann, it seems like assessment then becomes super-important. In order to assess your situation, you can start to then plan for how to ameliorate it and/or create a strategy to improve, and particularly be ready for the unknown unknowns that are perhaps coming down the pike. When it comes to assessment, what would you recommend for your clients?

Comprehensive view

Mezzapelle: First of all, we need to make sure that they have a comprehensive view. In some cases, it might be a portfolio approach, which is new to most people in the security area. Some of my enterprise customers have more than 150 different security products that they're trying to integrate.

Their issue is around complexity, integration, and just knowing their environment — what levels they are at, what they are protecting and not, and how does that tie to the business? Are you protecting the most important asset? Is it your intellectual property (IP)? Is it your secret sauce recipe? Is it your financial data? Is it your transactions being available 24/7?

And, to Jim’s point, that makes a difference depending on what organization you’re in. It takes some discipline to go back to that InfoSec framework and make sure that you have that foundation in place, to make sure you’re putting your investments in the right way.

Stikeleather: One other piece of it is that it requires an increased amount of business knowledge on the part of the IT group and the security group, to be able to assess where my IP is, which is my most valuable data, and what I should put the emphasis on.

One of the things that people get confused about is, depending upon which analyst report you read, most data is lost by insiders, most data is lost from external hacking, or most data is lost through email. It really depends. Most IP is lost through email and social media activities. Most data, based upon a recent Verizon study, is being lost by external break-ins.

We’ve kind of always have the one-size-fits-all mindset about security. When you move from just “I’m doing security” to “I’m doing risk mitigation and risk management,” then you have to start doing portfolio and investment analysis in making those kinds of trade-offs.

That’s one of the reasons we have so much complexity in the environment, because every time something happens, we go out, we buy any tool to protect against that one thing, as opposed to trying to say, “Here are my staggered differences and here’s how I’m going to protect what is important to me and accept the fact nothing is perfect and some things I’m going to lose.”

Gardner: Perhaps a part of having an assessment of where you are is to look at how things have changed, Jim Hietala, thinking about where we were three or four years ago, what is fundamentally different about how people are approaching security and/or the threats that they are facing from just a few years ago?

Hietala: One of the big things that's changed, from what I've observed, is that if you go back a number of years, the cyber threats that were out there came from curious teenagers and the like. Today, you've got profit-motivated individuals who have perpetrated distributed denial of service attacks to extort money. Now, they've gotten more sophisticated and are dropping Trojan horses on CFOs' machines to try to exfiltrate passwords and logins to bank accounts.

We had a case that popped up in our newspaper in Colorado, where a title company lost a million dollars' worth of mortgage money, loans that were in the process of funding. All of a sudden, five homeowners were faced with paying two mortgages, because there was no insurance against that.

When you read through the details of what happened, it was clearly a Trojan horse that had been put on this company's system. Somebody was able to walk off with a million dollars' worth of these people's money.

State-sponsored acts

So you’ve got profit-motivated individuals on the one side, and you’ve also got some things happening from another part of the world that look like they’re state-sponsored, grabbing corporate IP and defense industry and government sites. So, the motivation of the attackers has fundamentally changed and the threat really seems pretty pervasive at this point.

Gardner: Pervasive threat. Is that how you see it, Jim Stikeleather?

Stikeleather: I agree. The threat is pervasive. The only secure computer in the world right now is the one that's turned off in a closet, and that's the nature of it. You have to make decisions about what you're putting on and where you're putting it. It's a big concern that if we don't get better with security, we run the risk of people losing trust in the Internet and trust in the web.

When that happens, we're going to see some really significant global economic concerns. If you think about our economy, it's structured around the way the Internet operates today. If people lose trust in the transactions that are flying across it, then we're all going to be in a pretty bad world of hurt.

Gardner: All right, well I am duly scared. Let’s think about what we can start doing about this. How should organizations rethink security? And is that perhaps the way to do this, Mary Ann? If you say, “Things have changed. I have to change, not only in how we do things tactically, but really at that high level strategic level,” how do you rethink security properly now?

Mezzapelle: It comes back to one of the bottom lines, about empowering the business. Jim talked about having that balance. It means that not only do the IT people need to know more about the business, but the business needs to start taking ownership of the security of its own assets, because they are the ones that are going to have to bear the loss, whether it's data, financial, or whatever.

They need to really understand what that means, but we as IT professionals need to be able to explain what that means, because it's not common sense. We need to connect the dots, and we need to have metrics. We need to look at it from an overall threat point of view, and that will be different based on what kind of company you are.

You need to have your own threat model, who you think the major actors would be and how you prioritize your money, because it’s an unending bucket that you can pour money into. You need to prioritize.

Gardner: How would this align with your other technology and business innovation activities? If you’re perhaps transforming your business, if you’re taking more of a focus at the process level, if you’re engaged with enterprise architecture and business architecture, is security a sideline, is it central, does it come first? How do you organize what’s already fairly complex in security with these other larger initiatives?

Mezzapelle: The way that we've done that is with a multi-pronged approach. We communicate with and educate the software developers, so that they start taking ownership of security in their software products, and we make sure that that gets integrated into every part of the portfolio.

The other part is to have that reference architecture, so that there are common services available to the other services as they are being delivered, and so that we can, if not control it, at least manage it from a central place.

You were asking about how to pay for it. It's like Transformation 101. Most organizations spend about 80 percent of their budget on operations, so they really need to look at their operational spend and reduce that cost to be able to fund the innovation part.

Getting benchmarks

It may not be in security. You may not be spending enough on security. There are several organizations that will give you some kind of benchmark about what other organizations in your particular industry are spending, ranging from 2 percent on the low end for manufacturing up to 10-12 percent for financial institutions.

That can give you a guideline as to where you should start trying to move to. Sometimes, if you can use automation within your other IT service environment, for example, that might free up the cost to fuel that innovation.
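Mezzapelle's benchmark ranges lend themselves to a quick sanity check. The 2 percent and 10-12 percent figures below are the ones she cites; reading them as a share of total IT budget, and the example dollar amounts, are my own assumptions for illustration.

```python
# Rough sanity check of security spend against the industry benchmarks
# cited above. Treating the percentages as a share of total IT budget
# is an assumption; only the low end was given for manufacturing.

BENCHMARKS = {
    "manufacturing": (0.02, None),   # 2% low end; no high end cited
    "financial": (0.10, 0.12),       # 10-12% of IT budget
}

def spend_position(it_budget: float, security_spend: float, industry: str) -> str:
    """Classify security spend relative to the cited benchmark range."""
    low, high = BENCHMARKS[industry]
    share = security_spend / it_budget
    if share < low:
        return "below benchmark"
    if high is not None and share > high:
        return "above benchmark"
    return "within benchmark"

# Hypothetical manufacturer: $10M IT budget, $150K on security (1.5%).
print(spend_position(10_000_000, 150_000, "manufacturing"))
```

As Mezzapelle notes, the benchmark is only a starting point for where to move to, not a substitute for your own threat model.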

Stikeleather: Mary Ann makes a really good point. The starting point is really architecture. We're actually at a tipping point in the security space, and it comes from what's taking place in the legal and regulatory environments, with more and more laws being applied to privacy, IP, jurisdictional data location, and a whole series of things that the regulators and the lawyers are putting on us.

One of the things I ask people when we talk to them is: what is the one application that every company in the world has outsourced? They think about it for a minute, and they all say payroll. Nobody does their own payroll any more. Even the largest companies don't do their own payroll. It's not because it's difficult to run payroll. It's because you can't afford all of the lawyers and accountants necessary to keep up with all of the jurisdictional rules and regulations for every place that you operate in.

Data itself is beginning to fall under those types of constraints. In a lot of cases, it’s medical data. For example, Massachusetts just passed a major privacy law. PCI is being extended to anybody who takes credit cards.

The security issue is now also a data governance and compliance issue as well. So, because all these adjacencies are coming together, it’s a good opportunity to sit down and architect with a risk management framework. How am I going to deal with all of this information?

Plus, you have additional funding capabilities now, because with compliance violations you can actually identify the ROI of avoiding them. The real key to me is people stepping back and saying, "What is my business architecture? What is my risk profile associated with it? What's the value associated with that information? Now, engineer my systems to follow that."

Mezzapelle: You need to be careful that you don't equate compliance with security. There are a lot of organizations that are good at compliance checking, but that doesn't mean that they are really protecting against their most vulnerable areas, or against what might be the largest threat. That's just a word of caution: you need to make sure that you are protecting the right assets.

Gardner: It’s a cliché, but people, process, and technology are also very important here. It seems to me that governance would be an overriding feature of bringing those into some alignment.

Jim Hietala, how should organizations approach these issues with a governance mindset? That is to say, following procedures, enforcing those procedures, reviewing them, and then putting into place the means by which security becomes, in fact, part and parcel of doing business?

Risk management

Hietala: I guess I’d go back to the risk management issue. That’s something that I think organizations frequently miss. There tends to be a lot of tactical security spending based upon the latest widget, the latest perceived threat — buy something, implement it, and solve the problem.

Taking a step back from that and really understanding what the risks are to your business, and what the impacts of bad things happening really are, means doing a proper risk analysis. Risk assessment is what ought to drive decision-making around security. That's a fundamental thing that gets lost a lot in organizations that are trying to grapple with security problems.

Gardner: Jim Stikeleather, any thoughts about governance as an important aspect to this?

Stikeleather: Governance is a critical aspect. The other piece of it is education. There's an interesting fiction in both law and finance: the fiction of the reasonable, rational, prudent man. If you've done everything a reasonable, rational and prudent person would have done, then you are not culpable for whatever the event was.

I don’t think we’ve done a good job of educating our users, the business, and even some of the technologists on what the threats are, and what are reasonable, rational, and prudent things to do. One of my favorite things are the companies that make you change your password every month and you can’t repeat a password for 16 or 24 times. The end result is that you get as this little thing stuck on the notebook telling them exactly what the password is.

So, it’s governance, but it’s also education on top of governance. We teach our kids not to cross the street in the middle of the road and don’t talk to strangers. Well, we haven’t quite created that same thing for cyberspace. Governance plus education may even be more important than the technological solutions.

Gardner: One sort of push-back on that is that the rate of change is so rapid and the nature of the risks can be so dynamic. How does one educate? How do you keep up with that?

Stikeleather: I don’t think that it’s necessary. The technical details of the risks are changing rapidly, but the nature of the risk themselves, the higher level of the taxonomy, is not changing all that much.

If you just introduce safe practices so to speak, then you’re protected up until someone comes up with a totally new way of doing things, and there really hasn’t been a lot of that. Everything has been about knowing that you don’t put certain data on the system, or if you do, this data is always encrypted. At the deep technical details, yes, things change rapidly. At the level with which a person would exercise caution, I don’t think any of that has changed in the last ten years.

Gardner: We’ve now entered into the realm of behaviors and it strikes me also that it’s quite important and across the board. There are behaviors at different levels of the organization. Some of them can be good for ameliorating risk and others would be very bad and prolonged. How do you incentivize people? How do you get them to change their behavior when it comes to security, Mary Ann?

Mezzapelle: The key is to make it personalized to them or their job, and part of that is the education as Jim talked about. You also show them how it becomes a part of their job.

Experts don’t know

I have a little bit different view: it is so complex that even security professionals don’t always know what the reasonable, right thing to do is. So, I think it’s very unreasonable for us to expect that of our business users, or consumers, or, as I like to say, my mom. I use her as a use case quite a lot: what would she do, how would she react, and would she recognize when she clicked on, “Yes, I want to download that antivirus program,” that it just happened to be a virus?

Part of it is the awareness so that you keep it in front of them, but you also have to make it a part of their job, so they can see that it’s a part of the culture. I also think it’s a responsibility of the leadership to not just talk about security, but make it evident in their planning, in their discussions, and in their viewpoints, so that it’s not just something that they talk about but ignore operationally.

Gardner: One other area I want to touch on is the notion of cloud computing, doing more outsourced services, finding a variety of different models that extend beyond your enterprise facilities and resources.

There’s quite a bit of back and forth about, is cloud better for security or worse for security? Can I impose more of these automation and behavioral benefits if I have a cloud provider or a single throat to choke, or is this something that opens up? I’ve got a sneaking suspicion I am going to hear “It depends” here, Jim Stikeleather, but I am going to go with you anyway. Cloud: I can’t live with it, can’t live without it. How does it work?

Stikeleather: You’re right, it depends. I can argue both sides of the equation. On one side, I’ve argued that cloud can be much more secure. If you think about it, and I will pick on Google, Google can spend a lot more on security than any other company in the world, probably more than the federal government spends on security. The amount of investment does not necessarily equate to quality of security, but one would hope that they will have a more secure environment than a regular company will have.

On the flip side, there are more tantalizing targets. Therefore, they’re going to draw more sophisticated attacks. I’ve also argued that there’s a statistical probability of break-in. If somebody is trying to break into Google, and you’re on Google running Google Apps or something like that, the probability of them getting your specific information is much less than if they attack XYZ enterprise directly. If they break in there, they are going to get your stuff.
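That statistical argument can be made concrete with some back-of-the-envelope arithmetic. All the numbers below are invented for illustration; they are not real breach statistics:

```python
# Illustrative, back-of-the-envelope numbers; not real breach statistics.
tenants = 1_000_000        # tenants sharing one large cloud provider
p_provider_breach = 0.10   # assumed yearly chance the shared provider is breached
p_own_breach = 0.30        # assumed yearly chance a lone enterprise is breached

# If a provider breach exposes a single tenant chosen at random, the chance
# that the exposed tenant is *you* is diluted across the whole tenant base.
p_my_data_via_cloud = p_provider_breach / tenants

# A successful attack on your own infrastructure exposes your data directly.
p_my_data_via_own = p_own_breach

print(p_my_data_via_cloud < p_my_data_via_own)  # True
```

Under these assumptions, even a provider that is breached more often than you would be still exposes any one tenant's data far less often, which is the dilution effect Stikeleather is describing. The argument weakens if attackers target a specific tenant rather than whatever they happen to find.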

Recently I was meeting with a lot of NASA CIOs and they think that the cloud is actually probably a little bit more secure than what they can do individually. On the other side of the coin it depends on the vendor. I’ve always admired astronauts, because they’re sitting on top of this explosive device built by the lowest-cost provider. I’ve always thought that took more bravery than anybody could think of. So the other piece of that puzzle is how much is the cloud provider actually providing in terms of security.

You have to do your due diligence, like with everything else in the world. I believe, as we move forward, cloud is going to give us an opportunity to reinvent how we do security.

I’ve often argued that a lot of what we are doing in security today is fighting the last war, as opposed to fighting the current war. Cloud is going to introduce some new techniques and new capabilities. You’ll see more systemic approaches, because somebody like Google can’t afford to put in 150 different types of security. They will put in one, more integrated, approach. They will put in, to Mary Ann’s point, the control panels and everything that we haven’t seen before.

So, you’ll see better security there. However, in the interim, a lot of the software-as-a-service (SaaS) providers and some of the simpler platform-as-a-service (PaaS) providers haven’t made that kind of investment. You’re probably not as secure in those environments.

Gardner: Mary Ann, do you also see cloud as a catalyst for better security, whether from technology, process, or implementation?

Lowers the barrier

Mezzapelle: For small and medium-sized businesses, it offers the opportunity to be more secure, because they don’t necessarily have the maturity of processes and tools to address those kinds of things. So, it lowers the barrier to entry for being secure.

For enterprise customers, cloud solutions need to develop and mature more. They may want to go with a hybrid solution right now, where they have more control, the ability to audit, and more influence over things through specialized contracts, which are not usually the business model for cloud providers.

I would disagree with Jim in some aspects. Just because there is a large provider on the Internet that’s creating a cloud service, security may not have been the key guiding principle in developing a low-cost or free product. So, size doesn’t always mean secure.

You have to know about it, and that’s where the sophistication of the business user comes in, because cloud is being bought by the business user, not by the IT people. That’s another component that we need to make sure gets incorporated into the thinking.

Stikeleather: I am going to reinforce what Mary Ann said. What’s going on in the cloud space is almost a recreation of the late ’70s and early ’80s, when PCs came into organizations. It’s the businesspeople that are acquiring the cloud services, which again reinforces the concept of governance and education. They need to know what it is that they’re buying.

I absolutely agree with Mary Ann. I didn’t mean to imply that size means more security, but I do think that the expectation, especially for small and medium-sized businesses, is that they will get a more secure environment than they can produce for themselves.

Gardner: Jim Hietala, we’re hearing a lot about frameworks, governance, and automation, perhaps even labeling individuals with responsibility for security. And we’re dealing with some changeable dynamics: the move to cloud, issues around cybersecurity in general, threats from all over. What is The Open Group doing? It sounds like a huge opportunity for you to bring some clarity and structure to how this is approached, from a professional perspective as well as a process and framework perspective.

Hietala: It is a big opportunity. There are a number of different groups within The Open Group doing work in various areas. The Jericho Forum is tackling identity issues as they relate to cloud computing. There will be some new work coming out of them over the next few months that lays out some of the tough issues there and presents some approaches to those problems.

We also have the Trusted Technology Forum (TTF) and the Trusted Technology Provider Framework (TTPF), which are being announced here at this conference. They’re looking at supply chain issues related to IT hardware and software products at the vendor level. It’s very much an industry-driven initiative and will benefit government buyers, as well as large enterprises, by providing some assurance that the products they’re procuring are secure, good commercial products.

Also in the Security Forum, we have a lot of work going on in security architecture and information security management. There are a number of projects aimed at practitioners, providing them the guidance they need to do a better job of securing, whether it’s a traditional enterprise IT environment, cloud, and so forth. Our Cloud Computing Work Group is doing work on a cloud security reference architecture. So, there are a number of different security activities going on in The Open Group related to all this.

Gardner: What have you seen in the field in terms of the development of what we could call a security professional? We’ve seen the Chief Security Officer, but is there a certification aspect to identifying people as qualified to step in and take on some of these issues?

Certification programs

Hietala: There are a number of certification programs for security professionals out there. There was legislation proposed, I think last year, that was going to put some requirements at the federal level around certification of individuals. But the industry is fairly well-served by the existing certifications. You’ve got CISSP, you’ve got a number of certifications from SANS and GIAC that get fairly specialized, and there are lots of opportunities today for people to go out, get certified, and improve their expertise in a given topic.

Gardner: My last question will go to you on this same issue of certification. If you’re on the business side and you recognize these risks and you want to bring in the right personnel, what would you look for? Is there a higher level of certification or experience? How do you know when you’ve got a strategic thinker on security, Mary Ann?

Mezzapelle: There are the ones Jim talked about, CISSP and CSSLP from (ISC)2, and there is also the CISM, or Certified Information Security Manager, which comes from an audit point of view. But I don’t think there’s a certification that’s going to tell you that someone is a strategic thinker. I started out as a technologist, but it’s that translation to the business, and it’s that strategic planning, applying it to a particular area and really bringing it back to the fundamentals.

Gardner: Does this become then part of enterprise architecture (EA)?

Mezzapelle: It is a part of EA and, as Jim talked about, we’ve done some work in The Open Group on an Information Security Management model that extends other business frameworks, like ITIL, into the security space to provide a little more specificity there.

Gardner: Last word to you, Jim Stikeleather, on this issue: How do you get the right people into the job, and is this something that should be part and parcel of the enterprise or business architect role?

Stikeleather: I absolutely agree with what Mary Ann said. It’s like a CPA. You can hire a CPA and they know certain things, but that doesn’t guarantee that you’ve got a businessperson. That’s where we are with security certifications as well. They give you a comfort level that the fundamental knowledge of the issues and the techniques is there, but you still need someone who has experience.

At the end of the day it’s the incorporation of everything into EA, because you can’t bolt on security. It just doesn’t work. That’s the situation we’re in now. You have to think in terms of the framework of the information that the company is going to use, how it’s going to use it, the value that’s associated with it, and that’s the definition of EA.

Gardner: Well, great. We have been discussing the business risk around cyber security threats and how to perhaps position yourself to do a better job and anticipate some of the changes in the field. I’d like to thank our panelists. We have been joined by Jim Hietala, Vice President of Security for The Open Group. Thank you, Jim.

Hietala: Thank you, Dana.

Gardner: Mary Ann Mezzapelle, Chief Technologist in the Office of the CTO for HP. Thank you.

Mezzapelle: Thanks, Dana.

Gardner: And lastly, Jim Stikeleather, Chief Innovation Officer at Dell Services. Thank you.

Stikeleather: Thank you, Dana.

Gardner: This is Dana Gardner. You’ve been listening to a sponsored BriefingsDirect podcast in conjunction with The Open Group Conference here in San Diego, the week of February 7th, 2011. I want to thank you all for joining, and come back next time.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Service-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


Filed under Cybersecurity

Cloud Computing & Enterprise Architecture

By Balasubramanian Somasundaram, Honeywell Technology Solutions Ltd.

What is the impact on Enterprise Architecture with the introduction of Cloud Computing and SaaS?

One word – ‘Serious’.

Here is my perspective.

At first look, it may seem like Enterprise Architecture is irrelevant if your complete IT is running on Cloud Computing, SaaS and outsourcing/offshoring. I was of the same opinion last year. However, that is not the case. In fact, the complexity is going to multiply.

We have moved from monolithic systems to client-server to tiered architectures. With SOA comes the truly distributed architecture. And with Cloud Computing and SaaS, we are moving to “Globally Decentralized/Distributed Architecture”.

With global distribution, we will be able to compose business processes out of services from Salesforce.com and services running on Azure or Amazon, and host the resulting composite on yet another cloud platform. Does that sound cool and flexible? Of course. But it is also exponentially complex to manage in the long run!

Some of the challenges: What are the failure modes in these global composites? Can we optimize the quality attributes of those composites? How do we trace, troubleshoot, and version-control these composites? What are the foreseeable security threats in these global platforms?

Integration between these huge Clouds/SaaS platforms? – Welcome to the world of software-intensive, Massive System of Systems! :-)
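One way to feel the weight of this complexity is to sketch such a composite. The services and provider roles below are purely hypothetical; the point is that every hop crosses a trust and failure boundary the composing enterprise does not control, so each call needs its own retry and failure policy (real calls would also need timeouts, authentication, and audit trails):

```python
import time

# Hypothetical cross-cloud composite: each step imagined as running on a
# different provider, so each call gets its own retry/failure handling.
def call_with_retry(step, attempts=3, backoff=0.1):
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except ConnectionError:
            if attempt == attempts:
                raise  # failure mode surfaces to the composite's owner
            time.sleep(backoff * attempt)  # simple linear backoff

def fetch_lead():       # imagine: a CRM SaaS (Salesforce-style) API
    return {"lead_id": 42, "region": "APAC"}

def score_lead(lead):   # imagine: an analytics service on another platform
    return {**lead, "score": 0.87}

def composite():
    lead = call_with_retry(fetch_lead)
    return call_with_retry(lambda: score_lead(lead))

print(composite())  # {'lead_id': 42, 'region': 'APAC', 'score': 0.87}
```

Even this toy version raises the questions listed above: which retries are safe to repeat, which version of `score_lead` ran, and where the lead data physically traveled.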

If the first-generation EA guided us in dealing with System of Systems within an Enterprise, the next generation EA should help us in addressing ‘Massive System of Systems’.

With this new complexity, Enterprise Architecture not only remains necessary, but becomes absolutely critical in the IT ecosystem.

Enterprise Architecture and Cloud Computing will be topics of discussion at The Open Group India Conference in Chennai (March 7), Hyderabad (March 9) and Pune (March 11). Join us for best practices and case studies in the areas of Enterprise Architecture, Security, Cloud Computing and Certification, presented by preeminent thought leaders in the industry.

Balasubramanian Somasundaram is an Enterprise Architect with Honeywell Technology Solutions Ltd, Bangalore, a division of Honeywell Inc, USA. Bala has been with Honeywell Technology Solutions for the past five years and has contributed in several technology roles. His current responsibilities include Architecture/Technology Planning and Governance, Solution Architecture Definition for business-critical programs, and Technical oversight/review for programs delivered from the Honeywell IT India center. With more than 12 years of experience in the IT services industry, Bala has worked with a variety of technologies with a focus on IT architecture practice. His current interests include Enterprise Architecture, Cloud Computing and Mobile Applications. He periodically writes about emerging technology trends that impact the Enterprise IT space on his blog. Bala holds a Master of Science in Computer Science from MKU University, India.


Filed under Cloud/SOA, Enterprise Architecture

World-class EA

By Mick Adams, Capgemini UK

World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers business and societal value.

I know that’s a big, bold claim, but there are enough dreamers and doers making this happen right now. World-class EA tackles big industry issues and offers big, brave solutions. The Open Group has already published several whitepapers on this… banking, anyone? No problem… public services? Absolutely. World-class EA tackles these industry verticals and a bunch of others to describe a truly holistic model that unlocks value. Take a look at the World Class EA White Paper available in The Open Group’s online bookstore. Highlights of the whitepaper include:

  • Selection of industry drivers and potential architecture response
  • Suggested maturity model to calibrate organizations
  • Example of applying a maturity rating
  • Set of templates and suggested diagrams to provision TOGAF® 9 content

The work is ongoing; it’s not definitive yet. We are looking for more problem definitions and solutions to drive a collective global mindset forward and ensure that IT delivers benefits across the entire value chain. If we agree on what the problems are, prioritize them, and work on them in a wholly collegiate manner, the industry will be in a better place as a consequence. My view is that The Open Group is the only viable platform to provision BIG IT to industry and society.

The Open Group India is running an event soon that I’m hoping will further refine world-class EA. The IT industry in India is red hot and thriving at the moment. I’ve been lucky enough to work with some of the boldest and most innovative entrepreneurial people in the world who happen to come from India. There is an absolute passion for learning and contribution on the subcontinent like no other. At The Open Group India event, we will discuss:

  • Defining the BIG IT topics for today
  • Insights about IT and EA
  • Providing/provisioning demonstrable value to make a difference

The countdown has begun to The Open Group India Conference. If you want to know what’s happening in architecture right now, or want to influence what could happen to our industry in India or globally, come along.

World-class EA will be a topic of discussion at The Open Group India Conference in Chennai (March 7), Hyderabad (March 9) and Pune (March 11). Join us for best practices and case studies in the areas of Enterprise Architecture, Security, Cloud and Certification, presented by preeminent thought leaders in the industry.

As a member of Capgemini global architecture leadership, Mick Adams has been involved in the development of some of the world’s largest enterprise architectures and has managed Capgemini contributions to The Open Group Architecture Forum for over two years. He has wide industry experience but his architecture work is currently focused on Central Government(s) and oil super-majors.


Filed under Enterprise Architecture

What’s the use of getting certified?

By Mieke Mahakena, Capgemini

After a day discussing business architecture methods and certification at The Open Group Conference in San Diego last week, I had to step back and consider whether what I have been doing still adds value. It seems to me that there is still much resistance to certification. “I don’t need to be certified; I have my college degree.” Or, “I have so much experience. Why should I need to prove anything?”

But let me ask you a question. Suppose you need to have surgery. The surgeon tells you that he hasn’t got a medical license, but you shouldn’t worry because he is so experienced. Would you let him perform surgery on you? I wouldn’t! So, if we expect others to be able to prove their skills before we hire them to work for us, shouldn’t the same apply to business architects? In our profession, mistakes can have severe consequences. As such, it is only reasonable for customers to demand some kind of impartial proof of our professional skills.

To become a good surgeon you not only need good education, you need a lot of practical experience as well. The same goes for the IT and architecture profession: Your skills develop with every new practical experience. This brings us to the importance of the ITAC or ITSC certifications. Both programs define the skills necessary for a certain profession and use a well-defined certification process to ensure that the candidate has the experience needed to develop those skills.

During The Open Group India Conference in March, you will be able to learn more about these certification programs and find out if they can bring value to you and your organization.

Certification will be a topic of discussion at The Open Group India Conference in Chennai (March 7), Hyderabad (March 9) and Pune (March 11). Join us for best practices and case studies in the areas of Enterprise Architecture, Security, Cloud and Certification, presented by preeminent thought leaders in the industry.

Find out more about the ITSC program by joining our webinar on Thursday, March 3.

Mieke Mahakena is an architect and architecture trainer at Capgemini Academy and label lead for the architecture training portfolio. She is the chair of the Business Forum at The Open Group, working on business architecture methods and certification. She is based in the Netherlands.


Filed under Certifications, Enterprise Architecture

T-Shaped people

By Steve Philp, The Open Group

We recently had an exhibition stand at the HR Directors Business Summit, which took place in Birmingham, UK. One of the main reasons for us attending this event was to find out the role HR plays in developing an internal IT profession, particularly for their IT Specialists.

On the second day of the conference there was a keynote presentation from Jill McDonald, CEO and President of McDonald’s UK, on the CEO’s view of what is required from a strategic HR Director. Part of this presentation discussed the need to find T-shaped people within the organization. This is something that we often hear from both vendors and corporate organizations when they talk about what they are looking for from their IT staff.

T-shaped people are individuals who are experts or specialists in a core skill but also have a broad range of skills in other areas. A T-shaped person combines a broad level of skills and knowledge (the top, horizontal bar of the T) with specialist skills in a specific functional area (the vertical bar of the T). They are not generalists, because they have a specific core area of expertise, but they are often also referred to as generalizing specialists.

A generalizing specialist is someone who: 1) Has one or more technical specialties […]. 2) Has at least a general knowledge of software development. 3) Has at least a general knowledge of the business domain in which they work. 4) Actively seeks to gain new skills in both their existing specialties as well as in other areas, including both technical and domain areas. – Scott Ambler, “Generalizing Specialist: A Definition”

T-shaped people work well in teams because they can see a situation from a different perspective, can reduce bottlenecks, fill skills gaps and take on new skill sets quickly. This then leads to higher team productivity and greater flexibility.

The Open Group IT Specialist (ITSC) program measures an individual’s core competence in a specific stream or domain. However, it also covers a broader range of skills and competencies related to people, business, project management and architecture. In addition, the program looks at an individual’s work experience, professional development and community contribution. The conformance requirements of the program are mapped against your skills and experience rather than a body of knowledge, and we assess people skills as well as technical abilities. Therefore, if it’s T-shaped people you are looking for, then hiring somebody with ITSC status is a good place to start.

Find out more about the ITSC program by joining our webinar on Thursday, March 3.

Steve Philp is the Marketing Director for the IT Architect and IT Specialist certification programs at The Open Group. Over the past 20 years, Steve has worked predominantly in sales, marketing and general management roles within the IT training industry. Based in Reading, UK, he joined The Open Group in 2008 to promote and develop the organization’s skills- and experience-based IT certifications.


Filed under Certifications, Enterprise Architecture

PODCAST: Examining the state of EA and findings of recent survey

By Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-Panel of Architecture Experts Analyze EA Survey Data

The following is the transcript of a sponsored podcast panel discussion on the findings from a study on the current state and future direction of enterprise architecture from The Open Group Conference, San Diego 2011.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion in conjunction with The Open Group Conference, held in San Diego in the week of February 7, 2011. We’ve assembled a panel to examine the current state of enterprise architecture (EA) and analyze some new findings on this subject from a recently completed Infosys annual survey.

We’ll see how the architects themselves are defining the EA team concept, how enterprise architects are dealing with impact and engagement in their enterprises, and the latest definitions of EA deliverables and objectives. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

We’ll also look at where the latest trends around hot topics like cloud and mobile are pushing the enterprise architects towards a new future. Here with us to delve into the current state of EA and the survey results, is Len Fehskens, Vice President of Skills and Capabilities at The Open Group. Welcome, Len.

Len Fehskens: Thanks, Dana.

Gardner: Nick Hill, Principal Enterprise Architect at Infosys Technologies. Welcome.

Nick Hill: Thank you very much.

Gardner: Dave Hornford, Architecture Practice Principal at Integritas, as well as Chair of The Open Group’s Architecture Forum. Welcome, Dave.

Dave Hornford: Thank you. Glad to be here.

Gardner: Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities for The Open Group.

Chris Forde: Good morning.

Gardner: We have a large group here today. We’re also joined by Andrew Guitarte. He is the Enterprise Business Architect of Internet Services at Wells Fargo Bank. Welcome.

Andrew Guitarte: Thank you very much.

Gardner: And, Ahmed Fattah. He is the Executive IT Architect in the Financial Services Sector for IBM, Australia. Welcome.

Ahmed Fattah: Thank you, Dana.

Gardner: Nick Hill, let’s go to you first. You’ve conducted an annual survey. You just put together your latest results. You asked enterprise architects what’s going on in the business. What are the takeaways? What jumped out at you this year?

Hot topics

Hill: There are several major takeaways, and some things were different about this year’s survey. One, we introduced the notion of hot topics, so we had some questions around cloud computing. We also took a more forward-looking view: not so much what has been transpiring with enterprise architectures since the last survey, but what enterprise architects are looking toward going forward. And, as we have been going through economic turmoil over the past two to three years, we asked some questions about that.

We did notice that, in terms of team makeup, a lot of the constituents of the EA group are pretty much still the same, hailing largely from the core enterprise IT group. We looked at the engagement and impact they have had on their organizations, and whether they have been able to establish the value that enterprise architects have been trying to demonstrate over the past three to four years.

This was our fifth annual survey, which was started by Sohel Aziz. We did some comparisons with previous surveys and found that some things were the same, but some things are shifting in terms of EA.

More and more, the business is taking hold of the value that enterprise architects bring to the table. Enterprise architects have been able to survive the economically troubled times, and some companies have even increased their investment in EA. But there was a wide range of topics, and I’m sure we’ll get to more of those as this discussion goes on.

Gardner: One of the things that you looked at was the EA team. Are we defining who these people are differently? Has that been shifting over the past few years?

Hill: Essentially, no. If you look at this year’s survey compared to the 2007 and 2008 surveys, they’ve largely come from core IT, with some increase from the business side (business architects) and some increase in project managers. The leader of the EA group still reports through the IT chain, either to the CIO or the CTO.

Gardner: Dave Hornford.

Hornford: Are you seeing that the leader of the architecture team is an architect or a manager? The reason I’m asking is that we’re seeing an increasing shift in our client base toward not having an architect lead the architecture team.

Hill: Well, that’s very interesting. We didn’t exactly point to that kind of determination. We wanted to see who they actually reported to. That would help us get some indication of how well they would be able to sell their value within the enterprise, depending on whether they’re aligned more with IT or more with business-side functions.

Gardner: Chris Forde, you’ve been observing a lot of these issues. Is this a particularly dynamic time, or is this a time when things are settling out? Is there any way to characterize how the enterprise architect occupation, or its professional definition, is evolving within the organization? Where are we on this?

Forde: Actually, I’ll defer commentary on the professional aspect of things to my colleague Len. In terms of the dynamics of EA, we’re constantly trying to justify why enterprise architects should exist in any organization. That’s actually no different from most other positions, which are reviewed on an ongoing basis in terms of their value proposition for the organization.

Certifying architects

What I’m seeing in Asia is that a number of academic organizations, universities, are looking for an opportunity to certify enterprise architects, and a number of organizations are pursuing, still through the IT organization but at a very high CIO or CTO level, the value proposition of an architected approach to business problems.

What I’m seeing in Asia is an increasing recognition of the need for EA, but also a continuing question of, “If we’re going to do this, what’s the value proposition?” which I think is just a reasonable conversation to have on a day-to-day basis anyway.

Gardner: So, Chris is pointing to the fact that business transformation is an undercurrent to all these things, and that many different occupations, processes, and categories of workforce, even workflow, are being reevaluated. Len, how is the EA job or function playing into that? Is this now an opportunity for it to become more of a business transformation occupation?

Fehskens: When you compare EA with all the other disciplines that make up a modern enterprise, it’s the new kid on the block. EA, as a discipline, is maybe 20 years old, depending on what you count as the formative event, whereas most of the other disciplines that are part of the modern enterprise are at least hundreds of years old.

So, this is both a real challenge and a real opportunity. The other functions have a pretty good understanding of what their business case is. They’ve been around for a long time, and the case they can make is pretty familiar. Mostly, they just have to argue in terms of more efficient or more effective delivery of their results.

For EA, the value proposition pretty much has to be constructed from whole cloth, because it didn’t really exist before, and the value of the function is still not that well understood throughout most of the business.

So, this is an opportunity as well as a challenge, because it forces the maturing of the discipline. Unlike some of these older disciplines, which had decades to figure out what it was they were really doing, we have maybe a few years to figure out what it is we’re really doing and what we’re really contributing, and that helps a lot to accelerate the maturing of the discipline.

I don’t think we’re there completely yet, but when EA is well done, people do see the value. When it’s not well done, it falls by the side of the road, which is to be expected. There’s going to be a lot of that, because of the relative youth of the discipline, but we’ll get to the point these other functions have reached, and probably a lot faster than they did.

Gardner: So this is a work in progress, but that comes at a time when the organization is in transition. So, that might be a good match up. Nick, back to the survey. It seems, from my reading of it, that business strategy objectives are being given more to EA, perhaps because there is no one else in that über position to grab on to that and do something.

Hill: I think that’s very much the case. The caveat there is that it’s not necessarily ownership. It’s a matter of participation, and of being able to weigh in on the business transformations that are happening and on how EA can be instrumental in making those transformations successful.

Follow through

Now, given that, the idea is that it’s been more at a strategic level. Once that strategy is defined and put into play within an enterprise, the question is how the enterprise architect really follows through, if they are focused on just the strategy and not necessarily its implementation. That’s a big part of the challenge for enterprise architects — to understand how they percolate downward the standards and the discipline of architecture that need to be present within an organization to enable that strategy and transformation.

Gardner: Len.

Fehskens: One of the things that I am seeing is an idea taking hold within the architecture community that architecture is really about making the connection between strategy and execution.

If you look at the business literature, that problem is one that’s been around for a long time. A lot of organizations evolved really good strategies and then failed in the execution, with people banging their heads against the wall, trying to figure out, “We had such a great strategy. Why couldn’t we really implement it?”

I don’t know that anybody has actually done a study yet, but I would strongly suspect that, if they did, one of the things that they would discover was there wasn’t something that played the role of an architecture in making the connection between strategy and execution.

I see this as another great opportunity for architects. If we can express this idea in language that businesspeople understand, and strategy to execution is language that businesspeople understand, and we can show them how architecture facilitates that connection, there is a great opportunity for a win-win situation for both the business and the architecture community.

Gardner: Chris.

Forde: I just wanted to follow up on the two points that were raised here, and say that the strategy-to-execution problem space is not at all peculiar to IT architects or enterprise architects. It’s a fundamental business problem. Companies that are good at building that bridge are extremely effective, and it’s the role of architects in that that’s the important thing; we have to have a place at the table.

But, to imagine that the enterprise architects are solely responsible for driving execution of a strategy in an organization is a fallacy, in my opinion. The need is to ensure that the team of people that are engaged in setting the strategy and executing on it are compelling enough to drive that through the organization. That is a management and an executive problem, a middle management problem, and then driving down to the delivery side. It’s not peculiar to EA at all in my opinion.

Gardner: Andrew at Wells Fargo Bank, you wear a number of hats outside of your organization that I think cross some of these boundaries. The idea of the enterprise architect or a business architect, where do you see this development of the occupation going, the category going, and what about this division between strategy and execution?

Guitarte: I may not speak for the bank itself, but from my experience of talking with people from the grassroots to the executive level, I have seen one very common observation: enterprise architects are caught off-guard, and the reason is that there is a new paradigm. In fact, there is a paradigm shift: business architecture is the new EA, and I am going beyond my peers here in terms of predicting the future.

Creating a handbook

That is going to be the future. I am the founding chairman of the Business Architecture Society. Today, I am an advisory member of the Business Architecture Guild. We’re writing, or even rewriting, the textbooks on EA. We’re creating a handbook for business architects. What my peers have mentioned is that they are bridging the strategy and tactical demands and are producing the value that business has been asking for.

Gardner: Okay, we also see from the survey that process flexibility and standardization seem to be a big topic. Again, they’re looking to the architects in the organization to try to bridge that, to move above and beyond IT and applications into process, standardization, and automation across the organization.

Ahmed, where do you see that going, and how do you think the architect can play a role in furthering this goal of process flexibility and standardization?

Fattah: The way I see the market is consistent with the results of the survey, in that it sees the emergence of the enterprise architect as a business architect, working in a much wider space and focusing more on the business. There are a number of catalysts for that. One of them is business process: the rise of business process management as a very important discipline within the organization.

That, in a way, had some roots in Six Sigma, which was purely a business discipline, but also in service-oriented architecture (SOA), which has itself now developed into business process decomposition and implementation.

That gives very good ammunition and support for the strategic decomposition of the whole enterprise into components, with business processes as the connecting elements between them. The business process architect is participating as a business architect, using business process as a major lever for enabling business transformation.

I’m very encouraged by this development of business architecture. By the way, another catalyst now is the cloud. The cloud will actually purify or modify EA, because all the technical details may actually be outsourced to the cloud provider, and the essence of what IT supports in the organization becomes the business process.

On one hand, I’m encouraged by the result of the survey and what I’ve seen in organizations, but on the other hand, I am disappointed that EA hasn’t developed these economic and business bases yet. I agree with Len that 20 years is a short time. On the other hand, it’s a long time for not applying this discipline in a consistent way. We should be seeing much more penetration, especially in large commercial organizations, not just on the academic side.

Gardner: So, if we look at that potential drop between the strategy and the execution, with someone dropping the ball in that transition, what Ahmed is saying is that cloud computing could come in, whereby your strategy could be defined, your processes could be engineered, and then the tactical implementation of those could be handed off to the cloud providers. Is that a possible scenario from where you sit, Dave?

Hornford: I think it’s a possible scenario. What I see driving it more is the ability to highlight the process or business service requirements and not tie them to legacy investments that are not decomposed for a cloud. Where you have a separation to a cloud, you acquire the ability to improve your execution. The barriers to execution in our current world are very closely tied to our legacy investments in software assets and physical assets, which are very closely tied to our organizational structure.

Gardner: How about you, Chris Forde, do you see some goodness or risk in ameliorating the issue of handing off strategy to a cloud provider?

Abdicating responsibility

Forde: Any organization that hands over strategic planning or execution activity to a third party is abdicating its own responsibility to shareholders, as a profit-making organization. So I would not advocate that position at all.

Hornford: You can’t outsource thinking?

Forde: Well, you can, but then you give up control, and that’s not a good situation. You need to be in control of your own destiny. In terms of what Ahmed was talking about, you need to be very careful, as you engage with the third party, that they are actually going to implement your strategic intent.

You need to have a really strong idea of what it is you want from the provider, articulate it clearly, and set up a structure that allows you to manage and operate that with their strength in the game. If you simply abdicate that responsibility and assume that it’s going to happen, it’s likely to fail.

Gardner: So there will clearly be instances where handing off responsibility at some level will make sense and instances where it won’t, but who better than the enterprise architect to make that determination? Ahmed.

Fattah: I agree that, on one hand, the organization shouldn’t abdicate the core function of the business in defining a strategy and then executing it.

However, there is an example of something I’m seeing as a trend, though a very slow trend: outsourcing architecture itself to other organizations. We have one example in Australia of a very large organization that gave IBM project execution, the delivery organization. Part of that was architecture. I was part of the effort to define with the organization their enterprise architecture and the demarcation between what they outsource and what they retain.

Definitely, they have to retain certain important parts, which are the strategy and the high level, but outsourcing is a catalyst for defining the value of this architecture. So the number of architects within the software organization was looked at with greater scrutiny. The value of the delivery was monitored, and value was demonstrated. So the team actually grew; it didn’t shrink.

Forde: In terms of outsourcing knowledge skills and experience in an architecture, this is a wave of activity that’s going to be coming. My point wasn’t that it wasn’t a valid way to go, but you have to be very careful about how you approach it.

My experience out of the Indian subcontinent has been that having a bunch of people labeled as architects is different from having a bunch of people who have the knowledge, skills, and experience to deliver what is expected. But in that region, and in Asia and China in particular, what I’m seeing is a recognition that there is a market there. In North America and in Europe, there is a shortage of people with these skills and this experience. And folks who are entrepreneurial in their outlook in Asia are certainly looking to fill that gap.

So, Ahmed’s model is one that can work well, and it will be a burgeoning model over the next few years. But you have to build the skill base first.

Gardner: Thank you, Chris Forde. Andrew, you had something?

Why the shift?

Guitarte: There’s no disagreement about what’s happening today, but I think the most important question is to ask why there is this shift. As Nick was saying, there is a shift of focus, and outsourcing is a symptom of that shift.

If you look back, Dave mentioned that in any organization there are two forces that try to control the structure. One is the technostructure, which EA belongs to, and the main goal of a technostructure is to perpetuate itself in power, to put it bluntly. Then, there is the other side, the shareholders, who want to maximize profit, and you’ve seen that cycle go back and forth.

Today, unfortunately, it’s the shareholders who are winning. Outsourcing for them is a way to manage cash flow, to control costs, and unfortunately, we’re getting hit.

Gardner: Nick, going back to the survey. When you asked about some of these hot trends — cloud, outsourcing, mobile, the impact — did anything jump out at you that might add more to our discussion around this shifting role and this demarcation between on-premises and outsource?

Hill: Absolutely. The whole concept of leveraging external resources for computing capabilities is something we drove at. We did look at the purpose behind that, and it largely plays into our conversation about the impact on the business. It’s more of a cost-reduction play.

That’s what our survey respondents said: the reason the organization was interested in cloud was to reduce cost. It’s a very interesting concept, when you look at why the business sees it as a cost play, as opposed to a revenue-generating, profit-making endeavor. It creates some need for balance there.

Gardner: So cutting cost, but at what price, Len?

Fehskens: The most interesting thing for me about cloud is that it replays a number of scenarios that we’ve seen happen over and over and over and over again. It’s almost always the case that the initial driver for the business to get interested in something is to reduce cost. But, eventually, you squeeze all the water out of that stone and you have to start looking at some other reason to keep moving in that direction, keep exploiting that opportunity.

That almost invariably is added value. What’s happening with cloud is that it’s forcing people to look at a lot of the issues that they started to address with SOA. But, the problem with SOA was that a lot of vendors managed to turn it into a technology issue. “Buy this product and you’ll have SOA,” which distracted people from thinking about the real issue here, which is figuring out what are the services that the business needs.

Once you understand what the services are that the business needs, you can go and look for the lowest-cost provider out in the cloud to make that connection. And once you’ve made that separation between the services the business needs and how they are provided, you can start orchestrating the services on the business side, from a strategically driven perspective, to look at the opportunities to create added value.

You can assemble the implementation that delivers that added value from resources that are already out there that you don’t have to rely on your in-house organization to create it from scratch. So, there’s a huge opportunity here, but it’s accompanied by an enormous risk. If you get this right, you’re going to win big. But if you get it wrong, you are going to lose big.

Gardner: Ahmed, you had some thoughts?

Cloud has focus

Fattah: When we use the term cloud, like many other terms, we refer to so many different things, and the cloud definitely has a focus. I agree that the focus now is on reducing cost. However, when you look at the cloud as providing pure business services, such as software as a service (SaaS), but also orchestrated business process services, perhaps with outsourcing of the business process itself, it has a huge potential to change organizations’ mindsets about what they are doing and in which parts they have to minimize cost. Where the service is a differentiator, they have to own it, they have to invest so much in it, and they have to use the best around.

Definitely, the cloud will play at different levels, but the level where it will work with business architecture actually distills enterprise architecture into its essence, which is understanding what services I need, how I source the services, and how I integrate them together to achieve the value.

Gardner: So, the stakes are rather high. We have an opportunity where things could be very much more productive, and I’ll use that term rather than just cost savings, but we also have the risk of some sort of disintermediation, of dropping the ball in handing off the strategic initiatives to the tactical implementation, and/or of losing control of your organization.

So, the question is, Dave Hornford, isn’t the enterprise architect in the catbird seat, in a really strong position to help determine success or failure on this particular point?

Hornford: Yes, and that gets to our first point, which was execution. We’ve talked in this group about the business’s struggle to execute. We also have to consider the ability of an enterprise architecture team to execute.

When we look at an organization that has historically come from and been very technically focused in enterprise IT, the struggle there, as Andrew said, is that it’s a self-perpetuating motion.

I keep running into architecture teams that talk about making sure that IT has a seat at the table. That’s a failure model, as opposed to going down the path that Len and Ahmed were talking about: identifying the services that the business needs, so that they can be effectively assembled, whether that assembly happens inside the company, partly with an outsource provider, or entirely with someone else doing the work.

That gets back to that core focus of the sub-discipline that is evolving at an even faster rate than enterprise architecture. That’s business architecture. We’re 20 years into EA, but you can look at business literature going back a much broader period, talking about the difficulty of executing as a business.

This problem is not new. What’s new is a player in it who has the capability to provide good advice. The core of execution, as I see it, is an architecture team recognizing that they are advice providers, not doers, and that they need to provide advice to a leadership team who can execute.

Gardner: Anyone else want to add to this issue of the role and importance of the architect, be it business or information or IT, and this interesting catalyst position we are in between on-premises and outsourced?

Varying maturity

Forde: I have a comment to make. It’s interesting listening to Dave’s comments. What we have to gauge here is that the state of EA varies in maturity from industry to industry and organization to organization.

For the function to be saying “I need a place at the table” is an indication of a maturity level inside an organization. If we’re going to say that an EA team that is looking for a place at the table is in a position to strategically advise the executives on what to do in an outsourcing agreement, that’s a recipe for disaster.

However, if you’re already in the position of being a trusted adviser within the organization, then it’s a very powerful position. It reflects the model that you just described, Dana.

Organizations and the enterprise architecture teams in the business units need to be reflecting on where they are and how they can play in the model that Ahmed and Dave are talking about. There is no one-size-fits-all here from an EA perspective; I think it really varies from organization to organization.

Gardner: Nick, from the survey, was there any data or information that gives you insight into where these individuals need to go in order to accommodate, as Chris was saying, what they need to do from a self-starting situation to rise to these issues, even as these issues of course vary from company to company?

Hill: One of the major focus areas we found is that, when we talk about business architecture, the reality is that there’s a host of new technologies that have emerged with Web 2.0 and are emerging in grid computing, cloud computing, and those types of things, which surely are alluring to the business. The challenge for the enterprise architect is to look at the legacy systems that are already invested in in-house and how an organization is going to transition that legacy environment to the new computing paradigms, do that efficiently, and at the same time be able to hit the business goals and objectives.

It’s a conundrum that enterprise architects have to deal with, because there is a host of legacy investment there. At Infosys, we’ve seen a large uptake in the amount of modernization and rationalization of portfolios going on with our clientele.

That’s an important indicator that there is this transition happening and the enterprise architects are right in the middle of that, trying to coach and counsel the business leadership and, at the same time, provide the discipline that needs to happen on each and every project, and not just the very large projects or transformation initiatives that organizations are going through.

The key point here is that the enterprise architects are in the middle of this game. They are very instrumental in bringing these two worlds together, and the idea that they need to have more business acumen, more business savvy, to understand how those things are affecting the business community, is going to be critical.

Gardner: Very good. We’re going to have to leave it there. I do want to thank you, Nick, for sharing the information from your Infosys Technologies survey and its results. So, thank you to Nick Hill, Principal Enterprise Architect at Infosys Technologies.

I’d also like to thank the other members of our panel today. Len Fehskens, the Vice President of Skills and Capabilities at The Open Group. Thank you.

Fehskens: Thanks for the opportunity. It was a very interesting discussion.

Gardner: And Dave Hornford, the Architecture Practice Principal at Integritas. Thank you.

Hornford: Thank you very much, Dana, and everyone else.

Gardner: And Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities at The Open Group. Thank you.

Forde: My pleasure. Thanks, Dana.

Gardner: And of course, we’ve also been joined by Andrew Guitarte. He is the Enterprise Business Architect of Internet Services at Wells Fargo Bank. Thank you.

Guitarte: My pleasure.

Gardner: And lastly, Ahmed Fattah. He is the Executive IT Architect in the Financial Services Sector for IBM, Australia.

Fattah: Thank you, Dana.

Gardner: And I want to thank our listeners who have been enjoying a sponsored podcast discussion in conjunction with The Open Group Conference here in San Diego, the week of February 7, 2011. I’m Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for joining and come back next time.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.



PODCAST: Exploring the role and impact of the Open Trusted Technology Framework (OTTF)

by Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-Discover the Open Trusted Technology Provider Framework

The following is the transcript of a sponsored podcast panel discussion from The Open Group Conference, San Diego 2011 on The Open Trusted Technology Forum and its impact on business and government.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion in conjunction with The Open Group Conference held in San Diego, the week of February 7, 2011. We’ve assembled a panel to examine The Open Group’s new Open Trusted Technology Forum (OTTF), which was established in December.

The forum is tasked with finding ways to better conduct global procurement and supply-chain commerce among and between technology acquirers and buyers and across the ecosystem of technology providers. By providing transparency, collaboration, innovation, and more trust among the partners and market participants in the IT environment, the OTTF will lead to reduced business risk for global supply activities in the IT field.

We’ll examine how the OTTF will function, what its new framework will be charged with providing, and ways that participants in the global IT commerce ecosystem can become involved with the OTTF and perhaps use its work to their advantage.

Here with us to delve into the mandate and impact of the Trusted Technology Forum is Dave Lounsbury. He is the Chief Technology Officer for The Open Group. Welcome, Dave.

Dave Lounsbury: Hi, Dana. How are you?

Gardner: I’m great. We’re also here with Steve Lipner, the Senior Director of Security Engineering Strategy in Microsoft’s Trustworthy Computing Group. Welcome, Steve.

Steve Lipner: Hi, Dana. Glad to be here.

Gardner: And, we’re also here with Andras Szakal, the Chief Architect in IBM’s Federal Software Group and an IBM distinguished engineer. Welcome.

Andras Szakal: Welcome. Thanks for having me.

Gardner: We’re also here with Carrie Gates, Vice President and Research Staff Member at CA Labs. Welcome.

Carrie Gates: Thank you.

Gardner: Let’s start with you, Dave. Tell us in a nutshell what the OTTF is and why it came about.

Lounsbury: The OTTF is a group that came together under the umbrella of The Open Group to identify and develop standards and best practices for a trusted supply chain. It’s about how one consumer in a supply chain can trust their partners, and how those partners can indicate their use of best practices in the market, so that people who are buying from the supply chain, or buying from a specific vendor, will know that they can procure with a high level of confidence.

Gardner: Clearly, people have been buying these sorts of products for some time. What’s new? What’s changed that makes this necessary?

Concerns by DoD

Lounsbury: There are a couple of dimensions on it, and I will start this off because the other folks in the room are far more expert in this than I am.

This actually started a while ago at The Open Group, with a question from the U.S. Department of Defense (DoD), which faced the challenge of buying commercial off-the-shelf products. Obviously, they wanted to take advantage of the economies of scale and the pace of technology in the commercial supply chain, but they realized that this means they’re not going to get purpose-built equipment, that they are going to buy things from a global supply chain.

They asked, “What would we look for in these things that we are buying to know that people have used good engineering practices and good supply chain management practices? Do they have a good software development methodology? What would be those indicators?”

Now, that was a question from the DoD, but everybody is on somebody’s supply chain. People buy components. The big vendors buy components from smaller vendors. Integrators bring multiple systems together.

So, this is a really broad question in the industry. Because of that, we felt the best way to address it was to bring together a broad spectrum of the industry, identify the practices that they have been using — their real, practical experience — and bring that together within a framework to create a standard for how we would do that.

Gardner: And this is designed with that word “open” being important to being inclusive. This is about a level playing field, but not necessarily any sort of exclusionary affair.

Lounsbury: Absolutely. Not only is the objective of all The Open Group activities to produce open standards and conformance programs that are available to everyone, but in this case, because we are dealing with a global supply chain, we know that we are going to have not only vendors at all scales, but also vendors from all around the world.

If you pick up any piece of technology, it will be designed in the USA, assembled in Mexico, and built in China. So we need that international and global dimension in production of this set of standards as well.

Gardner: Andras, you’ve been involved with this quite a bit. For the edification of our listeners, is this mostly software we’re talking about? Is it certain components? Can we really put a bead on what will be the majority of technologies that would probably be affected?

Szakal: That’s a great question, Dana. I’d like to provide a little background. In today’s environment, we’re seeing a bit of a paradigm shift. We’re seeing technology move out of the traditional enterprise infrastructure. We’re seeing these very complex value chains be created. We’re seeing cloud computing.

Smarter infrastructures

We’re actually working to create smarter infrastructures that are becoming more intelligent, automated, and instrumented, and they are very much becoming open-loop systems. Traditionally, they were closed-loop systems, in other words, closed environments: for example, the energy and utility (E&U) industry, the transportation industry, and the health-care industry.

As technology becomes more pervasive and gets integrated into these environments, into the critical infrastructure, we have to consider whether they are vulnerable and how the components that have gone into these solutions are trustworthy.

Governments worldwide are asking that question. They’re worried about critical infrastructure and the risk of using commercial, off-the-shelf technology — software and hardware — in a myriad of ways, as it gets integrated into these more complex solutions.

That’s part of the worry internationally from a government and policy perspective, and part of our focus here is to help our constituents, government customers and critical infrastructure customers, understand how the commercial technology manufacturers, the software development manufacturers, go about engineering and managing their supply chain integrity.

Gardner: I got the impression somehow, listening to some of the presentations here at the Conference, that this was mostly about software. Maybe at the start, would that be the case?

Szakal: No, it’s about all types of technology. Software obviously is a particularly important focus, because it’s at the center of most technology anyway. Even if you’re developing a chip, the chip has some sort of firmware, which is ultimately software. So that perception is valid to a certain extent, but no, it’s not just software; it’s hardware as well.

Gardner: Steve, I heard also the concept of “build with integrity,” as applied to the OTTF. What does that mean, build with integrity?

Lipner: Build with integrity really means that the developer who is building a technology product, whether it be hardware or software, applies best practices and understood techniques to prevent the inclusion of security problems, holes, bugs, in the product — whether those problems arise from some malicious act in the supply chain or whether they arise from inadvertent errors. With the complexity of modern software, it’s likely that security vulnerabilities can creep in.

So, what build with integrity really means is that the developer applies best practices to reduce the likelihood of security problems arising, as much as commercially feasible.

And not only that, but any given supplier has processes for convincing himself that upstream suppliers, component suppliers, and people or organizations that he relies on, do the same, so that ultimately he delivers as secure a product as possible.

Gardner: Carrie, one of the precepts of good commerce is a lack of friction between borders, where more markets can become involved and effects like the highest quality at the lowest cost can take place. This notion of trust, when applied to IT resources and assets, seems important for keeping this a global market and allowing the efficiencies inherent in an open market to take place. How do you see this as a borderless technology ecosystem? How does this help?

International trust

Gates: This helps tremendously in improving trust internationally. We’re looking at developing a framework that can be applied regardless of which country you’re coming from. So, it is not a US-centric framework that we’ll be using and adhering to.

We’re looking for a framework so that each country, regardless of its government, regardless of the consumers within that country, all of them have confidence in what it is that we’re building, that we’re building with integrity, and that we are concerned, as Steve mentioned, about both malicious acts and inadvertent errors.

And each country has its own bad guy, so by adhering to an international standard we can say we’re looking for the bad guys of every country and ensuring that what we provide is the best possible software.

Gardner: Let’s look a little bit at how this is going to shape up as a process. Dave, let’s explain the idea of The Open Group being involved as a steward. What is The Open Group’s role in this?

Lounsbury: The Open Group provides the framework under which both buyers and suppliers at any scale could come together to solve a common problem — in this case, the question of providing trusted technology best practices and standards. We operate a set of proven processes that ensure that everyone has a voice and that all these standards go forward in an orderly manner.

We provide the infrastructure for doing that in the meetings and things like that. The third leg is that The Open Group operates industry-based conformance programs, the certification programs, that allow someone who is not a member to come in, indicate their conformance to the standard, and give evidence that they’re using the best practices there.

Gardner: That’s important. I think there is a milestone set that you were involved with. You’ve created the forum. You’ve done some gathering of information. Now, you’ve come out right here at this conference with the first step towards a framework that could be accepted across the community. There is also a white paper that explains how that’s all going to work. But, eventually, you’re going to get to an accreditation capability. What does that mean? Is that a stamp of approval?

Lounsbury: Let me back up just a little bit. The white paper actually lays out the framework. The work of the forum is to turn that framework into an Open Group standard and populate it. That will provide the standards and best-practice foundation for this conformance program.

We’re just getting started on the vision for a conformance program. One of the challenges here is that not only do we have to come up with the standard, and then with the criteria by which people would submit evidence, but we also have to deal with the problem of scale.

If we really want to address this problem of global supply chains, we’re talking about a very large number of companies around the world. It’s a part of the challenge that the forum faces.

Accrediting vendors

Part of the work that they’ve embarked on is, in fact, to figure out how we wouldn’t necessarily do that kind of conformance one on one, but how we would accredit either vendors themselves, who have their own quality processes as a big vendor would, or third parties who can do assessments and then help provide the evidence for that conformance.

We’re getting ahead of ourselves here, but there would be a certification authority that would verify that all the evidence is correct and grant some certificate that says that they have met some or all of the standards.

Szakal: Our vision is that we want to leverage some of the capability that’s already out there. Most of us go through common criteria evaluations, and that is actually listed as a best practice for validating security functions in products.

Where we are focused, from an accreditation point of view, affects more than just security products. That’s important to know. However, we definitely believe that the community of assessment labs that already exists out there conducting security evaluations, whether country-specific or common criteria, needs to be leveraged. We’ll endeavor to do that and integrate them into both the membership and the thinking of the accreditation process.

Gardner: Thank you, Andras. Now, for a company that is facing some hurdles — and we heard some questions in our sessions earlier about: “What do I have to do? Is this going to be hard for an SMB?” — the upside could be pretty significant. If you’re a company and you do get that accreditation, you’re going to have some business value. Steve Lipner, what from your perspective is the business rationale for these players to go through this accreditation to get this sort of certification?

Lipner: To the extent that the process is successful, customers will really value the certification, and that will open markets or create preferences in markets for organizations that have sought and achieved the certification.

Obviously, there will be effort involved in achieving the certification, but that will be related to real value, more trust, more security, and the ability of customers to buy with confidence.

The challenge that we’ll face as a forum going forward is to make the processes deterministic and cost-effective, so that a supplier can say: “I can understand what I have to do. I can understand what it will cost me. I won’t get surprised in the certification process, and I can understand the value equation: here’s what I’m going to have to do, and here are the markets, the customer sets, and the supply chains it’s going to open up to me.”

Gardner: So, we understand that there is this effort afoot that the idea is to create more trust and a set of practices in place, so that everyone understands that certain criteria have been met and vulnerabilities have been reduced. And, we understand that this is going to be community effort and you’re going to try to be inclusive.

What I’m now curious about is what this actually consists of — a list of best practices, technology suggestions? Are there certain tests and requirements already in place that one would have to tick off? Let me take that to you, Carrie, and we’ll go around the panel. How do you actually assure that this is safe stuff?

Different metrics

Gates: If you refer to our white paper, we start to address that there. We were looking at a number of different metrics across the board. For example, what do you have for documentation practices? Do you do code reviews? There are a number of different best practices already in the field that people are using. Anyone who wants to be certified can go and look at this document and say, “Yes, we are following these best practices,” or “No, we are missing this. Is it something that we really need to add? What kind of benefit will it provide to us beyond the certification?”

Gardner: Dave, anything to add as to how a company would go about this? What are some of the main building blocks to a low-vulnerability technology creation and distribution process?

Lounsbury: Again, I refer everybody to the white paper, which is available on The Open Group website. You’ll see there that we’ve divided these kinds of best practices into four broad categories: product engineering and development methods, secure engineering development methods, supply chain integrity methods, and product evaluation methods.

Under those categories, we’ll be looking at the attributes that are necessary for each, and then identifying the underlying standards or bits of evidence that people can submit to indicate their conformance.

I want to underscore this point about the question of the cost to a vendor. Steve said it very well. The objective here is to raise best practices across the industry and make the best practice commonplace. One of the great things about an industry-based conformance program is that it gives you the opportunity to take the standards and those categories that we’ve talked about as they are developed by OTTF and incorporate those in your engineering and development processes.

So you’re baking in the quality as you go along, and not trying to have an expensive thing going on at the end.

Gardner: Andras, IBM is perhaps one of the largest providers to governments and defense agencies when it comes to IT and certainly, at the center of a large ecosystem around the world, you probably have some insights into best practices that satisfy governments and military and defense organizations.

Can you offer a few major building blocks that perhaps folks that have been in a completely commercial environment would need to start thinking more about as they try to think about reaching accreditation?

Szakal: We have three broad categories here and we’ve broken each of the categories into a set of principles, what we call best practice attributes. One of those is secure engineering. Within secure engineering, for example, one of the attributes is threat assessment and threat modeling.

Another would be to focus on lineage of open-source. So, these are some of the attributes that go into these large-grained categories.

Unpublished best practices

You’re absolutely right; we have thought about this before. Steve and I have talked a lot about this. We’ve worked on his secure engineering initiative, his SDLC initiative, within Microsoft. I worked on, and was co-author of, the IBM Secure Engineering Framework. So, these are living examples of some of the best practices out there that have been published, though they remain proprietary. There are others, and in many cases companies have addressed this internally as part of their practices, without having to publish them.

Part of the challenge that we are seeing, and part of the reason that Microsoft and IBM went to the length of publishing theirs, is that government customers and critical infrastructure were asking what the industry practice was and what the best practices were.

What we’ve done here is take the best practices in the industry and bring them together in a way that’s non-vendor-specific. So you’re not looking to IBM, and you’re not having to look at the other vendors’ methods of implementing these practices; it gives you a vendor-neutral way of addressing them based on outcome.

These have all been realized in the field. We’ve observed these practices in the wild, and we believe that this is going to actually help vendors mature in these specific areas. Governments recognize that, to a certain degree, the industry is not a little drunk and disorderly and we do actually have a view on what it means to develop product in a secure engineering manner and that we have supply chain integrity initiatives out there. So, those are very important.

Gardner: Somebody mentioned earlier that technology is ubiquitous across so many products and services. Software in particular is growing more important in how it affects all sorts of different aspects of businesses around the world. It seems to me this is an inevitable step that you’re taking here, and that it might even be overdue.

If we can take the step of certification and agreement about technology best practices, does this move beyond just technology companies in the ecosystem to a wider set of products and services? Any thoughts about whether this is a framework for technology that could become more of a framework for general commerce, Dave?

Lounsbury: Well, Dana, you asked me a question I’m not sure I have an answer for. We’ve got quite a task in front of us doing some of these technology standards. I guess there might be cases where vertical industries that are heavy technology employers, or have similar kinds of security problems, might look to this, or there might be some overlap. The one that comes to my mind immediately is health care, but we will be quite happy if we get the technology industry standards and best practices in place in the near future.

Gardner: I didn’t mean to give you more work to do, necessarily. I just wanted to emphasize how this is an important and inevitable step, and that standardization around best practices, trust, and credibility for reducing malware and other risks in technology is probably going to become more prevalent across the economy and the globe. Would you agree with that, Andras?

Szakal: This approach is, by the way, our best practices approach to solving this problem. It’s an approach that’s been taken before by the industry or industries from a supply chain perspective. There are several frameworks out there that abstract the community practice into best practices and use it as a way to help global manufacturing and development practices, in general, ensure integrity.

Our approach is not all that unique, but it’s certainly the first time the technology industry has come together to make sure that we have an answer to some of these most important questions.

Gardner: Any thoughts, Steve?

Lipner: I think Andras was right in terms of the industry coming together to articulate best practices. You asked a few minutes ago about existing certifications and beyond in the trust and assurance space. Beyond common criteria for security features and security products, there’s really not much in terms of formal evaluation processes today.

Creating a discipline

One of the things we think that the forum can contribute is a discipline that governments and potentially other customers can use to say, “What is my supplier actually doing? What assurance do I have? What confidence do I have?”

Gardner: Dave?

Lounsbury: I want to expand on that point a little bit. The white paper’s name, “The Open Trusted Technology Provider Framework,” was quite deliberately chosen. There are a lot of practices out there that talk about how you would establish specific security criteria or specific security practices for products. The Open Trusted Technology Provider Forum wants to take a step up and look not at the products, but at the practices that the providers employ to produce them. So it’s bringing together those best practices.

Now, good technology providers will use good practices when they’re looking at their products, but we want to make sure that they’re following all of the necessary standards and best practices across the spectrum, not just, “Oh, I did this in this product.”

Szakal: I have to agree 100 percent. We’re not simply focused on a bunch of security controls here. This is about industry practices for supply chain integrity, as well as our internal manufacturing practices around the actual process of engineering and software development.

That’s a very important point to make. This is not a traditional security standard, in the sense of a hundred security controls that you should always go out and implement. You’re going to have certain practices that make sense in certain situations, depending on the context of the product you’re manufacturing.

Gardner: Carrie, any suggestions for how people could get started at least from an educational perspective? What resources they might look to or what maybe in terms of a mindset they should start to develop as they move towards wanting to be a trusted part of a larger supply chain?

Gates: I would say an open mindset. In terms of getting started, the white paper is an excellent resource for understanding how the OTTF is thinking about the problem. How are we structuring things? What are the high-level attributes that we are looking at? Then, digging down further and saying, “How are we actually addressing the problem?”

We had mentioned threat modeling, which for some — if you’re not security-focused — might be a new thing to think about, as an example, in terms of your supply chain. What are the threats to your supply chain? Who might be interested, if you’re looking at malicious attack, in inserting something into your code? Who are your customers and who might be interested in potentially compromising them? How might you go about protecting them?

I am going to contradict Andras a little bit, because there is a security aspect to this, and there is a security mindset that is required. The security mindset is a little bit different, in that you tend to be thinking about who is it that would be interested in doing harm and how do you prevent that?

It’s not a normal way of thinking about problems. Usually, people have a problem, they want to solve it, and security is an add-on afterwards. We’re asking that they bring that thinking in up front and include it as part of their process.

Szakal: But, you have to agree with me that this isn’t your hopelessly lost techie 150-paragraph list of security controls you have to do in all cases, right?

Gates: Absolutely, there is no checklist of, “Yes, I have a Firewall. Yes, I have an IDS.”

Gardner: Okay. It strikes me that this is really a unique form of insurance — insurance for the buyer, insurance for the seller, that they can demonstrate that they’ve taken proper steps — and insurance for the participants in a vast and complex supply chain of contractors and suppliers around the world. Do you think the word “insurance” makes sense, or “assurance”? How would you describe it, Steve?

Lipner: We talk about security assurance, and assurance is really what the OTTF is about: providing developers and suppliers with ways to achieve that assurance, and providing their customers with ways to know that they have done so. Andras referred to installing the firewall, and so on. This is really not about adding some security band-aid onto a technology or a product. It’s really about the fundamental attributes or assurance of the product or technology that’s being produced.

Gardner: Very good. I think we’ll need to leave it there. We have been discussing The Open Group’s new Open Trusted Technology Forum, the associated Open Trusted Technology Provider Framework, and the movement toward an accreditation process for the global supply chains around technology products.

I want to thank our panel. We’ve been joined by Dave Lounsbury, the Chief Technology Officer of The Open Group. Thank you.

Lounsbury: Thank you, Dana.

Gardner: Also, Steve Lipner, the Senior Director of Security Engineering Strategy in Microsoft’s Trustworthy Computing Group. Thank you, Steve.

Lipner: Thank you, Dana.

Gardner: And also, Andras Szakal, Chief Architect in the IBM Federal Software Group and an IBM Distinguished Engineer. Thank you.

Szakal: Thank you so much.

Gardner: And, also Carrie Gates, Vice President and Research Staff Member at CA Labs. Thank you.

Gates: Thank you.

Gardner: You’ve been listening to a sponsored podcast discussion in conjunction with The Open Group Conference here in San Diego, the week of February 7, 2011. I’m Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for joining and come back next time.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


Enterprise Architecture: Helping to unravel the emergence of new forces

By Raghuraman Krishnamurthy, Cognizant Technology Solutions

It is very interesting to see how society changes with time. Primitive society was predominantly agrarian. Society was closely knit; people worked within well-defined land boundaries. Trade happened, but was limited to the adventurous few.

In the 1700s, as Europe began to explore the world in a well-organized manner, the need for industrial enterprises emerged. Industries offered greater stability in jobs as opposed to being at the mercy of vagaries of nature in the agrarian world. People slowly migrated to centres of industries, and the cities emerged. The work was accomplished by well-established principles of time and materials: in one hour, a worker was supposed to assemble ‘x’ components following a set, mechanical way of working. As society further evolved, a lot of information was exchanged and thoughts of optimization began to surface. Ways of storing and processing the information became predominant in the pursuits of creative minds. Computer systems ushered in new possibilities.

With the emergence of the Internet, unimagined new avenues opened up. In one stroke, it was possible to transcend the constraints of time and space. Free flow of information enabled ‘leveling’ of the world in some sense. Enterprises began to take advantage of it by following the principle of ‘get the best work from where it is possible’. Established notions for a big enterprise like well-organized labor, building of mammoth physical structures, etc., were challenged.

In the creative world of today, great emphasis is on innovation and new socially/environmentally-conscious ways of doing business.

At every turn, from agrarian to industrial to informational to creative, fundamental changes occurred in how business was done and in the principles of business management. These changes, although seemingly unrelated to a field like Enterprise Architecture, help unravel the emergence of new forces and the need to adjust to (if you are reactive) or plan for (if you are proactive) these forces.

Learning from how manufacturing companies have adjusted their supply chain management to live in the flat world provides a valuable key to how Enterprise Architecture can be looked at afresh. I am very excited to explore more of this theme and other topics at The Open Group India Conference in Hyderabad (March 9), Pune (March 11) and Chennai (March 7). I look forward to seeing you there and to some interesting discussions.

Raghuraman Krishnamurthy works as a Principal Architect in Cognizant Technology Solutions and is based in India. He can be reached at Raghuraman.krishnamurthy2@cognizant.com.


Believe in change

By Garry Doherty, The Open Group

Many years ago I remember watching a TV documentary about the Apollo 11 moonshot. It was, as you would expect, very interesting; however, the only real memory that has stayed with me is of a segment that had little to do with Apollo 11 itself.

The cameraman was recording some footage of a guy who was cleaning the floor with a mop. An interviewer entered the shot and engaged the cleaner… the dialogue went something like this:

Interviewer: Hi there, I wonder if you could give us a few minutes of your time?

Employee: Yeh, sure.

Interviewer: It must be amazing to be part of a team like this?

Employee: I guess.

Interviewer: Do you ever get the chance to meet any of the astronauts?

Employee: Oh no, sir.

Interviewer: So, what is your main role here?

Employee: I am helping to put a man on the moon.

Today, I am still humbled by that answer. Here was a man who, despite his lowly manual labor, knew exactly that he was part of a team and understood his organization’s goals and the nature of his contribution.

So what can Enterprise Architects learn from this?

Well, at least part of what Architects do is initiate change; but initiation is only the beginning. It’s not enough just to explain what change is going to happen; it’s also critical to explain why the change is necessary and what impact that change will bring.

So, bear in mind that it’s not enough to just tell people. They need to believe as well!

Garry Doherty is an experienced product marketer and product manager with a background in the IT and telecommunications industries. Garry is the TOGAF® Product Manager and the ArchiMate® Forum Director at The Open Group. Garry is based in the U.K.


Cloud Conference — and Unconference

By Dr. Chris Harding, The Open Group

The Wednesday of The Open Group Conference in San Diego included a formal Cloud Computing conference stream. This was followed in the evening by an unstructured CloudCamp, which made an interesting contrast.

The Cloud Conference Stream

The Cloud conference stream featured presentations on Architecting for Cloud and Cloud Security, and included a panel discussion on the considerations that must be made when choosing a Cloud solution.

In the first session of the morning, we had two presentations on Architecting for Cloud. Both considered TOGAF® as the architectural context. The first, from Stuart Boardman of Getronics, explored the conceptual difference that Cloud makes to enterprise architecture, and the challenge of communicating an architecture vision and discussing the issues with stakeholders in the subsequent TOGAF® phases. The second, from Serge Thorn of Architecting the Enterprise, looked at the considerations in each TOGAF® phase, but in a more specific way. The two presentations showed different approaches to similar subject matter, which proved a very stimulating combination.

This session was followed by a presentation from Steve Else of EA Principals in which he shared several use cases related to Cloud Computing. Using these, he discussed solution architecture considerations, and put forward the lessons learned and some recommendations for more successful planning, decision-making, and execution.

We then had the first of the day’s security-related presentations. It was given by Omkhar Arasaratnam of IBM and Stuart Boardman of Getronics. It summarized the purpose and scope of the Security for the Cloud and SOA project that is being conducted in The Open Group as a joint project of The Open Group’s Cloud Computing Work Group, the SOA Work Group, and Security Forum. Omkhar and Stuart described the usage scenarios that the project team is studying to guide its thinking, the concepts that it is developing, and the conclusions that it has reached so far.

The first session of the afternoon was started by Ed Harrington, of Architecting the Enterprise, who gave an interesting presentation on current U.S. Federal Government thinking on enterprise architecture, showing clearly the importance of Cloud Computing to U.S. Government plans. The U.S. is a leader in the use of IT for government and administration, so we can expect that its conclusions – that Cloud Computing is already making its way into the government computing fabric, and that enterprise architecture, instantiated as SOA and properly governed, will provide the greatest possibility of success in its implementation – will have a global impact.

We then had a panel session, moderated by Dana Gardner with his usual insight and aplomb, that explored the considerations that must be made when choosing a Cloud solution — custom or shrink-wrapped — and whether different forms of Cloud Computing are appropriate to different industry sectors. The panelists represented different players in the Cloud solutions market – customers, providers, and consultants – so that the topic was covered in depth and from a variety of viewpoints. They were Penelope Gordon of 1Plug Corporation, Mark Skilton of Capgemini, Ed Harrington of Architecting the Enterprise, Tom Plunkett of Oracle, and TJ Virdi of the Boeing Company.

In the final session of the conference stream, we returned to the topic of Cloud Security. Paul Simmonds, a member of the Board of the Jericho Forum®, gave an excellent presentation on de-risking the Cloud through effective risk management, in which he explained the approach that the Jericho Forum has developed. The session was then concluded by Andres Kohn of Proofpoint, who addressed the question of whether data can be more secure in the Cloud, considering public, private and hybrid Cloud environment.

CloudCamp

The CloudCamp was hosted by The Open Group but run as a separate event, facilitated by CloudCamp organizer Dave Nielsen. There were around 150-200 participants, including conference delegates and other people from the San Diego area who happened to be interested in the Cloud.

Dave started by going through his definition of Cloud Computing. Perhaps he should have known better – starting a discussion on terminology and definitions can be a dangerous thing to do with an Open Group audience. He quickly got into a good-natured argument from which he eventually emerged a little bloodied, metaphorically speaking, but unbowed.

We then had eight “lightning talks”. These were five-minute presentations covering a wide range of topics, including how to get started with Cloud (Margaret Dawson, Hubspan), supplier/consumer relationship (Brian Loesgen, Microsoft), Cloud-based geographical mapping (Ming-Hsiang Tsou, San Diego University), a patterns-based approach to Cloud (Ken Klingensmith, IBM), efficient large-scale data processing (Alex Rasmussen, San Diego University), using desktop spare capacity as a Cloud resource (Michael Krumpe, Intelligent Technology Integration), cost-effective large-scale data processing in the Cloud (Patrick Salami, Temboo), and Cloud-based voice and data communication (Chris Matthieu, Tropo).

The participants then split into groups to discuss topics proposed by volunteers. There were eight topics altogether. Some of these were simply explanations of particular products or services offered by the volunteers’ companies. Others related to areas of general interest such as data security and access control, life-changing Cloud applications, and success stories relating to “big data”.

I joined the groups discussing Cloud software development on Amazon Web Services (AWS) and Microsoft Azure. These sessions had excellent information content which would be valuable to anyone wishing to get started in – or already engaged in – software development on these platforms. They also brought out two points of general interest. The first is that the dividing line between IaaS and PaaS can be very thin. AWS and Azure are in theory on opposite sides of this divide; in practice they provide the developer with broadly similar capabilities. The second point is that in practice your preferred programming language and software environment is likely to be the determining factor in your choice of Cloud development platform.

Overall, the CloudCamp was a great opportunity for people to absorb the language and attitudes of the Cloud community, to discuss ideas, and to pick up specific technical knowledge. It gave an extra dimension to the conference, and we hope that this can be repeated at future events by The Open Group.

Cloud and SOA are topics of discussion at The Open Group Conference, San Diego, which is currently underway.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF® practitioner.

1 Comment

Filed under Cloud/SOA

A First Step in Securing the Global Technology Supply Chain: Introducing The Open Group Trusted Technology Provider Framework Whitepaper

By Andras Szakal, IBM

Nearly two months ago, we announced the formation of The Open Group Trusted Technology Forum (OTTF), a global standards initiative among technology companies, customers, government and supplier organizations to create and promote guidelines for manufacturing, sourcing, and integrating trusted, secure technologies. The OTTF’s purpose is to shape global procurement strategies and best practices to help reduce threats and vulnerabilities in the global supply chain. I’m proud to say that we have just completed our first deliverable towards achieving our goal: The Open Trusted Technology Provider Framework (O-TTPF) whitepaper.

The framework outlines industry best practices that contribute to the secure and trusted development, manufacture, delivery and ongoing operation of commercial software and hardware products. Even though the OTTF has only recently been announced to the public, the framework and the work that led to this whitepaper have been in development for more than a year: first as a project of the Acquisition Cybersecurity Initiative, a collaborative effort facilitated by The Open Group between government and industry verticals under the sponsorship of the U.S. Department of Defense (OUSD (AT&L)/DDR&E). The framework is intended to benefit technology buyers and providers across all industries and across the globe concerned with secure development practices and supply chain management.

More than 15 member organizations joined efforts to form the OTTF as a proactive response to the changing cybersecurity threat landscape, which has forced governments and larger enterprises to take a more comprehensive view of risk management and product assurance. Current members of the OTTF include Atsec, Boeing, Carnegie Mellon SEI, CA Technologies, Cisco Systems, EMC, Hewlett-Packard, IBM, IDA, Kingdee, Microsoft, MITRE, NASA, Oracle, and the U.S. Department of Defense (OUSD(AT&L)/DDR&E), with the Forum operating under the stewardship and guidance of The Open Group.

Over the past year, OTTF member organizations have been hard at work collaborating, sharing and identifying secure engineering and supply chain integrity best practices that currently exist.  These best practices have been compiled from a number of sources throughout the industry including cues taken from industry associations, coalitions, traditional standards bodies and through existing vendor practices. OTTF member representatives have also shared best practices from within their own organizations.

From there, the OTTF created a common set of best practices distilled into categories and eventually categorized into the O-TTPF whitepaper. All this was done with a goal of ensuring that the practices are practical, outcome-based, aren’t unnecessarily prescriptive and don’t favor any particular vendor.

The Framework

The diagram below outlines the structure of the framework divided into categories that outline a hierarchy of how the OTTF arrived at the best practices it created.

Trusted Technology Provider Categories

Best practices were grouped by category because the types of technology development, manufacturing or integration activities conducted by a supplier are usually tailored to suit the type of product being produced, whether it is hardware, firmware, or software-based. Categories may also be aligned by manufacturing or development phase so that, for example, a supplier can implement a Secure Engineering/Development Method if necessary.

Provider categories outlined in the framework include:

  • Product Engineering/Development Method
  • Secure Engineering/Development Method
  • Supply Chain Integrity Method
  • Product Evaluation Method

Establishing Conformance and Determining Accreditation

In order for the best practices set forth in the O-TTPF to have a long-lasting effect on securing product development and the supply chain, the OTTF will define an accreditation process. Without an accreditation process, there can be no assurance that a practitioner has implemented practices according to the approved framework.

After the framework is formally adopted as a specification, The Open Group will establish conformance criteria and design an accreditation program for the O-TTPF. The Open Group currently manages multiple industry certification and accreditation programs, operating some independently and some in conjunction with third party validation labs. The Open Group is uniquely positioned to provide the foundation for creating standards and accreditation programs. Since trusted technology providers could be either software or hardware vendors, conformance will be applicable to each technology supplier based on the appropriate product architecture.

At this point, the OTTF envisions a multi-tiered accreditation scheme that would allow for many levels of accreditation, including accreditation of an entire enterprise or of a specific division. An accreditation program of this nature could provide alternative routes to claiming conformity to the O-TTPF.

Over the long-term, the OTTF is expected to evolve the framework to make sure its industry best practices continue to ensure the integrity of the global supply chain. Since the O-TTPF is a framework, the authors fully expect that it will evolve to help augment existing manufacturing processes rather than replace existing organizational practices or policies.

There is much left to do, but we’re already well on the way to ensuring the technology supply chain stays safe and secure. If you’re interested in shaping the Trusted Technology Provider Framework best practices and accreditation program, please join us in the OTTF.

Download the O-TTPF whitepaper, or read the O-TTPF in full here.

Andras Szakal is an IBM Distinguished Engineer and Director of IBM’s Federal Software Architecture team. Andras is an Open Group Distinguished Certified IT Architect, IBM Certified SOA Solution Designer and a Certified Secure Software Lifecycle Professional (CSSLP). His responsibilities include developing e-Government software architectures using IBM middleware and leading the IBM U.S. Federal Software IT Architect Team. His team is responsible for designing solutions to enable smarter government by applying innovative approaches to secure service-based computing and mission-critical systems. He holds undergraduate degrees in Biology and Computer Science and a Master’s degree in Computer Science from James Madison University. Andras has been a driving force behind IBM’s adoption of federal government IT standards as a member of the IBM Software Group Government Standards Strategy Team and the IBM Corporate Security Executive Board focused on secure development and cybersecurity. Andras represents the IBM Software Group on the Board of Directors of The Open Group and currently holds the Chair of the IT Architect Profession Certification Standard (ITAC). More recently he was appointed chair of The Open Trusted Technology Forum.

4 Comments

Filed under Cybersecurity, Supply chain risk

An SOA Unconference

By Dr. Chris Harding, The Open Group

Monday at The Open Group Conference in San Diego was a big day for Interoperability, with an Interoperability panel session, SOA and Cloud conference streams, meetings of SOA and UDEF project teams, and a joint meeting with the IEEE on next-generation UDEF. Tuesday was quieter, with just one major interoperability-related session: the SOACamp. The pace picks up again today, with a full day of Cloud meetings, followed by a Thursday packed with members meetings on SOA, Cloud, and Semantic Interoperability.

Unconferences

The SOACamp was an unstructured meeting, based on the CloudCamp Model, for SOA practitioners and people interested in SOA to ask questions and share experiences.

CloudCamp is an unconference where early adopters of Cloud Computing technologies exchange ideas. The CloudCamp organization is responsible for these events. They are frequent and worldwide; 19 events have been held or arranged so far for the first half of 2011 in countries including Australia, Brazil, Canada, India, New Zealand, Nigeria, Spain, Turkey, and the USA. The Open Group has hosted CloudCamps at several of its Conferences, and is hosting one at its current conference in San Diego today.

What is an unconference? It is an event that follows an unscripted format in which topics are proposed and presented by volunteers, with the agenda being made up on the fly to address whatever the attendees most want to discuss. This format works very well for Cloud, and we thought we would give it a try for SOA.

The SOA Hot Topics

So what were the SOA hot topics? Volunteers gave 5-minute “lightning talks” on five issues, which were then considered as potential agenda items for discussion:

  • Does SOA Apply to Cloud Service Models?
  • Vendor-neutral framework for registry/repository access to encourage object re-use
  • Fine-grained policy-based authorization for exposing data in the Cloud
  • Relation of SOA to Cloud Architecture
  • Are all Cloud architectures SOA architectures?

The greatest interest was in the last two of these, and they were taken together as a single agenda item for the whole meeting: SOA and Cloud Architecture. The third topic, fine-grained policy-based authorization for exposing data in the Cloud, was considered to be more Cloud-related than SOA-related, and it was agreed to hold it back for the CloudCamp the following day. The other two topics, SOA and Cloud service models and a vendor-neutral framework for registry/repository access, were considered by separate subgroups meeting in parallel.

The discussions were lively and raised several interesting points.

SOA and Cloud Architecture

Cloud is a consumption and delivery model for SOA, but Cloud and SOA services are different. All Cloud services are SOA services, but not all SOA services are Cloud services, because Cloud services carry additional requirements for Quality of Service (QoS) and for delivery and consumption.

Cloud requires a different approach to QoS. Awareness of the run-time environment and elasticity is crucial for Cloud applications.

Cloud architectures are service-oriented, but they need additional architectural building blocks, particularly for QoS. They may be particularly likely to use a RESTful approach, but this is still service-oriented.

A final important point is that, within a service-oriented architecture, the Cloud is transparent to the consumer. The service consumer ultimately should not care whether a service is on the Cloud.
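That last point can be made concrete with a minimal sketch in Python (all class and method names below are invented for illustration): the consumer programs against a service contract, and whether the implementation runs on-premise or in the Cloud is a deployment decision the consumer never sees.

```python
from abc import ABC, abstractmethod


class CustomerLookupService(ABC):
    """The service contract the consumer depends on (hypothetical example)."""

    @abstractmethod
    def find(self, customer_id: str) -> dict: ...


class LocalCustomerLookup(CustomerLookupService):
    """Implementation backed by an on-premise data store."""

    def __init__(self, store: dict):
        self._store = store

    def find(self, customer_id: str) -> dict:
        return self._store[customer_id]


class CloudCustomerLookup(CustomerLookupService):
    """Implementation backed by a Cloud-hosted endpoint (HTTP client injected)."""

    def __init__(self, endpoint: str, fetch):
        self._endpoint = endpoint
        self._fetch = fetch  # e.g. a function wrapping an HTTP GET

    def find(self, customer_id: str) -> dict:
        return self._fetch(f"{self._endpoint}/customers/{customer_id}")


def customer_name(service: CustomerLookupService, customer_id: str) -> str:
    # The consumer sees only the contract, not where the service runs.
    return service.find(customer_id)["name"]
```

Either implementation can be handed to `customer_name`; swapping one for the other requires no change to the consuming code, which is the transparency the panel described.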

Vendor-Neutral Framework for Registry/Repository Access

The concept of vendor-neutral access to SOA registries and repositories is good, but it requires standard data models and protocols to be effective.

The Open Group SOA ontology has proved a good basis for a modeling framework.

Common methods for vendor-neutral access could help services in the Cloud connect to multiple registries and repositories.

Does SOA Apply to Cloud Service Models?

The central idea here is that the cloud service models – Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) – could be defined as services in the SOA sense, with each of them exposing capabilities through defined interfaces.

This would require standards in three key areas: metrics/QoS, brokering/subletting, and service prioritization.
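As a rough illustration of the first of those areas, the sketch below (Python; every name is invented, and this is a thought experiment rather than a proposed standard) shows how expressing QoS metrics uniformly across providers would make simple brokering possible:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class QoSTerms:
    """Metrics a consumer could compare across providers (illustrative)."""
    availability_pct: float    # e.g. 99.95
    max_latency_ms: int
    price_per_hour_usd: float


class ComputeService(Protocol):
    """An IaaS capability exposed through a defined SOA-style interface."""

    def provision(self, cpus: int, memory_gb: int) -> str: ...
    def release(self, instance_id: str) -> None: ...
    def qos(self) -> QoSTerms: ...


def cheapest_meeting_sla(providers: list[ComputeService],
                         min_availability: float) -> ComputeService:
    """Brokering becomes possible once QoS is expressed uniformly:
    filter providers by the required SLA, then pick the cheapest."""
    eligible = [p for p in providers if p.qos().availability_pct >= min_availability]
    return min(eligible, key=lambda p: p.qos().price_per_hour_usd)
```

Without an agreed vocabulary like `QoSTerms`, each provider describes availability and price in its own way, and this comparison cannot be automated; that is the gap the standards would fill.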

Is The Open Group an appropriate forum for setting and defining Cloud customer and provider standards? It has a standards development capability. The key determining factor is the availability of member volunteers with the relevant expertise.

Are Unconferences Good for Discussing SOA?

Cloud is an emerging topic while SOA is a mature one, and this affected the nature of the discussions. The unconference format is great for enabling people to share experience in new topic areas. The participants really wanted to explore new developments rather than compare notes on SOA practice, and the result of this was that the discussion mostly focused on the relation of SOA to the Cloud. This wasn’t what we expected – but resulted in some good discussions, exposing interesting ideas.

So is the unconference format a good one for SOA discussions? Yes it is – if you don’t need to produce a particular result. Just go with the flow, and let it take you and SOA to interesting new places.

Cloud and SOA are topics of discussion at The Open Group Conference, San Diego, which is currently underway.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF practitioner.

Comments Off

Filed under Cloud/SOA

FACE™: Exchanging proprietary avionics solutions for a standardized computing environment

By Judy Cerenzia, The Open Group

It’s hard to believe that only eight months ago we kicked off The Open Group Future Airborne Capability Environment (FACE™) Consortium, a collaborative group of avionics industry and U.S. Army, Navy and Air Force contributors who are working to develop standards for a common operating environment to support portable capability applications across Department of Defense (DoD) avionics systems. Our goal is to create an avionics software environment on installed computing hardware of war-fighting platforms that enables FACE™ applications to be deployed on different platforms without impact to the FACE™ applications. This approach to portable applications and interoperability will reduce development and integration costs and reduce time to field new avionics capabilities.

The FACE™ Technical Working Group is rapidly developing Version 1 of our technical standard, scheduled to be released later this year. The FACE™ Consortium strategy is changing the way government and industry do business by breaking down barriers to portability: exchanging proprietary solutions for a common, standardized computing environment and components. To enable this change, the Consortium’s Business Working Group is developing a Business Model Guide that defines stakeholders and their roles within a new business model, develops business scenarios that show how stakeholders affect or are affected by the business drivers in each, and investigates how contract terms, software licensing agreements, and IP rights may need to change to support procuring common components with standardized interfaces rather than a proprietary black-box solution from a prime contractor. The group is also using TOGAF™ checklists from Phases A and B of the ADM to ensure it has addressed all required business issues for the avionics enterprise.

We’ve grown from 74 individuals representing 14 organizations in June 2010 to over 200 participants from 20 government and industry partners to date. We’ve scheduled face-to-face meetings every 6 weeks, rotating among member locations to host the events and averaged over 70 attendees at each. Our next consortium meeting will be in the DC area March 2-3, 2011, hosted by the Office of Naval Research. I’m looking forward to seeing FACE™ colleagues, facilitating their working meeting, and forging ahead toward our mission to develop, evolve and publish a realistic open FACE™ architecture, standards and business model, and robust industry conformance program that will be supported and adopted by FACE™ customers, vendors, and integrators.

Judy Cerenzia is a Director of Collaboration Services for The Open Group, currently providing project management and facilitation support to the Future Airborne Capability Environment (FACE™) Consortium. She has more than 10 years of experience leading cross-functional and cross-organizational development teams to reach consensus, using proven business processes and best practices to achieve strategic and technical goals.

2 Comments

Filed under FACE™

The golden thread of interoperability

By Dr. Chris Harding, The Open Group

There are so many things going on at every Conference by The Open Group that it is impossible to keep track of all of them, and this week’s Conference in San Diego, California, is no exception. The main themes are Cybersecurity, Enterprise Architecture, SOA and Cloud Computing. Additional topics range from Real-Time and Embedded Systems to Quantum Lifecycle Management. But there are a number of common threads running through all of those themes, relating to value delivered to IT customers through open systems. One of those threads is Interoperability.

Interoperability Panel Session

The interoperability thread showed strongly in several sessions on the opening day of the conference, Monday Feb. 7, starting with a panel session on Interoperability Challenges for 2011 that I was fortunate to have been invited to moderate.

The panelists were Arnold van Overeem of Capgemini, chair of the Architecture Forum’s Interoperability project, Ron Schuldt, the founder of UDEF-IT and chair of the Semantic Interoperability Work Group’s UDEF project, TJ Virdi of Boeing, co-chair of The Open Group Cloud Computing Work Group, and Bob Weisman of Build-the-Vision, chair of The Open Group Architecture Forum’s Information Architecture project. The audience was drawn from many companies, both members and non-members of The Open Group, and made a strong contribution to the debate.

What is interoperability? The panel described several essential characteristics:

  • Systems with different owners and governance models work together;
  • They exchange and understand data automatically;
  • They form an information-sharing environment in which business information is available in the right context, to the right person, and at the right time; and
  • This environment enables processes, as well as information, to be shared.

Interoperability is not just about the IT systems. It is also about the ecosystem of user organizations, and their cultural and legislative context.

Semantics is an important component of interoperability. It is estimated that 65% of data warehouse projects fail because they cannot cope with huge numbers of differently defined data elements.
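The kind of mismatch behind those failures can be made concrete with a small sketch (Python; the shared identifiers below are invented for illustration and are not real UDEF IDs): two systems hold the same business fact under different element names, and semantic interoperability amounts to mapping both onto a shared vocabulary.

```python
# Two systems describe the same business facts with different element names.
SYSTEM_A = {"cust_nm": "Ada Lovelace", "inv_amt": 120.0}
SYSTEM_B = {"CustomerName": "Ada Lovelace", "InvoiceTotal": 120.0}

# Each local name is mapped to a shared, system-neutral identifier.
# (The identifiers below are invented; a framework such as the UDEF
# would supply standardized IDs instead.)
TO_SHARED_ID = {
    "cust_nm": "person.name", "CustomerName": "person.name",
    "inv_amt": "invoice.amount", "InvoiceTotal": "invoice.amount",
}


def normalize(record: dict) -> dict:
    """Re-key a record onto the shared vocabulary."""
    return {TO_SHARED_ID[key]: value for key, value in record.items()}


# Once normalized, records from both systems are directly comparable.
assert normalize(SYSTEM_A) == normalize(SYSTEM_B)
```

The hard part in practice is not the re-keying but agreeing and maintaining the mapping table at enterprise scale, which is precisely where a standard framework earns its keep.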

There is a constant battle for interoperability. Systems that lock customers in by refusing to interoperate with those of other vendors can deliver strong commercial profit. This strategy is locally optimal but globally disastrous; it gives benefits to both vendors and customers in the short term, but leads in the longer term to small markets and siloed systems.  The front line is shifting constantly. There are occasional resounding victories – as with the introduction of the Internet – but the normal state is trench warfare with small and painful gains and losses.

Blame for lack of interoperability is often put on the vendors, but this is not really fair. Vendors must work within what is commercially possible. Customer organizations can help the growth of interoperability by applying pressure and insisting on support for standards. This is in their interests; integration required by lack of interoperability is currently estimated to account for over 25% of IT spend.

SOA has proved a positive force for interoperability. By embracing SOA, a customer organization can define its data model and service interfaces, and tender for competing solutions that conform to its interfaces and meet its requirements. Services can be shared processing units forming part of the ecosystem environment.

The latest IT phenomenon is Cloud Computing. This is in some ways reinforcing SOA as an interoperability enabler. Shared services can be available on the Cloud, and the ease of provisioning services in a Cloud environment speeds up the competitive tendering process.

But there is one significant area in which Cloud computing gives cause for concern: lack of interoperability between virtualization products. Virtualization is a core enabling technology for Cloud Computing, and virtualization products form the basis for most private Cloud solutions. These products are generally vendor-specific and without interoperable interfaces, so that it is difficult for a customer organization to combine different virtualization products in a private Cloud, and easy for it to become locked in to a single vendor.

There is a need for an overall interoperability framework within which standards can be positioned, to help customers express their interoperability requirements effectively. This framework should address cultural and legal aspects, and architectural maturity, as well as purely technical aspects. Semantics will be a crucial element.

Such a framework could assist the development of interoperable ecosystems, involving multiple organizations. But it will also help the development of architectures for interoperability within individual organizations – and this is perhaps of more immediate concern.

The Open Group can play an important role in the development of this framework, and in establishing it with customers and vendors.

SOA/TOGAF Practical Guide

SOA is an interoperability enabler, but establishing SOA within an enterprise is not easy to do. There are many stakeholders involved, with particular concerns to be addressed. This presents a significant task for enterprise architects.

TOGAF® has long been established as a pragmatic framework that helps enterprise architects deliver better solutions. The Open Group is developing a practical guide to using TOGAF® for SOA, as a joint project of its SOA Work Group and The Open Group Architecture Forum.

This work is now nearing completion. Ed Harrington of Architecting-the-Enterprise had overcome the considerable difficulty of assembling and adding to the material created by the project to form a solid draft. This was discussed in detail by a small group, with some participants joining by teleconference. As well as Ed, this group included Mats Gejnevall of Capgemini and Steve Bennett of Oracle, and it was led by project co-chairs Dave Hornford of Integritas and Awel Dico of the Bank of Montreal.

The discussion resolved all the issues, enabling the preparation of a draft for review by The Open Group, and we can expect to see this valuable guide published at the conclusion of the review process.

UDEF Deployment Workshop

The importance of semantics for interoperability was an important theme of the interoperability panel discussion. The Open Group is working on a specific standard that is potentially a key enabler for semantic interoperability: the Universal Data Element Framework (UDEF).

It had been decided at the previous conference, in Amsterdam, that the next stage of UDEF development should be a deployment workshop. This was discussed by a small group, under the leadership of UDEF project chair Ron Schuldt, again with some participation by teleconference.

The group included Arnold van Overeem of Capgemini, Jayson Durham of the US Navy, and Brand Niemann of the Semantic Community. Jayson is a key player in the Enterprise Lexicon Services (ELS) initiative, which aims to provide critical information interoperability capabilities through common lexicon and vocabulary services. Brand is a major enthusiast for semantic interoperability with connections to many US semantic initiatives, and currently to the Air Force OneSource project in particular, which is evolving a data analysis tool used internally by the USAF Global Cyberspace Integration Center (GCIC) Vocabulary Services Team and made available to the general data management community. The participation of Jayson and Brand provided an important connection between the UDEF and other semantic projects.

As a result of the discussions, Ron will draft an interoperability scenario that can be the basis of a practical workshop session at the next conference, which is in London.

Complex Cloud Environments

Cloud Computing is the latest hot technology, and its adoption is having some interesting interoperability implications, as came out clearly in the Interoperability panel session. In many cases, an enterprise will use, not a single Cloud, but multiple services in multiple Clouds. These services must interoperate to deliver value to the enterprise. The Complex Cloud Environments conference stream included two very interesting presentations on this.

The first, by Mark Skilton and Vladimir Baranek of Capgemini, showed how new notations for Cloud can build better understanding, and so better adoption, of new Cloud-enabled services, and can capture the impact of social and business networks. As Cloud environments become increasingly complex, the need to explain them clearly grows. Consumers and vendors of Cloud services must be able to communicate. Stakeholders in consumer organizations must be able to discuss their concerns about the Cloud environment. The work presented by Mark and Vladimir grew from discussions in a CloudCamp held at a previous Conference by The Open Group. We hope that it can now be developed by The Open Group Cloud Computing Work Group into a powerful and sophisticated language to address this communication need.

The second presentation, from Soobaek Jang of IBM, addressed the issue of managing and coordinating across a large number of instances in a Cloud Computing environment. He explained an architecture for “Multi-Node Management Services” that acts as a framework for auto-scaling in a SaaS lifecycle, putting structure around self-service activity, and providing a simple and powerful web service orientation that allows providers to manage and orchestrate deployments in logical groups.
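The auto-scaling idea at the heart of that presentation can be sketched in a few lines. The rule below is illustrative only, not IBM’s actual Multi-Node Management Services: scale each logical group of instances so that its average CPU utilization moves toward a target, clamped to a configured range.

```python
import math


def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.6, min_n: int = 1, max_n: int = 10) -> int:
    """Proportional scaling rule for one logical group of instances.

    If the group is running hotter than the target, grow it; if cooler,
    shrink it; never go outside the [min_n, max_n] bounds.
    """
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_n, min(max_n, wanted))


def rebalance(groups: dict[str, int], utilization: dict[str, float]) -> dict[str, int]:
    """Apply the rule per logical group, as a multi-node manager might."""
    return {name: desired_instances(count, utilization[name])
            for name, count in groups.items()}
```

A framework like the one Soobaek described wraps this kind of policy in a web service interface, so that providers can orchestrate deployments group by group rather than instance by instance.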

SOA Conference Stream

The principal presentation in this stream picked up on one of the key points from the Interoperability panel session in a very interesting way. It showed how a formal ontology can be a practical basis for common operation of SOA repositories. Semantic interoperability is at the cutting edge of interoperability, and is more often the subject of talk than of action. The presentation included a demonstration, and it was great to see the ideas put to real use.

The presentation was given jointly by Heather Kreger, SOA Work Group Co-chair, and Vince Brunssen, Co-chair of SOA Repository Artifact Model and Protocol (S-RAMP) at OASIS. Both presenters are from IBM. S-RAMP is an emerging standard from OASIS that enables interoperability between tools and repositories for SOA. It uses the formal SOA Ontology that was developed by The Open Group, with extensions to enable a common service model as well as an interoperability protocol.

This presentation illustrated how S-RAMP and the SOA Ontology work in concert with The Open Group SOA Governance Framework to enable governance across vendors. It contained a demonstration that included defining new service models with the S-RAMP extensions in one SOA repository and communicating with another repository to augment its service model.
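The value of a shared model can be sketched in a few lines of Python (the repository API and model fields below are invented for illustration and are not the S-RAMP protocol): once two repositories describe services in the same ontology-derived terms, a model published in one can be replicated to the other without any per-vendor mapping.

```python
class ServiceRepository:
    """A toy SOA repository holding service models in shared terms."""

    def __init__(self, name: str):
        self.name = name
        self._models = {}  # service name -> model in the shared vocabulary

    def publish(self, model: dict) -> None:
        # Models are keyed and described using shared ontology-style terms
        # such as "Service", "ServiceInterface", "ServiceContract".
        self._models[model["Service"]] = model

    def get(self, service: str) -> dict:
        return self._models[service]

    def replicate_to(self, other: "ServiceRepository", service: str) -> None:
        """Because both ends speak one model, no translation layer is needed."""
        other.publish(self.get(service))


repo_a = ServiceRepository("vendor-A")
repo_b = ServiceRepository("vendor-B")
repo_a.publish({"Service": "OrderLookup",
                "ServiceInterface": "getOrder(orderId)",
                "ServiceContract": "99.9% availability"})
repo_a.replicate_to(repo_b, "OrderLookup")
```

In the demonstration, the same principle operated between real repositories from different vendors, with the SOA Ontology supplying the shared vocabulary that the dictionaries stand in for here.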

To conclude the session, I gave a brief presentation on SOA in the Cloud – the Next Challenge for Enterprise Architects. This discussed how the SOA architectural style is widely accepted as the style for enterprise architecture, and how Cloud Computing is a technical possibility that can be used in enterprise architecture. Architectures using Cloud computing should be service-oriented, but this poses some key questions for the architect. Architecture governance must change in the context of Cloud-based ecosystems. It may take some effort to keep to the principles of the SOA style – but it will be important to do this. And the organization of the infrastructure – which may migrate from the enterprise to the Cloud – will present an interesting challenge.

Enabling Semantic Interoperability Through Next Generation UDEF

The day was rounded off by an evening meeting, held jointly with the local chapter of the IEEE, on semantic interoperability. The meeting featured a presentation by Ron Schuldt, UDEF Project Chair, on the history, current state, and future goals of the UDEF.

The importance of semantics as a component of interoperability was clear in the morning’s panel discussion. In this evening session, Ron explained how the UDEF can enable semantic interoperability, and described the plans of the UDEF Project Team to expand the framework to meet the evolving needs of enterprises today and in the future.

This meeting was arranged through the good offices of Jayson Durham, and it was great that local IEEE members could join conference participants for an excellent session.

Cloud is a topic of discussion at The Open Group Conference, San Diego, which is currently underway.

Dr Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF practitioner.

3 Comments

Filed under Cloud/SOA, TOGAF®

TOGAF® Trademark Success

By Garry Doherty, The Open Group

I’d like to take note of two major milestones that The Open Group has reached recently: TOGAF® became a registered trademark, and TOGAF® certifications passed 15,200. Both achievements signify the growing importance of open standards to organizations, their stakeholders, and even their employees, and underscore the value to be gained from trusted, globally accepted standards.

These validations of the growth of TOGAF® and the value of the TOGAF® brand have come during one of the most turbulent economic times in recent memory. As organizations have struggled financially, they have been forced to look at their organizational and business models and determine where they could cut spending dramatically. Obviously IT budgets were a large part of those evaluations.

Open standards, such as TOGAF®, can help organizations better manage difficult times by providing a framework that allows enterprise architects to help their companies save money, maintain and enhance profitability and improve efficiencies. TOGAF®’s tremendous growth over the past few years is a testament to not only how much open enterprise architecture frameworks are needed within organizations today, but also to how certifications like TOGAF® can help professionals differentiate themselves and remain secure in their employment when staff cutting is rampant throughout most industries.

As with The Open Group’s stewardship of the registered trademark for UNIX®, we’ve successfully steered TOGAF® to a position of global significance using our breadth of experience in the development of open standards to reach 83 countries worldwide, from Afghanistan to Vietnam. TOGAF® is currently available in English, Chinese and Japanese with pocket guides available in Chinese, Dutch, French and German.

The Open Group is working hard to ensure that open standards are in place that organizations can rely on. Our pedigree reflects over 20 years of developing successful global standards such as TOGAF®, UNIX, LDAP and WAP, working with member organizations to enhance them and collect best practices along the way.

So, congratulations to all of the individuals and organizations within The Open Group and The Open Group Architecture Forum for making TOGAF® such a success and a globally recognized, registered trademark. We look forward to the future of TOGAF® and many more milestones to come!

TOGAF® is a topic of discussion at The Open Group Conference, San Diego this week. Join us for TOGAF® Camp, best practices, case studies and all things Enterprise Architecture, presented by preeminent thought leaders in the industry, during our conferences held around the world.

Garry Doherty is an experienced product marketer and product manager with a background in the IT and telecommunications industries. Garry is the TOGAF® Product Manager and the ArchiMate® Forum Director at The Open Group. Garry is based in the U.K.

1 Comment

Filed under Enterprise Architecture, TOGAF®

Seeing above the Clouds

By Mark Skilton, Capgemini

Genie out of the bottle

I recently looked back at some of the papers that have most influenced my thinking on Cloud Computing, as part of a review of current strategic trends. One stands out: “Above the Clouds: A Berkeley View of Cloud Computing”, published at the University of California, Berkeley in February 2009, the first of many papers to lay out the issues around the promise of Cloud Computing and the technology barriers to achieving secure, elastic services. The key issue unfolding at that time was the transfer of risk that resulted from moving to a Cloud environment, and the obstacles to security, performance and licensing that would need to be overcome. But the genie was out of the bottle: early adopters could already see the cost savings and rapid one-to-many monetization benefits of on-demand services.

Worlds reloaded – Welcome to the era of multiplicity

A second key moment I can recall was the realization that the exchange of services was no longer a simple request and response. Social networks had already demonstrated huge communities of collaboration, with online “personas” changing individual and business network interactions. But something less obvious and more profound had also happened. The change was most evident in the proliferation of mobile computing, which greatly expanded the original on-premise move to off-premise services. A key paper from Intel Research titled “CloneCloud”, published around the same period, exemplified this shift: services could be cloned and moved into the Cloud, redefining the potential of how work gets done with Cloud Computing. The key point was that storage, processing transactions, media streaming and complex calculations no longer had to be executed on a physical device; they could be provided as a service from a remote source, a virtual Cloud service. More significant still was the term “multiplicity” in this concept. We see it every day as we download apps, stream video and transact orders. You could perform not just a few but many tasks simultaneously, and pick and choose the services and results.

New thinking, new language

This signaled a big shift away from the old style of thinking about business services, which had conditioned us to frame service-oriented requests in static, tiered, rigid ways. Those business processes and services missed the new, bigger picture. Just look at the phenomenon of hyperlocal services that offer location-specific, on-demand information, or at how crowdsourcing can dramatically transform purchasing choices and collaboration incentives. Traditional ways of measuring, modeling and running business operations underutilize this potential and undervalue what is possible in these new collaborative networks. The new, multiplicity-based world of Cloud-enabled networks means you can augment yourself and your company’s assets in ways that change the shape of your industry. What is needed is a new language to describe how this shift feels and works, and how it can advance your business portfolio: one that examines current methods and standards of strategy visualization, metrics and design to evolve a new expression of this potential.

Future perfect?

Some two years have passed, and what has been achieved? Certainly we have seen a huge proliferation of services into Cloud hosting environments, and large strategic moves in private data centers to develop private Cloud services, bringing together social media and social networking through Cloud technologies. But what is needed now is a new connection between the potential of these technologies and the vision of the Internet, as the growth of social graph associations and wider communities and ecosystems emerges in the movement’s wake.

With every significant disruptive change comes the need for a new language to describe the new world, and open standards and industry forums will help drive this. The old language focuses on the previous potential; a new way to visualize, define and use the new realities can help power the big shift towards the potential above the Cloud.

This post was simultaneously published on the BriefingsDirect blog by Dana Gardner.

Mark Skilton is a Director at Capgemini and Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing, which achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include the development of new Cloud Computing model standards and best practices on the impact of Cloud Computing on outsourcing and offshoring models. He contributed to the second edition of the Handbook of Global Outsourcing and Offshoring through his involvement with the Warwick Business School (UK) Specialist Masters Degree Program in Information Systems Management.

Comments Off

Filed under Cloud/SOA

TOGAF™ to the Platform: Developing Dependability Cases, 2011 RTESF San Diego Meeting

By G. Edward Roberts, Elparazim

The Open Group RTES (Real-Time Embedded Systems) Forum has embarked on a project to define an RTES version of TOGAF™. To accomplish this task, the Forum has looked at technologies and techniques that represent best-of-breed practices in the industry. So far, the Forum has studied the modeling side of development with the AADL (Architecture Analysis and Design Language) standard from the SAE (Society of Automotive Engineers), and with SysML (Systems Modeling Language) and MARTE (Modeling and Analysis of Real-Time and Embedded Systems) from the OMG (Object Management Group). These technologies and their use will definitely figure in the guidelines being added to this vertical domain instance of TOGAF™.

In this afternoon’s session of the Forum at The Open Group Conference, San Diego, we will continue a discussion started in a September 2010 webinar. That webinar outlined proposals from some members on what they thought the Forum could accomplish in the development of Dependability Cases for systems. One interesting proposal was the development of a multi-level taxonomy/ontology of Assurance attributes that would need to be captured by any tool supporting the development of Dependability Cases. These discussions will help shape the roadmap for the Forum’s work in this area.

At this Conference, the RTES Forum will start to examine the technologies and techniques in the industry surrounding the development of Dependability Cases. Many systems lack dependability (a.k.a. Assurance) in certain areas, e.g. MILS, security or deadlock avoidance, because insufficiently detailed development fails to detect flaws (unstated assumptions, missing data, lack of testing) in the design of a Real-Time and/or Embedded System. In the past, a system aspiring to a high level of Assurance in some area had to be formally (i.e. mathematically) proved correct, using what are called “Formal Methods”. This was an extremely costly endeavor. The industry has recognized this dilemma and shown that a somewhat lesser degree of Assurance can be obtained by making a formal, structured argument that the system meets certain requirements, i.e. a Dependability Case, which keeps track of the evidence one must provide to prove the case. The technique can represent formal methods as well as these lesser Assurance arguments.
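To make the idea concrete: a Dependability Case is essentially a tree in which top-level goals (claims about the system) are decomposed by strategies into sub-goals, each ultimately supported by evidence. The sketch below is a minimal, hypothetical illustration of that structure in the style of GSN; the class names, the example claims and the evidence items are my own invention, not part of any standard or tool mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in a GSN-style dependability case: a goal (claim),
    a strategy (how the claim is decomposed), or an item of evidence."""
    kind: str                     # "goal", "strategy", or "evidence"
    text: str
    children: list = field(default_factory=list)

def undeveloped_goals(node):
    """Return goals with no supporting argument or evidence --
    exactly the gaps that weaken the Assurance argument."""
    gaps = []
    if node.kind == "goal" and not node.children:
        gaps.append(node.text)
    for child in node.children:
        gaps.extend(undeveloped_goals(child))
    return gaps

# A toy case for the deadlock-avoidance example from the text.
case = Node("goal", "System avoids deadlock", [
    Node("strategy", "Argue over all shared-resource locks", [
        Node("goal", "Locks are always acquired in a fixed order", [
            Node("evidence", "Static analysis report, build 142"),
        ]),
        Node("goal", "No lock is held across a blocking call"),  # not yet supported
    ]),
])

print(undeveloped_goals(case))  # -> ['No lock is held across a blocking call']
```

The payoff of keeping the case in a structured form is that tooling can mechanically flag undeveloped goals, which is the kind of bookkeeping a Dependability Case tool must do for its evidence.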

On Tuesday during the Conference, there will be a set of presentations on Dependability Case technologies and the processes needed to develop them. First, I will give the Forum an update on the work currently being done on this project, including the work on modeling TOGAF™ and its importance to the RTES effort. The second presentation will look at the technologies surrounding Dependability Cases: ARM (Argumentation Metamodel) and SAEM (Software Assurance Evidence Metamodel) from the SysA group in the OMG, soon to be combined into a single standard, SACM (Structured Assurance Case Metamodel); GSN (Goal Structuring Notation); and a general discussion of Stephen Toulmin’s model of reasoning, which has influenced these technologies.

The third lecture, by Rance DeLong of LynuxWorks, will deal with some of the theory and practice of building Dependability Cases, drawing on his recent work on MILS Protection Profiles. It will address Compositional Certification: given components that each have some level of Assurance, how does one combine them into systems that are assured? The lecture will also cover the Common Criteria Authoring Environment and new MILS research directions.

The fourth and final presentation on this topic, right after lunch on Tuesday at 1:30pm, will be given by Dr. Matsuno of the University of Tokyo on D-Case technology: a process, and a soon-to-be-released tool on the Eclipse platform, for developing Dependability Cases for systems. The Forum is excited to have Dr. Matsuno present and hopes that this will open up a process description that will become part of the RTES plugin to TOGAF™.

G. Edward Roberts is the owner of Elparazim, a consulting company focused on enterprise/software architecture and development. Edward holds degrees in Electrical Engineering and Mathematics, and worked for most of his professional life as an Advanced Technology Researcher for the US Navy. He is currently working with the Real-Time Embedded Systems Forum, of which he is a member, to develop a domain-specific TOGAF™ for that sector, and with the Architecture Forum (of which he is also a member) to model TOGAF 9. Edward is a TOGAF™ 9 Certified Architect and a certified Professional Engineer in EE.

Comments Off

Filed under Enterprise Architecture, TOGAF®, Uncategorized