
Build Upon SOA Governance to Realize Cloud Governance

By E.G. Nadhan, HP

The Open Group SOA Governance Framework just became an International Standard available to governments and enterprises worldwide. At the same time, I read an insightful post by ZDNet blogger Joe McKendrick, who states that Cloud and automation drive new growth in the SOA governance market. I have always maintained that the fundamentals of Cloud Computing are based upon SOA principles. This brings up the next natural question: Where are we with Cloud Governance?

I co-chair the Open Group project for defining the Cloud Governance framework. Fundamentally, the Cloud Governance framework builds upon The Open Group SOA Governance Framework and provides additional context for Cloud Governance in relation to other governance standards in the industry. We are with Cloud Governance today where we were with SOA Governance a few years back when The Open Group started on the SOA Governance framework project.

McKendrick goes on to say that the tools and methodologies built and stabilized over the past few years for SOA projects are seeing renewed life as enterprises move to the Cloud model. In McKendrick’s words, “it is just a matter of getting the word out.” That may be the case for the SOA governance market. But, is that so for Cloud Governance?

When it comes to Cloud Governance, it is more than just getting the word out. We must make progress in the following areas for Cloud Governance to become real:

  • Sustained adoption. Enterprises must continuously adopt cloud-based services, balancing them against outsourcing alternatives. This will give more visibility to the real-life use cases where Cloud Governance can be exercised to validate and refine the enabling set of governance models.
  • Framework definition. Cloud Governance needs a standard framework to facilitate its adoption. Just like the SOA Governance Framework, the definition of a standard for the Cloud Governance Framework, along with the supporting reference models, will pave the way for the consistent adoption of Cloud Governance.

Once these progressions are made, Cloud Governance will be positioned like SOA Governance—and it will then be just a “matter of getting the word out.”

A version of this blog post originally appeared on the Journey through Enterprise IT Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G.Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, Linkedin and Journey Blog.


Filed under Cloud, Cloud/SOA

SOA Provides Needed Support for Enterprise Architecture in Cloud, Mobile, Big Data, Says Open Group Panel

By Dana Gardner, BriefingsDirect

There’s been a resurgent role for service-oriented architecture (SOA) as a practical and relevant ingredient for effective design and use of Cloud, mobile, and big data technologies.

To find out why, The Open Group recently gathered an international panel of experts to explore the concept of “architecture is destiny,” especially when it comes to hybrid services delivery and management. The panel shows how SOA is proving instrumental in enabling the needed advances in managing highly distributed services and data, when it comes to scale, heterogeneity support, and governance.

The panel consists of Chris Harding, Director of Interoperability at The Open Group, based in the UK; Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture Projects within The Open Group, based in Michigan; and Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group, based in Sweden. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

The full podcast can be found here.

Here are some excerpts:

Gardner: Why this resurgence in the interest around SOA?

Harding: My role in The Open Group is to support the work of our members on SOA, Cloud computing, and other topics. We formed the SOA Work Group back in 2005, when SOA was a real emerging hot topic, and we set up a number of activities and projects. They’re all completed.

I was thinking that the SOA Work Group would wind down, move into maintenance mode, and meet once every few months or so, but we still get a fair attendance at our regular web meetings.

In fact, we’ve started two new projects and we’re about to start a third one. So, it’s very clear that there is still an interest, and indeed a renewed interest, in SOA from the IT community within The Open Group.

Larger trends

Gardner: Nikhil, do you believe that this has to do with some of the larger trends we’re seeing in the field, like Cloud Software as a Service (SaaS)? What’s driving this renewal?

Kumar: What I see driving it is three things. One is the advent of the Cloud and mobile, which requires a lot of cross-platform delivery of consistent services. The second is emerging technologies, mobile, big data, and the need to be able to look at data across multiple contexts.

The third thing that’s driving it is legacy modernization. A lot of organizations are now a lot more comfortable with SOA concepts. I see it in a number of our customers. I’ve just been running a large Enterprise Architecture initiative in a Fortune 500 customer.

At each stage, and at almost every point in that, they’re now comfortable. They feel that SOA can provide the ability to rationalize multiple platforms. They’re restructuring organizational structures, delivery organizations, as well as targeting their goals around a service-based platform capability.

So legacy modernization is a back-to-the-future kind of thing that has come back and is getting adoption. The way it’s being implemented is using RESTful services, as well as SOAP services, which is different from the previous wave of SOA, which was mostly SOAP-driven.

Gardner: Mats, do you think that what’s happened is that the marketplace and the requirements have changed and that’s made SOA more relevant? Or has SOA changed to better fit the market? Or perhaps some combination?

Gejnevall: I think that the Cloud is really a service delivery platform. Companies discover that to be able to use the Cloud services, the SaaS things, they need to look at SOA as their internal development way of doing things as well. They understand they need to do the architecture internally, and if they’re going to use lots of external Cloud services, you might as well use SOA to do that.

Also, if you look at the Cloud suppliers, they also need to do their architecture in some way and SOA probably is a good vehicle for them. They can use that paradigm and also deliver what the customer wants in a well-designed SOA environment.

Gardner: Let’s drill down on the requirements around the Cloud and some of the key components of SOA. We’re certainly seeing, as you mentioned, the need for cross-support for legacy and Cloud types of services, using a variety of protocols, transports, and integration types. We already heard about REST for lightweight approaches and, of course, there will still be the need for object brokering and some of the more traditional enterprise integration approaches.

This really does sound like the job for an Enterprise Service Bus (ESB). So let’s go around the panel and look at this notion of an ESB. Some people, a few years back, didn’t think it was necessary or a requirement for SOA, but it certainly sounds like it’s the right type of functionality for the job.

Loosely coupled

Harding: I believe so, but maybe we ought to consider that in the Cloud context, you’re not just talking about within a single enterprise. You’re talking about a much more loosely coupled, distributed environment, and the ESB concept needs to take account of that in the Cloud context.

Gardner: Nikhil, any thoughts about how to manage this integration requirement around the modern SOA environment and whether ESBs are more or less relevant as a result?

Kumar: In the context of the Cloud, we really see SOA and the concept of service contracts coming to the fore. In that scenario, ESBs play a role as a broker within the enterprise. When we talk about the interaction across Cloud-service providers and Cloud consumers, what we’re seeing is that the service provider has its own concept of an ESB within its own internal context.

If you want your Cloud services to be really reusable, the concept of the ESB then becomes more for the routing and the mediation of those services, once they’re provided to the consumer. There’s a kind of separation of concerns between the concept of a traditional ESB and a Cloud ESB, if you want to call it that.

The Cloud context involves more of the need to be able to support, enforce, and apply governance and audit concepts, and the capability to ensure that interactions meet quality-of-service guarantees. That’s a little different from the concept that drove traditional ESBs.

That’s why you’re seeing API management platforms like Layer 7, Mashery, or Apigee, and other product lines. They’re also coming into the picture, driven by the need to support the way Cloud providers are provisioning their services. As Chris put it, you’re looking beyond the enterprise. Who owns it? That’s where the role of the ESB is different from the traditional concept.

Most Cloud platforms have cost factors associated with locality. If you have truly global enterprises and services, you need to factor in the ability to deal with safe-harbor issues, as well as variations in law in terms of security governance.

The platforms that are evolving are starting to provide this out of the box. The service consumer or service provider needs to be able to support those. That’s going to become the role of the ESB in the future: to be able to consume a service, assert a quality-of-service guarantee, and manage constraints on data in flight and data at rest.
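This mediation-and-governance role can be sketched in a few lines. The Python fragment below is purely illustrative (the function and policy names are invented, not from any product or from this discussion): a broker invokes a service, records the call for audit, and enforces a simple quality-of-service guarantee before returning the result.

```python
# Illustrative sketch of the Cloud-side "ESB" role: mediate a service call,
# enforce a simple quality-of-service policy, and record the call for audit.

import time


class PolicyViolation(Exception):
    """Raised when a call breaks the quality-of-service guarantee."""


def mediate(call, max_latency_s: float, audit_log: list):
    """Invoke a service callable, timing it against a latency guarantee."""
    start = time.monotonic()
    result = call()
    elapsed = time.monotonic() - start
    # Every interaction is recorded, whether or not it met the policy.
    audit_log.append({"elapsed_s": elapsed, "ok": elapsed <= max_latency_s})
    if elapsed > max_latency_s:
        raise PolicyViolation(f"latency {elapsed:.3f}s exceeds {max_latency_s}s")
    return result


if __name__ == "__main__":
    log = []
    value = mediate(lambda: "payload", max_latency_s=1.0, audit_log=log)
    print(value, log[0]["ok"])
```

A real broker would also handle routing, identity, and data-in-flight constraints, but the shape is the same: the policy check sits between consumer and provider rather than inside either one.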

Gardner: Mats, are there other aspects of the concept of ESB that are now relevant to the Cloud?

Entire stack

Gejnevall: One of the reasons SOA didn’t really take off in many organizations three, four, or five years ago was the need to buy the entire stack of SOA products that the consultancies were asking companies to buy: an ESB, governance tools, business process management tools, and other quite large investments, just to get your foot in the door of doing SOA.

These days, you can buy the entire stack in the Cloud and start playing with it. I did some searches today and found a company whose entire stack, including business tools, you can play with for zero dollars. Then you can grow into using more and more of it in your business, but you can start by seeing whether this is something for you.

In the past, the suppliers or the consultants told you that you could do it. You couldn’t really try it out yourself. You needed both the software and the hardware in place. The money to get started is much lower today. That’s another reason people might be thinking about it these days.

Gardner: It sounds as if there’s a new type of on-ramp to SOA values, and the componentry that supports SOA is now being delivered as a service. On top of that, you’re also able to consume it in a pay-as-you-go manner.

Harding: That’s a very good point, but there are two contradictory trends we are seeing here. One is the kind of trend that Mats is describing, where the technology you need to handle a complex stack is becoming readily available in the Cloud.

And the other is the trend that Nikhil mentioned: to go for a simpler style, which a lot of people term REST, for accessing services. It will be interesting to see how those two tendencies play out against each other.

Kumar: I’d like to make a comment on that. The approach for the on-ramp is really one of the key differentiators of the Cloud, because you have the agility and the lack of capital investment (CAPEX) required to test things out.

But as we evolve with Cloud platforms, I’m also seeing, in a lot of Platform-as-a-Service (PaaS) vendor scenarios, that they’re including the ESB in the stack itself. They’re providing it in their Cloud fabric. A couple of large players have already done that.

For example, Azure provides that in the forward-looking vision. I am sure IBM and Oracle have already started down that path. A lot of the players are going to provide it as a core capability.

Pre-integrated environment

Gejnevall: Another interesting thing is that they could get a whole environment that’s pre-integrated. Usually, when you buy these things from a vendor, a lot of times they don’t fit together that well. Now, there’s an effort to make them work together.

But some people have put these open-source tools together and put them out on the Cloud, which gives them a pretty cheap platform. Then, they can sell it at a reasonable price, because of the integration of all these things.

Gardner: The Cloud model may be evolving toward an all-inclusive offering. But SOA, by its definition, advances interoperability, to plug and play across existing, current, and future sets of service possibilities. Are we talking about SOA being an important element of keeping Clouds dynamic and flexible — even open?

Kumar: We can think about the OSI 7-layer model. We’re evolving in terms of complexity, right? So from an interoperability perspective, we may talk SOAP or REST, for example, but the interaction with AWS, Salesforce, SmartCloud, or Azure would involve using the APIs that each of these platforms provides for interaction.

Lock-in

So you could have an AMI, which is an image in the Amazon Web Services environment, for example, and that could support a LAMP stack or another open-source stack. How you interact with it, how you monitor it, how you cluster it, all of those aspects now start factoring in specific APIs, and so that’s the lock-in.

From an architect’s perspective, I look at it as we need to support proper separation of concerns, and that’s part of [The Open Group] SOA Reference Architecture. That’s what we tried to do, to be able to support implementation architectures that support that separation of concerns.

There’s another factor that we need to understand from the context of the Cloud, especially for mid-to-large sized organizations, and that is that the Cloud service providers, especially the large ones — Amazon, Microsoft, IBM — encapsulate infrastructure.

If you were to go to Amazon, Microsoft, or IBM and use their IaaS networking capabilities, you’d have one of the largest WAN networks in the world, and you wouldn’t have to pay a dime to establish that infrastructure. Not in terms of the cost of the infrastructure, not in terms of the capabilities required, nothing. So that’s an advantage that the Cloud is bringing, which I think is going to be very compelling.

The other thing is that, from an SOA context, you’re now able to look at it and say, “Well, I’m dealing with the Cloud, and what all these providers are doing is making it seamless, whether you’re dealing with the Cloud or on-premise.” That’s an important concept.

Now, each of these providers and different aspects of their stacks are at significantly different levels of maturity. Many of these providers may find that their stacks do not interoperate with themselves either, within their own stacks, just because they’re using different run times, different implementations, etc. That’s another factor to take in.

From an SOA perspective, the Cloud has become very compelling, because I’m dealing, let’s say, with Salesforce.com and I want to use that same service within the enterprise, say, an insurance capability, with Microsoft Dynamics or SugarCRM. If that capability is exposed as one source of truth in the enterprise, you’ve now reduced the complexity and have the ability to adopt different Cloud platforms.

What we are going to start seeing is that the Cloud is going to shift from being just one à-la-carte solution for everybody. It’s going to become something similar to what we used to deal with in the enterprise context. You had multiple applications, which you service-enabled to reduce complexity and provide one service-based capability, instead of an application-centered approach.

You’re now going to move that context to the Cloud, to your multiple Cloud solutions, maybe with many implementations of the same business capability in a nontrivial environment, but they are now exposed as services in the enterprise SOA. You could have Salesforce. You could have Amazon. You could have an IBM implementation. And you could pick and choose the source of truth and share it.

So a lot of the core SOA concepts will still apply and are still applying.

Another on-ramp

Gardner: Perhaps yet another on-ramp to the use of SOA is the app store, which allows for discovery and socialization of services, while at the same time providing governance and control?

Kumar: We’re seeing that with a lot of our customers. Typically, the vendors who support PaaS solutions associate app store models with their platforms as a mechanism to gain market share.

The issue that you run into with that is that it’s okay if it’s on your cellphone, your iPad, your tablet PC, or whatever, but once you start having managed apps, for example in Salesforce, or applications being deployed in an Azure or SmartCloud context, you have a high-risk scenario. You don’t know how well architected that application is. It’s just like going and buying an enterprise application.

When you deploy it in the Cloud, you really need to understand that particular Cloud PaaS platform to understand the implications in terms of dependencies and cross-dependencies across the apps that you have installed. They have real practical implications in terms of maintainability and performance. We’ve seen that with at least two platforms in the last six months.

Governance becomes extremely important. Because of the low CAPEX implications to the business, the business is very comfortable with going and buying these applications and saying, “We can install X, Y, or Z and it will cost us two months and a few million dollars and we are all set.” Or maybe it’s a few hundred thousand dollars.

They don’t realize the implications in terms of interoperability, performance, and standard architectural quality attributes that can occur. There is a governance aspect from the context of the Cloud provisioning of these applications.

There is another aspect to it, which is governance in terms of the run-time, more classic SOA governance: to measure, assert, and view the cost of these applications in terms of performance against your infrastructure resources and your security constraints. Also, are there scenarios where the application itself has a dependency on a daisy chain of multiple external applications, to trace the data?

In terms of the context of app stores, they’re almost like SaaS with a particular platform in mind. They provide the buyer with certain commitments from the platform manager or the platform provider, such as security. When you buy an app from Apple, there is at least a reputational expectation of security from the vendor.

What you do not always know is if that security is really being provided. There’s a risk there for organizations who are exposing mission-critical data to that.

The second thing is that there is still very much a place for the classic SOA registries and repositories in the Cloud, only now they serve a different purpose. Those registries and repositories are used either by service providers or by consumers to maintain the list of services they’re using internally.
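Reduced to its essence, this registry role is a governed catalog that providers publish into and consumers query, instead of binding directly to endpoints. The sketch below is a hypothetical Python illustration; the class and method names are invented, not taken from any registry product.

```python
# Minimal sketch of the registry/repository role: providers publish
# service descriptions, and consumers look services up by name rather
# than hard-coding endpoints into each integration.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name: str, endpoint: str, version: str = "1.0"):
        """Register a new version of a service under a logical name."""
        self._services.setdefault(name, []).append(
            {"endpoint": endpoint, "version": version}
        )

    def lookup(self, name: str):
        """Return all registered versions of a service, oldest first."""
        return self._services.get(name, [])


if __name__ == "__main__":
    reg = ServiceRegistry()
    reg.publish("billing", "https://internal/billing", "2.0")
    print(reg.lookup("billing")[0]["version"])  # 2.0
```

A production registry adds metadata, lifecycle states, and access control on top of this lookup, but the separation it buys is the same: consumers depend on the catalog entry, not on the provider’s address.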

Different paradigms

There are two different paradigms. The app store is a place where I can go and know that the gas I am going to get is 85 percent ethanol, versus my also having to maintain some basic set of goods at home to make sure that I have my dinner on time. These are different kinds of roles and different kinds of purposes they’re serving.

Above all, I think the thing that’s going to become more and more important in the context of the Cloud is that the functionality will be provided by the Cloud platform or the app you buy, but the governance will be a major IT responsibility, right from the time of picking the app, to the time of delivering it, to the time of monitoring it.

Gardner: How is The Open Group allowing architects to better exercise SOA principles, as they’re grappling with some of these issues around governance, hybrid services delivery and management, and the use and demand in their organizations to start consuming more Cloud services?

Harding: The architect’s primary concern, of course, has to be to meet the needs of the client, and to do so in a way that is effective and cost-effective. Cloud gives the architect the ability to go out and get different components much more easily than hitherto.

There is a problem, of course, with integrating them and putting them together. SOA can provide part of the solution to that problem, in that it gives a principle of loosely coupled services. If you didn’t have that when you were trying to integrate different functionality from different places, you would be in a real mess.

What The Open Group contributes is a set of artifacts that enable the architect to think through how to meet the client’s needs in the best way when working with SOA and Cloud.

For example, the SOA Reference Architecture helps the architect understand what components might be brought into the solution. We have the SOA TOGAF Practical Guide, which helps the architect understand how to use TOGAF® in the SOA context.

We’re working further on artifacts in the Cloud space: the Cloud Computing Reference Architecture, a notational language for enabling people to describe Cloud ecosystems, and recommendations for Cloud interoperability and portability. We’re also working on recommendations for Cloud governance to complement the recommendations for SOA governance, the SOA Governance Framework Standard that we have already produced, and a number of other artifacts.

The Open Group’s real role is to support the architect and help the architect better meet the needs of the architect’s client.

From the very early days, SOA was seen as bringing a closer connection between the business and technology. A lot of the promises that were made about SOA seven or eight years ago are only now becoming possible to fulfill, and that business front is what one of our new projects is looking at.

We’re also producing an update to the SOA Reference Architecture. We have submitted the SOA Reference Architecture for consideration by the ISO group that is developing an International Standard Reference Architecture for SOA, and also to the IEEE group that is developing an IEEE Standard Reference Architecture.

We hope that both of those groups will want to work along the principles of our SOA Reference Architecture and we intend to produce a new version that incorporates the kind of ideas that they want to bring into the picture.

We’re also thinking of setting up an SOA project to look specifically at assistance to architects building SOA into enterprise solutions.

So those are three new initiatives that should result in new Open Group standards and guides to complement, as I have described already, the SOA Reference Architecture, the SOA Governance Framework, the Practical Guides to using TOGAF for SOA.

We also have the Service Integration Maturity Model, which can be used to assess SOA maturity. We have a standard on service orientation applied to Cloud infrastructure, and we have a formal SOA Ontology.

Those are the things The Open Group has in place at present to assist the architect, and we are and will be working on three new things: version 2 of the Reference Architecture for SOA, SOA for business technology, and I believe shortly we’ll start on assistance to architects in developing SOA solutions.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in service-oriented architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


Filed under Cloud, Cloud/SOA, Service Oriented Architecture

I Thought I had Said it All – and Then Comes Service Technology

By E.G. Nadhan, HP

This is not the first time I have blogged about the evolution of fundamental service-orientation principles serving as an effective foundation for cloud computing. You may recall my earlier posts on The Open Group blog, Top 5 tell-tale signs of SOA evolving to the Cloud, followed by The Right Way to Transform to Cloud Computing, and my latest post on this topic, about taking a lesson from history to integrate to the Cloud. I thought I had said it all and that there was nothing more to blog about on this topic, other than diving into more details.

Until I saw the post by Forbes blogger Joe McKendrick, Before There Was Cloud Computing, There Was SOA. In this post, McKendrick introduces a new term, Service Technology, which resonates with me because it cements the concept of service-oriented thinking that technically enables the realization of SOA within the enterprise, followed by its sustained evolution to cloud computing. In fact, the 5th International SOA, Cloud and Service Technology Symposium is a conference centered on this concept.

Even if this is a natural evolution, we must still exercise caution that we don’t fall prey to the same pitfalls of integration like the IT world did in the past. I elaborate further on this topic in my post on The Open Group blog: Take a lesson from History to Integrate to the Cloud.

I was intrigued by another comment in McKendrick’s post about “Cloud being inherently service-oriented.” Almost. I would slightly rephrase it: Cloud done right is inherently service-oriented. So, what do I mean by Cloud done right? Voilà: The Right Way to Transform to Cloud Computing on The Open Group blog.

So, how about you? Where are you with your SOA strategy? Have you been selectively transforming to the Cloud? Do you have “Service Technology” in place within your enterprise?

I would like to know, and something tells me McKendrick will as well.

So, it would be an interesting exercise to see whether the first Technical Standard for Cloud Computing published by The Open Group should be extended to accommodate the concept of Service Technology. Perhaps it is already an integral part of this standard in concept. Please let me know if you are interested. As the co-chair for this Open Group project, I am very interested in working with you on next steps.

A version of this blog post originally appeared on the Journey through Enterprise IT Services Blog.



Filed under Cloud/SOA

Take a Lesson from History to Integrate to the Cloud

By E.G. Nadhan, HP

In an earlier post for The Open Group blog, Top 5 tell-tale signs of SOA evolving to the Cloud, I outlined the various characteristics of SOA that serve as a foundation for the cloud computing paradigm. Steady growth of service-oriented practices and the continued adoption of cloud computing across enterprises have resulted in the need to integrate out to the cloud. When doing so, we must look back at the evolution of integration solutions, starting with point-to-point solutions and maturing into integration brokers and enterprise service buses over the years. We should take a lesson from history to ensure that this time around, when integrating to the cloud, we prevent undue proliferation of point-to-point solutions across the extended enterprise.

We must exercise the same due diligence and governance as is done for services within the enterprise. There is an increased risk of point-to-point solutions proliferating because of the consumerization of IT and the ready availability of such services to individual business units.

Thus, here are 5 steps that need to be taken to ensure a more systemic approach when integrating to cloud-based service providers.

  1. Extend your SOA strategy to the Cloud. Review your current SOA strategy and extend it to accommodate cloud-based as-a-service providers.
  2. Extend governance around Cloud services. Review your existing IT governance and SOA governance processes to accommodate the introduction and adoption of cloud-based as-a-service providers.
  3. Identify Cloud-based integration models. It is not one-size-fits-all. Multiple integration models could apply to a cloud-based service provider, depending upon the enterprise integration architecture. These models include a) point-to-point solutions, b) cloud to on-premise ESB, and c) cloud-based connectors that adopt a service-centric approach to integrate cloud providers with enterprise applications and/or other cloud providers.
  4. Apply the right models to the right scenarios. Review the scenarios involved and apply the right models to the right scenarios.
  5. Sustain and evolve your services taxonomy. Provide enterprise-wide visibility into the taxonomy of services, both on-premise and those identified for integration with cloud-based service providers. Continuously evolve these services to integrate with a rationalized set of providers who cater to the integration needs of the enterprise in the cloud.
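The connector model in step 3 can be made concrete with a short sketch. The Python below is hypothetical (the provider and service names are invented for illustration, not drawn from the post): consumers depend only on a service contract, so adding or swapping a cloud provider does not spawn new point-to-point integrations.

```python
# Hypothetical sketch of a service-centric cloud connector. Consumers
# depend only on the CustomerService contract, so swapping providers
# does not create new point-to-point integrations.

from abc import ABC, abstractmethod


class CustomerService(ABC):
    """Enterprise-facing service contract (the single source of truth)."""

    @abstractmethod
    def get_customer(self, customer_id: str) -> dict:
        ...


class CrmProviderA(CustomerService):
    """Connector for one invented cloud CRM provider."""

    def get_customer(self, customer_id: str) -> dict:
        # A real connector would call the provider's API here and map
        # its proprietary payload into the enterprise canonical form.
        return {"id": customer_id, "source": "provider-a"}


class CrmProviderB(CustomerService):
    """Connector for a second provider with a different native format."""

    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "provider-b"}


def make_customer_service(provider: str) -> CustomerService:
    """Governance hook: route requests to a rationalized set of providers."""
    registry = {"a": CrmProviderA, "b": CrmProviderB}
    return registry[provider]()


if __name__ == "__main__":
    svc = make_customer_service("a")
    print(svc.get_customer("42")["source"])  # provider-a
```

The factory function is where the governance of step 2 bites: it is the one place that decides which providers are sanctioned, which is exactly the discipline that was missing in the point-to-point era.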

The biggest challenge enterprises face in driving this systemic adoption of cloud-based services comes from within their business units. Multiple business units may unknowingly procure the same services from the same providers in different ways. Therefore, enterprises must ensure that such point-to-point integrations do not proliferate as they did during the era preceding integration brokers.

By adopting service-oriented principles, enterprises can keep history from repeating itself when integrating to the cloud.

How about your enterprise? How are you going about doing this? What is your approach to integrating to cloud service providers?

A version of this post was originally published on HP’s Enterprise Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G.Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.


Filed under Cloud, Cloud/SOA

Secrets Behind the Rapid Growth of SOA

By E.G. Nadhan, HP

Service Oriented Architecture (SOA) has been around for more than a decade and has steadily matured over the years, with increasing levels of adoption. Cloud computing, a paradigm founded upon fundamental service-oriented principles, has fueled SOA’s adoption in recent years. ZDNet blogger Joe McKendrick calls out a survey by Companies and Markets in one of his blog posts, SOA market grew faster than expected.

Some of the statistics from this survey as referenced by McKendrick include:

  • SOA represents a total global market value of $5.518 billion, up from $3.987 billion in 2010 – or a 38% growth.
  • The SOA market in North America is set to grow at a compound annual growth rate (CAGR) of 11.5% through 2014.
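As a quick sanity check, the 38% figure follows directly from the two market values quoted above:

```python
# Recompute the reported SOA market growth from the survey figures.
value_2010 = 3.987    # global SOA market in 2010, USD billions
value_latest = 5.518  # most recent reported figure, USD billions

growth = (value_latest - value_2010) / value_2010
print(f"{growth:.0%}")  # prints 38%
```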

So, what are the secrets of the success that SOA seems to be enjoying?  During the past decade, I can recall a few skeptics who were not so sure about SOA’s adoption and growth.  But I believe there are 5 “secrets” behind the success story of SOA that should put such skepticism to rest:

  1. Architecture. Service oriented architectures have greatly facilitated a structured approach to enterprise architecture (EA) at large. Despite debates over the scope of EA and SOA, the fact remains that service orientation is an integral part of the foundational factors considered by the enterprise architect. If anything, it has also acted as a catalyst for giving more visibility to the need for well-defined enterprise architecture to be in place for the current and desired states.
  2. Application. Service orientation has promoted standardized interfaces that have enabled the continued existence of multiple applications in an integrated, cohesive manner. Thanks to a SOA-based approach, integration mechanisms are no longer held hostage to proprietary formats and legacy platforms.
  3. Availability. Software vendors have taken the initiative to make their functionality available through services. How many times have you heard a software vendor suggest Web services as the de facto method for integrating with other systems? Single-click generation of a Web service is a very common feature across most of the software tools used for application development.
  4. Alignment. SOA has greatly facilitated and realized increased alignment from multiple fronts including the following:
    • Business to IT. The definition of application and technology services is really driven by the business need in the form of business services.
    • Application to Infrastructure. SOA strategies for the enterprise have gone beyond the application layer to the infrastructure, resulting in greater alignment between the application being deployed and the supporting infrastructure. Infrastructure services are an integral part of the comprehensive set of services landscape for an enterprise.
    • Platforms and technology. Interfaces between applications are much less dependent on the underlying technologies or platforms, resulting in increased alignment between various platforms and technologies. Interoperability has been taken to new levels across the extended enterprise.
  5. Adoption. SOA has served as the cornerstone for new paradigms like cloud computing. Increased adoption of SOA has resulted in the evolution of multiple industry standards for SOA and has also led to standards for infrastructure services provisioned in the cloud. Standards do take time to evolve, but when they do, it is a tacit endorsement by the IT industry of the maturity of the underlying phenomenon — in this case, SOA.

Thus, the application of service-oriented principles across the enterprise has spurred SOA’s adoption: the availability of readily exposed services across all architectural layers has resulted in increased alignment between business and IT.

What about you? What factors come to your mind as SOA success secrets? Is your SOA experience in alignment with the statistics from the report McKendrick referenced? I would be interested to know.

Reposted with permission from CIO Magazine.

HP Distinguished Technologist, E.G.Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.

1 Comment

Filed under Cloud/SOA

The Open Group Barcelona Conference – Early Bird Registration ends September 21

By The Open Group Conference Team

Early Bird registration for The Open Group Conference in Barcelona ends September 21. Register now and save!

The conference runs October 22-24, 2012. On Monday, October 22, the plenary theme is “Big Data – The Next Frontier in the Enterprise,” and speakers will address the challenges and solutions facing Enterprise Architecture within the context of the growth of Big Data. Topics to be explored include:

  • How does an enterprise adopt the means to contend with Big Data within its information architecture?
  • How does Big Data enable your business architecture?
  • What are the issues concerned with real-time analysis of the data resources on the cloud?
  • What are the information security challenges in the world of outsourced and massively streamed data analytics?
  • What is the architectural view of security for cloud computing? How can you take a risk-based approach to cloud security?

Plenary speakers include:

  • Peter Haviland, head of Business Architecture, Ernst & Young
  • Ron Tolido, CTO of Application Services in Europe, Capgemini; and Manuel Sevilla, chief technical officer, Global Business Information Management, Capgemini
  • Scott Radeztsky, chief technical officer, Deloitte Analytics Innovation Centers
  • Helen Sun, director of Enterprise Architecture, Oracle

On Tuesday, October 23, Dr. Robert Winter, Institute of Information Management, University of St. Gallen, Switzerland, will kick off the day with a keynote on EA Management and Transformation Management.

Tracks include:

  • Practice-driven Research on Enterprise Transformation (PRET)
  • Trends in Enterprise Architecture Research (TEAR)
  • TOGAF® and ArchiMate® Case Studies
  • Information Architecture
  • Distributed Services Architecture
  • Holistic Enterprise Architecture Workshop
  • Business Innovation & Technical Disruption
  • Security Architecture
  • Big Data
  • Cloud Computing for Business
  • Cloud Security and Cloud Architecture
  • Agile Enterprise Architecture
  • Enterprise Architecture and Business Value
  • Setting Up A Successful Enterprise Architecture Practice

For more information or to register: http://www.opengroup.org/barcelona2012/registration

Comments Off

Filed under Conference

Counting the Cost of Cloud

By Chris Harding, The Open Group

IT costs were always a worry, but only an occasional one. Cloud computing has changed that.

Here’s how it used to be. The New System was proposed. Costs were estimated, more or less accurately, for computing resources, staff increases, maintenance contracts, consultants and outsourcing. The battle was fought, the New System was approved, the checks were signed, and everyone could forget about costs for a while and concentrate on other issues, such as making the New System actually work.

One of the essential characteristics of cloud computing is “measured service.” Resource usage is measured by the byte transmitted, the byte stored, and the millisecond of processing time. Charges are broken down by the hour, and billed by the month. This can change the way people take decisions.
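To illustrate what “measured service” means in practice, here is a minimal sketch of per-resource metering rolled up into a monthly bill. The rates and billing dimensions are purely illustrative, not any provider’s actual pricing:

```python
# Hypothetical metered-billing model: charge per unit of each
# measured resource, summed into one monthly bill.

def monthly_charge(gb_transferred, gb_stored, compute_hours,
                   rate_transfer=0.09,  # $ per GB transmitted
                   rate_storage=0.023,  # $ per GB-month stored
                   rate_compute=0.05):  # $ per compute-hour
    """Roll per-resource usage up into a single monthly charge."""
    return (gb_transferred * rate_transfer
            + gb_stored * rate_storage
            + compute_hours * rate_compute)

planned = monthly_charge(500, 1_000, 2_000)
actual = monthly_charge(1_000, 2_000, 4_000)  # doubled usage, doubled bill
```

Because every term scales with usage, a month that is twice as busy as planned costs roughly twice as much, which is exactly the budget conversation described in the dialogue that follows.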

“The New System is really popular. It’s being used much more than expected.”

“Hey, that’s great!”

Then you might have heard,

“But this means we are running out of capacity. Performance is degrading. Users are starting to complain.” 

“There’s no budget for an upgrade. The users will have to lump it.”

Now the conversation goes down a slightly different path.

“Our monthly compute costs are twice what we budgeted.”

“We can’t afford that. You must do something!”

And something will be done, either to tune the running of the system, or to pass the costs on to the users. Cloud computing is making professional day-to-day cost control of IT resource use both possible and necessary.

This starts at the planning stage. For a new cloud system, estimates should include models of how costs and revenue relate to usage. Approval is then based on an understanding of the returns on investment in likely usage scenarios. And the models form the basis of day-to-day cost control during the system’s life.
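A planning-stage model of the kind described above need not be elaborate. The sketch below, with invented figures throughout, relates monthly cost and revenue to usage and derives the break-even point an approval decision could rest on:

```python
# Hypothetical cost/revenue model for a new cloud system.
# All monetary figures are invented for illustration.

FIXED_COST = 2_000.0    # $ per month: baseline cloud charges
COST_PER_USER = 1.5     # $ per month of metered usage per user
REVENUE_PER_USER = 4.0  # $ per month earned per active user

def monthly_cost(users: int) -> float:
    # Metered charges grow with usage on top of a fixed baseline.
    return FIXED_COST + COST_PER_USER * users

def monthly_revenue(users: int) -> float:
    return REVENUE_PER_USER * users

def break_even_users() -> float:
    # Revenue overtakes cost once the per-user margin has paid
    # off the fixed baseline.
    return FIXED_COST / (REVENUE_PER_USER - COST_PER_USER)
```

With these figures the system breaks even at 800 users; the same functions then serve for day-to-day cost control during the system’s life, by comparing billed usage against the model.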

Last year’s Open Group “State of the Industry” cloud survey found that 55% of respondents thought that cloud ROI addressing business requirements in their organizations would be easy to evaluate and justify, but only 35% of respondents’ organizations had mechanisms in place to do this. Clearly, the need for cost control based on an understanding of the return was not widely appreciated in the industry at that time.

We are repeating the survey this year. It will be very interesting to see whether the picture has changed.

Participation in the survey is open until August 15. To add your experience and help improve industry understanding of the use of cloud computing, visit: http://www.surveymonkey.com/s/TheOpenGroup_2012CloudROI

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

Comments Off

Filed under Cloud

Cybersecurity Threats Key Theme at Washington, D.C. Conference – July 16-20, 2012

By The Open Group Conference Team

Identifying risks and eliminating vulnerabilities that could undermine integrity and supply chain security is a significant global challenge and a top priority for governments, vendors, component suppliers, integrators and commercial enterprises around the world.

The Open Group Conference in Washington, D.C. will bring together leading minds in technology and government policy to discuss issues around cybersecurity and how enterprises can establish and maintain the necessary levels of integrity in a global supply chain. In addition to tutorial sessions on TOGAF and ArchiMate, the conference offers approximately 60 sessions on a variety of topics, including:

  • Cybersecurity threats and key approaches to defending critical assets and securing the global supply chain
  • Information security and Cloud security for global, open network environments within and across enterprises
  • Enterprise transformation, including Enterprise Architecture, TOGAF and SOA
  • Cloud Computing for business, collaborative Cloud frameworks and Cloud architectures
  • Transforming DoD avionics software through the use of open standards

Keynote sessions and speakers include:

  • America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime and Warfare - Keynote Speaker: Joel Brenner, author and attorney at Cooley LLP
  • Meeting the Challenge of Cybersecurity Threats through Industry-Government Partnerships - Keynote Speaker: Kristin Baldwin, principal deputy, deputy assistant secretary of defense for Systems Engineering
  • Implementation of the Federal Information Security Management Act (FISMA) - Keynote Speaker: Dr. Ron Ross, project leader at NIST (TBC)
  • Supply Chain: Mitigating Tainted and Counterfeit Products - Keynote Panel: Andras Szakal, VP and CTO at IBM Federal; Daniel Reddy, consulting product manager in the Product Security Office at EMC Corporation; John Boyens, senior advisor in the Computer Security Division at NIST; Edna Conway, chief security strategist of supply chain at Cisco; and Hart Rossman, VP and CTO of Cyber Security Services at SAIC
  • The New Role of Open Standards – Keynote Speaker: Allen Brown, CEO of The Open Group
  • Case Study: Ontario Healthcare - Keynote Speaker: Jason Uppal, chief enterprise architect at QRS
  • Future Airborne Capability Environment (FACE): Transforming the DoD Avionics Software Industry Through the Use of Open Standards - Keynote Speaker: Judy Cerenzia, program director at The Open Group; Kirk Avery of Lockheed Martin; and Robert Sweeney of Naval Air Systems Command (NAVAIR)

The full program can be found here: http://www3.opengroup.org/events/timetable/967

For more information on the conference tracks or to register, please visit our conference registration page. Please stay tuned throughout the next month as we continue to release blog posts and information leading up to The Open Group Conference in Washington, D.C. and be sure to follow the conference hashtag on Twitter – #ogDCA!

1 Comment

Filed under ArchiMate®, Cloud, Cloud/SOA, Conference, Cybersecurity, Enterprise Architecture, Information security, OTTF, Standards, Supply chain risk

RECAP: The Open Group Brazil Conference – May 24, 2012

By Isabela Abreu, The Open Group

Under an autumn Brazilian sky, The Open Group held its first regional event in São Paulo, Brazil, and it turned out to be a great success. More than 150 people attended the conference – including Open Group platinum members (CapGemini, HP, IBM and Oracle), the Brazil chapter of the Association of Enterprise Architecture (AEA), and Brazilian organizations (Daryus, Sensedia) – displaying a robust interest in Enterprise Architecture (EA) within the world’s sixth largest economy. The Open Group also introduced its mission, vision and values to the marketplace – a working model not very familiar to the Brazilian environment.

After the 10-hour, one-day event, I’m pleased to say that The Open Group’s first formal introduction to Brazil was well received, and the organization’s mission was immediately understood!

Introduction to Brazil

The event started with a brief introduction of The Open Group by myself, Isabela Abreu, Open Group country manager of Brazil, and was followed by an impressive presentation by Allen Brown, CEO of The Open Group, on how enterprise architects hold the power to change an organization’s future, and stay ahead of competitors, by using open standards that drive business transformation.

The conference aimed to provide an overview of trending topics, such as business transformation, EA, TOGAF®, Cloud Computing, SOA and Information Security. The presentations focused on case studies, including one by Marcelo Sávio of IBM that showed how the organization has evolved through the use of EA Governance; and one by Roberto Soria of Oracle that provided an introduction to SOA Governance.

Enterprise Architecture

Moving on to architecture, Roberto Severo, president of the AEA in Brazil, pointed out why architects must join the association to build the Brazilian EA community into a strong and ethical force for transforming EA. He also demonstrated how to align tactical decisions with strategic objectives using Cloud Computing. Then Cecilio Fraguas of CPM Braxis CapGemini provided an introduction to TOGAF®; and Courtnay Guimarães of Instisys comically evinced that although it is sometimes difficult to apply, EA is a competitive tool for investment banks.

Security

On the security front, Rodrigo Antão of Apura showed the audience that our enemies know us, but we don’t know them, in a larger discussion about counter-intelligence and cybersecurity; he indicated that architects are wrong when they tend to believe EA has nothing to do with Information Security. In his session titled, “OSIMM: How to Measure Success with SOA and Design the Roadmap,” Luís Moraes of Sensedia provided a good overview for architects and explained how to measure success with SOA and design roadmaps with OSIMM – a service integration maturity model developed by The Open Group, based on SOA and soon to become an ISO standard. Finally, Alberto Favero of Ernst & Young presented the findings of the Ernst & Young 2011 Global Information Security Survey, closing the event.

Aside from the competitive raffle, the real highlight of the event happened at lunch, when I noticed the networking between conference attendees. I can testify that the Brazilian EA community actively exchanges ideas, in the spirit of The Open Group!

By the end of the day, everybody returned home with new ideas and new friends. I received many inquiries on how to keep the community engaged after the conference, and I promise to keep activities up and running here, in Brazil.

Stay tuned, as we plan on sending a survey to conference attendees, as well as a link to all of the presentations. Thanks to everyone who made the conference a great success!

Isabela Abreu is The Open Group country manager for Brazil. She is a member of AEA Brazil and has participated in the translation of the glossary of TOGAF® 9.1, ISO/IEC 20000:1 and ISO/IEC 20000:5 and ITIL V3 to Portuguese. Abreu has worked for itSMF Brazil, EXIN Brazil – Examination Institute for Information Science, and PATH ITTS Consultancy, and is a graduate of São Paulo University.

1 Comment

Filed under Cloud, Conference, Cybersecurity, Enterprise Architecture, TOGAF®

Corporate Data, Supply Chains Remain Vulnerable to Cyber Crime Attacks, Says Open Group Conference Speaker

By Dana Gardner, Interarbor Solutions 

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on how security impacts the Enterprise Architecture, enterprise transformation, and global supply chain activities in organizations, both large and small.

We’re now joined on the security front with one of the main speakers at the conference, Joel Brenner, the author of America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare.

Joel is a former Senior Counsel at the National Security Agency (NSA), where he advised on legal and policy issues relating to network security. Mr. Brenner currently practices law in Washington at Cooley LLP, specializing in cyber security. Registration remains open for The Open Group Conference in Washington, DC beginning July 16.

Previously, he served as the National Counterintelligence Executive in the Office of the Director of National Intelligence, and as the NSA’s Inspector General. He is a graduate of University of Wisconsin–Madison, the London School of Economics, and Harvard Law School. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Your book came out last September and it affirmed this notion that the United States, or at least open Western cultures and societies, are particularly vulnerable to being infiltrated, if you will, from cybercrime, espionage, and dirty corporate tricks.

Why are we particularly vulnerable, when we should be most adept at using cyber activities to our advantage?

Brenner: Let’s make a distinction here between the political-military espionage that’s gone on since pre-biblical times and the economic espionage that’s going on now and, in many cases, has nothing at all to do with military, defense, or political issues.

The other stuff has been going on forever, but what we’ve seen in the last 15 or so years is a relentless espionage attack on private companies for reasons having nothing to do with political-military affairs or defense.

So the countries that are adept at cyber, but whose economies are relatively undeveloped compared to ours, are at a big advantage, because they’re not very lucrative targets for this kind of thing, and we are. Russia, for example, is paradoxical. While it has one of the most educated populations in the world and is deeply cultured, it has never been able to produce a commercially viable computer chip.

Not entrepreneurial

We’re not going to Russia to steal advanced technology. We’re not going to China to steal advanced technology. They’re good at engineering and they’re good at production, but so far, they have not been good at making themselves into an entrepreneurial culture.

That’s one just very cynical reason why we don’t do economic espionage against the people who are mainly attacking us, which are China, Russia, and Iran. I say attack in the espionage sense.

The other reason is that you’re stealing intellectual property when you’re doing economic espionage. It’s a bedrock proposition of American economics and political strategy around the world to defend the legal regime that protects intellectual property. So we don’t do that kind of espionage. Political-military stuff we’re real good at.

Gardner: Wouldn’t our defense rise to the occasion? Why hasn’t it?

Brenner: The answer has a lot to do with the nature of the Internet and its history. The Internet, as some of your listeners will know, was developed starting in the late ’60s by the predecessor of the Defense Advanced Research Projects Agency (DARPA), a brilliant operation which produced a lot of cool science over the years.

It was developed for a very limited purpose, to allow the collaboration of geographically dispersed scientists who worked under contract in various universities with the Defense Department’s own scientists. It was bringing dispersed brainpower to bear.

It was a brilliant idea, and the people who invented this, if you talk to them today, lament the fact that they didn’t build a security layer into it. They thought about it. But it wasn’t going to be used for anything else but this limited purpose in a trusted environment, so why go to the expense and aggravation of building a lot of security into it?

Until 1992, it was against the law to use the Internet for commercial purposes. Dana, this is just amazing to realize. That’s 20 years ago, a twinkling of an eye in the history of a country’s commerce. That means that 20 years ago, nobody was doing anything commercial on the Internet. Ten years ago, what were you doing on the Internet, Dana? Buying a book for the first time or something like that? That’s what I was doing, and a newspaper.

In the intervening decade, we’ve turned this sort of Swiss cheese, cool network, which has brought us dramatic productivity and all and pleasure into the backbone of virtually everything we do.

International finance, personal finance, command and control of military, manufacturing controls, the controls in our critical infrastructure, all of our communications, virtually all of our activities are either on the Internet or exposed to the Internet. And it’s the same Internet that was Swiss cheese 20 years ago and it’s Swiss cheese now. It’s easy to spoof identities on it.

So this gives a natural and profound advantage to attack on this network over defense. That’s why we’re in the predicament we’re in.

Both directions

Gardner: Let’s also look at this notion of supply chain, because corporations aren’t just islands unto themselves. A business is really a compendium of other businesses, products, services, best practices, methodologies, and intellectual property that come together to create a value add of some kind. It’s not just attacking the end point, where that value is extended into the market. It’s perhaps attacking anywhere along that value chain.

What are the implications for this notion of the ecosystem vulnerability versus the enterprise vulnerability?

Brenner: Well, the supply chain problem really is rather daunting for many businesses, because supply chains are global now, and it means that finished products are assembled from a tremendous number of elements. For example, this software, where was it written? Maybe it was written in Russia — or maybe somewhere in Ohio or in Nevada, but by whom? We don’t know.

There are two fundamentally different issues for the supply chain, depending on the company. One is counterfeiting. That’s a bad problem. Somebody is trying to substitute shoddy goods under your name or the name of somebody that you thought you could trust. That degrades performance and presents really serious liability problems as a result.

The other problem is the intentional hooking, or compromising, of software or chips to do things that they’re not meant to do, such as allow backdoors and so on in systems, so that they can be attacked later. That’s a big problem for military and for the intelligence services all around the world.

The reason we have the problem is that nobody knows how to vet a computer chip or software to see that it won’t do these squirrelly things. We can test that stuff to make sure it will do what it’s supposed to do, but nobody knows how to test the computer chip or two million lines of software reliably to be sure that it won’t also do certain things we don’t want it to do.

You can put it in a sandbox or a virtual environment and you can test it for a lot of things, but you can’t test it for everything. It’s just impossible. In hardware and software, it is the strategic supply chain problem now. That’s why we have it.

If you have a worldwide supply chain, you have to have a worldwide supply chain management system. This is hard and it means getting very specific. It includes not only managing a production process, but also the shipment process. A lot of squirrelly things happen on loading docks, and you have to have a way not to bring perfect security to that — that’s impossible — but to make it really harder to attack your supply chain.

Notion of cost

Gardner: So many organizations today, given the economy and the lagging growth, have looked to lowest cost procedures, processes, suppliers, materials, and aren’t factoring in the risk and the associated cost around these security issues. Do people need to reevaluate cost in the supply chain by factoring in what the true risks are that we’re discussing?

Brenner: Yes, but of course, when the CEO and the CFO get together and start to figure this stuff out, they look at the return on investment (ROI) of additional security. It’s very hard to be quantitatively persuasive about that. That’s one reason why you may see some kinds of production coming back into the United States. How one evaluates that risk depends on the business you’re in and how much risk you can tolerate.

This is a problem not just for really sensitive hardware and software, special kinds of operations, or sensitive activities, but also for garden-variety things.

Gardner: We’ve seen other aspects of commerce in which we can’t lock down the process. We can’t know all the information, but what we can do is offer deterrence, perhaps in the form of legal recourse, if something goes wrong, if in fact, decisions were made that countered the contracts or were against certain laws or trade practices.

Brenner: For a couple of years now, I’ve struggled with the question why it is that liability hasn’t played a bigger role in bringing more cyber security to our environment, and there are a number of reasons.

We’ve created liability for the loss of personal information, so you can quantify that risk. You have a statute that says there’s a minimum damage of $500 or $1,000 per person whose identifiable information you lose. You add up the number of files in the breach and how much the lawyers and the forensic guys cost and you come up with a calculation of what these things cost.
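The calculation Brenner describes can be sketched in a few lines. The per-record figure and the fee estimates below are illustrative placeholders, not a reading of any particular statute:

```python
# Back-of-envelope breach-cost estimate, following the reasoning
# above: statutory damages per record lost plus professional fees.
# All figures are illustrative placeholders.

def breach_cost(records_lost: int,
                per_record: float = 1_000.0,   # statutory damages
                legal_fees: float = 250_000.0,
                forensics: float = 150_000.0) -> float:
    return records_lost * per_record + legal_fees + forensics

# e.g. a 10,000-record breach under these assumptions:
estimate = breach_cost(10_000)  # 10,400,000.0
```

It is precisely because each term is quantifiable that this legal risk can be priced, unlike the business risk of lost intellectual property discussed next.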

But when it comes to just business risk, not legal risk, such as the loss of intellectual property by a company that depends on that intellectual property, you have a business risk. You don’t have much of a legal risk at this point.

You may have a shareholder suit issue, but there hasn’t been an awful lot of that kind of litigation so far. So I don’t know. I’m not sure that’s quite the question you were asking me, Dana.

Gardner: My follow on to that was going to be where would you go to sue across borders anyway? Is there an über-regulatory or legal structure across borders to target things like supply chain, counterfeit, cyber espionage, or mistreatment of business practice?

Depends on the borders

Brenner: It depends on the borders you’re talking about. The Europeans have a highly developed legal and liability system. You can bring actions in European courts. So it depends what borders you mean.

If you’re talking about the border of Russia, you have very different legal issues. China has different legal issues, different from Russia, as well from Iran. There are an increasing number of cases where actions are being brought in China successfully for breaches of intellectual property rights. But you wouldn’t say that was the case in Nigeria. You wouldn’t say that was the case in a number of other countries where we’ve had a lot of cybercrime originating from.

So there’s no one solution here. You have to think in terms of all kinds of layered defenses. There are legal actions you can take sometimes, but the fundamental problem we’re dealing with is this inherently porous Swiss-cheesy system. In the long run, we’re going to have to begin thinking about the gradual reengineering of the way the Internet works, or else this basic dynamic, in which lawbreakers have advantage over law-abiding people, is not going to go away.

Think about what’s happened in cyber defenses over the last 10 years and how little they’ve evolved — even 20 years for that matter. They almost all require us to know the attack mode or the sequence of code in order to catch it. And we get better at that, but that’s a leapfrog business. That’s fundamentally the way we do it.

Whether we do it at the perimeter, inside, or even outside before the attack gets to the perimeter, that’s what we’re looking for — stuff we’ve already seen. That’s a very poor strategy for doing security, but that’s where we are. It hasn’t changed much in quite a long time and it’s probably not going to.

Gardner: Why is that the case? Is this not a perfect opportunity for a business-government partnership to come together and re-architect the Internet at least for certain types of business activities, permit a two-tier approach, and add different levels of security into that? Why hasn’t it gone anywhere?

Brenner: What I think you’re saying is different tiers or segments. We’re talking about the Balkanization of the Internet. I think that’s going to happen as more companies demand a higher level of protection, but this again is a cost-benefit analysis. You’re going to see even more Balkanization of the Internet as you see countries like Russia and China, with some success, imposing more controls over what can be said and done on the Internet. That’s not going to be acceptable to us.

Gardner: We’ve seen a lot with Cloud Computing and more businesses starting to go to third-party Cloud providers for their applications, services, data storage, even integration to other business services and so forth.

More secure

If there’s a limited number, or at least a finite number, of Cloud providers and they can institute the proper security and take advantage of certain networks within networks, then wouldn’t that hypothetically make a Cloud approach more secure and more managed than every-man-for-himself, which is what we have now in enterprises and small to medium-sized businesses (SMBs)?

Brenner: I think the short answer is, yes. The SMBs will achieve greater security by basically contracting it out to what are called Cloud providers. That’s because managing the patching of vulnerabilities, encryption, and other aspects of security is beyond what most small businesses and many medium-sized businesses can do, are willing to do, or can do cost-effectively.

For big businesses in the Cloud, it just depends on how good the big businesses’ own management of IT is as to whether it’s an improvement or not. But there are some problems with the Cloud.

People talk about security, but there are different aspects of it. You and I have been talking just now about security meaning the ability to prevent somebody from stealing or corrupting your information. But availability is another aspect of security. By definition, putting everything in one remote place reduces robustness, because if you lose that connection, you lose everything.

Consequently, it seems to me that backup issues are really critical for people who are going to the Cloud. Are you going to rely on your Cloud provider to provide the backup? Are you going to rely on the Cloud provider to provide all of your backup? Are you going to go to a second Cloud provider? Are you going to keep some information copied in-house?

What would happen if your information is good, but you can’t get to it? That means you can’t get to anything anymore. So that’s another aspect of security people need to think through.

Gardner: How do you know you’re doing the right thing? How do you know that you’re protecting? How do you know that you’ve gone far enough to ameliorate the risk?

Brenner: This is really hard. If somebody steals your car tonight, Dana, you go out to the curb or the garage in the morning, and you know it’s not there. You know it’s been stolen.

When somebody steals your algorithms, your formulas, or your secret processes, you’ve still got them. You don’t know they’re gone, until three or four years later, when somebody in Central China or Siberia is opening a factory and selling stuff into your market that you thought you were going to be selling — and that’s your stuff. Then maybe you go back and realize, “Oh, that incident three or four years ago, maybe that’s when that happened, maybe that’s when I lost it.”

What’s going out

So you don’t even know necessarily when things have been stolen. Most companies don’t do a good job. They’re so busy trying to find out what’s coming into their network, they’re not looking at what’s going out.

That’s one reason the stuff is hard to measure. Another is that ROI is very tough. On the other hand, there are lots of things where business people have to make important judgments in the face of risks and opportunities they can’t quantify, but we do it.

We’re right to want data whenever we can get it, because data generally means we can make better decisions. But we make decisions about investment in R&D all the time without knowing what the ROI is going to be and we certainly don’t know what the return on a particular R&D expenditure is going to be. But we make that, because people are convinced that if they don’t make it, they’ll fall behind and they’ll be selling yesterday’s products tomorrow.

Why is it that we have a bias toward that kind of risk, when it comes to opportunity, but not when it comes to defense? I think we need to be candid about our own biases in that regard, but I don’t have a satisfactory answer to your question, and nobody else does either. This is one where we can’t quantify that answer.

Gardner: It sounds as if people need to have a healthy dose of paranoia to tide them over across these areas. Is that a fair assessment?

Brenner: Well, let’s say skepticism. People need to understand, without actually being paranoid, that life is not always what it seems. There are people who are trying to steal things from us all the time, and we need to protect ourselves.

In many companies, you don’t see a willingness to do that, but that varies a great deal from company to company. Things are not always what they seem. That is not how we Americans approach life. We are trusting folks, which is why this is a great country to do business in and live in. But we’re having our pockets picked and it’s time we understood that.

Gardner: And, as we pointed out earlier, this picking of pockets is not just on our block, but could be any of our suppliers, partners, or other players in our ecosystem. If their pockets get picked, it ends up being our problem too.

Brenner: Yeah, I described this risk in my book, “America the Vulnerable,” at great length and in my practice, here at Cooley, I deal with this every day. I find myself, Dana, giving briefings to businesspeople that 5, 10, or 20 years ago, you wouldn’t have given to anybody who wasn’t a diplomat or a military person going outside the country. Now this kind of cyber pilferage is an aspect of daily commercial life, I’m sorry to say.

************

For more information on The Open Group’s upcoming conference in Washington, D.C., please visit: http://www.opengroup.org/dc2012

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.


Filed under Cloud, Cybersecurity, Supply chain risk

The Right Way to Transform to the World of Cloud Computing

By E.G. Nadhan, HP Enterprise Services

There are myriad options available for moving to cloud computing today involving the synthetic realization and integration of different components that enable the overall solution. It is important that the foundational components across the compute, network, storage and facility domains are realized and integrated the right way for enterprises to realize the perceived benefits of moving to the cloud. To that end, this post outlines the key factors to be addressed when embarking on this transformation journey to the cloud:

  • Right Cloud. There are multiple forces at play when the CIOs of today consider moving to the cloud, further complicated by the availability of various deployment models — private, public, hybrid, etc. It is important that enterprises deploy solutions to the right mix of cloud environments. It is not a one-environment-fits-all scenario. Enterprises need to define the criteria that enable the effective determination of the optimal mix of environments that best addresses their scenarios.
  • Right Architecture. While doing so, it is important that there is a common reference architecture across various cloud deployment models that is accommodative of the traditional environments. This needs to be defined factoring in the overall IT strategy for the enterprise in alignment with the business objectives. A common reference architecture addresses the over-arching concepts across the various environments while accommodating nuances specific to each one.
  • Right Services. I discussed in one of my earlier posts that the foundational principles of cloud have evolved from SOA. Thus, it is vital that enterprises have a well-defined SOA strategy in place that includes the identification of services used across the various architectural layers within the enterprise, as well as the services to be obtained from external providers.
  • Right Governance. While governance is essential within the enterprise, it needs to be extended to the extra-enterprise that includes the ecosystem of service providers in the cloud. This is especially true if the landscape comprises a healthy mix of various types of cloud environments. Proper governance ensures that the right solutions are deployed to the right environments while addressing key areas of concern like security, data privacy, compliance regulations, etc.
  • Right Standard. Conformance to industry standards is always a prudent approach for any solution — especially for the cloud. The Open Group recently published the first Cloud Computing Technical Standard, Service Oriented Cloud Computing Infrastructure (SOCCI), which bears strong consideration in addition to other standards from NIST and other standards bodies.
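
The “Right Cloud” factor above calls for enterprises to define criteria for determining the optimal mix of deployment environments. As a rough, hypothetical sketch of one such determination (the criteria names, weights, and scores below are invented for illustration and are not part of any standard), candidate deployment models could be weighted and scored per workload:

```python
# Hypothetical sketch: scoring candidate cloud deployment models against
# enterprise-defined criteria. All criteria, weights, and scores are
# illustrative assumptions for a single workload.

def score_deployment(weights: dict, scores: dict) -> float:
    """Weighted sum of criterion scores (0-10) for one deployment model."""
    return sum(weights[c] * scores[c] for c in weights)

# Relative importance of each criterion for this workload (sums to 1.0).
weights = {"security": 0.4, "cost": 0.3, "elasticity": 0.2, "compliance": 0.1}

# How well each deployment model satisfies each criterion (0-10).
candidates = {
    "private": {"security": 9, "cost": 4, "elasticity": 5, "compliance": 9},
    "public":  {"security": 6, "cost": 9, "elasticity": 9, "compliance": 6},
    "hybrid":  {"security": 8, "cost": 7, "elasticity": 8, "compliance": 8},
}

ranked = sorted(candidates,
                key=lambda m: score_deployment(weights, candidates[m]),
                reverse=True)
print(ranked)  # ['hybrid', 'public', 'private'] for these example numbers
```

Repeating the exercise per application or workload, rather than once for the whole enterprise, reflects the point that it is not a one-environment-fits-all scenario.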

These factors come together to define the “Right” way of transforming to the cloud. In addition, there are other factors that are unique to the transformation of applications, as I outline in the Cloud Computing Transformation Bill of Rights.

In addition to the publication of the SOCCI standard, the Cloud Work Group within The Open Group is addressing several aspects in this space including the Reference Architecture, Governance and Security.

How is your Transformation to the cloud going? Are there other factors that come to your mind? Please let me know.

HP Distinguished Technologist E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.


Filed under Cloud, Cloud/SOA, Service Oriented Architecture, Standards

Is Cloud Computing a “Buyers’ Market?”

By Mark Skilton, Global Director at Capgemini

At The Open Group Cannes Conference, a session we are providing is on the topic of “Selecting and Delivering Successful Cloud Products and Services.” This is an area that comes up frequently when establishing the costs and benefits of the on-demand solutions described by the term Cloud Computing.

The benefits of Cloud Computing have been overhyped, and the term has saturated the general IT marketplace with all kinds of information systems claiming rapidly scalable benefits. Much of this may be true, in the sense that readily available compute and storage capacity has become commoditized in the infrastructure space. Software has also changed: functionality can now be contractually purchased on a subscription basis. Users can easily subscribe to software that addresses one or many business process requirements, covering virtually all core and non-core business activities, from productivity tools, project management, and collaboration to VoIP communication and business software applications, all in a Software-as-a-Service (SaaS) business model.

I recently heard in conversation a view stating “Cloud Computing, it’s a buyers’ market,” meaning that customers and consumers could just pick their portfolio of software and hardware. But underlying this concept there are still some questions about using a commoditized approach to solving all your enterprise system’s needs.

Is this the whole story, when typically many organizations seek competitive differentiation in user experience and in unique transactional and functional business services? This is ultimately a commodity view of Cloud, one that matches the commodity-type requirements and functional needs of a customer. But it does not fit the other 50 percent of customers who want Cloud products and characteristics, but not a commodity.

The session in The Open Group Conference, Cannes on April 25 will cover the following key questions:

  • How to identify the key steps in a Cloud Products and Services selection and delivery lifecycle, avoiding tactical level decisions resulting in Cloud solution lock-in and lock-out in one or more of the stages?
  • How Cloud consumers can identify where Cloud products and services can augment and improve their business models and capabilities?
  • How Cloud providers can identify what types of Cloud products and services they can develop and deliver successfully to meet consumer and market needs?
  • What kinds of competitive differentiators to look for in consumer choice and in building providers’ value propositions?
  • What security standards, risk and certification expertise are needed to complement an understanding of Cloud products and services?
  • What kinds of pricing, revenue and cost management on-demand models are needed to incentivize and build successful Cloud products and service consumption and delivery?
  • How to deal with contractual issues and governance across the whole lifecycle of Cloud Product and services from the perspectives of consumers and providers?

 Mark Skilton is Global Director for Capgemini, Strategy CTO Group, Global Infrastructure Services. His role includes strategy development, competitive technology planning including Cloud Computing and on-demand services, global delivery readiness and creation of Centers of Excellence. He is currently author of the Capgemini University Cloud Computing Course and is responsible for Group Interoperability strategy.


Filed under Cloud, Cloud/SOA, Conference

Tweet Jam Summary: Identity Management #ogChat

By Patty Donovan, The Open Group

Over 300 tweets were posted during The Open Group’s initial tweet jam, which took place this week on Tuesday morning! The hour of spirited conversation included our expert panel, as well as many other participants who joined in the discussion.

If you missed the event this time, here’s a snapshot of how the discussion went:

Q1: What are the biggest challenges of #idM today? #ogChat

Many agreed that regulations at the federal and business levels are inadequate today. Other big challenges include the lack of funding, managing people not affiliated with an organization, and the various contexts surrounding the issue. Here’s a sampling of some of the tweets that drove the discussion:

  • @jim_hietala: For users, managing multiple identities with strong auth credentials across myriad systems #ogChat
  • @ErickaChick: Q1 Even when someone writes a check, no one usually measures effectiveness of the spend  #ogChat
  • @dazzagreenwood: #ogchat biggest challenges of #IdM are complexity of SSO, and especially legal and business aspects. #NSTIC approach can help.
  • @Dana_Gardner: Biggest challenges of ID mgmt today are same ones as 10 years ago, that’s the problem. #ogchat #IdM
Q2: What should be the role of governments and private companies in creating #idM standards? #ogChat

Although our participants agreed that governments should have a central role in creating standards, questions about boundaries, members and willingness to adopt emerged. Dana Gardner pointed out the need for a neutral hub, but will competitors be willing to share identities with rival providers?

  • @JohnFontana: Q2 NISTIC is 1 example of how it might work. They intend to facilitate, then give way to private sector. Will it work? #ogchat
  • @Dana_Gardner: This is clearly a government role, but they dropped the ball. And now the climate is anti-regulation. So too late? #ogChat #IdM
  • @gbrunkhorst: Corps have the ability to span geopolitical boundaries. any solution has to both allow this, and ‘respect borders’ (mutually Excl?)
Q3: What are the barriers to developing an identity ecosystem? #ogChat 

The panelists opposed the idea of creating a single identity ecosystem, but the key issues to developing one rest on trust and assurance between provider and user. Paul Simmonds from the Jericho Forum noted that there are no intersections between the providers of identity management (providers, governments and vendors).

  • @ErickaChick: Q3 So many IT pros forget that #IdM isn’t a tech prob, it’s a biz process prob #ogChat
    • Response from @wikidsystems: Q3 to be clear, I don’t want one identity eco system, I want many, at least some of which I control (consumer). #ogChat
    • Response from @NadhanAtHP: @wikidsystems Just curious why you “want” multiple ecosystems? What is wrong if we have one even though it may be idealist? #ogChat #idM
  • @451wendy: Q3 Context validation for identity attributes. We all use the Internet as citizens, customers, employees, parents, students etc. #ogChat
  • @451wendy: “@TheRealSpaf: regulation of minimal standards for interoperability and (sometimes) safety are reasonable. Think NIST vs Congress.” #ogChat

Q4: Identity attributes may be valuable and subject to monetization. How will this play out? #ogChat

The issue of trust continued in the discussion, along with the idea that many consumers are unaware that the monetization of identity attributes occurs.

  • @Technodad: Q4: How about portability? Should I be able to pick up my identity and move to another #idm provider, like I can move my phone num? #ogchat
  • @NadhanAtHP: Q4 Identify attributes along with information analytics & context will allow for prediction and handling of security violations #idM #ogChat

Q5: How secure are single sign-on (#SSO) schemes through Web service providers such as #Google and #Facebook? #ogChat

There was an almost unanimous agreement on the insecurity of these providers, but other questions were also raised.

  • @simmonds_paul: Q5. Wrong question, instead ask why you should trust a self-asserted identity? #ogchat
  • @dazzagreenwood: Q5  #ogchat The real question is not about FB and Google, but how mass-market sso could work with OpenID Connect with *any* provider
  • @Dana_Gardner: Q5. Issue isn’t security, it’s being locked in, and then them using your meta data against you…and no alternatives. #SSO  #ogChat #IdM
  • @NadhanAtHP: Q5 Tracking liability for security violations is a challenge with #SSO schemes across Web Service Providers #idM #ogChat 

Q6: Is #idM more or less secure on #mobile devices (for users, businesses and identity providers)? #ogChat

Even though time edged its way in and we could not devote the same amount of attention to the final question, our participants painted interesting perspectives on how we actually feel about mobile security.

  • @jim_hietala: Q6. Mobile device (in)security is scary, period, add in identity credentials buried in phones, bad news indeed #ogChat
  • @simmonds_paul: Q6. I lose my SecureID card I worry in a week, I lose Cell Phone I may worry in an hour (mins if under 25) – which is more secure? #ogchat
  • @dazzagreenwood: Q6 #ogchat Mobile can be more OR less secure for #ID – depends on 1) implementation, 2) applicable trust framework(s).
  • @Technodad: @jim_hietala Q6: Mobile might make it better through physical control – similar to passport. #ogChat

Thank you to all the participants who made this possible, and please stay tuned for our next tweet jam!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.


Filed under Identity Management, Tweet Jam

The Open Group Brings the Cloud to Cannes (Well, Let’s Hope That’s Only Metaphorically the Case)

By Stuart Boardman, KPN 

On Wednesday, April 25 at The Open Group Cannes Conference, we have a whole stream of sessions that will discuss Cloud Computing. There’s a whole bunch of interesting presentations on the program but one of the things that struck me in particular is how many of them are dealing with Cloud as an ecosystem. As a member of The Open Group’s Cloud Work Group, this is not a huge surprise for me (we do tell each other what we’re working on!), but it also happens to be a major preoccupation of mine at the moment, so I tend to notice occurrences of the word “ecosystem” or of related concepts. Outside of The Open Group in the wider Enterprise Architecture community, there’s more and more being written about ecosystems. The topic was the focus of my last Open Group blog post.

On Wednesday, you’ll hear Boeing’s TJ Virdi and Kevin Sevigny with Conexiam Solutions talking about ecosystems in the context of Cloud and TOGAF. They’ll be talking about “how the Cloud Ecosystem impacts Enterprise Architecture,” which will include “an overview of how to use TOGAF to develop an Enterprise Architecture for the Cloud ecosystem.”  This work comes out of the Using TOGAF for Cloud Ecosystem project (TOGAF-CE), which they co-chair. Capgemini’s Mark Skilton kicks off the day with a session called “Selecting and Delivering Successful Cloud Products and Services.” If you’re wondering what that has to do with ecosystems, Mark pointed out to me that  “the ecosystem in that sense is business technology dynamics and the structural, trust models that….” – well I won’t spoil it – come along and hear a nice business take on the subject. In fact, I wonder who on that Wednesday won’t be talking in one way or another about ecosystems. Take a look at the agenda for yourself.

By the way, apart from the TOGAF-CE project, several other current Open Group projects deal with ecosystems. The Cloud Interaction Ecosystem Language (CIEL) project is developing a visual language for Cloud ecosystems and then there’s the Cloud Interoperability and Portability project, which inevitably has to concern itself with ecosystems. So it’s clearly a significant concept for people to be thinking about.

In my own presentation I’ll be zooming in on Social Business as a Cloud-like phenomenon. “What has that to do with Cloud?” you might be asking. Well quite a lot actually. Technologically most social business tools have a Cloud delivery model. But far more importantly, a social business involves interaction across parties who may not have any formal relationship (e.g. provider to not-yet customer or to potential partner) or where the formal aspect of their relationship doesn’t include the social business part (e.g. engaging a customer in a co-creation initiative). In some forms it’s really an extended enterprise. So even if there were no computing involved, the relationship has the same Cloud-like, loosely coupled, service oriented nature. And of course there is a lot of information technology involved. Moreover, most of the interaction takes place over Internet-based services. In a successful social business these will not be the proprietary services of the enterprise but the public services of one or more market-leading providers, because that’s where your customers and partners interact. Or to put it another way, you don’t engage your customers by making them come to you but by going to them.

I don’t want to stretch this too far. The point here is not to insist that Social Business is a form of Cloud but rather that they have comparable types of ecosystem and that they are therefore amenable to similar analysis methods. There are of course essential parts of Cloud that are purely the business of the provider and are quite irrelevant to the ecosystem (the ecosystem only cares about what they deliver). Interestingly one can’t really say that about social business – that really is all about the ecosystem. It may not matter whether we think the IT underlying social business is really Cloud computing but it most certainly is part of the ecosystem.

In my presentation, I’ll be looking at techniques we can use to help us understand what’s going on in an ecosystem and how changes in one place can have unexpected effects elsewhere – if we don’t understand it properly. My focus is one part of the whole body of work that needs to be done. There is work being done on how we can capture the essence of a Cloud ecosystem (CIEL). There is work being done on how we can use TOGAF to help us describe the architecture of a Cloud ecosystem (TOGAF-CE). There is work being done on how to model ecosystem behavior in general (me and others). And there’s work being done in many places on how ecosystem participants can interoperate. At some point we’ll need to bring all this together but for now, as long as we all keep talking to each other, each of the focus areas will enrich the others. In fact I think it’s too early to try to construct some kind of grand unified theory out of it all. We’d just produce something overly complex that no one knew how to use. I hope that TOGAF Next will give us a home for some of this – not in core TOGAF but as part of the overall guidance – because enterprises are more and more drawn into and dependent upon their surrounding ecosystems and have an increasing need to understand them. And Cloud is accelerating that process.

You can expect a lot of interesting insights on Wednesday, April 25. Come along and please challenge the presenters, because we too have a lot to learn.

Stuart Boardman is a Senior Business Consultant with KPN where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity. 


Filed under Cloud, Conference, Enterprise Architecture, TOGAF®

The SOA RA Provides a Way to Assess or Design your SOA

By Dr. Ali Arsanjani, IBM

The Standard in Summary

The SOA Reference Architecture (SOA RA) published by The Open Group provides a prescriptive means of assessment or creation of a service-oriented architecture (SOA) solution, including the architecture for Cloud-based solutions. It does so by grouping the capabilities required of an SOA into a set of layers, each containing a set of Architectural Building Blocks (ABBs) that can serve as a checklist for an implementation of an SOA, depending on the level of maturity required by an organization. The SOA RA is intended to support organizations adopting SOA, product vendors building SOA infrastructure components, integrators engaged in the building of SOA solutions and standards bodies engaged in the specifications for SOA.

The SOA RA provides a vendor-neutral, product-agnostic perspective on the logical architecture that supports the service ecosystem of providers, consumers and brokers bound together by services, their interfaces and contracts.

A high-level description of the SOA RA can be found above. The layers shown in figure 1 above provide a starting point for the separation of concerns required to build or assess an SOA. Each group of separated concerns is represented by a “layer” of its own.

Starting with the robust and complete set of building blocks provided by the standard is a key enabler for achieving the value propositions of an SOA, such as business agility, cost reduction, and faster time to market enabled by a flexible IT infrastructure. It does so in part by providing insights, patterns and the building blocks for integrating fundamental elements of an SOA into a solution or Enterprise Architecture.

Background

In recent years, the decoupling of interface from implementation at the programming level has been elevated to an architectural level by loosely coupling the interface of a service consumed by a service consumer from its implementation by a service provider and by decoupling the implementation from its binding. This concept is reflected in a layer in the architecture corresponding to each of these notions: i.e., a layer for services and a layer for the runtime operational systems.

This style of architecture has come to be known as SOA, where rather than the implementations of components being exposed and known, only the services provided are published and consumers are insulated from the details of the underlying implementation by a provider.

Thus, an SOA enables business and IT convergence through agreement on a (contract consisting of a) set of business-aligned IT services that collectively support an organization’s business processes and business goals. Not only does it provide flexible decoupled functionality that can be reused, but it also provides the mechanisms to externalize variations of quality of service in declarative specifications such as WS-Policy and related standards.
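The decoupling of interface from implementation described above can be sketched in code. In this minimal, hypothetical example (the service name, provider, and values are invented for illustration), the consumer binds only to the published service contract, so the provider implementation and its binding can change without touching consumer code:

```python
from abc import ABC, abstractmethod

# Sketch of the SOA decoupling principle: the consumer depends on the
# service interface (the contract), never on a provider's implementation.

class QuoteService(ABC):
    """The published service contract consumers bind to."""
    @abstractmethod
    def get_quote(self, symbol: str) -> float: ...

class InHouseQuoteProvider(QuoteService):
    """One possible implementation; could be swapped for a remote provider."""
    def get_quote(self, symbol: str) -> float:
        return 42.0  # stand-in for a real pricing lookup

def consumer(service: QuoteService) -> float:
    # The consumer can be re-bound to any QuoteService provider unchanged.
    return service.get_quote("ACME")

print(consumer(InHouseQuoteProvider()))  # 42.0
```

Binding `consumer` to a different `QuoteService` subclass changes the provider without any change to the consumer, which is the architectural point the Services and Operational Systems layers separate.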

The SOA RA can be used as a template for defining the capabilities and building blocks of SOA-based solutions. The standard provides a checklist of the key elements that must be considered when an SOA solution is evaluated or architected, through a definition of layers and the architectural building blocks within those layers. The elements underlying the SOA RA are based on a meta-model defined in figure 2:

The main SOA RA abstractions, represented in figure 2 above, collectively provide a logical design of an SOA and the inter-relationships between the set of architectural building blocks residing in its layers. During architectural assessments or the design of a solution or Enterprise Architecture, the SOA RA provides enterprise architects with a set of architectural building blocks and their associations in each layer, the available options, and the architectural and design decisions that need to be made at each layer. This allows organizations to gradually mature into the implementation of more intricate SOA designs.

In the next blog post I will describe the details behind each of the layers.

Here is a brief description of those layers as an introduction.

Brief Description of Layers

The layers defined in the SOA RA each provide a set of capabilities that are realized through the use of architectural building blocks. Note that there are five functional layers providing direct business value in SOA solutions. An additional four layers (Integration, Information, QoS and Governance) provide cross-cutting capabilities expected of SOA and Cloud-based solutions. A brief description of these layers is provided below:

  • Operational Systems Layer – captures the new and existing organizational infrastructure needed to support the SOA solution at design, deployment and run time.
  • Service Component Layer – contains software components, each of which provides the implementation or “realization” of a service, or of an operation on a service. The layer also contains the functional and technical components that enable a service component to realize one or more services.
  • Services Layer – consists of all the services defined within the SOA. The layer can be thought of as containing the service descriptions for business capabilities, services and their IT manifestations used or created during design time, as well as the runtime service contracts and descriptions used at runtime.
  • Business Process Layer – covers the process representation, composition methods and building blocks for aggregating loosely coupled services into a sequenced process aligned with business goals.
  • Consumer Layer – where consumers interact with the SOA. It enables an SOA solution to support a client-independent, channel-agnostic set of functionality that is separately consumed and rendered through one or more channels (client platforms and devices).
  • Integration Layer – provides the capability to mediate, which includes transformation, routing and protocol conversion, in order to transport service requests from the service requester to the correct service provider.
  • Quality of Service Layer – addresses non-functional requirement (NFR) issues as a primary concern of SOA and provides a focal point for dealing with them in any given solution.
  • Information Architecture Layer – is responsible for manifesting a unified representation of the information aspect of an organization as provided by its IT services, applications and systems, supporting business needs and processes and aligned with the business vocabulary (glossary and terms). This enables the SOA to support consistency of data and of data quality.
  • Governance Layer – contains the registries, repositories and other capabilities required to govern the SOA within the SOA ecosystem, adapted to match and support the organization’s target SOA goals.
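
To illustrate how the layers can serve as an assessment checklist, the hypothetical sketch below records which architectural building blocks (ABBs) an organization has in place per layer. The layer names come from the standard, but the ABB entries and the coverage metric are invented for this example:

```python
# Hypothetical assessment checklist: SOA RA layers mapped to the ABBs an
# organization has implemented. The ABB names and metric are illustrative.

soa_ra_layers = [
    "Operational Systems", "Service Component", "Services",
    "Business Process", "Consumer",                      # functional layers
    "Integration", "Quality of Service",
    "Information Architecture", "Governance",            # cross-cutting layers
]

# Example inventory: ABBs this (fictional) organization has in place.
implemented = {
    "Services": ["service registry", "service contracts"],
    "Integration": ["message transformation"],
    "Governance": [],  # identified as a gap
}

def coverage(layers, implemented):
    """Fraction of layers with at least one ABB in place."""
    covered = sum(1 for layer in layers if implemented.get(layer))
    return covered / len(layers)

print(f"{coverage(soa_ra_layers, implemented):.0%} of layers have ABBs in place")
```

Walking each layer’s ABBs this way, at whatever depth the organization’s required maturity level demands, is one concrete reading of “serve as a checklist for an implementation of an SOA.”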

Conclusion

The SOA RA is an excellent tool in the toolbox for Enterprise and Application Architects. It maps out the path for enterprise architects to assess their current SOA implementations against an industry-agreed standard and to define and customize their own internal implementations and product mappings against it. The SOA RA provides the tools necessary to allow architects to speak in a standard, consistent vocabulary with business-line executives when designing solutions for the enterprise. As the industry moves toward the Cloud, the SOA RA ensures a strong foundation that paves the way toward Cloud Computing, helping businesses leverage the investments they have made in SOA services and keeping them on solid ground while moving toward the Cloud Computing model. Lastly, the SOA RA defines the architecture for all kinds of services-based solutions, where Cloud extends and leverages services into the infrastructure in a virtualized, elastic and monitored model of pooled resources on a grander scale.

Dr. Ali Arsanjani is Chief Technology Officer (CTO) for SOA, BPM & Emerging Technologies within IBM Global Services where he leads a team responsible for developing worldwide competency in SOA/ BPM and increasing delivery excellence of SOA solutions using IBM and non-IBM tools and SOA offerings. Dr. Arsanjani represents IBM in standards bodies such as The Open Group and is responsible for co-chairing the SOA Reference Architecture, SOA Maturity Model, Cloud Computing Reference Architecture standards within that body.

3 Comments

Filed under Cloud, Cloud/SOA

Top 5 Tell-tale Signs of SOA Evolving to the Cloud

By E.G. Nadhan, HP Enterprise Services

Rewind two decades and visualize what a forward-thinking prediction would have looked like then — IT is headed towards a technology-agnostic, service-based applications and infrastructure environment, consumed when needed, with usage-based chargeback models in place for elastic resources. A forward-thinking tweet would have simply said: IT is headed for the Cloud. These concepts have steadily evolved, first within applications, with virtualization expediting their evolution within infrastructure across enterprises. Thus, IT has followed an evolutionary pattern over the years, forcing enterprises to continuously revisit their overall strategy.

What started as SOA has evolved into the Cloud.  Here are five tell-tale signs:

  • As-a-service model:  Application interfaces being exposed as services in a standardized fashion were the technical foundation of SOA. This concept was slowly but steadily extended to the infrastructure environment, leading to IaaS and eventually [pick a letter of your choice]aaS. Infrastructure components, provisioned as services, had to be taken into account as part of the overall SOA strategy. Given the vital role of IaaS within the Cloud, a holistic, enterprise-wide SOA strategy is essential for successful Cloud deployment.
  • Location transparency: Prior to service orientation, applications had to be aware of the logistics of information sources. Service orientation introduced location transparency so that the specifics of the physical location where the services were executed did not matter as much. Extending this paradigm, Cloud leverages the available resources as and when needed for execution of the services provided.
  • Virtualization: Service orientation acted as a catalyst for virtualization of application interfaces wherein the standardization of the interfaces was given more importance than the actual execution of the services. Virtualization was extended to infrastructure components facilitating their rapid provisioning as long as it met the experience expectations of the consumers.
  • Hardware: IaaS provisioning based on virtualization along with the partitioning of existing physical hardware into logically consumable segments resulted in hardware being shared across multiple applications. Cloud extends this notion into a pool of hardware resources being shared across multiple applications.
  • Chargeback: SOA was initially focused on service implementation after which the focus shifted to SOA Governance and SOA Management including the tracking of metrics and chargeback mechanism. Cloud is following a similar model, which is why the challenges of metering and chargeback mechanisms that IT is dealing with in the Cloud are fundamentally similar to monitoring service consumption across the enterprise.
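The location-transparency sign in the list above can be illustrated with a minimal sketch: the consumer binds to a logical service name, and a registry resolves it to whatever endpoint currently hosts the service. All names here (`ServiceRegistry`, the endpoint URLs) are hypothetical, purely for illustration, not any particular product's API.

```python
# Minimal sketch of location transparency via a service registry.
# The consumer never hard-codes where a service runs; it asks a
# registry to resolve a logical name into a current endpoint.

class ServiceRegistry:
    """Maps logical service names to currently available endpoints."""

    def __init__(self):
        self._endpoints = {}  # service name -> list of endpoint URLs

    def register(self, name, endpoint):
        self._endpoints.setdefault(name, []).append(endpoint)

    def resolve(self, name):
        # The physical host can change (or be Cloud-provisioned on demand)
        # without any change to the consuming code.
        endpoints = self._endpoints.get(name)
        if not endpoints:
            raise LookupError(f"no provider registered for {name!r}")
        return endpoints[0]  # a real registry might load-balance here

registry = ServiceRegistry()
registry.register("CustomerLookup", "https://node-17.internal/customers")

# The caller depends only on the logical name "CustomerLookup".
endpoint = registry.resolve("CustomerLookup")
```

In a Cloud setting, the same lookup indirection is what lets elastic resources come and go behind a stable service contract.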

These are my tell-tale signs. I would be very interested to know about practical instances of similar signs on your end.

Figure 1: The Open Group Service Oriented Cloud Computing Infrastructure Technical Standard

It is no surprise that the very first Cloud technical standard published by The Open Group — Service Oriented Cloud Computing Infrastructure — initially started as the Service Oriented Infrastructure (SOI) project within The Open Group SOA Work Group. As its co-chair, I had requested extending SOI into The Open Group Cloud Work Group when it was formed, making it a joint project across both work groups. Today, you will see how the SOCCI technical standard calls out the evolution of SOI into SOCCI for the Cloud.

To find out more about the new SOCCI technical standard, please check out: http://www3.opengroup.org/news/press/open-group-publishes-new-standards-soa-and-cloud

 This blog post was originally posted on HP’s Technical Support Services Blog.

HP Distinguished Technologist E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.

4 Comments

Filed under Cloud, Cloud/SOA, Service Oriented Architecture, Standards

Open Group Security Gurus Dissect the Cloud: Higher or Lower Risk?

By Dana Gardner, Interarbor Solutions

For some, any move to the Cloud — at least the public Cloud — means a higher risk for security.

For others, relying more on a public Cloud provider means better security. There’s more of a concentrated and comprehensive focus on security best practices that are perhaps better implemented and monitored centrally in the major public Clouds.

And so which is it? Is Cloud a positive or negative when it comes to cyber security? And what of hybrid models that combine public and private Cloud activities, how is security impacted in those cases?

We posed these and other questions to a panel of security experts at last week’s Open Group Conference in San Francisco to deeply examine how Cloud and security come together — for better or worse.

The panel: Jim Hietala, Vice President of Security for The Open Group; Stuart Boardman, Senior Business Consultant at KPN, where he co-leads the Enterprise Architecture Practice as well as the Cloud Computing Solutions Group; Dave Gilmour, an Associate at Metaplexity Associates and a Director at PreterLex Ltd., and Mary Ann Mezzapelle, Strategist for Enterprise Services and Chief Technologist for Security Services at HP.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Is this notion of going outside the firewall fundamentally a good or bad thing when it comes to security?

Hietala: It can be either. Talking to security people in large companies, frequently what I hear is that with adoption of some of those services, their policy is either let’s try and block that until we get a grip on how to do it right, or let’s establish a policy that says we just don’t use certain kinds of Cloud services. Data I see says that that’s really a failed strategy. Adoption is happening whether they embrace it or not.

The real issue is how you do that in a planned, strategic way, as opposed to letting services like Dropbox and other kinds of Cloud Collaboration services just happen. So it’s really about getting some forethought around how do we do this the right way, picking the right services that meet your security objectives, and going from there.

Gardner: Is Cloud Computing good or bad for security purposes?

Boardman: It’s simply a fact, and it’s something that we need to learn to live with.

What I’ve noticed through my own work is a lot of enterprise security policies were written before we had Cloud, but when we had private web applications that you might call Cloud these days, and the policies tend to be directed toward staff’s private use of the Cloud.

Then you run into problems, because you read something in policy — and if you interpret that as meaning Cloud, it means you can’t do it. And if you say it’s not Cloud, then you haven’t got any policy about it at all. Enterprises need to sit down and think, “What would it mean to us to make use of Cloud services and to ask as well, what are we likely to do with Cloud services?”

Gardner: Dave, is there an added impetus for Cloud providers to be somewhat more secure than enterprises?

Gilmour: It depends on the enterprise that they’re actually supplying to. If you’re in a heavily regulated industry, you have a different view of what levels of security you need and want, and therefore what you’re going to impose contractually on your Cloud supplier. That means that the different Cloud suppliers are going to have to attack different industries with different levels of security arrangements.

The problem there is that the penalty regimes are always going to say, “Well, if the security lapses, you’re going to get off with two months of not paying” or something like that. That kind of attitude isn’t going to work in this kind of security.

What I don’t understand is exactly how secure Cloud provision is going to be enabled and governed under tight regimes like that.

An opportunity

Gardner: Jim, we’ve seen in the public sector that governments are recognizing that Cloud models could be a benefit to them. They can reduce redundancy. They can control and standardize. They’re putting in place some definitions, implementation standards, and so forth. Is the vanguard of correct Cloud Computing with security in mind being managed by governments at this point?

Hietala: I’d say that they’re at the forefront. Some of these shared government services, where they stand up Cloud and make it available to lots of different departments in a government, have the ability to do what they want from a security standpoint, not relying on a public provider, and get it right from their perspective and meet their requirements. They then take that consistent service out to lots of departments that may not have had the resources to get IT security right, when they were doing it themselves. So I think you can make a case for that.

Gardner: Stuart, being involved with standards activities yourself, does moving to the Cloud provide a better environment for managing, maintaining, instilling, and improving on standards than enterprise by enterprise by enterprise? As I say, we’re looking at a larger pool and therefore that strikes me as possibly being a better place to invoke and manage standards.

Boardman: Dana, that’s a really good point, and I do agree. Also, in the security field, we have an advantage in the sense that there are quite a lot of standards out there to deal with interoperability, exchange of policy, exchange of credentials, which we can use. If we adopt those, then we’ve got a much better chance of getting those standards used widely in the Cloud world than in an individual enterprise, with an individual supplier, where it’s not negotiation, but “you use my API, and it looks like this.”

Having said that, there are a lot of well-known Cloud providers who do not currently support those standards and they need a strong commercial reason to do it. So it’s going to be a question of the balance. Will we get enough specific weight of people who are using it to force the others to come on board? And I have no idea what the answer to that is.

Gardner: We’ve also seen that cooperation is an important aspect of security, knowing what’s going on on other people’s networks, being able to share information about what the threats are, remediation, working to move quickly and comprehensively when there are security issues across different networks.

Is that a case, Dave, where having a Cloud environment is a benefit? That is to say more sharing about what’s happening across networks for many companies that are clients or customers of a Cloud provider rather than perhaps spotty sharing when it comes to company by company?

Gilmour: There is something to be said for that, Dana. Part of the issue, though, is that companies are individually responsible for their data. They’re individually responsible to a regulator or to their clients for their data. The question then becomes that as soon as you start to share a certain aspect of the security, you’re de facto sharing the weaknesses as well as the strengths.

So it’s a two-edged sword. One of the problems we have is that until we mature a little bit more, we won’t be able to actually see which side is the sharpest.

Gardner: So our premise that Cloud is good and bad for security is holding up, but I’m wondering whether the same things that make you a risk in a private setting — poor adherence to standards, no good governance, too many technologies that are not being measured and controlled, not instilling good behavior in your employees and then enforcing that — wouldn’t this be the same either way? Is it really Cloud or not Cloud, or is it good security practices or not good security practices? Mary Ann?

No accountability

Mezzapelle: You’re right. It’s a little bit of that “garbage in, garbage out,” if you don’t have the basic things in place in your enterprise, which means the policies, the governance cycle, the audit, and the tracking, because it doesn’t matter if you don’t measure it and track it, and if there is no business accountability.

David said it — each individual company is responsible for its own security, but I would say that it’s the business owner that’s responsible for the security, because they’re the ones that ultimately have to answer that question for themselves in their own business environment: “Is it enough for what I have to get done? Is the agility more important than the flexibility in getting to some systems or the accessibility for other people, as it is with some of the ubiquitous computing?”

So you’re right. If it’s an ugly situation within your enterprise, it’s going to get worse when you do outsourcing, out-tasking, or anything else you want to call within the Cloud environment. One of the things that we say is that organizations not only need to know their technology, but they have to get better at relationship management, understanding who their partners are, and being able to negotiate and manage that effectively through a series of relationships, not just transactions.

Gardner: If data and sharing data is so important, it strikes me that the Cloud component is going to be part of that, especially if we’re dealing with business processes across organizations, doing joins, comparing and contrasting data, crunching it and sharing it, making data actually part of the business, a revenue generation activity — all of that seems prominent and likely.

So to you, Stuart, what is the issue now with data in the Cloud? Is it good, bad, or just the same double-edged sword, and it just depends how you manage and do it?

Boardman: Dana, I don’t know whether we really want to be putting our data in the Cloud, so much as putting the access to our data into the Cloud. There are all kinds of issues you’re going to run up against, as soon as you start putting your source information out into the Cloud, not the least privacy and that kind of thing.

A bunch of APIs

What you can do is simply say, “What information do I have that might be interesting to people? If it’s a private Cloud in a large organization, how can I make that available to share elsewhere in the organization?” Or maybe it’s really going out into public. What a government, for example, can be thinking about is making information services available, not just what you go and get from them that they already published. But “this is the information,” a bunch of APIs if you like. I prefer to call them data services, and to make those available.

So, if you do it properly, you have a layer of security in front of your data. You’re not letting people come in and do joins across all your tables. You’re providing information. That does require you then to engage your users in what is it that they want and what they want to do. Maybe there are people out there who want to take a bit of your information and a bit of somebody else’s and mash it together, provide added value. That’s great. Let’s go for that and not try and answer every possible question in advance.
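Boardman's point — put a layer of data services in front of the data, rather than letting consumers join across raw tables — can be sketched minimally as follows. The table, view names, and field lists are hypothetical, invented for illustration only.

```python
# Illustrative sketch: a data service publishes curated, read-only
# views instead of raw table access, so consumers get information
# without the ability to join across (or even see) underlying tables.

RAW_TABLES = {
    "customers": [{"id": 1, "name": "Acme", "credit_card": "4111..."}],
}

# Whitelist of the fields each published view may return.
PUBLISHED_VIEWS = {
    "customer_directory": ("customers", ["id", "name"]),
}

def query_view(view_name):
    """Serve only published views, projecting only whitelisted fields."""
    if view_name not in PUBLISHED_VIEWS:
        raise PermissionError(f"view {view_name!r} is not published")
    table, fields = PUBLISHED_VIEWS[view_name]
    # Only whitelisted fields cross the boundary; credit_card never does.
    return [{f: row[f] for f in fields} for row in RAW_TABLES[table]]

directory = query_view("customer_directory")  # sensitive fields filtered out
```

The security layer is the `PUBLISHED_VIEWS` boundary: consumers negotiate which views exist, not which tables they can touch.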

Gardner: Dave, do you agree with that, or do you think that there is a place in the Cloud for some data?

Gilmour: There’s definitely a place in the Cloud for some data. I get the impression that something like the insurance industry is going to emerge out of this, where you’ll have a secondary Cloud. You’ll have secondary providers who will provide to the front-end providers. They might do things like archiving and that sort of thing.

Now, if you have that situation where your contractual relationship is two steps away, then you have to be very confident and certain of your cloud partner, and it has to actually therefore encompass a very strong level of governance.

The other issue you have is that you’ve got then the intersection of your governance requirements with that of the cloud provider’s governance requirements. Therefore you have to have a really strongly — and I hate to use the word — architected set of interfaces, so that you can understand how that governance is actually going to operate.

Gardner: Wouldn’t data perhaps be safer in a cloud than if they have a poorly managed network?

Mezzapelle: There is data in the Cloud and there will continue to be data in the Cloud, whether you want it there or not. The best organizations are going to start understanding that they can’t control it that way and that perimeter-like approach that we’ve been talking about getting away from for the last five or seven years.

So what we want to talk about is data-centric security, where you understand, based on role or context, who is going to access the information and for what reason. I think there is a better opportunity for services like storage, whether it’s for archiving or for near term use.

There are also other services that you don’t want to have to pay for 12 months out of the year, but that you might need independently. For instance, when you’re running a marketing campaign, you already share your data with some of your marketing partners. Or if you’re doing your payroll, you’re sharing that data through some of the national providers.

Data in different places

So there already is a lot of data in a lot of different places, whether you want Cloud or not, but the context is, it’s not in your perimeter, under your direct control, all of the time. The better you get at managing it wherever it is specific to the context, the better off you will be.

Hietala: It’s a slippery slope [when it comes to customer data]. That’s the most dangerous data to stick out in a Cloud service, if you ask me. If it’s personally identifiable information, then you get the privacy concerns that Stuart talked about. So to the extent you’re looking at putting that kind of data in a Cloud, looking at the Cloud service and trying to determine if we can apply some encryption, apply the sensible security controls to ensure that if that data gets loose, you’re not ending up in the headlines of The Wall Street Journal.

Gardner: Dave, you said there will be different levels on a regulatory basis for security. Wouldn’t that also play with data? Wouldn’t there be different types of data and therefore a spectrum of security and availability to that data?

Gilmour: You’re right. If we come back to Facebook as an example, Facebook is data that, even if it’s data about our known customers, it’s stuff that they have put out there with their will. The data that they give us, they have given to us for a purpose, and it is not for us then to distribute that data or make it available elsewhere. The fact that it may be the same data is not relevant to the discussion.

Three-dimensional solution

That’s where I think we are going to end up with not just one layer or two layers. We’re going to end up with a sort of a three-dimensional solution space. We’re going to work out exactly which chunk we’re going to handle in which way. There will be significant areas where these things crossover.

The other thing we shouldn’t forget is that data includes our software, and that’s something that people forget. Software nowadays is out in the Cloud, under current ways of running things, and you don’t even always know where it’s executing. So if you don’t know where your software is executing, how do you know where your data is?

It’s going to have to be just handled one way or another, and I think it’s going to be one of these things where it’s going to be shades of gray, because it cannot be black and white. The question is going to be, what’s the threshold shade of gray that’s acceptable.

Gardner: Mary Ann, to this notion of the different layers of security for different types of data, is there anything happening in the market that you’re aware of that’s already moving in that direction?

Mezzapelle: The experience that I have is mostly in some of the business frameworks for particular industries, like healthcare and what it takes to comply with the HIPAA regulation, or in the financial services industry, or in consumer products where you have to comply with the PCI regulations.

There has continued to be an issue around information lifecycle management, which is categorizing your data. Within a company, you might have had a document that you coded private, confidential, top secret, or whatever. So you might have had three or four levels for a document.

You’ve already talked about how complex it’s going to be as you move into trying to understand, not only for that data, that the name Mary Ann Mezzapelle happens to be in five or six different business systems, over 100 instances around the world.

That’s the importance of something like an Enterprise Architecture that can help you understand that you’re not just talking about the technology components, but the information, what they mean, and how they are prioritized or critical to the business, which sometimes comes up in a business continuity plan from a system point of view. That’s where I’ve advised clients on where they might start looking to how they connect the business criticality with a piece of information.

One last thing. Those regulations don’t necessarily mean that you’re secure. It makes for good basic health, but that doesn’t mean that it’s ultimately protected. You have to do a risk assessment based on your own environment, the bad actors that you expect, and the priorities based on that.

Leaving security to the end

Boardman: I just wanted to pick up here, because Mary Ann spoke about Enterprise Architecture. One of my bugbears — and I call myself an enterprise architect — is that we have a terrible habit of leaving security to the end. We don’t architect security into our Enterprise Architecture. It’s a techie thing, and we’ll fix that at the back. There are also people in the security world who are techies, and they think that they will do it that way as well.

I don’t know how long ago it was published, but there was an activity to look at bringing the SABSA Methodology from security together with TOGAF®. There was a white paper published a few weeks ago.

The Open Group has been doing some really good work on bringing security right in to the process of EA.

Hietala: In the next version of TOGAF, which has already started, there will be a whole emphasis on making sure that security is better represented in some of the TOGAF guidance. That’s ongoing work here at The Open Group.

Gardner: As I listen, it sounds as if the in-the-Cloud or out-of-the-Cloud security continuum is perhaps the wrong way to look at it. If you have a lifecycle approach to services and to data, then you’ll have a way in which you can approach data uses for certain instances and certain requirements, and that would then apply to a variety of different private Cloud, public Cloud, and hybrid Cloud scenarios.

Is that where we need to go, perhaps have more of this lifecycle approach to services and data that would accommodate any number of different scenarios in terms of hosting access and availability? The Cloud seems inevitable. So what we really need to focus on are the services and the data.

Boardman: That’s part of it. That needs to be tied in with the risk-based approach. So if we have done that, we can then pick up on that information and we can look at a concrete situation, what have we got here, what do we want to do with it. We can then compare that information. We can assess our risk based on what we have done around the lifecycle. We can understand specifically what we might be thinking about putting where and come up with a sensible risk approach.

You may come to the conclusion in some cases that the risk is too high and the mitigation too expensive. In others, you may say, no, because we understand our information and we understand the risk situation, we can live with that, it’s fine.

Gardner: It sounds as if we are coming at this as an underwriter for an insurance company. Is that the way to look at it?

Current risk

Gilmour: That’s eminently sensible. You have the mortality tables, you have the current risk, and you just work the two together and work out what’s the premium. That’s probably a very good paradigm to give us guidance actually as to how we should approach intellectually the problem.

Mezzapelle: One of the problems is that we don’t have those actuarial tables yet. That’s a little bit of an issue for a lot of people when they talk about, “I’ve got $100 to spend on security. Where am I going to spend it this year? Am I going to spend it on firewalls? Am I going to spend it on information lifecycle management assessment? What am I going to spend it on?” Some of the research that we have been doing at HP is to try to get that into something that’s more of a statistic.

So, when you have a particular project that does a certain kind of security implementation, you can see what the business return on it is and how it actually lowers risk. We found that it’s better to spend your money on getting a better system to patch your systems than it is to do some other kind of content filtering or something like that.

Gardner: Perhaps what we need is the equivalent of an Underwriters Laboratories (UL) for permeable organizational IT assets, where the security stamp of approval comes in high or low. Then you could get your insurance insight; maybe something for The Open Group to look into. Any thoughts about how standards and a consortium approach would come into that?

Hietala: I don’t know about the UL for all security things. That sounds like a risky proposition.

Gardner: It could be fairly popular and remunerative.

Hietala: It could.

Mezzapelle: An unending job.

Hietala: I will say we have one active project in the Security Forum that is looking at trying to allow organizations to measure and understand risk dependencies that they inherit from other organizations.

So if I’m outsourcing a function to XYZ corporation, being able to measure what risk am I inheriting from them by virtue of them doing some IT processing for me, could be a Cloud provider or it could be somebody doing a business process for me, whatever. So there’s work going on there.

I heard just last week about a NSF funded project here in the U.S. to do the same sort of thing, to look at trying to measure risk in a predictable way. So there are things going on out there.

Gardner: We have to wrap up, I’m afraid, but Stuart, it seems as if currently it’s the larger public Cloud providers, the likes of Amazon and Google among others, that might be playing the role of all of these entities we are talking about. They are their own self-insurer. They are their own underwriter. They are their own risk assessor, like a UL. Do you think that’s going to continue to be the case?

Boardman: No, I think that as Cloud adoption increases, you will have a greater weight of consumer organizations who will need to do that themselves. You look at the question that it’s not just responsibility, but it’s also accountability. At the end of the day, you’re always accountable for the data that you hold. It doesn’t matter where you put it and how many other parties they subcontract that out to.

The weight will change

So there’s a need to have that, and as the adoption increases, there’s less fear and more, “Let’s do something about it.” Then, I think the weight will change.

Plus, of course, there are other parties coming into this world, the world that Amazon has created. I’d imagine that HP is probably one of them as well, but all the big names in IT are moving in here, and I suspect that also for those companies there’s a differentiator in knowing how to do this properly in their history of enterprise involvement.

So yeah, I think it will change. That’s no offense to Amazon, etc. I just think that the balance is going to change.

Gilmour: Yes. I think that’s how it has to go. The question that then arises is, who is going to police the policeman, and how is that going to happen? Every company is going to be using the Cloud. Even the Cloud suppliers are using the Cloud. So how is it going to work? It’s one of these ever-decreasing circles.

Mezzapelle: At this point, I think it’s going to be more evolution than revolution, but I’m also one of the people who’ve been in that part of the business — IT services — for the last 20 years and have seen it morph in a little bit different way.

Stuart is right that there’s going to be a convergence of the consumer-driven, cloud-based model, which Amazon and Google represent, with an enterprise approach that corporations like HP are representing. It’s somewhere in the middle where we can bring the service level commitments, the options for security, the options for other things that make it more reliable and risk-averse for large corporations to take advantage of it.

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

1 Comment

Filed under Cloud, Cloud/SOA, Conference, Cybersecurity, Information security, Security Architecture

San Francisco Conference Observations: Enterprise Transformation, Enterprise Architecture, SOA and a Splash of Cloud Computing

By Chris Harding, The Open Group 

This week I have been at The Open Group conference in San Francisco. The theme was Enterprise Transformation, which in simple terms means changing how your business works to take advantage of the latest developments in IT.

Evidence of these developments is all around. I took a break and went for coffee and a sandwich, to a little cafe down on Pine and Leavenworth that seemed to be run by and for the Millennium generation. True to type, my server pulled out a cellphone with a device attached through which I swiped my credit card; an app read my screen-scrawled signature and the transaction was complete.

Then dinner. We spoke to the hotel concierge, she tapped a few keys on her terminal and, hey presto, we had a window table at a restaurant on Fisherman’s Wharf. No lengthy phone negotiations with the Maitre d’. We were just connected with the resource that we needed, quickly and efficiently.

The power of ubiquitous technology to transform the enterprise was the theme of the inspirational plenary presentation given by Andy Mulholland, Global CTO at Capgemini. Mobility, the Cloud, and big data are the three powerful technical forces that must be harnessed by the architect to move the business to smarter operation and new markets.

Jeanne Ross of the MIT Sloan School of Management shared her recipe for architecting business success, with examples drawn from several major companies. Indomitable and inimitable, she always challenges her audience to think through the issues. This time we responded with, “Don’t small companies need architecture too?” Of course they do, was the answer, but the architecture of a big corporation is very different from that of a corner cafe.

Corporations don’t come much bigger than Nissan. Celso Guiotoko, Corporate VP and CIO at the Nissan Motor Company, told us how Nissan are using enterprise architecture for business transformation. Highlights included the concept of information capitalization, the rationalization of the application portfolio through SOA and reusable services, and the delivery of technology resources through a private cloud platform.

The set of stimulating plenary presentations on the first day of the conference was completed by Lauren States, VP and CTO Cloud Computing and Growth Initiatives at IBM. Everyone now expects business results from technical change, and there is huge pressure on the people involved to deliver results that meet these expectations. IT enablement is one part of the answer, but it must be matched by business process excellence and values-based culture for real productivity and growth.

My role in The Open Group is to support our work on Cloud Computing and SOA, and these activities took all my attention after the initial plenary. If you had thought, five years ago, that no technical trend could possibly generate more interest and excitement than SOA, Cloud Computing would now be proving you wrong.

But interest in SOA continues, and we had a SOA stream including presentations of forward thinking on how to use SOA to deliver agility, and on SOA governance, as well as presentations describing and explaining the use of key Open Group SOA standards and guides: the Open Group Service Integration Maturity Model (OSIMM), the SOA Reference Architecture, and the Guide to Using TOGAF for SOA.

We then moved into the Cloud, with a presentation by Mike Walker of Microsoft on why Enterprise Architecture must lead Cloud strategy and planning. The “why” was followed by the “how”: ZapThink’s Jason Bloomberg described Representational State Transfer (REST), which many now see as a key foundational principle for Cloud architecture. But perhaps it is not the only principle; a later presentation suggested a three-tier approach with the client tier, including mobile devices, accessing RESTful information resources through a middle tier of agents that compose resources and carry out transactions (ACT).
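To make the three-tier idea concrete, here is a minimal, illustrative sketch: a client calls an agent in the middle tier, which composes RESTful information resources through a uniform interface and exposes the resulting transaction as a new addressable resource. All class and URI names here are hypothetical — the presentation did not prescribe this design.

```python
class ResourceServer:
    """Information-resource tier: a uniform GET/PUT interface keyed by URI."""

    def __init__(self):
        self._store = {}

    def get(self, uri):
        # Uniform interface: every resource is retrieved the same way.
        return self._store.get(uri)

    def put(self, uri, representation):
        self._store[uri] = representation
        return representation


class Agent:
    """Middle tier: composes resources and carries out a transaction (ACT)."""

    def __init__(self, server):
        self.server = server

    def compose_order(self, customer_uri, product_uri):
        # Compose two independent resources into one business transaction.
        customer = self.server.get(customer_uri)
        product = self.server.get(product_uri)
        order = {
            "customer": customer["name"],
            "item": product["name"],
            "total": product["price"],
        }
        # The transaction itself becomes a new addressable resource.
        return self.server.put("/orders/1", order)


if __name__ == "__main__":
    server = ResourceServer()
    server.put("/customers/42", {"name": "Ada"})
    server.put("/products/7", {"name": "Widget", "price": 9.99})
    agent = Agent(server)
    print(agent.compose_order("/customers/42", "/products/7"))
```

The point of the pattern is that the client tier never orchestrates resources itself; composition and transactional logic live in the agent tier, while the resource tier stays simple and uniformly addressable.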

In the evening we had a CloudCamp, hosted by The Open Group and conducted as a separate event by the CloudCamp organization. The original CloudCamp concept was of an “unconference” where early adopters of Cloud Computing technologies exchange ideas. Its founder, Dave Nielsen, is now planning to set up a demo center where those adopters can experiment with setting up private clouds. This transition from idea to experiment reflects the changing status of mainstream cloud adoption.

The public conference streams were followed by a meeting of the Open Group Cloud Computing Work Group. This is currently pursuing nine separate projects to develop standards and guidance for architects using cloud computing. The meeting in San Francisco focused on one of these – the Cloud Computing Reference Architecture. It compared submissions from five companies, also taking into account ongoing work at the U.S. National Institute of Standards and Technology (NIST), with the aim of creating a base from which to create an Open Group reference architecture for Cloud Computing. This gave a productive finish to a busy week of information gathering and discussion.

Ralph Hitz of Visana, a health insurance company based in Switzerland, made an interesting comment on our reference architecture discussion. He remarked that we were not seeking to change or evolve the NIST service and deployment models. This may seem boring, but it is true, and it is right. Cloud Computing is now where the automobile was in 1920. We are pretty much agreed that it will have four wheels and be powered by gasoline. The business and economic impact is yet to come.

So now I’m on my way to the airport for the flight home. I checked in online, and my boarding pass is on my cellphone. Big companies, as well as small ones, now routinely use mobile technology, and my airline has a frequent-flyer app. It’s just a shame that they can’t manage a decent cup of coffee.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF practitioner.


Filed under Cloud, Cloud/SOA, Conference, Enterprise Architecture, Enterprise Transformation, Service Oriented Architecture, Standards

Setting Expectations and Working within Existing Structures the Dominant Themes for Day 3 of San Francisco Conference

By The Open Group Conference Team

Yesterday concluded The Open Group Conference San Francisco. Key themes that stood out on Day 3, as well as throughout the conference, included the need for a better understanding of business expectations and existing structures.

Jason Bloomberg, president of ZapThink, began his presentation by using an illustration of a plate of spaghetti and drawing an analogy to Cloud Computing. He compared spaghetti to legacy applications and displayed the way that enterprises are currently moving to the Cloud – by taking the plate of spaghetti and physically putting it in the Cloud.

A lot of companies that have adopted Cloud Computing have done so without a comprehensive understanding of their current organization and enterprise assets, according to Mr. Bloomberg. A legacy application that is not engineered to operate in the Cloud will not yield the hyped benefits of elasticity and infinite scalability. And Cloud adoption without well thought-out objectives will never reach the vague goals of “better ROI” or “reduced costs.”

Mr. Bloomberg urged the audience to start with the business problem in order to understand what the right adoption will be for your enterprise. He argued that it’s crucial to think about the question “What does your application require?” Do you require scalability? Elasticity? A private, public or hybrid Cloud? Without knowing a business’s expected outcomes, enterprise architects will be hard-pressed to help them achieve their goals.

Understand your environment

Chris Lockhart, consultant at Working Title Management & Technology Consultants, shared his experiences helping a Fortune 25 company with an outdated technology model support Cloud-centric services. Lockhart noted that for many large companies, Cloud has been the fix-it solution for poorly architected enterprises. But oftentimes, after the business tells architects to build a model for Cloud adoption, the plan presented and the business expectations do not align.

After working on this project Mr. Lockhart learned that the greatest problem for architects is “people with unset and unmanaged expectations.” After the Enterprise Architecture team realized that they had limited power with their recommendations and strategic roadmaps, they acted as negotiators, often facilitating communication between different departments within the business. This is where architects began to display their true value to the organization, illustrated by the following statement made by a business executive within the organization: “Architects are seen as being balanced and rounded individuals who combine a creative approach with a caring, thoughtful disposition.”

The key takeaways from Mr. Lockhart’s experience were:

  • Recognize the limitations
  • Use the same language
  • Work within existing structures
  • Frameworks and models are important to a certain extent
  • Don’t talk products
  • Leave architectural purity in the ivory tower
  • Don’t dictate – low threat level works better
  • Recognize that EA doesn’t know everything
  • Most of the work was dealing with people, not technology

Understand your Cloud Perspective

Steve Bennett, senior enterprise architect at Oracle, discussed the best way to approach Cloud Computing in his session, entitled “A Pragmatic Approach to Cloud Computing.” While architects understand and create value-driven approaches, most customers simply don’t think this way, Mr. Bennett said. Often the business side of the enterprise hears about the revolutionary benefits of the Cloud, but they usually don’t take a pragmatic approach to implementing it.

Mr. Bennett went on to compare two types of Cloud adopters – the “Dilberts” and the “Neos” (from The Matrix). Dilberts often pursue monetary savings when moving to the Cloud and are late adopters, while Neos pursue business agility and can be described as early adopters, again highlighting the importance of understanding who is driving the implementation before architecting a plan.


Filed under Cloud, Cloud/SOA, Cybersecurity, Enterprise Architecture, Enterprise Transformation

Cloud Interoperability and Portability Project Findings to be Showcased in San Francisco

By Mark Skilton, Capgemini

Over the past year, The Open Group has been conducting a project to assess the current state of interoperability and portability in Cloud Computing. The findings from this work will be presented at The Open Group San Francisco Conference on Wednesday, February 1 by Mark Skilton (Capgemini), Kapil Bakshi (Cisco) and Chris Harding (The Open Group) – co-chairs and members of the project team.

The work has surveyed the current range of international standards development impacting interoperability. The project then developed a set of proposed architectural reference models targeting data, application, platform, infrastructure and environment portability and interoperability for Cloud ecosystems and connectivity to non-Cloud environments.

The Open Group plans to showcase the current findings and proposed areas of development using the organization’s own international architecture standards models, and is also exploring the possibility of promoting work in this area with other leading standards bodies.

If you’re interested in learning more about this project and if you’re at the San Francisco Conference, please come to the session, “The Benefits, Challenges and Survey of Cloud Computing Interoperability and Portability” on Wednesday, February 1 at 4:00 p.m.

Mark Skilton is Global Director for Capgemini, Strategy CTO Group, Global Infrastructure Services. His role includes strategy development, competitive technology planning including Cloud Computing and on-demand services, global delivery readiness and creation of Centers of Excellence. He is currently author of the Capgemini University Cloud Computing Course and is responsible for Group Interoperability strategy.


Filed under Cloud, Semantic Interoperability, Standards