Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when the operational data shows it is needed, rather than on a fixed schedule that must be frequent enough to prevent damage under most conditions yet may still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group’s initiatives, Quantum Lifecycle Management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the Big Data processing needed to achieve this aim.
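To make the idea concrete, here is a minimal, purely illustrative sketch of condition-based maintenance logic. The sensor names, window size, and thresholds are invented for the example and are not taken from the QLM or UDEF work:

```python
# Illustrative sketch only: hypothetical sensor fields and thresholds,
# not part of the QLM or UDEF specifications.
from statistics import mean

def needs_overhaul(readings, vibration_limit=0.8, egt_margin_limit=15.0):
    """Flag an engine for overhaul from recent sensor readings.

    readings: list of dicts with 'vibration' (arbitrary units) and
    'egt_margin' (degrees C of exhaust-gas-temperature margin remaining).
    """
    recent = readings[-50:]                       # look at the latest window
    avg_vibration = mean(r["vibration"] for r in recent)
    avg_egt_margin = mean(r["egt_margin"] for r in recent)
    # Overhaul when vibration trends high or the temperature margin erodes,
    # rather than waiting for a fixed calendar interval.
    return avg_vibration > vibration_limit or avg_egt_margin < egt_margin_limit

sample = [{"vibration": 0.6 + i * 0.01, "egt_margin": 40.0 - i * 0.6}
          for i in range(60)]
print(needs_overhaul(sample))   # True once the margins have eroded
```

In a real deployment this decision would sit on top of the semantic lifecycle data that QLM and the UDEF are designed to manage, rather than on a hand-built list of readings.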

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is one of the areas that I support as forum director.) Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.
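As a rough sketch of the map/reduce pattern underlying such a pipeline, the fragment below counts how often hypothetical gene variants co-occur with a cancer diagnosis. The record layout is invented for the example; a real system would run the same logic at scale on Hadoop, with results landing in a NoSQL store alongside the traditional RDBMS data:

```python
# Minimal, self-contained illustration of the map/reduce pattern only.
# Field names and records are hypothetical, not from the Oracle reference
# architecture presented at the conference.
from collections import defaultdict

records = [
    {"variant": "BRCA1_c.68_69del", "diagnosis": "cancer"},
    {"variant": "BRCA1_c.68_69del", "diagnosis": "healthy"},
    {"variant": "TP53_R175H",       "diagnosis": "cancer"},
    {"variant": "BRCA1_c.68_69del", "diagnosis": "cancer"},
]

def map_phase(record):
    # Emit (variant, diagnosis) pairs, one per patient record.
    yield record["variant"], record["diagnosis"]

def reduce_phase(variant, diagnoses):
    # Count how often each variant co-occurs with a cancer diagnosis.
    cases = sum(1 for d in diagnoses if d == "cancer")
    return variant, cases, len(diagnoses)

grouped = defaultdict(list)
for rec in records:
    for key, value in map_phase(rec):
        grouped[key].append(value)

for variant, diagnoses in grouped.items():
    print(reduce_phase(variant, diagnoses))
```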

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever-changing tools to handle Big Data of ever-increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary, in which Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.
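To give a flavour of what “existing techniques used in new ways” means in practice, securing a Hadoop map/reduce cluster with Kerberos is largely a configuration exercise rather than new code. The sketch below lists, as a Python dictionary for readability, the kind of properties an administrator would place in Hadoop’s core-site.xml and hdfs-site.xml; treat the exact names and values as indicative and check them against the security documentation for the Hadoop version in use:

```python
# Illustrative only: the sort of settings that turn on Kerberos for
# inter-node communication in a Hadoop cluster. Verify property names
# against your Hadoop version's security documentation.
kerberos_settings = {
    "hadoop.security.authentication": "kerberos",  # replace default "simple" auth
    "hadoop.security.authorization": "true",       # enforce service-level ACLs
    "hadoop.rpc.protection": "privacy",            # authenticate and encrypt RPC
    "dfs.block.access.token.enable": "true",       # Kerberos-backed block access tokens
}

# Print the settings in the XML form they would take in core-site.xml / hdfs-site.xml.
for name, value in kerberos_settings.items():
    print(f"<property><name>{name}</name><value>{value}</value></property>")
```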

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer, for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it has major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route, and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening; there is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishingboat-bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

SOA Provides Needed Support for Enterprise Architecture in Cloud, Mobile, Big Data, Says Open Group Panel

By Dana Gardner, BriefingsDirect

There’s been a resurgent role for service-oriented architecture (SOA) as a practical and relevant ingredient for effective design and use of Cloud, mobile, and big data technologies.

To find out why, The Open Group recently gathered an international panel of experts to explore the concept of “architecture is destiny,” especially when it comes to hybrid services delivery and management. The panel shows how SOA is proving instrumental in enabling the advances needed to manage highly distributed services and data, particularly when it comes to scale, heterogeneity support, and governance.

The panel consists of Chris Harding, Director of Interoperability at The Open Group, based in the UK; Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture Projects within The Open Group, based in Michigan; and Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group, based in Sweden. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

The full podcast can be found here.

Here are some excerpts:

Gardner: Why this resurgence in the interest around SOA?

Harding: My role in The Open Group is to support the work of our members on SOA, Cloud computing, and other topics. We formed the SOA Work Group back in 2005, when SOA was a real emerging hot topic, and we set up a number of activities and projects. They’re all completed.

I was thinking that the SOA Work Group would wind down, move into maintenance mode, and meet once every few months or so, but we still get a fair attendance at our regular web meetings.

In fact, we’ve started two new projects and we’re about to start a third one. So, it’s very clear that there is still an interest, and indeed a renewed interest, in SOA from the IT community within The Open Group.

Larger trends

Gardner: Nikhil, do you believe that this has to do with some of the larger trends we’re seeing in the field, like Cloud Software as a Service (SaaS)? What’s driving this renewal?

Kumar: What I see driving it is three things. One is the advent of the Cloud and mobile, which requires a lot of cross-platform delivery of consistent services. The second is emerging technologies – mobile, big data – and the need to be able to look at data across multiple contexts.

The third thing that’s driving it is legacy modernization. A lot of organizations are now a lot more comfortable with SOA concepts. I see it in a number of our customers. I’ve just been running a large Enterprise Architecture initiative in a Fortune 500 customer.

At each stage, and at almost every point in that, they’re now comfortable. They feel that SOA can provide the ability to rationalize multiple platforms. They’re restructuring organizational structures, delivery organizations, as well as targeting their goals around a service-based platform capability.

So legacy modernization is a back-to-the-future kind of thing that has come back and is getting adoption. The way it’s being implemented is using RESTful services, as well as SOAP services, which is different from traditional SOA, say from the last version, which was mostly SOAP-driven.

Gardner: Mats, do you think that what’s happened is that the marketplace and the requirements have changed and that’s made SOA more relevant? Or has SOA changed to better fit the market? Or perhaps some combination?

Gejnevall: I think that the Cloud is really a service delivery platform. Companies discover that to be able to use the Cloud services, the SaaS things, they need to look at SOA as their internal development way of doing things as well. They understand they need to do the architecture internally, and if they’re going to use lots of external Cloud services, you might as well use SOA to do that.

Also, if you look at the Cloud suppliers, they also need to do their architecture in some way and SOA probably is a good vehicle for them. They can use that paradigm and also deliver what the customer wants in a well-designed SOA environment.

Gardner: Let’s drill down on the requirements around the Cloud and some of the key components of SOA. We’re certainly seeing, as you mentioned, the need for cross support for legacy, Cloud types of services, and using a variety of protocols, transports, and integration types. We already heard about REST for lightweight approaches and, of course, there will still be the need for object brokering and some of the more traditional enterprise integration approaches.

This really does sound like the job for an Enterprise Service Bus (ESB). So let’s go around the panel and look at this notion of an ESB. Some people, a few years back, didn’t think it was necessary or a requirement for SOA, but it certainly sounds like it’s the right type of functionality for the job.

Loosely coupled

Harding: I believe so, but maybe we ought to consider that in the Cloud context, you’re not just talking about within a single enterprise. You’re talking about a much more loosely coupled, distributed environment, and the ESB concept needs to take account of that in the Cloud context.

Gardner: Nikhil, any thoughts about how to manage this integration requirement around the modern SOA environment and whether ESBs are more or less relevant as a result?

Kumar: In the context of a Cloud we really see SOA and the concept of service contracts coming to the fore. In that scenario, ESBs play a role as a broker within the enterprise. When we talk about the interaction across Cloud-service providers and Cloud consumers, what we’re seeing is that the service provider has its own concept of an ESB within its own internal context.

If you want your Cloud services to be really reusable, the concept of the ESB then becomes more for the routing and the mediation of those services, once they’re provided to the consumer. There’s a kind of separation of concerns between the concept of a traditional ESB and a Cloud ESB, if you want to call it that.

The Cloud context involves more of the need to be able to support, enforce, and apply governance concepts and audit concepts, the capabilities to ensure that the interaction meets quality of service guarantees. That’s a little different from the concept that drove traditional ESBs.

That’s why you’re seeing API management platforms like Layer 7, Mashery, or Apigee and other similar product lines. They’re also coming into the picture, driven by the need to be able to support the way Cloud providers are provisioning their services. As Chris put it, you’re looking beyond the enterprise. Who owns it? That’s where the role of the ESB is different from the traditional concept.

Most Cloud platforms have cost factors associated with locality. If you have truly global enterprises and services, you need to factor in the ability to deal with safe harbor issues, and you need to factor in variations in law in terms of security governance.

The platforms that are evolving are starting to provide this out of the box. The service consumer or service provider needs to be able to support those. That’s going to become the role of their ESB in the future: to consume a service, to assert this quality-of-service guarantee, and to manage constraints on data-in-flight and data-at-rest.
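As an editorial aside, the mediation role Nikhil describes can be pictured with a deliberately tiny, hypothetical sketch: route a request to a registered service, assert a response-time guarantee, and keep an audit trail for governance. The service names and limits below are invented for illustration:

```python
# Hypothetical sketch of ESB-style mediation: routing, a quality-of-service
# check, and an audit trail. Not a real product or protocol implementation.
import time

class MiniServiceBus:
    def __init__(self):
        self.routes = {}        # service name -> (callable, max_seconds)
        self.audit_log = []     # governance: record every mediated call

    def register(self, name, handler, max_seconds):
        self.routes[name] = (handler, max_seconds)

    def invoke(self, name, payload):
        handler, max_seconds = self.routes[name]      # routing
        start = time.time()
        result = handler(payload)
        elapsed = time.time() - start
        self.audit_log.append((name, elapsed))        # audit trail
        if elapsed > max_seconds:                     # quality-of-service check
            raise RuntimeError(f"{name} breached its {max_seconds}s guarantee")
        return result

bus = MiniServiceBus()
bus.register("quote-service", lambda p: {"premium": p["risk"] * 100}, max_seconds=1.0)
print(bus.invoke("quote-service", {"risk": 2.5}))
print(bus.audit_log)
```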

Gardner: Mats, are there other aspects of the concept of ESB that are now relevant to the Cloud?

Entire stack

Gejnevall: One of the reasons SOA didn’t really take off in many organizations three, four, or five years ago was the need to buy the entire stack of SOA products that the consultancies were asking companies to buy – an ESB, governance tools, business process management tools – a set of quite large investments just to get your foot in the door of doing SOA.

These days you can buy that kind of stuff. You can buy the entire stack in the Cloud and start playing with it. I did some searches on it today and found a company where you can play with the entire stack, including business tools and everything like that, for zero dollars. Then you can grow and use more and more of it in your business, and you can start to see whether this is something for you.

In the past, the suppliers or the consultants told you that you could do it. You couldn’t really try it out yourself. You needed both the software and the hardware in place. The money to get started is much lower today. That’s another reason people might be thinking about it these days.

Gardner: It sounds as if there’s a new type of on-ramp to SOA values, and the componentry that supports SOA is now being delivered as a service. On top of that, you’re also able to consume it in a pay-as-you-go manner.

Harding: That’s a very good point, but there are two contradictory trends we are seeing here. One is the kind of trend that Mats is describing, where the technology you need to handle a complex stack is becoming readily available in the Cloud.

And the other is the trend that Nikhil mentioned: to go for a simpler style, which a lot of people term REST, for accessing services. It will be interesting to see how those two tendencies play out against each other.

Kumar: I’d like to make a comment on that. The approach for the on-ramp is really one of the key differentiators of the Cloud, because you have the agility and the lack of capital investment (CAPEX) required to test things out.

But as we are evolving with Cloud platforms, I’m also seeing, in a lot of Platform-as-a-Service (PaaS) vendor scenarios, that they’re building the ESB into the stack itself. They’re providing it in their Cloud fabric. A couple of large players have already done that.

For example, Azure provides that in the forward-looking vision. I am sure IBM and Oracle have already started down that path. A lot of the players are going to provide it as a core capability.

Pre-integrated environment

Gejnevall: Another interesting thing is that they could get a whole environment that’s pre-integrated. Usually, when you buy these things from a vendor, a lot of times they don’t fit together that well. Now, there’s an effort to make them work together.

Some people have also put these open-source tools together themselves and put them out on the Cloud, which gives them a pretty cheap platform of their own. Then they can sell it at a reasonable price, because of the integration of all these things.

Gardner: The Cloud model may be evolving toward an all-inclusive offering. But SOA, by its definition, advances interoperability, to plug and play across existing, current, and future sets of service possibilities. Are we talking about SOA being an important element of keeping Clouds dynamic and flexible — even open?

Kumar: We can think about the OSI 7 Layer Model. We’re evolving in terms of complexity, right? So from an interoperability perspective, we may talk SOAP or REST, for example, but the interaction with AWS, Salesforce, SmartCloud, or Azure would involve using APIs that each of these platforms provides for interaction.

Lock-in

So you could have an AMI, which is an image on the Amazon Web Services environment, for example, and that could support a LAMP stack or some other open source stack. How you interact with it, how you monitor it, how you cluster it – all of those aspects now start factoring in specific APIs, and so that’s the lock-in.

From an architect’s perspective, I look at it as we need to support proper separation of concerns, and that’s part of [The Open Group] SOA Reference Architecture. That’s what we tried to do, to be able to support implementation architectures that support that separation of concerns.

There’s another factor that we need to understand from the context of the Cloud, especially for mid-to-large sized organizations, and that is that the Cloud service providers, especially the large ones — Amazon, Microsoft, IBM — encapsulate infrastructure.

If you were to go to Amazon, Microsoft, or IBM and use their IaaS networking capabilities, you’d have one of the largest WAN networks in the world, and you wouldn’t have to pay a dime to establish that infrastructure. Not in terms of the cost of the infrastructure, not in terms of the capabilities required, nothing. So that’s an advantage that the Cloud is bringing, which I think is going to be very compelling.

The other thing is that, from an SOA context, you’re now able to look at it and say, “Well, I’m dealing with the Cloud, and what all these providers are doing is making it seamless, whether you’re dealing with the Cloud or on-premise.” That’s an important concept.

Now, each of these providers, and different aspects of their stacks, are at significantly different levels of maturity. Many of these providers may find that parts of their own stacks do not even interoperate with each other, just because they’re using different run times, different implementations, etc. That’s another factor to take in.

From an SOA perspective, the Cloud has become very compelling, because I’m dealing, let’s say, with Salesforce.com and I want to use that same service within the enterprise – say, an insurance capability alongside Microsoft Dynamics or SugarCRM. If that capability is exposed as one source of truth in the enterprise, you’ve now reduced the complexity and have the ability to adopt different Cloud platforms.

What we are going to start seeing is that the Cloud is going to shift from being just one à-la-carte solution for everybody. It’s going to become something similar to what we used to deal with in the enterprise context. You had multiple applications, which you service-enabled to reduce complexity and provide one service-based capability, instead of an application-centered approach.

You’re now going to move the context to the Cloud, to your multiple Cloud solutions, and maybe many implementations in a nontrivial environment for the same business capability, but they are now exposed to services in the enterprise SOA. You could have Salesforce. You could have Amazon. You could have an IBM implementation. And you could pick and choose the source of truth and share it.

So a lot of the core SOA concepts will still apply and are still applying.
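As an editorial illustration of that separation of concerns (a sketch, not an excerpt from the SOA Reference Architecture), the fragment below keeps provider-specific calls behind a neutral service contract, so that picking or changing the source of truth touches only an adapter. The provider classes are invented stand-ins, not real SDK calls:

```python
# Hypothetical sketch: a provider-neutral contract with per-provider adapters,
# so that swapping Cloud providers touches only the adapter layer.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral contract the rest of the enterprise codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class FakeAmazonStore(ObjectStore):
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data      # a real adapter would call the AWS SDK here
    def get(self, key):
        return self._objects[key]

class FakeAzureStore(ObjectStore):
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data        # a real adapter would call the Azure SDK here
    def get(self, key):
        return self._blobs[key]

def archive_policy_document(store: ObjectStore, policy_id: str, body: bytes):
    # Business logic sees only the contract, never the provider-specific API.
    store.put(f"policies/{policy_id}", body)

for store in (FakeAmazonStore(), FakeAzureStore()):
    archive_policy_document(store, "P-1001", b"...")
    print(type(store).__name__, store.get("policies/P-1001"))
```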

Another on-ramp

Gardner: Perhaps yet another on-ramp to the use of SOA is the app store, which allows for discovery and socialization of services, but at the same time provides governance and control?

Kumar: We’re seeing that with a lot of our customers. Typically, the vendors who provide PaaS solutions associate app store models with their platforms as a mechanism to gain market share.

The issue that you run into with that is, it’s okay if it’s on your cellphone or on your iPad, your tablet PC, or whatever, but once you start having managed apps, for example Salesforce, or applications being deployed in an Azure or a SmartCloud context, you have a high-risk scenario. You don’t know how well architected that application is. It’s just like going and buying an enterprise application.

When you deploy it in the Cloud, you really need to understand that particular PaaS platform in order to understand the implications in terms of dependencies and cross-dependencies across the apps you have installed. These have real practical implications in terms of maintainability and performance. We’ve seen that with at least two platforms in the last six months.

Governance becomes extremely important. Because of the low CAPEX implications to the business, the business is very comfortable with going and buying these applications and saying, “We can install X, Y, or Z and it will cost us two months and a few million dollars and we are all set.” Or maybe it’s a few hundred thousand dollars.

They don’t realize the implications in terms of interoperability, performance, and standard architectural quality attributes that can occur. There is a governance aspect from the context of the Cloud provisioning of these applications.

There is another aspect to it, which is governance in terms of the run-time – more classic SOA governance – to measure, assert, and view the cost of these applications in terms of performance, their use of your infrastructural resources, and your security constraints. Also, are there scenarios where the application itself has a dependency on a daisy chain of multiple external applications, making it necessary to trace the data?

In terms of the context of app stores, they’re almost like SaaS with a particular platform in mind. They provide the buyer with certain commitments from the platform manager or the platform provider, such as security. When you buy an app from Apple, there is at least a reputational expectation of security from the vendor.

What you do not always know is if that security is really being provided. There’s a risk there for organizations who are exposing mission-critical data to that.

The second thing is that there is still very much a place for the classic SOA registries and repositories in the Cloud – only now they serve a different purpose. Those registries and repositories are used either by service providers or by consumers to maintain the list of services they’re using internally.

Different paradigms

There are two different paradigms. The app store is a place where I can go knowing that the gas I am going to get is 85 percent ethanol, whereas I also have to maintain some basic set of goods at home to make sure that I have my dinner on time. These are different kinds of roles, and different kinds of purposes they’re serving.

Above all, I think the thing that’s going to become more and more important in the context of the Cloud is that the functionality will be provided by the Cloud platform or the app you buy, but the governance will be a major IT responsibility, right from the time of picking the app, to the time of delivering it, to the time of monitoring it.

Gardner: How is The Open Group allowing architects to better exercise SOA principles, as they’re grappling with some of these issues around governance, hybrid services delivery and management, and the use and demand in their organizations to start consuming more Cloud services?

Harding: The architect’s primary concern, of course, has to be to meet the needs of the client and to do so in a way that is most effective and cost-effective. Cloud gives the architect the ability to go out and get different components much more easily than hitherto.

There is a problem, of course, with integrating them and putting them together. SOA can provide part of the solution to that problem, in that it gives a principle of loosely coupled services. If you didn’t have that when you were trying to integrate different functionality from different places, you would be in a real mess.

What The Open Group contributes is a set of artifacts that enable the architect to think through how to meet the client’s needs in the best way when working with SOA and Cloud.

For example, the SOA Reference Architecture helps the architect understand what components might be brought into the solution. We have the SOA TOGAF Practical Guide, which helps the architect understand how to use TOGAF® in the SOA context.

We’re working further on artifacts in the Cloud space: the Cloud Computing Reference Architecture, a notational language for enabling people to describe Cloud ecosystems, and recommendations for Cloud interoperability and portability. We’re also working on recommendations for Cloud governance to complement the recommendations for SOA governance – the SOA Governance Framework standard that we have already produced – and a number of other artifacts.

The Open Group’s real role is to support the architect and help the architect to better meet the needs of the architect’s client.

From the very early days, SOA was seen as bringing a closer connection between the business and technology. A lot of the promises that were made about SOA seven or eight years ago are only now becoming possible to fulfill, and that business front is what our new project on SOA for business technology is looking at.

We’re also producing an update to the SOA Reference Architecture. We have input the SOA Reference Architecture for consideration by the ISO Group that is looking at an International Standard Reference Architecture for SOA, and also to the IEEE Group that is looking at an IEEE Standard Reference Architecture.

We hope that both of those groups will want to work along the principles of our SOA Reference Architecture and we intend to produce a new version that incorporates the kind of ideas that they want to bring into the picture.

We’re also thinking of setting up an SOA project to look specifically at assistance to architects building SOA into enterprise solutions.

So those are three new initiatives that should result in new Open Group standards and guides to complement, as I have described already, the SOA Reference Architecture, the SOA Governance Framework, the Practical Guides to using TOGAF for SOA.

We also have the Service Integration Maturity Model, which can be used to assess SOA maturity. We have a standard on service orientation applied to Cloud infrastructure, and we have a formal SOA Ontology.

Those are the things The Open Group has in place at present to assist the architect, and we are and will be working on three new things: version 2 of the Reference Architecture for SOA, SOA for business technology, and I believe shortly we’ll start on assistance to architects in developing SOA solutions.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.

The Open Group SOA Governance Framework Becomes an International Standard

By Heather Kreger, CTO International Standards, IBM and Chris Harding, Director for Interoperability, The Open Group

The Open Group SOA Governance Framework is now an International Standard, having passed its six-month ratification vote in ISO and IEC.

According to Gartner, effective governance is a key success factor for Service-Oriented Architecture (SOA) solutions today and in the future. This endorsement of The Open Group standard by ISO is exciting, because it means that this vendor-neutral, proven SOA governance standard is now available to governments and enterprises world-wide.

Published by The Open Group in 2009, the SOA Governance Framework enables organizations—public, private, large and small—to develop their own robust governance regimens, rapidly and using industry best practices. This substantially reduces the cost and risk of using SOA. As an international standard, the framework will now provide authoritative guidelines for companies across the globe to implement sound SOA governance practices.

The framework includes a standard governance reference model and a mechanism for enterprises to customize and implement the compliance, dispensation and communication processes that are appropriate for them. Long term vitality is an essential part of the framework, and it gives guidance on evolving these processes over time in the light of changing business and technical circumstances, ensuring the on-going alignment of business and IT.

This is The Open Group’s second international standard on SOA, the first being the Open Services Integration Maturity Model (OSIMM), which passed ISO ratification in January 2012. Since then, we have seen OSIMM being considered for adoption as a national standard in countries such as China and Korea. We are hoping that the new SOA Governance Framework International Standard will be given the same consideration. The Open Group also contributed its SOA Ontology and SOA Reference Architecture standards to JTC1 and is engaged in the development of international standards on SOA there.

In addition to submitting our SOA standards for international ratification, The Open Group is actively leveraging its SOA standards in its Cloud architecture projects. In particular, the Cloud Governance Project in The Open Group Cloud Computing Work Group is developing a Cloud Governance Framework based on and extending the SOA Governance Framework. This emerging standard will identify cloud specific governance issues and offer guidance and best practices for addressing them.

Finally, The Open Group is engaged in the development of Cloud architecture standards in JTC1, and in particular in the new Collaboration between ISO/IEC JTC1 SC38 and ITU-T’s Cloud groups to create a common Combined Team Cloud Vocabulary and Combined Team Cloud Architecture. All of this is very exciting work, both for the SOA and the Cloud Computing Work Groups. Stay tuned for more developments as these projects progress!

Heather Kreger is IBM’s lead architect for Smarter Planet, Policy, and SOA Standards in the IBM Software Group, with 15 years of standards experience. She has led the development of standards for Cloud, SOA, Web services, Management and Java in numerous standards organizations, including W3C, OASIS, DMTF, and Open Group. Heather is currently co-chair for The Open Group’s SOA Work Group and liaison for the Open Group SOA and Cloud Work Groups to ISO/IEC JTC1 SC7 SOA SG and INCITS DAPS38 (US TAG to ISO/IEC JTC 1 SC38). Heather is also the author of numerous articles and specifications, as well as the book Java and JMX, Building Manageable Systems, and most recently was co-editor of Navigating the SOA Open Standards Landscape Around Architecture.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

Cloud Conference — and Unconference

By Dr. Chris Harding, The Open Group

The Wednesday of The Open Group Conference in San Diego included a formal Cloud Computing conference stream. This was followed in the evening by an unstructured CloudCamp, which made an interesting contrast.

The Cloud Conference Stream

The Cloud conference stream featured presentations on Architecting for Cloud and Cloud Security, and included a panel discussion on the considerations that must be made when choosing a Cloud solution.

In the first session of the morning, we had two presentations on Architecting for Cloud. Both considered TOGAF® as the architectural context. The first, from Stuart Boardman of Getronics, explored the conceptual difference that Cloud makes to enterprise architecture, and the challenge of communicating an architecture vision and discussing the issues with stakeholders in the subsequent TOGAF® phases. The second, from Serge Thorn of Architecting the Enterprise, looked at the considerations in each TOGAF® phase, but in a more specific way. The two presentations showed different approaches to similar subject matter, which proved a very stimulating combination.

This session was followed by a presentation from Steve Else of EA Principals in which he shared several use cases related to Cloud Computing. Using these, he discussed solution architecture considerations, and put forward the lessons learned and some recommendations for more successful planning, decision-making, and execution.

We then had the first of the day’s security-related presentations. It was given by Omkhar Arasaratnam of IBM and Stuart Boardman of Getronics. It summarized the purpose and scope of the Security for the Cloud and SOA project that is being conducted in The Open Group as a joint project of The Open Group’s Cloud Computing Work Group, the SOA Work Group, and the Security Forum. Omkhar and Stuart described the usage scenarios that the project team is studying to guide its thinking, the concepts that it is developing, and the conclusions that it has reached so far.

The first session of the afternoon was started by Ed Harrington, of Architecting the Enterprise, who gave an interesting presentation on current U.S. Federal Government thinking on enterprise architecture, showing clearly the importance of Cloud Computing to U.S. Government plans. The U.S. is a leader in the use of IT for government and administration, so we can expect that its conclusions – that Cloud Computing is already making its way into the government computing fabric, and that enterprise architecture, instantiated as SOA and properly governed, will provide the greatest possibility of success in its implementation – will have a global impact.

We then had a panel session, moderated by Dana Gardner with his usual insight and aplomb, that explored the considerations that must be made when choosing a Cloud solution — custom or shrink-wrapped — and whether different forms of Cloud Computing are appropriate to different industry sectors. The panelists represented different players in the Cloud solutions market – customers, providers, and consultants – so that the topic was covered in depth and from a variety of viewpoints. They were Penelope Gordon of 1Plug Corporation, Mark Skilton of Capgemini, Ed Harrington of Architecting the Enterprise, Tom Plunkett of Oracle, and TJ Virdi of the Boeing Company.

In the final session of the conference stream, we returned to the topic of Cloud Security. Paul Simmonds, a member of the Board of the Jericho Forum®, gave an excellent presentation on de-risking the Cloud through effective risk management, in which he explained the approach that the Jericho Forum has developed. The session was then concluded by Andres Kohn of Proofpoint, who addressed the question of whether data can be more secure in the Cloud, considering public, private and hybrid Cloud environments.

CloudCamp

The CloudCamp was hosted by The Open Group but run as a separate event, facilitated by CloudCamp organizer Dave Nielsen. There were around 150-200 participants, including conference delegates and other people from the San Diego area who happened to be interested in the Cloud.

Dave started by going through his definition of Cloud Computing. Perhaps he should have known better – starting a discussion on terminology and definitions can be a dangerous thing to do with an Open Group audience. He quickly got into a good-natured argument from which he eventually emerged a little bloodied, metaphorically speaking, but unbowed.

We then had eight “lightning talks”. These were five-minute presentations covering a wide range of topics, including how to get started with Cloud (Margaret Dawson, Hubspan), supplier/consumer relationship (Brian Loesgen, Microsoft), Cloud-based geographical mapping (Ming-Hsiang Tsou, San Diego University), a patterns-based approach to Cloud (Ken Klingensmith, IBM), efficient large-scale data processing (Alex Rasmussen, San Diego University), using desktop spare capacity as a Cloud resource (Michael Krumpe, Intelligent Technology Integration), cost-effective large-scale data processing in the Cloud (Patrick Salami, Temboo), and Cloud-based voice and data communication (Chris Matthieu, Tropo).

The participants then split into groups to discuss topics proposed by volunteers. There were eight topics altogether. Some of these were simply explanations of particular products or services offered by the volunteers’ companies. Others related to areas of general interest such as data security and access control, life-changing Cloud applications, and success stories relating to “big data”.

I joined the groups discussing Cloud software development on Amazon Web Services (AWS) and Microsoft Azure. These sessions had excellent information content which would be valuable to anyone wishing to get started in – or already engaged in – software development on these platforms. They also brought out two points of general interest. The first is that the dividing line between IaaS and PaaS can be very thin. AWS and Azure are in theory on opposite sides of this divide; in practice they provide the developer with broadly similar capabilities. The second point is that in practice your preferred programming language and software environment is likely to be the determining factor in your choice of Cloud development platform.

Overall, the CloudCamp was a great opportunity for people to absorb the language and attitudes of the Cloud community, to discuss ideas, and to pick up specific technical knowledge. It gave an extra dimension to the conference, and we hope that this can be repeated at future events by The Open Group.

Cloud and SOA are topics of discussion at The Open Group Conference in San Diego, which is currently underway.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF® practitioner.

The Newest from SOA: The SOA Ontology Technical Standard

By Heather Kreger, IBM

The Open Group just announced the availability of The Open Group SOA Ontology Technical Standard.

Ontology?? Sounds very ‘semantic Web,’ doesn’t it? Just smacks of reasoning engines. What on earth do architects using SOA want with reasoning engines?

Actually, Ontologies are misunderstood — an Ontology is simply the definition of a set of concepts and the relationships between them for a particular domain — in this case, the domain is SOA.

They don’t HAVE to be used for reasoning… or semantic Web. And they are more than a simple glossary which defines terms, because they also define relationships between them — something important for SOA, we thought. It’s also important to note that they are more formal than Reference Models, usually by providing representations in OWL (just in case you want to use popular tools for Ontology and reasoners).

What would an architect do with THIS ontology?

It can be used simply to read and understand the key concepts of SOA, and more importantly, a set of definitions and UNDERSTANDING of key concepts that you can agree to use with others in your company and between organizations. Making sure you are ‘speaking the same language’ is essential for any architect to be able to communicate effectively with IT, business, and marketing professionals within the enterprise as well as with vendors and suppliers outside the enterprise. This common language can help ensure that you can ask the right questions and interpret the answers you get unambiguously.

It can be used as a basis for the models for the SOA solution as well. In fact, this is happening in the SOA repository standard under development in OASIS, S-RAMP, where they have used the SOA Ontology as the foundational business model for registry/repository integration.

The Ontology can also be augmented with additional related domain-specific ontologies; for example, on Governance or Business Process Management… or even in a vertical industry like retail where ARTS is developing service models. In fact, we, the SOA Ontology project, tried to define the minimum, absolutely core concepts needed for SOA and allow other domain experts to define additional details for Policy, Process, Service Contract, etc.

This Ontology was developed to be consistent with existing and developing SOA standards, including OMG’s SoaML and BPMN and those in The Open Group SOA Work Group: the SOA Governance Framework, OSIMM, and the SOA Reference Architecture. It might seem that it would have been good to have developed this standard before now, but the good news is that it is grounded in extensive real-world experience developing, deploying and communicating about SOA solutions over the past five years. The Ontology reflects the lessons learned about what terms NOT to use to avoid confusion, and how best to distinguish among some common and often overused concepts like service composition, process, service contracts, and policy, and their roles in SOA.

Have a look at the new SOA Ontology and see if it can help you in your communications for SOA. It’s available to you free at this link: http://www.opengroup.org/bookstore/catalog/c104.htm

Heather Kreger is IBM’s lead architect for Smarter Planet, Policy, and SOA Standards in the IBM Software Group, with 15 years of standards experience. She has led the development of standards for Cloud, SOA, Web services, Management and Java in numerous standards organizations, including W3C, OASIS, DMTF, and Open Group. Heather is currently co-chair for The Open Group’s SOA Work Group and liaison for the Open Group SOA and Cloud Work Groups to ISO/IEC JTC1 SC7 SOA SG and INCITS DAPS38 (US TAG to ISO/IEC JTC 1 SC38). Heather is also the author of numerous articles and specifications, as well as the book Java and JMX, Building Manageable Systems, and most recently was co-editor of Navigating the SOA Open Standards Landscape Around Architecture.
