The Open Group Boston 2014 to Explore How New IT Trends are Empowering Improvements in Business

By The Open Group

The Open Group Boston 2014 will be held on July 21-22 and will cover the major issues and trends surrounding Boundaryless Information Flow™. Thought-leaders at the event will share their outlook on IT trends, capabilities, best practices and global interoperability, and how this will lead to improvements in responsiveness and efficiency. The event will feature presentations from representatives of prominent organizations on topics including Healthcare, Service-Oriented Architecture, Security, Risk Management and Enterprise Architecture. The Open Group Boston will also explore how cross-organizational collaboration and trends such as big data and cloud computing are helping to make enterprises more effective.

The event will consist of two days of plenaries and interactive sessions that will provide in-depth insight on how new IT trends are leading to improvements in business. Attendees will learn how industry organizations are seeking large-scale transformation and some of the paths they are taking to realize that.

The first day of the event will bring together subject matter experts in the Open Platform 3.0™, Boundaryless Information Flow™ and Enterprise Architecture spaces. The day will feature thought-leaders from organizations including Boston University, Oracle, IBM and Raytheon. One of the keynotes will be given by Marshall Van Alstyne, Professor at Boston University School of Management and Researcher at MIT Center for Digital Business, who will reveal the secrets of internet-driven marketplaces. Other content includes:

• The Open Group Open Platform 3.0™ focuses on new and emerging technology trends converging with each other and leading to new business models and system designs. These trends include mobility, social media, big data analytics, cloud computing and the Internet of Things.
• Cloud security and the key differences in securing cloud computing environments vs. traditional ones as well as the methods for building secure cloud computing architectures
• Big Data as a service framework as well as preparing to deliver on Big Data promises through people, process and technology
• Integrated Data Analytics and using them to improve decision outcomes

The second day of the event will have an emphasis on Healthcare, with keynotes from Joseph Kvedar, MD, Partners HealthCare, Center for Connected Health, and Connect for Health Colorado CTO, Proteus Duxbury. The day will also showcase speakers from Hewlett Packard and Blue Cross Blue Shield, as well as multiple tracks on a wide variety of topics such as Risk and Professional Development, and ArchiMate® tutorials. Key learnings include:

• Improving healthcare’s information flow is a key enabler to improving healthcare outcomes and implementing efficiencies within today’s delivery models
• Identifying the current state of IT standards and future opportunities that cover the healthcare ecosystem
• How ArchiMate® can be used by Enterprise Architects for driving business innovation with tried and true techniques and best practices
• Security and Risk Management evolving as software applications become more accessible through APIs – which can lead to vulnerabilities and the potential need to increase security while still understanding the business value of APIs

Member meetings will also be held on Wednesday and Thursday, July 23-24.

Don’t wait, register now to participate in these conversations and networking opportunities during The Open Group Boston 2014: http://www.opengroup.org/boston2014/registration

Join us on Twitter – #ogchat #ogBOS

The Open Group Open Platform 3.0™ Starts to Take Shape

By Dr. Chris Harding, Director for Interoperability, The Open Group

The Open Group published a White Paper on Open Platform 3.0™ at the start of its conference in Amsterdam in May 2014. This article, based on a presentation given at the conference, explains how the definition of the platform is beginning to emerge.

Introduction

Amsterdam is a beautiful place. Walking along the canals is like moving through a set of picture postcards. But as you look up at the houses beside the canals, and you see the cargo hoists that many of them have, you are reminded that the purpose of the arrangement was not to give pleasure to tourists. Amsterdam is a great trading city, and the canals were built as a very efficient way of moving goods around.

This is also a reminder that the primary purpose of architecture is not to look beautiful, but to deliver business value, though surprisingly, the two often seem to go together quite well.

When those canals were first thought of, it might not have been obvious that this was the right thing to do for Amsterdam. Certainly the right layout for the canal network would not be obvious. The beginning of a project is always a little uncertain, and seeing the idea begin to take shape is exciting. That is where we are with Open Platform 3.0 right now.

We started with the intention to define a platform to enable enterprises to get value from new technologies including cloud computing, social computing, mobile computing, big data, the Internet of Things, and perhaps others. We developed an Open Group business scenario to capture the business requirements. We developed a set of business use-cases to show how people are using and wanting to use those technologies. And that leads to the next step, which is to define the platform. All these new technologies and their applications sound wonderful, but what actually is Open Platform 3.0?

The Third Platform

Looking historically, the first platform was the computer operating system. A vendor-independent operating system interface was defined by the UNIX® standard. The X/Open Company and the Open Software Foundation (OSF), which later combined to form The Open Group, were created because companies everywhere were complaining that they were locked into proprietary operating systems. They wanted applications portability. X/Open specified the UNIX® operating system as a common application environment, and the value that it delivered was to prevent vendor lock-in.

The second platform is the World Wide Web. It is a common services environment, for services used by people browsing web pages or for web services used by programs. The value delivered is universal deployment and access. Any person or company anywhere can create a services-based solution and deploy it on the web, and every person or company throughout the world can access that solution.

Open Platform 3.0 is developing as a common architecture environment. This does not mean it is a replacement for TOGAF®. TOGAF is about how you do architecture and will continue to be used with Open Platform 3.0. Open Platform 3.0 is about what kind of architecture you will create. It will be a common environment in which enterprises can do architecture. The big business benefit that it will deliver is integrated solutions.


Figure 1: The Third Platform

With the second platform, you can develop solutions. Anyone can develop a solution based on services accessible over the World Wide Web. But independently-developed web service solutions will very rarely work together “out of the box”.

There is an increasing need for such solutions to work together. We see this need when looking at The Open Platform 3.0 technologies. People want to use these technologies together. There are solutions that use them, but they have been developed independently of each other and have to be integrated. That is why Open Platform 3.0 has to deliver a way of integrating solutions that have been developed independently.

Common Architecture Environment

The Open Group has recently published its first thoughts on Open Platform 3.0 in the Open Platform 3.0 White Paper. This lists a number of things that will eventually be in the Open Platform 3.0 standard. Many of these are common architecture artifacts that can be used in solution development. They will form a common architecture environment. They are:

  • Statement of need, objectives, and principles – this is not part of that environment of course; it says why we are creating it.
  • Definitions of key terms – clearly you must share an understanding of the key terms if you are going to develop common solutions or integrable solutions.
  • Stakeholders and their concerns – an understanding of these is an important aspect of an architecture development, and something that we need in the standard.
  • Capabilities map – this shows what the products and services that are in the platform do.
  • Basic models – these show how the platform components work with each other and with other products and services.
  • Explanation of how the models can be combined to realize solutions – this is an important point and one that the white paper does not yet start to address.
  • Standards and guidelines that govern how the products and services interoperate – The Open Group is unlikely to produce these standards itself; they will almost certainly be produced by other bodies, but we need to identify the appropriate ones and, in some cases, coordinate with those bodies to see that they are developed.

The Open Platform 3.0 White Paper contains an initial statement of needs, objectives and principles, definitions of some key terms, a first-pass list of stakeholders and their concerns, and half a dozen basic models. The basic models are in an analysis of the business use-cases for Open Platform 3.0 that were developed earlier.

These are just starting points. The white paper is incomplete: each of the sections is incomplete in itself, and of course the white paper does not contain all the sections that will be in the standard. And it is all subject to change.

An Example Basic Model

The figure shows a basic model that could be part of the Open Platform 3.0 common architecture environment.


Figure 2: Mobile Connected Device Model

This is the Mobile Connected Device Model: one of the basic models that we identified in the snapshot. It comes up quite often in the use-cases.

The stack on the left is a mobile device. It has a user, it has apps, it has a platform which would probably be Android or iOS, it has infrastructure that supports the platform, and it is connected to the World Wide Web, because that’s part of the definition of mobile computing.

On the right you see, and this is a frequently encountered pattern, that you don’t just use your mobile device for running apps. Maybe you connect it to a printer, maybe you connect it to your headphones, maybe you connect it to somebody’s payment terminal, you can connect it to many things. You might do this through a Universal Serial Bus (USB). You might do it through Bluetooth. You might do it by Near Field Communications (NFC). You might use other kinds of local connection.

The device you connect to may be operated by yourself (e.g. if it is headphones), or by another organization (e.g. if it is a payment terminal). In the latter case you typically have a business relationship with the operator of the connected device.

That is an example of the basic models that came up in the analysis of the use-cases. It is captured in the White Paper. It is fundamental to mobile computing and is also relevant to the Internet of Things.
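
As a rough illustration of the structure just described (a sketch only: the class and field names below are invented, and the White Paper defines the model as a diagram, not as code), the Mobile Connected Device Model can be written down like this:

```python
from dataclasses import dataclass

# Hypothetical rendering of the Mobile Connected Device Model.
# All names are invented for illustration.

@dataclass
class MobileDevice:
    user: str                   # the person operating the device
    apps: list                  # the installed applications
    platform: str               # e.g. "Android" or "iOS"
    infrastructure: str         # hardware and OS services supporting the platform
    web_connected: bool = True  # connection to the World Wide Web is part of
                                # the definition of mobile computing

@dataclass
class ConnectedDevice:
    kind: str        # e.g. "printer", "headphones", "payment terminal"
    connection: str  # local connection: "USB", "Bluetooth", "NFC", ...
    operator: str    # "self", or another organization, in which case there is
                     # typically a business relationship with that operator

phone = MobileDevice(user="Alice", apps=["wallet"], platform="Android",
                     infrastructure="ARM handset")
terminal = ConnectedDevice(kind="payment terminal", connection="NFC",
                           operator="Acme Retail")
```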

Access to Technologies

This figure captures our understanding of the need to obtain information from the new technologies (social media, mobile devices, sensors and so on), the need to process that information, perhaps on the cloud, to manage it and, ultimately, to deliver it in a form where analysis and reasoning enable enterprises to take business decisions.


Figure 3: Access to Technologies

The delivery of information to improve the quality of decisions is the source of real business value.
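
Read as a processing chain, Figure 3 suggests a simple four-stage pipeline. The sketch below is purely illustrative; every function in it is an invented placeholder, not something defined in the White Paper.

```python
# Illustrative pipeline for Figure 3:
# obtain -> process/manage -> analyze -> deliver.

def obtain(sources):
    """Collect raw events from social media, mobile devices, sensors, etc."""
    return [event for source in sources for event in source.read()]

def process(events):
    """Process and manage the information, perhaps on the cloud."""
    return [e for e in events if e.get("valid")]

def analyze(records):
    """Analysis and reasoning that turns information into decision support."""
    return {"record_count": len(records)}

def deliver(insight):
    """Present the result in a form that supports a business decision."""
    print(f"Decision input: {insight}")

def run(sources):
    deliver(analyze(process(obtain(sources))))
```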

User-Driven IT

The next figure captures a requirement that we picked up in the development of the business scenario.


Figure 4: User-Driven IT

Traditionally, business use of IT sat in the business departments of an enterprise, and pretty much everything else sat in the IT department. But we are seeing two big changes. One is that business users are getting smarter and more able to use technology. The other is that they want to use technology themselves, or to have business technologists working closely with them, rather than accessing it indirectly through the IT department.

Systems provisioning and management are now often done by cloud service providers, and programming, integration and helpdesk by cloud brokers, or by an IT department that plays a broker role, rather than working in the traditional way.

The business still needs to retain responsibility for the overall architecture and for compliance. If you do something against your company’s principles, your customers will hold you responsible. It is no defense to say, “Our broker did it that way.” Similarly, if you break the law, your broker does not go to jail, you do. So those things will continue to be more associated with the business departments, even as the rest is devolved.

In short, businesses have a new way of using IT that Open Platform 3.0 must and will accommodate.

Integration of Independently-Developed Solutions

The next figure illustrates how the integration of independently developed solutions can be achieved.


Figure 5: Architecture Integration

It shows two solutions, which come from the analysis of different business use-cases. They share a common model, which makes it much easier to integrate them. That is why the Open Platform 3.0 standard will define common models for access to the new technologies.

The Open Platform 3.0 standard will have other common artifacts: architectural principles, stakeholder definitions and descriptions, and so on. Independently-developed architectures that use them can be integrated more easily.

Enterprises develop their architectures independently, but engage with other enterprises in business ecosystems that require shared solutions. Increasingly, business relationships are dynamic, and there is no time to develop an agreed ecosystem architecture from scratch. Use of the same architecture platform, with a common architecture environment including elements such as principles, stakeholder concerns, and basic models, enables the enterprise architectures to be integrated, and shared solutions to be developed quickly.
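
A miniature sketch shows why this helps (hypothetical code, not taken from the standard): two independently developed components need only agree on one common data structure, and integration follows without knowledge of each other's internals.

```python
from dataclasses import dataclass

# The hypothetical common model that both solutions agree on.
@dataclass
class SensorReading:
    device_id: str
    timestamp: float
    value: float

# Solution A, developed independently, exports data in the common model.
def solution_a_export():
    return [SensorReading("pump-7", 1719500000.0, 42.0)]

# Solution B, also developed independently, consumes the common model
# without knowing anything about Solution A's internal design.
def solution_b_average(readings):
    return sum(r.value for r in readings) / len(readings)

print(solution_b_average(solution_a_export()))  # 42.0
```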

Completing the Definition

How will we complete the definition of Open Platform 3.0?

The Open Platform 3.0 Forum recently published a set of 22 business use-cases – the Nexus of Forces in Action. These use-cases show the application of Social, Mobile and Cloud Computing, Big Data, and the Internet of Things in a wide variety of business areas.


Figure 6: Business Use-Cases

The figure comes from that White Paper and shows some of those areas: multimedia, social networks, building energy management, smart appliances, financial services, medical research, and so on.

Use-Case Analysis

We have started to analyze those use-cases. This is an ArchiMate model showing how our first business use-case, The Mobile Smart Store, could be realized.


Figure 7: Use-Case Analysis

As you look at it you see common models. Outlined on the left is a basic model that is pretty much the same as the original TOGAF Technical Reference Model. The main difference is the addition of a business layer (which shows how enterprise architecture has moved in the business direction since the TRM was defined).

But you also see that the same model appears in the use-case in a different place, as outlined on the right. It appears many times throughout the business use-cases.

Finally, you can see that the Mobile Connected Device Model has appeared in this use-case (outlined in the center). It appears in other use-cases too.

As we analyze the use-cases, we find common models, as well as common principles, common stakeholders, and other artifacts.

The Development Cycle

We have a development cycle: understanding the value of the platform by considering use-cases, analyzing those use-cases to derive common features, and documenting the common features in a specification.


Figure 8: The Development Cycle

The Open Platform 3.0 White Paper represents the very first pass through that cycle. Further passes will result in further White Papers, a snapshot, and ultimately the Open Platform 3.0 standard, and no doubt more than one version of that standard.

Conclusions

Open Platform 3.0 provides a common architecture environment. This enables enterprises to derive business value from social computing, mobile computing, big data, the Internet-of-Things, and potentially other new technologies.

Cognitive computing, for example, has been suggested as another technology that Open Platform 3.0 might in due course accommodate. What would that lead to? There would be additional use-cases, which would lead to further analysis, which would no doubt identify some basic models for cognitive computing, which would be added to the platform.

Open Platform 3.0 enables enterprise IT to be user-driven. There is a revolution in the way that businesses use IT. Users are becoming smarter and more able to use technology, and want to do so directly, rather than through a separate IT department. Business departments are taking in business technologists who understand how to use technology for business purposes. Some companies are closing their IT departments and using cloud brokers instead. In other companies, the IT department is taking on a broker role, sourcing technology that business people use directly. Open Platform 3.0 will be part of that revolution.

Open Platform 3.0 will deliver the ability to integrate solutions that have been independently developed. Businesses typically exist within one or more business ecosystems. Those ecosystems are dynamic: partners join, partners leave, and businesses cannot standardize the whole architecture across the ecosystem; it would be nice to do so but, by the time it was done, the business opportunity would be gone. Integration of independently developed architectures is crucial to the world of business ecosystems and delivering value within them.

Call for Input

The platform will deliver a common architecture environment, user-driven enterprise IT, and the ability to integrate solutions that have been independently developed. The Open Platform 3.0 Forum is defining it through an iterative process of understanding the content, analyzing the use-cases, and documenting the common features. We welcome input and comments from other individuals within and outside The Open Group and from other industry bodies.

If you have comments on the way Open Platform 3.0 is developing or input on the way it should develop, please tell us! You can do so by sending mail to platform3-input@opengroup.org or share your comments on our blog.

References

The Open Platform 3.0 White Paper: https://www2.opengroup.org/ogsys/catalog/W147

The Nexus of Forces in Action: https://www2.opengroup.org/ogsys/catalog/W145

TOGAF®: http://www.opengroup.org/togaf/

Dr. Chris Harding is Director for Interoperability at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0™ Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.

The Onion & The Open Group Open Platform 3.0™

By Stuart Boardman, Senior Business Consultant, KPN Consulting, and Co-Chair of The Open Group Open Platform 3.0™

The onion is widely used as an analogy for complex systems – from IT systems to mystical world views.

It’s a good analogy. From the outside it’s a solid whole but each layer you peel off reveals a new onion (new information) underneath.

And a slice through the onion looks quite different from the whole…

What (and how much) you see depends on where and how you slice it.

The Open Group Open Platform 3.0™ is like that. Use-cases for Open Platform 3.0 reveal multiple participants and technologies (Cloud Computing, Big Data Analytics, Social networks, Mobility and The Internet of Things) working together to achieve goals that vary by participant. Each participant’s goals represent a different slice through the onion.

The Ecosystem View
We commonly use the idea of peeling off layers to understand large ecosystems, which could be Open Platform 3.0 systems like the energy smart grid but could equally be the workings of a large cooperative or the transport infrastructure of a city. We want to know what is needed to keep the ecosystem healthy and what the effects could be of the actions of individuals on the whole and therefore on each other. So we start from the whole thing and work our way in.

The Service at the Centre of the Onion

If you’re the provider or consumer (or both) of an Open Platform 3.0 service, you’re primarily concerned with your slice of the onion. You want to be able to obtain and/or deliver the expected value from your service(s). You need to know as much as possible about the things that can positively or negatively affect that. So your concern is not the onion (ecosystem) as a whole but your part of it.

Right in the middle is your part of the service. The first level out from that consists of other participants with whom you have a direct relationship (contractual or otherwise). These are the organizations that deliver the services you consume directly to enable your own service.

One level out from that (level 2) are participants with whom you have no direct relationship but on whose services you are still dependent. It’s common in Platform 3.0 that your partners too will consume other services in order to deliver their services (see the use cases we have documented). You need to know as much as possible about this level, because whatever happens here can have a positive or negative effect on you.

One level further from the centre we find indirect participants who don’t necessarily deliver any part of the service but whose actions may well affect the rest. They could just be indirect materials suppliers. They could also be part of a completely different value network in which your level 1 or 2 “partners” participate. You can’t expect to understand this level in detail, but you know that how that value network performs can affect your partners’ strategy or even their very existence. The knock-on impact on your own strategy can be significant.

We can conceive of more levels but pretty soon a law of diminishing returns sets in. At each level further from your own organization you will see less detail and more variety. That in turn means that there will be fewer things you can actually know (with any certainty) and not much more that you can even guess at. That doesn’t mean that the ecosystem ends at this point. Ecosystems are potentially infinite. You just need to decide how deep you can usefully go.
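
For readers who like to make this concrete, the levels can be computed mechanically. In the illustrative sketch below (the ecosystem is invented), a participant’s level is simply its distance from your own service in the dependency graph:

```python
from collections import deque

# Invented ecosystem: which participants' services each participant consumes.
depends_on = {
    "my-service": ["payment-provider", "hosting"],
    "payment-provider": ["fraud-screening"],
    "hosting": ["power-utility"],
    "fraud-screening": ["credit-bureau"],
}

def onion_levels(start):
    """Breadth-first search: level = distance from your own service."""
    levels, queue = {start: 0}, deque([start])
    while queue:
        node = queue.popleft()
        for dep in depends_on.get(node, []):
            if dep not in levels:
                levels[dep] = levels[node] + 1
                queue.append(dep)
    return levels

print(onion_levels("my-service"))
# {'my-service': 0, 'payment-provider': 1, 'hosting': 1,
#  'fraud-screening': 2, 'power-utility': 2, 'credit-bureau': 3}
```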

Limits of the Onion
At a certain point one hits the limits of an analogy. If everybody sees their own organization as the centre of the onion, what we actually have is a bunch of different, overlapping onions.

And you can’t actually make onions overlap, so let’s not take the analogy too literally. Just keep it in mind as we move on. Remember that our objective is to ensure the value of the service we’re delivering or consuming. What we need to know therefore is what can change that’s outside of our own control and what kind of change we might expect. At each visible level of the theoretical onion we will find these sources of variety. How certain of their behaviour we can be will vary – with a tendency to the less certain as we move further from the centre of the onion. We’ll need to decide how, if at all, we want to respond to each kind of variety.

But that will have to wait for my next blog. In the meantime, here are some ways people look at the onion.

Stuart Boardman is a Senior Business Consultant with KPN Consulting, where he leads the Enterprise Architecture practice and consults to clients on Cloud Computing, Enterprise Mobility and The Internet of Everything. He is Co-Chair of The Open Group Open Platform 3.0™ Forum and was Co-Chair of the Cloud Computing Work Group’s Security for the Cloud and SOA project, and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by KPN, the Information Security Platform (PvIB) in The Netherlands, and his previous employer, CGI, as well as several Open Group white papers, guides and standards. He is a frequent speaker at conferences on the topics of Open Platform 3.0 and Identity.

Secure Integration of Convergent Technologies – a Challenge for Open Platform 3.0™

By Dr. Chris Harding, The Open Group

The results of The Open Group Convergent Technologies survey point to secure integration of the technologies as a major challenge for Open Platform 3.0. This and other input forms the basis for the definition of the platform, which was discussed at The Open Group Conference in London.

Survey Highlights

Here are some of the highlights from The Open Group Convergent Technologies survey.

  • 95% of respondents felt that the convergence of technologies such as social media, mobility, cloud, big data, and the Internet of Things represents an opportunity for business.
  • Mobility currently has the greatest take-up of these technologies, and the Internet of Things has the least.
  • 84% of those from companies creating solutions want to deal with two or more of the technologies in combination.
  • Developing potential customers’ understanding of the technologies is the first problem that solution creators must overcome. This is followed by integrating with products, services and solutions from other suppliers, and using more than one technology in combination.
  • Respondents saw security, vendor lock-in, integration and regulatory compliance as the main problems for users of software that enables use of these convergent technologies for business purposes.
  • When users are considered separately from other respondents, security and vendor lock-in show particularly strongly as issues.

The full survey report is available at: https://www2.opengroup.org/ogsys/catalog/R130

Open Platform 3.0

Analysts forecast that convergence of technical phenomena including mobility, cloud, social media, and big data will drive the growth in use of information technology through 2020. Open Platform 3.0 is an initiative that will advance The Open Group vision of Boundaryless Information Flow™ by helping enterprises to use these technologies.

The survey confirms the value of an open platform to protect users of these technologies from vendor lock-in. It also shows that security is a key concern that must be addressed, that the platform must make the technologies easy to use, and that it must enable them to be used in combination.

Understanding the Requirements

The Open Group is conducting other work to develop an understanding of the requirements of Open Platform 3.0. This includes:

  • The Open Platform 3.0 Business Scenario, which was recently published and is available from https://www2.opengroup.org/ogsys/catalog/R130
  • A set of business use cases, currently in development
  • A high-level round-table meeting to gain the perspective of CIOs, who will be key stakeholders.

These requirements inputs were part of the discussion at The Open Group Conference, which took place in London this week. Monday’s keynote presentation by Andy Mulholland, former Global CTO at Capgemini, on “Just Exactly What Is Going on in Business and Technology?” included the conclusions from the round-table meeting. This week’s presentation and panel discussion on the requirements for Open Platform 3.0 covered all the inputs.

Delivering the Platform

Review of the inputs in the conference was followed by a members meeting of the Open Platform 3.0 Forum, to start developing the architecture of Open Platform 3.0, and to plan the delivery of the platform definition. The aim is to have a snapshot of the definition early in 2014, and to deliver the first version of the standard a year later.

Meeting the Challenge

Open Platform 3.0 will be crucial to establishing openness and interoperability in the new generation of information technologies. This is of first importance for everyone in the IT industry.

Following the conference, there will be an opportunity for everyone to input material and ideas for the definition of the platform. If you want to be part of the community that shapes the definition, to work on it with like-minded people in other companies, and to gain early insight of what it will be, then your company must join the Open Platform 3.0 Forum. (For more information on this, contact Chris Parnell – c.parnell@opengroup.org)

Providing for secure integration of the convergent technologies, and meeting the other requirements for Open Platform 3.0, will be a difficult but exciting challenge. I’m looking forward to continuing to tackle the challenge with the Forum members.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.

As Platform 3.0 ripens, expect agile access and distribution of actionable intelligence across enterprises, says The Open Group panel

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here

This latest BriefingsDirect discussion, leading into The Open Group Conference on July 15 in Philadelphia, brings together a panel of experts to explore the business implications of the current shift to so-called Platform 3.0.

Known as the new model through which big data, cloud, and mobile and social — in combination — allow for advanced intelligence and automation in business, Platform 3.0 has so far lacked standards or even clear definitions.

The Open Group and its community are poised to change that, and we’re here now to learn more about how to leverage Platform 3.0 as more than an IT shift — and as a business game-changer. It will be a big topic at next week’s conference.

The panel: Dave Lounsbury, Chief Technical Officer at The Open Group; Chris Harding, Director of Interoperability at The Open Group, and Mark Skilton, Global Director in the Strategy Office at Capgemini. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

This special BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference, which is focused on enterprise transformation in the finance, government, and healthcare sectors. Registration to the conference remains open. Follow the conference on Twitter at #ogPHL. [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: A lot of people are still wrapping their minds around this notion of Platform 3.0, something that is a whole greater than the sum of the parts. Why is this more than an IT conversation or a shift in how things are delivered? Why are the business implications momentous?

Lounsbury: Well, Dana, there are a lot of IT changes or technical changes going on that are bringing together a lot of factors. They’re turning into this sort of super-saturated solution of ideas and possibilities and this emerging idea that this represents a new platform. I think it’s a pretty fundamental change.

If you look at history, not just the history of IT, but all of human history, you see that step changes in societies and organizations are frequently driven by communication or connectedness. Think about the evolution of speech or the invention of the alphabet or movable-type printing. These technical innovations that we’re seeing are bringing together these vast sources of data about the world around us and doing it in real time.

Further, we’re starting to see a lot of rapid evolution in how you turn data into information and presenting the information in a way such that people can make decisions on it. Given all that we’re starting to realize, we’re on the cusp of another step of connectedness and awareness.

Fundamental changes

This really is going to drive some fundamental changes in the way we organize ourselves. Part of what The Open Group is doing in trying to bring Platform 3.0 together is to get ahead of this and make sure that we understand not just what technical standards are needed, but how businesses will need to adapt and evolve, and what business processes they need to put in place, in order to take maximum advantage of this change in the way that we look at information.

Harding: Enterprises have to keep up with the way that things are moving in order to keep their positions in their industries. Enterprises can’t afford to be working with yesterday’s technology. It’s a case of being able to understand the information that they’re presented with, and to make the best decisions.

We’ve always talked about computers being about input, process, and output. Years ago, the input might have been through a teletype, the processing on a computer in the back office, and the output on print-out paper.

Now, we’re talking about the input being through a range of sensors and social media, the processing is done on the cloud, and the output goes to your mobile device, so you have it wherever you are when you need it. Enterprises that stick in the past are probably going to suffer.

Gardner: Mark Skilton, the ability to manage data at greater speed and scale, the whole three Vs — velocity, volume, and value — on its own could perhaps be a game changing shift in the market. The drive of mobile devices into lives of both consumers and workers is also a very big deal.

Of course, cloud has been an ongoing evolution of emphasis towards agility and efficiency in how workloads are supported. But is there something about the combination of how these are coming together at this particular time that, in your opinion, substantiates The Open Group’s emphasis on this as a literal platform shift?

Skilton: It is exactly that in terms of the workloads. The world we’re now into is the multi-workload environment, where you have mobile workloads, storage and compute workloads, and social networking workloads. There are many different types of data and traffic today in different cloud platforms and devices.

It has to do with not just one solution, not one subscription model, because we’re now into this subscription-model era … the subscription economy, as one group tends to describe it. Now, we’re looking at not just providing the security and the infrastructure to deliver this kind of capability to a mobile device, as Chris was saying. The question is, how can you do this horizontally across other platforms? How can you integrate these things? This is something that is critical to the new order.

So Platform 3.0 is addressing this point by bringing all this together. Just look at the numbers. Look at the scale we’re dealing with: 1.7 billion mobile devices sold in 2012, and an estimated 6.8 billion subscriptions according to the International Telecommunication Union (ITU), equivalent to 96 percent of the world population.

Massive growth

We have had massive growth in the scale of mobile data traffic and internet data expansion. Mobile data traffic is increasing 18-fold from 2011 to 2016, reaching 130 exabytes annually. We passed 1 zettabyte of global online data storage back in 2010, and IP data traffic is predicted to pass 1.3 zettabytes by 2016, with internet video accounting for 61 percent of total internet data, according to Cisco studies.

These studies also predict that data center traffic, combining network and internet-based storage, will reach 6.6 zettabytes annually, and that nearly two thirds of this will be cloud-based by 2016. This is only going to grow as social networking reaches nearly one in four people around the world, with 1.7 billion using at least one form of social networking in 2013, rising to one in three people, a global audience of 2.55 billion by 2017, according to another extraordinary figure from an eMarketing.com study.
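
As a quick sanity check on those growth figures (a back-of-the-envelope calculation, not part of the podcast), an 18-fold increase over the five years from 2011 to 2016 implies a compound annual growth rate of roughly 78 percent:

```python
# Implied compound annual growth rate (CAGR) for 18x growth over 5 years.
growth_factor, years = 18, 5
cagr = growth_factor ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")  # Implied annual growth: 78%
```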

It is not surprising that many industry analysts are seeing 30 to 40 percent growth in the technologies of mobility, social computing, big data and cloud convergence, and that the shift to B2C commerce, which passed $1 trillion in 2012, is just the start of a wider digital transformation.

These numbers speak volumes in terms of the integration, interoperability, and connection of the new types of business and social realities that we have today.

Gardner: Why should IT be thinking about this as a fundamental shift, rather than a modest change?

Lounsbury: A lot depends on how you define your IT organization. It’s useful to separate the plumbing from the water. If we think of the water as the information that’s flowing, it’s how we make sure that the water is pure and getting to the places where you need to have the taps, where you need to have the water, etc.

But the plumbing also has to be up to the job. It needs to have the capacity. It needs to have new tools to filter out the impurities from the water. There’s no point giving someone data if it’s not been properly managed or if there’s incorrect information.

What’s going to happen in IT is that not only do we have to focus on the mechanics of the plumbing, where we see things like the big databases emerging from the open-source world and things of that nature, but there are also the analytics and the data stewardship aspects of it.

We need to bring in mechanisms so the data is valid and kept up to date. We need to indicate its freshness to the decision makers. Furthermore, IT is going to be called upon, whether as part of the enterprise IT function or where end users drive the selection, to provide the analytic tools and recommendation tools that take the data and turn it into information. One of the things you can’t do with business decision makers is overwhelm them with big rafts of data and expect them to figure it out.

You really need to present the information in a way that they can use to quickly make business decisions. That is an addition to the role of IT that may not have been there traditionally — how you think about the data and the role of what, in the beginning, was called data scientist and things of that nature.

Shift in constituency

Skilton: I’d just like to add to Dave’s excellent points that the shape of data has changed, and also to address why IT should get involved. We’re seeing that there’s a shift in the constituency of who is using this data.

We have the Chief Marketing Officer and the Chief Procurement Officer and other key line of business managers taking more direct control over the uses of information technology that enable their channels and interactions through mobile, social and data analytics. We’ve got processes that were previously managed just by IT and are now being consumed by significant stakeholders and investors in the organization.

We have to recognize in IT that we are the masters of our own destiny. The information needs to be sorted into new types of mobile devices, new types of data intelligence, and ways of delivering this kind of service.

I recently read an article in MIT Sloan Management Review that asked what the role of the CIO is. There is still the critical role of managing the security, compliance, and performance of these systems. But there is also a socialization of IT, and this is where positioning architectures that are cross-platform is key to delivering real value to the business users in the IT community.

Gardner: How do we prevent this from going off the rails?

Harding: This is a very important point. And to add to the difficulties, it’s not only that a whole set of different people are getting involved with different kinds of information, but there’s also a step change in the speed with which all this is delivered. It’s no longer the case that you can say, “Oh well, we need some kind of information system to manage this information; we’ll procure it and get a program written,” and a year later it would be in place, delivering reports.

Now, people are looking to make sense of this information on the fly if possible. It’s really a case of having both the standard technology platform and the systems for using it, the business processes, understood and in place.

Then you can do all these things quickly, building on what people have learned in the past, and not go off into all sorts of new experimental things that might not lead anywhere. It’s a case of building the standard platform on industry best practice. This is where The Open Group can really help things along, by being a recipient and a reflector of best practice and standards.

Skilton: Capgemini has been doing work in this area. I break it down into four levels of scalability. First is platform scalability: understanding what you can do with your current legacy systems when introducing cloud computing or big data, and the infrastructure that gives you what we call multiplexing of resources. We’re very much seeing this idea of introducing scalable platform resource management, and you see that a lot with the heritage of virtualization.

Second is network scalability. A lot of customers who inherited old telecommunications networks are looking to introduce new MPLS-type scalable networks. The reason for this is that it’s all about connectivity in the field. I meet a number of clients who are saying, “We’ve got this cloud service,” or “This service is in a certain area of my country; if I move to another part of the country, or I’m traveling, I can’t get connectivity.” That’s the big issue of scaling.

Third, there are application programming interfaces (APIs). What we’re seeing now is an explosion of integration and application services using API connectivity, and these are creating huge opportunities of the kind Chris Anderson of Wired used to call the “long tail effect.” It is now a reality in terms of building the kind of social connectivity and data exchange that Dave was talking about.

Finally, there are the marketplaces. Companies need to think about what online marketplaces they need for digital branding, social branding, social networks, and awareness of their customers, suppliers, and employees. These four levels are where customers need to start thinking in their IT strategy, and Platform 3.0 is right on target in trying to work out the strategies for each of these new levels of scalability.

Gardner: We’re coming up on The Open Group Conference in Philadelphia very shortly. What should we expect from that? What is The Open Group doing vis-à-vis Platform 3.0, and how can organizations benefit from seeing a more methodological or standardized approach to some way of rationalizing all of this complexity? [Registration to the conference remains open. Follow the conference on Twitter at #ogPHL.]

Lounsbury: We’re still in the formative stages of the “third platform,” or Platform 3.0, for The Open Group as an industry. To some extent, we’re starting pretty much at the ground floor with that in the Platform 3.0 Forum. We’re leveraging a lot of the components that have been produced previously by the work of the members of The Open Group in cloud, service-oriented architecture (SOA), and some of the work on the Internet of Things.

First step

Our first step is to bring those things together to make sure that we’ve got a foundation to depart from. The next thing is that, through our Platform 3.0 Forum and the Steering Committee, we can ask people to talk about what their scenarios are for adoption of Platform 3.0.

That can range from the technological aspects of it and what standards are needed, but we can also take a cue from our previous Cloud working group: what are the best business practices for understanding and then adopting some of these Platform 3.0 concepts to get your business using them?

What we’re really working toward in Philadelphia is to set up an exchange of ideas among the people who can, from the buy side, bring in their use cases and, from the supply side, bring in their ideas about what the technology possibilities are, and bring those together to start shaping a set of tracks where we can create business and technical artifacts that will help businesses adopt the Platform 3.0 concept.

Harding: We certainly also need to understand the business environment within which Platform 3.0 will be used. We’ve heard already about new players, new roles of various kinds that are appearing, and the fact that the technology is there and the business is adapting to this to use technology in new ways.

For example, we’ve heard about the data scientist. The data scientist is a new kind of role, a new kind of person, that is playing a particular part in all this within enterprises. We’re also hearing about marketplaces for services, new ways in which services are being made available and combined.

We really need to understand the actors in this new kind of business scenario. What are the pain points that people are having? What are the problems that need to be resolved in order to understand what kind of shape the new platform will have? That is one of the key things that the Platform 3.0 Forum members will be getting their teeth into.

Gardner: Looking to the future, we can think about how powerful data becomes when processed properly, with recommendations delivered to the right place at the right time, while also recognizing that there are limits to a manual, or even human-level, approach to that, scientist by scientist, analysis by analysis.

When we think about the implications of automation, there are already some early examples where bringing cloud, data, social, and mobile together, with granularity of interactions, has begun to show how a recommendation engine could be brought to bear. I’m thinking about the Siri capability at Apple and even some of the examples of the Watson technology at IBM.

So to our panel, are there unknown unknowns about where this will lead in terms of having extraordinary intelligence, a supercomputer or a data center of supercomputers, brought to bear on almost any problem instantly, with the result delivered directly to a center, a smartphone, any number of end points?

It seems that the potential here is mind boggling. Mark Skilton, any thoughts?

Skilton: What we’re talking about is the next generation of the Internet. The advent of IPv6 and the explosion in multimedia services will start to drive it.

I think that in the future, we’ll be talking about a multiplicity of information that is not just about services at your location or your personal lifestyle or your working preferences. We’ll see a convergence of information and services across multiple devices and new types of “co-presence services” that interact with your needs and social networks to provide predictive augmented information value.

When you start to get much more information about the context of where you are, the insight into what’s happening, and the predictive nature of these, it becomes something that becomes much more embedding into everyday life and in real time in context of what you are doing.

I expect to see much more intelligent applications coming forward on mobile devices in the next 5 to 10 years, driven by this interconnected explosion of real-time data processing, traffic, devices and social networking that we describe in the scope of Platform 3.0. This will add augmented intelligence; it’s something really exciting and a complete game changer. I would call it the next killer app.

First-mover benefits

Gardner: There’s this notion of intelligence brought to bear rapidly in context, at a manageable cost. This seems to me a big change for businesses. We could, of course, go into the social implications as well, but just for businesses, that alone to me would be an incentive to get thinking and acting on this. So any thoughts about where businesses that do this well would be able to have significant advantage and first mover benefits?

Harding: Businesses always are taking stock. They understand their environments. They understand how the world that they live in is changing and they understand what part they play in it. It will be down to individual businesses to look at this new technical possibility and say, “So now this is where we could make a change to our business.” It’s the vision moment where you see a combination of technical possibility and business advantage that will work for your organization.

It’s going to be different for every business and, I’m very happy to say, it’s something that computers aren’t going to be able to do for a very long time yet. It’s going to really be down to business people to do this, as they have been doing for centuries and millennia, to understand how they can take advantage of these things.

So it’s a very exciting time, and we’ll see businesses understanding and developing their individual business visions as the starting point for a cycle of business transformation, which is what we’ll be very much talking about in Philadelphia. So yes, there will be businesses that gain advantage, but I wouldn’t point to any particular business, or any particular sector and say, “It’s going to be them” or “It’s going to be them.”

Gardner: Dave Lounsbury, a last word to you. In terms of some of the future implications and vision, where could this lead in the not-too-distant future?

Lounsbury: I’d disagree a bit with my colleagues on this, and this could probably be a podcast on its own, Dana. You mentioned Siri, and I believe IBM just announced the commercial version of its Watson recommendation and analysis engine for use in some customer-facing applications.

I definitely see these as the thin end of the wedge in filling the gap between the growth of data and the analysis of data. I can imagine, not in the next couple of years but in the next couple of technology cycles, that we’ll see the concept of recommendations and analysis as a service, to bring it full circle to cloud. And keep in mind that all of case law is data and all of the medical textbooks ever written are data. Pick your industry, and there is a huge knowledge base that humans must currently keep on top of.

This approach and these advances in recommendation engines, driven by the availability of big data, are going to produce profound changes in the way knowledge workers do their jobs. That’s something that businesses, including their IT functions, absolutely need to stay in front of to remain competitive in the next decade or so.

Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when it needs doing, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group’s initiatives, Quantum Lifecycle Management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the big-data processing needed to achieve this aim.
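
As a toy illustration of that condition-based approach (a hypothetical sketch with invented sensor names and thresholds, not taken from the presentation), the overhaul decision becomes a function of the measurements rather than of the calendar:

```python
# Toy condition-based maintenance rule; thresholds and readings are invented.
THRESHOLDS = {"vibration_mm_s": 7.0, "egt_margin_c": 15.0}

def needs_overhaul(readings):
    """Flag an engine for overhaul when any monitored value crosses its limit.
    A fixed-schedule policy would make this decision without the data."""
    return (readings["vibration_mm_s"] > THRESHOLDS["vibration_mm_s"]
            or readings["egt_margin_c"] < THRESHOLDS["egt_margin_c"])

print(needs_overhaul({"vibration_mm_s": 7.4, "egt_margin_c": 22.0}))  # True
```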

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is one of the areas that I support as forum director.) Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever changing tools to handle Big Data of ever increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer, for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how the choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for a government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it poses major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route, and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishingboat-bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Data Governance: A Fundamental Aspect of IT

By E.G. Nadhan, HP

In an earlier post, I explained how you can build upon SOA governance to realize Cloud governance. But underlying both paradigms is a fundamental aspect that we have been dealing with ever since the dawn of IT—and that’s the data itself.

In fact, IT used to be referred to as “data processing.” Despite the continuing evolution of IT through various platforms, technologies, architectures and tools, at the end of the day IT is still processing data. However, the data has taken multiple shapes and forms—both structured and unstructured—and Cloud Computing has opened up new opportunities to process and store both. There has been a need for data governance since the day data processing was born, and today it has taken on a whole new dimension.

“It’s the economy, stupid,” was a campaign slogan, coined to win a critical election in the United States in 1992. Today, the campaign slogan for governance in the land of IT should be, “It’s the data, stupid!”

Let us challenge ourselves with a few questions. Consider them the what, why, when, where, who and how of data governance.

What is data governance? It is the mechanism by which we ensure that the right corporate data is available to the right people, at the right time, in the right format, with the right context, through the right channels.

Why is data governance needed? The Cloud, social networking and user-owned devices (BYOD) have acted as catalysts, triggering unprecedented growth in data in recent years. We need to control and understand the data we are dealing with in order to process it effectively and securely.

When should data governance be exercised? Well, when shouldn’t it be? Data governance kicks in at the source, where the data enters the enterprise. It continues across the information lifecycle, as data is processed and consumed to address business needs. And it is also essential when data is archived and/or purged.

Where does data governance apply? It applies to all business units and across all processes. Data governance has a critical role to play at the point of storage—the final checkpoint before data is stored as “golden” in a database (a minimal sketch of such a checkpoint follows the list below). Data governance also applies across all layers of the architecture:

  • Presentation layer where the data enters the enterprise
  • Business logic layer where the business rules are applied to the data
  • Integration layer where data is routed
  • Storage layer where data finds its home
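As a minimal sketch of the first of these checkpoints – governance exercised where data enters the enterprise – consider the following Python fragment. The field names and rules are hypothetical; the point is that a record is validated against a declared contract before it can become “golden” data.

    # Hypothetical contract: each field of an incoming record must satisfy a rule.
    GOLDEN_CONTRACT = {
        "customer_id": lambda v: isinstance(v, str) and v.strip() != "",
        "country":     lambda v: isinstance(v, str) and len(v) == 2,
        "balance":     lambda v: isinstance(v, (int, float)),
    }

    def admit(record):
        """Return (ok, problems); only clean records reach the storage layer."""
        problems = [field for field, rule in GOLDEN_CONTRACT.items()
                    if field not in record or not rule(record[field])]
        return (not problems, problems)

    print(admit({"customer_id": "C-42", "country": "CH", "balance": 10.0}))
    print(admit({"customer_id": "", "country": "Switzerland"}))  # rejected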

Who does data governance apply to? It applies to all business leaders, consumers, generators and administrators of data. It is a good idea to identify stewards for the ownership of key data domains. Stewards must ensure that their data domains abide by the enterprise architectural principles, and should continuously analyze the impact of various business events on their domains.

How is data governance applied? Data governance must be exercised at the enterprise level, with federated governance to individual business units and data domains. It should be proactively exercised when a new process, application, repository or interface is introduced, since existing data is likely to be affected. In the absence of effective data governance, data is likely to be duplicated, either by chance or by choice.

In our data universe, “informationalization” yields valuable intelligence that enables effective decision-making and analysis. However, even having the best people, process and technology is not going to yield the desired outcomes if the underlying data is suspect.

How about you? How is the data in your enterprise? What governance measures do you have in place? I would like to know.

A version of this blog post was originally published on HP’s Journey through Enterprise IT Services blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has more than 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project, and is also the founding co-chair for the Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.


2013 Open Group Predictions, Vol. 1

By The Open Group

A big thank you to all of our members and staff who have made 2012 another great year for The Open Group. There were many notable achievements this year, including the release of ArchiMate 2.0, the launch of the Future Airborne Capability Environment (FACE™) Technical Standard and the publication of the SOA Reference Architecture (SOA RA) and the Service-Oriented Cloud Computing Infrastructure Framework (SOCCI).

As we wrap up 2012, we couldn’t help but look towards what is to come in 2013 for The Open Group and the industries we’re a part of. Without further ado, here are our predictions:

Big Data
By Dave Lounsbury, Chief Technical Officer

Big Data is on top of everyone’s mind these days. Consumerization, mobile smart devices, and expanding retail and sensor networks are generating massive amounts of data on behavior, environment, location, buying patterns and more – producing what is being called “Big Data”. In addition, as the use of personal devices and social networks continues to gain popularity, so does the expectation to have access to such data and the computational power to use it anytime, anywhere. Organizations will turn to IT to restructure their services to meet the growing expectation of control over and access to data.

Organizations must embrace Big Data to drive their decision-making and to provide the optimal mix of services to customers. Big Data is becoming so big that the big challenge is how to use it to make timely decisions. IT naturally focuses on collecting data, so Big Data itself is not the issue. To allow humans to keep on top of this flood of data, industry will need to move away from programming computers to store and process data, and towards teaching computers how to assess large amounts of uncorrelated data and draw inferences from this data on their own. We also need to start thinking about the skills that people need in the IT world to not only handle Big Data, but to make it actionable. Do we need “Data Architects” and, if so, what would their role be?

In 2013, we will see the beginning of the Intellectual Computing era. IT will play an essential role in this new era and will need to help enterprises look at uncorrelated data to find the answer.

Security

By Jim Hietala, Vice President of Security

As 2012 comes to a close, some of the big developments in security over the past year include:

  • Continuation of hacktivism attacks.
  • Increase of significant and persistent threats targeting government and large enterprises.
  • The notable U.S. National Strategy for Trusted Identities in Cyberspace started to make progress in the second half of the year, in terms of industry and government movement to address fundamental security issues.
  • Security breaches were discovered by third parties, where the organizations affected had no idea that they were breached. Data from the 2012 Verizon report suggests that 92 percent of companies breached were notified by a third party.
  • Acknowledgement from senior U.S. cybersecurity professionals that organizations fall into two groups: those that know they’ve been penetrated, and those that have been penetrated, but don’t yet know it.

In 2013, we’ll no doubt see more of the same on the attack front, plus increased focus on mobile attack vectors. We’ll also see more focus on detective security controls, reflecting greater awareness of the threat and of the reality that many large organizations have already been penetrated, and that responding appropriately therefore requires far more attention on detection and incident response.

We’ll also likely see the U.S. move forward with cybersecurity guidance from the executive branch, in the form of a Presidential directive. New national cybersecurity legislation seemed to come close to happening in 2012, and when it failed to become a reality, there were many indications that the administration would make something happen by executive order.

Enterprise Architecture

By Leonard Fehskens, Vice President of Skills and Capabilities

Preparatory to my looking back at 2012 and forward to 2013, I reviewed what I wrote last year about 2011 and 2012.

Probably the most significant thing from my perspective is that so little has changed. In fact, I think in many respects the confusion about what Enterprise Architecture (EA) and Business Architecture are about has gotten worse.

The stress within the EA community continues to grow, as both the demands being placed on it and the diversity of opinion within it increase. This year, I saw a lot more concern about the value proposition for EA, but not a lot of (read “almost no”) convergence on what that value proposition is.

Last year I wrote “As I expected at this time last year, the conventional wisdom about Enterprise Architecture continues to spin its wheels.”  No need to change a word of that. What little progress at the leading edge was made in 2011 seems to have had no effect in 2012. I think this is largely a consequence of the dust thrown in the eyes of the community by the ascendance of the concept of “Business Architecture,” which is still struggling to define itself.  Business Architecture seems to me to have supplanted last year’s infatuation with “enterprise transformation” as the means of compensating for the EA community’s entrenched IT-centric perspective.

I think this trend and the quest for a value proposition are symptomatic of the same thing — the urgent need for Enterprise Architecture to make its case to its stakeholder community, especially to the people who are paying the bills. Something I saw in 2011 that became almost epidemic in 2012 is conflation — the inclusion under the Enterprise Architecture umbrella of nearly anything with the slightest taste of “business” to it. This has had the unfortunate effect of further obscuring the unique contribution of Enterprise Architecture, which is to bring architectural thinking to bear on the design of human enterprise.

So, while I’m not quite mired in the Slough of Despond, I am discouraged by the community’s inability to advance the state of the art. In a private communication to some colleagues I wrote, “the conventional wisdom on EA is at about the same state of maturity as 14th century cosmology. It is obvious to even the most casual observer that the earth is both flat and the center of the universe. We debate what happens when you fall off the edge of the Earth, and is the flat earth carried on the back of a turtle or an elephant? Does the walking of the turtle or elephant rotate the crystalline sphere of the heavens, or does the rotation of the sphere require the turtlephant to walk to keep the earth level? These are obviously the questions we need to answer.”

Cloud

By Chris Harding, Director of Interoperability

2012 has seen the establishment of Cloud Computing as a mainstream resource for enterprise architects and the emergence of Big Data as the latest hot topic, likely to be mainstream for the future. Meanwhile, Service-Oriented Architecture (SOA) has kept its position as an architectural style of choice for delivering distributed solutions, and the move to ever more powerful mobile devices continues. These trends have been reflected in the activities of our Cloud Computing Work Group and in the continuing support by members of our SOA work.

The use of Cloud, Mobile Computing, and Big Data to deliver on-line systems that are available anywhere at any time is setting a new norm for customer expectations. In 2013, we will see the development of Enterprise Architecture practice to ensure the consistent delivery of these systems by IT professionals, and to support the evolution of creative new computing solutions.

IT systems are there to enable the business to operate more effectively. Customers expect constant on-line access through mobile and other devices. Business organizations work better when they focus on their core capabilities, and let external service providers take care of the rest. On-line data is a huge resource, so far largely untapped. Distributed, Cloud-enabled systems, using Big Data, and architected on service-oriented principles, are the best enablers of effective business operations. There will be a convergence of SOA, Mobility, Cloud Computing, and Big Data as they are seen from the overall perspective of the enterprise architect.

Within The Open Group, the SOA and Cloud Work Groups will continue their individual work, and will collaborate with other forums and work groups, and with outside organizations, to foster the convergence of IT disciplines for distributed computing.


Creating Reference Architecture: The Center of Excellence

By Serge Thorn, Architecting the Enterprise

This is the second installment of a three-part series discussing how to implement SOA through TOGAF®. In my first blog post I explained the concept of the Center of Excellence, and creating a vision for your organization.

The SOA Center of Excellence (CoE) will need to define a Reference Architecture for the organization.

A Reference Architecture for SOA is an abstract realization of an architectural model, showing how an architectural solution can be built while omitting any reference to specific concrete technologies. A Reference Architecture is like an abstract machine: it is built to realize some function and it, in turn, relies on a set of underlying components and capabilities that must be present for it to perform. The capabilities are normally captured into layers, which in their own right require an architectural definition. However, the specific choice of the components representing the capabilities is made after various business and feasibility analyses have been performed. A Reference Architecture can be used to guide the realization of implementations where specific properties are desired of the concrete system.

The purpose of the Reference Architecture is reflected in the set of requirements that the Reference Architecture must satisfy. We can structure these requirements into a set of goals, a set of critical success factors associated with these goals and a set of requirements that are connected to the critical success factors that ensure their satisfaction.

A Reference Architecture for SOA describes how to build systems according to the principles of SOA. These principles direct IT professionals to design, implement, and deploy information systems from components (i.e. services) that implement discrete business functions. These services can be distributed across geographic and organizational boundaries, can be independently scaled and can be reconfigured into new business processes as needed. This flexibility provides a range of benefits for both IT and business organizations.
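The principle is easy to caricature in a few lines of code. In the sketch below – purely illustrative, with invented service names – each function stands for a discrete business service, and a business process is simply a composition of them that can be rearranged without touching the services themselves.

    # Each function stands for an independently deployable business service.
    def check_credit(customer_id):
        return True  # placeholder for a real credit-check service

    def reserve_stock(sku, qty):
        return f"RES-{sku}-{qty}"  # placeholder for a real inventory service

    def take_order(customer_id, sku, qty):
        """One business process composed from the discrete services above."""
        if not check_credit(customer_id):
            raise ValueError("credit check failed")
        return reserve_stock(sku, qty)

    # A new process can be assembled from the same services without changing
    # them - the reconfigurability the Reference Architecture aims to enable.
    print(take_order("C-1", "SKU-9", 2))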

Using the pattern approach, the SOA Reference Architecture is a means for generating other, more specific reference architectures, or even concrete architectures, depending on the nature of the patterns. Or to put it another way, it is a machine for generating other machines.

The Open Group SOA Reference Architecture (SOA RA) standard is a good way of considering how to build systems.

The SOA CoE also needs to define SOA lifecycle management, which consists of activities such as governing, modelling, assembling, deploying and controlling/monitoring.

Simply put, without management and control there is no SOA, only an “experience”. The SOA infrastructure must be managed in accordance with the goals and policies of the organization. These include hardware and software IT resource utilization and performance standards, service level objectives (SLOs) for the services provided to IT users, and the business goals and policies of the businesses that run and use IT. To be truly agile, enactment of all these different types of policies requires automated control that allows goals to be met with only the prescribed level of human interaction.
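As a small illustration of what such a run-time control hook might look like, the Python sketch below measures each service call against an SLO at the point of invocation. The 200 ms target and the alerting action are invented; a real platform would feed a monitoring layer rather than print.

    # Illustrative SLO check at the point of invocation (hypothetical target).
    import functools
    import time

    def slo(max_seconds=0.2):
        def wrap(fn):
            @functools.wraps(fn)
            def guarded(*args, **kwargs):
                start = time.perf_counter()
                result = fn(*args, **kwargs)
                elapsed = time.perf_counter() - start
                if elapsed > max_seconds:
                    # in a real platform this would feed the monitoring layer
                    print(f"SLO breach: {fn.__name__} took {elapsed:.3f}s")
                return result
            return guarded
        return wrap

    @slo(max_seconds=0.2)
    def price_quote(sku):
        time.sleep(0.01)  # stand-in for real work
        return {"sku": sku, "price": 99.0}

    price_quote("SKU-9")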

For every layer of the SOA infrastructure, a corresponding Manage and Control component needs to be in place. Moreover, the “manage and control” components must be integrated in such a way that they can provide an end-to-end view of the entire SOA infrastructure.

These manage and control functions provide the run-time management and control of the entire enterprise IT execution environment.  This includes all of the enterprise’s business processes and information services, including those associated with the IT organization’s own business processes.

The Principle of Service-Orientation, as defined in section 22.7.1.1 of the TOGAF® 9.1 framework, must be in place, but lower-level principles, rules, and guidelines are also required.

Needs and capabilities are not mechanisms in the SOA Reference Architecture. They are the guiding principles for building and using a particular SOA. Nonetheless, the usefulness of a particular SOA depends on how well the needs and capabilities are defined, understood, and satisfied.

Architecture principles define the underlying general rules and guidelines for the use and deployment of all IT resources and assets across the enterprise. They reflect a level of consensus among the various elements of the enterprise, and form the basis for making future IT decisions.

Guiding principles define the ground rules for development, maintenance, and usage of the SOA. Specific principles for architecture design or service definition are derived from these guiding principles, focusing on specific themes. These principles are the characteristics that provide the intrinsic behaviour for the style of design.

In the third and final installment of this series I will discuss how to relate SOA principles back to business objectives and key architecture drivers.

Serge Thorn is CIO of Architecting the Enterprise. He has worked in the IT industry for over 25 years, in a variety of roles, which include: Development and Systems Design, Project Management, Business Analysis, IT Operations, IT Management, IT Strategy, Research and Innovation, IT Governance, Architecture and Service Management (ITIL). He is the Chairman of the itSMF (IT Service Management Forum) Swiss chapter and is based in Geneva, Switzerland.


Discover the World’s First Technical Cloud Computing Standard… for the Second Time

By E.G. Nadhan, HP

Have you heard of the first technical standard for Cloud Computing—SOCCI (pronounced “saw-key”)? Wondering what it stands for? Well, it stands for Service Oriented Cloud Computing Infrastructure.

Whether you are just beginning to deploy solutions in the cloud or already have cloud solutions deployed, SOCCI can be applied to each organization’s situation. Wherever you are on the spectrum of cloud adoption, the standard offers a well-defined set of architecture building blocks with specific roles outlined in detail. Thus, the standard can be used in multiple ways, including:

  • Defining the service oriented aspects of your infrastructure in the cloud as part of your reference architecture
  • Validating your reference architecture to ensure that these building blocks have been appropriately addressed

The standard provides you an opportunity to systematically perform the following in the context of your environment:

  • Identify synergies between service orientation and the cloud
  • Extend adoption of traditional and service-oriented infrastructure in the cloud
  • Apply the consumer, provider and developer viewpoints on your cloud solution
  • Incorporate foundational building blocks into enterprise architecture for infrastructure services in the cloud
  • Implement cloud-based solutions using different infrastructure deployment models
  • Realize business solutions referencing the business scenario analyzed in this standard

Are you going to be SOCCI’s first application? Are you among the cloud innovators—opting not to wait when the benefits can be had today?

Incidentally, I will be presenting this standard for the second time at the HP Discover conference in Frankfurt on December 5, 2012. I plan on discussing this standard, as well as its application in a hypothetical business scenario, so that we can collectively brainstorm on how it could apply in different business environments.

In an earlier tweet chat on cloud standards, I tweeted: “Waiting for standards is like waiting for Godot.” After the #DT2898 session at HP Discover 2012, I expect to tweet, “Waiting for standards may be like waiting for Godot, but waiting for the application of a standard does not have to be so.”

A version of this blog post originally appeared on the Journey through Enterprise IT Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.


Call for Submissions

By Patty Donovan, The Open Group

The Open Group Blog is celebrating its second birthday this month! Over the past two years, our blog posts have tended to cover Open Group activities – conferences, announcements, our lovely members, etc. While several members and Open Group staff serve as regular contributors, we’d like to take this opportunity to invite our community members to share their thoughts and expertise on topics related to The Open Group’s areas of expertise as guest contributors.

Here are a few examples of popular guest blog posts that we’ve received over the past year.

Blog posts generally run between 500 and 800 words and address topics relevant to The Open Group workgroups, forums, consortiums and events. Some suggested topics are listed below.

  • ArchiMate®
  • Big Data
  • Business Architecture
  • Cloud Computing
  • Conference recaps
  • DirectNet
  • Enterprise Architecture
  • Enterprise Management
  • Future of Airborne Capability Environment (FACE™)
  • Governing Board Businesses
  • Governing Board Certified Architects
  • Governing Board Certified IT Specialists
  • Identity Management
  • IT Security
  • The Jericho Forum
  • The Open Group Trusted Technology Forum (OTTF)
  • Quantum Lifecycle Management
  • Real-Time Embedded Systems
  • Semantic Interoperability
  • Service-Oriented Architecture
  • TOGAF®

If you have any questions or would like to contribute, please contact opengroup (at) bateman-group.com.

Please note that all content submitted to The Open Group blog is subject to The Open Group approval process. The Open Group reserves the right to deny publication of any contributed works. Anything published shall be copyright of The Open Group.

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.


Build Upon SOA Governance to Realize Cloud Governance

By E.G. Nadhan, HP

The Open Group SOA Governance Framework just became an International Standard, available to governments and enterprises worldwide. At the same time, I read an insightful post by ZDNet blogger Joe McKendrick, who states that Cloud and automation are driving new growth in the SOA governance market. I have always maintained that the fundamentals of Cloud Computing are based upon SOA principles. This brings up the next natural question: Where are we with Cloud Governance?

I co-chair the Open Group project for defining the Cloud Governance framework. Fundamentally, the Cloud Governance framework builds upon The Open Group SOA Governance Framework and provides additional context for Cloud Governance in relation to other governance standards in the industry. We are with Cloud Governance today where we were with SOA Governance a few years back when The Open Group started on the SOA Governance framework project.

McKendrick goes on to say that the tools and methodologies built and stabilized over the past few years for SOA projects are seeing renewed life as enterprises move to the Cloud model. In McKendrick’s words, “it is just a matter of getting the word out.” That may be the case for the SOA governance market. But, is that so for Cloud Governance?

When it comes to Cloud Governance, it is more than just getting the word out. We must make progress in the following areas for Cloud Governance to become real:

  • Sustained adoption. Enterprises must continuously adopt cloud based services balancing it with outsourcing alternatives. This will give more visibility to the real-life use cases where Cloud Governance can be exercised to validate and refine the enabling set of governance models.
  • Framework Definition. Cloud Governance needs a standard framework to facilitate its adoption. Just like the SOA Governance Framework, the definition of a standard for the Cloud Governance Framework, as well as the supporting reference models, will pave the way for the consistent adoption of Cloud Governance.

Once these progressions are made, Cloud Governance will be positioned like SOA Governance—and it will then be just a “matter of getting the word out.”

A version of this blog post originally appeared on the Journey through Enterprise IT Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.


SOA Provides Needed Support for Enterprise Architecture in Cloud, Mobile, Big Data, Says Open Group Panel

By Dana Gardner, BriefingsDirect

There’s been a resurgent role for service-oriented architecture (SOA) as a practical and relevant ingredient for effective design and use of Cloud, mobile, and big data technologies.

To find out why, The Open Group recently gathered an international panel of experts to explore the concept of “architecture is destiny,” especially when it comes to hybrid services delivery and management. The panel shows how SOA is proving instrumental in enabling the needed advances in highly distributed services and data when it comes to scale, heterogeneity support, and governance.

The panel consists of Chris Harding, Director of Interoperability at The Open Group, based in the UK; Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture projects within The Open Group, based in Michigan; and Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group, based in Sweden. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

The full podcast can be found here.

Here are some excerpts:

Gardner: Why this resurgence in the interest around SOA?

Harding: My role in The Open Group is to support the work of our members on SOA, Cloud computing, and other topics. We formed the SOA Work Group back in 2005, when SOA was a real emerging hot topic, and we set up a number of activities and projects. They’re all completed.

I was thinking that the SOA Work Group would wind down, move into maintenance mode, and meet once every few months or so, but we still get a fair attendance at our regular web meetings.

In fact, we’ve started two new projects and we’re about to start a third one. So, it’s very clear that there is still an interest, and indeed a renewed interest, in SOA from the IT community within The Open Group.

Larger trends

Gardner: Nikhil, do you believe that this has to do with some of the larger trends we’re seeing in the field, like Cloud Software as a Service (SaaS)? What’s driving this renewal?

Kumar: What I see driving it is three things. One is the advent of the Cloud and mobile, which requires a lot of cross-platform delivery of consistent services. The second is emerging technologies, mobile, big data, and the need to be able to look at data across multiple contexts.

The third thing that’s driving it is legacy modernization. A lot of organizations are now a lot more comfortable with SOA concepts. I see it in a number of our customers. I’ve just been running a large Enterprise Architecture initiative in a Fortune 500 customer.

At each stage, and at almost every point in that, they’re now comfortable. They feel that SOA can provide the ability to rationalize multiple platforms. They’re restructuring organizational structures, delivery organizations, as well as targeting their goals around a service-based platform capability.

So legacy modernization is a back-to-the-future kind of thing that has come back and is getting adoption. The way it’s being implemented is using RESTful services, as well as SOAP services, which is different from traditional SOA, say from the last version, which was mostly SOAP-driven.
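[To make the contrast concrete: where classic SOA exposed SOAP endpoints described in WSDL, the newer style exposes resources over plain HTTP. A minimal RESTful service, using only Python’s standard library, might look like the following; the resource, port, and payload are invented for illustration, and this is not drawn from any panelist’s project.]

    # Minimal RESTful service sketch (standard library only; illustrative).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class OrderService(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/orders/"):
                order_id = self.path.rsplit("/", 1)[-1]
                body = json.dumps({"id": order_id, "status": "shipped"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)  # unknown resource

    if __name__ == "__main__":
        # GET http://localhost:8080/orders/42 returns a JSON representation.
        HTTPServer(("localhost", 8080), OrderService).serve_forever()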

Gardner: Mats, do you think that what’s happened is that the marketplace and the requirements have changed and that’s made SOA more relevant? Or has SOA changed to better fit the market? Or perhaps some combination?

Gejnevall: I think that the Cloud is really a service delivery platform. Companies discover that to be able to use the Cloud services, the SaaS things, they need to look at SOA as their internal development way of doing things as well. They understand they need to do the architecture internally, and if they’re going to use lots of external Cloud services, you might as well use SOA to do that.

Also, if you look at the Cloud suppliers, they also need to do their architecture in some way and SOA probably is a good vehicle for them. They can use that paradigm and also deliver what the customer wants in a well-designed SOA environment.

Gardner: Let’s drill down on the requirements around the Cloud and some of the key components of SOA. We’re certainly seeing, as you mentioned, the need for cross support for legacy, Cloud types of services, and using a variety of protocol, transports, and integration types. We already heard about REST for lightweight approaches and, of course, there will still be the need for object brokering and some of the more traditional enterprise integration approaches.

This really does sound like the job for an Enterprise Service Bus (ESB). So let’s go around the panel and look at this notion of an ESB. Some people, a few years back, didn’t think it was necessary or a requirement for SOA, but it certainly sounds like it’s the right type of functionality for the job.

Loosely coupled

Harding: I believe so, but maybe we ought to consider that in the Cloud context, you’re not just talking about within a single enterprise. You’re talking about a much more loosely coupled, distributed environment, and the ESB concept needs to take account of that in the Cloud context.

Gardner: Nikhil, any thoughts about how to manage this integration requirement around the modern SOA environment and whether ESBs are more or less relevant as a result?

Kumar: In the context of a Cloud we really see SOA and the concept of service contracts coming to the fore. In that scenario, ESBs play a role as a broker within the enterprise. When we talk about the interaction across Cloud-service providers and Cloud consumers, what we’re seeing is that the service provider has his own concept of an ESB within its own internal context.

If you want your Cloud services to be really reusable, the concept of the ESB then becomes more for the routing and the mediation of those services, once they’re provided to the consumer. There’s a kind of separation of concerns between the concept of a traditional ESB and a Cloud ESB, if you want to call it that.

The Cloud context involves more of the need to be able to support, enforce, and apply governance concepts and audit concepts, the capabilities to ensure that the interaction meets quality of service guarantees. That’s a little different from the concept that drove traditional ESBs.

That’s why you’re seeing API management platforms like Layer 7, Mashery, or Apigee, and other product lines. They’re also coming into the picture, driven by the need to be able to support the way Cloud providers are provisioning their services. As Chris put it, you’re looking beyond the enterprise. Who owns it? That’s where the role of the ESB is different from the traditional concept.

Most Cloud platforms have cost factors associated with locality. If you have truly global enterprises and services, you need to factor in the ability to deal with safe harbor issues, and you need to factor in variations in law in terms of security governance.

The platforms that are evolving are starting to provide this out of the box. The service consumer or a service provider needs to be able to support those. That’s going to become the role of their ESB in the future: to be able to consume a service, to be able to assert this quality-of-service guarantee, and to manage constraints on data-in-flight and data-at-rest.
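[To pin down the mediation role Nikhil describes, here is a rough client-side sketch – not from the panel, with invented provider names and budgets – in which a broker routes a request, asserts a latency guarantee, and keeps an audit trail.]

    # Hedged sketch of client-side mediation: route, assert QoS, audit.
    import time

    PROVIDERS = {"crm": lambda req: {"status": "ok", "echo": req}}  # hypothetical
    AUDIT_LOG = []

    def mediate(service, request, budget_s=0.5):
        """Invoke a provider, record an audit entry, and assert the latency budget."""
        start = time.perf_counter()
        response = PROVIDERS[service](request)
        elapsed = time.perf_counter() - start
        AUDIT_LOG.append({"service": service, "elapsed": elapsed,
                          "within_slo": elapsed <= budget_s})
        if elapsed > budget_s:
            # asserting the quality-of-service guarantee after the fact
            raise TimeoutError(f"{service} exceeded its {budget_s}s budget")
        return response

    print(mediate("crm", {"customer": "C-1"}))
    print(AUDIT_LOG)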

Gardner: Mats, are there other aspects of the concept of ESB that are now relevant to the Cloud?

Entire stack

Gejnevall: One of the reasons SOA didn’t really take off in many organizations three, four, or five years ago was the need to buy the entire stack of SOA products that all the consultancies were asking companies to buy – an ESB, governance tools, business process management tools – a lot of quite large investments just to get your foot in the door of doing SOA.

These days, you can buy the entire stack in the Cloud and start playing with it. I did some searches on it today and found a company where you can play with the entire stack, including business tools and everything like that, for zero dollars. Then you can grow and use more and more of it in your business, but you can start to see if this is something for you.

In the past, the suppliers or the consultants told you that you could do it. You couldn’t really try it out yourself. You needed both the software and the hardware in place. The money to get started is much lower today. That’s another reason people might be thinking about it these days.

Gardner: It sounds as if there’s a new type of on-ramp to SOA values, and the componentry that supports SOA is now being delivered as a service. On top of that, you’re also able to consume it in a pay-as-you-go manner.

Harding: That’s a very good point, but there are two contradictory trends we are seeing here. One is the kind of trend that Mats is describing, where the technology you need to handle a complex stack is becoming readily available in the Cloud.

And the other is the trend that Nikhil mentioned: to go for a simpler style, which a lot of people term REST, for accessing services. It will be interesting to see how those two tendencies play out against each other.

Kumar: I’d like to make a comment on that. The approach for the on-ramp is really one of the key differentiators of the Cloud, because you have the agility and the lack of capital investment (CAPEX) required to test things out.

But as we are evolving with Cloud platforms, I’m also seeing, in a lot of Platform-as-a-Service (PaaS) vendor scenarios, that they’re including the ESB in the stack itself. They’re providing it in their Cloud fabric. A couple of large players have already done that.

For example, Azure provides that in the forward-looking vision. I am sure IBM and Oracle have already started down that path. A lot of the players are going to provide it as a core capability.

Pre-integrated environment

Gejnevall: Another interesting thing is that they could get a whole environment that’s pre-integrated. Usually, when you buy these things from a vendor, a lot of times they don’t fit together that well. Now, there’s an effort to make them work together.

But some people have put these open-source tools together and put them out on the Cloud, which gives them a pretty cheap platform for themselves. Then, they can sell it at a reasonable price, because of the integration of all these things.

Gardner: The Cloud model may be evolving toward an all-inclusive offering. But SOA, by its definition, advances interoperability, to plug and play across existing, current, and future sets of service possibilities. Are we talking about SOA being an important element of keeping Clouds dynamic and flexible — even open?

Kumar: We can think about the OSI 7 Layer Model. We’re evolving in terms of complexity, right? So from an interoperability perspective, we may talk SOAP or REST, for example, but the interaction with AWS, Salesforce, SmartCloud, or Azure would involve using the APIs that each of these platforms provides for interaction.

Lock-in

So you could have an AMI, which is an image on the Amazon Web Services environment, for example, and that could support a LAMP stack or another open source stack. How you interact with it, how you monitor it, how you cluster it – all of those aspects now start factoring in specific APIs, and so that’s the lock-in.

From an architect’s perspective, I look at it as we need to support proper separation of concerns, and that’s part of [The Open Group] SOA Reference Architecture. That’s what we tried to do, to be able to support implementation architectures that support that separation of concerns.

There’s another factor that we need to understand from the context of the Cloud, especially for mid-to-large sized organizations, and that is that the Cloud service providers, especially the large ones — Amazon, Microsoft, IBM — encapsulate infrastructure.

If you were to go to Amazon, Microsoft, or IBM and use their IaaS networking capabilities, you’d have one of the largest WAN networks in the world, and you wouldn’t have to pay a dime to establish that infrastructure. Not in terms of the cost of the infrastructure, not in terms of the capabilities required, nothing. So that’s an advantage that the Cloud is bringing, which I think is going to be very compelling.

The other thing is that, from an SOA context, you’re now able to look at it and say, “Well, I’m dealing with the Cloud, and what all these providers are doing is to make it seamless, whether you’re dealing with the Cloud or on-premise.” That’s an important concept.

Now, each of these providers, and different aspects of their stacks, are at significantly different levels of maturity. Many of these providers may find that their own stacks do not even interoperate internally, just because they’re using different run times, different implementations, etc. That’s another factor to take in.

From an SOA perspective, the Cloud has become very compelling, because I’m dealing, let’s say, with Salesforce.com, and I want to use that same service within the enterprise – let’s say, an insurance capability – for Microsoft Dynamics or for SugarCRM. If that capability is exposed as one source of truth in the enterprise, you’ve now reduced the complexity and have the ability to adopt different Cloud platforms.

What we are going to start seeing is that the Cloud is going to shift from being just one à-la-carte solution for everybody. It’s going to become something similar to what we used to deal with in the enterprise context. You had multiple applications, which you service-enabled to reduce complexity and provide one service-based capability, instead of an application-centered approach.

You’re now going to move the context to the Cloud, to your multiple Cloud solutions – maybe many implementations, in a nontrivial environment, of the same business capability – but they are now exposed as services in the enterprise SOA. You could have Salesforce. You could have Amazon. You could have an IBM implementation. And you could pick and choose the source of truth and share it.

So a lot of the core SOA concepts will still apply and are still applying.

Another on-ramp

Gardner: Perhaps yet another on-ramp to the use of SOA is the app store, which allows for discovery and socialization of services, but at the same time provides governance and control?

Kumar: We’re seeing that with a lot of our customers. Typically, the vendors who support PaaS solutions associate app store models with their platforms as a mechanism to gain market share.

The issue that you run into with that is that it’s okay if it’s on your cellphone or on your iPad, your tablet PC, or whatever, but once you start having managed apps – for example Salesforce – or applications which are being deployed in an Azure or a SmartCloud context, you have a high-risk scenario. You don’t know how well architected that application is. It’s just like going and buying an enterprise application.

When you deploy it in the Cloud, you really need to understand that particular PaaS platform to understand the implications in terms of dependencies and cross-dependencies across the apps you have installed. These have real practical implications in terms of maintainability and performance. We’ve seen that with at least two platforms in the last six months.

Governance becomes extremely important. Because of the low CAPEX implications to the business, the business is very comfortable with going and buying these applications and saying, “We can install X, Y, or Z and it will cost us two months and a few million dollars and we are all set.” Or maybe it’s a few hundred thousand dollars.

They don’t realize the implications that can arise in terms of interoperability, performance, and standard architectural quality attributes. There is a governance aspect from the context of the Cloud provisioning of these applications.

There is another aspect to it, which is governance in terms of the run-time – more classic SOA governance – to measure, assert, and view the cost of these applications in terms of performance, infrastructural resources, and security constraints. Also, are there scenarios where the application itself has a dependency on a daisy chain of multiple external applications, to trace the data?

In terms of the context of app stores, they’re almost like SaaS with a particular platform in mind. They provide the buyer with certain commitments from the platform manager or the platform provider, such as security. When you buy an app from Apple, there is at least a reputational expectation of security from the vendor.

What you do not always know is if that security is really being provided. There’s a risk there for organizations who are exposing mission-critical data to that.

The second thing is that there is still very much a place for the classic SOA registries and repositories in the Cloud – only now for a different purpose. Those registries and repositories are used either by service providers or by consumers to maintain the list of services they’re using internally.

Different paradigms

There are two different paradigms. The app store is a place where I can go and know that the gas I am going to get is 85 percent ethanol; versus, I also have to maintain some basic set of goods at home to make sure that I have my dinner on time. These are different kinds of roles and different kinds of purposes they’re serving.

Above all, I think the thing that’s going to become more and more important in the context of the Cloud is that the functionality will be provided by the Cloud platform or the app you buy, but the governance will be a major IT responsibility, right from the time of picking the app, to the time of delivering it, to the time of monitoring it.

Gardner: How is The Open Group allowing architects to better exercise SOA principles, as they’re grappling with some of these issues around governance, hybrid services delivery and management, and the use and demand in their organizations to start consuming more Cloud services?

Harding: The architect’s primary concern, of course, has to be to meet the needs of the client and to do so in a way that is most effective and cost-effective. Cloud gives the architect the ability to go out and get different components much more easily than hitherto.

There is a problem, of course, with integrating them and putting them together. SOA can provide part of the solution to that problem, in that it gives a principle of loosely coupled services. If you didn’t have that when you were trying to integrate different functionality from different places, you would be in a real mess.

What The Open Group contributes is a set of artifacts that enable the architect to think through how to meet the client’s needs in the best way when working with SOA and Cloud.

For example, the SOA Reference Architecture helps the architect understand what components might be brought into the solution. We have the SOA TOGAF Practical Guide, which helps the architect understand how to use TOGAF® in the SOA context.

We’re working further on artifacts in the Cloud space: the Cloud Computing Reference Architecture; a notational language for enabling people to describe Cloud ecosystems; recommendations for Cloud interoperability and portability; and recommendations for Cloud governance to complement the recommendations for SOA governance in the SOA Governance Framework standard that we have already produced, along with a number of other artifacts.

The Open Group’s real role is to support the architect and help the architect to better meet the needs of the architect’s client.

From the very early days, SOA was seen as bringing a closer connection between business and technology. A lot of the promises made about SOA seven or eight years ago are only now becoming possible to fulfill, and that business front is what that project is looking at.

We’re also producing an update to the SOA Reference Architecture. We have submitted the SOA Reference Architecture for consideration by the ISO group that is working on an International Standard reference architecture for SOA, and also to the IEEE group that is working on an IEEE standard reference architecture.

We hope that both of those groups will want to work along the principles of our SOA Reference Architecture and we intend to produce a new version that incorporates the kind of ideas that they want to bring into the picture.

We’re also thinking of setting up an SOA project to look specifically at assistance to architects building SOA into enterprise solutions.

So those are three new initiatives that should result in new Open Group standards and guides to complement, as I have described already, the SOA Reference Architecture, the SOA Governance Framework, and the Practical Guide to using TOGAF for SOA.

We also have the Service Integration Maturity Model that can be used to assess SOA maturity. We have a standard on service orientation applied to Cloud infrastructure, and we have a formal SOA Ontology.

Those are the things The Open Group has in place at present to assist the architect, and we are, and will be, working on three new things: version 2 of the SOA Reference Architecture; SOA for business technology; and, I believe shortly, assistance to architects in developing SOA solutions.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


Filed under Cloud, Cloud/SOA, Service Oriented Architecture

I Thought I had Said it All – and Then Comes Service Technology

By E.G. Nadhan, HP

This is not the first time that I have blogged about the evolution of fundamental service-orientation principles serving as an effective foundation for cloud computing. You may recall my earlier posts on The Open Group blog, Top 5 tell-tale signs of SOA evolving to the Cloud and The Right Way to Transform to Cloud Computing, as well as my latest post on this topic about taking a lesson from history to integrate to the Cloud. I thought I had said it all, and that there was nothing more to blog about on this topic other than diving into more details.

Until I saw the post by Forbes blogger Joe McKendrick, Before There Was Cloud Computing, There Was SOA. In this post, McKendrick introduces a new term – Service Technology – which resonates with me because it cements the concept of service-oriented thinking that technically enables the realization of SOA within the enterprise, followed by its sustained evolution to cloud computing. In fact, the 5th International SOA, Cloud and Service Technology Symposium is a conference centered on this concept.

Even if this is a natural evolution, we must still exercise caution so that we don’t fall prey to the same pitfalls of integration that the IT world did in the past. I elaborate further on this topic in my post on The Open Group blog: Take a Lesson from History to Integrate to the Cloud.

I was intrigued by another comment in McKendrick’s post about “Cloud being inherently service-oriented.” Almost. I would slightly rephrase it to Cloud done right being inherently service-oriented. So, what do I mean by Cloud done right? Voilà: The Right Way to Transform to Cloud Computing on The Open Group blog.

So, how about you? Where are you with your SOA strategy? Have you been selectively transforming to the Cloud? Do you have “Service Technology” in place within your enterprise?

I would like to know, and something tells me McKendrick will as well.

So, it would be an interesting exercise to see whether the first Technical Standard for Cloud Computing published by The Open Group should be extended to accommodate the concept of Service Technology. Perhaps it is already an integral part of this standard in concept. Please let me know if you are interested. As the co-chair for this Open Group project, I am very interested in working with you on taking the next steps.

A version of this blog post originally appeared on the Journey through Enterprise IT Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.


Filed under Cloud/SOA

Take a Lesson from History to Integrate to the Cloud

By E.G. Nadhan, HP

In an earlier post for The Open Group blog, Top 5 tell-tale signs of SOA evolving to the Cloud, I outlined the various characteristics of SOA that serve as a foundation for the cloud computing paradigm. The steady growth of service-oriented practices and the continued adoption of cloud computing across enterprises have resulted in the need to integrate out to the cloud. When doing so, we must look back at the evolution of integration solutions, which started with point-to-point solutions and matured into integration brokers and enterprise service buses over the years. We should take a lesson from history to ensure that this time around, when integrating to the cloud, we prevent undue proliferation of point-to-point solutions across the extended enterprise.

We must exercise the same due diligence and governance as is applied to services within the enterprise. There is an increased risk of point-to-point solutions proliferating because of the consumerization of IT and the ease with which such services are available to individual business units.

Thus, here are 5 steps that need to be taken to ensure a more systemic approach when integrating to cloud-based service providers:

  1. Extend your SOA strategy to the Cloud. Review your current SOA strategy and extend it to accommodate cloud-based as-a-service providers.
  2. Extend Governance around Cloud Services. Review your existing IT governance and SOA governance processes to accommodate the introduction and adoption of cloud-based as-a-service providers.
  3. Identify Cloud-based Integration Models. It is not one-size-fits-all; multiple integration models could apply to a cloud-based service provider depending upon the enterprise integration architecture. These integration models include a) point-to-point solutions, b) cloud to on-premise ESB and c) cloud-based connectors that adopt a service-centric approach to integrate cloud providers to enterprise applications and/or other cloud providers (model c is sketched after this list).
  4. Apply the right models to the right scenarios. Review the scenarios involved and apply the right models to the right scenarios.
  5. Sustain and evolve your services taxonomy. Provide enterprise-wide visibility to the taxonomy of services – both on-premise and those identified for integration with cloud-based service providers. Continuously evolve these services to integrate to a rationalized set of providers who cater to the integration needs of the enterprise in the cloud.
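To illustrate integration model (c) from step 3 above, here is a minimal, hypothetical sketch of a cloud-based connector: a service-centric adapter that maps a cloud provider’s API onto the enterprise’s canonical interface. All names, endpoints and payload fields are assumptions for illustration, not references to a real product.

```typescript
// Hypothetical sketch of a cloud-based connector (integration model c):
// a service-centric adapter between a cloud provider and the enterprise.
interface CustomerService {
  getCustomer(id: string): Promise<{ id: string; name: string }>;
}

// The connector fronts a cloud CRM provider behind the enterprise interface.
class CloudCrmConnector implements CustomerService {
  constructor(private baseUrl: string, private apiKey: string) {}

  async getCustomer(id: string): Promise<{ id: string; name: string }> {
    const res = await fetch(`${this.baseUrl}/customers/${id}`, {
      headers: { Authorization: `Bearer ${this.apiKey}` },
    });
    if (!res.ok) throw new Error(`Provider error: ${res.status}`);
    const body = await res.json();
    // Map the provider's payload onto the enterprise's canonical model.
    return { id: body.customer_id, name: body.full_name };
  }
}
```

The design point is that enterprise applications code against CustomerService; switching providers means writing another connector rather than re-plumbing a web of point-to-point integrations.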

The biggest challenge enterprises face in driving this systemic adoption of cloud-based services comes from within their business units. Multiple business units may unknowingly avail themselves of the same services from the same providers in different ways. Therefore, enterprises must ensure that such point-to-point integrations do not proliferate as they did during the era preceding integration brokers.

By adopting service-oriented principles, enterprises can keep history from repeating itself when they integrate to the cloud.

How about your enterprise? How are you going about doing this? What is your approach to integrating to cloud service providers?

A version of this post was originally published on HP’s Enterprise Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.


Filed under Cloud, Cloud/SOA

Secrets Behind the Rapid Growth of SOA

By E.G. Nadhan, HP

Service Oriented Architecture (SOA) has been around for more than a decade and has steadily matured over the years, with increasing levels of adoption. Cloud computing, a paradigm founded upon fundamental service-oriented principles, has fueled SOA’s adoption in recent years. ZDNet blogger Joe McKendrick calls out a survey by Companies and Markets in one of his blog posts – SOA market grew faster than expected.

Some of the statistics from this survey as referenced by McKendrick include:

  • SOA represents a total global market value of $5.518 billion, up from $3.987 billion in 2010 – a 38% increase.
  • The SOA market in North America is set to grow at a compound annual growth rate (CAGR) of 11.5% through 2014.

So, what are the secrets of the success that SOA seems to be enjoying? Over the past decade, I can recall a few skeptics who were not so sure about SOA’s adoption and growth. But I believe there are 5 “secrets” behind the success story of SOA that should put such skepticism to rest:

  1. Architecture. Service-oriented architectures have greatly facilitated a structured approach to enterprise architecture (EA) at large. Despite debates over the scope of EA and SOA, the fact remains that service orientation is an integral part of the foundational factors considered by the enterprise architect. If anything, it has also acted as a catalyst for giving more visibility to the need for a well-defined enterprise architecture to be in place for the current and desired states.
  2. Application. Service orientation has promoted standardized interfaces that have enabled multiple applications to continue to coexist in an integrated, cohesive manner (see the sketch after this list). Thanks to an SOA-based approach, integration mechanisms are no longer held hostage to proprietary formats and legacy platforms.
  3. Availability. Software vendors have taken the initiative to make their functionality available through services. Think about the number of times you have heard a software vendor suggest Web services as their de facto method for integrating with other systems. Single-click generation of a Web service is a very common feature across most of the software tools used for application development.
  4. Alignment. SOA has greatly facilitated and realized increased alignment on multiple fronts, including the following:
    • Business to IT. The definition of application and technology services is really driven by the business need in the form of business services.
    • Application to Infrastructure. SOA strategies for the enterprise have gone beyond the application layer to the infrastructure, resulting in greater alignment between the application being deployed and the supporting infrastructure. Infrastructure services are an integral part of the comprehensive services landscape for an enterprise.
    • Platforms and technology. Interfaces between applications are much less dependent on the underlying technologies or platforms, resulting in increased alignment between various platforms and technologies. Interoperability has been taken to new levels across the extended enterprise.
  5. Adoption. SOA has served as the cornerstone for new paradigms like cloud computing. Increased adoption of SOA has resulted in the evolution of multiple industry standards for SOA and has also led to standards for infrastructure services to be provisioned in the cloud. Standards do take time to evolve, but when they do, it is a tacit endorsement by the IT industry of the maturity of the underlying phenomenon – in this case, SOA.
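As a minimal illustration of the standardized interfaces described in point 2 above, the sketch below shows a common service interface with a legacy and a web-service implementation behind it; every name and endpoint is a hypothetical assumption.

```typescript
// Hypothetical sketch: consumers depend on a standardized interface,
// not on the provider's platform or proprietary format.
interface QuoteService {
  getQuote(symbol: string): Promise<number>;
}

// One implementation wraps a legacy system...
class LegacyQuoteService implements QuoteService {
  async getQuote(symbol: string): Promise<number> {
    // ...translation to and from the legacy proprietary format goes here...
    return 101.25; // stubbed result for illustration
  }
}

// ...another calls a modern web service.
class RestQuoteService implements QuoteService {
  async getQuote(symbol: string): Promise<number> {
    const res = await fetch(`https://quotes.example.com/v1/${symbol}`);
    return (await res.json()).price;
  }
}

// Consumers never see which implementation sits behind the interface.
async function report(svc: QuoteService): Promise<void> {
  console.log(await svc.getQuote("ACME"));
}
```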

Thus, the application of service-oriented principles across the enterprise has increased SOA’s adoption, spurred by the availability of readily exposed services across all architectural layers and resulting in increased alignment between business and IT.

What about you? What factors come to your mind as SOA success secrets? Is your SOA experience in alignment with the statistics from the report McKendrick referenced? I would be interested to know.

Reposted with permission from CIO Magazine.

HP Distinguished Technologist, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.


Filed under Cloud/SOA

SF Conference to Explore Architecture Trends

By The Open Group Conference Team

In addition to exploring the theme of “Enterprise Transformation,” speakers at The Open Group San Francisco conference in January will explore a number of other trends related to enterprise architecture and the profession, including trends in service-oriented architecture and business architecture.

The debate about the role of EA in the development of high-level business strategy is a long-running one. EA clearly contributes to business strategy, but does it formulate, plan or execute on business strategy? If the scope of EA remains narrowly defined, it could play a diminished role in business strategy and Enterprise Transformation going forward.

EA professionals will have the opportunity to discuss and debate these questions and hear from peers about their practical experiences in sessions including the following tracks:

  • Establishing Value Driven EA as the Enterprise Embarks on Transformation (EA & Enterprise Transformation Track) – Madhav Naidu, Lead Enterprise Architect, Ciena Corp., US; and Mark Temple, Chief Architect, Ciena Corp.
  • Building an Enterprise Architecture Practice Foundation for Enterprise Transformation Execution  (EA & Business Innovation Track) – Frank Chen, Senior Manager & Principal Enterprise Architect, Cognizant, US
  • Death of IT: Rise of the Machines (Business Innovation & Technological Disruption: The Challenges to EA Track) –  Mans Bhuller, Senior Director, Oracle Corporation, US
  • Business Architecture Profession and Case Studies (Business Architecture Track) – Mieke Mahakena, Capgemini; and Peter Haviland, Chief Architect/Head of Business Architecture, Ernst & Young
  • Constructing the Architecture of an Agile Enterprise Using the MSBI Method (Agile Enterprise Architecture Track) – Nick Malik, Senior Principal Enterprise Architect, Microsoft Corporation, US
  • There’s a SEA Change in Your Future: How Sustainable EA Enables Business Success in Times of Disruptive Change (Sustainable EA Track)  – Leo Laverdure & Alex Conn, Managing Partners, SBSA Partners LLC, US
  • The Realization of SOAs Using the SOA Reference Architecture (Tutorials) – Nikhil Kumar, President, Applied Technology Solutions, US
  • SOA Governance: Thinking Beyond Services (SOA Track) – Jed Maczuba, Senior Manager, Accenture, US

In addition, a number of conference tracks will explore issues and trends related to the enterprise architecture profession and role of enterprise architects within organizations.  Tracks addressing professional concerns include:

  • EA: Professionalization or Marketing Needed? (Professional Development Track)  – Peter Kuppen, Senior Manager, Deloitte Consulting, BV, Netherlands
  • Implementing Capabilities With an Architecture Practice (Setting up a Successful EA Practice Track) – Mike Jacobs, Director and Principal Architect, OptumInsight; and Joseph May, Director, Architecture Center of Excellence, OptumInsight
  • Gaining and Retaining Stakeholder Buy-In: The Key to a Successful EA Practice (Setting up a Successful EA Practice Track) – Russ Gibfried, Enterprise Architect, CareFusion Corporation, US
  • The Virtual Enterprise Architecture Team (Nature & Role of the Enterprise Architecture Track) – Nicholas Hill, Principal Enterprise Architect, Consulting Services, FSI, Infosys; and Musharal Mughal, Director of EA, Manulife Financial, Canada

Our Tutorials track will also provide practical guidance for attendees interested in learning more about how to implement architectures within organizations. Topics will include tutorials on subjects such as TOGAF®, Archimate®, Service Oriented Architectures, and architecture methods and techniques.

For more information on EA conference tracks, please visit the conference program on our website.


Filed under Cloud/SOA, Enterprise Architecture, Enterprise Transformation, Semantic Interoperability, Service Oriented Architecture

Finding the value in SOA

By Stephen Bennett, Oracle

Republished with permission from CIO Update from an article published on behalf of The Open Group.

Confronted with the age-old problems of agility and complexity, today’s CIOs are under more pressure than ever to improve the strategic value of IT to the business. At best, these challenges have increased costs, limited innovation and increased risk. At worst, they have reduced IT’s ability to respond to changing business needs in a timely fashion.

Yet, changes for business and IT are continuing to occur at an ever-increasing pace. To keep up, enterprises need to adopt an agile, flexible architecture style with a proven strategic approach to delivering IT to the business.

Over the last year, I have seen a resurgence of CIOs using Enterprise Architecture (EA) as a key tool to address these challenges. In the past, EA has experienced difficulties within the enterprise. It has been unfairly seen as primarily a documentation exercise and, when applied incorrectly, EA can, ironically, become a silo in and of itself. To make sure that EA has better success this time, CIOs must make their EA efforts more actionable.

Step back: SOA

Service oriented architecture (SOA) has been positioned as an architectural style specifically intended to reduce costs, increase agility and, most importantly, simplify the business and the interoperation of different parts of that business.

A key principle of SOA is the structuring of business capabilities into meaningful, granular services as opposed to opaque and siloed business functions. This makes it possible to quickly identify and reuse any existing realized functional capabilities, thus avoiding the duplication of similar capabilities across the organization. By standardizing the behavior and interoperation of these services, it’s possible to limit the impacts of change and to forecast the likely chain of impacts.
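For illustration only, here is a minimal sketch of the identify-and-reuse step that paragraph describes: a simple registry that teams consult before building a near-duplicate capability. The names and endpoints are assumptions, not part of any SOA product.

```typescript
// Hypothetical sketch: a registry of realized service capabilities that
// teams check before duplicating an existing one.
type ServiceEntry = { capability: string; endpoint: string; owner: string };

class ServiceRegistry {
  private entries: ServiceEntry[] = [];

  register(entry: ServiceEntry): void {
    this.entries.push(entry);
  }

  // Before building a new service, see whether the capability exists.
  find(capability: string): ServiceEntry | undefined {
    return this.entries.find((e) => e.capability === capability);
  }
}

const registry = new ServiceRegistry();
registry.register({
  capability: "customer-credit-check",
  endpoint: "https://services.example.com/credit",
  owner: "risk-team",
});

// A second team needing the same capability reuses it instead of
// building a near-duplicate.
console.log(registry.find("customer-credit-check")?.endpoint);
```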

Despite its popularity, relatively few enterprises have been able to measure and demonstrate the value of SOA. This is due primarily to the approach that enterprises have taken when adopting and applying SOA. In most cases, enterprises interpret SOA as simply another solution development approach. As a result, SOA has been relegated to, or wrongly positioned as, purely an integration technology rather than the strategic enabler that it can be.

Because of this, SOA must not be seen as a solution development approach that starts and ends once a solution is delivered. It must be seen as an on-going process that, when coupled with a strategic framework, can change and evolve with the business over time. Unfortunately, many enterprises adopt SOA without utilizing a strategic framework, causing a host of challenges for their business.

Just a few of the challenges I have seen include:

  • More complexity and moving parts
  • Increased costs
  • Projects taking longer than before
  • Solutions more fragile than ever
  • Little or no agility
  • Difficulty identifying and discovering services
  • Exponentially growing governance challenges
  • Limited service re-use
  • Duplication of effort leading to service sprawl
  • Multiple siloed, technology-focused SOAs
  • Funding for service oriented projects being cut

It’s no wonder that SOA has a bad reputation.

To address these challenges, enterprises utilizing or considering adopting SOA must align it with an EA framework that elevates the importance of the needs of the enterprise rather than only considering the requirements of individual projects.

Step forward: TOGAF® 9

Now used by 80 percent of the Fortune Global 50, TOGAF®, an Open Group standard, is an architecture framework that contains a detailed method and set of supporting resources for developing an EA. As a comprehensive, open method for EA, TOGAF 9 complements, and can be used in conjunction with, other frameworks that are more focused on specific aspects of architecture, such as MDA and ITIL.

The Open Group’s new guide, Using TOGAF to Define and Govern Service-Oriented Architectures, aims to facilitate common understanding of the development of SOA while offering a phased approach to maximizing its business impact based on the popular TOGAF methodology. Let’s take a look at the main takeaways from the guide:

Organization readiness - An enterprise first needs to adopt the principle of service-orientation. However, successful SOA depends on the readiness of the enterprise to become service-oriented. To get started with SOA, the guide recommends conducting a maturity assessment. Such an assessment is available from The Open Group and enables a practitioner to assess an organization’s SOA maturity level and define a roadmap for incremental adoption to maximize business benefits at each stage along the way.

Scope - The size and complexity of an enterprise affects the way its architecture develops. Where there are many different organizational and business models, it is not practical to integrate them within a single architecture. It is therefore generally not appropriate to develop a single, integrated SOA for a large and complex enterprise.

TOGAF defines enterprise as any collection of organizations that has a common set of goals. For example, an enterprise could be a government agency, a whole corporation, a division of a corporation, a single department, or a chain of geographically distant organizations linked together by common ownership.

The guide highlights an approach for enterprise architects to identify the business areas where SOA will be of greatest benefit and make a significant impact, so that those areas can be prioritized. This approach will help organizations avoid applying SOA to the wrong situations, maximizing their investment and overall business impact.

Communication, communication, communication - Aspects of TOGAF 9 were extended and enhanced to cover specific service-oriented concepts and terminology, such as service contracts. Service contracts formalize the functional and non-functional characteristics of a business service and how it interacts with other business services. This enables a business vocabulary to be derived that allows IT to converse with the business in terms of business processes and business services, while abstracting away the complexity of the underlying technical services.
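As a rough illustration of what such a service contract might capture, and with every field and value below being a hypothetical assumption rather than anything prescribed by the guide, consider:

```typescript
// Hypothetical sketch: a service contract that records both functional
// and non-functional characteristics of a business service in terms
// the business can read.
interface ServiceContract {
  service: string;       // business service name
  operation: string;     // functional capability offered
  consumers: string[];   // business processes that depend on it
  nonFunctional: {
    maxResponseMs: number;   // response-time commitment
    availabilityPct: number; // e.g. 99.9
    dataClassification: "public" | "internal" | "confidential";
  };
}

const checkEligibility: ServiceContract = {
  service: "ClaimsEligibility",
  operation: "checkEligibility",
  consumers: ["claims-intake", "member-portal"],
  nonFunctional: {
    maxResponseMs: 500,
    availabilityPct: 99.9,
    dataClassification: "confidential",
  },
};
```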

Governance - The identification of services and service portfolios is a key task for SOA. The questions of what services and service portfolios the enterprise will have, and how they will be managed, must be addressed with an enterprise-level view.

Just because you have identified a number of services does not automatically mean they will add value to the enterprise or that they should be realized (at least not initially). Governance plays a key role here, and the guide recommends the establishment of SOA governance and the creation of a linkage to both IT and EA governance in the enterprise.

The Open Group has a wealth of information available in this area, specifically an SOA governance framework that provides context and definitions that enable organizations to understand, customize, and deploy SOA governance.

The relationship between EA and SOA is a powerful and synergistic one. They are key enablers for one another, making EA actionable while making the wider business benefits of SOA obtainable.

SOA is certainly not the only architectural approach that your enterprise will require. But it can smooth the alignment and adoption of other architecture styles (e.g., business process management, event-driven architecture) into an EA framework. So rather than reinvent the wheel, organizations should consider using a well-established framework such as TOGAF to elevate and extend the value of SOA.

The Open Group’s new guide is a must-read for any enterprise architect currently using TOGAF, but remember that it needs to be customized and extended to your enterprise’s unique situation. Now, if only The Open Group had a guide on using TOGAF to define and govern Cloud Computing!

Stephen Bennett is a senior enterprise architect at Oracle, an author, and a 25-year technologist focused on providing thought leadership, best practices, and architecture guidance around SOA and Cloud Computing. He has co-chaired a number of Work Groups within The Open Group around SOA Governance and TOGAF/SOA.


Filed under Service Oriented Architecture

New Open Group Guide Shows Enterprise Architects How to Maximize SOA Business Value with TOGAF®

By Awel Dico, Bank of Montreal

Service Oriented Architecture (SOA) has promised many benefits for both IT and business. As a result, it has been widely adopted as an architectural style among both private business and government enterprises. Despite SOA’s popularity, however, relatively few of these enterprises are able to measure and demonstrate the value of SOA to their organization. What is the problem and why is it so hard to demonstrate that SOA can deliver the much needed business value it promises? In this post I will point out some root causes for this problem and highlight how The Open Group’s new guide, titled “Using TOGAF® to Define and Govern Service-Oriented Architectures,” can help organizations maximize their return on investment with SOA.

The main problem is rooted in the way SOA adoption is approached. In most cases, organizations approach SOA by limiting its scope to individual solution implementation projects – using it purely as a tool to group software functions into services described by some standard interface. As a result, each SOA implementation is disconnected and devoid of the larger business problem context. This creates disconnected, technology-focused SOA silos that are difficult to manage and govern. Reuse of services across business lines, arguably one of the main advantages of SOA, in turn becomes very limited, if not impossible, without increased integration cost.

SOA calls for a standards-based service infrastructure that requires a big investment. I have seen many IT organizations struggle to establish a common SOA infrastructure and fail to do so. The main reason for this failure is, again, the way SOA is approached in those organizations; limiting SOA’s scope to solution projects makes it hard for individual projects to justify the investment in service infrastructure. As a result, they fall back to tactical implementations that cannot be reused by other projects down the road.

The other culprit is that many organizations think SOA can be applied to all situations, failing to realize that there are cases where SOA is not a good approach at all. An SOA approach is not cheap, and trying to fit it to every situation results in increased cost without any ROI.

Fortunately, there’s a solution to this problem. The Open Group SOA Work Group recently developed a short guide on how to use TOGAF® to define and govern SOA. The guide’s main goal is to enable enterprises to deliver the expected business value from their SOA initiatives. What makes TOGAF® great at helping organizations approach SOA is the fact that it’s an architecture-style-agnostic and flexible framework that can be customized to various enterprise needs, architectural scopes and styles. In a nutshell, the guide recommends incorporating the SOA style into the EA framework through customization and enhancement of TOGAF® 9.

How does this solve the problem I pointed out above? Well, here’s how:

SOA, as an architectural style, becomes recognized as part of the organization’s overall Enterprise Architecture instead of being linked only to individual projects. The guide advises the identification of SOA principles and the establishment of supporting architectural capabilities in the Preliminary Phase of TOGAF®. It also recommends establishing SOA governance and creating a linkage to both IT and EA governance in the enterprise. These architecture capabilities take the heavy lifting off the solution projects and ensure that any SOA initiative delivers business value to the enterprise. This means SOA projects in the enterprise share a larger enterprise context, and each project adds value to the whole enterprise business in an incremental, reusable fashion.

When TOGAF® is applied at the strategic level, SOA concepts can be incorporated into the strategy by identifying the business areas or segments in the enterprise that benefit from an SOA approach. Likewise, the strategy could point out the areas in which SOA is not adding any value to the business. This allows users to identify the expected key metrics from the start and focus their SOA investment on high-value projects. This also makes sure that each smaller SOA project is initiated in the context of larger business objectives and, as such, can add measurable business value.

In summary, this short and concise guide links all the moving parts (such as SOA principles, SOA governance, Reference Architectures, SOA maturity, SOA Meta-model, etc.) and I think it is a must-read for any enterprise architect using TOGAF® as their organization’s EA framework and SOA as an architectural style. If you are wondering how these architectural elements fit together, I recommend you look at the guide and customize or extend its key concepts to your own situation. If you read it carefully, you will understand why SOA projects must have larger enterprise business context and how this can be done by customizing TOGAF® to define and govern your own SOA initiatives.

To download the guide for free, please visit The Open Group’s online bookstore.

Awel Dico, Ph.D., is Enterprise Architect at the Bank of Montreal. He is currently working on enterprise integration architecture and establishing best-practice styles and patterns for bank-wide services integration. In the past he has consulted on various projects, worked with many teams across the organization, and contributed to many architecture initiatives, including: leading mid-tier service infrastructure architecture; developing enterprise SOA principles, guidelines and standards; developing the SOA service compliance process; developing and applying architectural patterns; researching technology and industry trends; and contributing to the development of the bank’s Enterprise Reference Architecture blueprint. In addition, Dr. Dico currently co-chairs The Open Group SOA Work Group and The Open Group SOA/TOGAF Practical Guide Project. He also co-supervises Ph.D. candidates in the Software Engineering track of the Computer Science program at Addis Ababa University, and is a founder of a community college helping students in rural areas of Ethiopia.


Filed under Service Oriented Architecture

SOA is not differentiating, Cloud Computing is

By Mark Skilton, Capgemini

Warning: I confess at the start of this blog that I chose a deliberately evocative title to try to get your attention, and I guess I did if you are reading this now. Having written a couple of blogs to date whose finely honed words on current lessons learned and the future of technology created little reaction, I thought I’d try the more direct approach and head straight for a pressing matter of architectural and strategic concern.

Service Oriented Architecture (SOA) is now commonplace across all software development lifecycles and has entered the standard language of information technology design. We hear “service oriented” and “service enabled” handed out as common terms of reference. The point is that the processes and practices of SOA are industrial and not differentiating, as everyone is applying them, either from a design standpoint or as a business systems service approach. They enable standardization and abstraction of services in the design and build stages to align with key business and technology strategy goals, and enable technology to be developed or utilized that meets specific technical or business service requirements.

SOA practices are prerequisites to good design practice. SOA is a foundation of Service Management ITIL processes and is found in diverse software engineering methods, from Business Process Management Systems (BPMS) to rapid Model Driven Architecture design techniques that compose web-enabled services. SOA is seen as a key method on the journey to industrialization, supporting consolidation and rationalization as well as lean engineering techniques to optimize the business and systems landscape. SOA provides good development practice in defining user requirements that deliver what the user wants, and in translating these into an understanding of how best to build agile, decoupled and flexible architectural solutions.

My point is that these methods are now mainstream, and merely putting SOA into your proposal or listing it as a stated capability is no longer going to be a “deal clincher” or a key “business differentiator”. The counterview I hear from SOA practitioners is that SOA is not just the standardized service practices but also the means by which differentiating services can be identified. But there’s the rub. If SOA treats every requirement or design as a service problem, where is the difference?

A possible answer lies in how SOA will be used. Today and in the future, the business differentiator will be the way the SOA method is used. But not all SOA methods are equal, so what will it take to turn SOA method differentiation into business benefit?

Enter Cloud Computing, with its origins in utility computing, ubiquitous web services and the Internet. The definition of what Cloud Computing is, much like in the early days of Service Orientation, is still evolving, as is the understanding of the boundary and types of services it encompasses. But the big disruptive step change has been the new business model that Cloud Computing has introduced.

Cloud Computing has introduced automatic provisioning, self-service, and automatic load balancing and scaling of technology resources. Building on virtualization principles, it has extended into on-demand metering and billing consumption models, large-scale computing resource data centers, and large-scale distributed businesses on the web using the power of the Internet to reach and run new business models. I can hear industry observers say this is just a consequence of the timely convergence of pervasive technology network standards, rapidly falling compute and storage costs, and the massive “hockey stick” growth of bandwidth, smart devices and wide-scale adoption of web-based services.

But this is a step change, not a simple realization that it’s just “another technology phase”.

Put another way: It has brought back-office computing resources and on-demand Software as a Service models into a dynamic new business model that changes the way business and IT work. It has “merged” physical and logical services into a new on-demand marketplace model, where hitherto it was “good practice” to design consumer and provider services separately. All that’s changed.

But does SOA fully realize these aspects of a Cloud Computing Architecture? Answer these three simple questions:

  • Do the logical service contracts define how multi-tenant environments need to work to support many concurrent service users?
  • Does SOA allow automatic balancing and scaling to be considered when the initial set of declarative conditions in the service contract doesn’t “fit” new operating conditions that require scaling up or down?
  • Does SOA recognize the wider marketplace and ecosystem dynamics that may result in evolving consumer/producer patterns that are dynamic rather than static, driving new sourcing behaviors and usage patterns that may involve using services through a portal with no contract?
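By way of illustration only, the sketch below shows how a classic service contract might be extended to answer those three questions; every field and value is a hypothetical assumption, not an established standard.

```typescript
// Hypothetical sketch: a classic SOA service contract extended with the
// cloud-specific concerns raised in the three questions above.
interface CloudServiceContract {
  service: string;
  // Q1: multi-tenancy - how many concurrent tenants, and how isolated.
  tenancy: { model: "shared" | "isolated"; maxConcurrentTenants: number };
  // Q2: elasticity - what happens when declared conditions no longer fit.
  scaling: { minInstances: number; maxInstances: number; scaleOnCpuPct: number };
  // Q3: marketplace dynamics - whether ad-hoc, contract-less use is allowed.
  access: "contracted" | "portal-self-service";
}

const orderCapture: CloudServiceContract = {
  service: "order-capture",
  tenancy: { model: "shared", maxConcurrentTenants: 200 },
  scaling: { minInstances: 2, maxInstances: 20, scaleOnCpuPct: 70 },
  access: "portal-self-service",
};
```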

For sure, ecosystem principles are axiomatic in that they will drive standards for containers, protocols and semantics, which SOA standards are well placed to adopt as boundary conditions for service contracts in a service portfolio. But my illustrations here are meant to broaden the debate about how to position SOA as a differentiator when it meets a “new kid on the block” like Cloud, which is rapidly morphing into new models “as we speak”, extending into social networks, mobile services and location-aware integration.

My real intention is to raise awareness of and interest in these subjects and the activities that The Open Group is engaged in to address such topics. I sincerely hope you will follow these up as further reading and investigation with The Open Group; and of course, do feel free to comment and contact me.

Cloud Computing and SOA are key topics of discussion at The Open Group Conference, London, May 9-13, which is underway. 

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing that achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include the development of new Cloud Computing model standards and best practices on the subject of Cloud Computing’s impact on outsourcing and off-shoring models; he also contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring, published through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.


Filed under Cloud/SOA