Tag Archives: standard

Are You Ready for the Convergence of New, Disruptive Technologies?

By Chris Harding, The Open Group

The convergence of technical phenomena such as cloud, mobile and social computing, big data analysis, and the Internet of things that is being addressed by The Open Group’s Open Platform 3.0 Forum™ will transform the way that you use information technology. Are you ready? Take our survey at https://www.surveymonkey.com/s/convergent_tech

What the Technology Can Do

Mobile and social computing are leading the way. Recently, the launch of new iPhone models and the announcement of the Twitter stock flotation were headline news, reflecting the importance that these technologies now have for business. For example, banks use mobile text messaging to alert customers to security issues. Retailers use social media to understand their markets and communicate with potential customers.

Other technologies are close behind. In Formula One motor racing, sensors monitor vehicle operation and feed real-time information to the support teams, leading to improved design, greater safety, and lower costs. This approach could soon become routine for cars on the public roads too.

Many exciting new applications are being discussed. Stores could use sensors to capture customer behavior while browsing the goods on display, and give them targeted information and advice via their mobile devices. Medical professionals could monitor hospital patients and receive alerts of significant changes. Researchers could use shared cloud services and big data analysis to detect patterns in this information, and develop treatments, including for complex or uncommon conditions that are hard to understand using traditional methods. The potential is massive, and we are only just beginning to see it.

What the Analysts Say

Market analysts agree on the importance of the new technologies.

Gartner uses the term “Nexus of Forces” to describe the convergence and mutual reinforcement of social, mobility, cloud and information patterns that drive new business scenarios, and says that, although these forces are innovative and disruptive on their own, together they are revolutionizing business and society, disrupting old business models and creating new leaders.

IDC predicts that a combination of social, cloud, mobile, and big data technologies will drive around 90% of all the growth in the IT market through 2020, and uses the term “third platform” to describe this combination.

The Open Group will identify the standards that will make Gartner’s Nexus of Forces and IDC’s Third Platform commercial realities. This will be the definition of Open Platform 3.0.

Disrupting Enterprise Use of IT

The new technologies are bringing new opportunities, but their use raises problems. In particular, end users find that working through IT departments in the traditional way is not satisfactory. The delays are too great for rapid, innovative development. They want to use the new technologies directly – “hands on”.

Increasingly, business departments are buying technology directly, bypassing their IT departments. Traditionally, the bulk of an enterprise’s IT budget was spent by the IT department and went on maintenance. A significant proportion is now spent by the business departments, on new technology.

Business and IT are not different worlds any more. Business analysts are increasingly using technical tools, and even doing application development, using exposed APIs. For example, marketing folk do search engine optimization, use business information tools, and analyze traffic on Twitter. Such operations require less IT skill than formerly because the new systems are easy to use. Also, users are becoming more IT-savvy. This is a revolution in business use of IT, comparable to the use of spreadsheets in the 1980s.

Also, business departments are hiring traditional application developers, who would once have only been found in IT departments.

Are You Ready?

These disruptive new technologies are changing, not just the IT architecture, but also the business architecture of the enterprises that use them. This is a sea change that affects us all.

The introduction of the PC had a dramatic impact on the way enterprises used IT, taking much of the technology out of the computer room and into the office. The new revolution is taking it out of the office and into the pocket. Cell phones and tablets give you windows into the world, not just your personal collection of applications and information. Through those windows you can see your friends, your best route home, what your customers like, how well your production processes are working, or whatever else you need to conduct your life and business.

This will change the way you work. You must learn how to tailor and combine the information and services available to you, to meet your personal objectives. If your role is to provide or help to provide IT services, you must learn how to support users working in this new way.

To negotiate this change successfully, and take advantage of it, each of us must understand what is happening, and how ready we are to deal with it.

The Open Group is conducting a survey of people’s reactions to the convergence of Cloud and other new technologies. Take the survey, to input your state of readiness, and get early sight of the results, to see how you compare with everyone else.

To take the survey, visit https://www.surveymonkey.com/s/convergent_tech

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.


Filed under Cloud, Future Technologies, Open Platform 3.0, Platform 3.0

Why is Cloud Adoption Taking so Long?

By Chris Harding, The Open Group

At the end of last year, Gartner predicted that cloud computing would become an integral part of IT in 2013 (http://www.gartner.com/DisplayDocument?doc_cd=230929). This looks a pretty safe bet. The real question is, why is it taking so long?

Cloud Computing

Cloud computing is a simple concept: IT resources are made available as a service, over a communications network, within an environment that enables them to be used. It is used within enterprises to enable IT departments to meet users’ needs more effectively, and by external providers to deliver better IT services to their enterprise customers.

There are established vendors of products to fit both of these scenarios. The potential business benefits are well documented. There are examples of real businesses gaining those benefits, such as Netflix as a public cloud user (see http://www.zdnet.com/the-biggest-cloud-app-of-all-netflix-7000014298/ ), and Unilever and Lufthansa as implementers of private cloud (see http://www.computerweekly.com/news/2240114043/Unilever-and-Lufthansa-Systems-deploy-Azure-Private-cloud ).

Slow Pace of Adoption

Yet we are still talking of cloud computing becoming an integral part of IT. In the 2012 Open Group Cloud ROI survey, less than half of the respondents’ organizations were using cloud computing, although most of the rest were investigating its use. (See http://www.opengroup.org/sites/default/files/contentimages/Documents/cloud_roi_formal_report_12_19_12-1.pdf ). Clearly, cloud computing is not being used for enterprise IT as a matter of routine.

Cloud computing is now at least seven years old. Amazon’s “Elastic Compute Cloud” was launched in August 2006, and there are services that we now regard as cloud computing, though they may not have been called that, dating from before then. Other IT revolutions – personal computers, for example – have reached the point of being an integral part of IT in half the time. Why has it taken Cloud so long?

The Reasons

One reason is that using Cloud requires a high level of trust. You can lock your PC in your office, but you cannot physically secure your cloud resources. You must trust the cloud service provider. Such trust takes time to earn.

Another reason is that, although it is a simple concept, cloud computing is described in a rather complex way. The widely-accepted NIST definition (see http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf ) has three service models and four deployment models, giving a total of twelve distinct delivery combinations. Each combination has different business drivers, and the three service models are based on very different technical capabilities. Real products, of course, often do not exactly correspond to the definition, and their vendors describe them in product-specific terms. This complexity often leads to misunderstanding and confusion.
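To see where those twelve delivery combinations come from, a minimal enumeration is enough. The sketch below simply crosses the three NIST service models with the four deployment models; the labels come from the NIST definition, and the code is only illustrative arithmetic, not a cloud taxonomy tool.

```python
from itertools import product

# NIST SP 800-145 service models and deployment models
service_models = ["SaaS", "PaaS", "IaaS"]
deployment_models = ["private", "community", "public", "hybrid"]

# Every delivery combination a customer might have to evaluate:
# 3 service models x 4 deployment models = 12 combinations.
combinations = list(product(service_models, deployment_models))

for service, deployment in combinations:
    print(f"{service} on a {deployment} cloud")

print(f"Total distinct delivery combinations: {len(combinations)}")  # 12
```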

A third reason is that you cannot “mix and match” cloud services from different providers. The market is consolidating, with a few key players emerging as dominant at the infrastructure and platform levels. Each of them has its own proprietary interfaces. There are no real vendor-neutral standards. A recent Information Week article on Netflix (http://www.informationweek.co.uk/cloud-computing/platform/how-netflix-is-ruining-cloud-computing/240151650 ) describes some of the consequences. Customers are beginning to talk of “vendor lock-in” in a way that we haven’t seen since the days of mainframes.

The Portability and Interoperability Guide

The Open Group Cloud Computing Portability and Interoperability Guide addresses this last problem, by providing recommendations to customers on how best to achieve portability and interoperability when working with current cloud products and services. It also makes recommendations to suppliers and standards bodies on how standards and best practice should evolve to enable greater portability and interoperability in the future.

The Guide tackles the complexity of its subject by defining a simple Distributed Computing Reference Model. This model shows how cloud services fit into the mix of products and services used by enterprises in distributed computing solutions today. It identifies the major components of cloud-enabled solutions, and describes their portability and interoperability interfaces.

Platform 3.0

Cloud is not the only new game in town. Enterprises are looking at mobile computing, social computing, big data, sensors, and controls as new technologies that can transform their businesses. Some of these – mobile and social computing, for example – have caught on faster than Cloud.

Portability and interoperability are major concerns for these technologies too. There is a need for a standard platform to enable enterprises to use all of the new technologies, individually and in combination, and “mix and match” different products. This is the vision of the Platform 3.0 Forum, recently formed by The Open Group. The distributed computing reference model is an important input to this work.

The State of the Cloud

It is now at least becoming routine to consider cloud computing when architecting a new IT solution. The chances of it being selected, however, appear to be less than fifty-fifty, in spite of its benefits. The reasons include those mentioned above: lack of trust, complexity, and potential lock-in.

The Guide removes some of the confusion caused by the complexity, and helps enterprises assess their exposure to lock-in, and take what measures they can to prevent it.

The growth of cloud computing is starting to be constrained by lack of standards to enable an open market with free competition. The Guide contains recommendations to help the industry and standards bodies produce the standards that are needed.

Let’s all hope that the standards do appear soon. Cloud is, quite simply, a good idea. It is an important technology paradigm that has the potential to transform businesses, to make commerce and industry more productive, and to benefit society as a whole, just as personal computing did. Its adoption really should not be taking this long.

The Open Group Cloud Computing Portability and Interoperability Guide is available from The Open Group bookstore at https://www2.opengroup.org/ogsys/catalog/G135

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.


Filed under Platform 3.0

Flexibility, Agility and Open Standards

By Jose M. Sanchez Knaack, IBM

Flexibility and agility are terms used almost interchangeably these days as attributes of IT architectures designed to cope with rapidly changing business requirements. Did you ever wonder if they are actually the same? Don’t you have the feeling that these terms remain abstract and without a concrete link to the design of an IT architecture?

This post seeks to provide clear definitions of both flexibility and agility, and to explain how both relate to the design of IT architectures that exploit open standards. A ‘real-life’ example will help to make these concepts concrete and relevant to the Enterprise Architect’s daily job.

First, here is some context on why flexibility and agility are increasingly important for businesses. Today, the average smartphone has more computing power than the original Apollo mission to the moon. We live in times of exponential change; the next technological revolution always seems to be just around the corner, and it is safe to state that the trend will continue, as nicely visualized in this infographic by TIME Magazine.

The average lifetime of a company in the S&P 500 has fallen by 80 percent since 1937. In other words, companies need to adapt fast to capitalize on the business opportunities created by new technologies, or risk losing their leadership position.

Thus, flexibility and agility have become ever present business goals that need to be supported by the underlying IT architecture. But, what is the precise meaning of these two terms? The online Merriam-Webster dictionary offers the following definitions:

Flexible: characterized by a ready capability to adapt to new, different, or changing requirements.

Agile: marked by ready ability to move with quick easy grace.

To understand how these terms relate to IT architecture, let us explore an example based on an Enterprise Service Bus (ESB) scenario.

An ESB can be seen as the foundation for a flexible IT architecture allowing companies to integrate applications (processes) written in different programming languages and running on different platforms within and outside the corporate firewall.

ESB products are normally equipped with a set of pre-built adapters that allow integrating 70-80 percent of applications ‘out-of-the-box’, without additional programming efforts. For the remaining 20-30 percent of integration requirements, it is possible to develop custom adapters so that any application can be integrated with any other if required.
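To make the adapter pattern concrete, here is a minimal Python sketch of the idea described above – a common adapter contract, a pre-built adapter for a standard protocol, and a custom-built one for a legacy application. All class and method names are hypothetical illustrations; no particular ESB product’s API is implied.

```python
from abc import ABC, abstractmethod


class Adapter(ABC):
    """Common contract the bus expects every adapter to fulfil (hypothetical)."""

    @abstractmethod
    def send(self, message: dict) -> None:
        ...


class HttpJsonAdapter(Adapter):
    """Stands in for a pre-built adapter covering a standard, open protocol."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def send(self, message: dict) -> None:
        # A real adapter would POST JSON to self.endpoint; here we just log it.
        print(f"POST {self.endpoint}: {message}")


class LegacyManufacturingAdapter(Adapter):
    """Custom-built adapter translating messages into a proprietary flat-file format."""

    def send(self, message: dict) -> None:
        record = "|".join(f"{k}={v}" for k, v in message.items())
        print(f"writing legacy record: {record}")


def route(adapter: Adapter, message: dict) -> None:
    """The bus only ever talks to the Adapter interface, never to the applications."""
    adapter.send(message)


route(HttpJsonAdapter("https://partner.example/orders"), {"order": 42})
route(LegacyManufacturingAdapter(), {"order": 42})
```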

In other words, an ESB covers requirements regarding integration flexibility, that is, it can cope with changing requirements in terms of integrating additional applications via adapters, ‘out-of-the-box’ or custom built. How does this integration flexibility correlate to integration agility?

Let’s think of a scenario where the IT team has been requested to integrate an old manufacturing application with a new business partner. The integration needs to be ready within one month; otherwise the targeted business opportunity will be lost.

The picture below shows the underlying IT architecture for this integration scenario.

[Figure: IT architecture for the ESB integration scenario]

Although the ESB is able to integrate the old manufacturing application, it requires an adapter to be custom developed, since the application does not support any of the communication protocols covered by the pre-built adapters. Custom developing, testing and deploying an adapter in a corporate environment is likely to take longer than a month, and the business opportunity will be lost because the IT architecture was not agile enough.

This is the subtle difference between flexible and agile.

Notice that if the manufacturing application had been able to communicate via open standards, the corresponding pre-built adapter would have significantly shortened the time required to integrate it. Applications that do not support open standards still exist in corporate IT landscapes, as the above scenario illustrates. Hence the importance of incorporating open standards when roadmapping your IT architecture.

The key takeaway is that your architecture principles need to favor information technology built on open standards, and for that, you can leverage The Open Group Architecture Principle 20 on Interoperability.

Name: Interoperability
Statement: Software and hardware should conform to defined standards that promote interoperability for data, applications, and technology.

In summary, the accelerating pace of change requires corporate IT architectures to support the business goals of flexibility and agility. Establishing architecture principles that favor open standards as part of your architecture governance framework is one proven approach (although not the only one) to road map your IT architecture in the pursuit of resiliency.

Jose M. Sanchez Knaack is a Senior Manager with IBM Global Business Services in Switzerland. Mr. Sanchez Knaack’s professional background covers business-aligned IT architecture strategy and complex system integration in global technology-enabled transformation initiatives.


Filed under Enterprise Architecture

Why Business Needs Platform 3.0

By Chris Harding, The Open Group

The Internet gives businesses access to ever-larger markets, but it also brings more competition. To prosper, they must deliver outstanding products and services. Often, this means processing the ever-greater, and increasingly complex, data that the Internet makes available. The question they now face is, how to do this without spending all their time and effort on information technology.

Web Business Success

The success stories of giants such as Amazon are well-publicized, but there are other, less well-known companies that have profited from the Web in all sorts of ways. Here’s an example. In 2000 an English illustrator called Jacquie Lawson tried creating greetings cards on the Internet. People liked what she did, and she started an e-business whose website is now ranked by Alexa as number 2712 in the world, and #1879 in the USA. This is based on website traffic and is comparable, to take a company that may be better known, with toyota.com, which ranks slightly higher in the USA (#1314) but somewhat lower globally (#4838).

A company with a good product can grow fast. This also means, though, that a company with a better product, or even just better marketing, can eclipse it just as quickly. Social networking site Myspace was once the most visited site in the US. Now it is ranked by Alexa as #196, way behind Facebook, which is #2.

So who ranks as #1? You guessed it – Google. Which brings us to the ability to process large amounts of data, where Google excels.

The Data Explosion

The World-Wide Web probably contains over 13 billion pages, yet you can often find the information that you want in seconds. This is made possible by technology that indexes this vast amount of data – measured in petabytes (millions of gigabytes) – and responds to users’ queries.
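The core mechanism behind that responsiveness is an inverted index: a map from each term to the documents containing it, built once and then consulted at query time. The toy sketch below (ordinary Python, nothing like production search infrastructure) shows the principle at a scale of three “pages” rather than petabytes.

```python
from collections import defaultdict

pages = {
    "page1": "greetings cards designed and sent online",
    "page2": "cloud computing delivers IT resources as a service",
    "page3": "online greetings and cloud services for business",
}

# Build the inverted index: term -> set of page ids containing that term.
index = defaultdict(set)
for page_id, text in pages.items():
    for term in text.lower().split():
        index[term].add(page_id)

def search(query: str) -> set:
    """Return pages containing every term in the query (simple AND semantics)."""
    terms = query.lower().split()
    results = index[terms[0]].copy() if terms else set()
    for term in terms[1:]:
        results &= index[term]
    return results

print(search("greetings online"))  # pages 1 and 3 contain both terms
```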

The data on the World-Wide Web originally came mostly from people, typing it in by hand. In the future, we will often use data that is generated by sensors in inanimate objects. Automobiles, for example, can generate data that can be used to optimize their performance or assess the need for maintenance or repair.

The world population is measured in billions. It is estimated that the Internet of Things, in which data is collected from objects, could enable us to track 100 trillion objects in real time – ten thousand times as many things as there are people, tirelessly pumping out information. The amount of available data of potential value to businesses is set to explode yet again.

A New Business Generation

It’s not just the amount of data to be processed that is changing. We are also seeing changes in the way data is used, the way it is processed, and the way it is accessed. Following The Open Group conference in January, I wrote about the convergence of social, Cloud, and mobile computing with Big Data. These are the new technical trends that are taking us into the next generation of business applications.

We don’t yet know what all those applications will be – who in the 1990s would have predicted greetings cards as a Web application? – but there are some exciting ideas. They range from using social media to produce market forecasts to alerting hospital doctors via tablets and cellphones when monitors detect patient emergencies. All this, and more, is possible with technology that we have now, if we can use it.

The Problem

But there is a problem. Although there is technology that enables businesses to use social, Cloud, and mobile computing, and to analyze and process massive amounts of data of different kinds, it is not necessarily easy to use. A plethora of products is emerging, with different interfaces, and with no ability to work with each other.  This is fine for geeks who love to play with new toys, but not so good for someone who wants to realize a new business idea and make money.

The new generation of business applications cannot be built on a mish-mash of unstable products, each requiring a different kind of specialist expertise. It needs a solid platform, generally understood by enterprise architects and software engineers, who can translate the business ideas into technical solutions.

The New Platform

Former VMware CEO and current Pivotal Initiative leader Paul Maritz describes the situation very well in his recent blog on GigaOM. He characterizes the new breed of enterprises – those that give customers what they want, when they want it and where they want it, by exploiting the opportunities provided by new technologies – as “consumer grade.” Paul says that, “Addressing these opportunities will require new underpinnings; a new platform, if you like. At the core of this platform, which needs to be Cloud-independent to prevent lock-in, will be new approaches to handling big and fast (real-time) data.”

The Open Group has announced its new Platform 3.0 Forum to help the industry define a standard platform to meet this need. As The Open Group CTO Dave Lounsbury says in his blog, the new Forum will advance The Open Group vision of Boundaryless Information Flow™ by helping enterprises to take advantage of these convergent technologies. This will be accomplished by identifying a set of new platform capabilities, and architecting and standardizing an IT platform by which enterprises can reap the business benefits of Platform 3.0.

Business Focus

A business set up to design greetings cards should not spend its time designing communications networks and server farms. It cannot afford to spend time on such things; while it was distracted by the technology, someone else would focus on the core business and take its market.

The Web provided a platform that businesses of its generation could build on to do what they do best without being overly distracted by the technology. Platform 3.0 will do this for the new generation of businesses.

Help It Happen!

To find out more about the Platform 3.0 Forum, and take part in its formation, watch out for the Platform 3.0 web meetings that will be announced by e-mail and Twitter, and on our home page.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Filed under Platform 3.0

Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when the data shows it is needed, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group initiatives, Quantum Lifecycle Management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the big-data processing needed to achieve this aim.
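The contrast between the two maintenance policies is easy to show with numbers. The sketch below uses entirely made-up sensor readings and thresholds (it is not from the QLM work) to compare a fixed overhaul interval with a data-driven trigger that reacts to the engine’s actual condition.

```python
# Hypothetical daily vibration readings for one engine (arbitrary units).
readings = [0.8, 0.9, 0.9, 1.0, 1.1, 1.0, 2.4, 2.6, 2.9, 3.1]

FIXED_INTERVAL_DAYS = 30      # scheduled overhaul, frequent enough "for most conditions"
VIBRATION_THRESHOLD = 2.0     # condition-based trigger derived from operational data

def scheduled_overhaul_due(day: int) -> bool:
    """Overhaul on a fixed calendar schedule, regardless of condition."""
    return day > 0 and day % FIXED_INTERVAL_DAYS == 0

def condition_overhaul_due(reading: float) -> bool:
    """Overhaul as soon as the monitored condition crosses its threshold."""
    return reading > VIBRATION_THRESHOLD

for day, reading in enumerate(readings, start=1):
    if condition_overhaul_due(reading):
        print(f"day {day}: condition-based policy flags an overhaul (reading {reading})")
        break
else:
    print("no condition-based overhaul needed in this window")

# The fixed schedule would not have intervened until day 30, too late for the
# anomaly that appears on day 7, and wasteful when nothing is actually wrong.
```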

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is in one of the areas that I support as forum director). Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever changing tools to handle Big Data of ever increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it has major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishing boat bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Filed under Conference

The Open Group Approves EMMM Technical Standard for Natural Resources Industry

By The Open Group Staff

The Open Group, a vendor- and technology-neutral consortium represented in South Africa by Real IRM, has approved the Exploration and Mining Business Reference Model (EM Model) as an Open Group Technical Standard. This is the first approved standard for the natural resources industry developed by the Exploration, Mining, Metals and Minerals (EMMM™) Forum, a Forum of The Open Group.

The development of the EM Model was overseen by The Open Group South Africa, and is the first step toward establishing a blueprint for organisations in the natural resources industry, providing standard operating practices and support for vendors delivering technical and business solutions to the industry.

“Designed to cater to business activities across a variety of different types of mining organisations, the model is helping companies align both their business and technical procedures to provide better measures for shared services, health, safety and environmental processes,” says Sarina Viljoen, senior consultant at Real IRM and Forum Director of The Open Group EMMM Forum. “I can confirm that the business reference model was accepted as an Open Group standard and will now form part of the standards information base.”

This is a significant development as the EMMM Forum aims to enable sustainable business value through collaboration around a common reference framework, and to support vendors in their delivery of technical and business solutions. Its outputs are common reference deliverables such as mining process, capability and information models. The first technical standard in the business space, the EM Model focuses on business processes within the exploration and mining sectors.

“Using the EM Model as a reference with clients allows us to engage with any client and any mining method. Since the model first went public I have not used anything else as a basis for discussion,” says Mike Woodhall, a mining executive with MineRP, one of the world’s largest providers of mining technical software, support and mining consulting services. “The EM Model captures the mining business generically and allows us and the clients to discuss further levels of detail based on understanding the specifics of the mining method. This is one of the two most significant parts of the exercise: the fact we have a multiparty definition – no one person could have produced the model – and the fact that we could capture it legibly on one page.”

Viljoen adds that Forum member organisations find the collaboration especially useful as it drives insight and clarity on shared challenges: “The Forum has built on the very significant endorsement of its first business process model by Gartner in its report ‘Process for Defining Architecture in an Integrated Mining Enterprise, 2020.’

“In the report, Gartner suggests that companies in the mining industry look to enterprise architectures as a way of creating better efficiencies and integration across the business, information and technology processes within mining companies,” says Viljoen.

Gartner highlights the following features of the EM Model as being particularly important in its approach, differing from many traditional models that have been developed by mining companies themselves:

  • Breadth – covers all aspects of mining and mining-related activities
  • Scale-Independent – suitable for businesses of any size, up to the largest enterprise corporations
  • Product and Mining-Method Neutral – supports all products and mining methods
  • Extended and Extensible Model – provides a general level of process detail that can be extended by organisations to the activity or task level, as appropriate

The EM Model is available for download from The Open Group Bookstore here.

About The Open Group Exploration, Mining, Metals and Minerals Forum

The Open Group Exploration, Mining, Metals and Minerals (EMMM™) Forum is a global, vendor-neutral collaboration where members work to create a reference framework containing applicable standards for the exploration and mining industry focused on all metals and minerals. The EMMM Forum functions to realize sustainable business value for the organisations within the industry through collaboration, and to support vendors in their delivery of technical and business solutions.

About Real IRM Solutions

Real IRM is the leading South African enterprise architecture specialist, offering a comprehensive portfolio of products and services to local and international organisations. www.realirm.com.

About The Open Group

The Open Group is an international vendor- and technology-neutral consortium upon which organizations rely to lead the development of IT standards and certifications, and to provide them with access to key industry peers, suppliers and best practices. The Open Group provides guidance and an open environment in order to ensure interoperability and vendor neutrality. Further information on The Open Group can be found at www.opengroup.org.


Filed under EMMMv™

An Update on ArchiMate® 2 Certification

By Andrew Josey, The Open Group

In this blog we provide the latest news on the status of the ArchiMate® Certification for People program. Recent changes to the program include the availability of the ArchiMate 2 Examination through Prometric test centers and the addition of the ArchiMate 2 Foundation qualification.

Program Vision

The vision for the ArchiMate 2 Certification Program is to define and promote a market-driven education and certification program to support the ArchiMate modeling language standard. The program is supported by an Accredited ArchiMate Training program, in which there are currently 10 accredited courses. There are self-study materials available.

Certification Levels

There are two levels defined for ArchiMate 2 People Certification:

  • Level 1: ArchiMate 2 Foundation
  • Level 2: ArchiMate 2 Certified

The difference between the two certification levels is that ArchiMate 2 Certified has further requirements in addition to passing the ArchiMate 2 Examination, as shown in the figure below.

What are the study paths to become certified?

[Figure: ArchiMate 2 certification levels and study paths]

The path to certification depends on the level. For Level 2, ArchiMate 2 Certified, you achieve certification only after satisfactorily completing an Accredited ArchiMate Training Course, including completion of practical exercises, together with an examination. For Level 1, you may choose to self-study or attend a training course; the only requirement is to pass the ArchiMate 2 Examination.

How can I find out about the syllabus and examinations?

To obtain a high-level view, read the datasheets describing the certification, which are available from the ArchiMate Certification website. For detail on what is expected from candidates, see the Conformance Requirements document. The Conformance Requirements apply to both Level 1 and Level 2.

The ArchiMate 2 examination comprises 40 questions in simple multiple-choice format. A practice examination is included as part of an Accredited ArchiMate Training course and also in the ArchiMate 2 Foundation Study Guide.

For Level 2, a set of practical exercises is included as part of the accredited training course; these must be successfully completed and are assessed by the trainer.

More Information and Resources

More information on the program is available at the ArchiMate 2 Certification site at http://www.opengroup.org/certifications/archimate/

Details of the ArchiMate 2 Examination are available at: http://www.opengroup.org/certifications/archimate/docs/exam

The calendar of Accredited ArchiMate 2 Training courses is available at: http://www.opengroup.org/archimate/training-calendar/

The ArchiMate 2 Foundation Self Study Pack is available for purchase and immediate download at http://www.opengroup.org/bookstore/catalog/b132.htm

ArchiMate is a registered trademark of The Open Group.

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF 9.1, ArchiMate 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.


Filed under ArchiMate®, Uncategorized

The Open Group works with Microsoft to create Open Management Infrastructure

By Martin Kirk, The Open Group

Most data centers comprise many different types and kinds of hardware, often including a mish-mash of products made by various vendors and manufacturers in various stages of their product lifecycle. This makes data center management a bit of a nightmare for administrators, because it has been difficult to centralize management on one common platform. In the past, this conundrum has forced companies to do one of two things – write their own proprietary abstraction layer to manage the different types of hardware, or buy all of the same type of hardware and be subject to vendor lock-in.

Today, building cloud infrastructures has exacerbated the problem of datacenter management and automation. To solve this, the notion of a datacenter abstraction layer (DAL) has evolved that will allow datacenter elements (network, storage, server, power and platform) to be managed and administered in a standard and consistent manner. Additionally, this will open up datacenter infrastructure management to any management application that chooses to support this standards-based management approach.

The Open Group has been working with a number of industry-leading companies for more than 10 years on the OpenPegasus Project, an open-source implementation of the Distributed Management Task Force (DMTF) Common Information Model (CIM) as well as the DMTF Web Services for Management (WS-Management) standard. The OpenPegasus Project led the industry in implementing the DMTF CIM/WS-Management standards, and OpenPegasus has been provided as the standard solution on a very wide variety of IT platforms. Microsoft has been a sponsor of the OpenPegasus Project for four years and has contributed greatly to the project.

Microsoft has also developed another implementation of the DMTF CIM/WS-Management standards and, based on their work together on the OpenPegasus Project, has brought this to The Open Group where it has become the Open Management Infrastructure (OMI) Project. Both Projects are now organized under the umbrella of the Open Management Project as a collection of open-source management projects.

OMI is a highly portable, easy-to-implement, high-performance CIM/WS-Management object manager, designed specifically to implement the DMTF standards. OMI is written to be easy to implement in Linux and UNIX® systems. It will empower datacenter device vendors to compile and implement a standards-based management service into any device or platform in a clear and consistent way. The Open Group has made the source code for OMI available under an Apache 2 license.
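To give a feel for the kind of information a CIM-based object manager exposes, here is a purely conceptual Python sketch of a CIM-style instance: a class name plus named properties, rendered roughly the way a WS-Management response carries it. This is an illustration of the data model only; OMI itself is implemented in C against the DMTF MI interfaces, and none of the names below are its actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CimInstance:
    """Illustrative stand-in for a CIM instance: a class name plus properties."""
    class_name: str
    properties: dict = field(default_factory=dict)

    def to_wsman_fragment(self) -> str:
        """Render the instance roughly in the shape of a WS-Management payload."""
        body = "".join(
            f"  <p:{name}>{value}</p:{name}>\n" for name, value in self.properties.items()
        )
        return f"<p:{self.class_name}>\n{body}</p:{self.class_name}>"

# A hypothetical operating-system instance, loosely modelled on CIM_OperatingSystem.
os_instance = CimInstance(
    class_name="CIM_OperatingSystem",
    properties={"CSName": "web-01", "Version": "3.10.0", "FreePhysicalMemory": 204800},
)

print(os_instance.to_wsman_fragment())
```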

OMI provides the following benefits (from Microsoft’s blog post on the announcement):

  • DMTF Standards Support: OMI implements its CIMOM server according to the DMTF standard.
  • Small System Support: OMI is designed to also be implemented in small systems (including embedded and mobile systems).
  • Easy Implementation: Greatly shortened path to implementing WS-Management and CIM in your devices/platforms.
  • Remote Manageability: Instant remote manageability from Windows and non-Windows clients and servers, as well as other WS-Management-enabled platforms.
  • API Compatibility with WMI: Providers and management applications can be written on Linux and Windows by using the same APIs.
  • Support for CIM IDE: Tools for generating and developing CIM providers, such as Visual Studio’s CIM IDE.

Making OMI available to the public as an open-source package allows companies of all sizes to more easily implement standards-based management into any device or platform. The long-term vision for the project is to provide a standard that allows any device to be managed clearly and consistently, as well as create an ecosystem of products that are based on open standards that can be more easily managed.

To read Microsoft’s blog on the announcement, please go to: http://blogs.technet.com/b/windowsserver/archive/2012/06/28/open-management-infrastructure.aspx

If you are interested in getting involved in OMI or OpenPegasus, please email omi-interest@opengroup.org.

Martin Kirk is a Program Director at The Open Group. Previously the head of the Operating System Technology Centre at British Telecom Research Labs, Mr. Kirk has been with The Open Group since 1990.


Filed under Standards

Three Best Practices for Successful Implementation of Enterprise Architecture Using the TOGAF® Framework and the ArchiMate® Modeling Language

By Henry Franken, Sven van Dijk and Bas van Gils, BiZZdesign

The discipline of Enterprise Architecture (EA) was developed in the 1980s with a strong focus on the information systems landscape of organizations. Since those days, the scope of the discipline has slowly widened to include more and more aspects of the enterprise as a whole. This holistic perspective takes into account the concerns of a wide variety of stakeholders. Architects, especially at the strategic level, attempt to answer the question: “How should we organize ourselves in order to be successful?”

An architecture framework is a foundational structure or set of structures for developing a broad range of architectures and consists of a process and a modeling component. The TOGAF® framework and the ArchiMate® modeling language – both maintained by The Open Group – are two leading and widely adopted standards in this field.


While both the TOGAF framework and the ArchiMate modeling language have a broad (enterprise-wide) scope and provide a practical starting point for an effective EA capability, a key factor is the successful embedding of EA standards and tools in the organization. From this perspective, the implementation of EA means that an organization adopts processes for the development and governance of EA artifacts and deliverables. Standards need to be tailored, and tools need to be configured in the right way in order to create the right fit. Or more popularly stated, “For an effective EA, it has to walk the walk, and talk the talk of the organization!”

EA touches on many aspects such as business, IT (and especially the alignment of these two), strategic portfolio management, project management and risk management. EA is by definition about cooperation and therefore it is impossible to operate in isolation. Successful embedding of an EA capability in the organization is typically approached as a change project with clearly defined goals, metrics, stakeholders, appropriate governance and accountability, and with assigned responsibilities in place.

With this in mind, we share three best practices for the successful implementation of Enterprise Architecture:

Think big, start small

The potential footprint of a mature EA capability is as big as the entire organization, but one of the key factors for success with EA is to deliver value early on. Experience from our consultancy practice shows that a “think big, start small” approach has the most potential for success. This means that implementing an EA capability is a process of iterative and incremental steps, based on a long-term vision. Each step in the process must add measurable value to the EA practice, and priorities should be based on the needs and the change capacity of the organization.

Combine process and modeling

The TOGAF framework and the ArchiMate modeling language are a powerful combination. Deliverables in the architecture process are more effective when based on an approach that combines formal models with powerful visualization capabilities.

The TOGAF standard describes the architecture process in detail. The Architecture Development Method (ADM) is the core of the TOGAF standard. The ADM is a customer-focused and value-driven process for the sustainable development of a business capability. The ADM specifies deliverables throughout the architecture life-cycle, with a focus on effective communication to a variety of stakeholders. The ArchiMate language is fully complementary to the content specified in the TOGAF standard: it can be used to describe all aspects of the EA in a coherent way, while tailoring the content for a specific audience. Moreover, an architecture repository is a valuable asset that can be reused throughout the enterprise. This greatly benefits communication and cooperation between Enterprise Architects and their stakeholders.
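
To make the idea of a shared architecture repository a little more concrete, here is a minimal sketch, in Python and purely for illustration, of model content stored as elements and relationships across layers. The class names, layer names and relationship kinds are simplified assumptions for this example and are not taken from the ArchiMate or TOGAF specifications.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Element:
    name: str
    layer: str          # e.g. "Business", "Application", "Technology"
    element_type: str   # e.g. "Business Process", "Application Component"

@dataclass(frozen=True)
class Relationship:
    source: Element
    target: Element
    kind: str           # e.g. "serving", "realization", "assignment"

@dataclass
class Repository:
    """A single store of architecture content, reusable across projects."""
    elements: list = field(default_factory=list)
    relationships: list = field(default_factory=list)

    def add(self, *elements):
        self.elements.extend(elements)

    def relate(self, source, target, kind):
        self.relationships.append(Relationship(source, target, kind))

# A toy model: an application component serving a business process.
claim_handling = Element("Handle Claim", "Business", "Business Process")
claims_portal = Element("Claims Portal", "Application", "Application Component")

repo = Repository()
repo.add(claim_handling, claims_portal)
repo.relate(claims_portal, claim_handling, "serving")
print(f"{len(repo.elements)} elements, {len(repo.relationships)} relationships")
```

The point of keeping all content in one repository like this is that the same elements can appear in many different views and deliverables without being redefined.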

Use a tool!

It is true that “a fool with a tool is still a fool.” In our teaching and consulting practice we have found, however, that adoption of a flexible and easy-to-use tool can be a strong driver in pushing the EA initiative forward.

EA brings together valuable information that greatly enhances decision making, whether at a strategic or a more operational level. This knowledge not only needs to be efficiently managed and maintained, it also needs to be communicated to the right stakeholder at the right time and, even more importantly, in the right format. EA has a diverse audience with both business and technical backgrounds, and each stakeholder needs to be addressed in a language they understand. The essential qualifications for EA tools are therefore rigidity when it comes to the management and maintenance of knowledge, and flexibility when it comes to the analysis (ad hoc, what-if, etc.), presentation and communication of the information to diverse audiences.

So what you are looking for is a tool with solid repository capabilities and flexible modeling and analysis functionality.
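
As an illustration of that combination – rigid storage of knowledge, flexible presentation of it – the sketch below keeps a single list of elements as the source of truth and filters it into audience-specific views. The audiences, layers and element names are hypothetical and only hint at what a real EA tool provides.

```python
# Minimal sketch: one source of truth, filtered into audience-specific views.
# The audiences, layers and elements below are illustrative assumptions.
ELEMENTS = [
    {"name": "Handle Claim",  "layer": "Business"},
    {"name": "Claims Portal", "layer": "Application"},
    {"name": "App Server",    "layer": "Technology"},
]

LAYERS_BY_AUDIENCE = {
    "executive":  {"Business"},
    "architect":  {"Business", "Application", "Technology"},
    "operations": {"Application", "Technology"},
}

def view(elements, audience):
    """Return only the elements relevant to the given audience."""
    wanted = LAYERS_BY_AUDIENCE.get(audience, set())
    return [e["name"] for e in elements if e["layer"] in wanted]

if __name__ == "__main__":
    for audience in LAYERS_BY_AUDIENCE:
        print(f"{audience:>10}: {view(ELEMENTS, audience)}")
```

A real tool would of course draw on a richer repository and render diagrams rather than lists, but the principle – maintain once, present many ways – is the same.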

Conclusion

EA brings value to the organization because it answers more accurately the question: “How should we organize ourselves?” Standards for EA help organizations realize returns on their EA investments more quickly. The TOGAF framework and the ArchiMate modeling language are popular, widespread, open and complete standards for EA, from both a process and a language perspective. EA becomes even more effective if these standards are used in the right way. The EA capability needs to be carefully embedded in the organization. This is usually a process based on a long-term vision, and it has the most potential for success if approached as “think big, start small.” Enterprise Architects can benefit from tool support, provided that it supports flexible presentation of content, so that content can be tailored for communication to specific audiences.

More information on this subject can be found on our website: www.bizzdesign.com. Whitepapers are available for download, and our blog section features a number of very interesting posts regarding the subjects covered in this paper.

If you would like to know more or comment on this blog, please do not hesitate to contact us directly!

Henry Franken

Henry Franken is the managing director of BiZZdesign and is chair of The Open Group ArchiMate Forum. As chair of The Open Group ArchiMate Forum, Henry led the development of the ArchiMate Version 2.0 standard. Henry is a speaker at many conferences and has co-authored several international publications and Open Group White Papers. Henry is co-founder of the BPM-Forum. At BiZZdesign, Henry is responsible for research and innovation.

 

 

Sven van Dijk, MSc, is a consultant and trainer at BiZZdesign North America. He has worked as an application consultant on large-scale ERP implementations and as a business consultant on information management and IT strategy projects in industries such as finance and construction. He has nearly eight years of experience in applying structured methods and tools for Business Process Management and Enterprise Architecture.

 

Bas van Gils is a consultant, trainer and researcher for BiZZdesign. His primary focus is on strategic use of enterprise architecture. Bas has worked in several countries, across a wide range of organizations in industry, retail, and (semi)governmental settings. Bas is passionate about his work, has published in various professional and academic journals and writes for several blogs.

2 Comments

Filed under ArchiMate®, Enterprise Architecture, TOGAF®

Successful Enterprise Architecture using the TOGAF® and ArchiMate® Standards

By Henry Franken, BiZZdesign

The discipline of Enterprise Architecture was developed in the 1980s with a strong focus on the information systems landscape of organizations. Since those days, the scope of the discipline has slowly widened to include more and more aspects of the enterprise as a whole. This holistic perspective takes into account the concerns of a wide variety of stakeholders. Architects, especially at the strategic level, attempt to answer the question “How should we organize ourselves in order to be successful?”

An architecture framework is a foundational structure, or set of structures, which can be used for developing a broad range of different architectures, and consists of a process and a modeling component. The TOGAF® framework and the ArchiMate® modeling language – both maintained by The Open Group® – are the two leading standards in this field.


Much has been written on this topic in online forums, whitepapers, and blogs. On the BiZZdesign blog we have published several series on EA in general and these standards in particular, with a strong focus on the question: what should we do to be successful with EA using the TOGAF framework and the ArchiMate modeling language? I would like to summarize some of our findings here:

Tip 1: One of the key factors for being successful with EA is to deliver value early on. We have found that organizations that combine a long-term vision with incremental delivery (“think big, act small”) have a greater chance of developing an effective EA capability.
 
Tip 2: Combine process and modeling. The TOGAF framework and the ArchiMate modeling language are a powerful combination. Deliverables in the architecture process are more effective when based on an approach that combines formal models with powerful visualization capabilities. Moreover, an architecture repository is a valuable asset that can be reused throughout the enterprise.
 
Tip 3: Use a tool! It is true that “a fool with a tool is still a fool.” In our teaching and consulting practice we have found, however, that adoption of a flexible and easy-to-use tool can be a strong driver in pushing the EA initiative forward.

There will be several interesting presentations on this subject at the upcoming Open Group conference (Newport Beach, CA, USA, January 28 – 31), ranging from theory to case studies, and focusing on getting started with EA as well as on advanced topics.

I will also present on this subject, elaborating on the combined use of The Open Group standards for EA, and I gladly invite you to join me at the panel sessions. I look forward to seeing you there!

Henry Franken is the managing director of BiZZdesign and is chair of The Open Group ArchiMate Forum. As chair of The Open Group ArchiMate Forum, Henry led the development of the ArchiMate Version 2.0 standard. Henry is a speaker at many conferences and has co-authored several international publications and Open Group White Papers. Henry is co-founder of the BPM-Forum. At BiZZdesign, Henry is responsible for research and innovation.

2 Comments

Filed under ArchiMate®, Enterprise Architecture, TOGAF®

Operational Resilience through Managing External Dependencies

By Ian Dobson & Jim Hietala, The Open Group

These days, organizations are rarely self-contained. Businesses collaborate through partnerships and close links with suppliers and customers. Outsourcing services and business processes, including into Cloud Computing, means that key operations an organization depends on are often fulfilled outside its control.

The challenge is how to manage the dependencies your operations have on factors that are outside your control. The goal is to manage risk in a way that optimizes your operational success by making your operations resilient to these external dependencies.

The Open Group’s Dependency Modeling (O-DM) standard specifies how to construct a dependency model to manage risk and build trust over organizational dependencies between enterprises – and between operational divisions within a large organization. The standard involves constructing a model of the operations necessary for an organization’s success, including the dependencies that can affect each operation. Applying quantitative risk sensitivities to each dependency then reveals the operations with the highest exposure to the risk of not being successful, showing business decision-makers where investment in reducing the organization’s exposure to external risks will deliver the best return.
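
The O-DM standard defines its own way of structuring and evaluating such a model; purely to illustrate the general idea, the sketch below ranks operations by a naive aggregate of the likelihood and sensitivity of their dependencies. The operations, dependencies, numbers and scoring rule are assumptions made for this example, not content from the standard.

```python
# Illustrative only: NOT the O-DM computation, just the shape of a dependency
# model with quantitative sensitivities attached to each dependency.
operations = {
    "Ship orders": [
        # (dependency, probability the dependency fails, sensitivity to it)
        ("Logistics partner", 0.10, 0.9),
        ("Warehouse IT",      0.05, 0.6),
    ],
    "Take payments": [
        ("Payment gateway",   0.02, 1.0),
        ("Fraud screening",   0.08, 0.4),
    ],
}

def exposure(dependencies):
    """Aggregate exposure: failure likelihood weighted by sensitivity."""
    return sum(p * s for _, p, s in dependencies)

# Rank operations so decision-makers see the highest exposures first.
for name, deps in sorted(operations.items(),
                         key=lambda kv: exposure(kv[1]), reverse=True):
    print(f"{name}: exposure {exposure(deps):.2f}")
```

Reducing a dependency’s sensitivity (for example, by adding a second logistics partner) lowers the corresponding operation’s exposure, which is exactly the kind of what-if question this style of modeling is meant to answer.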

O-DM helps you to plan for success through operational resilience, assured business continuity, and effective new controls and contingencies, enabling you to:

  • Cut costs without losing capability
  • Make the most of tight budgets
  • Build a resilient supply chain
  •  Lead programs and projects to success
  • Measure, understand and manage risk from outsourcing relationships and supply chains
  • Deliver complex event analysis

The O-DM analytical process facilitates organizational agility by allowing you to easily adjust and evolve your organization’s operations model, and produces rapid results to illustrate how reducing the sensitivity of your dependencies improves your operational resilience. O-DM also allows you to drill as deep as you need to go to reveal your organization’s operational dependencies.

Training on developing operational dependency models that conform to the O-DM standard is available, as are software computation tools that automate the speedy delivery of actionable results in graphic formats to facilitate informed business decision-making.

The O-DM standard is a significant addition to our existing Open Group Risk Management publications.

The O-DM standard may be accessed here.

Ian Dobson is the director of the Security Forum and the Jericho Forum for The Open Group, coordinating and facilitating the members’ work to achieve their goals in our challenging information security world. In the Security Forum, his focus is on supporting the development of open standards and guides on security architectures and the management of risk and security, while in the Jericho Forum he works with members to anticipate the requirements for the security solutions we will need in the future.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

1 Comment

Filed under Cybersecurity, Security Architecture

Discover the World’s First Technical Cloud Computing Standard… for the Second Time

By E.G. Nadhan, HP

Have you heard of the first technical standard for Cloud Computing – SOCCI (pronounced “saw-key”)? Wondering what it stands for? It stands for Service Oriented Cloud Computing Infrastructure.

Whether you are just beginning to deploy solutions in the cloud or already have cloud solutions deployed, SOCCI can be applied to each organization’s situation. Wherever you are on the spectrum of cloud adoption, the standard offers a well-defined set of architecture building blocks with specific roles outlined in detail. Thus, the standard can be used in multiple ways, including:

  • Defining the service oriented aspects of your infrastructure in the cloud as part of your reference architecture
  • Validating your reference architecture to ensure that these building blocks have been appropriately addressed

The standard provides you an opportunity to systematically perform the following in the context of your environment:

  • Identify synergies between service orientation and the cloud
  • Extend adoption of  traditional and service-oriented infrastructure in the cloud
  • Apply the consumer, provider and developer viewpoints on your cloud solution
  • Incorporate foundational building blocks into enterprise architecture for infrastructure services in the cloud
  • Implement cloud-based solutions using different infrastructure deployment models
  • Realize business solutions referencing the business scenario analyzed in this standard

Are you going to be SOCCI’s first application? Are you among the cloud innovators—opting not to wait when the benefits can be had today?

Incidentally, I will be presenting this standard for the second time at the HP Discover Conference in Frankfurt on 5th Dec 2012.   I plan on discussing this standard, as well as its application in a hypothetical business scenario so that we can collectively brainstorm on how it could apply in different business environments.

In an earlier tweet chat on cloud standards, I tweeted: “Waiting for standards is like waiting for Godot.” After the #DT2898 session at HP Discover 2012, I expect to tweet, “Waiting for standards may be like waiting for Godot, but waiting for the application of a standard does not have to be so.”

A version of this blog post originally appeared on the Journey through Enterprise IT Services Blog.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and the Journey Blog.

Comments Off

Filed under Cloud, Cloud/SOA

I Thought I had Said it All – and Then Comes Service Technology

By E.G. Nadhan, HP

This is not the first time I have blogged about the evolution of fundamental service orientation principles serving as an effective foundation for cloud computing. You may recall my earlier posts on The Open Group blog, Top 5 tell-tale signs of SOA evolving to the Cloud and The Right Way to Transform to Cloud Computing, followed by my latest post on this topic about taking a lesson from history to integrate to the Cloud. I thought I had said it all and there was nothing more to blog about on this topic other than diving into more details.

Until I saw the post by Forbes blogger Joe McKendrick, Before There Was Cloud Computing, There Was SOA. In this post, McKendrick introduces a new term – Service Technology – which resonates with me because it cements the concept of service-oriented thinking that technically enables the realization of SOA within the enterprise, followed by its sustained evolution to cloud computing. In fact, the 5th International SOA, Cloud and Service Technology Symposium is a conference centered on this concept.

Even if this is a natural evolution, we must still exercise caution so that we don’t fall prey to the same integration pitfalls the IT world did in the past. I elaborate further on this topic in my post on The Open Group blog: Take a lesson from History to Integrate to the Cloud.

I was intrigued by another comment in McKendrick’s post about “Cloud being inherently service-oriented.” Almost. I would slightly rephrase it as Cloud done right being inherently service-oriented. So, what do I mean by Cloud done right? Voilà: The Right Way to Transform to Cloud Computing on The Open Group blog.

So, how about you? Where are you with your SOA strategy? Have you been selectively transforming to the Cloud? Do you have “Service Technology” in place within your enterprise?

I would like to know, and something tells me McKendrick will as well.

So, it would be an interesting exercise to see whether the first technical standard for Cloud Computing published by The Open Group should be extended to accommodate the concept of Service Technology. Perhaps it is already an integral part of this standard in concept. Please let me know if you are interested. As the co-chair for this Open Group project, I am very interested in working with you on the next steps.

A version of this blog post originally appeared on the Journey through Enterprise IT Services Blog.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and the Journey Blog.

Comments Off

Filed under Cloud/SOA

Viewpoint: Technology Supply Chain Security – Becoming a Trust-Worthy Provider

By Andras Szakal, IBM

Increasingly, the critical systems of the planet — telecommunications, banking, energy and others — depend on and benefit from the intelligence and interconnectedness enabled by existing and emerging technologies. As evidence, one need only look to the increase in enterprise mobile applications and BYOD strategies to support corporate and government employees.

Whether these systems are trusted by the societies they serve depends in part on whether the technologies incorporated into them are fit for the purpose they are intended to serve. Fitness for purpose is manifested in two essential ways: first, does the product meet essential functional requirements; and second, has the product or component been produced by a trustworthy provider? Of course, the leaders or owners of these systems have to do their part to achieve security and safety (e.g., to install, use and maintain technology appropriately, and to pay attention to people and process aspects such as insider threats). Cybersecurity considerations must be addressed in a sustainable way from the get-go, by design, and across the whole ecosystem — not after the fact, or in just one sector or another, or in reaction to crisis.

In addressing the broader cybersecurity challenge, however, buyers of mission-critical technology naturally seek reassurance as to the quality and integrity of the products they procure. In our view, the fundamentals of the institutional response to that need are similar to those that have worked in prior eras and in other industries — like food.

For example:  Most of us are able to enjoy a meal of stir-fried shrimp and not give a second thought as to whether the shellfish is safe to eat.

Why is that? Because we are the beneficiaries of a system whose workings greatly increase the likelihood — in many parts of the world — that the shellfish served to end consumers is safe and uncontaminated. While tainted technology is not quite the same as tainted food, it is a useful analogy.

Of course, a very high percentage of the seafood industry is extremely motivated to provide safe and delicious shellfish to the end consumer. So we start with the practical perspective that, much more likely than not in today’s hyper-informed and communicative world, the food supply system will provide reasonably safe and tasty products. Invisible though it may be to most of us, however, this generalized confidence rests on a worldwide system that is built on globally recognized standards and strong public-private collaboration.

This system is necessary because mistakes happen, expectations evolve and — worse — the occasional participant in the food supply chain may take a shortcut in their processing practices. Therefore, some kind of independent oversight and certification has proven useful to assure consumers that what they pay for — their desired size and quality grade and, always, safety — is what they will get. In many countries, close cooperation between industry and government results in industry-led development and implementation of food safety standards.[1]

Government’s role is limited but important. Clearly, government cannot look at and certify every piece of shellfish people buy. So its actions are focused on areas in which it can best contribute: to take action in the event of a reported issue; to help convene industry participants to create and update safety practices; to educate consumers on how to choose and prepare shellfish safely; and to recognize top performers.[2]

Is the system perfect? Of course not. But it works, and supports the most practical and affordable methods of conducting safe and global commerce.

Let’s apply this learning to another sphere: information technology. To wit:

  • We need to start with the realization that the overwhelming majority of technology suppliers are motivated to provide securely engineered products and services, and that competitive dynamics reward those who consistently perform well.
  • However, we also need to recognize that there is a gap in time between the corrective effect of the market’s Invisible Hand and the damage that can be done in any given incident. Mistakes will inevitably happen, and there are some bad actors. So some kind of oversight and governmental participation are important, to set the right incentives and expectations.
  • We need to acknowledge that third-party inspection and certification of every significant technology product at the “end of pipe” is not only impractical but also insufficient. It will not achieve trust across a wide variety of infrastructures and industries.  A much more effective approach is to gather the world’s experts and coalesce industry practices around the processes that the experts agree are best suited to produce desired end results.
  • Any proposed oversight or government involvement must not stymie innovation or endanger a provider’s intellectual capital by requiring exposure to third-party assessments or overly burdensome escrow of source code.
  • Given the global and rapid manner in which technologies are invented, produced and sold, a global and agile approach to technology assurance is required to achieve scalable results. The approach should be based on understood and transparently formulated standards that are, to the maximum extent possible, industry-led and global in their applicability. Conformance to such standards, demonstrated once, would then be recognized across multiple industries and geo-political regions. Propagation of country- or industry-specific standards will result in economic fragmentation and slow the adoption of industry best practices.

The Open Group Trusted Technology Forum (OTTF)[3] is a promising and complementary effort in this regard. Facilitated by The Open Group, the OTTF is working with governments and industry worldwide to create vendor-neutral open standards and best practices that can be implemented by anyone. Membership continues to grow and includes representation from manufacturers world-wide.

Governments and enterprises alike will benefit from the OTTF’s work. Technology purchasers can use the Open Trusted Technology Provider (OTTP) Standard and the OTTP Framework best-practice recommendations to guide their strategies. A wide range of technology vendors can use OTTF approaches to build security and integrity into their end-to-end supply chains. The first version of the OTTPS is focused on mitigating the risk of tainted and counterfeit technology components or products. The OTTF is currently working on a program that will accredit technology providers to the OTTP Standard. We expect to begin pilot testing of the program by the end of 2012.

Don’t misunderstand us: Market leaders like IBM have every incentive to engineer security and quality into our products and services. We continually encourage and support others to do the same.

But we realize that trusted technology — like food safety — can only be achieved if we collaborate with others in industry and in government.  That’s why IBM is pleased to be an active member of the Trusted Technology Forum, and looks forward to contributing to its continued success.

A version of this blog post was originally posted by the IBM Institute for Advanced Security.

Andras Szakal is the Chief Architect and a Senior Certified Software IT Architect for IBM’s Federal Software Sales business unit. His responsibilities include developing e-Government software architectures using IBM middleware and managing the IBM federal government software IT architect team. Szakal is a proponent of service oriented and web services based enterprise architectures and participates in open standards and open source product development initiatives within IBM.

 

Comments Off

Filed under OTTF

How the Operating System Got Graphical

By Dave Lounsbury, The Open Group

The Open Group is a strong believer in open standards and our members strive to help businesses achieve objectives through open standards. In 1995, under the auspices of The Open Group, the Common Desktop Environment (CDE) was developed and licensed for use by HP, IBM, Novell and Sunsoft to make open systems desktop computers as easy to use as PCs.

CDE is a single, standard graphical user interface for managing data, files, and applications on an operating system. Both application developers and users embraced the technology and approach because it provided a simple and common way to access data and applications on the network. With a click of a mouse, users could easily navigate through the operating system – similar to how we work on PCs and Macs today.

It was the first successful attempt to standardize a desktop GUI across multiple, competing platforms. In many ways, CDE is responsible for the look, feel, and functionality of many of the popular operating systems used today, and it brought distributed computing capabilities to the end user’s desktop.

The Open Group is now passing the torch to a new CDE community, led by CDE suppliers and users such as Peter Howkins and Jon Trulson.

“I am grateful that The Open Group decided to open source the CDE codebase,” said Jon Trulson. “This technology still has its fans and is very fast and lightweight compared to the prevailing UNIX desktop environments commonly in use today. I look forward to seeing it grow.”

The CDE group is also releasing OpenMotif, which is the industry standard graphical interface that standardizes application presentation on open source operating systems such as Linux. OpenMotif is also the base graphical user interface toolkit for the CDE.

The Open Group thanks these founders of the new CDE community for their dedication and contribution to carrying this technology forward. We are delighted this community is moving forward with this project and look forward to the continued growth in adoption of this important technology.

For those of you who are interested in learning more about the CDE project and would like to get involved, please see http://sourceforge.net/projects/cdesktopenv.

Dave Lounsbury is The Open Group‘s Chief Technology Officer, previously VP of Collaboration Services. Dave holds three U.S. patents and is based in the U.S.

Comments Off

Filed under Standards