Tag Archives: open standards

Why Technology Must Move Toward Dependability through Assuredness™

By Allen Brown, President and CEO, The Open Group

In early December, a technical problem at the U.K.'s central air traffic control center in Swanwick, England caused significant delays at airports throughout Britain and Ireland and affected flights between the U.K. and destinations from Europe to the U.S. At Heathrow—one of the world's largest airports—there were a reported 228 cancellations alone, affecting 15 percent of the airport's 1,300 daily flights. With a ripple effect that also disturbed flight schedules at airports in Birmingham, Dublin, Edinburgh, Gatwick, Glasgow and Manchester, the British National Air Traffic Services (NATS) was reported to have handled 20 percent fewer flights that day as a result of the glitch.

According to The Register, the problem occurred when a touch-screen telephone system that allows air traffic controllers to talk to each other failed to update during what should have been a routine shift change from the night to the daytime system. According to news reports, the NATS system is the largest of its kind in Europe, containing more than a million lines of code. It took the engineering and manufacturing teams nearly a day to fix the problem. As a result of the snafu, Irish airline Ryanair even went so far as to call on Britain's Civil Aviation Authority to intervene to prevent further delays and to make sure better contingency plans are in place so that such failures do not happen again.

Increasingly complex systems

As businesses have come to rely more and more on technology, the systems used to keep operations running smoothly from day to day have grown not only larger but also more complex. We are long past the days when a single mainframe was used to handle a few batch calculations.

Today, large global organizations, in particular, have systems that are spread across multiple centers of technical operations, often scattered in various locations throughout the globe. And with industries also becoming more inter-related, even individual company systems are often connected to larger extended networks, such as when trading firms are connected to stock exchanges or, as was the case with the Swanwick failure, airlines are affected by NATS’ network problems. Often, when systems become so large that they are part of even larger interconnected systems, the boundaries of the entire system are no longer always known.

The Open Group’s vision for Boundaryless Information Flow™ has never been closer to fruition than it is today. Systems have become increasingly open out of necessity because commerce takes place on a more global scale than ever before. This is a good thing. But as these systems have grown in size and complexity, there is more at stake when they fail than ever before.

The ripple effect felt when technical problems shut down major commercial systems cuts far, wide and deep. Problems such as what happened at Swanwick can affect the entire extended system. In this case, NATS, for example, suffers from damage to its reputation for maintaining good air traffic control procedures. The airlines suffer in terms of cancelled flights, travel vouchers that must be given out and angry passengers blasting them on social media. The software manufacturers and architects of the system are blamed for shoddy planning and for not having the foresight to prevent failures. And so on and so on.

Looking for blame

When large technical failures happen, stakeholders, customers, the public and now governments are beginning to look for accountability, for someone to blame. When the Obamacare website didn't operate as expected, the U.S. Congress went looking for blame and jobs were lost. In the NATS fiasco, Ryanair asked for the government to intervene. Risk.net has reported that after the Royal Bank of Scotland experienced a batch processing glitch last summer, the U.K. Financial Services Authority wrote to large banks in the U.K. requesting that they identify the people in their organizations responsible for business continuity. And when U.S. trading company Knight Capital lost $440 million in 40 minutes after a trading software upgrade failed in August, U.S. Securities and Exchange Commission Chairman Mary Schapiro was quoted in the same article as stating: “If there is a financial loss to be incurred, it is the firm committing the error that should suffer that loss, not its customers or other investors. That more than anything sends a wake-up call to the entire industry.”

As governments, in particular, look to lay blame for IT failures, companies—and individuals—will no longer be safe from the consequences of these failures. And it won’t just be reputations that are lost. Lawsuits may ensue. Fines will be levied. Jobs will be lost. Today’s organizations are at risk, and that risk must be addressed.

Avoiding catastrophic failure through assuredness

As any IT person or Enterprise Architect well knows, completely preventing system failure is impossible. But mitigating system failure is not. Increasingly, the task of keeping systems from failing—rather than simply keeping them up and running—will fall to CTOs and Enterprise Architects.

When systems grow to a level of massive complexity that encompasses everything from old legacy hardware to Cloud infrastructures to worldwide data centers, how can we make sure those systems are reliable, highly available, secure and maintain optimal information flow while still operating at a maximum level that is cost effective?

In August, The Open Group introduced the first industry standard to address the risks associated with large complex systems, the Dependability through Assuredness™ (O-DA) Framework. This new standard is meant to help organizations both determine system risk and help prevent failure as much as possible.

O-DA provides guidelines to make sure large, complex, boundaryless systems run according to the requirements set out for them while also providing contingencies for minimizing damage when stoppage occurs. O-DA can be used as a standalone or in conjunction with an existing architecture development method (ADM) such as the TOGAF® ADM.

O-DA encompasses lessons learned within a number of The Open Group’s forums and work groups—it borrows from the work of the Security Forum’s Dependency Modeling (O-DM) and Risk Taxonomy (O-RT) standards and also from work done within the Open Group Trusted Technology Forum and the Real-Time and Embedded Systems Forums. Much of the work on this standard was completed thanks to the efforts of The Open Group Japan and its members.

This standard addresses the issue of responsibility for technical failures by providing a model for accountability throughout any large system. Accountability is at the core of O-DA because without accountability there is no way to create dependability or assuredness. The standard is also meant to address and account for the constant change that most organizations experience on a daily basis. The two underlying principles within the standard provide models for both a change accommodation cycle and a failure response cycle. Each cycle, in turn, provides instructions for creating a dependable and adaptable architecture, providing accountability for it along the way.


Ultimately, the O-DA will help organizations identify potential anomalies and create contingencies for dealing with problems before or as they happen. The more organizations can do to build dependability into large, complex systems, the fewer technical disasters should occur. As systems continue to grow and their boundaries continue to blur, assuredness through dependability and accountability will be an integral part of managing complex systems into the future.

Allen Brown

Allen Brown is President and CEO, The Open Group – a global consortium that enables the achievement of business objectives through IT standards.  For over 14 years Allen has been responsible for driving The Open Group’s strategic plan and day-to-day operations, including extending its reach into new global markets, such as China, the Middle East, South Africa and India. In addition, he was instrumental in the creation of the AEA, which was formed to increase job opportunities for all of its members and elevate their market value by advancing professional excellence.


Filed under Dependability through Assuredness™, Standards

Are You Ready for the Convergence of New, Disruptive Technologies?

By Chris Harding, The Open Group

The convergence of technical phenomena such as cloud, mobile and social computing, big data analysis, and the Internet of things that is being addressed by The Open Group’s Open Platform 3.0 Forum™ will transform the way that you use information technology. Are you ready? Take our survey at https://www.surveymonkey.com/s/convergent_tech

What the Technology Can Do

Mobile and social computing are leading the way. Recently, the launch of new iPhone models and the announcement of the Twitter stock flotation were headline news, reflecting the importance that these technologies now have for business. For example, banks use mobile text messaging to alert customers to security issues. Retailers use social media to understand their markets and communicate with potential customers.

Other technologies are close behind. In Formula One motor racing, sensors monitor vehicle operation and feed real-time information to the support teams, leading to improved design, greater safety, and lower costs. This approach could soon become routine for cars on the public roads too.

Many exciting new applications are being discussed. Stores could use sensors to capture customer behavior while browsing the goods on display, and give them targeted information and advice via their mobile devices. Medical professionals could monitor hospital patients and receive alerts of significant changes. Researchers could use shared cloud services and big data analysis to detect patterns in this information, and develop treatments, including for complex or uncommon conditions that are hard to understand using traditional methods. The potential is massive, and we are only just beginning to see it.

What the Analysts Say

Market analysts agree on the importance of the new technologies.

Gartner uses the term “Nexus of Forces” to describe the convergence and mutual reinforcement of social, mobility, cloud and information patterns that drive new business scenarios, and says that, although these forces are innovative and disruptive on their own, together they are revolutionizing business and society, disrupting old business models and creating new leaders.

IDC predicts that a combination of social cloud, mobile, and big data technologies will drive around 90% of all the growth in the IT market through 2020, and uses the term “third platform” to describe this combination.

The Open Group will identify the standards that will make Gartner’s Nexus of Forces and IDC’s Third Platform commercial realities. This will be the definition of Open Platform 3.0.

Disrupting Enterprise Use of IT

The new technologies are bringing new opportunities, but their use raises problems. In particular, end users find that working through IT departments in the traditional way is not satisfactory. The delays are too great for rapid, innovative development. They want to use the new technologies directly – “hands on”.

Increasingly, business departments are buying technology directly, by-passing their IT departments. Traditionally, the bulk of an enterprise’s IT budget was spent by the IT department and went on maintenance. A significant proportion is now spent by the business departments, on new technology.

Business and IT are not different worlds any more. Business analysts are increasingly using technical tools, and even doing application development, using exposed APIs. For example, marketing folk do search engine optimization, use business information tools, and analyze traffic on Twitter. Such operations require less IT skill than formerly because the new systems are easy to use. Also, users are becoming more IT-savvy. This is a revolution in business use of IT, comparable to the use of spreadsheets in the 1980s.

Also, business departments are hiring traditional application developers, who would once have only been found in IT departments.

Are You Ready?

These disruptive new technologies are changing, not just the IT architecture, but also the business architecture of the enterprises that use them. This is a sea change that affects us all.

The introduction of the PC had a dramatic impact on the way enterprises used IT, taking much of the technology out of the computer room and into the office. The new revolution is taking it out of the office and into the pocket. Cell phones and tablets give you windows into the world, not just your personal collection of applications and information. Through those windows you can see your friends, your best route home, what your customers like, how well your production processes are working, or whatever else you need to conduct your life and business.

This will change the way you work. You must learn how to tailor and combine the information and services available to you, to meet your personal objectives. If your role is to provide or help to provide IT services, you must learn how to support users working in this new way.

To negotiate this change successfully, and take advantage of it, each of us must understand what is happening, and how ready we are to deal with it.

The Open Group is conducting a survey of people’s reactions to the convergence of Cloud and other new technologies. Take the survey, to input your state of readiness, and get early sight of the results, to see how you compare with everyone else.

To take the survey, visit https://www.surveymonkey.com/s/convergent_tech

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.


Filed under Cloud, Future Technologies, Open Platform 3.0, Platform 3.0

Why is Cloud Adoption Taking so Long?

By Chris Harding, The Open Group

At the end of last year, Gartner predicted that cloud computing would become an integral part of IT in 2013 (http://www.gartner.com/DisplayDocument?doc_cd=230929). This looks a pretty safe bet. The real question is, why is it taking so long?

Cloud Computing

Cloud computing is a simple concept. IT resources are made available as a service, via a communications network, within an environment that enables them to be used. It is used within enterprises to enable IT departments to meet users' needs more effectively, and by external providers to deliver better IT services to their enterprise customers.

There are established vendors of products to fit both of these scenarios. The potential business benefits are well documented. There are examples of real businesses gaining those benefits, such as Netflix as a public cloud user (see http://www.zdnet.com/the-biggest-cloud-app-of-all-netflix-7000014298/ ), and Unilever and Lufthansa as implementers of private cloud (see http://www.computerweekly.com/news/2240114043/Unilever-and-Lufthansa-Systems-deploy-Azure-Private-cloud ).

Slow Pace of Adoption

Yet we are still talking of cloud computing becoming an integral part of IT. In the 2012 Open Group Cloud ROI survey, less than half of the respondents’ organizations were using cloud computing, although most of the rest were investigating its use. (See http://www.opengroup.org/sites/default/files/contentimages/Documents/cloud_roi_formal_report_12_19_12-1.pdf ). Clearly, cloud computing is not being used for enterprise IT as a matter of routine.

Cloud computing is now at least seven years old. Amazon’s “Elastic Compute Cloud” was launched in August 2006, and there are services that we now regard as cloud computing, though they may not have been called that, dating from before then. Other IT revolutions – personal computers, for example – have reached the point of being an integral part of IT in half the time. Why has it taken Cloud so long?

The Reasons

One reason is that using Cloud requires a high level of trust. You can lock your PC in your office, but you cannot physically secure your cloud resources. You must trust the cloud service provider. Such trust takes time to earn.

Another reason is that, although it is a simple concept, cloud computing is described in a rather complex way. The widely-accepted NIST definition (see http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf ) has three service models and four deployment models, giving a total of twelve distinct delivery combinations. Each combination has different business drivers, and the three service models are based on very different technical capabilities. Real products, of course, often do not exactly correspond to the definition, and their vendors describe them in product-specific terms. This complexity often leads to misunderstanding and confusion.
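To make the arithmetic behind that complexity concrete, here is a minimal sketch (illustrative only; the class and field names are invented, and only the model names come from the NIST definition) that enumerates the twelve delivery combinations:

```java
import java.util.List;

public class NistCloudCombinations {
    // Service and deployment models as named in NIST SP 800-145.
    static final List<String> SERVICE_MODELS = List.of("SaaS", "PaaS", "IaaS");
    static final List<String> DEPLOYMENT_MODELS = List.of("Private", "Community", "Public", "Hybrid");

    public static void main(String[] args) {
        int count = 0;
        for (String deployment : DEPLOYMENT_MODELS) {
            for (String service : SERVICE_MODELS) {
                System.out.println(deployment + " cloud / " + service);
                count++;
            }
        }
        System.out.println(count + " distinct delivery combinations"); // prints 12
    }
}
```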

A third reason is that you cannot “mix and match” cloud services from different providers. The market is consolidating, with a few key players emerging as dominant at the infrastructure and platform levels. Each of them has its own proprietary interfaces. There are no real vendor-neutral standards. A recent Information Week article on Netflix (http://www.informationweek.co.uk/cloud-computing/platform/how-netflix-is-ruining-cloud-computing/240151650 ) describes some of the consequences. Customers are beginning to talk of “vendor lock-in” in a way that we haven’t seen since the days of mainframes.

The Portability and Interoperability Guide

The Open Group Cloud Computing Portability and Interoperability Guide addresses this last problem, by providing recommendations to customers on how best to achieve portability and interoperability when working with current cloud products and services. It also makes recommendations to suppliers and standards bodies on how standards and best practice should evolve to enable greater portability and interoperability in the future.

The Guide tackles the complexity of its subject by defining a simple Distributed Computing Reference Model. This model shows how cloud services fit into the mix of products and services used by enterprises in distributed computing solutions today. It identifies the major components of cloud-enabled solutions, and describes their portability and interoperability interfaces.

Platform 3.0

Cloud is not the only new game in town. Enterprises are looking at mobile computing, social computing, big data, sensors, and controls as new technologies that can transform their businesses. Some of these – mobile and social computing, for example – have caught on faster than Cloud.

Portability and interoperability are major concerns for these technologies too. There is a need for a standard platform to enable enterprises to use all of the new technologies, individually and in combination, and “mix and match” different products. This is the vision of the Platform 3.0 Forum, recently formed by The Open Group. The distributed computing reference model is an important input to this work.

The State of the Cloud

It is now at least becoming routine to consider cloud computing when architecting a new IT solution. The chances of it being selected, however, appear to be less than fifty-fifty, in spite of its benefits. The reasons include those mentioned above: lack of trust, complexity, and potential lock-in.

The Guide removes some of the confusion caused by the complexity, and helps enterprises assess their exposure to lock-in, and take what measures they can to prevent it.

The growth of cloud computing is starting to be constrained by lack of standards to enable an open market with free competition. The Guide contains recommendations to help the industry and standards bodies produce the standards that are needed.

Let’s all hope that the standards do appear soon. Cloud is, quite simply, a good idea. It is an important technology paradigm that has the potential to transform businesses, to make commerce and industry more productive, and to benefit society as a whole, just as personal computing did. Its adoption really should not be taking this long.

The Open Group Cloud Computing Portability and Interoperability Guide is available from The Open Group bookstore at https://www2.opengroup.org/ogsys/catalog/G135

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.


Filed under Platform 3.0

Enterprise Architecture in China: Who uses this stuff?

By Chris Forde, GM APAC and VP Enterprise Architecture, The Open Group

Since moving to China in March 2010 I have consistently heard a similar set of statements and questions, something like this….

“EA? That’s fine for Europe and America, who is using it here?”

“We know EA is good!”

“What is EA?”

“We don’t have the ability to do EA, is it a problem if we just focus on IT?”

And

“Mr Forde, your comment about western companies not discussing their EA programs because they view them as a competitive advantage is accurate here too; we don’t discuss we have one for that reason.” Following that statement the lady walked away smiling, having not introduced herself or her company.

Well some things are changing in China relative to EA and events organized by The Open Group; here is a snapshot from May 2013.

The Open Group held an Enterprise Architecture Practitioners Conference in Shanghai, China on May 22nd, 2013. The conference theme was EA and the spectrum of business value. The presentations were made by a mix of non-member and member organizations of The Open Group, most but not all based in China. The audience was mostly non-members from 55 different organizations in a range of industries. There was a good mix of customer, supplier, government and academic organizations presenting and in the audience. The conference proceedings are available to registered attendees of the conference and members of The Open Group. Livestream recordings will also be available shortly.

Organizations large and small presented on how EA has been integral to delivering business value. Here's the nutshell.

China

Huawei is a leading global ICT communications provider based in Shenzhen, China. They presented on EA applied to their business transformation program and the ongoing development of their core EA practice.

GKHB is a software services organization based in Chengdu, China. They presented on an architecture practice applied to real-time forestry and endangered species management.

Nanfang Media is a State-Owned Enterprise and the second-largest media organization in the country, based in Guangzhou, China. They presented on the need to rapidly transform themselves into a modern, integrated, digital organization.

McKinsey & Co, a management consulting company based in New York, USA, presented an analysis of a CIO survey they conducted with Peking University.

Mr Wang Wei, a Partner in the Shanghai office of McKinsey & Co's Business Technology Practice, reviewed the survey, which was conducted in co-operation with Peking University.


The survey of CIOs in China indicated a common problem of managing complexity in multiple dimensions: 1) "Theoretically" Common Business Functions, 2) Across Business Units with differing Operations and Product, 3) Across Geographies and Regions. The recommended approach was towards "Organic Integration" and to carefully determine what should be centralized and what should be distributed. An Architecture approach can help with managing and mitigating these realities. The survey also showed that the CIOs are evenly split between those dedicated to a traditional CIO role and those that have a dual Business and CIO role.

Mr Yang Li Chao, Director of EA and Planning at Huawei, and Ms Wang Liqun, leader of the EA Center of Excellence at Huawei, outlined the 5-year journey Huawei has been on to deal with the development, maturation and effectiveness of an Architecture practice in a company that has seen explosive growth and is competing on a global scale. They are necessarily paying a lot of attention to talent management and the development of their Architects, as these people are at the forefront of the company's business transformation efforts. Huawei constantly consults with experts on Architecture from around the world and incorporates what they consider best practice into their own method and framework, which is based on TOGAF®.

Mr He Kun, CIO of Nanfang Media, described the enormous pressures his traditional media organization is under, such as a concurrent loss of advertising and talent to digital media.

He gave an example where China Mobile has started its own digital newspaper leveraging its delivery platform. So, naturally, Nanfang Media is also undergoing a transformation and is looking to leverage its current advantages as a trusted source and its existing market position. The discipline of Architecture is a key enabler and serves as a foundation for clearly communicating a transformation approach to other business leaders. This does not mean using EA jargon, but communicating in the language of his peers for the purpose of obtaining funding to accomplish the transformation effectively.

Mr Chen Peng, Vice General Manager of GKHB Chengdu, described the use of an Architecture approach to managing precious national resources such as forestry, biodiversity and endangered species. He described the necessity for real-time information in observation, tracking and response in this area, and the necessity of "informationalization" of forestry in China as part of eGovernment initiatives, not only for the above topics but also for the country's growth, particularly in supplying the construction industry. The Architecture approach taken here is also based on TOGAF®.

The takeaway from this conference is that Enterprise Architecture is alive and well amongst certain organizations in China. It is being used in a variety of industries. Value is being realized by executives and practitioners, and delivered for both IT and business units. However, for many companies EA is still a new idea, and to date its value is unclear to them.

The speakers also made it clear that there are no easy answers: each organization has to find its own use of, and value from, Enterprise Architecture, and it is a learning journey. They expressed their appreciation that The Open Group and its standards provide a place where they can make connections, and both draw from and contribute to the body of work on Enterprise Architecture.


Filed under Enterprise Architecture, Enterprise Transformation, Professional Development, Standards, TOGAF, TOGAF®, Uncategorized

Flexibility, Agility and Open Standards

By Jose M. Sanchez Knaack, IBM

Flexibility and agility are terms used almost interchangeably these days as attributes of IT architectures designed to cope with rapidly changing business requirements. Did you ever wonder if they are actually the same? Don’t you have the feeling that these terms remain abstract and without a concrete link to the design of an IT architecture?

This post seeks to provide clear definitions for both flexibility and agility, and to explain how both relate to the design of IT architectures that exploit open standards. A 'real-life' example will help to make these concepts concrete and relevant to the Enterprise Architect's daily job.

First, here is some context on why flexibility and agility are increasingly important for businesses. Today, the average smartphone has more computing power than the original Apollo mission to the moon. We live in times of exponential change; the next technological revolution always seems to be just around the corner, and it is safe to say that the trend will continue, as nicely visualized in this infographic by TIME Magazine.

The average lifetime of a company in the S&P 500 has fallen by 80 percent since 1937. In other words, companies need to adapt fast to capitalize on business opportunities created by new technologies, or risk losing their leadership position.

Thus, flexibility and agility have become ever present business goals that need to be supported by the underlying IT architecture. But, what is the precise meaning of these two terms? The online Merriam-Webster dictionary offers the following definitions:

Flexible: characterized by a ready capability to adapt to new, different, or changing requirements.

Agile: marked by ready ability to move with quick easy grace.

To understand how these terms relate to IT architecture, let us explore an example based on an Enterprise Service Bus (ESB) scenario.

An ESB can be seen as the foundation for a flexible IT architecture allowing companies to integrate applications (processes) written in different programming languages and running on different platforms within and outside the corporate firewall.

ESB products are normally equipped with a set of pre-built adapters that allow integrating 70-80 percent of applications ‘out-of-the-box’, without additional programming efforts. For the remaining 20-30 percent of integration requirements, it is possible to develop custom adapters so that any application can be integrated with any other if required.

In other words, an ESB covers requirements regarding integration flexibility, that is, it can cope with changing requirements in terms of integrating additional applications via adapters, ‘out-of-the-box’ or custom built. How does this integration flexibility correlate to integration agility?
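The difference becomes easier to see with a small sketch. The code below is purely illustrative: the EsbAdapter interface and both classes are hypothetical and do not correspond to any particular ESB product's API. It contrasts an application that speaks an open, standard protocol (integrated via an adapter that already exists) with one that speaks a proprietary protocol (for which an adapter must still be designed, built, tested and deployed):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical adapter contract that an ESB might expose; names are invented for illustration.
interface EsbAdapter {
    void connect(String endpoint) throws Exception; // open a session with the target application
    void send(byte[] message) throws Exception;     // forward a message from the bus to the application
}

// "Pre-built" adapter: the application speaks an open standard (plain HTTP in this sketch),
// so integration is mostly configuration -- point the adapter at an endpoint and go.
class HttpAdapter implements EsbAdapter {
    private URI target;

    public void connect(String endpoint) {
        target = URI.create(endpoint);
    }

    public void send(byte[] message) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(target)
                .POST(HttpRequest.BodyPublishers.ofByteArray(message))
                .build();
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());
    }
}

// "Custom" adapter: the old manufacturing application uses a proprietary protocol, so this
// class still has to be designed, coded, tested and deployed -- the work that makes the
// one-month deadline in the scenario below unrealistic.
class LegacyManufacturingAdapter implements EsbAdapter {
    public void connect(String endpoint) {
        throw new UnsupportedOperationException("proprietary protocol: adapter not yet built");
    }

    public void send(byte[] message) {
        throw new UnsupportedOperationException("proprietary protocol: adapter not yet built");
    }
}
```

In flexibility terms both applications can eventually be integrated; the agility question is how quickly the second adapter can be made real.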

Let's think of a scenario where the IT team has been asked to integrate an old manufacturing application with a new business partner. The integration needs to be ready within one month; otherwise the targeted business opportunity will no longer be available.

The picture below shows the underlying IT architecture for this integration scenario.


Although the ESB is able to integrate the old manufacturing application, an adapter has to be custom developed, since the application does not support any of the communication protocols covered by the pre-built adapters. Custom developing, testing and deploying an adapter in a corporate environment is likely to take longer than a month, so the business opportunity will be lost because the IT architecture was not agile enough.

This is the subtle difference between flexible and agile.

Notice that if the manufacturing application had been able to communicate via open standards, the corresponding pre-built adapter would have significantly shortened the time required to integrate it. Applications that do not support open standards still exist in corporate IT landscapes, as the scenario above illustrates. Hence the importance of incorporating open standards when road-mapping your IT architecture.

The key takeaway is that your architecture principles need to favor information technology built on open standards, and for that, you can leverage The Open Group Architecture Principle 20 on Interoperability.

Name: Interoperability
Statement: Software and hardware should conform to defined standards that promote interoperability for data, applications, and technology.

In summary, the accelerating pace of change requires corporate IT architectures to support the business goals of flexibility and agility. Establishing architecture principles that favor open standards as part of your architecture governance framework is one proven approach (although not the only one) to road map your IT architecture in the pursuit of resiliency.

Jose M. Sanchez Knaack is a Senior Manager with IBM Global Business Services in Switzerland. Mr. Sanchez Knaack's professional background covers business-aligned IT architecture strategy and complex system integration for global technology-enabled transformation initiatives.



Filed under Enterprise Architecture

Why Business Needs Platform 3.0

By Chris Harding, The Open Group

The Internet gives businesses access to ever-larger markets, but it also brings more competition. To prosper, they must deliver outstanding products and services. Often, this means processing the ever-greater, and increasingly complex, data that the Internet makes available. The question they now face is, how to do this without spending all their time and effort on information technology.

Web Business Success

The success stories of giants such as Amazon are well-publicized, but there are other, less well-known companies that have profited from the Web in all sorts of ways. Here’s an example. In 2000 an English illustrator called Jacquie Lawson tried creating greetings cards on the Internet. People liked what she did, and she started an e-business whose website is now ranked by Alexa as number 2712 in the world, and #1879 in the USA. This is based on website traffic and is comparable, to take a company that may be better known, with toyota.com, which ranks slightly higher in the USA (#1314) but somewhat lower globally (#4838).

A company with a good product can grow fast. This also means, though, that a company with a better product, or even just better marketing, can eclipse it just as quickly. Social networking site Myspace was once the most visited site in the US. Now it is ranked by Alexa as #196, way behind Facebook, which is #2.

So who ranks as #1? You guessed it – Google. Which brings us to the ability to process large amounts of data, where Google excels.

The Data Explosion

The World-Wide Web probably contains over 13 billion pages, yet you can often find the information that you want in seconds. This is made possible by technology that indexes this vast amount of data – measured in petabytes (millions of gigabytes) – and responds to users’ queries.

The data on the world-wide-web originally came mostly from people, typing it in by hand. In future, we will often use data that is generated by sensors in inanimate objects. Automobiles, for example, can generate data that can be used to optimize their performance or assess the need for maintenance or repair.

The world population is measured in billions. It is estimated that the Internet of Things, in which data is collected from objects, could enable us to track 100 trillion objects in real time – ten thousand times as many things as there are people, tirelessly pumping out information. The amount of available data of potential value to businesses is set to explode yet again.

A New Business Generation

It’s not just the amount of data to be processed that is changing. We are also seeing changes in the way data is used, the way it is processed, and the way it is accessed. Following The Open Group conference in January, I wrote about the convergence of social, Cloud, and mobile computing with Big Data. These are the new technical trends that are taking us into the next generation of business applications.

We don't yet know what all those applications will be – who in the 1990s would have predicted greetings cards as a Web application – but there are some exciting ideas. They range from using social media to produce market forecasts to alerting hospital doctors via tablets and cellphones when monitors detect patient emergencies. All this, and more, is possible with technology that we have now, if we can use it.

The Problem

But there is a problem. Although there is technology that enables businesses to use social, Cloud, and mobile computing, and to analyze and process massive amounts of data of different kinds, it is not necessarily easy to use. A plethora of products is emerging, with different interfaces, and with no ability to work with each other.  This is fine for geeks who love to play with new toys, but not so good for someone who wants to realize a new business idea and make money.

The new generation of business applications cannot be built on a mish-mash of unstable products, each requiring a different kind of specialist expertise. It needs a solid platform, generally understood by enterprise architects and software engineers, who can translate the business ideas into technical solutions.

The New Platform

Former VMware CEO and current Pivotal Initiative leader Paul Maritz describes the situation very well in his recent blog on GigaOM. He characterizes the new breed of enterprises, that give customers what they want, when they want it and where they want it, by exploiting the opportunities provided by new technologies, as consumer grade. Paul says that, “Addressing these opportunities will require new underpinnings; a new platform, if you like. At the core of this platform, which needs to be Cloud-independent to prevent lock-in, will be new approaches to handling big and fast (real-time) data.”

The Open Group has announced its new Platform 3.0 Forum to help the industry define a standard platform to meet this need. As The Open Group CTO Dave Lounsbury says in his blog, the new Forum will advance The Open Group vision of Boundaryless Information Flow™ by helping enterprises to take advantage of these convergent technologies. This will be accomplished by identifying a set of new platform capabilities, and architecting and standardizing an IT platform by which enterprises can reap the business benefits of Platform 3.0.

Business Focus

A business set up to design greetings cards should not spend its time designing communications networks and server farms. It cannot afford to spend time on such things; if it does, someone else will focus on its core business and take its market.

The Web provided a platform that businesses of its generation could build on to do what they do best without being overly distracted by the technology. Platform 3.0 will do this for the new generation of businesses.

Help It Happen!

To find out more about the Platform 3.0 Forum, and take part in its formation, watch out for the Platform 3.0 web meetings that will be announced by e-mail and twitter, and on our home page.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Filed under Platform 3.0

Gaining Greater Cohesion: Bringing Business Analysis and Business Architecture into Focus

By Craig Martin, Enterprise Architects

Having delivered many talks on Business Architecture over the years, I’m often struck by the common vision driving many members in the audience – a vision of building cohesion in a business, achieving the right balance between competing forces and bringing the business strategy and operations into harmony.  However, as with many ambitious visions, the challenge in this case is immense.  As I will explain, many of the people who envision this future state of nirvana are, in practice, inadvertently preventing it from happening.

Standards Silos
There are a host of standards and disciplines that are brought into play by enterprises to improve business performance and capabilities. For example, standards such as PRINCE2, BABOK, BIZBOK, TOGAF, COBIT, ITIL and PMBOK are designed to ensure reliability of team output and approach across various business activities. However, in many instances these standards, operating together, present important gaps and overlaps. One wonders whose job it is to integrate and unify these standards. Whose job is it to understand the business requirements, business processes, drivers, capabilities and so on?

Apples to Apples?
As these standards evolve they often introduce new jargon to support their view of the world. Have you ever had to ask your business to explain what they do on a single page? The diversity of the views and models can be quite astonishing:

  • The target operating model
  • The business model
  • The process model
  • The capability model
  • The value chain model
  • The functional model
  • The business services model
  • The component business model
  • The business reference model
  • Business anchor model

The list goes on and on…

Each has a purpose and brings value in isolation. However, in the common scenario where they are developed using differing tools, methods, frameworks and techniques, the result is usually greater fragmentation, not more cohesion – and consequently we can end up with some very confused and exasperated business stakeholders who care less about what standard we use and more about finding clarity to just get the job done.

The Convergence of Business Architecture and Business Analysis
Ask a room filled with business analysts and business architects how their jobs differ and relate, and I guarantee that you would receive a multitude of alternative and sometimes conflicting perspectives.

Both of these disciplines try to develop standardised methods and frameworks for the description of the building blocks of an organization. They also seek to standardise the means by which to string them together to create better outcomes.

In other words, they are the disciplines that seek to create balance between two important business goals:

  • To produce consistent, predictable outcomes
  • To produce outcomes that meet desired objectives

In his book, “The Design of Business: Why Design Thinking is the Next Competitive Advantage,” Roger Martin describes the relationships and trade-offs between analytical thinking and intuitive thinking in business. He refers to the “knowledge funnel,” which charts the movement of business focus from solving business mysteries using heuristics to creating algorithms that increase reliability, reducing business complexity and costs and improving business performance.

The disciplines of Business Architecture and business analysis are both currently seeking to address this challenge. Martin refers to this as "design thinking."


Vision Vs. Reality For Business Analysts and Business Architects

When examining the competency models for business analysis and Business Architecture, the desire is to position these two disciplines right across the spectrum of reliability and validity.

The reality is that both the business architect and the business analyst spend a large portion of their time in the reliability space, and I believe I’ve found the reason why.

Both the BABOK and the BIZBOK provide a body of knowledge focused predominantly around the reliability space. In other words, they look at how we define the building blocks of an organization, and less so at how we invent better building blocks within the organization.

Integrating the Disciplines

While we still have some way to go to integrate, the Business Architecture and business analysis disciplines are currently bringing great value to business through greater reliability and repeatability.

However, there is a significant opportunity to enable the intuitive thinkers to look at the bigger picture and identify opportunities to innovate their business models, their go-to-market, their product and service offerings and their operations.

Perhaps we might consider introducing a new function to bridge and unify the disciplines?

This newly created function might integrate a number of incumbent roles and functions and cover:

  • A holistic structural view covering the business model and the high-level relationships and interactions between all business systems
  • A market model view in which the focus is on understanding the market dynamics, segments and customer need
  • A products and services model view focusing on customer experience, value proposition, product and service mix and customer value
  • An operating model view – this is the current focus area of the business architect and business analyst. You need these building blocks defined in a reliable, repeatable and manageable structure. This enables agility within the organization and will support the assembly and mixing of building blocks to improve customer experience and value

At the end of the day, what matters most is not business analysis or Business Architecture themselves, but how the business will bridge the reliability and validity spectrum to reliably produce desired business outcomes.

I will discuss this topic in more detail at The Open Group Conference in Sydney, April 15-18, which will be the first Open Group event to be held in Australia.

Craig Martin is the Chief Operating Officer and Chief Architect at Enterprise Architects, a specialist Enterprise Architecture firm operating in the U.S., UK, Asia and Australia. He is presenting the Business Architecture plenary at the upcoming Open Group conference in Sydney.


Filed under Business Architecture

Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when it needs doing, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group initiatives, Quantum Lifecycle management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the big-data processing needed to achieve this aim.

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is in one of the areas that I support as forum director). Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever changing tools to handle Big Data of ever increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day's plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the "third platform" – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it has major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishing boat bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

2 Comments

Filed under Conference

The Open Group Approves EMMM Technical Standard for Natural Resources Industry

By The Open Group Staff

The Open Group, a vendor- and technology-neutral consortium, which is represented locally by Real IRM, has approved the Exploration and Mining Business Reference Model (EM Model) as an Open Group Technical Standard. This is the first approved standard for the natural resources industry developed by the Exploration, Mining, Metals and Minerals (EMMM™) Forum, a Forum of The Open Group.

The development of the EM Model was overseen by The Open Group South Africa, and is the first step toward establishing a blueprint for organisations in the natural resources industry, providing standard operating practices and support for vendors delivering technical and business solutions to the industry.

“Designed to cater to business activities across a variety of different types of mining organisations, the model is helping companies align both their business and technical procedures to provide better measures for shared services, health, safety and environmental processes,” says Sarina Viljoen, senior consultant at Real IRM and Forum Director of The Open Group EMMM Forum. “I can confirm that the business reference model was accepted as an Open Group standard and will now form part of the standards information base.”

This is a significant development as the EMMM Forum aims to enable sustainable business value through collaboration around a common reference framework, and to support vendors in their delivery of technical and business solutions. Its outputs are common reference deliverables such as mining process, capability and information models. The first technical standard in the business space, the EM Model focuses on business processes within the exploration and mining sectors.

“Using the EM Model as a reference with clients allows us to engage with any client and any mining method. Since the model first went public I have not used anything else as a basis for discussion,” says Mike Woodhall, a mining executive with MineRP, one of the world’s largest providers of mining technical software, support and mining consulting services. “The EM Model captures the mining business generically and allows us and the clients to discuss further levels of detail based on understanding the specifics of the mining method. This is one of the two most significant parts of the exercise: the fact we have a multiparty definition – no one person could have produced the model – and the fact that we could capture it legibly on one page.”

Viljoen adds that Forum member organisations find the collaboration especially useful as it drives insight and clarity on shared challenges: “The Forum has built on the very significant endorsement of its first business process model by Gartner in its report ‘Process for Defining Architecture in an Integrated Mining Enterprise, 2020.’

“In the report, Gartner suggests that companies in the mining industry look to enterprise architectures as a way of creating better efficiencies and integration across the business, information and technology processes within mining companies,” says Viljoen.

Gartner highlights the following features of the EM Model as being particularly important in its approach, differing from many traditional models that have been developed by mining companies themselves:

  • Breadth – covers all aspects of mining and mining-related activities
  • Scale-Independent – suitable for businesses of any size, up to the largest enterprise corporations
  • Product and Mining-Method Neutral – supports all products and mining methods
  • Extended and Extensible Model – provides a general level of process detail that can be extended by organisations to the activity or task level, as appropriate

The EM Model is available for download from The Open Group Bookstore here.

About The Open Group Exploration, Mining, Metals and Minerals Forum

The Open Group Exploration, Mining, Metals and Minerals (EMMM™) Forum is a global, vendor-neutral collaboration where members work to create a reference framework containing applicable standards for the exploration and  mining industry focused on all metals and minerals. The EMMM Forum functions to realize sustainable business value for the organisations within the industry through collaboration, and to support vendors in their delivery of technical and business solutions.

About Real IRM Solutions

Real IRM is the leading South African enterprise architecture specialist, offering a comprehensive portfolio of products and services to local and international organisations. www.realirm.com.

About The Open Group

The Open Group is an international vendor- and technology-neutral consortium upon which organizations rely to lead the development of IT standards and certifications, and to provide them with access to key industry peers, suppliers and best practices. The Open Group provides guidance and an open environment in order to ensure interoperability and vendor neutrality. Further information on The Open Group can be found at www.opengroup.org.

1 Comment

Filed under EMMMv™

The Open Group works with Microsoft to create Open Management Infrastructure

By Martin Kirk, The Open Group

Most data centers are composed of many different kinds of hardware, often including a mish-mash of products made by various vendors and manufacturers at various stages of their product lifecycles. This makes data center management a bit of a nightmare for administrators because it has been difficult to centralize management on one common platform. In the past, this conundrum has forced companies to do one of two things – write their own proprietary abstraction layer to manage the different types of hardware, or buy all of the same type of hardware and be subject to vendor lock-in.

Today, building cloud infrastructures has exacerbated the problem of datacenter management and automation. To solve this, the notion of a datacenter abstraction layer (DAL) has evolved that will allow datacenter elements (network, storage, server, power and platform) to be managed and administered in a standard and consistent manner. Additionally, this will open up datacenter infrastructure management to any management application that chooses to support this standards-based management approach.

The Open Group has been working with a number of industry-leading companies for more than 10 years on the OpenPegasus Project, an open-source implementation of Distributed Management Task Force (DMTF) Common Information Model (CIM) as well as the DMTF Web Services for Management (WS-Management) standard. The OpenPegasus Project led the industry in implementing the DMTF CIM/WS-Management standards and has been provided as the standard solution on a very wide variety of IT platforms.  Microsoft has been a sponsor of the OpenPegasus Project for 4 years and has contributed greatly to the project.

Microsoft has also developed another implementation of the DMTF CIM/WS-Management standards and, based on their work together on the OpenPegasus Project, has brought this to The Open Group where it has become the Open Management Infrastructure (OMI) Project. Both Projects are now organized under the umbrella of the Open Management Project as a collection of open-source management projects.

OMI is a highly portable, easy-to-implement, high-performance CIM/WS-Management object manager designed specifically to implement the DMTF standards. OMI is written to be easy to implement on Linux and UNIX® systems. It will empower datacenter device vendors to compile and implement a standards-based management service into any device or platform in a clear and consistent way. The Open Group has made the source code for OMI available under an Apache 2 license.

OMI provides the following benefits (from Microsoft’s blog post on the announcement):

  • DMTF Standards Support: OMI implements its CIMOM server according to the DMTF standard.
  • Small System Support: OMI is designed to also be implemented in small systems (including embedded and mobile systems).
  • Easy Implementation: Greatly shortened path to implementing WS-Management and CIM in your devices/platforms.
  • Remote Manageability: Instant remote manageability from Windows and non-Windows clients and servers as well as other WS-Management-enabled platforms.
  • API Compatibility with WMI: Providers and management applications can be written on Linux and Windows by using the same APIs.
  • Support for CIM IDE: Tools for generating and developing CIM providers, such as Visual Studio’s CIM IDE.

Making OMI available to the public as an open-source package allows companies of all sizes to more easily implement standards-based management into any device or platform. The long-term vision for the project is to provide a standard that allows any device to be managed clearly and consistently, as well as create an ecosystem of products that are based on open standards that can be more easily managed.
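
As a rough illustration of what standards-based management makes possible, the sketch below sends a WS-Management Identify request to a managed node using only the Python standard library. The host name, port and the assumption that the endpoint accepts an unauthenticated Identify request are illustrative; which port an OMI (or other CIM/WS-Management) listener uses, and whether it requires authentication, depends on how it has been configured.

    # Minimal WS-Management "Identify" probe using only the Python standard library.
    # The host, port and unauthenticated access below are assumptions for illustration.
    import urllib.request

    HOST = "linux-server.example.com"   # hypothetical managed node
    PORT = 5985                         # a common WS-Management HTTP port; configuration-dependent

    # Standard wsmid:Identify envelope defined by the DMTF WS-Management specification.
    IDENTIFY = """<s:Envelope
        xmlns:s="http://www.w3.org/2003/05/soap-envelope"
        xmlns:wsmid="http://schemas.dmtf.org/wbem/wsman/identity/1/wsmanidentity.xsd">
      <s:Header/>
      <s:Body>
        <wsmid:Identify/>
      </s:Body>
    </s:Envelope>"""

    request = urllib.request.Request(
        url=f"http://{HOST}:{PORT}/wsman",
        data=IDENTIFY.encode("utf-8"),
        headers={"Content-Type": "application/soap+xml;charset=UTF-8"},
        method="POST",
    )

    # The response is a SOAP envelope identifying the product vendor, version and
    # supported protocol dialects.
    with urllib.request.urlopen(request, timeout=10) as response:
        print(response.read().decode("utf-8"))

In principle the same request can be addressed to any conformant WS-Management implementation, subject to its authentication configuration; that interchangeability is what a common management standard buys you.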

To read Microsoft’s blog on the announcement, please go to: http://blogs.technet.com/b/windowsserver/archive/2012/06/28/open-management-infrastructure.aspx

If you are interested in getting involved in OMI or OpenPegasus, please email omi-interest@opengroup.org.

Martin Kirk is a Program Director at The Open Group. Previously the head of the Operating System Technology Centre at British Telecom Research Labs, Mr. Kirk has been with The Open Group since 1990.

 

1 Comment

Filed under Standards

Three Best Practices for Successful Implementation of Enterprise Architecture Using the TOGAF® Framework and the ArchiMate® Modeling Language

By Henry Franken, Sven van Dijk and Bas van Gils, BiZZdesign

The discipline of Enterprise Architecture (EA) was developed in the 1980s with a strong focus on the information systems landscape of organizations. Since those days, the scope of the discipline has slowly widened to include more and more aspects of the enterprise as a whole. This holistic perspective takes into account the concerns of a wide variety of stakeholders. Architects, especially at the strategic level, attempt to answer the question: “How should we organize ourselves in order to be successful?”

An architecture framework is a foundational structure or set of structures for developing a broad range of architectures and consists of a process and a modeling component. The TOGAF® framework and the ArchiMate® modeling language – both maintained by The Open Group – are two leading and widely adopted standards in this field.


While both the TOGAF framework and the ArchiMate modeling language have a broad (enterprise-wide) scope and provide a practical starting point for an effective EA capability, a key factor is the successful embedding of EA standards and tools in the organization. From this perspective, the implementation of EA means that an organization adopts processes for the development and governance of EA artifacts and deliverables. Standards need to be tailored, and tools need to be configured in the right way in order to create the right fit. Or more popularly stated, “For an effective EA, it has to walk the walk, and talk the talk of the organization!”

EA touches on many aspects such as business, IT (and especially the alignment of these two), strategic portfolio management, project management and risk management. EA is by definition about cooperation and therefore it is impossible to operate in isolation. Successful embedding of an EA capability in the organization is typically approached as a change project with clearly defined goals, metrics, stakeholders, appropriate governance and accountability, and with assigned responsibilities in place.

With this in mind, we share three best practices for the successful implementation of Enterprise Architecture:

Think big, start small

The potential footprint of a mature EA capability is as big as the entire organization, but one of the key factors in being successful with EA is to deliver value early on. Experience from our consultancy practice shows that a “think big, start small” approach has the most potential for success. This means that implementing an EA capability is a process of iterative and incremental steps, based on a long-term vision. Each step in the process must add measurable value to the EA practice, and priorities should be based on the needs and the change capacity of the organization.

Combine process and modeling

The TOGAF framework and the ArchiMate modeling language are a powerful combination. Deliverables in the architecture process are more effective when based on an approach that combines formal models with powerful visualization capabilities.

The TOGAF standard describes the architecture process in detail. The Architecture Development Method (ADM) is the core of the TOGAF standard. The ADM is a customer-focused and value-driven process for the sustainable development of a business capability. The ADM specifies deliverables throughout the architecture life-cycle with a focus on the effective communication to a variety of stakeholders. ArchiMate is fully complementary to the content as specified in the TOGAF standard. The ArchiMate standard can be used to describe all aspects of the EA in a coherent way, while tailoring the content for a specific audience. Even more, an architecture repository is a valuable asset that can be reused throughout the enterprise. This greatly benefits communication and cooperation of Enterprise Architects and their stakeholders.
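
To make the idea of a reusable architecture repository a little more concrete, here is a deliberately simplified sketch. It holds a handful of ArchiMate-style elements and relationships as plain Python data and derives two different views from the same model – one for business stakeholders, one for application owners. The element names, layers and relationship types are invented for illustration; a real repository would of course live in an EA tool and use the full ArchiMate metamodel rather than a dictionary.

    # Toy "architecture repository": one model, several stakeholder-specific views.
    # All element names, layers and relationships are invented for illustration only.

    elements = {
        "Handle Claim":          "Business Process",
        "Claims Administration": "Application Service",
        "Claims System":         "Application Component",
        "Policy Database":       "Technology Service",
    }

    # (source, relationship, target) triples, loosely in the spirit of ArchiMate relationships.
    relations = [
        ("Claims Administration", "serves",   "Handle Claim"),
        ("Claims System",         "realizes", "Claims Administration"),
        ("Policy Database",       "serves",   "Claims System"),
    ]

    def view(layers):
        """Return only the relations whose source and target both sit in the given layers."""
        keep = {name for name, layer in elements.items() if layer in layers}
        return [r for r in relations if r[0] in keep and r[2] in keep]

    # Business-facing view: processes and the application services that serve them.
    print(view({"Business Process", "Application Service"}))

    # Application-owner view: components, the services they realize, and supporting technology.
    print(view({"Application Service", "Application Component", "Technology Service"}))

Even at this toy scale, the benefit of keeping one model and deriving many views from it – rather than maintaining a separate diagram per audience – is visible.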

Use a tool!

It is true, “a fool with a tool is still a fool.” In our teaching and consulting practice we have found, however, that adoption of a flexible and easy-to-use tool can be a strong driver in pushing the EA initiative forward.

EA brings together valuable information that greatly enhances decision making, whether at a strategic or a more operational level. This knowledge not only needs to be efficiently managed and maintained, it also needs to be communicated to the right stakeholder at the right time and, even more importantly, in the right format. EA has a diverse audience with business and technical backgrounds, and each stakeholder needs to be addressed in a language they understand. The essential qualifications for EA tools are therefore rigor in the management and maintenance of knowledge, and flexibility in the analysis (ad hoc, what-if, etc.), presentation and communication of that information to diverse audiences.

So what you are looking for is a tool with solid repository capabilities and flexible modeling and analysis functionality.

Conclusion

EA brings value to the organization because it answers more accurately the question: “How should we organize ourselves?” Standards for EA help organizations capitalize on their EA investments more quickly. The TOGAF framework and the ArchiMate modeling language are popular, widespread, open and complete standards for EA, from both a process and a language perspective. EA becomes even more effective if these standards are used in the right way. The EA capability needs to be carefully embedded in the organization; this is usually a process based on a long-term vision, with the most potential for success if approached as “think big, start small.” Enterprise Architects can benefit from tool support, provided that it supports flexible presentation of content, so that it can be tailored for communication to specific audiences.

More information on this subject can be found on our website: www.bizzdesign.com. Whitepapers are available for download, and our blog section features a number of very interesting posts regarding the subjects covered in this paper.

If you would like to know more or comment on this blog, please do not hesitate to contact us directly!

Henry Franken

Henry Franken is the managing director of BiZZdesign and chair of The Open Group ArchiMate Forum. In that capacity, Henry led the development of the ArchiMate Version 2.0 standard. Henry is a speaker at many conferences and has co-authored several international publications and Open Group White Papers. Henry is co-founder of the BPM-Forum. At BiZZdesign, Henry is responsible for research and innovation.

 

 

Sven van Dijk, MSc, is a consultant and trainer at BiZZdesign North America. He worked as an application consultant on large-scale ERP implementations, and as a business consultant on information management and IT strategy projects in industries such as finance and construction. He has nearly eight years of experience in applying structured methods and tools for Business Process Management and Enterprise Architecture.

 

Bas van Gils is a consultant, trainer and researcher for BiZZdesign. His primary focus is on the strategic use of enterprise architecture. Bas has worked in several countries, across a wide range of organizations in industry, retail, and (semi)governmental settings. Bas is passionate about his work, has published in various professional and academic journals and writes for several blogs.

2 Comments

Filed under ArchiMate®, Enterprise Architecture, TOGAF®

Successful Enterprise Architecture using the TOGAF® and ArchiMate® Standards

By Henry Franken, BiZZdesign

The discipline of Enterprise Architecture was developed in the 1980s with a strong focus on the information systems landscape of organizations. Since those days, the scope of the discipline has slowly widened to include more and more aspects of the enterprise as a whole. This holistic perspective takes into account the concerns of a wide variety of stakeholders. Architects, especially at the strategic level, attempt to answer the question “How should we organize ourselves in order to be successful?”

An architecture framework is a foundational structure, or set of structures, that can be used for developing a broad range of different architectures; it consists of a process and a modeling component. The TOGAF® framework and the ArchiMate® modeling language – both maintained by The Open Group – are the two leading standards in this field.


Much has been written on this topic in online forums, whitepapers, and blogs. On the BiZZdesign blog we have published several series on EA in general and these standards in particular, with a strong focus on the question: what should we do to be successful with EA using TOGAF framework and the ArchiMate modeling language? I would like to summarize some of our findings here:

Tip 1: One of the key factors in being successful with EA is to deliver value early on. We have found that organizations that combine a long-term vision with incremental delivery (“think big, act small”) have a larger chance of developing an effective EA capability.
 
Tip 2: Combine process and modeling. The TOGAF framework and the ArchiMate modeling language are a powerful combination. Deliverables in the architecture process are more effective when based on an approach that combines formal models with powerful visualization capabilities. Moreover, an architecture repository is a valuable asset that can be reused throughout the enterprise.
 
Tip 3: Use a tool! It is true that “a fool with a tool is still a fool”. In our teaching and consulting practice we have found, however, that adoption of a flexible and easy-to-use tool can be a strong driver in pushing the EA initiative forward.

There will be several interesting presentations on this subject at the upcoming Open Group conference (Newport Beach, CA, USA, January 28-31), ranging from theory to practical case studies, and focusing both on getting started with EA and on advanced topics.

I will also present on this subject and will elaborate on the combined use of The Open Group standards for EA. I warmly invite you to join me at the panel sessions as well. I look forward to seeing you there!

Henry Franken is the managing director of BiZZdesign and chair of The Open Group ArchiMate Forum. In that capacity, Henry led the development of the ArchiMate Version 2.0 standard. Henry is a speaker at many conferences and has co-authored several international publications and Open Group White Papers. Henry is co-founder of the BPM-Forum. At BiZZdesign, Henry is responsible for research and innovation.

2 Comments

Filed under ArchiMate®, Enterprise Architecture, TOGAF®

Operational Resilience through Managing External Dependencies

By Ian Dobson & Jim Hietala, The Open Group

These days, organizations are rarely self-contained. Businesses collaborate through partnerships and close links with suppliers and customers. Outsourcing services and business processes, including to Cloud Computing providers, means that key operations an organization depends on are often fulfilled outside its control.

The challenge is how to manage the dependencies your operations have on factors that are outside your control. The goal is to perform your risk management so that it optimizes operational success by making your operations resilient to failures in those external dependencies.

The Open Group’s Dependency Modeling (O-DM) standard specifies how to construct a dependency model to manage risk and build trust over organizational dependencies between enterprises – and between operational divisions within a large organization. The standard involves constructing a model of the operations necessary for an organization’s success, including the dependencies that can affect each operation. Applying quantitative risk sensitivities to each dependency then reveals the operations with the highest exposure to the risk of not being successful, informing business decision-makers where investment in reducing their organization’s exposure to external risks will yield the best return.
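
As an illustration of the kind of calculation this enables – not the O-DM notation itself – the following sketch models a few operations, the external dependencies each relies on, and an assumed probability of failure for each dependency, then ranks the operations by exposure. The operations, dependencies and probabilities are all invented for the example.

    # Toy dependency model: which operations are most exposed to external failures?
    # Operations, dependencies and failure probabilities are invented for illustration.

    dependencies = {                 # assumed annual probability that each dependency fails
        "cloud hosting provider": 0.05,
        "logistics partner":      0.10,
        "payment processor":      0.02,
    }

    operations = {                   # each operation and the external dependencies it relies on
        "take customer orders":      ["cloud hosting provider", "payment processor"],
        "fulfil orders":             ["cloud hosting provider", "logistics partner"],
        "publish product catalogue": ["cloud hosting provider"],
    }

    def success_probability(deps):
        """Probability that every dependency of an operation holds up (independence assumed)."""
        p = 1.0
        for d in deps:
            p *= 1.0 - dependencies[d]
        return p

    # Rank operations by exposure: the probability that at least one dependency fails.
    for name, deps in sorted(operations.items(),
                             key=lambda item: success_probability(item[1])):
        print(f"{name}: exposure {1.0 - success_probability(deps):.1%}")

Even in this toy example, reducing the sensitivity of a single shared dependency (the hosting provider here) improves the exposure of every operation that relies on it – exactly the kind of investment trade-off the O-DM analysis is intended to inform.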

O-DM helps you to plan for success through operational resilience, assured business continuity, and effective new controls and contingencies, enabling you to:

  • Cut costs without losing capability
  • Make the most of tight budgets
  • Build a resilient supply chain
  • Lead programs and projects to success
  • Measure, understand and manage risk from outsourcing relationships and supply chains
  • Deliver complex event analysis

The O-DM analytical process facilitates organizational agility by allowing you to easily adjust and evolve your organization’s operations model, and produces rapid results to illustrate how reducing the sensitivity of your dependencies improves your operational resilience. O-DM also allows you to drill as deep as you need to go to reveal your organization’s operational dependencies.

Training on the development of operational dependency models conforming to the O-DM standard is available, as are software computation tools that automate speedy delivery of actionable results in graphic formats to facilitate informed business decision-making.

The O-DM standard represents a significant addition to The Open Group’s existing Risk Management publications.

The O-DM standard may be accessed here.

Ian Dobson is the director of the Security Forum and the Jericho Forum for The Open Group, coordinating and facilitating the members in achieving their goals in our challenging information security world. In the Security Forum, his focus is on supporting the development of open standards and guides on security architectures and management of risk and security, while in the Jericho Forum he works with members to anticipate the requirements for the security solutions we will need in the future.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

1 Comment

Filed under Cybersecurity, Security Architecture

Viewpoint: Technology Supply Chain Security – Becoming a Trust-Worthy Provider

By Andras Szakal, IBM

Increasingly, the critical systems of the planet — telecommunications, banking, energy and others — depend on and benefit from the intelligence and interconnectedness enabled by existing and emerging technologies. As evidence, one need only look to the increase in enterprise mobile applications and BYOD strategies to support corporate and government employees.

Whether these systems are trusted by the societies they serve depends in part on whether the technologies incorporated into them are fit for the purpose they are intended to serve. Fit for purpose is manifested in two essential ways: first, does the product meet essential functional requirements; and second, has the product or component been produced by a trustworthy provider? Of course, the leaders or owners of these systems have to do their part to achieve security and safety (e.g., to install, use and maintain technology appropriately, and to pay attention to people and process aspects such as insider threats). Cybersecurity considerations must be addressed in a sustainable way from the get-go, by design, and across the whole ecosystem — not after the fact, or in just one sector or another, or in reaction to crisis.

In addressing the broader cybersecurity challenge, however, buyers of mission-critical technology naturally seek reassurance as to the quality and integrity of the products they procure. In our view, the fundamentals of the institutional response to that need are similar to those that have worked in prior eras and in other industries — like food.

For example:  Most of us are able to enjoy a meal of stir-fried shrimp and not give a second thought as to whether the shellfish is safe to eat.

Why is that? Because we are the beneficiaries of a system whose workings greatly increase the likelihood — in many parts of the world — that the shellfish served to end consumers is safe and uncontaminated. While tainted technology is not quite the same as tainted food, it’s a useful analogy.

Of course, a very high percentage of the seafood industry is extremely motivated to provide safe and delicious shellfish to the end consumer. So we start with the practical perspective that, much more likely than not in today’s hyper-informed and communicative world, the food supply system will provide reasonably safe and tasty products. Invisible though it may be to most of us, however, this generalized confidence rests on a worldwide system that is built on globally recognized standards and strong public-private collaboration.

This system is necessary because mistakes happen, expectations evolve and — worse — the occasional participant in the food supply chain may take a shortcut in their processing practices. Therefore, some kind of independent oversight and certification has proven useful to assure consumers that what they pay for — their desired size and quality grade and, always, safety — is what they will get. In many countries, close cooperation between industry and government results in industry-led development and implementation of food safety standards.[1]

Government’s role is limited but important. Clearly, government cannot look at and certify every piece of shellfish people buy. So its actions are focused on areas in which it can best contribute: to take action in the event of a reported issue; to help convene industry participants to create and update safety practices; to educate consumers on how to choose and prepare shellfish safely; and to recognize top performers.[2]

Is the system perfect? Of course not. But it works, and supports the most practical and affordable methods of conducting safe and global commerce.

Let’s apply this learning to another sphere: information technology. To wit:

  • We need to start with the realization that the overwhelming majority of technology suppliers are motivated to provide securely engineered products and services, and that competitive dynamics reward those who consistently perform well.
  • However, we also need to recognize that there is a gap in time between the corrective effect of the market’s Invisible Hand and the damage that can be done in any given incident. Mistakes will inevitably happen, and there are some bad actors. So some kind of oversight and governmental participation are important, to set the right incentives and expectations.
  • We need to acknowledge that third-party inspection and certification of every significant technology product at the “end of pipe” is not only impractical but also insufficient. It will not achieve trust across a wide variety of infrastructures and industries. A much more effective approach is to gather the world’s experts and coalesce industry practices around the processes that the experts agree are best suited to produce the desired end results.
  • Any proposed oversight or government involvement must not stymie innovation or endanger a provider’s intellectual capital by requiring exposure through third-party assessments or overly burdensome escrow of source code.
  • Given the global and rapid manner in which technologies are invented, produced and sold, a global and agile approach to technology assurance is required to achieve scalable results. The approach should be based on understood and transparently formulated standards that are, to the maximum extent possible, industry-led and global in their applicability. Conformance to such standards, demonstrated once, would then be recognized across multiple industries and geopolitical regions. Propagation of country- or industry-specific standards will result in economic fragmentation and slow the adoption of industry best practices.

The Open Group Trusted Technology Forum (OTTF)[3] is a promising and complementary effort in this regard. Facilitated by The Open Group, the OTTF is working with governments and industry worldwide to create vendor-neutral open standards and best practices that can be implemented by anyone. Membership continues to grow and includes representation from manufacturers world-wide.

Governments and enterprises alike will benefit from OTTF’s work. Technology purchasers can use the Open Trusted Technology Provider (OTTP) Standard and OTTP Framework best practice recommendations to guide their strategies. And a wide range of technology vendors can use OTTF approaches to build security and integrity into their end-to-end supply chains. The first version of the OTTPS is focused on mitigating the risk of tainted and counterfeit technology components or products. The OTTF is currently working on a program that will accredit technology providers to the OTTP Standard. We expect to begin pilot testing of the program by the end of 2012.

Don’t misunderstand us: Market leaders like IBM have every incentive to engineer security and quality into our products and services. We continually encourage and support others to do the same.

But we realize that trusted technology — like food safety — can only be achieved if we collaborate with others in industry and in government.  That’s why IBM is pleased to be an active member of the Trusted Technology Forum, and looks forward to contributing to its continued success.

A version of this blog post was originally posted by the IBM Institute for Advanced Security.

Andras Szakal is the Chief Architect and a Senior Certified Software IT Architect for IBM’s Federal Software Sales business unit. His responsibilities include developing e-Government software architectures using IBM middleware and managing the IBM federal government software IT architect team. Szakal is a proponent of service oriented and web services based enterprise architectures and participates in open standards and open source product development initiatives within IBM.

 

Comments Off

Filed under OTTF

How the Operating System Got Graphical

By Dave Lounsbury, The Open Group

The Open Group is a strong believer in open standards and our members strive to help businesses achieve objectives through open standards. In 1995, under the auspices of The Open Group, the Common Desktop Environment (CDE) was developed and licensed for use by HP, IBM, Novell and Sunsoft to make open systems desktop computers as easy to use as PCs.

CDE is a single, standard graphical user interface for managing data, files, and applications on an operating system. Both application developers and users embraced the technology and approach because it provided a simple and common way to access data and applications on a network. With a click of a mouse, users could easily navigate through the operating system – similar to how we work on PCs and Macs today.

It was the first successful attempt to standardize on a desktop GUI on multiple, competing platforms. In many ways, CDE is responsible for the look, feel, and functionality of many of the popular operating systems used today, and brings distributed computing capabilities to the end user’s desktop.

The Open Group is now passing the torch to a new CDE community, led by CDE suppliers and users such as Peter Howkins and Jon Trulson.

“I am grateful that The Open Group decided to open source the CDE codebase,” said Jon Trulson. “This technology still has its fans and is very fast and lightweight compared to the prevailing UNIX desktop environments commonly in use today. I look forward to seeing it grow.”

The CDE group is also releasing OpenMotif, which is the industry standard graphical interface that standardizes application presentation on open source operating systems such as Linux. OpenMotif is also the base graphical user interface toolkit for the CDE.

The Open Group thanks these founders of the new CDE community for their dedication and contribution to carrying this technology forward. We are delighted this community is moving forward with this project and look forward to the continued growth in adoption of this important technology.

For those of you who are interested in learning more about the CDE project and would like to get involved, please see http://sourceforge.net/projects/cdesktopenv.

Dave Lounsbury is The Open Group’s Chief Technology Officer, previously VP of Collaboration Services. Dave holds three U.S. patents and is based in the U.S.

Comments Off

Filed under Standards

Cybersecurity Threats Key Theme at Washington, D.C. Conference – July 16-20, 2012

By The Open Group Conference Team

Identifying risks and eliminating vulnerabilities that could undermine integrity and supply chain security is a significant global challenge and a top priority for governments, vendors, component suppliers, integrators and commercial enterprises around the world.

The Open Group Conference in Washington, D.C. will bring together leading minds in technology and government policy to discuss issues around cybersecurity and how enterprises can establish and maintain the necessary levels of integrity in a global supply chain. In addition to tutorial sessions on TOGAF and ArchiMate, the conference offers approximately 60 sessions on a variety of topics, including:

  • Cybersecurity threats and key approaches to defending critical assets and securing the global supply chain
  • Information security and Cloud security for global, open network environments within and across enterprises
  • Enterprise transformation, including Enterprise Architecture, TOGAF and SOA
  • Cloud Computing for business, collaborative Cloud frameworks and Cloud architectures
  • Transforming DoD avionics software through the use of open standards

Keynote sessions and speakers include:

  • America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime and Warfare - Keynote Speaker: Joel Brenner, author and attorney at Cooley LLP
  • Meeting the Challenge of Cybersecurity Threats through Industry-Government Partnerships - Keynote Speaker: Kristin Baldwin, principal deputy, deputy assistant secretary of defense for Systems Engineering
  • Implementation of the Federal Information Security Management Act (FISMA) - Keynote Speaker: Dr. Ron Ross, project leader at NIST (TBC)
  • Supply Chain: Mitigating Tainted and Counterfeit Products - Keynote Panel: Andras Szakal, VP and CTO at IBM Federal; Daniel Reddy, consulting product manager in the Product Security Office at EMC Corporation; John Boyens, senior advisor in the Computer Security Division at NIST; Edna Conway, chief security strategist of supply chain at Cisco; and Hart Rossman, VP and CTO of Cyber Security Services at SAIC
  • The New Role of Open Standards – Keynote Speaker: Allen Brown, CEO of The Open Group
  • Case Study: Ontario Healthcare - Keynote Speaker: Jason Uppal, chief enterprise architect at QRS
  • Future Airborne Capability Environment (FACE): Transforming the DoD Avionics Software Industry Through the Use of Open Standards - Keynote Speaker: Judy Cerenzia, program director at The Open Group; Kirk Avery of Lockheed Martin; and Robert Sweeney of Naval Air Systems Command (NAVAIR)

The full program can be found here: http://www3.opengroup.org/events/timetable/967

For more information on the conference tracks or to register, please visit our conference registration page. Please stay tuned throughout the next month as we continue to release blog posts and information leading up to The Open Group Conference in Washington, D.C. and be sure to follow the conference hashtag on Twitter – #ogDCA!

1 Comment

Filed under ArchiMate®, Cloud, Cloud/SOA, Conference, Cybersecurity, Enterprise Architecture, Information security, OTTF, Standards, Supply chain risk

The Open Group Conference, London: An open environment for challenging times

By Allen Brown, CEO, The Open Group

In little over a week, The Open Group will convene in London to debate some of today’s key IT issues such as Cloud Computing and Enterprise Architecture.

Our members span a range of companies and organisations, including Capgemini, HP, IBM, Oracle, Kingdee and SAP, and hail from around the globe. It’s not easy trying to get such a range of individuals to reach some sort of consensus, which is why our conferences are vital to developing open standards and certifications. Our rich and varied membership certainly makes for interesting and lively debates. During the London Conference, May 9-13, we’ll hear plenty of opinions on a variety of topics including enterprise architecture (EA), business transformation, cyber-security, Cloud Computing, SOA and skills-based certifications.

We’ve got an excellent group of speakers attending the conference, including Peter Edwards, Associate Director, IT & Communications Consulting, Arup, who’ll describe his experiences of being an Enterprise Architect in the land of Architects and Civil Engineers. His speech will cover his position at Arup and some aspects of his role as Chief Enterprise Architect for the Olympic Delivery Authority (ODA). He’ll draw on examples from recent work on major airports, sports facilities, “smart cities” and efficient data centres, explaining how these all rely heavily and increasingly on complex, integrated systems, how the concepts, tools and techniques of enterprise architecture are helpful in planning and integrating such systems, and how they help to bridge the communication gap between the different types of stakeholders.

Other presenters will address the role of technical experts to investigate organised crime, Cloud vendor selection (how to pick the right combination of better, faster and cheaper), architecting Cloud Computing, securing the global supply chain and much more.

As the IT media is dominated by stories on Cloud and cyber-security, it will be refreshing to debate these in an open environment and discuss the many challenges we all face in navigating an increasingly complex IT world. I’d love to hear your views on the type of questions you’d like answered and any particular issues you feel passionate about.

The Open Group Conference, London, May 9-13 is almost here! Join us for best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Allen Brown is the President and CEO of The Open Group. For more than ten years, he has been responsible for driving the organization’s strategic plan and day-to-day operations; he was also instrumental in the creation of The Association of Open Group Enterprise Architects (AOGEA). Allen is based in the U.K.

Comments Off

Filed under Cloud/SOA, Cybersecurity, Enterprise Architecture

“Making Standards Work®”

By Andrew Josey, The Open Group

Next month as part of the ongoing process of “Making Standards Work®,” we will be setting standards and policy with those attending the member meetings at The Open Group Conference, London, (May 9-12, Central Hall Westminster). The standards development activities include a wide range of subject areas from Cloud Computing, Tools and People certification, best practices for Trusted Technology, SOA and Quantum Lifecycle Management, as well as maintenance of existing standards such as TOGAF® and ArchiMate®. The common link with all these activities is that all of these are open standards developed by members of The Open Group.

Why do our members invest their time and efforts in development of open standards? The key reasons as I see them are as follows:

  1. Open standards are a core part of today’s infrastructure
  2. Open standards allow vendors to differentiate their offerings by offering a level of openness (portable interfaces and interoperability)
  3. Open standards establish a baseline from which competitors can innovate
  4. Open standards backed with certification enable customers to buy with increased confidence

This is all very well, you say — but what differentiates The Open Group from other standards organizations? Well, when The Open Group develops a new standard, we take an end-to-end view of the ecosystem all the way through from customer requirements, developing consensus standards to certification and procurement. We aim to deliver standards that meet a need in the marketplace and then back those up with certification that delivers an assurance about the products or in the case of people certification, their knowledge or skills and experience. We then take regular feedback on our standards, maintain them and evolve them according to marketplace needs. We also have a deterministic, timely process for developing our standards that helps to avoid the stalemate that can occur in some standards development.

Let’s look briefly at two of the most well-known Open Group standards: UNIX® and TOGAF®. The UNIX® and TOGAF® standards are both examples of where a full ecosystem has been developed around the standard.

The UNIX® standard for operating systems has been around since 1995 and is now in its fourth major iteration. High reliability, availability and scalability are all attributes associated with certified UNIX® systems. As well as the multi-billion-dollar annual market in server systems from HP, Oracle, IBM and Fujitsu, there is an installed base of 50 million users* using The Open Group certified UNIX® systems on the desktop.

TOGAF® is the standard enterprise architecture method and framework. It encourages use with other frameworks and adoption of best practices for enterprise architecture. Now in its ninth iteration, it is freely available for internal use by any organization globally and is widely adopted with over 60% of the Fortune 50 and more than 80% of the Global Forbes 50. The TOGAF® certification program now has more than 15,000 certified individuals, including over 6,000 for TOGAF® 9.

If you are able to join us in London in May, I hope you will be able to also join us at the member meetings to continue making standards work. If you are not yet a member then I hope you will attend the conference itself and network with the members to find out more and consider joining us in Making Standards Work®!

For more information on The Open Group Standards Process visit http://www.opengroup.org/standardsprocess/

(*) Apple estimated number from Briefing October 2010. Mac OS X is certified to the UNIX 03 standard.

Standards development will be part of member meetings taking place at The Open Group Conference, London, May 9-13. Join us for best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Andrew Josey is Director of Standards within The Open Group, responsible for the Standards Process across the organization. Andrew leads the standards development activities within The Open Group Architecture Forum, including the development and maintenance of TOGAF® 9, and the TOGAF® 9 People certification program. He also chairs the Austin Group, the working group responsible for development and maintenance of the POSIX 1003.1 standard that forms the core volumes of the Single UNIX® Specification. He is the ISO project editor for ISO/IEC 9945 (POSIX). He is a member of the IEEE Computer Society’s Golden Core and is the IEEE P1003.1 chair and the IEEE PASC Functional chair of Interpretations. Andrew is based in the UK.

Comments Off

Filed under Standards, UNIX, TOGAF