Monthly Archives: May 2011

Enterprise Architecture & Emerging Markets

By Balasubramanian Somasundaram, Honeywell Technology Solutions Ltd.

Recently I came across an interesting announcement by the SaaS vendor NetSuite on two-tier ERP, and an analyst’s observations on the same. The analyst noted that the industry moves in cycles: from multiple ERP suites across a company’s locations, to flattening those differences with a single corporate-standard ERP, and now back to a multiple-ERP stack with the advent of SaaS options.

The crux of this phenomenon is how we manage the technology selection across a globally distributed organization with diversified complexities. I see it as an interesting challenge for the Enterprise Architecture practice to solve.

Enterprise Architecture, when governed from the global/corporate headquarters of a company, needs to balance the demands of global and local entities. Often these demands conflict, and it takes considerable experience and courage to balance both. The local architecture needs are most often triggered by factors such as:

  • Cost – Need to have a cost-effective solution at an emerging region
  • Size – Need to have a lightweight solution rather than a heavyweight (ERP)
  • Regulatory/Compliance Requirements – Need to comply with local laws
  • Business Processes – Need to accommodate business process variations or cater to different customer segments

In the event of choosing a local solution that is not a corporate standard, there is a need to govern those architecture exceptions, including integrating the two different solutions for cohesive management. The two-tier ERP mentioned above is a typical example of this scenario.

If we visualize Enterprise Architecture as a series of layers – Business/Information/Technology/Application Architectures – the verticals/segments across those layers would define the organizational units/locations (Local Specific or Organizational Unit specific Enterprise Architectures).

The location verticals, when influenced by the above factors, could lead to new technology selections such as Cloud Computing and Software-as-a-Service. While this practice can improve autonomy at the local level, if unmanaged it could soon lead to spaghetti architectures. The most important side-effect of localized adoption of cloud computing or mobile is increased fragmentation (of data/process/technology), and that would defeat the purpose of Enterprise Architecture.

In another, constructive scenario, if these standalone solutions need to exchange information with corporate information systems, EA again has a role to play, arbitrating the integration through standards and guidelines.

As Serge Thorn articulated a few weeks ago in The Open Group blog, it’s time to review our EA practices and amend the core frameworks and processes to face the challenges emerging from technology mega trends (Cloud/Mobile) and evolving business models (emerging markets).

Balasubramanian Somasundaram is an Enterprise Architect with Honeywell Technology Solutions Ltd, Bangalore, a division of Honeywell Inc, USA. Bala has been with Honeywell Technology Solutions for the past five years and has contributed in several technology roles. His current responsibilities include Architecture/Technology Planning and Governance, Solution Architecture Definition for business-critical programs, and Technical Oversight/Review for programs delivered from the Honeywell IT India center. With more than 12 years of experience in the IT services industry, Bala has worked with a variety of technologies with a focus on IT architecture practice. His current interests include Enterprise Architecture, Cloud Computing and Mobile Applications. He periodically writes about emerging technology trends that impact the Enterprise IT space on his blog. Bala holds a Master of Science in Computer Science from MKU University, India.


Filed under Enterprise Architecture

Government Outreach for Global Supply Chain Integrity (OTTF)

By Sally Long, The Open Group

On May 10th in London, a select group of technology, government and Cybersecurity leaders and supply chain strategists met for a lunchtime briefing and discussion during The Open Group Conference. The message that came across loud and clear from all who participated was that fostering honest and open dialogue between government and industry is critical to securing the global supply chain, and that the only way to do this effectively is by working together to ensure coordination and adoption among current and emerging approaches.

This industry/government roundtable event was the fourth in a series of planned government outreach events. In December and January, members of The Open Group Trusted Technology Forum (OTTF) met with Howard Schmidt, US Cybersecurity Coordinator for the Obama Administration, and with US House and Senate Committees and the Department of Commerce. In March we made some inroads with the Japanese government, and in April we held a session with government officials in India. More briefings and discussions are planned for Europe, Canada, China and Brazil.

The event in London brought together representatives from Atsec, Boeing, CA Technologies, Capgemini, CESG, Chatham House, Cisco, Fraunhofer SIT, Fujitsu, Hewlett-Packard, IBM, IDA, Kingdee Software, Microsoft, MITRE, NASA, Oracle, Real IRM, SAIC, SAP, and the UK Government, who discussed global supply-chain challenges and a potential solution through The Open Group Trusted Technology Provider Framework (O-TTPF). CESG highlighted other existing approaches as effective in some areas, though those areas are not directly focused on supply-chain best practices.

The beauty of the O-TTPF, a set of best practices for engineering and secure development methods and supply chain integrity, is that the Framework and guidelines are being developed by industry — architects, developers, manufacturers and supply chain experts, with input from government(s) — for industry. The fact that these best practices will be open, international, publicly available and translated where appropriate will allow all providers to understand what they need to do to “Build with Integrity” – so that customers can “Buy with Confidence”.

This is critically important because, as we all know, a chain is only as strong as its weakest link. Even if a large system vendor follows the O-TTPF best practices, it often relies on sub-component suppliers of software and hardware from around the world, and to maintain the integrity of its supply chain those sub-suppliers need to understand what it means to be trustworthy as well.

One of the OTTF’s objectives is to develop an accreditation program, which will help customers, in government and industry, identify secure technology providers and products in the global supply chain. Governments and large enterprises that base their purchasing decisions on trusted technology providers who have developed their products using the best practices identified by the O-TTPF will be able to rely on a more comprehensive approach to risk management and product assurance when selecting COTS technology products.

One of the major messages at the Roundtable event was that the OTTF is not just about major industry providers. It’s about opening the doors to all providers and all customers, and it’s about reaching out to all governments to assure the O-TTPF best practice requirements are aligned with their acquisition requirements — so that there is true global recognition and demand for Trusted Technology Providers who conform to the O-TTPF Best Practices.

The OTTF members believe it is critical to reach out to governments around the world, to foster industry-government dialogue about government acquisition requirements for trusted technology and trusted technology providers, so they can enable the global recognition required for a truly secure global supply chain. Any government or government agency representative interested in working together to provide a trusted global supply chain can contact the OTTF global outreach and acquisition team through

The Forum operates under The Open Group, an international vendor- and technology-neutral consortium well known for providing an open and collaborative environment for such work. We are seeking additional participants from global government and commercial entities. If you are interested in learning more about the Forum please feel free to contact me, Sally Long, OTTF Forum Director, at

Sally Long, Director of Consortia Services at The Open Group, has been managing customer-vendor forums and collaborative development projects for the past nineteen years. She was the Release Engineering Section Manager for all collaborative, multi-vendor, development projects (OSF/1, DME, DCE, and Motif) at The Open Software Foundation (OSF), in Cambridge Massachusetts.  Following the merger of OSF and X/Open under The Open Group, Sally served as the Program Director for multiple Forums within The Open Group including: The Distributed Computing Environment (DCE) Forum, The Enterprise Management Forum, The Quality of Service (QoS) Task Force, The Real-time and Embedded Systems Forum and most recently the Open Group Trusted Technology Forum. Sally has also been instrumental in business development and program definition for certification programs developed and operated by The Open Group for the North American State and Provincial Lotteries Association (NASPL) and for the Near Field Communication (NFC) Forum. Sally has a Bachelor of Science degree in Electrical Engineering from Northeastern University in Boston, Massachusetts, and a Bachelor of Science degree in Occupational Therapy from The Ohio State University.


Filed under Cybersecurity, Supply chain risk

Facebook – the open source data center

By Mark Skilton, Capgemini

The recent announcement by Facebook of its decision to publish its data center specifications as open source illustrates a new emerging trend in commoditization of compute resources.

Key features of the new facility include:

  • The Oregon facility, announced to the world press in April 2011, is 150,000 sq. ft. and a $200 million investment. At any one time, the site could host the whole of Facebook’s 500-million user base. Another Facebook data center facility is scheduled to open in 2012 in North Carolina, and future data centers may follow in Europe or elsewhere if required by the Palo Alto, Calif.-based company
  • The Oregon data center enables Facebook to reduce its energy consumption per unit of computing power by 38%
  • The data center has a PUE of 1.07, well below the EPA-defined state-of-the-art industry average of 1.5. This means 93% of the energy from the grid makes it into every Open Compute Server.
  • Removed centralized chillers, eliminated traditional inline UPS systems and removed a 480V to 208V transformation
  • Ethernet-powered LED lighting and passive cooling infrastructure reduce energy spent on running the facility
  • New second-level “evaporative cooling system”, a multi-layer method of managing room temperature and air filtration
  • Launch of the “Open Compute Project” to share the data center design as open source. The aim is to encourage collaboration on data center design to improve overall energy consumption and environmental impact. Other observers also see this as a way of further reducing component sourcing costs, as most of the designs use low-cost commodity hardware
  • The servers are 38% more efficient and 24% lower cost
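
The PUE figure above can be checked with simple arithmetic: PUE is the ratio of total facility energy to IT equipment energy, so a PUE of 1.07 implies roughly 1/1.07 ≈ 93% of grid energy reaches the servers, versus about 67% at the quoted industry-average PUE of 1.5. A minimal sketch:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def energy_to_it_fraction(pue_value: float) -> float:
    """Fraction of grid energy that reaches the IT equipment."""
    return 1.0 / pue_value

# A PUE of 1.07 means ~93% of grid energy reaches the servers,
# versus ~67% at the industry-average PUE of 1.5.
print(round(energy_to_it_fraction(1.07) * 100))  # -> 93
print(round(energy_to_it_fraction(1.5) * 100))   # -> 67
```

The lower the PUE, the less energy is lost to cooling, power conversion and lighting before it reaches the compute load.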

While this can be simply described as a major Cloud services company treating its data centers as commodities, non-core to its services business, it perhaps signals a more significant shift in the Cloud Computing industry in general.

Facebook making its data center specifications open source demonstrates that IaaS (Infrastructure as a Service) utility computing is now seen as a commodity, non-differentiating to companies like Facebook and anyone else who wants cheap compute resources.

What becomes essential is the efficiency of operation: how these services are provisioned and delivered is now the key differentiator.

Furthermore, the trend is towards what you do with the IaaS storage and compute. How we architect solutions that deliver Software-as-a-Service (SaaS) capabilities becomes the essential differentiator, and how business models and consumers maximize these benefits raises the importance of architecture and solutions for Cloud. This is key to The Open Group’s vision of “Boundaryless Information Flow™”. Architects who design effective Cloud services on top of these commodity Cloud resources and capabilities make the difference, and open standards and interoperability are critical to their success. How solutions and services are developed to build private, public or hybrid Clouds is the new differentiation. This does not ignore the fact that world-class data centers and infrastructure services remain vital, of course; but it is now the way they are used to create value that becomes the debate.

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing, which achieved 50,000 hits and featured in the British Computer Society 2010 Annual Review. His current activities include the development of new Cloud Computing standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models, and he contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.


Filed under Cloud/SOA

The Open Group updates Enterprise Security Architecture, guidance and reference architecture for information security

By Jim Hietala, The Open Group

One of two key focus areas for The Open Group Security Forum is security architecture. The Security Forum has several ongoing projects in this area, including our TOGAF® and SABSA integration project, which will produce much needed guidance on how to use these frameworks together.

When the Network Application Consortium (NAC) ceased operating a few years ago, The Open Group agreed to bring the organization’s intellectual property into our Security Forum, and to extend membership to the former NAC members. While the NAC did great work in information security, one of its publications stood out as a highly valuable resource. This document, Enterprise Security Architecture (ESA): A Framework and Template for Policy-Driven Security, was originally published by the NAC in 2004, and provided valuable guidance to IT architects and security architects. When first published, the ESA document filled a void in the IT security community by describing important information security functions and how they relate to each other in an overall enterprise security architecture. ESA was at the time unique in describing information security architectural concepts and in providing examples in a reference architecture format.

The IT environment has changed significantly in the years since the original publication of the ESA document. Major changes that have affected information security architecture in this time include the increased use of mobile computing devices, the increased need to collaborate (and to federate identities among partner organizations), and changes in the threat and attack landscape.

Members of the Security Forum, having realized the need to revisit the document and update its guidance to address these changes, have significantly rewritten the document to provide new and revised guidance. Significant changes to the ESA document have been made in the areas of federated identity, mobile device security, designing for malice, and new categories of security controls including data loss prevention and virtualization security.

In keeping with the many changes to our industry, The Open Group Security Forum has now updated and published a significant revision to the Enterprise Security Architecture (O-ESA), which you can access and download (for free, minimal registration required) here; or purchase a hardcover edition here.

Our thanks to the many members of the Security Forum (and former NAC members) who contributed to this work, and in particular to Stefan Wahe who guided the revision, and to Gunnar Peterson, who managed the project and provided significant updates to the content.

Jim Hietala, an IT security industry veteran, is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications and is based in the U.S.


Filed under Security Architecture

Twtpoll results from The Open Group Conference, London

The Open Group set up three informal Twitter polls this week during The Open Group Conference, London. If you wondered about the results, or just want to see what our Twitter followers think about some topline issues in the industry in very simple terms, see our graphs below.

On Day One of the Conference, when the focus of the discussions was on Enterprise Architecture, we polled our Twitter followers about the perceived value of certification. Your response was perhaps disappointing, but unsurprising:

On Day Two, while we focused on security during The Open Group Jericho Forum® Conference, we queried you about what you see as the biggest organizational security threat. Out of four stated choices, plus the opportunity to fill in your own answer, the responses converged decisively: two specific areas of security keep you up at night the most.

And finally, during Day Three’s emphasis on Cloud Computing, we asked our Twitter followers about the types of Cloud they’re using or are likely to use.

What do you think of our informal poll results? Do you agree? Disagree? And why?

Want some survey results you can really sink your teeth into? View the results of The Open Group’s State of the Industry Cloud Survey. Read our blog post about it, or download the slide deck from The Open Group bookstore.

The Open Group Conference, London continues with member meetings for the rest of this week. Join us in Austin, July 18-22, for our next Conference, featuring best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.


Filed under Certifications, Cloud/SOA, Cybersecurity, Enterprise Architecture

What does the Amazon EC2 downtime mean?

By Mark Skilton, Capgemini

The recent announcement of the Amazon EC2 outage in April this year triggers some thoughts about this very high-profile topic in Cloud Computing. How secure and available is your data in the Cloud?

While the outage was more a matter of the service-level availability of data and services from a Cloud provider, the recent and potentially more concerning theft of Epsilon e-mail data, and, as I write this, the breaking news of the Sony data theft, further highlight this big topic in Cloud Computing.

My initial reaction on hearing about the outage was that it was due to over-allocation under high demand in the US East region, which led to a cascading system failure. I subsequently read that Amazon attributed it to a network glitch that triggered storage backups to automatically create more copies than needed, consuming the Elastic Block Store capacity. This in turn, I theorized, created the supply unavailability problem.

From a business perspective, this focuses attention on the issues of relying on a single primary Cloud provider. The affected businesses “live in the Cloud,” yet backup and secondary Cloud support needs are clearly important. Some of these are economic decisions: trade-offs between loss of business and the cost of business continuity. The outage highlights the vulnerability of these enterprises, even though a highly successful operator like Amazon makes such an event rare. Consumers of Cloud services need to consider mitigating actions such as disruption insurance, secondary backups, and assurances of SLAs, which are largely out of the hands of SMB-market users. One result of outages at Cloud providers has been the emergence of a new market called “Cloud Backup,” which is starting to gain favor with customers and providers as an added level of protection through service failover.

While these are concerning issues, I believe most outage issues can be addressed by due diligence in the procurement and usage of any service that involves a third party. I’ve expanded the definition of due diligence in Cloud Computing to include at least six key processes that any prospective Cloud buyer should be aware of and make contingency plans for, as with any purchase of a business-critical service:

  • Security management
  • Compliance management
  • Service Management (ITSM and License controls)
  • Performance management
  • Account management
  • Ecosystem standards management
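
As a sketch of how these six processes might be tracked for a prospective provider, consider the checklist below. The six area names come from the list above; the individual checks and the example sign-offs are purely hypothetical illustrations:

```python
# Hypothetical due-diligence checklist for a prospective cloud provider.
# The six areas mirror the list above; the individual checks are examples only.
DUE_DILIGENCE = {
    "Security management":            ["encryption at rest", "incident response plan"],
    "Compliance management":          ["data residency", "audit rights"],
    "Service management":             ["ITSM processes", "license controls"],
    "Performance management":         ["SLA targets", "monitoring access"],
    "Account management":             ["named contacts", "escalation path"],
    "Ecosystem standards management": ["open APIs", "portable data formats"],
}

def outstanding(completed: set) -> list:
    """Return due-diligence areas with at least one check still open."""
    return [area for area, checks in DUE_DILIGENCE.items()
            if not all(c in completed for c in checks)]

# Example: only the security checks have been signed off so far.
done = {"encryption at rest", "incident response plan"}
print(outstanding(done))  # every area except "Security management"
```

The point of a structure like this is simply that no area is silently skipped before a business-critical purchase.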

I don’t think publishing a bill of rights for consumers is enough to insure against failure. One thing that Cloud Computing design has taught me is that part of the architectural shift brought about by Cloud is the emergence of automation as an implicit part of the operating-model design that enables elasticity. Ironically, this automation may have been a factor in the Amazon situation, but overall the benefits of Cloud far outweigh the downsides, which can be re-engineered and resolved.

A useful guide to addressing some of the business impact can be found in a new book on Cloud Computing for business that The Open Group plans to publish this quarter. The book addresses many of these challenges in understanding and driving the value of Cloud Computing in the language of business, with chapters on the business use of Cloud, including risk management. Check The Open Group website for more information on The Open Group Cloud Computing Work Group and the Cloud publications in the bookstore.

Cloud Computing is a key topic of discussion at The Open Group Conference, London, May 9-13, which is currently underway.



Filed under Cloud/SOA

SOA is not differentiating, Cloud Computing is

By Mark Skilton, Capgemini

Warning: I confess at the start of this blog that I chose a deliberately provocative title to get your attention, and I guess I did if you are reading this now. A couple of blogs to date, written with what I believed were finely honed words on current lessons learned and the future of technology, created little reaction, so I thought I’d try the more direct approach and head straight for a pressing matter of architectural and strategic concern.

Service Oriented Architecture (SOA) is now commonplace across all software development lifecycles and has entered the standard language of information technology design. We hear “service oriented” and “service enabled” as standard phrases, handed out as common terms of reference. The point is that the processes and practices of SOA are industrial, not differentiating: everyone applies them, whether from a design standpoint or as a business-systems service approach. They enable standardization and abstraction of services in the design and build stages to align with key business and technology strategy goals, and enable technology to be developed or utilized that meets specific technical or business service requirements.

SOA practices are prerequisites of good design practice. SOA is a foundation of ITIL Service Management processes and is found in diverse software engineering methods, from Business Process Management Systems (BPMS) to rapid Model-Driven Architecture techniques that compose web-enabled services. SOA is a key method on the journey to industrialization, supporting consolidation and rationalization as well as lean engineering techniques to optimize the business and systems landscape. SOA provides good development practice in defining user requirements that deliver what the user wants, and in translating these into agile, decoupled and flexible architectural solutions.

My point is that these methods are now mainstream, and merely putting SOA into your proposal or stating it as a capability is no longer going to be a “deal clincher” or a key “business differentiator”. The counterview I hear from SOA practitioners is that SOA is not just standardized service practices but also the means by which differentiating services are identified. But there’s the rub: if SOA treats every requirement or design as a service problem, where is the difference?

A possible answer lies in how SOA is used. Today and in the future, the way the SOA method is applied will be the business differentiator. But not all SOA methods are equal, so what will it take to make an SOA method differentiating for business benefit?

Enter Cloud Computing, with its origins in utility computing, ubiquitous web services and the Internet. The definition of Cloud Computing, much as in the early days of Service Orientation, is still evolving, as is the understanding of the boundary and types of services it encompasses. But the big disruptive step change is the new business model that Cloud Computing has introduced.

Cloud Computing has introduced automatic provisioning, self-service, and automatic load balancing and scaling of technology resources. Building on virtualization principles, it has extended into on-demand metering and billing models, large-scale computing data centers, and large-scale distributed businesses on the web that use the power of the Internet to reach and run new business models. I can hear industry observers say this is just a consequence of the timely convergence of pervasive network standards, rapidly falling compute and storage costs, and the massive “hockey stick” growth of bandwidth, smart devices and wide-scale adoption of web-based services.

But this is a step change, not simply a realization that it’s just “another technology phase”.

Put another way: it has brought back-office computing resources and on-demand Software-as-a-Service models into a dynamic new business model that changes the way business and IT work. It has “merged” physical and logical services into a new on-demand marketplace model, where hitherto it was “good practice” to design consumer and provider services separately. All that has changed.

But does SOA fully address these aspects of a Cloud Computing architecture? Answer these three simple questions:

  • Do logical service contracts define how multi-tenant environments need to work to support many concurrent service users?
  • Does SOA allow automatic balancing and scaling to be considered when the initial set of declarative conditions in the service contract doesn’t “fit” new operating conditions that require scaling up or down?
  • Does SOA recognize the wider marketplace and ecosystem dynamics, which may produce consumer/producer patterns that are dynamic rather than static, driving new sourcing behaviors and usage patterns that may involve consuming services through a portal with no contract?
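
To make the second question concrete, here is a minimal, hypothetical sketch of the mismatch: a traditional service contract declares static capacity bounds, while an elastic runtime must decide what to do when observed demand falls outside them. All names and numbers below are illustrative, not drawn from any standard:

```python
from dataclasses import dataclass

@dataclass
class ServiceContract:
    """Static, declarative capacity bounds of the kind a classic SOA contract fixes."""
    min_instances: int
    max_instances: int

def scale_decision(contract: ServiceContract, demand: int) -> int:
    """Clamp the instance count implied by demand to the contract's static bounds.

    An elastic cloud runtime would renegotiate the bounds on the fly; a static
    SOA contract can only refuse or clamp, which is the gap discussed above.
    """
    wanted = max(demand, 1)
    return min(max(wanted, contract.min_instances), contract.max_instances)

contract = ServiceContract(min_instances=2, max_instances=10)
print(scale_decision(contract, demand=25))  # clamped to the contract's maximum, 10
print(scale_decision(contract, demand=0))   # raised to the contract's minimum, 2
```

The sketch shows why a contract written for fixed operating conditions cannot by itself express the elastic, renegotiable capacity that cloud platforms provide.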

For sure, ecosystem principles are axiomatic in that they will drive standards for containers, protocols and semantics, which SOA standards are well placed to adopt as boundary conditions for service contracts in a service portfolio. But my illustrations here are meant to broaden the debate about how to position SOA as a differentiator when it meets a “new kid on the block” like Cloud, which is rapidly morphing into new models “as we speak”, extending into social networks, mobile services and location-aware integration.

My real intention is to raise awareness of and interest in these subjects and the activities that The Open Group is engaged in to address them. I sincerely hope you can follow these up as further reading and investigation with The Open Group; and of course, do feel free to comment and contact me.

Cloud Computing and SOA are key topics of discussion at The Open Group Conference, London, May 9-13, which is underway. 



Filed under Cloud/SOA

The Cloud “Through a Glass, Darkly”

Results of The Open Group “State of the Industry” Cloud Survey

By Dr. Chris Harding, The Open Group

Cloud Computing has been a major topic of interest and excitement in the world of Information Technology for a couple of years now. This is time enough for enterprises to understand its impact, or so you would think. So how exactly are they planning to make use of this phenomenon?

Obtaining a clear view of a current cloud such as Cloud Computing is notoriously difficult. It is like trying to see the world outside clearly through the dirty, distorted windows that were commonplace in England in the 17th century, when the simile “as through a glass, darkly” became established. But the “State of the Industry” Cloud survey, released today by The Open Group, sheds light on the topic, and provides some interesting insights.

The Open Group is a vendor- and technology-neutral consortium of IT customers and vendors, with a strong focus on Enterprise Architecture. The State of the Industry survey captures the views of its customer side, which is well representative of the global IT user community. It gives us a good understanding of how user enterprises currently perceive the Cloud.

Cloud certainly has the users’ attention. Only 8% of survey respondents said that Cloud was not currently on their IT roadmap. But substantial take-up is only just starting. Nearly half of those for whom Cloud is on the roadmap have not yet begun to use it, and half of the rest have only started recently.

The respondents have a clear idea of how they will use the Cloud. The majority expect to have some element of private Cloud, with 29% saying that private Cloud would best meet their organisations’ business requirements, and 45% saying that hybrid Cloud would do so, as opposed to 17% for public Cloud. Only 9% were unsure.

They also have a clear view of the advantages and drawbacks. Cost, agility, and resource optimisation came out as the three main reasons for using Cloud Computing, with business continuity also a significant factor. Security, integration issues, and governance were the three biggest concerns, with ability to cope with change, vendor lock-in, cost to deploy, and regulatory compliance also being significant worries.

Return on Investment (ROI) is probably the most commonly used measure of success of a technical change, and The Open Group has produced a landmark White Paper “Building Return on Investment from Cloud Computing”. The survey respondents felt on balance (by 55% to 45%) that Cloud ROI should be easy to evaluate and justify. Cost, quality of delivered result, utilisation, speed of operation, and scale of operation were felt to be the most useful Cloud ROI metrics. But only 35% had mechanisms in place to measure Cloud ROI as opposed to 45% that did not, with the other 20% being unsure.
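
The ROI reasoning above can be made concrete with a small calculation. This is an illustrative sketch only: the helper name `cloud_roi` and all of the figures are my own assumptions, not taken from the White Paper or the survey.

```python
# Illustrative sketch: a simple first-year Cloud ROI using the kinds of
# metrics respondents cited (cost being the most commonly named one).
# The function name and all figures are hypothetical.

def cloud_roi(on_premise_cost: float, cloud_cost: float, migration_cost: float) -> float:
    """Return ROI as a fraction: net benefit after investment, divided by investment."""
    annual_saving = on_premise_cost - cloud_cost
    return (annual_saving - migration_cost) / migration_cost

# Example: $500k/yr on-premise vs. $300k/yr in the Cloud, $100k to migrate.
roi = cloud_roi(on_premise_cost=500_000, cloud_cost=300_000, migration_cost=100_000)
print(f"First-year ROI: {roi:.0%}")  # (200k saving - 100k migration) / 100k = 100%
```

Even a toy model like this shows why only a minority of organisations measure Cloud ROI: the hard part is not the arithmetic but agreeing which costs and benefits (utilisation, speed, scale) belong in each term.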

The question on the impact of Cloud produced the most striking of the survey’s results. While 82% said that they expected their Cloud initiatives to have significant impact on one or more business processes, only 28% said that they were prepared for these changes.

Cloud Computing is primarily a technical phenomenon, but it has the ability to transform business. Its lower cost and increased agility and speed of operation can dramatically improve profitability of existing business processes. More than this, and perhaps more importantly, it enables new ways of collaborative working and can support new processes. It is therefore not surprising that people do not yet feel fully prepared — but it is interesting that the survey should bring this point out quite so clearly.

The ability to transform business is the most exciting feature of the Cloud phenomenon. But users currently see it “through a glass darkly,” and perhaps with a measure of faith and hope. There is a lesson in this for industry consortia such as The Open Group. More needs to be done to develop understanding of the business impact of Cloud Computing, and we should focus on this, as well as on the technical possibilities.

To obtain a copy of the survey, download it here, or media may email us a request at

Cloud Computing is a major topic of discussion at The Open Group Conference, London, which is currently underway.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at The Open Group and other conferences on a range of topics, and contributes articles to online journals. He is a member of the BCS, the IEEE, and AOGEA, and is a certified TOGAF® practitioner. Chris is based in the U.K.


Filed under Cloud/SOA

Exploring Synergies between TOGAF® and Frameworx

By Andrew Josey, The Open Group

A joint team of The Open Group and the TM Forum has recently completed a technical report exploring the synergies and identifying integration points between the TM Forum Frameworx and TOGAF® specifications.

The results of this activity are now available as a 110-page technical report published by The Open Group and TM Forum, together with a Quick Reference Guide spreadsheet (available with the report).

The technical report focuses on mapping TOGAF® to the Frameworx: Business Process Framework (eTOM), Information Framework (SID) and Application Framework (TAM). The purpose of this mapping is to assess differences in their contents, identify complementary areas, and pinpoint the areas of application – with the TOGAF® Enterprise Continuum in mind.

Identified Synergies

A summary of the identified synergies is as follows:

  1. Immediate synergies have been identified between the TOGAF Architecture Development Method (ADM) phases Preliminary, A, B, C and the Common Systems Architecture of the Enterprise Continuum. This document addresses the TOGAF ADM phases from Preliminary to Phase C. The synergies between business services (formerly known as NGOSS contracts) and the Common Systems Architecture will be dealt with in a separate document.
  2. TOGAF® provides an Architecture Repository structure that can smoothly accommodate the mapping of TM Forum assets; this feature can be leveraged to identify and derive the added value of the content.
  3. TM Forum assets can be classified as either Industry Architectures or Common Systems Architecture in (TOGAF®) Enterprise Continuum language. TOGAF® provides a widely accepted methodology to leverage these architectures into the development of enterprise architecture.
  4. Professionals who use TM Forum assets will find templates and guidelines in TOGAF® that facilitate the transformation of those assets into deliverables for a specific project/program.
  5. TOGAF concepts as defined in the TOGAF® Architecture Content Framework provide clear definitions of which artifacts from TM Forum assets have to be developed for an architecture construct to be consistent and comprehensive.

The full report can be obtained from The Open Group and TM Forum websites. At The Open Group, you can download it here.

Andrew Josey is Director of Standards within The Open Group, responsible for the Standards Process across the organization. Andrew leads the standards development activities within The Open Group Architecture Forum, including the development and maintenance of TOGAF® 9, and the TOGAF® 9 People certification program. He also chairs the Austin Group, the working group responsible for development and maintenance of the POSIX 1003.1 standard that forms the core volumes of the Single UNIX® Specification. He is the ISO project editor for ISO/IEC 9945 (POSIX). He is a member of the IEEE Computer Society’s Golden Core and is the IEEE P1003.1 chair and the IEEE PASC Functional chair of Interpretations. Andrew is based in the UK.

1 Comment

Filed under Enterprise Architecture, TOGAF®

Cloud impact on platforms and applications – a perspective on architecture

By Shripadraj Mujumdar, Cognizant Technology Solutions


Today’s large businesses are heavily characterized by globalization and interconnectedness. The key trends of importance can therefore be summarized as: business process improvement and consolidation, supply chain integration, reducing enterprise costs, managing change initiatives, increasing the use of analytics, improving enterprise workforce effectiveness, enabling innovation, and targeting customers and markets more effectively. In a nutshell, organizations are looking for shorter lines of communication, deeper relationships between stakeholders, and more shared knowledge and data about the company and the business, up and down the ranks and across linked partners.

Similarly, as the definition of the enterprise and its boundaries keeps expanding, the technologies playing a pivotal role in supporting these key focus areas can be identified as virtualization and Cloud, Service-Oriented Architecture, Web 2.0, mobile technologies, unified communications, business intelligence, and document management and storage. The game changer here is specifically “the Cloud,” which provides the model needed to commoditize platforms, infrastructure and applications.

Software application architectures have traversed a path from separation of functions, objects and layers to separation of concerns in terms of services. The new applications are more open, collaborative, and social, and the main facilitating factor has been the maturity and adoption of standards for interoperability and compliance with feature sets. With Cloud power playing its role, this core definition of software and applications is set to undergo yet another paradigm shift, impacting how enterprise IT systems are architected, leveraged and governed.

Impact of Cloud on platforms and architecture

In most of the literature, more often than not, the advantages listed for Cloud Computing put the emphasis on its infrastructure aspect and its economic benefit. The economic change that may come to computing services lies in changes to the patterns and factors of production and consumption, besides the aggregation of supply and demand. Moreover, the ability to make over-the-counter choices and substitutions is also an important factor that will shape the overall economics of computing in times to come. The real power of the Cloud model will be realized in how platforms, applications and services proliferate on pay-per-use or more creative models. For applications that follow new-generation styles, here are possible ways the Cloud may impact architecture definition and process. In the future, off-the-shelf patterns will emerge to internalize many such considerations, and we may even see support available at the platform/framework level.

  • Business Architecture- Base-level business processes stand to benefit from Cloud-based IT service choices in an open model, thus impacting overall business architecture at the enterprise level, driven by the need and ability to explore opportunities for optimization, enhancement and scaling
  • Enterprise Assets Reorganization- Based on the speed of adoption, there will be a trend to consolidate enterprise assets, replacing and reassembling some of them using trusted Cloud-based services, platforms and infrastructure, thus eliminating redundancy. There will be a boost to such service-based outsourcing to leverage cost advantages and minimize in-house maintenance. The data architecture, again, will have a consolidation phase and should be planned to leverage Cloud-based data services – Data as a Service – and to segregate data within enterprise boundaries and across a public Cloud, creating data virtualization. Moreover, as more Cloud security standards and specifications evolve, there will potentially be shifts in data organization. The possibility of easy access to big data will drive enterprise-level data models further
  • Solution and Application Architecture- The most important impact of the Cloud is on the ability of solutions and applications to pop their heads out of the traditional box and utilize massive computing power and scale. Some of the features described below may apply depending on the actual application scenario; however, for large-scale, multi-client, multi-scenario applications, all would make sense in the longer term
    • Service Orientation- Cloud-based application architecture provides a real home for service orientation. With Cloud-based architectures, out-of-the-box support for service endpoint abstractions will be available for SaaS choices; hence new applications need to expose service interfaces beyond regular access channels to support virtualized service mash-ups. Applications that operate in hybrid mode may well be thought of as assembled from Cloud and non-Cloud services and data
    • Multi-tenancy- Just as the Cloud environment facilitates sharing of computing infrastructure across multiple consumers, Cloud applications running on top of it need to support multiple tenants to realize economies of scale. There can be flexibility to adopt specific tenancy models based on consumer preferences, with isolation levels worked out accordingly
    • New Integration Scenarios- As applications and business platforms move to the Cloud, their interlinkages will invariably have to shift in a similar way, giving rise to new scenarios in which Cloud-based integration can be performed, exploiting points of advantage to find optimal approaches
    • Cost of Operation/Consumption- The effect of commoditization will directly create a requirement at the application level to support a similar model. Thus multiple models may have to be supported with a blended price structure, similar to the support levels we have in regular service provisioning
    • Interoperability- Applications and their services need to follow interoperability models, considering the wide range of usage scenarios. As of today, the Cloud platforms themselves don’t set a great example of portability and interoperation; however, the services hosted on such platforms face a wider range of access scenarios and need to follow well-established standards to facilitate mash-up scenarios
    • Dynamic & Flexible Definition- Considering the pay-per-use model and service orientation, applications will need to support selection of the features and services to be used, and the pricing model around those services, in order to be competitive and optimal. This also means transferring the benefit of the granular Cloud-level pricing model to end consumers
    • Extensibility- Much software today already has extension features that hook up online to perform certain tasks. Going forward, application extensibility will not be limited to local scenarios: applications will be able to choose background services and features for consumption and add features dynamically. This will allow the creation of very powerful applications; however, from an architecture standpoint, provisioning for such extension points has to be well thought out
    • Elasticity- Configuration & Self Service- In order to support scalability and load variance, the dynamic elasticity of the Cloud needs to be utilized in an optimal way. Applications may have to provide a self-service model for configuring load modeling and provision Cloud resources accordingly. Similarly, a self-service model may have to be extended to encompass feature selection scenarios, with pricing then depending on such selection
    • Parallelism– The batch process frameworks and scenarios in today’s applications may get redefined going forward, due to the massive parallelization capabilities that Cloud-based infrastructure can provide. This will in essence also have an impact on certain business processes once latency is removed. For generic frameworks and applications supporting a wide range of consumption patterns, this may have to be configurable
    • Context Awareness- Applications may have to be context-aware in order to be more usable. While applications move to a few centralized locations from a deployment perspective, consumption will increase due to high levels of commoditization and access, and a common base will have to cater to a variety of contexts
    • Service Discovery and Catalogs- With multiple capabilities, parameters, and pricing models expected of services, service discovery and metadata will be extended with richer service descriptions. The frameworks that support service discovery and facilitate consumption will change to support this in the future
    • Monitoring, Operations- The monitoring and operational support of Cloud components in an application will be more complex, and limited by the facilities those components expose. Thus, based on specific needs and the Cloud infrastructure, the supporting frameworks and architectural elements need to be considered
    • Metrics, SLAs- The metrics and SLAs used in on-premise software may change in definition: they will be aggregate functions of on-premise and Cloud services/components collectively. Thus, beyond the functional aspect of Cloud service catalogs, operational specifications may also need to be noted and can become an important factor when choosing among similar components
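
To make the multi-tenancy consideration above concrete, here is a minimal sketch of one shared application instance serving several tenants, with isolation enforced at the data-access layer. The class and method names (`TenantStore`, `put`, `get`) are my own illustrative assumptions, not part of any particular Cloud platform.

```python
# Minimal multi-tenancy sketch: a single shared store that partitions every
# record by tenant ID, so one deployment serves many tenants with no
# cross-tenant visibility. All names here are hypothetical.

class TenantStore:
    """Shared data store that keeps each tenant's records in its own partition."""

    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id: str, key: str, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id: str, key: str):
        # A tenant can only ever read its own partition. The isolation level
        # could be strengthened (separate schema or database per tenant)
        # depending on consumer preference, as noted above.
        return self._data.get(tenant_id, {}).get(key)

store = TenantStore()
store.put("acme", "plan", "gold")
store.put("globex", "plan", "silver")
print(store.get("acme", "plan"))    # -> gold
print(store.get("acme", "secret"))  # -> None: no leakage across tenants
```

The design choice sketched here (shared infrastructure, logical partitioning) is the cheapest tenancy model; the trade-off against dedicated-instance isolation is exactly the flexibility the bullet above describes.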

In conclusion, the Cloud, which used to be just one of the abstractions in traditional architecture diagrams, is now set to swallow its neighboring elements. When Cloud reaches full adoption, it will influence application architectures by driving them to be more componentized and service-oriented to support agility, commoditization and choice. Consolidating these points, my presentation on the “Reverse Impact of Cloud” on May 11 at The Open Group Conference, London, 2011 will cover some of the factors that are important considerations for Cloud-ready architectures, and present scenarios depicting application integration possibilities with the Cloud.

Shripadraj Mujumdar will be presenting on the “Reverse Impact of Cloud on Platforms and Architecture” at The Open Group Conference, London, May 9-13. Join us for best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Shripadraj R Mujumdar [Prasanna], Senior Architect at Cognizant Technology Solutions, India, is part of Cognizant’s Global Technology Office and brings 16 years of experience consulting on and architecting strategic technology initiatives in corporate and customer programs. He has been instrumental in the inception of new service offerings through COEs, competency and delivery excellence, and in a leadership capacity creating high-performance technology/domain-focused teams. Shripadraj has done substantial work in India and abroad for several top-notch global customers, providing technology solutions and consulting to solve their critical business issues. Apart from his work, he actively participates in the blogosphere and is an avid reader of science and philosophy. He holds a degree in engineering and has undergone a corporate education program on business leadership.

1 Comment

Filed under Cloud/SOA, Enterprise Architecture

TOGAF® Camp London: a free unconference where rules are made to be broken

By Steve Nunn, The Open Group

TOGAF® Camp is one of the highlights of The Open Group Conference experience, in my opinion: a place where there are no rules that aren’t made to be broken, and where every voice counts. As we’re preparing to hold another TOGAF® Camp in London on May 11, it’s worth revisiting what one entails and the kinds of discussions we have had in past “unconferences.”

If you’ve read my previous post on what an unconference is, you know that there is no set discussion going into TOGAF® Camp; indeed, any topic regarding Enterprise Architecture in general is welcomed. Previous discussions have revolved around evaluating TOGAF® usage in practice, including sharing experiences of such usage, and the use of EA/TOGAF® as a transformative framework (how to go from EA as a stovepipe to architecture in transformative projects). A particularly popular and instructive session that I recall focused on how to introduce EA within an enterprise. The ability to share experiences in relatively small groups appears to catalyze all attendees into full participation.

When I run The Open Group’s TOGAF® Camps, I enjoy watching people really delve into topics such as these, and getting excited about sharing their own experiences and learning from others. It’s rewarding to know that people take away new or refined ideas from these sessions that they will apply inside their own enterprises. And you, too, can benefit — not just from attending the next one in London, but by also reading highlights of the discussions from ones in the past on a wiki accessible to all.

The TOGAF® Camp in London will take place at Central Hall Westminster on Storey’s Gate as part of The Open Group Conference, London, May 9-13. It will be held from 3:30 p.m. to 6 p.m. on Wednesday, May 11 during the afternoon track session, and is free and open to all — whether attending the rest of the conference or not. Find out more or register here, or in person. Join us for libations afterwards for the full unconference experience!

Steve Nunn is the COO of The Open Group and the CEO of the Association of Enterprise Architects. An attorney by training, Steve has been with The Open Group since 1993. He is a native of the U.K. and is based in the U.S.

Comments Off

Filed under Enterprise Architecture, TOGAF®