Monthly Archives: May 2011

Enterprise Architecture & Emerging Markets

By Balasubramanian Somasundaram, Honeywell Technology Solutions Ltd.

Recently I came across an interesting announcement on two-tier ERP by the SaaS vendor NetSuite, along with an analyst’s observations on it. The analyst noted that the industry moves in cycles: from multiple ERP suites across a company’s locations, to flattening those differences with a single corporate-standard ERP, and now back to a multiple-ERP stack with the advent of SaaS options.

The crux of this phenomenon is how we manage technology selection across a globally distributed organization with diverse local complexities. I see it as an interesting challenge for the Enterprise Architecture practice to solve.

Enterprise Architecture, when governed from a company’s global or corporate headquarters, needs to balance the requirements of global and local entities. Those requirements often conflict, and it takes considerable experience and courage to reconcile them. The local needs of the architecture are most often triggered by factors such as:

  • Cost – Need to have a cost-effective solution in an emerging region
  • Size – Need to have a lightweight solution rather than a heavyweight one (such as a full ERP suite)
  • Regulatory/Compliance Requirements – Need to comply with local laws
  • Business Processes – Need to accommodate business process variations or cater to different customer segments

In the event of choosing a local solution that is not a corporate standard, those architecture exceptions need to be governed, including the integration of the two different solutions for cohesive management. The two-tier ERP approach mentioned above is a typical example of this scenario.

If we visualize Enterprise Architecture as a series of layers – Business/Information/Technology/Application Architectures – the verticals/segments across those layers would define the organizational units and locations (location-specific or organizational-unit-specific Enterprise Architectures).

The location verticals, when influenced by the above factors, could lead to new technology selections such as Cloud Computing and Software-as-a-Service. While this practice can improve autonomy at the local level, if unmanaged it could soon lead to spaghetti architectures. The most important side effect of localized adoption of cloud computing or mobile is increased fragmentation (of data, process and technology). And that would defeat the purpose of Enterprise Architecture.

In another, constructive scenario, if these standalone solutions need to exchange information with corporate information systems, EA again has a role to play by arbitrating the integration through the use of standards and guidelines.

As Serge Thorn articulated a few weeks ago on The Open Group blog, it’s time to review our EA practices and amend the core frameworks and processes to face the challenges emerging from technology mega-trends (Cloud, mobile) and evolving business models (emerging markets).

Balasubramanian Somasundaram is an Enterprise Architect with Honeywell Technology Solutions Ltd, Bangalore, a division of Honeywell Inc, USA. Bala has been with Honeywell Technology Solutions for the past five years and has contributed in several technology roles. His current responsibilities include architecture/technology planning and governance, solution architecture definition for business-critical programs, and technical oversight and review for programs delivered from the Honeywell IT India center. With more than 12 years of experience in the IT services industry, Bala has worked with a variety of technologies, with a focus on IT architecture practice. His current interests include Enterprise Architecture, Cloud Computing and mobile applications. He periodically writes about emerging technology trends that impact the Enterprise IT space on his blog. Bala holds a Master of Science in Computer Science from MKU University, India.

Filed under Enterprise Architecture

Government Outreach for Global Supply Chain Integrity (OTTF)

By Sally Long, The Open Group

On May 10th in London, a select group of technology, government and Cybersecurity leaders and supply chain strategists met for a lunchtime briefing and discussion during The Open Group Conference. The message that came across loud and clear from all who participated was that fostering honest and open dialogue between government and industry is critical to securing the global supply chain, and that the only way we will do this effectively is by working together to assure coordination and adoption among current and emerging approaches.

This industry/government roundtable event was the fourth in a series of planned government outreach events. In December and January, members of The Open Group Trusted Technology Forum (OTTF) met with Howard Schmidt, US Cybersecurity Coordinator for the Obama Administration, and with US House and Senate Committees and the Department of Commerce. In March, inroads were made with the Japanese government, and in April we held a session with government officials in India. More briefings and discussions are planned for Europe, Canada, China and Brazil.

The event in London brought together representatives from Atsec, Boeing, CA Technologies, Capgemini, CESG, Chatham House, Cisco, Fraunhofer SIT, Fujitsu, Hewlett-Packard, IBM, IDA, Kingdee Software, Microsoft, MITRE, NASA, Oracle, Real IRM, SAIC, SAP, and the UK Government. These representatives, joined by thought leaders from Chatham House, discussed global supply-chain challenges and a potential solution through The Open Group Trusted Technology Provider Framework (O-TTPF). CESG highlighted other existing approaches as effective in some areas, though those areas were not directly focused on supply-chain best practices.

The beauty of the O-TTPF, a set of best practices for engineering and secure development methods and supply chain integrity, is that the Framework and guidelines are being developed by industry (architects, developers, manufacturers and supply chain experts, with input from governments) for industry. The fact that these best practices will be open, international, publicly available and translated where appropriate will allow all providers to understand what they need to do to “Build with Integrity” – so that customers can “Buy with Confidence”.

This is critically important because, as we all know, a chain is only as strong as its weakest link. Even though a large system vendor may follow the O-TTPF best practices, such vendors often rely on sub-component suppliers of software and hardware from around the world; to maintain the integrity of their supply chains, those sub-suppliers need to understand what it means to be trustworthy as well.

One of the OTTF’s objectives is to develop an accreditation program, which will help customers, in government and industry, identify secure technology providers and products in the global supply chain. Governments and large enterprises that base their purchasing decisions on trusted technology providers who have developed their products using the best practices identified by the O-TTPF will be able to rely on a more comprehensive approach to risk management and product assurance when selecting COTS technology products.

One of the major messages at the Roundtable event was that the OTTF is not just about major industry providers. It’s about opening the doors to all providers and all customers, and it’s about reaching out to all governments to assure the O-TTPF best practice requirements are aligned with their acquisition requirements — so that there is true global recognition and demand for Trusted Technology Providers who conform to the O-TTPF Best Practices.

The OTTF members believe it is critical to reach out to governments around the world, to foster industry-government dialogue about government acquisition requirements for trusted technology and trusted technology providers, so they can enable the global recognition required for a truly secure global supply chain. Any government or government agency representative interested in working together to provide a trusted global supply chain can contact the OTTF global outreach and acquisition team through ottf-interest@opengroup.org.

The Forum operates under The Open Group, an international vendor- and technology-neutral consortium well known for providing an open and collaborative environment for such work. We are seeking additional participants from global government and commercial entities. If you are interested in learning more about the Forum please feel free to contact me, Sally Long, OTTF Forum Director, at s.long@opengroup.org.

Sally Long, Director of Consortia Services at The Open Group, has been managing customer-vendor forums and collaborative development projects for the past nineteen years. She was the Release Engineering Section Manager for all collaborative, multi-vendor development projects (OSF/1, DME, DCE, and Motif) at The Open Software Foundation (OSF) in Cambridge, Massachusetts. Following the merger of OSF and X/Open under The Open Group, Sally served as the Program Director for multiple Forums within The Open Group, including the Distributed Computing Environment (DCE) Forum, the Enterprise Management Forum, the Quality of Service (QoS) Task Force, the Real-time and Embedded Systems Forum and, most recently, The Open Group Trusted Technology Forum. Sally has also been instrumental in business development and program definition for certification programs developed and operated by The Open Group for the North American State and Provincial Lotteries Association (NASPL) and for the Near Field Communication (NFC) Forum. Sally has a Bachelor of Science degree in Electrical Engineering from Northeastern University in Boston, Massachusetts, and a Bachelor of Science degree in Occupational Therapy from The Ohio State University.

Filed under Cybersecurity, Supply chain risk

Facebook – the open source data center

By Mark Skilton, Capgemini

The recent announcement by Facebook of its decision to publish its data center specifications as open source illustrates an emerging trend in the commoditization of compute resources.

Key features of the new facility include:

  • The Oregon facility, announced to the world press in April 2011, is 150,000 sq. ft. and a $200 million investment. At any one time, the whole of Facebook’s 500-million-user base could be hosted from this one site. Another Facebook data center facility is scheduled to open in 2012 in North Carolina, and future data centers may follow in Europe or elsewhere if required by the Palo Alto, Calif.-based company
  • The Oregon data center enables Facebook to reduce its energy consumption per unit of computing power by 38%
  • The data center has a PUE of 1.07, well below the EPA-defined state-of-the-art industry average of 1.5. This means roughly 93% of the energy from the grid makes it into every Open Compute Server (see the quick arithmetic check after this list).
  • Removed centralized chillers, eliminated traditional inline UPS systems, and removed the 480V-to-208V transformation step
  • Ethernet-powered LED lighting and passive cooling infrastructure reduce energy spent on running the facility
  • A new second-level evaporative cooling system – a multi-layer approach to managing room temperature and air filtration
  • Launch of the “Open Compute Project” to share the data center design as open source. The aim is to encourage collaboration on data center design to improve overall energy consumption and environmental impact. Other observers also see this as a way of further reducing component sourcing costs, as most of the designs are low-cost commodity hardware
  • The servers are 38% more efficient and 24% lower cost
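
As a quick sanity check on the PUE figures quoted above, here is a short, purely illustrative calculation; the 1.07 and 1.5 values come from the announcement, and the arithmetic is simply the definition of PUE:

    # Rough sanity check of the PUE figures quoted above (illustrative only).
    # PUE = total facility energy / energy delivered to the IT equipment,
    # so the fraction of grid energy that reaches the servers is 1 / PUE.

    facebook_pue = 1.07  # Prineville, Oregon facility (from the announcement)
    industry_pue = 1.50  # EPA-defined state-of-the-art industry average

    def it_energy_fraction(pue: float) -> float:
        """Fraction of energy drawn from the grid that reaches the IT load."""
        return 1.0 / pue

    print(f"Facebook: {it_energy_fraction(facebook_pue):.1%}")  # 93.5% -> the ~93% quoted above
    print(f"Industry: {it_energy_fraction(industry_pue):.1%}")  # 66.7%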

While this can be described simply as a major Cloud services company treating its data centers as commodity and non-core to its services business, it perhaps signals a more significant shift in the Cloud Computing industry in general.

Facebook making its data center specifications open source demonstrates that IaaS (Infrastructure as a Service) utility computing is now seen as a commodity, non-differentiating for companies like Facebook and for anyone else who wants cheap compute resources.

What becomes essential is operational efficiency: how these services are provisioned and delivered is now the key differentiator.

Furthermore, the trend is towards what you do with the IaaS storage and compute. How we architect solutions that deliver Software as a Service (SaaS) capabilities becomes the essential differentiator, and how business models and consumers maximize those benefits raises the importance of architecture and solutions for Cloud. This is key to The Open Group’s vision of “Boundaryless Information Flow™”. The difference is made by how Cloud services are architected, and by the architects who design effective Cloud services using these commodity Cloud resources and capabilities. Open standards and interoperability are critical to this success. How solutions and services are developed to build private, public or hybrid Clouds is the new differentiation. This does not ignore the fact that world-class data centers and infrastructure services remain vital, of course, but it is now the way they are used to create value that becomes the debate.

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing that achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include developing new Cloud Computing standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models; he contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.

Filed under Cloud/SOA

The Open Group updates Enterprise Security Architecture, guidance and reference architecture for information security

By Jim Hietala, The Open Group

One of two key focus areas for The Open Group Security Forum is security architecture. The Security Forum has several ongoing projects in this area, including our TOGAF® and SABSA integration project, which will produce much-needed guidance on how to use these frameworks together.

When the Network Application Consortium (NAC) ceased operating a few years ago, The Open Group agreed to bring the organization’s intellectual property into our Security Forum, along with extending membership to former NAC members. While the NAC did great work in information security, one of its publications stood out as a highly valuable resource. This document, Enterprise Security Architecture (ESA): A Framework and Template for Policy-Driven Security, was originally published by the NAC in 2004 and provided valuable guidance to IT architects and security architects. At the time it was first published, the ESA document filled a void in the IT security community by describing important information security functions and how they relate to each other in an overall enterprise security architecture. ESA was unique at the time in describing information security architectural concepts and in providing examples in a reference architecture format.

The IT environment has changed significantly in the years since the original publication of the ESA document. Major changes affecting information security architecture in this time include the increased use of mobile computing devices, the increased need to collaborate (and to federate identities among partner organizations), and changes in threats and attacks.

Members of the Security Forum, having realized the need to revisit the document and update its guidance to address these changes, have significantly rewritten the document to provide new and revised guidance. Significant changes to the ESA document have been made in the areas of federated identity, mobile device security, designing for malice, and new categories of security controls including data loss prevention and virtualization security.

In keeping with the many changes in our industry, The Open Group Security Forum has now updated and published a significant revision, the Open Enterprise Security Architecture (O-ESA), which you can access and download (for free; minimal registration required) here, or purchase as a hardcover edition here.

Our thanks to the many members of the Security Forum (and former NAC members) who contributed to this work, and in particular to Stefan Wahe who guided the revision, and to Gunnar Peterson, who managed the project and provided significant updates to the content.

Jim Hietala, an IT security industry veteran, is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.

Filed under Security Architecture

Twtpoll results from The Open Group Conference, London

The Open Group set up three informal Twitter polls this week during The Open Group Conference, London. If you wondered about the results, or just want to see what our Twitter followers think about some topline issues in the industry in very simple terms, see our twtpoll.com graphs below.

On Day One of the Conference, when the focus of the discussions was on Enterprise Architecture, we polled our Twitter followers about the perceived value of certification. Your response was perhaps disappointing, but unsurprising:

On Day Two, while we focused on security during The Open Group Jericho Forum® Conference, we queried you about what you see as being the biggest organizational security threat. Out of four stated choices, plus the opportunity to fill in your own answer, the results were clear: two specific areas of security keep you up at night the most.

And finally, during Day Three’s emphasis on Cloud Computing, we asked our Twitter followers about the types of Cloud they’re using or are likely to use.

What do you think of our informal poll results? Do you agree? Disagree? And why?

Want some survey results you can really sink your teeth into? View the results of The Open Group’s State of the Industry Cloud Survey. Read our blog post about it, or download the slide deck from The Open Group bookstore.

The Open Group Conference, London continues in member meetings for the rest of this week. Join us in Austin, July 18-22, for our next Conference, featuring best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Filed under Certifications, Cloud/SOA, Cybersecurity, Enterprise Architecture

What does the Amazon EC2 downtime mean?

By Mark Skilton, Capgemini

The recent Amazon EC2 outage in April this year triggers some thoughts about a very high-profile topic in Cloud Computing: how secure and available is your data in the Cloud?

While the outage was more to do with the service-level availability of data and services from your Cloud provider, the recent and potentially more concerning theft of Epsilon e-mail data (and, as I write this, the breaking news of the Sony e-mail theft) further highlights this big topic in Cloud Computing.

My initial reaction on hearing about the outage was that it was due to over-allocation driven by high demand in the US East region, which led to a cascading system failure. I subsequently read that Amazon attributed it to a network glitch that triggered storage backups to be created automatically in greater numbers than needed, consuming the elastic block storage capacity. This, I theorized, created the supply-unavailability problem.

From a business perspective, this focuses attention on the issues of relying on a single primary Cloud provider. Businesses like Quora and Foursquare that were affected “live in the Cloud,” so backup and secondary Cloud support are clearly important. Some of these are economic decisions, trade-offs between loss of business and business continuity. The outage highlights the vulnerability of these enterprises, even though a highly successful organization like Amazon makes such an event rare. Consumers of Cloud services need to consider mitigating actions such as disruption insurance, secondary backups, and assurances of SLAs, which are largely out of the hands of SMB-market users. One result of outages at Cloud providers has been the emergence of a new market called “Cloud Backup,” which is starting to gain favor with customers and providers by offering added levels of protection and service failover.
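
To make the point about secondary backups concrete, here is a minimal, purely illustrative sketch of the kind of health-check-and-failover logic a Cloud consumer might put in front of a primary and a secondary provider. The endpoint URLs and the failure threshold are hypothetical placeholders, not any particular vendor’s API:

    # Minimal, illustrative failover sketch: prefer the primary provider, but
    # switch to a secondary once the primary has failed several health checks.
    # The URLs and threshold below are hypothetical placeholders.
    import urllib.request

    PRIMARY = "https://primary-provider.example.com/health"      # hypothetical
    SECONDARY = "https://secondary-provider.example.com/health"  # hypothetical
    MAX_FAILURES = 3  # consecutive failed checks before failing over

    def is_healthy(endpoint: str, timeout: float = 2.0) -> bool:
        """Return True if the provider's health endpoint answers with HTTP 200."""
        try:
            with urllib.request.urlopen(endpoint, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def choose_provider(consecutive_failures: int) -> str:
        """Stay on the primary until it has failed MAX_FAILURES checks in a row."""
        if consecutive_failures >= MAX_FAILURES and is_healthy(SECONDARY):
            return SECONDARY
        return PRIMARY

In practice, failover also involves keeping data replicated to the secondary and switching DNS or load balancers, which is exactly the gap the emerging “Cloud Backup” offerings aim to fill.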

While these are concerning issues, I believe most outage issues can be addressed by exercising due diligence in the procurement and use of any service that involves a third party. I’ve expanded the definition of due diligence in Cloud Computing to include at least six key processes that any prospective Cloud buyer should be aware of and make contingencies for, as you would with any purchase of a business-critical service:

  • Security management
  • Compliance management
  • Service Management (ITSM and License controls)
  • Performance management
  • Account management
  • Ecosystem standards management

I don’t think publishing a bill of rights for consumers is enough to insure against failure. One thing that Cloud Computing design has taught me is that part of the architectural shift brought about by Cloud is the emergence of automation as an implicit part of the operating model, there to enable elasticity. Ironically, this automation may have been a factor in the Amazon situation, but overall the benefits of Cloud far outweigh the downsides, which can be re-engineered and resolved.
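
As an aside on that last point, the sketch below is a deliberately simplified, hypothetical illustration (not a description of Amazon’s actual systems) of how automation without a limit can amplify a fault, and how a simple capacity guard changes the outcome:

    # Hypothetical illustration only: automated re-mirroring without a throttle
    # can exhaust spare capacity after a transient fault; a cap limits the damage.
    from typing import Optional

    def remirror(affected_volumes: int, spare_capacity: int,
                 cap: Optional[int] = None) -> int:
        """Return spare capacity left after automatically re-mirroring volumes."""
        requests = affected_volumes if cap is None else min(affected_volumes, cap)
        return spare_capacity - requests

    # A brief glitch marks 10,000 volumes as needing new mirrors, but the zone
    # only has 4,000 volumes' worth of spare capacity available.
    print(remirror(10_000, 4_000))            # -6000: capacity exhausted, cascade risk
    print(remirror(10_000, 4_000, cap=2_000)) #  2000: throttled, degraded but recoverable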

A useful guide to addressing some of the business impact can be found in a new book on Cloud Computing for Business that The Open Group plans to publish this quarter. The book addresses many of these challenges in understanding and driving the value of Cloud Computing in the language of business, with chapters on the business use of Cloud, including risk management. Check The Open Group website for more information on The Open Group Cloud Computing Work Group and the Cloud publications in the bookstore at http://www.opengroup.org.

Cloud Computing is a key topic of discussion at The Open Group Conference, London, May 9-13, which is currently underway.

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing that achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include developing new Cloud Computing standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models; he contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.

Filed under Cloud/SOA

SOA is not differentiating, Cloud Computing is

By Mark Skilton, Capgemini

Warning: I confess at the start of this blog that I chose a deliberately evocative title to try to get your attention, and I guess I succeeded if you are reading this now. Having written a couple of blogs to date with what I believed were finely honed words on current lessons learnt and the future of technology, and having seen little reaction, I thought I’d try a more direct approach and head straight for a pressing matter of architectural and strategic concern.

Service Oriented Architecture (SOA) is now commonplace across all software development lifecycles and has entered the standard language of information technology design. We hear “service oriented” and “service enabled” handed out as standard phrases and common terms of reference. The point is that the processes and practices of SOA are industrial and not differentiating, because everyone is doing them, whether from a design standpoint or as a business systems service approach. They enable standardization and abstraction of services in the design and build stages to align with key business and technology strategy goals, and they enable technology to be developed or utilized that meets specific technical or business service requirements.

SOA practices are prerequisites for good design practice. SOA is a foundation of Service Management (ITIL) processes and is found in diverse software engineering methods, from Business Process Management Systems (BPMS) to rapid Model Driven Architecture design techniques that build and compose web-enabled services. SOA is seen as a key method on the journey to industrialization, supporting consolidation and rationalization as well as lean engineering techniques to optimize the business and systems landscape. SOA provides good development practice in defining user requirements that capture what the user wants, and in translating these into an understanding of how best to build agile, decoupled and flexible architectural solutions.

My point is that these methods are now mainstream, and merely putting SOA into your proposal, or listing it as a capability, is no longer going to be a “deal clincher” or a key “business differentiator”. The counterview I hear from SOA practitioners is that SOA is not just about standardized service practices but also about how to identify the services that are differentiating. But that’s the rub: if SOA treats every requirement or design as a service problem, where is the difference?

A possible answer lies in how SOA is used. Today and in the future, the business differentiator will be the way the SOA method is applied. But not all SOA methods are equal, so what will it take to highlight SOA method differentiation for business benefit?

Enter Cloud Computing, with its origins in utility computing, ubiquitous web services and the Internet. The definition of Cloud Computing, much as in the early days of Service Orientation, is still evolving as we work out the boundaries and types of services it encompasses. But the big disruptive step change has been the new business model that Cloud Computing has introduced.

Cloud Computing has introduced automatic provisioning, self-service, and automatic load balancing and scaling of technology resources. Building on virtualization principles, it has extended into on-demand metering and billing consumption models, large-scale computing resource data centers, and large-scale distributed businesses on the web, using the power of the Internet to reach and run new business models. I can hear industry observers say this is just a consequence of the timely convergence of pervasive technology network standards, rapidly falling compute and storage costs, and the massive “hockey stick” growth in bandwidth, smart devices and wide-scale adoption of web-based services.

But this is a step change, not simply “another technology phase”.

Put another way: it has brought back-office computing resources and on-demand Software as a Service models into a dynamic new business model that changes the way business and IT work. It has “merged” physical and logical services into a new on-demand marketplace model, where hitherto it was “good practice” to design consumer and provider services separately. All of that has changed.

But does SOA fully address these aspects of a Cloud Computing architecture? Answer these three simple questions (a small illustrative sketch follows the list):

  • Do logical service contracts define how multi-tenant environments need to work to support many concurrent service users?
  • Does SOA allow automated balancing and scaling to be considered when the initial set of declarative conditions in the service contract no longer “fits” operating conditions that require scaling up or down?
  • Does SOA recognize the wider marketplace and ecosystem dynamics, which may result in evolving consumer/producer patterns that are dynamic rather than static, driving new sourcing behaviors and usage patterns that may involve using services through a portal with no contract?
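
To make those questions more tangible, here is a small illustrative sketch contrasting the attributes a classic SOA service contract typically declares with the additional Cloud-specific attributes the three questions imply. The field names are assumptions for illustration, not drawn from any standard’s schema:

    # Illustrative contrast between a classic SOA service contract and the extra
    # attributes a Cloud-aware contract implies; field names are assumptions only.
    from dataclasses import dataclass

    @dataclass
    class SoaServiceContract:
        """Typical declarative SOA contract: an interface plus static SLA targets."""
        interface: str            # e.g. a WSDL or endpoint reference
        response_time_ms: int     # static performance target
        availability_pct: float   # static availability target

    @dataclass
    class CloudServiceContract(SoaServiceContract):
        """Adds the multi-tenancy, elasticity and ecosystem concerns raised above."""
        max_concurrent_tenants: int = 1       # question 1: multi-tenant behaviour
        min_instances: int = 1                # question 2: bounds for automated
        max_instances: int = 1                #   scaling when conditions no longer "fit"
        metering_unit: str = "instance-hour"  # pay-per-use consumption model
        open_marketplace: bool = False        # question 3: portal/ecosystem use
                                              #   without a negotiated contract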

For sure, ecosystem principles are axiomatic in that they will drive standards for containers, protocols and semantics, which SOA standards are well placed to adopt as boundary conditions for service contracts in a service portfolio. But my illustrations here are meant to broaden the debate about how to engage SOA as a differentiator when it meets a “new kid on the block” like Cloud, which is rapidly morphing into new models “as we speak”, extending into social networks, mobile services and location-aware integration.

My real intention is to raise awareness of and interest in these subjects and in the activities The Open Group is engaged in to address them. I sincerely hope you will follow these up as further reading and investigation with The Open Group; and of course, do feel free to comment and contact me :-)

Cloud Computing and SOA are key topics of discussion at The Open Group Conference, London, May 9-13, which is underway. 

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing that achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include developing new Cloud Computing standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models; he contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.

Filed under Cloud/SOA