
The Open Group Boston 2014 Preview: Talking People Architecture with David Foote

By The Open Group

Among all the issues that CIOs, CTOs and IT departments are facing today, staffing is likely near the top of the list of what’s keeping them up at night. Sure, there’s dealing with constant (and disruptive) technological changes and keeping up with the latest tech and business trends, such as having a Big Data, Internet of Things (IoT) or a mobile strategy, but without the right people with the right skills at the right time it’s impossible to execute on these initiatives.

Technology jobs are notoriously difficult to fill–far more difficult than positions in other industries where roles and skillsets may be much more static. And because technology is rapidly evolving, the roles for tech workers are also always in flux. Last year you may have needed an Agile developer, but today you may need a mobile developer with secure coding ability and in six months you might need an IoT developer with strong operations or logistics domain experience—with each position requiring different combinations of tech, functional area, solution and “soft” skillsets.

According to David Foote, IT Industry Analyst and co-founder of IT workforce research and advisory firm Foote Partners, the mash-up of HR systems and ad hoc people management practices most companies have been using for years to manage IT workers has become frighteningly ineffective. He says that to cope in today’s environment, companies need to architect their people infrastructure much as they have been architecting their technical infrastructure.

“People Architecture” is the term Foote has coined for applying to IT workforce management the traditional architectural principles and practices that may already be in place elsewhere within an organization. This includes applying such things as strategy and capability roadmaps, phase gate blueprints, benchmarks, performance metrics, governance practices and stakeholder management to human capital management (HCM).

HCM components for People Architecture typically include job definition and design, compensation, incentives and recognition, skills demand and acquisition, job and career paths, professional development and work/life balance.

Part of the dilemma for employers right now, Foote says, is that there is very little job title standardization in the marketplace and too many job titles floating around IT departments today. “There are too many dimensions and variability in jobs now that companies have gotten lost from an HR perspective. They’re unable to cope with the complexity of defining, determining pay and laying out career paths for all these jobs, for example. For many, serious retention and hiring problems are showing up for the first time. Work-around solutions used for years to cope with systemic weaknesses in their people management systems have stopped working,” says Foote. “Recruiters start picking off their best people and candidates are suddenly rejecting offers and a panic sets in. Tensions are palpable in their IT workforce. These IT realities are pervasive.”

Twenty-five years ago, Foote says, defining roles in IT departments was easier. But then the Internet exploded and technology became far more customer-facing, shifting basic IT responsibilities from highly technical people deep within companies to roles requiring more visibility and transparency within and outside the enterprise. Large chunks of IT budgets moved into the business lines while traditional IT became more of a business itself.

According to Foote, IT roles became siloed not just by technology but by functional areas such as finance and accounting, operations and logistics, sales, marketing and HR systems, and by industry knowledge and customer familiarity. Then the IT professional services industry rapidly expanded to compete with its customers for talent in the marketplace. Even the architect role changed: an Enterprise Architect today can specialize in applications, security or data architecture among others, or focus on a specific industry such as energy, retail or healthcare.

Foote likens the fragmentation of IT jobs and skillsets that’s happening now to the emergence of IT architecture 25 years ago. Just as technical architecture practices emerged to help make sense of the disparate systems rapidly growing within companies and how best to determine the right future tech investments, a people architecture approach today helps organizations better manage an IT workforce spread through the enterprise with roles ranging from architects and analysts to a wide variety of engineers, developers and project and program managers.

“Technical architecture practices were successful because—when you did them well—companies achieved an understanding of what they have systems-wise and then connected it to where they were going and how they were going to get there, all within a process inclusive of all the various stakeholders who shared the risk in the outcome. It helped clearly define enterprise technology capabilities and gave companies more options and flexibility going forward,” according to Foote.

“Right now employers desperately need to incorporate in human capital management systems and practice the same straightforward, inclusive architecture approaches companies are already using in other areas of their businesses. This can go a long way toward not just lessening staffing shortages but also executing more predictably and being more agile in face of constant uncertainties and the accelerating pace of change. Ultimately this translates into a more effective workforce whether they are full-timers or the contingent workforce of part-timers, consultants and contractors.

“It always comes down to your people. That’s not a platitude but a fact,” insists Foote. “If you’re not competitive in today’s labor marketplace and you’re not an employer where people want to work, you’re dead.”

One industry that he says has gotten it right is the consulting industry. “After all, their assets walk out the door every night. Consulting groups within firms such as IBM and Accenture have been good at architecting their staffing because it’s their job to get out in front of what’s coming technologically. Because these firms must anticipate customer needs before they get the call to implement services, they have to be ahead of the curve in already identifying and hiring the bench strength needed to fulfill demand. They do many things right to hire, develop and keep the staff they need in place.”

Unfortunately, many companies take too much of a just-in-time approach to their workforce, so they are always managing staffing from a position of scarcity rather than looking ahead, Foote says. But this is changing, in part because companies are tired of never having the people they need and of never being able to execute predictably.

The key is to put a structure in place that addresses a strategy around what a company needs and when. This applies not just to the hiring process, but also to compensation, training and advancement.

“Architecting anything allows you to be able to, in a more organized way, be more agile in dealing with anything that comes at you. That’s the beauty of architecture. You plan for the fact that you’re going to continue to scale and continue to change systems, the world’s going to continue to change, but you have an orderly way to manage the governance, planning and execution of that, the strategy of that and the implementation of decisions knowing that the architecture provides a more agile and flexible modular approach,” he said.

Foote says organizations such as The Open Group can lend themselves to facilitating People Architecture in a couple of different ways: first, through extending the principles of architecture to human capital management, and second, through vendor-independent, expertise- and experience-driven certifications, such as TOGAF® or OpenCA and OpenCITS, that help companies define core competencies for people and that provide opportunities for training and career advancement.

“I’m pretty bullish on many vendor-independent certifications in general, particularly where a defined book of knowledge exists that’s achieved wide acceptance in the industry. And that’s what you’ve got with The Open Group. Nobody’s challenging the architectural framework supremacy of TOGAF that I’m aware of. In fact, large vendors with their own certifications participated actively in developing the framework and applying it very successfully to their business models,” he said.

Although the process of implementing People Architecture can be difficult and may take several years to master (much like Enterprise Architecture), Foote says it is making a huge difference for companies that implement it.

To learn more about People Architecture and models for implementing it, plan to attend Foote’s session at The Open Group Boston 2014 on Tuesday, July 22. Foote’s session will address how architectural principles are being applied to human capital so that organizations can better manage their workforces from hiring and training through compensation, incentives and advancement. He will also discuss how career paths for EAs can be architected. Following the conference, the session proceedings will be available to Open Group members and conference attendees at www.opengroup.org.

Join the conversation – #ogchat #ogBOS

David Foote is an IT industry research pioneer, innovator, and one of the most quoted industry analysts on global IT workforce trends and the many facets of the human side of technology value creation. His two decades of groundbreaking research and analysis of IT-business cross-skilling and technology/business management integration, along with his industry-leading work in IT skills demand and compensation benchmarking, have earned him a place on a short list of thought leaders in IT human capital management.

A former Gartner and META Group analyst, David leads the research and analytical practice groups at Foote Partners that reach 2,300 customers on six continents.


Why Technology Must Move Toward Dependability through Assuredness™

By Allen Brown, President and CEO, The Open Group

In early December, a technical problem at the U.K.’s central air traffic control center in Swanwick, England caused significant delays that were felt at airports throughout Britain and Ireland, also affecting flights in and out of the U.K. from Europe to the U.S. At Heathrow—one of the world’s largest airports—alone, there were a reported 228 cancellations, affecting 15 percent of the 1,300 daily flights flying to and from the airport. With a ripple effect that also disturbed flight schedules at airports in Birmingham, Dublin, Edinburgh, Gatwick, Glasgow and Manchester, the British National Air Traffic Services (NATS) were reported to have handled 20 percent fewer flights that day as a result of the glitch.

According to The Register, the problem was caused when a touch-screen telephone system that allows air traffic controllers to talk to each other failed to update during what should have been a routine shift change from the night to daytime system. According to news reports, the NATS system is the largest of its kind in Europe, containing more than a million lines of code. It took the engineering and manufacturing teams nearly a day to fix the problem. As a result of the snafu, Irish airline Ryanair even went so far as to call on Britain’s Civil Aviation Authority to intervene to prevent further delays and to make sure better contingency efforts are in place to prevent such failures happening again.

Increasingly complex systems

As businesses have come to rely more and more on technology, the systems used to keep operations running smoothly from day to day have grown not only increasingly large but also increasingly complex. We are long past the days when a single mainframe was used to handle a few batch calculations.

Today, large global organizations, in particular, have systems that are spread across multiple centers of technical operations, often scattered in various locations throughout the globe. And with industries also becoming more inter-related, even individual company systems are often connected to larger extended networks, such as when trading firms are connected to stock exchanges or, as was the case with the Swanwick failure, airlines are affected by NATS’ network problems. Often, when systems become so large that they are part of even larger interconnected systems, the boundaries of the entire system are no longer always known.

The Open Group’s vision for Boundaryless Information Flow™ has never been closer to fruition than it is today. Systems have become increasingly open out of necessity because commerce takes place on a more global scale than ever before. This is a good thing. But as these systems have grown in size and complexity, there is more at stake when they fail than ever before.

The ripple effect felt when technical problems shut down major commercial systems cuts far, wide and deep. Problems such as what happened at Swanwick can affect the entire extended system. In this case, NATS, for example, suffers from damage to its reputation for maintaining good air traffic control procedures. The airlines suffer in terms of cancelled flights, travel vouchers that must be given out and angry passengers blasting them on social media. The software manufacturers and architects of the system are blamed for shoddy planning and for not having the foresight to prevent failures. And so on and so on.

Looking for blame

When large technical failures happen, stakeholders, customers, the public and now governments are beginning to look for accountability for these failures, for someone to assign blame. When the Obamacare website didn’t operate as expected, the U.S. Congress went looking for blame and jobs were lost. In the NATS fiasco, Ryanair asked for the government to intervene. Risk.net has reported that after the Royal Bank of Scotland experienced a batch processing glitch last summer, the U.K. Financial Services Authority wrote to large banks in the U.K. requesting they identify the people in their organizations responsible for business continuity. And when U.S. trading company Knight Capital lost $440 million in 40 minutes when a trading software upgrade failed in August, U.S. Securities and Exchange Commission Chairman Mary Schapiro was quoted in the same article as stating: “If there is a financial loss to be incurred, it is the firm committing the error that should suffer that loss, not its customers or other investors. That more than anything sends a wake-up call to the entire industry.”

As governments, in particular, look to lay blame for IT failures, companies—and individuals—will no longer be safe from the consequences of these failures. And it won’t just be reputations that are lost. Lawsuits may ensue. Fines will be levied. Jobs will be lost. Today’s organizations are at risk, and that risk must be addressed.

Avoiding catastrophic failure through assuredness

As any IT person or Enterprise Architect well knows, completely preventing system failure is impossible. But mitigating system failure is not. Increasingly, the job of CTOs and Enterprise Architects will be to keep systems from failing, not merely to keep them up and running.

When systems grow to a level of massive complexity that encompasses everything from old legacy hardware to Cloud infrastructures to worldwide data centers, how can we make sure those systems are reliable, highly available, secure and maintain optimal information flow while still operating at a maximum level that is cost effective?

In August, The Open Group introduced the first industry standard to address the risks associated with large complex systems, the Dependability through Assuredness™ (O-DA) Framework. This new standard is meant to help organizations both determine system risk and help prevent failure as much as possible.

O-DA provides guidelines to ensure that large, complex, boundaryless systems run according to the requirements set out for them, while also providing contingencies for minimizing damage when stoppages occur. O-DA can be used on its own or in conjunction with an existing architecture development method (ADM) such as the TOGAF® ADM.

O-DA encompasses lessons learned within a number of The Open Group’s forums and work groups—it borrows from the work of the Security Forum’s Dependency Modeling (O-DM) and Risk Taxonomy (O-RT) standards and also from work done within the Open Group Trusted Technology Forum and the Real-Time and Embedded Systems Forums. Much of the work on this standard was completed thanks to the efforts of The Open Group Japan and its members.

This standard addresses the issue of responsibility for technical failures by providing a model for accountability throughout any large system. Accountability is at the core of O-DA because without accountability there is no way to create dependability or assuredness. The standard is also meant to address and account for the constant change that most organizations experience on a daily basis. The two underlying principles within the standard provide models for both a change accommodation cycle and a failure response cycle. Each cycle, in turn, provides instructions for creating a dependable and adaptable architecture, providing accountability for it along the way.
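
Purely as a schematic sketch of the two-cycle idea (the cycle names come from O-DA, but the events, handlers and accountability-log format below are invented for illustration, not taken from the standard), the relationship between deliberate change and failure response might be pictured like this:

```python
# Illustrative sketch only: routes each event to one of O-DA's two cycles
# and records who/what handled it, since accountability is the core idea.
def run_cycles(events):
    log = []  # accountability record: (cycle, action taken)
    for event in events:
        if event == "requirement_change":
            # change accommodation cycle: adapt the architecture deliberately
            log.append(("change_accommodation", "architecture updated"))
        elif event == "failure":
            # failure response cycle: contain damage, analyze, feed back
            log.append(("failure_response", "failure contained and analyzed"))
    return log

print(run_cycles(["requirement_change", "failure"]))
```

The point of the sketch is simply that both planned change and unplanned failure flow through defined cycles that leave an accountability trail, rather than being handled ad hoc.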


Ultimately, the O-DA will help organizations identify potential anomalies and create contingencies for dealing with problems before or as they happen. The more organizations can do to build dependability into large, complex systems, the fewer technical disasters should occur. As systems continue to grow and their boundaries continue to blur, assuredness through dependability and accountability will be an integral part of managing complex systems into the future.

Allen Brown

Allen Brown is President and CEO, The Open Group – a global consortium that enables the achievement of business objectives through IT standards.  For over 14 years Allen has been responsible for driving The Open Group’s strategic plan and day-to-day operations, including extending its reach into new global markets, such as China, the Middle East, South Africa and India. In addition, he was instrumental in the creation of the AEA, which was formed to increase job opportunities for all of its members and elevate their market value by advancing professional excellence.


Are You Ready for the Convergence of New, Disruptive Technologies?

By Chris Harding, The Open Group

The convergence of technical phenomena such as cloud, mobile and social computing, big data analysis, and the Internet of Things that is being addressed by The Open Group’s Open Platform 3.0 Forum™ will transform the way that you use information technology. Are you ready? Take our survey at https://www.surveymonkey.com/s/convergent_tech

What the Technology Can Do

Mobile and social computing are leading the way. Recently, the launch of new iPhone models and the announcement of the Twitter stock flotation were headline news, reflecting the importance that these technologies now have for business. For example, banks use mobile text messaging to alert customers to security issues. Retailers use social media to understand their markets and communicate with potential customers.

Other technologies are close behind. In Formula One motor racing, sensors monitor vehicle operation and feed real-time information to the support teams, leading to improved design, greater safety, and lower costs. This approach could soon become routine for cars on the public roads too.

Many exciting new applications are being discussed. Stores could use sensors to capture customer behavior while browsing the goods on display, and give them targeted information and advice via their mobile devices. Medical professionals could monitor hospital patients and receive alerts of significant changes. Researchers could use shared cloud services and big data analysis to detect patterns in this information, and develop treatments, including for complex or uncommon conditions that are hard to understand using traditional methods. The potential is massive, and we are only just beginning to see it.
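
As a toy illustration of the monitoring idea only (the vital signs, thresholds and readings below are invented for the example, not drawn from any medical source), a “significant change” alert can be as simple as a range check over incoming sensor readings:

```python
# Hypothetical illustration: flag any monitored value outside its allowed range.
def significant_changes(readings, limits):
    """Return the names of vitals whose latest reading is out of range."""
    alerts = []
    for vital, value in readings.items():
        low, high = limits[vital]
        if not (low <= value <= high):
            alerts.append(vital)
    return alerts

# Invented example thresholds and one set of readings.
limits = {"heart_rate": (50, 110), "temp_c": (35.5, 38.0)}
print(significant_changes({"heart_rate": 130, "temp_c": 36.6}, limits))
```

Real systems would add trend detection, patient-specific baselines and alert routing, but the core pattern — continuous readings checked against expected ranges — is this simple.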

What the Analysts Say

Market analysts agree on the importance of the new technologies.

Gartner uses the term “Nexus of Forces” to describe the convergence and mutual reinforcement of social, mobility, cloud and information patterns that drive new business scenarios, and says that, although these forces are innovative and disruptive on their own, together they are revolutionizing business and society, disrupting old business models and creating new leaders.

IDC predicts that a combination of social, cloud, mobile, and big data technologies will drive around 90% of all the growth in the IT market through 2020, and uses the term “third platform” to describe this combination.

The Open Group will identify the standards that will make Gartner’s Nexus of Forces and IDC’s Third Platform commercial realities. This will be the definition of Open Platform 3.0.

Disrupting Enterprise Use of IT

The new technologies are bringing new opportunities, but their use raises problems. In particular, end users find that working through IT departments in the traditional way is not satisfactory. The delays are too great for rapid, innovative development. They want to use the new technologies directly – “hands on”.

Increasingly, business departments are buying technology directly, by-passing their IT departments. Traditionally, the bulk of an enterprise’s IT budget was spent by the IT department and went on maintenance. A significant proportion is now spent by the business departments, on new technology.

Business and IT are not different worlds any more. Business analysts are increasingly using technical tools, and even doing application development, using exposed APIs. For example, marketing folk do search engine optimization, use business information tools, and analyze traffic on Twitter. Such operations require less IT skill than formerly because the new systems are easy to use. Also, users are becoming more IT-savvy. This is a revolution in business use of IT, comparable to the use of spreadsheets in the 1980s.

Also, business departments are hiring traditional application developers, who would once have only been found in IT departments.

Are You Ready?

These disruptive new technologies are changing, not just the IT architecture, but also the business architecture of the enterprises that use them. This is a sea change that affects us all.

The introduction of the PC had a dramatic impact on the way enterprises used IT, taking much of the technology out of the computer room and into the office. The new revolution is taking it out of the office and into the pocket. Cell phones and tablets give you windows into the world, not just your personal collection of applications and information. Through those windows you can see your friends, your best route home, what your customers like, how well your production processes are working, or whatever else you need to conduct your life and business.

This will change the way you work. You must learn how to tailor and combine the information and services available to you, to meet your personal objectives. If your role is to provide or help to provide IT services, you must learn how to support users working in this new way.

To negotiate this change successfully, and take advantage of it, each of us must understand what is happening, and how ready we are to deal with it.

The Open Group is conducting a survey of people’s reactions to the convergence of Cloud and other new technologies. Take the survey to record your state of readiness and to get early sight of the results, so you can see how you compare with everyone else.

To take the survey, visit https://www.surveymonkey.com/s/convergent_tech

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.


Why is Cloud Adoption Taking so Long?

By Chris Harding, The Open Group

At the end of last year, Gartner predicted that cloud computing would become an integral part of IT in 2013 (http://www.gartner.com/DisplayDocument?doc_cd=230929). This looks like a pretty safe bet. The real question is: why is it taking so long?

Cloud Computing

Cloud computing is a simple concept: IT resources are made available as a service, via a communications network, within an environment that enables them to be used. It is used within enterprises to enable IT departments to meet users’ needs more effectively, and by external providers to deliver better IT services to their enterprise customers.

There are established vendors of products to fit both of these scenarios. The potential business benefits are well documented. There are examples of real businesses gaining those benefits, such as Netflix as a public cloud user (see http://www.zdnet.com/the-biggest-cloud-app-of-all-netflix-7000014298/), and Unilever and Lufthansa as implementers of private cloud (see http://www.computerweekly.com/news/2240114043/Unilever-and-Lufthansa-Systems-deploy-Azure-Private-cloud).

Slow Pace of Adoption

Yet we are still talking of cloud computing becoming an integral part of IT. In the 2012 Open Group Cloud ROI survey, less than half of the respondents’ organizations were using cloud computing, although most of the rest were investigating its use. (See http://www.opengroup.org/sites/default/files/contentimages/Documents/cloud_roi_formal_report_12_19_12-1.pdf). Clearly, cloud computing is not being used for enterprise IT as a matter of routine.

Cloud computing is now at least seven years old. Amazon’s “Elastic Compute Cloud” was launched in August 2006, and there are services that we now regard as cloud computing, though they may not have been called that, dating from before then. Other IT revolutions – personal computers, for example – have reached the point of being an integral part of IT in half the time. Why has it taken Cloud so long?

The Reasons

One reason is that using Cloud requires a high level of trust. You can lock your PC in your office, but you cannot physically secure your cloud resources. You must trust the cloud service provider. Such trust takes time to earn.

Another reason is that, although it is a simple concept, cloud computing is described in a rather complex way. The widely-accepted NIST definition (see http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf) has three service models and four deployment models, giving a total of twelve distinct delivery combinations. Each combination has different business drivers, and the three service models are based on very different technical capabilities. Real products, of course, often do not exactly correspond to the definition, and their vendors describe them in product-specific terms. This complexity often leads to misunderstanding and confusion.
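
The arithmetic behind those twelve combinations is easy to check with a minimal sketch (the model names are as listed in the NIST definition; the enumeration itself is just illustration):

```python
from itertools import product

# The NIST definition's three service models and four deployment models.
service_models = ["SaaS", "PaaS", "IaaS"]
deployment_models = ["private", "community", "public", "hybrid"]

# Every distinct delivery combination is one (service, deployment) pair.
combinations = list(product(service_models, deployment_models))
print(len(combinations))  # 3 x 4 = 12
```

Each of those twelve pairs, per the argument above, carries its own business drivers, which is why a single word like “cloud” hides so much variation.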

A third reason is that you cannot “mix and match” cloud services from different providers. The market is consolidating, with a few key players emerging as dominant at the infrastructure and platform levels. Each of them has its own proprietary interfaces. There are no real vendor-neutral standards. A recent Information Week article on Netflix (http://www.informationweek.co.uk/cloud-computing/platform/how-netflix-is-ruining-cloud-computing/240151650) describes some of the consequences. Customers are beginning to talk of “vendor lock-in” in a way that we haven’t seen since the days of mainframes.

The Portability and Interoperability Guide

The Open Group Cloud Computing Portability and Interoperability Guide addresses this last problem, by providing recommendations to customers on how best to achieve portability and interoperability when working with current cloud products and services. It also makes recommendations to suppliers and standards bodies on how standards and best practice should evolve to enable greater portability and interoperability in the future.

The Guide tackles the complexity of its subject by defining a simple Distributed Computing Reference Model. This model shows how cloud services fit into the mix of products and services used by enterprises in distributed computing solutions today. It identifies the major components of cloud-enabled solutions, and describes their portability and interoperability interfaces.

Platform 3.0

Cloud is not the only new game in town. Enterprises are looking at mobile computing, social computing, big data, sensors, and controls as new technologies that can transform their businesses. Some of these – mobile and social computing, for example – have caught on faster than Cloud.

Portability and interoperability are major concerns for these technologies too. There is a need for a standard platform to enable enterprises to use all of the new technologies, individually and in combination, and “mix and match” different products. This is the vision of the Platform 3.0 Forum, recently formed by The Open Group. The distributed computing reference model is an important input to this work.

The State of the Cloud

It is now at least becoming routine to consider cloud computing when architecting a new IT solution. The chances of its being selected, however, appear to be less than fifty-fifty, in spite of its benefits. The reasons include those mentioned above: lack of trust, complexity, and potential lock-in.

The Guide removes some of the confusion caused by the complexity, and helps enterprises assess their exposure to lock-in, and take what measures they can to prevent it.

The growth of cloud computing is starting to be constrained by lack of standards to enable an open market with free competition. The Guide contains recommendations to help the industry and standards bodies produce the standards that are needed.

Let’s all hope that the standards do appear soon. Cloud is, quite simply, a good idea. It is an important technology paradigm that has the potential to transform businesses, to make commerce and industry more productive, and to benefit society as a whole, just as personal computing did. Its adoption really should not be taking this long.

The Open Group Cloud Computing Portability and Interoperability Guide is available from The Open Group bookstore at https://www2.opengroup.org/ogsys/catalog/G135



Enterprise Architecture in China: Who uses this stuff?

By Chris Forde, GM APAC and VP Enterprise Architecture, The Open Group

Since moving to China in March 2010, I have consistently heard a similar set of statements and questions, something like this…

“EA? That’s fine for Europe and America, who is using it here?”

“We know EA is good!”

“What is EA?”

“We don’t have the ability to do EA, is it a problem if we just focus on IT?”

And

“Mr Forde, your comment about western companies not discussing their EA programs because they view them as a competitive advantage is accurate here too; we don’t discuss that we have one for that reason.” Following that statement the lady walked away smiling, having not introduced herself or her company.

Well some things are changing in China relative to EA and events organized by The Open Group; here is a snapshot from May 2013.

The Open Group held an Enterprise Architecture Practitioners Conference in Shanghai, China on May 22nd, 2013. The conference theme was EA and the spectrum of business value. The presentations were made by a mix of non-member and member organizations of The Open Group, most but not all based in China. The audience was mostly non-members from 55 different organizations in a range of industries. There was a good mix of customer, supplier, government and academic organizations presenting and in the audience. The conference proceedings are available to registered attendees of the conference and members of The Open Group. Livestream recordings will also be available shortly.

Organizations large and small presented on how EA is integral to delivering business value. Here, in a nutshell, is what they said.

China

Huawei is a leading global ICT communications provider based in Shenzhen, China. They presented on EA applied to their business transformation program and the ongoing development of their core EA practice.

GKHB is a software services organization based in Chengdu, China. They presented on an architecture practice applied to real-time forestry and endangered species management.

Nanfang Media is a State Owned Enterprise based in Guangzhou, China, and the second largest media organization in the country. They presented on their need to rapidly transform into a modern, integrated, digital organization.

McKinsey & Co, a management consulting company headquartered in New York, USA, presented an analysis of a CIO survey conducted in cooperation with Peking University. Mr Wang Wei, a Partner in the Shanghai office of McKinsey & Co’s Business Technology Practice, reviewed the findings.

The survey of CIOs in China indicated a common problem of managing complexity in multiple dimensions: 1) “theoretically” common business functions, 2) across business units with differing operations and products, and 3) across geographies and regions. The recommended approach was “organic integration”: carefully determining what should be centralized and what should be distributed. An architecture approach can help with managing and mitigating these realities. The survey also showed that CIOs are evenly split between those dedicated to a traditional CIO role and those holding a dual business and CIO role.

Mr Yang Li Chao, Director of EA and Planning at Huawei, and Ms Wang Liqun, leader of the EA Center of Excellence at Huawei, outlined the five-year journey Huawei has been on to deal with the development, maturation and effectiveness of an architecture practice in a company that has seen explosive growth and is competing on a global scale. They are necessarily paying a lot of attention to talent management and development of their architects, as these people are at the forefront of the company’s business transformation efforts. Huawei constantly consults with experts on architecture from around the world and incorporates what they consider best practice into their own method and framework, which is based on TOGAF®.

Mr He Kun, CIO of Nanfang Media, described the enormous pressures his traditional media organization is under, such as the concurrent loss of advertising and talent to digital media.

He gave an example: China Mobile has started its own digital newspaper leveraging its delivery platform. So, naturally, Nanfang Media is also undergoing a transformation and is looking to leverage its current advantages as a trusted source and its existing market position. The discipline of architecture is a key enabler and serves as a foundation for clearly communicating a transformation approach to other business leaders. This does not mean using EA jargon, but communicating in the language of his peers for the purpose of obtaining funding to accomplish the transformation effectively.

Mr Chen Peng, Vice General Manager of GKHB Chengdu, described the use of an architecture approach to managing precious national resources such as forestry, biodiversity and endangered species. He described the necessity for real-time information in observation, tracking and response in this area, and the necessity of “informationalization” of forestry in China as part of eGovernment initiatives, not only for the above topics but also for the country’s growth, particularly in supplying the construction industry. The architecture approach taken here is also based on TOGAF®.

The takeaway from this conference is that Enterprise Architecture is alive and well amongst certain organizations in China. It is being used in a variety of industries. Value is being realized by executives and practitioners, and delivered for both IT and business units. However, for many companies EA is still a new idea, and to date its value is unclear to them.

The speakers also made it clear that there are no easy answers: each organization has to find its own use of and value from Enterprise Architecture, and it is a learning journey. They expressed their appreciation that The Open Group and its standards are a place where they can make connections, draw from, and contribute to Enterprise Architecture.

Comments Off

Filed under Enterprise Architecture, Enterprise Transformation, Professional Development, Standards, TOGAF, TOGAF®, Uncategorized

Flexibility, Agility and Open Standards

By Jose M. Sanchez Knaack, IBM

Flexibility and agility are terms used almost interchangeably these days as attributes of IT architectures designed to cope with rapidly changing business requirements. Did you ever wonder if they are actually the same? Don’t you have the feeling that these terms remain abstract and without a concrete link to the design of an IT architecture?

This post seeks to provide clear definitions for both flexibility and agility, and to explain how each relates to the design of IT architectures that exploit open standards. A ‘real-life’ example will help make these concepts concrete and relevant to the Enterprise Architect’s daily job.

First, here is some context on why flexibility and agility are increasingly important for businesses. Today, the average smart phone has more computing power than the computers that guided the original Apollo missions to the moon. We live in times of exponential change; the next technological revolution always seems to be around the corner, and it is safe to state that the trend will continue, as nicely visualized in this infographic by TIME Magazine.

The average lifetime of a company in the S&P 500 has fallen by 80 percent since 1937. In other words, companies need to adapt fast to capitalize on business opportunities created by new technologies, or risk losing their leadership position.

Thus, flexibility and agility have become ever present business goals that need to be supported by the underlying IT architecture. But, what is the precise meaning of these two terms? The online Merriam-Webster dictionary offers the following definitions:

Flexible: characterized by a ready capability to adapt to new, different, or changing requirements.

Agile: marked by ready ability to move with quick easy grace.

To understand how these terms relate to IT architecture, let us explore an example based on an Enterprise Service Bus (ESB) scenario.

An ESB can be seen as the foundation for a flexible IT architecture allowing companies to integrate applications (processes) written in different programming languages and running on different platforms within and outside the corporate firewall.

ESB products are normally equipped with a set of pre-built adapters that allow 70-80 percent of applications to be integrated ‘out-of-the-box’, without additional programming effort. For the remaining 20-30 percent of integration requirements, it is possible to develop custom adapters, so that any application can be integrated with any other if required.

In other words, an ESB covers requirements regarding integration flexibility, that is, it can cope with changing requirements in terms of integrating additional applications via adapters, ‘out-of-the-box’ or custom built. How does this integration flexibility correlate to integration agility?
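To make the adapter idea concrete, here is a minimal sketch of the pattern. This is not any vendor’s actual API; the class, system and field names are invented for illustration. The bus routes each message through whichever adapter is registered for the source system, whether pre-built (open standard) or custom (bespoke parsing):

```python
import json
from abc import ABC, abstractmethod


class Adapter(ABC):
    """Common contract that every adapter, pre-built or custom, implements."""

    @abstractmethod
    def to_canonical(self, payload):
        """Translate an application-specific payload into the bus's canonical dict form."""


class JsonHttpAdapter(Adapter):
    """A 'pre-built' adapter for applications speaking an open standard (JSON)."""

    def to_canonical(self, payload):
        return json.loads(payload)


class LegacyFixedWidthAdapter(Adapter):
    """A 'custom' adapter for a legacy app that emits fixed-width records."""

    def to_canonical(self, payload):
        # Columns 0-7: part number; columns 8-11: quantity.
        return {"part_no": payload[0:8].strip(), "qty": int(payload[8:12])}


class ServiceBus:
    """Routes each message through the adapter registered for its source system."""

    def __init__(self):
        self._adapters = {}

    def register(self, system, adapter):
        self._adapters[system] = adapter

    def receive(self, system, payload):
        return self._adapters[system].to_canonical(payload)


bus = ServiceBus()
bus.register("partner-portal", JsonHttpAdapter())      # open standard: trivial to plug in
bus.register("legacy-mfg", LegacyFixedWidthAdapter())  # proprietary format: bespoke effort

print(bus.receive("partner-portal", '{"part_no": "AX-20", "qty": 5}'))
print(bus.receive("legacy-mfg", "AX-20   0005"))
# both yield {'part_no': 'AX-20', 'qty': 5}
```

The JSON adapter is near-zero effort because the payload already follows an open standard; the fixed-width adapter represents the kind of custom development, testing and deployment work that consumes calendar time in a corporate environment.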

Let’s think of a scenario where the IT team has been asked to integrate an old manufacturing application with a new business partner. The integration needs to be ready within one month; otherwise, the targeted business opportunity will no longer be available.

The picture below shows the underlying IT architecture for this integration scenario.

[Figure: IT architecture for the integration scenario]

Although the ESB is able to integrate the old manufacturing application, it requires a custom-developed adapter, since the application does not support any of the communication protocols covered by the pre-built adapters. To custom develop, test and deploy an adapter in a corporate environment is likely to take longer than a month, and the business opportunity will be lost because the IT architecture was not agile enough.

This is the subtle difference between flexible and agile.

Notice that if the manufacturing application had been able to communicate via open standards, the corresponding pre-built adapter would have significantly shortened the time required to integrate it. As the above scenario illustrates, applications that do not support open standards still exist in corporate IT landscapes. Hence the importance of incorporating open standards when roadmapping your IT architecture.

The key takeaway is that your architecture principles need to favor information technology built on open standards, and for that, you can leverage The Open Group Architecture Principle 20 on Interoperability.

Name: Interoperability
Statement: Software and hardware should conform to defined standards that promote interoperability for data, applications, and technology.

In summary, the accelerating pace of change requires corporate IT architectures to support the business goals of flexibility and agility. Establishing architecture principles that favor open standards as part of your architecture governance framework is one proven approach (although not the only one) to roadmap your IT architecture in the pursuit of resiliency.

Jose M. Sanchez Knaack is a Senior Manager with IBM Global Business Services in Switzerland. Mr. Sanchez Knaack’s professional background covers business-aligned IT architecture strategy and complex system integration in global technology-enabled transformation initiatives.


Comments Off

Filed under Enterprise Architecture

Why Business Needs Platform 3.0

By Chris Harding, The Open Group

The Internet gives businesses access to ever-larger markets, but it also brings more competition. To prosper, they must deliver outstanding products and services. Often, this means processing the ever-greater, and increasingly complex, data that the Internet makes available. The question they now face is, how to do this without spending all their time and effort on information technology.

Web Business Success

The success stories of giants such as Amazon are well-publicized, but there are other, less well-known companies that have profited from the Web in all sorts of ways. Here’s an example. In 2000 an English illustrator called Jacquie Lawson tried creating greetings cards on the Internet. People liked what she did, and she started an e-business whose website is now ranked by Alexa as #2712 in the world and #1879 in the USA. This ranking is based on website traffic and is comparable, to take a company that may be better known, with toyota.com, which ranks slightly higher in the USA (#1314) but somewhat lower globally (#4838).

A company with a good product can grow fast. This also means, though, that a company with a better product, or even just better marketing, can eclipse it just as quickly. Social networking site Myspace was once the most visited site in the US. Now it is ranked by Alexa as #196, way behind Facebook, which is #2.

So who ranks as #1? You guessed it – Google. Which brings us to the ability to process large amounts of data, where Google excels.

The Data Explosion

The World-Wide Web probably contains over 13 billion pages, yet you can often find the information that you want in seconds. This is made possible by technology that indexes this vast amount of data – measured in petabytes (millions of gigabytes) – and responds to users’ queries.
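The core technique behind such fast search can be sketched in miniature: an inverted index maps each term to the set of documents containing it, so a query consults only the relevant entries instead of scanning the whole corpus. A toy illustration (the documents and IDs are invented for the example):

```python
from collections import defaultdict

# A tiny corpus standing in for billions of web pages.
documents = {
    1: "cloud computing standards enable an open market",
    2: "open standards promote interoperability",
    3: "greetings cards on the internet",
}

# Build the inverted index: term -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)


def search(query):
    """Return IDs of documents containing every term in the query."""
    terms = query.lower().split()
    results = index[terms[0]].copy()
    for term in terms[1:]:
        results &= index[term]  # intersect the postings for each term
    return results


print(search("open standards"))  # documents 1 and 2 contain both terms
```

A real engine adds ranking, distribution across thousands of machines, and incremental updates, but the lookup-and-intersect structure is the reason a query over petabytes can return in seconds.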

The data on the World-Wide Web originally came mostly from people, typing it in by hand. In the future, we will often use data generated by sensors in inanimate objects. Automobiles, for example, can generate data that can be used to optimize their performance or assess the need for maintenance or repair.

The world population is measured in billions. It is estimated that the Internet of Things, in which data is collected from objects, could enable us to track 100 trillion objects in real time – ten thousand times as many things as there are people, tirelessly pumping out information. The amount of available data of potential value to businesses is set to explode yet again.

A New Business Generation

It’s not just the amount of data to be processed that is changing. We are also seeing changes in the way data is used, the way it is processed, and the way it is accessed. Following The Open Group conference in January, I wrote about the convergence of social, Cloud, and mobile computing with Big Data. These are the new technical trends that are taking us into the next generation of business applications.

We don’t yet know what all those applications will be – who in the 1990s would have predicted greetings cards as a Web application? – but there are some exciting ideas. They range from using social media to produce market forecasts to alerting hospital doctors via tablets and cellphones when monitors detect patient emergencies. All this, and more, is possible with technology that we have now, if we can use it.

The Problem

But there is a problem. Although there is technology that enables businesses to use social, Cloud, and mobile computing, and to analyze and process massive amounts of data of different kinds, it is not necessarily easy to use. A plethora of products is emerging, with different interfaces and no ability to work with each other. This is fine for geeks who love to play with new toys, but not so good for someone who wants to realize a new business idea and make money.

The new generation of business applications cannot be built on a mish-mash of unstable products, each requiring a different kind of specialist expertise. It needs a solid platform, generally understood by enterprise architects and software engineers, who can translate the business ideas into technical solutions.

The New Platform

Former VMware CEO and current Pivotal Initiative leader Paul Maritz describes the situation very well in his recent blog on GigaOM. He characterizes the new breed of enterprises – those that give customers what they want, when they want it and where they want it, by exploiting the opportunities provided by new technologies – as “consumer grade”. Paul says that, “Addressing these opportunities will require new underpinnings; a new platform, if you like. At the core of this platform, which needs to be Cloud-independent to prevent lock-in, will be new approaches to handling big and fast (real-time) data.”

The Open Group has announced its new Platform 3.0 Forum to help the industry define a standard platform to meet this need. As The Open Group CTO Dave Lounsbury says in his blog, the new Forum will advance The Open Group vision of Boundaryless Information Flow™ by helping enterprises to take advantage of these convergent technologies. This will be accomplished by identifying a set of new platform capabilities, and architecting and standardizing an IT platform by which enterprises can reap the business benefits of Platform 3.0.

Business Focus

A business set up to design greetings cards should not spend its time designing communications networks and server farms; it cannot afford to. If it does, someone else will focus on its core business and take its market.

The Web provided a platform that businesses of its generation could build on to do what they do best without being overly distracted by the technology. Platform 3.0 will do this for the new generation of businesses.

Help It Happen!

To find out more about the Platform 3.0 Forum, and to take part in its formation, watch out for the Platform 3.0 web meetings that will be announced by e-mail and Twitter, and on our home page.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

2 Comments

Filed under Platform 3.0