A Historical Look at Enterprise Architecture with John Zachman

By The Open Group

John Zachman’s Zachman Framework is widely recognized as the foundation and historical basis for Enterprise Architecture. On Tuesday, Feb. 3, during The Open Group’s San Diego 2015 event, Zachman will be giving the morning’s keynote address entitled “Zachman on the Zachman Framework and How it Complements TOGAF® and Other Frameworks.”

We recently spoke to Zachman in advance of the event about the origins of his framework, the state of Enterprise Architecture and the skills he believes EAs need today.

As a discipline, Enterprise Architecture is still fairly young. It began getting traction in the mid to late 1980s after John Zachman published an article describing a framework for information systems architectures in the IBM Systems Journal. Zachman said he lived to regret initially calling his framework “A Framework for Information Systems Architecture,” instead of “Enterprise Architecture” because the framework actually has nothing to do with information systems.

Rather, he said, it was “A Framework for Enterprise Architecture.” But at the time of publication, the idea of Enterprise Architecture was such a foreign concept, Zachman said, that people didn’t understand what it was. Even so, the origins of his ontological framework were already almost 20 years old by the time he first published them.

In the late 1960s, Zachman was working as an account executive in the Marketing Division of IBM, and his account responsibility was the Atlantic Richfield Company (better known as ARCO). ARCO had just been formed out of three separate companies: Atlantic Refining of Philadelphia and Richfield of California, which merged and then bought Sinclair Oil of New York in 1969.

“It was the biggest corporate merger in history at the time where they tried to integrate three separate companies into one company. They were trying to deal with an enterprise integration issue, although they wouldn’t have called it that at the time,” Zachman said.

With three large companies to merge, ARCO needed help in figuring out how to do the integration. When the client asked Zachman how they should handle such a daunting task, he said he’d try to get some help. So he turned to a group within IBM called the Information Systems Control and Planning Group and the group’s Director of Architecture, Dewey Walker, for guidance.

Historically, when computers were first used in commercial applications, there already were significant “Methods and Procedures” systems communities in most large organizations whose job was to formalize many manual systems in order to manage the organization, Zachman said. When computers came on the scene, they were used to improve organizational productivity by replacing the people performing the organizations’ processes. However, because manual systems defined and codified organizational responsibilities, when management made changes within an organization, as they often did, it would render the computer systems obsolete, which required major redevelopment.

Zachman recalled Walker’s observation that “organizational responsibilities” and “processes” were two different things. As such, he believed systems should be designed to automate the process, not to encode the organizational responsibilities, because the process and the organization changed independently from one another. By separating these two independent variables, management could change organizational responsibilities without affecting or changing existing systems or the organization. Many years later, Jim Champy and Mike Hammer popularized this notion in their widely read 1993 book, “Reengineering the Corporation,” Zachman said.

According to Zachman, Walker created a methodology for defining processes as separate entities from the organizational structure. Walker came out to Los Angeles, where Zachman and ARCO were based to help provide guidance on the merger. Zachman recalls Walker telling him that the key to defining the systems for Enterprise purposes was in the data, not necessarily the process itself. In other words, the data across the company needed to be normalized so that they could maintain visibility into the assets and structure of the enterprise.

“The secret to this whole thing lies in the coding and the classification of the data,” Zachman recalled Walker saying. Walker’s methodology, he said, began by classifying data by its existence not by its use.

Since all of this was happening well before anyone came up with the concept of data modeling, there were no data models from which to design their system. “Data-oriented words were not yet in anyone’s vocabulary,” Zachman said. Walker had difficulty articulating his concepts because the words he had at his disposal were inadequate, Zachman said.

Walker understood that to have structural control over the enterprise, they needed to look at both processes and data as independent variables, Zachman said. That would provide the flexibility and knowledge base to accommodate escalating change. This was critical, he said, because the system is the enterprise. Therefore, creating an integrated structure of independent variables and maintaining visibility into that structure are crucial if you want to be able to manage and change it. Otherwise, he says, the enterprise “disintegrates.”

Although Zachman says Walker was “onto this stuff early on,” Walker eventually left IBM, leaving Zachman with the methodology Walker had named “Business Systems Planning.” (Zachman said Walker knew that it wasn’t just about the information systems, but about the business systems.) According to Zachman, he inherited Walker’s methodology because he’d been working closely with Walker. “I was the only person that had any idea what Dewey was doing,” he said.

What he was left with, Zachman says, was what today he would call a “Row 1 methodology”—or the “Executive Perspective” and the “Scope Contexts” in what would eventually become his ontology.

According to Zachman, Walker had figured out how to transcribe enterprise strategy in such a fashion that engineering work could be derived from it. “What we didn’t know how to do,” Zachman said, “was to transform the strategy (Zachman Framework Row 1), which tends to be described at a somewhat abstract level of definition into the operating Enterprise (Row 6), which was comprised of very precise instructions (explicit or implicit) for behavior of people and/or machines.”

Zachman said that they knew that “Architecture” had something to do with the Strategy to Instantiation transformation logic, but they didn’t know what architecture for enterprises was in those days. His radical idea was to ask people who did architecture for things like buildings, airplanes, locomotives, computers or battleships what architecture was for those Industrial Age products. Zachman believed if he could find out what they thought architecture was for those products, he might be able to figure out what architecture was for enterprises and thereby figure out how to transform the strategy into the operating enterprise to align the enterprise implementation with the strategy.

With this in mind, Zachman began reaching out to people in other disciplines to see how they put together things like buildings or airplanes. He spoke to an architect friend and also to some of the aircraft manufacturers that were based in Southern California at the time. He began gathering different engineering specs and studying them.

One day while he was sitting at his desk, Zachman said, he began sorting the design artifacts he’d collected for buildings and airplanes into piles. Suddenly he noticed there was something similar in how the design patterns were described.

“Guess what?” he said. “The way you describe buildings is identical to the way you describe airplanes, which turns out to be identical to the way you describe locomotives, which is identical to the way you describe computers. Which is identical to the way you describe anything else that humanity has ever described.”

Zachman says he really just “stumbled across” the way to describe the enterprise and attributes his discovery to providence, a miracle. Despite having kick-started the discipline of Enterprise Architecture with this recognition, Zachman claims he’s “actually not very innovative.”

“I just saw the pattern and put enterprise names on it,” he said.

Once he understood that Architectural design descriptions all used the same categories and patterns, he knew that he could also define Architecture for Enterprises. All it would take would be to apply the enterprise vocabulary to the same pattern and structure of the descriptive representations of everything else.

“All I did was, I saw the pattern of the structure of the descriptive representations for airplanes, buildings, locomotives and computers, and I put enterprise names on the same patterns,” he says. “Now you have the Zachman Framework, which basically is Architecture for Enterprises. It is Architecture for every other object known to humankind.”

Thus the Zachman Framework was born.

Ontology vs. Methodology

According to Zachman, what his Framework is ultimately intended for is describing a complex object, an Enterprise. In that sense, the Zachman Framework is the ontology for Enterprise Architecture, he says. What it doesn’t do is tell you how to do Enterprise Architecture.

“Architecture is architecture is architecture. My framework is just the definition and structure of the descriptive representation for enterprises,” he said.

That’s where methodologies, such as TOGAF®, an Open Group standard, DoDAF or other methodological frameworks come in. To create and execute an Architecture, practitioners need both the ontology—to help them define, translate and place structure around the enterprise descriptive representations—and they need a methodology to populate and implement it. Both are needed—it’s an AND situation, not an OR, he said. The methodology simply needs to use (or reuse) the ontological constructs in creating the implementation instantiations in order for the enterprise to be “architected.”

The Need for Architecture

Unfortunately, Zachman says, there are still a lot of companies today that don’t understand the need to architect their enterprise. Enterprise Architecture is simply not on the radar of general management in most places.

“It’s not readily acknowledged on the general management agenda,” Zachman said.

Instead, he says, most companies focus their efforts on building and running systems, not engineering the enterprise as a holistic unit.

“We haven’t awakened to the concept of Enterprise Architecture,” he says. “The fundamental reason why is people think it takes too long and it costs too much. That is a shibboleth – it doesn’t take too long or cost too much if you know what you’re doing and have an ontological construct.”

Zachman believes many companies are particularly guilty of this type of thinking, which he attributes to a tendency to think that there isn’t any work being done unless the code is up and running. Never mind all the work it took to get that code up and running in the first place.

“Getting the code to run, I’m not arguing against that, but it ought to be in the context of the enterprise design. If you’re just providing code, you’re going to get exactly what you have right now—code. What does that have to do with management’s intentions or the Enterprise in its entirety?”

As such, Zachman compares today’s enterprises to log cabins rather than skyscrapers. Many organizations have not gotten beyond that “primitive” stage, he says, because they haven’t been engineered to be integrated or changed.

According to Zachman, the perception that Enterprise Architecture is too costly and time consuming must change. And, people also need to stop thinking that Enterprise Architecture belongs solely under the domain of IT.

“Enterprise Architecture is not about building IT models. It’s about solving general management problems,” he said. “If we change that perception, and we start with the problem and we figure out how to solve that problem, and then, oh by the way we’re doing Architecture, then we’re going to get a lot of Architecture work done.”

Zachman believes one way to do this is to build out the Enterprise Architecture iteratively and incrementally. By tackling one problem at a time, he says, general management may not even need to know whether you’re doing Enterprise Architecture or not, as long as their problem is being solved. The governance system controls the architectural coherence and integration of the increments. He expects that EA will trend in that direction over the next few years.

“We’re learning much better how to derive immediate value without having the whole enterprise engineered. If we can derive immediate value, that dispels the shibboleth—the misperception that architecture takes too long and costs too much. That’s the way to eliminate the obstacles for Enterprise Architecture.”

As far as the skills needed to do EA into the future, Zachman believes that enterprises will eventually need multiple types of architects with different skill sets to make sure everything is aligned. He speculates that someday there may need to be specialists for every cell in the framework, leaving room for a great deal of specialization, creativity and people with different skill sets, just as aircraft manufacturers need a variety of engineers—from aeronautic to hydraulic and everywhere in between—to get a plane built. One engineer does not engineer the entire airplane, or a hundred-story building, or an ocean liner, or, for that matter, a personal computer. Similarly, increasingly complex enterprises will likely need multiple types of engineering specialties. No one person knows everything.

“Enterprises are far more complex than 747s. In fact, an enterprise doesn’t have to be very big before it gets really complex,” he said. “As enterprise systems increase in size, there is increased potential for failure if they aren’t architected to respond to that growth. And if they fail, the lives and livelihoods of hundreds of thousands of people can be affected, particularly if it’s a public sector Enterprise.”

Zachman believes it may ultimately take a generation or two for companies to understand the need to better architect the way they run. As things are today, he says, the paradigm of the “system process first” Industrial Age is still too ingrained in how systems are created. He believes it will be a while before that paradigm shifts to a more Information Age-centric way of thinking where the enterprise is the object rather than the system.

“Although this afternoon is not too early to start working on it, it is likely that it will be the next generation that will make Enterprise Architecture an essential way of life like it is for buildings and airplanes and automobiles and every other complex object,” he said.

John A. Zachman, Founder & Chairman, Zachman International, Executive Director of FEAC Institute, and Chairman of the Zachman Institute

Join the conversation – @theopengroup, #ogchat, #ogSAN


Filed under Enterprise Architecture, Standards, TOGAF®, Uncategorized

Catching Up with The Open Group Internet of Things Work Group

By The Open Group

The Open Group’s Internet of Things (IoT) Work Group is involved in developing open standards that will allow product and equipment management to evolve beyond the traditional limits of product lifecycle management. Meant to incorporate the larger systems management that will be required by the IoT, these standards will help to handle the communications needs of a network that may encompass products, devices, people and multiple organizations. Formerly known as the Quantum Lifecycle Management (QLM) Work Group, the group recently changed its name to the Internet of Things Work Group to more accurately reflect its current direction and focus.

We recently caught up with Work Group Chairman Kary Främling to discuss its two new standards, both of which are geared toward the Internet of Things, and what the group has been focused on lately.

Over the past few years, The Open Group’s Internet of Things Work Group (formerly the Quantum Lifecycle Management Work Group) has been working behind the scenes to develop new standards related to the nascent Internet of Things and how to manage the lifecycle of these connected products, or as General Electric has referred to it, the “Industrial Internet.”

What their work ultimately aims to do is help manage all the digital information within a particular system—for example, vehicles, buildings or machines. By creating standard frameworks for handling this information, these systems and their related applications can be better run and supported during the course of their “lifetime,” with the information collected serving a variety of purposes, from maintenance to improved design and manufacturing to recycling and even refurbishing them.

According to Work Group Chairman Kary Främling, CEO of ControlThings and Professor of Practice in Building Information Modeling at Aalto University in Finland, the group has been working with companies such as Caterpillar and Fiat, as well as refrigerator and machine tool manufacturers, to enable machines and equipment to send sensor and status data on how machines are being used and maintained to their manufacturers. Data can also be provided to machine operators so they are also aware of how the machines are functioning in order to make changes if need be.

For example, Främling says that one application of this system management loop is in HVAC systems within buildings. By building Internet capabilities into the system, a ventilation system—or air-handling unit—can be controlled via a smartphone from the moment it’s turned on inside a building. The system can provide data and alerts about how well it’s operating, and about any problems within the system, to facilities management or whoever else needs them. Främling also says that the system can provide information to both the maintenance company and the system manufacturer so they can collect information from the machines on performance, operations and other indicators. This allows users to determine things as simple as when an air filter may need changing or whether there are systematic problems with different machine models.

According to Främling, the ability to monitor systems in this way has already helped ventilation companies make adjustments to their products.

“What we noticed was there was a certain problem with certain models of fans in these machines. Based on all the sensor readings on the machine, I could deduce that the air extraction fan had broken down,” he said.
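Hypothetically, the kind of deduction Främling describes could be expressed as a simple rule over the sensor readings a local controller reports upstream. The sketch below is only illustrative: the field names, thresholds and sample values are assumptions, not drawn from any real air-handling unit or from the Work Group’s standards.

```python
# Illustrative sketch: flag a likely extraction-fan failure from sensor data.
# Field names, thresholds and sample readings are assumptions for illustration.

def fan_probably_broken(reading: dict) -> bool:
    """Return True when the fan is commanded on but the airflow it should
    produce is essentially absent - the failure pattern described above."""
    commanded_on = reading["fan_command"] == "on"
    no_airflow = reading["airflow_m3_per_h"] < 10       # near-zero extraction flow
    no_pressure = reading["pressure_drop_pa"] < 5       # no pressure rise across the fan
    return commanded_on and no_airflow and no_pressure

# Example readings such as a building's local controller might report
readings = [
    {"unit": "AHU-3", "fan_command": "on", "airflow_m3_per_h": 2.0, "pressure_drop_pa": 1.0},
    {"unit": "AHU-4", "fan_command": "on", "airflow_m3_per_h": 950.0, "pressure_drop_pa": 120.0},
]

for r in readings:
    if fan_probably_broken(r):
        print(f"ALERT: extraction fan on {r['unit']} appears to have failed")
```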

The ability to detect such problems via sensor data as they are happening can be extremely beneficial to manufacturers because they can more easily and more quickly make improvements to their systems. Another advantage afforded by machines with Web connectivity, Främling says, is that errors can also be corrected remotely.

“There’s so much software in these machines nowadays, so just by changing parameters you can make them work better in many ways,” he says.

In fact, Främling says that the Work Group has been working on systems such as these for a number of years already—well before the term “Internet of Things” became part of industry parlance. They first worked on a system for a connected refrigerator in 2007 and even worked on systems for monitoring how vehicles were used before then.

One of the other things the Work Group is focused on is working with the Open Platform 3.0 Forum since there are many synergies between the two groups. For instance, the Work Group provided a number of the uses cases for the Forum’s recent business scenarios.

“I really see what we are doing is enabling the use cases and these information systems,” Främling says.

Two New Standards

In October, the Work Group also published two new standards, which are among the first standards to be developed for the Internet of Things (IoT). A number of companies and universities across the world have been instrumental in developing the standards, including Aalto University in Finland, BIBA, Cambridge University, Infineon, InMedias, Politecnico di Milano, Promise Innovation, SAP and Trackway Ltd.

Främling likens these early IoT standards to what the HTML and HTTP protocols did for the Internet. For example, the Open Data Format (O-DF) Standard provides a common language for describing any kind of IoT object, much like HTML provided a language for the Web. The Open Messaging Interface (O-MI) Standard, on the other hand, describes a set of operations that enables users to read information about particular systems and then ask those systems for that information, much like HTTP. Write operations then allow users to also send information or new values to the system, for example, to update the system.

Users can also subscribe to information contained in other systems. For instance, Främling described a scenario in which he was able to create a program that allowed him to ask his car what was wrong with it via a smartphone when the “check engine” light came on. He was then able to use a smartphone application to send an O-MI message to the maintenance company with the error code and his location. Using an O-MI subscription the maintenance company would be able to send a message back asking for additional information. “Send these five sensor values back to us for the next hour and you should send them every 10 seconds, every 5 seconds for the temperature, and so on,” Främling said. Once that data is collected, the service center can analyze what’s wrong with the vehicle.
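To make the division of labor between the two standards concrete, the sketch below shows roughly what such an exchange could look like: an O-MI read subscription (the HTTP-like operation) carrying an O-DF object description (the HTML-like payload), posted to an O-MI node. It is a simplified illustration; the element names follow the general shape of the published standards, but the attributes, identifiers and endpoint URL are assumptions rather than normative examples.

```python
# Simplified illustration of an O-MI subscription carrying an O-DF payload.
# Element names follow the general shape of the standards; identifiers, attributes
# and the endpoint URL are made up for illustration.
import urllib.request

omi_subscription = """<?xml version="1.0" encoding="UTF-8"?>
<omiEnvelope version="1.0" ttl="3600">
  <read msgformat="odf" interval="10">  <!-- ask for fresh values every 10 seconds -->
    <msg>
      <Objects>
        <Object>
          <id>Vehicle-XYZ</id>
          <InfoItem name="EngineCoolantTemperature"/>
          <InfoItem name="DiagnosticTroubleCode"/>
        </Object>
      </Objects>
    </msg>
  </read>
</omiEnvelope>
"""

def send_omi(endpoint: str, envelope: str) -> str:
    """POST an O-MI envelope to an O-MI node and return the response body."""
    request = urllib.request.Request(
        endpoint,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")

# Hypothetical usage against a maintenance company's O-MI node:
# print(send_omi("http://maintenance.example.com/omi/node", omi_subscription))
```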

Främling says O-MI messages can easily be set up on-the-fly for a variety of connected systems with little programming. The standard also allows users to manage mobility and firewalls. O-MI communications are also run over systems that are already secure to help prevent security issues. Those systems can include anything from HTTP to USB sticks to SMTP, as well, Främling says.

Främling expects that these standards can also be applied to multiple types of functionalities across different industries, for example for connected systems in the healthcare industry or to help manage energy production and consumption across smart grids. With both standards now available, the Work Group is beginning to work on defining extensions for the Data Format so that vocabularies specific to certain industries, such as healthcare or manufacturing, can also be developed.

In addition, Främling expects that as protocols such as O-MI make it easier for machines to communicate amongst themselves, they will also be able to begin to optimize themselves over time. Cars, in fact, are already using this kind of capability, he says. But for other systems, such as buildings, that kind of communication is not happening yet. He says that in Finland his company has projects underway with manufacturers of diesel engines, cranes and elevators, and even in Volkswagen factories, to establish information flows between systems. Smart grids are another potential use. In fact, his home is wired to provide consumption rates in real time to the electric company, although he says he does not believe they are currently doing anything with the data.

“In the past we used to speak about these applications for pizza or whatever that can tell a microwave oven how long it should be heated and the microwave oven also checks that the food hasn’t expired,” Främling said.

And while your microwave may not yet be able to determine whether your food has reached its expiration date, these recent developments by the Work Group are helping to bring the IoT vision to fruition by making it easier for systems to begin the process of “talking” to each other through a standardized messaging system.

Kary Främling is currently CEO of the Finnish company ControlThings, as well as Professor of Practice in Building Information Modeling (BIM) at Aalto University, Finland. His main research topics are on information management practices and applications for BIM and product lifecycle management in general. His main areas of competence are distributed systems, middleware, multi-agent systems, autonomously learning agents, neural networks and decision support systems. He is one of the worldwide pioneers in the Internet of Things domain, where he has been active since 2000.

@theopengroup; #ogchat


Filed under digital technologies, Enterprise Transformation, Future Technologies, Internet of Things, Open Platform 3.0, Uncategorized

Putting Information Technology at the Heart of the Business: The Open Group San Diego 2015

By The Open Group

The Open Group is hosting the “Enabling Boundaryless Information Flow™” event February 2 – 5, 2015 in San Diego, CA at the Westin San Diego Gaslamp Quarter. The event is set to focus on the changing role of IT within the enterprise and how new IT trends are empowering improvements in businesses and facilitating Enterprise Transformation. Key themes include Dependability through Assuredness™ (The Cybersecurity Connection) and The Synergy of Enterprise Architecture Frameworks. Particular attention throughout the event will be paid to the need for continued development of an open TOGAF® Architecture Development Method and its importance and value to the wider business architecture community. The goal of Boundaryless Information Flow will be featured prominently in a number of tracks throughout the event.

Key objectives for this year’s event include:

  • Explore how Cybersecurity and dependability issues are threatening business enterprises and critical infrastructure from an integrity and a Security perspective
  • Show the need for Boundaryless Information Flow™, which would result in more interoperable, real-time business processes throughout all business ecosystems
  • Outline current challenges in securing the Internet of Things, and describe ongoing work in the Security Forum and elsewhere that will help to address the issues
  • Reinforce the importance of architecture methodologies to assure your enterprise is transforming its approach along with the ever-changing threat landscape
  • Discuss the key drivers and enablers of social business technologies in large organizations, which play an important role in the co-creation of business value, and discuss the key building blocks of a social business transformation program

Plenary speakers at the event include:

  • Chris Forde, General Manager, Asia Pacific Region & VP, Enterprise Architecture, The Open Group
  • John A. Zachman, Founder & Chairman, Zachman International, and Executive Director of FEAC Institute

Full details on the range of track speakers at the event can be found here, with the following (among many others) contributing:

  • Dawn C. Meyerriecks, Deputy Director for Science and Technology, CIA
  • Charles Betz, Founder, Digital Management Academy
  • Leonard Fehskens, Chief Editor, Journal of Enterprise Architecture, AEA

Registration for The Open Group San Diego 2015 is open and available to members and non-members. Please register here.

Join the conversation via Twitter – @theopengroup #ogSAN

 


Filed under Boundaryless Information Flow™, Dependability through Assuredness™, Internet of Things, Professional Development, Security, Standards, TOGAF®, Uncategorized

Open FAIR Certification for People Program

By Jim Hietala, VP Security, and Andrew Josey, Director of Standards, The Open Group

In this final installment of the Open FAIR blog series, we will look at the Open FAIR Certification for People program.

In early 2012, The Open Group Security Forum began exploring the idea of creating a certification program for Risk Analysts. Discussions with large enterprises regarding their risk analysis programs led us to the conclusion that there was a need for a professional certification program for Risk Analysts. In addition, Risk Analyst professionals and Open FAIR practitioners expressed interest in a certification program. Security and risk training organizations also expressed interest in providing training courses based upon the Open FAIR standards and Body of Knowledge.

The Open FAIR People Certification Program was designed to meet the requirements of employers and risk professionals. The certification program is a knowledge-based certification, testing candidates’ knowledge of the two standards, O-RA and O-RT. Candidates are free to acquire their knowledge through self-study, or to take a course from an accredited training organization. The program currently has a single level (Foundation), with a more advanced certification level (Certified) planned for 2015.

Several resources are available from The Open Group to assist Risk Analysts preparing to sit for the exam, including the following:

  • Open FAIR Pocket Guide
  • Open FAIR Study Guide
  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

All of these can be downloaded from The Open Group publications catalog at http://www.opengroup.org/bookstore/catalog.

For training organizations, The Open Group accredits organizations wishing to offer training courses on Open FAIR. Testing of candidates is offered through Prometric test centers worldwide.

For more information on Open FAIR certification or accreditation, please contact us at: openfair-cert-auth@opengroup.org

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

 

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.

 

 

 


Filed under Uncategorized, Enterprise Architecture, Cybersecurity, Certifications, Information security, Professional Development, RISK Management, Open FAIR Certification, Accreditations, Security

The Onion From The Inside Out

By Stuart Boardman, Senior Business Consultant, Business & IT Advisory, KPN Consulting and Ed Harrington, Senior Consulting Associate, Conexiam

The Open Group Open Platform 3.0™ (OP3.0) services often involve a complex network of interdependent parties[1]. Each party has its own concept of the value it expects from the service. One consequence of this is that each party depends on the value other parties place on the service. If it’s not core business for one of them, its availability and reliability could be in doubt. So the others need to be aware of this and have some idea of how much that matters to them.

In a previous post, we used the analogy of an onion to model various degrees of relationship between parties. At a high level the onion looks like this:

“Onion” (diagram by Stuart Boardman, KPN)

Every player has their own version of this onion. Every player’s own perspective is from the middle of it. The complete set of players will be distributed across different layers of the onion depending on whose onion we are looking at.

In a short series of blogs, we’re going to use a concrete use-case to explore what various players’ onions look like. To understand that onion involves working from the middle out. We all know that you can’t peel an onion starting in the middle, so let’s not get hung up on the metaphor. It’s only useful in as far as it fits with our real business objective. In this case the objective is to have the best possible chance of understanding and then realizing the potential value of a service.

Defining and Realizing Value

Earlier this year, The Open Group published a set of Open Platform 3.0 use cases. One of these use cases (#15) considers the energy market ecosystem involved in smart charging of electric vehicles. The players in this use case include:

  • The Vehicle User
  • Supplier/Charging Operator(s)
  • Distribution Service Operator (DSO)
  • Electricity Bulk Generators
  • Transmission (National Grid) Operator
  • Local Government


The use case describes a scenario involving these players:

A local controller (a device – known in OP3.0 as part of the Internet of Things) controls one or more charging stations. The Charging Operator informs the vehicle (and possibly the Vehicle User) via the local controller how much capacity is available to it. If the battery is nearly full the vehicle can inform the local controller that it needs less capacity and this capacity can then be made available to other vehicles at other charging stations.

The Charging Operator determines the capacity to be made available on the basis of information provided by the DSO (maximum allowable capacity at that time), possibly combined with commercial information (e.g., current spot prices, predicted trends, flexibility agreements with vehicle-owners/customers where applicable). The DSO has predicted available capacity on the basis of currently predicted weather conditions and long-term usage patterns in the relevant area. The DSO is able to adapt to unexpected changes in real-time and restrict or increase the locally available capacity.
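As a thought experiment only, the Charging Operator’s allocation step in this scenario could be sketched as below. The proportional-sharing rule, the station names and the kW figures are assumptions made for illustration; the published use case does not prescribe an allocation algorithm.

```python
# Illustrative sketch of the Charging Operator's allocation step: share the
# DSO's maximum allowable capacity across charging stations, letting a nearly
# full vehicle hand back capacity that others can then use. The allocation rule
# and all numbers are assumptions, not part of the published use case (#15).

def allocate_capacity(dso_max_kw: float, requests_kw: dict) -> dict:
    """Share dso_max_kw across stations in proportion to what each vehicle asks for,
    never giving a station more than it requested."""
    total_requested = sum(requests_kw.values())
    if total_requested <= dso_max_kw:
        return dict(requests_kw)          # enough capacity: everyone gets their request
    scale = dso_max_kw / total_requested  # otherwise scale requests down proportionally
    return {station: kw * scale for station, kw in requests_kw.items()}

# The vehicle at station A reports a nearly full battery and asks for less,
# which automatically frees capacity for the other stations.
requests = {"station_A": 3.0, "station_B": 22.0, "station_C": 22.0}
print(allocate_capacity(dso_max_kw=40.0, requests_kw=requests))
```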

Value For The Various Parties

The Vehicle User

For the sake of making it interesting, let’s say that the vehicle user is a taxi driver. For her, the value is primarily in being able to charge the vehicle at a convenient time, place, speed and cost. But the perception of what constitutes value in those categories may vary depending on whether she uses a public charging station or charges at home. In either case the service she uses is focused on the Supplier/Charging Operator, because that is who she pays for the service. The bill includes generic DSO costs, but the customer has no direct relationship with a DSO and is only really aware of them when maintenance is carried out. Factors like convenient time and place may bring Local Government into the picture, because they are often the party who make parking spaces for electric vehicles available.

“The Taxi Driver’s Onion” (diagram by Stuart Boardman, KPN)

Local Government

Local government is then also responsible for policing the proper use of these spaces. The importance assigned by local government to making these facilities available is a question of policy balanced by cost/gain (licenses and parking fees). Policy is influenced by the economy, by the convictions of the councilors, by lobbyists (especially those connected with the DSO, Bulk Generators and Transmission Operators), by innovation and natural resources and by the attitude of the public towards electric vehicles, which in turn may be influenced by national government policy. In some countries (e.g. The Netherlands) there are tax incentives for the acquisition of electric cars. If this policy changes in a country, the number of electric vehicles could increase or decrease dramatically. Local government has a dependency on and formal relationship with the Supplier that manages the Charging Stations. The relationship with the DSO is indirect unless they have been partners in an initiative to promote electric vehicles.

“Local Government’s Onion” (diagram by Stuart Boardman, KPN)

The Distribution Service Operator

Value for the DSO involves balancing its regulatory obligation to provide continuity of energy supply with the cost of investment to achieve that and with the public perception of the value of that service. The DSO also gains value in terms of reputation from investing in innovation and energy saving. That value is expressed in its own long-term future as an enterprise. The DSO, being very much the hub in this use case, is dependent on the Supplier and the Vehicle User (with the vehicle’s battery as proxy) to provide the information needed to ensure continuity – and of course on the Transmission Operator and the Bulk Generators to provide power. It does not, however, have any direct relationship with any Bulk Generator or even necessarily know who they are or where they are located.

 

“The Distribution Service Operator’s Onion” (diagram by Stuart Boardman, KPN)

The Bulk Generator

The Bulk Generator has no direct involvement in this use case but has an indirect dependency on anything affecting the level of usage of electricity, as this affects the market price and long-term future of its product. So there is generic value (or anti-value) in the use case if it is widely implemented.

To be continued…

Those were the basics of the approach. There’s a lot more to be done before you can say you have a grip on value realization in such a scenario.

In the next blog, we’ll dive deeper into the use case, identify other relevant stakeholders and look at other dependencies that may influence value across the chain.

[1] Open Platform 3.0 refers to this as a “wider business ecosystem”. In fact such ecosystems exist for all kinds of services. We just happen to be focusing on this kind of service.

Stuart Boardman is a Senior Business Consultant with KPN Consulting, where he leads the Enterprise Architecture practice and consults to clients on Cloud Computing, Enterprise Mobility and The Internet of Everything. He is Co-Chair of The Open Group Open Platform 3.0™ Forum and was Co-Chair of the Cloud Computing Work Group’s Security for the Cloud and SOA project, and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by KPN, the Information Security Platform (PvIB) in The Netherlands and his previous employer, CGI, as well as several Open Group white papers, guides and standards. He is a frequent speaker at conferences on the topics of Open Platform 3.0 and Identity.

Ed Harrington is a Senior Consulting Associate with Conexiam, a Calgary, Canada-headquartered consultancy. He also heads his own consultancy, EPH Associates. Prior positions include Principal Consultant with Architecting the Enterprise, where he provided TOGAF and other Enterprise Architecture (EA) discipline training and consultancy; EVP and COO for Model Driven Solutions, an EA, SOA and Model Driven Architecture consulting and software development company; various positions at two UK-based companies, Nexor and ICL; and 18 years at General Electric in various marketing and financial management positions. Ed has been an active member of The Open Group since 2000, when the EMA became part of The Open Group, and is past chair of various Open Group Forums (including past Vice Chair of the Architecture Forum). Ed is TOGAF® 9 certified.


Filed under Business Architecture, Interoperability, Open Platform 3.0, Service Oriented Architecture, Strategy, Uncategorized

Using the Open FAIR Body of Knowledge with Other Open Group Standards

By Jim Hietala, VP Security, and Andrew Josey, Director of Standards, The Open Group

This is the third in our four-part blog series introducing the Open FAIR Body of Knowledge. In this blog, we look at how the Open FAIR Body of Knowledge can be used with other Open Group standards.

The Open FAIR Body of Knowledge provides a model with which to decompose, analyze, and measure risk. Risk analysis and management is a horizontal enterprise capability that is common to many aspects of running a business. Risk management in most organizations exists at a high level as Enterprise Risk Management, and it exists in specialized parts of the business such as project risk management and IT security risk management. Because the proper analysis of risk is a fundamental requirement for different areas of Enterprise Architecture (EA), and for IT system operation, the Open FAIR Body of Knowledge can be used to support several other Open Group standards and frameworks.
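As a rough illustration of what “decompose, analyze, and measure” means in practice, the sketch below runs a small Monte Carlo simulation over the top-level Open FAIR factors: Loss Event Frequency (derived from Threat Event Frequency and Vulnerability) combined with Loss Magnitude. The decomposition follows the Risk Taxonomy at its top level, but the distributions and every number used here are invented for illustration; the standards define the taxonomy and method, not these parameters.

```python
# Minimal Monte Carlo sketch of the top-level Open FAIR decomposition:
# annualized loss exposure ~ Loss Event Frequency x Loss Magnitude, where
# Loss Event Frequency = Threat Event Frequency x Vulnerability.
# All distributions and parameters are invented for illustration only.
import random

def simulate_annualized_loss(iterations: int = 10_000) -> list:
    losses = []
    for _ in range(iterations):
        tef = random.triangular(1, 20, 5)                  # threat events per year (low, high, mode)
        vulnerability = random.triangular(0.05, 0.5, 0.2)  # P(threat event becomes a loss event)
        lef = tef * vulnerability                          # loss events per year
        loss_magnitude = random.triangular(10_000, 500_000, 50_000)  # cost per loss event
        losses.append(lef * loss_magnitude)                # annualized loss exposure, this trial
    return losses

results = sorted(simulate_annualized_loss())
print(f"median annualized loss exposure: {results[len(results) // 2]:,.0f}")
print(f"90th percentile:                 {results[int(len(results) * 0.9)]:,.0f}")
```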

The TOGAF® Framework

In the TOGAF 9.1 standard, Risk Management is described in Part III: ADM Guidelines and Techniques. Open FAIR can be used to help improve the measurement of various types of Risk, including IT Security Risk, Project Risk, Operational Risk, and other forms of Risk. Open FAIR can help to improve architecture governance through improved, consistent risk analysis and better Risk Management. Risk Management is described in the TOGAF framework as a necessary capability in building an EA practice. Use of the Open FAIR Body of Knowledge as part of an EA risk management capability will help to produce risk analysis results that are accurate and defensible, and that are more easily communicated to senior management and to stakeholders.

O-ISM3

The Open Information Security Management Maturity Model (O-ISM3) is a process-oriented approach to building an Information Security Management System (ISMS). Risk Management as a business function exists to identify risk to the organization, and in the context of O-ISM3, information security risk. Open FAIR complements the implementation of an O-ISM3-based ISMS by providing more accurate analysis of risk, which the ISMS can then be designed to address.

O-ESA

The Open Enterprise Security Architecture (O-ESA) from The Open Group describes a framework and template for policy-driven security architecture. O-ESA (in Sections 2.2 and 3.5.2) describes risk management as a governance principle in developing an enterprise security architecture. Open FAIR supports the objectives described in O-ESA by providing a consistent taxonomy for decomposing and measuring risk. Open FAIR can also be used to evaluate the cost and benefit, in terms of risk reduction, of various potential mitigating security controls.

O-TTPS

The O-TTPS standard, developed by The Open Group Trusted Technology Forum, provides a set of guidelines, recommendations, and requirements that help assure against maliciously tainted and counterfeit products throughout commercial off-the-shelf (COTS) information and communication technology (ICT) product lifecycles. The O-TTPS standard includes requirements to manage risk in the supply chain (SC_RSM). Specific requirements in the Risk Management section of O-TTPS include identifying, assessing, and prioritizing risk from the supply chain. The use of the Open FAIR taxonomy and risk analysis method can improve these areas of risk management.

The ArchiMate® Modeling Language

The ArchiMate modeling language, as described in the ArchiMate Specification, can be used to model Enterprise Architectures. The ArchiMate Forum is also considering extensions to the ArchiMate language to include modeling security and risk. Basing this risk modeling on the Risk Taxonomy (O-RT) standard will help to ensure that the relationships between the elements that create risk are consistently understood and applied to enterprise security and risk models.

O-DA

The O-DA standard (Dependability Through Assuredness), developed by The Open Group Real-time and Embedded Systems Forum, provides the framework needed to create dependable system architectures. The requirements process used in O-DA requires that risk be analyzed before developing dependability requirements. Open FAIR can help to create a solid risk analysis upon which to build dependability requirements.

In the final installment of this blog series, we will look at the Open FAIR certification for people program.

The Open FAIR Body of Knowledge consists of the following Open Group standards:

  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

These can be downloaded from The Open Group publications catalog at http://www.opengroup.org/bookstore/catalog.

Our other publications include a Pocket Guide and a Certification Study Guide.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

 

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.1, IEEE Std 1003.1, 2013 Edition (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.

 


Filed under ArchiMate®, Cybersecurity, Enterprise Architecture, O-TTF, O-TTPS, OTTF, real-time and embedded systems, RISK Management, Security, Standards, TOGAF®, Uncategorized

The Business of Managing IT: The Open Group IT4IT™ Forum

By The Open Group

At The Open Group London 2014 event in October, the launch of The Open Group IT4IT™ Forum was announced. The goal of the new Forum is to create a Reference Architecture and standard that will allow IT departments to take a more holistic approach to managing the business of IT with continuous insight and control, enabling Boundaryless Information Flow™ across the IT Value Chain.

We recently spoke to Forum member Charlie Betz, Founder, Digital Management Academy, LLC, about the new Forum, its origins and why it’s time for IT to be managed as if it were a business in itself.

As IT has become more central to organizations, its role has changed drastically from the days when companies had one large mainframe or just a few PCs. For many organizations today, particularly large enterprises, IT is becoming a business within the business.

The problem with most IT departments, though, is that IT has never really been run as if it were a business.

In order for IT to better cope with rapid technological change and become more efficient at transitioning to the service-based model that most businesses today require, IT departments need guidance as to how the business of IT can be run. What’s at stake are things such as how to better manage IT at scale, how to understand IT as a value chain in its own right and how organizations can get better visibility into the vast amount of economic activity that’s currently characterized in organizations through technology.

The Open Group’s latest Forum aims to do just that.

The Case for IT Management

In the age of digital transformation, IT has become an integral part of how business is done. So says Charlie Betz, one of the founding members of the IT4IT Forum. From the software in your car to the supply chain that brings you your bananas, IT has become an irreplaceable component of how things work.

Quoting industry luminary Marc Andreessen, Betz says “software is eating the world.” Similarly, Betz says, IT management is actually beginning to eat management, too. Although this might seem laughable, we have become increasingly dependent on computing systems in our everyday lives. With that dependence comes significant concerns about the complexity of those systems and the potential they carry for chaotic behaviors. Therefore, he says, as technology becomes pervasive, how IT is managed will increasingly dictate how businesses are managed.

“If IT is increasing in its proportion of all product management, and all markets are increasingly dependent on managing IT, then understanding pure IT management becomes critically important not just for IT but for all business management,” Betz says.

According to Betz, the conversation about running the business of IT has been going on in the industry for a number of years under the guise of ideas such as “enterprise resource planning for IT” and the like. Ultimately, though, Betz says managing IT comes down to determining what IT’s value chain is and how to deliver on it.

Betz compares modern IT departments to atoms, cells and bits, where atoms represent hardware, including servers, data centers and networks; cells represent people; and bits are represented by software. In this analogy, these three things comprise the fundamental resources that an IT department manages. When reduced to economic terms, Betz says, what is currently lacking in most IT departments is a sense of how much things are worth, what the total costs are for acquisition and maintenance for capabilities and the supply and demand dynamics for IT services.

For example, in traditional IT management, workloads are defined by projects, tickets and also a middle ground characterized by work that is smaller than a project and larger than a ticket, Betz says. Often IT departments lack an understanding of how the three relate to each other and how they affect resources—particularly in the form of people—which becomes problematic because there is no holistic view of what the department is doing. Without that aggregate view, management is not only difficult but nearly impossible.
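One way to picture the aggregate view Betz says is missing: normalize the three kinds of work into a single demand model so total load can be seen regardless of where the work originated. The sketch below is a hypothetical illustration; the field names, categories and hours are assumptions, not constructs from the IT4IT Reference Architecture.

```python
# Hypothetical sketch of a holistic work view: projects, tickets and the
# in-between work normalized into one model so demand on teams can be
# aggregated. Names and numbers are illustrative, not IT4IT constructs.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class WorkItem:
    source: str            # "project", "ticket", or "other" (the middle ground)
    team: str
    estimated_hours: float

backlog = [
    WorkItem("project", "platform", 400.0),
    WorkItem("ticket", "platform", 3.0),
    WorkItem("other", "platform", 40.0),
    WorkItem("ticket", "apps", 5.0),
]

def demand_by_team(items: list) -> dict:
    """Aggregate estimated hours per team, regardless of where the work originated."""
    totals = defaultdict(float)
    for item in items:
        totals[item.team] += item.estimated_hours
    return dict(totals)

print(demand_by_team(backlog))   # e.g. {'platform': 443.0, 'apps': 5.0}
```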

Betz says that to get a grasp on the whole, IT needs to take a cue from the lean management movement and first understand where the work originates and what its nature is, so activities and processes don’t continue to proliferate without being managed.

Betz believes part of the reason IT has not better managed itself to date is because the level of complexity within IT has grown so quickly. He likens it to the frog in the boiling water metaphor—if the heat is turned up incrementally, the frog doesn’t know what’s hit him until it’s too late.

“Back when you had one computer it just wasn’t a concern,” he said. “You had very few systems that you were automating. It’s not that way nowadays. You have thousands of them. The application portfolio in major enterprises—depending on how you count applications, which is not an easy question in and of itself—the range is between 5,000 and 10,000 applications. One hundred thousand servers is not unheard of. These are massive numbers, and the complexity is unimaginable. The potential for emergent chaotic behavior is unprecedented in human technological development.”

Betz believes the reason there is a perception that IT is poorly managed is also because it’s at the cutting edge of every management question in business today. And because no one has ever dealt with systems and issues this complex before, it’s difficult to get a handle on them, which is why the time has come to create a framework for how IT can be managed.

IT4IT

The IT4IT Forum grew out of a joint initiative that was originally undertaken by Royal Dutch Shell and HP. What began as a high-level user group within HP grew to include companies such as Accenture, Achmea, Munich RE and PwC, which have also been integral in pulling together the initial work that has been provided to The Open Group to create the Forum. As the group began to develop a framework, it was clear that what they were developing needed to become an open standard, Betz says, so the group turned to The Open Group.

“It was pretty clear that The Open Group was the best fit for this,” he says. “There was clearly recognition and understanding on the part of The Open Group senior staff that this was a huge opportunity. They were very positive about it from the get-go.”

Currently in development, the IT4IT standard will provide guidance and specifications for how IT departments can provide consistent end-to-end service across the IT Value Chain and lifecycle. The IT Value Chain is meant to provide a model for managing the IT services lifecycle and for how those services can be brokered with enterprises. By giving IT the same level of functionality as other critical business functions (such as finance or HR), it enables IT to achieve better levels of predictability and efficiency.


Betz says developing a Reference Architecture for IT4IT will be helpful for IT departments because it will provide a tested model for departments to begin the process of better management. And having that model be created by a vendor-neutral consortium helps provide credibility for users because no one company is profiting from it.

“It’s the community telling itself a story of what it wants to be,” he said.

The Reference Architecture will not only include prescriptive methods for how to design, procure and implement the functionality necessary to better manage IT departments but will also include real-world use cases related to current industry trends such as Cloud-sourcing, Agile, DevOps and service brokering. As an open standard, it will also be designed to work with existing industry standards that IT departments may already be using, including ITIL®, COBIT®, SAFe® and TOGAF®, an Open Group standard.

With almost 200 pages of material already developed toward a standard, Betz says the Forum made its initial Snapshot of the standard available in late November. From there the Forum will need to decide which sections should be included as normative parts of the standard. The hope is to have the first version of the IT4IT Reference Architecture standard available next summer, Betz says.

For more on The Open Group IT4IT Forum or to become a member, please visit http://www.opengroup.org/IT4IT.

 


Filed under architecture, IT, IT4IT, Standards, Uncategorized, Value Chain