
A Historical Look at Enterprise Architecture with John Zachman

By The Open Group

John Zachman’s Zachman Framework is widely recognized as the foundation and historical basis for Enterprise Architecture. On Tuesday, Feb. 3, during The Open Group’s San Diego 2015 event, Zachman will be giving the morning’s keynote address entitled “Zachman on the Zachman Framework and How it Complements TOGAF® and Other Frameworks.”

We recently spoke to Zachman in advance of the event about the origins of his framework, the state of Enterprise Architecture and the skills he believes EAs need today.

As a discipline, Enterprise Architecture is still fairly young. It began getting traction in the mid to late 1980s after John Zachman published an article describing a framework for information systems architectures in the IBM Systems Journal. Zachman said he lived to regret initially calling his framework “A Framework for Information Systems Architecture,” instead of “Enterprise Architecture” because the framework actually has nothing to do with information systems.

Rather, he said, it was “A Framework for Enterprise Architecture.” But at the time of publication, the idea of Enterprise Architecture was such a foreign concept, Zachman said, that people didn’t understand what it was. Even so, the origins of his ontological framework were already almost 20 years old by the time he first published them.

In the late 1960s, Zachman was working as an account executive in the Marketing Division of IBM. His account responsibility was the Atlantic Richfield Company (better known as ARCO). ARCO had just been formed out of the merger of three separate companies: Atlantic Refining of Philadelphia and Richfield of California, which merged and then bought Sinclair Oil of New York in 1969.

“It was the biggest corporate merger in history at the time where they tried to integrate three separate companies into one company. They were trying to deal with an enterprise integration issue, although they wouldn’t have called it that at the time,” Zachman said.

With three large companies to merge, ARCO needed help in figuring out how to do the integration. When the client asked Zachman how they should handle such a daunting task, he said he’d try to get some help. So he turned to a group within IBM called the Information Systems Control and Planning Group and the group’s Director of Architecture, Dewey Walker, for guidance.

Historically, when computers were first used in commercial applications, there already were significant “Methods and Procedures” systems communities in most large organizations whose job was to formalize many manual systems in order to manage the organization, Zachman said. When computers came on the scene, they were used to improve organizational productivity by replacing the people performing the organizations’ processes. However, because manual systems defined and codified organizational responsibilities, when management made changes within an organization, as they often did, it would render the computer systems obsolete, which required major redevelopment.

Zachman recalled Walker’s observation that “organizational responsibilities” and “processes” were two different things. As such, he believed systems should be designed to automate the process, not to encode the organizational responsibilities, because the process and the organization changed independently from one another. By separating these two independent variables, management could change organizational responsibilities without affecting or changing existing systems or the organization. Many years later, Jim Champy and Mike Hammer popularized this notion in their widely read 1993 book, “Reengineering the Corporation,” Zachman said.

According to Zachman, Walker created a methodology for defining processes as separate entities from the organizational structure. Walker came out to Los Angeles, where Zachman and ARCO were based, to help provide guidance on the merger. Zachman recalls Walker telling him that the key to defining the systems for Enterprise purposes was in the data, not necessarily the process itself. In other words, the data across the company needed to be normalized so that they could maintain visibility into the assets and structure of the enterprise.

“The secret to this whole thing lies in the coding and the classification of the data,” Zachman recalled Walker saying. Walker’s methodology, he said, began by classifying data by its existence not by its use.

Since all of this was happening well before anyone came up with the concept of data modeling, there were no data models from which to design their system. “Data-oriented words were not yet in anyone’s vocabulary,” Zachman said. Walker had difficulty articulating his concepts because the words he had at his disposal were inadequate, Zachman said.

Walker understood that to have structural control over the enterprise, they needed to look at both processes and data as independent variables, Zachman said. That would provide the flexibility and knowledge base to accommodate escalating change. This was critical, he said, because the system is the enterprise. Therefore, creating an integrated structure of independent variables and maintaining visibility into that structure are crucial if you want to be able to manage and change it. Otherwise, he says, the enterprise “disintegrates.”

Although Zachman says Walker was “onto this stuff early on,” Walker eventually left IBM, leaving Zachman with the methodology Walker had named “Business Systems Planning.” (Zachman said Walker knew that it wasn’t just about the information systems, but about the business systems.) According to Zachman, he inherited Walker’s methodology because he’d been working closely with Walker. “I was the only person that had any idea what Dewey was doing,” he said.

What he was left with, Zachman says, was what today he would call a “Row 1 methodology”—or the “Executive Perspective” and the “Scope Contexts” in what would eventually become his ontology.

According to Zachman, Walker had figured out how to transcribe enterprise strategy in such a fashion that engineering work could be derived from it. “What we didn’t know how to do,” Zachman said, “was to transform the strategy (Zachman Framework Row 1), which tends to be described at a somewhat abstract level of definition into the operating Enterprise (Row 6), which was comprised of very precise instructions (explicit or implicit) for behavior of people and/or machines.”

Zachman said they knew that “Architecture” had something to do with the Strategy-to-Instantiation transformation logic, but in those days no one knew what architecture for enterprises was. His radical idea was to ask people who did architecture for things like buildings, airplanes, locomotives, computers or battleships what architecture meant for those Industrial Age products. Zachman believed that if he could find out what they thought architecture was for those products, he might be able to figure out what architecture was for enterprises, and thereby how to transform the strategy into the operating enterprise so that the implementation was aligned with the strategy.

With this in mind, Zachman began reaching out to people in other disciplines to see how they put together things like buildings or airplanes. He spoke to an architect friend and also to some of the aircraft manufacturers that were based in Southern California at the time. He began gathering different engineering specs and studying them.

One day while he was sitting at his desk, Zachman said, he began sorting the design artifacts he’d collected for buildings and airplanes into piles. Suddenly he noticed there was something similar in how the design patterns were described.

“Guess what?” he said. “The way you describe buildings is identical to the way you describe airplanes, which turns out to be identical to the way you describe locomotives, which is identical to the way you describe computers. Which is identical to the way you describe anything else that humanity has ever described.”

Zachman says he really just “stumbled across” the way to describe the enterprise, attributing his discovery to providence, a miracle! Despite having kick-started the discipline of Enterprise Architecture with this recognition, Zachman claims he’s “actually not very innovative.”

“I just saw the pattern and put enterprise names on it,” he said.

Once he understood that Architectural design descriptions all used the same categories and patterns, he knew that he could also define Architecture for Enterprises. All it would take would be to apply the enterprise vocabulary to the same pattern and structure of the descriptive representations of everything else.

“All I did was, I saw the pattern of the structure of the descriptive representations for airplanes, buildings, locomotives and computers, and I put enterprise names on the same patterns,” he says. “Now you have the Zachman Framework, which basically is Architecture for Enterprises. It is Architecture for every other object known to humankind.”

Thus the Zachman Framework was born.

Ontology vs. Methodology

According to Zachman, what his Framework is ultimately intended for is describing a complex object, an Enterprise. In that sense, the Zachman Framework is the ontology for Enterprise Architecture, he says. What it doesn’t do is tell you how to do Enterprise Architecture.

“Architecture is architecture is architecture. My framework is just the definition and structure of the descriptive representation for enterprises,” he said.

That’s where methodologies such as TOGAF®, an Open Group standard, DoDAF or other methodological frameworks come in. To create and execute an Architecture, practitioners need both the ontology—to help them define, translate and place structure around the enterprise descriptive representations—and a methodology to populate and implement it. Both are needed—it’s an AND situation, not an OR, he said. The methodology simply needs to use (or reuse) the ontological constructs in creating the implementation instantiations in order for the enterprise to be “architected.”

The Need for Architecture

Unfortunately, Zachman says, there are still a lot of companies today that don’t understand the need to architect their enterprise. Enterprise Architecture is simply not on the radar of general management in most places.

“It’s not readily acknowledged on the general management agenda,” Zachman said.

Instead, he says, most companies focus their efforts on building and running systems, not engineering the enterprise as a holistic unit.

“We haven’t awakened to the concept of Enterprise Architecture,” he says. “The fundamental reason why is people think it takes too long and it costs too much. That is a shibboleth – it doesn’t take too long or cost too much if you know what you’re doing and have an ontological construct.”

Zachman believes many companies are particularly guilty of this type of thinking, which he attributes to a tendency to think that there isn’t any work being done unless the code is up and running. Never mind all the work it took to get that code up and running in the first place.

“Getting the code to run, I’m not arguing against that, but it ought to be in the context of the enterprise design. If you’re just providing code, you’re going to get exactly what you have right now—code. What does that have to do with management’s intentions or the Enterprise in its entirety?”

As such, Zachman compares today’s enterprises to log cabins rather than skyscrapers. Many organizations have not gotten beyond that “primitive” stage, he says, because they haven’t been engineered to be integrated or changed.

According to Zachman, the perception that Enterprise Architecture is too costly and time consuming must change. And, people also need to stop thinking that Enterprise Architecture belongs solely under the domain of IT.

“Enterprise Architecture is not about building IT models. It’s about solving general management problems,” he said. “If we change that perception, and we start with the problem and we figure out how to solve that problem, and then, oh by the way we’re doing Architecture, then we’re going to get a lot of Architecture work done.”

Zachman believes one way to do this is to build out the Enterprise Architecture iteratively and incrementally. By tackling one problem at a time, he says, general management may not even need to know whether you’re doing Enterprise Architecture or not, as long as their problem is being solved. The governance system controls the architectural coherence and integration of the increments. He expects that EA will trend in that direction over the next few years.

“We’re learning much better how to derive immediate value without having the whole enterprise engineered. If we can derive immediate value, that dispels the shibboleth—the misperception that architecture takes too long and costs too much. That’s the way to eliminate the obstacles for Enterprise Architecture.”

As far as the skills needed to do EA into the future, Zachman believes that enterprises will eventually need to have multiple types of architects with different skill sets to make sure everything is aligned. He speculates that someday there may need to be specialists for every cell in the framework, saying that there is potentially room for a lot of specialization, different skill sets and a lot of creativity, just as aircraft manufacturers need a variety of engineers—from aeronautic to hydraulic and everywhere in between—to get a plane built. One engineer does not engineer the entire airplane or a hundred-story building or an ocean liner, or, for that matter, a personal computer. Similarly, increasingly complex enterprises will likely need multiple types of engineering specialties. No one person knows everything.

“Enterprises are far more complex than 747s. In fact, an enterprise doesn’t have to be very big before it gets really complex,” he said. “As enterprise systems increase in size, there is increased potential for failure if they aren’t architected to respond to that growth. And if they fail, the lives and livelihoods of hundreds of thousands of people can be affected, particularly if it’s a public sector Enterprise.”

Zachman believes it may ultimately take a generation or two for companies to understand the need to better architect the way they run. As things are today, he says, the paradigm of the “system process first” Industrial Age is still too ingrained in how systems are created. He believes it will be a while before that paradigm shifts to a more Information Age-centric way of thinking where the enterprise is the object rather than the system.

“Although this afternoon is not too early to start working on it, it is likely that it will be the next generation that will make Enterprise Architecture an essential way of life like it is for buildings and airplanes and automobiles and every other complex object,” he said.

John A. Zachman is Founder & Chairman of Zachman International, Executive Director of the FEAC Institute, and Chairman of the Zachman Institute.

Join the conversation – @theopengroup, #ogchat, #ogSAN



Using the Open FAIR Body of Knowledge with Other Open Group Standards

By Jim Hietala, VP Security, and Andrew Josey, Director of Standards, The Open Group

This is the third in our four-part blog series introducing the Open FAIR Body of Knowledge. In this blog, we look at how the Open FAIR Body of Knowledge can be used with other Open Group standards.

The Open FAIR Body of Knowledge provides a model with which to decompose, analyze, and measure risk. Risk analysis and management is a horizontal enterprise capability that is common to many aspects of running a business. Risk management in most organizations exists at a high level as Enterprise Risk Management, and it exists in specialized parts of the business such as project risk management and IT security risk management. Because the proper analysis of risk is a fundamental requirement for different areas of Enterprise Architecture (EA), and for IT system operation, the Open FAIR Body of Knowledge can be used to support several other Open Group standards and frameworks.

The TOGAF® Framework

In the TOGAF 9.1 standard, Risk Management is described in Part III: ADM Guidelines and Techniques. Open FAIR can be used to help improve the measurement of various types of Risk, including IT Security Risk, Project Risk, Operational Risk, and other forms of Risk. Open FAIR can help to improve architecture governance through improved, consistent risk analysis and better Risk Management. Risk Management is described in the TOGAF framework as a necessary capability in building an EA practice. Use of the Open FAIR Body of Knowledge as part of an EA risk management capability will help to produce risk analysis results that are accurate and defensible, and that are more easily communicated to senior management and to stakeholders.

The O-ISM3 Standard

The Open Information Security Management Maturity Model (O-ISM3) is a process-oriented approach to building an Information Security Management System (ISMS). Risk Management as a business function exists to identify risk to the organization, and in the context of O-ISM3, information security risk. Open FAIR complements the implementation of an O-ISM3-based ISMS by providing more accurate analysis of risk, which the ISMS can then be designed to address.

The O-ESA Framework

The Open Enterprise Security Architecture (O-ESA) from The Open Group describes a framework and template for policy-driven security architecture. O-ESA (in Sections 2.2 and 3.5.2) describes risk management as a governance principle in developing an enterprise security architecture. Open FAIR supports the objectives described in O-ESA by providing a consistent taxonomy for decomposing and measuring risk. Open FAIR can also be used to evaluate the cost and benefit, in terms of risk reduction, of various potential mitigating security controls.

The O-TTPS Standard

The O-TTPS standard, developed by The Open Group Trusted Technology Forum, provides a set of guidelines, recommendations, and requirements that help assure against maliciously tainted and counterfeit products throughout commercial off-the-shelf (COTS) information and communication technology (ICT) product lifecycles. The O-TTPS standard includes requirements to manage risk in the supply chain (SC_RSM). Specific requirements in the Risk Management section of O-TTPS include identifying, assessing, and prioritizing risk from the supply chain. The use of the Open FAIR taxonomy and risk analysis method can improve these areas of risk management.

The ArchiMate® Modeling Language

The ArchiMate modeling language, as described in the ArchiMate Specification, can be used to model Enterprise Architectures. The ArchiMate Forum is also considering extensions to the ArchiMate language to include modeling security and risk. Basing this risk modeling on the Risk Taxonomy (O-RT) standard will help to ensure that the relationships between the elements that create risk are consistently understood and applied to enterprise security and risk models.

The O-DA Standard

The O-DA standard (Dependability Through Assuredness), developed by The Open Group Real-time and Embedded Systems Forum, provides the framework needed to create dependable system architectures. The requirements process used in O-DA requires that risk be analyzed before developing dependability requirements. Open FAIR can help to create a solid risk analysis upon which to build dependability requirements.

In the final installment of this blog series, we will look at the Open FAIR certification for people program.

The Open FAIR Body of Knowledge consists of the following Open Group standards:

  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

These can be downloaded from The Open Group publications catalog at http://www.opengroup.org/bookstore/catalog.

Our other publications include a Pocket Guide and a Certification Study Guide.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.


Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.1, IEEE Std 1003.1-2013 edition (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.




The Open Group London 2014 – Day Three Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

After an evening spent in the wonderful surroundings of the Victoria and Albert Museum, delegates returned to another London landmark building, Westminster Central Hall, for the final day of The Open Group London 2014.

Following on from Tuesday’s schedule, The Open Group event continued with tracks covering topics including Risk Management; TOGAF®, an Open Group standard; Security; and The Open Group Open Platform 3.0™. To begin the Open Platform 3.0 track, Mark Skilton, Professor of Practice, Information Systems Management, Warwick Business School, discussed the real-world implications of Open Platform 3.0. To do this he looked at both the theory and practice behind technologies such as Big Data, social media and even gamification, and their adoption by companies such as Coca-Cola and Hilton.

Mark detailed how such companies are amending their business strategy to take into account these new technologies to drive business benefit. Mark went on to say that Open Platform 3.0 is serving to help “contextualize the moment”, essentially making it easier for individuals or businesses to interact with goods or services. This, he concluded, is being driven by the growing value people place on their time – we want a more seamless experience in our day-to-day lives, whether buying a coffee or checking in to a hotel – and technology is making this possible. The talk provided a fascinating glimpse into the future of convergent technologies and the important role that contextualization is set to play in this.

Following this, Stuart Boardman from KPN Consulting led a session which looked in detail at the capability requirements of Open Platform 3.0. In what was a lively debate, contributors discussed the importance of smart data, semantic consistency, platform hierarchies and sustainability.

The final session of the morning in the Open Platform 3.0 track looked at the topic of open public sector data with Deirdre Lee, Principal at Derilinx and Chris Harding, Director for Interoperability at The Open Group. Discussing a topic that has risen up government agendas recently, Deirdre began by providing a thorough overview of the background to open data in the public sector and the supporting forces behind it. Deirdre provided detail on how various authorities across Europe had provided impetus to the Open Data movement, and what economic impact these initiatives had resulted in. Subsequently, Chris looked at how The Open Group can play a role in the emergence of open data as a subject area.

Following lunch, the tracks were split into two, with Jim Hietala, VP, Security & Healthcare, The Open Group, leading a workshop on the “Voice of the Security Customer”. This specifically looked at the impact of Security Automation on overall Enterprise Security, provoking much discussion among attendees. In the other session, the Open Platform 3.0 Forum focused on the topic of data integration with Ronald Schuldt, Senior Partner, UDEF and Dimitrios Kyritsis, Deputy Director, EPFL, leading a productive debate on the topic.

With The Open Group London 2014 coming to a close, we would like to thank all the speakers for providing such thoughtful content and the 300 attendees for making the event another great success. Also, many thanks go to our sponsors BiZZdesign, Corso, BOC Group, Good e-Learning, AEA and Scape, and media sponsors Van Haren and Computer Weekly.

See you at The Open Group San Diego 2015 February 2 – 5!

Join the conversation – #ogchat

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog and media relations. Loren has over 20 years’ experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.




The Open Group London 2014 Preview: A Conversation with RTI’s Stan Schneider about the Internet of Things and Healthcare

By The Open Group

RTI is a Silicon Valley-based messaging and communications company focused on helping to bring the Industrial Internet of Things (IoT) to fruition. Recently named “The Most Influential Industrial Internet of Things Company” by Appinions and published in Forbes, RTI’s EMEA Manager Bettina Swynnerton will be discussing the impact that the IoT and connected medical devices will have on hospital environments and the Healthcare industry at The Open Group London October 20-23. We spoke to RTI CEO Stan Schneider in advance of the event about the Industrial IoT and the areas where he sees Healthcare being impacted the most by connected devices.

Earlier this year, industry research firm Gartner declared the Internet of Things (IoT) to be the most hyped technology around, having reached the pinnacle of the firm’s famed “Hype Cycle.”

Despite the hype around consumer IoT applications—from FitBits to Nest thermostats to fashionably placed “wearables” that may begin to appear in everything from jewelry to handbags to kids’ backpacks—Stan Schneider, CEO of IoT communications platform company RTI, says that 90 percent of what we’re hearing about the IoT is not where the real value will lie. Most of the media coverage and hype is about the “Consumer” IoT, like Google Glass or sensors in refrigerators that tell you when the milk’s gone bad. However, most of the real value of the IoT will take place in what GE has coined the “Industrial Internet”—applications working behind the scenes to keep industrial systems operating more efficiently, says Schneider.

“In reality, 90 percent of the real value of the IoT will be in industrial applications such as energy systems, manufacturing advances, transportation or medical systems,” Schneider says.

However, the reality today is that the IoT is quite new. As Schneider points out, most companies are still trying to figure out what their IoT strategy should be. There isn’t that much active building of real systems at this point.

“Most companies, at the moment, are just trying to figure out what the Internet of Things is. I can do a webinar on ‘What is the Internet of Things?’ or ‘What is the Industrial Internet of Things?’ and get hundreds and hundreds of people showing up, most of whom don’t have any idea. That’s where most companies are. But there are several leading companies that very much have strategies, and there are a few that are even executing their strategies,” he said. According to Schneider, these companies include GE, which he says has a 700+ person team currently dedicated to building their Industrial IoT platform, as well as companies such as Siemens and Audi, which already have some applications working.

For its part, RTI is actively involved in trying to help define how the Industrial Internet will work and how companies can take disparate devices and make them work with one another. “We’re a nuts-and-bolts, make-it-work type of company,” Schneider notes. As such, openness and standards are critical not only to RTI’s work but to the success of the Industrial IoT in general, says Schneider. RTI is currently involved in as many as 15 different industry standards initiatives.

IoT Drivers in Healthcare

Although RTI is involved in IoT initiatives in many industries, from manufacturing to the military, Healthcare is one of the company’s main areas of focus. For instance, RTI is working with GE Healthcare on the software for its CAT scanner machines. GE chose RTI’s DDS (data distribution service) product because it will let GE standardize on a single communications platform across product lines.

Schneider says there are three big drivers that are changing the medical landscape when it comes to connectivity: the evolution of standalone systems to distributed systems, the connection of devices to improve patient outcomes, and the replacement of dedicated wiring with networks.

The first driver is that medical devices that have been standalone devices for years are now being built on new distributed architectures. This gives practitioners and patients easier access to the technology they need.

For example, RTI customer BK Medical, a medical device manufacturer based in Denmark, is in the process of changing their ultrasound product architecture. They are moving from a single-user physical system to a wirelessly connected distributed design. Images will now be generated in and distributed by the Cloud, thus saving significant hardware costs while making the systems more accessible.

According to Schneider, ultrasound machine architecture hasn’t really changed in the last 30 or 40 years. Today’s ultrasound machines are still wheeled in on a cart. That cart contains a wired transducer, image processing hardware or software and a monitor. If someone wants to keep an image—for example, images of fetuses in utero—they are handed physical media to carry out. Years ago it was a Polaroid picture; today the images are saved to CDs and handed to the patient.

In contrast, BK’s new systems will be completely distributed, Schneider says. Doctors will be able to carry a transducer that looks more like a cellphone with them throughout the hospital. A wireless connection will upload the imaging data into the cloud for image calculation. With a distributed scenario, only one image processing system may be needed for a hospital or clinic. It can even be kept in the cloud off-site. Both patients and caregivers can access images on any display, wherever they are. This kind of architecture makes the systems much cheaper and far more efficient, Schneider says. The days of the wheeled-in cart are numbered.

The second IoT driver in Healthcare is connecting medical devices together to improve patient outcomes. Most hospital devices today are completely independent and standalone. So, if a patient is hooked up to multiple monitors, the only thing that really “connects” those devices today is a piece of paper at the end of a hospital bed that shows how each should be functioning. Nurses are supposed to check these devices on an hourly basis to make sure they’re working correctly and the patient is ok.

Schneider says this approach is error-ridden. First, the nurse may be too busy to do a good job checking the devices. Worse, any number of things can set off alarms whether there’s something wrong with the patient or not. As anyone who has ever visited a friend or relative in the hospital can attest, alarms are going off constantly, making it difficult to determine when someone is really in distress. In fact, one of the biggest problems in hospital settings today, Schneider says, is a phenomenon known as “alarm fatigue.” Single devices simply can’t reliably tell if there’s some minor glitch in data or if the patient is in real trouble. Thus, 80% of all device alarms in hospitals are turned off. Meaningless alarms fatigue personnel, so they either ignore or turn off the alarms…and people can die.

To deal with this problem, new technologies are being created that will connect devices together on a network. Multiple devices can then work in tandem to really figure out when something is wrong. If the machines are networked, alarms can be set to go off only when multiple distress indicators are present rather than just one. For example, if oxygen levels drop on both an oxygen monitor on someone’s finger and on a respiration monitor, the alarm is much more likely a real patient problem than if only one source shows a problem. Schneider says the algorithms to fix these problems are reasonably well understood; the barrier is the lack of networking to tie all of these machines together.
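
As a rough illustration of the correlation logic Schneider describes, here is a minimal sketch in Python that raises an alarm only when two independent monitors agree that something is wrong. The device readings, thresholds and structure are invented for illustration; real networked medical devices would exchange data over standards-based middleware such as DDS rather than in-process objects, but the decision logic follows the same idea.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would come from clinical guidance.
SPO2_ALARM_THRESHOLD = 90        # percent oxygen saturation
RESP_RATE_ALARM_THRESHOLD = 8    # breaths per minute

@dataclass
class VitalSigns:
    spo2: float              # reading from a fingertip pulse oximeter
    respiration_rate: float  # reading from a respiration monitor

def should_alarm(vitals: VitalSigns) -> bool:
    """Raise an alarm only when multiple independent indicators agree.

    A single low reading is often a sensor glitch (a slipped probe or motion
    artifact), so requiring corroboration reduces false alarms.
    """
    low_oxygen = vitals.spo2 < SPO2_ALARM_THRESHOLD
    low_respiration = vitals.respiration_rate < RESP_RATE_ALARM_THRESHOLD
    return low_oxygen and low_respiration

# A suspect oximeter reading alone does not trigger the alarm...
print(should_alarm(VitalSigns(spo2=85, respiration_rate=14)))  # False
# ...but corroborating readings from two devices do.
print(should_alarm(VitalSigns(spo2=85, respiration_rate=6)))   # True
```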

The third area of change in the industrial medical Internet is the transition to networked systems from dedicated wired designs. Surgical operating rooms offer a good example. Today’s operating room is a maze of wires connecting screens, computers, and video. Videos, for instance, come from dynamic x-ray imaging systems, from ultrasound navigation probes and from tiny cameras embedded in surgical instruments. Today, these systems are connected via HDMI or other specialized cables. These cables are hard to reconfigure. Worse, they’re difficult to sterilize, Schneider says. Thus, the surgical theater is hard to configure, clean and maintain.

In the future, the mesh of special wires can be replaced by a single, high-speed networking bus. Networks make the systems easier to configure and integrate, easier to use and accessible remotely. A single, easy-to-sterilize optical network cable can replace hundreds of wires. As wireless gets faster, even that cable can be removed.

“By changing these systems from a mesh of TV-cables to a networked data bus, you really change the way the whole system is integrated,” he said. “It’s much more flexible, maintainable and sharable outside the room. Surgical systems will be fundamentally changed by the Industrial IoT.”

IoT Challenges for Healthcare

Schneider says there are numerous challenges facing the integration of the IoT into existing Healthcare systems—from technical challenges to standards and, of course, security and privacy. But one of the biggest challenges facing the industry, he believes, is plain old fear. In particular, Schneider says, there is a lot of fear within the industry of, in effect, “walking off a cliff” by choosing the wrong direction. Getting beyond that fear and taking risks, he says, will be necessary to move the industry forward.

In a practical sense, the other thing holding back integration is the sheer number of connected devices currently being used in medicine, he says. Manufacturers each have their own systems and obviously have a vested interest in keeping their equipment in hospitals, so many have been reluctant to adopt standards and push interoperability forward, Schneider says.

This is, of course, not just a Healthcare issue. “We see it in every single industry we’re in. It’s a real problem,” he said.

Legacy systems are also a problematic area. “You can’t just go into a Kaiser Permanente and rip out $2 billion worth of equipment,” he says. Integrating new systems with existing technology is a process of incremental change that takes time and vested leadership, says Schneider.

Cloud Integration a Driver

Although many of these technologies are not yet very mature, Schneider believes that the fundamental industry driver is Cloud integration. In Schneider’s view, the Industrial Internet is ultimately a systems problem. As with the ultrasound machine example from BK Medical, it’s not that an existing ultrasound machine doesn’t work just fine today, Schneider says, it’s that it could work better.

“Look what you can do if you connect it to the Cloud—you can distribute it, you can make it cheaper, you can make it better, you can make it faster, you can make it more available, you can connect it to the patient at home. It’s a huge system problem. The real overwhelming striking value of the Industrial Internet really happens when you’re not just talking about the hospital but you’re talking about the Cloud and hooking up with practitioners, patients, hospitals, home care and health records. You have to be able to integrate the whole thing together to get that ultimate value. While there are many point cases that are compelling all by themselves, realizing the vision requires getting the whole system running. A truly connected system is a ways out, but it’s exciting.”

Open Standards

Schneider also says that openness is absolutely critical for these systems to ultimately work. Just as agreeing on a standard for HTTP running on the Internet Protocol (IP) drove the Web, a new device-appropriate protocol will be necessary for the Internet of Things to work. Consensus will be necessary, he says, so that systems can talk to each other and connectivity will work. The Industrial Internet will push that out to the Cloud and beyond, he says.

“One of my favorite quotes is from IBM,” he says. “IBM said, ‘It’s not a new Internet, it’s a new Web.’” By that, they mean that the industry needs new, machine-centric protocols to run over the same Internet hardware and base IP protocol, Schneider said.

Schneider believes that this new web will eventually evolve to become the new architecture for most companies. However, for now, particularly in hospitals, it’s the “things” that need to be integrated into systems and overall architectures.

One example where this level of connectivity will make a huge difference, he says, is in predictive maintenance. Once a system can “sense” or predict that a machine may fail or that a part needs to be replaced, there will be a huge economic impact and cost savings. For instance, he said Siemens uses acoustic sensors to monitor the state of its wind generators. By placing sensors next to the bearings in the machine, they can literally “listen” for squeaky wheels and thus figure out whether a turbine may soon need repair. These analytics let them know when the bearing must be replaced before the turbine shuts down. Of course, the infrastructure will need to connect all of these “things” to each other and to the cloud first. So, there will need to be a lot of system-level changes in architectures.
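
As a minimal sketch of the kind of check such condition-monitoring analytics might perform, the Python below tracks a rolling measure of vibration energy from a bearing’s sensor samples and flags the bearing for replacement once that measure drifts well above a healthy baseline. The window size, baseline and threshold factor are invented for illustration and are unrelated to Siemens’ actual analytics.

```python
import math
from collections import deque
from typing import Iterable, Iterator

def rolling_rms(samples: Iterable[float], window: int = 256) -> Iterator[float]:
    """Yield the root-mean-square of the most recent `window` samples."""
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        yield math.sqrt(sum(x * x for x in buf) / len(buf))

def needs_maintenance(samples: Iterable[float], baseline_rms: float = 0.3,
                      factor: float = 3.0, window: int = 256) -> bool:
    """Flag a bearing when its vibration energy exceeds a multiple of its healthy baseline."""
    return any(rms > factor * baseline_rms for rms in rolling_rms(samples, window))

# Synthetic data: a healthy signal, then one whose vibration grows over time.
healthy = [0.4 * math.sin(i / 5.0) for i in range(2000)]
worn = [0.4 * math.sin(i / 5.0) * (1 + i / 500.0) for i in range(2000)]
print(needs_maintenance(healthy))  # False
print(needs_maintenance(worn))     # True
```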

Standards, of course, will be key to getting these architectures to work together. Schneider believes standards development for the IoT will need to be tackled from both a horizontal and a vertical standpoint. Both generic communication standards and industry-specific standards, such as how to integrate an operating room, must evolve.

“We are a firm believer in open standards as a way to build consensus and make things actually work. It’s absolutely critical,” he said.

Stan Schneider is CEO at Real-Time Innovations (RTI), the Industrial Internet of Things communications platform company. RTI is the largest embedded middleware vendor and has an extensive footprint in all areas of the Industrial Internet, including Energy, Medical, Automotive, Transportation, Defense, and Industrial Control. Stan has published over 50 papers in both academic and industry press. He speaks widely at events and conferences on topics ranging from networked medical devices for patient safety and the future of connected cars to the role of the DDS standard in the IoT, the evolution of power systems, and the various IoT protocols. Before RTI, Stan managed a large Stanford robotics laboratory, led an embedded communications software team and built data acquisition systems for automotive impact testing. Stan completed his PhD in Electrical Engineering and Computer Science at Stanford University, and holds a BS and MS from the University of Michigan. He is a graduate of Stanford’s Advanced Management College.




Open FAIR Blog Series – An Introduction to Risk Analysis and the Open FAIR Body of Knowledge

By Jim Hietala, VP, Security and Andrew Josey, Director of Standards, The Open Group

This is the first in a four-part series of blogs introducing the Open FAIR Body of Knowledge. In this first blog, we look at what the Open FAIR Body of Knowledge provides, and why a taxonomy is needed for Risk Analysis.

An Introduction to Risk Analysis and the Open FAIR Body of Knowledge

The Open FAIR Body of Knowledge provides a taxonomy and method for understanding, analyzing and measuring information risk. It allows organizations to:

  • Speak in one language concerning their risk using the standard taxonomy and terminology, and communicate risk effectively to senior management
  • Consistently study and apply risk analysis principles to any object or asset
  • View organizational risk in total
  • Challenge and defend risk decisions
  • Compare risk mitigation options

What does FAIR stand for?

FAIR is an acronym for Factor Analysis of Information Risk.

Risk Analysis: The Need for an Accurate Model and Taxonomy

Organizations seeking to analyze and manage risk encounter some common challenges. Put simply, it is difficult to make sense of risk without having a common understanding of both the factors that (taken together) contribute to risk, and the relationships between those factors. The Open FAIR Body of Knowledge provides such a taxonomy.

Here’s an example that will help to illustrate why a standard taxonomy is important. Let’s assume that you are an information security risk analyst tasked with determining how much risk your company is exposed to from a “lost or stolen laptop” scenario. The degree of risk that the organization experiences in such a scenario will vary widely depending on a number of key factors. To even start to approach an analysis of the risk posed by this scenario to your organization, you will need to answer a number of questions, such as:

  • Whose laptop is this?
  • What data resides on this laptop?
  • How and where did the laptop get lost or stolen?
  • What security measures were in place to protect the data on the laptop?
  • How strong were the security controls?

The level of risk to your organization will vary widely based upon the answers to these questions. The degree of overall organizational risk posed by lost laptops must also include an estimation of the frequency of occurrence of lost or stolen laptops across the organization.

In one extreme, suppose the laptop belonged to your CTO, who had IP stored on it in the form of engineering plans for a revolutionary product in a significant new market. If the laptop was unprotected in terms of security controls, and it was stolen while he was on a business trip to a country known for state-sponsored hacking and IP theft, then there is likely to be significant risk to your organization. On the other extreme, suppose the laptop belonged to a junior salesperson a few days into their job, it contained no customer or prospect lists, and it was lost at a security checkpoint at an airport. In this scenario, there’s likely to be much less risk. Or consider a laptop which is used by the head of sales for the organization, who has downloaded Personally Identifiable Information (PII) on customers from the CRM system in order to do sales analysis, and has his or her laptop stolen. In this case, there could be Primary Loss to the organization, and there might also be Secondary Losses associated with reactions by the individuals whose data is compromised.

The Open FAIR Body of Knowledge is designed to help you to ask the right questions to determine the asset at risk (is it the laptop itself, or the data?), the magnitude of loss, the skill level and motivations of the attacker, the resistance strength of any security controls in place, the frequency of occurrence of the threat and of an actual loss event, and other factors that contribute to the overall level of risk for any specific risk scenario.
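
To make that factor structure concrete, here is a deliberately simplified sketch in Python of how such factors might combine: a vulnerability term is derived from threat capability versus resistance strength, loss event frequency from threat event frequency and vulnerability, and annualized risk from loss event frequency times loss magnitude. The point values, the 0-to-1 scales and the simple subtraction used for vulnerability are all invented for illustration; the Open FAIR standards define these factors more rigorously and work with calibrated ranges rather than single numbers.

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    threat_event_frequency: float  # expected threat events per year (e.g., laptops lost or stolen)
    threat_capability: float       # 0..1, skill and resources of the threat community
    resistance_strength: float     # 0..1, strength of controls such as disk encryption
    loss_magnitude: float          # expected loss per loss event, in currency units

def loss_event_frequency(s: RiskScenario) -> float:
    # Crude stand-in for vulnerability: the chance a threat event becomes a loss event.
    vulnerability = max(0.0, s.threat_capability - s.resistance_strength)
    return s.threat_event_frequency * vulnerability

def annualized_risk(s: RiskScenario) -> float:
    # Risk is driven by how often loss events occur and how much each one costs.
    return loss_event_frequency(s) * s.loss_magnitude

# Invented point values for the two extremes described above.
cto_laptop = RiskScenario(threat_event_frequency=0.2, threat_capability=0.9,
                          resistance_strength=0.1, loss_magnitude=5_000_000)
junior_laptop = RiskScenario(threat_event_frequency=0.2, threat_capability=0.3,
                             resistance_strength=0.6, loss_magnitude=2_000)

print(f"CTO laptop:    {annualized_risk(cto_laptop):,.0f} per year")
print(f"Junior laptop: {annualized_risk(junior_laptop):,.0f} per year")
```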

In our next blog in this series, we will consider 5 reasons why you should use The Open FAIR Body of Knowledge for Risk Analysis.

The Open FAIR Body of Knowledge consists of the following Open Group standards:

  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

These can be downloaded from The Open Group publications catalog at http://www.opengroup.org/bookstore/catalog.

Our other publications include a Pocket Guide and a Certification Study Guide.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.


Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.



The Open Group Boston 2014 Preview: Talking People Architecture with David Foote

By The Open Group

Among all the issues that CIOs, CTOs and IT departments are facing today, staffing is likely near the top of the list of what’s keeping them up at night. Sure, there’s dealing with constant (and disruptive) technological changes and keeping up with the latest tech and business trends, such as having a Big Data, Internet of Things (IoT) or a mobile strategy, but without the right people with the right skills at the right time it’s impossible to execute on these initiatives.

Technology jobs are notoriously difficult to fill–far more difficult than positions in other industries where roles and skillsets may be much more static. And because technology is rapidly evolving, the roles for tech workers are also always in flux. Last year you may have needed an Agile developer, but today you may need a mobile developer with secure coding ability and in six months you might need an IoT developer with strong operations or logistics domain experience—with each position requiring different combinations of tech, functional area, solution and “soft” skillsets.

According to David Foote, IT Industry Analyst and co-founder of IT workforce research and advisory firm Foote Partners, the mash-up of HR systems and ad hoc people management practices most companies have been using for years to manage IT workers has become frighteningly ineffective. He says that to cope in today’s environment, companies need to architect their people infrastructure much as they have been architecting their technical infrastructure.

“People Architecture” is the term Foote has coined to describe the application of traditional architectural principles and practices that may already be in place elsewhere within an organization and applying them to managing the IT workforce. This includes applying such things as strategy and capability roadmaps, phase gate blueprints, benchmarks, performance metrics, governance practices and stakeholder management to human capital management (HCM).

HCM components for People Architecture typically include job definition and design, compensation, incentives and recognition, skills demand and acquisition, job and career paths, professional development and work/life balance.

Part of the dilemma for employers right now, Foote says, is that there is very little job title standardization in the marketplace and too many job titles floating around IT departments today. “There are too many dimensions and variability in jobs now that companies have gotten lost from an HR perspective. They’re unable to cope with the complexity of defining, determining pay and laying out career paths for all these jobs, for example. For many, serious retention and hiring problems are showing up for the first time. Work-around solutions used for years to cope with systemic weaknesses in their people management systems have stopped working,” says Foote. “Recruiters start picking off their best people and candidates are suddenly rejecting offers and a panic sets in. Tensions are palpable in their IT workforce. These IT realities are pervasive.”

Twenty-five years ago, Foote says, defining roles in IT departments was easier. But then the Internet exploded and technology became far more customer-facing, shifting basic IT responsibilities from highly technical people deep within companies to roles requiring more visibility and transparency within and outside the enterprise. Large chunks of IT budgets moved into the business lines while traditional IT became more of a business itself.

According to Foote, IT roles became siloed not just by technology but by functional areas such as finance and accounting, operations and logistics, sales, marketing and HR systems, and by industry knowledge and customer familiarity. Then the IT professional services industry rapidly expanded to compete with their customers for talent in the marketplace. Even the architect role changed: an Enterprise Architect today can specialize in applications, security or data architecture among others, or focus on a specific industry such as energy, retail or healthcare.

Foote likens the fragmentation of IT jobs and skillsets that’s happening now to the emergence of IT architecture 25 years ago. Just as technical architecture practices emerged to help make sense of the disparate systems rapidly growing within companies and how best to determine the right future tech investments, a people architecture approach today helps organizations better manage an IT workforce spread through the enterprise with roles ranging from architects and analysts to a wide variety of engineers, developers and project and program managers.

“Technical architecture practices were successful because—when you did them well—companies achieved an understanding of what they have systems-wise and then connected it to where they were going and how they were going to get there, all within a process inclusive of all the various stakeholders who shared the risk in the outcome. It helped clearly define enterprise technology capabilities and gave companies more options and flexibility going forward,” according to Foote.

“Right now employers desperately need to incorporate in human capital management systems and practice the same straightforward, inclusive architecture approaches companies are already using in other areas of their businesses. This can go a long way toward not just lessening staffing shortages but also executing more predictably and being more agile in face of constant uncertainties and the accelerating pace of change. Ultimately this translates into a more effective workforce whether they are full-timers or the contingent workforce of part-timers, consultants and contractors.

“It always comes down to your people. That’s not a platitude but a fact,” insists Foote. “If you’re not competitive in today’s labor marketplace and you’re not an employer where people want to work, you’re dead.”

One industry that he says has gotten it right is the consulting industry. “After all, their assets walk out the door every night. Consulting groups within firms such as IBM and Accenture have been good at architecting their staffing because it’s their job to get out in front of what’s coming technologically. Because these firms must anticipate customer needs before they get the call to implement services, they have to be ahead of the curve in already identifying and hiring the bench strength needed to fulfill demand. They do many things right to hire, develop and keep the staff they need in place.”

Unfortunately, many companies take too much of a just-in-time approach to their workforce so they are always managing staffing from a position of scarcity rather than looking ahead, Foote says. But, this is changing, in part due to companies being tired of never having the people they need and being able to execute predictably.

The key is to put a structure in place that addresses a strategy around what a company needs and when. This applies not just to the hiring process, but also to compensation, training and advancement.

“Architecting anything allows you to be able to, in a more organized way, be more agile in dealing with anything that comes at you. That’s the beauty of architecture. You plan for the fact that you’re going to continue to scale and continue to change systems, the world’s going to continue to change, but you have an orderly way to manage the governance, planning and execution of that, the strategy of that and the implementation of decisions knowing that the architecture provides a more agile and flexible modular approach,” he said.

Foote says organizations such as The Open Group can lend themselves to facilitating People Architecture in a couple different ways. First, through extending the principles of architecture to human capital management, and second through vendor-independent, expertise and experience driven certifications, such as TOGAF® or OpenCA and OpenCITS, that help companies define core competencies for people and that provide opportunities for training and career advancement.

“I’m pretty bullish on many vendor-independent certifications in general, particularly where a defined book of knowledge exists that’s achieved wide acceptance in the industry. And that’s what you’ve got with The Open Group. Nobody’s challenging the architectural framework supremacy of TOGAF that I’m aware of. In fact, large vendors with their own certifications participated actively in developing the framework and applying it very successfully to their business models,” he said.

Although the process of implementing People Architecture can be difficult and may take several years to master (much like Enterprise Architecture), Foote says it is making a huge difference for companies that implement it.

To learn more about People Architecture and models for implementing it, plan to attend Foote’s session at The Open Group Boston 2014 on Tuesday July 22. Foote’s session will address how architectural principles are being applied to human capital so that organizations can better manage their workforces from hiring and training through compensation, incentives and advancement. He will also discuss how career paths for EAs can be architected. Following the conference, the session proceedings will be available to Open Group members and conference attendees at www.opengroup.org.

Join the conversation – #ogchat #ogBOS

David Foote is an IT industry research pioneer, innovator, and one of the most quoted industry analysts on global IT workforce trends and multiple facets of the human side of technology value creation. His two decades of groundbreaking research and analysis of IT-business cross-skilling and technology/business management integration, along with his industry-leading work in innovative IT skills demand and compensation benchmarking, have earned him a place on a short list of thought leaders in IT human capital management.

A former Gartner and META Group analyst, David leads the research and analytical practice groups at Foote Partners that reach 2,300 customers on six continents.


Filed under architecture, Conference, Open CA, Open CITS, Professional Development, Standards, TOGAF®, Uncategorized

Why Technology Must Move Toward Dependability through Assuredness™

By Allen Brown, President and CEO, The Open Group

In early December, a technical problem at the U.K.’s central air traffic control center in Swanwick, England, caused significant delays at airports throughout Britain and Ireland and affected flights in and out of the U.K. from Europe to the U.S. At Heathrow alone, one of the world’s largest airports, there were a reported 228 cancellations, affecting 15 percent of the 1,300 daily flights to and from the airport. With a ripple effect that also disturbed flight schedules at airports in Birmingham, Dublin, Edinburgh, Gatwick, Glasgow and Manchester, Britain’s National Air Traffic Services (NATS) was reported to have handled 20 percent fewer flights that day as a result of the glitch.

According to The Register, the problem was caused when a touch-screen telephone system that allows air traffic controllers to talk to each other failed to update during what should have been a routine shift change from the nighttime to the daytime system. News reports noted that the NATS system is the largest of its kind in Europe, containing more than a million lines of code, and it took the engineering and manufacturing teams nearly a day to fix the problem. As a result of the snafu, Irish airline Ryanair even went so far as to call on Britain’s Civil Aviation Authority to intervene to prevent further delays and to make sure better contingency plans are in place to keep such failures from happening again.

Increasingly complex systems

As businesses have come to rely more and more on technology, the systems used to keep operations running smoothly from day to day have become not only larger but also increasingly complex. We are long past the days when a single mainframe was used to handle a few batch calculations.

Today, large global organizations, in particular, have systems that are spread across multiple centers of technical operations, often scattered in locations around the globe. And with industries becoming more interrelated, even individual company systems are often connected to larger extended networks, such as when trading firms are connected to stock exchanges or, as was the case with the Swanwick failure, when airlines are affected by NATS’ network problems. Often, when systems become so large that they are part of even larger interconnected systems, the boundaries of the entire system are no longer fully known.

The Open Group’s vision for Boundaryless Information Flow™ has never been closer to fruition than it is today. Systems have become increasingly open out of necessity because commerce takes place on a more global scale than ever before. This is a good thing. But as these systems have grown in size and complexity, there is more at stake when they fail than ever before.

The ripple effect felt when technical problems shut down major commercial systems cuts far, wide and deep. Problems such as what happened at Swanwick can affect the entire extended system. In this case, NATS suffers damage to its reputation for maintaining good air traffic control procedures. The airlines suffer in terms of cancelled flights, travel vouchers that must be given out and angry passengers blasting them on social media. The software manufacturers and architects of the system are blamed for shoddy planning and for not having the foresight to prevent failures. And so on.

Looking for blame

When large technical failures happen, stakeholders, customers, the public and now governments are beginning to look for accountability for these failures and for someone to blame. When the Obamacare website didn’t operate as expected, the U.S. Congress went looking for blame and jobs were lost. In the NATS fiasco, Ryanair asked the government to intervene. Risk.net has reported that after the Royal Bank of Scotland experienced a batch-processing glitch last summer, the U.K. Financial Services Authority wrote to large banks in the U.K. requesting that they identify the people in their organizations responsible for business continuity. And when U.S. trading company Knight Capital lost $440 million in 40 minutes after a trading software upgrade failed in August, U.S. Securities and Exchange Commission Chairman Mary Schapiro was quoted in the same article as stating: “If there is a financial loss to be incurred, it is the firm committing the error that should suffer that loss, not its customers or other investors. That more than anything sends a wake-up call to the entire industry.”

As governments, in particular, look to lay blame for IT failures, companies—and individuals—will no longer be safe from the consequences of these failures. And it won’t just be reputations that are lost. Lawsuits may ensue. Fines will be levied. Jobs will be lost. Today’s organizations are at risk, and that risk must be addressed.

Avoiding catastrophic failure through assuredness

As any IT person or Enterprise Architect well knows, completely preventing system failure is impossible. But mitigating system failure is not. Increasingly, the task of keeping systems from failing, rather than merely keeping them up and running, will be the job of CTOs and Enterprise Architects.

When systems grow to a level of massive complexity that encompasses everything from old legacy hardware to Cloud infrastructures to worldwide data centers, how can we make sure those systems are reliable, highly available and secure, and that they maintain optimal information flow while still operating at a level that is cost-effective?

In August, The Open Group introduced the first industry standard to address the risks associated with large, complex systems: the Dependability through Assuredness™ (O-DA) Framework. The new standard is meant to help organizations both assess system risk and prevent failures as much as possible.

O-DA provides guidelines to make sure large, complex, boundaryless systems run according to the requirements set out for them, while also providing contingencies for minimizing damage when stoppages occur. O-DA can be used as a standalone standard or in conjunction with an existing architecture development method (ADM) such as the TOGAF® ADM.

O-DA encompasses lessons learned within a number of The Open Group’s forums and work groups—it borrows from the work of the Security Forum’s Dependency Modeling (O-DM) and Risk Taxonomy (O-RT) standards and also from work done within the Open Group Trusted Technology Forum and the Real-Time and Embedded Systems Forums. Much of the work on this standard was completed thanks to the efforts of The Open Group Japan and its members.

This standard addresses the issue of responsibility for technical failures by providing a model for accountability throughout any large system. Accountability is at the core of O-DA because without accountability there is no way to create dependability or assuredness. The standard is also meant to address and account for the constant change that most organizations experience on a daily basis. The two underlying principles within the standard provide models for both a change accommodation cycle and a failure response cycle. Each cycle, in turn, provides instructions for creating a dependable and adaptable architecture and for assigning accountability along the way.
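The O-DA standard itself is expressed as process models and guidance documents rather than code, but a rough sketch can make the two cycles easier to picture. The following Python fragment is a hypothetical illustration only; the class names, fields and logic are assumptions made for the sake of the example and are not terminology defined by The Open Group.

# A minimal, hypothetical sketch (not part of the O-DA standard) of how a
# change accommodation cycle and a failure response cycle, each backed by an
# accountability record, might be represented in code.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Accountability:
    # Records who is answerable for a given component or decision and why.
    component: str
    owner: str
    rationale: str

@dataclass
class DependabilityCase:
    # A running record of requirements and accountable owners for a system.
    requirements: List[str]
    accountabilities: List[Accountability] = field(default_factory=list)

    def change_accommodation_cycle(self, proposed_change: str,
                                   approve: Callable[[str], bool]) -> bool:
        # Assess an ordinary change against the requirements before adopting it,
        # and record who approved it so accountability is preserved.
        if approve(proposed_change):
            self.requirements.append(proposed_change)
            self.accountabilities.append(
                Accountability(proposed_change, owner="change-board",
                               rationale="approved via change accommodation cycle"))
            return True
        return False

    def failure_response_cycle(self, failed_component: str) -> Accountability:
        # When a stoppage occurs, identify the accountable owner; a missing
        # owner is itself a finding that gets recorded for follow-up.
        for acc in self.accountabilities:
            if acc.component == failed_component:
                return acc
        gap = Accountability(failed_component, owner="unassigned",
                             rationale="accountability gap found during failure response")
        self.accountabilities.append(gap)
        return gap

# Example use (hypothetical):
#   case = DependabilityCase(requirements=["handle 1,300 daily flights"])
#   case.change_accommodation_cycle("switch night/day telephone config",
#                                   approve=lambda change: True)
#   case.failure_response_cycle("touch-screen telephone system")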


Ultimately, the O-DA will help organizations identify potential anomalies and create contingencies for dealing with problems before or as they happen. The more organizations can do to build dependability into large, complex systems, the fewer technical disasters should occur. As systems continue to grow and their boundaries continue to blur, assuredness through dependability and accountability will be an integral part of managing complex systems into the future.

Allen Brown

Allen Brown is President and CEO of The Open Group, a global consortium that enables the achievement of business objectives through IT standards. For over 14 years, Allen has been responsible for driving The Open Group’s strategic plan and day-to-day operations, including extending its reach into new global markets, such as China, the Middle East, South Africa and India. In addition, he was instrumental in the creation of the AEA (Association of Enterprise Architects), which was formed to increase job opportunities for all of its members and elevate their market value by advancing professional excellence.


Filed under Dependability through Assuredness™, Standards