Category Archives: digital technologies

Balancing Complexity and Continuous Improvements – A Case Study from the Automotive Industry

By The Open Group


The automotive industry is currently facing massive challenges. For the past 30-40 years, automakers have faced stiff competition in the marketplace, as well as constant pressure to make more innovative and efficient vehicles while reducing the costs to manufacture them.

At the same time, current technological advances are making the industry—and the technology inside automobiles—increasingly complex. Digitalization is affecting not only how automobiles work, but also the manufacturing process and how automakers run their businesses. With technology now touching nearly every part of the business and how it functions, the IT landscape for automakers is becoming a web of interconnected systems running both inside and outside of the business.

In addition, with computing systems becoming a more integral part of the systems that run vehicles, the lines between traditional IT functions and IT within cars themselves are beginning to blur. With trends such as Big Data and analytics, the Internet of Things and The Open Group Open Platform 3.0™ making cars, manufacturers, dealers and owners increasingly interconnected, automotive company IT departments are being forced to get involved in areas of the business, such as product development and maintenance, in ways they’ve never been before.

Between economic forces and technological change, automakers, like many businesses today, are facing massive upheaval and the need for major transformation in order to deal with levels of business complexity they’ve never seen before.


These challenges are very real for the automotive company in this case study. In addition to general economic and technological change, the company has gone through a number of transitions that have created additional infrastructure issues. Over the past two decades, the company was bought, then sold and bought again, bringing in two new owners and their technological systems. Between the company’s original legacy IT systems and the systems brought in by its subsequent owners, the company’s IT landscape had become extremely complicated. In addition, the company is in the process of extending its footprint in the burgeoning Chinese market, a step that requires it to invest in additional infrastructure in order to take advantage of China’s growing economic wealth to speed sales.

Between the company’s existing systems, the need to grow into emerging markets and increased digitalization across the company and its products, the company was in need of a new approach to its overall architecture.


Although the company started early on to utilize IT to make the information flows across the company value chain as effective as possible, the existing IT environment had grown organically as the company had changed owners. In order to prepare themselves for an increasingly digital business environment, the company needed to address the increasing complexity of its systems without adding more complexity and while designing systems that could scale and run for the long haul.

Previously, the company had begun to consider using an Enterprise Architecture approach to address its growing complexity. Although the company had a number of solutions architects on staff, they soon realized that they needed a more holistic approach that could address the entire enterprise, not just the individual solutions that made up that IT landscape.

In an industry where time to market is of utmost importance, there will always be challenges in balancing short-term solutions with strategic investments. As such, the company initially decided to invest in an Enterprise Architecture capability with the objective of addressing internal complexities to better understand and eventually deal with them. Because TOGAF®, an Open Group standard, was seen as the de facto industry standard for Enterprise Architecture, it was the natural choice for the company’s architecture framework. The majority of the Enterprise and Solution Architects at the company were then trained and certified in TOGAF 9. Subsequently, TOGAF was adopted by the architecture community in the IT organization.

Within the IT department, TOGAF provided an ontology for discussing IT issues, and it also provided a foundation for the Enterprise Architecture repository. However, it was seen within the organization primarily as an IT architecture concern, not a framework for transformational change. The EA team decided that in order to really benefit from TOGAF and address the complexity challenges throughout the enterprise, they would need to prove that TOGAF could be used to add value throughout the entire organization and influence how changes were delivered to the IT landscape, as well as prove the value of a structured approach to addressing internal issues.

In order to prove that TOGAF could help with its overall transformation, the team decided to put together a couple of pilot projects within different business areas to showcase the benefits of using a structured approach to change. Due to a need to fix how the company sourced product components, the team decided to first pilot a TOGAF-based approach for its procurement process, since it was widely viewed as one of the most complex areas of the business.

A New Procurement Platform

The initial pilot project was aimed at modernizing the company’s procurement landscape. Although procurement is normally a fairly straightforward process, in the automotive business the intricacies and variations within the product structure, combined with a desire to control logistic costs and material flows, represented a major challenge for the company. In short, to save costs, the company only wanted to buy things they would actually use in the vehicle manufacturing process—no more, no less.

Over the years the IT supporting the company’s procurement process had become very fragmented due to investments in various point solutions and different partnerships that had been established over time. In addition, some parts of the system had been closed down, all of which made the information flow, including all the systems integrations that had occurred along the way, very difficult to map. There were also several significant gaps in the IT support of the procurement process that severely limited the transparency and integrity of the process.


Using TOGAF as an architecture framework and method in conjunction with ArchiMate®, an Open Group standard, for modelling notation and Sparx Enterprise Architect (EA) as a modelling tool, the team set out to establish a roadmap for implementing a new procurement platform. The TOGAF Architecture Development Method (ADM) was used to establish the architecture vision, and the architecture development phases were completed, outlining a target architecture and a subsequent roadmap. No major adaptations were made to the ADM, but the sourcing process for the platform was run in parallel with the architecture development, which required an iterative approach.

As part of the roadmap, the following ArchiMate views were developed:

  • Motivation views
  • Information structure views
  • Baseline and target business process views
  • Baseline and target business function views
  • Baseline and target application function views
  • Baseline and target application landscape views
  • Baseline and target application usage views
  • Baseline and target infrastructure landscape views
  • Baseline and target infrastructure usage views

Each view was created in Sparx EA, which was configured to facilitate the ADM process and act as the architecture repository.
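To make the inventory above concrete, the sketch below (Python, illustrative only) groups the views by architecture layer and marks which were produced in both baseline and target versions. The layer names and grouping are assumptions made for illustration, not an export from Sparx EA or the company’s actual repository structure.

    # Illustrative catalogue of the ArchiMate views listed above, grouped by layer.
    # Grouping and naming are assumptions, not the project's actual repository layout.
    views_by_layer = {
        "Motivation":  ["Motivation views"],
        "Information": ["Information structure views"],
        "Business":    ["Business process views", "Business function views"],
        "Application": ["Application function views", "Application landscape views",
                        "Application usage views"],
        "Technology":  ["Infrastructure landscape views", "Infrastructure usage views"],
    }

    def expand(layer, names):
        """Most views exist in baseline and target versions to support gap analysis."""
        both_states = layer not in ("Motivation", "Information")
        for name in names:
            if both_states:
                yield f"Baseline {name}"
                yield f"Target {name}"
            else:
                yield name

    for layer, names in views_by_layer.items():
        print(layer)
        for view in expand(layer, names):
            print("  -", view)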

The TOGAF ADM provided a structured approach for developing a roadmap whose results could be traced back to the original vision. Having a well-defined methodology with clear deliverables and an artifact meta-model kept the work focused, and both TOGAF and ArchiMate were relatively easy to get buy-in for.

The challenges for the project were mainly in one area—aligning the architecture development with the IT solution sourcing process. Because the company wanted to identify sourcing solutions early to assess costs and initiate negotiations, that emphasis pushed the project into identifying solution building blocks very early on. In most cases, the output from the ADM process could be used directly as input for the sourcing of commercial solutions; however, in this case, sourcing soon took precedence over the architecture development process. Usually, moving through ADM phases A to E can be done within a couple of months, but evaluating solutions and securing funding within this company proved to be much more difficult and time consuming.


With a new procurement process roadmap in hand, the company has now begun to use the ADM to engage with and get Requests for Information (RFIs) from new suppliers. In addition, using TOGAF and ArchiMate to map the company’s procurement process and design an infrastructure roadmap helped to demystify what had been seen as an extremely complex procurement process. The project allowed the IT team to identify where the real complexities were in the process, many of which are at the component level rather than within the system itself. In addition, the company has been able to identify the areas that they need to prioritize as they begin their implementation process.


Initially TOGAF was seen as a silver bullet within the organization. However, companies must realize that the TOGAF methodology represents best practices, and there is still a need within any organization to have skilled, knowledgeable Enterprise Architects available and with the mandate to do the work.

As part of the project, the following benefits were provided by TOGAF:

  • Provided structure to the analysis
  • Ensured a holistic perspective for all domains
  • Kept the team focused on the outcome, definition, roadmap, etc.
  • Provided a good view into current and future data for the roadmap
  • Provided proven credibility for the analysis

ArchiMate added additional support by providing well-defined viewpoints, and Sparx EA is a cost-effective modelling tool and repository that can easily be deployed to all stakeholders in an initiative.

However, within this particular organization, there were a number of challenges that needed to be overcome, many of which can hinder the adoption of TOGAF. These challenges included:

  • Competing processes, methodologies and capabilities
  • Strong focus on solution design rather than architecture
  • Strong focus on project delivery tradition rather than managing programs and outcomes
  • Governance for solutions rather than architecture

Adopting ArchiMate proved to be more straightforward internally at this organization because it could be used to address immediate modelling needs without requiring a coordinated approach to methodology and governance.

In cases such as this, it is probably best to sell TOGAF and ArchiMate into the business organization as common-sense approaches rather than as specific technology architecture methodologies. Although they may be presented as such to the EA community within the organization, the decision process is simpler when the technical solution is not oversold to the business; instead, sell the business benefits of the process.


Currently the company is beginning to move through the implementation phase of its roadmap. Individuals throughout the organization have begun to regularly use ArchiMate as a tool for modeling different business areas, and the tools and concepts of TOGAF have been put to use successfully in several initiatives. The timeframe for formally implementing a more comprehensive Enterprise Architecture framework throughout other parts of the organization has, however, been slowed by the company’s current focus on the release of new models. This is cyclical within the company, and once the immediate focus on product delivery weakens, the need for consolidation and simplification will become a priority once again.

As with most companies, the key to implementing a successful Enterprise Architecture capability within this company will come down to establishing a more effective partnership between the IT organization and the business organizations that IT supports. As such, for projects like this, early engagement is key, and the IT organization must position itself not only as a delivery organization but also as a business partner that provides investment advice and helps minimize business risk through improved processes and technology-based business transformation (as prescribed by methodologies such as TOGAF and ArchiMate). This requires a unified view of the company mission, its business objectives and the associated approaches from IT. Project managers, business analysts and Enterprise Architects must have a common view of how to approach engagements for them to succeed. Without buy-in throughout the organization, the tools will only be techniques used by individuals, and their real potential may not be realized.


Filed under ArchiMate®, big data, digital technologies, EA, IoT, Open Platform 3.0, The Open Group, TOGAF

A World Without IT4IT: Why It’s Time to Run IT Like a Business

By Dave Lounsbury, CTO, The Open Group

IT departments today are under enormous pressure. In the digital world, businesses have become dependent on IT to help them remain competitive. However, traditional IT departments have their roots in skills such as development or operations and have not been set up to handle a business and technology environment that is trying to rapidly adapt to a constantly changing marketplace. As a result, many IT departments today may be headed for a crisis.

At one time, IT departments led technology adoption in support of business. Once a new technology was created—departmental servers, for instance—it took a relatively long time before businesses took advantage of it and even longer before they became dependent on the technology. But once a business did adopt the technology, it became subject to business rules—expectations and parameters for reliability, maintenance and upgrades that kept the technology up to date and allowed the business it supported to keep up with the market.

As IT became more entrenched in organizations throughout the 1980s and 1990s, IT systems increased in size and scope as technology companies fought to keep pace with market forces. In large enterprises, in particular, IT’s function became to maintain large infrastructures, requiring small armies of IT workers to sustain them.

A number of forces have combined to change all that. Today, most businesses run their operations digitally—what Constellation Research analyst Andy Mulholland calls “Front Office Digital Business.” Technology-as-a-service models have changed how technologies and applications are delivered and supported, with support and upgrades coming from outsourced vendors, not in-house staff. With Cloud models, an IT department may not even be necessary. Entrepreneurs can spin up a company with a swipe of a credit card and have all the technology they need at their fingertips, hosted remotely in the Cloud.

The Gulf between IT and Business

Although the gap between IT and business is closing, the gulf in how IT is run still remains. In structure, most IT departments today remain close to their technology roots. This is, in part, because IT departments are still run by technologists and engineers whose primary skills lie in the challenge (and excitement) of creating new technologies. Not every skilled engineer makes a good businessperson, but in most organizations, people who are good at their jobs often get promoted into management whether or not they are ready to manage. The Peter Principle is a problem that hinders many organizations, not just IT departments.

What has happened is that IT departments have not traditionally been run as if they were a business. Good business models for how IT should be run have been piecemeal or slow to develop—despite IT’s role in how the rest of the business is run. Although some standards have been developed as guides for how different parts of IT should be run (COBIT for governance, ITIL for service management, TOGAF®, an Open Group standard, for architecture), no overarching standard has been developed that encompasses how to holistically manage all of IT, from systems administration to development to management through governance and, of course, staffing. For all its advances, IT has yet to become a well-oiled business machine.

The business—and technological—climate today is not the same as it was when companies took three years to do a software upgrade. Everything in today’s climate happens nearly instantaneously. “Convergence” technologies like Cloud Computing, Big Data, social media, mobile and the Internet of Things are changing the nature of IT. New technical skills and methodologies are emerging every day as well. Although languages such as Java or C may remain the top programming languages, new languages like Pig or Hive are emerging every day, as are new approaches to development, such as Scrum, Agile or DevOps.

The Consequences of IT Business as Usual

With these various forces facing IT, departments will either need to change, adopting a model in which IT is managed more effectively, or face a chaos that ends up hindering their organizations.

Without an effective management model for IT, companies won’t be able to mobilize quickly for a digital age. Even something as simple as an inability to utilize data could result in problems such as investing in a product prototype that customers aren’t interested in. Those are mistakes most companies can’t afford to make these days.

Having an umbrella view of what all of IT does also allows the department to make better decisions. With technology and development trends changing so quickly, how do you know what will fit your organization’s business goals? You want to take advantage of the trends or technologies that make sense for the company and leave behind those that don’t.

For example, in DevOps, one of the core concepts is to bring the development phase into closer alignment with releasing and operating the software. You need to know your business’s operating model to determine whether this approach will actually work or not. Having a sense of that also allows IT to make decisions about whether it’s wise to invest in training or hiring staff skilled in those methods or buying new technologies that will allow you to adopt the model.

Not having that management view can leave companies subject to the whims of technological evolution and also to current IT fads. If you don’t know what’s valuable to your business, you run the risk of chasing every new fad that comes along. There’s nothing worse—as the IT guy—than being the person who comes to the management meeting each month saying you’re trying yet another new approach to solve a problem that never seems to get solved. Business people won’t respond to that and will wonder if you know what you’re doing. IT needs to be decisive and choose wisely.

These issues not only affect the IT department but also trickle up to business operations. Ineffective IT shops will not know when to invest in the correct technologies, and they may miss out on working with new technologies that could benefit the business. Without a framework to plan how technology fits into the business, you could end up in the position of having great IT bows and arrows, but when you walk out into the competitive world, you get machine-gunned.

The other side is cost and efficiency—if the entire IT shop isn’t running smoothly, you end up spending too much money on problems, which in turn takes money away from other parts of the business that keep the organization competitive. Failing to manage IT can lead to competitive loss across numerous areas within a business.

A New Business Model

To help prevent the consequences that may result if IT isn’t run more like a business, industry leaders such as Accenture, Achmea, AT&T, HP IT, ING Bank, Munich RE, PwC, Royal Dutch Shell and the University of South Florida recently formed a consortium to address how to better run the business of IT. With billions of dollars invested in IT each year, these companies realized their investments must be made wisely and show governable results in order to succeed.

The result of their efforts is The Open Group IT4IT™ Forum, which released a Snapshot of its proposed Reference Architecture for running IT more like a business this past November. The Reference Architecture is meant to serve as an operating model for IT, providing the “missing link” that previous IT-function-specific models have failed to address. The model allows IT to achieve the same level of business discipline, predictability and efficiency as other business functions.

The Snapshot includes a four-phase Value Chain for IT that provides both an operating model for an IT business and outlines how value can be added at every stage of the IT process. In addition to providing suggested best practices for delivery, the Snapshot includes technical models for the IT tools that organizations can use, whether for systems monitoring, release monitoring or IT point solutions. Providing guidance around IT tools will allow these tools to become more interoperable so that they can exchange information at the right place at the right time. In addition, it will allow for better control of information flow between various parts of the business through the IT shop, thus saving IT departments the time and hassle of aggregating tools or cobbling together their own tools and solutions. Staffing guidance models are also included in the Reference Architecture.

Why IT4IT now? Digitalization cannot be held back, particularly in an era of Cloud, Big Data and an impending Internet of Things. An IT4IT Reference Architecture provides more than just best practices for IT—it puts IT in the context of a business model that allows IT to be a contributing part of an enterprise, providing a roadmap for digital businesses to compete and thrive for years to come.

Join the conversation! @theopengroup #ogchat

David Lounsbury is Chief Technical Officer (CTO) and Vice President, Services for The Open Group. As CTO, he ensures that The Open Group’s people and IT resources are effectively used to implement the organization’s strategy and mission. As VP of Services, David leads the delivery of The Open Group’s proven processes for collaboration and certification, both within the organization and in support of third-party consortia.

David holds a degree in Electrical Engineering from Worcester Polytechnic Institute, and is holder of three U.S. patents.


Filed under Cloud, digital technologies, Enterprise Transformation, Internet of Things, IT, IT4IT, TOGAF, TOGAF®

Catching Up with The Open Group Internet of Things Work Group

By The Open Group

The Open Group’s Internet of Things (IoT) Work Group is involved in developing open standards that will allow product and equipment management to evolve beyond the traditional limits of product lifecycle management. Meant to incorporate the larger systems management that will be required by the IoT, these standards will help to handle the communications needs of a network that may encompass products, devices, people and multiple organizations. Formerly known as the Quantum Lifecycle Management (QLM) Work Group, its name was recently changed to the Internet of Things Work Group to more accurately reflect its current direction and focus.

We recently caught up with Work Group Chairman Kary Främling to discuss its two new standards, both of which are geared toward the Internet of Things, and what the group has been focused on lately.

Over the past few years, The Open Group’s Internet of Things Work Group (formerly the Quantum Lifecycle Management Work Group) has been working behind the scenes to develop new standards related to the nascent Internet of Things and how to manage the lifecycle of these connected products, or as General Electric has referred to it, the “Industrial Internet.”

What their work ultimately aims to do is help manage all the digital information within a particular system—for example, vehicles, buildings or machines. By creating standard frameworks for handling this information, these systems and their related applications can be better run and supported during the course of their “lifetime,” with the information collected serving a variety of purposes, from maintenance to improved design and manufacturing to recycling and even refurbishing them.

According to Work Group Chairman Kary Främling, CEO of ControlThings and Professor of Practice in Building Information Modeling at Aalto University in Finland, the group has been working with companies such as Caterpillar and Fiat, as well as refrigerator and machine tool manufacturers, to enable machines and equipment to send sensor and status data on how machines are being used and maintained to their manufacturers. Data can also be provided to machine operators so they are also aware of how the machines are functioning in order to make changes if need be.

For example, Främling says that one application of this system management loop is in HVAC systems within buildings. By building Internet capabilities into the system, a ventilation system—or air-handling unit—can now be controlled via a smartphone from the moment it’s turned on inside a building. The system can provide data and alerts about how well it’s operating, and about any problems, to facilities management or whoever else needs them. Främling also says that the system can provide information to both the maintenance company and the system manufacturer so they can collect information from the machines on performance, operations and other indicators. This allows users to determine things as simple as when an air filter may need changing or whether there are systematic problems with different machine models.

According to Främling, the ability to monitor systems in this way has already helped ventilation companies make adjustments to their products.

“What we noticed was there was a certain problem with certain models of fans in these machines. Based on all the sensor readings on the machine, I could deduce that the air extraction fan had broken down,” he said.

The ability to detect such problems via sensor data as they are happening can be extremely beneficial to manufacturers because they can more easily and more quickly make improvements to their systems. Another advantage afforded by machines with Web connectivity, Främling says, is that errors can also be corrected remotely.

“There’s so much software in these machines nowadays, so just by changing parameters you can make them work better in many ways,” he says.

In fact, Främling says that the Work Group has been working on systems such as these for a number of years already—well before the term “Internet of Things” became part of industry parlance. They first worked on a system for a connected refrigerator in 2007 and even worked on systems for monitoring how vehicles were used before then.

One of the other things the Work Group is focused on is working with the Open Platform 3.0 Forum since there are many synergies between the two groups. For instance, the Work Group provided a number of the uses cases for the Forum’s recent business scenarios.

“I really see what we are doing is enabling the use cases and these information systems,” Främling says.

Two New Standards

In October, the Work Group also published two new standards, which are among the first standards to be developed for the Internet of Things (IoT). A number of companies and universities across the world have been instrumental in developing the standards, including Aalto University in Finland, BIBA, Cambridge University, Infineon, InMedias, Politecnico di Milano, Promise Innovation, SAP and Trackway Ltd.

Främling likens these early IoT standards to what the HTML and HTTP protocols did for the Internet. For example, the Open Data Format (O-DF) Standard provides a common language for describing any kind of IoT object, much like HTML provided a language for the Web. The Open Messaging Interface (O-MI) Standard, on the other hand, describes a set of operations that enables users to read information about particular systems and then ask those systems for that information, much like HTTP. Write operations then allow users to also send information or new values to the system, for example, to update the system.

Users can also subscribe to information contained in other systems. For instance, Främling described a scenario in which he was able to create a program that allowed him to ask his car what was wrong with it via a smartphone when the “check engine” light came on. He was then able to use a smartphone application to send an O-MI message to the maintenance company with the error code and his location. Using an O-MI subscription the maintenance company would be able to send a message back asking for additional information. “Send these five sensor values back to us for the next hour and you should send them every 10 seconds, every 5 seconds for the temperature, and so on,” Främling said. Once that data is collected, the service center can analyze what’s wrong with the vehicle.
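As a rough illustration of the read/subscribe pattern Främling describes, the sketch below (Python) builds an O-MI read request that subscribes to a handful of O-DF InfoItems on a vehicle. The element names follow the general structure of the published O-MI/O-DF 1.0 standards (an omiEnvelope/read/msg wrapper around Objects/Object/InfoItem), but namespaces are omitted and the object and sensor names are invented, so treat it as a simplified sketch rather than a normative example.

    import xml.etree.ElementTree as ET

    def subscription_request(object_id, items, interval_s):
        """Build a simplified O-MI read request subscribing to O-DF InfoItems."""
        envelope = ET.Element("omiEnvelope", {"version": "1.0", "ttl": "3600"})
        read = ET.SubElement(envelope, "read",
                             {"msgformat": "odf", "interval": str(interval_s)})
        msg = ET.SubElement(read, "msg")
        objects = ET.SubElement(msg, "Objects")        # O-DF payload starts here
        obj = ET.SubElement(objects, "Object")
        ET.SubElement(obj, "id").text = object_id
        for name in items:                             # sensors to report back
            ET.SubElement(obj, "InfoItem", {"name": name})
        return ET.tostring(envelope, encoding="unicode")

    # "Send these five sensor values back to us ... every 10 seconds"
    print(subscription_request(
        "Vehicle-VIN-12345",
        ["EngineTemperature", "OilPressure", "RPM", "CoolantLevel", "ErrorCode"],
        interval_s=10))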

Främling says O-MI messages can easily be set up on-the-fly for a variety of connected systems with little programming. The standard also allows users to manage mobility and firewalls. O-MI communications are also run over systems that are already secure to help prevent security issues. Those systems can include anything from HTTP to USB sticks to SMTP, as well, Främling says.

Främling expects that these standards can also be applied to multiple types of functionalities across different industries, for example for connected systems in the healthcare industry or to help manage energy production and consumption across smart grids. With both standards now available, the Work Group is beginning to work on defining extensions for the Data Format so that vocabularies specific to certain industries, such as healthcare or manufacturing, can also be developed.

In addition, Främling expects that as protocols such as O-MI make it easier for machines to communicate amongst themselves, they will also be able to begin to optimize themselves over time. Cars, in fact, are already using this kind of capability, he says. But for other systems, such as buildings, that kind of communication is not happening yet. He says in Finland, his company has projects underway with manufacturers of diesel engines, cranes, elevators and even in Volkswagen factories to establish information flows between systems. Smart grids are also another potential use. In fact his home is wired to provide consumption rates in real-time to the electric company, although he says he does not believe they are currently doing anything with the data.

“In the past we used to speak about these applications for pizza or whatever that can tell a microwave oven how long it should be heated and the microwave oven also checks that the food hasn’t expired,” Främling said.

And while your microwave may not yet be able to determine whether your food has reached its expiration date, these recent developments by the Work Group are helping to bring the IoT vision to fruition by making it easier for systems to begin the process of “talking” to each other through a standardized messaging system.

Kary Främling is currently CEO of the Finnish company ControlThings, as well as Professor of Practice in Building Information Modeling (BIM) at Aalto University, Finland. His main research topics are on information management practices and applications for BIM and product lifecycle management in general. His main areas of competence are distributed systems, middleware, multi-agent systems, autonomously learning agents, neural networks and decision support systems. He is one of the worldwide pioneers in the Internet of Things domain, where he has been active since 2000.

@theopengroup; #ogchat


Filed under digital technologies, Enterprise Transformation, Future Technologies, Internet of Things, Open Platform 3.0, Uncategorized

The Open Group London 2014: Eight Questions on Retail Architecture

By The Open Group

If there’s any vertical sector that has been experiencing constant and massive transformation in the ages of the Internet and social media, it’s the retail sector. From the ability to buy goods whenever and however you’d like (in store, online and now, through mobile devices) to customers taking to social media to express their opinions about brands and service, retailers have a lot to deal with.

Glue Reply is a UK-based consulting firm that has worked with some of Europe’s largest retailers to help them plan their Enterprise Architectures and deal with the onslaught of constant technological change. Glue Reply Partner Daren Ward and Senior Consultant Richard Veryard sat down recently to answer our questions about the challenges of building architectures for the retail sector, the difficulties of seasonal business and the need to keep things simple and agile. Ward spoke at The Open Group London 2014 on October 20.

What are some of the biggest challenges facing the retail industry right now?

There are a number of well-documented challenges facing the retail sector. Retailers are facing new competitors, especially from discount chains, as well as online-only retailers such as Amazon. Retailers are also experiencing an increasing fragmentation of spend—for example, grocery customers buying smaller quantities more frequently.

At the same time, customer expectations are higher, especially across multiple channels. There is an increased intolerance of poor customer service, and people’s expectations of a prompt response are rising rapidly, especially via social media.

There is also an increasing concern regarding cost. Many retailers have huge amounts invested in physical space and human resources. They can’t just keep increasing these costs; they must understand how to become more efficient and create new ways to make use of these resources.

What role is technology playing in those changes, and which technologies are forcing the most change?

New technologies are allowing us to provide shoppers with a personalized customer experience more akin to old-school service, when the store manager knew my name, my collar size and so on. Combining technologies such as mobile and iBeacons allows us not only to reach out to our customers, but also to provide context and increase relevance.

Some retailers are becoming extremely adept in using social media. The challenge here is to link the social media with the business process, so that the customer service agent can quickly check the relevant stock position and reserve the stock before posting a response on Facebook.

Big data is becoming one of the key technology drivers. Large retailers are able to mobilize large amounts of data, both from their own operations as well as external sources. Some retailers have become highly data-driven enterprises, with the ability to make rapid adjustments to marketing campaigns and physical supply chains. As we gather more data from more devices all plugged into the Internet of Things (IoT), technology can help us make sense of this data and spot trends we didn’t realize existed.

What role can Enterprise Architecture play in helping retailers, and what can retailers gain from taking an architectural approach to their business?

One of the key themes of the digital transformation is the ability to personalize the service, to really better understand our customers and to hold a conversation with them that is meaningful. We believe there are four key foundation blocks to achieving this seamless digital transformation: the ability to change, to integrate, to drive value from data and to understand the customer journey. Core to the ability to change is a business-driven roadmap. It provides all involved with a common language, a common set of goals and a target vision. This roadmap is not a series of hurdles that must be delivered, but rather a direction of travel towards the target, allowing us to assess the impact of course corrections as we go and ensure we are still capable of arriving at our destination. This is how we create an agile environment, where tactical changes are simple course corrections that continue in the right direction of travel.

Glue Reply provides a range of architecture services to our retail clients, from capability-led planning to practical development of integration solutions. For example, we produced a five-year roadmap for Sainsbury’s, which allows IT investment to combine longer-term foundation projects with short-term initiatives that can respond rapidly to customer demand.

Are there issues specific to the retail sector that are particularly challenging to deal with in creating an architecture and why?

Retail is a very seasonal business—sometimes this leaves a very small window for business improvements. This also exaggerates the differences in the business and IT lifecycles. The business strategy can change at a pace often driven by external factors, whilst elements of IT have a lifespan of many years. This is why we need a roadmap—to assess the impact of these changes and re-plan and prioritize our activities.

Are there some retailers that you think are doing a good job of handling these technology challenges? Which ones are getting it right?

Our client John Lewis has just been named ‘Omnichannel Retailer of the Year’ at the World Retail Awards 2014. They have a vision, and they can assess the impact of change. We have seen similar success at Sainsbury’s, where initiatives such as brand match are brought to market with real pace and quality.

How can industry standards help to support the retail industry?

Where appropriate, we have used industry standards such as the ARTS (Association for Retail Technology Standards) data model to assist our clients in creating a version that is good enough. But mostly, we use our own business reference models, which we have built up over many years of experience working with a range of different retail businesses.

What can other industries learn from how retailers are incorporating architecture into their operations?

The principle of omnichannel has a lot of relevance for other consumer-facing organizations, as does retail’s focus on loyalty. It’s not about creating a sale stampede, it’s about the brand. Apple is clearly an excellent example—people queue for hours to be the first to buy the new product, at a price that will only reduce over time. Some retailers are making great use of customer data and profiling. And above all, successful retailers understand three key architectural principles that will drive success in any other sector—keep it simple, drive value and execute well.

What can retailers do to continue to best meet customer expectations into the future?

It’s no longer about the channel, it’s about the conversation. We have worked with the biggest brands in Europe, helping them deliver multichannel solutions that consider the conversation. The retailer that enables this conversation will better understand their customers’ needs and build long-term relationships.

Daren Ward is a Partner at Reply in the UK. As well as being a practicing Enterprise Architect, Daren is responsible for the development of the Strategy and Architecture business and plays a key role in driving the growth of Reply in the UK. He is committed to helping organizations drive genuine business value from IT investments, working with both commercially focused business units and IT professionals. Daren has helped establish architecture practices at many organizations. Be it enterprise, solution, integration or information architecture, he has helped these practices deliver real business value through capability-led architecture and business-driven roadmaps.


Richard Veryard is a Business Architect and author, specializing in capability-led planning, systems thinking and organizational intelligence. Last year, Richard joined Glue Reply as a senior consultant in the retail sector.



Filed under big data, Business Architecture, digital technologies, Enterprise Architecture, Internet of Things, Uncategorized

Open FAIR Blog Series – Five Reasons You Should Use the Open FAIR Body of Knowledge

By Jim Hietala, VP, Security and Andrew Josey, Director of Standards, The Open Group

This is the second in our blog series introducing the Open FAIR Body of Knowledge.

In this blog, we provide five reasons why you should use the Open FAIR Body of Knowledge for Risk Analysis:

1. Emphasis on Risk

Often the emphasis in risk analyses is placed on security threats and controls, without due consideration of impact. For example, we have a firewall protecting all our customer information – but what if the firewall is breached and the customer information stolen or changed? Risk analysis using Open FAIR evaluates both the probability that bad things will happen and the impact if they do happen. By using the Open FAIR Body of Knowledge, the analyst measures and communicates the risk, which is what management cares about.

2. Logical and Rational Framework

It provides a framework that explains the how and why of risk analysis. It improves consistency in undertaking analyses.

3. Quantitative

It’s easy to measure things without considering the risk context – for example, that systems should be maintained in full patch compliance – but what does that mean in terms of loss frequency or the magnitude of loss? The Open FAIR taxonomy and method provide the basis for meaningful metrics (a simplified worked example follows the fifth reason below).

4. Flexible

Open FAIR can be used at different levels of abstraction to match the need, the available resources, and available data.

5. Rigorous

There is often a lack of rigor in risk analysis: statements are made such as: “that new application is high risk, we could lose millions …” with no formal rationale to support them. The Open FAIR risk analysis method provides a more rigorous approach that helps to reduce gaps and analyst bias. It improves the ability to defend conclusions and recommendations.
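As promised above, here is a simplified worked example of the quantitative idea: express Loss Event Frequency and Loss Magnitude as ranges rather than single guesses, and derive an annualized loss exposure from them. This sketch (Python) uses uniform distributions and invented figures purely for illustration; it is not the full Open FAIR taxonomy or method, which calls for calibrated estimates and a more careful treatment of the contributing factors.

    import random

    def simulate_annual_loss(freq_range, magnitude_range, trials=100_000):
        """Monte Carlo estimate of annual loss from frequency and magnitude ranges."""
        losses = []
        for _ in range(trials):
            events = random.uniform(*freq_range)              # loss events per year
            cost_per_event = random.uniform(*magnitude_range)
            losses.append(events * cost_per_event)
        losses.sort()
        return {"mean": sum(losses) / trials,
                "p90": losses[int(0.90 * trials)]}             # 90th-percentile exposure

    # Invented inputs: 0.1-0.5 loss events per year, each costing 50k-2M
    result = simulate_annual_loss((0.1, 0.5), (50_000, 2_000_000))
    print(f"Expected annual loss: ~{result['mean']:,.0f}")
    print(f"90th percentile:      ~{result['p90']:,.0f}")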

In our next blog, we will look at how the Open FAIR Body of Knowledge can be used with other Open Group standards.

The Open FAIR Body of Knowledge consists of the following Open Group standards:

  • Risk Taxonomy (O-RT), Version 2.0 (C13K, October 2013) defines a taxonomy for the factors that drive information security risk – Factor Analysis of Information Risk (FAIR).
  • Risk Analysis (O-RA) (C13G, October 2013) describes process aspects associated with performing effective risk analysis.

These can be downloaded from The Open Group publications catalog.

Our other publications include a Pocket Guide and a Certification Study Guide.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT Security, Risk Management and Healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on Information Security, Risk Management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.


Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate® 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX® Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.


Filed under Data management, digital technologies, Information security, Open FAIR Certification, RISK Management, Security, Uncategorized

The Open Group London 2014 Preview: A Conversation with RTI’s Stan Schneider about the Internet of Things and Healthcare

By The Open Group

RTI is a Silicon Valley-based messaging and communications company focused on helping to bring the Industrial Internet of Things (IoT) to fruition. Recently named “The Most Influential Industrial Internet of Things Company” by Appinions and published in Forbes, RTI’s EMEA Manager Bettina Swynnerton will be discussing the impact that the IoT and connected medical devices will have on hospital environments and the Healthcare industry at The Open Group London October 20-23. We spoke to RTI CEO Stan Schneider in advance of the event about the Industrial IoT and the areas where he sees Healthcare being impacted the most by connected devices.

Earlier this year, industry research firm Gartner declared the Internet of Things (IoT) to be the most hyped technology around, having reached the pinnacle of the firm’s famed “Hype Cycle.”

Despite the hype around consumer IoT applications—from FitBits to Nest thermostats to fashionably placed “wearables” that may begin to appear in everything from jewelry to handbags to kids’ backpacks—Stan Schneider, CEO of IoT communications platform company RTI, says that 90 percent of what we’re hearing about the IoT is not where the real value will lie. Most of the media coverage and hype is about the “Consumer” IoT, like Google Glass or sensors in refrigerators that tell you when the milk’s gone bad. However, most of the real value of the IoT will take place in what GE has coined the “Industrial Internet”—applications working behind the scenes to keep industrial systems operating more efficiently, says Schneider.

“In reality, 90 percent of the real value of the IoT will be in industrial applications such as energy systems, manufacturing advances, transportation or medical systems,” Schneider says.

However, the reality today is that the IoT is quite new. As Schneider points out, most companies are still trying to figure out what their IoT strategy should be. There isn’t that much active building of real systems at this point.

“Most companies, at the moment, are just trying to figure out what the Internet of Things is. I can do a webinar on ‘What is the Internet of Things?’ or ‘What is the Industrial Internet of Things?’ and get hundreds and hundreds of people showing up, most of whom don’t have any idea. That’s where most companies are. But there are several leading companies that very much have strategies, and there are a few that are even executing their strategies,” he said. According to Schneider, these companies include GE, which he says has a 700+ person team currently dedicated to building their Industrial IoT platform, as well as companies such as Siemens and Audi, which already have some applications working.

For its part, RTI is actively involved in trying to help define how the Industrial Internet will work and how companies can take disparate devices and make them work with one another. “We’re a nuts-and-bolts, make-it-work type of company,” Schneider notes. As such, openness and standards are critical not only to RTI’s work but to the success of the Industrial IoT in general, says Schneider. RTI is currently involved in as many as 15 different industry standards initiatives.

IoT Drivers in Healthcare

Although RTI is involved in IoT initiatives in many industries, from manufacturing to the military, Healthcare is one of the company’s main areas of focus. For instance, RTI is working with GE Healthcare on the software for its CAT scanner machines. GE chose RTI’s DDS (data distribution service) product because it will let GE standardize on a single communications platform across product lines.

Schneider says there are three big drivers that are changing the medical landscape when it comes to connectivity: the evolution of standalone systems to distributed systems, the connection of devices to improve patient outcomes and the replacement of dedicated wiring with networks.

The first driver is that medical devices that have been standalone devices for years are now being built on new distributed architectures. This gives practitioners and patients easier access to the technology they need.

For example, RTI customer BK Medical, a medical device manufacturer based in Denmark, is in the process of changing their ultrasound product architecture. They are moving from a single-user physical system to a wirelessly connected distributed design. Images will now be generated in and distributed by the Cloud, thus saving significant hardware costs while making the systems more accessible.

According to Schneider, ultrasound machine architecture hasn’t really changed in the last 30 or 40 years. Today’s ultrasound machines are still wheeled in on a cart. That cart contains a wired transducer, image processing hardware or software and a monitor. If someone wants to keep an image—for example, images of fetuses in utero—they are handed physical media to carry out. Years ago it was a Polaroid picture; today the images are saved to CDs and handed to the patient.

In contrast, BK’s new systems will be completely distributed, Schneider says. Doctors will be able to carry a transducer that looks more like a cellphone with them throughout the hospital. A wireless connection will upload the imaging data into the cloud for image calculation. With a distributed scenario, only one image processing system may be needed for a hospital or clinic. It can even be kept in the cloud off-site. Both patients and caregivers can access images on any display, wherever they are. This kind of architecture makes the systems much cheaper and far more efficient, Schneider says. The days of the wheeled-in cart are numbered.

The second IoT driver in Healthcare is connecting medical devices together to improve patient outcomes. Most hospital devices today are completely independent and standalone. So, if a patient is hooked up to multiple monitors, the only thing that really “connects” those devices today is a piece of paper at the end of a hospital bed that shows how each should be functioning. Nurses are supposed to check these devices on an hourly basis to make sure they’re working correctly and the patient is ok.

Schneider says this approach is error-ridden. First, the nurse may be too busy to do a good job checking the devices. Worse, any number of things can set off alarms whether there’s something wrong with the patient or not. As anyone who has ever visited a friend or relative in the hospital can attest, alarms are going off constantly, making it difficult to determine when someone is really in distress. In fact, one of the biggest problems in hospital settings today, Schneider says, is a phenomenon known as “alarm fatigue.” Single devices simply can’t reliably tell if there’s some minor glitch in data or if the patient is in real trouble. Thus, 80% of all device alarms in hospitals are turned off. Meaningless alarms fatigue personnel, so they either ignore or turn off the alarms…and people can die.

To deal with this problem, new technologies are being created that will connect devices together on a network. Multiple devices can then work in tandem to really figure out when something is wrong. If the machines are networked, alarms can be set to go off only when multiple distress indicators are present rather than just one. For example, if oxygen levels drop on both an oxygen monitor on someone’s finger and on a respiration monitor, the alarm is much more likely to signal a real patient problem than if only one source shows a problem. Schneider says the algorithms to fix these problems are reasonably well understood; the barrier is the lack of networking to tie all of these machines together.
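A minimal sketch of that idea follows (Python): raise an alarm only when more than one networked monitor indicates distress at the same time, so a single noisy reading is treated as a probable glitch. The device names and thresholds are invented for illustration and are not clinical guidance or an actual device-integration API.

    def patient_alarm(spo2_percent, respiration_rate):
        """Alarm only if both oxygen saturation and respiration look abnormal."""
        low_oxygen = spo2_percent < 90            # pulse-oximeter reading, in percent
        low_breathing = respiration_rate < 8      # breaths per minute
        return low_oxygen and low_breathing       # a single outlier is treated as noise

    print(patient_alarm(spo2_percent=87, respiration_rate=14))  # False: likely a glitch
    print(patient_alarm(spo2_percent=87, respiration_rate=6))   # True: corroborated distress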

The third area of change in the industrial medical Internet is the transition to networked systems from dedicated wired designs. Surgical operating rooms offer a good example. Today’s operating room is a maze of wires connecting screens, computers, and video. Videos, for instance, come from dynamic x-ray imaging systems, from ultrasound navigation probes and from tiny cameras embedded in surgical instruments. Today, these systems are connected via HDMI or other specialized cables. These cables are hard to reconfigure. Worse, they’re difficult to sterilize, Schneider says. Thus, the surgical theater is hard to configure, clean and maintain.

In the future, the mesh of special wires can be replaced by a single, high-speed networking bus. Networks make the systems easier to configure and integrate, easier to use and accessible remotely. A single, easy-to-sterilize optical network cable can replace hundreds of wires. As wireless gets faster, even that cable can be removed.

“By changing these systems from a mesh of TV-cables to a networked data bus, you really change the way the whole system is integrated,” he said. “It’s much more flexible, maintainable and sharable outside the room. Surgical systems will be fundamentally changed by the Industrial IoT.”

IoT Challenges for Healthcare

Schneider says there are numerous challenges facing the integration of the IoT into existing Healthcare systems—from technical challenges to standards and, of course, security and privacy. But one of the biggest challenges facing the industry, he believes, is plain old fear. In particular, Schneider says, there is a lot of fear within the industry of choosing the wrong path and, in effect, “walking off a cliff” if they choose the wrong direction. Getting beyond that fear and taking risks, he says, will be necessary to move the industry forward.

In a practical sense, the other thing currently holding back integration is the sheer number of connected devices currently being used in medicine, he says. Manufacturers each have their own systems and obviously have a vested interest in keeping their equipment in hospitals, so many have been reluctant to become standards-compliant and push interoperability forward, Schneider says.

This is, of course, not just a Healthcare issue. “We see it in every single industry we’re in. It’s a real problem,” he said.

Legacy systems are also a problematic area. “You can’t just go into a Kaiser Permanente and rip out $2 billion worth of equipment,” he says. Integrating new systems with existing technology is a process of incremental change that takes time and vested leadership, says Schneider.

Cloud Integration a Driver

Although many of these technologies are not yet very mature, Schneider believes that the fundamental industry driver is Cloud integration. In Schneider’s view, the Industrial Internet is ultimately a systems problem. As with the ultrasound machine example from BK Medical, it’s not that an existing ultrasound machine doesn’t work just fine today, Schneider says, it’s that it could work better.

“Look what you can do if you connect it to the Cloud—you can distribute it, you can make it cheaper, you can make it better, you can make it faster, you can make it more available, you can connect it to the patient at home. It’s a huge system problem. The real overwhelming striking value of the Industrial Internet really happens when you’re not just talking about the hospital but you’re talking about the Cloud and hooking up with practitioners, patients, hospitals, home care and health records. You have to be able to integrate the whole thing together to get that ultimate value. While there are many point cases that are compelling all by themselves, realizing the vision requires getting the whole system running. A truly connected system is a ways out, but it’s exciting.”

Open Standards

Schneider also says that openness is absolutely critical for these systems to ultimately work. Just as agreeing on a standard for HTTP running over the Internet Protocol (IP) drove the Web, a new device-appropriate protocol will be necessary for the Internet of Things to work. Consensus will be necessary, he says, so that systems can talk to each other and connectivity will work. The Industrial Internet will push that out to the Cloud and beyond, he says.

“One of my favorite quotes is from IBM,” he says. “IBM said, ‘It’s not a new Internet, it’s a new Web.’” By that, they mean that the industry needs new, machine-centric protocols to run over the same Internet hardware and base IP protocol, Schneider said.
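As a rough, hypothetical illustration of what machine-centric messages running over ordinary Internet infrastructure can look like, the sketch below sends a small machine-readable status report between two processes using plain IP sockets. The device name, fields, and port number are invented, and a real deployment would use a standardized, device-appropriate protocol rather than raw UDP; the aim is only to show payloads that are structured data for machines, not pages for people.

```python
# Sketch of a machine-centric message riding on ordinary IP infrastructure:
# a device sends a small, machine-readable status report as a UDP datagram.
# Device name, fields, and port are invented; real deployments would use a
# standardized IoT protocol rather than raw sockets.

import json
import socket

PORT = 50007  # arbitrary example port

# Receiver, e.g., a gateway aggregating device telemetry.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))

# Sender, e.g., an infusion pump reporting its state.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
status = {"device_id": "pump-17", "state": "running", "rate_ml_per_h": 25}
tx.sendto(json.dumps(status).encode("utf-8"), ("127.0.0.1", PORT))

data, _ = rx.recvfrom(4096)
print("gateway received:", json.loads(data))
```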

Schneider believes that this new web will eventually evolve to become the new architecture for most companies. However, for now, particularly in hospitals, it’s the “things” that need to be integrated into systems and overall architectures.

One example where this level of connectivity will make a huge difference, he says, is predictive maintenance. Once a system can “sense” or predict that a machine may fail or that a part needs to be replaced, there will be a huge economic impact and cost savings. For instance, he said Siemens uses acoustic sensors to monitor the state of its wind generators. By placing sensors next to the bearings in the machine, they can literally “listen” for squeaky wheels and figure out whether a turbine may soon need repair. These analytics let them know when a bearing must be replaced before the turbine shuts down. Of course, the infrastructure will first need to connect all of these “things” to each other and to the cloud, so there will need to be many system-level changes to architectures.
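As a toy illustration of the kind of analytics involved (not Siemens’ actual method), the sketch below flags windows of a simulated vibration signal whose RMS energy exceeds a baseline learned from a healthy machine. The signal, window size, and threshold are all invented for the example.

```python
# Toy condition-monitoring sketch: compute the RMS energy of each window of
# an acoustic/vibration signal and flag windows that exceed a baseline.
# Signal, window size, and threshold are invented for illustration.

import math
import random

random.seed(0)

def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

# Simulated sensor samples: mostly quiet, then a noisier stretch as a
# hypothetical bearing starts to wear.
samples = [random.gauss(0.0, 0.1) for _ in range(300)]
samples += [random.gauss(0.0, 0.6) for _ in range(100)]

WINDOW = 50
BASELINE_RMS = 0.2  # in practice, learned from a known-healthy machine

for start in range(0, len(samples), WINDOW):
    level = rms(samples[start:start + WINDOW])
    if level > BASELINE_RMS:
        print(f"window {start // WINDOW}: RMS {level:.2f} -> schedule bearing inspection")
```

Real systems use far richer signal processing and analytics, but the economic logic is the same: a maintenance action is scheduled before the turbine fails rather than after.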

Standards, of course, will be key to getting these architectures to work together. Schneider believes standards development for the IoT will need to be tackled from both horizontal and vertical standpoints. Both generic communication standards and industry-specific standards, such as those for integrating an operating room, must evolve.

“We are a firm believer in open standards as a way to build consensus and make things actually work. It’s absolutely critical,” he said.

Stan Schneider is CEO at Real-Time Innovations (RTI), the Industrial Internet of Things communications platform company. RTI is the largest embedded middleware vendor and has an extensive footprint in all areas of the Industrial Internet, including Energy, Medical, Automotive, Transportation, Defense, and Industrial Control. Stan has published over 50 papers in both the academic and industry press. He speaks widely at events and conferences on topics including networked medical devices for patient safety, the future of connected cars, the role of the DDS standard in the IoT, the evolution of power systems, and understanding the various IoT protocols. Before RTI, Stan managed a large Stanford robotics laboratory, led an embedded communications software team and built data acquisition systems for automotive impact testing. Stan completed his PhD in Electrical Engineering and Computer Science at Stanford University, and holds a BS and MS from the University of Michigan. He is a graduate of Stanford’s Advanced Management College.


Comments Off on The Open Group London 2014 Preview: A Conversation with RTI’s Stan Schneider about the Internet of Things and Healthcare

Filed under architecture, Cloud, digital technologies, Enterprise Architecture, Healthcare, Internet of Things, Open Platform 3.0, Standards, Uncategorized

Business Benefit from Public Data

By Dr. Chris Harding, Director for Interoperability, The Open Group

Public bodies worldwide are making a wealth of information available, and encouraging its commercial exploitation. This sounds like a bonanza for the private sector at the public expense, but entrepreneurs are holding back. A healthy market for products and services that use public-sector information would provide real benefits for everyone. What can we do to bring it about?

Why Governments Give Away Data

The EU directive of 2003 on the reuse of public sector information encourages the Member States to make as much information available for reuse as possible. This directive was revised and strengthened in 2013. The U.S. Open Government Directive of 2009 provides similar encouragement, requiring US government agencies to post at least three high-value data sets online and register them on the Data.gov portal. Other countries have taken similar measures to make public data publicly available.

Why are governments doing this? There are two main reasons.

One is that it improves the societies that they serve and the governments themselves. Free availability of information about society and government makes people more effective citizens and makes government more efficient. It illuminates discussion of civic issues, and points a searchlight at corruption.

The second reason is that it has a positive effect on the wealth of nations and their citizens. The EU directive highlights the ability of European companies to exploit the potential of public-sector information, and contribute to economic growth and job creation. Information is not just the currency of democracy. It is also the lubricant of a successful economy.

Success Stories

There are some big success stories.

If you drive a car, you probably use satellite navigation to find your way about, and this may use public-sector information. In the UK, for example, map data that can be used by sat-nav systems is supplied for commercial use by a government agency, the Ordnance Survey.

When you order something over the web for delivery to your house, you often enter a postal code and see most of the address auto-completed by the website. Postcode databases are maintained by national postal authorities, which are generally either government departments or regulated private corporations, and made available by them for commercial use. Here, the information is not directly supporting a market, but is contributing to the sale of a range of unrelated products and services.
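A minimal sketch of how such a lookup works is shown below. The postcodes and addresses are fabricated; real postcode files are far larger and are licensed from the relevant postal authority.

```python
# Illustrative postcode lookup behind an address auto-complete feature.
# The postcodes and addresses below are fabricated examples.

POSTCODE_DB = {
    "AB1 2CD": {"street": "High Street", "town": "Exampletown", "country": "UK"},
    "EF3 4GH": {"street": "Station Road", "town": "Sampleville", "country": "UK"},
}

def autocomplete_address(postcode):
    """Return address fields for a postcode, or None to fall back to manual entry."""
    entry = POSTCODE_DB.get(postcode.strip().upper())
    if entry is None:
        return None
    return {"postcode": postcode.strip().upper(), **entry}

print(autocomplete_address("ab1 2cd"))
# {'postcode': 'AB1 2CD', 'street': 'High Street', 'town': 'Exampletown', 'country': 'UK'}
```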

The data may not be free. There are commercial arrangements for supply of map and postcode data. But it is available, and is the basis for profitable products and for features that make products more competitive.

The Bonanza that Isn’t

These successes are, so far, few in number. The economic benefits of open government data could be huge: the McKinsey Global Institute estimates a potential of between 3 and 5 trillion dollars annually. Yet Capgemini estimates that the direct impact of Open Data on the EU economy in 2010, seven years after the directive was issued, was only about 1% of that figure, roughly 30 to 50 billion dollars, even though the EU accounts for nearly a quarter of world GDP.

The business benefits to be gained from using map and postcode data are obvious. There are other kinds of public sector data, where the business benefits may be substantial, but they are not easy to see. For example, data is or could be available about public transport schedules and availability, about population densities, characteristics and trends, and about real estate and land use. These are all areas that support substantial business activity, but businesses in these areas seldom make use of public sector information today.

Where are the Products?

Why are entrepreneurs not creating these potentially profitable products and services? There is one obvious reason. The data they are interested in is not always available and, where it is available, it is provided in different ways, and comes in different formats. Instead of a single large market, the entrepreneur sees a number of small markets, none of which is worth tackling. For example, the market for an application that plans public transport journeys across a single town is not big enough to justify substantial investment in product development. An application that could plan journeys across any town in Europe would certainly be worthwhile, but is not possible unless all the towns make this data available in a common format.
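The sketch below illustrates the point under a simplifying assumption: if every town published its stop list in one agreed CSV layout (the GTFS stops.txt format is one real-world example of such a layout), a single loader would work for all of them. The file contents here are invented, and a real journey planner would of course also need timetables and routing.

```python
# Why a common format matters: one loader can ingest data from any town
# that publishes stops in the same CSV layout. File contents are invented.

import csv
import io

town_a_stops = """stop_id,stop_name,stop_lat,stop_lon
A1,Market Square,52.10,0.12
A2,Hospital,52.11,0.14
"""

town_b_stops = """stop_id,stop_name,stop_lat,stop_lon
B7,Hauptbahnhof,48.14,11.56
B8,Rathaus,48.15,11.57
"""

def load_stops(csv_text):
    """Parse a stops file in the shared layout into a list of dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# The same code works for every publisher that follows the format.
all_stops = load_stops(town_a_stops) + load_stops(town_b_stops)
for stop in all_stops:
    print(stop["stop_id"], stop["stop_name"])
```

Without the common layout, each town would need its own parser (or worse, manual extraction from PDF documents), and the single large market fragments into many small ones.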

Public sector information providers often do not know what value their data has, or understand its applications. Working within tight budgets, they cannot afford to spend large amounts of effort on assembling and publishing data that will not be used. They follow the directives but, without common guidelines, they simply publish whatever is readily to hand, in whatever form it happens to be.

The data that could support viable products is not available everywhere and, where it is available, it comes in different formats. (One that is often used is PDF, which is particularly difficult to process as an information source.) The result is that the cost of product development is high, and the expected return is low.

Where is the Market?

There is a second reason why entrepreneurs hesitate. The shape of the market is unclear. In a mature market, everyone knows who the key players are, understands their motivations, and can predict to some extent how they will behave. The market for products and services based on public sector information is still taking shape. No one is even sure what kinds of organization will take part, or what they will do. How far, for example, will public-sector bodies go in providing free applications? Can large corporations buy future dominance with loss-leader products? Will some unknown company become an overnight success, like Facebook? With these unknowns, the risks are very high.

Finding the Answers

Public sector information providers and standards bodies are tackling these problems. The Open Group participates in SHARE-PSI, the European network for the exchange of experience and ideas around implementing open data policies in the public sector. The experience gained by SHARE-PSI will be used by the World Wide Web Consortium (W3C) as a basis for standards and guidelines for publication of public sector information. These standards and guidelines may be used not just by the public sector, but also by not-for-profit bodies and even commercial corporations, many of which have information that they want to make freely available.

The Open Group is making a key contribution by helping to map the shape of the market. It is using the Business Scenario technique from its well-known Enterprise Architecture methodology TOGAF® to identify the kinds of organization that will take part, and their objectives and concerns.

There will be a preview of this on October 22 at The Open Group event in London, which will feature a workshop session on Open Public Sector Data. The workshop will look at how Open Data can help business, present a draft of the Business Scenario, and take input from participants to help develop its conclusions.

The developed Business Scenario will be presented at the SHARE-PSI workshop in Lisbon on December 3-4. The theme of this workshop is encouraging open data usage by commercial developers. It will bring a wide variety of stakeholders together to discuss and build the relationship between the public and private sectors. It will also address, through collaboration with the EU LAPSI project, the legal framework for use of open public sector data.

Benefit from Participation!

If you are thinking about publishing or using public-sector data, you can benefit from these workshops by gaining an insight into the way that the market is developing. In the long term, you can influence the common standards and guidelines that are being developed. In the short term, you can find out what is happening and network with others who are interested.

The social and commercial benefits of open public-sector data are not being realized today. They can be realized through a healthy market in products and services that process the data and make it useful to citizens. That market will emerge when public bodies and businesses clearly understand the roles that they can play. Now is the time to develop that understanding and begin to profit from it.

Register for The Open Group London 2014 event at

Find out how to participate in the Lisbon SHARE-PSI workshop at


Dr. Chris Harding is Director for Interoperability at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0™ Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.

Comments Off on Business Benefit from Public Data

Filed under big data, Cloud, digital technologies, Enterprise Architecture, Open Platform 3.0, TOGAF®, Uncategorized