
Business Capabilities – Taking Your Organization into the Next Dimension

By Stuart Macgregor, Chief Executive, Real IRM Solutions

Decision-makers in large enterprises today face a number of paradoxes when it comes to implementing a business operating model and deploying Enterprise Architecture:

- How to stabilize and embed concrete systems that ensure control and predictability, but at the same time remain flexible and open to new innovations?

- How to employ new technology to improve the productivity of the enterprise and its staff in the face of continual pressures on the IT budget?

- How to ensure that Enterprise Architecture delivers tangible results today, but remains relevant in an uncertain future environment?

Answering these tough questions requires an enterprise to elevate its thinking beyond ‘business processes’ and develop a thorough understanding of its ‘business capabilities’. It demands that the enterprise optimizes and leverages these capabilities to improve every aspect of the business – from coal-face operations to blue-sky strategy.

Business capabilities articulate an organization’s inner workings: the people, processes, technology, tools, and content (information). A capability map shows how these components interface with one another, producing an intricate line-drawing of the entire organizational ecosystem at both a technical and a social level. By understanding its current business capabilities, an organization is armed with a strategic planning tool. We refer to this as the BIDAT framework, which addresses the business, information, data, applications and technology architecture domains.

From this analysis, the journey to addressing the organization’s Enterprise Architecture estate begins. This culminates in the organization being able to dynamically optimize, add and improve on its capabilities as the external environment shifts and evolves. A BIDAT approach provides a permanent bridge between the two islands of business architecture and technology architecture.
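To make the idea a little more concrete, a capability can be thought of as a cross-domain bundle of elements drawn from each of the five BIDAT architecture domains. The following is a minimal, illustrative sketch only; every capability and element name in it is invented, and the check it performs (that a capability is described in every domain) is just one simple way to spot a missing bridge between business and technology:

```python
# A toy business capability described across the five BIDAT domains.
# All capability and element names are invented for illustration.
capability = {
    "name": "Order Fulfilment",
    "business":     ["Order-to-Cash process", "Warehouse team"],
    "information":  ["Order status", "Delivery confirmation"],
    "data":         ["orders table", "shipments table"],
    "applications": ["OrderService API", "Warehouse Management System"],
    "technology":   ["Application server cluster", "Message broker"],
}

BIDAT_DOMAINS = ("business", "information", "data", "applications", "technology")

def coverage_gaps(cap, domains=BIDAT_DOMAINS):
    """Return the BIDAT domains in which the capability is not yet
    described - a gap signals a missing link in the architecture."""
    return [d for d in domains if not cap.get(d)]

print(coverage_gaps(capability))  # → [] (no gaps in this toy example)
```

A capability dictionary with, say, no entries under "technology" would show up immediately in the gap list, which is the kind of cross-domain visibility the BIDAT bridge is meant to provide.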

Put another way, business capability management utilizes the right architectural solutions to deliver the business strategy. In this way, Enterprise Architecture is inextricably linked to capability management. It is the integrated architecture (combined with effective organizational change leadership) that develops the business capabilities and unleashes their power.

This can at times feel very conceptual and hard to apply to real-world environments. Perhaps the best recent example of tangible widespread implementations of a capability-based Enterprise Architecture approach is in South Africa’s minerals and mining sector.

Known as the Exploration and Mining Business Capability Reference Map, and published as part of a set of standards, this framework was developed by The Open Group Exploration, Mining, Metals and Minerals (EMMM™) Forum. Covering all levels of mining operations – from strategic planning and portfolio planning to program and project enablement – and based on the principles of open standards, this framework provides miners with a capability-based approach to information, processes, technology, and people.

The Reference Map isolates specific capabilities within mining organizations, analyzes them from multiple dimensions, and shows their various relationships to other parts of the organization. In the context of increased automation in the mining sector, this becomes an invaluable tool in determining those functions that are ripe for automation.

In this new dimension, this new era of business, there is no reason why achievements from the EMMM’s Business Capability Reference Map cannot be repeated in every industry, and in every mid- to large-scale enterprise throughout the globe.

For more information on joining The Open Group, please visit:  http://www.opengroup.org/getinvolved/becomeamember

For more information on joining The Open Group EMMM™ Forum, please visit:  http://opengroup.co.za/emmm

Stuart Macgregor is the Chief Executive of the South African company, Real IRM Solutions. Through his personal achievements, he has gained the reputation of an Enterprise Architecture and IT Governance specialist, both in South Africa and internationally.

Macgregor participated in the development of the Microsoft Enterprise Computing Roadmap in Seattle. He was then invited by John Zachman to Scottsdale, Arizona, to present a paper on using the Zachman framework to implement ERP systems. In addition, Macgregor was selected as a member of both the SAP AG Global Customer Council for Knowledge Management and the panel that developed the COBIT 3rd Edition Management Guidelines. He has also assisted a global Life Sciences manufacturer to define its IT Governance framework, and a major financial institution to define its global, regional and local IT organizational designs and strategy. He was also selected as a core member of the team that developed the South African Breweries (SABMiller) plc global IT strategy.

Stuart, as the lead researcher, assisted the IT Governance Institute in mapping COBIT 4.0 to TOGAF®. This mapping document was published by ISACA and The Open Group. More recently, he participated in the COBIT 5 development workshop held in London in May 2010.



Filed under EMMMv™, Enterprise Architecture, Enterprise Transformation, Standards, Uncategorized

The Onion & The Open Group Open Platform 3.0™

By Stuart Boardman, Senior Business Consultant, KPN Consulting, and Co-Chair of The Open Group Open Platform 3.0™


The onion is widely used as an analogy for complex systems – from IT systems to mystical world views.


It’s a good analogy. From the outside it’s a solid whole, but each layer you peel off reveals a new onion (new information) underneath.

And a slice through the onion looks quite different from the whole…

What (and how much) you see depends on where and how you slice it.


The Open Group Open Platform 3.0™ is like that. Use cases for Open Platform 3.0 reveal multiple participants and technologies (Cloud Computing, Big Data Analytics, Social Networks, Mobility and the Internet of Things) working together to achieve goals that vary by participant. Each participant’s goals represent a different slice through the onion.

The Ecosystem View
We commonly use the idea of peeling off layers to understand large ecosystems, which could be Open Platform 3.0 systems like the energy smart grid but could equally be the workings of a large cooperative or the transport infrastructure of a city. We want to know what is needed to keep the ecosystem healthy and what the effects could be of the actions of individuals on the whole and therefore on each other. So we start from the whole thing and work our way in.


The Service at the Centre of the Onion

If you’re the provider or consumer (or both) of an Open Platform 3.0 service, you’re primarily concerned with your slice of the onion. You want to be able to obtain and/or deliver the expected value from your service(s). You need to know as much as possible about the things that can positively or negatively affect that. So your concern is not the onion (ecosystem) as a whole but your part of it.

Right in the middle is your part of the service. The first level out from that consists of other participants with whom you have a direct relationship (contractual or otherwise). These are the organizations that deliver the services you consume directly to enable your own service.

One level out from that (level 2) are participants with whom you have no direct relationship but on whose services you are still dependent. It’s common in Open Platform 3.0 that your partners, too, will consume other services in order to deliver their own (see the use cases we have documented). You need to know as much as possible about this level, because whatever happens here can have a positive or negative effect on you.

One level further from the centre we find indirect participants who don’t necessarily deliver any part of the service but whose actions may well affect the rest. They could be just indirect materials suppliers, or they could be part of a completely different value network in which your level 1 or level 2 “partners” participate. You can’t expect to understand this level in detail, but you know that how that value network performs can affect your partners’ strategy or even their very existence. The knock-on impact on your own strategy can be significant.

We can conceive of more levels but pretty soon a law of diminishing returns sets in. At each level further from your own organization you will see less detail and more variety. That in turn means that there will be fewer things you can actually know (with any certainty) and not much more that you can even guess at. That doesn’t mean that the ecosystem ends at this point. Ecosystems are potentially infinite. You just need to decide how deep you can usefully go.
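The level structure described above is essentially a breadth-first walk outward through a service-dependency graph: level 1 holds your direct providers, level 2 their providers, and so on. The sketch below illustrates this with a toy graph; the participant names and the `depends_on` data are invented for the example:

```python
from collections import deque

# Toy service-dependency graph: each participant lists the providers
# whose services it consumes directly. All names are invented.
depends_on = {
    "me": ["hosting_provider", "payment_service"],
    "hosting_provider": ["datacenter_operator"],
    "payment_service": ["bank_network"],
    "datacenter_operator": ["power_utility"],
    "bank_network": [],
    "power_utility": [],
}

def onion_levels(graph, centre):
    """Breadth-first traversal from the centre of the onion: level 1
    holds direct providers, level 2 their providers, and so on."""
    levels = {centre: 0}
    queue = deque([centre])
    while queue:
        node = queue.popleft()
        for provider in graph.get(node, []):
            if provider not in levels:
                levels[provider] = levels[node] + 1
                queue.append(provider)
    return levels

print(onion_levels(depends_on, "me"))
```

In this toy graph the power utility sits at level 3 – exactly the kind of distant participant about which, as noted above, you can know less and less with any certainty, which is where the law of diminishing returns sets in.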

Limits of the Onion
At a certain point one hits the limits of an analogy. If everybody sees their own organization as the centre of the onion, what we actually have is a bunch of different, overlapping onions.


And you can’t actually make onions overlap, so let’s not take the analogy too literally. Just keep it in mind as we move on. Remember that our objective is to ensure the value of the service we’re delivering or consuming. What we need to know therefore is what can change that’s outside of our own control and what kind of change we might expect. At each visible level of the theoretical onion we will find these sources of variety. How certain of their behaviour we can be will vary – with a tendency to the less certain as we move further from the centre of the onion. We’ll need to decide how, if at all, we want to respond to each kind of variety.

But that will have to wait for my next blog. In the meantime, here are some ways people look at the onion.



Stuart Boardman is a Senior Business Consultant with KPN Consulting, where he leads the Enterprise Architecture practice and consults to clients on Cloud Computing, Enterprise Mobility and the Internet of Everything. He is Co-Chair of The Open Group Open Platform 3.0™ Forum, was Co-Chair of the Cloud Computing Work Group’s Security for the Cloud and SOA project, and is a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by KPN, by the Information Security Platform (PvIB) in The Netherlands and by his previous employer, CGI, as well as of several Open Group white papers, guides and standards. He is a frequent speaker at conferences on the topics of Open Platform 3.0 and Identity.


Filed under Cloud, Cloud/SOA, Conference, Enterprise Architecture, Open Platform 3.0, Service Oriented Architecture, Standards, Uncategorized

ArchiMate® Users Group Meeting

By The Open Group

During a special ArchiMate® users group meeting on Wednesday, May 14 in Amsterdam, Andrew Josey, Director of Standards within The Open Group, presented on the ArchiMate certification program and adoption of the language. Andrew is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate 2.1, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX Specification, Version 4.

ArchiMate®, a standard of The Open Group, is an open and independent modeling language for Enterprise Architecture that is supported by different vendors and consulting firms. ArchiMate provides instruments that enable Enterprise Architects to describe, analyze and visualize the relationships among business domains in an unambiguous way. ArchiMate is not an isolated development: its relationships with existing modeling languages, such as UML and BPMN, and with methods and frameworks, such as TOGAF and Zachman, are well described.

In this talk, Andrew provided an overview of the ArchiMate 2 certification program, including information on the adoption of the ArchiMate modeling language. He reviewed the major milestones in the development of ArchiMate and referred to the Dutch origins of the language: the Dutch Telematica Institute created the ArchiMate language in the period 2002-2004, and it is now in widespread use. There have been over 41,000 downloads of the various versions of the ArchiMate specification from more than 150 countries. At 52%, The Netherlands leads the “Top 10 Certifications by country”; however, the “Top 20 Downloads by country” is dominated by the USA (19%), followed by the UK (14%) and The Netherlands (12%). One of the tools developed to support ArchiMate is Archi, a free open-source tool created by Phil Beauvoir at the University of Bolton in the UK. Since its creation, Archi has grown from a relatively small, home-grown tool into a widely used open-source resource that averages 3,000 downloads per month and whose community ranges from independent practitioners to Fortune 500 companies. It is no surprise that Archi, too, is most often downloaded in The Netherlands (17.67%), the United States (12.42%) and the United Kingdom (8.81%).

After these noteworthy facts and figures, Henk Jonkers took a deep dive into modeling risk and security. Henk Jonkers is a senior research consultant, involved in BiZZdesign’s innovations in the areas of Enterprise Architecture and engineering. He was one of the main developers of the ArchiMate language, an author of the ArchiMate 1.0 and 2.0 Specifications, and is actively involved in the activities of the ArchiMate Forum of The Open Group. In this talk, Henk showed several examples of how risk and security aspects can be incorporated in Enterprise Architecture models using the ArchiMate language. He also explained how the resulting models could be used to analyze risks and vulnerabilities in the different architectural layers, and to visualize the business impact that they have.

First, Henk described the limitations of current approaches: existing information security and risk management methods do not systematically identify potential attacks. They are based on checklists, heuristics and experience. Security controls are applied in a bottom-up way rather than being based on a thorough analysis of risks and vulnerabilities, and there is no explicit definition of security principles and requirements. Existing approaches also focus only on IT security, so they have difficulty dealing with complex attacks on socio-technical systems that combine physical and digital access with social engineering. Finally, current approaches concentrate on preventive security controls, while corrective and curative controls are not considered. Security by Design is a must, and there is always a trade-off between the risk factor and process criticality. Henk gave some arguments as to why ArchiMate provides the right building blocks for a solid risk and security architecture: ArchiMate is widely accepted as an open standard for modeling Enterprise Architecture, and support is widely available; it is also suitable as a basis for both qualitative and quantitative analysis. And last but not least, there is a good fit with other Enterprise Architecture and security frameworks (TOGAF, Zachman, SABSA).

“The nice thing about standards is that there are so many to choose from,” emeritus professor Andrew S. Tanenbaum once said. Using this quote as a starting point, Gerben Wierda focused his talk on the relationship between the ArchiMate language and Business Process Model and Notation (BPMN), in particular Bruce Silver’s BPMN Method and Style. He stated that ArchiMate and BPMN can exist side by side. Why would you link BPMN and ArchiMate? According to Gerben, there is a fundamental vision behind all of this: “There are unavoidably many ‘models’ of the enterprise that are used. We cannot reduce that to one single model because of fundamentally different uses. We cannot even reduce that to a single meta-model (or pattern/structure) because of fundamentally different requirements. Therefore, what we need to do is look at the documentation of the enterprise as a collection of models with different structures. And what we thus need to do is make this collection coherent.”

Gerben is Lead Enterprise Architect of APG Asset Management, one of the largest fiduciary managers (± €330 billion assets under management) in the world, with offices in Heerlen, Amsterdam, New York, Hong Kong and Brussels. He has overseen the construction of one of the largest single ArchiMate models in the world to date and is the author of the book “Mastering ArchiMate”, based on his experience in large-scale ArchiMate modeling. In his talk, Gerben showed how the leading standards ArchiMate and BPMN can be used together, creating one structured, logically coherent and automatically synchronized description that combines architecture and process details.

Marc Lankhorst, Managing Consultant and Service Line Manager Enterprise Architecture at BiZZdesign, presented on the topic of capability modeling in ArchiMate. As an internationally recognized thought leader on Enterprise Architecture, he guides the development of BiZZdesign’s portfolio of services, methods, techniques and tools in this field. Marc is also active as a consultant in government and finance. In the past, he managed the development of the ArchiMate language for Enterprise Architecture modeling, now a standard of The Open Group. Marc is a certified TOGAF 9 Enterprise Architect and holds an MSc in Computer Science from the University of Twente and a PhD from the University of Groningen in the Netherlands. In his talk, Marc discussed different notions of “capability” and outlined the ways in which these might be modeled in ArchiMate. In short, a business capability is something an enterprise does or can do, given the various resources it possesses. Marc described the use of capability-based planning as a way of translating enterprise strategy into architectural choices, and looked ahead at potential extensions of ArchiMate for capability modeling. Business capabilities provide a high-level view of the current and desired abilities of the organization, in relation to its strategy and environment. Enterprise Architecture practitioners design extensive models of the enterprise, but these are often difficult to communicate to business leaders. Capabilities form a bridge between business leaders and Enterprise Architecture practitioners: they are very helpful in business transformation and are the rationale behind capability-based planning, he concluded.
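A common way to put capability-based planning to work is the capability heat map: each capability gets a current and a target maturity level, and the gap between them drives investment priorities. The sketch below is a toy illustration of that idea only; the capability names and maturity scores are invented, and ranking purely by gap size is a deliberate simplification:

```python
# Toy capability heat map: current and target maturity levels (1-5).
# All capability names and scores are invented for illustration.
capabilities = {
    "Customer Onboarding": {"current": 2, "target": 4},
    "Risk Assessment":     {"current": 3, "target": 3},
    "Claims Processing":   {"current": 1, "target": 4},
}

def plan_investments(caps):
    """Rank capabilities by maturity gap, largest first - one simple
    way to translate a capability assessment into planning priorities."""
    gaps = {name: c["target"] - c["current"] for name, c in caps.items()}
    return sorted(((g, n) for n, g in gaps.items() if g > 0))[::-1]

for gap, name in plan_investments(capabilities):
    print(f"{name}: close maturity gap of {gap}")
```

Here “Claims Processing” surfaces first (a gap of 3), while “Risk Assessment” drops out because it is already at its target level; a real planning exercise would of course weigh strategic importance alongside raw gap size.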

For more information on ArchiMate, please visit:

http://www.opengroup.org/subjectareas/enterprise/archimate

For information on the Archi tool, please visit: http://www.archimatetool.com/

For information on joining the ArchiMate Forum, please visit: http://www.opengroup.org/getinvolved/forums/archimate


Filed under ArchiMate®, Certifications, Conference, Enterprise Architecture, Enterprise Transformation, Professional Development, Standards, TOGAF®

The Open Group Summit Amsterdam 2014 – Day Two Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

On Tuesday, May 13, day two of The Open Group Summit Amsterdam, the morning plenary began with a welcome from The Open Group President and CEO Allen Brown. He presented an overview of the Forums and the corresponding Roadmaps. He described the process of standardization, from the initial work to a preliminary standard, including review documents, whitepapers and snapshots, culminating in the final publication of an open standard. Brown also announced that Capgemini is again a Platinum member of The Open Group and contributes to the realization of the organization’s objectives in various ways.

Charles Betz, Chief Architect, Signature Client Group, AT&T, and Karel van Zeeland, Lead IT4IT Architect, Shell IT International, presented the second keynote of the morning, ‘A Reference Architecture for the Business of IT’. When the IT Value Chain and IT4IT Reference Architecture are articulated, instituted and automated, the business can experience huge cost savings in IT, significantly improved response times for IT service delivery, and increased customer satisfaction.

Karel van Zeeland, Charles Betz and Allen Brown

In 1998, when Shell Information Technology started to restructure its IT management, the chaos was complete. There were too many tools, too many vendors, a lack of integration, no common data model, a variety of user interfaces and no standards to support rapid implementation. With more than 28 different solutions for incident management and more than 160 repositories of configuration data, the complexity was immense. An unclear relationship with Enterprise Architecture and other architectural issues made the case even worse.

Restructuring IT management turned out to be a long journey for the Shell managers. How do you manage 1,700 locations in 90 countries, 8,000 applications, 25,000 servers, dozens of global and regional datacenters and 125,000 PCs and laptops, when at the same time you are confronted with trends like BYOD, mobility, cloud computing, security, big data and the Internet of Things (IoT)? According to Betz and van Zeeland, IT4IT is a promising platform for the evolution of the IT profession, and it has the potential to become a full open standard for managing the business of IT.

Jeroen Tas, CEO of Healthcare Informatics Solutions and Services within Philips Healthcare, explained in his keynote speech how “Philips is becoming a software company”. Digital solutions connect and streamline workflows across the continuum of care to improve patient outcomes. Today, big data is supporting adaptive therapies, and smart algorithms are used for early warning and active monitoring of patients in remote locations. Tas has a dream: he wants to make a valuable contribution to a connected healthcare world for everyone.

In January 2014, Royal Philips announced the formation of Healthcare Informatics Solutions and Services, a new business group within Philips’ Healthcare sector that offers hospitals and health systems the customized clinical programs, advanced data analytics and interoperable, cloud-based platforms necessary to implement new models of care. Tas, who previously served as the Chief Information Officer of Philips, leads the group.

In January of this year, The Open Group launched The Open Group Healthcare Forum, which focuses on bringing Boundaryless Information Flow™ to the healthcare industry, enabling data to flow more easily throughout the complete healthcare ecosystem.

Ed Reynolds, HP Fellow responsible for HP Enterprise Security Services in the US, described the role of information risk in a new technology landscape. How do C-level executives think about risk? This is a relevant and urgent question, because it can take more than 243 days before a data breach is detected. Last year, the average cost associated with a data breach increased 78% to 11.9 million dollars. Critical data assets may be of strategic national importance, have massive corporate value or have huge significance to an employee or citizen, be it the secret recipe of Coca-Cola or the medical records of a patient. “Protect your crown jewels” is the motto.

Bart Seghers, Cyber Security Manager, Thales Security, and Henk Jonkers, Senior Research Consultant at BiZZdesign, visualized the business impact of technical cyber risks. Attacks on information systems are becoming increasingly sophisticated, and organizations are increasingly networked and thus more complex. Attacks combine digital and physical access with social engineering, while the departments responsible for each of these domains within an organization operate in silos. Current risk management methods cannot handle the resulting complexity. Therefore, they use ArchiMate® as the basis for a risk and security architecture. ArchiMate is a widely accepted open standard for modeling Enterprise Architecture, and there is a good fit with other EA and security frameworks, such as TOGAF®. A pentest-based Business Impact Assessment (BIA) is a powerful management dashboard that increases the return on investment of your Enterprise Architecture effort, they concluded.

Risk Management was also a hot topic during several sessions in the afternoon. Moderator Jim Hietala, Vice President, Security at The Open Group, hosted a panel discussion on Risk Management.

In the afternoon several international speakers covered topics including Enterprise Architecture & Business Value, Business & Data Architecture and Open Platform 3.0™. In relation to social networks, Andy Jones, Technical Director, EMEA, SOA Software, UK, presented “What Facebook, Twitter and Netflix Didn’t Tell You”.

The Open Group veteran Dr. Chris Harding, Director for Interoperability at The Open Group, and panelists discussed and emphasized the importance of The Open Group Open Platform 3.0™. The session also featured a live Q&A via Twitter #ogchat, #ogop3.

The podcast is now live. Here are the links:

Briefings Direct Podcast Home Page: http://www.briefingsdirect.com/

PODCAST STREAM: http://traffic.libsyn.com/interarbor/BriefingsDirect-The_Open_Group_Amsterdam_Conference_Panel_Delves_into_How_to_Best_Gain_Business_Value_From_Platform_3.mp3

PODCAST SUMMARY: http://briefingsdirect.com/the-open-group-amsterdam-panel-delves-into-how-to-best-gain-business-value-from-platform-30

In the evening, The Open Group hosted a tour and dinner experience at the world-famous Heineken Brewery.

For those of you who attended the summit, please give us your feedback! https://www.surveymonkey.com/s/AMST2014


Filed under ArchiMate®, Boundaryless Information Flow™, Certifications, Enterprise Architecture, Enterprise Transformation, Healthcare, Open Platform 3.0, RISK Management, Standards, TOGAF®, Uncategorized

Improving Patient Care and Reducing Costs in Healthcare

By Jason Lee, Director of Healthcare and Security Forums, The Open Group

Recently, The Open Group Healthcare Forum hosted a tweet jam to discuss IT and Enterprise Architecture (EA) issues as they relate to two of the most persistent problems in healthcare: reducing costs and improving patient care. Below I summarize the key points that followed from a rather unique discussion. Unique how? Unique in that rather than address these issues from the perspective of “must do” priorities (including EHR implementation, transitioning to ICD-10, and meeting enhanced HIPAA security requirements), we focused on “should do” opportunities.

We asked how stakeholders in the healthcare system can employ “Boundaryless Information Flow™” and standards development through the application of EA approaches that have proven effective in other industries to add new insights and processes to reduce costs and improve quality.

Question 1: What barriers exist for collaboration among providers in healthcare, and what can be done to improve things?
• tetradian: Huge barriers of language, terminology, mindset, worldview, paradigm, hierarchy, role and much more
• jasonsleephd: Financial, organizational, structural, lack of enabling technology, cultural, educational, professional insulation
• jim_hietala: EHRs with proprietary interfaces represent a big barrier in healthcare
• Technodad: Isn’t question really what barriers exist for collaboration between providers and patients in healthcare?
• tetradian: Communication b/w patients and providers is only one (type) amongst very many
• Technodad: Agree. Debate needs to identify whose point of view the #healthcare problem is addressing.
• Dana_Gardner: Where to begin? A Tower of Babel exists on multiple levels among #healthcare ecosystems. Too complex to fix wholesale.
• EricStephens: Also, legal ramifications of sharing information may impede sharing
• efeatherston: Patient needs provider collaboration to see any true benefit (I don’t just go to one provider)
• Dana_Gardner: Improve first by identifying essential collaborative processes that have most impact, and then enable them as secure services.
• Technodad: In US at least, solutions will need to be patient-centric to span providers- Bring Your Own Wellness (BYOW™) for HC info.
• loseby: Lack of shared capabilities & interfaces between EHRs leads to providers w/o comprehensive view of patient
• EricStephens: Are incentives aligned sufficiently to encourage collaboration? + lack of technology integration.
• tetradian: Vast numbers of stakeholder-groups, many beyond medicine – e.g. pharma, university, politics, local care (esp. outside of US)
• jim_hietala: Gap in patient-centric information flow
• Technodad: I think patients will need to drive the collaboration – they have more incentive to manage info than providers.
• efeatherston: Agreed, stakeholder list could be huge
• EricStephens: High-deductible plans will drive patients (us) to own our health care experience
• Dana_Gardner: Take patient-centric approach to making #healthcare processes better: drives adoption, which drives productivity, more adoption
• jasonsleephd: Who thinks standards development and data sharing is an essential collaboration tool?
• tetradian: not always patient-centric – e.g. epidemiology /public-health is population centric – i.e. _everything_ is ‘the centre’
• jasonsleephd: How do we break through barriers to collaboration? For one thing, we need to create financial incentives to collaborate (e.g., ACOs)
• efeatherston: Agreed, the challenge is to get them to challenge (if that makes sense). Many do not question
• EricStephens: Some will deify those in a lab coat.
• efeatherston: Still do, especially older generations, cultural
• Technodad: Agree – also displaying, fusing data from different providers, labs, monitors etc.
• dianedanamac: Online collaboration, can be cost effective & promote better quality but must financially incented
• efeatherston: Good point, unless there is a benefit/incentive for provider, they may not be bothered to try
• tetradian: “must financially incented” – often other incentives work better – money can be a distraction – also who pays?

Participants identified barriers that are not atypical: financial disincentives, underpowered technology, failure to utilize existing capability, lack of motivation to collaborate. Yet all participants viewed more collaboration as key. Consensus developed around:
• The patient (and by one commenter, the population) as the main driver of collaboration, and
• The patient as the most important stakeholder at the center of information flow.

Question 2: Does implementing remote patient tele-monitoring and online collaboration drive better and more cost-effective patient care?
• EricStephens: “Hell yes” comes to mind. Why drag yourself into a dr. office when a device can send the information (w/ video)
• efeatherston: Will it? Will those with high deductible plans have ability/understanding/influence to push for it?
• EricStephens: Driving up participation could drive up efficacy
• jim_hietala: Big opportunities to improve patient care thru remote tele-monitoring
• jasonsleephd: Tele-ICUs can keep patients (and money) in remote settings while receiving quality care
• jasonsleephd: Remote monitoring of patients admitted with CHF can reduce rehospitalization w/i 6 months @connectedhealth.org
• Dana_Gardner: Yes! Pacemakers now uplink to centralized analysis centers, communicate trends back to attending doctor. Just scratches surface
• efeatherston: Amen. Do that now, monthly uplink, annual check in with doctor to discuss any trends he sees.
• tetradian: Assumes tele-monitoring options even exist – very wide range of device-capabilities, from very high to not-much, and still not common.
• tetradian: (General request to remember that there’s more to the world, and medicine, than just the US and its somewhat idiosyncratic systems?)
• efeatherston: Yes, I do find myself looking through the lens of my own experiences, forgetting the way we do things may not translate
• jasonsleephd: Amen to point about our idiosyncrasies! Still, we have to live with them, and we can do so much better with good information flow!
• Dana_Gardner: Governments should remove barriers so more remote patient tele-monitoring occurs. Need to address the malpractice risks issue.
• TerryBlevins: Absolutely. Just want the information to go to the right place!
• Technodad: . Isn’t “right place” someplace you & all your providers can access? Need interoperability!
• TerryBlevins: It requires interoperability yes – the info must flow to those that must know.
• Technodad: Many areas where continuous monitoring can help. Improved IoT (internet of things) sensors e.g. cardio, blood chemistry coming. http://t.co/M3xw3tNvv3
• tetradian: Ethical/privacy concerns re how/with-whom that data is shared – e.g. with pharma, research, epidemiology etc
• efeatherston: Add employers to that etc. list of how/who/what is shared

Participants agreed that remote patient monitoring and telemonitoring can improve collaboration, improve patient care, and put patients more in control of their own healthcare data. However, participants expressed concerns about lack of widespread availability and the related issue of high cost. In addition, they raised important questions about who has access to these data, and they addressed nagging privacy and liability concerns.

Question 3: Can a mobile strategy improve patient experience, empowerment and satisfaction? If so, how?
• jim_hietala: mobile is a key area where patient health information can be developed/captured
• EricStephens: Example: link blood sugar monitor to iPhone to MyFitnessPal + gamification to drive adherence (and drive $$ down?)
• efeatherston: Mobile along with #InternetOfThings, wearables linked to mobile. Contact lens measuring blood sugar in recent article as ex.
• TerryBlevins: Sick people, or people getting sick are on the move. In a patient centric world we must match need.
• EricStephens: Mobile becomes a great data acquisition point. Something as simple as SMS can drive adherence with complication drug treatments
• jasonsleephd: mHealth is a very important area for innovation, better collaboration, $ reduction & quality improvement. Google recent “Webby Awards & handheld devices”
• tetradian: Mobile can help – e.g. use of SMS for medicine in Africa etc
• Technodad: Mobile isn’t option any more. Retail, prescription IoT, mobile network & computing make this a must-have. http://t.co/b5atiprIU9
• dianedanamac: Providers need to be able to receive the information mHealth
• Dana_Gardner: Healthcare should go location-independent. Patient is anywhere, therefore so is care, data, access. More than mobile, IMHO.
• Technodad: Technology and mobile demand will outrun regional provider systems, payers, regulation
• Dana_Gardner: As so why do they need to be regional? Cloud can enable supply-demand optimization regardless of location for much.
• TerryBlevins: And the caregivers are also on the move!
• Dana_Gardner: Also, more machine-driven care, i.e. IBM Watson, for managing the routing and prioritization. Helps mitigate overload.
• Technodad: Agree – more on that later!
• Technodad: Regional providers are the reality in the US. Would love to have more national/global coverage.
• Dana_Gardner: Yes, let the market work its magic by making it a larger market, when information is the key.
• tetradian: “let the market do its work” – ‘the market’ is probably the quickest way to destroy trust! – not a good idea…
• Technodad: To me, problem is coordinating among multi providers, labs etc. My health info seems to move at glacial pace then.
• tetradian: “Regional providers are the reality in the US.” – people move around: get info follow them is _hard_ (1st-hand exp. there…)
• tetradian: danger of hype/fear-driven apps – may need regulation, or at least regulatory monitoring
• jasonsleephd: Regulators, as in FDA or something similar?
• tetradian: “Regulators as in FDA” etc – at least oversight of that kind, yes (cf. vitamins, supplements, health-advice services)
• jim_hietala: mobile, consumer health device innovation moving much faster than IT ability to absorb
• tetradian: also beware of IT-centrism and culture – my 90yr-old mother has a cell-phone, but has almost no idea how to use it!
• Dana_Gardner: Information and rely of next steps (in prevention or acute care) are key, and can be mobile. Bring care to the patient ASAP.

Participants began in full agreement: mobile health is no longer optional but a "given." They recognized, however, that providers' ability to receive mobile health information is often lacking, and they viewed the cloud as a means of overcoming the regionalization of data storage. When the discussion turned to the further development of mHealth, there was some debate over what can be left to the market and whether some form of regulatory action is needed.

Question 4: Does better information flow and availability in healthcare reduce operation cost, and free up resources for more patient care?
• tetradian: A4: should do, but it’s _way_ more complex than most IT-folks seem to expect or understand (e.g. repeated health-IT fails in UK)
• jim_hietala: A4: removing barriers to health info flow may reduce costs, but for me it’s mostly about opportunity to improve patient care
• jasonsleephd: Absolutely. Consider claims processing alone. Admin costs in private health ins. are 20% or more. In Medicare less than 2%.
• loseby: Absolutely! ACO model is proving it. Better information flow and availability also significantly reduces hospital admissions
• dianedanamac: I love it when the MD can access my x-rays and lab results so we have more time.
• EricStephens: More info flow + availability -> less admin staff -> more med staff.
• EricStephens: Get the right info to the ER Dr. can save a life by avoiding contraindicated medicines
• jasonsleephd: EricStephens GO CPOE!!
• TerryBlevins: @theopengroup. believe so, but ask the providers. My doctor is more focused on patient by using simple tech to improve info flow
• tetradian: don’t forget link b/w information-flows and trust – if trust fails, so does the information-flow – worse than where we started!
• jasonsleephd: Yes! Trust is really key to this conversation!
• EricStephens: processing a claim, in most cases, should be no more difficult than an expense report or online order. Real-time adjudication
• TerryBlevins: Great point.
• efeatherston: Agreed should be, would love to see it happen. Trust in the data as mentioned earlier is key (and the process)
• tetradian: A4: sharing b/w patient and MD is core, yes, but who else needs to access that data – or _not_ see it? #privacy
• TerryBlevins: A4: @theopengroup can’t forget that if info doesn’t flow sometimes the consequences are fatal, so unblocked the flow.
• tetradian: .@TerryBlevins A4: “if info doesn’t flow sometimes the consequences are fatal,” – v.important!
• Technodad: . @tetradian To me, problem is coordinating among multi providers, labs etc. My health info seems to move at glacial pace then.
• TerryBlevins: A4: @Technodad @tetradian I have heard that a patient moving on a gurney moves faster than the info in a hospital.
• Dana_Gardner: A4 Better info flow in #healthcare like web access has helped. Now needs to go further to be interactive, responsive, predictive.
• jim_hietala: A4: how about pricing info flow in healthcare, which is almost totally lacking
• Dana_Gardner: A4 #BigData, #cloud, machine learning can make 1st points of #healthcare contact a tech interface. Not sci-fi, but not here either.

Starting from the recognition that this is a very complicated issue, the conversation quickly produced a consensus view that better information flow is key to cost reduction, quality improvement and increased patient satisfaction. Trust, meaning that information is accurate, available and used appropriately to support the provider-patient relationship, emerged as a central issue, and privacy concerns naturally surfaced. Poor coordination of information flow and lack of interoperability were recognized as important barriers. The conversation then turned somewhat abstract and technical, with mentions of big data, the cloud and pricing information flows, though without much specificity about how to connect the dots.

Question 5: Do you think payers and providers are placing enough focus on using technology to positively impact patient satisfaction?
• Technodad: A5: I think there are positive signs but good architecture is lacking. Current course will end w/ provider information stovepipes.
• TerryBlevins: A5: @theopengroup Providers are doing more. I think much more is needed for payers – they actually may be worse.
• theopengroup: @TerryBlevins Interesting – where do you see opportunities for improvements with payers?
• TerryBlevins: A5: @theopengroup like was said below claims processing – an onerous job for providers and patients – mostly info issue.
• tetradian: A5: “enough focus on using tech”? – no, not yet – but probably won’t until tech folks properly face the non-tech issues…
• EricStephens: A5 No. I’m not sure patient satisfaction (customer experience/CX?) is even a factor sometimes. Patients not treated like customers
• dianedanamac: .@EricStephens SO TRUE! Patients not treated like customers
• Technodad: . @EricStephens Amen to that. Stovepipe data in provider systems is barrier to understanding my health & therefore satisfaction.
• dianedanamac: “@mclark497: @EricStephens issue is the customer is treat as only 1 dimension. There is also the family experience to consider too
• tetradian: .@EricStephens A5: “Patients not treated like customers” – who _is_ ‘the customer’? – that’s a really tricky question…
• efeatherston: @tetradian @EricStephens Trickiest question. to the provider is the patient or the payer the customer?
• tetradian: .@efeatherston “patient or payer” – yeah, though it gets _way_ more complex than that once we explore real stakeholder-relations
• efeatherston: @tetradian So true.
• jasonsleephd: .@tetradian @efeatherston Very true. There are so many diff stakeholders. But to align payers and pts would be huge
• efeatherston: @jasonsleephd @tetradian re: aligning payers and patients, agree, it would be huge and a good thing
• jasonsleephd: .@efeatherston @tetradian @EricStephens Ideally, there should be no dividing line between the payer and the patient!
• efeatherston: @jasonsleephd @tetradian @EricStephens Ideally I agree, and long for that ideal world.
• EricStephens: .@jasonsleephd @efeatherston @tetradian the payer s/b a financial proxy for the patient. and nothing more
• TerryBlevins: @EricStephens @jasonsleephd @efeatherston @tetradian … got a LOL out of me.
• Technodad: . @tetradian @EricStephens That’s a case of distorted marketplace. #Healthcare architecture must cut through to patient.
• tetradian: .@Technodad “That’s a case of distorted marketplace.” – yep. now add in the politics of consultants and their hierarchies, etc?
• TerryBlevins: A5: @efeatherston @tetradian @EricStephens in patient cetric world it is the patient and or their proxy.
• jasonsleephd: A5: Not enough emphasis on how proven technologies and architectural structures in other industries can benefit healthcare
• jim_hietala: A5: distinct tension in healthcare between patient-focus and meeting mandates (a US issue)
• tetradian: .@jim_hietala A5: “meeting mandates (a US issue)” – UK NHS (national-health-service) may be even worse than US – a mess of ‘targets’
• EricStephens: A5 @jim_hietala …and avoiding lawsuits
• tetradian: A5: most IT-type tech still not well-suited to the level of mass-uniqueness inherent in the healthcare context
• Dana_Gardner: A5 They are using tech, but patient “satisfaction” not yet a top driver. We have a long ways to go on that. But it can help a ton.
• theopengroup: @Dana_Gardner Agree, there’s a long way to go. What would you say is the starting point for providers to tie the two together?
• Dana_Gardner: @theopengroup An incentive other than to avoid lawsuits. A transparent care ratings capability. Outcomes focus based on total health
• Technodad: A5: I’d be satisfied just to not have to enter my patient info & history on a clipboard in every different provider I go to!
• dianedanamac: A5 @tetradian Better data sharing & Collab. less redundancy, lower cost, more focus on patient needs -all possible w/ technology
• Technodad: A5: The patient/payer discussion is a red herring. If the patient weren’t there, rest of the system would be unnecessary.
• jim_hietala: RT @Technodad: The patient/payer discussion is a red herring. If the patient weren’t there, rest of system unnecessary. AMEN

Very interesting conversation. Positive signs of progress were noted but so too were indications that healthcare will remain far behind the technology curve in the foreseeable future. Providers were given higher “grades” than payers. Yet, claims processing would seemingly be one of the easiest areas for technology-assisted improvement. One discussant noted that there will not be enough focus on technology in healthcare “until the tech folks properly face the non-tech issues”. This would seem to open a wide door for EA experts to enter the healthcare domain! The barriers (and opportunities) to this may be the topic of another tweet jam, or Open Group White Paper.
Interestingly, part way into the discussion the topic turned to the lack of a real customer/patient focus in healthcare. Not enough emphasis on patient satisfaction. Not enough attention to patient outcomes. There needs to be a better/closer alignment between what motivates payers and the needs of patients.

Question 6: As some have pointed out, many of the EHR systems are highly proprietary, how can standards deliver benefits in healthcare?
• jim_hietala: A6: Standards will help by lowering the barriers to capturing data, esp. for mhealth, and getting it to point of care
• tetradian: .@jim_hietala “esp. for mhealth” – focus on mhealth may be a way to break the proprietary logjam, ‘cos it ain’t proprietary yet
• TerryBlevins: A6: @theopengroup So now I deal with at least 3 different EHR systems. All requiring me to be the info steward! Hmmm
• TerryBlevins: A6 @theopengroup following up if they shared data through standards maybe they can synchronize.
• EricStephens: A6 – Standards lead to better interoperability, increased viscosity of information which will lead to lowers costs, better outcomes.
• efeatherston: @EricStephens and greater trust in the info (as was mentioned earlier, trust in the information key to success)
• jasonsleephd: A6: Standards development will not kill innovation but rather make proprietary systems interoperable
• Technodad: A6: Metcalfe’s law rules! HC’s many providers-many patients structure means interop systems will be > cost effective in long run.
• tetradian: A6: the politics of this are _huge_, likewise the complexities – if we don’t face those issues right up-front, this is going nowhere
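The network-effect argument raised in the thread (Metcalfe's law applied to healthcare's many-providers, many-patients structure) can be made concrete with a little arithmetic: without a shared standard, every pair of systems needs its own custom interface, roughly n(n-1)/2 integrations, whereas a common standard requires only one adapter per system. The sketch below is illustrative; the system counts are not drawn from the discussion.

```python
def custom_interfaces(n: int) -> int:
    """Point-to-point integrations needed when every pair of systems
    requires its own custom interface."""
    return n * (n - 1) // 2

def standard_adapters(n: int) -> int:
    """Integrations needed when every system maps once to a shared standard."""
    return n

# Compare the two approaches as the number of systems grows.
for n in (5, 50, 500):
    print(f"{n} systems: {custom_interfaces(n)} custom vs "
          f"{standard_adapters(n)} standards-based")
```

At 500 participating systems, point-to-point integration requires 124,750 interfaces against 500 adapters, which is the long-run cost-effectiveness the tweet alludes to.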

In his April 24, 2014 post at www.weblog.tetradian.com, Tom Graves provided a clearly stated position on the role of The Open Group in delivering standards to help healthcare improve. He wrote:

“To me, this is where The Open Group has an obvious place and a much-needed role, because it’s more than just an IT-standards body. The Open Group membership are mostly IT-type organisations, yes, which tends to guide towards IT-standards, and that’s unquestionably of importance here. Yet perhaps the real role for The Open Group as an organisation is in its capabilities and experience in building consortia across whole industries: EMMM™ and FACE are two that come immediately to mind. Given the maze of stakeholders and the minefields of vested-interests across the health-context, those consortia-building skills and experience are perhaps what’s most needed here.”

The Open Group is the ideal organization to engage in this work. There are many ways to collaborate. You can join The Open Group Healthcare Forum, follow the Forum on Twitter @ogHealthcare and connect on The Open Group Healthcare Forum LinkedIn Group.

Jason Lee, Director of Healthcare and Security Forums at The Open Group, has conducted healthcare research, policy analysis and consulting for over 20 years. He is a nationally recognized expert in healthcare organization, finance and delivery and applies his expertise to a wide range of issues, including healthcare quality, value-based healthcare, and patient-centered outcomes research. Jason worked for the legislative branch of the U.S. Congress from 1990-2000 — first at GAO, then at CRS, then as Health Policy Counsel for the Chairman of the House Energy and Commerce Committee (in which role the National Journal named him a “Top Congressional Aide” and he was profiled in the Almanac of the Unelected). Subsequently, Jason held roles of increasing responsibility with non-profit organizations — including AcademyHealth, NORC, NIHCM, and NEHI. Jason has published quantitative and qualitative findings in Health Affairs and other journals and his work has been quoted in Newsweek, the Wall Street Journal and a host of trade publications. He is a Fellow of the Employee Benefit Research Institute, was an adjunct faculty member at the George Washington University, and has served on several boards. Jason earned a Ph.D. in social psychology from the University of Michigan and completed two postdoctoral programs (supported by the National Science Foundation and the National Institutes of Health). He is the proud father of twins and lives outside of Boston.


Filed under Boundaryless Information Flow™, Data management, Enterprise Architecture, Enterprise Transformation, Healthcare, Professional Development, Standards

How the Open Trusted Technology Provider Standard (O-TTPS) and Accreditation Will Help Lower Cyber Risk

By Andras Szakal, Vice President and Chief Technology Officer, IBM U.S. Federal

Changing business dynamics and enabling technologies

In 2008, IBM introduced the concept of a “Smarter Planet.” The Smarter Planet initiative focused, in part, on the evolution of globalization against the backdrop of changing business dynamics and enabling technologies. A key concept was the need for infrastructure to be tightly integrated, interconnected, and intelligent, thereby facilitating collaboration between people, government and businesses in order to meet the world’s growing appetite for data and automation. Since then, many industries and businesses have adopted this approach, including the ICT (information and communications technology) industries that support the global technology manufacturing supply chain.

Intelligent and interconnected critical systems

This transformation has infused technology into virtually all aspects of our lives, and involves, for example, government systems, the electric grid and healthcare. Most of these technological solutions are made up of hundreds or even thousands of components that are sourced from the growing global technology supply chain.

In the global technology economy, no single technology vendor or integrator can always provide a single-source solution. It is no longer cost competitive to design all of the electronic components, printed circuit boards, card assemblies, or other sub-assemblies in-house. Adapting to the changing marketplace, balancing response time against cost efficiency, drives the more widespread use of OEM (original equipment manufacturer) products.

As a result, most technology providers procure from a myriad of global component suppliers, who very often require similarly complex supply chains to source their own components. Every enterprise has a supplier network, each of its suppliers has a supply chain network, and these sub-tier suppliers have supply chain networks of their own. The resulting technology supply chain thus manifests as a network of integrated suppliers.
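The multi-tier structure described above can be sketched as a walk over a supplier graph: even modest fan-out at each tier quickly produces a large network, which is why supply chain assurance has to scale. The company names and graph below are hypothetical, chosen only to illustrate the traversal.

```python
# Hypothetical multi-tier supplier graph: each entry maps a company
# to its direct component suppliers.
SUPPLIERS = {
    "integrator": ["vendor_a", "vendor_b"],
    "vendor_a": ["board_shop", "chip_maker"],
    "vendor_b": ["chip_maker", "assembly_house"],
    "board_shop": [],
    "chip_maker": ["foundry"],
    "assembly_house": [],
    "foundry": [],
}

def full_supply_network(root: str) -> set:
    """All direct and sub-tier suppliers reachable from `root`."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        for sub in SUPPLIERS.get(node, []):
            if sub not in seen:
                seen.add(sub)
                stack.append(sub)
    return seen

print(sorted(full_supply_network("integrator")))
```

The integrator buys directly from only two vendors, yet its full supply network contains six companies; real technology supply chains repeat this expansion across many more tiers.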

Increasingly, the critical systems of the planet — telecommunications, banking, energy and others — depend on and benefit from the intelligence and interconnectedness enabled by existing and emerging technologies. As evidence, one need only look to the increase in enterprise mobile applications and BYOD strategies to support corporate and government employees.

Cybersecurity by design: Addressing risk in a sustainable way across the ecosystem

Whether these systems are trusted by the societies they serve depends in part on whether the technologies incorporated into them are fit for the purpose they are intended to serve. Fit for purpose is manifested in two essential ways:

- Does the product meet essential functional requirements?
- Has the product or component been produced by a trustworthy provider?

Of course, the leaders or owners of these systems have to do their part to achieve security and safety: e.g., to install, use and maintain technology appropriately, and to pay attention to people and process aspects such as insider threats. Cybersecurity considerations must be addressed in a sustainable way from the get-go, by design, and across the whole ecosystem — not after the fact, or in just one sector or another, or in reaction to crisis.

Assuring the quality and integrity of mission-critical technology

In addressing the broader cybersecurity challenge, however, buyers of mission-critical technology naturally seek reassurance as to the quality and integrity of the products they procure. In our view, the fundamentals of the institutional response to that need are similar to those that have worked in prior eras and in other industries — like food.

The very process of manufacturing technology is not immune to cyber-attack. Supply chain attacks are typically motivated by monetary gain: their goals are to inflict massive economic damage in pursuit of global economic advantage, or to seed targets with malware that gives attackers unfettered access.

It is for this reason that the global technology manufacturing industry must establish practices that mitigate this risk, both by raising the cost barriers to launching such attacks and by increasing the likelihood that they are caught before their effects become irreversible. As these threats evolve, the global ICT industry must deploy enhanced security through advanced automated cyber intelligence analysis. As critical infrastructure becomes more automated, integrated and essential to critical functions, the technology supply chain that surrounds it must be treated as a principal theme of the overall global security and risk mitigation strategy.

A global, agile, and scalable approach to supply chain security

Certainly, the manner in which technologies are invented, produced, and sold demands a global, agile, and scalable approach to supply chain assurance if the desired results are to be achieved. Any technology supply chain security standard that hopes to be widely adopted must be flexible and country-agnostic. The very nature of the global supply chain (massively segmented and diverse) requires an approach that provides practicable guidance while avoiding being overly prescriptive. Such an approach requires the aggregation of industry practices that have proven beneficial and effective at mitigating risk.

The OTTF (The Open Group Trusted Technology Forum) is an increasingly recognized and promising industry initiative to establish best practices to mitigate the risk of technology supply chain attack. Facilitated by The Open Group, a recognized international standards and certification body, the OTTF is working with governments and industry worldwide to create vendor-neutral open standards and best practices that can be implemented by anyone. Current membership includes a list of the most well-known technology vendors, integrators, and technology assessment laboratories.

The benefits of O-TTPS for governments and enterprises

IBM is currently a member of the OTTF and has been honored to hold the Chair for the last three years.  Governments and enterprises alike will benefit from the work of the OTTF. Technology purchasers can use the Open Trusted Technology Provider™ Standard (O-TTPS) and Framework best-practice recommendations to guide their strategies.

A wide range of technology vendors can use O-TTPS approaches to build security and integrity into their end-to-end supply chains. The first version of the O-TTPS is focused on mitigating the risk of maliciously tainted and counterfeit technology components or products. Note that a maliciously tainted product is one that has been produced by the provider and acquired through reputable channels but has been tampered with maliciously. A counterfeit product is one produced other than by or for the provider, or supplied through a non-reputable channel, and represented as legitimate. The OTTF is currently working on a program that will accredit technology providers who conform to the O-TTPS. IBM expects to complete pilot testing of the program by 2014.

IBM has actively supported the formation of the OTTF and the development of the O-TTPS for several reasons. These include but are not limited to the following:

- The Forum was established within a trusted and respected international standards body, The Open Group.
- The Forum was founded, in part, through active participation by governments in a true public-private partnership.
- The OTTF membership includes some of the most mature and trusted commercial technology manufacturers and vendors, and a primary objective of the OTTF has been harmonization with other standards groups such as ISO (International Organization for Standardization) and Common Criteria.

The O-TTPS defines a framework of organizational guidelines and best practices that enhance the security and integrity of COTS ICT. The first version of the O-TTPS is focused on mitigating certain risks of maliciously tainted and counterfeit products within the technology development / engineering lifecycle. These best practices are equally applicable for systems integrators; however, the standard is intended to primarily address the point of view of the technology manufacturer.

O-TTPS requirements

The O-TTPS requirements are divided into three categories:

1. Development / Engineering Process and Method
2. Secure Engineering Practices
3. Supply Chain Security Practices

The O-TTPS is intended to establish a normalized set of criteria against which a technology provider, component supplier, or integrator can be assessed. The standard is divided into categories that define best practices for engineering development practices, secure engineering, and supply chain security and integrity intended to mitigate the risk of maliciously tainted and counterfeit components.
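The assessment of a provider against a normalized set of criteria lends itself to a simple self-assessment structure. The three category names below come from the standard as listed above; the individual practice entries and their status values are hypothetical placeholders for illustration, not the O-TTPS's actual requirement text.

```python
# Hypothetical self-assessment sketch against the three O-TTPS
# requirement categories. Practice names are placeholders only.
ASSESSMENT = {
    "Development / Engineering Process and Method": {
        "documented lifecycle": True,
        "configuration management": True,
    },
    "Secure Engineering Practices": {
        "threat analysis": True,
        "vulnerability response": False,  # a gap to remediate before accreditation
    },
    "Supply Chain Security Practices": {
        "supplier vetting": True,
        "counterfeit controls": True,
    },
}

def gaps(assessment: dict) -> list:
    """Return (category, practice) pairs not yet satisfied."""
    return [(category, practice)
            for category, practices in assessment.items()
            for practice, met in practices.items() if not met]

print(gaps(ASSESSMENT))
```

A structure like this also supports the pre-assessment approach described later in the article: enumerating gaps before formally entering accreditation shows where remediation is needed.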

The accreditation program

As part of the process for developing the accreditation criteria and policy, the OTTF established a pilot accreditation program. The purpose of the pilot was to take a handful of companies through the accreditation process and remediate any potential process or interpretation issues. IBM participated in the O-TTPS accreditation pilot to accredit a very significant segment of its software product portfolio: the Application Infrastructure Middleware Division (AIM), which includes the flagship WebSphere product line. The AIM pilot started in mid-2013, completed in the first week of 2014, and was formally recognized as accredited in the first week of February 2014.

IBM is currently leveraging the value of the O-TTPS and working to accredit additional development organizations. Some of the lessons learned during the IBM AIM initial O-TTPS accreditation include:

- Conduct a pre-assessment against the O-TTPS before formally entering accreditation. This allows for remediation of any gaps and reduces potential assessment costs and project schedule.
- Start with a segment of your development portfolio that has mature secure engineering practices and processes. This helps an organization address accreditation requirements and facilitates interactions with the third-party lab.
- Use your first successful O-TTPS accreditation to create templates that help drive data gathering and validate practices, establishing a repeatable process as your organization undertakes additional accreditations.

Andras Szakal, VP and CTO, IBM U.S. Federal, is responsible for IBM’s industry solution technology strategy in support of the U.S. Federal customer. Andras was appointed IBM Distinguished Engineer and Director of IBM’s Federal Software Architecture team in 2005. He is an Open Group Distinguished Certified IT Architect, IBM Certified SOA Solution Designer and a Certified Secure Software Lifecycle Professional (CSSLP).  Andras holds undergraduate degrees in Biology and Computer Science and a Masters Degree in Computer Science from James Madison University. He has been a driving force behind IBM’s adoption of government IT standards as a member of the IBM Software Group Government Standards Strategy Team and the IBM Corporate Security Executive Board focused on secure development and cybersecurity. Andras represents the IBM Software Group on the Board of Directors of The Open Group and currently holds the Chair of the IT Architect Profession Certification Standard (ITAC). More recently, he was appointed chair of The Open Group Trusted Technology Forum and leads the development of The Open Trusted Technology Provider Framework.


Filed under Accreditations, Cybersecurity, government, O-TTF, O-TTPS, OTTF, RISK Management, Standards, supply chain, Supply chain risk

Q&A with Allen Brown, President and CEO of The Open Group

By The Open Group

Last month, The Open Group hosted its San Francisco 2014 conference themed “Toward Boundaryless Information Flow™.” Boundaryless Information Flow has been the pillar of The Open Group’s mission since 2002 when it was adopted as the organization’s vision for Enterprise Architecture. We sat down at the conference with The Open Group President and CEO Allen Brown to discuss the industry’s progress toward that goal and the industries that could most benefit from it now as well as The Open Group’s new Dependability through Assuredness™ Standard and what the organization’s Forums are working on in 2014.

The Open Group adopted Boundaryless Information Flow as its vision in 2002, and the theme of the San Francisco Conference has been “Towards Boundaryless Information Flow.” Where do you think the industry is at this point in progressing toward that goal?

Well, it’s progressing reasonably well but the challenge is, of course, when we established that vision back in 2002, life was a little less complex, a little bit less fast moving, a little bit less fast-paced. Although organizations are improving the way that they act in a boundaryless manner – and of course that changes by industry – some industries still have big silos and stovepipes, they still have big boundaries. But generally speaking we are moving and everyone understands the need for information to flow in a boundaryless manner, for people to be able to access and integrate information and to provide it to the teams that they need.

One of the keynotes on Day One focused on the opportunities within the healthcare industry and The Open Group recently started a Healthcare Forum. Do you see Healthcare industry as a test case for Boundaryless Information Flow and why?

Healthcare is one of the verticals that we’ve focused on. And it is not so much a test case as an area that absolutely seems to need information to flow in a boundaryless manner, so that everyone involved – from the patient through the administrator through the medical teams – has access to the right information at the right time. We know that in many situations there are shifts of medical teams, and from one medical team to another they don’t have access to the same information. Information isn’t easily shared between medical doctors, hospitals and payers. What we’re trying to do is focus on the needs of the patient and improve the information flow so that you get better outcomes for the patient.

Are there other industries where this vision might be enabled sooner rather than later?

I think that we’re already making significant progress in what we call the Exploration, Mining and Minerals industry. Our EMMM™ Forum has produced an industry-wide model that is being adopted throughout that industry. We’re also looking at whether we can have an influence in the airline industry, automotive industry, manufacturing industry. There are many, many others, government and retail included.

The plenary on Day Two of the conference focused on The Open Group’s Dependability through Assuredness standard, which was released last August. Why is The Open Group looking at dependability and why is it important?

Dependability is ultimately what you need from any system. You need to be able to rely on that system to perform when needed. Systems are becoming more complex, they’re becoming bigger. We’re not just thinking about the things that arrive on the desktop, we’re thinking about systems like the barriers at subway stations or Tube stations, we’re looking at systems that operate any number of complex activities. And they bring an awful lot of things together that you have to rely upon.

Now in all of these systems, what we’re trying to do is to minimize the amount of downtime, because downtime can result in financial loss or, at worst, the loss of human life, and we’re trying to focus on that. What is interesting about the Dependability through Assuredness Standard is that it brings together so many other aspects of what The Open Group is working on. Obviously the architecture is at the core, so it’s critical that there’s an architecture and that we understand the requirements of that system. It’s also critical that we understand the risks, so that fits in with the work of the Security Forum and the work that they’ve done on Risk Analysis and Dependency Modeling. Out of the dependency modeling we can get the use cases, so that we can understand where the vulnerabilities are, what action has to be taken if we identify a vulnerability, or what action needs to be taken in the event of a failure of the system. If we do that, and assign accountability to people for who will do what by when in the event of an anomaly being detected or a failure happening, we can actually minimize that downtime or remove it completely.

Now the other great thing about this is that it’s not only a focus on the architecture for the actual system development – as the system changes over time, requirements change, legislation changes that might affect it, external changes, all of that goes into that system – but there is also another cycle within that system that deals with failure, analyzes it and makes sure it doesn’t happen again. There have been so many instances of failure recently. In the UK, for example, a bank was recently unable to process debit cards or credit cards for customers for about three or four hours. That was probably caused by work done on a routine basis over a weekend. But if Dependability through Assuredness had been in place, that could have been averted; it could have saved an awful lot of difficulty for an awful lot of people.

How does the Dependability through Assuredness Standard also move the industry toward Boundaryless Information Flow?

It’s part of it. It’s critical that with big systems the information has to flow. But this is not so much the information but how a system is going to work in a dependable manner.

Business Architecture was another featured topic in the San Francisco plenary. What role can Business Architecture play in enterprise transformation vis-à-vis Enterprise Architecture as a whole?

A lot of people in the industry are talking about Business Architecture right now and trying to focus on it as a separate discipline. We see it as a fundamental part of Enterprise Architecture. In fact, there are three legs to Enterprise Architecture: there’s Business Architecture; there’s the need for business analysts, who are critical to supplying the information; and then there are the solutions architects and the other architects – data architects, applications architects and so on – that are needed. So those three legs are needed.

We find that there are two or three different types of Business Architect. There are those that use the analysis to understand what the business is doing so that they can inform the solutions architects and other architects in the development of solutions. There are those that are more integrated with the business, who can understand what is going on and provide input into how that might be improved through technology. And there are those that can go a step further and say: here are the advances in technology, and here are the opportunities for advancing the competitiveness of our organization.

What are some of the other key initiatives that The Open Group’s forum and work groups will be working on in 2014?

That kind of question is like when you’ve won an award – you’ve got to thank your friends, so apologies to anyone that I leave out. Let me start alphabetically with the Architecture Forum. The Architecture Forum is obviously working on the evolution of TOGAF®; they’re also working on the harmonization of TOGAF with ArchiMate®, and they have a number of projects within that – Business Architecture, of course, is one of the projects going on in the Architecture space. The ArchiMate Forum is pushing ahead with ArchiMate. They’ve got two interesting activities going on at the moment. One is called ArchiMetals, which is going to be a sister publication to the ArchiSurance case study: where ArchiSurance provides an example of how ArchiMate is used in the insurance industry, ArchiMetals is going to be used in a manufacturing context, so there will be a whitepaper on that, and there will be examples and artifacts that we can use. They’re also working on an ArchiMate standard for interoperability between modeling tools. There are four tools that are accredited and certified by The Open Group right now, and we’re looking for that interoperability to help organizations that have multiple tools, as many of them do.

Going down the alphabet, there’s DirecNet™. Not many people know about DirecNet, but it is work that we do around the U.S. Navy, developing standards for long-range, high-bandwidth mobile networking. Then we can go to the FACE™ Consortium, the Future Airborne Capability Environment. The FACE Consortium is working on the next version of its standard and toward accreditation and a certification program, and the uptake of that through procurement is absolutely amazing – we’re thrilled about that.

Healthcare we’ve talked about. Then there’s The Open Group Trusted Technology Forum, where they’re working on how we can trust the supply chain for developed systems. They’ve released the Open Trusted Technology Provider™ Standard (O-TTPS) Accreditation Program, which was launched this week, and we already have one accredited vendor and two certified assessment labs. That is really exciting because now we’ve got a way of helping any organization that has large complex systems developed through a global supply chain to make sure that it can trust its supply chain. And that is going to be invaluable to many industries, but also to the safety of citizens and the infrastructure of many countries. The other part of the O-TTPS story is that we are planning to move the standard toward ISO standardization shortly.

The next one moving down the list would be Open Platform 3.0™. This is a really exciting part of Boundaryless Information Flow, it really is. This is about the convergence of SOA, Cloud, Social, Mobile, the Internet of Things and Big Data – bringing all of those activities together is really something that is critical right now, and that we need to focus on. In the different areas, some of our Cloud Computing standards have already gone to ISO and been adopted by ISO, and we’re working right now on the next products that are going to move through. We have a governance standard in process, and an ecosystem standard has recently been published. In the area of Big Data there’s a whitepaper that’s 25 percent complete, and there’s also a lot of work on the definition of what Open Platform 3.0 is – this week the members have been working on trying to define Open Platform 3.0. One of the really interesting activities that’s gone on is that the members of the Open Platform 3.0 Forum have produced something like 22 different use cases, and they’re really good. They’re concise and they’re precise, and they cover a number of different industries, including healthcare and others. The next stage is to look at those and work on the ROI of those use cases – the monetization, the value from them – and that’s really exciting; I’m looking forward to peeking at that from time to time.

The Real-Time and Embedded Systems (RTES) Forum is next. Real-Time is where we incubated the Dependability through Assuredness Framework, and that work is continuing to develop, which is really good. The core focus of the RTES Forum is high assurance systems. They’re doing some work with ISO on that, and a lot of work in other areas such as multicore, and, of course, they have a number of EC projects in which we’re partnering with others in the EC around RTES.

The Security Forum, as I mentioned earlier, has done a lot of work on risk and dependability. They have not only their standards for the Risk Taxonomy and Risk Analysis, but they’ve now also developed the Open FAIR Certification for People, which is based on those two standards. And we’re already starting to see people being trained and certified under that Open FAIR Certification Program that the Security Forum developed.

A lot of other activities are going on. Like I said, I probably left a lot of things out, but I hope that gives you a flavor of what’s going on in The Open Group right now.

The Open Group will be hosting a summit in Amsterdam May 12-14, 2014. What can we look forward to at that conference?

In Amsterdam we have a summit – that’s going to bring together a lot of things, and it’s going to be a bigger conference than the one we had here. We’ve got a lot going on across all of our activities, and we’re going to bring together top-level speakers, so we’re looking forward to some interesting work during that week.


Filed under ArchiMate®, Boundaryless Information Flow™, Business Architecture, Conference, Cybersecurity, EMMM™, Enterprise Architecture, FACE™, Healthcare, O-TTF, RISK Management, Standards, TOGAF®