Tag Archives: Open Group

Secure Integration of Convergent Technologies – a Challenge for Open Platform 3.0™

By Dr. Chris Harding, The Open Group

The results of The Open Group Convergent Technologies survey point to secure integration of the technologies as a major challenge for Open Platform 3.0. This and other input forms the basis for the definition of the platform, which was discussed at The Open Group Conference in London.

Survey Highlights

Here are some of the highlights from The Open Group Convergent Technologies survey.

  • 95% of respondents felt that the convergence of technologies such as social media, mobility, cloud, big data, and the Internet of things represents an opportunity for business.
  • Mobility currently has the greatest take-up of these technologies, and the Internet of things has the least.
  • 84% of those from companies creating solutions want to deal with two or more of the technologies in combination.
  • Developing the understanding of the technologies by potential customers is the first problem that solution creators must overcome. This is followed by integrating with products, services and solutions from other suppliers, and using more than one technology in combination.
  • Respondents saw security, vendor lock-in, integration and regulatory compliance as the main problems for users of software that enables use of these convergent technologies for business purposes.
  • When users are considered separately from other respondents, security and vendor lock-in show particularly strongly as issues.

The full survey report is available at: https://www2.opengroup.org/ogsys/catalog/R130

Open Platform 3.0

Analysts forecast that convergence of technical phenomena including mobility, cloud, social media, and big data will drive the growth in use of information technology through 2020. Open Platform 3.0 is an initiative that will advance The Open Group vision of Boundaryless Information Flow™ by helping enterprises to use these technologies.

The survey confirms the value of an open platform to protect users of these technologies from vendor lock-in. It also shows that security is a key concern that must be addressed, that the platform must make the technologies easy to use, and that it must enable them to be used in combination.

Understanding the Requirements

The Open Group is conducting other work to develop an understanding of the requirements of Open Platform 3.0. This includes:

  • The Open Platform 3.0 Business Scenario, which was recently published and is available from https://www2.opengroup.org/ogsys/catalog/R130
  • A set of business use cases, currently in development
  • A high-level round-table meeting to gain the perspective of CIOs, who will be key stakeholders.

These requirements inputs have been part of the discussion at The Open Group Conference, which took place in London this week. Monday’s keynote presentation by Andy Mulholland, former Global CTO at Capgemini, on “Just Exactly What Is Going on in Business and Technology?” included the conclusions from the round-table meeting. This week’s presentation and panel discussion on the requirements for Open Platform 3.0 covered all the inputs.

Delivering the Platform

Review of the inputs in the conference was followed by a members meeting of the Open Platform 3.0 Forum, to start developing the architecture of Open Platform 3.0, and to plan the delivery of the platform definition. The aim is to have a snapshot of the definition early in 2014, and to deliver the first version of the standard a year later.

Meeting the Challenge

Open Platform 3.0 will be crucial to establishing openness and interoperability in the new generation of information technologies. This is of the first importance for everyone in the IT industry.

Following the conference, there will be an opportunity for everyone to input material and ideas for the definition of the platform. If you want to be part of the community that shapes the definition, to work on it with like-minded people in other companies, and to gain early insight into what it will be, then your company must join the Open Platform 3.0 Forum. (For more information on this, contact Chris Parnell – c.parnell@opengroup.org.)

Providing for secure integration of the convergent technologies, and meeting the other requirements for Open Platform 3.0, will be a difficult but exciting challenge. I’m looking forward to continuing to tackle the challenge with the Forum members.

Dr. Chris Harding

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.



Gaining Dependability Across All Business Activities Requires Standard of Standards to Tame Dynamic Complexity, Says The Open Group CEO

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here

Hello, and welcome to a special BriefingsDirect Thought Leadership Interview series, coming to you in conjunction with The Open Group Conference on July 15, in Philadelphia.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these discussions on enterprise transformation in the finance, government, and healthcare sectors.

We’re here now with the President and CEO of The Open Group, Allen Brown, to explore the increasingly essential role of standards, in an undependable, unpredictable world. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Welcome back, Allen.

Allen Brown: It’s good to be here, Dana.

Gardner: What are the environmental variables that many companies are facing now as they try to improve their businesses and assess the level of risk and difficulty? It seems like so many moving targets.

 Brown: Absolutely. There are a lot of moving targets. We’re looking at a situation where organizations are having to put in increasingly complex systems. They’re expected to make them highly available, highly safe, highly secure, and to do so faster and cheaper. That’s kind of tough.

Gardner: One of the ways that organizations have been working towards a solution is to have a standardized approach, perhaps some methodologies, because if all the different elements of their business approach this in a different way, we don’t get too far too quickly, and it can actually be more expensive.

Perhaps you could paint for us the vision of an organization like The Open Group in terms of helping organizations standardize and be a little bit more thoughtful and proactive towards these changed elements?

Brown: With the vision of The Open Group, the headline is “Boundaryless Information Flow.” That was established back in 2002, at a time when organizations were breaking down the stovepipes or the silos within and between organizations and getting people to work together across functions. They found, having done that, or having made some progress towards it, that the applications and systems were built for those silos. So how can we provide integrated information for all those people?

As we have moved forward, those boundaryless systems have become bigger and much more complex. Now, boundarylessness and complexity are giving everyone different types of challenges. Many of the forums or consortia that make up The Open Group are all tackling it from their own perspective, and it's all coming together very well.

We have got something like the Future Airborne Capability Environment (FACE) Consortium, which is a managed consortium of The Open Group focused on military aviation. In that world, they're dealing with issues like weapons systems.

New weapons

Over time, building similar weapons is going to be more expensive; inflation happens. But the changing nature of warfare is such that you've got a situation where you've got to produce new weapons. You have to produce them quickly and you have to produce them inexpensively.

So how can we have standards that make for more plug and play? How can the avionics within the cockpit of whatever airborne vehicle be more interchangeable, so that they can be adapted more quickly and do things faster and at lower cost?

After all, cost is a major pressure on government departments right now.

We've also got the challenges of the supply chain. Because of the pressure on costs, it's critical that large, complex systems are developed using a global supply chain. It's impossible to do it all domestically at a competitive cost. Given that, countries around the world, including the US and China, are all concerned about what they're putting into their complex systems, which may carry tainted or malicious code or counterfeit products.

The Open Group Trusted Technology Forum (OTTF) provides a standard that ensures that, at each stage along the supply chain, we know that what’s going into the products is clean, the process is clean, and what goes to the next link in the chain is clean. And we’re working on an accreditation program all along the way.

We're also in a world where, when we mention security, everyone is concerned about being attacked, whether it's cybersecurity or other areas of security, and we've got to concern ourselves with all of those along the way.

Our Security Forum is looking at how we build those things out. The big thing about large, complex systems is that they’re large and complex. If something goes wrong, how can you fix it in a prescribed time scale? How can you establish what went wrong quickly and how can you address it quickly?

If you've got large, complex systems that fail, it can mean loss of human life, as it did with the BP Deepwater Horizon oil disaster or with the Space Shuttle Challenger. Or the cost could be financial. In many organizations, when something goes wrong, you end up giving away service.

An example that we might use is at a railway station where, if the barriers don’t work, the only solution may be to open them up and give free access. That could be expensive. And you can use that analogy for many other industries, but how can we avoid that human or financial cost in any of those things?

A couple of years after the Space Shuttle Challenger disaster, a number of criteria were laid down for making sure you had dependable systems, you could assess risk, and you could know that you would mitigate it.

What The Open Group members are doing is looking at how you can get dependability and assuredness through different systems. Our Security Forum has done a couple of standards that have got a real bearing on this. One is called Dependency Modeling, and you can model out all of the dependencies that you have in any system.

Simple analogy

A very simple analogy is that if you are going on a road trip in a car, you’ve got to have a competent driver, have enough gas in the tank, know where you’re going, have a map, all of those things.

What can go wrong? You can assess the risks. You may run out of gas or you may not know where you’re going, but you can mitigate those risks, and you can also assign accountability. If the gas gauge is going down, it’s the driver’s accountability to check the gauge and make sure that more gas is put in.

We're trying to get that same sort of thinking through to these large, complex systems. What you're looking to do, as you develop or evolve large, complex systems, is to build in this accountability, build in understanding of the dependencies and of the assurance cases that you need, and have ways of identifying anomalies early, to prevent anything from failing. If something does fail, you want to minimize the stoppage and, at the same time, minimize the cost and the impact, and, more importantly, make sure that that failure never happens again in that system.

The Security Forum has done the Dependency Modeling standard. They have also provided us with the Risk Taxonomy. That’s a separate standard that helps us analyze risk and go through all of the different areas of risk.
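To make the road-trip analogy concrete, here is a minimal dependency-model sketch in Python. It is illustrative only: the class and field names are invented for the example, not the notation defined by the Dependency Modeling standard itself.

```python
# A toy dependency model in the spirit of the road-trip analogy above; the
# class and field names are invented, not the O-DM standard's own notation.

from dataclasses import dataclass, field

@dataclass
class Dependency:
    name: str           # what the goal depends on, e.g. "enough gas in the tank"
    risk: str           # what can go wrong
    mitigation: str     # how that risk is reduced
    accountable: str    # who has accepted accountability for it
    children: list = field(default_factory=list)

trip = Dependency(
    "complete the road trip", "the trip fails",
    "decompose into the sub-dependencies below", "driver",
    children=[
        Dependency("competent driver", "driver is unfit", "rest before driving", "driver"),
        Dependency("enough gas in the tank", "tank runs dry", "watch the gauge and refill", "driver"),
        Dependency("known route", "we get lost", "carry a map", "navigator"),
    ],
)

def walk(dep, depth=0):
    """Print every dependency with its risk and its accountable party."""
    print("  " * depth + f"{dep.name} | risk: {dep.risk} | accountable: {dep.accountable}")
    for child in dep.children:
        walk(child, depth + 1)

walk(trip)
```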

Now, the Real-time & Embedded Systems Forum has produced Dependability through Assuredness, a standard of The Open Group that brings all of these things together. We've had a wonderful international endeavor on this, bringing in a lot of work from Japan and working with folks in the US and other parts of the world. It's been a unique activity.

Dependability through Assuredness depends upon having two interlocked cycles. The first is a Change Accommodation Cycle that says that, as you look at requirements, you build out the dependencies, you build out the assurance cases for those dependencies, and you update the architecture. Everything has to start with architecture now.

You build in accountability, and accountability, importantly, has to be accepted. You can’t just dictate that someone is accountable. You have to have a negotiation. Then, through ordinary operation, you assess whether there are anomalies that can be detected and fix those anomalies by new requirements that lead to new dependabilities, new assurance cases, new architecture and so on.

The other cycle that’s critical in this, though, is the Failure Response Cycle. If there is a perceived failure or an actual failure, there is understanding of the cause, prevention of it ever happening again, and repair. That goes through the Change Accommodation Cycle as well, to make sure that we update the requirements, the assurance cases, the dependability, the architecture, and the accountability.

So the plan is that with a dependable system through that assuredness, we can manage these large, complex systems much more easily.
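As an illustration of the two interlocked cycles Brown describes, here is a toy rendering in Python. The function names, state fields, and stand-in steps are all invented; the standard defines these cycles at the process level, not as code.

```python
# A toy rendering of the two interlocked cycles; names are invented.

def change_accommodation_cycle(state, new_requirements):
    """Requirements -> dependencies -> assurance cases -> architecture -> accountability."""
    state["requirements"].extend(new_requirements)
    state["dependencies"] = [f"dependency for {r}" for r in state["requirements"]]
    state["assurance_cases"] = [f"assurance case for {d}" for d in state["dependencies"]]
    state["architecture_version"] += 1
    # Accountability must be negotiated and accepted, not dictated.
    state["accountability_accepted"] = True
    return state

def failure_response_cycle(state, failure):
    """Understand the cause, prevent recurrence, repair, then feed the lesson
    back through the change accommodation cycle."""
    cause = f"root cause of {failure}"
    prevention = f"requirement that prevents {cause}"
    state["repaired"].append(failure)
    return change_accommodation_cycle(state, [prevention])

system = {
    "requirements": ["initial requirement"],
    "dependencies": [],
    "assurance_cases": [],
    "architecture_version": 0,
    "accountability_accepted": False,
    "repaired": [],
}
system = change_accommodation_cycle(system, [])
system = failure_response_cycle(system, "ticket barrier fault")
print(system["architecture_version"], system["repaired"])
```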

Gardner: Allen, many of The Open Group activities have been focused at the enterprise architect or business architect level. With these risk and security issues, you're also focusing on chief information security officers or governance, risk, and compliance (GRC) officials or administrators. It sounds as if the Dependability through Assuredness standard shoots a little higher. Is this something that a board-level mentality or leadership should be thinking about, and is this something that reports to them?

Board-level issue

Brown: In an organization, risk is a board-level issue, security has become a board-level issue, and so has organization design and architecture. They’re all up at that level. It’s a matter of the fiscal responsibility of the board to make sure that the organization is sustainable, and to make sure that they’ve taken the right actions to protect their organization in the future, in the event of an attack or a failure in their activities.

The risks to an organization are financial and reputational, and those risks can be very real. So, yes, they should be up there. Interestingly, when we're looking at areas like business architecture, sometimes that might be part of the IT function, but very often now we're seeing it reporting through the business lines. Even in governments around the world, the business architects are very often reporting up to business heads.

Gardner: Here in Philadelphia, you’re focused on some industry verticals, finance, government, health. We had a very interesting presentation this morning by Dr. David Nash, who is the Dean of the Jefferson School of Population Health, and he had some very interesting insights about what’s going on in the United States vis-à-vis public policy and healthcare.

One of the things that jumped out at me was, at the end of his presentation, he was saying how important it was to have behavior modification as an element of not only individuals taking better care of themselves, but also how hospitals, providers, and even payers relate across those boundaries of their organization.

That brings me back to this notion that these standards are very powerful and useful, but without getting people to change, they don’t have the impact that they should. So is there an element that you’ve learned and that perhaps we can borrow from Dr. Nash in terms of applying methods that actually provoke change, rather than react to change?

Brown: Yes, change is a challenge for many people. Getting people to change is like leading a horse to water: will it drink? We've got to find methods of doing that.

One of the things about The Open Group standards is that they're pragmatic and practical standards. We've seen, in many of our standards, that where they apply to a product or service, there is a procurement pull-through. So with the FACE Consortium, for example, a $30 billion procurement means that this is real and true.

In the case of healthcare, Dr. Nash was talking about the need for boundaryless information sharing across the organizations. This is a major change and it’s a change to the culture of the organizations that are involved. It’s also a change to the consumer, the patient, and the patient advocates.

All of those will change over time. Some of that will be social change, where the change is expected and becomes a social norm. Some of it will come as people and generations develop. The younger generations are more comfortable questioning the authority they perceive in healthcare professionals, and also with modifying the behavior of those professionals.

The great thing about the healthcare service very often is that we have professionals who want to do a number of things. They want to improve the lives of their patients, and they also want to be able to do more with less.

Already a need

There's already a need. If you want to make any change, you have to create a need, but in healthcare there is already a pent-up need for change that people see. We can provide them with the tools and the standards that enable them to make that change, and standards are critically important, because everyone is then using the same language.

It’s much easier for people to apply the same standards if they are using the same language, and you get a multiplier effect on the rate of change that you can achieve by using those standards. But I believe that there is this pent-up demand. The need for change is there. If we can provide them with the appropriate usable standards, they will benefit more rapidly.

Gardner: Of course, measuring the progress with the standards approach helps as well. We can determine where we are along the path as either improvements are happening or not happening. It gives you a common way of measuring.

The other thing that was fascinating to me about Dr. Nash's discussion was that he was almost imploring the IT people in the crowd to come to the rescue. He's looking for the cavalry, and he really seemed to feel that IT, the data, the applications, the sharing, the collaboration, and what can happen across various networks all need to be brought into this.

How do we bring these worlds together? There is the policy world, where healthcare and population statisticians are doing great academic work, and then there is the whole IT world. Is this something that The Open Group can do — bridge these large, seemingly unrelated worlds?

Brown: At the moment, we have the capability of providing the tools for them to do that and the processes for them to do that. Healthcare is a very complex world with the administrators and the healthcare professionals. You have different grades of those in different places. Each department and each organization has its different culture, and bringing them together is a significant challenge.

In some of those processes, certainly, you start with understanding what it is you're trying to address. You start with the pain points, the challenges, the blockages, and how we can overcome those blockages. It's a way of bringing people together in workshops. TOGAF, a standard of The Open Group, has the business scenario method: bringing people together, building business scenarios, and understanding what people's pain points are.

As long as we can then follow through with the solutions and not disappoint people, there is the opportunity for doing that. The reality is that you have to do that in small areas at a time. We're not going to take the entire population of the United States, get everyone into a workshop, and work all together.

But you can start in pockets and then generate evangelists, proof points, and successful case studies. The work will then start emanating out to all other areas.

Gardner: It seems too that, with a heightened focus on vertical industries, there are lessons that could be learned in one vertical industry and perhaps applied to another. That also came out in some of the discussions around big data here at the conference.

The financial industry recognized the crucial role that data plays, made investments, and brought the constituencies of domain expertise in finance with the IT domain expertise in data and analysis, and came up with some very impressive results.

Do you see that what has been the case in something like finance is now making its way to healthcare? Is this an enterprise or business architect role that opens up more opportunity for those individuals as business and/or enterprise architects in healthcare? Why don’t we see more enterprise architects in healthcare?

Good folks

Brown: I don’t know. We haven’t run the numbers to see how many there are. There are some very competent enterprise architects within the healthcare industry around the world. We’ve got some good folks there.

The focus of The Open Group for the last couple of decades or so has always been on horizontal standards, standards that are applicable to any industry. Our focus is always about pragmatic standards that can be implemented and touched and felt by end-user consumer organizations.

Now, we’re seeing how we can make those even more pragmatic and relevant by addressing the verticals, but we’re not going to lose the horizontal focus. We’ll be looking at what lessons can be learned and what we can build on. Big data is a great example of the fact that the same kind of approach of gathering the data from different sources, whatever that is, and for mixing it up and being able to analyze it, can be applied anywhere.

The challenge with that, of course, is being able to capture it, store it, analyze it, and make some sense of it. You need the resources, the storage, and the capability of actually doing that. It’s not just a case of, “I’ll go and get some big data today.”

I do believe that there are lessons learned that we can move from one industry to another. I also believe that, since some geographic areas and some countries are ahead of others, there’s also a cascading of knowledge and capability around the world in a given time scale as well.

Gardner: Well great. I’m afraid we’ll have to leave it there. We’ve been talking about the increasingly essential role of standards in a complex world, where risk and dependability become even more essential. We have seen how The Open Group is evolving to meet these challenges through many of its activities and through many of the discussions here at the conference.

Please join me now in thanking our guest, Allen Brown, President and CEO of The Open Group. Thank you.

Brown: Thanks for taking the time to talk to us, Dana.



The Open Group Conference to Emphasize Healthcare as Key Sector for Ecosystem-Wide Interactions

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here

Dana Gardner: Hello, and welcome to a special BriefingsDirect Thought Leadership Interview series, coming to you in conjunction with The Open Group Conference on July 15, in Philadelphia. Registration to the conference remains open. Follow the conference on Twitter at #ogPHL.


I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these discussions on enterprise transformation in the finance, government, and healthcare sectors.

We're here now with a panel of experts to explore how new IT trends are empowering improvements, specifically in the area of healthcare. We'll learn how healthcare industry organizations are seeking large-scale transformation and some of the paths they're taking to realize it.

We’ll see how improved cross-organizational collaboration and such trends as big data and cloud computing are helping to make healthcare more responsive and efficient.

With that, please join me in welcoming our panel, Jason Uppal, Chief Architect and Acting CEO at clinicalMessage. Welcome, Jason.

Jason Uppal: Thank you, Dana.


Gardner: And we’re also joined by Larry Schmidt, Chief Technologist at HP for the Health and Life Sciences Industries. Welcome, Larry.

Larry Schmidt: Thank you.

Gardner: And also, Jim Hietala, Vice President of Security at The Open Group. Welcome back, Jim. [Disclosure: The Open Group and HP are sponsors of BriefingsDirect podcasts.]

Jim Hietala: Thanks, Dana. Good to be with you.

Gardner: Let’s take a look at this very interesting and dynamic healthcare sector, Jim. What, in particular, is so special about healthcare and why do things like enterprise architecture and allowing for better interoperability and communication across organizational boundaries seem to be so relevant here?

Hietala: There’s general acknowledgement in the industry that, inside of healthcare and inside the healthcare ecosystem, information either doesn’t flow well or it only flows at a great cost in terms of custom integration projects and things like that.

Fertile ground

From The Open Group's perspective, it seems that the healthcare industry and its ecosystem really are fertile ground for bringing to bear some of the enterprise architecture concepts that we work with at The Open Group, in order to improve not only how information flows, but ultimately how patient care occurs.

Gardner: Larry Schmidt, similar question to you. What are some of the unique challenges that are facing the healthcare community as they try to improve on responsiveness, efficiency, and greater capabilities?

Schmidt: There are several things that have not really kept up with what technology is able to do today.

For example, the whole concept of personal observation comes into play in what we would call “value chains” that exist right now between a patient and a doctor. We look at things like mobile technologies and want to be able to leverage them to provide additional observation of an individual, so that the doctor can make a more complete diagnosis of some sickness, or assess some medication that a person is on.

We want to be able to make that observation in real life, as opposed to having to take it in at the office, which is what typically winds up happening. I don't know about everybody else, but every time I go to see my doctor, I often get what's called white coat syndrome: my blood pressure goes up. But that's not giving the doctor an accurate reading from the standpoint of providing great observations.

Technology has advanced to the point where we can do that in real time using mobile and other technologies, yet that communication flow, that information flow, either doesn't exist today or, at best, doesn't move easily between doctor and patient.
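As a thought experiment, a standardized observation message of the kind Schmidt is calling for might look like the sketch below. The field names are hypothetical and do not follow any particular healthcare messaging standard.

```python
# A hypothetical observation message; field names are invented and do not
# follow any existing healthcare messaging standard.

import json
from datetime import datetime, timezone

observation = {
    "patient_id": "patient-123",                      # assumed identifier scheme
    "observation_type": "blood_pressure",
    "value": {"systolic_mmHg": 118, "diastolic_mmHg": 76},
    "captured_at": datetime.now(timezone.utc).isoformat(),
    "setting": "home",                                # taken outside the office,
                                                      # avoiding white-coat effects
    "device": "consumer-grade cuff",
}

# Serialized once, the same message could flow to the physician, the patient's
# record, and, with consent, other stakeholders in the ecosystem.
print(json.dumps(observation, indent=2))
```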


If you look at the ecosystem, as Jim offered, there are plenty of places that additional collaboration and communication can improve the whole healthcare delivery model.

That’s what we’re about. We want to be able to find the places where the technology has advanced, where standards don’t exist today, and just fuel the idea of building common communication methods between those stakeholders and entities, allowing us to then further the flow of good information across the healthcare delivery model.

Gardner: Jason Uppal, let's think about what architecture and methodologies, in addition to technology, can bring to bear here. Is there also a lag in process thinking in healthcare, as well as in technology adoption?

Uppal: I'm going to refer to a presentation that I watched by a very well-known surgeon from Harvard, Dr. Atul Gawande. His point was that, in the last 50 years, the medical industry has made great strides in identifying diseases, drugs, procedures, and therapies, but one thing that he was alluding to was that medicine forgot the cost: everything has a cost.

At what price?

Today, in his view, we can cure a lot of diseases and a lot of issues, but at what price? Can anybody actually afford it?


His view is that if healthcare is going to change and improve, it has to be outside of the medical industry. The tools that we have are better today, like collaborative tools that are available for us to use, and those are the ones that he was recommending that we need to explore further.

That is where enterprise architecture is a powerful methodology to use and say, “Let’s take a look at it from a holistic point of view of all the stakeholders. See what their information needs are. Get that information to them in real time and let them make the right decisions.”

Therefore, there is no reason for health information to be stuck in organizations. It could go where the patient and providers are, letting them make the best decision based on the best practices available to them, as opposed to having siloed information.

So enterprise-architecture methods are most suited for developing a very collaborative environment. Dr. Gawande was pointing out that, if healthcare is going to improve, it has to think about it not as medicine, but as healthcare delivery.


Gardner: And it seems that not only are there challenges in terms of technology adoption and even operating more like an efficient business in some ways. We also have very different climates from country to country, jurisdiction to jurisdiction. There are regulations, compliance, and so forth.

Going back to you, Larry, how important of an issue is that? How complex does it get because we have such different approaches to healthcare and insurance from country to country?

Schmidt: There are definitely complexities that occur based on the different insurance models and how healthcare is delivered across and between countries, but some of the basic and fundamental activities in the past that happened as a result of delivering healthcare are consistent across countries.

As Jason has offered, enterprise architecture can provide us the means to explore what the art of the possible might be today. It could allow us the opportunity to see how innovation can occur if we enable better communication flow between the stakeholders that exist with any healthcare delivery model in order to give us the opportunity to improve the overall population.

After all, that’s what this is all about. We want to be able to enable a collaborative model throughout the stakeholders to improve the overall health of the population. I think that’s pretty consistent across any country that we might work in.

Ongoing work

Gardner: Jim Hietala, maybe you could help us better understand what’s going on within The Open Group and, even more specifically, at the conference in Philadelphia. There is the Population Health Working Group and there is work towards a vision of enabling the boundaryless information flow between the stakeholders. Any other information and detail you could offer would be great. [Registration to the conference remains open. Follow the conference on Twitter at #ogPHL.]

Hietala: On Tuesday of the conference, we have a healthcare focus day. The keynote that morning will be given by Dr. David Nash, Dean of the Jefferson School of Population Health. He’ll give what’s sure to be a pretty interesting presentation, followed by a reactors’ panel, where we’ve invited folks from different stakeholder constituencies.


We are going to have clinicians there. We’re going to have some IT folks and some actual patients to give their reaction to Dr. Nash’s presentation. We think that will be an interesting and entertaining panel discussion.

For the balance of the day, in terms of the healthcare content, we have a workshop. Larry Schmidt is giving one of the presentations there, and Jason, myself, and some other folks from our working group are involved in helping to facilitate and carry out the workshop.

The goal of it is to look into healthcare challenges, desired outcomes, the extended healthcare enterprise, and the extended healthcare IT enterprise, and to gather the pain points that are out there around things like interoperability, to surface those and develop a work program coming out of this.


So we expect it to be an interesting day. If you're in the healthcare IT field, or just the healthcare field generally, it would definitely be a day well spent to check it out.

Gardner: Larry, you’re going to be talking on Tuesday. Without giving too much away, maybe you can help us understand the emphasis that you’re taking, the area that you’re going to be exploring.

Schmidt: I've titled the presentation “Remixing Healthcare through Enterprise Architecture.” Jason offered some thoughts as to why we want to bring the discipline of enterprise architecture to healthcare. My thoughts are that we want to make sure we understand how the collaborative model would work in healthcare, taking into consideration all the constituents and stakeholders that exist within the complete ecosystem of healthcare.

This is not just collaboration across the doctors, patients, and maybe the payers in a healthcare delivery model. This could be out as far as the drug companies and being able to get drug companies to a point where they can reorder their raw materials to produce new drugs in the case of an epidemic that might be occurring.

Real-time model

It would be a real-time model that allows us the opportunity to understand what's truly happening, both to an individual from a healthcare standpoint and to a country, or a region within a country, and so on. This remixing is the introduction of that concept of leveraging enterprise architecture into this collaborative model.

Then, I would like to talk about some of the technologies that I've had the opportunity to explore and what is available today. I believe we need some type of standardized messaging or collaboration model to further facilitate the ability of that technology to provide the value of healthcare delivery, or the betterment of healthcare, to individuals. I'll talk about that a little bit within my presentation and give some good examples.

It's really interesting. I just traveled from my company's home base back to my own home base, and I thought about something like the body scanners at the airport. I know some of those scanners are now being phased out of the airport security model, but could that possibly become an element of healthcare delivery? Every time your body is scanned, there's a possibility you can gather information about it and allow that to become part of your electronic medical record.


Hopefully, that was forward-thinking, but that kind of thinking is going to play into the art of the possible in what we're going to be doing, both in this presentation and in the workshop.

Gardner: Larry, we’ve been having some other discussions with The Open Group around what they call Open Platform 3.0™, which is the confluence of big data, mobile, cloud computing, and social.

One of the big issues today is this avalanche of data, the Internet of things, but also the Internet of people. It seems that the more work that's done to bring Open Platform 3.0 benefits to bear on business decisions, the more impactful it could be for sensors and other data that comes from patients, regardless of where they are, to reach a medical establishment, regardless of where it is.

So do you think we’re really on the cusp of a significant shift in how medicine is actually conducted?

Schmidt: I absolutely believe that. There is a lot of information available today that could be used in helping our population to be healthier. And it really isn’t only the challenge of the communication model that we’ve been speaking about so far. It’s also understanding the information that’s available to us to take that and make that into knowledge to be applied in order to help improve the health of the population.

As we explore this from an as-is model in enterprise architecture to something that we believe we can first enable through a great collaboration model, through standardized messaging and things like that, I believe we’re going to get into even deeper detail around how information can truly provide empowered decisions to physicians and individuals around their healthcare.

So it will carry forward into the big data and analytics challenges that we have talked about and currently are talking about with The Open Group.

Healthcare framework

Gardner: Jason Uppal, we’ve also seen how in other business sectors, industries have faced transformation and have needed to rely on something like enterprise architecture and a framework like TOGAF® in order to manage that process and make it something that’s standardized, understood, and repeatable.

It seems to me that healthcare can certainly use that, given the pace of change, but that the impact on healthcare could be quite a bit larger in terms of actual dollars. This is such a large part of the economy that even small incremental improvements can have dramatic effects when it comes to dollars and cents.

So is there a benefit to bringing enterprise architecture to healthcare that is larger and greater than in other sectors, because of these economics and issues of scale?

Uppal: That's a great way to think about this. In other industries, such as banking and insurance, the benefits of applying enterprise architecture may be easily measured in terms of dollars and cents, but healthcare is a fundamentally different economy and industry.

It’s not about dollars and cents. It’s about people’s lives, and loved ones who are sick, who could very easily be treated, if they’re caught in time and the right people are around the table at the right time. So this is more about human cost than dollars and cents. Dollars and cents are critical, but human cost is the larger play here.


Secondly, when we think about applying enterprise architecture to healthcare, we’re not talking about just the U.S. population. We’re talking about global population here. So whatever systems and methods are developed, they have to work for everybody in the world. If the U.S. economy can afford an expensive healthcare delivery, what about the countries that don’t have the same kind of resources? Whatever methods and delivery mechanisms you develop have to work for everybody globally.

That's one of the things that a methodology like TOGAF brings out: look at it from every stakeholder's point of view. Unless you have dealt with every stakeholder's concerns, you don't have an architecture; you have a system that's designed for a specific audience.

The cost is not the 18 percent of U.S. gross domestic product that healthcare represents. It's the human cost, which is many multiples of that. That is one of the areas where we could really start to think about how we affect that part of the economy, not the 18 percent of it but the larger part, to improve the health of the population, not only in North America but globally.

If that's the case, then the real impact on our greater world economy will come from improving population health, and population health is probably becoming the biggest problem in our economy.

We'll be testing these methods at a greater international level, as opposed to just at an organization and industry level. This is a much larger challenge. A methodology like TOGAF is proven, and it could be stressed and tested to that level. This is a great opportunity for us to apply our tools and science to a problem that is larger than just dollars. It's about humans.

All “experts”

Gardner: Jim Hietala, in some ways, we’re all experts on healthcare. When we’re sick, we go for help and interact with a variety of different services to maintain our health and to improve our lifestyle. But in being experts, I guess that also means we are witnesses to some of the downside of an unconnected ecosystem of healthcare providers and payers.

One of the things I’ve noticed in that vein is that I have to deal with different organizations that don’t seem to communicate well. If there’s no central process organizer, it’s really up to me as the patient to pull the lines together between the different services — tests, clinical observations, diagnosis, back for results from tests, sharing the information, and so forth.

Have you done any studies, or do you have anecdotal information, about how that boundaryless information flow would still be relevant, even with more of a centralized repository that all the players could draw on, a sort of collaborative team resource? I know that's worked in other industries. Is this not a perfect opportunity for that boundarylessness to be managed?

Hietala: I would say it is. We all have experiences with going to see a primary physician, maybe getting sent to a specialist, getting some tests done, and the boundaryless information that's flowing tends to be on paper, delivered by us as patients, in almost all cases.

So the opportunity to improve that situation is pretty obvious to anybody who’s been in the healthcare system as a patient. I think it’s a great place to be doing work. There’s a lot of money flowing to try and address this problem, at least here in the U.S. with the HITECH Act and some of the government spending around trying to improve healthcare.


You've got healthcare information exchanges that are starting to develop, and you've got lots of pain points for organizations in terms of trying to share information without standards that enable them to do it. It seems like a really great opportunity area in which to bring lots of improvement.

Gardner: Let's look for some examples of where this has been attempted and what the success brings about. I'll throw this out to anyone on the panel. Do you have any examples that you can point to, either named organizations or anecdotal use-case scenarios, of a better organization or an architectural approach leveraging IT efficiently and effectively, allowing data to flow, and putting in processes that are repeatable, centralized, organized, and understood? How does that work out?

Uppal: I'll give you an example. One of the things that happens when a patient is admitted to hospital is that they get what's called high-voltage care. There is staff around them 24×7. There are lots of people around, and every specialty that you can think of is available to them. So the patient, in about two or three days, starts to feel much better.

When that patient gets discharged, most of the time they get discharged to home. They go from very high-voltage care to next to no care. This is one of the areas where one of the organizations we work with is able to discharge the patient into a virtual team, instead of discharging them to the primary care doc, who may not receive any records from the hospital for several days. So if the patient is at home, the virtual team is available to them through their mobile phone 24×7.

Connect with provider

If, at 3 o’clock in the morning, the patient doesn’t feel right, instead of having to call an ambulance to go to hospital once again and get readmitted, they have a chance to connect with their care provider at that time and say, “This is what the issue is. What do you want me to do next? Is this normal for the medication that I am on, or this is something abnormal that is happening?”

When that information is available to a care provider who may not necessarily have been part of the care team when the patient was in the hospital, that quick, readily available information is key to keeping the person at home, as opposed to being readmitted to the hospital.

We all know that the cost of being in a hospital is 10 times more than it is being at home. But there’s also inconvenience and human suffering associated with being in a hospital, as opposed to being at home.

Those are some of the examples that we have, but they are very limited, because our current health ecosystem is very organization-specific, not patient- and provider-specific. This is an area where there is huge room for opportunity in healthcare delivery: thinking about health information not in the context of the organization where the patient is, but in a cloud, where there is an association between the patient, the provider, and the health information.


In the past, we used to have email within our own four walls. All of a sudden, with Gmail and Yahoo Mail, we have email available to us anywhere. A similar thing could happen for the healthcare record. It could sit somewhere in a cloud ecosystem, where it's securely protected and used only by the people who have been granted access to it.

Those are some of the examples where extending that model will bring immense value, not only in reducing the cost but in improving the quality of care.
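As an aside, the access model Jason describes can be sketched in a few lines of Python: the record is associated with the patient, and providers read it only under an explicit, revocable grant. The class and identifiers here are invented for illustration.

```python
# The record follows the patient-provider association, not the organization.
# Class and identifiers are invented for illustration.

class PatientRecord:
    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.entries = []        # clinical entries, e.g. discharge summaries
        self.granted = set()     # provider IDs the patient has authorized

    def grant(self, provider_id):
        self.granted.add(provider_id)

    def revoke(self, provider_id):
        self.granted.discard(provider_id)

    def read(self, provider_id):
        # Access requires an explicit grant from the patient.
        if provider_id not in self.granted:
            raise PermissionError(f"{provider_id} has no grant from {self.patient_id}")
        return list(self.entries)

record = PatientRecord("patient-123")
record.entries.append("discharge summary, 2013-07-10")
record.grant("virtual-team-night-shift")
print(record.read("virtual-team-night-shift"))
```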

Schmidt: Jason touched upon the home healthcare scenario and being able to provide touch points at home. Another place that we see evolving right now in the industry is the whole concept of the mobile office. Developing countries, as well as rural areas within developed countries, are actually getting rural hospitals and rural healthcare offices dropped in by helicopter, to allow the people who live in those communities the opportunity to talk to a doctor via satellite technologies and so on.

The whole concept of an architecture around, and being able to deal with, what truly ends up being telemedicine is something that we're seeing today. It would be wonderful if we could point to standards that allow us to facilitate both the communication protocols and the information flows in that type of setting.

Many corporations can jump on the bandwagon to help the rural communities get the healthcare information and capabilities that they need via the whole concept of telemedicine.

That's another area where enterprise architecture comes into play. Now that we see examples of that working in the industry today, I'm hoping that, as part of this working group, we'll get to the point where we're able to facilitate that much better, enabling innovation to occur for multiple companies via some of the architecture work we're planning to produce.

Single view

Gardner: It seems that we've come a long way on the business side in many industries in getting a single view of the customer, as it's called, through customer relationship management, big data, and spreading the analysis around among different data sources and types. This sounds like a perfect fit for a single view of the patient across their life and across their care spectrum, involving, of course, many different types of organizations. But the government also needs to have a role here.

Jim Hietala, at The Open Group Conference in Philadelphia, you’re focusing on not only healthcare, but finance and government. Regarding the government and some of the agencies that you all have as members on some of your panels, how well do they perceive this need for enterprise architecture level abilities to be brought to this healthcare issue?

Hietala: We've seen encouraging signs from folks in government about bringing this work to the forefront. There is a recognition that there needs to be better data flowing throughout the extended healthcare IT ecosystem, and I think generally they're supportive of initiatives like this to make that happen.

Gardner: Of course, having conferences like this, where you have cross-pollination between vertical industries, will perhaps allow some of the technical people to talk with some of the government people, and also to have a conversation with some of the healthcare people. That's where some of these ideas and some of the collaboration could be very powerful.


I’m afraid we’re almost out of time. We’ve been talking about an interesting healthcare transition, moving into a new phase or even era of healthcare.

Our panel of experts has been looking at some of the trends in IT and how they're empowering improvements in how healthcare can be more responsive and efficient. And we've seen how healthcare industry organizations can undertake large-scale transformation using cross-organizational collaboration, for example, and other such tools as big data, analytics, and cloud computing to help solve some of these issues.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference this July in Philadelphia. Registration to the conference remains open. Follow the conference on Twitter at #ogPHL, and you will hear more about healthcare and Open Platform 3.0, as well as enterprise transformation in the finance, government, and healthcare sectors.

With that, I’d like to thank our panel. We’ve been joined today by Jason Uppal, Chief Architect and Acting CEO at clinicalMessage. Thank you so much, Jason.

Uppal: Thank you, Dana.

Gardner: And also Larry Schmidt, Chief Technologist at HP for the Health and Life Sciences Industries. Thanks, Larry.

Schmidt: You bet, appreciate the time to share my thoughts. Thank you.

Gardner: And then also Jim Hietala, Vice President of Security at The Open Group. Thanks so much.

Hietala: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator throughout these thought leader interviews. Thanks again for listening and come back next time.



As Platform 3.0 ripens, expect agile access and distribution of actionable intelligence across enterprises, says The Open Group panel

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here

This latest BriefingsDirect discussion, leading into The Open Group Conference on July 15 in Philadelphia, brings together a panel of experts to explore the business implications of the current shift to so-called Platform 3.0.

Known as the new model through which big data, cloud, and mobile and social — in combination — allow for advanced intelligence and automation in business, Platform 3.0 has so far lacked standards or even clear definitions.

The Open Group and its community are poised to change that, and we're here now to learn more about how to leverage Platform 3.0 as more than an IT shift — and as a business game-changer. It will be a big topic at next week's conference.

The panel: Dave Lounsbury, Chief Technical Officer at The Open Group; Chris Harding, Director of Interoperability at The Open Group; and Mark Skilton, Global Director in the Strategy Office at Capgemini. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

This special BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference, which is focused on enterprise transformation in the finance, government, and healthcare sectors. Registration to the conference remains open. Follow the conference on Twitter at #ogPHL. [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: A lot of people are still wrapping their minds around this notion of Platform 3.0, something that is a whole greater than the sum of the parts. Why is this more than an IT conversation or a shift in how things are delivered? Why are the business implications momentous?

Lounsbury: Well, Dana, there are a lot of IT changes, or technical changes, going on that are bringing together a lot of factors. They're turning into this sort of supersaturated solution of ideas and possibilities, and into this emerging idea that this represents a new platform. I think it's a pretty fundamental change.


If you look at history, not just the history of IT, but all of human history, you see that step changes in societies and organizations are frequently driven by communication or connectedness. Think about the evolution of speech or the invention of the alphabet or movable-type printing. These technical innovations that we’re seeing are bringing together these vast sources of data about the world around us and doing it in real time.

Further, we're starting to see a lot of rapid evolution in how you turn data into information and present that information in a way that people can make decisions on. Given all that, we're starting to realize we're on the cusp of another step change in connectedness and awareness.

Fundamental changes

This really is going to drive some fundamental changes in the way we organize ourselves. Part of what The Open Group is doing, in trying to bring Platform 3.0 together, is to try to get ahead of this and make sure that we understand not just what technical standards are needed, but how businesses will need to adapt and evolve, and what business processes they need to put in place, in order to take maximum advantage of this change in the way that we look at information.

Harding: Enterprises have to keep up with the way that things are moving in order to keep their positions in their industries. Enterprises can't afford to be working with yesterday's technology. It's a case of being able to understand the information that they're presented with, and to make the best decisions.


We’ve always talked about computers being about input, process, and output. Years ago, the input might have been through a teletype, the processing on a computer in the back office, and the output on print-out paper.

Now, we’re talking about the input being through a range of sensors and social media, the processing is done on the cloud, and the output goes to your mobile device, so you have it wherever you are when you need it. Enterprises that stick in the past are probably going to suffer.

Gardner: Mark Skilton, the ability to manage data at greater speed and scale, the whole three Vs — velocity, volume, and value — on its own could perhaps be a game-changing shift in the market. The drive of mobile devices into the lives of both consumers and workers is also a very big deal.

Of course, cloud has been an ongoing evolution of emphasis towards agility and efficiency in how workloads are supported. But is there something about the combination of how these are coming together at this particular time that, in your opinion, substantiates The Open Group’s emphasis on this as a literal platform shift?

Skilton: It is exactly that in terms of the workloads. The world we’re now into is the multi-workload environment, where you have mobile workloads, storage and compute workloads, and social networking workloads. There are many different types of data and traffic today in different cloud platforms and devices.

It has to do with not just one solution and not one subscription model — because we’re now into this subscription-model era … the subscription economy, as one group tends to describe it. We’re looking not only at providing the security and the infrastructure to deliver this kind of capability to a mobile device, as Chris was saying, but at how you can do this horizontally across other platforms and how you can integrate these things. That is critical to the new order.

So Platform 3.0 is addressing this point by bringing all this together. Just look at the numbers. Look at the scale we’re dealing with — 1.7 billion mobile devices sold in 2012, and an estimated 6.8 billion mobile subscriptions according to the International Telecommunication Union (ITU), equivalent to 96 percent of the world population.

Massive growth

We’ve had massive growth in the scale of mobile data traffic and internet data expansion. Mobile data is increasing 18-fold from 2011 to 2016, reaching 130 exabytes annually. We passed 1 zettabyte of global online data storage back in 2010, and IP data traffic is predicted to pass 1.3 zettabytes by 2016, with internet video accounting for 61 percent of total internet data, according to Cisco studies.

These studies also predict that data center traffic, combining network and internet-based storage, will reach 6.6 zettabytes annually, and that nearly two-thirds of this will be cloud-based by 2016. This will only grow: social networking now reaches nearly one in four people around the world, with 1.7 billion using at least one form of social networking in 2013, rising to one in three people, a global audience of 2.55 billion, by 2017 — another extraordinary figure, from an eMarketing.com study.

It is not surprising that many industry analysts see the converging technologies of mobility, social computing, big data and cloud growing at 30 to 40 percent, or that the shift to B2C commerce, which passed $1 trillion in 2012, is just the start of a wider digital transformation.

These numbers speak volumes in terms of the integration, interoperability, and connection of the new types of business and social realities that we have today.
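
[As a rough check on the scale of the figures quoted above, here is a minimal back-of-the-envelope sketch in Python; the numbers are taken as cited in the discussion, not independently verified.]

    EXA, ZETTA = 10**18, 10**21  # bytes

    mobile_2016 = 130 * EXA   # annual mobile data traffic, as cited
    ip_2016 = 1.3 * ZETTA     # predicted annual IP data traffic
    dc_2016 = 6.6 * ZETTA     # predicted annual data center traffic

    # Mobile data as a share of total IP traffic: about 10 percent.
    print(f"mobile share of IP traffic: {mobile_2016 / ip_2016:.0%}")

    # Internet video at 61 percent of IP traffic: roughly 0.8 ZB per year.
    print(f"internet video: {0.61 * ip_2016 / ZETTA:.2f} ZB/year")

    # Two-thirds of data center traffic cloud-based: roughly 4.4 ZB per year.
    print(f"cloud data center traffic: {2 / 3 * dc_2016 / ZETTA:.1f} ZB/year")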

Gardner: Why should IT be thinking about this as a fundamental shift, rather than a modest change?

Lounsbury: A lot depends on how you define your IT organization. It’s useful to separate the plumbing from the water. If we think of the water as the information that’s flowing, it’s how we make sure that the water is pure and getting to the places where you need to have the taps, where you need to have the water, etc.

But the plumbing also has to be up to the job. It needs to have the capacity. It needs to have new tools to filter out the impurities from the water. There’s no point giving someone data if it’s not been properly managed or if there’s incorrect information.

What’s going to happen in IT is not only do we have to focus on the mechanics of the plumbing, where we see things like the big database that we’ve seen in the open-source  role and things like that nature, but there’s the analytics and the data stewardship aspects of it.

We need to bring in mechanisms so the data is valid and kept up to date, and we need to indicate its freshness to the decision makers. Furthermore, IT is going to be called upon, whether as part of enterprise IT or where end users drive the selection, to provide the analytic tools and recommendation tools that take the data and turn it into information. One of the things you can’t do with business decision makers is overwhelm them with big rafts of data and expect them to figure it out.

You really need to present the information in a way that they can use to quickly make business decisions. That is an addition to the role of IT that may not have been there traditionally — how you think about the data and the role of what, in the beginning, was called data scientist and things of that nature.
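
[As a minimal illustration of the data-stewardship mechanisms Lounsbury describes — validating data and flagging its freshness before it reaches a decision maker — here is a hypothetical Python sketch; the record format and the 24-hour threshold are invented for the example.]

    from datetime import datetime, timedelta, timezone

    MAX_AGE = timedelta(hours=24)  # hypothetical freshness threshold

    def stewardship_check(record):
        """Validate a record and label its freshness for decision makers."""
        if record.get("value") is None:
            return "rejected: missing value"
        age = datetime.now(timezone.utc) - record["collected_at"]
        freshness = "fresh" if age <= MAX_AGE else "stale"
        return f"accepted ({freshness}, {age.total_seconds() / 3600:.1f}h old)"

    reading = {
        "sensor": "warehouse-7/temperature",
        "value": 21.4,
        "collected_at": datetime.now(timezone.utc) - timedelta(hours=3),
    }
    print(stewardship_check(reading))  # accepted (fresh, 3.0h old)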

Shift in constituency

Skilton: I’d just like to add to Dave’s excellent points about, the shape of data has changed, but also about why should IT get involved. We’re seeing that there’s a shift in the constituency of who is using this data.

We have the Chief Marketing Officer, the Chief Procurement Officer, and other key line-of-business managers taking more direct control over the uses of information technology that enable their channels and interactions through mobile, social and data analytics. Processes that were previously managed just by IT are now being consumed by significant stakeholders and investors in the organization.

We have to recognize in IT that we are the masters of our own destiny. The information needs to be delivered to new types of mobile devices, with new types of data intelligence and new ways of delivering this kind of service.

I recently read an article in MIT Sloan Management Review that asked what the role of the CIO now is. There is still the critical role of managing the security, compliance, and performance of these systems. But there’s also a socialization of IT, and this is where positioning cross-platform architectures is key to delivering real value to business users and the IT community.

Gardner: How do we prevent this from going off the rails?

Harding: This is a very important point. And to add to the difficulties, it’s not only that a whole set of different people are getting involved with different kinds of information; there’s also a step change in the speed with which all this is delivered. It’s no longer the case that you can say, “Oh well, we need some kind of information system to manage this information. We’ll procure it and get a program written,” and a year later it would be in place, delivering reports.

Now, people are looking to make sense of this information on the fly if possible. It’s really a case of having the platform, both the standard technology platform and the systems and business processes for using it, understood and in place.

Then you can do all these things quickly, building on what people have learned in the past rather than going off into all sorts of new experimental things that might not lead anywhere. It’s a case of building up the standard platform from industry best practice. This is where The Open Group can really help things along, by being a recipient and a reflector of best practices and standards.

Skilton: Capgemini has been doing work in this area. I break it down into four levels of scalability. First is platform scalability: understanding what you can do with your current legacy systems when introducing cloud computing or big data, and the infrastructure that gives you what we call multiplexing of resources. We’re very much seeing this idea of introducing scalable platform resource management, and you see that a lot in the heritage of virtualization.

Second is network scalability. A lot of customers who have inherited old telecommunications networks are looking to introduce new MPLS-type scalable networks. The reason is that it’s all about connectivity in the field. I meet a number of clients who say, “We’ve got this cloud service,” or “This service is in a certain area of my country. If I move to another part of the country, or I’m traveling, I can’t get connectivity.” That’s the big issue of scaling.

Third is application programming interfaces (APIs). What we’re seeing now is an explosion of integration and application services using API connectivity, and these are creating huge opportunities for what Chris Anderson of Wired used to call the “long tail effect.” It is now a reality in terms of building the kind of social connectivity and data exchange that Dave was talking about.
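
[The API connectivity Skilton describes can be made concrete with a short sketch: one HTTP call exchanging JSON with a service endpoint. The endpoint and field names here are hypothetical.]

    import json
    import urllib.request

    # Hypothetical endpoint; any API-enabled cloud service follows the same shape.
    url = "https://api.example.com/v1/devices?status=active"

    with urllib.request.urlopen(url) as response:  # HTTP GET
        devices = json.load(response)              # parse the JSON payload

    # The same few lines work against any provider that exposes an API,
    # which is what makes the "long tail" of niche services reachable.
    for device in devices:
        print(device.get("id"), device.get("location"))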

Finally, there are the marketplaces. Companies need to think about what online marketplaces they need for digital branding, social branding, social networks, and awareness of their customers, suppliers, and employees. These four levels are where customers need to start thinking in their IT strategy, and Platform 3.0 is right on target in trying to work out the strategy for each of these new levels of scalability.

Gardner: We’re coming up on The Open Group Conference in Philadelphia very shortly. What should we expect from that? What is The Open Group doing vis-à-vis Platform 3.0, and how can organizations benefit from seeing a more methodological or standardized approach to some way of rationalizing all of this complexity? [Registration to the conference remains open. Follow the conference on Twitter at #ogPHL.]

Lounsbury: We’re still in the formational stages of  “third platform” or Platform 3.0 for The Open Group as an industry. To some extent, we’re starting pretty much at the ground floor with that in the Platform 3.0 forum. We’re leveraging a lot of the components that have been done previously by the work of the members of The Open Group in cloud, services-oriented architecture (SOA), and some of the work on the Internet of things.

First step

Our first step is to bring those things together to make sure we’ve got a foundation to depart from. The next thing is that, through our Platform 3.0 Forum and its Steering Committee, we can ask people to talk about what their scenarios are for the adoption of Platform 3.0.

That ranges from the technological aspects and what standards are needed, to the best business practices for understanding and then adopting some of these Platform 3.0 concepts to get your business using them, taking a cue from our previous cloud working group.

What we’re really working toward in Philadelphia is to set up an exchange of ideas among the people who can, from the buy side, bring in their use cases from the supply side, bring in their ideas about what the technology possibilities are, and bring those together and start to shape a set of tracks where we can create business and technical artifacts that will help businesses adopt the Platform 3.0 concept.

Harding: We certainly also need to understand the business environment within which Platform 3.0 will be used. We’ve heard already about new players, new roles of various kinds that are appearing, and the fact that the technology is there and the business is adapting to this to use technology in new ways.

For example, we’ve heard about the data scientist. The data scientist is a new kind of role, a new kind of person, that is playing a particular part in all this within enterprises. We’re also hearing about marketplaces for services, new ways in which services are being made available and combined.

We really need to understand the actors in this new kind of business scenario. What are the pain points that people are having? What are the problems that need to be resolved in order to understand what kind of shape the new platform will have? That is one of the key things that the Platform 3.0 Forum members will be getting their teeth into.

Gardner: Looking to the future, we think about how powerful the data can be when processed properly, and when recommendations can be delivered to the right place at the right time. But we also recognize that there are limits to a manual, or even human-level, approach to that: scientist by scientist, analysis by analysis.

When we think about the implications of automation, it seems there are already some early examples where bringing cloud, data, social, mobile, and granular interactions together has let us see how a recommendation engine can be brought to bear. I’m thinking about the Siri capability at Apple, and even some of the examples of the Watson technology at IBM.

So, to our panel: are there unknown unknowns about where this will lead in terms of having extraordinary intelligence, a supercomputer or a data center of supercomputers, brought to bear on almost any problem instantly, with the result delivered directly to a smartphone or any number of other endpoints?

It seems that the potential here is mind boggling. Mark Skilton, any thoughts?

Skilton: What we’re talking about is the next generation of the Internet.  The advent of IPv6 and the explosion in multimedia services, will start to drive the next generation of the Internet.

I think that in the future, we’ll be talking about a multiplicity of information that is not just about services at your location or your personal lifestyle or your working preferences. We’ll see a convergence of information and services across multiple devices and new types of “co-presence services” that interact with your needs and social networks to provide predictive augmented information value.

When you start to get much more information about the context of where you are, insight into what’s happening, and the predictive nature of these services, it all becomes much more embedded in everyday life, in real time, in the context of what you are doing.

I expect to see much more intelligent applications coming forward on mobile devices in the next 5 to 10 years, driven by the interconnected explosion of real-time data processing, traffic, devices and social networking that we describe in the scope of Platform 3.0. This will add augmented intelligence, and that’s something really exciting and a complete game changer. I would call it the next killer app.

First-mover benefits

Gardner: There’s this notion of intelligence brought to bear rapidly in context, at a manageable cost. This seems to me a big change for businesses. We could, of course, go into the social implications as well, but just for businesses, that alone to me would be an incentive to get thinking and acting on this. So any thoughts about where businesses that do this well would be able to have significant advantage and first mover benefits?

Harding: Businesses are always taking stock. They understand their environments. They understand how the world they live in is changing, and they understand what part they play in it. It will be down to individual businesses to look at this new technical possibility and say, “So now this is where we could make a change to our business.” It’s the vision moment, where you see a combination of technical possibility and business advantage that will work for your organization.

It’s going to be different for every business, and I’m very happy to say this, it’s something that computers aren’t going to be able to do for a very long time yet. It’s going to really be down to business people to do this as they have been doing for centuries and millennia, to understand how they can take advantage of these things.

So it’s a very exciting time, and we’ll see businesses understanding and developing their individual business visions as the starting point for a cycle of business transformation, which is what we’ll be very much talking about in Philadelphia. So yes, there will be businesses that gain advantage, but I wouldn’t point to any particular business, or any particular sector and say, “It’s going to be them” or “It’s going to be them.”

Gardner: Dave Lounsbury, a last word to you. In terms of future implications and vision, where could this lead in the not-too-distant future?

Lounsbury: I’d disagree a bit with my colleagues on this, and this could probably be a podcast on its own, Dana. You mentioned Siri, and I believe IBM just announced the commercial version of its Watson recommendation and analysis engine for use in some customer-facing applications.

I definitely see these as the thin end of the wedge in filling the gap between the growth of data and the analysis of data. I can imagine, not in the next couple of years but in the next couple of technology cycles, that we’ll see the concept of recommendations and analysis as a service, to bring it full circle to cloud. And keep in mind that all of case law is data, and all of the medical textbooks ever written are data. Pick your industry, and there is a huge knowledge base that humans must currently keep on top of.

These advances in recommendation engines, driven by the availability of big data, are going to produce profound changes in the way knowledge workers do their jobs. That’s something that businesses, including their IT functions, absolutely need to stay in front of to remain competitive in the next decade or so.

Filed under ArchiMate®, Business Architecture, Cloud, Cloud/SOA, Conference, Data management, Enterprise Architecture, Platform 3.0, Professional Development, TOGAF®

Enterprise Architecture in China: Who uses this stuff?

by Chris Forde, GM APAC and VP Enterprise Architecture, The Open Group

Since moving to China in March 2010 I have consistently heard a similar set of statements and questions, something like this….

“EA? That’s fine for Europe and America, who is using it here?”

“We know EA is good!”

“What is EA?”

“We don’t have the ability to do EA, is it a problem if we just focus on IT?”

And

“Mr Forde your comment about western companies not discussing their EA programs because they view them as a competitive advantage is accurate here too, we don’t discuss we have one for that reason.” Following that statement the lady walked away smiling, having not introduced herself or her company.

Well, some things are changing in China relative to EA and events organized by The Open Group; here is a snapshot from May 2013.

The Open Group held an Enterprise Architecture Practitioners Conference in Shanghai, China on May 22, 2013. The conference theme was EA and the spectrum of business value. The presentations were made by a mix of non-member and member organizations of The Open Group, most but not all based in China. The audience was mostly non-members, from 55 different organizations in a range of industries. There was a good mix of customer, supplier, government and academic organizations presenting and in the audience. The conference proceedings are available to registered attendees of the conference and members of The Open Group. Livestream recordings will also be available shortly.

Organizations large and small presented on how EA has been integral to delivering business value. Here it is in a nutshell.

China

Huawei is a leading global ICT communications provider based in Shenzhen, China. They presented on EA applied to their business transformation program and the ongoing development of their core EA practice.

GKHB is a software services organization based in Chengdu, China. They presented on an architecture practice applied to real-time forestry and endangered-species management.

Nanfang Media is a state-owned enterprise based in Guangzhou, China, and the second largest media organization in the country. They presented on the need to rapidly transform themselves into a modern, integrated, digital organization.

McKinsey & Co, a management consulting company based in New York, USA, presented an analysis of a CIO survey they conducted with Peking University.

Mr Wang Wei, a Partner in the Shanghai office of McKinsey & Co’s Business Technology Practice, reviewed a survey they conducted in co-operation with Peking University.

The survey of CIOs in China indicated a common problem of managing complexity in multiple dimensions: 1) “theoretically” common business functions, 2) across business units with differing operations and products, and 3) across geographies and regions. The recommended approach was “organic integration”: carefully determining what should be centralized and what should be distributed. An architecture approach can help with managing and mitigating these realities. The survey also showed that CIOs are evenly split between those dedicated to a traditional CIO role and those who have a dual business and CIO role.

Mr Yang Li Chao, Director of EA and Planning at Huawei, and Ms Wang Liqun, leader of the EA Center of Excellence at Huawei, outlined the five-year journey Huawei has been on to develop, mature and prove the effectiveness of an architecture practice in a company that has seen explosive growth and is competing on a global scale. They are necessarily paying a lot of attention to talent management and the development of their architects, as these people are at the forefront of the company’s business transformation efforts. Huawei constantly consults with architecture experts from around the world and incorporates what it considers best practice into its own method and framework, which is based on TOGAF®.

Mr He Kun, CIO of Nanfang Media, described the enormous pressures his traditional media organization is under, such as the concurrent loss of advertising and talent to digital media.

He gave an example where China Mobile has started its own digital newspaper, leveraging its delivery platform. So, naturally, Nanfang Media is also undergoing a transformation and is looking to leverage its current advantages as a trusted source and its existing market position. The discipline of architecture is a key enabler here, serving as a foundation for clearly communicating a transformation approach to other business leaders. This does not mean using EA jargon, but communicating in the language of his peers for the purpose of obtaining funding to accomplish the transformation effectively.

Mr Chen Peng, Vice General Manager of GKHB Chengdu, described the use of an architecture approach to managing precious national resources such as forestry, biodiversity and endangered species. He described the necessity for real-time information in observation, tracking and response in this area, and the necessity of the “informationalization” of forestry in China as a part of eGovernment initiatives, not only for the above topics but also for the country’s growth, particularly in supplying the construction industry. The architecture approach taken here is also based on TOGAF®.

The takeaway from this conference is that Enterprise Architecture is alive and well amongst certain organizations in China. It is being used in a variety of industries; value is being realized by executives and practitioners, and delivered for both IT and business units. However, for many companies EA is still a new idea, and to date its value is unclear to them.

The speakers also made it clear that there are no easy answers; each organization has to find its own use of, and value from, Enterprise Architecture, and it is a learning journey. They expressed their appreciation that The Open Group and its standards give them a place to make connections, and a body of Enterprise Architecture work to pull from and contribute to.

Filed under Enterprise Architecture, Enterprise Transformation, Professional Development, Standards, TOGAF, TOGAF®, Uncategorized

The Open Group Certified Architect (Open CA) Program Transformed My Career

By Bala Prasad Peddigari, Tata Consultancy Services Limited

Learning has been a continuous journey for me throughout my career, but certification in TOGAF® truly benchmarked my knowledge and Open CA qualified my capability as a practitioner. Open CA not only tested my skills as a practitioner, but also gave me valuable recognition and respect as an Enterprise Architect within my organization.

When I was nominated to undergo Open CA certification in 2010, I didn’t realize that this certification would transform my career, improve my architecture maturity and provide me with such widespread peer recognition.

The Open CA certification has enabled me to gain increased recognition within my organization. Furthermore, our internal leadership recognizes my abilities and has helped me to join elite jury panels for key initiatives at my organization’s level and at my parent company’s level. The Open CA certification has helped me to improve my architecture maturity and drive enterprise solutions.

With recognition comes greater responsibility – hence my attempt to create a community of architects within my organization and expand the Enterprise Architecture culture. I started the Architects Cool Community a year ago. Today, this community has grown to roughly 350 associates who continuously share knowledge, come together to solve architecture problems, share best practices and contribute to The Open Group Working Groups to build reference architectures.

I can state without a doubt that TOGAF and Open CA have made a difference in my career transformation: they created organization-wide visibility, helped me to gain both internal and external recognition as an Enterprise Architect, and helped me to achieve the growth I needed. My Open CA certification has also been well received by customers, particularly when I meet enterprise customers from Australia and the U.S. The Open CA certification exemplifies solid practitioner knowledge and large-scale, end-to-end thinking. The certification also gave me self-confidence in architecture problem solving and in driving the right rationale.

I would like to thank my leadership team, who provided the platform and offered a lot of support to drive the architecture initiatives. I would also like to thank The Open Group’s Open CA team and the board who interviewed me to measure and certify my skills. I strongly believe you earn the certification because you are able to support your claims against the conformance requirements, and achieving it proves that you have the skills and capabilities to carry out architecture work.

You can find out if you can meet the requirements of the program by completing the Open CA Self Assessment Tool.

Bala Prasad Peddigari (Bala) is an Enterprise Architect and Business Value Consultant with Tata Consultancy Services Limited. Bala specializes in Enterprise Architecture, IT strategy, business value consulting, cloud-based technology solutions and scalable architectures. He has been instrumental in delivering IT solutions for the finance, insurance, telecom and hi-tech verticals. Bala currently heads the HiTech Innovative Solutions Technology Excellence Group, with a focus on cloud, Microsoft, social computing, Java and open source technologies. He received accolades at Microsoft TechEd for his cloud architectural strengths and won the Microsoft ALM Challenge. Bala has published papers with the IEEE and is a regular speaker at Open Group conferences and Microsoft events. Bala serves on the Open CA Certification Board for The Open Group.

Filed under Certifications, Open CA, Professional Development, TOGAF, TOGAF®

2013 Open Group Predictions, Vol. 2

By The Open Group

Continuing on the theme of predictions, here are a few more, which focus on global IT trends, business architecture, OTTF and Open Group events in 2013.

Global Enterprise Architecture

By Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities

Cloud is no longer a bleeding edge technology – most organizations are already well on their way to deploying cloud technology.  However, Cloud implementations are resurrecting a perennial problem for organizations—integration. Now that Cloud infrastructures are being deployed, organizations are having trouble integrating different systems, especially with systems hosted by third parties outside their organization. What will happen when two, three or four technical delivery systems are hosted on AND off premise? This presents a looming integration problem.

As we see more and more organizations buying into cloud infrastructures, we’ll see an increase in cross-platform integration architectures globally in 2013. The role of the enterprise architect will become more complex. Architects must not only ensure that systems are integrated properly; they also need to figure out how to integrate outsourced teams and services and determine responsibility across all systems. Additionally, outsourcing and integration will lead to an increased focus on security in the coming year, especially in the healthcare and financial sectors. When so many people are involved, and responsibility is shared or lost in the process, gaping holes can go unnoticed. As data is increasingly shared between organizations and current trends escalate, security will become more and more of a concern. Integration may yield great rewards architecturally, but it also means greater exposure to vulnerabilities outside of your firewall.

Within the Architecture Forum, we will be working on improvements to the TOGAF® standard throughout 2013, as well as an effort to continue to harmonize the TOGAF specification with the ArchiMate® modelling language.  The Forum also expects to publish a whitepaper on application portfolio management in the new year, as well as be involved in the upcoming Cloud Reference Architecture.

In China, The Open Group is progressing well. In 2013, we’ll continue translating The Open Group website, books and whitepapers from English to Chinese. Partnerships and Open CA certification will remain in the forefront of global priorities, as well as enrolling TOGAF trainers throughout Asia Pacific as Open Group members. There are a lot of exciting developments arising, and we will keep you updated as we expand our footprint in China and the rest of Asia.

Open Group Events in 2013

By Patty Donovan, Vice President of Membership and Events

In 2013, the biggest change for us will be our quarterly summit, whose focus will shift toward an emphasis on verticals. This new focus will debut at our April event in Sydney, where the vertical themes include Mining, Government and Finance. Additional vertical themes that we plan to cover throughout the year include Healthcare, Transportation and Retail, to name a few. We will also continue to increase the number of our popular Livestream sessions, as we have seen an extremely positive reaction to them, as well as to all of our On-Demand sessions, where you can listen to best-selling authors and industry leaders who participated as keynote and track speakers throughout the year.

Regarding social media, we made big strides in 2012 and will continue to make this a primary focus for The Open Group. If you haven’t already, please “like” us on Facebook, follow us on Twitter, join the chat (#ogChat) during one of our security-focused tweet jams, and join our LinkedIn Group. And if you have the time, we’d love for you to contribute to The Open Group blog.

We’re always open to new suggestions, so if you have a creative idea on how we can improve your membership, Open Group events, webinars, podcasts, please let me know! Also, please be sure to attend the upcoming Open Group Conference in Newport Beach, Calif., which is taking place on January 28-31. The conference will address Big Data.

Business Architecture

By Steve Philp, Marketing Director for Open CA and Open CITS

Business Architecture is still a relatively new discipline, but in 2013 I think it will continue to grow in prominence and visibility from an executive perspective. C-Level decision makers are not just looking at operational efficiency initiatives and cost reduction programs to grow their future revenue streams; they are also looking at market strategy and opportunity analysis.

Business Architects are extremely valuable to an organization when they understand market and technology trends in a particular sector. They can then work with business leaders to develop strategies based on the capabilities and positioning of the company to increase revenue, enhance their market position and improve customer loyalty.

Senior management recognizes that technology also plays a crucial role in how organizations can achieve their business goals. A major role of the Business Architect is to help merge technology with business processes to help facilitate this business transformation.

There are a number of key technology areas for 2013 where Business Architects will be called upon to engage with the business such as Cloud Computing, Big Data and social networking. Therefore, the need to have competent Business Architects is a high priority in both the developed and emerging markets and the demand for Business Architects currently exceeds the supply. There are some training and certification programs available based on a body of knowledge, but how do you establish who is a practicing Business Architect if you are looking to recruit?

The Open Group is trying to address this issue and has incorporated a Business Architecture stream into The Open Group Certified Architect (Open CA) program. There has already been significant interest in this stream from organizations and practitioners alike. This is because Open CA is a skills- and experience-based program that recognizes, at different levels, those individuals who are actually performing in a Business Architecture role. You must complete a candidate application package and be interviewed by your peers. Achieving certification demonstrates your competency as a Business Architect and will therefore stand you in good stead for next year and beyond.

You can view the conformance criteria for the Open CA Business Architecture stream at https://www2.opengroup.org/ogsys/catalog/X120.

Trusted Technology

By Sally Long, Director of Consortia Services

The interdependency of all countries on global technology providers, and technology providers’ dependencies on component suppliers around the world, is more certain than ever before. The need to work together in a vendor-neutral, country-neutral environment to assure there are standards for securing technology development and supply-chain operations will become increasingly apparent in 2013. Securing the global supply chain cannot be done in a vacuum by a few providers or a few governments; it must be achieved by all governments, providers, component suppliers and integrators working together, through open standards and accreditation programs that demonstrate conformance to those standards and are available to everyone.

The Open Group’s Trusted Technology Forum is providing that open, vendor and country-neutral environment, where suppliers from all countries and governments from around the world can work together in a trusted collaborative environment, to create a standard and an accreditation program for securing the global supply chain. The Open Trusted Technology Provider Standard (O-TTPS) Snapshot (Draft) was published in March of 2012 and is the basis for our 2013 predictions.

We predict that in 2013:

  • Version 1.0 of the O-TTPS (Standard) will be published.
  • Version 1.0 will be submitted to the ISO PAS process in 2013, and will likely become part of the ISO/IEC 27036 standard, where Part 5 of that ISO standard is already reserved for the O-TTPS work.
  • An O-TTPS accreditation program, open to all providers, component suppliers and integrators, will be launched.
  • The Forum will continue the trend of increased member participation from governments and suppliers around the world.

Filed under Business Architecture, Conference, Enterprise Architecture, O-TTF, OTTF

Operational Resilience through Managing External Dependencies

By Ian Dobson & Jim Hietala, The Open Group

These days, organizations are rarely self-contained. Businesses collaborate through partnerships and close links with suppliers and customers. Outsourcing services and business processes, including into Cloud Computing, means that key operations an organization depends on are often fulfilled outside its control.

The challenge here is how to manage the dependencies your operations have on factors that are outside your control. The goal is to perform your risk management so that it optimizes your operational success by making you resilient to failures in external dependencies.

The Open Group’s Dependency Modeling (O-DM) standard specifies how to construct a dependency model to manage risk and build trust over organizational dependencies between enterprises, and between operational divisions within a large organization. The standard involves constructing a model of the operations necessary for an organization’s success, including the dependencies that can affect each operation. Applying quantitative risk sensitivities to each dependency then reveals the operations most exposed to the risk of failure, showing business decision-makers where investment in reducing the organization’s exposure to external risks will yield the best return.
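
To make the idea concrete, here is a toy dependency model in Python. It is only an illustration of the general approach, with invented operations and probabilities; the O-DM standard defines its own normative model and calculations.

    # Toy dependency model: each operation depends on external factors,
    # each with an assumed probability of performing as needed.
    operations = {
        "order-fulfilment": {"payment-provider": 0.995, "courier": 0.97, "warehouse-IT": 0.99},
        "customer-support": {"call-centre": 0.98, "CRM-SaaS": 0.995},
    }

    def success_probability(deps):
        """Probability an operation succeeds, assuming independent dependencies."""
        p = 1.0
        for prob in deps.values():
            p *= prob
        return p

    for op, deps in operations.items():
        base = success_probability(deps)
        print(f"{op}: P(success) = {base:.4f}")
        # Sensitivity: the gain if one dependency were made fully reliable,
        # listed with the least reliable (highest-gain) dependency first.
        for dep, prob in sorted(deps.items(), key=lambda item: item[1]):
            gain = base / prob - base   # set this dependency's probability to 1.0
            print(f"  hardening {dep} gains +{gain:.4f}")

Listing the largest gains first is what points decision-makers at the dependency where investment buys the most resilience.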

O-DM helps you to plan for success through operational resilience, assured business continuity, and effective new controls and contingencies, enabling you to:

  • Cut costs without losing capability
  • Make the most of tight budgets
  • Build a resilient supply chain
  • Lead programs and projects to success
  • Measure, understand and manage risk from outsourcing relationships and supply chains
  • Deliver complex event analysis

The O-DM analytical process facilitates organizational agility by allowing you to easily adjust and evolve your organization’s operations model, and produces rapid results to illustrate how reducing the sensitivity of your dependencies improves your operational resilience. O-DM also allows you to drill as deep as you need to go to reveal your organization’s operational dependencies.

Training on the development of operational dependency models conforming to the O-DM standard is available, as are software tools that automate the computation and deliver actionable results in graphic formats, facilitating informed business decision-making.

The O-DM standard represents a significant addition to our existing Open Group Risk Management publications.

The O-DM standard may be accessed here.

Ian Dobson is the director of the Security Forum and the Jericho Forum for The Open Group, coordinating and facilitating the members’ work to achieve their goals in our challenging information security world. In the Security Forum, his focus is on supporting the development of open standards and guides on security architectures and the management of risk and security, while in the Jericho Forum he works with members to anticipate the requirements for the security solutions we will need in the future.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

Filed under Cybersecurity, Security Architecture

2013 Security Priorities – Tweet Jam

By Patty Donovan, The Open Group

On Tuesday, December 11, The Open Group will host a tweet jam examining the topic of IT security and what is in store for 2013.

2012 was a big year for security. Congress debated cybersecurity legislation in the face of attacks on vulnerabilities in the nation’s critical infrastructure systems; social networking site LinkedIn was faulted for one of the largest security breaches of the year; and global cyber espionage was a trending topic. With the year coming to a close, the big question on people’s minds is which security issues will dominate headlines in 2013. In October, Gartner predicted that by 2014, employee-owned devices will be infected with malware at more than double the rate of corporate-owned devices, and that by 2017, 40 percent of an enterprise’s contact information will have been leaked into Facebook through the use of mobile-device collaboration applications. These predictions only touch the tip of the iceberg of security concerns for the coming year.

Please join us on Tuesday, December 11 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. GMT for a tweet jam that will discuss and debate the mega trends that will shape the security landscape in 2013. Key areas that will be addressed during the discussion include: mobile security, BYOD, supply chain security, advanced persistent threats, and cloud and data security. We welcome Open Group members and interested participants from all backgrounds to join the session and interact with our panel of IT security experts, analysts and thought leaders. To access the discussion, please follow the #ogChat hashtag during the allotted discussion time.

And for those of you who are unfamiliar with tweet jams, here is some background information:

What Is a Tweet Jam?

A tweet jam is a one hour “discussion” hosted on Twitter. The purpose of the tweet jam is to share knowledge and answer questions on a chosen topic. Each tweet jam is led by a moderator and a dedicated group of experts to keep the discussion flowing. The public (or anyone using Twitter interested in the topic) is free (and encouraged!) to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:

  • Have your first #ogChat tweet be a self-introduction: name, affiliation, occupation.
  • Start all other tweets with the question number you’re responding to and the #ogChat hashtag.
    • Sample: “Q1 The biggest security threat in 2013 will continue to be securing data in the cloud #ogChat”
  • Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.
  • While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue!
  • A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.

If you have any questions prior to the event or would like to join as a participant, please direct them to Rod McLeod (rmcleod at bateman-group dot com). We anticipate a lively chat and hope you will be able to join!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.

Filed under Cybersecurity, Tweet Jam

ArchiMate® 2.0 and Beyond

By The Open Group Conference Team

In this video, Henry Franken of BiZZdesign discusses ArchiMate® 2.0, the new version of the graphical modeling language for Enterprise Architecture that provides businesses with the means to communicate with different stakeholders from the business goals level to implementation scenarios.

Franken explains that the first edition allowed users to express Enterprise Architecture at its core – modeling business applications and infrastructure. ArchiMate® 2.0 has two major additions to make it fully aligned with TOGAF® – the motivation extension and the migration and planning extension. The motivation extension provides users with the ability to fully express business motivations and goals to enterprise architects; the migration and planning extension helps lay out programs and projects to make a business transition.

There are several sessions on ArchiMate® at the upcoming Open Group Conference in Barcelona. Notably, Henry Franken’s “Delivering Enterprise Architecture with TOGAF® and ArchiMate®” session on October 22 at 2:00-2:45 p.m. UTC / 8:00-8:45 a.m. EST will be livestreamed on The Open Group Website.

To view these sessions and for more information on the conference, please go to: http://www3.opengroup.org/barcelona2012

Filed under ArchiMate®, Conference, Enterprise Architecture

Viewpoint: Technology Supply Chain Security – Becoming a Trust-Worthy Provider

By Andras Szakal, IBM

Increasingly, the critical systems of the planet — telecommunications, banking, energy and others — depend on and benefit from the intelligence and interconnectedness enabled by existing and emerging technologies. As evidence, one need only look to the increase in enterprise mobile applications and BYOD strategies to support corporate and government employees.

Whether these systems are trusted by the societies they serve depends in part on whether the technologies incorporated into them are fit for the purpose they are intended to serve. Fitness for purpose is manifested in two essential ways: first, does the product meet essential functional requirements; and second, has the product or component been produced by a trustworthy provider. Of course, the leaders or owners of these systems have to do their part to achieve security and safety (e.g., to install, use and maintain technology appropriately, and to pay attention to people and process aspects such as insider threats). Cybersecurity considerations must be addressed in a sustainable way from the get-go, by design, and across the whole ecosystem — not after the fact, in just one sector or another, or in reaction to crisis.

In addressing the broader cybersecurity challenge, however, buyers of mission-critical technology naturally seek reassurance as to the quality and integrity of the products they procure. In our view, the fundamentals of the institutional response to that need are similar to those that have worked in prior eras and in other industries — like food.

For example: most of us are able to enjoy a meal of stir-fried shrimp without giving a second thought to whether the shellfish is safe to eat.

Why is that? Because we are the beneficiaries of a system whose workings greatly increase the likelihood — in many parts of the world — that the shellfish served to end consumers is safe and uncontaminated. While tainted technology is not quite the same as tainted food, it’s a useful analogy.

Of course, a very high percentage of the seafood industry is extremely motivated to provide safe and delicious shellfish to the end consumer. So we start with the practical perspective that, much more likely than not in today’s hyper-informed and communicative world, the food supply system will provide reasonably safe and tasty products. Invisible though it may be to most of us, however, this generalized confidence rests on a worldwide system that is built on globally recognized standards and strong public-private collaboration.

This system is necessary because mistakes happen, expectations evolve and — worse — the occasional participant in the food supply chain may take a shortcut in their processing practices. Therefore, some kind of independent oversight and certification has proven useful to assure consumers that what they pay for — their desired size and quality grade and, always, safety — is what they will get. In many countries, close cooperation between industry and government results in industry-led development and implementation of food safety standards.[1]

Government’s role is limited but important. Clearly, government cannot look at and certify every piece of shellfish people buy. So its actions are focused on areas in which it can best contribute: to take action in the event of a reported issue; to help convene industry participants to create and update safety practices; to educate consumers on how to choose and prepare shellfish safely; and to recognize top performers.[2]

Is the system perfect? Of course not. But it works, and supports the most practical and affordable methods of conducting safe and global commerce.

Let’s apply this learning to another sphere: information technology. To wit:

  • We need to start with the realization that the overwhelming majority of technology suppliers are motivated to provide securely engineered products and services, and that competitive dynamics reward those who consistently perform well.
  • However, we also need to recognize that there is a gap in time between the corrective effect of the market’s Invisible Hand and the damage that can be done in any given incident. Mistakes will inevitably happen, and there are some bad actors. So some kind of oversight and governmental participation are important, to set the right incentives and expectations.
  • We need to acknowledge that third-party inspection and certification of every significant technology product at the “end of pipe” is not only impractical but also insufficient. It will not achieve trust across a wide variety of infrastructures and industries.  A much more effective approach is to gather the world’s experts and coalesce industry practices around the processes that the experts agree are best suited to produce desired end results.
  • Any proposed oversight or government involvement must not stymie innovation or endanger a provider’s intellectual capital by requiring exposure to third-party assessments or overly burdensome escrow of source code.
  • Given the global and rapid manner in which technologies are invented, produced and sold, a global and agile approach to technology assurance is required to achieve scalable results. The approach should be based on understood and transparently formulated standards that are, to the maximum extent possible, industry-led and global in their applicability. Conformance to such standards, once demonstrated, would then be recognized across multiple industries and geopolitical regions. Propagation of country- or industry-specific standards will result in economic fragmentation and slow the adoption of industry best practices.

The Open Group Trusted Technology Forum (OTTF)[3] is a promising and complementary effort in this regard. Facilitated by The Open Group, the OTTF is working with governments and industry worldwide to create vendor-neutral open standards and best practices that can be implemented by anyone. Membership continues to grow and includes representation from manufacturers world-wide.

Governments and enterprises alike will benefit from OTTF’s work. Technology purchasers can use the Open Trusted Technology Provider (OTTP) Standard and OTTP Framework best-practice recommendations to guide their strategies. And a wide range of technology vendors can use OTTF approaches to build security and integrity into their end-to-end supply chains. The first version of the OTTPS is focused on mitigating the risk of tainted and counterfeit technology components or products. The OTTF is currently working on a program that will accredit technology providers to the OTTP Standard. We expect to begin pilot testing of the program by the end of 2012.

Don’t misunderstand us: Market leaders like IBM have every incentive to engineer security and quality into our products and services. We continually encourage and support others to do the same.

But we realize that trusted technology — like food safety — can only be achieved if we collaborate with others in industry and in government.  That’s why IBM is pleased to be an active member of the Trusted Technology Forum, and looks forward to contributing to its continued success.

A version of this blog post was originally posted by the IBM Institute for Advanced Security.

Andras Szakal is the Chief Architect and a Senior Certified Software IT Architect for IBM’s Federal Software Sales business unit. His responsibilities include developing e-Government software architectures using IBM middleware and managing the IBM federal government software IT architect team. Szakal is a proponent of service-oriented and web-services-based enterprise architectures, and participates in open standards and open source product development initiatives within IBM.

Filed under OTTF

Take a Lesson from History to Integrate to the Cloud

By E.G. Nadhan, HP

In an earlier post for The Open Group Blog, on the Top 5 tell-tale signs of SOA evolving to the Cloud, I outlined the characteristics of SOA that serve as a foundation for the cloud computing paradigm. The steady growth of service-oriented practices and the continued adoption of cloud computing across enterprises have resulted in the need to integrate out to the cloud. When doing so, we must look back at the evolution of integration solutions, which started with point-to-point solutions and matured into integration brokers and enterprise service buses over the years. We should take a lesson from history to ensure that this time around, when integrating to the cloud, we prevent undue proliferation of point-to-point solutions across the extended enterprise.

We must exercise the same due diligence and governance as is done for services within the enterprise. There is an increased risk of point-to-point solutions proliferating because of the consumerization of IT and the ready availability of such services to individual business units.

Thus, here are 5 steps that need to be taken to ensure a more systemic approach when integrating to cloud-based service providers.

  1. Extend your SOA strategy to the Cloud. Review your current SOA strategy and extend this to accommodate cloud based as-a-service providers.
  2. Extend Governance around Cloud Services.   Review your existing IT governance and SOA governance processes to accommodate the introduction and adoption of cloud based as-a-service providers.
  3. Identify Cloud-based integration models. It is not one-size-fits-all; multiple integration models could apply to a cloud-based service provider, depending upon the enterprise integration architecture. These models include a) point-to-point solutions, b) cloud to on-premise ESB, and c) cloud-based connectors that adopt a service-centric approach to integrate cloud providers with enterprise applications and/or other cloud providers (see the sketch after this list).
  4. Apply right models for right scenarios. Review the scenarios involved and apply the right models to the right scenarios.
  5. Sustain and evolve your services taxonomy. Provide enterprise-wide visibility to the taxonomy of services – both on-premise and those identified for integration with the cloud-based service providers. Continuously evolve these services to integrate to a rationalized set of providers who cater to the integration needs of the enterprise in the cloud.
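
To illustrate integration model c) and step 5 concretely, here is a minimal sketch of a shared, service-centric connector registry; all names are hypothetical. It shows how business units can resolve one governed integration rather than each wiring its own point-to-point call to the same provider.

    from dataclasses import dataclass

    @dataclass
    class CloudService:
        provider: str
        endpoint: str

    class ServiceRegistry:
        """Enterprise-wide taxonomy of approved cloud integrations."""
        def __init__(self):
            self._services = {}

        def register(self, name, service):
            self._services[name] = service

        def lookup(self, name):
            # Every business unit resolves the same governed service,
            # preventing duplicate point-to-point integrations.
            return self._services[name]

    registry = ServiceRegistry()
    registry.register("crm", CloudService("ExampleSaaS", "https://crm.example.com/api"))

    # Two different business units consume the same governed connector:
    sales_crm = registry.lookup("crm")
    support_crm = registry.lookup("crm")
    assert sales_crm is support_crm

The design point is governance: the registry, not the individual business unit, decides which provider and endpoint serve a given capability.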

The biggest challenge enterprises face in driving this systemic adoption of cloud-based services comes from within their own business units. Multiple business units may unknowingly avail themselves of the same services from the same providers in different ways. Therefore, enterprises must ensure that such point-to-point integrations do not proliferate as they did during the era preceding integration brokers.

By adopting service-oriented principles when integrating to the cloud, enterprises can keep history from repeating itself.

How about your enterprise? How are you going about doing this? What is your approach to integrating to cloud service providers?

A version of this post was originally published on HP’s Enterprise Services Blog.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has over 25 years of experience in the IT industry, across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle: @NadhanAtHP.

Filed under Cloud, Cloud/SOA

UNIX® is Still as Relevant as Ever

By Andrew Josey, The Open Group

Despite being as old as the moon landing, the UNIX® operating system is still as relevant today as it was in 1969. At 43, UNIX is older than the PC, the microprocessor and the video display. In fact, few software technologies have since proved more durable or adaptable than the UNIX operating system. The operating system’s durability lies in its stability, and this is why the UNIX programming standard is crucially important. Since 1995, any operating system wishing to use the UNIX trademark has had to conform to the Single UNIX Specification, a standard of The Open Group. In this blog we identify some of the reasons why this standard is still relevant today.

One of the key reasons is that the UNIX standard programming interfaces are an integral and scalable foundation for today’s infrastructure from embedded systems, mobile devices, internet routers, servers and workstations, all the way up to distributed supercomputers. The standard provides portability across related operating systems such as Linux and the BSD systems and many parts of the standard are present in embedded and server systems from HP, Oracle, IBM, Fujitsu, Silicon Graphics and SCO Group as well as desktop systems from Apple.
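
As a small illustration of that portability: the interfaces standardized in the Single UNIX Specification are exposed by many languages. The Python sketch below exercises a few of them through the os module, which wraps the underlying standard calls, and behaves the same way on any conformant system.

    import os

    print(os.uname().sysname)        # wraps uname()

    read_fd, write_fd = os.pipe()    # wraps pipe()
    pid = os.fork()                  # wraps fork()

    if pid == 0:                     # child process
        os.close(read_fd)
        os.write(write_fd, b"hello from the child\n")
        os._exit(0)

    os.close(write_fd)               # parent process
    print(os.read(read_fd, 64).decode(), end="")
    os.waitpid(pid, 0)               # wraps waitpid()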

The Single UNIX Specification provides a level of openness that systems without the standard cannot match, ensuring compatibility across all these platforms. Because the standard establishes a baseline of core functionality above which suppliers can innovate, applications written to the standard can be easily moved across a wide range of platforms. It enables suppliers to focus on offering added value and to guarantee the underlying durability of their products, with the core interfaces standardised. UNIX interfaces have found use on more machines than any other operating system of their kind, demonstrating why having a single, maintained standard is incredibly important. The UNIX standard enables customers to buy with increased confidence, backed by certification.

The Open Group works closely with the community to further the development of standards conformant systems by evolving and maintaining the value of the UNIX standard. This includes making the standard freely available on the web, permitting reuse of the standard documentation in open source projects, providing test tools, and developing the POSIX and UNIX certification programmes.

The open source movement has brought new vitality to UNIX, and its user community is larger than ever, including commercial vendors, operating system developers and an entirely new generation of programmers. Forty years after it was first created, UNIX is still here, long after Buzz Aldrin and Neil Armstrong hung up their moon boots. With the right standards in place to protect it, there's no reason why it shouldn't continue to grow in the future.

 UNIX is a registered trademark of The Open Group.

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF 9.1, ArchiMate 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.

2 Comments

Filed under Standards, Uncategorized, UNIX

SOCCI: Behind the Scenes

By E.G. Nadhan, HP

Cloud Computing standards, like other standards, go through a series of evolutionary phases similar to the ones I outlined in the Top 5 phases of IaaS standards evolution. IaaS standards, in particular, take longer than their SaaS and PaaS counterparts because a balance must be struck between service orientation and the core infrastructure components of Cloud Computing.

This balance is why today's announcement of the release of the industry's first such technical standard, Service Oriented Cloud Computing Infrastructure (SOCCI), is significant.

As one of the co-chairs of this project, I can offer some insight into the manner in which The Open Group went about creating the definition of this standard:

  • Step One: Identify the key characteristics of service orientation, as well as those for the cloud as defined by the National Institute of Standards and Technology (NIST). Analyze these characteristics and the resulting synergies through the application of service orientation in the cloud. Compare and contrast their evolution from the traditional environment through service orientation to the Cloud.
  • Step Two: Identify the key architectural building blocks that enable the Operational Systems Layer of the SOA Reference Architecture and the Cloud Reference Architecture that is in progress.
  • Step Three: Map these building blocks across the architectural layers while representing the multi-faceted perspectives of various viewpoints, including those of the consumer, provider and developer (a toy sketch of such a mapping follows this list).
  • Step Four: Define a Motor Cars in the Cloud business scenario: you, the consumer, are downloading auto-racing videos through an environment managed by a Service Integrator, which requires the use of services for software, platform and infrastructure along with traditional technologies. Provide a behind-the-curtains perspective on the business scenario where the SOCCI building blocks slowly but steadily come to life.
  • Step Five: Identify the key connection points with the other Open Group projects in the areas of architecture, business use cases, governance and security.
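
As a rough, hypothetical illustration of Step Three, the sketch below (in Python) records each building block once, maps it to an architectural layer, and then filters it by stakeholder viewpoint. The building-block names, layer names and viewpoints are invented for the example; they are not taken from the SOCCI standard itself.

    # Hypothetical sketch of Step Three: building blocks mapped to layers,
    # then viewed per stakeholder viewpoint. All names are invented for
    # illustration and are not drawn from the SOCCI standard.
    building_blocks = [
        # (building block, architectural layer, interested viewpoints)
        ("Virtualized Compute", "Operational Systems", {"provider"}),
        ("Service Catalog",     "Services",            {"consumer", "provider"}),
        ("Provisioning API",    "Integration",         {"developer", "provider"}),
        ("Usage Metering",      "Quality of Service",  {"consumer", "provider"}),
    ]

    def view(viewpoint):
        """Return the (block, layer) pairs relevant to one viewpoint."""
        return [(block, layer) for block, layer, viewpoints in building_blocks
                if viewpoint in viewpoints]

    for block, layer in view("consumer"):
        print(f"consumer view: {block} [{layer} layer]")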

The real test of a standard is its breadth of adoption. This standard can be used in multiple ways by the industry at large to ensure that architectural nuances are comprehensively addressed. It can be used to map existing Cloud-based deployments to a standard architectural template, and it can serve as an excellent set of Cloud-based building blocks for building out a new architecture.

Have you taken a look at this standard? If not, please do so. If so, where and how do you think this standard could be adopted? Are there ways that the standard can be improved in future releases to make it better suited for broader adoption? Please let me know your thoughts.

This blog post was originally posted on HP’s Grounded in the Cloud Blog.

HP Distinguished Technologist, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project.

Comments Off

Filed under Cloud, Cloud/SOA, Semantic Interoperability, Service Oriented Architecture, Standards

It’s a mad, mad, mad, mad world!

By Garry Doherty, The Open Group

Why is the world such a crazy place? Why does it seem that everything is crashing down around our ears, bringing chaos, confusion and uncertainty?

Well, there is a very, very simple reason. The universe is entropic*.

Things were pretty simple back when the Big Bang kicked off. All that existed was electromagnetism, gravitation and nuclear interaction but, as the space/time continuum, er… continued, something troublesome came to light.

Just when matter had started to get going nicely and the shape of the universe began to emerge from the Big Bang itself, complexity was born! Nowadays, of course, with complexity being almost as old as the universe itself, it has been around the block a few times and knows a thing or two about getting its own way. But it is possible to fight back; entropy isn't necessarily the only fate that awaits us.

Scientists expect the universe to exist for around 15 billion years… then there's going to be a hard stop — a very, very hard stop! Now, I'm not saying that TOGAF™ can save the universe, but from where I'm sitting, it looks like our best bet at the moment!

*http://hyperphysics.phy-astr.gsu.edu/hbase/therm/entrop.html

Garry Doherty is an experienced product marketer and product manager with a background in the IT and telecommunications industries. Garry is the TOGAF™ Product Manager and the ArchiMate® Forum Director at The Open Group. Garry is based in the U.K.

TOGAF™ will be a topic of discussion at The Open Group Conference, San Diego, Feb. 7-11. Join us for TOGAF™ Camp, best practices, case studies and the future of information security, presented by preeminent thought leaders in the industry.

Comments Off

Filed under Enterprise Architecture, TOGAF®

Underfunding IT security programs

By Jim Hietala, The Open Group

A news story in my local newspaper caught my eye today. "State fails 'hacker' test" was the headline. The state of Colorado (U.S.) had hired an outside security assessment firm to perform penetration tests across various state agencies' IT infrastructure.

The findings from the assessment firm were sadly predictable. The pen testers were able to find their way into many state networks and IT systems, and they found many instances of common security problems, including easily guessable logins and passwords, system default passwords that were never changed, and systems that were never hardened and had unnecessary ports open and services running. The assessment firm was able to access lots of private data and personally identifiable information. The story also had predictable comments from lawmakers expressing indignation at the sorry state of security for Colorado’s IT systems.

The real story, however, was buried in the article. The state agency in Colorado that was tasked with securing state IT systems estimated that the cost of implementing an adequate cybersecurity plan across all state IT systems would be $40M… and the office had a budget of $400K! Is it any wonder they failed their security audit? For every $100 that they need to perform the job adequately, the IT security professionals are getting a whopping $1 to implement their security plans and controls.

With the present economic climate, I’d guess most governmental entities (and probably a lot of businesses as well) are in a similar situation: They don’t have the tax revenues to adequately fund IT security, and therefore can’t effectively protect access to information.

The “reality disconnect” here is that in the U.S., at least 45 of the 50 states have passed something similar to the groundbreaking California data privacy law, SB1386. It calls to mind that old hypocritical saying from parents to children, “Do as we say, not as we do”.

I talk with and work with many security professionals, and I rarely hear one say that things are getting better on the threat side of information security. Underfunding IT security programs is a recipe for disaster.

Situations like this also point towards the need for better alignment of security controls with business objectives, and increased use of metrics in information security. The Open Group’s Security Forum is working on initiatives in this area… Watch this space for announcements of standards that security practitioners will find useful in driving more effective information security management.

An IT security industry veteran, Jim Hietala is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.

Cybersecurity will be a topic of discussion at The Open Group Conference, San Diego, Feb. 7-11. Join us for best practices, case studies and the future of information security, presented by preeminent thought leaders in the industry.

1 Comment

Filed under Cybersecurity

New year, new certification

By Steve Philp, The Open Group

At the beginning of every new calendar year, many organizations discuss with employees specific job-related objectives and career development plans for the next 12 months and beyond. For many individuals, certification is highlighted as something that they should be working towards during the course of the year.

Until recently, virtually all IT certifications were based on an individual's recollection of a body of knowledge and his/her ability to pass a computer-based test. Unfortunately, these certifications do not prove that you can apply that knowledge successfully in practice. To achieve certified status, you usually have to attend the relevant training course or read the appropriate self-study material before taking the examination. However, knowledge in itself is not an accurate measure of competence and, while question-based tests are practical and objective, they are also more susceptible to fraud.

Perhaps a better method of evaluating competence to carry out a specific role is to examine the skills and experience that an individual has demonstrated in his/her work. This type of certification usually requires you to prepare some form of written application, followed by either an individual or a panel interview, which may or may not involve a formal presentation as part of the process.

In recent years, The Open Group has developed the IT Architect Certification (ITAC) and IT Specialist Certification (ITSC) programs, which are based entirely on skills and experience and which assess an individual's "people skills" as well as their technical abilities. There is no test-based examination; instead, applicants must complete a comprehensive application package and then be interviewed by three existing certified board members. Each interview lasts one hour and gives the candidate the opportunity to explain to the interviewer how they have met the conformance requirements of the program.

Many organizations around the world have identified this type of skills- and experience-based program as a necessary part of developing their own internal IT profession. These certifications can also be used in the recruitment process and help to guarantee a consistent, quality-assured service on project proposals, procurements and service level agreements. As a result, achieving this type of IT certification often proves much more rewarding for both individuals and organizations.

Steve Philp is the Marketing Director for the IT Architect and IT Specialist certification programs at The Open Group. Over the past 20 years, Steve has worked predominantly in sales, marketing and general management roles within the IT training industry. Based in Reading, UK, he joined The Open Group in 2008 to promote and develop the organization's skills- and experience-based IT certifications.

3 Comments

Filed under Certifications, Enterprise Architecture

IT: The professionals

By Steve Philp, The Open Group

The European Commission (EC) recently warned of a potential 350,000-plus shortfall in IT practitioners in the region by 2015 and criticised the UK for failing to adequately promote professionalism in the industry. According to EC principal administrator André Richier, although Europe has approximately four million IT practitioners, 50 per cent are not IT degree-qualified.

While the EC raises some interesting points about the education of those entering the field of IT, it's important not to lose sight of what really matters: ensuring IT professionals are continually improving and developing their skills and capabilities.

Developments in technology are moving faster than ever and bringing about major changes in the lives of IT professionals. Today, for instance, it's crucial that IT professionals are not just technical experts but are also able to speak the language of business and to ensure the work of the IT function is closely aligned with business objectives. This is particularly so when it comes to cloud computing, where pressure is mounting on IT teams to clearly articulate the benefits the technology can offer the business.

Business decision makers aren't interested in the details of cloud computing implementation, but they do want to know that IT teams understand their situation and are well placed to solve the challenges they face. In short, they want to know that the important IT decisions being made in their business are in the hands of true professionals.

Certification can act as an important mark of professional standards and inspire confidence by verifying the qualities and skills IT executives have with regard to the effective deployment, implementation and operation of IT solutions. It's these factors that led to the launch of The Open Group's IT Specialist Certification (ITSC) Programme. The programme is peer-reviewed, vendor-neutral and global, ensuring IT executives can use it to distinguish their skills regardless of the organisation they work for. As such, it guarantees a professional standard, assuring business leaders that the IT professionals they have in place can help address the challenges they face. Given the current pressure to do more with less and the rising importance of IT to business, expect to see certification rise in importance in the months ahead.

Steve Philp is the Marketing Director for the IT Architect and IT Specialist certification programs at The Open Group. Over the past 20 years, Steve has worked predominantly in sales, marketing and general management roles within the IT training industry. Based in Reading, UK, he joined The Open Group in 2008 to promote and develop the organization's skills- and experience-based IT certifications.

1 Comment

Filed under Certifications, Enterprise Architecture

The Trusted Technology Forum: Best practices for securing the global technology supply chain

By Mary Ann Davidson, Oracle

Hello, I am Mary Ann Davidson. I am the Chief Security Officer for Oracle, and I want to talk about The Open Group Trusted Technology Provider Framework (O-TTPF). What, you may ask, is that? The Trusted Technology Forum (OTTF) is an effort within The Open Group to develop a body of practices related to software and hardware manufacturing — the O-TTPF — that will address procurers’ supply chain risk management concerns.

That’s a mouthful, isn’t it? Putting it in layman’s terms: if you are an entity purchasing hardware and software for mission-critical systems, you want to know that your supplier has reasonable practices for how they build and maintain their products, practices that address specific (and, I would argue, narrow; more on that below) supply chain risks. The supplier ought to be following “reasonable and prudent” practices to mitigate those risks and ought to be able to tell buyers, “here is what I did.” Better industry practices related to supply chain risk, and more transparency to buyers, are both, in general, good things.

Real-world solutions

One of the things I particularly appreciate is that the O-TTPF is being developed by, among others, actual builders of software and hardware. So many of the “supply chain risk frameworks” I’ve seen to date appear to have been developed by people who have no actual software development and/or hardware manufacturing expertise. I think we all know that even well-intentioned, smart people without direct subject-matter experience who want to “solve a problem” will often not solve the right problem, or will mandate remedies that are ineffective, expensive and lacking the always-needed dose of “real-world pragmatism.” In my opinion, an ounce of “pragmatic and implementable” beats a pound of “in a perfect world with perfect information and unlimited resources” any day of the week.

I know this from my own program management office in software assurance. When my team develops good ideas to improve software, we always vet them with our security leads in development, to try to achieve consensus and buy-in on some key questions:

  • Are our ideas good?
  • Can they be implemented? Specifically, is our proposal the best way to solve the stated problem?
  • Given the differences in development organizations and differences in technology, is there a body of good practices that development can draw from, rather than requiring a single practice for everyone?

That last point is a key one. There is almost never a single “best practice” that everybody on the planet should adhere to, in almost any area of life. The reality is that there are often a number of ways to reach a positive outcome, and the nature of business – particularly the competitiveness and innovation that enable business – depends on flexibility. The OTTF is outcomes-focused and “body of practice” oriented, because there is no single best way to build hardware and software, and there is no single, monolithic supply chain risk management practice that will work for everybody or is appropriate for everybody.

It’s perhaps a stretch, but consider baking a pie. There is – last time I checked – no International Organization for Standardization (ISO) standard for how to bake a cherry pie (and God forbid there ever is one). Some people cream butter and sugar together before adding flour. Other people dump everything in a food processor. (I buy pre-made piecrusts and skip this step.) Some people add a little liqueur to the cherries for a kick; other people just open a can of cherries and dump it in the piecrust. There are no standards organization smackdowns over two-crust vs. one-crust pies, or over whether to use a crumble on the top or a pastry crust to constitute a “standards-compliant cherry pie.” Pie consumers want to know that the baker used reasonable ingredients – piecrust and cherries – that none of the ingredients were bad and that the baker didn’t allow any errant flies to wander into the dough or the filling. But the buyer should not be specifying exactly how the baker makes the pie or exactly how they keep flies out of the pie (or they can bake it themselves). The only thing that prescribing a single “best” way to bake a cherry pie will lead to is a chronic shortage of really good cherry pies and a glut of tasteless and mediocre ones.

Building on standards

Another positive aspect of the O-TTPF is that it is intended to build upon and incorporate existing standards – such as the international Common Criteria – rather than replace them. Incorporating and referring to existing standards is important because supply chain risk is not the same thing as software assurance — though they are related. For example, many companies evaluate one or more products, but not all products they produce. Therefore, even to the extent their CC evaluations incorporate a validation of the “security of the software development environment,” it is related to a product, and not necessarily to the overall corporate development environment. More importantly, one of the best things about the Common Criteria is that it is an existing ISO standard (ISO/IEC 15408:2005) and, thanks to the Common Criteria Recognition Arrangement (CCRA), a vendor can do a single evaluation accepted in many countries. Having to reevaluate the same product in multiple locations – or having to do a “supply chain certification” that covers the same sorts of areas that the CC covers – would be wasteful and expensive. The O-TTPF builds on but does not replace existing standards.

Another positive: the focus I see on “solving the right problems.” Too many supply chain risk discussions fail to define “supply chain risk” and, in particular, treat every possible concern with a product as a supply chain risk. (If I buy a car that turns out to be a lemon, is that a supply chain risk problem? Or just a lemon?) For example, consider a system integrator who takes a bunch of components and glues them together without delivering the resultant system in a locked-down configuration. The weak configuration is not, per se, a supply chain risk, though arguably it is poor security practice, and I’d also say it reflects weak software assurance practice. With regard to the OTTF, we defined a “supply chain attack” as (paraphrased) an attempt to deliberately subvert the manufacturing process, rather than the exploitation of defects that happened to be in the product. Every product has defects, some are security defects, and some of those are caused by coding errors. That is profoundly different from someone putting a back door in code. The former is a software assurance problem; the latter is a supply chain attack.

Why does this matter? Because supply chain risk – real supply chain risk, not every single concern either a vendor or a customer could have about a product – needs focus to be addressed. As has been said about priorities, if everything is priority number one, then nothing is. In particular, if everything is “a supply chain risk,” then we cannot focus our efforts and home in on a reasonable, achievable, practical and implementable set – “set” meaning “multiple avenues that lead to positive outcomes” – of practices that can lead to better supply chain practices for all, and to a higher degree of confidence among purchasers.

Considering the nature of the challenges that the OTTF is trying to address, and the nature of the challenges our industry faces, I am pleased that Oracle is participating in the OTTF. I look forward to working with peers – and with consumers of technology – to help improve everyone’s supply chain risk management practices and the confidence of consumers in our technologies.

Mary Ann Davidson is the Chief Security Officer at Oracle Corporation, responsible for Oracle product security as well as security evaluations, assessments and incident handling. She has been named one of Information Security’s top five “Women of Vision,” is a Fed100 award recipient from Federal Computer Week, and was recently named to the Information Systems Security Association Hall of Fame. She has testified on the issue of cybersecurity multiple times to the US Congress. Ms. Davidson has a B.S.M.E. from the University of Virginia and an M.B.A. from the Wharton School of the University of Pennsylvania. She has also served as a commissioned officer in the U.S. Navy Civil Engineer Corps. She is active in The Open Group Trusted Technology Forum and writes a blog at Oracle.

6 Comments

Filed under Cybersecurity, Supply chain risk

The Newest from SOA: The SOA Ontology Technical Standard

By Heather Kreger, IBM

The Open Group just announced the availability of The Open Group SOA Ontology Technical Standard.

Ontology?? Sounds very ‘semantic Web,’ doesn’t it? Just smacks of reasoning engines. What on earth do architects using SOA want with reasoning engines?

Actually, ontologies are misunderstood — an ontology is simply the definition of a set of concepts, and of the relationships between them, for a particular domain — in this case, the domain is SOA.

They don’t HAVE to be used for reasoning… or for the semantic Web. And they are more than a simple glossary that defines terms, because they also define the relationships between those terms — something we thought important for SOA. It’s also worth noting that they are more formal than reference models, usually by providing representations in OWL (just in case you do want to use popular ontology tools and reasoners).
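
To make “concepts plus relationships” concrete, here is a toy Python sketch that stores facts as subject-predicate-object triples, which is the same shape of data an OWL representation captures. The concept and relationship names are simplified stand-ins, not the actual terms defined by the SOA Ontology standard.

    # Toy illustration of "an ontology is concepts plus relationships":
    # facts as (subject, predicate, object) triples with a trivial query.
    # Names are simplified stand-ins for the real SOA Ontology terms.
    triples = {
        ("Service",            "is_a",       "Element"),
        ("ServiceComposition", "is_a",       "Service"),
        ("Process",            "is_a",       "ServiceComposition"),
        ("ServiceContract",    "constrains", "Service"),
    }

    def related(subject, predicate):
        """Everything the subject is linked to via the given relationship."""
        return {obj for subj, pred, obj in triples
                if subj == subject and pred == predicate}

    # Unlike a flat glossary, the relationships can be traversed:
    print(related("Process", "is_a"))                # {'ServiceComposition'}
    print(related("ServiceContract", "constrains"))  # {'Service'}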

What would an architect do with THIS ontology?

It can be used simply to read and understand the key concepts of SOA and, more importantly, to establish a set of definitions and a shared UNDERSTANDING of key concepts that you can agree to use with others in your company and between organizations. Making sure you are “speaking the same language” is essential for any architect to communicate effectively with IT, business and marketing professionals within the enterprise, as well as with vendors and suppliers outside it. This common language helps ensure that you can ask the right questions and interpret the answers you get unambiguously.

It can also be used as a basis for the models of an SOA solution. In fact, this is happening in S-RAMP, the SOA repository standard under development in OASIS, where the SOA Ontology has been used as the foundational business model for registry/repository integration.

The Ontology can also be augmented with additional, related domain-specific ontologies; for example, for Governance or Business Process Management, or even for a vertical industry like retail, where ARTS is developing service models. In fact, we on the SOA Ontology project tried to define only the minimum, absolutely core concepts needed for SOA, allowing other domain experts to define additional details for Policy, Process, Service Contract, etc.

This Ontology was developed to be consistent with existing and developing SOA standards, including OMG’s SoaML and BPMN, and with those of The Open Group SOA Work Group: the SOA Governance Framework, OSIMM and the SOA Reference Architecture. It might seem this standard should have been developed sooner, but the good news is that it is grounded in extensive real-world experience of developing, deploying and communicating about SOA solutions over the past five years. The Ontology reflects lessons learned about which terms NOT to use to avoid confusion, and about how best to distinguish among common and often overused concepts such as service composition, process, service contract and policy, and their roles in SOA.

Have a look at the new SOA Ontology and see if it can help you in your communications about SOA. It’s available free at this link: http://www.opengroup.org/bookstore/catalog/c104.htm


Heather Kreger is IBM’s lead architect for Smarter Planet, Policy, and SOA Standards in the IBM Software Group, with 15 years of standards experience. She has led the development of standards for Cloud, SOA, Web services, Management and Java in numerous standards organizations, including W3C, OASIS, DMTF, and Open Group. Heather is currently co-chair for The Open Group’s SOA Work Group and liaison for the Open Group SOA and Cloud Work Groups to ISO/IEC JTC1 SC7 SOA SG and INCITS DAPS38 (US TAG to ISO/IEC JTC 1 SC38). Heather is also the author of numerous articles and specifications, as well as the book Java and JMX, Building Manageable Systems, and most recently was co-editor of Navigating the SOA Open Standards Landscape Around Architecture.

6 Comments

Filed under Cloud/SOA