
The Open Group Summit Amsterdam – ArchiMate® Day – May 14, 2014

By Andrew Josey, Director of Standards, The Open Group

The Open Group Summit 2014 Amsterdam features an all-day track on the ArchiMate® modeling language, followed by an ArchiMate Users Group meeting in the evening. The meeting attendees include the core developers of the ArchiMate language, users and tool developers.

The sessions include tutorials, a panel session on the past, present and future of the language and case studies. The Users Group meeting follows in the evening. The evening session is free and open to all — whether attending the rest of the conference or not — and starts at 6pm with free beer and pizza!

The timetable for ArchiMate Day is as follows:

• Tutorials (09:00 – 10:30), Henry Franken, CEO, BiZZdesign, and Alan Burnett, COO & Consulting Head, Corso

Henry Franken will show how the TOGAF® and ArchiMate® standards can be used to provide an actionable EA capability. Alan Burnett will present on how the ArchiMate language can be extended to support roadmapping, which is a fundamental part of strategic planning and enterprise architecture.

• Panel Discussion (11:00 – 12:30), Moderator: Henry Franken, Chair of The Open Group ArchiMate Forum

The topic for the Panel Discussion is the ArchiMate Language — Past, Present and Future. The panel comprises key developers and users of the ArchiMate® language, including Marc Lankhorst and Henk Jonkers from the ArchiMate Core team, Jan van Gijsen from SNS REAAL, a Dutch financial institution, and Gerben Wierda, author of Mastering ArchiMate. The session will include brief updates on current status from the panel members (30 minutes) and a 60-minute panel discussion with questions from the moderator and audience.

• Case Studies (14:00 – 16:00), Geert Van Grootel, Senior Researcher, Department of Economy, Science & Innovation, Flemish Government; Patrick Derde, Consultant, Envizion; Pieter De Leenheer, Co-Founder and Research Director, Collibra; Walter Zondervan, Member, Architectural Board, ASL-BiSL Foundation; and Adina Aldea, BiZZdesign.

There are three case studies:

Geert Van Grootel, Patrick Derde, and Pieter De Leenheer will present on how you can manage your business metadata by means of data model patterns and an Integrated Information Architecture approach, supported by a standard formal architecture language, ArchiMate.

Walter Zondervan will present an ArchiMate reference architecture for governance, based on BiSL.

Adina Aldea will present on how high-level strategic models can be used and modelled based on the Strategizer method.

• ArchiMate Users Group Meeting (18:00 – 21:00)

The evening session is free and open to all — whether attending the rest of the conference or not. It will start at 6pm with free beer and pizza. Invited speakers for the Users Group Meeting include Andrew Josey, Henk Jonkers, Marc Lankhorst and Gerben Wierda:

- Andrew Josey will present on the ArchiMate certification program and adoption of the language
- Henk Jonkers will present on modeling risk and security
- Marc Lankhorst will present about capability modeling in ArchiMate
- Gerben Wierda will present about relating ArchiMate and BPMN

Why should you attend?
• Spend time interacting directly with other ArchiMate users and tool providers in a relaxed, engaging environment
• Hear how ArchiMate can be used to develop solutions to common industry problems
• Learn about the future directions and meet with key users and developers of the language and tools
• Interact with peers to broaden your expertise and knowledge in the ArchiMate language

For detailed information, see the ArchiMate Day agenda at http://www.opengroup.org/amsterdam2014/archimate or our YouTube event video at http://youtu.be/UVARza3uZZ4

How to register

Registration for the ArchiMate® Users Group meeting is independent of The Open Group Conference registration. There is no fee but registration is required. Please register here, select one-day pass for pass type, insert the promotion code (AMST14-AUG), tick the box against Wednesday May 14th and select ArchiMate Users Group from the conference session list. You will then be registered for the event and should not be charged. Please note that this promotion code should be used only by those attending just the evening meeting from 6:00 p.m. Anyone attending the conference or just the ArchiMate Day will have to pay the applicable registration fee. User Group members who want to attend The Open Group conference and who are not members of The Open Group can register using the affiliate code AMST14-AFFIL.

 Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF® 9.1, ArchiMate 2.1, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.



ArchiMate® Q&A with Phil Beauvoir

By The Open Group

The Open Group’s upcoming Amsterdam Summit in May will feature a full day on May 14 dedicated to ArchiMate®, an open and independent modeling language for Enterprise Architecture, supported by tools that allow Enterprise Architects to describe, analyze and visualize relationships among business domains in an unambiguous way.

One of the tools developed to support ArchiMate is Archi, a free, open-source tool created by Phil Beauvoir at the University of Bolton in the UK as part of a Jisc-funded Enterprise Architecture project that ran from 2009-2012. Since its development, Archi has grown from a relatively small, home-grown tool to become a widely used open-source resource that averages 3000 downloads per month and whose community ranges from independent practitioners to Fortune 500 companies. Here we talk with Beauvoir about how Archi was developed, the problems inherent in sustaining an open source product, its latest features and whether it was named after the Archie comic strip.

Beauvoir will be a featured speaker during the ArchiMate Day in Amsterdam.

Tell us about the impetus for creating the Archi tool and how it was created…
My involvement with the ArchiMate language has mainly been through the development of the software tool, Archi. Archi has, I believe, acted as a driver and as a hub for activity around the ArchiMate language and Enterprise Architecture since it was first created.

I’ll tell you the story of how Archi came about. Let’s go back to the end of 2009. At that point, I think ArchiMate and Enterprise Architecture were probably being used quite extensively in the commercial sector, especially in The Netherlands. The ArchiMate language had been around for a while at that point but was a relatively new thing to many people, at least here in the UK. If you weren’t part of the EA scene, it would have been a new thing to you. In the UK, it was certainly new for many in higher education and universities, which is where I come in.

Jisc, the UK funding body, funded a number of programs in higher education exploring digital technologies and other initiatives. One of the programs being funded was to look at how to improve systems using Enterprise Architecture within the university sector. Some of the universities had already been led to ArchiMate and Enterprise Architecture and were trying it out for themselves – they were new to it and, of course, one of the first things they needed were tools. At that time, and I think it’s still true today, a lot of the tools were quite expensive. If you’re a big commercial organization, you might be able to afford the licensing costs for tools and support, but for a small university project it can be prohibitive, especially if you’re just dipping your toe into something like this. So some colleagues within Jisc and the university I worked at said, ‘well, what about creating a small, open source project tool which isn’t over-complicated but does enough to get people started in ArchiMate? And we can fund six months of money to do this as a proof of concept tool’.

That takes us into 2010, when I was working for the university that was approached to do this work. After six months, by June 2010, I had created the first 1.0 version of Archi and it was (and still is) free, open source and cross-platform. Some of the UK universities said ‘well, that’s great, because now the barrier to entry has been lowered, we can use this tool to start exploring the ArchiMate language and getting on board with Enterprise Architecture’. That’s really where it all started.

So some of the UK universities that were exploring ArchiMate and Enterprise Architecture had a look at this first version of Archi, version 1.0, and said ‘it’s good because it means that we can engage with it without committing at this stage to the bigger tooling solutions.’ You have to remember, of course, that universities were (and still are) a bit strapped for cash, so that’s a big issue for them. At the time, and even now, there really aren’t any other open-source or free tools doing this. That takes us to June 2010. At this point we got some more funding from the Jisc, and kept on developing the tool and adding more features to it. That takes us through 2011 and then up to the end of 2012, when my contract came to an end.

Since the official funding ended and my contract finished, I’ve continued to develop Archi and support the community that’s built up around it. I had to think about the sustainability of the software beyond the project, and sometimes this can be difficult, but I took it upon myself to continue to support and develop it and to engage with the Archi/ArchiMate community.

How did you get involved with The Open Group and bringing the tool to them?
I think it was inevitable really due to where Archi originated, and because the funding came from the Jisc, and they are involved with The Open Group. So, I guess The Open Group became aware of Archi through the Jisc program and then I became involved with the whole ArchiMate initiative and The Open Group. I think The Open Group is in favor of Archi, because it’s an open source tool that provides a neutral reference implementation of the ArchiMate language. When you have an open standard like ArchiMate, it’s good to have a neutral reference model implementation.

How is this tool different from other tools out there and what does it enable people to do?
Well, firstly Archi is a tool for modeling Enterprise Architecture using the ArchiMate language and notation, but what really makes it stand out from the other tools is its accessibility and the fact that it is free, open source and cross-platform. It can do a lot of, if not all of, the things that the bigger tools provide without any financial or other commitment. However, free is not much use if there’s no quality. One thing I’ve always strived for in developing Archi is to ensure that even if it only does a few things compared with the bigger tools, it does those things well. I think with a tool that is free and open-source, you have a lot of support and good-will from users who provide positive encouragement and feedback, and you end up with an interesting open development process.

I suppose you might regard Archi’s relationship to the bigger ArchiMate tools in the same way as you’d compare Notepad to Microsoft Word. Notepad provides the essential writing features, but if you want to go for the full McCoy then you go and buy Microsoft Word. The funny thing is, this is where Archi was originally targeted – at beginners, getting people to start to use the ArchiMate language. But then I started to get emails — even just a few months after its first release — from big companies, insurance companies and the like saying things like ‘hey, we’re using this tool and it’s great, and ‘thanks for this, when are we going to add this or that feature?’ or ‘how many more features are you going to add?’ This surprised me somewhat since I wondered why they hadn’t invested in one of the available commercial tools. Perhaps ArchiMate, and even Enterprise Architecture itself, was new to these organizations and they were using Archi as their first software tool before moving on to something else. Having said that, there are some large organizations out there that do use Archi exclusively.

Which leads to an interesting dilemma — if something is free, how do you continue developing and sustaining it? This is an issue that I’m contending with right now. There is a PayPal donation button on the front page of the website, but the software is open source and, in its present form, will remain open source; but how do you sustain something like this? I don’t have the complete answer right now.

Given that it’s a community product, it helps that the community contributes ideas and develops code, but at the same time you still need someone to give their time to coordinate all of the activity and support. I suppose the classic model is one of sponsorship, but we don’t have that right now, so at the moment I’m dealing with issues around sustainability.

How much has the community contributed to the tool thus far?
The community has contributed a lot in many different ways. Sometimes a user might find a bug and report it or they might offer a suggestion on how a feature can be improved. In fact, some of the better features have been suggested by users. Overall, community contributions seem to have really taken off more in the last few months than in the whole lifespan of Archi. I think this may be due to the new Archi website and a lot more renewed activity. Lately there have been more code contributions, corrections to the documentation and user engagement in the future of Archi. And then there are users who are happy to ask ‘when is Archi going to implement this big feature, and when is it going to have full support for repositories?’ and of course they want this for free. Sometimes that’s quite hard to accommodate, because you think ‘sure, but who’s going to do all this work and contribute the effort.’ That’s certainly an interesting issue for me.

How many downloads of the tool are you getting per month? Where is it being used?
At the moment we’re seeing around 3,000 downloads a month of the tool — I think that’s a lot actually. Also, I understand that some EA training organizations use Archi for their ArchiMate training, so there are quite a few users there, as well.

The number one country for downloading the app and visiting the website is the Netherlands, followed by the UK and the United States. In the past three months, the UK and The Netherlands have been about equal in numbers in their visits to the website and downloads, followed by the United States, France, Germany, Canada, then Australia, Belgium, and Norway. We have some interest from Russia too. Sometimes it depends on whether ArchiMate or Archi is in the news at any given time. I’ve noticed that when there’s a blog post about ArchiMate, for example, you’ll see a spike in the download figures and the number of people visiting the website.

How does the tool fit into the overall schema of the modeling language?
It supports all of the ArchiMate language concepts, and I think it offers the core functionality you’d want from an ArchiMate modeling tool — the ability to create diagrams, viewpoints, analysis of model objects, reporting, color schemes and so on. Of course, the bigger ArchiMate tools will let you manipulate the model in more sophisticated ways and create more detailed reports and outputs. This is an area that we are trying to improve, and the people who are now actively contributing to Archi are full-time Enterprise Architects who are able to contribute to these areas. For example, we have a user and contributor from France, and he and his team use Archi, and so they are able to see first-hand where Archi falls short and they are able to say ‘well, OK, we would like it to do this, or that could be improved,’ so now they’re working towards strengthening any weak areas.

How did you come up with the name?
What happens is you have pet names for projects and I think it just came about that we started calling it “Archie,” like the guy’s name. When it was ready to be released I said, ‘OK, what should we really call the app?’ and by that point everyone had started to refer to it as “Archie.” Then somebody said ‘well, everybody’s calling it by that name so why don’t we just drop the “e” from the name and go with that?’ – so it became “Archi.” I suppose we could have spent more time coming up with a different name, but by then the name had stuck and everybody was calling it that. Funnily enough, there’s a comic strip called ‘Archie’ and an insurance company that was using the software at the time told me that they’d written a counterpart tool called ‘Veronica,’ named after a character in the comic strip.

What are you currently working on with the tool?
For the last few months, I’ve been adding new features – tweaks, improvements, tightening things up, engaging with the user community, listening to what’s needed and trying to implement these requests. I’ve also been adding new resources to the Archi website and participating on social media like Twitter, spreading the word. I think the use of social media is really important. Twitter, the User Forums and the Wikis are all points where people can provide feedback and engage with me and other Archi developers and users. On the development side of things, we host the code at GitHub, and again that’s an open resource that users and potential developers can go to. I think the key words are ‘open’ and ‘community driven.’ These social media tools, GitHub and the forums all contribute to that. In this way everyone, from developer to user, becomes a stakeholder – everyone can play their part in the development of Archi and its future. It’s a community product and my role is to try and manage it all.

What will you be speaking about in Amsterdam?
I think the angle I’m interested in is what can be achieved by a small number of people taking the open source approach to developing software and building and engaging with the community around it. For me, the interesting part of the Archi story is not so much about the software itself and what it does, but rather the strong community that’s grown around it, the extent of the uptake of the tool and the way in which it has enabled people to get on board with Enterprise Architecture and ArchiMate. It’s the accessibility and agility of this whole approach that I like and also the activity and buzz around the software and from the community – that for me is the interesting thing about this process.

For more information on ArchiMate, please visit:
http://www.opengroup.org/subjectareas/enterprise/archimate

For information on the Archi tool, please visit: http://www.archimatetool.com/

For information on joining the ArchiMate Forum, please visit: http://www.opengroup.org/getinvolved/forums/archimate

Phil Beauvoir has been developing, writing, and speaking about software tools and development for over 25 years. He was Senior Researcher and Developer at Bangor University, and, later, the Institute for Educational Cybernetics at Bolton University, both in the UK. During this time he co-developed a peer-to-peer learning management and groupware system, a suite of software tools for authoring and delivery of standards-compliant learning objects and meta-data, and tooling to create IMS Learning Design compliant units of learning. In 2010, working with the Institute for Educational Cybernetics, Phil created the open source ArchiMate Modelling Tool, Archi. Since 2013 he has been curating the development of Archi independently. Phil holds a degree in Medieval English and Anglo-Saxon Literature.



The Financial Incentive for Health Information Exchanges

By Jim Hietala, VP, Security, The Open Group

Health IT professionals have always known that interoperability would be one of the most important aspects of the Affordable Care Act (ACA). Now doctors have a financial incentive to be proactive in exchanging information between computer systems.

According to a recent article in MedPage Today, doctors are now “clamoring” for access to patient information ahead of the deadlines for the government’s “meaningful use” program. Doctors and hospitals will get hit with fines for not knowing about patients’ health histories, for patient readmissions and unnecessary retesting. “Meaningful use” refers to provisions in the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act, which authorized incentive payments through Medicare and Medicaid to clinicians and hospitals that use electronic health records in a meaningful way that significantly improves clinical care.
Doctors who accept Medicare will find themselves penalized for not adopting or successfully demonstrating meaningful use of a certified electronic health record (EHR) technology by 2015. Health professionals’ Medicare physician fee schedule amount for covered professional services will be adjusted down by 1% each year for certain categories. If less than 75% of Eligible Professionals (EPs) have become meaningful users of EHRs by 2018, the adjustment will change by 1 percentage point each year to a maximum of 5% (95% of the Medicare covered amount).
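
As a rough illustration of the arithmetic behind that schedule, here is a minimal sketch, assuming the reduction starts at 1% in 2015 and grows by one percentage point per year up to the 5% cap described above; the actual CMS rules have more categories and exceptions than this simple model captures.

```python
# Illustrative only: a simplified model of the Medicare fee-schedule
# adjustment described above. Assumes the reduction starts at 1% in 2015
# and grows by one percentage point per year, capped at 5% (i.e., 95% of
# the covered amount). Real CMS rules are more nuanced than this.

PENALTY_START_YEAR = 2015
MAX_PENALTY = 0.05  # at most 5%, i.e., 95% of the Medicare covered amount

def fee_adjustment(year: int) -> float:
    """Return the fraction of the Medicare covered amount actually paid."""
    if year < PENALTY_START_YEAR:
        return 1.0
    penalty = min(0.01 * (year - PENALTY_START_YEAR + 1), MAX_PENALTY)
    return 1.0 - penalty

for year in range(2014, 2021):
    print(year, f"{fee_adjustment(year):.0%} of covered amount")
```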

With the stick, there’s also a carrot. The Medicare and Medicaid EHR Incentive Programs provide incentive payments to eligible professionals, eligible hospitals and critical access hospitals (CAHs) as they adopt, implement, upgrade or demonstrate meaningful use of certified EHR technology. Eligible professionals can receive up to $44,000 through the Medicare EHR Incentive Program and up to $63,750 through the Medicaid EHR Incentive Program.

According to HealthIT.Gov, interoperability is essential for applications that interact with users (such as e-prescribing), systems that communicate with each other (such as messaging standards), information processes and management (such as health information exchange), and how consumer devices integrate with other systems and applications (such as tablets, smartphones and PCs).

The good news is that more and more hospitals and doctors are participating in data exchanges and sharing patient information. On January 30th, the eHealth Exchange, formerly the Nationwide Health Information Network and now operated by Healtheway, reported a surge in network participation numbers and increases in secure online transactions among members.

According to the news release, membership in the eHealth Exchange is currently pegged at 41 participants who together represent some 800 hospitals, 6,000 mid-to-large medical groups, 800 dialysis centers and 850 retail pharmacies nationwide. Some of the earliest members to sign on with the exchange were the Veterans Health Administration, Department of Defense, Kaiser Permanente, the Social Security Administration and Dignity Health.

While the progress in health information exchanges is good, there is still much work to do in defining standards, so that the right information is available at the right time and place to enable better patient care. Devices are emerging that can capture continuous information on our health status. The information captured by these devices can enable better outcomes, but only if the information is made readily available to medical professionals.

The Open Group recently formed The Open Group Healthcare Forum, which focuses on bringing Boundaryless Information Flow™ to the healthcare industry, enabling data to flow more easily throughout the complete healthcare ecosystem. By leveraging the discipline and principles of Enterprise Architecture, including TOGAF®, an Open Group standard, the forum aims to develop standardized vocabulary and messaging that will result in higher quality outcomes, streamlined business practices and innovation within the industry.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security, risk management and healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.



Q&A with Jim Hietala on Security and Healthcare

By The Open Group

We recently spoke with Jim Hietala, Vice President, Security for The Open Group, at the 2014 San Francisco conference to discuss upcoming activities in The Open Group’s Security and Healthcare Forums.

Jim, can you tell us what the Security Forum’s priorities are going to be for 2014 and what we can expect to see from the Forum?

In terms of our priorities for 2014, we’re continuing to do work in Security Architecture and Information Security Management. In the area of Security Architecture, the big project that we’re doing is adding security to TOGAF®, so we’re working on the next version of the TOGAF standard and specification and there’s an active project involving folks from the Architecture Forum and the Security Forum to integrate security into and stripe it through TOGAF. So, on the Security Architecture side, that’s the priority. On the Information Security Management side, we’re continuing to do work in the area of Risk Management. We introduced a certification late last year, the OpenFAIR certification, and we’ll continue to do work in the area of Risk Management and Risk Analysis. We’re looking to add a second level to the certification program, and we’re doing some other work around the Risk Analysis standards that we’ve introduced.

The theme of this conference was “Towards Boundaryless Information Flow™” and many of the tracks focused on the convergence of Big Data, mobile and Cloud, also known as Open Platform 3.0. How are those things affecting the realm of security right now?

I think they’re just beginning to. Cloud—obviously the security issues around Cloud have been here as long as Cloud has been over the past four or five years. But if you look at things like the Internet of Things and some of the other things that comprise Open Platform 3.0, the security impacts are really just starting to be felt and considered. So I think information security professionals are really just starting to wrap their hands around, what are those new security risks that come with those technologies, and, more importantly, what do we need to do about them? What do we need to do to mitigate risk around something like the Internet of Things, for example?

What kind of security threats do you think companies need to be most worried about over the next couple of years?

There’s a plethora of things out there right now that organizations need to be concerned about. Certainly advanced persistent threat, the idea that maybe nation states are trying to attack other nations, is a big deal. It’s a very real threat, and it’s something that we have to think about – looking at the risks we’re facing, exactly what is that adversary and what are they capable of? I think profit-motivated criminals continue to be on everyone’s mind with all the credit card hacks that have just come out. We have to be concerned about cyber criminals who are profit motivated and who are very skilled and determined and obviously there’s a lot at stake there. All of those are very real things in the security world and things we have to defend against.

The Security track at the San Francisco conference focused primarily on risk management. How can companies better approach and manage risk?

As I mentioned, we did a lot of work over the last few years in the area of Risk Management, and the FAIR Standard that we introduced breaks down risk into what’s the frequency of bad things happening and what’s the impact if they do happen? So I would suggest taking that sort of approach, using something like the Risk Taxonomy Standard and the Risk Analysis Standard that we’ve introduced, and really looking at what are the critical assets to protect, who’s likely to attack them, and what’s the probable frequency of attacks that we’ll see? And then looking at the impact side, what’s the consequence if somebody successfully attacks them? That’s really the key—breaking it down, looking at it that way and then taking the right mitigation steps to reduce risk on those assets that are really important.
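
To make that frequency-and-impact breakdown concrete, here is a minimal back-of-the-envelope sketch; it is not the Open FAIR standard itself, and the asset names and numbers are hypothetical, but it shows the basic idea of multiplying how often bad things happen by what they cost when they do.

```python
# Hypothetical numbers, for illustration only: annualized loss estimated as
# loss event frequency (events per year) x loss magnitude ($ per event),
# in the spirit of the frequency/impact breakdown described above.

assets = {
    # asset name: (estimated events per year, estimated loss per event in $)
    "customer database": (0.2, 500_000),
    "public web site":   (2.0,  40_000),
}

for asset, (frequency, magnitude) in assets.items():
    annualized_loss = frequency * magnitude
    print(f"{asset}: roughly ${annualized_loss:,.0f} expected loss per year")
```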

You’ve recently become involved in The Open Group’s new Healthcare Forum. Why a healthcare vertical forum for The Open Group?

In the area of healthcare, what we see is that there’s just a highly fragmented aspect to the ecosystem. You’ve got healthcare information that’s captured in various places, and the information doesn’t necessarily flow from provider to payer to other providers. In looking at industry verticals, the healthcare industry seemed like an area that really needed a lot of approaches that we bring from The Open Group—TOGAF and Enterprise Architecture approaches that we have.

If you take it up to a higher level, it really needs the Boundaryless Information Flow that we talk about in The Open Group. We need to get to the point where our information as patients is readily available in a secure manner to the people who need to give us care, as well as to us because in a lot of cases the information exists as islands in the healthcare industry. In looking at healthcare it just seemed like a natural place where, in our economies – and it’s really a global problem – a lot of money is spent on healthcare and there’s a lot of opportunities for improvement, both in the economics but in the patient care that’s delivered to individuals through the healthcare system. It just seemed like a great area for us to focus on.

As the new Healthcare Forum kicks off this year, what are the priorities for the Forum?

The Healthcare Forum has just published a whitepaper summarizing the findings of the workshop that we held in Philadelphia last summer. We’re also working on a treatise, which will outline our views about the healthcare ecosystem and where standards and architecture work is most needed. We expect to have that document produced over the next couple of months. Beyond that, we see a lot of opportunities for doing architecture and standards work in the healthcare sector, and our membership is going to determine which of those areas to focus on and which projects to initiate first.

For more on The Open Group Security Forum, please visit http://www.opengroup.org/subjectareas/security. For more on The Open Group Healthcare Forum, see http://www.opengroup.org/getinvolved/industryverticals/healthcare.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security, risk management and healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.



What the C-Suite Needs to Prepare for in the Era of BYO Technology

By Allen Brown, President and CEO, The Open Group

IT today is increasingly being driven by end-users. This phenomenon, known as the “consumerization of IT,” is a result of how pervasive technology has become in daily life. Years ago, IT was primarily the realm of technologists and engineers. Most people, whether in business settings or at home, did not have the technical know-how to source their own applications, write code for a web page or even set up their own workstation.

Today’s technologies are more user-friendly than ever and they’ve become ubiquitous. The introduction of smartphones and tablets has ushered in the era of “BYO” with consumers now bringing the technologies they like and are most comfortable working with into the workplace, all with the expectation that IT will support them. The days where IT decided what technologies would be used within an organization are no more.

At the same time, IT has lost another level of influence due to Cloud computing and Big Data. Again, the “consumers” of IT within the enterprise—line of business managers, developers, marketers, etc.—are driving these changes. Just as users want the agility offered by the devices they know and love, they also want to be able to buy and use the technologies they need to do their job and do it on the fly rather than wait for an IT department to go through a months’ (or years’) long process of requisitions and approvals. And it’s not just developers or IT staff that are sourcing their own applications—marketers are buying applications with their credit cards, and desktop users are sharing documents and spreadsheets via web-based office solutions.

When you can easily buy the processing capacity you need when you need it with your credit card or use applications online for free, why wait for approval?

The convergence of this next era of computing – we call it Open Platform 3.0™ – is creating a Balkanization of the traditional IT department. IT is no longer the control center for technology resources. As we’ve been witnessing over the past few years and as industry pundits have been prognosticating, IT is changing to become more of a service-based command central than a control center from which IT decisions are made.

These changes are happening within enterprises everywhere. The tides of change being brought about by Open Platform 3.0 cannot be held back. As I mentioned in my recent blog on Future Shock and the need for agile organizations, adaptation will be key for companies’ survival as constant change and immediacy become the “new normal” for how they operate.

These changes will, in fact, be positive for most organizations. As technologies converge and users drive the breakdown of traditional departmental silos and stovepipes, organizations will become more interoperable. More than ever, new computing models are driving the industry toward The Open Group’s vision of Boundaryless Information Flow™ within organizations. But the changes resulting from consumer-led IT are not just the problem of the IT department. They are on track to usher in a whole host of organizational changes that all executives must not only be aware of, but must also prepare and plan for.

One of the core issues around consumerized IT that must be considered is the control of resources. Resource planning in terms of enabling business processes through technology must now be the concern of every person within the C-Suite, from the CEO to the CIO and even the CMO.

Take, for example, the financial controls that must be considered in a BYO world. This issue, in particular, hits two very distinct centers of operations most closely—the offices of both the CIO and the CFO.

In the traditional IT paradigm, technology has been a cost center for most businesses, with CFOs usually having the final say in what technologies can be bought and used based on budget. There have been very specific controls placed on purchases, each leaving an audit trail that the finance department could easily track and handle. With the Open Platform 3.0 paradigm, those controls go straight out the window. When someone in marketing buys and uses an application on their own without the CIO approving its use or the CFO having a paper trail for the purchase, accounting and financial or technology auditing can become a potential corporate nightmare.

Alternatively, when users share information over the Web using online documents, the CIO, CTO or CSO may have no idea what information is going in and out of the organization or how secure it is. But sharing information through web-based documents—or a CRM system—might be the best way for the CMO to work with vendors or customers or keep track of them. The CMO may also need to begin tracking IT purchases within their own department.

The audit trail that must be considered in this new computing era can extend in many directions. IT may need an accounting of technical and personal assets. Legal may need information for e-Discovery purposes—how does one account for information stored on tablets or smartphones brought from home, or work-related emails sent from personal accounts? The CSO may require risk assessments to be performed on all devices or may need to determine how far an organization’s “perimeter” extends for security purposes. The trail is potentially as large as the organization itself and its entire extended network of employees, vendors, customers, etc.

What can organizations do to help mitigate the potential chaos of a consumer-led IT revolution?

Adapt. Be flexible and nimble. Plan ahead. Strategize. Start talking about what these changes will mean for your organization—and do it sooner rather than later. Work together. Help create standards that can help organizations maintain flexible but open parameters (and perimeters) for sourcing and sharing resources.

Executive teams, in particular, will need to know more about the functions of other departments than ever before. IT departments—including CTOs and EAs—will need to know more about other business functions—such as finance—if they are to become IT service centers. CFOs will need to know more about technology, security, marketing and strategic planning. CMOs and CIOs will need to understand regulatory guidelines not only around securing information but around risk and data privacy.

Putting enterprise and business architectures and industry standards in place can go a long way toward helping to create structures that maintain a healthy balance between providing the flexibility needed for Open Platform 3.0 and BYO while allowing enough organizational control to prevent chaos. With open architectures and standards, organizations will better be able to decide where controls are needed and when and how information should be shared among departments. Interoperability and Boundaryless Information Flow—where and when they’re needed—will be key components of these architectures.

The convergence being brought about by Open Platform 3.0 is not just about technology. It’s about the convergence of many things—IT, people, operations, processes, information. It will require significant cultural changes for most organizations and within different departments and organizational functions that are not used to sharing, processing and analyzing information beyond the silos that have been built up around them.

In this new computing model, Enterprise Architectures, interoperability and standards can and must play a central role in guiding the C-Suite through this time of rapid change so that users have the tools they need to be able to innovate, executives have the information they need to steer the proverbial ship and organizations don’t get left behind.

Allen Brown is the President and CEO of The Open Group. For more than ten years, he has been responsible for driving the organization’s strategic plan and day-to-day operations; he was also instrumental in the creation of the Association of Enterprise Architects (AEA). Allen is based in the U.K.



The Open Group San Francisco 2014 – Day Two Highlights

By Loren K. Baynes, Director, Global Marketing Communications

Day two, February 4th, of The Open Group San Francisco conference kicked off with a welcome and opening remarks from Steve Nunn, COO of The Open Group and CEO of the Association of Enterprise Architects.

Nunn introduced Allen Brown, President and CEO of The Open Group, who provided highlights from The Open Group’s last quarter.  As of Q4 2013, The Open Group had 45,000 individual members in 134 countries hailing from 449 member companies in 38 countries worldwide. Ten new member companies have already joined The Open Group in 2014, and 24 members joined in the last quarter of 2013, with the first member company joining from Vietnam. In addition, 6,500 individuals attended events sponsored by The Open Group in Q4 2013 worldwide.

Updates on The Open Group’s ongoing work were provided, including updates on the FACE™ Consortium, DirectNet® Waveform Standard, Architecture Forum, ArchiMate® Forum, Open Platform 3.0™ Forum and Security Forum.

Of note was the ongoing development of TOGAF® and the introduction of a three-volume work, with individual volumes covering the TOGAF framework, guidance, and tools and techniques for the standard, as well as collaborative work that allows the ArchiMate modeling language to be used for risk management in enterprise architectures.

In addition, the Open Platform 3.0 Forum has already put together 22 business use cases outlining ROI and business value for various uses related to technology convergence. The Cloud Work Group’s Cloud Reference Architecture has also been submitted to ISO for international standards certification, and the Security Forum has introduced the OpenFAIR risk management certification program for individuals.

The morning plenary centered on The Open Group’s Dependability through Assuredness™ (O-DA) Framework, which was released last August.

Speaking first about the framework was Dr. Mario Tokoro, Founder and Executive Advisor for Sony Computer Science Laboratories. Dr. Tokoro gave an overview of the Dependable Embedded OS project (DEOS), a large national project in Japan originally intended to strengthen the country’s embedded systems. After considerable research, the project leaders discovered they needed to consider whether large, open systems could be dependable when it came to business continuity, accountability and ensuring consistency throughout the systems’ lifecycle. Because the boundaries of large open systems are ever-changing, the project leaders knew they must put together dependability requirements that could accommodate constant change, allow for continuous service and provide continuous accountability for the systems based on consensus. As a result, they put together a framework to address both the change accommodation cycle and failure response cycles for large systems – this framework was donated to The Open Group’s Real-Time Embedded Systems Forum and released as the O-DA standard.

Dr. Tokoro’s presentation was followed by a panel discussion on the O-DA standard. Moderated by Dave Lounsbury, VP and CTO of The Open Group, the panel included Dr. Tokoro; Jack Fujieda, Founder and CEO of ReGIS, Inc.; T.J. Virdi, Senior Enterprise IT Architect at Boeing; and Bill Brierly, Partner and Senior Consultant, Conexiam. The panel discussed the importance of openness for systems, reiterating the conference theme of boundaries and the realities of having standards that can ensure openness and dependability at the same time. They also discussed how the O-DA standard provides end-to-end requirements for system architectures that also account for accommodating changes within the system and accountability for it.

Lounsbury concluded the track by reiterating that assuring systems’ dependability is fundamental not only to The Open Group mission of Boundaryless Information Flow™ and interoperability but also to preventing large system failures.

Tuesday’s late morning sessions were split into two tracks, with one track continuing the Dependability through Assuredness theme hosted by Joe Bergmann, Forum Chair of The Open Group’s Real-Time and Embedded Systems Forum. In this track, Fujieda and Brierly furthered the discussion of O-DA outlining the philosophy and vision of the standard, as well as providing a roadmap for the standard.

In the morning Business Innovation & Transformation track, Alan Hakimi, Consulting Executive, Microsoft presented “Zen and the Art of Enterprise Architecture: The Dynamics of Transformation in a Complex World.” Hakimi emphasized that transformation needs to focus on a holistic view of an organization’s ecosystem and motivations, economics, culture and existing systems to help foster real change. Based on Buddhist philosophy, he presented an eightfold path to transformation that can allow enterprise architects to approach transformation and discuss it with other architects and business constituents in a way that is meaningful to them and allows for complexity and balance.

This was followed by “Building the Knowledge-Based Enterprise,” a session given by Bob Weisman, Head Management Consultant for Build the Vision.

Tuesday’s afternoon sessions centered on a number of topics including Business Innovation and Transformation, Risk Management, ArchiMate, TOGAF tutorials and case studies and Professional Development.

In the ArchiMate track, Vadim Polyakov of Inovalon, Inc., presented “Implementing an EA Practice in an Agile Enterprise,” a case study centered on how his company integrated its enterprise architecture with the principles of agile development and how it customized the ArchiMate framework as part of the process.

The Risk Management track featured William Estrem, President, Metaplexity Associates, and Jim May of Windsor Software discussing how the Open FAIR Standard can be used in conjunction with TOGAF 9.1 to enhance risk management in organizations in their session, “Integrating Open FAIR Risk Analysis into the Enterprise Architecture Capability.” Jack Jones, President of CXOWARE, also discussed the best ways for “Communicating the Value Proposition” for cohesive enterprise architectures to business managers using risk management scenarios.

The plenary sessions and many of the track sessions from today’s tracks can be viewed on The Open Group’s Livestream channel at http://new.livestream.com/opengroup.

The day culminated with dinner and a Lion Dance performance in honor of Chinese New Year performed by Leung’s White Crane Lion & Dragon Dance School of San Francisco.

We would like to express our gratitude to the following sponsors for their support: BiZZdesign, Corso, Good e-Learning, I-Server and Metaplexity Associates.


O-DA standard panel discussion with Dave Lounsbury, Bill Brierly, Dr. Mario Tokoro, Jack Fujieda and TJ Virdi



The Open Group San Francisco 2014 – Day One Highlights

By Loren K. Baynes, Director, Global Marketing Communications

The Open Group’s San Francisco conference, held at the Marriott Union Square, began today, highlighting the conference theme of how the industry is moving “Toward Boundaryless Information Flow™.”

The morning plenary began with a welcome from The Open Group President and CEO Allen Brown. He began the day’s sessions by discussing the conference theme, reminding the audience that The Open Group’s vision of Boundaryless Information Flow began in 2002 as a means to break down the silos within organizations and provide better communications within, throughout and beyond organizational walls.

Heather Kreger, Distinguished Engineer and CTO of International Standards at IBM, presented the first session of the day, “Open Technologies Fuel the Business and IT Renaissance.” Kreger discussed how converging technologies such as social and mobile, Big Data, the Internet of Things, analytics, etc.—all powered by the cloud and open architectures—are forcing a renaissance within both IT and companies. Fueling this renaissance is a combination of open standards and open source technologies, which can be used to build out the platforms needed to support these technologies at the speed that is enabling innovation. To adapt to these new circumstances, architects should broaden their skillsets so they have deeper skills and competencies in multiple disciplines, technologies and cultures in order to better navigate this world of open source based development platforms.

The second keynote of the morning, “Enabling the Opportunity to Achieve Boundaryless Information Flow™,” was presented by Larry Schmidt, HP Fellow at Hewlett-Packard, and Eric Stephens, Enterprise Architect, Oracle. Schmidt and Stephens addressed how to cultivate a culture within healthcare ecosystems to enable better information flow. Because healthcare ecosystems are now primarily digital (including not just individuals but technology architectures and the Internet of Things), boundaryless communication is imperative so that individuals can become the managers of their health and the healthcare ecosystem can be better defined. This in turn will help in creating standards that help solve the architectural problems currently hindering the information flow within current healthcare systems, driving better costs and better outcomes.

Following the first two morning keynotes, Schmidt provided a brief overview of The Open Group’s new Healthcare Forum. The forum plans to leverage existing Open Group best practices such as harmonization and existing standards (such as TOGAF®), and to work with other forums and verticals to create new standards to address the problems facing the healthcare industry today.

Mike Walker, Enterprise Architect at Hewlett-Packard, and Mark Dorfmueller, Associate Director Global Business Services for Procter & Gamble, presented the morning’s final keynote entitled “Business Architecture: The Key to Enterprise Transformation.” According to Walker, business architecture is beginning to change how enterprise architecture is done within organizations. In order to do so, Walker believes that business architects must be able to understand business processes, communicate ideas and engage with others (including other architects) within the business and offer services in order to implement and deliver successful programs. Dorfmueller illustrated business architecture in action by presenting how Procter & Gamble uses their business architecture to change how business is done within the company based on three primary principles—being relevant, practical and making their work consumable for those within the company that implement the architectures.

The morning plenary sessions culminated with a panel discussion on “Future Technology and Enterprise Transformation,” led by Dave Lounsbury, VP and CTO of The Open Group. The panel, which included all of the morning’s speakers, took a high-level view of how emerging technologies are eroding traditional boundaries within organizations. Things within IT that have been specialized in the past are becoming commoditized to the point where they now offer new opportunities for companies. This is due to how commonplace they’ve become and because we’re becoming smarter in how we use and get value out of our technologies, as well as the rapid pace of technology innovation we’re experiencing today.

Finally, wrapping up the morning was the Open Trusted Technology Forum (OTTF), a forum of The Open Group, with forum director Sally Long presenting an overview of a new Open Trusted Technology Provider™ Standard (O-TTPS) Accreditation Program which launched today.  The program is the first such accreditation to provide third-party certification for companies guaranteeing their supply chains are free from maliciously tainted or counterfeit products and conformant to the Open Trusted Technology Provider™ Standard (O-TTPS). IBM is the first company to earn the accreditation and there are at least two other companies that are currently going through the accreditation process.

Monday’s afternoon sessions were split between two tracks, Enterprise Architecture (EA) and Enterprise Transformation and Open Platform 3.0.

In the EA & Enterprise Transformation track, Purna Roy and John Raspen, both Directors of Consulting at Cognizant Technology Solutions, discussed the need to take a broad view and consider factors beyond just IT architectures in their session, “Enterprise Transformation: More than an Architectural Transformation.”  In contrast, Kirk DeCosta, Solution Architect at PNC Financial Services, argued that existing architectures can indeed serve as the foundation for transformation in “The Case for Current State – A Contrarian Viewpoint.”

The Open Platform 3.0 track addressed issues around the convergence of technologies based on cloud platforms, including a session by Helen Sun, Enterprise Architect at Oracle, on Big Data and predictive analytics as enablers of information architectures. Dipanjan Sengupta, Principal Architect at Cognizant Technology Solutions, discussed why integration platforms are critical for managing distributed application portfolios in “The Need for a High Performance Integration Platform in the Cloud Era.”

Today’s plenary sessions and many of the track sessions can be viewed on The Open Group’s Livestream channel at http://new.livestream.com/opengroup.

The day ended with an opportunity for everyone to share cocktails and conversation at a networking reception held at the hotel.


Andras Szakal, VP & CTO, IBM U.S. Federal and Chair of the OTTF, was presented with a plaque in honor of IBM’s contribution to the O-TTPS Accreditation Program, along with the esteemed panel who were key to the success of the launch.



Measuring the Immeasurable: You Have More Data Than You Think You Do

By Jim Hietala, Vice President, Security, The Open Group

According to a recent study by the Ponemon Institute, the average U.S. company experiences more than 100 successful cyber-attacks each year at a cost of $11.6M. By enabling security technologies, those companies can reduce losses by nearly $4M and instituting security governance reduces costs by an average of $1.5M, according to the study.

In light of increasing attacks and security breaches, executives are increasingly asking security and risk professionals to provide analyses of individual company risk and loss estimates. For example, the U.S. healthcare sector has been required by the HIPAA Security rule to perform annual risk assessments for some time now. The recent HITECH Act also added security breach notification and disclosure requirements, increased enforcement in the form of audits and increased penalties in the form of fines. Despite federal requirements, the prospect of measuring risk and doing risk analyses can be a daunting task that leaves even the best of us with a case of “analysis paralysis.”

Many IT experts agree that we are nearing a time where risk analysis is not only becoming the norm, but when those risk figures may well be used to cast blame (or be used as part of a defense in a lawsuit) if and when there are catastrophic security breaches that cost consumers, investors and companies significant losses.

In the past, many companies have been reluctant to perform risk analyses due to the perception that measuring IT security risk is too difficult because it’s intangible. But if IT departments could soon become accountable for breaches, don’t you want to be able to determine your risk and the threats potentially facing your organization?

In his book, How to Measure Anything, father of Applied Information Economics Douglas Hubbard points out that immeasurability is an illusion and that organizations do, in fact, usually have the information they need to create good risk analyses. Part of the misperception of immeasurability stems from a lack of understanding of what measurement is actually meant to be. According to Hubbard, most people, and executives in particular, expect measurement and analysis to produce an “exact” number—as in, “our organization has a 64.5 percent chance of having a denial of service attack next year.”

Hubbard argues that, as risk analysts, we need to look at measurement more like how scientists look at things—measurement is meant to reduce uncertainty—not to produce certainty—about a quantity based on observation.  Proper measurement should not produce an exact number, but rather a range of possibility, as in “our organization has a 30-60 percent chance of having a denial of service attack next year.” Realistic measurement of risk is far more likely when expressed as a probability distribution with a range of outcomes than in terms of one number or one outcome.

The problem that most often produces “analysis paralysis” is not just the question of how to derive those numbers but also how to get to the information that will help produce those numbers. If you’ve been tasked, for instance, with determining the risk of a breach that has never happened to your organization before, perhaps a denial of service attack against your web presence, how can you make an accurate determination about something that hasn’t happened in the past? Where do you get your data to do your analysis? How do you model that analysis?

In an article published in CSO Magazine, Hubbard argues that organizations have far more data than they think they do and they actually need less data than they may believe they do in order to do proper analyses. Hubbard says that IT departments, in particular, have gotten so used to having information stored in databases that they can easily query, they forget there are many other sources to gather data from. Just because something hasn’t happened yet and you haven’t been gathering historical data on it and socking it away in your database doesn’t mean you either don’t have any data or that you can’t find what you need to measure your risk. Even in the age of Big Data, there is plenty of useful data outside of the big database.

You will still need to gather that data, but you just need enough to be able to measure it accurately, not necessarily precisely. In our recently published Open Group Risk Assessment Standard (O-RA), this is called calibration of estimates. Calibration provides a method for making good estimates, which are necessary for deriving a measured range of probability for risk. Section 3 of the O-RA standard provides a comprehensive look at how best to come up with calibrated estimates, as well as how to determine other risk factors using the FAIR (Factor Analysis of Information Risk) model.
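
To make the idea of calibrated ranges concrete, here is a minimal Monte Carlo sketch in the spirit of a FAIR-style analysis. The ranges, the uniform and Poisson distribution choices and the dollar figures are illustrative assumptions for this post, not values or methods prescribed by the O-RA standard:

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 100_000

# Illustrative calibrated 90% ranges (assumptions for this sketch, not O-RA values):
#   loss event frequency: 0.1 to 2 successful DoS attacks per year
#   loss magnitude:       $50k to $750k per event
freq = rng.uniform(0.1, 2.0, trials)                 # expected events per simulated year
events = rng.poisson(freq)                           # events that actually occur that year
magnitude = rng.uniform(50_000, 750_000, trials)     # simplified: one average loss per year

annual_loss = events * magnitude

# Report a range of outcomes rather than a single "exact" number
p10, p50, p90 = np.percentile(annual_loss, [10, 50, 90])
print(f"Annualized loss exposure: 10th pct ${p10:,.0f}, median ${p50:,.0f}, 90th pct ${p90:,.0f}")
print(f"Chance of at least one event in a year: {np.mean(events >= 1):.0%}")
```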

So where do you get your data if it’s not already stored and easily accessible in a database? There are numerous sources you can turn to, both externally and internally. You just have to do the research to find it. For example, even if your company hasn’t experienced a denial of service attack, many others have—what was their experience when it happened? This information is out there online—you just need to search for it. Industry reports are another source of information. Verizon, for one, publishes its annual Data Breach Investigations Report. DatalossDB publishes an open data breach incident database that provides information on data loss incidents worldwide. Many vendors publish annual security reports and issue regular security advisories. Security publications and analyst firms such as CSO, Gartner, Forrester or Securosis all have research reports from which data can be gleaned.

Then there’s your internal information. Chances are your IT department has records you can use—they likely count how many laptops are lost or stolen each year. You should also look to the experts within your company to help. Other people can provide a wealth of valuable information for use in your analysis. You can also look to the data you do have on related or similar attacks as a gauge.
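
As a small illustration of how that kind of internal record can be turned into a usable estimate, here is a hedged sketch; the laptop-loss counts are invented for the example:

```python
import statistics

# Hypothetical internal records: laptops lost or stolen per year (invented numbers)
laptops_lost = [14, 9, 17, 11, 13]

mean_rate = statistics.mean(laptops_lost)
std_err = statistics.stdev(laptops_lost) / len(laptops_lost) ** 0.5

# Express the estimate as a range, not a point: roughly a 90% interval under a
# crude normal approximation. The point is to move from "no data" to "a defensible range".
low, high = mean_rate - 1.645 * std_err, mean_rate + 1.645 * std_err
print(f"Estimated laptop-loss rate: {mean_rate:.1f}/year (90% range roughly {low:.1f} to {high:.1f})")
```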

Chances are, you already have the data you need or you can easily find it online. Use it.

With the ever-growing list of threats and risks organizations face today, we are fast reaching a time when failing to measure risk will no longer be acceptable—in the boardroom or even by governments.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

1 Comment

Filed under Cybersecurity, Data management, Information security, Open FAIR Certification, RISK Management, Uncategorized

Evolving Business and Technology Toward an Open Platform 3.0™

By Dave Lounsbury, Chief Technical Officer, The Open Group

The role of IT within the business is one that constantly evolves and changes. If you’ve been in the technology industry long enough, you’ve likely had the privilege of seeing IT grow to become integral to how businesses and organizations function.

In his recent keynote “Just Exactly What Is Going On in Business and Technology?” at The Open Group London Conference in October, Andy Mulholland, former Global Chief Technology Officer at Capgemini, discussed how the role of IT has changed from being traditionally internally focused (inside the firewall, proprietary, a few massive applications, controlled by IT) to one that is increasingly externally focused (outside the firewall, open systems, lots of small applications, increasingly controlled by users). This is due to the rise of a number of disruptive forces currently affecting the industry, such as BYOD, Cloud, social media tools, Big Data, the Internet of Things and cognitive computing. As Mulholland pointed out, IT today is about how people are using technology in the front office. They are bringing their own devices, they are using apps to get outside of the firewall, and they are moving further and further away from traditional “back office” IT.

Due to the rise of the Internet, the client/server model of the 1980s and 1990s that kept everything within the enterprise is no more. That model has been subsumed by a model in which development is fast and iterative and information is constantly being pushed and pulled primarily from outside organizations. The current model is also increasingly mobile, allowing users to get the information they need anytime and anywhere from any device.

At the same time, there is a push from business and management for increasingly rapid turnaround times and smaller scale projects that are, more often than not, being sourced via Cloud services. The focus of these projects is on innovating business models and acting in areas where the competition does not act. These forces are causing polarization within IT departments between internal IT operations based on legacy systems and new external operations serving buyers in business functions that are sourcing their own services through Cloud-based apps.

Just as UNIX® provided a standard platform for applications on single computers and the combination of servers, PCs and the Internet provided a second platform for web apps and services, we now need a new platform to support the apps and services that use cloud, social, mobile, big data and the Internet of Things. Rather than merely aligning with business goals or enabling business, the next platform will be embedded within the business as an integral element bringing together users, activity and data. To work properly, this must be a standard platform so that these things can work together effectively and at low cost, providing vendors a worthwhile market for their products.

Industry pundits have already begun to talk about this layer of technology. Gartner calls it the “Nexus of Forces.” IDC calls it the “third platform.” At The Open Group, we refer to it as Open Platform 3.0™, and earlier this year we announced a new Forum to address how organizations can adopt and support these technologies. Open Platform 3.0 is meant to enable organizations (including standards bodies, users and vendors) to coordinate their approaches to the new business models and IT practices driving the new platform, in order to support a new generation of interoperable business solutions.

As is always the case with technologies, a point is reached where technical innovation must transition to business benefit. Open Platform 3.0 is, in essence, the next evolution of computing. To help the industry sort through these changes and create vendor-neutral standards that foster the cohesive adoption of new technologies, The Open Group must also evolve its focus and standards to respond to where the industry is headed.

The work of the Open Platform 3.0 Forum has already begun. Initial actions for the Forum have been identified and were shared during the London conference. Our recent survey on Convergent Technologies confirmed the need to address these issues. Of those surveyed, 95 percent of respondents felt that converged technologies were an opportunity for business, and 84 percent of solution providers are already dealing with two or more of these technologies in combination. Respondents also saw vendor lock-in as a potential hindrance to using these technologies, underscoring the need for an industry standard that will address interoperability. In addition to the survey, the Forum has also produced an initial Business Scenario to begin to address these industry needs and formulate requirements for this new platform.

If you have any questions about Open Platform 3.0 or if you would like to join the new Forum, please contact Chris Harding (c.harding@opengroup.org) for queries regarding the Forum or Chris Parnell (c.parnell@opengroup.org) for queries regarding membership.

 

Dave Lounsbury is Chief Technical Officer (CTO) and Vice President, Services for The Open Group. As CTO, he ensures that The Open Group’s people and IT resources are effectively used to implement the organization’s strategy and mission. As VP of Services, Dave leads the delivery of The Open Group’s proven processes for collaboration and certification both within the organization and in support of third-party consortia. Dave holds a degree in Electrical Engineering from Worcester Polytechnic Institute, and is holder of three U.S. patents.

 

 

1 Comment

Filed under Cloud, Data management, Future Technologies, Open Platform 3.0, Standards, Uncategorized, UNIX

Three Things We Learned at The Open Group, London

By Manuel Ponchaux, Senior Consultant, Corso

The Corso team recently visited London for The Open Group’s “Business Transformation in Finance, Government & Healthcare” conference (#ogLON). The event was predominantly for learning how experts address organisational change when aligning business needs with information technology – something very relevant in today’s climate. Nonetheless, there were a few other things we learnt as well…

1. Lean Enterprise Architecture

We were told that Standard Frameworks are too complex and multidimensional – people were interested in how we use them to provide simple working guidelines to the architecture team.

There were a few themes that frequently popped up, one of them being the measurement of Enterprise Architecture (EA) complexity. There seemed to be a lot of talk about Lean Enterprise Architecture as a solution to complexity issues.

2. Risk Management was popular

Clearly the events of the past few years (e.g. the financial crisis, banking regulations and other business transformations) mean that managing risk is increasingly important. So, it was no surprise that the Risk Management and EA sessions were very popular and probably attracted the biggest crowd. The Corso session showcasing our IBM/CIO case study was successful, with 40+ attending!

3. Business challenges

People visited our stand and told us they were having trouble generating up-to-date heat maps. There was also a large number of attendees interested in Software as a Service as an alternative to traditional on-premise licensing.

So what did we learn from #ogLON?

Attendees are attracted to the ease of use of Corso’s ArchiMate plugin. http://www.corso3.com/products/archimate/

Together with the configurable nature of System Architect, ArchiMate® is a simple framework to use and makes a good starting point for supporting Lean Architecture.

Roadmapping and performing impact analysis reduce the influence of risk when executing any business transformation initiative.

We also learnt that customers in the industry are starting to embrace the concept of SaaS offerings as it provides them with a solution that can get them up and running quickly and easily – something we’re keen to pursue – which is why we’re now offering IBM Rational tools on the Corso cloud. Visit our website at http://www.corsocloud.com

http://info.corso3.com/blog/bid/323481/3-interesting-things-we-learned-at-The-Open-Group-London

Manuel Ponchaux, Senior Consultant, Corso

1 Comment

Filed under ArchiMate®, Enterprise Architecture, Standards, Uncategorized

Redefining traceability in Enterprise Architecture and implementing the concept with TOGAF 9.1 and/or ArchiMate 2.0

By Serge Thorn, Architecting the Enterprise

One of the responsibilities of an Enterprise Architect is to provide complete traceability from requirements analysis and design artefacts, through to implementation and deployment.

Over the years, I have found that the term traceability is not always understood in the same way by different Enterprise Architects.

Let’s start with a definition of traceability. Traceable is an adjective meaning capable of being traced. Finding a definition, even in a dictionary, is a challenge; the most relevant one I found, on Wikipedia, which may be used as a reference, is: “The formal definition of traceability is the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable.”

In Enterprise Architecture, traceability may mean different things to different people.

Some people refer to

  • Enterprise traceability which proves alignment to business goals
  • End-to-end traceability to business requirements and processes
  • A traceability matrix, the mapping of systems back to capabilities or of system functions back to operational activities
  • Requirements traceability, which assists in delivering quality solutions that meet the business needs
  • Traceability between requirements and TOGAF artifacts
  • Traceability across artifacts
  • Traceability of services to business processes and architecture
  • Traceability from application to business function to data entity
  • Traceability between a technical component and a business goal
  • Traceability of security-related architecture decisions
  • Traceability of IT costs
  • Traceability to test scripts
  • Traceability between artifacts from business and IT strategy to solution development and delivery
  • Traceability from the initial design phase through to deployment
  • And probably more

The TOGAF 9.1 specification rarely refers to traceability, and the only sections where the concept is used are in the various architecture domains, where we should document a requirements traceability report or traceability from application to business function to data entity.

The most relevant section is probably in the classes of architecture engagement, where it says:

“Using the traceability between IT and business inherent in enterprise architecture, it is possible to evaluate the IT portfolio against operational performance data and business needs (e.g., cost, functionality, availability, responsiveness) to determine areas where misalignment is occurring and change needs to take place.”

And how do we define and document traceability from an end user or stakeholder perspective? The best approach would probably be to use a tool which would render a view like the one in this diagram:

In this diagram, we show the relationships between the components from the four architecture domains. Changing one of the components makes it possible to perform an impact analysis.
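
As a rough sketch of what such a tool does under the hood, the snippet below models a handful of cross-domain traceability links as a graph and walks it to answer an impact-analysis question. The element names are invented for illustration and are not taken from the diagram above:

```python
from collections import defaultdict, deque

# Hypothetical traceability links across the four architecture domains
# (business, application, data, technology); element names are invented.
links = [
    ("Goal: Improve claims handling", "Process: Handle claim"),
    ("Process: Handle claim", "Application: Claims system"),
    ("Application: Claims system", "Data entity: Claim"),
    ("Application: Claims system", "Node: Application server"),
]

# Reverse index: for each element, which elements depend on it?
depends_on = defaultdict(set)
for source, target in links:
    depends_on[target].add(source)

def impact_of_change(element):
    """Walk the traceability links upwards to find everything affected by a change."""
    affected, queue = set(), deque([element])
    while queue:
        for dependant in depends_on[queue.popleft()]:
            if dependant not in affected:
                affected.add(dependant)
                queue.append(dependant)
    return affected

print(impact_of_change("Node: Application server"))
# -> the claims application, the business process it supports and the business goal
```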

Components may have different meanings as illustrated in the next diagram:

Using the TOGAF 9.1 framework, we would use the concepts of the metamodel. The core metamodel entities show the purpose of each entity and the key relationships that support architectural traceability, as stipulated in section 34.2.1, Core Content Metamodel Concepts.

So now, how do we build that traceability? This is going to happen along the various ADM cycles that an enterprise will undertake. It is going to be quite a long process, depending on the complexity and size of the enterprise and the various locations where the business operates.

There may be five different ways to build that traceability:

  • Manually using an office product
  • With an enterprise architecture tool not linked to the TOGAF 9.1 framework
  • With an enterprise architecture tool using the TOGAF 9.1 artifacts
  • With an enterprise architecture tool using ArchiMate 2.0
  • Replicating the content of an Enterprise Repository such as a CMDB in an Architecture repository

1. Manually using an office product

You will probably document your architecture with word processors, spreadsheets and diagramming tools, and store these documents in a file structure on a file server, ideally using some form of content management system.

Individually these tools are great, but collectively they fall short in forming a cohesive picture of the requirements and constraints of a system or an enterprise. The links between these deliverables soon become unmanageable, and in the long term impact analysis of any change will become practically impossible. Information will be hard to find and to trace from requirements all the way back to the business goal that drives it. This is particularly difficult to achieve when requirements are stored in spreadsheets and use cases and business goals are contained in separate documents. Other issues such as maintenance and consistency would also have to be considered.

[diagram]

2. With an enterprise architecture tool not linked to the TOGAF 9.1 framework

Many enterprise architecture tools or suites provide different techniques to support traceability but do not really describe how things work and focus mainly on describing requirements traceability.  In the following example, we use a traceability matrix between user requirements and functional specifications, use cases, components, software artifacts, test cases, business processes, design specifications and more.

Mapping the requirements to use cases and other information can be very labor-intensive.
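
For illustration, a requirements traceability matrix is essentially a mapping from each requirement to the artifacts that cover it, which a tool can then query for gaps. The requirement IDs and artifact names below are invented:

```python
# Hypothetical requirements traceability matrix: which artifacts cover which
# requirement? Requirement IDs and artifact names are invented.
matrix = {
    "REQ-001 Online claim submission": ["UC-12 Submit claim", "Component: Claims portal", "TC-45"],
    "REQ-002 24x7 availability": ["Design spec: HA cluster", "TC-51"],
    "REQ-003 Audit trail of changes": [],
}

# Two checks a tool would automate: list coverage gaps, then print the cross-reference.
uncovered = [req for req, artifacts in matrix.items() if not artifacts]
print("Requirements without any traced artifact:", uncovered)

for req, artifacts in matrix.items():
    print(f"{req} -> {', '.join(artifacts) or '(nothing)'}")
```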

[diagram]

Some tools also allow for the creation of relationships between the various layers using grids or allowing the user to create the relationships by dragging lines between elements.

Below is an example of what traceability would look like in an enterprise architecture tool after some time.  That enterprise architecture ensures appropriate traceability from business architecture to the other allied architectures.

[diagram]

3. With an enterprise architecture tool using the TOGAF 9.1 artifacts

The TOGAF 9.1 core metamodel provides a minimum set of architectural content to support traceability across artifacts. Usually we use catalogs, matrices and diagrams to build traceability, independently of dragging lines between elements (except possibly for the diagrams). Working with catalogs and matrices is an activity which may be assigned to various stakeholders in the organisation and which, theoretically, can sometimes hide the complexity associated with an enterprise architecture tool.

Using artifacts creates traceability. As an example from the specification: “A Business Footprint diagram provides a clear traceability between a technical component and the business goal that it satisfies, while also demonstrating ownership of the services identified”. There are other artifacts which also describe traceability: the Data Migration Diagram and the Networked Computing/Hardware Diagram.

4. With an enterprise architecture tool using ArchiMate 2.0

Another possibility could be the use of the ArchiMate standard from The Open Group. Some of that traceability could also be achieved using BPMN and UML for specific domains, such as process details in Business Architecture or building the bridge between Enterprise Architecture and software architecture.

With ArchiMate 2.0 we can define end-to-end traceability and produce several viewpoints, such as the Layered Viewpoint, which shows several layers and aspects of an enterprise architecture in a single diagram. Elements are modelled in five different layers when displaying the enterprise architecture; these are then linked with each other using relationships. We differentiate between the following layers and extensions:

  • Business layer
  • Application layer
  • Technology layer
  • Motivation extension
  • Implementation and migration extension

The example from the specification below documents the various architecture layers.

[diagram]
As you will notice, this ArchiMate 2.0 viewpoint looks quite similar to the TOGAF 9.1 Business Footprint Diagram which provides a clear traceability between a technical component and the business goal that it satisfies, while also demonstrating ownership of the services identified.

Another example could be the description of the traceability among business goals, technical capabilities, business benefits and metrics.  The key point about the motivation extension is to work with the requirement object.

Using the motivation viewpoint from the specification as a reference (motivation extension), you could define business benefits/expectations within the business goal object, then define sub-goals as KPIs to measure the benefits of the plan and list all of the identified requirements of the project/program. Finally, you could link these requirements with either an application or an infrastructure service object representing software or technical capabilities (partial example below).

[diagram]
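
As a hedged sketch of the chain just described (goal, sub-goals as KPIs, requirements, realizing services), the snippet below captures it as a simple nested structure and queries which services ultimately support a goal. The element names are invented, and this is not ArchiMate tooling, just an illustration of the traceability the motivation extension expresses:

```python
# Hypothetical motivation-extension chain: goal -> sub-goals (KPIs) ->
# requirements -> realizing application/infrastructure services. Names are invented.
model = {
    "Goal: Reduce time-to-quote": {
        "KPI: Quote issued within 24h": {
            "Requirement: Automated premium calculation": ["Application service: Rating engine"],
            "Requirement: Reuse customer data across channels": ["Application service: Customer data API"],
        }
    }
}

def services_realizing(goal):
    """List the services that ultimately support a business goal."""
    services = []
    for requirements in model[goal].values():        # each KPI's requirements
        for realized_by in requirements.values():    # services realizing each requirement
            services.extend(realized_by)
    return services

print(services_realizing("Goal: Reduce time-to-quote"))
```
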
One of the common questions I have recently received from various enterprise architects is “Now that I know TOGAF and ArchiMate… how should I model my enterprise? Should I use the TOGAF 9.1 artifacts to create that traceability? Should I use ArchiMate 2.0? Should I use both? Should I forget the artifacts…”. These are good questions and I’m afraid that there is not a single answer.

What I know is that if I select an enterprise architecture tool supporting both TOGAF 9.1 and ArchiMate 2.0, I would like to be able to have full synchronization. If I model a few ArchiMate models I would like my TOGAF 9.1 artifacts (catalogs and matrices) to be created at the same time, and if I create artifacts from the taxonomy, I would like my ArchiMate models also to be created.

Unfortunately I do not know the current level of tool maturity and whether tool vendors provide that synchronization. This would obviously require some investigation and should be one of the key criteria if you are currently looking for a product supporting both standards.

5. Replicating the content of an Enterprise Repository such as a CMDB in an Architecture repository

This other possibility requires that you have an up-to-date Configuration Management Database (CMDB) and that you have developed an interface between it and your Architecture Repository (your enterprise architecture tool). If you are able to replicate the relationships between the infrastructure components and applications (CIs) into your enterprise architecture tool, that will partially create your traceability.
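
A minimal sketch of such an interface is shown below; it assumes a generic CSV export of CI relationships, with invented column names and CIs, rather than any particular CMDB product’s API:

```python
import csv
import io

# Hypothetical CMDB relationship export (CI-to-CI links); column names are assumptions.
cmdb_export = io.StringIO("""\
source_ci,relationship,target_ci
Claims portal,runs_on,App server 01
Claims portal,uses,Oracle DB 11
Rating engine,runs_on,App server 02
""")

WANTED = {"runs_on", "uses"}         # replicate only the relationship types we care about
architecture_repository = []         # stand-in for the EA tool's import interface

for row in csv.DictReader(cmdb_export):
    if row["relationship"] in WANTED:
        architecture_repository.append((row["source_ci"], row["relationship"], row["target_ci"]))

print(f"Imported {len(architecture_repository)} application-to-infrastructure links")
```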

If I summarise the various choices to build that enterprise architecture traceability, I potentially have three main possibilities:

[diagram]
Achieving traceability within an Enterprise Architecture is key because the architecture needs to be understood by all participants and not just by technical people.  It helps to incorporate the enterprise architecture efforts into the rest of the organization and it takes it to the board room (or at least the CIO’s office) where it belongs.

  • Describe your traceability from your Enterprise Architecture to the system development and project documentation.
  • Review that traceability periodically, making sure that it is up to date, and produce analytics out of it.

If a development team is looking for a tool that can help them document and provide end-to-end traceability throughout the life cycle, EA is the way to go; make sure you use the right standard and platform. Finally, communicate and present the results of your effort to your stakeholders.

Serge Thorn is CIO of Architecting the Enterprise. He has worked in the IT industry for over 25 years in a variety of roles, which include Development and Systems Design, Project Management, Business Analysis, IT Operations, IT Management, IT Strategy, Research and Innovation, IT Governance, Architecture and Service Management (ITIL). He is the Chairman of the itSMF (IT Service Management Forum) Swiss chapter and is based in Geneva, Switzerland.

2 Comments

Filed under ArchiMate®, Enterprise Architecture, Standards, TOGAF, TOGAF®, Uncategorized

Enterprise Architecture in China: Who uses this stuff?

by Chris Forde, GM APAC and VP Enterprise Architecture, The Open Group

Since moving to China in March 2010 I have consistently heard a similar set of statements and questions, something like this….

“EA? That’s fine for Europe and America, who is using it here?”

“We know EA is good!”

“What is EA?”

“We don’t have the ability to do EA, is it a problem if we just focus on IT?”

And

“Mr Forde your comment about western companies not discussing their EA programs because they view them as a competitive advantage is accurate here too, we don’t discuss we have one for that reason.” Following that statement the lady walked away smiling, having not introduced herself or her company.

Well some things are changing in China relative to EA and events organized by The Open Group; here is a snapshot from May 2013.

The Open Group held an Enterprise Architecture Practitioners Conference in Shanghai, China, on May 22nd, 2013. The conference theme was EA and the spectrum of business value. The presentations were made by a mix of non-member and member organizations of The Open Group, most but not all based in China. The audience was mostly non-members from 55 different organizations in a range of industries. There was a good mix of customer, supplier, government and academic organizations presenting and in the audience. The conference proceedings are available to registered attendees of the conference and members of The Open Group. Livestream recordings will also be available shortly.

Organizations large and small presented about the fact that EA was integral to delivering business value. Here’s the nutshell.

China

Huawei is a leading global ICT communications provider based in Shenzhen China.  They presented on EA applied to their business transformation program and the ongoing development of their core EA practice.

GKHB is a software services organization based in Chengdu China. They presented on an architecture practice applied to real time forestry and endangered species management.

Nanfang Media is a State Owned Enterprise, the second largest media organization in the country based in Guangzhou China. They presented on the need to rapidly transform themselves to a modern integrated digital based organization.

McKinsey & Co a Management Consulting company based in New York USA presented an analysis of a CIO survey they conducted with Peking University.

Mr Wang Wei, a Partner in the Shanghai office of McKinsey & Co’s Business Technology Practice, reviewed a survey they conducted in co-operation with Peking University.


The survey of CIOs in China indicated a common problem of managing complexity in multiple dimensions: 1) “Theoretically” Common Business Functions, 2) Across Business Units with differing Operations and Product, 3) Across Geographies and Regions. The recommended approach was towards “Organic Integration” and to carefully determine what should be centralized and what should be distributed. An Architecture approach can help with managing and mitigating these realities. The survey also showed that the CIOs are evenly split amongst those dedicated to a traditional CIO role and those that have a dual Business and CIO role.

Mr Yang Li Chao, Director of EA and Planning at Huawei, and Ms Wang Liqun, leader of the EA Center of Excellence at Huawei, outlined the 5-year journey Huawei has been on to deal with the development, maturation and effectiveness of an Architecture practice in a company that has seen explosive growth and is competing on a global scale. They are necessarily paying a lot of attention to Talent Management and the development of their Architects, as these people are at the forefront of the company’s Business Transformation efforts. Huawei constantly consults with experts on Architecture from around the world and incorporates what they consider best practice into their own method and framework, which is based on TOGAF®.

Mr He Kun, CIO of Nanfang Media, described the enormous pressures his traditional media organization is under, such as a concurrent loss of advertising and talent to digital media.

He gave an example where China Mobile has started its own digital newspaper leveraging its delivery platform. So naturally, Nanfang Media is also undergoing a transformation and is looking to leverage its current advantages as a trusted source and its existing market position. The discipline of Architecture is a key enabler and serves as a foundation for clearly communicating a transformation approach to other business leaders. This does not mean using EA jargon, but communicating in the language of his peers for the purpose of obtaining funding to accomplish the transformation effectively.

Mr Chen Peng, Vice General Manager of GKHB Chengdu, described the use of an Architecture approach to managing precious national resources such as forestry, biodiversity and endangered species. He described the necessity for real-time information in observation, tracking and response in this area, and the necessity of “informationalization” of forestry in China as part of eGovernment initiatives, not only for the above topics but also for the country’s growth, particularly in supplying the construction industry. The Architecture approach taken here is also based on TOGAF®.

The takeaway from this conference is that Enterprise Architecture is alive and well amongst certain organizations in China. It is being used in a variety of industries. Value is being realized by executives and practitioners, and delivered for both IT and Business units. However, for many companies EA is still a new idea and to date its value is unclear to them.

The speakers also made it clear that there are no easy answers; each organization has to find its own use and value from Enterprise Architecture, and it is a learning journey. They expressed their appreciation that The Open Group and its standards are a place where they can make connections, pull from and contribute to with regard to Enterprise Architecture.

Comments Off

Filed under Enterprise Architecture, Enterprise Transformation, Professional Development, Standards, TOGAF, TOGAF®, Uncategorized

Developing standards to secure our global supply chain

By Sally Long, Director of The Open Group Trusted Technology Forum (OTTF)™

In a world where tainted and counterfeit products pose significant risks to organizations, we see an increasing need for a standard that protects both organizations and consumers. Altered or non-genuine products introduce the possibility of untracked malicious behavior or poor performance. These risks can damage both customers and suppliers resulting in the potential for failed or inferior products, revenue and brand equity loss and disclosure of intellectual property.

On top of this, cyber-attacks are growing more sophisticated, forcing technology suppliers and governments to take a more comprehensive approach to risk management as it applies to product integrity and supply chain security. Customers are now seeking assurances that their providers are following standards to mitigate the risks of tainted and counterfeit components, while providers of Commercial Off-the-Shelf (COTS) Information and Communication Technology (ICT) products are focusing on protecting the integrity of their products and services as they move through the global supply chain.

In this climate we need a standard more than ever, which is why today we’re proud to announce the publication of the Open Trusted Technology Provider Standard (O-TTPS)™ (the Standard). The O-TTPS is the first complete standard published by The Open Group Trusted Technology Forum (OTTF)™, and it will benefit global providers and acquirers of COTS and ICT products.

The first of its kind, the open standard has been developed to help organizations achieve Trusted Technology Provider status, assuring the integrity of COTS and ICT products worldwide and safeguarding the global supply chain against the increased sophistication of cyber security attacks.

Specifically intended to prevent maliciously tainted and counterfeit products from entering the supply chain, the standard codifies best practices across the entire COTS ICT product lifecycle, including the design, sourcing, build, fulfilment, distribution, sustainment, and disposal phases. Our intention is that it will help raise the bar globally by helping the technology industry and its customers to “Build with Integrity, Buy with Confidence.”™

What’s next?

The OTTF is now working to develop an accreditation program to help provide assurance that Trusted Technology Providers conform to the O-TTPS Standard. The planned accreditation program is intended to mitigate maliciously tainted and counterfeit products by raising the assurance bar for component suppliers, technology providers, and integrators who are part of and depend on the global supply chain. Using the guidelines and best practices documented in the Standard as a basis, the OTTF will also release updated versions of the O-TTPS Standard based on changes to the threat landscape.

Interested in seeing the Standard for yourself? You can download it directly from The Open Group Bookstore, here. For more information on The Open Group Trusted Technology Forum, please click here, or keep checking back on the blog for updates.

 

2 Comments

Filed under Uncategorized

An Update on ArchiMate® 2 Certification

By Andrew Josey, The Open Group

In this blog we provide latest news on the status of the ArchiMate® Certification for People program. Recent changes to the program include the availability of the ArchiMate 2 Examination through Prometric test centers and also the addition of the ArchiMate 2 Foundation qualification.

Program Vision

The vision for the ArchiMate 2 Certification Program is to define and promote a market-driven education and certification program to support the ArchiMate modeling language standard. The program is supported by an Accredited ArchiMate Training program, in which there are currently 10 accredited courses. There are self-study materials available.

Certification Levels

There are two levels defined for ArchiMate 2 People Certification:

  • Level 1: ArchiMate 2 Foundation
  • Level 2: ArchiMate 2 Certified

The difference between the two certification levels is that for ArchiMate 2 Certified there are further requirements in addition to passing the ArchiMate 2 Examination as shown in the figure below.

What are the study paths to become certified?

[figure]

The path to certification depends on the level. For Level 2, ArchiMate 2 Certified, you achieve certification only after satisfactorily completing an Accredited ArchiMate Training Course, including completion of practical exercises, together with an examination. For Level 1 you may choose to self-study or attend a training course; the only requirement is to pass the ArchiMate 2 Examination.

How can I find out about the syllabus and examinations?

To obtain a high-level view, read the datasheets describing certification, which are available from the ArchiMate Certification website. For detail on what is expected from candidates, see the Conformance Requirements document. The Conformance Requirements apply to both Level 1 and Level 2.

The ArchiMate 2 examination comprises 40 questions in simple multiple choice format. A Practice examination is included as part of an Accredited ArchiMate Training course and also in the ArchiMate 2 Foundation Study Guide.

For Level 2, a set of Practical exercises are included as part of the training course and these must be successfully completed. They are assessed by the trainer as part of an accredited training course.

More Information and Resources

More information on the program is available at the ArchiMate 2 Certification site at http://www.opengroup.org/certifications/archimate/

Details of the ArchiMate 2 Examination are available at: http://www.opengroup.org/certifications/archimate/docs/exam

The calendar of Accredited ArchiMate 2 Training courses is available at: http://www.opengroup.org/archimate/training-calendar/

The ArchiMate 2 Foundation Self Study Pack is available for purchase and immediate download at http://www.opengroup.org/bookstore/catalog/b132.htm

ArchiMate is a registered trademark of The Open Group.

Andrew Josey is Director of Standards within The Open Group. He is currently managing the standards process for The Open Group, and has recently led the standards development projects for TOGAF 9.1, ArchiMate 2.0, IEEE Std 1003.1-2008 (POSIX), and the core specifications of the Single UNIX Specification, Version 4. Previously, he has led the development and operation of many of The Open Group certification development projects, including industry-wide certification programs for the UNIX system, the Linux Standard Base, TOGAF, and IEEE POSIX. He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects.

6 Comments

Filed under ArchiMate®, Uncategorized

The Open Group Conference Plenary Speaker Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Keynoter Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

This is a transcript of a sponsored podcast discussion on Big Data analytics and its role in business processes, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group® Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference will focus on big data and the transformation we need to embrace today.

We are here now with one of the main speakers at the conference; Michael Cavaretta, PhD, Technical Leader of Predictive Analytics for Ford Research and Advanced Engineering in Dearborn, Michigan.

We’ll see how Ford has exploited the strengths of big data analytics by directing them internally to improve business results. In doing so, they scour the metrics from the company’s best processes across myriad manufacturing efforts and through detailed outputs from in-use automobiles, all to improve and help transform their business.

Cavaretta has led multiple data-analytic projects at Ford to break down silos inside the company to best define Ford’s most fruitful datasets. Ford has successfully aggregated customer feedback and extracted all the internal data to predict how best new features and technologies will improve their cars.

As a lead-in to his Open Group presentation, Michael and I will now explore how big data is fostering business transformation by allowing deeper insights into more types of data efficiently, and thereby improving processes, quality control, and customer satisfaction.

With that, please join me in welcoming Michael Cavaretta. Welcome to BriefingsDirect, Michael.

Michael Cavaretta: Thank you very much.

Gardner: Your upcoming presentation for The Open Group Conference is going to describe some of these new approaches to big data and how that offers some valuable insights into internal operations, and therefore making a better product. To start, what’s different now in being able to get at this data and do this type of analysis from, say, five years ago?

Cavaretta: The biggest difference has to do with the cheap availability of storage and processing power, where a few years ago people were very much concentrated on filtering down the datasets that were being stored for long-term analysis. There has been a big sea change with the idea that we should just store as much as we can and take advantage of that storage to improve business processes.

Gardner: That sounds right on the money, but how do we get here? How do we get to the point where we could start using these benefits from a technology perspective, as you say, better storage, networks, being able to move big dataset, that sort of thing, to wrenching out benefits. What’s the process behind the benefit?

Cavaretta: The process behind the benefits has to do with a sea change in the attitude of organizations, particularly IT within large enterprises. There’s this idea that you don’t need to spend so much time figuring out what data you want to store and worry about the cost associated with it, and more about data as an asset. There is value in being able to store it, and being able to go back and extract different insights from it. This really comes from this really cheap storage, access to parallel processing machines, and great software.

Gardner: It seems to me that for a long time, the mindset was that data is simply the output from applications, with applications being primary and the data being almost an afterthought. It seems like we’ve sort of flipped that. The data now is perhaps as important, even more important, than the applications. Does that seem to hold true?

Cavaretta: Most definitely, and we’ve had a number of interesting engagements where people have thought about the data that’s being collected. When we talk to them about big data, storing everything at the lowest level of transactions, and what could be done with that, their eyes light up and they really begin to get it.

Gardner: I suppose earlier, when cost considerations and technical limitations were at work, we would just go for a tip of the iceberg level. Now, as you say, we can get almost all the data. So, is this a matter of getting at more data, different types of data, bringing in unstructured data, all the above? How much you are really going after here?

Cavaretta: I like to talk to people about the possibility that big data provides and I always tell them that I have yet to have a circumstance where somebody is giving me too much data. You can pull in all this information and then answer a variety of questions, because you don’t have to worry that something has been thrown out. You have everything.

You may have 100 questions, and each one of the questions uses a very small portion of the data. Those questions may use different portions of the data, a very small piece, but they’re all different. If you go in thinking, “We’re going to answer the top 20 questions and we’re just going to hold data for that,” that leaves so much on the table, and you don’t get any value out of it.

Gardner: I suppose too that we can think about small samples or small datasets and aggregate them or join them. We have new software capabilities to do that efficiently, so that we’re able to not just look for big honking, original datasets, but to aggregate, correlate, and look for a lifecycle level of data. Is that fair as well?

Cavaretta: Definitely. We’re a big believer in mash-ups and we really believe that there is a lot of value in being able to take even datasets that are not specifically big-data sizes yet, and then not go deep, not get more detailed information, but expand the breadth. So it’s being able to augment it with other internal datasets, bridging across different business areas as well as augmenting it with external datasets.

A lot of times you can take something that is maybe a few hundred thousand records or a few million records, and then by the time you’re joining it, and appending different pieces of information onto it, you can get the big dataset sizes.
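
As an aside to illustrate the kind of breadth-expanding mash-up Cavaretta describes, here is a minimal sketch that joins a small internal dataset with an external one; the datasets, column names and values are invented for the example:

```python
import pandas as pd

# Hypothetical internal dataset: warranty claims per region (values invented)
claims = pd.DataFrame({
    "region": ["Midwest", "Southwest", "Northeast"],
    "claims_per_1000": [4.2, 2.9, 5.1],
})

# Hypothetical external dataset: winter severity index by region (values invented)
weather = pd.DataFrame({
    "region": ["Midwest", "Southwest", "Northeast"],
    "winter_severity": [8.1, 1.3, 7.4],
})

# "Expand the breadth": append external context onto the internal records so the
# two modest datasets can be analyzed together.
mashup = claims.merge(weather, on="region")
print(mashup)
print(mashup[["claims_per_1000", "winter_severity"]].corr())
```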

Gardner: Just to be clear, you’re unique. The conventional wisdom for big data is to look at what your customers are doing, or just the external data. You’re really looking primarily at internal data, while also availing yourself of what external data might be appropriate. Maybe you could describe a little bit about your organization, what you do, and why this internal focus is so important for you.

Cavaretta: I’m part of a larger department that is housed over in the research and advanced-engineering area at Ford Motor Company, and we’re about 30 people. We work as internal consultants, kind of like Capgemini or Ernst & Young, but only within Ford Motor Company. We’re responsible for going out and looking for different opportunities from the business perspective to bring advanced technologies. So, we’ve been focused on the area of statistical modeling and machine learning for I’d say about 15 years or so.

And in this time, we’ve had a number of engagements where we’ve talked with different business customers, and people have said, “We’d really like to do this.” Then, we’d look at the datasets that they have, and say, “Wouldn’t it be great if we would have had this. So now we have to wait six months or a year.”

These new technologies are really changing the game from that perspective. We can turn on the complete fire-hose, and then say that we don’t have to worry about that anymore. Everything is coming in. We can record it all. We don’t have to worry about if the data doesn’t support this analysis, because it’s all there. That’s really a big benefit of big-data technologies.

Gardner: If you’ve been doing this for 15 years, you must be demonstrating a return on investment (ROI) or a value proposition back to Ford. Has that value proposition been changing? Do you expect it to change? What might be your real value proposition two or three years from now?

Cavaretta: The real value proposition definitely is changing as things are being pushed down in the company to lower-level analysts who are really interested in looking at things from a data-driven perspective. From when I first came in to now, the biggest change has been when Alan Mulally came into the company, and really pushed the idea of data-driven decisions.

Before, we were getting a lot of interest from people who are really very focused on the data that they had internally. After that, they had a lot of questions from their management and from upper-level directors and vice-presidents saying, “We’ve got all these data assets. We should be getting more out of them.” This strategic perspective has really changed a lot of what we’ve done in the last few years.

Gardner: As I listen to you, Michael, it occurs to me that you are applying this data-driven mentality more deeply. As you pointed out earlier, you’re also going after all the data, all the information, whether that’s internal or external.

In the case of an automobile company, you’re looking at the factory, the dealers, what drivers are doing, what the devices within the automobile are telling you, factoring that back into design relatively quickly, and then repeating this process. Are we getting to the point where this sort of Holy Grail notion of a total feedback loop across the lifecycle of a major product like an automobile is really within our grasp? Are we getting there, or is this still kind of theoretical? Can we pull it all together and make it a science?

Cavaretta: The theory is there. The question has more to do with the actual implementation and the practicality of it. We are still talking about a lot of data, where even with new advanced technologies and techniques it’s a lot of data to store, a lot of data to analyze, and a lot of data to make sure that we can mash up appropriately.

And while I think the potential is there and I think the theory is there, there is also work in being able to get the data from multiple sources. So everything which you can get back from the vehicle, fantastic. Now, if you marry that up with internal data, is it survey data, is it manufacturing data, is it quality data? What are the things you want to go after first? We can’t do everything all at the same time.

Our perspective has been let’s make sure that we identify the highest value, the greatest ROI areas, and then begin to take some of the major datasets that we have and then push them and get more detail. Mash them up appropriately and really prove up the value for the technologists.

Gardner: Clearly, there’s a lot more to come in terms of where we can take this, but I suppose it’s useful to have a historic perspective and context as well. I was thinking about some of the early quality gurus like Deming and some of the movement towards quality like Six Sigma. Does this fall within that same lineage? Are we talking about a continuum here over that last 50 or 60 years, or is this something different?

Cavaretta: That’s a really interesting question. From the perspective of analyzing data, using data appropriately, I think there is a really good long history, and Ford has been a big follower of Deming and Six Sigma for a number of years now.

The difference, though, is this idea that you don’t have to worry so much upfront about getting the data. If you’re doing this right, you have the data right there, and this has some great advantages: you won’t have to wait until you get enough history to look for some of these patterns. Then again, it also has some disadvantages, which is that you’ve got so much data that it’s easy to find things that could be spurious correlations or models that don’t make any sense.

The piece that is required is good domain knowledge, in particular when you are talking about making changes in the manufacturing plant. It’s very appropriate to look at things and be able to talk with people who have 20 years of experience to say, “This is what we found in the data. Does this match what your intuition is?” Then, take that extra step.

Gardner: Tell me a little about sort a day in the life of your organization and your team to let us know what you do. How do you go about making more data available and then reaching some of these higher-level benefits?

Cavaretta: We’re very much focused on interacting with the business. Most of all, we have to deal with working on pilot projects and working with our business customers to bring advanced analytics and big data technologies to bear against these problems. So we work in kind of what we call a push-and-pull model.

We go out and investigate technologies and say these are technologies that Ford should be interested in. Then, we look internally for business customers who would be interested in that. So, we’re kind of pushing the technologies.

From the pull perspective, we’ve had so many successful engagements in such good contacts and good credibility within the organization that we’ve had people come to us and say, “We’ve got a problem. We know this has been in your domain. Give us some help. We’d love to be able to hear your opinions on this.”

So we’ve pulled from the business side and then our job is to match up those two pieces. It’s best when we will be looking at a particular technology and we have somebody come to us and we say, “Oh, this is a perfect match.”

Those types of opportunities have been increasing in the last few years, and we’ve been very happy with the number of internal customers that have really been very excited about the areas of big data.

Gardner: Because this is The Open Group conference and an audience that’s familiar with the IT side of things, I’m curious as to how this relates to software and software development. Of course there are so many more millions of lines of code in automobiles these days, software being more important than just about everything. Are you applying a lot of what you are doing to the software side of the house, or are the agile methods, the feedback loops and the performance management issues a separate domain, or is there crossover here?

Cavaretta: There’s some crossover. The biggest area that we’ve been focused on has been picking information, whether internal business processes or from the vehicle, and then being able to bring it back in to derive value. We have very good contacts in the Ford IT group, and they have been fantastic to work with in bringing interesting tools and technology to bear, and then looking at moving those into production and what’s the best way to be able to do that.

A fantastic development has been this idea that we’re using some of the more agile techniques in this space and Ford IT has been pushing this for a while. It’s been fantastic to see them work with us and be able to bring these techniques into this new domain. So we’re pushing the envelope from two different directions.

Gardner: It sounds like you will be meeting up at some point with a complementary nature to your activities.

Cavaretta: Definitely.

Gardner: Let’s move on to this notion of the “Internet of things,” a very interesting concept that lot of people talk about. It seems relevant to what we’ve been discussing. We have sensors in these cars, wireless transfer of data, more-and-more opportunity for location information to be brought to bear, where cars are, how they’re driven, speed information, all sorts of metrics, maybe making those available through cloud providers that assimilate this data.

So let’s not go too deep, because this is a multi-hour discussion all on its own, but how is this notion of the Internet of things being brought to bear on your gathering of big data and applying it to the analytics in your organization?

Cavaretta: It is a huge area, and not only from the internal process perspective, with RFID tags within the manufacturing plants as well as out on the plant floor, but also all of the information that’s being generated by the vehicle itself.

The Ford Energi generates about 25 gigabytes of data per hour. So you can imagine selling a couple of million vehicles in the near future, with that amount of data being generated. There are huge opportunities within that, and there are also some interesting opportunities having to do with opening up some of these systems for third-party developers. OpenXC is an initiative that we have going on at Research and Advanced Engineering.

We have a lot of data coming from the vehicle. There’s a huge number of sensors and processors being added to the vehicles. There’s data being generated there, as well as communication between the vehicle and your cell phone, and communication between vehicles.

There’s a group over in Ann Arbor, Michigan, the University of Michigan Transportation Research Institute (UMTRI), that’s investigating that, as well as communication between the vehicle and, let’s say, a home system. It lets the home know that you’re on your way and it’s time to increase the temperature if it’s winter outside, or cool it in the summer. The amount of data being generated there is invaluable and could be used for a lot of benefits, both from the corporate perspective and for the environment itself.

Gardner: Just to put a stake in the ground on this, how much data do cars typically generate? Do you have a sense of what now is the case, an average?

Cavaretta: The Energi, according to the latest information that I have, generates about 25 gigabytes per hour. Different vehicles are going to generate different amounts, depending on the number of sensors and processors on the vehicle. But the biggest key has to do with not necessarily where we are right now but where we will be in the near future.

With the amount of information that’s being generated from the vehicles, a lot of it is just internal stuff. The question is how much information should be sent back for analysis to find different patterns. That becomes really interesting as you look at external sensors: temperature, humidity. You can know when the windshield wipers go on, and then be able to take that information and mash it up with other external data sources too. It’s a very interesting domain.

Gardner: So clearly, it’s multiple gigabytes per hour per vehicle and probably going much higher.

Cavaretta: Easily.
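
To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python. The 25 GB per hour comes from the conversation above; the fleet size and average daily driving time are illustrative assumptions only, not Ford figures.

    # Back-of-the-envelope estimate of fleet-wide data volume.
    # 25 GB per vehicle-hour comes from the interview; the fleet size and
    # average daily driving time below are illustrative assumptions only.
    GB_PER_VEHICLE_HOUR = 25
    AVG_DRIVING_HOURS_PER_DAY = 1          # assumption
    FLEET_SIZE = 2_000_000                 # "a couple of million vehicles"

    daily_fleet_gb = GB_PER_VEHICLE_HOUR * AVG_DRIVING_HOURS_PER_DAY * FLEET_SIZE
    daily_fleet_pb = daily_fleet_gb / 1_000_000   # decimal units: 1 PB = 1,000,000 GB

    print(f"Per vehicle per day: {GB_PER_VEHICLE_HOUR * AVG_DRIVING_HOURS_PER_DAY} GB")
    print(f"Across the fleet:    {daily_fleet_gb:,} GB (~{daily_fleet_pb:.0f} PB) per day")

Even under these modest assumptions the fleet would produce on the order of 50 petabytes a day, which is why the question of how much to send back for analysis matters so much.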

Gardner: Let’s move forward now for those folks who have been listening and are interested in bringing this to bear on their organizations and their vertical industries, from the perspective of skills, mindset, and culture. Are there standards, certification, or professional organizations that you’re working with in order to find the right people?

It’s a big question. Let’s look at what skills you target for your group and how you think you can improve on that. Then, we’ll get into some of those larger issues about culture and mindset.

Cavaretta: The skills that we have in our department, in particular on our team, are in the area of computer science, statistics, and some good old-fashioned engineering domain knowledge. We’ve really gone about this from a training perspective. Aside from a few key hires, it’s really been an internally developed group.

The biggest advantage that we have is that we can go out and be very targeted with the amount of training that we do. There are such good tools out there, especially in the open-source realm, that we can spin things up with relatively low cost and low risk, and do a number of experiments in the area. That’s really the way that we push the technologies forward.

Gardner: Why The Open Group? Why is that a good forum for your message, and for your research here?

Cavaretta: The biggest reason is the focus on the enterprise. There are a lot of advantages and a lot of business cases in large enterprises with many systems, where a company can take a relatively small improvement and have it make a large difference on the bottom line.

Talking with The Open Group really gives me an opportunity to bring people on board with the idea that you should be looking at a different mindset. It’s not “Here’s a way that data is being generated; try to conceive of some questions we can use it for, and we’ll store that.” It’s “Let’s just take everything, we’ll worry about it later, and then we’ll find the value.”

Gardner: I’m sure the viewers of your presentation on January 28 will be gathering a lot of great insights. A lot of the people that attend The Open Group conferences are enterprise architects. What do you think those enterprise architects should be taking away from this? Is there something about their mindset that should shift in recognizing the potential that you’ve been demonstrating?

Cavaretta: It’s important for them to be thinking about data as an asset, rather than as a cost. You may even have to spend some money, and it may feel a little bit unsafe without really solid ROI at the beginning. Then, move towards pulling that information in and storing it in a way that allows not just the high-level data scientists to get access and provide value, but anyone who is interested in the data overall. Those are very important pieces.

The last one is how you take a big-data project, where you’re not storing data in the traditional business intelligence (BI) framework that an enterprise develops, and then connect it to the BI systems and provide value through those mash-ups. Those are really important areas that still need some work.

Gardner: Another big constituency within The Open Group community are those business architects. Is there something about mindset and culture, getting back to that topic, that those business-level architects should consider? Do you really need to change the way you think about planning and resource allocation in a business setting, based on the fruits of things that you are doing with big data?

Cavaretta: I really think so. The digital asset that you have can be monetized to change the way the business works, and that could be done by creating new assets that then can be sold to customers, as well as improving the efficiencies of the business.

The idea that everything is going to be very well-defined, with a lot of work being put into making sure that data has high quality, is something that needs to change somewhat. As you’re pulling the data in and thinking about long-term storage, the emphasis is more on access to the information than on the problem of just storing it.

Gardner: Interesting that you brought up that notion that the data becomes a product itself and even a profit center perhaps.

Cavaretta: Exactly. There are many companies, especially large enterprises, that are looking at their data assets and wondering what can they do to monetize this, not only to just pay for the efficiency improvement but as a new revenue stream.

Gardner: We’re almost out of time. For those organizations that want to get started on this, are there any 20/20 hindsights or Monday-morning-quarterback insights you can provide? How do you get started? Do you appoint a leader? Do you need a strategic roadmap, getting this culture or mindset shifted, pilot programs? How would you recommend that people begin the process of getting into this?

Cavaretta: We’re definitely huge believers in pilot projects and proofs of concept, and we like to develop roadmaps by doing. So get out there. Understand that it’s going to be messy. Understand that it may be a little bit more costly and that the ROI isn’t going to be there at the beginning.

But get your feet wet. Start doing some experiments, and then, as those experiments turn from just experimentation into really providing real business value, that’s the time to start looking at a more formal aspect and more formal IT processes. But you’ve just got to get going at this point.

Gardner: I would think that the competitive forces are out there. If you are in a competitive industry, and those that you compete against are doing this and you are not, that could spell some trouble.

Cavaretta:  Definitely.

Gardner: We’ve been talking with Michael Cavaretta, PhD, Technical Leader of Predictive Analytics at Ford Research and Advanced Engineering in Dearborn, Michigan. Michael and I have been exploring how big data is fostering business transformation by allowing deeper insights into more types of data and all very efficiently. This is improving processes, updating quality control and adding to customer satisfaction.

Our conversation today comes as a lead-in to Michael’s upcoming plenary presentation. He is going to be talking on January 28 in Newport Beach, California, as part of The Open Group conference.

You will hear more from Michael and other global leaders on big data who will be gathering at this conference to talk about business transformation through big data. So a big thank you to Michael for joining us in this fascinating discussion. I really enjoyed it, and I look forward to your presentation on the 28th.

Cavaretta: Thank you very much.

Gardner: And I would encourage our listeners and readers to attend the conference or follow more of the threads in social media from the event. Again, it’s going to be happening from January 27 to January 30 in Newport Beach, California.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator through the thought leadership interviews. Thanks again for listening, and come back next time.

Filed under Conference, Uncategorized

Leveraging Social Media at The Open Group Newport Beach Conference (#ogNB)

By The Open Group Conference Team

By attending conferences hosted by The Open Group®, attendees can learn from industry experts, keep up with the latest technologies and standards, and discuss and debate current industry trends. One way to maximize the benefits is to make technology work for you. If you are attending The Open Group Conference in Newport Beach next week, we’ve put together a few tips on how to leverage social media to make networking at the conference easier, quicker and more effective.

Using Twitter at #ogNB

Twitter is a real-time news-sharing tool that anyone can use. The official hashtag for the conference is #ogNB. This enables anybody, whether they are physically attending the event or not, to follow what’s happening at the Newport Beach conference in real-time and interact with each other.

Before the conference, be sure to update your Twitter account to monitor #ogNB and, of course, to tweet about the conference.

Using Facebook at The Open Group Conference in Newport Beach

You can also track what is happening at the conference on The Open Group Facebook page. We will be running another photo contest, where all entries will be uploaded to our page. Members and Open Group Facebook fans can vote by “liking” a photo. The photo with the most “likes” in each category will be named the winner. Submissions will be uploaded in real-time, so the sooner you submit a photo, the more time members and fans will have to vote for it!

For full details of the contest and how to enter see The Open Group blog at: http://blog.opengroup.org/2013/01/22/the-open-group-photo-contest-document-the-magic-at-the-newport-beach-conference/

LinkedIn during The Open Group Conference in Newport Beach

Inspired by one of the sessions? Interested in what your peers have to say? Start a discussion on The Open Group LinkedIn Group page. We’ll also be sharing interesting topics and questions related to The Open Group Conference as it is happening. If you’re not a member already, requesting membership is easy. Simply go to the group page and click the “Join Group” button. We’ll accept your request as soon as we can!

Blogging during The Open Group Conference in Newport Beach

Stay tuned for daily conference recaps here on The Open Group blog. In case you missed a session or you weren’t able to make it to Newport Beach, we’ll be posting the highlights and recaps on the blog. If you are attending the conference and would like to submit a recap of your own, please contact opengroup (at) bateman-group.com.

If you have any questions about social media usage at the conference, feel free to tweet the conference team @theopengroup.

Filed under Uncategorized

The Death of Planning

By Stuart Boardman, KPN

If I were to announce that planning large scale transformation projects was a waste of time, you’d probably think I’d taken leave of my senses. And yet, somehow this thought has been nagging at me for some time now. Bear with me.

It’s not so long ago that we still had debates about whether complex projects should be delivered as a “big bang” or in phases. These days the big bang has pretty much been forgotten. Why is that? I think the main reason is the level of risk involved in running a long process and then dropping the result into the operational environment just like that. This applies to any significant change, whether related to business models and processes, IT architecture or physical building developments. Even if it all works properly, the level of sudden organizational change involved may stop it in its tracks.

So it has become normal to plan the change as a series of phases. We develop a roadmap to get us from here (as-is) to the end goal (to-be). And this is where I begin to identify the problem.

A few months ago I spent an enjoyable and thought-provoking day with Jack Martin Leith (@jackmartinleith). Jack is a master at demystifying clichés, but when he announced his irritation with “change is a journey,” I could only respond, “but Jack, it is.” What Jack made me see is that, whilst the original usage was a useful insight, it has become a cliché that is commonly misused. It results in some pretty frustrating journeys! To understand that, let’s take the analogy literally. Suppose your objective is to travel to San Diego but there are no direct flights from where you live. If the first step on your journey is a four-hour layover at JFK, that’s at best a waste of your time and energy. There’s no value in this step. A day in Manhattan might be a different story. We can (and do) deal with this kind of thing for journeys of a day or so, but imagine a journey that takes three or more years and all you see on the way is the inside of airports.

My experience has been that the same problem too often manifests itself in transformation programs. The first step may be logical from an implementation perspective, but it delivers no discernible value (tangible or intangible). It’s simply a validation that something has been done, as if, in our travel analogy, we were celebrating travelling the first 1000 kilometers, even if that put us somewhere over the middle of Lake Erie.

What would be better? An obvious conclusion that many have drawn is that we need to ensure every step delivers business value but that’s easier said than done.

Why is it so hard? The next thing Jack said helped me understand why. His point is that when you’ve taken the first step on your journey, it’s not just some intermediate station. It’s the “new now.” The new reality. The new as-is. And if the new reality is hanging around in some grotty airport, trying to do your job via a Wi-Fi connection of dubious security and spending too much money on coffee and cookies... you get the picture.

The problem with identifying that business value is that we’re not focusing on the new now but on something much more long-term. We’re trying to interpolate the near term business value out of the long term goal, which wasn’t defined based on near term needs.

What makes this all the more urgent is the increasing rate and unpredictability of change – in all aspects of doing business. This has led us to shorter planning horizons and an increasing tendency to regard that “to be” as nothing more than a general sense of direction. We’re thinking, “If we could deliver the whole thing really, really quickly on the basis of what we know we’d like to be able to do now, if it were possible, then it would look like this” – but knowing all the time that by the time we get anywhere near that end goal, it will have changed. It’s pretty obvious then that a first step, whose justification is entirely based on that imagined end goal, can easily be of extremely limited value.

So why not put more focus on the first step? That’s going to be the “new now.” How about making that our real target? Something that the enterprise sees as real value and that is actually feasible in a reasonable time scale (whatever that is). Instead of scoping that step as an intermediate (and rather immature) layover, why not put all our efforts into making it something really good? And when we get there and people know how the new now looks and feels, we can all think afresh about where to go next. After all, a journey is not simply defined by its destination but by how you get there and what you see and do on the way. If the actual journey itself is valuable, we may not want to get to the end of it.

Now that doesn’t mean we have to forget all about where we might want to be in three or even five years — not at all. The long term view is still important in helping us to make smart decisions about shorter term changes. It helps us allow for future change, even if only because it lets us see how much might change. And that helps us make sound decisions. But we should accept that our three or five year horizon needs to be continually open to revision – not on some artificial yearly cycle but every time there’s a “new now.” And this needs to include the times where the new now is not something we planned but is an emergent development from within or outside of the enterprise or is due to a major regulatory or market change.

So, if the focus is all on the first step and if our innovation cycle is getting steadily shorter, what’s the value of planning anything? Relax, I’m not about to fire the entire planning profession. If you don’t plan how you’re going to do something, what your dependencies are, how to react to the unexpected, etc., you’re unlikely to achieve your goal at all. Arguably that’s just project planning.

What about program planning? Well, if the program is so exposed to change maybe our concept of program planning needs to change. Instead of the plan being a thing fixed in stone that dictates everything, it could become a process in which the whole enterprise participates – itself open to emergence. The more I think about it, the more appealing that idea seems.

In my next post, I’ll go into more detail about how this might work, in particular from the perspective of Enterprise Architecture. I’ll also look more at how “the new planning” relates to innovation, emergence and social business and at the conflicts and synergies between these concerns. In the meantime, feel free to throw stones and see where the story doesn’t hold up.

Stuart Boardman is a Senior Business Consultant with KPN where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity. 

Filed under Enterprise Architecture, Uncategorized

The Center of Excellence: Relating Everything Back to Business Objectives

By Serge Thorn, Architecting the Enterprise

This is the third and final installment of a series discussing how to implement SOA through TOGAF®. In my first blog post I explained the concept of the Center of Excellence and how to create a vision for your organization; my second blog post suggested how the Center of Excellence would define a Reference Architecture for the organization.

SOA principles should clearly relate back to the business objectives and key architecture drivers. They will be constructed in the same way as TOGAF 9.1 principles, with a statement, rationale and implications. Below are examples of the kinds of principles that may be defined:

  • Put the computing near the data
  • Services are technology neutral
  • Services are consumable
  • Services are autonomous
  • Services share a formal contract
  • Services are loosely coupled
  • Services abstract underlying logic
  • Services are reusable
  • Services are composable
  • Services are stateless
  • Services are discoverable
  • Location Transparency

Here is a detailed principle example:

  • Service invocation
    • All service invocations between application silos will be exposed through the Enterprise Service Bus (ESB)
    • The only exception to this principle will be when the service meets all the following criteria:
      • It will be used only within the same application silo
      • There is no potential right now or in the near future for re-use of this service
      • The service has already been right-sized
      • The Review Team has approved the exception
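
To make the governance rule above concrete, here is a minimal sketch of how a CoE review checklist might encode it. The class and function names are hypothetical illustrations, not part of TOGAF or of any ESB product.

    from dataclasses import dataclass

    @dataclass
    class ServiceInvocation:
        """Facts a CoE review would capture about a proposed invocation.
        Field names are hypothetical, mirroring the exception criteria above."""
        crosses_silo_boundary: bool   # used outside its own application silo?
        reuse_potential: bool         # any potential for re-use now or in the near future?
        right_sized: bool             # has the service already been right-sized?
        review_team_approved: bool    # has the Review Team approved the exception?

    def must_go_through_esb(inv: ServiceInvocation) -> bool:
        """The principle: every invocation goes through the ESB unless ALL
        exception criteria are met."""
        exception_applies = (
            not inv.crosses_silo_boundary
            and not inv.reuse_potential
            and inv.right_sized
            and inv.review_team_approved
        )
        return not exception_applies

    # A silo-local, right-sized, approved service with no re-use potential
    # qualifies for the exception and may bypass the ESB.
    print(must_go_through_esb(ServiceInvocation(False, False, True, True)))   # False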

As previously indicated, the SOA Center of Excellence (CoE) would also have to provide guidelines on SOA processes and related technologies. This may include:

  • Service analysis (Enterprise Architecture, BPM, OO, requirements and models, UDDI Model)
  • Service design (SOAD, specification, Discovery Process, Taxonomy)
  • Service provisioning (SPML, contracts, SLA)
  • Service implementation development (BPEL, SOAIF)
  • Service assembly and integration (JBI, ESB)
  • Service testing
  • Service deployment (the software on the network)
  • Service discovery (UDDI, WSIL, registry)
  • Service publishing (SLA, security, certificates, classification, location, UDDI, etc.)
  • Service consumption (WSDL, BPEL)
  • Service execution (WSDM)
  • Service versioning (UDDI, WSDL)
  • Service Management and monitoring
  • Service operation
  • Programming, granularity and abstraction

Other activities may be considered by the SOA CoE, such as providing a collaboration platform, asset management (services are just another type of asset), compliance with standards and best practices, use of guidelines, etc. These activities could also be supported by an Enterprise Architecture team.

As described in the TOGAF® 9.1 Framework, the SOA CoE can act as the governance body for SOA implementation, working with the Enterprise Architecture team to oversee what goes into a new architecture that the organization is creating and to ensure that the architecture meets the current and future needs of the organization.

The Center of Excellence provides expanded opportunities for organizations to leverage and reuse service-oriented infrastructure and a knowledge base, facilitating the implementation of cost-effective and timely SOA-based solutions.

Serge Thorn is CIO of Architecting the Enterprise.  He has worked in the IT Industry for over 25 years, in a variety of roles, which include; Development and Systems Design, Project Management, Business Analysis, IT Operations, IT Management, IT Strategy, Research and Innovation, IT Governance, Architecture and Service Management (ITIL). He is the Chairman of the itSMF (IT Service Management forum) Swiss chapter and is based in Geneva, Switzerland.

Filed under Cloud/SOA, Enterprise Architecture, Standards, TOGAF, TOGAF®, Uncategorized

Call for Submissions

By Patty Donovan, The Open Group

The Open Group Blog is celebrating its second birthday this month! Over the past two years, our blog posts have tended to cover Open Group activities – conferences, announcements, our lovely members, etc. While several members and Open Group staff serve as regular contributors, we’d like to take this opportunity to invite our community members to share their thoughts and expertise as guest contributors on topics related to The Open Group’s areas of interest.

Here are a few examples of popular guest blog posts that we’ve received over the past year

Blog posts generally run between 500 and 800 words and address topics relevant to The Open Group workgroups, forums, consortiums and events. Some suggested topics are listed below.

  • ArchiMate®
  • Big Data
  • Business Architecture
  • Cloud Computing
  • Conference recaps
  • DirectNet
  • Enterprise Architecture
  • Enterprise Management
  • Future of Airborne Capability Environment (FACE™)
  • Governing Board Businesses
  • Governing Board Certified Architects
  • Governing Board Certified IT Specialists
  • Identity Management
  • IT Security
  • The Jericho Forum
  • The Open Group Trusted Technology Forum (OTTF)
  • Quantum Lifecycle Management
  • Real-Time Embedded Systems
  • Semantic Interoperability
  • Service-Oriented Architecture
  • TOGAF®

If you have any questions or would like to contribute, please contact opengroup (at) bateman-group.com.

Please note that all content submitted to The Open Group blog is subject to The Open Group approval process. The Open Group reserves the right to deny publication of any contributed works. Anything published shall be copyright of The Open Group.

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.

Filed under Uncategorized

Challenges to Building a Global Identity Ecosystem

By Jim Hietala and Ian Dobson, The Open Group

In our five identity videos from the Jericho Forum, a forum of The Open Group:

  • Video #1 explained the “Identity First Principles” – about people (or any entity) having a core identity and how we all operate with a number of personas.
  • Video #2 “Operating with Personas” explained how we use a digital core identifier to create digital personas – as many as we like – to mirror the way we use personas in our daily lives.
  • Video #3 “Trust and Privacy” described how trust and privacy interact to provide a trusted, privacy-enhanced identity ecosystem.
  • Video #4 “Entities and Entitlement” explained why identity is not just about people – we must include all entities that we want to identify in our digital world, and how “entitlement” rules control access to resources.

In this fifth video – Building a Global Identity Ecosystem – we highlight what we need to change and develop to build a viable identity ecosystem.

The Internet is global, so any identity ecosystem similarly must be capable of being adopted and implemented globally.

This means that establishing a trust ecosystem is essential to widespread adoption of an identity ecosystem. To achieve this, an identity ecosystem must demonstrate that its architecture is robust enough to scale to the many billions of entities involved: people all over the world will want not only to assert their own identities and attributes, but also to manage the identities they will want for all their other types of entities.

It also means that we need to develop an open implementation reference model, so that anyone in the world can develop and implement interoperable identity ecosystem identifiers, personas, and supporting services.

In addition, the trust ecosystem for asserting identities and attributes must be robust, to allow entities to make assertions that relying parties can be confident to consume and therefore use to make risk-based decisions. Agile roots of trust are vital if the identity ecosystem is to have the necessary levels of trust in entities, personas and attributes.

Key to the trust in this whole identity ecosystem is being able to immutably (enduringly and changelessly) link an entity to a digital Core Identifier, so that we can place full trust in knowing that only the person (or other type of entity) holding that Core Identifier can be the person (or other type of entity) it was created from, and no one or thing can impersonate it. This immutable binding must be created in a form that guarantees the binding and includes the interfaces necessary to connect with the digital world. It should also be easy and cost-effective for all to use.

Of course, the cryptography and standards that this identity ecosystem depends on must be fully open, peer-reviewed and accepted, and freely available, so that all governments and interested parties can assure themselves, just as they can with AES encryption today, that it’s truly open and there are no barriers to implementation. The technologies needed around cryptography, one-way trusts, and zero-knowledge proofs, all exist today, and some of these are already implemented. They need to be gathered into a standard that will support the required model.
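
As a purely illustrative sketch (not the Jericho Forum’s actual scheme), the snippet below shows how openly reviewed, one-way primitives such as HMAC-SHA-256 could derive per-context persona identifiers from a privately held core identifier, so that each persona reveals nothing about the core and the personas cannot be linked to each other without it.

    import hmac, hashlib, secrets

    # Illustrative only: derive unlinkable persona identifiers from a core
    # identifier using a one-way keyed hash. This is NOT the Jericho Forum
    # specification, just an example of the kind of open primitive involved.
    core_identifier = secrets.token_bytes(32)   # held privately by the entity

    def persona_id(core: bytes, context: str) -> str:
        """Derive a persona identifier for one context (e.g. 'bank', 'health')."""
        return hmac.new(core, context.encode(), hashlib.sha256).hexdigest()

    print(persona_id(core_identifier, "bank"))    # different in every context;
    print(persona_id(core_identifier, "health"))  # reveals nothing about the core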

Adoption of an identity ecosystem requires a major mindset change in the thinking of relying parties – to receive, accept and use trusted identities and attributes from the identity ecosystem, rather than creating, collecting and verifying all this information for themselves. Being able to consume trusted identities and attributes will bring significant added value to relying parties, because the information will be up-to-date and from authoritative sources, all at significantly lower cost.

Now that you have followed these five Identity Key Concepts videos, we encourage you to use our Identity, Entitlement and Access (IdEA) commandments as the test to evaluate the effectiveness of all identity solutions – existing and proposed. The Open Group is also hosting an hour-long webinar that will preview all five videos and host an expert Q&A shortly afterward on Thursday, August 16.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

 

Ian Dobson is the director of the Security Forum and the Jericho Forum for The Open Group, coordinating and facilitating the members to achieve their goals in our challenging information security world. In the Security Forum, his focus is on supporting development of open standards and guides on security architectures and management of risk and security, while in the Jericho Forum he works with members to anticipate the requirements for the security solutions we will need in the future.

Filed under Identity Management, Uncategorized