
Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when they actually need it, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group initiatives, Quantum Lifecycle Management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the big-data processing needed to achieve this aim.

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is one of the areas that I support as forum director.) Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.
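The map/reduce style of processing that underpins platforms like Hadoop can be sketched in a few lines. The following is a minimal, in-memory Python illustration of the pattern only, not the presenters’ reference architecture; the social-media records and field names are invented for the example:

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, mapper):
    # Apply the mapper to every record, yielding (key, value) pairs.
    return chain.from_iterable(mapper(record) for record in records)

def reduce_phase(pairs, reducer):
    # Group values by key, then reduce each group to a single result.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reducer(values) for key, values in groups.items()}

# Hypothetical input: social-media posts tagged with a location.
posts = [
    {"location": "plaza", "text": "march to the plaza at noon"},
    {"location": "plaza", "text": "everyone meet at the plaza"},
    {"location": "station", "text": "regrouping at the station"},
]

# Mapper emits one (location, 1) pair per post; reducer sums the counts,
# giving a crude picture of where activity is concentrating.
pairs = map_phase(posts, lambda post: [(post["location"], 1)])
counts = reduce_phase(pairs, sum)
print(counts)  # {'plaza': 2, 'station': 1}
```

In a real deployment the map and reduce functions would run in parallel across a cluster, with the framework handling the grouping (the “shuffle”) between the two phases.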

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever-changing tools for handling Big Data of ever-increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.
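As a concrete illustration of reusing an existing technique: in Hadoop, enabling Kerberos for inter-node authentication is largely a matter of configuration. A minimal sketch of the relevant `core-site.xml` properties might look like the following (the keytab and principal setup that must accompany it is omitted, and exact settings vary by distribution, so treat this as an indicative sketch rather than a working deployment):

```xml
<!-- core-site.xml (fragment): switch Hadoop from "simple" to Kerberos auth -->
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
  <property>
    <!-- "privacy" also encrypts RPC traffic between nodes -->
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>
</configuration>
```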

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer, for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it poses major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishingboat-bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Complexity from Big Data and Cloud Trends Makes Architecture Tools like ArchiMate and TOGAF More Powerful, Says Expert Panel

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: Complexity from Big Data and Cloud Trends Makes Architecture Tools like ArchiMate and TOGAF More Powerful, Says Expert Panel, or read the transcript here.

We recently assembled a panel of Enterprise Architecture (EA) experts to explain how such simultaneous and complex trends as big data, Cloud Computing, security, and overall IT transformation can be helped by the combined strengths of The Open Group Architecture Framework (TOGAF®) and the ArchiMate® modeling language.

The panel consisted of Chris Forde, General Manager for Asia-Pacific and Vice President of Enterprise Architecture at The Open Group; Iver Band, Vice Chair of The Open Group ArchiMate Forum and Enterprise Architect at The Standard, a diversified financial services company; Mike Walker, Senior Enterprise Architecture Adviser and Strategist at HP and former Director of Enterprise Architecture at Dell; Henry Franken, the Chairman of The Open Group ArchiMate Forum and Managing Director at BIZZdesign; and Dave Hornford, Chairman of the Architecture Forum at The Open Group and Managing Partner at Conexiam. I served as the moderator.

This special BriefingsDirect thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on “Big Data — the transformation we need to embrace today.” [Disclosure: The Open Group and HP are sponsors of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Is there something about the role of the enterprise architect that is shifting?

Walker: There is less of a focus on the traditional things we have come to think of as EA, such as standards, governance, and policies, and more of a focus on emerging areas such as soft skills, Business Architecture, and strategy.

To this end I see a lot in the realm of working directly with the executive chain to understand the key value drivers for the company and rationalize where they want to go with their business. So we’re moving into a business-transformation role in this practice.

At the same time, we’ve got to be mindful of the disruptive external technology forces coming in as well. EA can’t divorce itself from the other aspects of architecture. So the role that enterprise architects play becomes more and more important and elevated in the organization.

Two examples of this disruptive technology being focused on at the conference are Big Data and Cloud Computing. Both are impacting our businesses not because of some new business idea, but because technology is available to enhance or provide new capabilities to our business. EAs still have to understand these new technology innovations and determine how they will apply to the business.

We need really good enterprise architects, and it’s difficult to find them. There is a shortage right now, especially given that a lot of focus is being put on the EA department to really deliver sound architectures.

Not standalone

Gardner: We’ve been talking a lot here about Big Data, but usually that’s not just a standalone topic. It’s Big Data together with Cloud, mobile, and security.

So with these overlapping and complex relationships among multiple trends, why are EA and tools like the TOGAF framework and the ArchiMate modeling language especially useful?

Band: One of the things that has been clear for a while now is that people outside of IT don’t necessarily have to go through the technology function to avail themselves of these technologies any more. Whether they ever had to is really a question as well.

One of the things that EA is doing, especially in the practice that I work in, is using approaches like the ArchiMate modeling language to effect clear communication between the business, IT, partners, and other stakeholders. That’s what I do in my daily work, overseeing our major systems modernization efforts. I work with major partners, some of which are offshore.

I’m increasingly called upon to make sure that we have clear processes for making decisions and clear ways of visualizing the different choices in front of us. We can’t always unilaterally dictate the choice, but we can make the conversation clearer by using frameworks like the TOGAF standard and the ArchiMate modeling language, which I use virtually every day in my work.

Hornford: The fundamental benefit of these tools is the organization realizing its capability and strategy. I just came from a session where a fellow quoted a Harvard study, which said that around a third of executives thought their company was good at executing on its strategy. He highlighted that this means that two-thirds are not good at executing on their strategy.

If you’re not good at executing on your strategy and you’ve got Big Data, mobile, consumerization of IT and Cloud, where are you going? What’s the correct approach? How does this fit into what you were trying to accomplish as an enterprise?

An enterprise architect who is doing their job brings together the strategy, goals, and objectives of the organization, and its capabilities, with the techniques that are available, whether offshoring, onshoring, Cloud, or Big Data, so that the organization is able to move forward to where it needs to be, as opposed to where it would randomly walk to.

Forde: One of the things that has come out in several of the presentations is capability-based planning, a technique in EA for getting your arms around all of this from a business-driver perspective. Just to polish what Dave said a little bit, it’s about connecting all of those things. We see enterprises talking about a capability-based view of things on that basis.

Gardner: Let’s get a quick update. The TOGAF framework, where are we and what have been the highlights from this particular event?

Minor upgrade

Hornford: In the last year, we’ve published a minor upgrade, TOGAF version 9.1, based on cleaning up inconsistencies in the language of the TOGAF documentation. What we’re working on right now is a significant new release, the next release of the TOGAF standard, which divides the TOGAF documentation to make it more consumable, more consistent, and more useful.

Today, the TOGAF standard has guidance on how to do something mixed into the framework of what you should be doing. We’re peeling those apart. So with that peeled apart, we won’t have guidance that is tied to classic application architecture in a world of Cloud.

What we find when we have done work with the Banking Industry Architecture Network (BIAN) for banking architecture, Sherwood Applied Business Security Architecture (SABSA) for security architecture, and the TeleManagement Forum is that the concepts in the TOGAF framework work across industries and across trends. We need to move the guidance into a place of its own, so that we can be far nimbler in guiding how to tie Cloud to a current strategy, or consumerization of IT to on-shoring.

Franken: The ArchiMate modeling language reached version 2.0 last year. The ArchiMate 1.0 standard is the language for modeling the core of your EA; the ArchiMate 2.0 standard added two extensions to align it better with the process of EA.

The first extension, in line with the TOGAF standard, is being able to model the motivation: why you’re doing EA, the stakeholders, and the goals that drive them. The second extension to the ArchiMate standard is being able to model planning and migration.

So with the core EA and these two extensions, together with the TOGAF standard process, you have a good basis for getting EA to work in your organization.

Gardner: Mike, fill us in on some of your thoughts about the role of information architecture vis-à-vis the larger business architect and enterprise architect roles.

Walker: Information architecture is an interesting topic in that it hasn’t been getting a whole lot of attention until recently.

Information architecture is an aspect of Enterprise Architecture that enables an information strategy or business solution through the definition of the company’s business information assets, their sources, structure, classification and associations that will prescribe the required application architecture and technical capabilities.

Information architecture is the bridge between the Business Architecture world and the application and technology architecture activities.

The reason I say that is because information architecture is a business-driven discipline that details the information strategy of the company. As we know, and from what we’ve heard at the conference keynotes like in the case of NASA, Big Data, and security presentations, the preservation and classification of that information is vital to understanding what your architecture should be.

Least matured

From an industry perspective, this is one of the least mature areas as far as being incorporated into a formal discipline. The TOGAF standard actually has a phase dedicated to it, in data architecture. Again, there are still lots of opportunities to grow and to incorporate additional methods, models, and tools from the enterprise information management discipline.

Enterprise information management not only captures traditional topic areas like master data management (MDM), metadata, and unstructured types of information architecture, but also focuses on information governance and the architecture patterns and styles implemented in MDM, Big Data, and so on. There is a great deal of opportunity there.

From the role of information architects, I’m seeing more and more traction in the industry as a whole. I’ve dealt with an entire group that’s focused on information architecture and building up an enterprise information management practice, so that we can take our top line business strategies and understand what architectures we need to put there.

This is a critical enabler for global companies, because oftentimes they’re restricted by regulation, typically handled at a government or regional level. This means we have to understand that as we build our architecture. So it’s not about the application, but rather the data that it processes, moves, or transforms.

Gardner: Up until not too long ago, the conventional thinking was that applications generate data. Then you treat the data in some way so that it can be used, perhaps by other applications, but that the data was secondary to the application.

But there’s some shift in that thinking now more toward the idea that the data is the application and that new applications are designed to actually expand on the data’s value and deliver it out to mobile tiers perhaps. Does that follow in your thinking that the data is actually more prominent as a resource perhaps on par with applications?

Walker: You’re spot on, Dana. Before the commoditization of these technologies, which resided on premises, we could get away with starting at the application layer and working our way back, because we had access to the source code or hardware behind our firewalls. We could throw servers out, and we used to put firewalls in front of the data to solve the problem with infrastructure. So we didn’t have to treat information as a first-class citizen. Times have changed, though.

Information access and processing is now democratized, and it’s being pushed as the first point of presentment. A lot of the time this is on a mobile device, and even then it’s not a corporate mobile device, but your personal device. So how do you handle that data?

It’s the same way with Cloud, and I’ll give you a great example of this. I was working as an adviser for a company, and they were looking at their Cloud strategy. They had made a big bet on one of the big infrastructure and Cloud service providers. They looked first at the features and functions that the Cloud provider could provide, and not necessarily at the information requirements. They ran into two major issues, each of which was essentially a showstopper, and they had to pull off that infrastructure.

The first was in that specific Cloud provider’s terms of service around intellectual property (IP) ownership. Essentially, the company was being forced to give up its IP rights.

Big business

As you know, IP is a big business these days, so that was a showstopper. The second issue was that the arrangement broke core regulatory requirements around being able to discover information.

So focusing on the applications to make sure they meet your functional needs is important. However, we should take a step back, look at the information first, and make sure that the requirements of the people in your organization who can’t say no are satisfied.

Gardner: Is data architecture different from EA and Business Architecture, or is it a subset? What’s the relationship, Dave?

Hornford: Data architecture is part of an EA. I won’t use the word subset, because a subset starts to imply that it is a distinct thing that you can look at on its own. You cannot look at your Business Architecture without understanding your information architecture. When you think about Big Data, cool. We’ve got this pile of data in the corner. Where did it come from? Can we use it? Do we actually have legitimate rights, as Mike highlighted, to use this information? Are we allowed to mix it and who mixes it?

When we look at how our businesses are optimized, they normally optimize around work product, what the organization is delivering. That’s very easy: you can see who consumes your work product. With information, you often have no idea who consumes it. So now we have provenance, we have source, and, as we move to global companies, we have the trends around consumerization, Cloud, and simply tightening cycle time.

Gardner: Of course, the end game for a lot of the practitioners here is to create that feedback loop of a lifecycle approach, rapid information injection and rapid analysis that could be applied. So what are some of the ways that these disciplines and tools can help foster that complete lifecycle?

Band: The disciplines and tools can facilitate the right conversations among different stakeholders. One of the things that we’re doing at The Standard is building cadres equally balanced between people in business and IT.

We’re training them in information management, going through a particular curriculum, and having them study for an information management certification that introduces a lot of these different frameworks and standard concepts.

Creating cadres

We want to create these cadres to be able to solve tough and persistent information management problems that affect all companies in financial services, because information is a shared asset. The purpose of the frameworks is to ensure proper stewardship of that asset across disciplines and across organizations within an enterprise.

Hornford: The core is from the two standards that we have, the ArchiMate standard and the TOGAF standard. The TOGAF standard has, from its early roots, focused on the components of EA and how to build a consistent method of understanding of what I’m trying to accomplish, understanding where I am, and where I need to be to reach my goal.

When we bring in the ArchiMate standard, I have a language, a descriptor, a visual descriptor that allows me to cross all of those domains in a consistent description, so that I can do that traceability. When I pull in this lever or I have this regulatory impact, what does it hit me with, or if I have this constraint, what does it hit me with?

If I don’t do this, if I don’t use the framework of the TOGAF standard, or I don’t use the discipline of formal modeling in the ArchiMate standard, we’re going to do it anecdotally. We’re going to trip. We’re going to fall. We’re going to have a non-ending series of surprises, as Mike highlighted.

“Oh, terms of service. I am violating the regulations. Beautiful. Let’s take that to our executive and tell him right as we are about to go live that we have to stop, because we can’t get where we want to go, because we didn’t think about what it took to get there.” And that’s the core of EA in the frameworks.

Walker: To build on what Dave has just talked about, and going back to your first question, Dana, about the value statement of TOGAF from a business perspective: the business value of the TOGAF standard is that it gives you a repeatable and predictable process for building out architectures, one that properly manages risk and reliably produces value.

The TOGAF framework provides a methodology to ask what problems you're trying to solve and where you're trying to go with your business opportunities or challenges. That leads to Business Architecture, which is really a rationalization, in technical or architectural terms, of the distillation of the corporate strategy.

From there, what you want to understand is information: how does that translate, and what information architecture do we need to put in place? You get into all sorts of things around risk management, and then it goes on from there to what we were talking about earlier, information architecture.

If the TOGAF standard is applied properly, you can achieve the same result every time. That is what interests business stakeholders, in my opinion. And the ArchiMate modeling language is great because, as we talked about, it provides very rich visualizations, so that people can not only show a picture but tie information together. Unlike other aspects of architecture, information architecture is less about the boxes and more about the lines.

Quality of the individuals

Forde: Building on what Dave and Iver were saying earlier: while the process, the methodology, and the tools are of interest, it's the discipline and the quality of the individuals doing the work that matter.

Iver talked about how the conversation is shifting and the practice is improving as organizations build communications groups with a discipline to operate around. What I'm hearing implied, and what I know specifically occurs, is that we end up with assets that are well described and reusable.

And there is a point at which you reach a critical mass that these assets become an accelerator for decision making. So the ability of the enterprise and the decision makers in the enterprise at the right level to respond is improved, because they have a well disciplined foundation beneath them.

They have a set of assets that are reasonably well known, at the right level of granularity for them to absorb the information, and the conversation is structured so that the technical people and the business people are in the right room together to talk about the problems.

This is actually a fairly sophisticated set of operations I'm describing. It doesn't happen overnight, but it is definitely one of the things that we see occurring with our members in certain cases.

Hornford: I want to build on what Chris said, specifically the word "asset." While he was talking, I was thinking about how people have talked about information as an asset. Most of us don't know what information we have, how it's collected, or where it is, but we know we have a valuable asset.

I’ll use an analogy. I have a factory some place in the world that makes stuff. Is that an asset? If I know that my factory is able to produce a particular set of goods and it’s hooked into my supply chain here, I’ve got an asset. Before that, I just owned a thing.

I was very encouraged listening to what Iver talked about. We’re building cadres. We’re building out this approach and I have seen this. I’m not using that word, but now I’m stealing that word. It’s how people build effective teams, which is not to take a couple of specialists and put them in an ivory tower, but it’s to provide the method and the discipline of how we converse about it, so that we can have a consistent conversation.

When I tie it with some of the tools from the Architecture Forum and the ArchiMate Forum, I’m able to consistently describe it, so that I now have an asset I can identify, consume and produce value from.

Business context

Forde: And this is very different from data modeling. We're not talking about entity-relationship diagrams, technical detail, third normal form, and that kind of stuff. We're talking about a conversation occurring around the business context of what needs to go on, supported by the right level of technical detail when you need to go there in order to clarify.


Open Group Panel Explores Changing Field of Risk Management and Analysis in the Era of Big Data

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Panel Explores Changing Field of Risk Management and Analysis in Era of Big Data

This is a transcript of a sponsored podcast discussion on the threats from, and promise of, Big Data in securing enterprise information assets, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference itself is focusing on Big Data, the transformation we need to embrace today.

We’re here now with a panel of experts to explore new trends and solutions in the area of risk management and analysis. We’ll learn how large enterprises are delivering risk assessments and risk analysis, and we’ll see how Big Data can be both a source of risk to protect against and a tool for better understanding and mitigating risk.

With that, please join me in welcoming our panel. We’re here with Jack Freund, PhD, the Information Security Risk Assessment Manager at TIAA-CREF. Welcome, Jack.

Jack Freund: Hello Dana, how are you?

Gardner: I’m great. Glad you could join us.

We are also here with Jack Jones, Principal of CXOWARE. He has more than nine years of experience as a Chief Information Security Officer and is the inventor of the Factor Analysis of Information Risk (FAIR) framework. Welcome, Jack.

Jack Jones: Thank you. And we’re also here with Jim Hietala, Vice President, Security for The Open Group. Welcome, Jim.

Jim Hietala: Thanks, Dana.

Gardner: All right, let’s start out with looking at this from a position of trends. Why is the issue of risk analysis so prominent now? What’s different from, say, five years ago? And we’ll start with you, Jack Jones.

Jones: The information security industry has struggled with getting the attention of and support from management and businesses for a long time, and it has finally come around to the fact that the executives care about loss exposure — the likelihood of bad things happening and how bad those things are likely to be.

It’s only when we speak of those terms or those issues in terms of risk, that we make sense to those executives. And once we do that, we begin to gain some credibility and traction in terms of getting things done.

Gardner: So we really need to talk about this in the terms that a business executive would appreciate, not necessarily an IT executive.

Effects on business

Jones: Absolutely. They’re tired of hearing about vulnerabilities, hackers, and that sort of thing. It’s only when we can talk in terms of the effect on the business that it makes sense to them.

Gardner: Jack Freund, I should also point out that you have more than 14 years in enterprise IT experience. You’re a visiting professor at DeVry University and you chair a risk-management subcommittee for ISACA? Is that correct?

Freund: ISACA, yes.

Gardner: And do you agree?

Freund: The problem that we have as a profession, and I think it’s a big problem, is that we have allowed ourselves to escape the natural trend that the other IT professionals have already taken.

There was a time, years ago, when you could code in the basement, and nobody cared much about what you were doing. But now, largely speaking, developers and systems administrators are very focused on meeting the goals of the organization.

Security has been allowed to miss that boat a little. We have been allowed to hide behind this aura of a protector and of an alerter of terrible things that could happen, without really tying ourselves to the problem that the organizations are facing and how can we help them succeed in what they’re doing.

Gardner: Jim Hietala, how do you see things that are different now than a few years ago when it comes to risk assessment?

Hietala: There are certainly changes on the threat side of the landscape. Five years ago, you didn’t really have hacktivism or this notion of an advanced persistent threat (APT).

That highly skilled attacker taking aim at governments and large organizations didn’t really exist, or didn’t exist to the degree it does today. So that has changed.

You also have big changes to the IT platform landscape, all of which bring new risks that organizations need to really think about. The mobility trend, the Cloud trend, the big-data trend that we are talking about today, all of those things bring new risk to the organization.

As Jack Jones mentioned, business executives don’t want to hear about, “I’ve got 15 vulnerabilities in the mobility part of my organization.” They want to understand what’s the risk of bad things happening because of mobility, what we’re doing about it, and what’s happening to risk over time?

So it’s a combination of changes in the threats and attackers, as well as changes to the IT landscape, that requires us to take a different look at how we measure and present risk to the business.

Gardner: Because we’re at a big-data conference, do you share my perception, Jack Jones, that Big Data can be a source of risk and vulnerability, but that the analytics and business intelligence (BI) tools we’re employing with Big Data can also be used to alert you to risks, providing a strong tool for better understanding your true risk setting or environment?

Crown jewels

Jones: You are absolutely right. Think of Big Data: by definition, it’s where your crown jewels, and everything that leads to the crown jewels from an information perspective, are going to be found. It’s one-stop shopping for the bad guy, if you want to look at it in that context. It definitely needs to be protected, and the architecture surrounding it, with its integration across a lot of different platforms, probably results in a complex landscape to try to secure.

There are a lot of ways into that data, but you can at least leverage that same Big Data architecture as an approach to information security. With log data and other threat and vulnerability data, you should be able to make some significant gains in terms of how well-informed your analyses and your decisions are.

Gardner: Jack Freund, do you share that? How does Big Data fit into your understanding of the evolving arena of risk assessment and analysis?

Freund: If we fast-forward it five years, and this is even true today, a lot of people on the cutting edge of Big Data will tell you the problem isn’t so much building everything together and figuring out what it can do. They are going to tell you that the problem is what we do once we figure out everything that we have. This is the problem that we have traditionally had on a much smaller scale in information security. When everything is important, nothing is important.

Gardner: To follow up on that, where do you see the gaps in risk analysis in large organizations? In other words, what parts of organizations aren’t being assessed for risk and should be?

Freund: The big problem that exists today in the way risk assessments are done is the focus on labels. We want to quickly address the low, medium, and high things and know where they are. But there are inherent problems in the way that we think about those labels without doing any of the analysis legwork.

What’s really missing is that true analysis. If the system goes offline, do we lose money? If the system becomes compromised, what are the cost-accounting consequences that allow us to figure out how much money we’re going to lose?
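The analysis Freund is asking for can start very simply: put numbers on an outage scenario. A minimal sketch, where the function name and all figures are hypothetical, not from the discussion:

```python
# Hypothetical outage-loss estimate; the figures are illustrative.
def outage_loss(downtime_hours, revenue_per_hour, recovery_cost):
    """Direct loss from a system outage: lost revenue plus recovery effort."""
    return downtime_hours * revenue_per_hour + recovery_cost

# A 6-hour outage on a system carrying $20,000/hour of revenue,
# with $15,000 of incident-response and recovery cost:
loss = outage_loss(6, 20_000, 15_000)
print(f"Estimated single-event loss: ${loss:,.0f}")  # $135,000
```

Even a rough figure like this moves the conversation from "a bad thing could happen" to "this event would cost roughly $135,000," which is the business framing the panel is calling for.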

That analysis work is largely missing. That’s the gap. Today the reasoning is simply: if the control is not in place, then there’s a risk that must be addressed in some fashion. So we end up with these very long lists of horrible, terrible things that can be done to us in all sorts of different ways, without any relevance to the overall business of the organization.

Every day, our organizations are out there selling products and offering services, which is, in and of itself, a risky venture. So tying what we do from an information security perspective to that is critical, not just for the success of the organization, but for the success of our profession.

Gardner: So we can safely say that large companies are probably pretty good at cost-benefit analysis, or they wouldn’t be successful. Now, I guess we need to ask them to take that a step further and do a cost-risk analysis, in business terms, being mindful that their IT systems might be a much larger part of that than they had once considered. Is that fair, Jack?

Risk implications

Jones: Businesses have been making these decisions, chasing the opportunity, but generally, without any clear understanding of the risk implications, at least from the information security perspective. They will have us in the corner screaming and throwing red flags in there, and talking about vulnerabilities and threats from one thing or another.

But, we come to the table with red, yellow, and green indicators, and on the other side of the table, they’ve got numbers. Well, here is what we expect to earn in revenue from this initiative, and the information security people are saying it’s crazy. How do you normalize the quantitative revenue gain versus red, yellow, and green?

Gardner: Jim Hietala, do you see it in the same red, yellow, green or are there some other frameworks or standard methodologies that The Open Group is looking at to make this a bit more of a science?

Hietala: Probably four years ago, we published what we call the Risk Taxonomy Standard, which is based upon FAIR, the risk analysis framework that Jack Jones invented. So we’re big believers in bringing that level of precision to risk analysis. Having just gone through FAIR training myself, as part of the standards effort we’re doing around certification, I can say that it brings a level of precision and a depth of analysis that has frequently been lacking in IT security and risk management.

Gardner: We’ve talked about how organizations need to be mindful that their risks are higher and different than in the past and we’ve talked about how standardization and methodologies are important, helping them better understand this from a business perspective, instead of just a technology perspective.

But, I’m curious about a cultural and organizational perspective. Whose job should this fall under? Who is wearing the white hat in the company and can rally the forces of good and make all the bad things managed? Is this a single person, a cultural, an organizational mission? How do you make this work in the enterprise in a real-world way? Let’s go to you, Jack Freund.

Freund: The profession of IT risk management is changing. That profession will have to sit between the business and information security inclusive of all the other IT functions that make that happen.

In order to be successful sitting between these two groups, you have to be able to speak the language of both of those groups. You have to be able to understand profit and loss and capital expenditure on the business side. On the IT risk side, you have to be technical enough to do all those sorts of things.

But I think the sum total of those two things is probably only about 50 percent of the job of IT risk management today. The other 50 percent is communication. Finding ways to translate that language and to understand the needs and concerns of each side of that relationship is really the job of IT risk management.

To answer your question, I think it’s absolutely the job of IT risk management to do that. From my own experiences with the FAIR framework, I can say that using FAIR is the Rosetta Stone for speaking between those two groups.

Necessary tools

It gives you the tools necessary to speak in the insurance and risk terms that the business appreciates. And it gives you the ability to be as technical and, if you will, as nerdy as you need to be when talking to IT security and the other IT functions, to make sure everybody is on the same page and everyone feels their concerns are represented in the risk-assessment functions that are happening.

Gardner: Jack Jones, can you add to that?

Jones: I agree with what Jack said wholeheartedly. I would add, though, that integration or adoption of something like this is a lot easier the higher up in the organization you go.

For CFOs traditionally, their neck is most clearly on the line for risk-related issues within most organizations. At least in my experience, if you get their ear on this and present the information security data analyses to them, they jump on board, they drive it through the organization, and it’s just brain-dead easy.

If you try to drive it up through the ranks, maybe you get an enthusiastic supporter in the information security organization, especially below the CISO level, who tries a grassroots effort to bring it in. That’s a tougher thing. It can still work, and I’ve seen it work very well, but it’s a longer row to hoe.

Gardner: There has been a lot of research, and many studies and surveys, on data breaches. What are some of the best sources, or maybe not-so-good sources, for actually measuring this? How do you know if you’re doing it right? How do you know if you’re moving from yellow to green, instead of to red? To you, Jack Freund.

Freund: There are a couple of things in that question. The first is there’s this inherent assumption in a lot of organizations that we need to move from yellow to green, and that may not be the case. So, becoming very knowledgeable about the risk posture and the risk tolerance of the organization is a key.

That’s part of the official mindset of IT security. When an information security person graduates today, they are minted knowing that there are a lot of bad things out there, and that their goal in life is to reduce them. But that may not be the case. The case may very well be that things are okay now, but we have bigger fish to fry over here that we’re going to focus on. So, that’s one thing.

The second thing, and it’s a very good question, is how we know that we’re getting better. How do we trend that over time? Overall, measuring that value for the organization has to show a reduction of risk, or at least a reduction of risk to within the risk-tolerance levels of the organization.

Calculating and understanding that requires something that I always phrase as we have to become comfortable with uncertainty. When you are talking about risk in general, you’re talking about forward-looking statements about things that may or may not happen. So, becoming comfortable with the fact that they may or may not happen means that when you measure them today, you have to be willing to be a little bit squishy in how you’re representing that.

In FAIR and in other academic works, they talk about using ranges to do that. So, things like high, medium, and low, could be represented in terms of a minimum, maximum, and most likely. And that tends to be very, very effective. People can respond to that fairly well.
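The range idea Freund describes can be expressed directly as a minimum, most likely, and maximum estimate and then sampled, rather than being collapsed into a single label. A sketch using only Python's standard library; the triangular distribution is a simple stand-in for the PERT distribution FAIR tools typically use, and the dollar figures are hypothetical:

```python
import random
import statistics

random.seed(42)  # reproducible sampling

# One risk scenario's annual loss, expressed as a range
# rather than a "high/medium/low" label (hypothetical figures).
low, most_likely, high = 50_000, 120_000, 400_000

# random.triangular(low, high, mode) samples the range.
samples = [random.triangular(low, high, most_likely) for _ in range(10_000)]

print(f"mean loss estimate: ${statistics.mean(samples):,.0f}")
print(f"90th percentile:    ${statistics.quantiles(samples, n=10)[-1]:,.0f}")
```

Reporting a mean and a high percentile, instead of "medium," is one concrete way of becoming comfortable with uncertainty while still giving decision makers a number they can act on.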

Gathering data

Jones: With regard to the data sources, there are a lot of people out there doing these sorts of studies, gathering data. The problem that’s hamstringing that effort is the lack of a common set of definitions, nomenclature, and even taxonomy around the problem itself.

You will have one study that will have defined threat, vulnerability, or whatever differently from some other study, and so the data can’t be normalized. It really harms the utility of it. I see data out there and I think, “That looks like that can be really useful.” But, I hesitate to use it because I don’t understand. They don’t publish their definitions, approach, and how they went after it.

There’s just so much superficial thinking in the profession on this, once you dig under the covers. Too often, I run into stuff that just can’t be defended. It doesn’t make sense, and therefore the data can’t be used. It’s an unfortunate situation.

I do think we’re heading in a positive direction. FAIR can provide a normalizing structure for that sort of thing. The VERIS framework, which, by the way, is also derived in part from FAIR, has gained real traction in terms of the quality of the research and the data it’s generating. We’re headed in the right direction, but we’ve got a long way to go.

Gardner: Jim Hietala, we’re seemingly looking at this on a company-by-company basis. But, is there a vertical industry slice or industry-wide slice where we could look at what’s happening to everyone and put some standard understanding, or measurement around what’s going on in the overall market, maybe by region, maybe by country?

Hietala: There are some industry-specific initiatives and what’s really needed, as Jack Jones mentioned, are common definitions for things like breach, exposure, loss, all those, so that the data sources from one organization can be used in another, and so forth. I think about the financial services industry. I know that there is some information sharing through an organization called the FS-ISAC about what’s happening to financial services organizations in terms of attacks, loss, and those sorts of things.

There’s an opportunity for that on a vertical-by-vertical basis, but, like Jack said, there is a long way to go. Some industries, healthcare for instance, are so far from that, it’s ridiculous. Here in the US, the HIPAA security rule says you must do a risk assessment. So hospitals do an annual risk assessment, stick the binder on the shelf, and don’t think much about information security in between those annual assessments. That’s a generalization, but various industries are at different places on a continuum of maturity in their risk management approaches.

Gardner: As we get better with having a common understanding of the terms and the measurements and we share more data, let’s go back to this notion of how to communicate this effectively to those people that can use it and exercise change management as a result. That could be the CFO, the CEO, what have you, depending on the organization.

Do you have any examples? Can we look to an organization that’s done this right, and examine their practices, the way they’ve communicated it, some of the tools they’ve used and say, “Aha, they’re headed in the right direction maybe we could follow a little bit.” Let’s start with you, Jack Freund.

Freund: I have worked and consulted for various organizations that have done risk management at different levels. The ones that have embraced FAIR tend to be the ones that overall feel that risk is an integral part of their business strategy. And I can give a couple of examples of scenarios that have played out that I think have been successful in the way they have been communicated.

Coming to terms

The key thing to keep in mind is that, as a security professional, you’re trained to feel like you need results. But the results for the IT risk management professional are different. The result is “I’ve communicated this effectively, so I am done.” Whatever the outcomes are, they are the outcomes that needed to be. And that’s a really hard thing to come to terms with.

I’ve been involved in large-scale efforts to assess risk for a Cloud venture. We needed to move virtually every confidential record that we have to the Cloud in order to be competitive with the rest of our industry. If our competitors are finding ways to utilize the Cloud before us, we can lose out. So, we need to find a way to do that, and to be secure and compliant with all the laws and regulations and such.

Through that scenario, one of the things that came out was that key ownership became really, really important. We had the opportunity to look at the various control structures, and we analyzed them using FAIR. What we ended up with was a sort of long-tail risk. Most people will probably do their job right over a long enough period of time. But over that same period, the odds of somebody making a mistake not in your favor are fairly high, though not so high that you can’t make the move.

But, the problem became that the loss side, the side that typically gets ignored with traditional risk-assessment methodologies, was so significant that the organization needed to make some judgment around that, and they needed to have a sense of what we needed to do in order to minimize that.

That became a big point of discussion for us, and it drove the conversation away from “bad things could happen.” We didn’t bury the lead. The lead was that this was the most important thing to the organization in this particular scenario.

So, let’s talk about things we can do. Are we comfortable with it? Do we need to make any sort of changes? What are some control opportunities? How much do they cost? This is a significantly more productive conversation than just, “Here is a bunch of bad things that happen. I’m going to cross my arms and say no.”

Gardner: Jack Jones, examples at work?

Jones: In an organization that I’ve been working with recently, their board of directors said they wanted a quantitative view of information security risk. They just weren’t happy with the red, yellow, green. So, they came to us, and there were really two things that drove them there. One was that they were looking at cyber insurance. They wanted to know how much cyber insurance they should take out, and how do you figure that out when you’ve got a red, yellow, green scale?

They were able to do a series of analyses on a population of the scenarios that they thought were relevant in their world, get an aggregate view of their annualized loss exposure, and make a better informed decision about that particular problem.
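The aggregate view Jones describes can be sketched as a Monte Carlo simulation over a population of scenarios, each with an annual loss-event frequency and a loss-magnitude range. Everything below, scenario names and figures included, is hypothetical, and a real FAIR analysis decomposes frequency and magnitude much further:

```python
import math
import random

random.seed(7)

# Hypothetical risk scenarios: annual loss-event frequency plus
# a (minimum, most likely, maximum) loss magnitude in dollars.
scenarios = [
    {"name": "data breach",   "freq": 0.3, "loss": (200_000, 800_000, 5_000_000)},
    {"name": "major outage",  "freq": 1.5, "loss": (20_000, 90_000, 400_000)},
    {"name": "insider fraud", "freq": 0.1, "loss": (50_000, 300_000, 2_000_000)},
]

def poisson(lam):
    """Knuth's method: sample an event count with mean `lam`."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def simulate_year():
    """One simulated year of aggregate loss across all scenarios."""
    total = 0.0
    for s in scenarios:
        lo, ml, hi = s["loss"]
        for _ in range(poisson(s["freq"])):
            total += random.triangular(lo, hi, ml)
    return total

years = sorted(simulate_year() for _ in range(10_000))
print(f"mean annualized loss exposure: ${sum(years) / len(years):,.0f}")
print(f"95th-percentile year:          ${years[9500]:,.0f}")
```

A board could read the high-percentile figure as a rough ceiling when sizing cyber-insurance coverage, which is exactly the question that drove the engagement Jones describes.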

Gardner: I’m curious how prevalent cyber insurance is. Is it going to be a leveling effect in the industry, where people speak a common language, the equivalent of actuarial tables, but for enterprise and cyber security?

Jones: One would dream and hope, but at this point, what I’ve seen out there in terms of the basis on which insurance companies are setting their premiums and such is essentially the same old “risk assessment” stuff that the industry has been doing poorly for years. It’s not based on data or any real analysis per se, at least what I’ve run into. What they do is set their premiums high to buffer themselves and typically cover as few things as possible. The question of how much value it’s providing the customers becomes a problem.

Looking to the future

Gardner: We’re coming up on our time limit, so let’s quickly look to the future. Is there such a thing as risk management as a service? Can we outsource this? Is there a way in which moving more of IT into Cloud or hybrid models would mitigate risk, because the Cloud provider would standardize? Then many players in that environment, those buying those services, would be under that same umbrella. Let’s start with you, Jim Hietala. What’s the future of this, and what do the Cloud trends bring to the table?

Hietala: I’d start with a maxim from the financial services industry: you can outsource the function, but you still own the risk. That’s an unfortunate reality. You can throw things out into the Cloud, but it doesn’t absolve you from understanding your risk and then doing things to manage it, or to transfer it through insurance, whatever the case may be.

That’s just a reality. Organizations in the risky world we live in are going to have to get more serious about doing effective risk analysis. From The Open Group standpoint, we see this as an opportunity area.

As I mentioned, we’ve standardized the taxonomy piece of FAIR. And we really see an opportunity around the profession going forward to help the risk-analysis community by further standardizing FAIR and launching a certification program for a FAIR-certified risk analyst. That’s in demand from large organizations that are looking for evidence that people understand how to apply FAIR and use it in doing risk analyses.

Gardner: Jack Freund, looking into your crystal ball, how do you see this discipline evolving?

Freund: I always try to consider things as they exist within other systems. Risk is a system of systems. There are a series of pressures that are applied, and a series of levers that are thrown in order to release that sort of pressure.

Risk will always be owned by the organization that is offering the service. If we decide at some point that we can move to the Cloud and all these other things, we need to look to the legal system: the series of pressures it is going to apply, who is going to own that risk, and how that plays itself out.

If we look to the Europeans and the way they’re managing risk and compliance, they’re still as strict as we in the United States think they may be, but there’s still a lot of leeway in the way many laws are written. You’re still being asked to do things that are reasonable. You’re still being asked to do things that are standard for your industry. But we’d still like the ability to know what that is, and I don’t think that’s going to go away anytime soon.

Judgment calls

We’re still going to have to make judgment calls. We’re still going to have to do 100 things with a budget for 10. Whenever that happens, you have to make a judgment call: what’s the most important thing that I care about? That’s why risk management exists: there is a series of things we have to deal with, and we don’t have the resources to do them all. I don’t think that’s going to change over time. Regardless of whether the landscape changes, that’s the one thing that remains true.

Gardner: The last word to you, Jack Jones. It sounds as if we’re continuing down the path of being mostly reactive. Is there anything you can see on the horizon that would perhaps tip the scales, so that the risk management and analysis practitioners can really become proactive and head things off before they become a big problem?

Jones: If we were to take a snapshot at any given point in time of an organization’s loss exposure, how much risk they have right then, that’s a lagging indicator of the decisions they’ve made in the past, and their ability to execute against those decisions.

We can do some great root-cause analysis around that and ask how we got there. But we can also turn that coin around and ask how good we are at making well-informed decisions and executing against them, then ask what that implies from a risk perspective downstream.

If we understand the relationship between our current state and past and future states, and we have those linkages defined (especially if we have an analytic framework underneath them), we can do some marvelous what-if analysis.

What if this variable changed in our landscape? Let’s run a few thousand Monte Carlo simulations against that and see what comes up. What does that look like? Well, then let’s change this other variable and see which combination of dials, when we turn them, makes us most robust to change in our landscape.
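The dial-turning exercise Jones describes can be sketched as a toy Monte Carlo simulation. Everything below is illustrative: the event frequencies and loss ranges are invented inputs, not calibrated FAIR estimates, and a real analysis would use distributions fitted with a framework like FAIR rather than these placeholders.

```python
import random

def simulate_annual_loss(event_rate, loss_low, loss_high, years=5000):
    """Monte Carlo estimate of annualized loss exposure.

    event_rate: expected loss events per year (approximated here as a
        binomial draw over 100 sub-periods; an invented input)
    loss_low, loss_high: per-event loss range, sampled uniformly (illustrative)
    """
    totals = []
    for _ in range(years):
        # Number of loss events this simulated year
        events = sum(1 for _ in range(100) if random.random() < event_rate / 100)
        # Total loss for the year: one uniform draw per event
        totals.append(sum(random.uniform(loss_low, loss_high) for _ in range(events)))
    totals.sort()
    return {"mean": sum(totals) / years, "p90": totals[int(0.9 * years)]}

# Baseline landscape vs. one changed variable: stronger controls that
# halve the event frequency (a made-up what-if scenario)
baseline = simulate_annual_loss(event_rate=4, loss_low=10_000, loss_high=250_000)
improved = simulate_annual_loss(event_rate=2, loss_low=10_000, loss_high=250_000)
print(baseline, improved)
```

Re-running the simulation after turning one dial, as above, shows how the loss-exposure distribution shifts, which is the kind of what-if comparison described in the passage.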

But again, we can’t begin to get there, until we have this foundational set of definitions, frameworks, and such to do that sort of analysis. That’s what we’re doing with FAIR, but without some sort of framework like that, there’s no way you can get there.

Gardner: I am afraid we’ll have to leave it there. We’ve been talking with a panel of experts on how new trends and solutions are emerging in the area of risk management and analysis. And we’ve seen how new tools for communication and using Big Data to understand risks are also being brought to the table.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference in Newport Beach, California. I’d like to thank our panel: Jack Freund, PhD, Information Security Risk Assessment Manager at TIAA-CREF. Thanks so much Jack.

Freund: Thank you, Dana.

Gardner: We’ve also been speaking with Jack Jones, Principal at CXOWARE.

Jones: Thank you. Thank you, pleasure to be here.

Gardner: And last, Jim Hietala, the Vice President for Security at The Open Group. Thanks.

Hietala: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions; your host and moderator through these thought leadership interviews. Thanks again for listening and come back next time.


Filed under Security Architecture

On Demand Broadcasts from Day One at The Open Group Conference in Newport Beach

By The Open Group Conference Team

Since not everyone could make the trip to The Open Group Conference in Newport Beach, we’ve put together a recap of day one’s plenary speakers. Stay tuned for more recaps coming soon!

Big Data at NASA

In his talk titled, “Big Data at NASA,” Chris Gerty, deputy program manager, Open Innovation Program, National Aeronautics and Space Administration (NASA), discussed how Big Data is being interpreted by the next generation of rocket scientists. Chris presented a few lessons learned from his experiences at NASA:

  1. A traditional approach is not always the best approach. A tried and proven method may not translate. Creating more programs for more data to store on bigger hard drives is not always effective. We need to address the never-ending challenges that lie ahead in the shift of society to the information age.
  2. A plan for openness. Based on a government directive, Chris’ team looked to answer questions by asking the right people. For example, NASA asked the people gathering data on a satellite to determine what data was the most important, which enabled NASA to narrow focus and solve problems. Furthermore, by realizing what can also be useful to the public and what tools have already been developed by the public, open source development can benefit the masses. Through collaboration, governments and citizens can work together to solve some of humanity’s biggest problems.
  3. Embrace the enormity of the universe. Look for Big Data where no one else is looking by deploying sensors and information-gathering tools. If people continue to be scared of Big Data, they will be resistant to gathering more of it. By finding Big Data where it has yet to be discovered, we can solve problems and innovate.

To view Chris’s presentation, please watch the broadcast session here: http://new.livestream.com/opengroup/Gerty-NPB13

Bringing Order to the Chaos

David Potter, chief technical officer at Promise Innovation, and Ron Schuldt, senior partner at UDEF-IT, LLC, discussed how The Open Group’s evolving Quantum Lifecycle Management (QLM) standard, coupled with its complementary Universal Data Element Framework (UDEF) standard, helps bring order to the terminology chaos that faces Big Data implementations.

The QLM standard provides a framework for the aggregation of lifecycle data from a multiplicity of sources to add value to the decision making process. Gathering mass amounts of data is useless if it cannot be analyzed. The QLM framework provides a means to interpret the information gathered for business intelligence. The UDEF allows each piece of data to be paired with an unambiguous key to provide clarity. By partnering with the UDEF, the QLM framework is able to separate itself from domain-specific semantic models. The UDEF also provides a ready-made key for international language support. As an open standard, the UDEF is data model independent and as such supports normalization across data models.
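The idea of pairing each piece of data with an unambiguous key can be illustrated with a minimal sketch. The key strings below are invented placeholders, not real UDEF identifiers (real ones come from UDEF’s structured registry of object and property trees), and the field names are hypothetical.

```python
# Illustrative only: these keys are invented placeholders, not real UDEF IDs.
# The point is that two systems with different local labels map to the same
# unambiguous semantic key, so their data can be reconciled on meaning.
UDEF_KEYS = {
    "cust_nm": "a.5_9.35",       # hypothetical key for a "Person Name" concept
    "sponsor_name": "a.5_9.35",  # different local label, same concept
    "pay_amt": "b.2_4.11",       # hypothetical key for a "Payment Amount" concept
}

def harmonize(records, field_keys):
    """Re-key records from local field names to shared semantic keys,
    so datasets from different systems can be joined on meaning."""
    return [{field_keys[k]: v for k, v in rec.items() if k in field_keys}
            for rec in records]

system_a = [{"cust_nm": "Ada"}]
system_b = [{"sponsor_name": "Ada"}]
# Two differently labelled records become identical once semantically keyed
assert harmonize(system_a, UDEF_KEYS) == harmonize(system_b, UDEF_KEYS)
```

This is the same harmonization pattern, in miniature, that the Compassion International example below applies between funding partners’ IT systems.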

One example of a successful implementation comes from Compassion International. The organization needed to find a balance between information that should be kept internal (e.g., payment information) and information that should be shared with its international sponsors. In this instance, the UDEF was used as a structured process for harmonizing the terms used in the IT systems of funding partners.

The beauty of the QLM framework and UDEF integration is that they are flexible and can be applied to any product, domain and industry.

To view David and Ron’s presentation, please watch the broadcast session here: http://new.livestream.com/opengroup/potter-NPB13

Big Data – Panel Discussion

In a panel moderated by Dana Gardner of Interarbor Solutions, panelists Robert Weisman (Build The Vision), Andras Szakal (IBM), Jim Hietala (The Open Group) and Chris Gerty (NASA) discussed the implications of Big Data and what it means for business architects and enterprise architects.

Big Data is not about the size of the data but about analyzing it. Robert mentioned that most organizations store more data than they need or use; from an enterprise architect’s perspective, it’s important to focus on the analysis of the data and to provide information that will ultimately aid the business. When it comes to security, Jim explained that newer Big Data platforms are not built with security in mind. While data is data, many security controls don’t translate to the new platforms or scale with the influx of data.

Cloud Computing is Big Data-ready, and the price can be compelling, but there are significant security and privacy risks. Robert brought up the argument over public versus private Cloud adoption and said, “It’s not one size fits all.” But can Cloud and Big Data come together? Andras explained that Cloud is not the almighty answer to Big Data: every organization needs to find the Enterprise Architecture that fits its needs.

The fruits of Big Data can be useful to more than just business intelligence professionals. With the trends in mobility and application development in mind, Chris suggested that developers keep users front and center. Big Data can tell us many different things, but it’s about finding out what is most important and relevant to users in a way that is digestible.

Finally, the panel discussed how Big Data is bringing about big changes in almost every aspect of an organization. It is important not to generalize but to customize: every enterprise needs an architecture that fits its needs. Each organization finds importance in different facets of the data gathered, and security differs at every organization. With all that in mind, the panel agreed that focusing on the analytics is the key.

To view the panel discussion, please watch the broadcast session here: http://new.livestream.com/opengroup/events/1838807


Filed under Conference

Capturing The Open Group Conference in Newport Beach

By The Open Group Conference Team

It is time to announce the winners of the Newport Beach Photo Contest! For those of you who were unable to attend, conference attendees submitted some of their best photos for a chance to win one free pass to one of The Open Group’s global conferences over the next year, a prize valued at more than $1,000/€900.

Southern California is known for its palm trees and warm sandy beaches. While Newport Beach is most recognized for its high-end real estate and its association with the popular television show “The OC,” enterprise architects invaded the beach and boating town for The Open Group Conference.

The contest ended Friday at noon PDT, and it is time to announce the winners…

Best of The Open Group Conference in Newport Beach – For any photo taken during conference activities

The winner is Henry Franken, BiZZdesign!


A busy BiZZdesign exhibitor booth

The Real OC Award – For best photo taken in or around Newport Beach

The winner is Andrew Josey, The Open Group!


A local harbor in Newport Beach, Calif.

Thank you to all those who participated in this contest – whether it was submitting one of your own photos or voting for your favorites. Please visit The Open Group’s Facebook page to view all of the submissions and conference photos.

We’re always trying to improve our programs, so if you have any feedback regarding the photo contest, please email photo@opengroup.org or leave a comment below. We’ll see you in Sydney!


Filed under Conference

The Open Group Conference Plenary Speaker Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Keynoter Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

This is a transcript of a sponsored podcast discussion on Big Data analytics and its role in business processes, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group® Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference will focus on big data and the transformation we need to embrace today.

We are here now with one of the main speakers at the conference, Michael Cavaretta, PhD, Technical Leader of Predictive Analytics for Ford Research and Advanced Engineering in Dearborn, Michigan.

We’ll see how Ford has exploited the strengths of big data analytics by directing them internally to improve business results. In doing so, they scour the metrics from the company’s best processes across myriad manufacturing efforts and through detailed outputs from in-use automobiles, all to improve and help transform their business.

Cavaretta has led multiple data-analytics projects at Ford to break down silos inside the company and to identify Ford’s most fruitful datasets. Ford has successfully aggregated customer feedback and internal data to predict how new features and technologies will improve its cars.

As a lead-in to his Open Group presentation, Michael and I will now explore how big data is fostering business transformation by allowing deeper insights into more types of data efficiently, and thereby improving processes, quality control, and customer satisfaction.

With that, please join me in welcoming Michael Cavaretta. Welcome to BriefingsDirect, Michael.

Michael Cavaretta: Thank you very much.

Gardner: Your upcoming presentation for The Open Group Conference is going to describe some of these new approaches to big data and how they offer valuable insights into internal operations, and therefore help make a better product. To start, what’s different now in being able to get at this data and do this type of analysis, compared with, say, five years ago?

Cavaretta: The biggest difference has to do with the cheap availability of storage and processing power, where a few years ago people were very much concentrated on filtering down the datasets that were being stored for long-term analysis. There has been a big sea change with the idea that we should just store as much as we can and take advantage of that storage to improve business processes.

Gardner: That sounds right on the money, but how did we get here? How did we get to the point where we could start using these benefits from a technology perspective, as you say, better storage, better networks, being able to move big datasets, that sort of thing, to wringing out benefits? What’s the process behind the benefit?

Cavaretta: The process behind the benefits has to do with a sea change in the attitude of organizations, particularly IT within large enterprises. There’s this idea that you don’t need to spend so much time figuring out what data you want to store and worrying about the cost associated with it, and can think more about data as an asset. There is value in being able to store it, and in being able to go back and extract different insights from it. This comes from really cheap storage, access to parallel-processing machines, and great software.

Gardner: It seems to me that for a long time, the mindset was that data is simply the output from applications, with applications being primary and the data being almost an afterthought. It seems like we’ve sort of flipped that. The data now is perhaps as important, even more important, than the applications. Does that seem to hold true?

Cavaretta: Most definitely, and we’ve had a number of interesting engagements where people have thought about the data that’s being collected. When we talk to them about big data, storing everything at the lowest level of transactions, and what could be done with that, their eyes light up and they really begin to get it.

Gardner: I suppose earlier, when cost considerations and technical limitations were at work, we would just go for a tip-of-the-iceberg level. Now, as you say, we can get almost all the data. So, is this a matter of getting at more data, different types of data, bringing in unstructured data, or all of the above? How much are you really going after here?

Cavaretta: I like to talk to people about the possibility that big data provides and I always tell them that I have yet to have a circumstance where somebody is giving me too much data. You can pull in all this information and then answer a variety of questions, because you don’t have to worry that something has been thrown out. You have everything.

You may have 100 questions, and each one of the questions uses a very small portion of the data. Those questions may use different portions of the data, a very small piece, but they’re all different. If you go in thinking, “We’re going to answer the top 20 questions and we’re just going to hold data for that,” that leaves so much on the table, and you don’t get any value out of it.

Gardner: I suppose too that we can think about small samples or small datasets and aggregate them or join them. We have new software capabilities to do that efficiently, so that we’re able to not just look for big honking, original datasets, but to aggregate, correlate, and look for a lifecycle level of data. Is that fair as well?

Cavaretta: Definitely. We’re a big believer in mash-ups and we really believe that there is a lot of value in being able to take even datasets that are not specifically big-data sizes yet, and then not go deep, not get more detailed information, but expand the breadth. So it’s being able to augment it with other internal datasets, bridging across different business areas as well as augmenting it with external datasets.

A lot of times you can take something that is maybe a few hundred thousand records or a few million records, and then by the time you’re joining it, and appending different pieces of information onto it, you can get the big dataset sizes.
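The breadth-over-depth mash-up Cavaretta describes can be sketched in a few lines. The table names, fields, and values below are all hypothetical, invented only to show how appending internal and external datasets onto a modest record set broadens it.

```python
# Hypothetical datasets: a small internal warranty table, an internal
# build table, and an external weather table (all invented for illustration).
warranty = [
    {"vin": "V1", "repair": "alternator"},
    {"vin": "V2", "repair": "brakes"},
]
build = {"V1": {"plant": "Dearborn"}, "V2": {"plant": "Louisville"}}
weather = {"Dearborn": {"avg_temp_f": 48}, "Louisville": {"avg_temp_f": 56}}

# Expand the breadth: append build data, then external weather data,
# onto each warranty record rather than drilling deeper into any one table.
joined = []
for rec in warranty:
    rec = {**rec, **build[rec["vin"]]}       # bridge to another business area
    rec = {**rec, **weather[rec["plant"]]}   # augment with an external dataset
    joined.append(rec)

print(joined[0])  # {'vin': 'V1', 'repair': 'alternator', 'plant': 'Dearborn', 'avg_temp_f': 48}
```

At production scale the same joins run over millions of records, which is how a few-hundred-thousand-row dataset grows to big-data sizes as described above.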

Gardner: Just to be clear, you’re unique. The conventional wisdom for big data is to look at what your customers are doing, or just the external data. You’re really looking primarily at internal data, while also availing yourself of what external data might be appropriate. Maybe you could describe a little bit about your organization, what you do, and why this internal focus is so important for you.

Cavaretta: I’m part of a larger department that is housed over in the research and advanced-engineering area at Ford Motor Company, and we’re about 30 people. We work as internal consultants, kind of like Capgemini or Ernst & Young, but only within Ford Motor Company. We’re responsible for going out and looking for different opportunities from the business perspective to bring advanced technologies. So, we’ve been focused on the area of statistical modeling and machine learning for I’d say about 15 years or so.

And in this time, we’ve had a number of engagements where we’ve talked with different business customers, and people have said, “We’d really like to do this.” Then, we’d look at the datasets that they have and say, “Wouldn’t it be great if we had this? Now we have to wait six months or a year.”

These new technologies are really changing the game from that perspective. We can turn on the complete fire-hose, and then say that we don’t have to worry about that anymore. Everything is coming in. We can record it all. We don’t have to worry about if the data doesn’t support this analysis, because it’s all there. That’s really a big benefit of big-data technologies.

Gardner: If you’ve been doing this for 15 years, you must be demonstrating a return on investment (ROI) or a value proposition back to Ford. Has that value proposition been changing? Do you expect it to change? What might be your real value proposition two or three years from now?

Cavaretta: The real value proposition definitely is changing as things are being pushed down in the company to lower-level analysts who are really interested in looking at things from a data-driven perspective. From when I first came in to now, the biggest change has been when Alan Mulally came into the company, and really pushed the idea of data-driven decisions.

Before, we were getting a lot of interest from people who were very focused on the data that they had internally. After that, they had a lot of questions from their management and from upper-level directors and vice presidents saying, “We’ve got all these data assets. We should be getting more out of them.” This strategic perspective has really changed a lot of what we’ve done in the last few years.

Gardner: As I listen to you, Michael, it occurs to me that you are applying this data-driven mentality more deeply. As you pointed out earlier, you’re also going after all the data, all the information, whether internal or external.

In the case of an automobile company, you’re looking at the factory, the dealers, what drivers are doing, what the devices within the automobile are telling you, factoring that back into design relatively quickly, and then repeating the process. Are we getting to the point where this Holy Grail notion of a total feedback loop across the lifecycle of a major product like an automobile is really within our grasp? Are we getting there, or is this still theoretical? Can we pull it all together and make it a science?

Cavaretta: The theory is there. The question has more to do with the actual implementation and the practicality of it. We’re still talking about a lot of data, and even with new advanced technologies and techniques, that’s a lot of data to store, a lot of data to analyze, and a lot of data to make sure we can mash up appropriately.

And, while I think the potential is there and the theory is there, there is also work in being able to get the data from multiple sources. Everything you can get back from the vehicle is fantastic. Now, if you marry that up with internal data (survey data, manufacturing data, quality data), what are the things you want to go after first? We can’t do everything all at the same time.

Our perspective has been: let’s make sure that we identify the highest-value, greatest-ROI areas, then begin to take some of the major datasets that we have, push them to get more detail, mash them up appropriately, and really prove out the value.

Gardner: Clearly, there’s a lot more to come in terms of where we can take this, but I suppose it’s useful to have a historic perspective and context as well. I was thinking about some of the early quality gurus like Deming and some of the movement towards quality like Six Sigma. Does this fall within that same lineage? Are we talking about a continuum here over that last 50 or 60 years, or is this something different?

Cavaretta: That’s a really interesting question. From the perspective of analyzing data, using data appropriately, I think there is a really good long history, and Ford has been a big follower of Deming and Six Sigma for a number of years now.

The difference, though, is this idea that you don’t have to worry so much upfront about getting the data. If you’re doing this right, you have the data right there, and this has some great advantages. You don’t have to wait until you have enough history to look for patterns. Then again, it also has some disadvantages: you’ve got so much data that it’s easy to find spurious correlations, or models that don’t make any sense.

The piece that is required is good domain knowledge, in particular when you are talking about making changes in the manufacturing plant. It’s very appropriate to look at things and be able to talk with people who have 20 years of experience to say, “This is what we found in the data. Does this match what your intuition is?” Then, take that extra step.

Gardner: Tell me a little about a day in the life of your organization and your team, to let us know what you do. How do you go about making more data available and then reaching some of these higher-level benefits?

Cavaretta: We’re very much focused on interacting with the business. Mostly, we work on pilot projects with our business customers to bring advanced analytics and big-data technologies to bear against their problems. We work in what we call a push-and-pull model.

We go out and investigate technologies and say these are technologies that Ford should be interested in. Then, we look internally for business customers who would be interested in that. So, we’re kind of pushing the technologies.

From the pull perspective, we’ve had so many successful engagements, and such good contacts and credibility within the organization, that we’ve had people come to us and say, “We’ve got a problem. We know this has been in your domain. Give us some help. We’d love to be able to hear your opinions on this.”

So we’re pulled from the business side, and our job is to match up those two pieces. It’s best when we’re looking at a particular technology, somebody comes to us, and we say, “Oh, this is a perfect match.”

Those types of opportunities have been increasing in the last few years, and we’ve been very happy with the number of internal customers that have really been very excited about the areas of big data.

Gardner: Because this is The Open Group Conference and an audience that’s familiar with the IT side of things, I’m curious how this relates to software and software development. Of course, there are many more millions of lines of code in automobiles these days, with software becoming more important than just about everything else. Are you applying a lot of what you are doing to the software side of the house, or are the agile methods, feedback loops, and performance-management issues a separate domain, or is there crossover here?

Cavaretta: There’s some crossover. The biggest area that we’ve been focused on has been picking information, whether internal business processes or from the vehicle, and then being able to bring it back in to derive value. We have very good contacts in the Ford IT group, and they have been fantastic to work with in bringing interesting tools and technology to bear, and then looking at moving those into production and what’s the best way to be able to do that.

A fantastic development has been this idea that we’re using some of the more agile techniques in this space and Ford IT has been pushing this for a while. It’s been fantastic to see them work with us and be able to bring these techniques into this new domain. So we’re pushing the envelope from two different directions.

Gardner: It sounds like you will be meeting up at some point with a complementary nature to your activities.

Cavaretta: Definitely.

Gardner: Let’s move on to this notion of the “Internet of things,” a very interesting concept that lot of people talk about. It seems relevant to what we’ve been discussing. We have sensors in these cars, wireless transfer of data, more-and-more opportunity for location information to be brought to bear, where cars are, how they’re driven, speed information, all sorts of metrics, maybe making those available through cloud providers that assimilate this data.

So let’s not go too deep, because this is a multi-hour discussion all on its own, but how is this notion of the Internet of things being brought to bear on your gathering of big data and applying it to the analytics in your organization?

Cavaretta: It is a huge area, not only from the internal process perspective (RFID tags within the manufacturing plants, as well as out on the plant floor), but also in all of the information that’s being generated by the vehicle itself.

The Ford Energi generates about 25 gigabytes of data per hour, so you can imagine selling a couple of million vehicles in the near future, with that amount of data being generated. There are huge opportunities within that, and there are also some interesting opportunities having to do with opening up some of these systems for third-party developers. OpenXC is an initiative that we have going on at Research and Advanced Engineering.
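Taking the 25 GB-per-hour figure at face value, a rough back-of-the-envelope calculation suggests the scale involved. The fleet size and average daily driving time below are assumptions for illustration, not Ford figures.

```python
gb_per_hour = 25              # per-vehicle figure quoted above
vehicles = 2_000_000          # "a couple of million vehicles" (assumed)
driving_hours_per_day = 1     # assumed fleet-wide average; real usage varies

daily_gb = gb_per_hour * vehicles * driving_hours_per_day
daily_pb = daily_gb / 1_000_000   # 1 PB = 1,000,000 GB
print(f"{daily_pb:.0f} PB per day")  # 50 PB per day if every byte were retained
```

Even at a small fraction of this rate, the question raised in the transcript of how much data to send back for analysis becomes the central design decision.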

We have a lot of data coming from the vehicle. There’s a huge number of sensors and processors being added to vehicles. There’s data being generated there, as well as communication between the vehicle and your cell phone, and communication between vehicles.

There’s a group over in Ann Arbor, Michigan, the University of Michigan Transportation Research Institute (UMTRI), that’s investigating that, as well as communication between the vehicle and, say, a home system. It lets the home know that you’re on your way, so it’s time to increase the temperature if it’s winter, or cool the house in the summertime. The amount of data being generated there is invaluable and could be used for a lot of benefits, both from the corporate perspective and for the environment.

Gardner: Just to put a stake in the ground on this, how much data do cars typically generate? Do you have a sense of what now is the case, an average?

Cavaretta: The Energi, according to the latest information that I have, generates about 25 gigabytes per hour. Different vehicles are going to generate different amounts, depending on the number of sensors and processors on the vehicle. But the biggest key has to do with not necessarily where we are right now but where we will be in the near future.

With the amount of information that’s being generated from the vehicles, a lot of it is just internal stuff. The question is how much information should be sent back for analysis and to find different patterns. That becomes really interesting as you look at external sensors, temperature, humidity. You can know when the windshield wipers go on, and then take that information and mash it up with other external data sources too. It’s a very interesting domain.

Gardner: So clearly, it’s multiple gigabytes per hour per vehicle and probably going much higher.

Cavaretta: Easily.

Gardner: Let’s move forward now for those folks who have been listening and are interested in bringing this to bear on their organizations and their vertical industries, from the perspective of skills, mindset, and culture. Are there standards, certification, or professional organizations that you’re working with in order to find the right people?

It’s a big question. Let’s look at what skills you target for your group and what ways you think you can improve on that. Then, we’ll get into some of those larger issues about culture and mindset.

Cavaretta: The skills that we have in our department, in particular on our team, are in the area of computer science, statistics, and some good old-fashioned engineering domain knowledge. We’ve really gone about this from a training perspective. Aside from a few key hires, it’s really been an internally developed group.

The biggest advantage that we have is that we can go out and be very targeted with the training that we do. There are so many tools out there, especially in the open-source realm, that we can spin things up at relatively low cost and low risk, and do a number of experiments in the area. That’s really the way that we push the technologies forward.

Gardner: Why The Open Group? Why is that a good forum for your message, and for your research here?

Cavaretta: The biggest reason is the focus on the enterprise. There are a lot of advantages and a lot of business cases in large enterprises with many systems, where a relatively small improvement can make a large difference on the bottom line.

Talking with The Open Group really gives me an opportunity to bring people on board with the idea that they should be looking at a difference in mindset. It’s not “Here’s a way that data is being generated; try to conceive of some questions that we can use it for, and we’ll store that too.” It’s “Let’s just take everything, worry about it later, and then find the value.”

Gardner: I’m sure the viewers of your presentation on January 28 will be gathering a lot of great insights. A lot of the people that attend The Open Group conferences are enterprise architects. What do you think those enterprise architects should be taking away from this? Is there something about their mindset that should shift in recognizing the potential that you’ve been demonstrating?

Cavaretta: It’s important for them to think about data as an asset rather than as a cost. You may even have to spend some money, and it may feel a little bit unsafe without a really solid ROI at the beginning. Then, move toward pulling that information in, and storing it in a way that allows not just the high-level data scientists to get access and provide value, but anyone who is interested in the data. Those are very important pieces.

The last one is how to take a big-data project, where you’re not storing data in the traditional business intelligence (BI) framework that an enterprise has developed, and then connect it to the BI systems and provide value through those mash-ups. Those are really important areas that still need some work.

Gardner: Another big constituency within The Open Group community are those business architects. Is there something about mindset and culture, getting back to that topic, that those business-level architects should consider? Do you really need to change the way you think about planning and resource allocation in a business setting, based on the fruits of things that you are doing with big data?

Cavaretta: I really think so. The digital asset that you have can be monetized to change the way the business works, and that could be done by creating new assets that then can be sold to customers, as well as improving the efficiencies of the business.

This idea that everything is going to be very well-defined, with a lot of work put into making sure that the data has high quality, I think that needs to change somewhat. As you’re pulling the data in and thinking about long-term storage, the challenge is more about access to the information than about the problem of just storing it.

Gardner: Interesting that you brought up that notion that the data becomes a product itself and even a profit center perhaps.

Cavaretta: Exactly. There are many companies, especially large enterprises, that are looking at their data assets and wondering what they can do to monetize them, not only to pay for efficiency improvements but as a new revenue stream.

Gardner: We’re almost out of time. For those organizations that want to get started on this, are there any 20/20 hindsights or Monday-morning-quarterback insights you can provide? How do you get started? Do you appoint a leader? Do you need a strategic roadmap, a shift in culture or mindset, pilot programs? How would you recommend that people begin the process of getting into this?

Cavaretta: We’re definitely huge believers in pilot projects and proofs of concept, and we like to develop roadmaps by doing. So get out there. Understand that it’s going to be messy. Understand that it may be a little bit more costly, and the ROI isn’t going to be there at the beginning.

But get your feet wet. Start doing some experiments, and then, as those experiments turn from just experimentation into really providing real business value, that’s the time to start looking at a more formal aspect and more formal IT processes. But you’ve just got to get going at this point.

Gardner: I would think that the competitive forces are out there. If you are in a competitive industry, and those that you compete against are doing this and you are not, that could spell some trouble.

Cavaretta:  Definitely.

Gardner: We’ve been talking with Michael Cavaretta, PhD, Technical Leader of Predictive Analytics at Ford Research and Advanced Engineering in Dearborn, Michigan. Michael and I have been exploring how big data is fostering business transformation by allowing deeper insights into more types of data, all very efficiently. This is improving processes, updating quality control and adding to customer satisfaction.

Our conversation today comes as a lead-in to Michael’s upcoming plenary presentation. He is going to be speaking on January 28 in Newport Beach, California, as part of The Open Group Conference.

You will hear more from Michael and other global leaders on big data who are gathering to talk about business transformation from big data at this conference. So a big thank you to Michael for joining us in this fascinating discussion. I really enjoyed it, and I look forward to your presentation on the 28th.

Cavaretta: Thank you very much.

Gardner: And I would encourage our listeners and readers to attend the conference or follow more of the threads in social media from the event. Again, it’s going to be happening from January 27 to January 30 in Newport Beach, California.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator through the thought leadership interviews. Thanks again for listening, and come back next time.


Leveraging Social Media at The Open Group Newport Beach Conference (#ogNB)

By The Open Group Conference Team

Attendees of conferences hosted by The Open Group® can learn from industry experts, keep up with the latest technologies and standards, and discuss and debate current industry trends. One way to maximize these benefits is to make technology work for you. If you are attending The Open Group Conference in Newport Beach next week, we’ve put together a few tips on how to leverage social media to make networking at the conference easier, quicker and more effective.

Using Twitter at #ogNB

Twitter is a real-time news-sharing tool that anyone can use. The official hashtag for the conference is #ogNB. This enables anybody, whether they are physically attending the event or not, to follow what’s happening at the Newport Beach conference in real-time and interact with each other.

Before the conference, be sure to update your Twitter account to monitor #ogNB and, of course, to tweet about the conference.

Using Facebook at The Open Group Conference in Newport Beach

You can also track what is happening at the conference on The Open Group Facebook page. We will be running another photo contest, where all entries will be uploaded to our page. Members and Open Group Facebook fans can vote by “liking” a photo. The photo with the most “likes” in each category will be named the winner. Submissions will be uploaded in real-time, so the sooner you submit a photo, the more time members and fans will have to vote for it!

For full details of the contest and how to enter see The Open Group blog at: http://blog.opengroup.org/2013/01/22/the-open-group-photo-contest-document-the-magic-at-the-newport-beach-conference/

LinkedIn during The Open Group Conference in Newport Beach

Inspired by one of the sessions? Interested in what your peers have to say? Start a discussion on The Open Group LinkedIn Group page. We’ll also be sharing interesting topics and questions related to The Open Group Conference as it is happening. If you’re not a member already, requesting membership is easy. Simply go to the group page and click the “Join Group” button. We’ll accept your request as soon as we can!

Blogging during The Open Group Conference in Newport Beach

Stay tuned for daily conference recaps here on The Open Group blog. In case you missed a session or you weren’t able to make it to Newport Beach, we’ll be posting the highlights and recaps on the blog. If you are attending the conference and would like to submit a recap of your own, please contact opengroup (at) bateman-group.com.

If you have any questions about social media usage at the conference, feel free to tweet the conference team @theopengroup.
