Tag Archives: Conference

New Health Data Deluges Require Secure Information Flow Enablement Via Standards, Says The Open Group’s New Healthcare Director

By The Open Group

Below is the transcript of The Open Group podcast on how new devices and practices have the potential to expand the information available to Healthcare providers and facilities.

Listen to the podcast here.

Dana Gardner: Hello, and welcome to a special BriefingsDirect Thought Leadership Interview coming to you in conjunction with The Open Group’s upcoming event, Enabling Boundaryless Information Flow™ July 21-22, 2014 in Boston.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator for the series of discussions from the conference on Boundaryless Information Flow, Open Platform 3.0™, Healthcare, and Security issues.

One area of special interest is the Healthcare arena, and Boston is a hotbed of innovation and adaptation for how technology, Enterprise Architecture, and standards can improve the communication and collaboration among Healthcare ecosystem players.

And so, we’re joined by a new Forum Director at The Open Group to learn how an expected continued deluge of data and information about patients, providers, outcomes, and efficiencies is pushing the Healthcare industry to rapid change.

With that, please join me now in welcoming our guest. We’re here with Jason Lee, Healthcare and Security Forums Director at The Open Group. Welcome, Jason.

Jason Lee: Thank you so much, Dana. Good to be here.

Gardner: Great to have you. I’m looking forward to the Boston conference and want to remind our listeners and readers that it’s not too late to sign up. You can learn more at http://www.opengroup.org.

Jason, let’s start by talking about the relationship between Boundaryless Information Flow, which is a major theme of the conference, and healthcare. Healthcare is perhaps the killer application for Boundaryless Information Flow.

Lee: Interesting, I haven’t heard it referred to that way, but healthcare is 17 percent of the US economy. It’s upwards of $3 trillion. The costs of healthcare are a problem, not just in the United States, but all over the world, and there are a great number of inefficiencies in the way we practice healthcare.

We don’t necessarily intend to be inefficient, but there are so many places and people involved in healthcare, it’s very difficult to get them to speak the same language. It’s almost as if you’re in a large house with lots of different rooms, and every room you walk into they speak a different language. To get information to flow from one room to the other requires some active efforts and that’s what we’re undertaking here at The Open Group.

Gardner: What is it about the current collaboration approaches that don’t work? Obviously, healthcare has been around for a long time and there have been different players involved. What’s the hurdle? What prevents a nice, seamless, easy flow and collaboration in information that gets better outcomes? What’s the holdup?

Lee: There are many ways to answer that question, because there are many barriers. Perhaps the simplest is the transformation of healthcare from a paper-based industry to a digital industry. Everyone has walked into an office, looked behind the people at the front desk, and seen file upon file and row upon row of folders, information that’s kept in a written format.

When there’s been movement toward digitizing that information, not everyone has used the same system. It’s almost like trains running on different gauge tracks. Obviously, if the track going east to west is a different gauge than the track going north to south, then trains aren’t going to be able to travel on the same tracks. In the same way, healthcare information does not flow easily from one office to another or from one provider to another.

Gardner: So not only do we have disparate strategies for collecting and communicating health data, but we’re also seeing much larger amounts of data coming from a variety of new and different places. Some of them now even involve sensors inside of patients themselves or devices that people will wear. So is the data deluge, the volume, also an issue here?

Lee: Certainly. I heard recently that an integrated health plan, which has multiple hospitals involved, contains more elements of data than the Library of Congress. As information is collected at multiple points in time, over a relatively short period of time, you really do have a data deluge. Figuring out how to find your way through all the data and look at the most relevant for the patient is a great challenge.

Gardner: I suppose the bad news is that there is this deluge of data, but it’s also good news, because more data means more opportunity for analysis, a better ability to predict and determine best practices, and also provide overall lower costs with better patient care.

So it seems like the stakes are rather high here to get this right, to not just crumble under a volume or an avalanche of data, but to master it, because it’s perhaps the future. The solution is somewhere in there too.

Lee: No question about it. At The Open Group, our focus is on solutions. We, like others, put a great deal of effort into describing the problems, but our real work is figuring out how to bring IT to bear on business problems, how to encourage different parts of an organization, and different organizations, to speak the same language, and how to operate using common standards. That’s really what we’re all about.

And it is, in a large sense, part of the process of helping to bring healthcare into the 21st Century. A number of industries are a couple of decades ahead of healthcare in the way they use large datasets — big data, some people refer to it as. I’m talking about companies like big department stores and large online retailers. They really have stepped up to the plate and are using that deluge of data in ways that are very beneficial to them, and healthcare can do the same. We’re just not quite at the same level of evolution.

Gardner: And to your point, the stakes are so much higher. Retail is, of course, a big deal in the economy, but as you pointed out, healthcare is a much larger segment. So just making modest improvements in communication, collaboration, or data analysis can reap huge rewards.

Lee: Absolutely true. There is the cost side of things, but there is also the quality side. So there are many ways in which healthcare can improve through standardization and coordinated development, using modern technology that cannot just reduce cost, but improve quality at the same time.

Gardner: I’d like to get into a few of the hotter trends, but before we do, it seems that The Open Group has recognized the importance here by devoting the entire second day of its conference in Boston, July 22, to Healthcare.

Maybe you could give us a brief overview of what participants, and even those who come in online and view recorded sessions of the conference at http://new.livestream.com/opengroup, should expect. What’s going to happen on July 22?

Lee: We have a packed day. We’re very excited to have Dr. Joe Kvedar, a physician at Partners HealthCare and Founding Director of the Center for Connected Health, as our first plenary speaker. The title of his presentation is “Making Health Additive.” Dr. Kvedar is a widely respected expert on mobile health, which is currently the Healthcare Forum’s top work priority. As mobile medical devices become ever more available and diversified, they will enable consumers to know more about their own health and wellness. A great deal of potentially useful health data will be generated. How this information can be used, not just by consumers but also by the healthcare establishment that takes care of them as patients, will become a question of increasing importance. It will become an area where standards development and The Open Group can be very helpful.

Our second plenary speaker, Proteus Duxbury, Chief Technology Officer at Connect for Health Colorado, will discuss a major feature of the Affordable Care Act, the health insurance exchanges, which are designed to bring health insurance to tens of millions of people who previously did not have access to it. Mr. Duxbury is going to talk about how Enterprise Architecture, which is really about getting to solutions by helping the IT folks talk to the business folks and vice versa, has helped the State of Colorado develop its Health Insurance Exchange.

After the plenaries, we will break up into three tracks, one of which is Healthcare-focused. In this track there will be three presentations, all of which discuss how Enterprise Architecture and the approach to Boundaryless Information Flow can help healthcare and healthcare decision-makers become more effective and efficient.

One presentation will focus on the transformation of care delivery at the Visiting Nurse Service of New York. Another will address stewarding healthcare transformation using Enterprise Architecture, focusing on one of our Platinum members, Oracle, and a company called Intelligent Medical Objects, and how they’re working together in a productive way, bringing IT and healthcare decision-making together.

Then, the final presentation in this track will focus on the development of an Enterprise Architecture-based solution at an insurance company. The payers, or insurers, the big companies that are responsible for paying bills and collecting premiums, have a very important role in the healthcare system that extends beyond administration of benefits. Yet payers are not always recognized for their key responsibilities and capabilities in the area of clinical improvements and cost improvements.

With the increase in payer data brought on in large part by the adoption of a new coding system, the ICD-10, which will come online this year, there will be a huge amount of additional data, including clinical data, that becomes available. At The Open Group, we consider payers, the health insurance companies (some of which are integrated with providers), to be very important stakeholders in the big picture.

In the afternoon, we’re going to switch gears a bit and have a speaker talk about the challenges, the barriers, the “pain points” in introducing new technology into healthcare systems. The focus will return to remote or mobile medical devices and the predictable but challenging barriers to getting newly generated health information to flow to doctors’ offices and into patients’ records, electronic health records, and hospitals’ data-keeping and data-sharing systems.

We’ll have a panel of experts that responds to these pain points, these challenges, and then we’ll draw heavily from the audience, who we believe will be very, very helpful, because they bring a great deal of expertise in guiding us in our work. So we’re very much looking forward to the afternoon as well.

Gardner: It’s really interesting. A couple of these different plenaries and discussions in the afternoon come back to this user-generated data. Jason, we really seem to be on the cusp of a whole new level of information that people will be able to develop from themselves through their lifestyle, new devices that are connected.

We hear from folks like Apple, Samsung, Google, and Microsoft. They’re all pulling together information and making it easier for people to not only monitor their exercise, but their diet, and maybe even start to use sensors to keep track of blood sugar levels, for example.

In fact, a new Flurry Analytics survey showed a 62 percent increase in the use of health and fitness applications over the last six months on popular mobile devices. This compares to a 33 percent increase in other applications in general. So there’s an 87 percent faster uptick in the use of health and fitness applications.

Tell me a little bit how you see this factoring in. Is this a mixed blessing? Will so much data generated from people in addition to the electronic medical records, for example, be a bad thing? Is this going to be a garbage in, garbage out, or is this something that could potentially be a game-changer in terms of how people react to their own data and then bring more data into the interactions they have with care providers?

Lee: It’s always a challenge to predict what the market is going to do, but I think that’s a remarkable statistic that you cited. My prediction is that the increased volume of person-generated data from mobile health devices is going to be a game-changer. This view also reflects how the Healthcare Forum members (which include Capgemini, Philips, IBM, Oracle and HP) view the future.

The commercial demand for mobile medical devices, things that can be worn, embedded, or swallowed, as in pills, as you mentioned, keeps growing. The software and the applications that will be developed to be used with these devices are going to grow by leaps and bounds. As you say, there are big players getting involved. Already, some of the pedometer-type devices that measure the number of steps taken in a day have captured the interest of many, many people. Even David Sedaris, serious guy that he is, was writing about it recently in The New Yorker.

What we will find is that many of the health indicators that we used to have to go to the doctor or nurse or lab to get information on will become available to us through these remote devices.

There will be a question, of course, as to the reliability and validity of the information, to your point about garbage in, garbage out, but I think standards development will help here. This, again, is where The Open Group comes in. We might also see the FDA exercising its role in ensuring safety here, as well as other organizations, in determining which devices are reliable.

The Open Group is working in the area of mobile data and the information systems that are developed around it, and their ability to (a) talk to one another and (b) talk to the devices and infrastructure used in doctors’ offices and in hospitals. This is called interoperability, and it’s certainly lacking in this country.

There are already problems around interoperability and connectivity of information in the healthcare establishment as it is now. When patients and consumers start collecting their own data, and the patient is put at the center of the nexus of healthcare, then the question becomes how does that information that patients collect get back to the doctor/clinician in ways in which the data can be trusted and where the data are helpful?

After all, if a patient is wearing a medical device, there is the opportunity to collect data, about blood sugar levels let’s say, throughout the day. This really takes healthcare outside the four walls of the clinic and brings information to bear that can be very, very useful to clinicians and beneficial to patients.

In short, the rapid market dynamic in mobile medical devices, and in the software and hardware that facilitate interoperability, begs for standards-based solutions that reduce costs, improve quality, and put the patient at the center. This is The Open Group Healthcare Forum’s sweet spot.

Gardner: It seems to me a real potential game-changer as well, and that something like Boundaryless Information Flow and standards will play an essential role. Because one of the big question marks with many of the ailments in a modern society has to do with lifestyle and behavior.

So often, the providers of the care only really have the patient’s responses to questions, but imagine having a trove of data at their disposal, a 360-degree view of the patient to then further the cause of understanding what’s really going on, on a day-to-day basis.

But then, it’s also having a two-way street, being able to deliver perhaps in an automated fashion reinforcements and incentives, information back to the patient in real-time about behavior and lifestyles. So it strikes me as something quite promising, and I look forward to hearing more about it at the Boston conference.

Any other thoughts on this issue about patient flow of data, not just among and between providers and payers, for example, or providers in an ecosystem of care, but with the patient as the center of it all, as you said?

Lee: As more mobile medical devices come to the market, we’ll find that consumers own multiple types of devices at least some of which collect multiple types of data. So even for the patient, being at the center of their own healthcare information collection, there can be barriers to having one device talk to the other. If a patient wants to keep their own personal health record, there may be difficulties in bringing all that information into one place.

So the interoperability issue, the need for standards, guidelines, and voluntary consensus among stakeholders about how information is represented becomes an issue, not just between patients and their providers, but for individual consumers as well.

Gardner: And also the cloud providers. There will be a variety of large organizations with cloud-modeled services, and they are going to need to be, in some fashion, brought together, so that a complete 360-degree view of the patient is available when needed. It’s going to be an interesting time.

Of course, we’ve also looked at many other industries and tried to have a cloud synergy, a cloud-of-clouds approach to data and also the transaction. So it’s interesting how what’s going on in multiple industries is common, but it strikes me that, again, the scale and the impact of the healthcare industry makes it a leader now, and perhaps a driver for some of these long overdue structured and standardized activities.

Lee: It could become a leader. There is no question about it. Moreover, there is a lot healthcare can learn from other industries, from mistakes that companies have made, from lessons they have learned, and from best practices they have developed (on both the content and process side). And there are issues, around security in particular, where healthcare will be at the leading edge in trying to figure out how much is enough, how much is too much, and what kinds of solutions work.

There’s a great future ahead here. It’s not going to be without bumps in the road, but organizations like The Open Group are designed and experienced to help multiple stakeholders come together and have the conversations that they need to have in order to push forward and solve some of these problems.

Gardner: Well, great. I’m sure there will be a lot more about how to actually implement some of those activities at the conference. Again, that’s going to be in Boston, beginning on July 21, 2014.

We’ll have to leave it there. We’re about out of time. We’ve been talking with a new Director at The Open Group to learn how an expected continued deluge of data and information about patients, providers, outcomes, and efficiencies is pushing the Healthcare industry to rapid change. And, as we’ve heard, that might very well spill over into other industries as well.

So we’ve seen how innovation and adaptation around technology, Enterprise Architecture and standards can improve the communication and collaboration among Healthcare ecosystem players.

It’s not too late to register for The Open Group Boston 2014 (http://www.opengroup.org/boston2014) and join the conversation via Twitter #ogchat #ogBOS, where you will be able to learn more about Boundaryless Information Flow, Open Platform 3.0, Healthcare and other relevant topics.

So a big thank you to our guest. We’ve been joined by Jason Lee, Healthcare and Security Forums Director at The Open Group. Thanks so much, Jason.

Lee: Thank you very much.


Filed under Boundaryless Information Flow™, Cloud, Conference, Data management, Enterprise Architecture, Enterprise Transformation, Healthcare, Information security, Interoperability, Open Platform 3.0, Standards, Uncategorized

The Open Group Boston 2014 to Explore How New IT Trends are Empowering Improvements in Business

By The Open Group

The Open Group Boston 2014 will be held on July 21-22 and will cover the major issues and trends surrounding Boundaryless Information Flow™. Thought-leaders at the event will share their outlook on IT trends, capabilities, best practices and global interoperability, and how this will lead to improvements in responsiveness and efficiency. The event will feature presentations from representatives of prominent organizations on topics including Healthcare, Service-Oriented Architecture, Security, Risk Management and Enterprise Architecture. The Open Group Boston will also explore how cross-organizational collaboration and trends such as big data and cloud computing are helping to make enterprises more effective.

The event will consist of two days of plenaries and interactive sessions that will provide in-depth insight on how new IT trends are leading to improvements in business. Attendees will learn how industry organizations are seeking large-scale transformation and some of the paths they are taking to realize that.

The first day of the event will bring together subject matter experts in the Open Platform 3.0™, Boundaryless Information Flow™ and Enterprise Architecture spaces. The day will feature thought-leaders from organizations including Boston University, Oracle, IBM and Raytheon. One of the keynotes will be given by Marshall Van Alstyne, Professor at Boston University School of Management and Researcher at MIT Center for Digital Business, who will reveal the secret of internet-driven marketplaces. Other content includes:

• The Open Group Open Platform 3.0™ focuses on new and emerging technology trends converging with each other and leading to new business models and system designs. These trends include mobility, social media, big data analytics, cloud computing and the Internet of Things.
• Cloud security and the key differences in securing cloud computing environments vs. traditional ones as well as the methods for building secure cloud computing architectures
• Big Data as a service framework as well as preparing to deliver on Big Data promises through people, process and technology
• Integrated Data Analytics and using them to improve decision outcomes

The second day of the event will have an emphasis on Healthcare, with keynotes from Joseph Kvedar, MD, Partners HealthCare, Center for Connected Health, and Connect for Health Colorado CTO Proteus Duxbury. The day will also showcase speakers from Hewlett Packard and Blue Cross Blue Shield, as well as multiple tracks on a wide variety of topics such as Risk and Professional Development, plus ArchiMate® tutorials. Key learnings include:

• Improving healthcare’s information flow is a key enabler to improving healthcare outcomes and implementing efficiencies within today’s delivery models
• Identifying the current state of IT standards and future opportunities which cover the healthcare ecosystem
• How Archimate® can be used by Enterprise Architects for driving business innovation with tried and true techniques and best practices
• Security and Risk Management evolving as software applications become more accessible through APIs – which can lead to vulnerabilities and the potential need to increase security while still understanding the business value of APIs

Member meetings will also be held on Wednesday and Thursday, July 23-24.

Don’t wait, register now to participate in these conversations and networking opportunities during The Open Group Boston 2014: http://www.opengroup.org/boston2014/registration

Join us on Twitter – #ogchat #ogBOS


Filed under ArchiMate®, Boundaryless Information Flow™, Business Architecture, Cloud/SOA, Conference, Enterprise Architecture, Enterprise Transformation, Healthcare, Information security, Open Platform 3.0, Professional Development, RISK Management, Service Oriented Architecture, Standards, Uncategorized

Q&A with Allen Brown, President and CEO of The Open Group

By The Open Group

Last month, The Open Group hosted its San Francisco 2014 conference themed “Toward Boundaryless Information Flow™.” Boundaryless Information Flow has been the pillar of The Open Group’s mission since 2002 when it was adopted as the organization’s vision for Enterprise Architecture. We sat down at the conference with The Open Group President and CEO Allen Brown to discuss the industry’s progress toward that goal and the industries that could most benefit from it now as well as The Open Group’s new Dependability through Assuredness™ Standard and what the organization’s Forums are working on in 2014.

The Open Group adopted Boundaryless Information Flow as its vision in 2002, and the theme of the San Francisco Conference has been “Towards Boundaryless Information Flow.” Where do you think the industry is at this point in progressing toward that goal?

Well, it’s progressing reasonably well, but the challenge is, of course, that when we established that vision back in 2002, life was a little less complex, a little bit less fast moving, a little bit less fast-paced. Although organizations are improving the way that they act in a boundaryless manner, and of course that changes by industry, some industries still have big silos and stovepipes, they still have big boundaries. But generally speaking we are moving, and everyone understands the need for information to flow in a boundaryless manner, for people to be able to access and integrate information and to provide it to the teams that need it.

One of the keynotes on Day One focused on the opportunities within the healthcare industry and The Open Group recently started a Healthcare Forum. Do you see Healthcare industry as a test case for Boundaryless Information Flow and why?

Healthcare is one of the verticals that we’ve focused on. And it is not so much a test case, but it is an area that absolutely seems to need information to flow in a boundaryless manner so that everyone involved, from the patient through the administrator through the medical teams, has access to the right information at the right time. We know that in many situations there are shifts of medical teams, and from one medical team to another they don’t have access to the same information. Information isn’t easily shared between medical doctors, hospitals and payers. What we’re trying to do is to focus on the needs of the patient and improve the information flow so that you get better outcomes for the patient.

Are there other industries where this vision might be enabled sooner rather than later?

I think that we’re already making significant progress in what we call the Exploration, Mining and Minerals industry. Our EMMM™ Forum has produced an industry-wide model that is being adopted throughout that industry. We’re also looking at whether we can have an influence in the airline industry, automotive industry, manufacturing industry. There are many, many others, government and retail included.

The plenary on Day Two of the conference focused on The Open Group’s Dependability through Assuredness standard, which was released last August. Why is The Open Group looking at dependability and why is it important?

Dependability is ultimately what you need from any system. You need to be able to rely on that system to perform when needed. Systems are becoming more complex, they’re becoming bigger. We’re not just thinking about the things that arrive on the desktop, we’re thinking about systems like the barriers at subway stations or Tube stations, we’re looking at systems that operate any number of complex activities. And they bring an awful lot of things together that you have to rely upon.

Now in all of these systems, what we’re trying to do is to minimize the amount of downtime, because downtime can result in financial loss or, at worst, loss of human life, and we’re trying to focus on that. What is interesting about the Dependability through Assuredness Standard is that it brings together so many other aspects of what The Open Group is working on. Obviously the architecture is at the core, so it’s critical that there’s an architecture and that we understand the requirements of that system. It’s also critical that we understand the risks, so that fits in with the work of the Security Forum and the work that they’ve done on Risk Analysis and Dependency Modeling. Out of the dependency modeling we can get the use cases, so that we can understand where the vulnerabilities are, what action has to be taken if we identify a vulnerability, or what action needs to be taken in the event of a failure of the system. If we do that and assign accountability to people, who will do what by when in the event of an anomaly being detected or a failure happening, we can actually minimize that downtime or remove it completely.

Now the other great thing about this is that it’s not only a focus on the architecture for the actual system development. As the system changes over time, requirements change, legislation that might affect it changes, external changes occur, and all of that goes into that system. But there’s also another circle within that system that deals with failure, analyzes it and makes sure it doesn’t happen again. And there have been so many instances of failure recently. In the banks, for example, in the UK, a bank recently was unable to process debit cards or credit cards for customers for about three or four hours. That was probably caused by work done on a routine basis over a weekend. But if Dependability through Assuredness had been in place, that could have been averted, and it could have saved an awful lot of difficulty for an awful lot of people.

How does the Dependability through Assuredness Standard also move the industry toward Boundaryless Information Flow?

It’s part of it. It’s critical that with big systems the information has to flow. But this is not so much the information but how a system is going to work in a dependable manner.

Business Architecture was another featured topic in the San Francisco plenary. What role can business architecture play in enterprise transformation vis a vis the Enterprise Architecture as a whole?

A lot of people in the industry are talking about Business Architecture right now and trying to focus on that as a separate discipline. We see it as a fundamental part of Enterprise Architecture. In fact, there are three legs to Enterprise Architecture: there’s Business Architecture; there’s the need for business analysts, who are critical to supplying the information; and then there are the solutions architects and the other architects, data and applications architects and so on, that are needed. So those three legs are needed.

We find that there are two or three different types of Business Architect. There are those that are using the analysis to understand what the business is doing in order that they can inform the solutions architects and other architects for the development of solutions. There are those that are more integrated with the business, that can understand what is going on and provide input into how that might be improved through technology. And there are those that can actually go another step and say, here are the advances in technology and here are the opportunities for advancing the competitiveness of our organization.

What are some of the other key initiatives that The Open Group’s forum and work groups will be working on in 2014?

That kind of question is like when you’ve got an award and you’ve got to thank your friends, so apologies to anyone that I leave out. Let me start alphabetically with the Architecture Forum. The Architecture Forum obviously is working on the evolution of TOGAF®; they’re also working on the harmonization of TOGAF with ArchiMate®, and they have a number of projects within that, Business Architecture of course being one of the projects going on in the Architecture space. The ArchiMate Forum is pushing ahead with ArchiMate. They’ve got two interesting activities going on at the moment. One is called ArchiMetals, which is going to be a sister publication to the ArchiSurance case study: where ArchiSurance provides an example of how ArchiMate is used in the insurance industry, ArchiMetals is going to be used in a manufacturing context, so there will be a whitepaper on that and there will be examples and artifacts that we can use. They’re also working on an ArchiMate standard for interoperability of modeling tools. There are four tools that are accredited and certified by The Open Group right now, and we’re looking for that interoperability to help organizations that have multiple tools, as many of them do.

Going down the alphabet, there’s DirecNet. Not many people know about DirecNet, but DirecNet™ is work that we do with the U.S. Navy. They’re working on standards for long-range, high-bandwidth mobile networking. Then we can go to the FACE™ Consortium, the Future Airborne Capability Environment. The FACE Consortium is working on the next version of its standard, they’re working toward accreditation and a certification program, and the uptake of that through procurement is absolutely amazing; we’re thrilled about that.

Healthcare we’ve talked about. The Open Group Trusted Technology Forum is working on how we can trust the supply chain in developed systems. They’ve released the Open Trusted Technology Provider™ Standard (O-TTPS) Accreditation Program, which was launched this week, and we already have one accredited vendor and two certified test labs, assessment labs. That is really exciting, because now we’ve got a way of helping any organization that has large, complex systems developed through a global supply chain to make sure that it can trust its supply chain. And that is going to be invaluable to many industries, but also to the safety of citizens and the infrastructure of many countries. The other part of the O-TTPS story is that we are planning to move that standard toward ISO standardization shortly.

The next one moving down the list would be Open Platform 3.0™. This is a really exciting part of Boundaryless Information Flow, it really is. This is talking about the convergence of SOA, Cloud, Social, Mobile, Internet of Things and Big Data, and bringing all of that together. This convergence, this bringing together of all of those activities, is really something that is critical right now, and we need to focus on it. In the different areas, some of our Cloud computing standards have already gone to ISO and have been adopted by ISO. We’re working right now on the next products that are going to move through. We have a governance standard in process, and an ecosystem standard has recently been published. In the area of Big Data there’s a whitepaper that’s 25 percent completed, and there’s also a lot of work on the definition of what Open Platform 3.0 is, so this week the members have been working on trying to define Open Platform 3.0. One of the really interesting activities that’s gone on is that the members of the Open Platform 3.0 Forum have produced something like 22 different use cases, and they’re really good. They’re concise and they’re precise, and they cover a number of different industries, including healthcare and others. The next stage is to look at those and work on the ROI of those use cases, the monetization, the value from them, and that’s really exciting. I’m looking forward to peeping at that from time to time.

The Real Time and Embedded Systems Forum (RTES) is next. Real-Time is where we incubated the Dependability through Assuredness Framework, and that work is continuing to develop, which is really good. The core focus of the RTES Forum is high assurance systems, and they’re doing some work with ISO on that, along with a lot of other areas such as multicore. Of course, they also have a number of EC projects where we’re partnering with others in the EC around RTES.

The Security Forum, as I mentioned earlier, has done a lot of work on risk and dependability. They have not only their standards for the Risk Taxonomy and Risk Analysis, but they’ve now also developed the Open FAIR Certification for People, which is based on those two standards of Risk Analysis and Risk Taxonomy. And we’re already starting to see people being trained and certified under that Open FAIR Certification Program that the Security Forum developed.

A lot of other activities are going on. Like I said, I probably left a lot of things out, but I hope that gives you a flavor of what’s going on in The Open Group right now.

The Open Group will be hosting a summit in Amsterdam May 12-14, 2014. What can we look forward to at that conference?

In Amsterdam we have a summit that’s going to bring together a lot of things; it’s going to be a bigger conference than we had here. We’ve got a lot going on across all of our activities, and we’re going to bring together top-level speakers, so we’re looking forward to some interesting work during that week.


Filed under ArchiMate®, Boundaryless Information Flow™, Business Architecture, Conference, Cybersecurity, EMMMv™, Enterprise Architecture, FACE™, Healthcare, O-TTF, RISK Management, Standards, TOGAF®

What I learnt at The Open Group Bangalore Conference last weekend

By Sreekanth Iyer, Executive IT Architect, IBM

It was quite a lot of learning for a Saturday, attending The Open Group conference in Bangalore. It was actually a two-day program this year; I could not make it on Friday because of other work commitments, but I heard from the people who attended that it was a great session. I did know about fellow IBMer Jithesh Kozhipurath’s presentation on Friday, and I had the chance to look at his excellent material on applying TOGAF® practices for integrated IT Operations Enterprise Architecture, in which he shared his experience leading the lab infrastructure optimization work.

I started a bit late on Saturday, thinking it was happening at the Leela Palace, which is near my home (ah, that was in 2008). I realized late that it was at the Philips Innovation Campus at Manyata, but managed to reach just in time for the start of the sessions.

The day started with an Architecture as a Service discussion. The presentation was short, but there were a lot of interesting questions and interactions after the session. I was curious to know more about the “self-service” aspect of that topic.

Then we had Jason Uppal of ClinicalMessage Inc. on stage, who gave a wonderful presentation on the human touch in architecture and how to leverage EA to make disruptive changes without disrupting working systems.

There were lots of take-aways from the session, importantly the typical reasons why certain architectures can fail: many times we have a solution already in our minds and we are trying to fit it to the requirement, and most of the time, if we look at the requirements artifact, we will see that the problems are not captured correctly. I couldn’t agree more with the good practices that he discussed.

Starting with “Identifying the Problem Right,” I thought that is definitely the first and most important step in architecture. Then Jason talked about the significance of communicating with and engaging people and stakeholders in the architecture, a point that he drove home with a good example from the healthcare industry; engagement, of course, improves quality. Building the right levers into the architecture and solving the whole problem were some of the other key points that I noted down. More importantly, the key message was that, as architects, we have to go beyond drawing lines and boxes to deliver change, and maybe look to deliver things that can create an impact in 30 days, balancing short-term and long-term goals.

I got the stage for a couple of minutes to give an update on the AEA Bangalore Chapter activities. My request to the attendees was to leverage the chapter for their own professional development, using it as a platform to share expertise, get answers to queries, connect with other professionals of similar interest and build their network. Hopefully we will see more participation in the Bangalore chapter events this year.

On the security track, there were multiple interesting sessions. It began with Jim Hietala of The Open Group discussing the Risk Management Framework. I’ve been attending a course on the subject, but this one provided a lot of insight into the taxonomy (O-RT) and the analysis part, taking a quantitative approach rather than a qualitative one. Though the example was based on risks around laptop theft, there is no reason we can’t apply the principles to real issues like quantifying the threats of moving workloads to the cloud (that’s another to-do added to my list).
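To make the quantitative idea concrete, here is a minimal sketch of the kind of calculation such an approach encourages. It only loosely echoes the Open FAIR notion that risk is driven by loss event frequency and loss magnitude; the scenario, the triangular distributions and every number below are my own illustrative assumptions, not the standard’s method or figures.

# Illustrative only: a back-of-the-envelope quantitative risk estimate.
# The distributions and numbers are assumptions, not part of O-RT or O-RA.
import random

def annualized_loss_exposure(freq_min, freq_likely, freq_max,
                             loss_min, loss_likely, loss_max,
                             iterations=10_000):
    """Monte Carlo estimate of annualized loss exposure (ALE)."""
    totals = []
    for _ in range(iterations):
        events_per_year = random.triangular(freq_min, freq_max, freq_likely)
        loss_per_event = random.triangular(loss_min, loss_max, loss_likely)
        totals.append(events_per_year * loss_per_event)
    return sum(totals) / len(totals)

# Hypothetical laptop-theft scenario: 1 to 6 thefts a year (most likely 3),
# each costing $5,000 to $50,000 (most likely $15,000).
ale = annualized_loss_exposure(1, 3, 6, 5_000, 15_000, 50_000)
print(f"Estimated annualized loss exposure: ${ale:,.0f}")

The same shape of calculation could be repeated for a cloud-migration scenario by swapping in threat frequencies and loss figures for the workloads in question.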

Then it was my session on best practices for moving workloads to cloud for Indian banks. I talked about the progress so far with the whitepaper. The attendees were limited, as Jason’s EA workshop was happening in parallel, but those who attended were really interested in the subject. We had a good discussion on the benefits, challenges and regulations with regard to Indian banking workloads and their movement to cloud, and we discussed a few interesting case studies. There are areas that need more content, and I’ve requested the people who attended the session to participate in the workgroup. We are looking at getting a first draft done in the next 30 days.

Finally, I also sat in on the presentation by Ajit A. Matthew on the security implementation at Intel. Everywhere the message is clear: you need to implement context-based security and security intelligence to enable new-age innovation while at the same time protecting your core assets.

It was a Saturday well spent. I also had some opportunities to connect with a few new folks and understand their security challenges with cloud. I’m looking to keep the dialog going and to have an AEA Bangalore chapter event sometime during Q1. In that direction, I took the first step by writing this up and sharing it with my network.

Event Details:
The Open Group Bangalore, India
January 24-25, 2014

Sreekanth Iyer is an Executive IT Architect in the IBM Security Systems CTO office and works on developing IBM’s Cloud Security Technical Strategy. He is an Open Group Certified Distinguished Architect and is a core member of the Bangalore Chapter of the Association of Enterprise Architects. He has over 18 years’ industry experience and has led several client solutions across multiple industries. His key areas of work include Information Security, Cloud Computing, SOA, Event Processing, and Business Process Management. He has authored several technical articles and blogs and is a core contributor to multiple Open Group as well as IBM publications. He works out of the IBM India Software Lab, Bangalore, and you can follow him on Twitter @sreek.


Filed under Conference, Enterprise Architecture, Healthcare, TOGAF®

Three Things We Learned at The Open Group, London

By Manuel Ponchaux, Senior Consultant, Corso

The Corso team recently visited London for The Open Group’s “Business Transformation in Finance, Government & Healthcare” conference (#ogLON). The event was predominantly for learning how experts address organisational change when aligning business needs with information technology – something very relevant in today’s climate. Nonetheless, there were a few other things we learnt as well…

1. Lean Enterprise Architecture

We were told that Standard Frameworks are too complex and multidimensional – people were interested in how we use them to provide simple working guidelines to the architecture team.

There were a few themes that frequently popped up, one of them being the measurement of Enterprise Architecture (EA) complexity. There seemed to be a lot of talk about Lean Enterprise Architecture as a solution to complexity issues.

2. Risk Management was popular

Clearly the events of the past few years e.g. financial crisis, banking regulations and other business transformations mean that managing risk is increasingly more important. So, it was no surprise that the Risk Management and EA sessions were very popular and probably attracted the biggest crowd. The Corso session showcasing our IBM/CIO case study was successful with 40+ attending!

3. Business challenges

People visited our stand and told us they were having trouble generating up-to-date heat maps. There was also a large number of attendees interested in Software as a Service as an alternative to traditional on-premise licensing.

So what did we learn from #ogLON?

Attendees are attracted to the ease of use of Corso’s ArchiMate plugin. http://www.corso3.com/products/archimate/

Together with the configurable nature of System Architect, ArchiMate® is a simple framework to use and makes a good starting point for supporting Lean Architecture.

Roadmapping and performing impact analysis reduces the influence of risk when executing any business transformation initiative.

We also learnt that customers in the industry are starting to embrace the concept of SaaS offerings as it provides them with a solution that can get them up and running quickly and easily – something we’re keen to pursue – which is why we’re now offering IBM Rational tools on the Corso cloud. Visit our website at http://www.corsocloud.com

http://info.corso3.com/blog/bid/323481/3-interesting-things-we-learned-at-The-Open-Group-London

Manuel Ponchaux, Senior Consultant, Corso


Filed under ArchiMate®, Enterprise Architecture, Standards, Uncategorized

Connect at The Open Group Conference in Sydney (#ogSYD) via Social Media

By The Open Group Conference Team

By attending The Open Group’s conferences, attendees are able to learn from industry experts, understand the latest technologies and standards and discuss and debate current industry trends. One way to maximize the benefits is to make technology work for you. If you are attending The Open Group Conference in Sydney next week, we’ve put together a few tips on how to leverage social media to make networking at the conference easier, quicker and more effective.

Using Twitter at #ogSYD

Twitter is a real-time news-sharing tool that anyone can use. The official hashtag for the conference is #ogSYD. This enables anybody, whether they are physically attending the event or not, to follow what’s happening at The Open Group Conference in Sydney in real-time and interact with each other.

Before the conference, be sure to update your Twitter account to monitor #ogSYD and, of course, to tweet about the conference.

Using Facebook at The Open Group Conference in Sydney

You can also track what is happening at the conference on The Open Group Facebook Page. We will be posting photos from conference events throughout the week. If you’re willing to share your photos with us, we’re happy to post them to our page with a photo credit. Please email your photos, captions, full name and organization to photo (at) opengroup.org!

LinkedIn during The Open Group Conference in Sydney

Motivated by one of the sessions? Interested in what your peers have to say? Start a discussion on The Open Group LinkedIn Group page. We’ll also be sharing interesting topics and questions related to The Open Group Conference as it is happening. If you’re not a member already, requesting membership is easy. Simply go to the group page and click the “Join Group” button. We’ll accept your request as soon as we can!

Blogging during The Open Group Conference in Sydney

Stay tuned for conference recaps here on The Open Group blog. In case you missed a session or you weren’t able to make it to Sydney, we’ll be posting the highlights and recaps on the blog. If you are attending the conference and would like to submit a recap of your own, please contact ukopengroup (at) hotwirepr.com.

If you have any questions about social media usage at the conference, feel free to tweet the conference team @theopengroup.


Filed under Conference

The Open Group Conference in Sydney Plenary Sessions Preview

By The Open Group Conference Team

Taking place April 15-18, 2013, The Open Group Conference in Sydney will bring together industry experts to discuss the evolving role of Enterprise Architecture and how it transforms the enterprise. As the conference quickly approaches, let’s take a deeper look into the plenary sessions that kick-off day one and two. And if you haven’t already, register for The Open Group Conference in Sydney today!

Enterprise Transformation and the Role of Open Standards

By Allen Brown, President & CEO, The Open Group

Enterprise transformation seems to be gathering momentum within the Enterprise Architecture community.  The term, enterprise transformation, suggests the process of fundamentally changing an enterprise.  Sometimes the transformation is dramatic but for most of us it is a steady process. Allen will kick off the conference by discussing how to set expectations, the planning process for enterprise transformation and the role of standards, and provide an overview of ongoing projects by The Open Group’s members.

TOGAF® as a Powerful Tool to Kick-Start Business Transformation

By Peter Haviland, Chief Business Architect, and Martin Keywood, Partner, Ernst & Young

Business transformation is a tricky beast. It requires many people to work together toward a singular vision, and even more people to be aligned to an often multi-year execution program throughout which personal and organizational priorities will change. As a firm with considerable Business Architecture and transformation experience, Ernst & Young (EY) deploys multi-disciplinary teams of functional and technical experts and uses a number of approaches, anchored on the TOGAF framework, to address these issues. This is necessary to get a handle on the complexity inherent in today’s business environment so that stakeholders are aligned and remain actively engaged, past investments in both processes and systems can be maximized, and transformation programs are set up for success and can be driven with sustained momentum.

In this session, Peter and Martin will take us through EY’s Transformation Design approach, an approach that, within 12 weeks, can define a transformation vision, get executives on board, create a high-level multi-domain architecture, broadly outline transformation alternatives and finally provide initial estimates of the work packages necessary to achieve transformation. They will also share case studies and metrics from applying the approach in the financial services, oil and gas, and professional services sectors. The session should interest executives looking to increase buy-in amongst their peers, as well as professionals charged with stakeholder engagement and alignment. It will also show how to use the TOGAF framework in this situation.

Building a More Cohesive Organization Using Business Architecture

By Craig Martin, COO & Chief Architect, Enterprise Architects

In shifting the focus away from Enterprise Architecture being seen purely as an IT discipline, organizations are beginning to formalize the development of Business Architecture practices and outcomes. The Open Group has differentiated between business, IT and enterprise architects through various working groups and certification tracks. However, industry at present is grappling to understand where the discipline of Business Architecture resides in the business and what value it can provide separate from the traditional project-based business analysis focus.

Craig will provide an overview of some of the critical questions being asked by businesses and how these are addressed through Business Architecture. Using both method as well as case study examples, he will show an approach to building more cohesion across the business landscape. Craig will focus on the use of business motivation models, strategic scenario planning and capability based planning techniques to provide input into the strategic planning process.

Other plenary speakers include:

  • Capability Based Strategic Planning in Transforming a Mining Environment by David David, EA Manager, Rio Tinto
  • Development of the National Broadband Network IT Architecture – A Greenfield Telco Transformation by Roger Venning, Chief IT Architect, NBN Co. Ltd
  • Business Architecture in Finance Panel moderated by Chris Forde, VP Enterprise Architecture, The Open Group

More details about the conference can be found here: http://www.opengroup.org/sydney2013


Filed under Conference

3 Steps to Proactively Address Board-Level Security Concerns

By E.G. Nadhan, HP

Last month, I shared the discussions that ensued in a Tweet Jam conducted by The Open Group on Big Data and Security, where the key takeaway was: protecting data is good; protecting information generated from Big Data is priceless. Security concerns around Big Data continue, to the extent that they have become a board-level concern, as explained in this article in ComputerWorldUK. Enterprises must address board-level concerns proactively, and to do so they must provide the business justification for the proactive steps required.


At The Open Group Conference in Sydney in April, the session “Which information risks are shaping our lives?” by Stephen Singam, Chief Technology Officer, HP Enterprise Security Services, Australia, provides great insight into this topic. In this session, Singam analyzes current and emerging information risks while recommending a proactive approach to address them head-on with adversary-centric solutions.

The three steps that enterprises must take to proactively address security concerns are below:

Computing the cost of cyber-crime

The HP Ponemon 2012 Cost of Cyber Crime Study revealed that cyber attacks have more than doubled over a three-year period, with the financial impact increasing by nearly 40 percent. Here are the key takeaways from this research:

  • Cyber-crimes continue to be costly. The average annualized cost of cyber-crime for 56 organizations is $8.9 million per year, with a range of $1.4 million to $46 million.
  • Cyber attacks have become common occurrences. Companies experienced 102 successful attacks per week and 1.8 successful attacks per company per week in 2012.
  • The most costly cyber-crimes are those caused by denial of service, malicious insiders and web-based attacks.

When computing the cost of cyber-crime, enterprises must address the direct, indirect and opportunity costs that result from the loss or theft of information, disruption to business operations, revenue loss and destruction of property, plant and equipment. The following phases of combating cyber-crime must also be factored in to comprehensively determine the total cost (a rough cost roll-up sketch follows the list below):

  1. Detection of patterns of behavior indicating an impending attack through sustained monitoring of the enabling infrastructure
  2. Investigation of the security violation upon occurrence to determine the underlying root cause and take appropriate remedial measures
  3. Incident response to address the immediate situation at hand, communicate the incidence of the attack and raise all applicable alerts
  4. Containment of the attack by controlling its proliferation across the enterprise
  5. Recovery from the damages incurred as a result of the attack to ensure ongoing business operations based upon the business continuity plans in place
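As a way of picturing how those cost categories and phases combine into a single figure, here is a rough roll-up sketch. Its structure follows the categories and phases named above, but every number in it is a placeholder assumption for illustration, not data from the Ponemon study or from HP.

# Illustrative only: a rough per-incident cost roll-up across the phases above.
# All figures are placeholder assumptions, not study data.
DIRECT, INDIRECT, OPPORTUNITY = "direct", "indirect", "opportunity"

incident_costs = {
    "detection":         {DIRECT: 40_000, INDIRECT: 10_000, OPPORTUNITY: 0},
    "investigation":     {DIRECT: 60_000, INDIRECT: 25_000, OPPORTUNITY: 5_000},
    "incident response": {DIRECT: 30_000, INDIRECT: 15_000, OPPORTUNITY: 20_000},
    "containment":       {DIRECT: 20_000, INDIRECT: 10_000, OPPORTUNITY: 15_000},
    "recovery":          {DIRECT: 80_000, INDIRECT: 30_000, OPPORTUNITY: 50_000},
}

# Total per cost category, then the overall cost of the incident.
total_by_category = {
    category: sum(phase[category] for phase in incident_costs.values())
    for category in (DIRECT, INDIRECT, OPPORTUNITY)
}
total_cost = sum(total_by_category.values())

for category, amount in total_by_category.items():
    print(f"{category:>11}: ${amount:,}")
print(f"{'total':>11}: ${total_cost:,}")

Summing a roll-up like this across the incidents an enterprise actually experiences in a year is, in spirit, how an annualized figure such as the $8.9 million average cited above is reached.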

Identifying proactive steps that can be taken to address cyber-crime

  1. “Better get security right,” says HP Security Strategist Mary Ann Mezzapelle in her keynote on Big Data and Security at The Open Group Conference in Newport Beach. Asserting that proactive risk management is the most effective approach, Mezzapelle challenged enterprises to proactively question the presence of shadow IT, data ownership, usage of security tools and standards while taking a comprehensive approach to security end-to-end within the enterprise.
  2. In this ZDNet article, Art Gilliland suggested learning from cyber criminals and understanding their methods, since the very frameworks enterprises strive to comply with (such as ISO and PCI) set a low bar for security that adversaries capitalize on.
  3. Andy Ellis discussed managing risk with psychology instead of brute force in his keynote at the 2013 RSA Conference.
  4. At the same conference, in another keynote, world-renowned game designer and inventor of SuperBetter, Jane McGonigal, suggested that applying the “collective intelligence” that gaming generates can combat security concerns.
  5. In this interview, Bruce Schneier, renowned security guru and author of several books including Liars & Outliers, suggested, “Bad guys are going to invent new stuff — whether we want them to or not.” Should we take a cue from Hollywood and consider the inception of the OODA loop into the security hacker’s mind?

The Balancing Act

Can enterprises afford to take such proactive steps? Or more importantly, can they afford not to?

Enterprises must define their risk management strategy and determine the proactive steps best aligned with their business objectives and information security standards.  This will enable organizations to better assess the cost of executing such measures.  While the actual cost is likely to vary by enterprise, inaction is not an acceptable alternative.  Like all other critical corporate initiatives, these proactive measures must receive the board-level attention they deserve.

Enterprises must balance the cost of executing such proactive measures against the potential cost of data loss and reputational harm. This will ensure that the right proactive measures are taken with executive support.
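
As a rough illustration of that balancing act, the comparison can be sketched as a simple expected-loss calculation: a proactive measure is worth taking when the reduction in expected annual loss it buys exceeds its cost. The probabilities and loss figures below are assumptions chosen only to show the arithmetic; the $8.9 million figure reuses the Ponemon average cited earlier purely as an example input.

    # Hypothetical comparison of the cost of a proactive measure against the
    # expected loss it avoids. All inputs are illustrative assumptions.

    def expected_annual_loss(probability_of_incident, loss_per_incident):
        """Annualized expected loss: likelihood of an incident times its impact."""
        return probability_of_incident * loss_per_incident

    def measure_is_justified(measure_cost, baseline_probability,
                             residual_probability, loss_per_incident):
        """A measure is (crudely) justified when the reduction in expected
        loss exceeds what the measure costs to execute."""
        baseline = expected_annual_loss(baseline_probability, loss_per_incident)
        residual = expected_annual_loss(residual_probability, loss_per_incident)
        return (baseline - residual) > measure_cost

    # Example: a $500,000 program that cuts the chance of a major incident
    # from 30 percent to 10 percent, where the incident would cost $8.9 million.
    print(measure_is_justified(500_000, 0.30, 0.10, 8_900_000))  # True

Real risk models are of course richer than a single probability and a single loss figure, but even this crude comparison frames the decision in business terms that can be presented at board level.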

How about you?  Has your enterprise taken the steps to assess the cost of cybercrime?  Have you considered various proactive steps to combat cybercrime?  Share your thoughts with me in the comments section below.

HP Distinguished Technologist E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.


Join us for The Open Group Conference in Sydney – April 15-18

By The Open Group Conference Team

The Open Group is busy gearing up for the Sydney conference, which will take place on April 15-18, 2013. With over 2,000 Association of Enterprise Architects (AEA) members in Australia, Sydney is an ideal setting for industry experts from around the world to gather and discuss the evolution of Enterprise Architecture and its role in transforming the enterprise. Be sure to register today!

The conference offers roughly 60 sessions on a variety of topics including:

  • Cloud infrastructure as an enabler of innovation in enterprises
  • Simplifying data integration in the government and defense sectors
  • Merger transformation with TOGAF® framework and ArchiMate® modeling language
  • Measuring and managing cybersecurity risks
  • Pragmatic IT road-mapping with ArchiMate modeling language
  • The value of Enterprise Architecture certification within a professional development framework

Plenary speakers will include:

  • Allen Brown, President & CEO, The Open Group
  • Peter Haviland, Chief Business Architect, with Martin Keywood, Partner, Ernst & Young
  • David David, EA Manager, Rio Tinto
  • Roger Venning, Chief IT Architect, NBN Co. Ltd
  • Craig Martin, COO & Chief Architect, Enterprise Architects
  • Chris Forde, VP Enterprise Architecture, The Open Group

The full conference agenda is available here. Tracks include:

  • Finance & Commerce
  • Government & Defense
  • Energy & Natural Resources

And topics of discussion include, but are not limited to:

  • Cloud
  • Business Transformation
  • Enterprise Architecture
  • Technology & Innovation
  • Data Integration/Information Sharing
  • Governance & Security
  • Architecture Reference Models
  • Strategic Planning
  • Distributed Services Architecture

Upcoming Conference Submission Deadlines

Would you like a chance to speak at an Open Group conference? There are upcoming deadlines for speaker proposal submissions for upcoming conferences in Philadelphia and London. To submit a proposal to speak, click here.

  • Philadelphia (July 15-17) – Industry focus: Healthcare, Finance, Government & Defense – Submission deadline: April 5, 2013
  • London (October 21-23) – Industry focus: Finance, Government, Healthcare – Submission deadline: July 8, 2013

 

The agendas for Philadelphia and London are filling up fast, so it is important for proposals to be submitted as early as possible. Proposals received after the deadline dates will still be considered, space permitting; if not, proposals may be carried over to a future conference. Priority will be given to proposals received by the deadline dates and to proposals that include an end-user organization, at least as a co-presenter.


Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when it needs doing, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group’s initiatives, Quantum Lifecycle Management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the big-data processing needed to achieve this aim.

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is in one of the areas that I support as forum director). Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever-changing tools to handle Big Data of ever-increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer, for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it has major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishing boat bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


The Open Group Panel Explores How the Big Data Era Now Challenges the IT Status Quo

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group panel explores how the Big Data era now challenges the IT status quo, or view the on-demand video recording on this discussion here: http://new.livestream.com/opengroup/events/1838807.

We recently assembled a panel of experts to explore how Big Data changes the status quo for architecting the enterprise. The bottom line from the discussion is that large enterprises should not just wade into Big Data as an isolated function, but should anticipate the strategic effects and impacts of Big Data — as well as the simultaneous complicating factors of Cloud Computing and mobile — as soon as possible.

The panel consisted of Robert Weisman, CEO and Chief Enterprise Architect at Build The Vision; Andras Szakal, Vice President and CTO of IBM’s Federal Division; Jim Hietala, Vice President for Security at The Open Group; and Chris Gerty, Deputy Program Manager at the Open Innovation Program at NASA. I served as the moderator.

And this special thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on “Big Data — the transformation we need to embrace today.”

Threaded factors

An interesting thread for me throughout the conference was to factor where Big Data begins and plain old data, if you will, ends. Of course, it’s going to vary quite a bit from organization to organization.

But Gerty from NASA, part of our panel, provided a good example: It’s when you run out of gas with your old data methods, and your ability to deal with the data — and it’s not just the size of the data itself.

Therefore, Big Data means do things differently — not just to manage the velocity and the volume and the variety of the data, but to really think about data fundamentally and differently. And, we need to think about security, risk and governance. If it’s a “boundaryless organization” when it comes to your data, either as a product or service or a resource, then the control and management of which data should be exposed, which should be opened, and which should be very closely guarded all need to be factored, determined and implemented.

Here are some excerpts from the on-stage discussion:

Dana Gardner: You mentioned that Big Data to you is not a factor of the size, because NASA’s dealing with so much. It’s when you run out of steam, as it were, with the methodologies. Maybe you could explain more. When do you know that you’ve actually run out of steam with the methodologies?

Gerty: When we collect data, we have some sort of goal in mind of what we might get out of it. When we put the pieces from the data together, it either maybe doesn’t fit as well as you thought or you are successful and you continue to do the same thing, gathering archives of information.

At that point, where you realize there might even be something else that you want to do with the data, different than what you planned originally, that’s when we have to pivot a little bit and say, “Now I need to treat this as a living archive. It’s an ‘it may live beyond me’ type of thing.” At that point, I think you treat it as setting up the infrastructure for being used later, whether it’d be by you or someone else. That’s an important transition to make and might be what one could define as Big Data.

Gardner: Andras, does that square with where you are in your government interactions — that data now becomes a different type of resource, and that you need to know when to do things differently?

Szakal: The importance of data hasn’t changed. The data itself, the veracity of the data, is still important. Transactional data will always need to exist. The difference is that you have certainly the three or four Vs, depending on how you look at it, but the importance of data is in its veracity, and your ability to understand or to be able to use that data before the data’s shelf life runs out.

Some data has a shelf life that’s long lived. Other data has very little shelf life, and you would use different approaches to being able to utilize that information. It’s ultimately not about the data itself, but it’s about gaining deep insight into that data. So it’s not storing data or manipulating data, but applying those analytical capabilities to data.

Gardner: Bob, we’ve seen the price points on storage go down so dramatically. We’ve seen people just decide to hold on to data that they wouldn’t have before, simply because they can and they can afford to do so. That means we need to try to extract value and use that data. From the perspective of an enterprise architect, how are things different now, vis-à-vis this much larger set of data and variety of data, when it comes to planning and executing as architects?

Weisman: One of the major issues is that normally organizations are holding two orders of magnitude more data than they need. It’s a huge overhead, both in terms of the applications architecture, which has a code base larger than it should be, and also from the technology architecture that is supporting a horrendous number of servers and a whole bunch of technology stuff that they don’t need.

The issue for the architect is to figure out what data is useful, institute a governance process so that you can have data lifecycle management and proper disposition, focus the organization on the information, data and knowledge that is basically going to provide business value to the organization, and help them innovate and have a competitive advantage.

Can’t afford it

And in terms of government, just improve service delivery, because there’s waste right now on information infrastructure, and we can’t afford it anymore.

Gardner: So it’s difficult to know what to keep and what not to keep. I’ve actually spoken to a few people lately who want to keep everything, just because they want to mine it, and they are willing to spend the money and effort to do that.

Jim Hietala, when people do get to this point of trying to decide what to keep, what not to keep, and how to architect properly for that, they also need to factor in security. It shouldn’t come later in the process. It should come early. What are some of the precepts that you think are important in applying good security practices to Big Data?

Hietala: One of the big challenges is that many of the big-data platforms weren’t built from the get-go with security in mind. So some of the controls that you’ve had available in your relational databases, for instance, you move over to the Big Data platforms and the access control authorizations and mechanisms are not there today.

Planning the architecture, and looking at bringing in third-party controls to give you the security mechanisms that you are used to in your older platforms, is something that organizations are going to have to do. It’s really an evolving and emerging thing at this point.

Gardner: There are a lot of unknown unknowns out there, as we discovered with our tweet chat last month. Some people think that the data is just data, and you apply the same security to it. Do you think that’s the case with Big Data? Is it just another follow-through of what you always did with data in the first place?

Hietala: I would say yes, at a conceptual level, but it’s like what we saw with virtualization. When there was a mad rush to virtualize everything, many of those traditional security controls didn’t translate directly into the virtualized world. The same thing is true with Big Data.

When you’re talking about those volumes of data, applying encryption and various other security controls, you have to think about how those things are going to scale. That may require new solutions from new technologies and that sort of thing.

Gardner: Chris Gerty, when it comes to that governance, security, and access control, are there any lessons that you’ve learned that you are aware of in terms of the best of openness, but also with the ability to manage the spigot?

Gerty: Spigot is probably a dangerous term to use, because it implies that all data is treated the same. The sooner that you can tag the data as either sensitive or not, mostly coming from the person or team that’s developed or originated the data, the better.

Kicking the can

Once you have it on a hard drive, once you get crazy about storing everything, if you don’t know where it came from, you’re forced to put it into a secure environment. And that’s just kicking the can down the road. It’s really a disservice to people who might use the data in a useful way to address their problems.

We constantly have satellites that are made for one purpose. They send all the data down. It’s controlled either for security or for intellectual property (IP), so someone can write a paper. Then, after the project doesn’t get funded or it just comes to a nice graceful close, there is that extra step, which is almost a responsibility of the originators, to make it useful to the rest of the world.

Gardner: Let’s look at Big Data through the lens of some other major trends right now. Let’s start with Cloud. You mentioned that at NASA, you have your own private Cloud that you’re using a lot, of course, but you’re also now dabbling in commercial and public Clouds. Frankly, the price points that these Cloud providers are offering for storage and data services are pretty compelling.

So we should expect more data to go to the Cloud. Bob, from your perspective, as organizations and architects have to think about data in this hybrid Cloud on-premises off-premises, moving back and forth, what do you think enterprise architects need to start thinking about in terms of managing that, planning for the right destination of data, based on the right mix of other requirements?

Weisman: It’s a good question. As you said, the price point is compelling, but the security and privacy of the information is something else that has to be taken into account. Where is that information going to reside? You have to have very stringent service-level agreements (SLAs) and in certain cases, you might say it’s a price point that’s compelling, but the risk analysis that I have done means that I’m going to have to set up my own private Cloud.

Right now, everybody’s saying the public Cloud is going to be the way to go. Vendors are going to have to be very sensitive to that and many are, at this point in time, addressing a lot of the needs of some of the large client bases. So it’s not one-size-fits-all and it’s more than just a price for service. Architecture can bring down the price pretty dramatically, even within an enterprise.

Gardner: Andras, how do the Cloud and Big Data come together in a way that’s intriguing to you?

Szakal: Actually it’s a great question. We could take the rest of the 22 minutes talking on this one question. I helped lead the President’s Commission on Big Data that Steve Mills from IBM and — I forget the name of the executive from SAP — led. We intentionally tried to separate Cloud from Big Data architecture, primarily because we don’t believe that, in all cases, Cloud is the answer to all things Big Data. You have to define the architecture that’s appropriate for your business needs.

However, it also depends on where the data is born. Take many of the investments IBM has made into enterprise market management, for example, Coremetrics, several of these services that we now offer for helping customers understand deep insight into how their retail market or supply chain behaves.

Born in the Cloud

All of that information is born in the Cloud. But if you’re talking about actually using Cloud as infrastructure and moving around huge sums of data or constructing some of these solutions on your own, then some of the ideas that Bob conveyed are absolutely applicable.

I think it becomes prohibitive to do that and easier to stand up a hybrid environment for managing the amount of data. But I think that you have to think about whether your data is real-time data, whether it’s data that you could apply some of these new technologies like Hadoop to, Hadoop MapReduce-type solutions, or whether it’s traditional data warehousing.

Data warehouses are going to continue to exist and they’re going to continue to evolve technologically. You’re always going to use a subset of data in those data warehouses, and it’s going to be an applicable technology for many years to come.

Gardner: So suffice it to say, an enterprise architect who is well versed in both Cloud infrastructure requirements, technologies, and methods, as well as Big Data, will probably be in quite high demand. That specialization in one or the other isn’t as valuable as being able to cross-pollinate between them.

Szakal: Absolutely. It’s enabling our architects and finding deep individuals who have this unique set of skills, analytics, mathematics, and business. Those individuals are going to be the future architects of the IT world, because analytics and Big Data are going to be integrated into everything that we do and become part of the business processing.

Gardner: Well, that’s a great segue to the next topic that I am interested in, and it’s around mobility as a trend and also application development. The reason I lump them together is that I increasingly see developers being tasked with mobile first.

When you create a new app, you have to remember that this is going to run in the mobile tier and you want to make sure that the requirements, the UI, and the complexity of that app don’t go beyond the ability of the mobile app and the mobile user. This is interesting to me, because data now has a different relationship with apps.

We used to think of apps as creating data and then the data would be stored and it might be used or integrated. Now, we have applications that are simply there in order to present the data and we have the ability now to present it to those mobile devices in the mobile tier, which means it goes anywhere, everywhere all the time.

Let me start with you, Jim, because it’s security and risk, but it’s also just rethinking the way we use data in a mobile tier. If we can do it safely, and that’s a big IF, how important should it be for organizations to start thinking about making this data available to all of these devices and pouring it out into that mobile tier as much as possible?

Hietala: In terms of enabling the business, it’s very important. There are a lot of benefits that accrue from accessing your data from whatever device you happen to be on. To me, it is that question of “if,” because now there’s a whole lot of problems to be solved relative to the data floating around anywhere on Android, iOS, whatever the platform is, and the organization being able to lock down their data on those devices, forgetting about whether it’s the organization device or my device. There’s a set of issues around that that the security industry is just starting to get their arms around today.

Mobile ability

Gardner: Chris, any thoughts about this mobile ability that the data gets more valuable the more you can use it and apply it, and then the more you can apply it, the more data you generate that makes the data more valuable, and we start getting into that positive feedback loop?

Gerty: Absolutely. It’s almost an appreciation of what more people could do and get to the problem. We’re getting to the point where, if it’s available on your desktop, you’re going to find a way to make it available on your device.

Those same security questions probably need to be answered anyway, but making it mobile compatible is almost an acknowledgment that there will be someone who wants to use it. So let me go that extra step to make it compatible and see what I get from them. It’s more of a cultural benefit that you get from making things compatible with mobile.

Gardner: Any thoughts about what developers should be thinking by trying to bring the fruits of Big Data through these analytics to more users rather than just the BI folks or those that are good at SQL queries? Does this change the game by actually making an application on a mobile device, simple, powerful but accessing this real time updated treasure trove of data?

Gerty: I always think of the astronaut on the moon. He’s got a big, bulky glove and he might have a heads-up display in front of him, but he really needs to know exactly a certain piece of information at the right moment, dealing with bandwidth issues, dealing with the environment, a foggy helmet, whatever.

It’s very analogous to what the day-to-day professional will use trying to find out that quick e-mail he needs to know or which meeting to go to — which one is more important — and it all comes down to putting your developer in the shoes of the user. So anytime you can get interaction between the two, that’s valuable.

Weisman: From an Enterprise Architecture point of view, my background is mainly defense and government, but defense mobile computing has been around for decades. So you’ve always been dealing with that.

The main thing is that in many cases, if they’re coming up with information, the whole presentation layer is turning into another architecture domain with information visualization and also with your security controls, with an integrated identity management capability.

It’s like you were saying about the astronaut getting it right. He doesn’t need to know everything that’s happening in the world. He needs to know about his heads-up display, the stuff that’s relevant to him.

So it’s getting the right information to the right person in an authorized manner, in a way that he can visualize and make sense of that information, be it straight data, analytics, or whatever. The presentation layer, ergonomics and visual communication are going to become very important in the future for that. There are also a lot of problems; rather than doing it at the application level, you’re doing it entirely in one layer.

Governance and security

Gardner: So clearly the implications of data are cutting across how we think about security, how we think about UI, how we factor in mobility. What we now think about in terms of governance and security, we have to do differently than we did with older data models.

Jim Hietala, what about the impact on spurring people toward more virtualized desktop delivery, if you don’t want to have the data on that end device, if you want to solve some of the issues about control and governance, and if you want to be able to manage just how much data gets into that UI, not too much, not too little?

Do you think that some of these concerns that we’re addressing will push people to look even harder, maybe more aggressively, at how they go to desktop and application virtualization, as they say, keep it on the server, and deliver out just the deltas?

Hietala: That’s an interesting point. I’ve run across a startup in the last month or two that is doing just that. The whole value proposition is to virtualize the environment. You get virtual gold images. You don’t have to worry about what’s actually happening on the physical device, and you know when the devices connect. The security threat goes away. So we may see more of that as a solution to that.

Gardner: Andras, do you see that some of the implications of Big Data, far-fetched as it may be, are propelling people to cultivate their servers more and virtualize their apps, their data, and their desktops right up to the end devices?

Szakal: Yeah, I do. I see IBM providing solutions for virtual desktop, but I think it was really a security question you were asking. You’re certainly going to see an additional number of virtualized desktop environments.

Ultimately, our network still is not stable enough or at a high enough bandwidth to really make that useful exercise for all but the most menial users in the enterprise. From a security point of view, there is a lot to be still solved.

And part of the challenge in the Cloud environment that we see today is the proliferation of virtual machines (VMs) and the inability to actually contain the security controls within those machines and across these machines from an enterprise perspective. So we’re going to see more solutions proliferate in this area and to try to solve some of the management issues, as well as the security issues, but we’re a long ways away from that.

Gerty: Big Data, by itself, isn’t magical. It doesn’t have the answers just by being big. If you need more, you need to pry deeper into it. That’s the example. They realized early enough that they were able to make something good.

Gardner: Jim Hietala, any thoughts about examples that illustrate where we’re going and why this is so important?

Hietala: Being a security guy, I tend to talk about scare stories, horror stories. One example from last year that struck me. One of the major retailers here in the U.S. hit the news for having predicted, through customer purchase behavior, when people were pregnant.

They could look and see, based upon buying 20 things, that if you’re buying 15 of these and your purchase behavior has changed, they can tell that. The privacy implications to that are somewhat concerning.

An example was that this retailer was sending out coupons related to somebody being pregnant. The teenage girl, who was pregnant, hadn’t told her family yet. The father found it. There was alarm in the household and at the local retail store, when the father went and confronted them.

Privacy implications

There are privacy implications from the use of Big Data. When you get powerful new technology in marketing people’s hands, things sometimes go awry. So I’d throw that out just as a cautionary tale that there is that aspect to this. When you can see across people’s buying transactions, things like that, there are privacy considerations that we’ll have to think about, and that we really need to think about as an industry and a society.


Open Group Panel Explores Changing Field of Risk Management and Analysis in the Era of Big Data

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Panel Explores Changing Field of Risk Management and Analysis in Era of Big Data

This is a transcript of a sponsored podcast discussion on the threats from and promise of Big Data in securing enterprise information assets, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference itself is focusing on Big Data: the transformation we need to embrace today.

We’re here now with a panel of experts to explore new trends and solutions in the area of risk management and analysis. We’ll learn how large enterprises are delivering risk assessments and risk analysis, and we’ll see how Big Data can be both a source of risk and a tool for better understanding and mitigating risks.

With that, please join me in welcoming our panel. We’re here with Jack Freund, PhD, the Information Security Risk Assessment Manager at TIAA-CREF. Welcome, Jack.

Jack Freund: Hello Dana, how are you?

Gardner: I’m great. Glad you could join us.

We are also here with Jack Jones, Principal of CXOWARE. He has more than nine years of experience as a Chief Information Security Officer and is the inventor of the Factor Analysis of Information Risk (FAIR) framework. Welcome, Jack.

Jack Jones: Thank you.

Gardner: And we’re also here with Jim Hietala, Vice President, Security for The Open Group. Welcome, Jim.

Jim Hietala: Thanks, Dana.

Gardner: All right, let’s start out with looking at this from a position of trends. Why is the issue of risk analysis so prominent now? What’s different from, say, five years ago? And we’ll start with you, Jack Jones.

Jones: The information security industry has struggled with getting the attention of and support from management and businesses for a long time, and it has finally come around to the fact that the executives care about loss exposure — the likelihood of bad things happening and how bad those things are likely to be.

It’s only when we speak of those terms or those issues in terms of risk, that we make sense to those executives. And once we do that, we begin to gain some credibility and traction in terms of getting things done.

Gardner: So we really need to talk about this in the terms that a business executive would appreciate, not necessarily an IT executive.

Effects on business

Jones: Absolutely. They’re tired of hearing about vulnerabilities, hackers, and that sort of thing. It’s only when we can talk in terms of the effect on the business that it makes sense to them.

Gardner: Jack Freund, I should also point out that you have more than 14 years in enterprise IT experience. You’re a visiting professor at DeVry University and you chair a risk-management subcommittee for ISACA? Is that correct?

Freund: ISACA, yes.

Gardner: And do you agree?

Freund: The problem that we have as a profession, and I think it’s a big problem, is that we have allowed ourselves to escape the natural trend that the other IT professionals have already taken.

There was a time, years ago, when you could code in the basement, and nobody cared much about what you were doing. But now, largely speaking, developers and systems administrators are very focused on meeting the goals of the organization.

Security has been allowed to miss that boat a little. We have been allowed to hide behind this aura of a protector and of an alerter of terrible things that could happen, without really tying ourselves to the problem that the organizations are facing and how can we help them succeed in what they’re doing.

Gardner: Jim Hietala, how do you see things that are different now than a few years ago when it comes to risk assessment?

Hietala: There are certainly changes on the threat side of the landscape. Five years ago, you didn’t really have hacktivism or this notion of an advanced persistent threat (APT).

That highly skilled attacker taking aim at governments and large organizations didn’t really exist – or didn’t exist to the degree it does today. So that has changed.

You also have big changes to the IT platform landscape, all of which bring new risks that organizations need to really think about. The mobility trend, the Cloud trend, the big-data trend that we are talking about today, all of those things bring new risk to the organization.

As Jack Jones mentioned, business executives don’t want to hear about, “I’ve got 15 vulnerabilities in the mobility part of my organization.” They want to understand what’s the risk of bad things happening because of mobility, what we’re doing about it, and what’s happening to risk over time?

So it’s a combination of changes in the threats and attackers, as well as changes to the IT landscape, that requires us to take a different look at how we measure and present risk to the business.

Gardner: Because we’re at a big-data conference, do you share my perception, Jack Jones, that Big Data can be a source of risk and vulnerability, but that the analytics and business intelligence (BI) tools we’re employing with Big Data can also be used to alert you to risks and provide a strong tool for better understanding your true risk setting or environment?

Crown jewels

Jones: You are absolutely right. You think of Big Data and, by definition, it’s where your crown jewels, and everything that leads to the crown jewels from an information perspective, are going to be found. It’s like one-stop shopping for the bad guy, if you want to look at it in that context. It definitely needs to be protected. The architecture surrounding it, and its integration across a lot of different platforms and such, can be leveraged and will probably result in a complex landscape to try and secure.

There are a lot of ways into that data, but if you can leverage that same Big Data architecture as an approach to information security, with log data and other threat and vulnerability data, you should be able to make some significant gains in terms of how well-informed your analyses and your decisions are, based on that data.

Gardner: Jack Freund, do you share that? How does Big Data fit into your understanding of the evolving arena of risk assessment and analysis?

Freund: If we fast-forward it five years, and this is even true today, a lot of people on the cutting edge of Big Data will tell you the problem isn’t so much building everything together and figuring out what it can do. They are going to tell you that the problem is what we do once we figure out everything that we have. This is the problem that we have traditionally had on a much smaller scale in information security. When everything is important, nothing is important.

Gardner: To follow up on that, where do you see the gaps in risk analysis in large organizations? In other words, what parts of organizations aren’t being assessed for risk and should be?

Freund: The big problem that exists largely today in the way risk assessments are done is the focus on labels. We want to quickly address the low, medium, and high things and know where they are. But the problem is that there are inherent problems in the way that we think about those labels, without doing any of the analysis legwork.

I think what’s really missing is that true analysis. If the system goes offline, do we lose money? If the system becomes compromised, what are the cost-accounting things that will happen that allow us to figure out how much money we’re going to lose?

That analysis work is largely missing. That’s the gap. The prevailing mindset is that if a control is not in place, then there’s a risk that must be addressed in some fashion. So we end up with these very long lists of horrible, terrible things that can be done to us in all sorts of different ways, without any relevance to the overall business of the organization.

Every day, our organizations are out there selling products and offering services, which is, in and of itself, its own risky venture. So tying what we do from an information security perspective to that is critical, not just for the success of the organization, but for the success of our profession.

Gardner: So we can safely say that large companies are probably pretty good at cost-benefit analysis or they wouldn’t be successful. Now, I guess we need to ask them to take that a step further and do a cost-risk analysis, in business terms, being mindful that their IT systems might be a much larger part of that than they had once considered. Is that fair, Jack?

Risk implications

Jones: Businesses have been making these decisions, chasing the opportunity, but generally, without any clear understanding of the risk implications, at least from the information security perspective. They will have us in the corner screaming and throwing red flags in there, and talking about vulnerabilities and threats from one thing or another.

But, we come to the table with red, yellow, and green indicators, and on the other side of the table, they’ve got numbers. Well, here is what we expect to earn in revenue from this initiative, and the information security people are saying it’s crazy. How do you normalize the quantitative revenue gain versus red, yellow, and green?

Gardner: Jim Hietala, do you see it in the same red, yellow, green or are there some other frameworks or standard methodologies that The Open Group is looking at to make this a bit more of a science?

Hietala: Probably four years ago, we published what we call the Risk Taxonomy Standard, which is based upon FAIR, the risk management framework that Jack Jones invented. So, we’re big believers in bringing that level of precision to doing risk analysis. Having just gone through training for FAIR myself, as part of the standards effort that we’re doing around certification, I can say that it really brings a level of precision and a depth of analysis to risk analysis that’s been lacking frequently in IT security and risk management.

Gardner: We’ve talked about how organizations need to be mindful that their risks are higher and different than in the past and we’ve talked about how standardization and methodologies are important, helping them better understand this from a business perspective, instead of just a technology perspective.

But, I’m curious about a cultural and organizational perspective. Whose job should this fall under? Who is wearing the white hat in the company and can rally the forces of good and make all the bad things managed? Is this a single person, a cultural, an organizational mission? How do you make this work in the enterprise in a real-world way? Let’s go to you, Jack Freund.

Freund: The profession of IT risk management is changing. That profession will have to sit between the business and information security inclusive of all the other IT functions that make that happen.

In order to be successful sitting between these two groups, you have to be able to speak the language of both of those groups. You have to be able to understand profit and loss and capital expenditure on the business side. On the IT risk side, you have to be technical enough to do all those sorts of things.

But I think the sum total of those two things is probably only about 50 percent of the job of IT risk management today. The other 50 percent is communication. Finding ways to translate that language and to understand the needs and concerns of each side of that relationship is really the job of IT risk management.

To answer your question, I think it’s absolutely the job of IT risk management to do that. From my own experiences with the FAIR framework, I can say that using FAIR is the Rosetta Stone for speaking between those two groups.

Necessary tools

It gives you the tools necessary to speak in the insurance and risk terms that the business appreciates. And it gives you the ability to be as technical and just nerdy, if you will, as you need to be in order to talk to IT security and the other IT functions, in order to make sure everybody is on the same page and everyone feels like their concerns are represented in the risk-assessment functions that are happening.

Gardner: Jack Jones, can you add to that?

Jones: I agree with what Jack said wholeheartedly. I would add, though, that integration or adoption of something like this is a lot easier the higher up in the organization you go.

For CFOs traditionally, their neck is most clearly on the line for risk-related issues within most organizations. At least in my experience, if you get their ear on this and present the information security data analyses to them, they jump on board, they drive it through the organization, and it’s just brain-dead easy.

If you try to drive it up through the ranks, maybe you get an enthusiastic supporter in the information security organization, especially if it’s below the CISO level, and they try a grassroots sort of effort to bring it in, it’s a tougher thing. It can still work. I’ve seen it work very well, but, it’s a longer row to hoe.

Gardner: There has been a lot of research, and many studies and surveys, on data breaches. What are some of the best sources, or maybe not so good sources, for actually measuring this? How do you know if you’re doing it right? How do you know if you’re moving from yellow to green, instead of to red? To you, Jack Freund.

Freund: There are a couple of things in that question. The first is there’s this inherent assumption in a lot of organizations that we need to move from yellow to green, and that may not be the case. So, becoming very knowledgeable about the risk posture and the risk tolerance of the organization is a key.

That's part of the official mindset of IT security. When an information security person graduates today, they are minted knowing that there are a lot of bad things out there, and their goal in life is to reduce them. But that may not be the case. The case may very well be that things are okay now, but we have bigger fish to fry over here that we're going to focus on. So, that's one thing.

The second thing, and it's a very good question, is how do we know that we're getting better? How do we trend that over time? Overall, measuring that value for the organization means being able to show a reduction in risk, or at least a reduction of risk to the risk-tolerance levels of the organization.

Calculating and understanding that requires something that I always phrase as we have to become comfortable with uncertainty. When you are talking about risk in general, you’re talking about forward-looking statements about things that may or may not happen. So, becoming comfortable with the fact that they may or may not happen means that when you measure them today, you have to be willing to be a little bit squishy in how you’re representing that.

In FAIR and in other academic works, they talk about using ranges to do that. So things like high, medium, and low could be represented in terms of a minimum, maximum, and most likely value. And that tends to be very, very effective. People can respond to that fairly well.
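As a rough illustration of the range-based estimates Freund describes, here is a minimal Python sketch; the numbers are hypothetical, and a triangular distribution is just one convenient way to sample a minimum / most likely / maximum range.

    import random

    # Hypothetical estimate of annual loss-event frequency, expressed as a range
    # (minimum, most likely, maximum) rather than a single point value.
    freq_min, freq_likely, freq_max = 0.5, 2.0, 6.0   # loss events per year

    # Sample from a triangular distribution over that range; note the argument
    # order of random.triangular is (low, high, mode).
    samples = [random.triangular(freq_min, freq_max, freq_likely) for _ in range(10_000)]

    print(f"Mean sampled frequency: {sum(samples) / len(samples):.2f} events/year")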

Gathering data

Jones: With regard to the data sources, there are a lot of people out there doing these sorts of studies, gathering data. The problem that’s hamstringing that effort is the lack of a common set of definitions, nomenclature, and even taxonomy around the problem itself.

You will have one study that has defined threat, vulnerability, or whatever differently from some other study, and so the data can't be normalized. It really harms the utility of it. I see data out there and I think, "That looks like it could be really useful." But I hesitate to use it, because I don't understand it; they don't publish their definitions, their approach, or how they went after it.

There's just so much superficial thinking in the profession on this, once you dig under the covers. Too often, I run into stuff that just can't be defended. It doesn't make sense, and therefore the data can't be used. It's an unfortunate situation.

I do think we're heading in a positive direction. FAIR can provide a normalizing structure for that sort of thing. The VERIS framework, which, by the way, is also derived in part from FAIR, has gained real traction in terms of the quality of the research they have done and the data they're generating. We're headed in the right direction, but we've got a long way to go.

Gardner: Jim Hietala, we’re seemingly looking at this on a company-by-company basis. But, is there a vertical industry slice or industry-wide slice where we could look at what’s happening to everyone and put some standard understanding, or measurement around what’s going on in the overall market, maybe by region, maybe by country?

Hietala: There are some industry-specific initiatives and what’s really needed, as Jack Jones mentioned, are common definitions for things like breach, exposure, loss, all those, so that the data sources from one organization can be used in another, and so forth. I think about the financial services industry. I know that there is some information sharing through an organization called the FS-ISAC about what’s happening to financial services organizations in terms of attacks, loss, and those sorts of things.

There's an opportunity for that on a vertical-by-vertical basis. But, like Jack said, there is a long way to go. In some industries, healthcare for instance, you are so far from that, it's ridiculous. In the US, the HIPAA security rule says you must do a risk assessment. So hospitals do an annual risk assessment, stick the binder on the shelf, and don't think much about information security between those annual risk assessments. That's a generalization, but various industries are at different places on a continuum of maturity in their risk management approaches.

Gardner: As we get better at having a common understanding of the terms and the measurements, and we share more data, let's go back to this notion of how to communicate this effectively to the people who can use it and exercise change management as a result. That could be the CFO, the CEO, what have you, depending on the organization.

Do you have any examples? Can we look to an organization that's done this right, examine their practices, the way they've communicated it, some of the tools they've used, and say, "Aha, they're headed in the right direction; maybe we could follow them a little bit"? Let's start with you, Jack Freund.

Freund: I have worked and consulted for various organizations that have done risk management at different levels. The ones that have embraced FAIR tend to be the ones that overall feel that risk is an integral part of their business strategy. And I can give a couple of examples of scenarios that have played out that I think have been successful in the way they have been communicated.

Coming to terms

The key thing to keep in mind is that, as a security professional, you're trained to feel like you need results. But the results for the IT risk management professional are different. The result is, "I've communicated this effectively, so I am done." Then whatever the results are, they are the results that needed to be. And that's a really hard thing to come to terms with.

I’ve been involved in large-scale efforts to assess risk for a Cloud venture. We needed to move virtually every confidential record that we have to the Cloud in order to be competitive with the rest of our industry. If our competitors are finding ways to utilize the Cloud before us, we can lose out. So, we need to find a way to do that, and to be secure and compliant with all the laws and regulations and such.

Through that scenario, one of the things that came out was that key ownership became really, really important. We had the opportunity to look at the various control structures, and we analyzed them using FAIR. What we ended up with was sort of a long-tail risk. Most people will probably do their job right over a long enough period of time. But over that same long period, the odds of somebody making a mistake not in your favor are fairly high, though not so high that you can't make the move.

But the problem became that the loss side, the side that typically gets ignored with traditional risk-assessment methodologies, was so significant that the organization needed to make some judgment around it, and they needed a sense of what we had to do in order to minimize it.

That became a big point of discussion for us, and it drove the conversation away from "bad things could happen." We didn't bury the lead. The lead was that this is the most important thing to this organization in this particular scenario.

So, let’s talk about things we can do. Are we comfortable with it? Do we need to make any sort of changes? What are some control opportunities? How much do they cost? This is a significantly more productive conversation than just, “Here is a bunch of bad things that happen. I’m going to cross my arms and say no.”

Gardner: Jack Jones, examples at work?

Jones: In an organization that I’ve been working with recently, their board of directors said they wanted a quantitative view of information security risk. They just weren’t happy with the red, yellow, green. So, they came to us, and there were really two things that drove them there. One was that they were looking at cyber insurance. They wanted to know how much cyber insurance they should take out, and how do you figure that out when you’ve got a red, yellow, green scale?

They were able to do a series of analyses on a population of the scenarios that they thought were relevant in their world, get an aggregate view of their annualized loss exposure, and make a better informed decision about that particular problem.

Gardner: I'm curious how prevalent cyber insurance is. Is that going to have a leveling effect on the industry, where people speak a common language, the equivalent of actuarial tables, but for enterprise security and cyber security?

Jones: One would dream and hope, but at this point, what I’ve seen out there in terms of the basis on which insurance companies are setting their premiums and such is essentially the same old “risk assessment” stuff that the industry has been doing poorly for years. It’s not based on data or any real analysis per se, at least what I’ve run into. What they do is set their premiums high to buffer themselves and typically cover as few things as possible. The question of how much value it’s providing the customers becomes a problem.

Looking to the future

Gardner: We're coming up on our time limit. So let's quickly look to the future. Is there such a thing as risk management as a service? Can we outsource this? Is there a way in which moving more of IT into Cloud or hybrid models would mitigate risk, because the Cloud provider would standardize? Then many players in that environment, those who were buying those services, would be under that same umbrella? Let's start with you, Jim Hietala. What's the future of this, and what do the Cloud trends bring to the table?

Hietala: I'd start with a maxim that comes out of the financial services industry, which is that you can outsource the function, but you still own the risk. That's an unfortunate reality. You can throw things out in the Cloud, but it doesn't absolve you from understanding your risk and then doing things to manage it, or to transfer it if there's insurance or whatever the case may be.

That’s just a reality. Organizations in the risky world we live in are going to have to get more serious about doing effective risk analysis. From The Open Group standpoint, we see this as an opportunity area.

As I mentioned, we’ve standardized the taxonomy piece of FAIR. And we really see an opportunity around the profession going forward to help the risk-analysis community by further standardizing FAIR and launching a certification program for a FAIR-certified risk analyst. That’s in demand from large organizations that are looking for evidence that people understand how to apply FAIR and use it in doing risk analyses.

Gardner: Jack Freund, looking into your crystal ball, how do you see this discipline evolving?

Freund: I always try to consider things as they exist within other systems. Risk is a system of systems. There are a series of pressures that are applied, and a series of levers that are thrown in order to release that sort of pressure.

Risk will always be owned by the organization that is offering the service. If we decide at some point that we can move to the Cloud and all these other things, we need to look to the legal system: the series of pressures it's going to apply, who is going to own that risk, and how that plays itself out.

If we look to the Europeans and the way they're managing risk and compliance, they're as strict as we in the United States think they may be about things, but there's still a lot of leeway in the way many of the laws are written. You're still being asked to do things that are reasonable. You're still being asked to do things that are standard for your industry. But we'd still like the ability to know what that is, and I don't think that's going to go away anytime soon.

Judgment calls

We’re still going to have to make judgment calls. We’re still going to have to do 100 things with a budget for 10 things. Whenever that happens, you have to make a judgment call. What’s the most important thing that I care about? And that’s why risk management exists, because there’s a certain series of things that we have to deal with. We don’t have the resources to do them all, and I don’t think that’s going to change over time. Regardless of whether the landscape changes, that’s the one that remains true.

Gardner: The last word to you, Jack Jones. It sounds as if we’re continuing down the path of being mostly reactive. Is there anything you can see on the horizon that would perhaps tip the scales, so that the risk management and analysis practitioners can really become proactive and head things off before they become a big problem?

Jones: If we were to take a snapshot at any given point in time of an organization’s loss exposure, how much risk they have right then, that’s a lagging indicator of the decisions they’ve made in the past, and their ability to execute against those decisions.

We can do some great root-cause analysis around that and ask how we got there. But we can also turn that coin around and ask how good we are at making well-informed decisions and then executing against them, and then ask what that implies from a risk perspective downstream.

If we understand the relationship between our current state and past and future states, and we have those linkages defined, especially if we have an analytic framework underneath it, we can do some marvelous what-if analysis.

What if this variable changed in our landscape? Let's run a few thousand Monte Carlo simulations against that and see what comes up. What does that look like? Well, then let's change this other variable and see which combination of dials, when we turn them, makes us most robust to change in our landscape.

But again, we can’t begin to get there, until we have this foundational set of definitions, frameworks, and such to do that sort of analysis. That’s what we’re doing with FAIR, but without some sort of framework like that, there’s no way you can get there.
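A minimal Python sketch of the kind of what-if analysis Jones describes: simulate annualized loss under a baseline set of range estimates, change one variable (here, a control assumed to reduce loss-event frequency), and compare the resulting loss distributions. The frequency and magnitude ranges, and the effect attributed to the control, are entirely invented for illustration.

    import random

    def annual_losses(freq, magnitude, runs=5_000):
        """Monte Carlo sketch of annualized loss, with (min, most likely, max) inputs."""
        results = []
        for _ in range(runs):
            events = round(random.triangular(freq[0], freq[2], freq[1]))
            results.append(sum(random.triangular(magnitude[0], magnitude[2], magnitude[1])
                               for _ in range(events)))
        return sorted(results)

    # Baseline scenario versus the same scenario with a control that lowers
    # event frequency; all figures are hypothetical.
    baseline = annual_losses(freq=(0.5, 2, 6), magnitude=(10_000, 50_000, 400_000))
    with_control = annual_losses(freq=(0.1, 0.5, 2), magnitude=(10_000, 50_000, 400_000))

    for label, losses in (("baseline", baseline), ("with control", with_control)):
        mean = sum(losses) / len(losses)
        p95 = losses[int(0.95 * len(losses))]
        print(f"{label:>12}: mean ${mean:,.0f}, 95th percentile ${p95:,.0f}")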

Gardner: I am afraid we’ll have to leave it there. We’ve been talking with a panel of experts on how new trends and solutions are emerging in the area of risk management and analysis. And we’ve seen how new tools for communication and using Big Data to understand risks are also being brought to the table.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference in Newport Beach, California. I’d like to thank our panel: Jack Freund, PhD, Information Security Risk Assessment Manager at TIAA-CREF. Thanks so much Jack.

Freund: Thank you, Dana.

Gardner: We’ve also been speaking with Jack Jones, Principal at CXOWARE.

Jones: Thank you. Thank you, pleasure to be here.

Gardner: And last, Jim Hietala, the Vice President for Security at The Open Group. Thanks.

Hietala: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions; your host and moderator through these thought leadership interviews. Thanks again for listening and come back next time.


On Demand Broadcasts from Day One at The Open Group Conference in Newport Beach

By The Open Group Conference Team

Since not everyone could make the trip to The Open Group Conference in Newport Beach, we’ve put together a recap of day one’s plenary speakers. Stay tuned for more recaps coming soon!

Big Data at NASA

In his talk titled, “Big Data at NASA,” Chris Gerty, deputy program manager, Open Innovation Program, National Aeronautics and Space Administration (NASA), discussed how Big Data is being interpreted by the next generation of rocket scientists. Chris presented a few lessons learned from his experiences at NASA:

  1. A traditional approach is not always the best approach. A tried and proven method may not translate. Writing more programs to store more data on bigger hard drives is not always effective. We need to address the never-ending challenges that lie ahead as society shifts to the information age.
  2. A plan for openness. Based on a government directive, Chris’ team looked to answer questions by asking the right people. For example, NASA asked the people gathering data on a satellite to determine what data was the most important, which enabled NASA to narrow focus and solve problems. Furthermore, by realizing what can also be useful to the public and what tools have already been developed by the public, open source development can benefit the masses. Through collaboration, governments and citizens can work together to solve some of humanity’s biggest problems.
  3. Embrace the enormity of the universe. Look for Big Data where no one else is looking by deploying sensors and information-gathering tools in new places. If people continue to be scared of Big Data, we will be resistant to gathering more of it. By finding Big Data where it has yet to be discovered, we can solve problems and innovate.

To view Chris’s presentation, please watch the broadcasted session here: http://new.livestream.com/opengroup/Gerty-NPB13

Bringing Order to the Chaos

David Potter, chief technical officer at Promise Innovation, and Ron Schuldt, senior partner at UDEF-IT, LLC, discussed how The Open Group's evolving Quantum Lifecycle Management (QLM) standard, coupled with its complementary Universal Data Element Framework (UDEF) standard, helps bring order to the terminology chaos that faces Big Data implementations.

The QLM standard provides a framework for the aggregation of lifecycle data from a multiplicity of sources to add value to the decision making process. Gathering mass amounts of data is useless if it cannot be analyzed. The QLM framework provides a means to interpret the information gathered for business intelligence. The UDEF allows each piece of data to be paired with an unambiguous key to provide clarity. By partnering with the UDEF, the QLM framework is able to separate itself from domain-specific semantic models. The UDEF also provides a ready-made key for international language support. As an open standard, the UDEF is data model independent and as such supports normalization across data models.
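As a rough illustration of key-based harmonization (the key strings and field names below are invented, not real UDEF identifiers), two systems that name the same data element differently can be reconciled by tagging each field with a shared, unambiguous key:

    # Hypothetical illustration only: the key strings are invented and are
    # not real UDEF identifiers.
    PARTNER_FIELD_KEYS = {"sponsor_name": "key.person.fullName",
                          "gift_amount": "key.payment.amount"}
    INTERNAL_FIELD_KEYS = {"donorFullName": "key.person.fullName",
                           "paymentValue": "key.payment.amount"}

    def harmonize(record, field_keys):
        """Re-key a record from local field names to shared, unambiguous keys."""
        return {field_keys[name]: value for name, value in record.items() if name in field_keys}

    partner_record = {"sponsor_name": "A. Example", "gift_amount": 38}
    internal_record = {"donorFullName": "A. Example", "paymentValue": 38}

    # Once both records use the same keys, they can be compared or merged directly.
    print(harmonize(partner_record, PARTNER_FIELD_KEYS) ==
          harmonize(internal_record, INTERNAL_FIELD_KEYS))   # True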

One example of successful implementation is by Compassion International. The organization needed to find a balance between information that should be kept internal (e.g., payment information) and information that should be shared with its international sponsors. In this instance, UDEF was used as a structured process for harmonizing the terms used in IT systems between funding partners.

The beauty of the QLM framework and UDEF integration is that they are flexible and can be applied to any product, domain and industry.

To view David and Ron’s presentation, please watch the broadcasted session here: http://new.livestream.com/opengroup/potter-NPB13

Big Data – Panel Discussion

In a panel moderated by Dana Gardner of Interarbor Solutions, Robert Weisman (Build The Vision), Andras Szakal (IBM), Jim Hietala (The Open Group), and Chris Gerty (NASA) discussed the implications of Big Data and what it means for business architects and enterprise architects.

Big Data is not about the size of the data but about analyzing it. Robert mentioned that most organizations store more data than they need or use, and from an enterprise architect's perspective, it's important to focus on the analysis of the data and to provide information that will ultimately aid the business in some way. When it comes to security, Jim explained that newer Big Data platforms are not built with security in mind. While data is data, many security controls don't translate to new platforms or scale with the influx of data.

Cloud Computing is Big Data-ready, and price can be compelling, but there are significant security and privacy risks. Robert brought up the argument over public and private Cloud adoption, and said, “It’s not one size fits all.” But can Cloud and Big Data come together? Andras explained that Cloud is not the almighty answer to Big Data. Every organization needs to find the Enterprise Architecture that fits its needs.

The fruits of Big Data can be useful to more than just business intelligence professionals. With the trend of mobility and application development in mind, Chris suggested that developers keep users in mind. Big Data can be used to tell us many different things, but it’s about finding out what is most important and relevant to users in a way that is digestible.

Finally, the panel discussed how Big Data is bringing about big changes in almost every aspect of an organization. It is important not to generalize, but to customize. Every enterprise needs an architecture that fits its needs. Each organization finds importance in different facets of the data gathered, and security is different at every organization. With all that in mind, the panel agreed that focusing on the analytics is the key.

To view the panel discussion, please watch the broadcasted session here: http://new.livestream.com/opengroup/events/1838807


Capturing The Open Group Conference in Newport Beach

By The Open Group Conference Team

It is time to announce the winners of the Newport Beach Photo Contest! For those of you who were unable to attend, conference attendees submitted some of their best photos for a chance to win one free conference pass to one of The Open Group's global conferences over the next year – a prize valued at more than $1,000/€900.

Southern California is known for its palm trees and warm sandy beaches. While Newport Beach is most recognized for its high-end real estate and its association with the popular television show "The OC," enterprise architects invaded the beach and boating town for The Open Group Conference.

The contest ended Friday at noon PDT, and it is time to announce the winners…

Best of The Open Group Conference in Newport Beach - For any photo taken during conference activities

The winner is Henry Franken, BiZZdesign!


A busy BiZZdesign exhibitor booth

The Real OC Award – For best photo taken in or around Newport Beach

The winner is Andrew Josey, The Open Group!


A local harbor in Newport Beach, Calif.

Thank you to all those who participated in this contest – whether it was submitting one of your own photos or voting for your favorites. Please visit The Open Group’s Facebook page to view all of the submissions and conference photos.

We’re always trying to improve our programs, so if you have any feedback regarding the photo contest, please email photo@opengroup.org or leave a comment below. We’ll see you in Sydney!


The Open Group Conference Plenary Speaker Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Keynoter Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

This is a transcript of a sponsored podcast discussion on Big Data analytics and its role in business processes, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group® Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference will focus on big data and the transformation we need to embrace today.

We are here now with one of the main speakers at the conference; Michael Cavaretta, PhD, Technical Leader of Predictive Analytics for Ford Research and Advanced Engineering in Dearborn, Michigan.

We’ll see how Ford has exploited the strengths of big data analytics by directing them internally to improve business results. In doing so, they scour the metrics from the company’s best processes across myriad manufacturing efforts and through detailed outputs from in-use automobiles, all to improve and help transform their business.

Cavaretta has led multiple data-analytics projects at Ford to break down silos inside the company and to identify Ford's most fruitful datasets. Ford has successfully aggregated customer feedback and extracted internal data to predict how new features and technologies will best improve their cars.

As a lead-in to his Open Group presentation, Michael and I will now explore how big data is fostering business transformation by allowing deeper insights into more types of data efficiently, and thereby improving processes, quality control, and customer satisfaction.

With that, please join me in welcoming Michael Cavaretta. Welcome to BriefingsDirect, Michael.

Michael Cavaretta: Thank you very much.

Gardner: Your upcoming presentation for The Open Group Conference is going to describe some of these new approaches to big data and how that offers some valuable insights into internal operations, and therefore making a better product. To start, what’s different now in being able to get at this data and do this type of analysis from, say, five years ago?

Cavaretta: The biggest difference has to do with the cheap availability of storage and processing power, where a few years ago people were very much concentrated on filtering down the datasets that were being stored for long-term analysis. There has been a big sea change with the idea that we should just store as much as we can and take advantage of that storage to improve business processes.

Gardner: That sounds right on the money, but how did we get here? How did we get to the point where we could start using these benefits from a technology perspective, as you say, better storage, networks, being able to move big datasets, that sort of thing, to wringing out benefits? What's the process behind the benefit?

Cavaretta: The process behind the benefits has to do with a sea change in the attitude of organizations, particularly IT within large enterprises. There's this idea that you don't need to spend so much time figuring out what data you want to store and worrying about the cost associated with it, and can think more about data as an asset. There is value in being able to store it, and being able to go back and extract different insights from it. This really comes from really cheap storage, access to parallel processing machines, and great software.

Gardner: It seems to me that for a long time, the mindset was that data is simply the output from applications, with applications being primary and the data being almost an afterthought. It seems like we've sort of flipped that. The data now is perhaps as important, or even more important, than the applications. Does that seem to hold true?

Cavaretta: Most definitely, and we’ve had a number of interesting engagements where people have thought about the data that’s being collected. When we talk to them about big data, storing everything at the lowest level of transactions, and what could be done with that, their eyes light up and they really begin to get it.

Gardner: I suppose earlier, when cost considerations and technical limitations were at work, we would just go for a tip-of-the-iceberg level. Now, as you say, we can get almost all the data. So, is this a matter of getting at more data, different types of data, bringing in unstructured data, all of the above? How much are you really going after here?

Cavaretta: I like to talk to people about the possibility that big data provides and I always tell them that I have yet to have a circumstance where somebody is giving me too much data. You can pull in all this information and then answer a variety of questions, because you don’t have to worry that something has been thrown out. You have everything.

You may have 100 questions, and each one of the questions uses a very small portion of the data. Those questions may use different portions of the data, a very small piece, but they’re all different. If you go in thinking, “We’re going to answer the top 20 questions and we’re just going to hold data for that,” that leaves so much on the table, and you don’t get any value out of it.

Gardner: I suppose too that we can think about small samples or small datasets and aggregate them or join them. We have new software capabilities to do that efficiently, so that we’re able to not just look for big honking, original datasets, but to aggregate, correlate, and look for a lifecycle level of data. Is that fair as well?

Cavaretta: Definitely. We’re a big believer in mash-ups and we really believe that there is a lot of value in being able to take even datasets that are not specifically big-data sizes yet, and then not go deep, not get more detailed information, but expand the breadth. So it’s being able to augment it with other internal datasets, bridging across different business areas as well as augmenting it with external datasets.

A lot of times you can take something that is maybe a few hundred thousand records, or a few million records, and by the time you're joining it and appending different pieces of information onto it, you can get to big-dataset sizes.
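A minimal sketch of that kind of mash-up using pandas; the tables and columns are invented for illustration, not actual Ford datasets:

    import pandas as pd

    # All table and column names here are hypothetical, purely for illustration.
    quality_reports = pd.DataFrame({"vin": ["V1", "V2", "V3"],
                                    "issue": ["wiper", "battery", "infotainment"]})
    build_records = pd.DataFrame({"vin": ["V1", "V2", "V3"],
                                  "plant": ["Dearborn", "Louisville", "Dearborn"]})
    weather_by_plant = pd.DataFrame({"plant": ["Dearborn", "Louisville"],   # external source
                                     "avg_humidity": [71, 77]})

    # Widen a modest dataset by appending columns from other sources:
    # join an internal build table, then an external weather table.
    mashup = (quality_reports
              .merge(build_records, on="vin", how="left")
              .merge(weather_by_plant, on="plant", how="left"))
    print(mashup)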

Gardner: Just to be clear, you’re unique. The conventional wisdom for big data is to look at what your customers are doing, or just the external data. You’re really looking primarily at internal data, while also availing yourself of what external data might be appropriate. Maybe you could describe a little bit about your organization, what you do, and why this internal focus is so important for you.

Cavaretta: I’m part of a larger department that is housed over in the research and advanced-engineering area at Ford Motor Company, and we’re about 30 people. We work as internal consultants, kind of like Capgemini or Ernst & Young, but only within Ford Motor Company. We’re responsible for going out and looking for different opportunities from the business perspective to bring advanced technologies. So, we’ve been focused on the area of statistical modeling and machine learning for I’d say about 15 years or so.

And in this time, we've had a number of engagements where we've talked with different business customers, and people have said, "We'd really like to do this." Then, we'd look at the datasets that they have and say, "Wouldn't it be great if we had this data? Now we have to wait six months or a year."

These new technologies are really changing the game from that perspective. We can turn on the complete fire-hose, and then say that we don’t have to worry about that anymore. Everything is coming in. We can record it all. We don’t have to worry about if the data doesn’t support this analysis, because it’s all there. That’s really a big benefit of big-data technologies.

Gardner: If you’ve been doing this for 15 years, you must be demonstrating a return on investment (ROI) or a value proposition back to Ford. Has that value proposition been changing? Do you expect it to change? What might be your real value proposition two or three years from now?

Cavaretta: The real value proposition definitely is changing as things are being pushed down in the company to lower-level analysts who are really interested in looking at things from a data-driven perspective. From when I first came in to now, the biggest change has been when Alan Mulally came into the company, and really pushed the idea of data-driven decisions.

Before, we were getting a lot of interest from people who were very focused on the data that they had internally. After that, they had a lot of questions from their management and from upper-level directors and vice presidents saying, "We've got all these data assets. We should be getting more out of them." This strategic perspective has really changed a lot of what we've done in the last few years.

Gardner: As I listen to you, Michael, it occurs to me that you are applying this data-driven mentality more deeply. As you pointed out earlier, you're also going after all the data, all the information, whether that's internal or external.

In the case of an automobile company, you're looking at the factory, the dealers, what drivers are doing, what the devices within the automobile are telling you, factoring that back into design relatively quickly, and then repeating this process. Are we getting to the point where this sort of Holy Grail notion of a total feedback loop across the lifecycle of a major product like an automobile is really within our grasp? Are we getting there, or is this still kind of theoretical? Can we pull it all together and make it a science?

Cavaretta: The theory is there. The question has more to do with the actual implementation and the practicality of it. We're still talking about a lot of data, and even with new advanced technologies and techniques, that's a lot of data to store, a lot of data to analyze, and a lot of data to make sure we can mash up appropriately.

While I think the potential is there and the theory is there, there is also work in being able to get the data from multiple sources. Everything you can get back from the vehicle, fantastic. Now, if you marry that up with internal data, is it survey data, is it manufacturing data, is it quality data? What are the things you want to go after first? We can't do everything all at the same time.

Our perspective has been: let's make sure that we identify the highest-value, greatest-ROI areas, then begin to take some of the major datasets we have, push them, and get more detail. Mash them up appropriately and really prove out the value for the technologists.

Gardner: Clearly, there’s a lot more to come in terms of where we can take this, but I suppose it’s useful to have a historic perspective and context as well. I was thinking about some of the early quality gurus like Deming and some of the movement towards quality like Six Sigma. Does this fall within that same lineage? Are we talking about a continuum here over that last 50 or 60 years, or is this something different?

Cavaretta: That’s a really interesting question. From the perspective of analyzing data, using data appropriately, I think there is a really good long history, and Ford has been a big follower of Deming and Six Sigma for a number of years now.

The difference, though, is this idea that you don't have to worry so much upfront about getting the data. If you're doing this right, you have the data right there, and that has some great advantages. You don't have to wait until you have enough history to look for patterns. Then again, it also has some disadvantages: you've got so much data that it's easy to find things that could be spurious correlations or models that don't make any sense.

The piece that is required is good domain knowledge, in particular when you are talking about making changes in the manufacturing plant. It’s very appropriate to look at things and be able to talk with people who have 20 years of experience to say, “This is what we found in the data. Does this match what your intuition is?” Then, take that extra step.

Gardner: Tell me a little about sort a day in the life of your organization and your team to let us know what you do. How do you go about making more data available and then reaching some of these higher-level benefits?

Cavaretta: We're very much focused on interacting with the business. Most of what we do involves working on pilot projects with our business customers to bring advanced analytics and Big Data technologies to bear against their problems. We work in what we call a push-and-pull model.

We go out and investigate technologies and say these are technologies that Ford should be interested in. Then, we look internally for business customers who would be interested in that. So, we’re kind of pushing the technologies.

From the pull perspective, we’ve had so many successful engagements in such good contacts and good credibility within the organization that we’ve had people come to us and say, “We’ve got a problem. We know this has been in your domain. Give us some help. We’d love to be able to hear your opinions on this.”

So we're pulled from the business side, and then our job is to match up those two pieces. It's best when we're looking at a particular technology and somebody comes to us, and we can say, "Oh, this is a perfect match."

Those types of opportunities have been increasing in the last few years, and we’ve been very happy with the number of internal customers that have really been very excited about the areas of big data.

Gardner: Because this is The Open Group conference and an audience that's familiar with the IT side of things, I'm curious how this relates to software and software development. Of course, there are so many millions of lines of code in automobiles these days, with software being more important than just about anything else. Are you applying a lot of what you're doing to the software side of the house, or are the agile approaches, the feedback loops, and the performance-management issues a separate domain, or is there crossover here?

Cavaretta: There's some crossover. The biggest area we've been focused on has been pulling information, whether from internal business processes or from the vehicle, and then being able to bring it back in to derive value. We have very good contacts in the Ford IT group, and they have been fantastic to work with in bringing interesting tools and technology to bear, and then looking at moving those into production and the best way to do that.

A fantastic development has been this idea that we’re using some of the more agile techniques in this space and Ford IT has been pushing this for a while. It’s been fantastic to see them work with us and be able to bring these techniques into this new domain. So we’re pushing the envelope from two different directions.

Gardner: It sounds like you will be meeting up at some point with a complementary nature to your activities.

Cavaretta: Definitely.

Gardner: Let’s move on to this notion of the “Internet of things,” a very interesting concept that lot of people talk about. It seems relevant to what we’ve been discussing. We have sensors in these cars, wireless transfer of data, more-and-more opportunity for location information to be brought to bear, where cars are, how they’re driven, speed information, all sorts of metrics, maybe making those available through cloud providers that assimilate this data.

So let’s not go too deep, because this is a multi-hour discussion all on its own, but how is this notion of the Internet of things being brought to bear on your gathering of big data and applying it to the analytics in your organization?

Cavaretta: It is a huge area, not only from the internal process perspective, with RFID tags within the manufacturing plants as well as out on the plant floor, but also all of the information that's being generated by the vehicle itself.

The Ford Energi generates about 25 gigabytes of data per hour. So you can imagine selling a couple of million vehicles in the near future, with that amount of data being generated. There are huge opportunities within that, and there are also some interesting opportunities having to do with opening up some of these systems for third-party developers. OpenXC is an initiative that we have going on at Research and Advanced Engineering.

We have a lot of data coming from the vehicle. There's a huge number of sensors and processors being added to vehicles. There's data being generated there, as well as communication between the vehicle and your cell phone, and communication between vehicles.

There's a group over in Ann Arbor, Michigan, the University of Michigan Transportation Research Institute (UMTRI), that's investigating that, as well as communication between the vehicle and, say, a home system. It lets the home know that you're on your way and it's time to increase the temperature, if it's winter, or cool the house down in the summer. The amount of data being generated there is invaluable and could be used for a lot of benefits, both from the corporate perspective and for the environment.

Gardner: Just to put a stake in the ground on this, how much data do cars typically generate? Do you have a sense of what now is the case, an average?

Cavaretta: The Energi, according to the latest information that I have, generates about 25 gigabytes per hour. Different vehicles are going to generate different amounts, depending on the number of sensors and processors on the vehicle. But the biggest key has to do with not necessarily where we are right now but where we will be in the near future.

With the amount of information that's being generated from the vehicles, a lot of it is just internal stuff. The question is how much information should be sent back for analysis and to find different patterns. That becomes really interesting as you look at external sensors, temperature, humidity. You can know when the windshield wipers go on, and then take that information and mash it up with other external data sources too. It's a very interesting domain.
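As a back-of-envelope check on that scale, using the 25 GB/hour figure quoted above and assumed values for fleet size and daily driving time:

    # Back-of-envelope arithmetic using the 25 GB/hour figure quoted above;
    # the fleet size and daily driving time are assumptions for illustration.
    gb_per_vehicle_hour = 25
    vehicles = 2_000_000        # "a couple of million vehicles"
    driving_hours_per_day = 1   # assumed average

    petabytes_per_day = gb_per_vehicle_hour * vehicles * driving_hours_per_day / 1_000_000
    print(f"~{petabytes_per_day:.0f} PB of raw vehicle data per day")   # ~50 PB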

Gardner: So clearly, it’s multiple gigabytes per hour per vehicle and probably going much higher.

Cavaretta: Easily.

Gardner: Let’s move forward now for those folks who have been listening and are interested in bringing this to bear on their organizations and their vertical industries, from the perspective of skills, mindset, and culture. Are there standards, certification, or professional organizations that you’re working with in order to find the right people?

It's a big question. Let's look at what skills you target for your group, and what ways you think you can improve on that. Then, we'll get into some of those larger issues about culture and mindset.

Cavaretta: The skills that we have in our department, in particular on our team, are in the area of computer science, statistics, and some good old-fashioned engineering domain knowledge. We’ve really gone about this from a training perspective. Aside from a few key hires, it’s really been an internally developed group.

The biggest advantage that we have is that we can go out and be very targeted with the training that we do. There are so many good tools out there, especially in the open-source realm, that we can spin things up with relatively low cost and low risk, and do a number of experiments in the area. That's really the way that we push the technologies forward.

Gardner: Why The Open Group? Why is that a good forum for your message, and for your research here?

Cavaretta: The biggest reason is the focus on the enterprise. There are a lot of advantages and a lot of business cases in large enterprises with many systems, where a relatively small improvement can make a large difference on the bottom line.

Talking with The Open Group really gives me an opportunity to bring people on board with the idea that you should be looking at a different mindset. It's not, "Here's the way that data is being generated; try to conceive of some questions we can use, and we'll store just that." It's, "Let's just take everything, we'll worry about it later, and then we'll find the value."

Gardner: I’m sure the viewers of your presentation on January 28 will be gathering a lot of great insights. A lot of the people that attend The Open Group conferences are enterprise architects. What do you think those enterprise architects should be taking away from this? Is there something about their mindset that should shift in recognizing the potential that you’ve been demonstrating?

Cavaretta: It's important for them to be thinking about data as an asset, rather than as a cost. You may even have to spend some money, and it may feel a little unsafe without a really solid ROI at the beginning. Then, move toward pulling that information in and storing it in a way that allows not just the high-level data scientists to get access and provide value, but anyone who is interested in the data overall. Those are very important pieces.

The last one is how you take a big-data project, something where you're not storing data in the traditional business intelligence (BI) framework that an enterprise has developed, and then connect it to the BI systems and look at providing value through those mash-ups. Those are really important areas that still need some work.

Gardner: Another big constituency within The Open Group community are those business architects. Is there something about mindset and culture, getting back to that topic, that those business-level architects should consider? Do you really need to change the way you think about planning and resource allocation in a business setting, based on the fruits of things that you are doing with big data?

Cavaretta: I really think so. The digital asset that you have can be monetized to change the way the business works, and that could be done by creating new assets that then can be sold to customers, as well as improving the efficiencies of the business.

There's this idea that everything is going to be very well-defined, and that a lot of work should be put into making sure the data has high quality. I think those things need to change somewhat. As you're pulling the data in, and as you're thinking about long-term storage, it's more about access to the information than about the problem of just storing it.

Gardner: Interesting that you brought up that notion that the data becomes a product itself and even a profit center perhaps.

Cavaretta: Exactly. There are many companies, especially large enterprises, that are looking at their data assets and wondering what can they do to monetize this, not only to just pay for the efficiency improvement but as a new revenue stream.

Gardner: We're almost out of time. For those organizations that want to get started on this, are there any 20/20-hindsight or Monday-morning-quarterback insights you can provide? How do you get started? Do you appoint a leader? Do you need a strategic roadmap, getting this culture or mindset shifted, pilot programs? How would you recommend that people begin the process of getting into this?

Cavaretta: We're definitely huge believers in pilot projects and proofs of concept, and we like to develop roadmaps by doing. So get out there. Understand that it's going to be messy. Understand that it may be a little bit more costly and the ROI isn't going to be there at the beginning.

But get your feet wet. Start doing some experiments, and then, as those experiments turn from just experimentation into really providing real business value, that’s the time to start looking at a more formal aspect and more formal IT processes. But you’ve just got to get going at this point.

Gardner: I would think that the competitive forces are out there. If you are in a competitive industry, and those that you compete against are doing this and you are not, that could spell some trouble.

Cavaretta:  Definitely.

Gardner: We’ve been talking with Michael Cavaretta, PhD, Technical Leader of Predictive Analytics at Ford Research and Advanced Engineering in Dearborn, Michigan. Michael and I have been exploring how big data is fostering business transformation by allowing deeper insights into more types of data and all very efficiently. This is improving processes, updating quality control and adding to customer satisfaction.

Our conversation today comes as a lead-in to Michael's upcoming plenary presentation. He is going to be talking on January 28 in Newport Beach, California, as part of The Open Group conference.

You will hear more from Michael and other global leaders on big data who are gathering to talk about business transformation from big data at this conference. So a big thank you to Michael for joining us in this fascinating discussion. I really enjoyed it, and I look forward to your presentation on the 28th.

Cavaretta: Thank you very much.

Gardner: And I would encourage our listeners and readers to attend the conference or follow more of the threads in social media from the event. Again, it’s going to be happening from January 27 to January 30 in Newport Beach, California.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator through the thought leadership interviews. Thanks again for listening, and come back next time.


Leveraging Social Media at The Open Group Newport Beach Conference (#ogNB)

By The Open Group Conference Team

By attending conferences hosted by The Open Group®, attendees are able to learn from industry experts, understand the latest technologies and standards and discuss and debate current industry trends. One way to maximize the benefits is to make technology work for you. If you are attending The Open Group Conference in Newport Beach next week, we’ve put together a few tips on how to leverage social media to make networking at the conference easier, quicker and more effective.

Using Twitter at #ogNB

Twitter is a real-time news-sharing tool that anyone can use. The official hashtag for the conference is #ogNB. This enables anybody, whether they are physically attending the event or not, to follow what’s happening at the Newport Beach conference in real-time and interact with each other.

Before the conference, be sure to update your Twitter account to monitor #ogNB and, of course, to tweet about the conference.

Using Facebook at The Open Group Conference in Newport Beach

You can also track what is happening at the conference on The Open Group Facebook page. We will be running another photo contest, where all entries will be uploaded to our page. Members and Open Group Facebook fans can vote by "liking" a photo. The photos with the most "likes" in each category will be named the winners. Submissions will be uploaded in real-time, so the sooner you submit a photo, the more time members and fans will have to vote for it!

For full details of the contest and how to enter see The Open Group blog at: http://blog.opengroup.org/2013/01/22/the-open-group-photo-contest-document-the-magic-at-the-newport-beach-conference/

LinkedIn during The Open Group Conference in Newport Beach

Inspired by one of the sessions? Interested in what your peers have to say? Start a discussion on The Open Group LinkedIn Group page. We’ll also be sharing interesting topics and questions related to The Open Group Conference as it is happening. If you’re not a member already, requesting membership is easy. Simply go to the group page and click the “Join Group” button. We’ll accept your request as soon as we can!

Blogging during The Open Group Conference in Newport Beach

Stay tuned for daily conference recaps here on The Open Group blog. In case you missed a session or you weren’t able to make it to Newport Beach, we’ll be posting the highlights and recaps on the blog. If you are attending the conference and would like to submit a recap of your own, please contact opengroup (at) bateman-group.com.

If you have any questions about social media usage at the conference, feel free to tweet the conference team @theopengroup.


Improving Signal-to-Noise in Risk Management

By Jack Jones, CXOWARE

One of the most important responsibilities of the information security professional (or any IT professional, for that matter) is to help management make well-informed decisions. Unfortunately, this has been an elusive objective when it comes to risk. Although we're great at identifying control deficiencies, and we can talk all day long about the various threats we face, we have historically had a poor track record when it comes to risk. There are a number of reasons for this, but in this article I'll focus on just one: definition.

You’ve probably heard the old adage, “You can’t manage what you can’t measure.”  Well, I’d add to that by saying, “You can’t measure what you haven’t defined.” The unfortunate fact is that the information security profession has been inconsistent in how it defines and uses the term “risk.” Ask a number of professionals to define the term, and you will get a variety of definitions.

Besides inconsistency, another problem regarding the term “risk” is that many of the common definitions don’t fit the information security problem space or simply aren’t practical. For example, the ISO27000 standard defines risk as, “the effect of uncertainty on objectives.” What does that mean? Fortunately (or perhaps unfortunately), I must not be the only one with that reaction because the ISO standard goes on to define “effect,” “uncertainty,” and “objectives,” as follows:

  • Effect: A deviation from the expected — positive and/or negative
  • Uncertainty: The state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence or likelihood
  • Objectives: Can have different aspects (such as financial, health and safety, information security, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process)

NOTE: Their definition for "objectives" doesn't appear to be a definition at all, but rather an example.

Although I understand, conceptually, the point this definition is getting at, my first concern is practical in nature. As a Chief Information Security Officer (CISO), I invariably have more to do than I have resources to apply. Therefore, I must prioritize; prioritization requires comparison, and comparison requires measurement. It isn't clear to me how "uncertainty regarding deviation from the expected (positive and/or negative) that might affect my organization's objectives" can be applied to measure, and thus compare and prioritize, the issues I'm responsible for dealing with.

This is just an example though, and I don’t mean to pick on ISO because much of their work is stellar. I could have chosen any of several definitions in our industry and expressed varied concerns.

In my experience, information security is about managing how often loss takes place, and how much loss will be realized when/if it occurs. That is our profession’s value proposition, and it’s what management cares about. Consequently, whatever definition we use needs to align with this purpose.

The Open Group’s Risk Taxonomy (shown below), based on Factor Analysis of Information Risk (FAIR), helps to solve this problem by providing a clear and practical definition for risk. In this taxonomy, Risk is defined as, “the probable frequency and probable magnitude of future loss.”

[Figure: The Open Group Risk Taxonomy]

The elements below risk in the taxonomy form a Bayesian network that models risk factors and acts as a framework for critically evaluating risk. This framework has been evolving for more than a decade now and is helping information security professionals across many industries understand, measure, communicate and manage risk more effectively.
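To make the structure concrete with purely hypothetical numbers: if a scenario's loss event frequency is estimated at around 2 events per year and its loss magnitude at around $50,000 per event, its annualized loss exposure is roughly 2 x $50,000 = $100,000 per year. The elements beneath risk in the taxonomy refine those two estimates, the frequency side and the magnitude side, rather than replacing them.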

In the communications context, you have to have a very clear understanding of what constitutes signal before you can effectively and reliably filter it out from noise. The Open Group’s Risk Taxonomy gives us an important foundation for achieving a much clearer signal.

I will be discussing this topic in more detail next week at The Open Group Conference in Newport Beach. For more information on my session or the conference, visit: http://www.opengroup.org/newportbeach2013.

Jack Jones has been employed in technology for the past twenty-nine years, and has specialized in information security and risk management for twenty-two years. During this time, he's worked in the United States military, government intelligence, consulting, as well as the financial and insurance industries. Jack has over nine years of experience as a CISO, with five of those years at a Fortune 100 financial services company. His work there was recognized in 2006 when he received the 2006 ISSA Excellence in the Field of Security Practices award at that year's RSA conference. In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012 was honored with the CSO Compass award for leadership in risk management. He is also the author and creator of the Factor Analysis of Information Risk (FAIR) framework.


How Should we use Cloud?

By Chris Harding, The Open Group

How should we use Cloud? This is the key question at the start of 2013.

The Open Group® conferences in recent years have thrown light on, “What is Cloud?” and, “Should we use Cloud?” It is time to move on.

Cloud as a Distributed Processing Platform

The question is an interesting one, because the answer is not necessarily, “Use Cloud resources just as you would use in-house resources.” Of course, you can use Cloud processing and storage to replace or supplement what you have in-house, and many companies are doing just that. You can also use the Cloud as a distributed computing platform, on which a single application instance can use multiple processing and storage resources, perhaps spread across many countries.

It’s a bit like contracting a company to do a job, rather than hiring a set of people. If you hire a set of people, you have to worry about who will do what when. Contract a company, and all that is taken care of. The company assembles the right people, schedules their work, finds replacements in case of sickness, and moves them on to other things when their contribution is complete.

This doesn’t only make things easier; it also enables you to tackle bigger jobs. Big Data, the latest technical phenomenon, can be processed effectively by parceling the work out to multiple computers. Cloud providers are beginning to make the tools to do this available, using distributed file systems and map-reduce. We do not yet have “Distributed Processing as a Service,” but that will surely come.
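
To make the parceling-out idea concrete, the sketch below mimics the map and reduce phases locally using Python’s standard multiprocessing pool. It is only an illustration with invented data, not any provider’s service or API; real frameworks run the same pattern across many machines and a distributed file system.

```python
from collections import Counter
from multiprocessing import Pool

def map_word_counts(chunk_of_text):
    # "Map" phase: each worker counts words in its own chunk independently.
    return Counter(chunk_of_text.split())

def reduce_word_counts(partial_counts):
    # "Reduce" phase: merge the per-chunk counts into a single result.
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return total

if __name__ == "__main__":
    # In a real Cloud deployment, the chunks would live in a distributed file
    # system and the workers would run on separate machines.
    chunks = [
        "big data needs big processing",
        "cloud processing scales out not up",
        "map and reduce split the work",
    ]
    with Pool(processes=3) as pool:
        partials = pool.map(map_word_counts, chunks)
    print(reduce_word_counts(partials))
```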

Distributed Computing at the Conference

Big Data is the main theme of the Newport Beach conference. The plenary sessions have keynote presentations on Big Data, including the crucial aspect of security, and there is a Big Data track that explores in depth its use in Enterprise Architecture.

There are also Cloud tracks that explore the business aspects of using Cloud and the use of Cloud in Enterprise Architecture, including a session on its use for Big Data.

Service orientation is generally accepted as a sound underlying principle for systems using both Cloud and in-house resources. The Service Oriented Architecture (SOA) movement focused initially on its application within the enterprise. We are now looking to apply it to distributed systems of all kinds. This may require changes to specific technology and interfaces, but not to the fundamental SOA approach. The Distributed Services Architecture track contains presentations on the theory and practice of SOA.
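
As a small illustration of why the fundamental SOA approach carries over, the hypothetical sketch below defines a service contract and two interchangeable implementations, one in-house and one Cloud-hosted; the consumer code depends only on the contract, so moving the implementation does not change it. The OrderService name and methods are invented for this example.

```python
from abc import ABC, abstractmethod

class OrderService(ABC):
    """A service contract: consumers depend on the interface, not the deployment."""

    @abstractmethod
    def place_order(self, customer_id: str, items: list) -> str:
        ...

class InHouseOrderService(OrderService):
    def place_order(self, customer_id, items):
        # Would talk to an on-premise system (details omitted in this sketch).
        return f"local-order-for-{customer_id}"

class CloudOrderService(OrderService):
    def place_order(self, customer_id, items):
        # Would call a Cloud-hosted endpoint instead; the consumer does not change.
        return f"cloud-order-for-{customer_id}"

def checkout(service: OrderService) -> str:
    # The calling code is written once against the service contract.
    return service.place_order("c-42", ["widget"])

print(checkout(InHouseOrderService()))
print(checkout(CloudOrderService()))
```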

Distributed Computing Work in The Open Group

Many of the conference presentations are based on work done by Open Group members in the Cloud Computing, SOA and Semantic Interoperability Work Groups, and in the Architecture, Security and Jericho Forums. The Open Group enables people to come together to develop standards and best practices for the benefit of the architecture community. We have active Work Groups and Forums working on artifacts such as a Cloud Computing Reference Architecture, a Cloud Portability and Interoperability Guide, and a Guide to the use of the TOGAF® framework in Cloud Ecosystems.

The Open Group Conference in Newport Beach

Our conferences provide an opportunity for members and non-members to discuss ideas together. This happens not only in presentations and workshops, but also in informal discussions during breaks and after the conference sessions. These discussions benefit future work at The Open Group. They also benefit the participants directly, enabling them to bring to their enterprises ideas that they have sounded out with their peers. People from other companies can often bring new perspectives.

Most enterprises now know what Cloud is. Many have identified specific opportunities where they will use it. The challenge now for enterprise architects is determining how best to do this, either by replacing in-house systems, or by using the Cloud’s potential for distributed processing. This is the question for discussion at The Open Group Conference in Newport Beach. I’m looking forward to an interesting conference!

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

1 Comment

Filed under Cloud, Conference

2013 Open Group Predictions, Vol. 2

By The Open Group

Continuing on the theme of predictions, here are a few more, which focus on global IT trends, business architecture, OTTF and Open Group events in 2013.

Global Enterprise Architecture

By Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities

Cloud is no longer a bleeding edge technology – most organizations are already well on their way to deploying cloud technology.  However, Cloud implementations are resurrecting a perennial problem for organizations—integration. Now that Cloud infrastructures are being deployed, organizations are having trouble integrating different systems, especially with systems hosted by third parties outside their organization. What will happen when two, three or four technical delivery systems are hosted on AND off premise? This presents a looming integration problem.

As we see more and more organizations buying into cloud infrastructures, we’ll see an increase in cross-platform integration architectures globally in 2013. The role of the enterprise architect will become more complex. Architects must not only ensure that systems are integrated properly, but also figure out a way to integrate outsourced teams and services and determine responsibility across all systems. Additionally, outsourcing and integration will lead to increased focus on security in the coming year, especially in the healthcare and financial sectors. When so many people are involved, and responsibility is shared or lost in the process, gaping holes can be left unnoticed. As data is increasingly shared between organizations and current trends escalate, security will become more and more of a concern. Integration may yield great rewards architecturally, but it also means greater exposure to vulnerabilities outside of your firewall.

Within the Architecture Forum, we will be working on improvements to the TOGAF® standard throughout 2013, as well as an effort to continue to harmonize the TOGAF specification with the ArchiMate® modelling language.  The Forum also expects to publish a whitepaper on application portfolio management in the new year, as well as be involved in the upcoming Cloud Reference Architecture.

In China, The Open Group is progressing well. In 2013, we’ll continue translating The Open Group website, books and whitepapers from English to Chinese. Partnerships and Open CA certification will remain in the forefront of global priorities, as well as enrolling TOGAF trainers throughout Asia Pacific as Open Group members. There are a lot of exciting developments arising, and we will keep you updated as we expand our footprint in China and the rest of Asia.

Open Group Events in 2013

By Patty Donovan, Vice President of Membership and Events

In 2013, the biggest change for us will be our quarterly summit. The focus will shift toward an emphasis on verticals. This new focus will debut at our April event in Sydney, where the vertical themes include Mining, Government, and Finance. Additional vertical themes that we plan to cover throughout the year include Healthcare, Transportation, and Retail, to name a few. We will also continue to increase the number of our popular Livestream sessions, which have received an extremely positive reaction, as well as our On-Demand sessions, where you can listen to best-selling authors and industry leaders who participated as keynote and track speakers throughout the year.

Regarding social media, we made big strides in 2012 and will continue to make this a primary focus of The Open Group. If you haven’t already, please “like” us on Facebook, follow us on Twitter, join the chat (#ogchat) during one of our Security-focused Tweet Jams, and join our LinkedIn Group. And if you have the time, we’d love for you to contribute to The Open Group blog.

We’re always open to new suggestions, so if you have a creative idea on how we can improve your membership, Open Group events, webinars, or podcasts, please let me know! Also, please be sure to attend the upcoming Open Group Conference in Newport Beach, Calif., which is taking place on January 28-31. The conference will address Big Data.

Business Architecture

By Steve Philp, Marketing Director for Open CA and Open CITS

Business Architecture is still a relatively new discipline, but in 2013 I think it will continue to grow in prominence and visibility from an executive perspective. C-Level decision makers are not just looking at operational efficiency initiatives and cost reduction programs to grow their future revenue streams; they are also looking at market strategy and opportunity analysis.

Business Architects are extremely valuable to an organization when they understand market and technology trends in a particular sector. They can then work with business leaders to develop strategies based on the capabilities and positioning of the company to increase revenue, enhance their market position and improve customer loyalty.

Senior management recognizes that technology also plays a crucial role in how organizations can achieve their business goals. A major role of the Business Architect is to merge technology with business processes to facilitate this business transformation.

There are a number of key technology areas for 2013 where Business Architects will be called upon to engage with the business, such as Cloud Computing, Big Data and social networking. Therefore, the need to have competent Business Architects is a high priority in both the developed and emerging markets, and the demand for Business Architects currently exceeds the supply. There are some training and certification programs available based on a body of knowledge, but how do you establish who is a practicing Business Architect if you are looking to recruit?

The Open Group is trying to address this issue and has incorporated a Business Architecture stream into The Open Group Certified Architect (Open CA) program. There has already been significant interest in this stream from organizations and practitioners alike. This is because Open CA is a skills- and experience-based program that recognizes, at different levels, those individuals who are actually performing in a Business Architecture role. You must complete a candidate application package and be interviewed by your peers. Achieving certification demonstrates your competency as a Business Architect and will therefore stand you in good stead for next year and beyond.

You can view the conformance criteria for the Open CA Business Architecture stream at https://www2.opengroup.org/ogsys/catalog/X120.

Trusted Technology

By Sally Long, Director of Consortia Services

The interdependency of all countries on global technology providers, and technology providers’ dependencies on component suppliers around the world, is greater than ever before. The need to work together in a vendor-neutral, country-neutral environment to assure there are standards for securing technology development and supply chain operations will become increasingly apparent in 2013. Securing the global supply chain cannot be done in a vacuum by a few providers or a few governments; it must be achieved by working together with all governments, providers, component suppliers and integrators, and it must be done through open standards and accreditation programs that demonstrate conformance to those standards and are available to everyone.

The Open Group’s Trusted Technology Forum is providing that open, vendor and country-neutral environment, where suppliers from all countries and governments from around the world can work together in a trusted collaborative environment, to create a standard and an accreditation program for securing the global supply chain. The Open Trusted Technology Provider Standard (O-TTPS) Snapshot (Draft) was published in March of 2012 and is the basis for our 2013 predictions.

We predict that in 2013:

  • Version 1.0 of the O-TTPS (Standard) will be published
  • Version 1.0 will be submitted to the ISO PAS process in 2013, and will likely become part of the ISO/IEC 27036 standard, where Part 5 of that ISO standard is already reserved for the O-TTPS work
  • An O-TTPS Accreditation Program, open to all providers, component suppliers, and integrators, will be launched
  • The Forum will continue the trend of increased member participation from governments and suppliers around the world

4 Comments

Filed under Business Architecture, Conference, Enterprise Architecture, O-TTF, OTTF

The Open Group Newport Beach Conference – Early Bird Registration Ends January 4

By The Open Group Conference Team

The Open Group is busy gearing up for the Newport Beach Conference. Taking place January 28-31, 2013, the conference theme is “Big Data – The Transformation We Need to Embrace Today” and will bring together leading minds in technology to discuss the challenges and solutions facing Enterprise Architecture around the growth of Big Data. Register today!

Information is power, and we stand at a time when 90% of the data in the world today was generated in the last two years alone. Despite the sheer enormity of the task, off-the-shelf hardware, open-source frameworks, and the processing capacity of the Cloud mean that Big Data processing is within the cost-effective grasp of the average business. Organizations can now initiate Big Data projects without significant investment in IT infrastructure.

In addition to tutorial sessions on TOGAF® and ArchiMate®, the conference offers roughly 60 sessions on a variety of topics, including:

  • The ways that Cloud Computing is transforming the possibilities for collecting, storing, and processing big data.
  • How to contend with Big Data in your Enterprise?
  • How does Big Data enable your Business Architecture?
  • What does the Big Data revolution mean for the Enterprise Architect?
  • Real-time analysis of Big Data in the Cloud.
  • Security challenges in the world of outsourced data.
  • What is an architectural view of Security for the Cloud?

Plenary speakers include:

  • Christian Verstraete, Chief Technologist – Cloud Strategy, HP
  • Mary Ann Mezzapelle, Strategist – Security Services, HP
  • Michael Cavaretta, Ph.D., Technical Leader, Predictive Analytics / Data Mining Research and Advanced Engineering, Ford Motor Company
  • Adrian Lane, Analyst and Chief Technical Officer, Securosis
  • David Potter, Chief Technical Officer, Promise Innovation Oy
  • Ron Schuldt, Senior Partner, UDEF-IT, LLC

A full conference agenda is available here. Tracks include:

  • Architecting Big Data
  • Big Data and Cloud Security
  • Data Architecture and Big Data
  • Business Architecture
  • Distributed Services Architecture
  • EA and Disruptive Technologies
  • Architecting the Cloud
  • Cloud Computing for Business

Early Bird Registration

Early Bird registration for The Open Group Conference in Newport Beach ends January 4. Register now and save! For more information or to register: http://www.opengroup.org/event/open-group-newport-beach-2013/reg

Upcoming Conference Submission Deadlines

In addition to the Early Bird registration deadline to attend the Newport Beach conference, there are upcoming deadlines for speaker proposal submissions to Open Group conferences in Sydney, Philadelphia and London. To submit a proposal to speak, click here.

Venue, industry focus, and submission deadline:

  • Sydney (April 15-17): Finance, Defense, Mining; submissions due January 18, 2013
  • Philadelphia (July 15-17): Healthcare, Finance, Defense; submissions due April 5, 2013
  • London (October 21-23): Finance, Government, Healthcare; submissions due July 8, 2013

We expect space on the agendas of these events to be at a premium, so it is important for proposals to be submitted as early as possible. Proposals received after the deadline dates will still be considered, if space is available; if not, they may be carried over to a future conference. Priority will be given to proposals received by the deadline dates and to proposals that include an end-user organization, at least as a co-presenter.

Comments Off

Filed under Conference