Global Cooperation and Cybersecurity: A Q&A with Bruce McConnell

By The Open Group

Cyber threats are becoming an increasingly critical issue for both companies and governments. The recent disclosure that the U.S. Office of Personnel Management had been hacked is proof that it’s not just private industry that is vulnerable to attack. In order to address the problems that countries and industry face, there must be more global cooperation in terms of what behaviors are acceptable and unacceptable in cyberspace.

Bruce McConnell is Senior Vice President of the EastWest Institute (EWI), and is responsible for its global cooperation in cyberspace initiative. Bruce served in the U.S. Department of Homeland Security as Deputy Under Secretary for Cybersecurity, where he was responsible for ensuring the cybersecurity of all federal civilian agencies and helping the owners and operators of the most critical U.S. infrastructure protect themselves from cyber threats. We recently spoke with him in advance of The Open Group Baltimore event about the threats facing government and businesses today, the need for better global cooperation in cyberspace and the role that standards can play in helping to foster that cooperation.

In your role as Deputy Under Secretary for Cybersecurity in the Obama Administration, you were responsible for protecting U.S. infrastructure from cyber threats. In your estimation, what are the most serious threats in cyberspace today?

User error. I say that because a lot of people these days like to talk about these really scary sounding cyber threats, like some nation state or terrorist group that is going to take down the grid or turn off Wall Street, and I think we spend too much time focusing on the threat and less time focusing on other aspects of the risk equation.

The three elements of risk are threat, vulnerability and consequence. A lot of what needs to be done is to reduce vulnerability. Part of what EWI is working on is promoting the availability of more secure information and communications technology, so that buyers and users can start with an infrastructure that is actually defensible, as opposed to the infrastructure we have today, which is very difficult to defend. We figure that, yes, there are threats, and yes, there are potential consequences, but one of the places where we need more work in particular is reducing vulnerabilities.
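The three-factor framing can be made concrete with a toy model. The factor names follow the interview, but the 0–5 scoring scale, the `risk_score` helper and the multiplicative combination are illustrative assumptions, not EWI's methodology.

```python
# Toy qualitative risk model: risk as a function of threat, vulnerability
# and consequence, each scored 0 (none) to 5 (severe). The multiplicative
# combination is an illustrative assumption: if any factor is zero, the
# risk is zero.

def risk_score(threat: int, vulnerability: int, consequence: int) -> int:
    """Combine the three risk factors into a single rough score."""
    for factor in (threat, vulnerability, consequence):
        if not 0 <= factor <= 5:
            raise ValueError("each factor must be scored 0-5")
    return threat * vulnerability * consequence

# Reducing vulnerability lowers risk even when the threat is unchanged:
before = risk_score(threat=4, vulnerability=4, consequence=5)  # 80
after = risk_score(threat=4, vulnerability=1, consequence=5)   # 20
```

The point the model captures is the one made above: since threat and consequence are often outside a defender's control, driving down the vulnerability term is the most direct lever on the product.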

EWI is also working on reducing threats and consequences by working with countries to, for example, agree that certain key assets, such as core Internet infrastructure or financial services markets and clearinghouses should not be attacked by anybody. You have to work all aspects of the equation.

What steps can be taken by governments or businesses to better shore up the infrastructure from cyber threats?

One of the things that has been missing is a signal from the marketplace that it wants more secure technology. There’s been complacency for a long time and denial that this is really a problem, and the increasing visibility of these high-profile attacks, like those on Target, Sony, JP Morgan Chase and others, is getting companies at the most senior level—in the C-Suite and in the Boardroom—to start paying attention and asking questions of their IT team: ‘How are we protecting ourselves?’ ‘Are we going to be the next ones?’ Because there are two kinds of companies in the U.S.—those that have been hacked and those that don’t yet know they’ve been hacked.

One of the things EWI has been working on with The Open Group and some of the large IT companies is a set of questions that buyers of IT could ask suppliers about what they do to make sure their products are secure—how they are paying attention to their supply chain, who’s responsible for security at their organization, etc. We think that companies and the government—from the standpoint of education, not regulation—can do more to send signals to the marketplace and suppliers so that they offer more secure technology. In the past customers haven’t been willing to pay more for security—it does cost more. I think that’s changing, but we need to give them tools to be able to ask that question in a smart way.

With respect to government specifically, I think one of the great things the U.S government has done recently is coming out with a Cybersecurity Framework, which was developed mostly by the private sector. NIST, of course, acted as the facilitator, but there’s a lot of uptake there that we’re seeing in terms of companies and sectors—like the financial services sector—adopting and adapting it. It has raised the level of security inside corporations. Insurance carriers are starting to use it as the basis for underwriting insurance policies. It’s not mandatory but it’s a good guidepost, and I think it will become a standard of care.

Why has there been that level of complacency for so long?

I think it’s two things, and they’re both cultural.

One is that the IT community inside companies has not been able to communicate effectively to senior management regarding the nature of the threat or the degree of risk. They don’t speak the same language. When the CFO comes into the CEO’s office and talks about foreign exchange exposure or the General Counsel comes in and speaks about reputational risk, they’re speaking a language that most CEOs can understand. But when the IT guy comes in and talks about Trojans and botnets, he’s speaking a foreign language. There’s been a tendency for that message not to be expressed in business terms that the CEO can understand, quantify and think about as a risk. But it’s a risk just like any of those other risks—foreign exchange risk, competitive risk, natural disasters, cyber attacks. I think that’s changing now, and some companies are pulling the Chief Information Security Officer out from under the CIO and having them report to the Chief Risk Officer, whether that’s the General Counsel or the CFO. That puts them in a different position, and cyber risk can then be weighed against other risks and managed in a different way. It’s not only a technology problem, it’s as much a human problem—it’s about training employees, it’s about background checks on systems administrators.

The second piece is that it’s invisible. Unlike a hurricane or fire, where you can see the damage, the damage from a cyber attack is invisible. When I was at Homeland Security, we said, ‘What’s it going to take for people to wake up? Well, something really bad will have to happen.’ And something really bad is happening all the time. There’s billions of dollars of financial fraud and theft, there’s theft of intellectual property, there’s theft of identities—there are lots of bad things happening, but they’re kind of invisible. People don’t react to what they can’t see; we react to the threats that we can see. I think there’s just a conceptual gap that security professionals haven’t figured out how to convert into something tangible.

How much difference is there anymore in the threats that governments are facing as opposed to businesses? Are these things converging more?

We certainly saw the Office of Personnel Management suffer the same kind of breach that Target did: theft of people’s personal data. In the intellectual property area, attackers steal from both businesses and governments. Fraud is probably more directed at businesses and banks just because they handle the money, although some of the stolen IRS data will probably be used to perpetrate fraud. Certainly the government has some systems that are of higher value to society than any single corporate system, but if the core Internet infrastructure, which is owned and run by companies, went down, that would be bad for everybody.

I think the threats are converging also in the sense that attackers are always looking for high-value targets so both governments and companies these days have high-value targets. And they use similar tactics—what we saw was that one family of malware would be used to attack government systems and a slightly different version of that family would be used to attack commercial systems. It was the same kind of malware, and maybe the same perpetrators.

Your session at The Open Group Baltimore event is focused on global cooperation in cyberspace. Where does global cooperation in cyberspace stand today, and why is it important to have that cooperation?

It’s in the spirit of the Baltimore event—Boundaryless Information Flow™. The Internet is a global phenomenon and not a great respecter of national boundaries. The information and technology we all use comes from all over the world. From a security and management standpoint, this is not something that any single government can manage on its own. In order to allow for the boundaryless movement of information in a secure way, governments have to work together to put the right policies and incentives in place. That includes cooperating on catching and investigating cyber criminals. It involves the matter of ensuring buyers can get the best, most secure technology no matter where it is manufactured. It involves cooperating on the types of behavior that are unacceptable in cyberspace. Even reaching agreement on what institutions can be used to manage this global resource is crucial because there’s no real governance of the Internet—it’s still run on an ad hoc basis. That’s been great, but the Internet is becoming too important to be left to everybody’s good will. I’ll cover these issues in more depth in Baltimore.

Who is working on these issues right now and what kind of things are they doing? Who are the “allies” in trying to put together global cooperation initiatives?

There are a lot of different coalitions working together. They range from a group called the United Nations Group of Governmental Experts, which by the time of the Baltimore conference will have conducted the fourth in a series of meetings over a two-year period to discuss norms of behavior in cyberspace, along the lines of what kinds of behaviors nation states should not engage in vis-à-vis cyberattacks. There’s a case where you have a U.N.-based organization and 20 countries or so working together to try to come up with some agreements in that area. EWI’s own work is supported primarily by companies, both U.S. and foreign. We bring a broad multi-stakeholder group of people together from countries, companies and non-profit organizations across all the major cyber powers, whether they are national cyber powers like China, Russia, the U.S., Germany and India, or corporate cyber powers like Microsoft and Huawei Technologies, because on the Internet, companies are important. There are a lot of different activities going on to find ways of cooperating, and a growing recognition of the seriousness of the problem.

In terms of better cooperation, what are some of the issues that need to be addressed first and how can those things be better accomplished?

There are so many things to work on. Despite these efforts, the state of cooperation isn’t great. There’s a lot of rhetoric flying around, with countries leveling charges and accusing each other of attacks. Whether or not those charges are true, this is not the way to build trust and cooperation. One of the first things governments really need to do if they want to cooperate with each other is tone down the rhetoric. They need to sit down, listen to each other and try to understand where the other one’s coming from rather than just trading charges in public. That’s the first thing.

All of this reflects the lack of trust between the major cyber powers these days. How do you build trust? You build trust by working together on easy projects first, and then working your way up to more difficult topics. EWI has been promoting conversations between governments about how to respond if there’s a server in one country that’s been captured by a bot and is attacking machines in another country. You have to be able to say, ‘Could you take a look at that?’ But what are the procedures for reducing the impact of an incident in one country caused by malware coming from a server in another country? This assumes, of course, that the country itself is not doing it deliberately. In a lot of these attacks people are spoofing servers so it looks like they’re coming from one place when they’re actually originating someplace else. Maybe if we can get governments cooperating on mutual assistance in incident response, it would help build the confidence and trust needed to work on larger issues.

As the Internet becomes increasingly more crucial to businesses and government and there are more attacks out there, will this necessitate a position or department that needs to be a bridge between state departments and technology? Do you envision a role for someone to be a negotiator in that area and is that a diplomatic or technological position or both?

Most of the major national powers have cyber ambassadors. Germany’s Foreign Office has a cyber ambassador, as do the Chinese. The U.S. has a cyber coordinator, the French have a cyber ambassador and the British just named a new cyber ambassador. States are recognizing there is a role for the foreign ministry to play in this area. It’s not just a diplomatic conversation.

There are also global forums where countries, companies and NGOs get together to talk about these things. EWI hosts one every year – this year it’s in New York, September 9-10. I think there are a lot of places where the conversations are happening. That gets to a different question: at some point do we need more structure in the way these issues are managed on a global basis? There’s a big debate right now just on the topic of the assignment of Internet names and numbers as the U.S. lets go of its contract with ICANN—who’s going to take that on, and what’s it going to look like? Is it going to be a multi-stakeholder body with companies sitting at the table, or is it going to be governments only?

Do you see a role for technology standards in helping to foster better cooperation in cyberspace? What role can they play?

Absolutely. In the work we’re doing, we’re trying to help companies articulate that they want more secure products. We’re referencing a lot of different standards, including those The Open Group and the Trusted Technology Forum have been developing. Those kinds of technical standards are critical to getting everyone on a level playing field in terms of being able to measure how secure products are, and to having a conversation that’s fact-based instead of brochure-based. There’s a lot of work to be done, but they’re going to be critical to the implementation of any of these larger cooperative agreements. There’s a lot of exciting work going on.

Join the conversation @theopengroup #ogchat #ogBWI

*********

Beginning in 2009, Bruce McConnell provided programmatic and policy leadership to the cybersecurity mission at the U.S. Department of Homeland Security. He became Deputy Under Secretary for Cybersecurity in 2013, responsible for ensuring the cybersecurity of all federal civilian agencies and for helping the owners and operators of the most critical U.S. infrastructure protect themselves from growing cyber threats. During his tenure, McConnell was instrumental in building the national and international credibility of DHS as a trustworthy partner that relies on transparency and collaboration to protect privacy and enhance security.

Before DHS, McConnell served on the Obama-Biden Presidential Transition Team, working on open government and technology issues. From 2000-2008 he created, built, and sold McConnell International and Government Futures, boutique consultancies that provided strategic and tactical advice to clients in technology, business and government markets. From 2005-2008, he served on the Commission on Cybersecurity for the 44th Presidency.

From 1999-2000, McConnell was Director of the International Y2K Cooperation Center, sponsored by the United Nations and the World Bank, where he coordinated regional and global preparations of governments and critical private sector organizations to successfully defeat the Y2K bug.

McConnell was Chief of Information Policy and Technology in the U.S. Office of Management and Budget from 1993-1999, where he led the government-industry team that reformed U.S. encryption export policy, created an information security strategy for government agencies, redirected government technology procurement and management along commercial lines, and extended the presumption of open government information onto the Internet.

McConnell is also a senior advisor at the Center for Strategic and International Studies. He received a Master of Public Administration from the Evans School of Public Policy at the University of Washington, where he maintains a faculty affiliation, and a Bachelor of Science from Stanford University.

 


Using Risk Management Standards: A Q&A with Ben Tomhave, Security Architect and Former Gartner Analyst

By The Open Group

IT Risk Management is currently in a state of flux with many organizations today unsure not only how to best assess risk but also how to place it within the context of their business. Ben Tomhave, a Security Architect and former Gartner analyst, will be speaking at The Open Group Baltimore on July 20 on “The Strengths and Limitations of Risk Management Standards.”

We recently caught up with Tomhave pre-conference to discuss the pros and cons of today’s Risk Management standards, the issues that organizations are facing when it comes to Risk Management and how they can better use existing standards to their advantage.

How would you describe the state of Risk Management and Risk Management standards today?

The topic of my talk is really on the state of standards for Security and Risk Management. There’s a handful of significant standards out there today, varying from some of the work at The Open Group to NIST and the ISO 27000 series, etc. The problem with most of those is that they don’t necessarily provide a prescriptive level of guidance for how to go about performing or structuring risk management within an organization. If you look at ISO 31000 for example, it provides a general guideline for how to structure an overall Risk Management approach or program but it’s not designed to be directly implementable. You can then look at something like ISO 27005 that provides a bit more detail, but for the most part these are fairly high-level guides on some of the key components; they don’t get to the point of how you should be doing Risk Management.

In contrast, one can look at something like the Open FAIR standard from The Open Group, and that gets a bit more prescriptive and directly implementable, but even then there’s a fair amount of scoping and education that needs to go on. So the short answer to the question is, there’s no shortage of documented guidance out there, but there are, however, still a lot of open-ended questions and a lot of misunderstanding about how to use these.

What are some of the limitations that are hindering risk standards then and what needs to be added?

I don’t think it’s necessarily a matter of needing to fix or change the standards themselves. Where we are is still a fairly prototypical stage: we have guidance on how to get started and how to structure things, but we don’t necessarily have a really good understanding across the industry about how to best make use of it. Complicating things further is an open question about just how much we need to be doing: how much value can we get from these, and do we need to adopt some of these practices? If you look at all of the organizations that have had major breaches over the past few years, all of them, presumably, were doing some form of risk management—probably qualitative Risk Management—and yet they still had all these breaches anyway. Inevitably, they were compliant with any number of security standards along the way, too, and yet bad things happened. The issues lie less with the standards themselves than with how organizations are using them.

Last fall The Open Group fielded an IT Risk Management survey that found that many organizations are struggling to understand and create business value for Risk Management. What you’re saying really echoes those results. How much of this has to do with problems within organizations themselves and not having a better understanding of Risk Management?

I think that’s definitely the case. A lot of organizations are making bad decisions in many areas right now, and they don’t know why, or aren’t even aware of it, until it’s too late. As an industry we’ve got this compliance problem where you can do a lot of work and demonstrate completion or compliance with checklists and still be compromised, still have massive data breaches. I think there’s a significant cognitive dissonance that exists, and I think it’s because we’re still in a significant transitional period overall.

Security should really have never been a standalone industry or a standalone environment. Security should have just been one of those attributes of the operating system or operating environments from the outset. Unfortunately, because of the dynamic nature of IT (and we’re still going through what I refer to as this Digital Industrial Revolution that’s been going on for 40-50 years), everything’s changing everyday. That will be the case until we hit a stasis point that we can stabilize around and grow a generation that’s truly native with practices and approaches and with the tools and technologies underlying this stuff.

An analogy would be to look at Telecom. Look at Telecom in the 1800s when they were running telegraph poles and running lines along railroad tracks. You could just climb a pole, put a couple alligator clips on there and suddenly you could send and receive messages, too, using the same wires. Now we have buried lines, we have much greater integrity of those systems. We generally know when we’ve lost integrity on those systems for the most part. It took 100 years to get there. So we’re less than half that way with the Internet and things are a lot more complicated, and the ability of an attacker, one single person spending all their time to go after a resource or a target, that type of asymmetric threat is just something that we haven’t really thought about and engineered our environments for over time.

I think it’s definitely challenging. But ultimately Risk Management practices are about making better decisions. How do we put the right amount of time and energy into making these decisions and providing better information and better data around those decisions? That’s always going to be a hard question to answer. Thinking about where the standards really could stand to improve, it’s helping organizations, helping people, understand the answer to that core question—which is, how much time and energy do I have to put into this decision?

When I did my graduate work at George Washington University a number of years ago, one of the courses we had to take went through decision management as a discipline. We would run through things like decision trees. I went back to the executives at the company I was working at and asked them, ‘How often do you use decision trees to make your investment decisions?’ And they just looked at me funny and said, ‘Gosh, we haven’t heard of or thought about decision trees since grad school.’ In many ways, a lot of the formal Risk Management stuff that we talk about and drill into—especially when you get into the quantitative risk discussions—goes down the same route. It’s great academically, it’s great in theory, but it’s not the kind of thing you need to pull out and use for every single decision or every single discussion. Which, by the way, is where the FAIR taxonomy within Open FAIR provides an interesting and very valuable breakdown point. There are many cases where just using the taxonomy to break down a problem and think about it a little bit is more than sufficient, and you don’t have to go the next step of populating it with quantitative estimates and doing a full quantitative FAIR risk analysis. You can use it qualitatively and improve the overall quality and defensibility of your decisions.
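The qualitative use of the FAIR taxonomy described above can be sketched in a few lines. The factor names follow the Open FAIR decomposition of risk into loss event frequency and loss magnitude, but the rating scale, the `QualitativeScenario` class and the min-rating combination rule are illustrative assumptions, not part of the standard.

```python
# Sketch of using the FAIR factor taxonomy qualitatively: break a risk
# scenario into named factors with rough ratings, without committing to
# quantitative estimates.
from dataclasses import dataclass

RATINGS = ("very low", "low", "medium", "high", "very high")

@dataclass
class QualitativeScenario:
    description: str
    threat_event_frequency: str  # how often attempts are expected
    vulnerability: str           # likelihood an attempt succeeds
    loss_magnitude: str          # size of the loss if it does

    def __post_init__(self):
        for rating in (self.threat_event_frequency,
                       self.vulnerability, self.loss_magnitude):
            if rating not in RATINGS:
                raise ValueError(f"rating must be one of {RATINGS}")

    def loss_event_frequency(self) -> str:
        # In the FAIR decomposition, loss event frequency is driven by
        # threat event frequency and vulnerability. Taking the lower of
        # the two ratings is a coarse qualitative stand-in for the
        # quantitative combination.
        tef = RATINGS.index(self.threat_event_frequency)
        vuln = RATINGS.index(self.vulnerability)
        return RATINGS[min(tef, vuln)]

scenario = QualitativeScenario(
    description="Brute-force attack on external login portal",
    threat_event_frequency="high",
    vulnerability="low",
    loss_magnitude="high",
)
```

Even without numbers, filling in the three fields forces the conversation Tomhave describes: frequent attempts against a well-defended interface yield a low loss event frequency, which can then be weighed against the loss magnitude.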

How mature are most organizations in their understanding of risk today, and what are some of the core reasons they’re having such a difficult time with Risk Management?

The answer to that question varies to a degree by industry. Industries like financial services just seem to deal with this stuff better for the most part, but then if you look at the multibillion-dollar write-offs at JP Morgan Chase, you think maybe they don’t understand risk after all. I think most large enterprises have at least some people in the organization with a nominal understanding of Risk Management and risk assessment and how that factors into making good decisions.

That doesn’t mean that everything’s perfect. Look at the large enterprises that had major breaches in 2014 and 2013 and clearly you can look at those and say ‘Gosh, you guys didn’t make very good decisions.’ Home Depot is a good example or even the NSA with the Snowden stuff. In both cases, they knew they had an exposure, they had done a reasonable job of risk management, they just didn’t move fast enough with their remediation. They just didn’t get stuff in place soon enough to make a meaningful difference.

For the most part, larger enterprises or organizations will have better facilities and capabilities around Risk Management, but they may have challenges with velocity, in terms of being able to put issues to rest in a timely fashion. Move down to other sectors and look at retail: retailers continue to have issues with cardholder data, and that’s where the card brands are asserting themselves more aggressively. Look at healthcare: healthcare organizations, for one thing, simply don’t have the budget or the control to make a lot of changes, and they’re well behind the curve in terms of protecting patient records and data. Then look at other spaces like SMBs, which make up more than 90 percent of U.S. firms, or the education space, where they simply will never have the kinds of resources to do everything that’s expected of them.

I think we have a significant challenge here – a lot of these organizations will never have the resources for adequate Risk Management in-house, and they will always be tremendously resource-constrained, preventing them from doing all that they really need to do. The challenge for them is: how do we provide answers, tools or methods that they can use without a lot of expertise, but that guide them toward making better decisions overall? Even if the decision is, ‘Why are we doing any of this IT stuff at all, when we could simply outsource it to a service that specializes in my industry or my business size, and that can take on some of the risk I wasn’t even aware of?’

It ends up being a very basic educational awareness problem in many regards, and many of these organizations don’t seem to be fully aware of the type of exposure and legal liability that they’re carrying at any given point in time.

One of the other IT Risk Management Survey findings was that where the Risk Management function sits in organizations is pretty inconsistent—sometimes IT, sometimes risk, sometimes security—is that part of the problem too?

Yes and no—it’s a hard question to answer directly because we have to drill in on what kind of Risk Management we’re talking about. Because there’s enterprise Risk Management reporting up to a CFO or CEO, and one could argue that the CEO is doing Risk Management.

One of the problems that we historically run into, especially from a bottom-up perspective, is a lot of IT Risk Management people or IT Risk Management professionals or folks from the audit world have mistakenly thought that everything should boil down to a single, myopic view of ‘What is risk?’ And yet it’s really not how executives run organizations. Your chief exec, your board, your CFO, they’re not looking at performance on a single number every day. They’re looking at a portfolio of risk and how different factors are balancing out against everything. So it’s really important for folks in Op Risk Management and IT Risk Management to really truly understand and make sure that they’re providing a portfolio view up the chain that adequately represents the state of the business, which typically will represent multiple lines of business, multiple systems, multiple environments, things like that.

I think one of the biggest challenges we run into is an ill-conceived desire to provide value that’s oversimplified. We end up hyper-aggregating results and data, and suddenly everything boils down to a stop light: IT today is either red, yellow or green. That’s not really particularly informative, and it doesn’t help you make better decisions. How can I make better investment decisions around IT systems if all I know is that today things are yellow? I think it comes back to the educational awareness topic. Maybe people aren’t always best placed within organizations, but really it’s more about how they’re representing the data and whether they’re getting it into the right format, one that’s most accessible to that audience.

What should organizations look for in choosing risk standards?

I usually get a variety of questions and they’re all about risk assessment—‘Oh, we need to do risk assessment’ and ‘We hear about this quant risk assessment thing that sounds really cool, where do we get started?’ Inevitably, it comes down to: what does your actual Risk Management process look like? Do you actually have a context for making decisions, understanding the business context, etc.? And the answer more often than not is no, there is no actual Risk Management process. I think where people can really leverage the standards is in understanding what the overall Risk Management process looks like or can look like, and in constructing it, making sure they identify the right stakeholders overall and then starting to drill down to specifics around impact analysis, actual risk analysis, and remediation and recovery. All of these are important components, but they have to exist within the broader context, and that broader context has to functionally plug into the organization in a meaningful, measurable manner. I think that’s really where a lot of the confusion ends up occurring. ‘Hey, I went to this conference, I heard about this great thing, how do I make use of it?’ People may go through certification training, but if they don’t know how to go back to their organization and put it into practice—not just on a small-scale decision basis, but actually plugging it into a larger Risk Management process—it will never really demonstrate a lot of value.

The other piece of the puzzle that goes along with this, too, is you can’t just take these standards and implement them verbatim; they’re not designed to do that. You have to spend some time understanding the organization, the culture of the organization and what will work best for that organization. You have to really get to know people and use these things to really drive conversations rather than hoping that one of these risk assessments results will have some meaningful impact at some point.

How can organizations get more value from Risk Management and risk standards?

Starting with the latter: the value of the Risk Management standards is that you don’t have to start from scratch or reinvent the wheel. There are, in fact, very consistent and well-conceived approaches to structuring Risk Management programs and conducting risk assessment and analysis. That’s where the power of the standards comes from: they establish a template or guideline for structuring things.

The challenge, of course, is that it has to be well-grounded within the organization. In order to get value from a Risk Management program, it has to be part of daily operations. You have to plug it into things like procurement cycles and other similar decision cycles so that people aren’t just making gut decisions based on whatever their existing biases are.

One of my favorite examples is password complexity requirements. If you look back at the ‘best practice’ standards requirements over the years, going all the way back to the Orange Book in the 80s or the Rainbow Series which came out of the federal government, they tell you ‘oh, you have to have 8-character passwords and they have to have upper case, lower case, numbers, special characters, etc.’ The funny thing is that while that was probably true in 1985, it is probably less true today. When we actually do risk analysis to look at the problem and understand what the actual scenario is that we’re trying to guard against, password complexity ends up causing more problems than it solves, because what we’re really protecting against is a brute force attack against a log-in interface, or guessability on a log-in interface. Or maybe we’re trying to protect against a password database being compromised and getting decrypted. Well, password complexity has nothing to do with how that data is protected in storage. So why would we look at something like password complexity requirements as some sort of control against compromise of a database that may or may not be encrypted?
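The distinction above can be made concrete with some back-of-the-envelope math. The following sketch uses purely illustrative numbers (the guess rates are assumptions, not measurements): against a rate-limited log-in interface, even a modest password takes centuries to exhaust, while the same password falls in seconds to an offline attacker cracking a stolen, fast-hashed database—which is why complexity rules don’t address the storage-compromise scenario.

```python
def guesses_to_exhaust(alphabet_size: int, length: int) -> int:
    """Size of the keyspace a brute-force attacker must search."""
    return alphabet_size ** length

def years_to_exhaust(keyspace: int, guesses_per_second: float) -> float:
    """Worst-case time to try every candidate at a given guess rate."""
    return keyspace / guesses_per_second / (60 * 60 * 24 * 365)

keyspace = guesses_to_exhaust(26, 8)  # 8 lowercase letters, no 'complexity'

# Online guessing against a log-in form: lockouts and rate limiting cap
# the attacker at, say, 10 guesses/second (an assumed figure).
online_years = years_to_exhaust(keyspace, 10)

# Offline cracking of a leaked database hashed with a fast algorithm:
# commodity GPUs manage billions of guesses/second (also an assumed figure).
offline_years = years_to_exhaust(keyspace, 10e9)
```

Under these assumptions the online attack needs hundreds of years while the offline attack finishes in under a minute, so the effective control in the second scenario is how the database is hashed and protected, not the password policy.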

This is where Risk Management practices come into play because you can use Risk Management and risk assessment techniques to look at a given scenario—whether it be technology decisions or security control decisions, administrative or technical controls—we can look at this and say what exactly are we trying to protect against, what problem are we trying to solve? And then based on our understanding of that scenario, let’s look at the options that we can apply to achieve an appropriate degree of protection for the organization.

That ultimately is what we should be trying to achieve with Risk Management. Unfortunately, that’s usually not what we see implemented. A lot of the time, what’s described as risk management is really just an extension of audit practices and issuing a bunch of surveys, questionnaires, asking a lot of questions but never really putting it into a proper business context. Then we see a lot of bad practices applied, and we start seeing a lot of math-magical practices come in where we take categorical data—high, medium, low, more or less, what’s the impact to the business? A lot, a little—we take these categorical labels and suddenly start assigning numerical values to them and doing arithmetic calculations on them, and this is a complete violation of statistical principles. You shouldn’t be doing that at all. By definition, you don’t do arithmetic on categorical data, and yet that’s what a lot of these alleged Risk Management and risk assessment programs are doing.
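The ‘math-magical’ problem described above can be illustrated in a few lines. This is a hedged sketch with made-up numbers: the first half shows the arithmetic-on-labels mistake, and the second shows one defensible alternative—working with quantitative ranges and simulating—rather than any specific standard’s prescribed method.

```python
import random

# Ordinal labels carry no arithmetic meaning: 'high' times 'medium' is
# undefined, and mapping labels to 3 and 2 manufactures false precision.
LABELS = {"low": 1, "medium": 2, "high": 3}
bogus_score = LABELS["high"] * LABELS["medium"]  # 6 of... what unit?

def simulate_annual_loss(freq_range, loss_range, trials=10_000, seed=42):
    """Estimate expected annual loss from quantitative range estimates.

    freq_range: (min, max) loss events per year
    loss_range: (min, max) dollars per event
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        events = rng.uniform(*freq_range)  # sampled events per year
        loss = rng.uniform(*loss_range)    # sampled dollars per event
        total += events * loss
    return total / trials

# Illustrative inputs: 0.1-2 events/year, $10k-$250k per event.
est = simulate_annual_loss((0.1, 2.0), (10_000, 250_000))
```

The simulated output is a dollar figure with real units that can be compared against risk tolerance, which is exactly what a product of category labels can never give you.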

I think Risk Management gets a bad rap as a result of these poor practices. Conducting a survey, asking questions is not a risk assessment. A risk assessment is taking a scenario, looking at the business impact analysis for that scenario, looking at the risk tolerance, what the risk capacity is for that scenario, and then looking at what the potential threats and weaknesses are within that scenario that could negatively impact the business. That’s a risk assessment. Asking people a bunch of questions about ‘Do you have passwords? Do you use complex passwords? Have you hardened the server? Are there third party people involved?’ That’s interesting information but it’s not usually reflective of the risk state and ultimately we want to find out what the risk state is.

How do you best determine that risk state?

If you look at any of the standards—and again, this is where the standards do provide some value—if you look at what a Risk Management process is and the steps involved in it—take, for example, ISO 31000—step one is establishing context, which includes establishing potential business impact or business importance and business priority for applications and data, as well as what the risk tolerance and risk capacity are for a given scenario. That’s your first step. Then the risk assessment step takes that data and does additional analysis around the scenario.

In the technical context, that’s looking at how secure is this environment, what’s the exposure of the system, who has access to it, how is the data stored or protected? From that analysis, you can complete the assessment by saying ‘Given that this is a high value asset, there’s sensitive data in here, but maybe that data is strongly encrypted and access controls have multiple layers of defense, etc., the relative risk here of a compromise or attack being successful is fairly low.’ Or ‘We did this assessment, and we found in the application that we could retrieve data even though it was supposedly stored in an encrypted state, so we could end up with a high risk statement around the business impact, we’re looking at material loss,’ or something like that.

Pulling all of these pieces together is really key, and most importantly, you cannot skip over context setting. If you don’t ever do context setting, and establish the business importance, nothing else ends up mattering. Just because a system has a vulnerability doesn’t mean that it’s a material risk to the business. And you can’t even know that unless you establish the context.
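The context-first flow described above can be sketched in code. All names here are hypothetical illustrations of the shape of the process, not any standard’s actual schema: the point is that the same technical finding yields a different risk statement depending on the business context established first.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Step one: business context, set before any technical analysis."""
    asset: str
    business_impact: str   # e.g. "material loss" or "negligible"
    risk_tolerance: str    # e.g. "low" or "moderate"

@dataclass
class Finding:
    """A technical observation from the assessment step."""
    description: str
    exploitable: bool

def assess(context: Context, findings: list[Finding]) -> str:
    """A vulnerability only becomes a material risk in its business context."""
    if not any(f.exploitable for f in findings):
        return "low"
    # Identical technical findings, different conclusions per context.
    return "high" if context.business_impact == "material loss" else "moderate"
```

Skipping the `Context` step makes `assess` unanswerable, which mirrors the point above: a vulnerability by itself tells you nothing about materiality to the business.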

In terms of getting started, leveraging the standards makes a lot of sense, but not from the perspective of a compliance checklist to use verbatim. You have to use them as a structured process; you have to get some training and get educated on how these things work and what requirements you have to meet, and then do what makes sense for the organization. At the end of the day, there’s no Easy Button for these things: you have to invest some time and energy and build something that makes sense and is functional for your organization.

To download the IT Risk Management survey summary, please click here.

Former Gartner analyst Ben Tomhave (MS, CISSP) is Security Architect for a leading online education organization where he is putting theories into practice. He holds a Master of Science in Engineering Management (Information Security Management concentration) from The George Washington University, and is a member and former co-chair of the American Bar Association Information Security Committee, senior member of ISSA, former board member of the Northern Virginia OWASP chapter, and member and former board member for the Society of Information Risk Analysts. He is a published author and an experienced public speaker, including recent speaking engagements with the RSA Conference, the ISSA International Conference, Secure360, RVAsec, RMISC, and several Gartner events.

Join the conversation! @theopengroup #ogchat #ogBWI

Leave a comment

Filed under Cybersecurity, RISK Management, Security, Security Architecture, Standards, The Open Group Baltimore 2015, Uncategorized

Managing Your Vulnerabilities: A Q&A with Jack Daniel

By The Open Group

With hacks and security breaches becoming more prevalent everyday, it’s incumbent on organizations to determine the areas where their systems may be vulnerable and take actions to better handle those vulnerabilities. Jack Daniel, a strategist with Tenable Network Security who has been active in securing networks and systems for more than 20 years, says that if companies start implementing vulnerability management on an incremental basis and use automation to help them, they can hopefully reach a point where they’re not constantly handling vulnerability crises.

Daniel will be speaking at The Open Group Baltimore event on July 20, presenting on “The Evolution of Vulnerability Management.” In advance of that event, we recently spoke to Daniel to get his perspective on hacker motivations, the state of vulnerability management in organizations today, the human problems that underlie security issues and why automation is key to better handling vulnerabilities.

How do you define vulnerability management?

Vulnerability detection is where this started. Years ago, news would break of some vulnerability, some weakness in a system—a fault in the configuration or a software bug that allows bad things to happen. We used to do a really hit-or-miss job of it; it didn’t have to be rushed at all. Depending on where you were or what you were doing, you might not be targeted—it could take months after something was released before bad people would start doing things with it. As criminals discovered there was money to be made in exploiting vulnerabilities, the attackers became motivated by more than just notoriety. The early hacker scene that was disruptive or did criminal things was largely motivated by notoriety. As people realized they could make money, it became a problem, and that’s when we turned to management.

You have to manage finding vulnerabilities, detecting vulnerabilities and resolving them, which usually means patching but not always. There are a lot of ways to resolve or mitigate without actually patching, but the management aspect is discovering all the weaknesses in your environment—and that’s a really broad brush, depending on what you’re worried about. That could be you’re not compliant with PCI if you’re taking credit cards or it could be that bad guys can steal your database full of credit card numbers or intellectual property.

It’s finding all the weaknesses in your environment, the vulnerabilities, tracking them, resolving them and then continuing to track as new ones appear to make sure old ones don’t reappear. Or if they do reappear, what in your corporate process is allowing bad things to happen over and over again? It’s continuously doing this.

The pace of bad things has accelerated, the motivations of the actors have forked in a couple of directions, and to do a good job of vulnerability management really requires gathering data of different qualities and being able to make assessments about it and then applying what you know to what’s the most effective use of your resources—whether it’s time or money or employees to fix what you can.

What are the primary motivations you’re seeing with hacks today?

They fall into a couple big buckets, and there are a whole bunch of them. One common one is financial—these are the people that are stealing credit cards, stealing credentials so they can do bank wire fraud, or some other way to get at money. There are a variety of financial motivators.

There are also some others, depending on who you are. There’s the so-called ‘Hacktivist,’ which used to be a thing in the early days of hacking but has now become more widespread. These are folks like the Syrian Electronic Army, or various Turkish groups that through the years have done website defacements. These people are not trying to steal money, they’re trying to embarrass you, they’re trying to promote a message. It may be, as with the Syrian Electronic Army, that they’re trying to support the ruler of whatever’s left of Syria. So there are political motivations. Anonymous did a lot of destructive things—or people calling themselves ‘Anonymous,’ which is a whole other conversation—but people acting under the banner of Anonymous as hacktivists struck out at corporations they thought were unjust or unfair, or did political things.

Intellectual property theft would be the third big one, I think. Generally the finger is pointed at China, but it’s unfair to say they’re the only ones stealing trade secrets. People within your own country or your own market or region are stealing trade secrets continuously, too.

Those are the three big ones—money, hacktivism and intellectual property theft. It trickles down. One of the things that has come up more often over the past few years is that people get attacked because of who they’re connected to. It’s a smaller portion of it and one that’s overlooked, but it’s a message that people need to hear. For example, in the Target breach, it is claimed that the initial entry point was through the heating and air conditioning vendor’s computer systems and its access to the HVAC systems inside a Target facility, and, from there, they were able to get through. There are other stories where organizations have been targeted because of who they do business with. That’s usually a case of trying to attack somebody that’s well-secured, where there’s no easy way in, so you find out who does their heating and air-conditioning or who manages their remote data centers or something, and you attack those people and then come in.

How is vulnerability management different from risk management?

It’s a subset of risk management. Risk management, when done well, gives a scope of a very large picture and helps you drill down into the details, but it has to factor in things above and beyond the more technical details of what we more typically think of as vulnerability management. Certainly they work together—you have to find what’s vulnerable and then you have to make assessments as to how you’re going to address your vulnerabilities, and that ideally should be done in a risk-based manner. Because as much as all of the reports from Verizon Data Breach Report and others say you have to fix everything, the reality is that not only can we not fix everything, we can’t fix a lot immediately so you really have to prioritize things. You have to have information to prioritize things, and that’s a challenge for many organizations.

Your session at The Open Group Baltimore event is on the evolution of vulnerability management—where does vulnerability management stand today and where does it need to go?

One of my opening slides sums it up—it used to be easy, and it’s not anymore. It’s like a lot of other things in security, it’s sort of a buzz phrase that’s never really taken off like it needs to at the enterprise level, which is as part of the operationalization of security. Security needs to be a component of running your organization and needs to be factored into a number of things.

The information security industry has a challenge and a history of being a department in the middle and being obstructionist, which I think is well deserved. But the real challenge is to cooperate more. We have to get a lot more information, which means working well with the rest of the organization, particularly networking and systems administrators, having conversations with them about the data and the environment, and sharing what we discover as problems without being the judgmental know-it-all security people. That is our stereotype. The adversaries are often far more cooperative than we are. In a lot of criminal forums, people will be fairly supportive of other people in their community—they’ll go up to the trade-secret level and stop—but if somebody’s not cutting into their profits, rumor is these people are cooperating and collaborating.

Within an organization, you need to work cross-organizationally. Information sharing is a very real piece of it. That’s not necessarily vulnerability management, but when you step into risk analysis and how you manage your environment, knowing what vulnerabilities you have is one thing, but knowing what vulnerabilities people are actually going to do bad things to requires information sharing, and that’s an industry wide challenge. It’s a challenge within our organizations, and outside it’s a real challenge across the enterprise, across industry, across government.

Why has that happened in the Security industry?

One is the stereotype—a lot of teams are very siloed, a lot of teams have their fiefdoms—that’s just human nature.

Another problem that everyone in security and technology faces is that we talk to all sorts of people and have all sorts of great conversations, learn amazing things, see amazing things and a lot of it is under NDA, formal or informal NDAs. And if it weren’t for friend-of-a-friend contacts a lot of information sharing would be dramatically less. A lot of the sanitized information that comes out is too sanitized to be useful. The Verizon Data Breach Report pointed out that there are similarities in attacks but they don’t line up with industry verticals as you might expect them to, so we have that challenge.

Another serious challenge we have in security, especially in the research community, is total distrust of the government. The Snowden revelations have severely damaged the technology and security community’s faith in the government and its willingness to cooperate with it. Further damaging that are the discussions about criminalizing many security tools—because the people in Congress don’t understand these things. We have a president who claims to be technologically savvy, and he is, more than any before him, but he still doesn’t get it, and he’s got advisors who don’t get it. So we have a great distrust of the government, which has been earned, despite the fact that any one of us in the industry knows folks at various agencies—whether the FBI or intelligence agencies or the military—who are fantastic people: brilliant, hardworking, patriotic. But the entities themselves are political entities, and that causes a lot of distrust in information sharing.

And there are just a lot of people who have the idea that they want proprietary information. This is not unique to security. There are a couple of different types of managers—there are people in organizations who strive to make themselves irreplaceable. As a manager, you’ve got to get those people out of your environment because they’re just poisonous. There are other people who strive to make it so that they can walk away at any time and it will be a minor inconvenience for someone to pick up the notes and run. Those are the type of people you should hang onto for dear life because they share information, they build knowledge, they build relationships. That’s just human nature. In security I don’t think there are enough people who are about building those bridges, building those communications paths, sharing what they’ve learned and trying to advance the cause. I think there are still too many who hoard information as a tool or a weapon.

Security is fundamentally a human problem amplified by technology. If you don’t address the human factors in it, you can have technological controls, but it still has to be managed by people. Human nature is a big part of what we do.

You advocate for automation to help with vulnerability management. Can automation catch the threats when hackers are becoming increasingly sophisticated and use bots themselves? Will this become a war of bot vs. bot?

A couple of points about automation. Our adversaries are using automation against us. We need to use automation to fight them, and we need to use as much automation as we can rely on to improve our situation. But at some point, we need smart people working on hard problems, and that’s not unique to security at all. The more you automate, at some point in time you have to look at whether your automation processes are improving things or not. If you’ve ever seen a big retailer or grocery store that has a person working full-time to manage the self-checkout line, that’s failed automation. That’s just one example of failed automation. Or if there’s a power or network outage at a hospital where everything is regulated and medications are regulated and then nobody can get their medications because the network’s down. Then you have patients suffering until somebody does something. They have manual systems that they have to fall back on and eventually some poor nurse has to spend an entire shift doing data entry because the systems failed so badly.

Automation doesn’t solve the problems—you have to automate the right things in the right ways, and the goal is to do the menial tasks in an automated fashion so you have to spend less human cycles. As a system or network administrator, you run into the same repetitive tasks over and over and you write scripts to do it or buy a tool to automate it. They same applies here –you want to filter through as much of the data as you can because one of the things that modern vulnerability management requires is a lot of data. It requires a ton of data, and it’s very easy to fall into an information overload situation. Where the tools can help is by filtering it down and reducing the amount of stuff that gets put in front of people to make decisions about, and that’s challenging. It’s a balance that requires continuous tuning—you don’t want it to miss anything so you want it to tell you everything that’s questionable but it can’t throw too many things at you that aren’t actually problems or people give up and ignore the problems. That was allegedly part of a couple of the major breaches last year. Alerts were triggered but nobody paid attention because they get tens of thousands of alerts a day as opposed to one big alert. One alert is hard to ignore—40,000 alerts and you just turn it off.

What’s the state of automated solutions today?

It’s pretty good if you tune it, but it takes maintenance. There isn’t an Easy Button, to use the Staples tagline. There’s not an Easy Button, and anyone promising an Easy Button is probably not being honest with you. But if you understand your environment and tune the vulnerability management and patch management tools (and a lot of them are administrative tools), you can automate a lot of it and you can reduce the pain dramatically. It does require a couple of very hard first steps. The first step in all of it is knowing what’s in your environment and knowing what’s crucial in your environment and understanding what you have because if you don’t know what you’ve got, you won’t be able to defend it well. It is pretty good but it does take a fair amount of effort to get to where you can make the best of it. Some organizations are certainly there, and some are not.

What do organizations need to consider when putting together a vulnerability management system?

One word: visibility. They need to understand that they need to be able to see and know what’s in the environment—everything that’s in their environment—and get good information on those systems. There needs to be visibility into a lot of systems that you don’t always have good visibility into. That means your mobile workforce with their laptops, that means mobile devices that are on the network, which are probably there somewhere whether they belong there or not, that means understanding what’s on your network that’s not being managed actively, like Windows systems that might not be in Active Directory or Red Hat systems that aren’t being managed by Satellite or whatever systems you use to manage them.

Knowing everything that’s in the environment and its roles in the system—that’s a starting point. Then understanding what’s critical in the environment and how to prioritize that. The first step is really understanding your own environment and having visibility into the entire network—and that can extend to Cloud services if you’re using a lot of Cloud services. One of the conversations I’ve been having lately, since the latest Akamai report, was about IPv6. Most Americans are ignoring it, even at the corporate level, and a lot of folks think you can still ignore it because we’re still routing most of our traffic over the IPv4 protocol. But IPv6 is active on just about every network out there. It’s just whether or not we actively measure and monitor it. The Akamai report said something that a lot of folks have been saying for years, and that’s that this is really a problem. Even though adoption is pretty low, what you see if you start monitoring for it is people communicating over IPv6, whether intentionally or unintentionally. Often unintentionally, because everything’s enabled, so there’s often a whole swath of your network that people are ignoring. And you can’t have those huge blind spots in the environment, you just can’t. The vulnerability management program has to take into account that sort of overall view of the environment. Then once you’re there, you need a lot of help to solve the vulnerabilities, and that’s back to the human problem.

What should Enterprise Architects look for in an automated solution?

It really depends on the corporate need. They need to figure out whether or not the systems they’re looking at are going to find most or all of their network and discover all of the weaknesses, and then help them prioritize those. For example, can your systems do vulnerability analysis on newly discovered systems with little or no input? Can you automate detection? Can you automate confirmation of findings somehow? Can you interact with other systems? There’s a piece, too—what does the rest of your environment look like? Are there ways into it? Does your vulnerability management system work with or understand all the things you’ve got? What if you have some unique network gear whose vulnerabilities your vulnerability management system isn’t going to tell you about? There are German companies that like to use operating systems other than Windows and garden-variety Linux distributions. Does it work in your environment, will it give you good coverage in your environment, and can it take a lot of the mundane out of it?

How can companies maintain Boundaryless Information Flow™–particularly in an era of the Internet of Things–but still manage their vulnerabilities?

The challenge is a lot of people push back against high information flow because they can’t make sense of it; they can’t ingest the data, they can’t do anything with it. It’s the challenge of accepting and sharing a lot of information. It doesn’t matter whether it’s vulnerability management or log analysis or patch management or systems administration or backup or anything—the challenge is that networks have systems that share a lot of data, but until you add context, it’s not really information. What we’re interested in in vulnerability management is different from what your automated backup is interested in. The challenge is having systems that can share information outbound, share information inbound and then act rationally on only that which is relevant to them. That’s a real challenge because information overload is a problem that people have been complaining about for years, and it’s accelerating at a stunning rate.

You say Internet of Things, and I get a little frustrated when people treat that as a monolith, because at one end an Internet-enabled microwave or stove has one set of challenges, and they’re built on garbage commodity hardware with no maintenance ability at all. There are other things that people consider Internet of Things because they’re Internet enabled and they’re running Windows or a more mature Linux stack that has full management and somebody’s managing it. So there’s a huge gap between the managed IoT and the unmanaged, and the unmanaged is just adding low-power machines in environments that will amplify things like distributed denial of service (DDoS). As it is, a lot of consumers have home routers that are being used to attack other people and do DDoS attacks. A lot of the commercial stuff is being cleaned up, but a lot of the inexpensive home routers that people have are being used, and if those are misused or misconfigured, or attacked by worms that can change their settings, everything on the network can end up participating in attacks.

The thing with the evolution of vulnerability management is that we’re trying to drive people to a continuous monitoring situation. That’s where the federal government has gone, that’s where a lot of industries are, and it’s a challenge to go from infrequent or even frequent big scans to watching things continuously. The key is to take incremental steps. Instead of having a big, massive vulnerability project every quarter or every month, the goal is to get down to where it’s part of the routine and you’re taking small remediation measures on a daily or regular basis. There will still be times when Microsoft or Oracle come out with a big patch that requires a bigger tool-up, but you need to do this continuously and reach the point where you do small pieces of the task continuously rather than one big task. The goal is to get to where you’re blowing out birthday candles rather than putting out forest fires.

Jack Daniel, a strategist at Tenable Network Security, has over 20 years of experience in network and system administration and security, and has worked in a variety of practitioner and management positions. A technology community activist, he supports several information security and technology organizations. Jack is a co-founder of Security BSides, serves on the boards of three Security BSides non-profit corporations, and helps organize Security BSides events. Jack is a regular, featured speaker at ShmooCon, SOURCE Boston, DEF CON, RSA and other marquee conferences. Jack is a CISSP, holds CCSK, and is a Microsoft MVP for Enterprise Security.

Join the conversation – @theopengroup #ogchat #ogBWI

1 Comment

Filed under Boundaryless Information Flow™, Internet of Things, RISK Management, Security, the open group, The Open Group Baltimore 2015

The Open Group Healthcare Forum Publishes First Whitepaper and Announces New Member

By The Open Group

The Open Group Healthcare Forum has published its first whitepaper, “Enhancing Health Information Exchange with the FHIM,” which examines the Federal Health Information Model (FHIM) and its efforts to bring semantic interoperability to the Healthcare industry.

The document was developed in response to a 2014 request to the Healthcare Forum made by the Federal Health Architecture program (FHA), an E-Government Line of Business initiative managed by ONC. The Forum was asked to evaluate the FHIM and to detail its potential usefulness to the wider Healthcare ecosystem. In response, The Healthcare Forum developed a whitepaper that highlights the strengths of the FHIM and the challenges it faces. Contributors came from organizations based across the globe including HP (US), Dividend Group (Canada), Sykehuspartner (Norway), and Philips Medical Systems (Germany).

The FHIM is a key component of a multimillion dollar effort to enable data sharing across the Healthcare enterprise. It has relevance worldwide, as US federal agencies are among the leading markets for healthcare technology and processes. By identifying examples of FHIM adoption, understanding barriers to its adoption, and relating the FHIM to other major efforts to achieve Healthcare interoperability, the white paper reflects The Healthcare Forum’s support of Boundaryless Information Flow™. The Forum continues to be engaged in this important work and expects to publish new insights in the second white paper in this series, planned for late 2015. The full whitepaper can be downloaded here.

At the same time, The Open Group Healthcare Forum has also announced The Office of the National Coordinator for Health Information Technology (ONC, part of the U.S. Department of Health and Human Services) as its latest key member.

FHA Director Gail Kalbfleisch commented on the announcement, “We look forward to this membership opportunity with the Healthcare Forum, and becoming a part of the synergy that comes from collaborating with other members.”

Allen Brown, President & CEO of The Open Group also welcomed the news, “We are delighted to welcome the ONC to The Open Group Healthcare Forum following the evaluation of the FHIM by our members. The efficient and effective flow of secure healthcare information through healthcare systems is a critical goal of all who are engaged in that industry and is core to the vision of The Open Group, which is Boundaryless Information Flow™, achieved through global interoperability in a secure, reliable and timely manner.”


Update on The Open Group France: A Conversation with Eric Boulay

By The Open Group

Following this spring’s European Summit, we reached out to Eric Boulay, our French partner, to catch up on the latest goings on at The Open Group France. Boulay, who is also the CEO of our French affiliate, Arismore, provided us an update on affiliate growth and also discussed the architectural issues currently facing French companies.

Update

As Eric points out in the interview below, digital transformation is one of the largest trends French companies are grappling with. To provide some guidance, The Open Group France has recently published a new whitepaper entitled “Key Issues and Skills in Digital Transformation.” In addition, the organization uses a new publication, “TOGAF En Action” to organize meetings and share TOGAF® case studies. The TOGAF 9.1 Pocket Guide has also recently been translated into French, and a French TOGAF app is now available for iPhone users with an Android version in the works.

One new member, Adservio, has joined The Open Group France in the past quarter, and three memberships were recently renewed. The Open Group France will host an Architecture Practitioners Conference in Paris on June 17th.

Q&A

What are some of the latest goings on with The Open Group France?

France is accelerating in digital transformation, so now is a good time to speed up architectural discipline. Training is doing well, and consulting and services in TOGAF®, an Open Group standard, and Enterprise Architecture are doing well because there is a move toward digital transformation. In the France architecture forum, we have meetings every six weeks to share activities and case studies. We are currently raising awareness of The Open Group IT4IT™ Forum. It’s not well-known in France today; it’s just starting up and is a brand new subject.

What technology trends are Enterprise Architects in Europe grappling with today?

Definitely there is more interest in closing the gap between strategy, business and information systems. There are two topics. One is what we used to call enterprise IT architecture: IT guys are working to be more and more effective in managing IT assets, which has been the same story for a while. But what is emerging right now, due to digital transformation, is a strong need to close the gap between enterprise strategy and information systems. This means that we are working, for example, with ArchiMate® to better understand business motivations and to go from business motivations to a roadmap to build next-generation information systems. So this is a new topic, because now we can say that Enterprise Architecture is no longer just an IT topic; it’s now an enterprise topic. Enterprise Architects are more and more in the right position to work on both the business and IT sides. This is a hot topic, and France is participating in the Information Architecture Work Group to propose new guidance and prescriptions. There is also some work on the table with TOGAF to better close the gap between strategy and IT. This is exactly what we have to do better in The Open Group Architecture Forum, and we’re working on it.

What are some of the things that can be done to start closing that gap?

First is to speak the business language. What we used to do is work closely with the business guys. We have to use, for example, the ArchiMate® language, but not talk about ArchiMate itself.

For example, there was an international account we had in Europe. We flew to different countries to talk to the business lines, and the topic was shared services, like SAP or ERP, and what they could share with subsidiaries in other countries. We were talking about the local business, and by the end of the day we were using ArchiMate and the Archi tool to review and wrap up the meeting. These documents and drawings were very useful in figuring out exactly what this business line needed. Because we had this very formal view of what they needed, it was very valuable to be able to compare it with other business lines, and then we were in a position to help them set up the shared services in an international, standard view. We were talking about strategy and motivation, and at the end of the day we shared the ArchiMate view of what they could share. Three months later they were very happy with the deliverables, because we were able to provide a view across different business units and different countries, and we were ready to implement shared services with ERP in different countries and business lines. The method, the languages (TOGAF and ArchiMate), the tools, and also Enterprise Architect soft skills were all key differentiators in being able to achieve this job.

What other things are you seeing Enterprise Architects grappling with in Europe?

Obviously Big Data and data analysis are really hyped today. As in the U.S., the first problem is that Europe needs resources and skills to work on these new topics. We are very interested in these topics, but we have to work to better figure out what kinds of architecture and reference architectures we can use for them. The Open Platform 3.0™ Forum trends and reference architecture are key to fostering the maturity of the domain.

The second topic is IT4IT. Behind IT4IT there is a financial pressure on IT people to always deliver more value and save money. If I understand where we are going with IT4IT, we are trying to develop a reference architecture that helps companies deliver services better, with an efficient cost rationale. This is why we are taking part in IT4IT. At the next French event in June we will talk about IT4IT because it’s an opportunity to review the IT service portfolio and the way to deliver it effectively.

It’s not so easy for us with security because today it’s a local issue. What I mean by local issue is that in every country in Europe, and especially in France, cybersecurity and data privacy are on the government agenda. It’s a sovereignty issue, and they are cautious about local solutions. France, and especially the government, is working on that. There is work at the European level to set up policies for European data privacy and for cybercrime. To be honest, Europe is not 100 percent confident about security issues if we’re talking about Facebook and Google. It’s not easy to propose an international framework to fix security issues. For example, Arismore is working with EA and security. EA is easy to promote; most of the major French companies are aware of TOGAF and are using EA and TOGAF even more. Security is harder, because we have ISO 27001 and people are not very confident in U.S.-based security solutions.

 


Why We Like ArchiMate®

What Are Your Thoughts?

By Allen Brown, President & CEO of The Open Group

This year marks the 30th anniversary of my graduation from the London Business School MBA program. It was three years of working full-time for Unilever, studying every minute possible, and tackling what seemed to be impossible case studies on every subject you would have to deal with when managing a business.

One of the many core subjects was “Operations Management”: organizing people, materials and technology into an efficient unit. The first thing we were taught was that there are no rules, only pressures and opportunities. The next thing was that there are no boundaries to what can have an impact on the subject: from macro issues of structure and infrastructure to micro issues of marketing, capabilities, location, motivation and much more. It required a lot of analysis and a lot of thinking around realistic solutions of how to change the “now” state.

To support this, one of the techniques we were taught was modeling. One case study I recall was about a small company of fewer than 150 people engaged in the manufacture and development of fast sea-based transport. As part of the analysis I modeled the physical flow system, which covered all aspects of the operation, from sales to customer feedback and from design to shipment – all in pencil and all on one page. An extract is shown here.


I don’t know if it’s just me, but that looks very similar to some ArchiMate® models I have seen. OK, there is not a specific box or symbol for the actors and their roles or for identifying processes, but it is clear who is responsible for what, the function or process that they perform, and the information or instructions they pass to or receive from their colleagues.

So it should not be surprising that I would like ArchiMate®, even before it became a standard of The Open Group. By the same token, many people holding senior positions in organizations today have also been through MBA programs, or some form of executive training, and as such would be familiar with the modeling that my classmates and I were taught, and would therefore easily understand ArchiMate models.

Since graduating, I have used modeling on many occasions to assist with understanding of complex processes and to identify where problems, bottlenecks, delays and unnecessary costs arise. Almost everyone, wherever they are in the organization, has not only understood the models but also been able to improve them, with the possible exception of software developers, who still needed UML and BPMN.

An ArchiMate Focus Group

A few months ago I got together with some users of ArchiMate to try to understand its appeal to others. Some were in large financial services businesses, others were in healthcare and others were in consulting and training organizations.

The first challenge, of course, is that different people, in different situations, with different roles, in different organizations, in different countries and continents, will always see things differently. In The Open Group there are more than 300,000 people from over 230 different countries; nearly one third of those people identify themselves as “architects”; and among those “architects” there are more than 3,400 job titles that contain the word architect. There are also more than 3,500 people who identify themselves as CEO, nearly 5,500 as CIO, and so on.

So one size definitely will not fit all, and neither will a single statement produced by a small number of people sitting in a room for a day.

So what we did was to focus mostly on a senior executive in a major financial services company in the United States whose team is responsible for maintaining the business capability map for the company. After that we tested the results with others in the financial services industry, a representative from the healthcare industry and with an experienced consultant and trainer.

Ground Rules for Feedback

Now, what I would like is your feedback on these views, which is the reason for writing this blog. As always, there are some ground rules for feedback:

  • Please focus on the constructive
  • Please identify the target audience for the messages as closely as you can: e.g. job title/type, industry, geographic location, etc.

With those thoughts in mind, let me now share what we have so far.

The Value of ArchiMate

The person we initially focused on felt that The Open Group ArchiMate® Standard is the standard visual language for communicating and managing the impact of change. The reasons behind this are that it bridges between strategy, solutions and execution, and it enables explicit communication.

The value of bridging between strategy, solutions and execution is recognized because it:

  • Accelerates value delivery
  • Integrates between disciplines
  • Describes strategic capabilities, milestones and outcomes

Enabling explicit communication is realized because it:

  • Improves understanding at all levels of the organization
  • Enables a short time to benefit
  • Is supported by leading tool vendors

A supporting comment from him was that ArchiMate enables different delivery approaches (e.g. waterfall, agile). From a modeling point of view the diagrams are still the same, but the iteration cycles and utilization of them become very different in the agile method. Interesting thought.

This is obviously different from why I like ArchiMate but also has some similarities (e.g. easily understood by anyone) and it is a perfect example of why we need to recognize the differences and similarities when communicating with different people.

So when we asked others in financial services whether they agreed or not, and to tell us why they like ArchiMate, they all provided great feedback and suggested improvements. They identified two groups:

  • The CEO, CIO, Business Analyst and Business Architect; and
  • Areas of business support and IT and Solution Architects and System Analysts.

All agreed that The Open Group ArchiMate® Standard is the standard visual language. Where they varied was in the next line. For the CEO, CIO, Business Analyst and Business Architect target audience, the value of ArchiMate was realized because:

  • It is for modeling the enterprise and managing its change
  • It can support strategic alignment and support impact analysis

Instead of “enabling explicit communication” others preferred the simpler, “clarifies complex systems” but the sub-bullets remained the same. One supporting statement was, “I can show a diagram that most people can understand even without technical knowledge”. Another statement, this time in support of the bridging capability was, “It helps me in unifying the languages of business and IT”.

The value of strategic alignment support was realized through ArchiMate because it:

  • Allows an integrated view
  • Depicts links between drivers and the specific requirements that address them
  • Links between motivation and business models

Its support of impact analysis and decision taking builds on the bridging capability:

  • Integrates between disciplines: links between cause and effect
  • Describes, and allows the identification of, strategic capabilities
  • Bridges between strategy, solutions and execution

When the target audience changed to areas of business support and IT or to Solution Architects and System Analysts, the next line became:

  • It is for communicating and managing change, leveraging use of the TOGAF® standard
  • It can support the development of conceptual representations for the applications and IT platforms and their alignment with business goals

For these audiences the value was still in the ability to clarify complex systems and to bridge between strategy, solutions and execution but the sub-bullets changed significantly:

  • Clarifies complex systems
    • Improves understanding at all levels of the organization
    • Allows integration between domains
    • Provides a standard way to represent inputs and outputs between domains
    • Supports having a standard model repository to create views
  • Bridges between strategy, solutions and execution
    • Allows efficient segmentation of views
    • Allows the definition of a consolidated, business-aligned organizational landscape
    • Supports solutions design definition

Unlike my business school models, ArchiMate models are also understandable to software developers.

The feedback from the healthcare organization was strikingly similar. To give an example format for feedback, I will present it below in a format that it would be very helpful if you could also use for your comments.

Country: USA

Industry: Healthcare

Target Audience: VP of IT

Positioning statement:

The Open Group ArchiMate® Standard is the standard visual language for communicating and managing change and making the enterprise architecture practice more effective.

It achieves this because it:

  • Clarifies complex systems
    • Improves understanding at all levels of the organization
    • Enables a short time to benefit
    • Is supported by leading tool vendors
    • Supports a more effective EA delivery
  • Bridges between strategy, solutions and execution
    • Accelerates value delivery
    • Integrates between disciplines
    • Describes strategic capabilities, milestones and outcomes

Feedback from an experienced consultant and trainer was:

Country / Region: Latin America

Industry:

Target Audience: Director of Business Architecture, Chief EA, Application Architects

Positioning statement:

The Open Group ArchiMate® Standard is the standard visual language for modeling the organization, leveraging communication with stakeholders and managing change.

It achieves this because it:

  • Clarifies complex systems and leverages change
    • Improves understanding at all levels of the organization
    • Is supported by leading tool vendors
    • Supports change impact analysis across the organization and helps with portfolio management and analysis
    • Supports presenting complex system structures to different stakeholders using a simplified notation
  • Bridges between strategy, solutions and execution
    • Accelerates value delivery
    • Integrates between disciplines
    • Describes strategic capabilities, milestones and outcomes
    • Allows the definition of a consolidated organizational landscape

Your Feedback

All of this gives us some insight into why a few of us like ArchiMate. I would like to know what you like about ArchiMate or how you talk about it to your colleagues and acquaintances.

So please do not hesitate to let me know. Do you agree with the statements that have been made so far? What improvements would you suggest? How do they resonate in your country, your industry, your organization? What different audiences should be addressed and what messages should we use for them?

Please email your feedback to ArchiMateFeedback@opengroup.org.

Allen Brown is President and CEO of The Open Group, a global consortium that enables the achievement of business objectives through IT standards. He is also President of the Association of Enterprise Architects (AEA).

Allen was appointed President & CEO in 1998. Prior to joining The Open Group, he held a range of senior financial and general management roles, both within his own consulting firm, which he founded in 1987, and in other multi-national organizations.

Allen is TOGAF® 9 certified, an MBA alumnus of the London Business School and a Fellow of the Association of Chartered Certified Accountants.


In Praise Of Heuristics – or Saving TOGAF® From Its Friends

By Stuart Boardman, Senior Business Consultant, Business & IT Advisory, KPN Consulting and Ed Harrington, Senior Consulting Associate, Conexiam

As the world’s best known and most used Enterprise Architecture (EA) framework, it is quite reasonable that TOGAF®, an Open Group standard, attracts criticism from both outside and within The Open Group membership. We would like to discuss a particular class of criticism and, even more, the thinking behind it.

This criticism states that TOGAF is neither rigorous nor scientific, and that any good EA framework should be both of those things. Now, we don’t know anyone who wouldn’t agree that TOGAF could be more rigorous about some things, and that’s one of the areas highlighted for attention in the next version of TOGAF.

But, “scientific”? There’s the rub. What do we mean by scientific?

Machines, Nature and Enterprises

What these critics promote is a method which, for any given enterprise, under identical conditions, will always deliver the same, “correct” result – regardless of who executes the method, as long as they follow the rules. This approach depends on a very 19th/20th Century mechanistic view of an enterprise.

We agree that an enterprise is a system. A mechanical system’s behavior is generally predictable: if you get the equation right, you can predict the behavior under any given set of conditions with an accuracy of (to all intents and purposes) 100%. So, if an enterprise were a machine, you could come up with a method that meets this requirement.

Natural and environmental systems do not, in general, behave predictably (leaving trivia like Pavlov and his dogs out of it). There is room for discussion for any one system under consideration as to why this is. It could just be because there are so many variables that we can’t capture all of them at one instant in time (i.e. they are highly complex) or because the system is chaotic (i.e. extremely sensitive to initial conditions) or even stochastic (i.e. we can only establish a probability for a particular outcome) – or possibly a mixture of those things.
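
The chaotic case is easy to demonstrate numerically. The short Python sketch below is our own illustration, not part of the original argument; it uses the textbook logistic map rather than any enterprise model, purely to show what "extremely sensitive to initial conditions" means in practice:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n); at r = 4 it is chaotic.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)          # one starting point
b = logistic_trajectory(0.2 + 1e-7)   # a nearly identical starting point

print(abs(a[1] - b[1]))    # still tiny after one step
print(abs(a[50] - b[50]))  # large after fifty steps: the trajectories diverge
```

Two starting points differing by one part in ten million track each other at first, then become thoroughly decorrelated: the same deterministic rule, run twice, is no help for prediction unless the initial state is known exactly. An enterprise is far more complex than this one-line equation.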

A major aspect of enterprises is that, to a considerable extent, they are made up of people, individually and in groups, each with their shifting perceptions of what “good” is. In fact, even a single organization behaves more like an organism than like a machine (note: we are not claiming that organizations are organisms).

Especially important is that enterprises function within wider ecosystems in which external factors like resource availability, innovation, competition, customer loyalty, legislation and regulation (to name but a few) constantly affect the behavior of the enterprise. To reliably predict the behavior of the enterprise we would need to know each and every factor that affects that behavior. Complexity is a major factor. Do we recognize any existing enterprises that do not conform to this (complex) model?

Science and Uncertainty

Enterprises are complex and, we would argue, even chaotic systems. Change the initial conditions and the behavior may be radically different (a totally different equation). A real scientific method for EA would necessarily reflect that. It would deliver results that could continue to adapt along with the enterprise. That requires more than just following a set of rules. There is no “equation”. There may be a number of “equations” to choose from, and some degree of experience, domain knowledge and empathy is required to select the most adaptable of them. If the world of software architecture hasn’t yet determined a formula for the perfect agile system, how can we imagine the even more complex EA domain could?[1] Any such method would be a meta-method. The actual method followed would be an adaptation (concretization/instantiation) of the meta-method for the system (i.e. enterprise) under examination in its then specific context.

So even if there is an EA method that delivers identical results independent of the user, the chances they’d be correct are…well, just that – chance. (You probably have a better chance of winning the lottery!). The danger of these “scientific” approaches is that we kid ourselves that the result must be right, because the method said so. If the objective were only to produce a moment in time “as-is” view of an enterprise and if you could produce that before everything changed again, then a mechanistic approach might work. But what would be the point?

What Really Bothers Us

Now if the problem here were restricted to the proponents of this “scientific” view, it wouldn’t matter too much, as they’re not especially influential, especially on a global scale. Our concern is that it appears TOGAF is treated by a considerably larger number of people as being exactly that kind of system. Some of the things we read by TOGAF-certified folk on, for example, LinkedIn or come across in practice are deeply disturbing. It seems that people think that the ADM is a recipe for making sausages and that mechanistically stepping through the crop circles will deliver a nicely formed sausage.

Why is this? No TOGAF expert we know thinks TOGAF is a linear, deterministic process. The thousands of TOGAF certified people have a tool of which TOGAF itself, in chapter 2.10, states: “In all cases, it is expected that the architect will adapt and build on the TOGAF framework in order to define a tailored method that is integrated into the processes and organization structures of the enterprise”.

Is it perhaps an example of the need so many people have to think the whole world is predictable and controllable – an unholy fear of uncertainty? Such people seek comfort and the illusion of certainty in a set of rules. That would certainly fit with an outdated view of science. Or perhaps the problem is located less with the architects themselves than with management by spreadsheet and with project management methodologies that are more concerned with deadlines than with quality? Less experienced architects may feel obliged to go along with this and thus draw the wrong conclusions about TOGAF.

The Task of Enterprise Architecture

Understanding, accepting and taking advantage of the presence of uncertainty is essential for any organization today. This would be true even if it were only because of the accelerating rate of change. But more than that, we need to recognize that the way we do business is changing, that agile organizations encourage emergence[2] and that success means letting go of hard and fast rules. Enterprise architects, to be useful, have to work with this new model, not be risk averse, and learn from (shared) experience. It’s our responsibility to help our enterprises achieve their strategic goals. If we turn our backs on reality, we may be able to tick off a task on a project plan, but we’re not helping anyone.

A good EA framework helps us understand what we need to do and why we are doing it. It doesn’t do the thinking for us. All good EA frameworks are essentially heuristics. They assemble good practice from the experience of real practitioners and provide guidance to assist the intelligent architect in finding the best available solution – in the knowledge that it’s not perfect, that things can and will change and that the most valuable strategy is being able to cope with that change. TOGAF helps us do this.

[1] For more on complexity and uncertainty see Tom Graves’s SCAN method.

[2] See, for example Ruth Malan and Dana Bredemeyer’s The Art of Change: Fractal and Emergent

Stuart Boardman is a Senior Business Consultant with KPN Consulting, where he leads the Enterprise Architecture practice and consults to clients on Cloud Computing, Enterprise Mobility and The Internet of Everything. He is Co-Chair of The Open Group Open Platform 3.0™ Forum, was Co-Chair of the Cloud Computing Work Group’s Security for the Cloud and SOA project, and is a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by KPN, the Information Security Platform (PvIB) in The Netherlands and his previous employer, CGI, as well as several Open Group white papers, guides and standards. He is a frequent speaker at conferences on the topics of Open Platform 3.0 and Identity.

Ed Harrington is a Senior Consulting Associate with Conexiam, a Calgary, Canada-headquartered consultancy. He also heads his own consultancy, EPH Associates. Prior positions include Principal Consultant with Architecting the Enterprise, where he provided TOGAF and other Enterprise Architecture (EA) discipline training and consultancy; EVP and COO for Model Driven Solutions, an EA, SOA and Model Driven Architecture consulting and software development company; various positions at two UK-based companies, Nexor and ICL; and 18 years at General Electric in various marketing and financial management positions. Ed has been an active member of The Open Group since 2000, when the EMA became part of The Open Group, and is past chair of various Open Group Forums (including past Vice Chair of the Architecture Forum). Ed is TOGAF® 9 certified.
