
The Open Group Baltimore 2015 Highlights

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

The Open Group Baltimore 2015, Enabling Boundaryless Information Flow™, July 20-23, was held at the beautiful Hyatt Regency Inner Harbor. Over 300 attendees from 16 countries, including China, Japan, Netherlands and Brazil, attended this agenda-packed event.

The event kicked off on July 20th with a warm Open Group welcome by Allen Brown, President and CEO of The Open Group. The first plenary speaker was Bruce McConnell, Senior VP, East West Institute, whose presentation, “Global Cooperation in Cyberspace”, gave a behind-the-scenes look at global cybersecurity issues. Bruce focused on US–China cyber cooperation, major threats and what the US is doing about them.

Allen then welcomed Christopher Davis, Professor of Information Systems, University of South Florida, to The Open Group Governing Board as an Elected Customer Member Representative. Chris also serves as Chair of The Open Group IT4IT™ Forum.

The plenary continued with a joint presentation, “Can Cyber Insurance Be Linked to Assurance”, by Larry Clinton, President & CEO, Internet Security Alliance, and Dan Reddy, Adjunct Faculty, Quinsigamond Community College, MA. The speakers emphasized that cybersecurity is not simply an IT issue. They stated there are currently 15 billion mobile devices and there will be 50 billion within 5 years. Organizations and governments need to prepare for new vulnerabilities and the explosion of the Internet of Things (IoT).

The plenary culminated with a panel “US Government Initiatives for Securing the Global Supply Chain”. Panelists were Donald Davidson, Chief, Lifecycle Risk Management, DoD CIO for Cybersecurity, Angela Smith, Senior Technical Advisor, General Services Administration (GSA) and Matthew Scholl, Deputy Division Chief, NIST. The panel was moderated by Dave Lounsbury, CTO and VP, Services, The Open Group. They discussed the importance and benefits of ensuring product integrity of hardware, software and services being incorporated into government enterprise capabilities and critical infrastructure. Government and industry must look at supply chain, processes, best practices, standards and people.

All sessions concluded with Q&A moderated by Allen Brown and Jim Hietala, VP, Business Development and Security, The Open Group.

Afternoon tracks (11 presentations) covered various topics including Information & Data Architecture and EA & Business Transformation. The Risk, Dependability and Trusted Technology theme also continued. Jack Daniel, Strategist, Tenable Network Security, shared “The Evolution of Vulnerability Management”. Michele Goetz, Principal Analyst at Forrester Research, presented “Harness the Composable Data Layer to Survive the Digital Tsunami”. This session was aimed at helping data professionals understand how Composable Data Layers set digital business and the Internet of Things up for success.

The evening featured a Partner Pavilion and Networking Reception. The Open Group Forums and Partners hosted short presentations and demonstrations while guests also enjoyed the reception. Areas focused on were Enterprise Architecture, Healthcare, Security, Future Airborne Capability Environment (FACE™), IT4IT™ and Open Platform 3.0™.

Exhibitors in attendance were Esteral Technologies, Wind River, RTI and SimVentions.

Partner Pavilion – The Open Group Open Platform 3.0™

On July 21, Allen Brown began the plenary with the great news that Huawei has become a Platinum Member of The Open Group. Huawei joins our other Platinum Members Capgemini, HP, IBM, Philips and Oracle.

Allen Brown, Trevor Cheung, Chris Forde

Trevor Cheung, VP Strategy & Architecture Practice, Huawei Global Services, will be joining The Open Group Governing Board. Trevor posed the question, “What can we do to combine The Open Group and IT aspects to make a customer experience transformation?” His presentation, entitled “The Value of Industry Standardization in Promoting ICT Innovation”, addressed the “ROADS Experience”. ROADS is an acronym for Real Time, On-Demand, All Online, DIY, Social, experiences that need to be defined across all industries. Trevor also discussed bridging the gap: the importance of combining Customer Experience (customer needs, strategy, business needs) and Enterprise Architecture (business outcomes, strategies, systems, process innovation). EA plays a key role in digital transformation. Huawei will continue to have a global impact while still focusing on activities in China.

Allen then presented The Open Group Forum updates. He shared roadmaps which include schedules of snapshots, reviews, standards, and publications/white papers.

Allen also provided a sneak peek of results from our recent survey on TOGAF®, an Open Group standard. TOGAF® 9 is currently available in 15 different languages.

Next speaker was Jason Uppal, Chief Architect and CEO, iCareQuality, on “Enterprise Architecture Practice Beyond Models”. Jason emphasized that the goal is “Zero Patient Harm” and stressed the importance of Open CA Certification. He also stated that there are many roles for Enterprise Architects and they are always changing.

Joanne MacGregor, IT Trainer and Psychologist, Real IRM Solutions, gave a very interesting presentation entitled “You can Lead a Horse to Water… Managing the Human Aspects of Change in EA Implementations”. Joanne discussed managing, implementing, maintaining change and shared an in-depth analysis of the psychology of change.

“Outcome Driven Government and the Movement Towards Agility in Architecture” was presented by David Chesebrough, President, Association for Enterprise Information (AFEI). “IT Transformation reshapes business models, lean startups, web business challenges and even traditional organizations”, stated David.

Questions from attendees were addressed after each session.

In parallel with the plenary was the Healthcare Interoperability Day. Speakers from a wide range of Healthcare industry organizations, such as ONC, AMIA and Healthway shared their views and vision on how IT can improve the quality and efficiency of the Healthcare enterprise.

Before the plenary ended, Allen made another announcement. Allen is stepping down in April 2016 as President and CEO after more than 20 years with The Open Group, including the last 17 as CEO. After conducting a process to choose his successor, The Open Group Governing Board has selected Steve Nunn as his replacement; Steve will assume the role in November of this year. Steve is the current COO of The Open Group and CEO of the Association of Enterprise Architects. Please see the press release here.

Steve Nunn, Allen Brown

Afternoon track topics comprised EA Practice & Professional Development and Open Platform 3.0™.

After a very informative and productive day of sessions, workshops and presentations, event guests were treated to a dinner aboard the USS Constellation, just a few minutes’ walk from the hotel. The USS Constellation, constructed in 1854, is a sloop-of-war, the second US Navy ship to carry the name, and is designated a National Historic Landmark.

USS Constellation

On Wednesday, July 22, tracks continued: TOGAF® 9 Case Studies and Standard, EA & Capability Training, Knowledge Architecture and IT4IT™ – Managing the Business of IT.

Thursday consisted of members-only meetings which are closed sessions.

A special “thank you” goes to our sponsors and exhibitors: Avolution, SNA Technologies, BiZZdesign, Van Haren Publishing, AFEI and AEA.

Check out all the Twitter conversation about the event – @theopengroup #ogBWI

Event proceedings for all members and event attendees can be found here.

Hope to see you at The Open Group Edinburgh 2015 October 19-22! Please register here.

Loren K. Baynes, Director, Global Marketing Communications, joined The Open Group in 2013 and spearheads corporate marketing initiatives, primarily the website, blog, media relations and social media. Loren has over 20 years’ experience in brand marketing and public relations and, prior to The Open Group, was with The Walt Disney Company for over 10 years. Loren holds a Bachelor of Business Administration from Texas A&M University. She is based in the US.


Filed under Enterprise Architecture, Cybersecurity, TOGAF®, Security Architecture, Enterprise Transformation, Open CA, Healthcare, Open Platform 3.0, Accreditations, Boundaryless Information Flow™, Interoperability, Internet of Things, Security, The Open Group Baltimore 2015

Using Risk Management Standards: A Q&A with Ben Tomhave, Security Architect and Former Gartner Analyst

By The Open Group

IT Risk Management is currently in a state of flux with many organizations today unsure not only how to best assess risk but also how to place it within the context of their business. Ben Tomhave, a Security Architect and former Gartner analyst, will be speaking at The Open Group Baltimore on July 20 on “The Strengths and Limitations of Risk Management Standards.”

We recently caught up with Tomhave pre-conference to discuss the pros and cons of today’s Risk Management standards, the issues that organizations are facing when it comes to Risk Management and how they can better use existing standards to their advantage.

How would you describe the state of Risk Management and Risk Management standards today?

The topic of my talk is really on the state of standards for Security and Risk Management. There’s a handful of significant standards out there today, varying from some of the work at The Open Group to NIST and the ISO 27000 series, etc. The problem with most of those is that they don’t necessarily provide a prescriptive level of guidance for how to go about performing or structuring risk management within an organization. If you look at ISO 31000 for example, it provides a general guideline for how to structure an overall Risk Management approach or program but it’s not designed to be directly implementable. You can then look at something like ISO 27005 that provides a bit more detail, but for the most part these are fairly high-level guides on some of the key components; they don’t get to the point of how you should be doing Risk Management.

In contrast, one can look at something like the Open FAIR standard from The Open Group, and that gets a bit more prescriptive and directly implementable, but even then there’s a fair amount of scoping and education that needs to go on. So the short answer to the question is, there’s no shortage of documented guidance out there, but there are, however, still a lot of open-ended questions and a lot of misunderstanding about how to use these.

What are some of the limitations that are hindering risk standards then and what needs to be added?

I don’t think it’s necessarily a matter of needing to fix or change the standards themselves; I think where we’re at is that we’re still at a fairly prototypical stage where we have guidance as to how to get started and how to structure things but we don’t necessarily have really good understanding across the industry about how to best make use of it. Complicating things further is an open question about just how much we need to be doing, how much value can we get from these, do we need to adopt some of these practices? If you look at all of the organizations that have had major breaches over the past few years, all of them, presumably, were doing some form of risk management—probably qualitative Risk Management—and yet they still had all these breaches anyway. Inevitably, they were compliant with any number of security standards along the way, too, and yet bad things happen. The issues lie more with how organizations are using the standards than with the standards themselves.

Last fall The Open Group fielded an IT Risk Management survey that found that many organizations are struggling to understand and create business value for Risk Management. What you’re saying really echoes those results. How much of this has to do with problems within organizations themselves and not having a better understanding of Risk Management?

I think that’s definitely the case. A lot of organizations are making bad decisions in many areas right now, and they don’t know why or aren’t even aware and are making bad decisions up until the point it’s too late. As an industry we’ve got this compliance problem where you can do a lot of work and demonstrate completion or compliance with check lists and still be compromised, still have massive data breaches. I think there’s a significant cognitive dissonance that exists, and I think it’s because we’re still in a significant transitional period overall.

Security should really have never been a standalone industry or a standalone environment. Security should have just been one of those attributes of the operating system or operating environments from the outset. Unfortunately, because of the dynamic nature of IT (and we’re still going through what I refer to as this Digital Industrial Revolution that’s been going on for 40-50 years), everything’s changing everyday. That will be the case until we hit a stasis point that we can stabilize around and grow a generation that’s truly native with practices and approaches and with the tools and technologies underlying this stuff.

An analogy would be to look at Telecom. Look at Telecom in the 1800s when they were running telegraph poles and running lines along railroad tracks. You could just climb a pole, put a couple alligator clips on there and suddenly you could send and receive messages, too, using the same wires. Now we have buried lines, we have much greater integrity of those systems. We generally know when we’ve lost integrity on those systems for the most part. It took 100 years to get there. So we’re less than half that way with the Internet and things are a lot more complicated, and the ability of an attacker, one single person spending all their time to go after a resource or a target, that type of asymmetric threat is just something that we haven’t really thought about and engineered our environments for over time.

I think it’s definitely challenging. But ultimately Risk Management practices are about making better decisions. How do we put the right amount of time and energy into making these decisions and providing better information and better data around those decisions? That’s always going to be a hard question to answer. Thinking about where the standards really could stand to improve, it’s helping organizations, helping people, understand the answer to that core question—which is, how much time and energy do I have to put into this decision?

When I did my graduate work at George Washington University, a number of years ago, one of the courses we had to take went through decision management as a discipline. We would run through things like decision trees. I went back to the executives at the company that I was working at and asked them, ‘How often do you use decision trees to make your investment decisions?’ And they just looked at me funny and said, ‘Gosh, we haven’t heard of or thought about decision trees since grad school.’ In many ways, a lot of the formal Risk Management stuff that we talk about and drill into—especially when you get into the quantitative risk discussions—a lot of that goes down the same route. It’s great academically, it’s great in theory, but it’s not the kind of thing where on a daily basis you need to pull it out and use it for every single decision or every single discussion. Which, by the way, is where the FAIR taxonomy within Open FAIR provides an interesting and very valuable breakdown point. There are many cases where just using the taxonomy to break down a problem and think about it a little bit is more than sufficient, and you don’t have to go the next step of populating it with the actual quantitative estimates and do the quantitative estimations for a FAIR risk analysis. You can use it qualitatively and improve the overall quality and defensibility of your decisions.
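To make the qualitative use of the taxonomy concrete, here is a minimal sketch of its top-level decomposition (risk factored into Loss Event Frequency and Loss Magnitude, with frequency further split into Threat Event Frequency and Vulnerability). The class, its label vocabulary, and the triage rule are illustrative choices of ours, not text from the Open FAIR standard:

```python
from dataclasses import dataclass

# Illustrative sketch of the top of the FAIR taxonomy. Field names
# follow the taxonomy's factors; the qualitative labels and the
# triage rule below are assumptions made for this example.

@dataclass
class FairBreakdown:
    threat_event_frequency: str  # e.g. "rare" .. "very frequent"
    vulnerability: str           # chance a threat event becomes a loss event
    loss_magnitude: str          # e.g. "negligible" .. "severe"

    def worth_quantifying(self) -> bool:
        # Even used purely qualitatively, the breakdown flags which
        # scenarios deserve a full quantitative FAIR analysis.
        return self.loss_magnitude in {"significant", "severe"}

scenario = FairBreakdown("rare", "high", "severe")
print(scenario.worth_quantifying())  # True
```

The point of the sketch is Tomhave’s: simply forcing a decision through the taxonomy’s factors, with no numbers attached, already makes the reasoning explicit and defensible.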

How mature are most organizations in their understanding of risk today, and what are some of the core reasons they’re having such a difficult time with Risk Management?

The answer to that question varies to a degree by industry. Industries like financial services just seem to deal with this stuff better for the most part, but then if you look at multibillion-dollar write-offs for JP Morgan Chase, you think maybe they don’t understand risk after all. I think for the most part most large enterprises have at least some people in the organization that have a nominal understanding of Risk Management and risk assessment and how that factors into making good decisions.

That doesn’t mean that everything’s perfect. Look at the large enterprises that had major breaches in 2014 and 2013 and clearly you can look at those and say ‘Gosh, you guys didn’t make very good decisions.’ Home Depot is a good example or even the NSA with the Snowden stuff. In both cases, they knew they had an exposure, they had done a reasonable job of risk management, they just didn’t move fast enough with their remediation. They just didn’t get stuff in place soon enough to make a meaningful difference.

For the most part, larger enterprises or organizations will have better facilities and capabilities around risk management, but they may have challenges with velocity in terms of being able to put to rest issues in a timely fashion. Now slip down to different sectors and you look at retail, they continue to have issues with cardholder data and that’s where the card brands are asserting themselves more aggressively. Look at healthcare. Healthcare organizations, for one thing, simply don’t have the budget or the control to make a lot of changes, and they’re well behind the curve in terms of protecting patient records and data. Then look at other spaces like SMBs, which make up more than 90 percent of U.S. employment firms or look at the education space where they simply will never have the kinds of resources to do everything that’s expected of them.

I think we have a significant challenge here – a lot of these organizations will never have the resources to have adequate Risk Management in-house, and they will always be tremendously resource-constrained, preventing them from doing all that they really need to do. The challenge for them is, how do we provide answers or tools or methods to them that they can then use that don’t require a lot of expertise but can guide them toward making better decisions overall even if the decision is ‘Why are we doing any of this IT stuff at all when we can simply be outsourcing this to a service that specializes in my industry or specializes in my SMB business size that can take on some of the risk for me that I wasn’t even aware of?’

It ends up being a very basic educational awareness problem in many regards, and many of these organizations don’t seem to be fully aware of the type of exposure and legal liability that they’re carrying at any given point in time.

One of the other IT Risk Management Survey findings was that where the Risk Management function sits in organizations is pretty inconsistent—sometimes IT, sometimes risk, sometimes security—is that part of the problem too?

Yes and no—it’s a hard question to answer directly because we have to drill in on what kind of Risk Management we’re talking about. Because there’s enterprise Risk Management reporting up to a CFO or CEO, and one could argue that the CEO is doing Risk Management.

One of the problems that we historically run into, especially from a bottom-up perspective, is a lot of IT Risk Management people or IT Risk Management professionals or folks from the audit world have mistakenly thought that everything should boil down to a single, myopic view of ‘What is risk?’ And yet it’s really not how executives run organizations. Your chief exec, your board, your CFO, they’re not looking at performance on a single number every day. They’re looking at a portfolio of risk and how different factors are balancing out against everything. So it’s really important for folks in Op Risk Management and IT Risk Management to really truly understand and make sure that they’re providing a portfolio view up the chain that adequately represents the state of the business, which typically will represent multiple lines of business, multiple systems, multiple environments, things like that.

I think one of the biggest challenges we run into is just in an ill-conceived desire to provide value that’s oversimplified. We end up hyper-aggregating results and data, and suddenly everything boils down to a stop light that IT today is either red, yellow or green. That’s not really particularly informative, and it doesn’t help you make better decisions. How can I make better investment decisions around IT systems if all I know is that today things are yellow? I think it comes back to the educational awareness topic. Maybe people aren’t always best placed within organizations but really it’s more about how they’re representing the data and whether they’re getting it into the right format that’s most accessible to that audience.

What should organizations look for in choosing risk standards?

I usually get a variety of questions and they’re all about risk assessment—‘Oh, we need to do risk assessment’ and ‘We hear about this quant risk assessment thing that sounds really cool, where do we get started?’ Inevitably, it comes down to, what’s your actual Risk Management process look like? Do you actually have a context for making decisions, understanding the business context, etc.? And the answer more often than not is no, there is no actual Risk Management process. I think really where people can leverage the standards is understanding what the overall risk management process looks like or can look like and in constructing that, making sure they identify the right stakeholders overall and then start to drill down to specifics around impact analysis, actual risk analysis around remediation and recovery. All of these are important components but they have to exist within the broader context and that broader context has to functionally plug into the organization in a meaningful, measurable manner. I think that’s really where a lot of the confusion ends up occurring. ‘Hey I went to this conference, I heard about this great thing, how do I make use of it?’ People may go through certification training but if they don’t know how to go back to their organization and put that into practice not just on a small-scale decision basis, but actually going in and plugging it into a larger Risk Management process, it will never really demonstrate a lot of value.

The other piece of the puzzle that goes along with this, too, is you can’t just take these standards and implement them verbatim; they’re not designed to do that. You have to spend some time understanding the organization, the culture of the organization and what will work best for that organization. You have to really get to know people and use these things to really drive conversations rather than hoping that one of these risk assessments results will have some meaningful impact at some point.

How can organizations get more value from Risk Management and risk standards?

Starting with the latter first, the value of the Risk Management standards is that you don’t have to start from scratch; you don’t have to reinvent the wheel. There are, in fact, very consistent and well-conceived approaches to structuring risk management programs and conducting risk assessment and analysis. That’s where the power of the standards comes from: establishing a template or guideline for structuring things.

The challenge of course is you have to have it well-grounded within the organization. In order to get value from a Risk Management program, it has to be part of daily operations. You have to plug it into things like procurement cycles and other similar types of decision cycles so that people aren’t just making gut decisions based off whatever their existing biases are.

One of my favorite examples is password complexity requirements. If you look back at the ‘best practice’ standards requirements over the years, going all the way back to the Orange Book in the 80s or the Rainbow Series which came out of the federal government, they tell you ‘oh, you have to have 8-character passwords and they have to have upper case, lower, numbers, special characters, etc.’ The funny thing is that while that was probably true in 1985, that is probably less true today. When we actually do risk analysis to look at the problem, and understand what the actual scenario is that we’re trying to guard against, password complexity ends up causing more problems than it solves because what we’re really protecting against is a brute force attack against a log-in interface or guessability on a log-in interface. Or maybe we’re trying to protect against a password database being compromised and getting decrypted. Well, password complexity has nothing to do with solving how that data is protected in storage. So why would we look at something like password complexity requirements as some sort of control against compromise of a database that may or may not be encrypted?
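A quick back-of-envelope calculation shows why the old complexity rules have aged badly against the brute-force scenario Tomhave describes: the search space an attacker must cover is the character-set size raised to the password length, so length buys more than symbol variety. The comparison below is our own illustration, not from the interview:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    # A brute-force attacker must search charset_size ** length
    # possibilities; expressing that in bits makes policies comparable.
    return length * math.log2(charset_size)

# The classic "8 characters with upper, lower, digits, specials"
# policy, drawn from the ~95 printable ASCII characters ...
complex_short = entropy_bits(95, 8)
# ... versus a 12-character password of lowercase letters only.
simple_long = entropy_bits(26, 12)

print(f"complex 8-char:    {complex_short:.1f} bits")  # ~52.6
print(f"lowercase 12-char: {simple_long:.1f} bits")    # ~56.4
```

And, as the interview notes, neither number says anything about the other scenario, a stolen password database, which is a storage-protection problem that composition rules do not address at all.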

This is where Risk Management practices come into play because you can use Risk Management and risk assessment techniques to look at a given scenario—whether it be technology decisions or security control decisions, administrative or technical controls—we can look at this and say what exactly are we trying to protect against, what problem are we trying to solve? And then based on our understanding of that scenario, let’s look at the options that we can apply to achieve an appropriate degree of protection for the organization.

That ultimately is what we should be trying to achieve with Risk Management. Unfortunately, that’s usually not what we see implemented. A lot of the time, what’s described as risk management is really just an extension of audit practices and issuing a bunch of surveys, questionnaires, asking a lot of questions but never really putting it into a proper business context. Then we see a lot of bad practices applied, and we start seeing a lot of math-magical practices come in where we take categorical data—high, medium, low, more or less, what’s the impact to the business? A lot, a little—we take these categorical labels and suddenly start assigning numerical values to them and doing arithmetic calculations on them, and this is a complete violation of statistical principles. You shouldn’t be doing that at all. By definition, you don’t do arithmetic on categorical data, and yet that’s what a lot of these alleged Risk Management and risk assessment programs are doing.
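The objection to doing arithmetic on categorical labels can be demonstrated in a few lines: because the numeric values assigned to “low/medium/high” are arbitrary, two equally order-preserving mappings can rank the same portfolios of findings differently. The mappings and portfolios below are invented for illustration:

```python
# Two numeric mappings that both preserve the order high > medium > low.
mapping_a = {"low": 1, "medium": 2, "high": 3}
mapping_b = {"low": 1, "medium": 2, "high": 9}

portfolio_1 = ["high", "low", "low"]
portfolio_2 = ["medium", "medium", "medium"]

def average(ratings, mapping):
    # The arithmetic "risk score" such programs compute.
    return sum(mapping[r] for r in ratings) / len(ratings)

for name, m in [("mapping A", mapping_a), ("mapping B", mapping_b)]:
    p1, p2 = average(portfolio_1, m), average(portfolio_2, m)
    verdict = "portfolio 1 riskier" if p1 > p2 else "portfolio 2 riskier"
    print(f"{name}: {p1:.2f} vs {p2:.2f} -> {verdict}")
```

Under mapping A portfolio 2 scores higher; under mapping B portfolio 1 does. The ranking flips with the arbitrary encoding, which is exactly why averaging categorical data produces numbers that look precise but carry no statistical meaning.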

I think Risk Management gets a bad rap as a result of these poor practices. Conducting a survey, asking questions is not a risk assessment. A risk assessment is taking a scenario, looking at the business impact analysis for that scenario, looking at the risk tolerance, what the risk capacity is for that scenario, and then looking at what the potential threats and weaknesses are within that scenario that could negatively impact the business. That’s a risk assessment. Asking people a bunch of questions about ‘Do you have passwords? Do you use complex passwords? Have you hardened the server? Are there third party people involved?’ That’s interesting information but it’s not usually reflective of the risk state and ultimately we want to find out what the risk state is.

How do you best determine that risk state?

If you look at any of the standards—and again this is where the standards do provide some value—if you look at what a Risk Management process is and the steps that are involved in it, take for example ISO 31000—step one is establishing context, which includes establishing potential business impact or business importance, business priority for applications and data, also what the risk tolerance, risk capacity is for a given scenario. That’s your first step. Then the risk assessment step is taking that data and doing additional analysis around that scenario.

In the technical context, that’s looking at how secure is this environment, what’s the exposure of the system, who has access to it, how is the data stored or protected? From that analysis, you can complete the assessment by saying ‘Given that this is a high value asset, there’s sensitive data in here, but maybe that data is strongly encrypted and access controls have multiple layers of defense, etc., the relative risk here of a compromise or attack being successful is fairly low.’ Or ‘We did this assessment, and we found in the application that we could retrieve data even though it was supposedly stored in an encrypted state, so we could end up with a high risk statement around the business impact, we’re looking at material loss,’ or something like that.

Pulling all of these pieces together is really key, and most importantly, you cannot skip over context setting. If you don’t ever do context setting, and establish the business importance, nothing else ends up mattering. Just because a system has a vulnerability doesn’t mean that it’s a material risk to the business. And you can’t even know that unless you establish the context.
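The context-first flow described above can be sketched as two small records and a rule that refuses to conclude anything without the business context. All field names, labels, and the decision logic here are illustrative assumptions, not ISO 31000 text:

```python
from dataclasses import dataclass

# A minimal sketch of the two-step flow: establish business context
# first, then assess the technical scenario against it.

@dataclass
class Context:
    business_impact: str   # importance of the asset, e.g. "low" / "high"
    risk_tolerance: str    # how much loss the business will accept

@dataclass
class Assessment:
    exposure: str          # outcome of technical analysis, e.g. "low" / "high"

def risk_statement(ctx: Context, assessment: Assessment) -> str:
    # A vulnerability alone says nothing about materiality;
    # the context drives the conclusion.
    if ctx.business_impact == "low":
        return "not material to the business"
    if assessment.exposure == "low":
        return "high-value asset, but relative risk is low"
    return "material risk: high-value asset with high exposure"

print(risk_statement(Context("high", "low"), Assessment("low")))
```

Note how the first branch encodes the interview’s closing point: skip context setting and even a severe technical finding cannot be translated into a business-relevant risk statement.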

In terms of getting started, leveraging the standards makes a lot of sense, but not from a perspective of this is a compliance check list that I’m going to use verbatim. You have to use it as a structured process, you have to get some training and get educated on how these things work and then what requirements you have to meet and then do what makes sense for the organizational role. At the end of the day, there’s no Easy Button for these things, you have to invest some time and energy and build something that makes sense and is functional for your organization.

To download the IT Risk Management survey summary, please click here.

Former Gartner analyst Ben Tomhave (MS, CISSP) is Security Architect for a leading online education organization where he is putting theories into practice. He holds a Master of Science in Engineering Management (Information Security Management concentration) from The George Washington University, and is a member and former co-chair of the American Bar Association Information Security Committee, senior member of ISSA, former board member of the Northern Virginia OWASP chapter, and member and former board member for the Society of Information Risk Analysts. He is a published author and an experienced public speaker, including recent speaking engagements with the RSA Conference, the ISSA International Conference, Secure360, RVAsec, RMISC, and several Gartner events.

Join the conversation! @theopengroup #ogchat #ogBWI


Filed under Cybersecurity, Risk Management, Security, Security Architecture, Standards, The Open Group Baltimore 2015, Uncategorized

Managing Your Vulnerabilities: A Q&A with Jack Daniel

By The Open Group

With hacks and security breaches becoming more prevalent every day, it’s incumbent on organizations to determine the areas where their systems may be vulnerable and take actions to better handle those vulnerabilities. Jack Daniel, a strategist with Tenable Network Security who has been active in securing networks and systems for more than 20 years, says that if companies start implementing vulnerability management on an incremental basis and use automation to help them, they can hopefully reach a point where they’re not constantly handling vulnerability crises.

Daniel will be speaking at The Open Group Baltimore event on July 20, presenting on “The Evolution of Vulnerability Management.” In advance of that event, we recently spoke to Daniel to get his perspective on hacker motivations, the state of vulnerability management in organizations today, the human problems that underlie security issues and why automation is key to better handling vulnerabilities.

How do you define vulnerability management?

Vulnerability detection is where this started. News would break years ago of some vulnerability, some weakness in a system—a fault in the configuration or a software bug that allows bad things to happen. We used to do a really hit-or-miss job of it; it didn’t have to be rushed at all. Depending on where you were or what you were doing, you might not be targeted—it could take months after something was released before bad people would start doing things with it. As criminals discovered there was money to be made in exploiting vulnerabilities, the attackers became motivated by more than just notoriety. The early hacker scene that was disruptive or did criminal things was largely motivated by notoriety. As people realized they could make money, it became a problem, and that’s when we turned to management.

You have to manage finding vulnerabilities, detecting vulnerabilities and resolving them, which usually means patching, but not always. There are a lot of ways to resolve or mitigate without actually patching, but the management aspect is discovering all the weaknesses in your environment—and that’s a really broad brush, depending on what you’re worried about. It could be that you’re not compliant with PCI if you’re taking credit cards, or it could be that bad guys can steal your database full of credit card numbers or intellectual property.

It’s finding all the weaknesses in your environment, the vulnerabilities, tracking them, resolving them and then continuing to track as new ones appear to make sure old ones don’t reappear. Or if they do reappear, what in your corporate process is allowing bad things to happen over and over again? It’s continuously doing this.

The pace of bad things has accelerated, the motivations of the actors have forked in a couple of directions, and to do a good job of vulnerability management really requires gathering data of different qualities and being able to make assessments about it and then applying what you know to what’s the most effective use of your resources—whether it’s time or money or employees to fix what you can.
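The continuous cycle Daniel describes—discover vulnerabilities, track them, resolve them, and watch for ones that reappear—can be sketched as a small reconciliation loop. This is only an illustrative sketch, not Tenable’s product logic; `VulnRecord` and `reconcile` are hypothetical names, and a real program would feed this from an actual scanner’s output.

```python
from dataclasses import dataclass

@dataclass
class VulnRecord:
    vuln_id: str          # e.g. a CVE identifier
    host: str
    status: str = "open"  # open -> resolved -> (possibly) reopened
    times_seen: int = 1

def reconcile(tracked, current_findings):
    """Merge the latest scan into a tracking store.

    New findings are opened; findings that disappeared are marked
    resolved; previously resolved findings that reappear are
    reopened -- the signal that some corporate process is letting
    the same weakness back in.
    """
    for key in current_findings:
        if key in tracked:
            rec = tracked[key]
            rec.times_seen += 1
            if rec.status == "resolved":
                rec.status = "reopened"  # recurring weakness: fix the process
        else:
            vuln_id, host = key
            tracked[key] = VulnRecord(vuln_id, host)
    for key, rec in tracked.items():
        if key not in current_findings and rec.status in ("open", "reopened"):
            rec.status = "resolved"
    return tracked
```

Running successive scans through `reconcile` turns “did this come back?” from a manual question into a status field you can report on.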

What are the primary motivations you’re seeing with hacks today?

They fall into a couple big buckets, and there are a whole bunch of them. One common one is financial—these are the people that are stealing credit cards, stealing credentials so they can do bank wire fraud, or some other way to get at money. There are a variety of financial motivators.

There are also some others, depending on who you are. There’s the so-called ‘hacktivist,’ which used to be a thing in the early days of hacking but has now become more widespread. These are folks like the Syrian Electronic Army, or the various Turkish groups that through the years have done website defacements. These people are not trying to steal money; they’re trying to embarrass you, they’re trying to promote a message. It may be, as with the Syrian Electronic Army, that they’re trying to support the ruler of whatever’s left of Syria. So there are political motivations. Anonymous did a lot of destructive things—or people calling themselves ‘Anonymous’—that’s a whole other conversation, but people did things under the banner of Anonymous as hacktivism that struck out at corporations they thought were unjust or unfair, or they did political things.

Intellectual property theft would be the third big one, I think. Generally the finger is pointed at China, but it’s unfair to say they’re the only ones stealing trade secrets. People within your own country or your own market or region are stealing trade secrets continuously, too.

Those are the three big ones—money, hacktivism and intellectual property theft. It trickles down. One of the things that has come up more often over the past few years is that people get attacked because of who they’re connected to. It’s a smaller portion of it and one that’s overlooked, but it’s a message people need to hear. For example, in the Target breach, it is claimed that the initial entry point was through the heating and air-conditioning vendor’s computer systems and their access to the HVAC systems inside a Target facility, and, from there, they were able to get through. There are other stories of organizations that have been targeted because of who they do business with. That’s usually a case of trying to attack somebody that’s well-secured, where there’s not an easy way in, so you find out who does their heating and air-conditioning or who manages their remote data centers, and you attack those people and then come in.

How is vulnerability management different from risk management?

It’s a subset of risk management. Risk management, when done well, gives a scope of a very large picture and helps you drill down into the details, but it has to factor in things above and beyond the more technical details of what we typically think of as vulnerability management. Certainly they work together—you have to find what’s vulnerable and then make assessments as to how you’re going to address those vulnerabilities, and that ideally should be done in a risk-based manner. Because as much as all of the reports, from the Verizon Data Breach Report on down, say you have to fix everything, the reality is that not only can we not fix everything, we can’t fix a lot of it immediately, so you really have to prioritize. You have to have information to prioritize things, and that’s a challenge for many organizations.
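The risk-based prioritization Daniel describes can be made concrete with a toy scoring function: weight a CVSS-like severity by how critical the affected asset is and by whether an exploit is known to be circulating. The weights, the 1.5 multiplier, and the function names below are illustrative assumptions, not a standard formula.

```python
def risk_score(severity, asset_criticality, exploit_known):
    """Toy prioritization score.

    severity: 0-10, CVSS-like technical severity.
    asset_criticality: 0-1, how much the business cares about the asset.
    exploit_known: whether attackers are actively using this weakness.
    """
    score = severity * asset_criticality
    if exploit_known:
        score *= 1.5  # actively exploited issues jump the queue
    return score

def prioritize(findings):
    """Sort (name, severity, criticality, exploit_known) tuples so the
    riskiest items come first -- you fix from the top down as resources allow."""
    return sorted(findings,
                  key=lambda f: risk_score(f[1], f[2], f[3]),
                  reverse=True)
```

Note the effect of context: a moderate flaw on a crown-jewel system with a public exploit outranks a technically severe flaw on a throwaway lab box, which is the whole argument against patching strictly by severity number.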

Your session at The Open Group Baltimore event is on the evolution of vulnerability management—where does vulnerability management stand today and where does it need to go?

One of my opening slides sums it up—it used to be easy, and it’s not anymore. Like a lot of other things in security, it’s sort of a buzz phrase that’s never really taken off like it needs to at the enterprise level, as part of the operationalization of security. Security needs to be a component of running your organization and needs to be factored into a number of things.

The information security industry has a challenge and a history of being a department in the middle and being obstructionist, which I think is well deserved. But the real challenge is to cooperate more. We have to get a lot more information, which means working well with the rest of the organization, particularly networking and systems administrators—having conversations with them about the data and the environment, and sharing what we discover as problems without being the judgmental know-it-all security people. That is our stereotype. The adversaries are often far more cooperative than we are. In a lot of criminal forums, people will be fairly supportive of other people in their community—they’ll go up to the trade-secret level and stop—but if somebody’s not cutting into their profits, rumor is these people are cooperating and collaborating.

Within an organization, you need to work cross-organizationally. Information sharing is a very real piece of it. That’s not necessarily vulnerability management, but when you step into risk analysis and how you manage your environment, knowing what vulnerabilities you have is one thing; knowing what vulnerabilities people are actually going to do bad things to requires information sharing, and that’s an industry-wide challenge. It’s a challenge within our organizations, and outside it’s a real challenge across the enterprise, across industry, across government.

Why has that happened in the Security industry?

One is the stereotype—a lot of teams are very siloed, a lot of teams have their fiefdoms—that’s just human nature.

Another problem that everyone in security and technology faces is that we talk to all sorts of people and have all sorts of great conversations, learn amazing things, see amazing things—and a lot of it is under formal or informal NDAs. If it weren’t for friend-of-a-friend contacts, a lot of information sharing would be dramatically less. A lot of the sanitized information that comes out is too sanitized to be useful. The Verizon Data Breach Report pointed out that there are similarities in attacks, but they don’t line up with industry verticals as you might expect them to, so we have that challenge.

Another serious challenge we have in security, especially in the research community, is that there’s total distrust of the government. The Snowden revelations have severely damaged the technology and security community’s faith in the government and willingness to cooperate with it. Further damaging that are the discussions about criminalizing many security tools—because the people in Congress don’t understand these things. We have a president who claims to be technologically savvy, and he is more than any before him, but he still doesn’t get it and he’s got advisors who don’t get it. So we have a great distrust of the government, which has been earned, despite the fact that anyone in the industry knows folks at various agencies—whether the FBI or intelligence agencies or the military—who are fantastic people—brilliant, hardworking, patriotic—but the entities themselves are political entities, and that causes a lot of distrust in information sharing.

And there are just a lot of people who have the idea that they want proprietary information. This is not unique to security. There are a couple of different types of managers—there are people in organizations who strive to make themselves irreplaceable. As a manager, you’ve got to get those people out of your environment because they’re just poisonous. There are other people who strive to make it so that they can walk away at any time and it will be a minor inconvenience for someone to pick up the notes and run. Those are the people you should hang onto for dear life, because they share information, they build knowledge, they build relationships. That’s just human nature. In security I don’t think there are enough people who are about building those bridges, building those communication paths, sharing what they’ve learned and trying to advance the cause. I think there are still too many who hoard information as a tool or a weapon.

Security is fundamentally a human problem amplified by technology. If you don’t address the human factors in it, you can have technological controls, but it still has to be managed by people. Human nature is a big part of what we do.

You advocate for automation to help with vulnerability management. Can automation catch the threats when hackers are becoming increasingly sophisticated and use bots themselves? Will this become a war of bot vs. bot?

A couple of points about automation. Our adversaries are using automation against us. We need to use automation to fight them, and we need to use as much automation as we can rely on to improve our situation. But at some point, you need smart people working on hard problems, and that’s not unique to security at all. The more you automate, the more you have to look at whether your automation processes are actually improving things. If you’ve ever seen a big retailer or grocery store that has a person working full-time to manage the self-checkout line, that’s failed automation. Or a power or network outage at a hospital, where everything is regulated and medications are regulated, and nobody can get their medications because the network’s down. Then you have patients suffering until somebody does something. They have manual systems to fall back on, and eventually some poor nurse has to spend an entire shift doing data entry because the systems failed so badly.

Automation doesn’t solve the problems—you have to automate the right things in the right ways, and the goal is to do the menial tasks in an automated fashion so you spend fewer human cycles. As a system or network administrator, you run into the same repetitive tasks over and over, and you write scripts to do them or buy a tool to automate them. The same applies here—you want to filter through as much of the data as you can, because one of the things that modern vulnerability management requires is a lot of data. It requires a ton of data, and it’s very easy to fall into an information-overload situation. Where the tools can help is by filtering it down and reducing the amount of stuff that gets put in front of people to make decisions about, and that’s challenging. It’s a balance that requires continuous tuning—you don’t want it to miss anything, so you want it to tell you everything that’s questionable, but it can’t throw too many things at you that aren’t actually problems, or people give up and ignore the problems. That was allegedly part of a couple of the major breaches last year. Alerts were triggered but nobody paid attention because they get tens of thousands of alerts a day as opposed to one big alert. One alert is hard to ignore—40,000 alerts and you just turn it off.
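The alert-overload problem described here—40,000 raw events versus one actionable item—is exactly what aggregation is for. A minimal sketch of collapsing an event flood into per-signature summaries (the function name and tuple shape are hypothetical; real products do far more, including suppression rules and severity weighting):

```python
from collections import Counter

def summarize_alerts(alerts, min_count=1):
    """Collapse a flood of raw alerts into per-signature summaries.

    alerts: iterable of (signature, host) tuples.
    Returns a list of (signature, affected_host_count, total_events),
    sorted so the noisiest signatures surface first -- one line per
    issue instead of one line per event.
    """
    events = Counter()
    hosts = {}
    for sig, host in alerts:
        events[sig] += 1
        hosts.setdefault(sig, set()).add(host)
    summary = [(sig, len(hosts[sig]), n)
               for sig, n in events.items() if n >= min_count]
    summary.sort(key=lambda row: row[2], reverse=True)
    return summary
```

The tuning Daniel mentions lives in parameters like `min_count`: raise it and you hide noise but risk missing something; lower it and the queue grows until people stop reading it.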

What’s the state of automated solutions today?

It’s pretty good if you tune it, but it takes maintenance. There isn’t an Easy Button, to use the Staples tagline, and anyone promising an Easy Button is probably not being honest with you. But if you understand your environment and tune the vulnerability management and patch management tools (and a lot of them are administrative tools), you can automate a lot of it and reduce the pain dramatically. It does require a couple of very hard first steps. The first step is knowing what’s in your environment and what’s crucial in it, because if you don’t know what you’ve got, you won’t be able to defend it well. It is pretty good, but it does take a fair amount of effort to get to where you can make the best of it. Some organizations are certainly there, and some are not.

What do organizations need to consider when putting together a vulnerability management system?

One word: visibility. They need to understand that they need to be able to see and know what’s in the environment—everything that’s in their environment—and get good information on those systems. There needs to be visibility into a lot of systems that you don’t always have good visibility into. That means your mobile workforce with their laptops; mobile devices that are on the network, whether they belong there or not; and whatever is on your network that’s not being actively managed, like Windows systems that might not be in Active Directory or Red Hat systems that aren’t being managed by Satellite or whatever systems you use to manage them.

Knowing everything that’s in the environment and its role in the system—that’s a starting point. Then understanding what’s critical in the environment and how to prioritize it. The first step is really understanding your own environment and having visibility into the entire network—and that can extend to Cloud services if you’re using a lot of them. One of the conversations I’ve been having lately, since the latest Akamai report, is about IPv6. Most Americans are ignoring it, even at the corporate level, and a lot of folks think you can still ignore it because we’re still routing most of our traffic over the IPv4 protocol. But IPv6 is active on just about every network out there—it’s just a question of whether we actively measure and monitor it. The Akamai report said something a lot of folks have been saying for years: this is really a problem. Even though adoption is pretty low, what you see if you start monitoring for it is people communicating over IPv6, intentionally or unintentionally. Often unintentionally, because everything’s enabled, so there’s often a whole swath of your network that people are ignoring. And you can’t have those huge blind spots in the environment, you just can’t. The vulnerability management program has to take into account that sort of overall view of the environment. Then once you’re there, you need a lot of help to solve the vulnerabilities, and that’s back to the human problem.
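The “visibility first” step—comparing what discovery actually sees on the wire against what the management systems (Active Directory, Satellite, and so on) claim to cover—amounts to a set difference. A sketch under assumed inputs; the function name and source labels are hypothetical, and a real inventory would key on more than addresses:

```python
def unmanaged_hosts(seen_on_network, managed):
    """Find blind spots: hosts visible to discovery but absent from
    every management source.

    seen_on_network: set of addresses observed by passive/active discovery.
    managed: mapping of management source name -> set of addresses,
             e.g. {"ad": {...}, "satellite": {...}}.
    Returns the unmanaged hosts plus a simple coverage ratio.
    """
    covered = set().union(*managed.values()) if managed else set()
    gaps = seen_on_network - covered
    coverage = len(covered & seen_on_network) / max(len(seen_on_network), 1)
    return {"unmanaged": gaps, "coverage": coverage}
```

Anything in `unmanaged` is exactly the kind of system—a forgotten lab box, an IPv6-only interface, a contractor laptop—that never gets scanned and never gets patched.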

What should Enterprise Architects look for in an automated solution?

It really depends on the corporate need. They need to figure out whether the systems they’re looking at are going to find most or all of their network, discover all of the weaknesses, and then help them prioritize those. For example, can your systems do vulnerability analysis on newly discovered systems with little or no input? Can you automate detection? Can you automate confirmation of findings somehow? Can you interact with other systems? There’s a piece, too—what does the rest of your environment look like? Are there ways into it? Does your vulnerability management system work with or understand all the things you’ve got? What if you have some unique network gear that your vulnerability management system can’t tell you the vulnerabilities in? There are German companies that like to use operating systems other than Windows and garden-variety Linux distributions. Does it work in your environment, will it give you good coverage in your environment, and can it take a lot of the mundane out of it?

How can companies maintain Boundaryless Information Flow™–particularly in an era of the Internet of Things–but still manage their vulnerabilities?

The challenge is a lot of people push back against high information flow because they can’t make sense of it; they can’t ingest the data, they can’t do anything with it. It’s the challenge of accepting and sharing a lot of information. It doesn’t matter whether it’s vulnerability management or log analysis or patch management or systems administration or backup or anything—the challenge is that networks have systems that share a lot of data, but until you add context, it’s not really information. What we’re interested in in vulnerability management is different than what your automated backup system is interested in. The challenge is having systems that can share information outbound, share information inbound and then act rationally on only that which is relevant to them. That’s a real challenge because information overload is a problem that people have been complaining about for years, and it’s accelerating at a stunning rate.

You say Internet of Things, and I get a little frustrated when people treat that as a monolith, because at one end an Internet-enabled microwave or stove has one set of challenges—these are built on garbage commodity hardware with no maintenance ability at all. There are other things people consider Internet of Things because they’re Internet-enabled and running Windows or a more mature Linux stack that has full management and somebody managing it. So there’s a huge gap between the managed IoT and the unmanaged, and the unmanaged is just adding low-power machines to environments in ways that will amplify things like distributed denial of service (DDoS). As it is, a lot of consumers have home routers that are being used to attack other people and carry out DDoS attacks. A lot of the commercial stuff is being cleaned up, but the inexpensive home routers people have are being misused, and if those are misconfigured or attacked with worms that can change their settings, everything on the network can be made to participate in attacks.

The thing with the evolution of vulnerability management is that we’re trying to drive people to a continuous monitoring situation. That’s where the federal government has gone, that’s where a lot of industries are, and it’s a challenge to go from infrequent or even frequent big scans to watching things continuously. The key is to take incremental steps. Instead of having a big, massive vulnerability project every quarter or every month, the goal is to get to where it’s part of the routine and you’re taking small remediation measures on a daily or regular basis. There are still going to be times when Microsoft or Oracle come out with a big patch that requires a bigger tool-up, but you need to do this continuously and reach the point where you do small pieces of the task continuously rather than one big task—so that you’re blowing out birthday candles rather than putting out forest fires.

Jack Daniel, a strategist at Tenable Network Security, has over 20 years’ experience in network and system administration and security, and has worked in a variety of practitioner and management positions. A technology community activist, he supports several information security and technology organizations. Jack is a co-founder of Security BSides, serves on the boards of three Security BSides non-profit corporations, and helps organize Security BSides events. Jack is a regular, featured speaker at ShmooCon, SOURCE Boston, DEF CON, RSA and other marquee conferences. Jack is a CISSP, holds the CCSK, and is a Microsoft MVP for Enterprise Security.

Join the conversation – @theopengroup #ogchat #ogBWI


Filed under Boundaryless Information Flow™, Internet of Things, RISK Management, Security, the open group, The Open Group Baltimore 2015

The Open Group Madrid 2015 – Day Two Highlights

By The Open Group

On Tuesday, April 21, Allen Brown, President & CEO of The Open Group, began the plenary by presenting highlights of the work going on in The Open Group Forums. The Open Group is approaching 500 memberships in 40 countries.

Big Data & Open Platform 3.0™ – a Big Deal for Open Standards

Ron Tolido, Senior Vice President of Capgemini’s group CTO network and Open Group Board Member, discussed the digital platform as the “fuel” of enterprise transformation today, citing a study published in the book “Leading Digital.” The DNA of companies that successfully achieve transformation has the following factors:

  • There is no escaping mastering digital technology – this is an essential part of leading transformation. CEO leadership is a success factor.
  • You need a sustainable technology platform embraced by both the business and technical functions.

Mastering digital transformation shows a payoff in financial results, both from the standpoint of efficient revenue generation and maintaining and growing market share. The building blocks of digital capability are:

  • Customer Experience
  • Operations
  • New business models

Security technology must move from being a constraint or “passion killer” to being a driver for digital transformation. Data handling must change its model – the old structured and siloed approach to managing data no longer works, resulting in business units bypassing or ignoring the “single source” data repository. He recommended the “Business Data Lake” as an approach to overcoming this, and suggested it should be considered as an open standard as part of the work of the Open Platform 3.0 Forum.

In the Q&A session, Ron suggested establishing hands-on labs to help people embrace digital transformation, and presented DatOps as an analogy to DevOps for business data.

Challengers in the Digital Era

Mariano Arnaiz, Chief Information Officer in the CESCE Group, presented the experiences of CESCE in facing challenges of:

  • Changing regulation
  • Changing consumer expectations
  • Changing technology
  • Changing competition and market entrants based on new technology

The digital era represents a new language for many businesses, which CESCE faced during the financial crisis of 2008. They chose the “path less traveled” of becoming a data-driven company, using data and analytics to improve business insight, predict behavior and act on it. CESCE receives over 8000 risk analysis requests per day; using analytics, over 85% are answered in real time, when it used to take more than 20 days. Using analytics has given them unique competitive products such as variable pricing and targeted credit risk coverage while reducing loss ratio.

To drive transformation, the CIO must move beyond IT service supporting the business to helping drive business process improvement. Aligning IT to business is no longer enough for EA – EA must also help align business to transformational technology.

In the Q&A, Mariano said that the approach of using analytics and simulation for financial risk modeling could be applied to some cybersecurity risk analysis cases.

Architecting the Internet of Things

Kary Främling, CEO of the Finnish company ControlThings and Professor of Practice in Building Information Modeling (BIM) at Aalto University, Finland, gave a history of the Internet of Things (IoT), the standards landscape, issues on security in IoT, and real-world examples.

IoT today is characterized by an increasing number of sensors and devices, each pushing large amounts of data to their own silos, with communication limited to their own network. Gaining benefit from IoT requires standards that take a systems view, providing horizontal integration among IoT devices and sensors, with data collected as and when needed, and two-way data flows between trusted entities within a vision of Closed-Loop Lifecycle Management. These standards are being developed in The Open Group Open Platform 3.0 Forum’s IoT work stream, which has published standards such as the Open Messaging Interface (O-MI) and Open Data Format (O-DF) that allow discovery and interoperability of sensors using open protocols, similar to the way HTTP and HTML enable interoperability on the Web.

Kary addressed the issues of security and privacy in IoT, noting this is an opportunity for The Open Group to use our EA and Security work to assess these issues at the scale IoT will bring.



Filed under big data, Boundaryless Information Flow™, Cybersecurity, Enterprise Architecture, Internet of Things

Survey Shows Organizations Are Experiencing an Identity Crisis When it Comes to IT Risk Management

By Jim Hietala, VP, Business Development & Security, The Open Group

Last fall, The Open Group Security Forum fielded its first IT Risk Management Survey in conjunction with the Society of Information Risk Analysts (SIRA) and CXOWARE. The purpose of the survey was to better understand how mature organizations are when it comes to IT Risk Management today. The survey also aimed to discover which risk management frameworks are currently most prevalent within organizations and how successful those frameworks are in measuring and managing risk.

Consisting of an online questionnaire with both multiple-choice and open-text questions, the survey explored a number of different parameters regarding the principles, frameworks and processes organizations are using to manage risk. The sampling included more than 100 information technology and security executives, professionals, analysts and architects who have some responsibility for risk management, as well as full-time risk management professionals, within their respective organizations.

Considering the fragmented state of security within most organizations today, it should not come as much of a surprise that the primary survey finding is that many organizations today are experiencing what might be called an identity crisis when it comes to IT Risk Management. Although many of the organizations surveyed generally believe their Risk Management teams and efforts are providing value to their organizations, they are also experiencing considerable difficulty when it comes to understanding, demonstrating and creating business value for those efforts.

This is likely due to the lack of a common definition for risk relative to IT Risk Management, in particular, as well as the resulting difficulty in communicating the value of something organizations are struggling to clearly define. In addition, the IT Risk Management teams among the companies surveyed do not have much visibility within their organizations and the departments to which they report are inconsistent across the organizations surveyed, with some reporting to senior management and others reporting to IT or to Risk Managers.

Today, Risk Management is becoming increasingly important for IT departments. With the increased digitalization of business and data becoming ever more valuable, companies of all shapes and sizes must begin applying risk management principles to their IT infrastructure in order to guard against the potential financial, competitive and reputational losses that data breaches may bring. A myriad of high-profile breaches at large retailers, financial services firms, entertainment companies and government agencies over the past couple of years serve as frightening examples of what can—and will—happen to more and more companies if they fail to better assess their vulnerability to risk.

This IT Risk Management survey essentially serves as a benchmark for the state of IT Risk Management today. When it comes to IT risk, the ways and means to manage it are still emerging, and IT Risk Management programs are still in the nascent stages within most organizations. We believe that there is not only a lot of room for growth within the discipline of IT Risk Management but are optimistic that organizations will continue to mature in this area as they learn to better understand and prove their intrinsic value within their organizations.

The full survey summary can be viewed here. We recommend that those interested in Risk Management review the full summary as there are a number of deeper observations explored there that look at the value risk teams believe they are providing to their organizations and the level of maturity of those organizations.

Jim Hietala, Open FAIR, CISSP, GSEC, is Vice President, Business Development and Security for The Open Group, where he manages the business team, as well as Security and Risk Management programs and standards activities. He has participated in the development of several industry standards including O-ISM3, O-ESA, O-RT (Risk Taxonomy Standard), O-RA (Risk Analysis Standard), and O-ACEML. He also led the development of compliance and audit guidance for the Cloud Security Alliance v2 publication.

Jim is a frequent speaker at industry conferences. He has participated in the SANS Analyst/Expert program, having written several research white papers and participated in several webcasts for SANS. He has also published numerous articles on information security, risk management, and compliance topics in publications including CSO, The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

An IT security industry veteran, he has held leadership roles at several IT security vendors.

Jim holds a B.S. in Marketing from Southern Illinois University.

Join the conversation @theopengroup #ogchat #ogSecurity



Filed under Cybersecurity, Enterprise Transformation, Information security, IT, RISK Management, Security, Security Architecture, Uncategorized

Cybersecurity Standards: The Open Group Explores Security and Ways to Assure Safer Supply Chains

Following is a transcript of part of the proceedings from The Open Group San Diego 2015 in February.

The following presentations and panel discussion, which together examine the need and outlook for Cybersecurity standards amid supply chains, are provided by moderator Dave Lounsbury, Chief Technology Officer, The Open Group; Mary Ann Davidson, Chief Security Officer, Oracle; Dr. Ron Ross, Fellow of the National Institute of Standards and Technology (NIST), and Jim Hietala, Vice President of Security for The Open Group.

Here are some excerpts:

Dave Lounsbury: Mary Ann Davidson is responsible for Oracle Software Security Assurance and represents Oracle on the Board of Directors for the Information Technology Information Sharing and Analysis Center, and on the international Board of the ISSA.

Dr. Ron Ross leads the Federal Information Security Management Act Implementation Project. It sounds like a big job to fulfill, developing the security standards and guidelines for the federal government.

This session is going to look at the cybersecurity and supply chain landscape from a standards perspective. So Ron and Mary Ann, thank you very much.

Ron Ross: All of us are part of the technology explosion and revolution that we have been experiencing for the last couple of decades.

I would like to have you leave today with a couple of major points, at least from my presentation, things that we have observed in cybersecurity for the last 25 years: where we are today and where I think we might need to go in the future. There is no right or wrong answer to this problem of cybersecurity. It’s probably one of the most difficult and challenging sets of problems we could ever experience.

In our great country, we work on what I call the essential partnership. It’s a combination of government, industry, and academia all working together. We have the greatest technology producers, not just in this country, but around the world, who are producing some fantastic things to which we are all “addicted.” I think we have an addiction to the technology.

Some of the problems we’re going to experience going forward in cybersecurity aren’t just going to be technology problems. They’re going to be cultural problems and organizational problems. The key issue is how we organize ourselves, what our risk tolerance is, how we are going to be able to accomplish all of our critical missions and business operations that Dawn talked about this morning, and do so in a world that’s fairly dangerous. We have to protect ourselves.

Movie App

I think I can sum it up. I was at a movie. I don’t go to movies very often anymore, but about a month ago, I went to a movie. I was sitting there waiting for the main movie to start, and they were going through all the coming attractions. Then they came on the PA and they said that there is an app you can download. I’m not sure you have ever seen this before, but it tells you for that particular movie when is the optimal time to go to the restroom during the movie.

I bring this up because that’s a metaphor for where we are today. We are consumed. There are great companies out there, producing great technologies. We’re buying it up faster than you can shake a stick at it, and we are developing the most complicated IT infrastructure ever.

So when I look at this problem, I look at it from a scientist’s point of view, an engineering point of view. I’m saying to myself, knowing what I know about what it takes to — I don’t even use the word “secure” anymore, because I don’t think we can ever get there with the current complexity — build the most secure systems we can and be able to manage risk in the world that we live in.

In the Army, we used to have a saying. You go to war with the army that you have, not the army that you want. We’ve heard about all the technology advances, and we’re going to be buying stuff, commercial stuff, and we’re going to have to put it together into systems. Whether it’s the Internet of Things (IoT) or cyber-physical convergence, it all goes back to some fairly simple things.

The IoT and all this stuff that we’re talking about today really gets back to computers. That’s the common denominator. They’re everywhere. This morning, we talked about your automobile having more compute power than Apollo 11. In your toaster, your refrigerator, your building, the control of the temperature, industrial control systems in power plants, manufacturing plants, financial institutions, the common denominator is the computer, driven by firmware and software.

When you look at the complexity of the things that we’re building today, we’ve gone past the time when we can actually understand what we have and how to secure it.

That’s one of the things that we’re going to do at NIST this year and beyond. We’ve been working in the FISMA world forever it seems, and we have a whole set of standards, and that’s the theme of today: how can standards help you build a more secure enterprise?

The answer is that we have tons of standards out there and we have lots of stuff, whether it’s on the federal side with 800-53 or the Risk Management Framework, or all the great things that are going on in the standards world, with The Open Group, or ISO. Pick your favorite standard.

The real question is how we use those standards effectively to change the current outlook and what we are experiencing today because of this complexity. The adversary has a significant advantage in this world because of complexity. They really can pick the time, the place, and the type of attack, because the attack surface is so large when you talk about not just the individual products.

We have many great companies, just in this country and around the world, that are doing a lot to make those products more secure. But then they get into the engineering process and put them together in a system, and that really is an unsolved problem. We call it the Composability Problem. I can have a trusted product here and one here, but what is the combination of those two when you put them together in the systems context? We haven’t solved that problem yet, and it’s getting more complicated every day.

Continuous Monitoring

For the hard problems, we in the federal government do a lot of stuff in continuous monitoring. We’re going around counting our boxes and we are patching stuff and we are configuring our components. That’s loosely called cyber hygiene. It’s very important to be able to do all that and do it quickly and efficiently to make your systems as secure as they need to be.

But even the security controls in our control catalog, 800-53, when you get into the technical controls — I’m talking about access control mechanisms, identification, authentication, encryption, and audit — those things are buried in the hardware, the software, the firmware, and the applications.

Most of our federal customers can’t even see those. So when I ask them if they have all their access controls in place, they can nod their head yes, but they can’t really prove that in a meaningful way.

So we have to rely on industry to make sure those mechanisms, those functions, are employed within the component products that we then will put together using some engineering process.

This is the below-the-waterline problem I talk about. We’re in some kind of digital denial today, because below the water line, most consumers are looking at their smartphones, their tablets, and all their apps — that’s why I used that movie example — and they’re not really thinking about those vulnerabilities, because they can’t see them, until it affects them personally.

I had to get three new credit cards last year. I shop at Home Depot and Target, and JPMorgan Chase is our federal credit card. That’s not a pain point for me because I’m indemnified. Even if there are fraudulent charges, I don’t get hit for those.

If your identity is stolen, that’s a personal pain point, but we haven’t reached that national pain point yet. We talk about all of the security stuff we do a lot, and we do a lot of it, but if you really want to effect change, you’re going to start to hear more at this conference about assurance, trustworthiness, and resiliency. That’s the world we want to build, and we are not there today.

That’s the essence of where I am hoping we are going to go. It’s these three areas: software assurance, systems security engineering, and supply-chain risk management.

My colleague Jon Boyens is here today and he is the author, along with a very talented team of coauthors, of the NIST 800-161 document. That’s the supply chain risk document.

It’s going to work hand-in-hand with another publication that we’re still working on, the 800-160 document. We are taking an IEEE and ISO standard, 15288, and we’re trying to infuse security into it. They are coming out with the update of that standard this year. We’re trying to infuse security into every step of the lifecycle.

Wrong Reasons

The reason why we are not having a lot of success on the cybersecurity front today is that security ends up being addressed either too late or by the wrong people for the wrong reasons.

I’ll give you one example. In the federal government, we have a huge catalog of security controls, and they are allocated into different baselines: low, moderate, and high. So you will pick a baseline, you will tailor, and you’ll come to the system owner or the authorizing official and say, “These are all the controls that NIST says we have to do.” Well, the mission business owner was never involved in that discussion.

One of the things we are going to do with the new document is focus on the software and systems engineering process, starting with the stakeholders and going all the way through requirements analysis, definition, design, development, implementation, operation, and sustainment, all the way to disposal. Critical things are going to happen at every one of those places in the lifecycle.

The beauty of that process is that you involve the stakeholders early. So when those security controls are actually selected they can be traced back to a specific security requirement, which is part of a larger set of requirements that support that mission or business operation, and now you have the stakeholders involved in the process.

Up to this point in time, security has operated in its own vacuum. It’s in the little office down the hall, and we go down there whenever there’s a problem. Unless and until security gets integrated, until we disappear as our own discipline, we need to be part of the Enterprise Architecture, whether it’s TOGAF® or whatever architecture construct you are following, the systems engineering process, the system development lifecycle, and acquisition and procurement.

Unless we have our stakeholders at those tables to influence things, we are going to continue to deploy systems that are largely indefensible, not against all cyber attacks, but against the high-end attacks.

We have to do a better job getting to the C-Suite, and I tried to capture the five essential areas that this discussion has to revolve around. The acronym is TACIT, and it just happens to be a happy coincidence that it fit into an acronym. It’s basically looking at the threat, how you configure your assets, and how you categorize your assets with regard to criticality.

How complex is the system you’re building? Are you managing that complexity in trying to reduce it, integrating security across the entire set of business practices within the organization? Then, the last component, which really ties into The Open Group, and the things you’re doing here with all the projects that were described in the first session, that is the trustworthiness piece.

Are we building products and systems that are, number one, more penetration resistant to cyber attacks? And number two, since we know we can’t stop all attacks (we can never reduce complexity to where we thought we could two or three decades ago), are we building the essential resiliency into those systems? Even when the adversary comes to the boundary and the malware starts to work, how far does it spread, and what can it do?

That’s the key question. You try to limit the time on target for the adversary, and that can be done very, very easily with good architectural and good engineering solutions. That’s my message for 2015 and beyond, at least for a lot of things at NIST. We’re going to start focusing on the architecture and the engineering: how do we really affect things at the ground level?

Processes are Important

Now, we will always have the people, the processes, the technologies, this whole ecosystem that we have to deal with, and you’re always going to have to worry about your sysadmins who go bad and dump all the stuff that you don’t want dumped on the Internet. But that’s part of the process. Processes are very important because they give us structure, discipline, and the ability to communicate with our partners.

I was talking to Rob Martin from MITRE. He’s working on a lot of important projects there with the CWEs and CVEs. They give you the ability to communicate a level of trustworthiness and assurance so that other people can have that dialogue, because without that, we’re not going to be communicating with each other. We’re not going to trust each other, and that’s critical: having that common understanding. Frameworks provide that common dialogue of security controls and a common process: how we build things, and what level of risk we are willing to accept in that whole process.

These slides, which will be available, go very briefly into the five areas. Understanding the modern threat today is critical because, even if you don’t have access to classified threat data, there’s a lot of great data out there in the Symantec and Verizon reports, and there’s open-source threat information available.

If you haven’t had a chance to do that, I know the folks who work on the high-assurance stuff in The Open Group RT&ES look at that a lot, because they’re building a capability that is intended to stop some of those types of threats.

The other thing about assets is that we don’t do a very good job of criticality analysis. In other words, most of our systems are running, processing, storing, and transmitting data and we’re not segregating the critical data into its own domain where necessary.

I know that’s hard to do sometimes. People say, “I’ve got to have all this stuff ready to go 24×7.” But when you look at some of the really bad breaches that we have had over the last several years, it argues for establishing a domain for critical data. That domain can be less complex, which means you can better defend it, and then you can invest more resources into defending the things that are the most critical.

I used a very simple example of a safe deposit box. I can’t get all my stuff into the safe deposit box, so I have to make decisions. I put important papers in there, maybe a coin collection, whatever. I have locks on the front door of my house, but they’re not strong enough to stop some of those bad guys out there. So I make those decisions. I put it in the bank, and it goes in a vault. It’s a pain in the butt to go down there and get the stuff out, but it gives me more assurance, greater trustworthiness. That’s an example of the things we have to be able to do.

Complexity is something that’s going to be very difficult to address because of our penchant for bringing in new technologies. Make no mistake about it, these are great technologies. They are compelling. They are making us more efficient. They are allowing us to do things we never imagined, like finding out the optimal time to go to the restroom during a movie. Who could have imagined we could do that a decade ago?

But as with every one of our customers out there, the kinds of things we’re talking about fly below their radar. When you download 100 apps on your smartphone, people in general, even the good folks in cybersecurity, have no idea where those apps are coming from, what their pedigree is, whether they have been tested at all, whether they have been evaluated, or whether they are running on a trusted operating system.

Ultimately, that’s what this business is all about, and that’s what 800-161 is all about. It’s about a lifecycle of the entire stack from applications, to middleware, to operating systems, to firmware, to integrated circuits, to include the supply chain.

The adversary is all over that stack. They have now figured out how to compromise our firmware, so we had to come up with firmware integrity controls in our control catalog. That’s the world we live in today.

Managing Complexity

I was smiling this morning when I talked about the DNI, the Director of National Intelligence, building their cloud, and whether that’s going to go to the public cloud or not. I think Dawn is probably right, you probably won’t see that going to the public cloud anytime soon, but cloud computing gives us an opportunity to manage complexity. You can figure out what you want to send to the public cloud.

The cloud providers do a good job through the FedRAMP program of deploying controls, and they’ve got a business model where it’s important to make sure they protect their customers’ assets. So that’s built into their business model, and they do a lot of great things out there to try to protect that information.

Then, for whatever stays behind in your enterprise, you can start to employ some of the architectural constructs that you’ll see here at this conference, some of the security engineering constructs that we’re going to talk about in 800-160, and you can better defend what stays behind within your organization.

So cloud is a way to reduce that complexity. Enterprise Architecture, TOGAF®, an Open Group standard: all of those architectural things allow you to provide discipline and structure in thinking about what you’re building, how to protect it, how much it’s going to cost, and whether it’s worth it. That is the essence of good security. It’s not about running around with a barrel full of security controls or ISO 27000 saying, hey, you’ve got to do all this stuff or the sky is going to fall. Those days are over.

We talked about integration. This is also hard. We are working with stovepipes today. Enterprise Architects typically don’t talk to security people. Acquisition folks, in most cases, don’t talk to security people.

I see it every day. You see RFPs go out with a whole long list of requirements, and then, when it comes to security, they say the system or the product they are buying must be FISMA compliant. They know that’s a law and they know they have to do that, but they really don’t give industry or the potential contractors any specificity as to what they need to do to bring that product or system to the state where it needs to be.

And so it’s all about expectations. With our industry, whether it’s here or overseas, wherever these great companies operate, the one thing we can be sure of is that they want to please their customers. So maybe the message I’m going to send every day is that we have to be more informed consumers. We have to ask for things that we know we need.

It’s like going back to the automobile. When I first started driving, a long time ago, 40 years ago, cars just had seatbelts. There were no airbags and no steel-reinforced doors. Then, at some point, you could actually buy an airbag as an option. Fast-forward to today: every car has airbags, seatbelts, and steel-reinforced doors. It comes as part of the basic product. We don’t have to ask for it, but as consumers we know it’s there, and it’s important to us.

We have to start to look at the IT business in the same way, just like when we cross a bridge or fly in an airplane. All of you who flew here in airplanes and came across bridges had confidence in those structures. Why? Because they are built with good scientific and engineering practices.

So least functionality and least privilege are foundational concepts in our world of cybersecurity. But you really can’t look at a smartphone or a tablet and talk about least functionality anymore, at least if you are running that movie app and you want to have all of that capability.

The last point, about trustworthiness, is that we have four decades of best practices in trusted systems development. It failed 30 years ago because, while we had the vision of trusted operating systems back then, the technology and the pace of development far outstripped our ability to actually achieve it.

Increasingly Difficult

We talked about a kernel-based operating system having 2,000, 3,000, 4,000, 5,000 lines of code and being highly trusted. Well, those concepts are still in place. It’s just that now the operating systems are 50 million lines of code, and so it becomes increasingly difficult.

And this is the key thing. As a society, we’re going to have to figure out, going forward, with all this great technology, what kind of world do we want to have for ourselves and our grandchildren? Because with all this technology, as good as it is, if we can’t provide a basis of security and privacy that customers can feel comfortable with, then at some point this party is going to stop.

I don’t know when that time is going to come, but I call it the national pain point in this digital denial. We will come to that steady state. We just haven’t had enough time yet to get to that balance point, but I’m sure we will.

I talked about the essential partnership, but I don’t think we can solve any problem without a collaborative approach, and that’s why I use the essential partnership: government, industry, and academia.

Certainly all of the innovation, or most of the innovation, comes from our great industry. Academia is critical, because companies like Oracle or Microsoft want to hire students who have been educated in the STEM disciplines: Science, Technology, Engineering (whether it’s “double E” or computer science), and Mathematics. They need those folks to be able to build products that have the right capabilities, function-wise, and are also trusted.

And government plays some role — maybe some leadership, maybe a bully pulpit, cheerleading where we can — bringing things together. But the bottom line is that we have to work together, and I believe that we’ll do that. And when that happens I think all of us will be able to sit in that movie and fire up that app about the restroom and feel good that it’s secure.

Mary Ann Davidson: I guess I’m preaching to the converted, if I can use a religious example without offending somebody. One of the questions you asked is, why do we even have standards in this area? Some of them exist for technical reasons. Crypto, it turns out, is easy for even very smart people to get wrong. Unfortunately, we have had reason to find that out.

So there is technical correctness. Another reason would be interoperability, to get things to work better in a more secure manner. I’ve worked in this industry long enough to remember the first SSL implementation, woo-hoo, and then it turns out 40 bits wasn’t really 40 bits, because it wasn’t random enough, shall we say.

Trustworthiness. ISO has a standard for that: the Common Criteria. We talk about what it means to have secure software, what types of threats it addresses, and how you prove that it does what you say it does. There are standards for that, which helps. It helps everybody. It certainly helps buyers understand a little bit more about what they’re getting.

No Best Practices

And last, but not least, and the reason it’s in quotes, “best practices”: there actually are no best practices. Why do I say that? And I am seeing furrowed brows back there. First of all, lawyers don’t like them in contracts, because then if you are not doing the exact thing, you get sued.

There are good practices and there are worst practices. There typically isn’t one thing that everyone can do exactly the same way that’s going to be the best practice. So that’s why that’s in quotation marks.

Generally speaking, I do think standards can be a force for good in the universe, particularly in cybersecurity, but they are not always a force for good, depending on other factors.

And what is the ecosystem? Well, we have a lot of people. We have standards makers, the people who work on the standards. Some of them are people who review things; NIST, for example, is very good, which I appreciate, about putting drafts out and taking comments, as opposed to saying, “Here it is, take it or leave it.” That’s actually a very constructive dialogue, which I believe a lot of people appreciate. I know that I do.

Sometimes there are mandators. You’ll get an RFP that says, “Verily, thou shalt comply with this, lest thou be an infidel in the security realm.” And that can be positive. It can be a leading edge of getting people to do something good that, in many cases, they should do anyway.

There are implementers, who have to take this, decipher it, and figure out why they are doing it. And there are people who make sure that you actually did what you said you were going to do.

And last, but not least, there are weaponizers. What do I mean by that? We all know who they are. They are people who will try to develop a standard and then get it mandated. Except it isn’t really a standard; it’s something they came up with, which might be very good, but mandating it hands them regulatory capture.

And we need to be aware of those people. I like the Oracle database. I have to say that, right? There are a lot of other good databases out there. Suppose I went in and said, purely objectively speaking, everybody should standardize on the Oracle database, because it’s the most secure. Well, nice work if I can get it.

Is that in everybody else’s interest? Probably not. You get better products in something that is not a monopoly market. Competition is good.

So I have an MBA, or had one in a prior life, and they used to talk in marketing class about the three Ps of marketing. I don’t know what they are anymore; it’s been a while. So I thought I would come up with the Four Ps of a Benevolent Standard, which are Problem Statement, Precise Language, Pragmatic Solutions, and Prescriptive Minimization.

Economic Analysis

And the reason I say this, and it’s the kind of discussion I have to have a lot, particularly with people in the government (I’m not saying this in any pejorative way, so please don’t take it that way), is the importance of economic analysis, because nobody can do everything.

So it’s being able to say: I can’t boil the ocean, because you are going to boil everything else in it, but I can do these things. And if I do these things, it’s very clear what I am trying to do and very clear what the benefit is. We’ve analyzed it, and it’s probably something everybody can do. Then we can get to better.

Better is better than omnibus. Omnibus is something everybody gets thrown under if you make something too big. Sorry, I had to say that.

So, Problem Statement: why is this important? You would think it’s obvious, Mary Ann, except that it isn’t, because so often in the discussions I have with people I ask: tell me what problem you are worried about. What are you trying to accomplish? If you don’t tell me that, then we’re going to be all over the map. You say potato and I say “potahto,” and the chorus of that song is, “let’s call the whole thing off.”

I use supply chain as an example, because this one is all over the map. Bad quality? Well, buying a crappy product is a risk of doing business. It’s not, per se, a supply chain risk. I’m not saying it’s not important, but it’s certainly not a cyber-specific supply chain risk.

Bad security: well, that’s important, but again, that’s a business risk.

Backdoor bogeyman: this is the popular one. How do I know you didn’t put a backdoor in there? Well, you can’t actually, and that’s not a solvable problem.

Assurance, supply chain shutdown: yeah, I would like to know that a critical parts supplier isn’t going to go out of business. So these are all important, but they are all different problems.

So you have to say what you’re worried about, and it can’t be all of the above. Almost every business has some supplier of some sort, even if it’s just healthcare. If you’re not careful how you define this, you will be trying to cover 100 percent of any entity’s business operations. And that’s not appropriate.

Use cases are really important, because you may have a Problem Statement. I’ll give you one, and this is not to ding NIST in any way, shape, or form, but I just read this. It’s the Cryptographic Key Management System draft. The only reason I cite this as an example is that I couldn’t actually find a use case in there.

So whatever the merits of what it says: are you trying to develop a super-secret key management system for government, for very sensitive cryptographic things you are building from scratch, or are you trying to define a key management system that we would have to use for things like TLS or any encryption that any commercial product does? Because that’s way out of scope.

So without that, what are you worried about? And what’s going to happen is that somebody is going to cite this in an RFP, and it’s going to be, are you compliant with bladdy-blah? And you have no idea whether that should even apply.

Problem Statement

So that Problem Statement is really important, because without that, you can’t have that dialogue in groups like this. Well, what are we trying to accomplish? What are we worried about? What are the worst problems to solve?

Precise Language is also very important. Why? Because it turns out everybody speaks a slightly different language, even if we all speak some dialect of geek. Take, for example, “vulnerability.”

If you say vulnerability to my vulnerability handling team, they think of that as a security vulnerability that’s caused by a defect in software.

But I’ve seen it used to include, well, you didn’t configure the product properly. I don’t know what that is, but it’s not a vulnerability, at least not to a vendor. You implemented a policy incorrectly: it might lead to a vulnerability, but it isn’t one. So you see where I am going with this. If you don’t have language that defines the same thing very crisply, you read something, go off and do it, and realize you solved the wrong problem.

I am very fortunate: one of my colleagues from Oracle works on our hardware, and I saw a presentation by people in that group at the Cryptographic Conference in November. They talked about how much trouble we got into because, if you say “module” to a hardware person, it’s a very different thing from what it means to somebody trying to certify it. This is a huge problem, because again, you say potato, I say “potahto.” It’s not the same thing to everybody. So it needs to be very precisely defined.

Scope is also important. I don’t know why I have to say this a lot, and I am sure it gets kind of tiresome to the recipients: COTS isn’t GOTS. Commercial software is not government software, and it’s actually globally developed. That’s the only way you get commercial software that is feature-rich and released frequently. We have access to global talent.

It’s not designed for all threat environments. It can certainly be better, and I think most people are moving towards better software, most likely because we’re getting beaten up by hackers and by our customers, and because it’s good business. But there is no commercial market for high-assurance software or hardware, and that’s really important, because there is only so much you can do to move the market.

Even a big customer like the U.S. government is an important customer in the market for a lot of people, but they’re not big enough to move the marketplace on their own, and so you are limited by the business dynamic.

So that’s important: you can get to better. I tell people, “Okay, anybody here have a Volkswagen? Okay, is it an MRAP vehicle? No, it’s not, is it? You bought a Volkswagen and you got a Volkswagen. You can’t take a Volkswagen, drive it around the streets, and expect it to perform like an MRAP vehicle. Even a system integrator, a good one, cannot sprinkle pixie dust over that Volkswagen and turn it into an MRAP vehicle.” Those are very different threat environments.

Why do you think commercial software and hardware are any different? They’re not. It’s exactly the same thing. You might have a really good Volkswagen, and it’s great for commuting, but it is never going to perform in an IED environment. It wasn’t designed for that, and there is nothing you can do to make it perform in that environment.

Pragmatism

Pragmatism: I really wish anybody working on any standard would do some economic analysis, because economics rules the world. Even if something is a really good idea, time, money, and people — particularly qualified security people — are constrained resources.

So if you make people do something that looks good on paper but is really time-consuming, the opportunity cost is too high — that is, the value of something else you could do with those resources that would either cost less or deliver higher benefit. And if you don’t do that analysis, then you have people saying, “Hey, that’s a great idea. Wow, that’s great too. I’d like that.” It’s like asking your kid, “Do you want candy? Do you want new toys? Do you want more footballs?” instead of saying, “Hey, you have 50 bucks, what are you going to do with it?”

And then there are unintended consequences, because if you make this too complex, you just have fewer suppliers. People won’t tell you, “I’m just not going to bid because it’s impossible” — they simply won’t bid. I’m going to give you three examples, and again I’m trying to be respectful here. This is not to dis anybody who worked on these; in some cases these things have been modified in subsequent revisions, which I really appreciate. But they are examples where, when you think about it, you have to ask what you were asking for in the first place.

I think this was in an early version of NISTIR 7622, and it has since been excised. There was a requirement that the purchaser be notified of personnel changes involving maintenance. Okay, what does that mean?

I know what I think they wanted, which is, if you are outsourcing the human resources for the Defense Department and you move the whole thing to “Hackistan,” obviously they would want to be notified. I got that, but that’s not what it said.

So I look at that and say, we have at least 5,000 products at Oracle, and billions and billions of lines of code. Every day, somebody checks out some code in a transaction and does some work on it, and they didn’t write it in the first place.

So am I going to tweet all that to somebody? What’s that going to do for you? Plus you have things like the German Workers Council. We are going to tell the US Government that Jurgen worked on this line of code? Oh no, that’s not going to happen.

So what was it you were worried about? Because that is not sustainable: tweeting people 10,000 times a day with code changes is just going to consume a lot of resources.

Another one — this was in an early version of something they were trying to do — wanted to know, for each phase of development on each project, how many foreigners worked on it. What’s a foreigner? Is it a Green Card holder? Is it someone who has a dual passport? What is that going to do for you?

Now, again, if you had super-custom code for some intelligence application, I can understand there might be cases in which that would matter. But general-purpose software is not one of them. As I said, I can give you that information — we’re a big company and we’ve got lots of resources; a smaller company probably can’t. But again, what will it do for you? I am taking resources I could be using on something much more valuable and putting them on something really silly.

Last, but not least — and again, with respect; I think I know why this was in there — it might have been the secure engineering draft standard, which has many good parts to it.

Root Cause Analysis

I think vendors will probably understand this one pretty quickly: Root Cause Analysis. If you have a vulnerability, one of the first things you should do is Root Cause Analysis. If you’re a vendor and you have a CVSS 10 security vulnerability in a product that’s being exploited, what do you think the first thing you are going to do is?

Get a patch or a workaround into your customers’ hands? Yeah, that’s probably the number one priority. But Root Cause Analysis, particularly for really nasty security bugs, is also really important. CVSS 0, who cares? But for a 9 or 10, you should be doing that root cause analysis.

I’ve got a better one. We have a technology called Java — maybe you’ve heard of it. We put a lot of work into fixing Java. One of the things we did was not only Root Cause Analysis for CVSS 9 and higher — those have to go in front of my boss — but every Java developer had to sit through that briefing: how did this happen?

Last but not least, we look for other similar instances — not just the root cause, but how did that get in there, how do we avoid it, and where else does this problem exist? I am not saying this to make us look good; I’m saying it for the analytics. What are you really trying to solve here? Root Cause Analysis is important, but it’s important in context. If I have to do it for everything, it’s probably not the best use of a scarce resource.
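The triage policy described here — full root cause analysis only above a severity threshold, followed by a search for similar instances elsewhere — can be sketched in a few lines. This is a hypothetical illustration only: the threshold, data model, and variant search are assumptions for discussion, not Oracle’s actual process.

```python
from dataclasses import dataclass

RCA_THRESHOLD = 9.0  # require root cause analysis at CVSS 9.0 and above


@dataclass
class Vulnerability:
    cve_id: str
    cvss: float      # CVSS base score, 0.0-10.0
    pattern: str     # flawed coding pattern identified during analysis


def needs_rca(vuln: Vulnerability) -> bool:
    """Full root cause analysis is mandatory only for the nastiest bugs,
    so scarce security staff are not spent on low-severity issues."""
    return vuln.cvss >= RCA_THRESHOLD


def find_variants(vuln: Vulnerability, all_vulns: list) -> list:
    """The 'where else does this problem exist' step: after RCA, look for
    other reports exhibiting the same flawed pattern."""
    return [v for v in all_vulns
            if v.pattern == vuln.pattern and v.cve_id != vuln.cve_id]


reports = [
    Vulnerability("CVE-0000-0001", 9.8, "unsafe-deserialization"),
    Vulnerability("CVE-0000-0002", 4.3, "verbose-logging"),
    Vulnerability("CVE-0000-0003", 7.1, "unsafe-deserialization"),
]
rca_queue = [v for v in reports if needs_rca(v)]
```

The point of the sketch is the gate: only the 9.8 lands in the RCA queue, but the variant search still surfaces the lower-severity report that shares its root cause.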

My last point is to minimize prescriptiveness, within limits. For example, probably some people in here don’t know how to bake, or maybe you have made a pie. There is no one right way to bake a cherry pie. Some people go down to Ralphs, get a frozen Marie Callender’s out of the freezer, stick it in the oven, and they’ve got a pretty good cherry pie.

Some people make everything from scratch. Some people use a prepared pie crust and they do something special with the cherries they picked off their tree, but there is no one way to do that that is going to work for everybody.

There is still best practice for some things, though. For example, I can say truthfully that a good development practice would not be: number one, just start coding; number two, it compiles without too many errors on the base platform — ship it. That is not good development practice.

If you mandate too much, it will stifle innovation and it won’t work for people. Plus, as I mentioned, you will have an opportunity cost: if somebody says I have to do something one way, but there is a more innovative way of doing it, the mandate has crowded out the better approach.

We don’t have a single development methodology at Oracle, mostly because of acquisitions. We buy a great company, and we don’t tell them, “You know that agile thing you are doing? That’s so last year. You have to do waterfall.” That’s not going to work very well. But there are good practices even within those different methodologies.

Allowing for different hows is really important. Static analysis is one of them. I think static analysis is industry practice now, and people should be doing it. Third-party static analysis, though, is really problematic — I have been opining about this all morning.

Third-party Analysis

Let’s just say a large customer, whom I won’t name, used a third-party static analysis service. They broke their license agreement with us, and they heard about that from us. Worse, they gave us a report that included vulnerabilities from one of our competitors. I don’t want to know about those, right? I can’t fix them. I did tell my competitor, “You should know this report exists, because I’m sure you want to analyze it.”

Here’s the worst part: how many of the vulnerabilities the third party found do you think had any merit? Running a tool is nothing; analyzing the results is everything. That customer and the vendor wasted the time of one of our best security leads trying to make sure there was no there there — and there wasn’t.

So, last but not least, government can use its purchasing power in a lot of very good ways, but realize that regulatory requirements are probably going to lag actual practice. You could be specifying buggy-whip standards when the reality is that nobody uses buggy whips anymore. It’s not always about the standard, particularly if you are using resources in a less than optimal way.

One of the things I like about The Open Group is that here we have actual practitioners. This is one of the best forums I have seen, because there are people who have actual subject matter expertise to bring to the table, which is so important in saying what is going to work and can be effective.

The last thing I am going to say is a nice thank you to the people in The Open Group Trusted Technology Forum (OTTF), because I appreciate the caliber of my colleagues, and also Sally Long. They talk about this type of an effort as herding cats, and at least for me, it’s probably like herding a snarly cat. I can be very snarly. I’m sure you can pick up on that.

So I truly appreciate the professionalism and the focus and the targeting. Targeting a good slice of making a supply-chain problem better, not boiling the ocean, but very focused and targeted and with very high-caliber participation. So thank you to my colleagues and particularly thank you to Sally, and that’s it, I will turn it over to others.

Jim Hietala: We do — we have a few questions from the audience. The first one, and both of you should feel free to chime in on this: something you brought up, Dr. Ross, was building security in, looking at software and systems engineering processes. How do you bring industry along in terms of commercial off-the-shelf products and services, especially when you look at things like the IoT, where we have IP interfaces grafted onto all sorts of devices?

Ross: As Mary Ann was saying before, the strength of any standard is really its implementability out there. When we talk about the engineering standard in particular — the 15288 extension — if we do it correctly, then for every organization out there that’s already using a security development lifecycle, like 27034 (you can pick your favorite standard), we should be able to reflect those activities in the different lanes of the 15288 processes.

This is a very important point that I got from Mary Ann’s discussion. We have to win the hearts and minds and be able to reflect things in a disciplined and structured process that doesn’t take people off their current game. If they’re doing good work, we should be able to reflect that good work and say, “I’m doing these activities whether it’s SDL, and this is how it would map to those activities that we are trying to find in the 15288.”
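The crosswalk Ross describes — reflecting an organization’s existing SDL activities in 15288 process lanes without taking people off their current game — can be pictured as a simple mapping table. The activity names and lane assignments below are illustrative assumptions for discussion, not normative text from 15288, 27034, or any particular SDL.

```python
# Illustrative crosswalk: secure-development (SDL) activities an
# organization already performs, mapped onto ISO/IEC 15288-style
# technical process lanes. Assignments are examples, not standard text.
SDL_TO_15288 = {
    "threat modeling":      "Architecture Definition",
    "secure coding review": "Implementation",
    "static analysis":      "Implementation",
    "fuzz testing":         "Verification",
    "penetration testing":  "Validation",
    "patch management":     "Maintenance",
}


def lanes_covered(activities):
    """Given the SDL activities an organization already does, report
    which 15288 lanes that existing work would map into — i.e., reflect
    good work rather than impose a new process."""
    return sorted({SDL_TO_15288[a] for a in activities if a in SDL_TO_15288})
```

A shop already doing static analysis and fuzz testing, for instance, would see its current work land in the Implementation and Verification lanes without changing anything it does today.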

And that can apply to the IoT. Again, it goes back to the computer, whether it’s Oracle database or a Microsoft operating system. It’s all about the code and the discipline and structure of building that software and integrating it into a system. This is where we can really bring together industry, academia, and government and actually do something that we all agree on.

Different Take

Davidson: I would have a slightly different take on this, and I know mine is a bit of a voice crying in the wilderness. My concern about the IoT goes back to things I learned in business school in financial market theory, which unfortunately was borne out in 2008.

There are certain types of risk you can mitigate. If I cross a busy street, I’m worried about getting hit by a car; I can look both ways and mitigate that. You can’t mitigate systemic risk — it means you have created a fragile system. That is the problem with the IoT, and it is a problem that no amount of engineering will solve.

If it’s not a problem, why aren’t we giving nuclear weapons IP addresses? Okay, I am not making this up — the Air Force thought about that at one point. You’re laughing. Okay: Armageddon, there’s an app for that.

That’s the problem. I know this is going to happen anyway, whether or not I approve of it, but I really wish people could look at this not just in terms of how many of these devices there are and what a great opportunity it is, but in terms of what systemic risk we are creating by doing this.

My house is not connected to the Internet directly, and I do not want somebody to shut my appliances off, or shut down my refrigerator, or lock it so that I can’t get into it, or use it for launching an attack. Those are the discussions we should be having — at least as much as how we make sure that the people designing these things have a clue.

Hietala: The next question is, how do customers and practitioners value the cost of security? And a related question: what can global companies do to get C-suite attention and investment in cybersecurity — that whole ROI and value discussion?

Davidson: I know they value it, because nobody calls me up and says, “I am bored this week. Don’t you have more security patches for me to apply?” That’s actually true. We know what it costs us to produce a lot of these patches, and for the amount of resources we spend on that, I would much rather be putting them into building something new and innovative, where we could charge money and provide more value to customers.

So it’s cost avoidance, number one. Number two, more people have an IT backbone, and they understand the value of having it be reliable. Probably one of the reasons people are moving to clouds is that it’s hard to maintain all of this and hard to find the right people to maintain it. But I also have more customers asking us now about our security practices — which is a case of be careful what you wish for.

I said this 10 years ago: people should be demanding to know what we’re doing. Now I am going to spend a lot of time answering RFPs, but that’s good. These people are aware of this. They’re running their business on our stuff, and they want to know what kind of care we’re taking to make sure we’re protecting their data and their mission-critical applications as if they were ours.

Difficult Question

Ross: The ROI question is very difficult with regard to security, and I think this goes back to what I said earlier. The sooner we get security out of its stovepipe and integrated as just part of the best practices we do every day — whether it’s in the development work at a company, or in our enterprises as part of mainstream organizational management like the SDLC, or in any engineering work within the organization, or with the Enterprise Architecture group involved — the better. That integration makes security less of a “hey, I am special” thing and more of just a part of the way we do business.

So customers are looking for reliability and dependability. They rely on this great bed of IT products, systems, and services, and they’re not always focused on the security aspects. They just want to make sure it works, and that if there is an attack and malware goes creeping through their system, they are as protected as they need to be. Sometimes that flies way below their radar.

So it’s got to be a systemic process and an organizational transformation. I think we have to go through it, and we are not quite there just yet.

Davidson: Yeah, and you really do have to bake it in. I have a team of 45 people — I’ve got three more headcount, hoo-hoo — but we have about 1,600 people in development whose jobs include being security points of contact and security leads. They’re the boots on the ground who implement our program, because I don’t want an organization that peers over everybody’s shoulder to make sure they are writing good code. That’s not cost-effective and not a good way to do it. It’s cultural.

One of the ways that you do that is seeding those people in the organization, so they become the boots on the ground and they have authority to do things, because you’re not going to succeed otherwise.

Going back to Java, that was the first discussion I had with one of the executives: this is a cultural thing. Everybody needs to feel that he or she is personally responsible for security — not just those 10 or 20 people, whoever the security weenies are. It’s got to be everybody, and when you do that, you really start to see change in how things happen. Not everybody is going to be a security expert, but everybody has some responsibility for security.

Transcript available here.

Transcript of part of the proceedings from The Open Group San Diego 2015 in February. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Join the conversation! @theopengroup #ogchat


Comments Off on Cybersecurity Standards: The Open Group Explores Security and Ways to Assure Safer Supply Chains

Filed under Cloud, Cloud/SOA, Conference, Cybersecurity, Enterprise Architecture, Information security, Internet of Things, IT, OTTF, RISK Management, Security, Standards, TOGAF®, Uncategorized

Putting Information Technology at the Heart of the Business: The Open Group San Diego 2015

By The Open Group

The Open Group is hosting the “Enabling Boundaryless Information Flow™” event February 2 – 5, 2015 in San Diego, CA at the Westin San Diego Gaslamp Quarter. The event is set to focus on the changing role of IT within the enterprise and how new IT trends are empowering improvements in businesses and facilitating Enterprise Transformation. Key themes include Dependability through Assuredness™ (The Cybersecurity Connection) and The Synergy of Enterprise Architecture Frameworks. Particular attention throughout the event will be paid to the need for continued development of an open TOGAF® Architecture Development Method and its importance and value to the wider business architecture community. The goal of Boundaryless Information Flow will be featured prominently in a number of tracks throughout the event.

Key objectives for this year’s event include:

  • Explore how cybersecurity and dependability issues are threatening business enterprises and critical infrastructure from both integrity and security perspectives
  • Show the need for Boundaryless Information Flow™, which would result in more interoperable, real-time business processes throughout all business ecosystems
  • Outline current challenges in securing the Internet of Things, and describe ongoing work in the Security Forum and elsewhere that will help address these issues
  • Reinforce the importance of architecture methodologies to assure your enterprise is transforming its approach along with the ever-changing threat landscape
  • Discuss the key drivers and enablers of social business technologies in large organizations, which play an important role in the co-creation of business value, and the key building blocks of a social business transformation program

Plenary speakers at the event include:

  • Chris Forde, General Manager, Asia Pacific Region & VP, Enterprise Architecture, The Open Group
  • John A. Zachman, Founder & Chairman, Zachman International, and Executive Director of FEAC Institute

Full details on the range of track speakers at the event can be found here, with the following (among many others) contributing:

  • Dawn C. Meyerriecks, Deputy Director for Science and Technology, CIA
  • Charles Betz, Founder, Digital Management Academy
  • Leonard Fehskens, Chief Editor, Journal of Enterprise Architecture, AEA

Registration for The Open Group San Diego 2015 is open and available to members and non-members. Please register here.

Join the conversation via Twitter – @theopengroup #ogSAN

 


Filed under Boundaryless Information Flow™, Dependability through Assuredness™, Internet of Things, Professional Development, Security, Standards, TOGAF®, Uncategorized