Tag Archives: security compliance

Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when maintenance is actually needed, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group initiatives, Quantum Lifecycle Management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the Big Data processing needed to achieve this aim.

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These feature presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is one of the areas that I support as forum director.) Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.

Jason Bloomberg, president of ZapThink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever-changing tools to handle Big Data of ever-increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer, for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how the choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for a government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business rather than the IT side of the enterprise, and it presents major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishingboat-bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Big Data Security Tweet Jam

By Patty Donovan, The Open Group

On Tuesday, January 22, The Open Group will host a tweet jam examining the topic of Big Data and its impact on the security landscape.

Recently, Big Data has been dominating the headlines, with coverage analyzing everything from how to manage and process it to the way it will impact your organization’s IT roadmap. As 2012 came to a close, analyst firm Gartner predicted that data will help drive IT spending to $3.8 trillion in 2014. Knowing the phenomenon is here to stay, enterprises face a new and daunting challenge: how to secure Big Data. Big Data security also raises other questions, such as: Is Big Data security different from data security? How will enterprises handle Big Data security? What is the best approach to Big Data security?

It remains to be seen whether Big Data will revolutionize enterprise security, but it will certainly change execution – if it hasn’t already. Please join us for our upcoming Big Data Security tweet jam, where leading security experts will discuss the merits of Big Data security.

Please join us on Tuesday, January 22 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. GMT for a tweet jam, moderated by Dana Gardner (@Dana_Gardner), ZDNet – Briefings Direct, that will discuss and debate the issues around Big Data security. Key areas that will be addressed during the discussion include: data security, privacy, compliance, security ethics and, of course, Big Data. We welcome Open Group members and interested participants from all backgrounds to join the session and interact with our panel of IT security experts, analysts and thought leaders led by Jim Hietala (@jim_hietala) and Dave Lounsbury (@Technodad) of The Open Group. To access the discussion, please follow the #ogChat hashtag during the allotted discussion time.

And for those of you who are unfamiliar with tweet jams, here is some background information:

What Is a Tweet Jam?

A tweet jam is a one hour “discussion” hosted on Twitter. The purpose of the tweet jam is to share knowledge and answer questions on Big Data security. Each tweet jam is led by a moderator and a dedicated group of experts to keep the discussion flowing. The public (or anyone using Twitter interested in the topic) is encouraged to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:

  • Have your first #ogChat tweet be a self-introduction: name, affiliation, occupation.
  • Start all other tweets with the question number you’re responding to and the #ogChat hashtag.
    • Sample: “Q1 enterprises will have to make significant adjustments moving forward to secure Big Data environments #ogChat”
  • Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.
  • While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue!
  • A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.

If you have any questions prior to the event or would like to join as a participant, please direct them to Rod McLeod (rmcleod at bateman-group dot com). We anticipate a lively chat and hope you will be able to join!

 

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.


Data Protection Today and What’s Needed Tomorrow

By Ian Dobson and Jim Hietala, The Open Group

Technology today allows thieves to copy sensitive data, leaving the original in place and thus avoiding detection. One needn’t look far in today’s headlines to understand why protection of data is critical going forward. As this recent article from Bloomberg points out, penetrations of corporate IT systems with the aim of extracting sensitive information, IP and other corporate data are rampant. Despite the existence of data breach and data privacy laws in the U.S., EU and elsewhere, this issue is still not well publicized. The article cites specific intrusions at large consumer products companies, the EU itself, law firms and a nuclear power plant.

Published in October 2012, the Jericho Forum® Data Protection white paper reviews the state of data protection today and where it should be heading to meet tomorrow’s business needs. The Open Group’s Jericho Forum contends that future data protection solutions must aim to provide stronger, more flexible protection mechanisms around the data itself.

The white paper argues that some of the current issues with data protection are:

  • It is too global and remote to be effective
  • Protection is neither granular nor interoperable enough
  • It’s not integrated with Centralized Authorization Services
  • Weak security services are relied on for enforcement

Refreshingly, it explains not only why, but also how. The white paper reviews the key issues surrounding data protection today; describes properties that data protection mechanisms should include to meet current and future requirements; considers why current technologies don’t deliver what is required; and proposes a set of data protection principles to guide the design of effective solutions.

It goes on to describe how data protection has evolved to where it is today, and outlines a series of target stages for progressively moving the industry forward to deliver the stronger, more flexible protection solutions that business managers are already demanding their IT systems managers provide. Businesses require these solutions to ensure appropriate data protection levels are wrapped around the rapidly increasing volumes of confidential information shared with their business partners, suppliers, customers and outworkers/contractors on a daily basis.

Having mapped out an evolutionary path for what we need to achieve to move data protection forward in the direction our industry needs, we’re now planning optimum approaches for how to achieve each successive stage of protection. The Jericho Forum welcomes folks who want to join us in this important journey.

 

Ian Dobson is the director of the Security Forum and the Jericho Forum for The Open Group, coordinating and facilitating the members in achieving their goals in our challenging information security world. In the Security Forum, his focus is on supporting the development of open standards and guides on security architectures and the management of risk and security, while in the Jericho Forum he works with members to anticipate the requirements for the security solutions we will need in the future.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.


PODCAST: Standards effort points to automation via common markup language for improved IT compliance, security

By Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-O-ACEML Standard Effort Points to Broad Automation for Improved IT Compliance and Security Across Systems

The following is the transcript of a sponsored podcast panel discussion on the new Open Automated Compliance Expert Markup Language (O-ACEML) standard, in conjunction with The Open Group Conference, Austin 2011.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion in conjunction with The Open Group Conference in Austin, Texas, the week of July 18, 2011. We’re going to examine the Open Automated Compliance Expert Markup Language (O-ACEML), a newly created standard that helps enterprises automate security compliance across their systems in a consistent and cost-saving manner.

O-ACEML not only helps to achieve compliance with applicable regulations but also delivers major cost savings. From the compliance audit viewpoint, auditors can carry out similarly consistent and more capable audits in less time. Here to help us understand O-ACEML, how to manage automated security compliance, and how the standard is evolving are our guests. We’re here with Jim Hietala, Vice President of Security at The Open Group. Welcome back, Jim.

Jim Hietala: Thanks, Dana. Glad to be with you.

Gardner: We’re also here with Shawn Mullen. He’s a Power Software Security Architect at IBM. Welcome to the show, Shawn.

Shawn Mullen: Thank you.

Gardner: Let’s start by looking at why this is an issue. Why do O-ACEML at all? I assume that with security being such a hot topic, and with organizations grappling with regulations and compliance, this has now become an issue that needs some standardization. Let me throw this out to both of you. Why are we doing this at all and what are the problems that we need to solve with O-ACEML?

Hietala: One of the things you’ve seen in the last 10 or 12 years, since the compliance regulations have really come to the fore, is that the more regulation there is, the more specific requirements are put down, and the more challenging it is for organizations to manage. Their IT infrastructure needs to be in compliance with whatever regulations impact them, and the cost of doing so becomes a significant thing. So, anything that could be done to help automate, to drive out cost, and maybe make organizations more effective in complying with the regulations that affect them — whether it’s PCI, HIPAA, or whatever — there’s a lot of benefit to large IT organizations in doing that. That’s really what drove us to look at adopting a standard in this area.

Gardner: Jim, just for those folks who are coming in fresh, are we talking about IT security equipment and the compliance around that, or is it about the process of how you do security, or both? What are the boundaries around this effort and what does it focus on?

Manual process

Hietala: It’s both. It’s enabling the compliance of IT devices specifically around security constraints and the security configuration settings and, to some extent, the process. If you look at how people did compliance or managed compliance without a standard like this, without automation, it tended to be a manual process of setting configuration settings and auditors manually checking on settings. O-ACEML goes to the heart of trying to automate that process and drive some cost out of the equation.

Gardner: Shawn Mullen, how do you see this in terms of the need? What are the trends or environment that necessitate this?

Mullen: I agree with Jim. This has been going on a while, and we’re seeing it with both classes of customers. On the high end, we would go from customer to customer and they would have their own hardening scripts, their own view of what should be hardened. It might conflict with what the compliance organization wanted as far as the settings. O-ACEML provides a standard way of capturing what the compliance organization wants, and it also gives you an easy way to author it and to change it.

If your own corporate security requirements are more stringent, you can easily change the O-ACEML configuration so that it satisfies your more stringent corporate compliance or security policy as well as the regulatory compliance organization, and gives you an easy way to monitor it, report on it, and see it.

In addition, on the low end, small businesses don’t have the expertise to know how to configure their systems. Quite frankly, they don’t want to be security experts. Here is an easy way to take an XML file and harden their systems as they need to be hardened, to meet compliance or just to follow good security practices.

Gardner: One of the things that’s jumped out at me as I’ve looked into this is the rapid improvement in terms of cost or return on investment (ROI), almost into the no-brainer category. Help me understand why it is so expensive and inefficient now, when it comes to security equipment audits and regulatory compliance. What might this then bring in terms of improvement?

Mullen: One of the things that we’re seeing in the industry is server consolidation. If you have these hundreds, or in large organizations, thousands of systems and you have to manually configure them, it becomes a very daunting task. Because of that, it’s a one-time shot at doing this, and then the monitoring is even more difficult. O-ACEML gives you a way of authoring your security policy, whether to meet compliance requirements or your own security policy, and pushing that out. This allows you to have a single XML and push it onto heterogeneous platforms. Everything is configured securely and consistently, and it gives you a very easy way to get the tooling to monitor those systems, so they are configured correctly today and you are checking them weekly or daily to ensure that they remain in that desired state.
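To make that desired-state monitoring concrete, here is a minimal Python sketch of the kind of weekly or daily drift check a central console might run. The policy keys, host names, and reported values are illustrative assumptions, not anything defined by the O-ACEML specification.

    # Illustrative drift check: compare each system's reported settings
    # against the desired policy and flag anything that has drifted.
    desired_policy = {"min_password_length": 7, "password_max_age_days": 90}

    # In practice each endpoint would evaluate the O-ACEML rules itself and
    # report its results; two hosts are faked here for illustration.
    reported = {
        "aix-db01": {"min_password_length": 7, "password_max_age_days": 90},
        "linux-web02": {"min_password_length": 6, "password_max_age_days": 90},
    }

    def check_drift(reports):
        """Return, per host, the settings that no longer match the policy."""
        drift = {}
        for host, actual in reports.items():
            mismatches = [name for name, wanted in desired_policy.items()
                          if actual.get(name) != wanted]
            if mismatches:
                drift[host] = mismatches
        return drift

    print(check_drift(reported))  # {'linux-web02': ['min_password_length']}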

Gardner: So it’s important not only to automate, but to be inclusive and comprehensive in the way you do it, or you’re back to a manual process for at least a significant portion, and that might not meet your compliance needs. Is that how it works?

Mullen: We had a very interesting presentation here at The Open Group Conference yesterday. I’ll let Jim provide some of the details on that, but customers are finding that the best way they can lower their cost of meeting compliance is through automation. If you can automate any part of that compliance process, that’s going to save you time and money. If you can get rid of the manual effort with automation, it greatly reduces your cost.

Gardner: Shawn, do we have any sense in the market what the current costs are, even for something that was as well-known as Sarbanes-Oxley? How impressive, or unfortunately intimidating, are some of these costs?

Cost of compliance

Mullen: There was a very good study presented yesterday. The average annual cost for an organization to be compliant is $3 million. What was also interesting was that the cost of being non-compliant, as they called it, was $9 million.

Hietala: The figures that Shawn was referencing come out of a study by the Ponemon Institute. Larry Ponemon does lots of studies around security, risk, and compliance costs. He authors an annual data breach study, widely quoted in the security industry, that gets to the average cost of data breaches for companies.

In the numbers that were presented yesterday, he recently studied 46 very large companies, looking at their cost to be in compliance with the relevant regulations. It’s about $3.5 million a year, and over $9 million for companies that weren’t compliant, which suggests that companies that are actively managing towards compliance are probably a little more efficient than those that aren’t. What O-ACEML has the opportunity to do for those companies that are in compliance is help drive that $3.5 million down to something much less by automating and taking manual labor out of the process.

Gardner: So it’s a seemingly very worthwhile effort. How did we get to where we are now, Jim, with the standard, and where do we need to go? What’s the level of maturity with this?

Hietala: It’s relatively new. It was just published 60 days ago by The Open Group. The actual specification is on The Open Group website. It’s downloadable, and we would encourage both system vendors and platform vendors, as well as folks in the security management space or maybe the IT-GRC space, to check it out, take a look at it, and think about adopting it as a way to exchange compliance configuration information with platforms.

We want to encourage adoption by as broad a set of vendors as we can, and we think that having more adoption by the industry will help make this more available so that end users can take advantage of it.

Gardner: Back to you, Shawn. Now that we’ve determined that we’re in the process of creating this, perhaps you could set the stage for how it works. What takes place with ACEML? People are familiar with markup languages, but how does this now come to bear on this problem around compliance, automation, and security?

Mullen: Let’s take a single rule, and we’ll use a simple case like minimum password length. In PCI, for example, the minimum password length is seven. Under Sarbanes-Oxley, which relies on COBIT, the password length would be eight.

But with an O-ACEML XML file, it’s very easy to author a rule, and there are three segments to it. The first segment is very human-understandable: you would put something like “password length equals seven.” You can add descriptive text with it, and that’s all you have to author.

Actionable command

When that is pushed down onto the platform or the system that’s O-ACEML aware, it’s able to take that simple ACEML word or directive and map it into an actionable command relevant to that system. When it has mapped it into the actionable command, it writes that back into the XML, completing the second phase of the rule. It then executes that command, either to implement the setting or to check the setting.

The result of the command is then written back into the XML. So now the XML for a particular rule has the first part, the high-level directive as authored by the compliance organization; how that particular system mapped it into a command; and the result of executing that command, either in a setting or a checking format.

Now we have all of the artifacts we need to ensure that the system is configured correctly and to generate audit reports. So when the auditor comes in, we can say, “This is exactly how any particular system is configured and we know it to be consistent, because we can point to any particular system, get the O-ACEML XML, see all the artifacts and generate reports from that.”
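A minimal Python sketch of the three-phase lifecycle just described, building a rule as XML and filling in each phase in turn. The element names and the AIX command are illustrative assumptions, not the actual O-ACEML schema.

    import xml.etree.ElementTree as ET

    # Phase 1: the compliance author writes the high-level, human-readable directive.
    rule = ET.Element("rule", id="pw-length")
    ET.SubElement(rule, "directive").text = "minimum password length = 7"
    ET.SubElement(rule, "description").text = "PCI DSS password length requirement"

    # Phase 2: the O-ACEML-aware endpoint maps the directive to a command for its
    # own platform and writes that mapping back (indicative AIX syntax shown).
    ET.SubElement(rule, "command").text = "chsec -f /etc/security/user -s default -a minlen=7"

    # Phase 3: the endpoint executes or checks the setting and records the outcome,
    # so the XML now carries every artifact an auditor needs.
    ET.SubElement(rule, "result").text = "pass"

    print(ET.tostring(rule, encoding="unicode"))

An auditor reading the resulting document sees the directive, the command the system chose, and the outcome in one place.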

Gardner: Maybe to give a sense of how this works, we can also look at a before-and-after scenario. Maybe you could describe how things are done now, the before or current standard operating procedure, and then what would be the case after someone implements and matures an O-ACEML deployment.

Mullen: There are similar tools to this, but they don’t all operate exactly the same way. I’ll use an example of BigFix. If I had a particular system, they would offer a way for you to write your own scripts. You would basically be doing what you would do at the end point, but you would be doing it at the BigFix central console. You would write scripts to do the checking. You would be doing all of this work for each of your different platforms, because everyone is a little bit different.

Then you could use BigFix to push the scripts down. They would run, and hopefully you wrote your scripts correctly. You would get results back. What we want to do with ACEML is, when you just push the high-level directive down to the system, it understands ACEML and it knows the proper way to do the checking.

What’s interesting about ACEML, and this is one of our differences from, for example, the Security Content Automation Protocol (SCAP), is that instead of the vendor saying, “This is how we do it,” and keeping a repository of how the checking goes and everything like that, you let the endpoint make the determination. The endpoint is aware of what OS it is and it’s aware of what version it is.

For example, with IBM UNIX, which is AIX, you would say “password check at this level.” We’ve increased our password strength and done a lot of security enhancements around that. If you push the ACEML to a newer level of AIX, it would do the checking slightly differently. So it really relies on the platform, the device itself, to understand ACEML and to understand how best to do its checking.
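Here is a small sketch of that endpoint-decides idea: the same high-level directive arrives everywhere, and each platform maps it to its own mechanism. The dispatch function, version remark, and platform commands below are illustrative assumptions, not part of the standard.

    import platform

    def map_password_length_directive(min_len, system=None):
        """Map the high-level directive to a platform-specific command."""
        system = system or platform.system()
        if system == "AIX":
            # A newer AIX level could check additional password-strength
            # attributes; the endpoint, not the console, knows which apply.
            return f"chsec -f /etc/security/user -s default -a minlen={min_len}"
        if system == "Linux":
            return f"sed -i 's/^PASS_MIN_LEN.*/PASS_MIN_LEN {min_len}/' /etc/login.defs"
        raise NotImplementedError(f"no mapping defined for {system}")

    print(map_password_length_directive(7, system="AIX"))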

We see with small businesses and even some of the larger corporations that they’re maintaining their own scripts. They’re doing everything manually. They’re logging on to a system and running some of those scripts. Or, they’re not running scripts at all, but are manually making all of these settings.

It’s an extremely long and burdensome process when you start considering that there are hundreds or thousands of these systems. There are different OSs. You have to find experts for your Linux systems or your HP-UX or AIX. You have to have all those different talents and skills in these different areas, and again, the process is quite lengthy.

Gardner: Jim Hietala, it sounds like we are focusing on servers to begin with, but I imagine that this could be extended to network devices, other endpoints, other infrastructure. What’s the potential universe of applicability here?

Different classes

Hietala: The way to think about it is the universe of IT devices that are in scope for these various compliance regulations. If you think about PCI DSS, it defines pretty tightly what your cardholder data environment consists of. In terms of O-ACEML, it could be networking devices, servers, storage equipment, or any sort of IT device. Broadly speaking, it could apply to lots of different classes of computing devices.

Gardner: Back to you, Shawn. You mentioned the AIX environment. Could you explain a beginning approach that you’ve had with IBM Compliance Expert, or ICE, that might give us a clue as to how well this could work when applied even more broadly? How did that heritage in ICE develop, and what would that tell us about what we could expect with O-ACEML?

Mullen: We’ve had ICE and the AIX Compliance Expert, using XML, for a number of years now. It’s been broadly used by a lot of our customers, not only to secure AIX but to secure the virtualization environment, in particular the Virtual I/O Server. So we use it for that.

One of the things that O-ACEML brings is some of the lessons we learned from doing our own proprietary XML. It also brings some lessons we learned from looking at other compliance XML formats like XCCDF. One of the things we put in there was a remediation element.

For example, PCI says that your password length should be seven; COBIT says your password length should be eight. With the XML, you can blend multiple compliance requirements into a single policy, choosing the more secure setting, so that both compliance organizations, or all three compliance organizations, are satisfied, and apply it to a single system.
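A minimal sketch of that blending step: several baselines are merged into one policy by keeping the most stringent value for each requirement. The requirement names, the extra corporate baseline, and the notion of which direction counts as stricter are illustrative assumptions.

    pci = {"min_password_length": 7, "password_max_age_days": 90}
    cobit = {"min_password_length": 8}
    corporate = {"password_max_age_days": 60}  # internal policy, stricter again

    # Which direction counts as "more secure" for each setting.
    stricter = {"min_password_length": max, "password_max_age_days": min}

    def blend(*baselines):
        """Merge baselines, keeping the most stringent value for each requirement."""
        merged = {}
        for baseline in baselines:
            for name, value in baseline.items():
                if name in merged:
                    merged[name] = stricter.get(name, max)(merged[name], value)
                else:
                    merged[name] = value
        return merged

    print(blend(pci, cobit, corporate))
    # {'min_password_length': 8, 'password_max_age_days': 60}

The blended policy could then be expressed as a single file and pushed to every system, as Shawn goes on to describe.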

One of the things that we’re hoping vendors will gravitate toward is the ability to have a central console configuring and monitoring their IT environment by pushing out a single XML file. It doesn’t have to push out a special XML for Linux versus AIX versus a network device. It can push out that one ACEML file to all of the devices. It’s a single descriptive XML, and each device, in turn, knows how to map it to its own particular platform and security configuration.

Gardner: Jim Hietala, it sounds as if the low-hanging fruit here would be the compliance and automation benefit, but it also sounds as if this is comprehensive. It’s targeted at a very large set of the devices and equipment in the IT infrastructure. This could become a way of propagating new security policies, protocols, approaches, even standards, down the line. Is that part of the vision here — to be able to offer a means by which an automated propagation of future security changes could easily take place?

Hietala: Absolutely, and it goes beyond just the compliance regulations that are inflicted on us or put on us by government organizations, to defining a best-practice set of security policies in the organization, and then using this as a mechanism to push those out to your environment and to ensure that they are being followed and implemented on all the devices in your IT environment.

So, it definitely goes beyond just managing compliance to these external regulations, but to doing a better job of implementing the ideal security configuration settings across your environment.

Gardner: And because this is being done in an open environment like The Open Group, and because it’s inclusive of any folks or vendors or suppliers who want to take part, it sounds as if this could also cross the chasm between an enterprise IT set and a consumer, mobile, or external third-party provider set.

Is it also a possibility that we’re going beyond heterogeneity, when it comes to different platforms, but perhaps crossing boundaries into different segments of IT and what we’re seeing with the “consumerization” of IT now? I’ll ask this to either of you or both of you.

Moving to the Cloud

Hietala: I’ll make a quick comment and then turn it over to Shawn. Definitely, if you think about how this sort of standard might apply to services that are built in somebody’s Cloud, you could see using this as a way both to set configuration settings and to check on the status of configuration settings on instances of machines that are running in a Cloud environment. Shawn, maybe you want to expand on that?

Mullen: It’s interesting that you brought this up, because this is the exact conversation we had earlier today in one of the plenary sessions. They were talking about moving your IT out into the Cloud. One of the issues, aside from just the security, was how do you prove that you are meeting these compliance requirements?

O-ACEML is a way to reach into the Cloud to find your particular system and bring back a report that you can present to your auditor. Even though you don’t own the system (it’s not in the data center in the next office; it’s off in the Cloud somewhere), you can bring back all the artifacts necessary to prove to the auditor that you are meeting the regulatory requirements.

Gardner: Jim, how do folks take further steps to gather more information? Obviously, this would probably be of interest to enterprises as well as suppliers, vendors, and professional services organizations. What are the next steps? Where can they go to get some information? What should they do to become involved?

Hietala: The standard specification is up on our website. You can go to the “Publications” tab on our website and do a search for O-ACEML, and you should find the actual technical standard document. Then, you can get involved directly in the Security Forum by joining The Open Group. As the standard evolves, and as we do more with it, we certainly want more members involved in helping to guide its progress over time.

Gardner: Thoughts from you, Shawn, on that same getting involved question?

Mullen: That’s a perfect way to start. We do want to invite different compliance organizations, everybody from the electrical power grid — they have their own view of security — to ISO to the payment card industry. For the electrical power grid standard, for example — and ISO is the same way — what ACEML helps them with is that they don’t need to understand how Linux does it or how AIX does it. They don’t need to have that deep understanding.

In fact, the way ISO describes it in their PDF around password settings, it basically says use good password settings, and it doesn’t go into any depth beyond that. The way we architected and designed O-ACEML is that you can just say, “I want good password settings,” and it will default to what we decided. What we focused in on collectively, as an international standard in The Open Group, was that good password hygiene means you change your password every six months, it should be at least so many characters, and there should be a non-alphanumeric character.

It removes the burden on these different compliance groups of being security experts and lets them just use ACEML and the default settings that The Open Group came up with. We want to reach out to those groups and show them the benefits of publishing some of their security standards in O-ACEML. Beyond that, we’ll work with them to have that standard up, and hopefully they can publish it on their website, or maybe we can publish it on The Open Group website.

Next milestones

Gardner: Well, great. We’ve been learning more about the Open Automated Compliance Expert Markup Language, more commonly known as O-ACEML. We’ve been seeing how it can help assure compliance with applicable regulations across different types of equipment, and how it has the opportunity to provide more security across different domains, be that cloud, on-premises, or even partner networks, while also achieving major cost savings. We’ve been learning how to get started with this and what the maturity timeline is.

Jim Hietala, what would be the next milestone? What should people expect next in terms of how this is being rolled out?

Hietala: You’ll see more from us in terms of adoption of the standard. We’re already looking at case studies and so forth to describe, in terms that everyone can understand, what benefits organizations are seeing from using O-ACEML. Given the environment we’re in today, we’re reading about security breaches and hacktivism and so forth every day in the newspapers.

I think we can expect to see more regulation and more frequent revisions of regulations and standards affecting IT organizations and their security, which really makes it imperative to engineer your IT environment in such a way that you can accommodate those changes as they are brought to your organization, do so in an effective way, and at the least cost. Those are really the kinds of things that O-ACEML has targeted, and I think there is a lot of benefit to organizations in using it.

Gardner: Shawn, one more question to you as a follow-up to what Jim said. Not only should we expect more regulations, but we’ll see them coming from different governments and different strata of government: state, local, and perhaps federal. For a multinational organization, this could be a very complex undertaking, so I’m curious as to whether O-ACEML could also help when it comes to managing multiple regulations across multiple jurisdictions for larger organizations.

Mullen: That was the goal when we came up with O-ACEML. Anybody can author it, and again, if a single system falls under the purview of multiple compliance requirements, we can blend those together and that single system can satisfy them all. It’s an international standard, and we want it to be used by multiple compliance organizations. And compliance is a good thing. It’s just good IT governance. It will save companies money in the long run, as we saw with those statistics. The goal is to lower the cost of being compliant, so you get good IT governance, just at a lower cost.

Gardner: Thanks. This sponsored podcast is coming to you in conjunction with The Open Group Conference in Austin, Texas, in the week of July 18, 2011. Thanks to both our guests. Jim Hietala, the Vice President of Security at The Open Group. Thank you, Jim.

Hietala: Thank you, Dana.

Gardner: And also Shawn Mullen, Power Software Security Architect at IBM. Thank you, Shawn.

Mullen: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com.

Copyright The Open Group 2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Service-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.
