Tag Archives: information security

Improving Patient Care and Reducing Costs in Healthcare – Join The Open Group Tweet Jam on Wednesday, April 23

By Jason Lee, Director of Healthcare and Security Forums, The Open Group

On Wednesday, April 23 at 9:00 am PT/12:00 pm ET/5:00 pm GMT, The Open Group Healthcare Forum will host a tweet jam to discuss the issues around healthcare and improving patient care while reducing costs. Many healthcare payer and provider organizations today are facing numerous “must do” priorities, including EHR implementation, transitioning to ICD-10, and meeting enhanced HIPAA security requirements.

This tweet jam will focus on the opportunities healthcare organizations have to improve patient care and reduce the costs associated with capturing, maintaining, and sharing patient information. It will also explore how Enterprise Architecture approaches that have proven effective in other industries can be applied to the healthcare sector to dramatically reduce costs and improve patient care.

In addition to the need for integrated digital health records that can be shared across health organizations – maximizing care both for patients, who don’t want to repeat themselves, and for the doctors providing that care – we’ll explore what other solutions exist to enhance information flow. For example, did you know that a new social network for M.D.s has even emerged to connect and communicate across teams, hospitals and entire health systems? The new network, called Doximity, boasts that 40 percent of U.S. doctors have signed on. Not only are doctors using social media, they’re also using software specifically designed for the iPad, which roughly 68 percent of doctors now carry. One hospital even calculated a return on investment on its use of iPads in just nine days!

We’ll be talking about how many healthcare thought leaders are looking at technology and its influence on online collaboration, patient telemonitoring and information flow.

We welcome The Open Group members and interested participants from all backgrounds to join the discussion and interact with our panel of thought leaders, including Jim Hietala, Vice President of Security; David Lounsbury, CTO; and Dr. Chris Harding, Director of the Open Platform 3.0™ Forum. To access the discussion, please follow the hashtag #ogchat during the allotted discussion time.

Interested in joining The Open Group Healthcare Forum? Register your interest here.

What Is a Tweet Jam?

The Open Group tweet jam, approximately 45 minutes in length, is a “discussion” hosted on Twitter. The purpose of the tweet jam is to share knowledge and answer questions on relevant and thought-provoking issues. Each tweet jam is led by a moderator and a dedicated group of experts to keep the discussion flowing. The public (or anyone using Twitter interested in the topic) is encouraged to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:

Have your first #ogchat tweet be a self-introduction: name, affiliation, occupation.

Start all other tweets with the question number you’re responding to and add the #ogchat hashtag.

Sample: Q1 What barriers exist for collaboration among providers in healthcare, and what can be done to improve things? #ogchat

Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.

While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue.

A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.

If you have any questions prior to the event or would like to join as a participant, please contact Rob Checkal (@robcheckal or rob.checkal@hotwirepr.com). We anticipate a lively chat and hope you will be able to join!

Jason Lee, Director of Healthcare and Security Forums at The Open Group, has conducted healthcare research, policy analysis and consulting for over 20 years. He is a nationally recognized expert in healthcare organization, finance and delivery and applies his expertise to a wide range of issues, including healthcare quality, value-based healthcare, and patient-centered outcomes research. Jason worked for the legislative branch of the U.S. Congress from 1990-2000 — first at GAO, then at CRS, then as Health Policy Counsel for the Chairman of the House Energy and Commerce Committee (in which role the National Journal named him a “Top Congressional Aide” and he was profiled in the Almanac of the Unelected). Subsequently, Jason held roles of increasing responsibility with non-profit organizations — including AcademyHealth, NORC, NIHCM, and NEHI. Jason has published quantitative and qualitative findings in Health Affairs and other journals and his work has been quoted in Newsweek, the Wall Street Journal and a host of trade publications. He is a Fellow of the Employee Benefit Research Institute, was an adjunct faculty member at the George Washington University, and has served on several boards. Jason earned a Ph.D. in social psychology from the University of Michigan and completed two postdoctoral programs (supported by the National Science Foundation and the National Institutes of Health). He is the proud father of twins and lives outside of Boston.



Filed under Boundaryless Information Flow™, Enterprise Architecture, Healthcare, Tweet Jam

Q&A with Allen Brown, President and CEO of The Open Group

By The Open Group

Last month, The Open Group hosted its San Francisco 2014 conference themed “Toward Boundaryless Information Flow™.” Boundaryless Information Flow has been the pillar of The Open Group’s mission since 2002 when it was adopted as the organization’s vision for Enterprise Architecture. We sat down at the conference with The Open Group President and CEO Allen Brown to discuss the industry’s progress toward that goal and the industries that could most benefit from it now as well as The Open Group’s new Dependability through Assuredness™ Standard and what the organization’s Forums are working on in 2014.

The Open Group adopted Boundaryless Information Flow as its vision in 2002, and the theme of the San Francisco conference has been “Toward Boundaryless Information Flow™.” Where do you think the industry is at this point in progressing toward that goal?

Well, it’s progressing reasonably well, but the challenge is, of course, that when we established that vision back in 2002, life was a little less complex, a little bit less fast moving, a little bit less fast-paced. Although organizations are improving the way that they act in a boundaryless manner – and of course that changes by industry – some industries still have big silos and stovepipes; they still have big boundaries. But generally speaking we are moving, and everyone understands the need for information to flow in a boundaryless manner, for people to be able to access and integrate information and to provide it to the teams that need it.

One of the keynotes on Day One focused on the opportunities within the healthcare industry, and The Open Group recently started a Healthcare Forum. Do you see the healthcare industry as a test case for Boundaryless Information Flow, and why?

Healthcare is one of the verticals that we’ve focused on. And it is not so much a test case as an area that absolutely seems to need information to flow in a boundaryless manner, so that everyone involved – from the patient through the administrator through the medical teams – has access to the right information at the right time. We know that in many situations there are shifts of medical teams, and from one medical team to another they don’t have access to the same information. Information isn’t easily shared between medical doctors, hospitals and payers. What we’re trying to do is to focus on the needs of the patient and improve the information flow so that you get better outcomes for the patient.

Are there other industries where this vision might be enabled sooner rather than later?

I think that we’re already making significant progress in what we call the Exploration, Mining and Minerals industry. Our EMMM™ Forum has produced an industry-wide model that is being adopted throughout that industry. We’re also looking at whether we can have an influence in the airline industry, automotive industry, manufacturing industry. There are many, many others, government and retail included.

The plenary on Day Two of the conference focused on The Open Group’s Dependability through Assuredness standard, which was released last August. Why is The Open Group looking at dependability and why is it important?

Dependability is ultimately what you need from any system. You need to be able to rely on that system to perform when needed. Systems are becoming more complex, and they’re becoming bigger. We’re not just thinking about the things that arrive on the desktop; we’re thinking about systems like the barriers at subway stations or Tube stations, systems that operate any number of complex activities. And they bring an awful lot of things together that you have to rely upon.

Now in all of these systems, what we’re trying to do is to minimize the amount of downtime, because downtime can result in financial loss or, at worst, the loss of human life, and we’re trying to focus on that. What is interesting about the Dependability through Assuredness Standard is that it brings together so many other aspects of what The Open Group is working on. Obviously the architecture is at the core, so it’s critical that there’s an architecture and that we understand the requirements of that system. It’s also critical that we understand the risks, so that fits in with the work of the Security Forum on Risk Analysis and Dependency Modeling, and out of the dependency modeling we can get the use cases so that we can understand where the vulnerabilities are, what action has to be taken if we identify a vulnerability, or what action needs to be taken in the event of a failure of the system. If we do that and assign accountability to people for who will do what by when, in the event of an anomaly being detected or a failure happening, we can actually minimize that downtime or remove it completely.

Now the other great thing about this is that it’s not only a focus on the architecture for the actual system development – and as the system changes over time, requirements change, legislation changes that might affect it, external changes, all of that goes into that system – but there’s also another cycle within that system that deals with failure, analyzes it and makes sure it doesn’t happen again. And there have been so many instances of failure recently. In the UK, for example, a bank was recently unable to process debit cards or credit cards for customers for about three or four hours. And that was probably caused by work done on a routine basis over a weekend. But if Dependability through Assuredness had been in place, that could have been averted; it could have saved an awful lot of difficulty for an awful lot of people.

How does the Dependability through Assuredness Standard also move the industry toward Boundaryless Information Flow?

It’s part of it. It’s critical that with big systems the information has to flow. But this is not so much about the information itself as about how a system is going to work in a dependable manner.

Business Architecture was another featured topic in the San Francisco plenary. What role can Business Architecture play in enterprise transformation vis-à-vis Enterprise Architecture as a whole?

A lot of people in the industry are talking about Business Architecture right now and trying to focus on that as a separate discipline. We see it as a fundamental part of Enterprise Architecture. In fact, there are three legs to Enterprise Architecture: there’s Business Architecture; there’s the need for business analysts, who are critical to supplying the information; and then there are the solution, data, application and other architects that are needed. So those three legs are needed.

We find that there are two or three different types of Business Architect. There are those that are using the analysis to understand what the business is doing in order that they can inform the solutions architects and other architects for the development of solutions. There are those that are more integrated with the business, that can understand what is going on and provide input into how that might be improved through technology. And there are those that can actually go another step and say, “here are the advances in technology and here are the opportunities for advancing our competitiveness as an organization.”

What are some of the other key initiatives that The Open Group’s forum and work groups will be working on in 2014?

That kind of question is like when you’ve got an award, you’ve got to thank your friends, so apologies to anyone that I leave out. Let me start alphabetically with the Architecture Forum. The Architecture Forum obviously is working on the evolution of TOGAF®; they’re also working on the harmonization of TOGAF with ArchiMate®, and they have a number of projects within that. Of course, Business Architecture is one of the projects going on in the Architecture space. The ArchiMate Forum are pushing ahead with ArchiMate, and they’ve got two interesting activities going on at the moment. One is called ArchiMetals, which is going to be a sister publication to the ArchiSurance case study: where ArchiSurance provides an example of how ArchiMate is used in the insurance industry, ArchiMetals is going to be used in a manufacturing context, so there will be a whitepaper on that and there will be examples and artifacts that we can use. They’re also working on an ArchiMate standard for interoperability of modeling tools. There are four tools that are accredited and certified by The Open Group right now, and we’re looking for that interoperability to help organizations that have multiple tools, as many of them do.

Going down the alphabet, there’s DirecNet. Not many people know about DirecNet, but DirecNet™ is work that we do with the U.S. Navy. They’re working on standards for long-range, high-bandwidth mobile networking. Then we can go to the FACE™ Consortium, the Future Airborne Capability Environment. The FACE Consortium are working on the next version of their standard, they’re working toward accreditation and a certification program, and the uptake of that through procurement is absolutely amazing; we’re thrilled about that.

Healthcare we’ve talked about. The Open Group Trusted Technology Forum is working on how we can trust the supply chain in developed systems. They’ve released the Open Trusted Technology Provider™ Standard (O-TTPS) Accreditation Program, which was launched this week, and we already have one accredited vendor and two certified test labs, assessment labs. That is really exciting because now we’ve got a way of helping any organization that has large complex systems developed through a global supply chain to make sure that they can trust their supply chain. And that is going to be invaluable to many industries, but also to the safety of citizens and the infrastructure of many countries. The other part of the O-TTPS work is that we are planning to move the standard toward ISO standardization shortly.

The next one moving down the list would be Open Platform 3.0™. This is a really exciting part of Boundaryless Information Flow, it really is. This is about the convergence of SOA, Cloud, Social, Mobile, the Internet of Things and Big Data, and bringing all of that together. This convergence, this bringing together of all of those activities, is really something that is critical right now and that we need to focus on. In the different areas, some of our Cloud Computing standards have already gone to ISO and have been adopted by ISO. We’re working right now on the next products that are going to move through. We have a governance standard in process, and an ecosystem standard has recently been published. In the area of Big Data there’s a whitepaper that’s 25 percent completed, and there’s also a lot of work on the definition of what Open Platform 3.0 is, so this week the members have been working on trying to define Open Platform 3.0. One of the really interesting activities that’s gone on is that the members of the Open Platform 3.0 Forum have produced something like 22 different use cases, and they’re really good. They’re concise and precise, and they cover a number of different industries, including healthcare and others. The next stage is to look at those and work on the ROI of those, the monetization, the value from those use cases, and that’s really exciting. I’m looking forward to peeping at that from time to time.

The Real Time and Embedded Systems Forum (RTES) is next. Real-Time is where we incubated the Dependability through Assuredness Framework; that is where it happened and it is continuing to develop, which is really good. The core focus of the RTES Forum is high assurance systems. They’re doing some work with ISO on that and in a number of other areas such as multicore, and, of course, they have a number of EC projects in which we’re partnering with others in the EC around RTES.

The Security Forum, as I mentioned earlier, has done a lot of work on risk and dependability. So they have not only their standards for the Risk Taxonomy and Risk Analysis, but they’ve now also developed the Open FAIR Certification for People, which is based on those two standards of Risk Analysis and Risk Taxonomy. And we’re already starting to see people being trained and certified under that Open FAIR Certification Program that the Security Forum developed.

A lot of other activities are going on. Like I said, I probably left a lot of things out, but I hope that gives you a flavor of what’s going on in The Open Group right now.

The Open Group will be hosting a summit in Amsterdam May 12-14, 2014. What can we look forward to at that conference?

In Amsterdam we have a summit – that’s going to bring together a lot of things, and it’s going to be a bigger conference than we had here. We’ve got a lot of activity going on across all of our areas; we’re going to bring together top-level speakers, so we’re looking forward to some interesting work during that week.



Filed under ArchiMate®, Boundaryless Information Flow™, Business Architecture, Conference, Cybersecurity, EMMMv™, Enterprise Architecture, FACE™, Healthcare, O-TTF, RISK Management, Standards, TOGAF®

Accrediting the Global Supply Chain: A Conversation with O-TTPS Recognized Assessors Fiona Pattinson and Erin Connor

By The Open Group 

At the recent San Francisco 2014 conference, The Open Group Trusted Technology Forum (OTTF) announced the launch of the Open Trusted Technology Provider™ Standard (O-TTPS) Accreditation Program.

The program is one of the first accreditation programs worldwide aimed at assuring the integrity of commercial off-the-shelf (COTS) information and communication technology (ICT) products and the security of their supply chains.

In three short years since OTTF launched, the forum has grown to include more than 25 member companies dedicated to safeguarding the global supply chain against the increasing sophistication of cybersecurity attacks through standards. Accreditation is yet another step in the process of protecting global technology supply chains from maliciously tainted and counterfeit products.

As part of the program, third-party assessor companies will be employed to assess organizations applying for accreditation, with The Open Group serving as the vendor-neutral Accreditation Authority that operates the program. Prior to the launch, the forum conducted a pilot program with a number of member companies. It was announced at the conference that IBM is the first company to become accredited, earning accreditation for its Application, Infrastructure and Middleware (AIM) software business division for its product integrity and supply chain practices.

We recently spoke with OTTF members Fiona Pattinson, director of strategy and business development at Atsec Information Security, and Erin Connor, director at EWA-Canada, at the San Francisco conference to learn more about the assessment process and the new program.

The O-TTPS focus is on securing the technology supply chain. What would you say are the biggest threats facing the supply chain today?

Fiona Pattinson (FP): I think in the three years since the forum began, certainly all the members have discussed the various threats quite a lot. It was one of the things we discussed as an important topic early on, and I don’t know if it’s the ‘biggest threat,’ but certainly the most important threats that we needed to address initially were those of counterfeit and maliciously tainted products. We came to that both through discussion with all the industry experts in the forum and through research into some of the requirements from government, so that’s exactly how we knew which threats [to start with].

Erin Connor (EC): And the forum benefits from having both sides of the acquisition process, both the acquirers and the suppliers and vendors. So they get both perspectives.

How would you define maliciously tainted and counterfeit products?

FP:  They are very carefully defined in the standard—we needed to do that because people’s understanding of that can vary so much.

EC: And actually the concept of ‘maliciously’ tainted was incorporated close to the end of the development process for the standard at the request of members on the acquisition side of the process.

[Note: The standard precisely defines maliciously tainted and counterfeit products as follows:

"The two major threats that acquirers face today in their COTS ICT procurements, as addressed in this Standard, are defined as:

1. Maliciously tainted product – the product is produced by the provider and is acquired through a provider’s authorized channel, but has been tampered with maliciously.

2. Counterfeit product – the product is produced other than by, or for, the provider, or is supplied to the provider by other than a provider’s authorized channel and is presented as being legitimate even though it is not."]

The OTTF announced the Accreditation Program for the O-TTPS at the recent San Francisco conference. Tell us about the standard and how the accreditation program will help ensure conformance to it.

EC: The program is intended to provide organizations with a way to accredit their lifecycle processes for their product development so they can prevent counterfeit or maliciously tainted components from getting into the products they are selling to an end user or into somebody else’s supply chain. It was determined that a third-party type of assessment program would be used. For the organizations, they will know that we Assessors have gone through a qualification process with The Open Group and that we have in place all that’s required on the management side to properly do an assessment. From the consumer side, they have confidence the assessment has been completed by an independent third party, so they know we aren’t beholden to the organizations to give them a passing grade when perhaps they don’t deserve it. And then of course The Open Group is in a position to oversee the whole process and award the final accreditation based on the recommendation we provide. The Open Group will also be the arbiter of the process between the assessors and organizations if necessary.

FP:  So The Open Group’s accreditation authority is validating the results of the assessors.

EC: It’s a model that is employed in many, many other product or process assessment and evaluation programs, where the actual accreditation authority steps back and has third parties do the assessment.

FP: It is important that the assessor companies are working to the same standard so that there’s no advantage in taking one assessor over the other in terms of the quality of the assessments that are produced.

How does the accreditation program work?

FP: Well, it’s brand new so we don’t know if it is perfect yet, but having said that, we have worked over several months on defining the process, and we have drawn from The Open Group’s existing accreditation programs, as well as from the forum experts who have worked in the accreditation field for many years. We have been performing pilot accreditations in order to check out how the process works. So it is already tested.

How does it actually work? Well, first of all an organization will feel the need to become accredited and at that point will apply to The Open Group to get the accreditation underway. Once their scope of accreditation – which may be as small as one product or theoretically as large as a whole global company – is defined, and once the application is reviewed and approved by The Open Group, then they engage an assessor.

There is a way of sampling a large scope to identify the process variations in a larger scope using something we term ‘selective representative products.’ It’s basically a way of logically sampling a big scope so that we capture the process variations within the scope and make sure that the assessment is kept to a reasonable size for the organization undergoing the assessment, but it also gives good assurance to the consumers that it is a representative sample. The assessment is performed by the Recognized Assessor company, and a final report is written and provided to The Open Group for their validation. If everything is in order, then the company will be accredited and their scope of conformance will be added to the accreditation register and trademarked.

EC: So the customers of that organization can go and check the registration for exactly what products are covered by the scope.

FP: Yes, the register is public and anybody can check. So if IBM says WebSphere is accredited, you can go and check that claim on The Open Group web site.
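[Note: Purely as an illustration from the editors, and not part of the O-TTPS or its accreditation process, the sketch below shows the general idea behind “selective representative products” as described above: group a large accreditation scope by the lifecycle-process profile each product follows, then assess one product per distinct profile so that every process variation is covered while the assessment stays a reasonable size. All product and process names are hypothetical.

```python
# Illustrative sketch only (hypothetical names, not drawn from the standard):
# choose one representative product for each distinct process profile so every
# process variation in the scope is assessed without assessing every product.
from collections import defaultdict

# Hypothetical scope: product name -> the set of lifecycle processes it follows.
scope = {
    "ProductA": frozenset({"secure_build", "supplier_vetting", "signed_delivery"}),
    "ProductB": frozenset({"secure_build", "supplier_vetting", "signed_delivery"}),
    "ProductC": frozenset({"secure_build", "third_party_components", "signed_delivery"}),
    "ProductD": frozenset({"legacy_build", "supplier_vetting", "physical_delivery"}),
}

def select_representatives(scope):
    """Return one representative product per distinct process profile."""
    by_profile = defaultdict(list)
    for product, processes in scope.items():
        by_profile[processes].append(product)
    return sorted(products[0] for products in by_profile.values())

print(select_representatives(scope))  # ['ProductA', 'ProductC', 'ProductD']
```
]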

How long does the process take or does it vary?

EC: It will vary depending on how large the scope to be accredited is in terms of the size of the representative set and the documentation evidence. It really does depend on what the variations in the processes are among the product lines as to how long it takes the assessor to go through the evidence and then to produce the report. The other side of the coin is how long it takes the organization to produce the evidence. It may well be that they might not have it totally there at the outset and will have to create some of it.

FP: As Erin said, it varies by the complexity and the variation of the processes and hence the number of selected representative products. There are other factors that can influence the duration. There are three parties influencing that: The applicant Organization, The Open Group’s Accreditation Authority and the Recognized Assessor.

For example, we found that the initial work by the Organization and the Accreditation Authority in checking the scope and the initial documentation can take a few weeks for a complex scope; of course, for the pilots we were all new at doing that. In this early part of the project it is vital to get the scope both clearly defined and approved, since it is key to a successful accreditation.

It is important that an Organization assigns adequate resources to help keep this to the shortest time possible, both during the initial scope discussions, and during the assessment. If the Organization can provide all the documentation before they get started, then the assessors are not waiting for that and the duration of the assessment can be kept as short as possible.

Of course the resources assigned by the Recognized Assessor also influence how long an assessment takes. A variable for the assessors is how much documentation they have to read and review; it might be small or it might be a mountain.

The Open Group’s final review and oversight of the assessment takes some time and is influenced by resource availability within that organization. If they have any questions it may take a little while to resolve.

What kind of safeguards does the accreditation program put in place for enforcing the standard?

FP: It is a voluntary standard—there’s no requirement to comply. Currently some U.S. government organizations are recommending it. For example, NASA refers to it in their SEWP contract, and some of the draft NIST documents on supply chain refer to it, too.

EC: In terms of actual oversight, we review what their processes are as assessors, and the report and our recommendations are based on that review. The accreditation expires after three years so before the three years is up, the organization should actually get the process underway to obtain a re-accreditation.  They would have to go through the process again but there will be a few more efficiencies because they’ve done it before. They may also wish to expand the scope to include the other product lines and portions of the company. There aren’t any periodic ‘spot checks’ after accreditation to make sure they’re still following the accredited processes, but part of what we look at during the assessment is that they have controls in place to ensure they continue doing the things they are supposed to be doing in terms of securing their supply chain.

FP: And then the key part is that the agreement the organization signs with The Open Group includes the fact that the organization warrants and represents that it remains in conformance with the standard throughout the accreditation period. So there is that assurance too, which builds on the more formal assessment checks.

What are the next steps for The Open Group Trusted Technology Forum?  What will you be working on this year now that the accreditation program has started?

FP: Reviewing the lessons we learned through the pilot!

EC: And reviewing comments from members on the standard now that it’s publicly available and working on version 1.1 to make any corrections or minor modifications. While that’s going on, we’re also looking ahead to version 2 to make more substantial changes, if necessary. The standard is definitely going to be evolving for a couple of years and then it will reach a steady state, which is the normal evolution for a standard.

For more details on the O-TTPS accreditation program, to apply for accreditation, or to learn more about becoming an O-TTPS Recognized Assessor, visit the O-TTPS Accreditation page.

For more information on The Open Group Trusted Technology Forum please visit the OTTF Home Page.

The O-TTPS standard and the O-TTPS Accreditation Policy are freely available from the Trusted Technology Section in The Open Group Bookstore.

For information on joining the OTTF membership please contact Mike Hickey – m.hickey@opengroup.org

Fiona Pattinson is responsible for developing new and existing atsec service offerings. Under the auspices of The Open Group’s OTTF, alongside many expert industry colleagues, Fiona has helped develop The Open Group’s O-TTPS, including developing the accreditation program for supply chain security. In the past, Fiona has led service developments which have included establishing atsec’s US Common Criteria laboratory, the CMVP cryptographic module testing laboratory, the GSA FIPS 201 TP laboratory, TWIC reader compliance testing, NPIVP, SCAP, PCI, biometrics testing and penetration testing. Fiona has responsibility for understanding a broad range of information security topics and the application of security in a wide variety of technology areas from low-level design to the enterprise level.

Erin Connor is the Director at EWA-Canada responsible for EWA-Canada’s Information Technology Security Evaluation & Testing Facility, which includes a Common Criteria Test Lab, a Cryptographic & Security Test Lab (FIPS 140 and SCAP), a Payment Assurance Test Lab (device testing for PCI PTS POI & HSM, Australian Payment Clearing Association and Visa mPOS) and an O-TTPS Assessor lab recognized by The Open Group. Erin participated with other expert members of The Open Group Trusted Technology Forum (OTTF) in the development of The Open Group Trusted Technology Provider Standard for supply chain security and its accompanying Accreditation Program. Erin joined EWA-Canada in 1994, and his initial activities in the IT Security and Infrastructure Assurance field included working on the team fielding a large-scale Public Key Infrastructure system, Year 2000 remediation and studies of wireless device vulnerabilities. Since 2000, Erin has been working on evaluations of a wide variety of products, including hardware security modules, enterprise security management products, firewalls, mobile device and management products, and system and network vulnerability management products. He was also the only representative of an evaluation lab in the Biometric Evaluation Methodology Working Group, which developed a proposed methodology for the evaluation of biometric technologies under the Common Criteria.


Filed under Accreditations, Cybersecurity, OTTF, Professional Development, Standards, Supply chain risk

One Year Later: A Q&A Interview with Chris Harding and Dave Lounsbury about Open Platform 3.0™

By The Open Group

The Open Group launched its Open Platform 3.0™ Forum nearly one year ago at the 2013 Sydney conference. Open Platform 3.0 refers to the convergence of new and emerging technology trends such as Mobile, Social, Big Data, Cloud and the Internet of Things, as well as the new business models and system designs these trends are pushing organizations toward due to the consumerization of IT and evolving user behaviors. The Forum was created to help organizations address the architectural and structural considerations that businesses must consider to take advantage of and benefit from this evolutionary shift in how technology is used.

We sat down with The Open Group CTO Dave Lounsbury and Open Platform 3.0 Director Dr. Chris Harding at the recent San Francisco conference to catch up on the Forum’s activities and progress since launch and what they’ll be working on during 2014.

The Open Group’s Forum, Open Platform 3.0, was launched almost a year ago in April of 2013. What has the Forum been working on over the past year?

Chris Harding (CH): We launched at the Sydney conference in April of last year. What we’ve done since then first of all was to look at the requirements for the platform, and we did this using the proven TOGAF® technique of the Business Scenario. So over the course of last summer, the summer of 2013, we developed a Business Scenario capturing the requirements for Open Platform 3.0 and that was published just before The Open Group conference in October. Following that conference, the main activity that we’ve been doing is in fact furthering the requirements space. We’ve been developing analysis of use cases, so currently we have 22 different use cases that members of the forum have put together which are illustrating the use of the convergent technologies and most importantly the use of them in combination with each other.

What we’re doing here in this meeting in San Francisco is to obtain from that basis of requirements and use cases an understanding of what the platform fundamentally should be because it is our intention to produce a Snapshot definition of the platform by the end of March. So in the first year of the Forum, we hope that we will finish that year by producing a Snapshot definition of Open Platform 3.0.

Dave Lounsbury (DL): First, the roots of the Open Platform go deeper. Previous to that we had a number of work groups in the areas of Cloud, SOA and some others, such as Semantic Interoperability. All of those were early pieces, and what we saw at the beginning of 2013 was a coalescing of that into this concept that businesses were looking for a new platform for their operations that combined aspects of Social, Mobile, Cloud computing, Big Data and the analytics that go along with it. We saw that emerging in the marketplace, and we formed the Forum to develop that direction. The Open Group always takes an end-to-end view of any problem – we like to look at the whole ecosystem. We want to make sure that the technical standards aren’t just point targets and actually address a business need.

Some of the work groups within The Open Group, such as Quantum Lifecycle Management (QLM) and Semantic Interoperability, have been brought under the umbrella of Open Platform 3.0, most notably the Cloud Work Group. How will the work of these groups continue under Platform 3.0?

CH: Some of the work already going on in The Open Group was directly or indirectly relevant to Open Platform 3.0. First and most importantly, that was the work of the Cloud Work Group, Cloud being one of the convergent technologies, and the Cloud Work Group became a part of Open Platform 3.0. Two other activities also became a part of Open Platform 3.0. One of these was the Semantic Interoperability Work Group, and that is because we recognized that Semantic Interoperability has to be an important part of how these technologies work with each other. It may not be that we have a full definition of that in the first version of the standard – it’s a notoriously difficult area – but over the course of time, we hope to incorporate a Semantic Interoperability component in the Platform definition, and that may well build on the work that we’ve been doing with the Universal Data Element Framework, the UDEF project, which is currently undergoing a major restructuring. The key thing from the Open Platform 3.0 perspective is how the semantic convention relates to the convergence of the technologies in the platform.

In terms of QLM, that became part of Open Platform 3.0 because one of the key convergent technologies is the Internet of Things, and QLM overlaps significantly with that. QLM is not about the Internet of Things, as such, but it does have a strong component of understanding the way networked sensors and controls work, so that’s become an important contribution to the new Forum.

DL: Like in any platform there are going to be multiple components. In Open Platform 3.0, one of the big drivers for this change is Big Data. Big Data is very trendy, right? But where does Big Data come from? Well, it comes from increased connectivity, increased use of mobile devices, increased use of sensors – the ‘Internet of Things.’ All of these things are generating data about usage patterns, where people are, what they’re doing, what they’re buying, what they’re interested in and what their likes and dislikes are, creating a massive flood of data. Now the question becomes ‘how do you compute on that data?’ You need to handle that massively scalable stream of data. You need massively scalable computing underneath it, and you need the ability to move large amounts of information from one place to another. When you think about the analysis of data like that, you have algorithms that do a lot of data access and they’ll have big spikes of computation as they create some model of it. If you’re going to look at 10 zillion records, you don’t want to buy enough computers so you can always look at 10 zillion records; you want to be able to turn that on, do your analysis and turn it back off. That’s, of course, why Cloud is a critical component of Open Platform 3.0.

Open Platform 3.0 encompasses a lot of different technologies as well as how they are converging. How do you piece apart everything that Platform 3.0 entails to begin to formulate a standard for it?

CH: I mentioned that we developed 22 use cases. The way that we’re addressing this is to look at use cases and the business and technical ecosystems that those use cases exemplify and to abstract from that some fundamental architectural patterns. These we believe will be the basis for the initial definition of the platform.

DL: That gets back to this question about how we’re starting up. Again it’s The Open Group’s mantra that we look at a business problem as an end-to-end problem. So what you’ll see in Open Platform 3.0 is that we’ve done the Business Scenario to figure out what the business motivator is and what business people need to get this done, and we’re fleshing that out in these detailed use cases.

One of the things that we’re very careful about in The Open Group is that we don’t replicate what’s going on in other standards bodies. If you look at what’s going on in Cloud, and what continues to go on in Cloud under the Open Platform 3.0 banner, we really focused in on what business people really need in the Cloud guides – those address how business people really use it. We’ve stayed away for a long time from the bits and bytes – we’re now doing a Cloud Reference Architecture – but we’ve also created the Cloud Ecosystem Reference Model, which was just published. That Cloud Ecosystem Reference Model, if you read through it, isn’t about how bits flow around; it’s about how partners interact with each other – what to look for in your Cloud partner, who are the players? When you go to use Cloud in your business, what players do you have to engage with? What are the roles that you have to engage with them on? So again it’s really that business level of guidance that The Open Group is really good at, and we do liaise with other organizations in order to get technical stuff if we need it – or if not, we’ll create it ourselves because we’ve got very competent technical people – but again, it’s that balanced business approach that distinguishes The Open Group way.

Many industry pundits have said that Open Platform 3.0 is ultimately about a shift toward user-driven IT. How does that change the standards making process when most standards are ultimately put in place by technologists not necessarily end-users?

CH: It’s an interesting question. I mentioned the Business Scenario that we developed over the summer – one of the key things that came out of that was that there is this shift towards a more direct use of the technologies by business users. And that is partly because it’s becoming more possible. Cloud is one of the key factors that has shortened the cycle of procuring and putting IT in place to support business use, and made it more possible to manage IT directly. At the same time [users are] becoming impatient with delay and wanting to gain the benefits of technology directly and not at arm’s length through the IT department. In connection with this we’re seeing phenomena such as the business technologist: the technical specialist who works with, or is employed by, the business department rather than within a separate IT department, and one of whose key strengths is an understanding of the business. So that is certainly an important dimension that we’re seeing, and one of the requirements for the Platform is that it should be usable in an environment where business is using IT more directly.

But that wasn’t the question you asked. The question was, ‘isn’t it a problem that the standards are defined by technologists?’ We don’t believe it’s a problem, provided that the technologists do have an understanding of the business environment. That was why, in the Business Scenario activity that we conducted, one of the key inputs was a roundtable workshop with CIO-level people, and that is where a lot of our perspective on why things are changing comes from. Open Platform 3.0 certainly does have a dimension of fundamental architecture patterns, and part of that is business architecture patterns, but it also has a technical dimension, and obviously you do really need the technical people to explore that dimension, though they do always need to keep in mind that the technology is there to serve the business.

DL: If you actually look at trends in the marketplace about how IT is done, and in fact if you look at the last blog post that Allen [Brown] did about agile, the whole thrust of agile methodologies and its successor DevOps is to really get the implementers right next to the business people and have a very tight arrangement in order to get fast iteration and really have the implementer do what the business person needs. I actually view consumerization not as some outside threat but actually a logical extension of that trend. What’s happening in my opinion is that people who are not technologists, who are not part of the IT department, are getting comfortable using and managing their own technology. And so they’re making decisions that used to be made by the IT department years ago – or what used to be the IT department. First there was the big mainframe, and you handed in your cards at a window and you got your printout in your little cubby hole. Then the IT department bought your PC, and now we bring our own devices. There’s nothing wrong with that, that’s people getting comfortable with technology and making decisions. I think that’s one of the reasons we have need for an Open Platform 3.0 approach – to develop business guidance and eventually technical standards on how we keep up with that trend. Because it’s a very natural trend – people want to control the resources they need to get their job done, and if those resources are technical resources, and they’re comfortable doing that, great!

Convergence and Open Platform 3.0 seem to take us closer and closer to The Open Group’s vision of Boundaryless Information Flow™.  Is Open Platform 3.0 the fulfillment of that vision?

DL: I think I’d be crazy to say that it’s the endpoint of that vision. I think being able to move large amounts of data and make decisions on it is a significant step forward in Boundaryless Information Flow, but this is a two-edged sword. I talked about all that data being generated by mobile devices and sensors and retail networks and social networks and things like that. That data is growing exponentially. The number of people who can make decisions on that data is growing at best linearly, and not very quickly. So if there’s all this data out there and nobody to look at it, we need to ask whether we have lowered the boundary for communications or actually raised it by creating a pile of data that no one can climb. That’s why I think a next step is, in fact, more machine-assisted analytics and predictive analytics and machine learning that will help humans digest and understand that data. That will be, I think, yet another step toward Boundaryless Information Flow. Moving bits around does not equate to information flow – it’s only information when it moves from being data to being information in a human’s brain. Until we lower that barrier as well, we’re not there. And even beyond that, there are still lots of things that can be done, in terms of breaking down human language barriers and things like that, or connecting social networks in more intuitive ways. I think there’s a long way to go. I think this is a really important step forward, but fulfillment is too strong a word.

CH: Not in itself, I don’t believe. It is a major contribution towards the vision of Boundaryless Information Flow, but it is not the complete fulfillment of that vision. Since we formulated the problem statement of Boundaryless Information Flow there have been a number of developments that have impacted it and maybe helped to bring it closer. So you might think of SOA as an important enabling technology for Boundaryless Information Flow, replacing the information silos with interacting services. Now we’re seeing Open Platform 3.0, which is certainly going to have a service-oriented flavor, shall we say, although it probably will not look exactly like traditional SOA. The Boundaryless Information Flow requirement was a very far-reaching problem statement. The Interoperable Business Scenario was where it was first set out, and since then we’ve been gradually making progress toward it. Open Platform 3.0 will bring it closer, but I’m sure there will be other things still needed to make it happen.

One of the key things for Boundaryless Information Flow is Enterprise Architecture. So within a particular enterprise, the business and IT need to be architected to enable Boundaryless Information Flow, and TOGAF is the method that is defined and maintained by The Open Group for how enterprises define enterprise architectures. Open Platform 3.0 will complement that by providing a ‘this is what an architecture looks like that enables the business to take advantage of these new converging technologies.’ But there will still be a need for the Enterprise Architect to put that together with the other particular factors involved in an enterprise to create an architecture for Boundaryless Information Flow within that enterprise.

When can we expect the first standard from Open Platform 3.0?

DL: Well, we published the Cloud Ecosystem Reference Model, and again the understanding of how business partners relate in the Cloud world is a key component of Open Platform 3.0. The Forum has a roadmap, and will start publishing the case studies still in process.

The message I would say is there’s already early value in the Cloud Ecosystem Reference Model, which is a logical continuation of cloud work that had already gone on in the Work Group, but is now part of the Forum as part of Open Platform 3.0.

CH: That’s always a tricky question; however, I can tell you what is planned. The intention, as I said, was to produce a Snapshot definition by the end of March and, given we are a quarter of the way through the meeting at this conference, which is the key meeting that will define the basis for that, the progress has been good so far, so I’m optimistic. A Snapshot is not a Standard. A Snapshot is a statement of ‘this is what we are thinking and might be what it will look like,’ but it’s not guaranteed in any way that the Standard will follow the Snapshot. We are intending to produce the first Standard definition of the platform about a year after the Snapshot. That will give the opportunity for people not only within The Open Group but outside The Open Group to give us input and further understanding of the way people intend to use the platform as feedback on the Snapshot, which should be the basis for the first published standard.

For more on the Open Platform 3.0 Forum, please visit: http://www3.opengroup.org/subjectareas/platform3.0.

If you have any questions about Open Platform 3.0 or if you would like to join the new Forum, please contact Chris Harding (c.harding@opengroup.org) for queries regarding the Forum or Chris Parnell (c.parnell@opengroup.org) for queries regarding membership.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing, and the Open Platform 3.0 Forum. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF® practitioner.

Dave Lounsbury is Chief Technical Officer (CTO) and Vice President, Services for The Open Group. As CTO, he ensures that The Open Group’s people and IT resources are effectively used to implement the organization’s strategy and mission. As VP of Services, Dave leads the delivery of The Open Group’s proven processes for collaboration and certification, both within the organization and in support of third-party consortia. Dave holds a degree in Electrical Engineering from Worcester Polytechnic Institute, and is the holder of three U.S. patents.


Filed under Cloud, Cloud/SOA, Conference, Open Platform 3.0, Standards, TOGAF®

How to Build a Smarter City – Join The Open Group Tweet Jam on February 26

By Loren K. Baynes, Director, Global Marketing Communications, The Open Group

On Wednesday, February 26, The Open Group will host a Tweet Jam examining smart cities and how Real-time and Embedded Systems can seamlessly integrate inputs from various agencies and locations. That collective data allows local governments to better adapt to change by implementing an analytics-based approach to measure:

  • Economic activity
  • Mobility patterns
  • Resource consumption
  • Waste management and sustainability measures
  • Inclement weather
  • And much more!

These metrics allow smart cities to do much more than just coordinate responses to traffic jams: they are forecasting and coordinating safety measures in advance of physical disasters and inclement weather; calculating where offices and shops can be laid out most efficiently; and working out how all the parts of urban life should fit together, including energy, sustainability, infrastructural repairs, and planning and development.

Smart cities are already very much a reality in the Middle East and in Korea, and they have become a model for developers in China and for redevelopment in Europe. Market research firm IDC Government Insights projects that 2014 is the year cities around the world start getting smart. It predicts a $265 billion spend by cities worldwide this year alone to implement new technology and integrate agency data. Part of the reason for that spend is likely the fact that more than half the world’s population currently lives in urban areas. With urbanization rates rapidly increasing, the Brookings Institution estimates that number could swell to 75 percent of the global populace by 2050.

While the awe-inspiring Rio de Janeiro is proving to be an interesting smart city model for cities across the world, are smart cities always the best option for informing city decisions? Could the beauty of a self-regulating open grid allow people to decide how best to use spaces in the city?

Please join us on Wednesday, February 26 at 9:00 am PT/12:00 pm ET/5:00 pm GMT for a tweet jam that will discuss the issues around smart cities. We welcome The Open Group members and interested participants from all backgrounds to join the discussion and interact with our panel of thought leaders, including David Lounsbury, CTO, and Chris Harding, Director of Interoperability, of The Open Group. To access the discussion, please follow the #ogchat hashtag during the allotted discussion time.

What Is a Tweet Jam?

A tweet jam is a one-hour “discussion” hosted on Twitter. The purpose of the tweet jam is to share knowledge and answer questions on relevant and thought-provoking issues. Each tweet jam is led by a moderator and a dedicated group of experts to keep the discussion flowing. The public (or anyone using Twitter interested in the topic) is encouraged to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:

Have your first #ogchat tweet be a self-introduction: name, affiliation, occupation.

Start all other tweets with the question number you’re responding to and add the #ogchat hashtag.

Sample: “A1: There are already a number of cities implementing tech to get smarter. #ogchat”

Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.

While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue.

A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.

If you have any questions prior to the event or would like to join as a participant, please contact Rob Checkal (@robcheckal or rob.checkal@hotwirepr.com). We anticipate a lively chat and hope you will be able to join!


Filed under real-time and embedded systems, Tweet Jam

Facing the Challenges of the Healthcare Industry – An Interview with Eric Stephens of The Open Group Healthcare Forum

By The Open Group

The Open Group launched its new Healthcare Forum at the Philadelphia conference in July 2013. The forum’s focus is on bringing Boundaryless Information Flow™ to the healthcare industry to enable data to flow more easily throughout the complete healthcare ecosystem through a standardized vocabulary and messaging. Leveraging the discipline and principles of Enterprise Architecture, including TOGAF®, the forum aims to develop standards that will result in higher quality outcomes, streamlined business practices and innovation within the industry.

At the recent San Francisco 2014 conference, Eric Stephens, Enterprise Architect at Oracle, delivered a keynote address entitled “Enabling the Opportunity to Achieve Boundaryless Information Flow” along with Larry Schmidt, HP Fellow at Hewlett-Packard. A veteran of the healthcare industry, Stephens was Senior Director of Enterprise Architects at Excellus BlueCross BlueShield prior to joining Oracle, and he is an active member of the Healthcare Forum.

We sat down after the keynote to speak with Stephens about the challenges of healthcare, how standards can help realign the industry and the goals of the forum. The opinions expressed here are Stephens’ own, not those of his employer.

What are some of the challenges currently facing the healthcare industry?

There are a number of challenges, and I think when we look at it as a U.S.-centric problem, there’s a disproportionate amount of spending taking place in the U.S. For example, if you look at healthcare expenditures as a percentage of GDP, we’re now at probably 18 percent of GDP [in the U.S.], while other developed countries are spending a full 5 percent of GDP less than that, and in some cases they’re getting better outcomes outside the U.S.

The mere existence of what we call “medical tourism” – where, if I need a hip replacement, I can get it done for a fraction of the cost in another country, with the same or better quality of care, and have a vacation – a rehab vacation – at the same time and bring along a spouse or significant other – means there’s a really wide range of disparity there.

There’s also a lack of transparency. Having worked at an insurance company, I can tell you that with the advent of high deductible plans, there’s a need for additional cost information. When I go on Amazon or go to a local furniture store, I know what the cost is going to be for what I’m about to purchase. In the healthcare system, we don’t get that. With high deductible plans, if I’m going to be responsible for a portion or a larger portion of the fee, I want to know what it is. And what happens is, the incentives to drive costs down force the patient to be a consumer. The consumer now asks the tough questions. If my daughter’s going in for a tonsillectomy, show me a bill of materials that shows me what’s going to be done – if you are charging me $20/pill for Tylenol, I’ll bring my own. Increased transparency is what will in turn drive down the overall costs.

I think there’s one more thing, and this gets into the legal side of things. There is an exorbitant amount of legislation and regulation around what needs to be done. And because every time something goes sideways there’s going to be a lawsuit, doctors will prescribe an extra test, an extra X-ray, for a patient whether they need it or not.

The healthcare system is designed around a vicious cycle of diagnose-treat-release. It’s not incentivized to focus on prevention and management. Oregon is promoting coordinated care organizations (CCOs) that act as an intermediary working with all medical professionals – whether physical, mental, dental, even social workers – to coordinate episodes of care for patients. This drives down inappropriate utilization – for example, using an ER as a primary care facility – and drives the medical system towards prevention and management of health.

Your keynote with Larry Schmidt of HP focused a lot on cultural changes that need to take place within the healthcare industry – what are some of the changes necessary for the healthcare industry to put standards into place?

I would say culturally, it goes back to those incentives, and it goes back to introducing this idea of patient-centricity. And for the medical community, to really start recognizing that these individuals are consumers and increased choice is being introduced, just like you see in other industries. There are disruptive business models. As a for instance, medical tourism is a disruptive business model for United States-based healthcare. The idea of pharmacies introducing clinical medicine for routine care, such as what you see at a CVS, Wal-Mart or Walgreens. I can get a flu shot, I can get a well-check visit, I can get a vaccine – routine stuff that doesn’t warrant a full-blown medical professional. It’s applying the right amount of medical care to a particular situation.

Why haven’t existing standards been adopted more broadly within the industry? What will help providers be more likely to adopt standards?

I think standards adoption is about “what’s in it for me,” the WIIFM idea. It’s demonstrating to providers that utilizing standards is going to help them get out of the medical administration business and focus on their core business, the same way that any other business would want to standardize its information through integration, processes and components. It reduces your overall maintenance costs going forward, and arguably you don’t need a team of billing folks sitting in a doctor’s office because you have standardized exchanges of information.

Why haven’t they been adopted? It’s still a question in my mind. Why would a doctor not want to do that? That’s perhaps a question we’re going to need to explore as part of the Healthcare Forum.

Is it doctors that need to adopt the standards or technologies, or a combination of different constituents within the ecosystem?

I think it’s a combination. We hear a lot about the Affordable Care Act (ACA) and the health exchanges. What we don’t hear about is the legislation to drive toward standardization to increase interoperability. So unfortunately it would seem the financial incentives or things we’ve tried before haven’t worked, and we may simply have to resort to legislation or at least legislative incentives to make it happen because part of the funding does cover information exchanges so you can move health information between providers and other actors in the healthcare system.

You’re advocating putting the individual at the center of the healthcare ecosystem. What changes need to take place within the industry in order to do this?

I think it’s education, a lot of education that has to take place. I think that individuals via the incentive model around high deductible plans will force some of that but it’s taking responsibility and understanding the individual role in healthcare. It’s also a cultural/societal phenomenon.

I’m kind of speculating here, and going way beyond what enterprise architecture or what IT would deliver, but this is a philosophical thing: if I have an ailment, chances are there’s a pill to fix it. Look at the commercials – for every ailment, say hypertension, it’s easy, you just dial in the medication correctly and you don’t worry as much about diet and exercise. These sorts of things – our over-reliance on medication. I’m certainly not going to knock the medications that are needed for folks that absolutely need them, but I think we can become too dependent on pharmacological solutions for our health problems.

What responsibility will individuals then have for their healthcare? Will that also require a cultural and behavioral shift for the individual?

The individual has to start managing his or her own health. We manage our careers and families proactively. Now we need to focus on our health and not just float through the system. It may come down to financial incentives for certain “individual KPIs” such as blood pressure, sugar levels, or BMI. Advances in medical technology may facilitate more personal management of one’s health.

One of the Healthcare Forum’s goals is to help establish Boundaryless Information Flow within the healthcare industry. You’ve said that understanding the healthcare ecosystem will be a key component of that. What does that ecosystem encompass, and why is it important to know that first?

Very simply, we’re talking about the member/patient/consumer, then we get into the payers and the providers, and we have to take into account government agencies and other non-medical agents. But they all have to work in concert, and information needs to flow between those organizations in a very standardized way so that decisions can be made in a very timely fashion.

It can’t be bottled up; it’s got to be provided to the right provider at the right time. Otherwise, best case, it’s going to cost more to manage all the actors in the system. Worst case, somebody dies or there is a “never event” due to misinformation or lack of information during the course of care. The idea of Boundaryless Information Flow gives us the opportunity to standardize and have easily accessible – and, by the way, secured – information that can really aid in that decision-making process going forward. It’s no different than Wal-Mart knowing what kind of merchandise sells well before and after a hurricane (i.e., beer and toaster pastries, BTW). It’s the same kind of real-time information that’s made available to a Google car so it can steer its way down the road. It’s that kind of viscosity needed to make the right decisions at the right time.

Healthcare is a highly regulated industry. How can Boundaryless Information Flow and data collection on individuals be achieved while still protecting patient privacy?

We can talk about standards and the flow and the technical side, but we need to focus on the security and privacy side. And there’s going to be a legislative side, because we’re going to touch on a real fundamental data governance issue – who owns the patient record? Each actor in the system thinks they own the patient record. If we’re going to require more personal accountability for healthcare, then shouldn’t the consumer have more ownership?

We also need to address privacy disclosure regulations to avoid catastrophic data leaks of protected health information (PHI). We need bright IT talent to pull off the integration we are talking about here. We also need folks who are well versed in the privacy laws and regulations. I’ve seen project teams of 200 have up to eight folks just focusing on the security and privacy considerations. We can argue about headcount later but my point is the same – one needs some focused resources around this topic.

What will standards bring to the healthcare industry that is missing now?

I think the standards, and more specifically the harmonization of the standards, are going to bring increased maintainability of solutions, increased interoperability and increased opportunities too. Look at mobile computing, or even Dropbox, which has API hooks into all sorts of tools and is well integrated – I can move files between devices and between apps because they have hooks and it’s easy to work with. So it’s building these communities of developers, apps and technical capabilities that makes it easy to move the personal health record, for example, back and forth between providers, so that it’s not a cataclysmic event to integrate a new version of an electronic health record (EHR) or the next version of an EHR. It’s this idea of standardization, but with some flexibility built into it.

Are you looking just at the U.S. or how do you make a standard that can go across borders and be international?

It is a concern; much of my thinking and much of what I’ve conveyed today is U.S.-centric, based on our problems, but many of these interoperability problems are international. We’re going to need to address it; I couldn’t tell you what the sequence is right now. There are other considerations, for example, single vs. multi-payer – that came up in the keynote. We tend to think that if we stay focused on the consumer/patient we’re going to get it right for all constituencies. It will take time to go international with a standard, but it wouldn’t be the first time. We have a host of technical standards for the Internet (e.g., TCP/IP, HTTP), and the industry has been able to instill these standards across geographies and vendors. Admittedly, the harmonization of healthcare-related standards will be more difficult. However, as our world shrinks with globalization, an international lens will need to be applied to this challenge.

Eric Stephens (@EricStephens) is a member of Oracle’s executive advisory community, where he focuses on advancing clients’ business initiatives by leveraging the practice of Business and Enterprise Architecture. Prior to joining Oracle he was Senior Director of Enterprise Architecture at Excellus BlueCross BlueShield, leading the organization’s architecture design, innovation, and technology adoption capabilities within the healthcare industry.

 

Comments Off

Filed under Conference, Data management, Enterprise Architecture, Healthcare, Information security, Standards, TOGAF®

Measuring the Immeasurable: You Have More Data Than You Think You Do

By Jim Hietala, Vice President, Security, The Open Group

According to a recent study by the Ponemon Institute, the average U.S. company experiences more than 100 successful cyber-attacks each year at a cost of $11.6M. By enabling security technologies, those companies can reduce losses by nearly $4M and instituting security governance reduces costs by an average of $1.5M, according to the study.

In light of increasing attacks and security breaches, executives are increasingly asking security and risk professionals to provide analyses of individual company risk and loss estimates. For example, the U.S. healthcare sector has been required by the HIPAA Security rule to perform annual risk assessments for some time now. The recent HITECH Act also added security breach notification and disclosure requirements, increased enforcement in the form of audits and increased penalties in the form of fines. Despite federal requirements, the prospect of measuring risk and doing risk analyses can be a daunting task that leaves even the best of us with a case of “analysis paralysis.”

Many IT experts agree that we are nearing a time where risk analysis is not only becoming the norm, but when those risk figures may well be used to cast blame (or be used as part of a defense in a lawsuit) if and when there are catastrophic security breaches that cost consumers, investors and companies significant losses.

In the past, many companies have been reluctant to perform risk analyses due to the perception that measuring IT security risk is too difficult because it’s intangible. But if IT departments could soon become accountable for breaches, don’t you want to be able to determine your risk and the threats potentially facing your organization?

In his book, How to Measure Anything, father of Applied Information Economics Douglas Hubbard points out that immeasurability is an illusion and that organizations do, in fact, usually have the information they need to create good risk analyses. Part of the misperception of immeasurability stems from a lack of understanding of what measurement is actually meant to be. According to Hubbard, most people, and executives in particular, expect measurement and analysis to produce an “exact” number—as in, “our organization has a 64.5 percent chance of having a denial of service attack next year.”

Hubbard argues that, as risk analysts, we need to look at measurement more like how scientists look at things—measurement is meant to reduce uncertainty—not to produce certainty—about a quantity based on observation.  Proper measurement should not produce an exact number, but rather a range of possibility, as in “our organization has a 30-60 percent chance of having a denial of service attack next year.” Realistic measurement of risk is far more likely when expressed as a probability distribution with a range of outcomes than in terms of one number or one outcome.

The problem that most often produces “analysis paralysis” is not just the question of how to derive those numbers but also how to get to the information that will help produce those numbers. If you’ve been tasked, for instance, with determining the risk of a breach that has never happened to your organization before, perhaps a denial of service attack against your web presence, how can you make an accurate determination about something that hasn’t happened in the past? Where do you get your data to do your analysis? How do you model that analysis?

In an article published in CSO Magazine, Hubbard argues that organizations have far more data than they think they do and they actually need less data than they may believe they do in order to do proper analyses. Hubbard says that IT departments, in particular, have gotten so used to having information stored in databases that they can easily query, they forget there are many other sources to gather data from. Just because something hasn’t happened yet and you haven’t been gathering historical data on it and socking it away in your database doesn’t mean you either don’t have any data or that you can’t find what you need to measure your risk. Even in the age of Big Data, there is plenty of useful data outside of the big database.

You will still need to gather that data, but you just need enough to be able to measure it accurately, not necessarily precisely. In our recently published Open Group Risk Analysis Standard (O-RA), this is called calibration of estimates. Calibration provides a method for making good estimates, which are necessary for deriving a measured range of probability for risk. Section 3 of the O-RA standard provides a comprehensive look at how best to come up with calibrated estimates, as well as how to determine other risk factors using the FAIR (Factor Analysis of Information Risk) model.
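
To make the idea concrete, here is a minimal sketch in Python of what working with a calibrated estimate might look like in practice. It is an illustration only, not the method defined in O-RA: the function name, the choice of a simple triangular distribution and the sample numbers are all assumptions made for this example.

import random

def simulate_calibrated_estimate(low, most_likely, high, trials=10_000, seed=7):
    """Sample a simple triangular distribution built from a calibrated
    three-point estimate (low / most-likely / high). Illustrative only;
    the O-RA standard discusses several estimation techniques and does
    not prescribe this particular distribution."""
    rng = random.Random(seed)
    return sorted(rng.triangular(low, high, most_likely) for _ in range(trials))

# Hypothetical calibrated estimate: 2 to 15 loss events per year,
# with 5 as the most likely value.
samples = simulate_calibrated_estimate(2, 5, 15)
p05 = samples[int(0.05 * (len(samples) - 1))]
p95 = samples[int(0.95 * (len(samples) - 1))]
print(f"Estimated loss events per year: roughly {p05:.1f} to {p95:.1f}")

The point of the exercise is the shape of the output: a defensible range rather than a single, falsely precise figure.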

So where do you get your data if it’s not already stored and easily accessible in a database? There are numerous sources you can turn to, both externally and internally. You just have to do the research to find it. For example, even if your company hasn’t experienced a DNS attack, many others have—what was their experience when it happened? This information is out there online—you just need to search for it. Industry reports are another source of information. Verizon publishes its annual Verizon Data Breach Investigations Report, for one. DatalossDB publishes an open data breach incident database that provides information on data loss incidents worldwide. Many vendors publish annual security reports and issue regular security advisories. Security publications and analyst firms such as CSO, Gartner, Forrester or Securosis all have research reports that data can be gleaned from.

Then there’s your internal information. Chances are your IT department has records you can use—they likely count how many laptops are lost or stolen each year. You should also look to the experts within your company to help. Other people can provide a wealth of valuable information for use in your analysis. You can also look to the data you do have on related or similar attacks as a gauge.

Chances are, you already have the data you need or you can easily find it online. Use it.

With the ever-growing list of threats and risks organizations face today, we are fast reaching a time when failing to measure risk will no longer be acceptable—in the boardroom or even by governments.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

1 Comment

Filed under Cybersecurity, Data management, Information security, Open FAIR Certification, RISK Management, Uncategorized

Introducing Two New Security Standards for Risk Analysis—Part II – Risk Analysis Standard

By Jim Hietala, VP Security, The Open Group

Last week we took a look at one of the new risk standards recently introduced by The Open Group® Security Forum at The Open Group London Conference 2013, the Risk Taxonomy Technical Standard 2.0 (O-RT). Today’s blog looks at its sister standard, the Risk Analysis (O-RA) Standard, which provides risk professionals the tools they need to perform thorough risk analyses within their organizations for better decision-making about risk.

Risk Analysis (O-RA) Standard

The new Risk Analysis Standard provides a comprehensive guide for performing effective risk analyses within organizations using the Factor Analysis of Information Risk (FAIR™) framework. O-RA is geared toward managing the frequency and magnitude of loss that can arise from a threat, whether human, animal or natural event – in other words, “how often bad things happen and how bad they are when they occur.” Used together, the O-RT and O-RA Standards provide organizations with a way to perform consistent risk modeling that can not only help thoroughly explain risk factors to stakeholders but also allow information security professionals to strengthen existing analysis methods or create better ones. O-RA may also be used in conjunction with other risk frameworks to perform risk analysis.

The O-RA standard is also meant to provide something more than a mere assessment of risk. Many professionals within the security industry fail to distinguish between “assessing” risk and “analyzing” risk. This standard goes beyond assessment by supporting effective analyses, so that risk statements are less vulnerable to problems, and are more meaningful and defensible than the broad risk ratings (“this is a 4 on a scale of 1-to-5”) normally used in assessments.

O-RA also lays out a standard process for approaching risk analysis that can help organizations streamline the way they approach risk measurement. By focusing on four core process elements, organizations are able to perform more effective analyses:

  • Clearly identifying and characterizing the assets, threats, controls and impact/loss elements at play within the scenario being assessed
  • Understanding the organizational context for analysis (i.e. what’s at stake from an organizational perspective)
  • Measuring/estimating various risk factors
  • Calculating risk using a model that represents a logical, rational, and useful view of what risk is and how it works.

Because measurement and calculation are essential elements of properly analyzing risk variables, an entire chapter of the standard is dedicated to how to measure and calibrate risk. This chapter lays out a number of useful approaches for establishing risk variables, including establishing baseline risk estimates and ranges; creating distribution ranges and most likely values; using Monte Carlo simulations; accounting for uncertainty; determining accuracy vs. precision and subjective vs. objective criteria; deriving vulnerability; using ordinal scales; and determining diminishing returns.
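
As a hedged illustration of how a few of these elements (calibrated three-point estimates, distribution ranges and Monte Carlo simulation) fit together, the sketch below combines Loss Event Frequency and Loss Magnitude estimates into a range of annualized loss. The distributions, input values and function names are assumptions for the example, not figures or methods taken from the standard.

import random

def annualized_loss_range(lef_est, lm_est, trials=50_000, seed=1):
    """Monte Carlo sketch: sample Loss Event Frequency (events/year) and
    Loss Magnitude (cost per event) from triangular distributions built
    from calibrated (low, most_likely, high) estimates, and combine them
    into a distribution of annualized loss."""
    rng = random.Random(seed)
    losses = sorted(
        rng.triangular(lef_est[0], lef_est[2], lef_est[1])
        * rng.triangular(lm_est[0], lm_est[2], lm_est[1])
        for _ in range(trials)
    )
    pct = lambda q: losses[int(q * (len(losses) - 1))]
    return pct(0.10), pct(0.50), pct(0.90)

# Hypothetical inputs: 0.5-4 loss events/year (most likely 1),
# and $20k-$500k per event (most likely $80k).
low, mid, high = annualized_loss_range((0.5, 1, 4), (20_000, 80_000, 500_000))
print(f"Annualized loss: ${low:,.0f} - ${high:,.0f} (median ${mid:,.0f})")

The output is a range with a most-likely region, which is exactly the kind of result the calibration guidance is meant to support.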

Finally, a practical, real-world example is provided to take readers through an actual risk analysis scenario. Using the FAIR model, the example outlines the process for dealing with a threat in which an HR executive at a large bank has left the user name and password that allow him access to all the company’s HR systems on a Post-It note tacked onto his computer in his office, in clear view of anyone (other employees, cleaning crews, etc.) who comes into the office.

The scenario outlines four stages in assessing this risk:

  1. Stage 1: Identify Scenario Components (Scope the Analysis)
  2. Stage 2: Evaluate Loss Event Frequency (LEF)
  3. Stage 3: Evaluate Loss Magnitude (LM)
  4. Stage 4: Derive and Articulate Risk

Each step of the risk analysis process is thoroughly outlined for the scenario to provide Risk Analysts an example of how to perform an analysis process using the FAIR framework. Considerable guidance is provided for stages 2 and 3, in particular, as those are the most critical elements in determining organizational risk.
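
For readers who think in code, the skeleton below mirrors the four stages for a scenario like the Post-It note example. The stage names come from the standard; the data structures, values and the crude way risk is combined at the end are purely illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Scenario:                       # Stage 1: scope the analysis
    asset: str
    threat_community: str
    threat_effect: str

def evaluate_lef(scenario: Scenario): # Stage 2: Loss Event Frequency, events/year
    return (0.1, 0.5, 2.0)            # hypothetical low / most-likely / high

def evaluate_lm(scenario: Scenario):  # Stage 3: Loss Magnitude per event, in dollars
    return (50_000, 250_000, 2_000_000)

def derive_risk(lef, lm):             # Stage 4: derive and articulate risk as a range
    # Crude bound-by-bound combination, purely for illustration; a real
    # analysis would combine the distributions (e.g., via Monte Carlo).
    return tuple(round(f * m) for f, m in zip(lef, lm))

post_it = Scenario(
    asset="HR system credentials left on a Post-It note",
    threat_community="employees, visitors and cleaning crews with office access",
    threat_effect="unauthorized access to HR records",
)
print(derive_risk(evaluate_lef(post_it), evaluate_lm(post_it)))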

Ultimately, the O-RA is a guide to help organizations make better decisions about which risks are the most critical for the organization to prioritize and pay attention to versus those that are less important and may not warrant attention. It is critical for Risk Analysts and organizations to become more consistent in this practice because lack of consistency in determining risk among information security professionals has been a major obstacle in allowing security professionals a more legitimate “seat at the table” in the boardroom with other business functions (finance, HR, etc.) within organizations.

For our profession to evolve and grow, consistency and accurate measurement are key. Issues and solutions must be identified consistently, and comparisons and measurement must be based on solid foundations, as illustrated below.

Figure: Chained Dependencies

O-RA can help organizations arrive at better decisions through consistent analysis techniques as well as provide more legitimacy within the profession.  Without a foundation from which to manage information risk, Risk Analysts and information security professionals may rely too heavily on intuition, bias, commercial or personal agendas for their analyses and decision making. By outlining a thorough foundation for Risk Analysis, O-RA provides not only a common foundation for performing risk analyses but the opportunity to make better decisions and advance the security profession.

For more on the O-RA Standard or to download it, please visit: https://www2.opengroup.org/ogsys/catalog/C13G.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

Comments Off

Filed under Conference, Open FAIR Certification, RISK Management, Security Architecture

Introducing Two New Security Standards for Risk Analysis—Part I – Risk Taxonomy Technical Standard 2.0

By Jim Hietala, VP Security, The Open Group

At The Open Group London 2013 Conference, The Open Group® announced three new initiatives related to the Security Forum’s work around Risk Management. The first of these was the establishment of a new certification program for Risk Analysts working within the security profession, the Open FAIR Certification Program. Aimed at providing a professional certification for Risk Analysts, the program will bring a much-needed level of assuredness to companies looking to hire Risk Analysts, certifying that analysts who have completed the Open FAIR program understand the fundamentals of risk analysis and are qualified to perform that analysis.

Forming the basis of the Open FAIR certification program are two new Open Group standards, version 2.0 of the Risk Taxonomy (O-RT) standard originally introduced by the Security Forum in 2009, and a new Risk Analysis (O-RA) Standard, both of which were also announced at the London conference. These standards are the result of ongoing work around risk analysis that the Security Forum has been conducting for a number of years now in order to help organizations better understand and identify their exposure to risk, particularly when it comes to information security risk.

The Risk Taxonomy and Risk Analysis standards not only form the basis and body of knowledge for the Open FAIR certification, but provide practical advice for security practitioners who need to evaluate and counter the potential threats their organization may face.

Today’s blog will look at the first standard, the Risk Taxonomy Technical Standard, version 2.0. Next week, we’ll look at the other standard for Risk Analysis.

Risk Taxonomy (O-RT) Technical Standard 2.0

Originally published in January 2009, the O-RT is intended to provide a common language and set of references for security and business professionals who need to understand or analyze risk conditions. Version 2.0 of the standard contains a number of updates, based both on feedback provided by professionals who have been using the standard and on research conducted by Security Forum member CXOWARE.

The majority of the changes to Version 2.0 are refinements in terminology, including changes in language that better reflect what each term encompasses. For example, the term “Control Strength” in the original standard has now been changed to “Resistance Strength” to reflect that controls used in that part of the taxonomy must be resistive in nature.

More substantive changes were made to the portion of the taxonomy that discusses how Loss Magnitude is evaluated.

Why create a taxonomy for risk? For two reasons. First, the taxonomy provides a foundation from which risk analysis can be performed and talked about. Second, a tightly defined taxonomy removes much of the ambiguity that makes risk scenarios hard to measure or estimate, leading to better decision making, as illustrated by the following “risk management stack.”

Effective Management

↑

Well-informed Decisions

↑

Effective Comparisons

↑

Meaningful Measurements

↑

Accurate Risk Model

The complete Risk Taxonomy comprises two branches: Loss Event Frequency (LEF) and Loss Magnitude (LM).


Focusing solely on pure risk (which only results in loss) rather than speculative risk (which might result in either loss or profit), the O-RT is meant to help estimate the probable frequency and magnitude of future loss.

Traditionally, LM has been far more difficult to determine than LEF, in part because organizations don’t always perform analyses on their losses, or they just stick to evaluating “low hanging fruit” variables rather than delving into more complex risk factors. The new taxonomy takes a deep dive into the Loss Magnitude branch, providing guidance that will allow Risk Analysts to better tackle the difficult task of determining LM. It includes terminology outlining six specific forms of loss an organization can experience (productivity, response, replacement, fines and judgments, competitive advantage, reputation) as well as how to determine Loss Flow, a new concept in this standard.

The Loss Flow analysis helps identify how a loss may affect both primary (owners, employees, etc.) and secondary (customers, stockholders, regulators, etc.) stakeholders as a result of a threat agent’s action on an asset. The new standard provides a thorough overview on how to assess Loss Flow and identify the loss factors of any given threat.
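
One way to picture that part of the taxonomy is as a small data model. The sketch below is an illustrative encoding only; the class and field names are assumptions, not constructs defined by O-RT. It simply captures the six forms of loss and the primary/secondary stakeholder split used in Loss Flow analysis.

from dataclasses import dataclass, field
from enum import Enum

class LossForm(Enum):
    # The six forms of loss named in the Risk Taxonomy standard
    PRODUCTIVITY = "productivity"
    RESPONSE = "response"
    REPLACEMENT = "replacement"
    FINES_AND_JUDGMENTS = "fines and judgments"
    COMPETITIVE_ADVANTAGE = "competitive advantage"
    REPUTATION = "reputation"

@dataclass
class LossFlow:
    """Illustrative container for a Loss Flow analysis: losses felt directly
    by primary stakeholders (owners, employees) and indirectly by secondary
    stakeholders (customers, stockholders, regulators)."""
    primary: dict = field(default_factory=dict)    # LossForm -> (low, high) estimate
    secondary: dict = field(default_factory=dict)

flow = LossFlow(
    primary={LossForm.RESPONSE: (10_000, 80_000)},
    secondary={LossForm.REPUTATION: (0, 500_000)},
)
worst_case = sum(high for _, high in list(flow.primary.values()) + list(flow.secondary.values()))
print(f"Worst-case combined loss across stakeholders: ${worst_case:,}")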

Finally, the standard also includes a practical, real-world scenario to help analysts understand how to put the taxonomy to use within their organizations. O-RT provides a common linguistic foundation that will allow security professionals to then perform the risk analyses outlined in the O-RA Standard.

For more on the Risk Taxonomy Standard or to download it, visit: https://www2.opengroup.org/ogsys/catalog/C13K.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

Comments Off

Filed under Conference, Open FAIR Certification, RISK Management, Security Architecture

Open FAIR Certification Launched

By Jim Hietala, The Open Group, VP of Security

The Open Group today announced the new Open FAIR Certification Program aimed at Risk Analysts, bringing a much-needed professional certification to the market that is focused on the practice of risk analysis. Both the Risk Taxonomy and Risk Analysis standards, standards of The Open Group, constitute the body of knowledge for the certification program, and they advance the risk analysis profession by defining a standard taxonomy for risk, and by describing the process aspects of a rigorous risk analysis.

We believe that this new risk analyst certification program will bring significant value to risk analysts and to organizations seeking to hire qualified risk analysts. Adoption of these two risk standards from The Open Group will help produce more effective and useful risk analysis. This program clearly represents the growing need in our industry for professionals who understand risk analysis fundamentals. Furthermore, the mature processes and due diligence The Open Group applies to our standards and certification programs will help make organizations comfortable with the groundbreaking concepts and methods underlying FAIR. This will also help professionals looking to differentiate themselves by demonstrating the ability to take a “business perspective” on risk.

In order to become certified, Risk Analysts must pass an Open FAIR certification exam. All certification exams are administered through Prometric, Inc. Exam candidates can start the registration process by visiting Prometric’s Open Group Test Sponsor Site www.prometric.com/opengroup.  With 4,000 testing centers in its IT channel, Prometric brings Open FAIR Certification to security professionals worldwide. For more details on the exam requirements visit http://www.opengroup.org/certifications/exams.

Training courses will be delivered through an Open Group accredited channel. The accreditation of Open FAIR training courses will be available from November 1st 2013.

Our thanks to all of the members of the risk certification working group who worked tirelessly over the past 15 months to bring this certification program, along with a new risk analysis standard and a revised risk taxonomy standard to the market. Our thanks also to the sponsors of the program, whose support is important to building this program. The Open FAIR program sponsors are Architecting the Enterprise, CXOWARE, SNA, and The Unit.

Lastly, if you are involved in risk analysis, we encourage you to consider becoming Open FAIR certified, and to get involved in the risk analysis program at The Open Group. We have plans to develop an advanced level of Open FAIR certification, and we also see a great deal of best practices guidance that is needed by the industry.

For more information on the Open FAIR certification program visit http://www.opengroup.org/certifications/openfair

You may also wish to attend a webcast scheduled for 7th November at 4pm BST that will provide an overview of the Open FAIR certification program, as well as an overview of the two risk standards. You can register here.


Jim Hietala, CISSP, GSEC, is Vice President, Security for The Open Group, where he manages all security and risk management programs and standards activities, including the Security Forum and the Jericho Forum.  He has participated in the development of several industry standards including O-ISM3, O-ESA, Risk Taxonomy Standard, and O-ACEML. He also led the development of compliance and audit guidance for the Cloud Security Alliance v2 publication.

Jim is a frequent speaker at industry conferences. He has participated in the SANS Analyst/Expert program, having written several research white papers and participated in several webcasts for SANS. He has also published numerous articles on information security, risk management, and compliance topics in publications including CSO, The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

An IT security industry veteran, he has held leadership roles at several IT security vendors.

Jim holds a B.S. in Marketing from Southern Illinois University.

Comments Off

Filed under Conference, Cybersecurity, Open FAIR Certification, Standards

3 Steps to Proactively Address Board-Level Security Concerns

By E.G. Nadhan, HP

Last month, I shared the discussions that ensued in a Tweet Jam conducted by The Open Group on Big Data and Security, where the key takeaway was: protecting data is good; protecting information generated from Big Data is priceless. Security concerns around Big Data have grown to the extent that they have become a board-level concern, as explained in this article in ComputerWorldUK. Board-level concerns must be addressed proactively by enterprises, and to do so, enterprises must provide the business justification for the proactive steps needed to address them.


At The Open Group Conference in Sydney in April, the session “Which information risks are shaping our lives?” by Stephen Singam, Chief Technology Officer, HP Enterprise Security Services, Australia, provides great insight on this topic. In this session, Singam analyzes the current and emerging information risks while recommending a proactive approach to address them head-on with adversary-centric solutions.

The 3 steps that enterprises must take to proactively address security concerns are below:

Computing the cost of cyber-crime

The HP Ponemon 2012 Cost of Cyber Crime Study revealed that cyber attacks have more than doubled over a three-year period, with the financial impact increasing by nearly 40 percent. Here are the key takeaways from this research:

  • Cyber-crimes continue to be costly. The average annualized cost of cyber-crime for 56 organizations is $8.9 million per year, with a range of $1.4 million to $46 million.
  • Cyber attacks have become common occurrences. Companies experienced 102 successful attacks per week and 1.8 successful attacks per company per week in 2012.
  • The most costly cyber-crimes are those caused by denial of service, malicious insiders and web-based attacks.

When computing the cost of cyber-crime, enterprises must address the direct, indirect and opportunity costs that result from the loss or theft of information, disruption to business operations, revenue loss and destruction of property, plant and equipment. The following phases of combating cyber-crime must also be factored in to comprehensively determine the total cost (a simple illustrative tally follows the list below):

  1. Detection of patterns of behavior indicating an impending attack through sustained monitoring of the enabling infrastructure
  2. Investigation of the security violation upon occurrence to determine the underlying root cause and take appropriate remedial measures
  3. Incident response to address the immediate situation at hand, communicate the incidence of the attack and raise all applicable alerts
  4. Containment of the attack by controlling its proliferation across the enterprise
  5. Recovery from the damages incurred as a result of the attack to ensure ongoing business operations based upon the business continuity plans in place
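
As a back-of-the-envelope illustration of such a tally, the sketch below sums assumed direct, indirect and opportunity costs across the five phases. The figures are invented for the example and are not drawn from the Ponemon study.

# Hypothetical per-incident cost model: direct, indirect and opportunity
# costs tracked for each phase of combating cyber-crime listed above.
incident_costs = {
    "detection":         {"direct": 40_000, "indirect": 15_000, "opportunity": 5_000},
    "investigation":     {"direct": 60_000, "indirect": 25_000, "opportunity": 10_000},
    "incident_response": {"direct": 30_000, "indirect": 10_000, "opportunity": 5_000},
    "containment":       {"direct": 25_000, "indirect": 20_000, "opportunity": 15_000},
    "recovery":          {"direct": 80_000, "indirect": 35_000, "opportunity": 40_000},
}

total = sum(sum(phase.values()) for phase in incident_costs.values())
print(f"Total modeled cost for this incident: ${total:,}")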

Identifying proactive steps that can be taken to address cyber-crime

  1. “Better get security right,” said HP Security Strategist Mary Ann Mezzapelle in her keynote on Big Data and Security at The Open Group Conference in Newport Beach. Asserting that proactive risk management is the most effective approach, Mezzapelle challenged enterprises to proactively question the presence of shadow IT, data ownership, and the usage of security tools and standards, while taking a comprehensive, end-to-end approach to security within the enterprise.
  2. Art Gilliland suggested learning from cyber criminals and understanding their methods in this ZDNet article, since the very frameworks enterprises strive to comply with (such as ISO and PCI) set a low bar for security that adversaries capitalize on.
  3. Andy Ellis discussed managing risk with psychology instead of brute force in his keynote at the 2013 RSA Conference.
  4. At the same conference, in another keynote, world-renowned game designer and inventor of SuperBetter, Jane McGonigal suggested that applying the “collective intelligence” generated by gaming can combat security concerns.
  5. In this interview, Bruce Schneier, renowned security guru and author of several books including LIARS & Outliers, suggested: “Bad guys are going to invent new stuff — whether we want them to or not.” Should we take a cue from Hollywood and consider the inception of an OODA loop into the security hacker’s mind?

The Balancing Act

Can enterprises afford to take such proactive steps? Or more importantly, can they afford not to?

Enterprises must define their risk management strategy and determine the proactive steps that are best in alignment with their business objectives and information security standards.  This will enable organizations to better assess the cost of execution for such measures.  While the actual cost is likely to vary by enterprise, inaction is not an acceptable alternative.  Like all other critical corporate initiatives, these proactive measures must receive the board-level attention they deserve.

Enterprises must balance the cost of executing such proactive measures against the potential cost of data loss and reputational harm. This will ensure that the right proactive measures are taken with executive support.

How about you?  Has your enterprise taken the steps to assess the cost of cybercrime?  Have you considered various proactive steps to combat cybercrime?  Share your thoughts with me in the comments section below.

E.G. Nadhan, HP Distinguished Technologist, has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.

1 Comment

Filed under Conference

Quick Hit Thoughts from RSA Conference 2013

By Joshua Brickman, CA Technologies

I have a great job at CA Technologies, I can’t deny it. Working in CA Technologies Federal Certification Program Office, I have the responsibility of knowing what certifications, accreditations, mandates, etc. are relevant and then helping them get implemented.

One of the responsibilities (and benefits) of my job is getting to go to great conferences like the RSA Security Conference which just wrapped last week. This year I was honored to be selected by the Program Committee to speak twice at the event. Both talks fit well to the Policy and Government track at the show.

First I was on a panel with a distinguished group of senior leaders from both industry and government. The title of the session was “Certification of Products or Accreditation of Organizations: Which to Do?” The idea was to discuss the advantages and disadvantages of individual product certifications vs. looking at an entire company or business unit. Since I’ve led CA through many product certifications (certs) and have been involved in accreditation programs as well, my role was to bring a real-world industry perspective to the panel. The point I tried to make was that product certs (like Common Criteria – CC) add value, but only for the specific purpose that they are designed for (security functions). We’ve seen CC expanding beyond just security-enforcing products, and that’s concerning. Product certs are expensive, time consuming and take away from time that could be spent on innovation. We want to do CC when it will be long lasting and add value.

On the idea of accreditation of organizations, I first talked about CMMI and my views on its challenges. I then shifted to the Open Trusted Technology Forum (OTTF), a forum of The Open Group, as I’ve written about before, and said that the accreditation program that group is building is more focused than CMMI. OTTF is building something that – when adopted by industry and their suppliers – will provide assurance that technology is being built the right way (best practices) and will give acquirers confidence that products bought from vendors that have the OTTF mark can be trusted. The overall conclusion of the panel was that accreditation of organizations and certification of products both have a place, and that it is important that their value is understood by buyers and vendors.

A couple of days later, I presented with Mary Ann Davidson, CSO of Oracle. The main point of the talk was to give the industry perspective on mandates, legislation and regulations – which all seem to be focused on technology providers – to solve the cyber security issues we see every day. We agreed that sometimes regulations make sense, but that having a clear problem definition, language and limited scope is the path to success and acceptance. We also encouraged government to get involved with industry via public/private partnerships, like The Open Group Trusted Technology Forum.

Collaboration is the key to fighting the cyber security battle. If you are interested in hearing more about ways to get involved in building a safer and more productive computing environment, feel free to contact me or leave a comment on this blog. Cybersecurity is a complicated issue and there were well over 20,000 security professionals discussing it at RSA Conference. We’d love to hear your views as well.

 This blog post was originally published on the CA Technologies blog.


Joshua Brickman, PMP (Project Management Professional), runs CA Technologies Federal Certifications Program. He has led CA through the successful Common Criteria evaluation of sixteen products over the last six years (in both the U.S. and Canada). He is also a Steering Committee member of The Open Group Trusted Technology Forum (OTTF), The Open Group consortium focused on supply chain integrity and security. He also runs CA Technologies Accessibility Program.

1 Comment

Filed under OTTF

Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when it needs doing, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group initiatives, Quantum Lifecycle management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the big-data processing needed to achieve this aim.

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is in one of the areas that I support as forum director). Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever changing tools to handle Big Data of ever increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it has major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishing boat bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

2 Comments

Filed under Conference

Protecting Data is Good. Protecting Information Generated from Big Data is Priceless

By E.G. Nadhan, HP

This was the key message that came out of The Open Group® Big Data Security Tweet Jam on Jan 22 at 9:00 a.m. PT, which addressed several key questions centered on Big Data and security. Here is my summary of the observations made in the context of these questions.

Q1. What is Big Data security? Is it different from data security?

Big Data security is more about information security. The data typically sits outside the corporate perimeter, and IT is not prepared today to adequately monitor its sheer volume (brontobytes of data). Long retention periods could also violate compliance mandates. Note that storing Big Data in the Cloud changes the game, with increased risks of leaks, loss and breaches.

Information resulting from the analysis of the data is even more sensitive, and therefore higher risk – especially when it is Personally Identifiable Information from the Internet of devices, which requires a balance between utility and privacy.

At the end of the day, it is all about governance or as they say, “It’s the data, stupid! Govern it.”

Q2. Any thoughts about security systems as producers of Big Data, e.g., voluminous systems logs?

Data gathered from information security logs is valuable, but the rules for protecting it are the same. Security logs will also be a good source for detecting patterns of customer usage.

Q3. Most BigData stacks have no built in security. What does this mean for securing Big Data?

There is an added level of complexity because Big Data cuts across applications, networks and all endpoints. Having standards to establish identity, metadata and trust would go a long way. The quality of the data can also be a security issue: has it been tampered with, are you being gamed, and so on. Note that enterprises have varying security needs around their business data.
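
One of these points, whether the data has been tampered with, lends itself to a small illustration. The following sketch is my own illustrative example (it is not drawn from the tweet jam or from any Open Group standard) and assumes a secret key shared with the data source; the consumer recomputes a keyed hash (HMAC) and rejects any record whose tag no longer matches.

    # Minimal sketch (assumed example): verify that a record received from an
    # external data source has not been tampered with, using an HMAC tag
    # agreed with that source.
    import hashlib
    import hmac

    SHARED_KEY = b"example-key-agreed-with-data-source"  # hypothetical key

    def sign_record(record: bytes) -> str:
        # Producer side: compute an HMAC-SHA256 tag over the raw record.
        return hmac.new(SHARED_KEY, record, hashlib.sha256).hexdigest()

    def verify_record(record: bytes, tag: str) -> bool:
        # Consumer side: recompute the tag and compare in constant time.
        expected = hmac.new(SHARED_KEY, record, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    record = b'{"sensor_id": 42, "reading": 17.3}'
    tag = sign_record(record)
    print(verify_record(record, tag))         # True: untouched record passes
    print(verify_record(record + b"x", tag))  # False: tampered record fails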

Q4. How is the industry dealing with the social and ethical uses of consumer data gathered via Big Data?

Big Data is still nascent, and ground rules for handling the information are yet to be established. Privacy will be a key issue when companies market to consumers. Organizations are seeking forgiveness rather than permission. Regulatory bodies are getting involved due to consumer pressure. Abuse of power from access to Big Data is likely to trigger more incentives to attack or embarrass. Note that ‘abuse’ to some is just business to others.

Q5. What lessons from basic data security and cloud security can be implemented in Big Data security?

Security testing is even more vital for Big Data. Limit access to specific devices, not just user credentials. Don’t assume security through obscurity for the sensors producing Big Data inputs – they will be targets.

Q6. What are some best practices for securing Big Data? What are orgs doing now and what will organizations be doing 2-3 years from now?

Current best practices include:

  • Treat Big Data as your most valuable asset
  • Encrypt everything by default, with proper key management, enforcement of policies, and tokenized logs (see the sketch after this list)
  • Ask your Cloud and Big Data providers the right questions – ultimately, YOU are responsible for security
  • Assume data needs verification and cleanup before it is used for decisions if you are unable to establish trust with data source
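
To make the “tokenized logs” practice above concrete, here is a minimal sketch. It is my own illustrative example, not something prescribed by the tweet jam participants, and the in-memory vault stands in for a properly secured, access-controlled token store: sensitive values are swapped for opaque tokens before a log line is written, so raw personal data never reaches the voluminous and hard-to-protect log store.

    # Minimal sketch of log tokenization (illustrative only).
    import secrets

    class TokenVault:
        """Maps opaque tokens back to the original sensitive values."""
        def __init__(self):
            self._store = {}

        def tokenize(self, value: str) -> str:
            token = "tok_" + secrets.token_hex(8)
            self._store[token] = value  # in practice, an encrypted, access-controlled store
            return token

        def detokenize(self, token: str) -> str:
            return self._store[token]

    vault = TokenVault()

    def log_line(user_email: str, action: str) -> str:
        # The log line that would be written: PII replaced by a token.
        return "user=%s action=%s" % (vault.tokenize(user_email), action)

    print(log_line("alice@example.com", "query_big_data_store"))
    # e.g. user=tok_9f3a1c2b4d5e6f70 action=query_big_data_store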

Future best practices:

  • Enterprises treat information like data today; in the future they will respect it as their most valuable asset
  • CIOs will eventually become Chief Officers for Information

Q7. We’re nearing the end of today’s tweet jam. Any last thoughts on Big Data security?

Adrian Lane who participated in the tweet jam will be keynoting at The Open Group Conference in Newport Beach next week and wrote a good best practices paper on securing Big Data.

I have been part of multiple tweet chats specific to security, as well as one on Information Optimization. Recently, I also conducted the first Open Group Web Jam, internal to The Cloud Work Group. What I liked about this Big Data Security Tweet Jam is that it brought two key domains together, highlighting their intersection points. There were great contributions from subject matter experts, forcing participants to think about one domain in the context of the other.

In a way, this post is actually synthesizing valuable information from raw data in the tweet messages – and therefore needs to be secured!

What are your thoughts on the observations made in this tweet jam? What measures are you taking to secure Big Data in your enterprise?

I really enjoyed this tweet jam and would strongly encourage you to actively participate in upcoming tweet jams hosted by The Open Group.  You get to interact with a wide spectrum of knowledgeable practitioners listed in this summary post.

E.G. Nadhan, HP Distinguished Technologist and Cloud Advisor, has more than 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project, and is also the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.

 

2 Comments

Filed under Tweet Jam

#ogChat Summary – Big Data and Security

By Patty Donovan, The Open Group

The Open Group hosted a tweet jam (#ogChat) to discuss Big Data security. In case you missed the conversation, here is a recap of the event.

The Participants

A total of 18 participants joined in the hour-long discussion.

Q1 What is #BigData #security? Is it different from #data security? #ogChat

Participants seemed to agree that while Big Data security is similar to data security, it is more extensive. Two major factors to consider: sensitivity and scalability.

  • @dustinkirkland At the core it’s the same – sensitive data – but the difference is in the size and the length of time this data is being stored. #ogChat
  • @jim_hietala Q1: Applying traditional security controls to BigData environments, which are not just very large info stores #ogChat
  • @TheTonyBradley Q1. The value of analyzing #BigData is tied directly to the sensitivity and relevance of that data–making it higher risk. #ogChat
  • @AdrianLane Q1 Securing #BigData is different. Issues of velocity, scale, elasticity break many existing security products. #ogChat
  • @editingwhiz #Bigdata security is standard information security, only more so. Meaning sampling replaced by complete data sets. #ogchat
  • @Dana_Gardner Q1 Not only is the data sensitive, the analysis from the data is sensitive. Secret. On the QT. Hush, hush. #BigData #data #security #ogChat
    • @Technodad @Dana_Gardner A key point. Much #bigdata will be public – the business value is in cleanup & analysis. Focus on protecting that. #ogChat

Q2 Any thoughts about #security systems as producers of #BigData, e.g., voluminous systems logs? #ogChat

Most agreed that security systems should be setting an example for producing secure Big Data environments.

  • @dustinkirkland Q2. They should be setting the example. If the data is deemed important or sensitive, then it should be secured and encrypted. #ogChat
  • @TheTonyBradley Q2. Data is data. Data gathered from information security logs is valuable #BigData, but rules for protecting it are the same. #ogChat
  • @elinormills Q2 SIEM is going to be big. will drive spending. #ogchat #bigdata #security
  • @jim_hietala Q2: Well instrumented IT environments generate lots of data, and SIEM/audit tools will have to be managers of this #BigData #ogchat
  • @dustinkirkland @theopengroup Ideally #bigdata platforms will support #tokenization natively, or else appdevs will have to write it into apps #ogChat

Q3 Most #BigData stacks have no built in #security. What does this mean for securing #BigData? #ogChat

The lack of built-in security puts a target on Big Data. While not all enterprise data is sensitive, housing it insecurely runs the risk of compromise. Furthermore, security solutions not only need to be effective, but also scalable, as the data will continue to get bigger.

  • @elinormills #ogchat big data is one big hacker target #bigdata #security
    • @editingwhiz @elinormills #bigdata may be a huge hacker target, but will hackers be able to process the chaff out of it? THAT takes $$$ #ogchat
    • @elinormills @editingwhiz hackers are innovation leaders #ogchat
    • @editingwhiz @elinormills Yes, hackers are innovation leaders — in security, but not necessarily dataset processing. #eweeknews #ogchat
  • @jim_hietala Q3:There will be a strong market for 3rd party security tools for #BigData – existing security technologies can’t scale #ogchat
  • @TheTonyBradley Q3. When you take sensitive info and store it–particularly in the cloud–you run the risk of exposure or compromise. #ogChat
  • @editingwhiz Not all enterprises have sensitive business data they need to protect with their lives. We’re talking non-regulated, of course. #ogchat
  • @TheTonyBradley Q3. #BigData is sensitive enough. The distilled information from analyzing it is more sensitive. Solutions need to be effective. #ogChat
  • @AdrianLane Q3 It means identifying security products that don’t break big data – i.e. they scale or leverage #BigData #ogChat
    • @dustinkirkland @AdrianLane #ogChat Agreed, this is where certifications and partnerships between the 3rd party and #bigdata vendor are essential.

Q4 How is the industry dealing with the social and ethical uses of consumer data gathered via #BigData? #ogChat #privacy

Participants agreed that the industry needs to improve when it comes to dealing with the social and ethical uses of consumer data gathered through Big Data. If the data is easily accessible, hackers will be attracted. No matter what, the cost of a breach is far greater than the cost of any preventative solution.

  • @dustinkirkland Q4. #ogChat Sadly, not well enough. The recent Instagram uproar was well publicized but such abuse of social media rights happens every day.
    • @TheTonyBradley @dustinkirkland True. But, they’ll buy the startups, and take it to market. Fortune 500 companies don’t like to play with newbies. #ogChat
    • @editingwhiz Disagree with this: Fortune 500s don’t like to play with newbies. We’re seeing that if the IT works, name recognition irrelevant. #ogchat
    • @elinormills @editingwhiz @thetonybradley ‘hacker’ covers lot of ground, so i would say depends on context. some of my best friends are hackers #ogchat
    • @Technodad @elinormills A core point- data from sensors will drive #bigdata as much as enterprise data. Big security, quality issues there. #ogChat
  • @Dana_Gardner Q4 If privacy is a big issue, hacktivism may crop up. Power of #BigData can also make it socially onerous. #data #security #ogChat
  • @dustinkirkland Q4. The cost of a breach is far greater than the cost (monetary or reputation) of any security solution. Don’t risk it. #ogChat

Q5 What lessons from basic #datasecurity and #cloud #security can be implemented in #BigData security? #ogChat

The principles are the same, just on a larger scale. The biggest risks come from cutting corners because of the size and complexity of the data gathered. As hackers (like Anonymous) get better, security must keep pace, regardless of the size of the data.

  • @TheTonyBradley Q5. Again, data is data. The best practices for securing and protecting it stay the same–just on a more massive #BigData scale. #ogChat
  • @Dana_Gardner Q5 Remember, this is in many ways unchartered territory so expect the unexpected. Count on it. #BigData #data #security #ogChat
  • @NadhanAtHP A5 @theopengroup – Security Testing is even more vital when it comes to #BigData and Information #ogChat
  • @TheTonyBradley Q5. Anonymous has proven time and again that most existing data security is trivial. Need better protection for #BigData. #ogChat

Q6 What are some best practices for securing #BigData? What are orgs doing now, and what will orgs be doing 2-3 years from now? #ogChat

While some argued that encrypting everything is the key and others encouraged putting pressure on Big Data providers, most agreed that a multi-faceted security approach is necessary. It’s not just the data that needs to be secured, but also the transportation and analysis processes.

  • @dustinkirkland Q6. #ogChat Encrypting everything, by default, at least at the fs layer. Proper key management. Policies. Logs. Hopefully tokenized too.
  • @dustinkirkland Q6. #ogChat Ask tough questions of your #cloud or #bigdata provider. Know what they are responsible for and who has access to keys. #ogChat
    • @elinormills Agreed–> @dustinkirkland Q6. #ogChat Ask tough questions of your #cloud or #bigdataprovider. Know what they are responsible for …
  • @Dana_Gardner Q6 Treat most #BigData as a crown jewel, see it as among most valuable assets. Apply commensurate security. #data #security #ogChat
  • @elinormills Q6 govt level crypto minimum, plus protect all endpts #ogchat #bigdata #security
  • @TheTonyBradley Q6. Multi-faceted issue. Must protect raw #BigData, plus processing, analyzing, transporting, and resulting distilled analysis. #ogChat
  • @Technodad If you don’t establish trust with data source, you need to assume data needs verification, cleanup before it is used for decisions. #ogChat

A big thank you to all the participants who made this such a great discussion!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.

3 Comments

Filed under Tweet Jam

Questions for the Upcoming Big Data Security Tweet Jam on Jan. 22

By Patty Donovan, The Open Group

Last week, we announced our upcoming tweet jam on Tuesday, January 22 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. GMT, which will examine the impact of Big Data on security and how it will change the security landscape.

Please join us next Tuesday, January 22! The discussion will be moderated by Dana Gardner (@Dana_Gardner), ZDNet – Briefings Direct. We welcome Open Group members and interested participants from all backgrounds to join the session. Our panel of experts will include:

  • Elinor Mills, former CNET reporter and current director of content and media strategy at Bateman Group (@elinormills)
  • Jaikumar Vijayan, Computerworld (@jaivijayan)
  • Chris Preimesberger, eWEEK (@editingwhiz)
  • Tony Bradley, PC World (@TheTonyBradley)
  • Michael Santarcangelo, Security Catalyst Blog (@catalyst)

The discussion will be guided by these six questions:

  1. What is #BigData security? Is it different from #data #security? #ogChat
  2. Any thoughts about #security systems as producers of #BigData, e.g., voluminous systems logs? #ogChat
  3. Most #BigData stacks have no built in #security. What does this mean for securing BigData? #ogChat
  4. How is the industry dealing with the social and ethical uses of consumer data gathered via #BigData? #ogChat #privacy
  5. What lessons from basic data security and #cloud #security can be implemented in #BigData #security? #ogChat
  6. What are some best practices for securing #BigData? #ogChat

To join the discussion, please follow the #ogChat hashtag during the allotted discussion time. Other hashtags we recommend you use during the event include:

  • Information Security: #InfoSec
  • Security: #security
  • BYOD: #BYOD
  • Big Data: #BigData
  • Privacy: #privacy
  • Mobile: #mobile
  • Compliance: #compliance

For more information about the tweet jam, guidelines and general background information, please visit our previous blog post: http://blog.opengroup.org/2013/01/15/big-data-security-tweet-jam/

If you have any questions prior to the event or would like to join as a participant, please direct them to Rod McLeod (rmcleod at bateman-group dot com), or leave a comment below. We anticipate a lively chat and hope you will be able to join us!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.

 

1 Comment

Filed under Tweet Jam

Big Data Security Tweet Jam

By Patty Donovan, The Open Group

On Tuesday, January 22, The Open Group will host a tweet jam examining the topic of Big Data and its impact on the security landscape.

Recently, Big Data has been dominating the headlines, with coverage of everything from how to manage and process it to the way it will impact your organization’s IT roadmap. As 2012 came to a close, analyst firm Gartner predicted that data will help drive IT spending to $3.8 trillion in 2014. Knowing the phenomenon is here to stay, enterprises face a new and daunting challenge: how to secure Big Data. Big Data security also raises other questions, such as: Is Big Data security different from data security? How will enterprises handle Big Data security? What is the best approach to Big Data security?

It’s yet to be seen if Big Data will necessarily revolutionize enterprise security, but it certainly will change execution – if it hasn’t already. Please join us for our upcoming Big Data Security tweet jam where leading security experts will discuss the merits of Big Data security.

Please join us on Tuesday, January 22 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. GMT for a tweet jam, moderated by Dana Gardner (@Dana_Gardner), ZDNet – Briefings Direct, that will discuss and debate the issues around big data security. Key areas that will be addressed during the discussion include: data security, privacy, compliance, security ethics and, of course, Big Data. We welcome Open Group members and interested participants from all backgrounds to join the session and interact with our panel of IT security experts, analysts and thought leaders led by Jim Hietala (@jim_hietala) and Dave Lounsbury (@Technodad) of The Open Group. To access the discussion, please follow the #ogChat hashtag during the allotted discussion time.

And for those of you who are unfamiliar with tweet jams, here is some background information:

What Is a Tweet Jam?

A tweet jam is a one hour “discussion” hosted on Twitter. The purpose of the tweet jam is to share knowledge and answer questions on Big Data security. Each tweet jam is led by a moderator and a dedicated group of experts to keep the discussion flowing. The public (or anyone using Twitter interested in the topic) is encouraged to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:

  • Have your first #ogChat tweet be a self-introduction: name, affiliation, occupation.
  • Start all other tweets with the question number you’re responding to and the #ogChat hashtag.
    • Sample: “Q1 enterprises will have to make significant adjustments moving forward to secure Big Data environments #ogChat”
  • Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.
  • While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue!
  • A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.

If you have any questions prior to the event or would like to join as a participant, please direct them to Rod McLeod (rmcleod at bateman-group dot com). We anticipate a lively chat and hope you will be able to join!

 

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.

1 Comment

Filed under Tweet Jam

2013 Open Group Predictions, Vol. 1

By The Open Group

A big thank you to all of our members and staff who have made 2012 another great year for The Open Group. There were many notable achievements this year, including the release of ArchiMate 2.0, the launch of the Future Airborne Capability Environment (FACE™) Technical Standard and the publication of the SOA Reference Architecture (SOA RA) and the Service-Oriented Cloud Computing Infrastructure Framework (SOCCI).

As we wrap up 2012, we couldn’t help but look towards what is to come in 2013 for The Open Group and the industries we’re a part of. Without further ado, here are our predictions:

Big Data
By Dave Lounsbury, Chief Technical Officer

Big Data is on top of everyone’s mind these days. Consumerization, mobile smart devices, and expanding retail and sensor networks are generating massive amounts of data on behavior, environment, location, buying patterns and more, producing what is being called “Big Data”. In addition, as the use of personal devices and social networks continues to gain popularity, so does the expectation of access to such data and the computational power to use it anytime, anywhere. Organizations will turn to IT to restructure its services so that they meet the growing expectation of control over, and access to, data.

Organizations must embrace Big Data to drive their decision-making and to provide the optimal mix of services to customers. Big Data is becoming so big that the big challenge is how to use it to make timely decisions. IT naturally focuses on collecting data, so gathering Big Data itself is not the issue. To allow humans to keep on top of this flood of data, the industry will need to move away from programming computers to store and process data, and towards teaching computers how to assess large amounts of uncorrelated data and draw inferences from it on their own. We also need to start thinking about the skills people need in the IT world not only to handle Big Data, but to make it actionable. Do we need “Data Architects” and, if so, what would their role be?

In 2013, we will see the beginning of the Intellectual Computing era. IT will play an essential role in this new era and will need to help enterprises look at uncorrelated data to find the answer.

Security

By Jim Hietala, Vice President of Security

As 2012 comes to a close, some of the big developments in security over the past year include:

  • Continuation of hacktivism attacks.
  • An increase in significant and persistent threats targeting governments and large enterprises. Notably, the U.S. National Strategy for Trusted Identities in Cyberspace started to make progress in the second half of the year, in terms of industry and government movement to address fundamental security issues.
  • Security breaches discovered by third parties, where the affected organizations had no idea that they had been breached. Data from the 2012 Verizon report suggests that 92 percent of breached companies were notified by a third party.
  • Acknowledgement from senior U.S. cybersecurity professionals that organizations fall into two groups: those that know they’ve been penetrated, and those that have been penetrated, but don’t yet know it.

In 2013, we’ll no doubt see more of the same on the attack front, plus increased focus on mobile attack vectors. We’ll also see more focus on detective security controls, reflecting greater awareness of the threat and of the reality that many large organizations have already been penetrated; responding appropriately therefore requires far more attention to detection and incident response.

We’ll also likely see the U.S. move forward with cybersecurity guidance from the executive branch, in the form of a Presidential directive. New national cybersecurity legislation seemed to come close to happening in 2012, and when it failed to become a reality, there were many indications that the administration would make something happen by executive order.

Enterprise Architecture

By Leonard Fehskens, Vice President of Skills and Capabilities

Preparatory to my looking back at 2012 and forward to 2013, I reviewed what I wrote last year about 2011 and 2012.

Probably the most significant thing from my perspective is that so little has changed. In fact, I think in many respects the confusion about what Enterprise Architecture (EA) and Business Architecture are about has gotten worse.

The stress within the EA community continues to grow as both the demands being placed on it and the diversity of opinion within it increase. This year, I saw a lot more concern about the value proposition for EA, but not a lot of (read “almost no”) convergence on what that value proposition is.

Last year I wrote “As I expected at this time last year, the conventional wisdom about Enterprise Architecture continues to spin its wheels.”  No need to change a word of that. What little progress at the leading edge was made in 2011 seems to have had no effect in 2012. I think this is largely a consequence of the dust thrown in the eyes of the community by the ascendance of the concept of “Business Architecture,” which is still struggling to define itself.  Business Architecture seems to me to have supplanted last year’s infatuation with “enterprise transformation” as the means of compensating for the EA community’s entrenched IT-centric perspective.

I think this trend and the quest for a value proposition are symptomatic of the same thing — the urgent need for Enterprise Architecture to make its case to its stakeholder community, especially to the people who are paying the bills. Something I saw in 2011 that became almost epidemic in 2012 is conflation — the inclusion under the Enterprise Architecture umbrella of nearly anything with the slightest taste of “business” to it. This has had the unfortunate effect of further obscuring the unique contribution of Enterprise Architecture, which is to bring architectural thinking to bear on the design of human enterprise.

So, while I’m not quite mired in the slough of despond, I am discouraged by the community’s inability to advance the state of the art. In a private communication to some colleagues I wrote, “the conventional wisdom on EA is at about the same state of maturity as 14th century cosmology. It is obvious to even the most casual observer that the earth is both flat and the center of the universe. We debate what happens when you fall off the edge of the Earth, and is the flat earth carried on the back of a turtle or an elephant?  Does the walking of the turtle or elephant rotate the crystalline sphere of the heavens, or does the rotation of the sphere require the turtlephant to walk to keep the earth level?  These are obviously the questions we need to answer.”

Cloud

By Chris Harding, Director of Interoperability

2012 has seen the establishment of Cloud Computing as a mainstream resource for enterprise architects and the emergence of Big Data as the latest hot topic, likely to be mainstream for the future. Meanwhile, Service-Oriented Architecture (SOA) has kept its position as an architectural style of choice for delivering distributed solutions, and the move to ever more powerful mobile devices continues. These trends have been reflected in the activities of our Cloud Computing Work Group and in the continuing support by members of our SOA work.

The use of Cloud, Mobile Computing, and Big Data to deliver on-line systems that are available anywhere at any time is setting a new norm for customer expectations. In 2013, we will see the development of Enterprise Architecture practice to ensure the consistent delivery of these systems by IT professionals, and to support the evolution of creative new computing solutions.

IT systems are there to enable the business to operate more effectively. Customers expect constant on-line access through mobile and other devices. Business organizations work better when they focus on their core capabilities, and let external service providers take care of the rest. On-line data is a huge resource, so far largely untapped. Distributed, Cloud-enabled systems, using Big Data, and architected on service-oriented principles, are the best enablers of effective business operations. There will be a convergence of SOA, Mobility, Cloud Computing, and Big Data as they are seen from the overall perspective of the enterprise architect.

Within The Open Group, the SOA and Cloud Work Groups will continue their individual work, and will collaborate with other forums and work groups, and with outside organizations, to foster the convergence of IT disciplines for distributed computing.

3 Comments

Filed under Business Architecture, Cloud, Cloud/SOA, Cybersecurity, Enterprise Architecture

#ogChat Summary – 2013 Security Priorities

By Patty Donovan, The Open Group

Totaling 446 tweets, yesterday’s 2013 Security Priorities Tweet Jam (#ogChat) saw a lively discussion on the future of security in 2013 and became our most successful tweet jam to date. In case you missed the conversation, here’s a recap of yesterday’s #ogChat!

The event was moderated by former CNET security reporter Elinor Mills, and a total of 28 participants joined the discussion.

Here is a high-level snapshot of yesterday’s #ogChat:

Q1 What’s the biggest lesson learned by the security industry in 2012? #ogChat

The consensus among participants was that 2012 was a year of going back to the basics. There are many basic vulnerabilities within organizations that still need to be addressed, and it affects every aspect of an organization.

  • @Dana_Gardner Q1 … Security is not a product. It’s a way of conducting your organization, a mentality, affects all. Repeat. #ogChat #security #privacy
  • @Technodad Q1: Biggest #security lesson of 2102: everyone is in two security camps: those who know they’ve been penetrated & those who don’t. #ogChat
  • @jim_hietala Q1. Assume you’ve been penetrated, and put some focus on detective security controls, reaction/incident response #ogChat
  • @c7five Lesson of 2012 is how many basics we’re still not covering (eg. all the password dumps that showed weak controls and pw choice). #ogChat

Q2 How will organizations tackle #BYOD security in 2013? Are standards needed to secure employee-owned devices? #ogChat

Participants debated over the necessity of standards. Most agreed that standards and policies are key in securing BYOD.

  • @arj Q2: No “standards” needed for BYOD. My advice: collect as little information as possible; use MDM; create an explicit policy #ogChat
  • @Technodad @arj Standards are needed for #byod – but operational security practices more important than technical standards. #ogChat
  • @AWildCSO Organizations need to develop a strong asset management program as part of any BYOD effort. Identification and Classification #ogChat
  • @Dana_Gardner Q2 #BYOD forces more apps & data back on servers, more secure; leaves devices as zero client. Then take that to PCs too. #ogChat #security
  • @taosecurity Orgs need a BYOD policy for encryption & remote wipe of company data; expect remote compromise assessment apps too @elinormills #ogChat

Q3 In #BYOD era, will organizations be more focused on securing the network, the device, or the data? #ogChat

There was disagreement here. Some emphasized focusing on protecting data, while others argued that it is the devices and networks that need protecting.

  • @taosecurity Everyone claims to protect data, but the main ways to do so remain protecting devices & networks. Ignores code sec too. @elinormills #ogChat
  • @arj Q3: in the BYOD era, the focus must be on the data. Access is gated by employee’s entitlements + device capabilities. #ogChat
  • @Technodad @arj Well said. Data sec is the big challenge now – important for #byod, #cloud, many apps. #ogChat
  • @c7five Organization will focus more on device management while forgetting about the network and data controls in 2013. #ogChat #BYOD

Q4 What impact will using 3rd party #BigData have on corporate security practices? #ogChat

Participants agreed that using third parties will force organizations to rely on security provided by those parties. They also acknowledged that data must be secure in transit.

  • @daviottenheimer Q4 Big Data will redefine perimeter. have to isolate sensitive data in transit, store AND process #ogChat
  • @jim_hietala Q4. 3rd party Big Data puts into focus 3rd party risk management, and transparency of security controls and control state #ogChat
  • @c7five Organizations will jump into 3rd party Big Data without understanding of their responsibilities to secure the data they transfer. #ogChat
  • @Dana_Gardner Q4 You have to trust your 3rd party #BigData provider is better at #security than you are, eh? #ogChat  #security #SLA
  • @jadedsecurity @Technodad @Dana_Gardner has nothing to do with trust. Data that isn’t public must be secured in transit #ogChat
  • @AWildCSO Q4: with or without bigdata, third party risk management programs will continue to grow in 2013. #ogChat

Q5 What will global supply chain security look like in 2013? How involved should governments be? #ogChat

Supply chains are an emerging security issue, and governments need to get involved. Customers will also need to start understanding what they are responsible for securing themselves, versus what their providers secure.

  • @jim_hietala Q5. supply chain emerging as big security issue, .gov’s need to be involved, and Open Group’s OTTF doing good work here #ogChat
  • @Technodad Q5: Governments are going to act- issue is getting too important. Challenge is for industry to lead & minimize regulatory patchwork. #ogChat
  • @kjhiggins Q5: Customers truly understanding what they’re responsible for securing vs. what cloud provider is. #ogChat

Q6 What are the biggest unsolved issues in Cloud Computing security? #ogChat

Cloud security is a big issue. Most agreed that Cloud security is opaque and needs to become more transparent. When Cloud providers claim they are secure, consumers and organizations put blind trust in them, making the problem worse.

  • @jadedsecurity @elinormills Q6 all of them. Corps assume cloud will provide CIA and in most cases even fails at availability. #ogChat
  • @jim_hietala Q6. Transparency of security controls/control state, cloud risk management, protection of unstructured data in cloud services #ogChat
  • @c7five Some PaaS cloud providers advertise security as something users don’t need to worry about. That makes the problem worse. #ogChat

Q7 What should be the top security priorities for organizations in 2013? #ogChat

Top security priorities varied. Priorities highlighted in the discussion included:  focusing on creating a culture that promotes secure activity; prioritizing security spending based on risk; focusing on where the data resides; and third-party risk management coming to the forefront.

  • @jim_hietala Q7. prioritizing security spend based on risks, protecting data, detective controls #ogChat
  • @Dana_Gardner Q7 Culture trumps technology and business. So make #security policy adherence a culture that is defined and rewarded. #ogChat #security
  • @kjhiggins Q7 Getting a handle on where all of your data resides, including in the mobile realm. #ogChat
  • @taosecurity Also for 2013: 1) count and classify your incidents & 2) measure time from detection to containment. Apply Lean principles to both. #ogChat
  • @AWildCSO Q7: Asset management, third party risk management, and risk based controls for 2013. #ogChat

A big thank you to all the participants who made this such a great discussion!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.

1 Comment

Filed under Tweet Jam

Operational Resilience through Managing External Dependencies

By Ian Dobson & Jim Hietala, The Open Group

These days, organizations are rarely self-contained. Businesses collaborate through partnerships and close links with suppliers and customers. Outsourcing services and business processes, including into Cloud Computing, means that key operations that an organization depends on are often fulfilled outside their control.

The challenge here is how to manage the dependencies your operations have on factors that are outside your control. The goal is to perform your risk management in a way that optimizes your operational success by making your operations resilient to these external dependencies.

The Open Group’s Dependency Modeling (O-DM) standard specifies how to construct a dependency model to manage risk and build trust over organizational dependencies between enterprises – and between operational divisions within a large organization. The standard involves constructing a model of the operations necessary for an organization’s success, including the dependencies that can affect each operation. Then, applying quantitative risk sensitivities to each dependency reveals the operations with the highest exposure to the risk of not being successful, informing business decision-makers where investment in reducing their organization’s exposure to external risks will give the best return.
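
To make the idea tangible, here is a minimal sketch of a dependency model in Python. It is an assumed, simplified illustration, not the O-DM standard itself, which defines its own notation and calculus: each operation lists its external dependencies, each dependency carries an estimated failure probability and a sensitivity, and ranking the resulting exposure scores shows where investment in resilience gives the best return.

    # Minimal sketch (not the O-DM standard): a toy dependency model that
    # ranks operations by their exposure to external dependencies.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Dependency:
        name: str
        failure_probability: float  # estimated chance the dependency fails
        sensitivity: float          # impact on the operation if it does (0..1)

    @dataclass
    class Operation:
        name: str
        dependencies: List[Dependency]

        def exposure(self) -> float:
            # Simple additive score for illustration only.
            return sum(d.failure_probability * d.sensitivity for d in self.dependencies)

    operations = [
        Operation("Order fulfilment", [
            Dependency("Cloud ERP provider", 0.05, 0.9),
            Dependency("Logistics partner", 0.10, 0.7),
        ]),
        Operation("Customer support", [
            Dependency("Outsourced call centre", 0.08, 0.5),
        ]),
    ]

    # Rank operations so decision-makers can see where reducing exposure to
    # external risk gives the best return.
    for op in sorted(operations, key=lambda o: o.exposure(), reverse=True):
        print(f"{op.name}: exposure {op.exposure():.3f}")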

O-DM helps you to plan for success through operational resilience, assured business continuity, and effective new controls and contingencies, enabling you to:

  • Cut costs without losing capability
  • Make the most of tight budgets
  • Build a resilient supply chain
  • Lead programs and projects to success
  • Measure, understand and manage risk from outsourcing relationships and supply chains
  • Deliver complex event analysis

The O-DM analytical process facilitates organizational agility by allowing you to easily adjust and evolve your organization’s operations model, and produces rapid results to illustrate how reducing the sensitivity of your dependencies improves your operational resilience. O-DM also allows you to drill as deep as you need to go to reveal your organization’s operational dependencies.

Training on the development of operational dependency models conforming to the O-DM standard is available, as are software computation tools that automate speedy delivery of actionable results in graphic formats, to facilitate informed business decision-making.

The O-DM standard represents a significant addition to The Open Group’s existing Risk Management publications.

The O-DM standard may be accessed here.

Ian Dobson is the director of the Security Forum and the Jericho Forum for The Open Group, coordinating and facilitating the members to achieve their goals in our challenging information security world.  In the Security Forum, his focus is on supporting development of open standards and guides on security architectures and management of risk and security, while in the Jericho Forum he works with members to anticipate the requirements for the security solutions we will need in future.

Jim Hietala, CISSP, GSEC, is the Vice President, Security for The Open Group, where he manages all IT security and risk management programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.

1 Comment

Filed under Cybersecurity, Security Architecture