
Q&A with Jim Hietala on Security and Healthcare

By The Open Group

We recently spoke with Jim Hietala, Vice President, Security for The Open Group, at the 2014 San Francisco conference to discuss upcoming activities in The Open Group’s Security and Healthcare Forums.

Jim, can you tell us what the Security Forum’s priorities are going to be for 2014 and what we can expect to see from the Forum?

In terms of our priorities for 2014, we’re continuing to do work in Security Architecture and Information Security Management. In the area of Security Architecture, the big project that we’re doing is adding security to TOGAF®, so we’re working on the next version of the TOGAF standard and specification and there’s an active project involving folks from the Architecture Forum and the Security Forum to integrate security into and stripe it through TOGAF. So, on the Security Architecture side, that’s the priority. On the Information Security Management side, we’re continuing to do work in the area of Risk Management. We introduced a certification late last year, the OpenFAIR certification, and we’ll continue to do work in the area of Risk Management and Risk Analysis. We’re looking to add a second level to the certification program, and we’re doing some other work around the Risk Analysis standards that we’ve introduced.

The theme of this conference was “Towards Boundaryless Information Flow™,” and many of the tracks focused on convergence: the convergence of Big Data, mobile and Cloud, also known as Open Platform 3.0. How are those things affecting the realm of security right now?

I think they’re just beginning to. Cloud—obviously the security issues around Cloud have been with us for as long as Cloud itself, over the past four or five years. But if you look at things like the Internet of Things and some of the other things that comprise Open Platform 3.0, the security impacts are really just starting to be felt and considered. So I think information security professionals are really just starting to wrap their heads around what those new security risks are that come with those technologies, and, more importantly, what do we need to do about them? What do we need to do to mitigate risk around something like the Internet of Things, for example?

What kind of security threats do you think companies need to be most worried about over the next couple of years?

There’s a plethora of things out there right now that organizations need to be concerned about. Certainly advanced persistent threat, the idea that maybe nation states are trying to attack other nations, is a big deal. It’s a very real threat, and it’s something that we have to think about – looking at the risks we’re facing, exactly what is that adversary and what are they capable of? I think profit-motivated criminals continue to be on everyone’s mind with all the credit card hacks that have just come out. We have to be concerned about cyber criminals who are profit motivated and who are very skilled and determined and obviously there’s a lot at stake there. All of those are very real things in the security world and things we have to defend against.

The Security track at the San Francisco conference focused primarily on risk management. How can companies better approach and manage risk?

As I mentioned, we did a lot of work over the last few years in the area of Risk Management, and the FAIR Standard that we introduced breaks down risk into what’s the frequency of bad things happening and what’s the impact if they do happen? So I would suggest taking that sort of approach, using something like the Risk Taxonomy Standard that we’ve introduced and the Risk Analysis Standard, and really looking at what are the critical assets to protect, who’s likely to attack them, and what’s the probable frequency of attacks that we’ll see? And then looking at the impact side, what’s the consequence if somebody successfully attacks them? That’s really the key—breaking it down, looking at it that way and then taking the right mitigation steps to reduce risk on those assets that are really important.
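The frequency-times-impact approach Hietala describes can be illustrated with a small Monte Carlo sketch. The asset, the ranges, and the function name below are invented for illustration only; they are not part of the Open FAIR standards, which define the taxonomy rather than any particular implementation:

```python
import random

def simulate_annual_loss(freq_min, freq_max, loss_min, loss_max, trials=100_000):
    """Rough FAIR-style estimate: annualized loss = frequency x impact.

    freq_min/freq_max: plausible range of loss events per year.
    loss_min/loss_max: plausible range of loss per event, in dollars.
    Returns the mean simulated annual loss across all trials.
    """
    total = 0.0
    for _ in range(trials):
        frequency = random.uniform(freq_min, freq_max)  # events per year
        impact = random.uniform(loss_min, loss_max)     # dollars per event
        total += frequency * impact
    return total / trials

# Hypothetical asset: a customer database expected to suffer
# 0.1-0.5 breaches per year, at $50k-$500k loss per breach.
estimate = simulate_annual_loss(0.1, 0.5, 50_000, 500_000)
print(f"Estimated annualized loss exposure: ${estimate:,.0f}")
```

Running this for each critical asset gives a rough ranking of where mitigation spending reduces the most risk, which is the prioritization step Hietala describes.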

You’ve recently become involved in The Open Group’s new Healthcare Forum. Why a healthcare vertical forum for The Open Group?

In the area of healthcare, what we see is that there’s just a highly fragmented aspect to the ecosystem. You’ve got healthcare information that’s captured in various places, and the information doesn’t necessarily flow from provider to payer to other providers. In looking at industry verticals, the healthcare industry seemed like an area that really needed the approaches we bring from The Open Group—TOGAF and the Enterprise Architecture methods that we have.

If you take it up to a higher level, it really needs the Boundaryless Information Flow that we talk about in The Open Group. We need to get to the point where our information as patients is readily available in a secure manner to the people who need to give us care, as well as to us, because in a lot of cases the information exists as islands in the healthcare industry. In looking at healthcare it just seemed like a natural place where, in our economies – and it’s really a global problem – a lot of money is spent on healthcare and there are a lot of opportunities for improvement, both in the economics and in the patient care that’s delivered to individuals through the healthcare system. It just seemed like a great area for us to focus on.

As the new Healthcare Forum kicks off this year, what are the priorities for the Forum?

The Healthcare Forum has just published a whitepaper summarizing the findings of the workshop that we held in Philadelphia last summer. We’re also working on a treatise, which will outline our views about the healthcare ecosystem and where standards and architecture work is most needed. We expect to have that document produced over the next couple of months. Beyond that, we see a lot of opportunities for doing architecture and standards work in the healthcare sector, and our membership is going to determine which of those areas to focus on and which projects to initiate first.

For more on The Open Group Security Forum, please visit http://www.opengroup.org/subjectareas/security. For more on The Open Group Healthcare Forum, see http://www.opengroup.org/getinvolved/industryverticals/healthcare.

Jim Hietala, CISSP, GSEC, is Vice President, Security for The Open Group, where he manages all IT security, risk management and healthcare programs and standards activities. He participates in the SANS Analyst/Expert program and has also published numerous articles on information security, risk management, and compliance topics in publications including The ISSA Journal, Bank Accounting & Finance, Risk Factor, SC Magazine, and others.


Filed under Cloud/SOA, Conference, Data management, Healthcare, Information security, Open FAIR Certification, Open Platform 3.0, RISK Management, TOGAF®, Uncategorized

Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when it needs doing, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group’s initiatives, Quantum Lifecycle Management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the big-data processing needed to achieve this aim.

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is one of the areas that I support as forum director.) Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever changing tools to handle Big Data of ever increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how the choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for a government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it has major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and round Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishingboat-bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces,” the combination of Social, Cloud and Mobile computing with information that Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Filed under Conference

#ogChat Summary – Big Data and Security

By Patty Donovan, The Open Group

The Open Group hosted a tweet jam (#ogChat) to discuss Big Data security. In case you missed the conversation, here is a recap of the event.

The Participants

A total of 18 participants joined in the hour-long discussion.

Q1 What is #BigData #security? Is it different from #data security? #ogChat

Participants seemed to agree that while Big Data security is similar to data security, it is more extensive. Two major factors to consider: sensitivity and scalability.

  • @dustinkirkland At the core it’s the same – sensitive data – but the difference is in the size and the length of time this data is being stored. #ogChat
  • @jim_hietala Q1: Applying traditional security controls to BigData environments, which are not just very large info stores #ogChat
  • @TheTonyBradley Q1. The value of analyzing #BigData is tied directly to the sensitivity and relevance of that data–making it higher risk. #ogChat
  • @AdrianLane Q1 Securing #BigData is different. Issues of velocity, scale, elasticity break many existing security products. #ogChat
  • @editingwhiz #Bigdata security is standard information security, only more so. Meaning sampling replaced by complete data sets. #ogchat
  • @Dana_Gardner Q1 Not only is the data sensitive, the analysis from the data is sensitive. Secret. On the QT. Hush, hush. #BigData #data #security #ogChat
    • @Technodad @Dana_Gardner A key point. Much #bigdata will be public – the business value is in cleanup & analysis. Focus on protecting that. #ogChat

Q2 Any thoughts about #security systems as producers of #BigData, e.g., voluminous systems logs? #ogChat

Most agreed that security systems should be setting an example for producing secure Big Data environments.

  • @dustinkirkland Q2. They should be setting the example. If the data is deemed important or sensitive, then it should be secured and encrypted. #ogChat
  • @TheTonyBradley Q2. Data is data. Data gathered from information security logs is valuable #BigData, but rules for protecting it are the same. #ogChat
  • @elinormills Q2 SIEM is going to be big. will drive spending. #ogchat #bigdata #security
  • @jim_hietala Q2: Well instrumented IT environments generate lots of data, and SIEM/audit tools will have to be managers of this #BigData #ogchat
  • @dustinkirkland @theopengroup Ideally #bigdata platforms will support #tokenization natively, or else appdevs will have to write it into apps #ogChat

Q3 Most #BigData stacks have no built in #security. What does this mean for securing #BigData? #ogChat

The lack of built-in security makes Big Data a prime target. While not all enterprise data is sensitive, housing it insecurely runs the risk of compromise. Furthermore, security solutions need to be not only effective but also scalable, as data will continue to get bigger.

  • @elinormills #ogchat big data is one big hacker target #bigdata #security
    • @editingwhiz @elinormills #bigdata may be a huge hacker target, but will hackers be able to process the chaff out of it? THAT takes $$$ #ogchat
    • @elinormills @editingwhiz hackers are innovation leaders #ogchat
    • @editingwhiz @elinormills Yes, hackers are innovation leaders — in security, but not necessarily dataset processing. #eweeknews #ogchat
  • @jim_hietala Q3:There will be a strong market for 3rd party security tools for #BigData – existing security technologies can’t scale #ogchat
  • @TheTonyBradley Q3. When you take sensitive info and store it–particularly in the cloud–you run the risk of exposure or compromise. #ogChat
  • @editingwhiz Not all enterprises have sensitive business data they need to protect with their lives. We’re talking non-regulated, of course. #ogchat
  • @TheTonyBradley Q3. #BigData is sensitive enough. The distilled information from analyzing it is more sensitive. Solutions need to be effective. #ogChat
  • @AdrianLane Q3 It means identifying security products that don’t break big data – i.e. they scale or leverage #BigData #ogChat
    • @dustinkirkland @AdrianLane #ogChat Agreed, this is where certifications and partnerships between the 3rd party and #bigdata vendor are essential.

Q4 How is the industry dealing with the social and ethical uses of consumer data gathered via #BigData? #ogChat #privacy

Participants agreed that the industry needs to improve when it comes to dealing with the social and ethical uses of consumer data gathered through Big Data. If the data is easily accessible, hackers will be attracted. No matter what, the cost of a breach is far greater than any preventative solution.

  • @dustinkirkland Q4. #ogChat Sadly, not well enough. The recent Instagram uproar was well publicized but such abuse of social media rights happens every day.
    • @TheTonyBradley @dustinkirkland True. But, they’ll buy the startups, and take it to market. Fortune 500 companies don’t like to play with newbies. #ogChat
    • @editingwhiz Disagree with this: Fortune 500s don’t like to play with newbies. We’re seeing that if the IT works, name recognition irrelevant. #ogchat
    • @elinormills @editingwhiz @thetonybradley ‘hacker’ covers lot of ground, so i would say depends on context. some of my best friends are hackers #ogchat
    • @Technodad @elinormills A core point- data from sensors will drive #bigdata as much as enterprise data. Big security, quality issues there. #ogChat
  • @Dana_Gardner Q4 If privacy is a big issue, hacktivism may crop up. Power of #BigData can also make it socially onerous. #data #security #ogChat
  • @dustinkirkland Q4. The cost of a breach is far greater than the cost (monetary or reputation) of any security solution. Don’t risk it. #ogChat

Q5 What lessons from basic #datasecurity and #cloud #security can be implemented in #BigData security? #ogChat

The principles are the same, just on a larger scale. The biggest risks come from cutting corners due to the size and complexity of the data gathered. As attackers such as Anonymous grow more capable, security practices must keep pace, regardless of the size of the data.

  • @TheTonyBradley Q5. Again, data is data. The best practices for securing and protecting it stay the same–just on a more massive #BigData scale. #ogChat
  • @Dana_Gardner Q5 Remember, this is in many ways unchartered territory so expect the unexpected. Count on it. #BigData #data #security #ogChat
  • @NadhanAtHP A5 @theopengroup – Security Testing is even more vital when it comes to #BigData and Information #ogChat
  • @TheTonyBradley Q5. Anonymous has proven time and again that most existing data security is trivial. Need better protection for #BigData. #ogChat

Q6 What are some best practices for securing #BigData? What are orgs doing now, and what will orgs be doing 2-3 years from now? #ogChat

While some argued encrypting everything is the key, and others encouraged pressure on big data providers, most agreed that a multi-step security infrastructure is necessary. It’s not just the data that needs to be secured, but also the transportation and analysis processes.

  • @dustinkirkland Q6. #ogChat Encrypting everything, by default, at least at the fs layer. Proper key management. Policies. Logs. Hopefully tokenized too.
  • @dustinkirkland Q6. #ogChat Ask tough questions of your #cloud or #bigdata provider. Know what they are responsible for and who has access to keys. #ogChat
    • @elinormills Agreed–> @dustinkirkland Q6. #ogChat Ask tough questions of your #cloud or #bigdataprovider. Know what they are responsible for …
  • @Dana_Gardner Q6 Treat most #BigData as a crown jewel, see it as among most valuable assets. Apply commensurate security. #data #security #ogChat
  • @elinormills Q6 govt level crypto minimum, plus protect all endpts #ogchat #bigdata #security
  • @TheTonyBradley Q6. Multi-faceted issue. Must protect raw #BigData, plus processing, analyzing, transporting, and resulting distilled analysis. #ogChat
  • @Technodad If you don’t establish trust with data source, you need to assume data needs verification, cleanup before it is used for decisions. #ogChat
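One of the recurring best practices above is tokenization: replacing sensitive values with random tokens before data enters the analytics store, so jobs never see the real values. The sketch below is a toy, in-memory illustration of that idea only; a production deployment would use an HSM-backed vault, format-preserving tokens, and separate storage for the mapping, and all names here are invented:

```python
import secrets

class TokenVault:
    """Minimal token vault: swap sensitive values for random tokens.

    In practice the vault mapping would live in a separately secured
    store; analytics jobs would only ever see the tokens.
    """
    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse any existing token so joins and aggregations on the
        # tokenized field still work across records.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(8)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with access to the vault can recover the value.
        return self._token_to_value[token]

vault = TokenVault()
record = {"card": "4111-1111-1111-1111", "amount": 42.50}
# The tokenized record can flow into the Big Data store; only the
# vault can map the token back to the real card number.
safe_record = {**record, "card": vault.tokenize(record["card"])}
```

This is the trade-off @dustinkirkland alludes to: if the platform does not support tokenization natively, application developers end up writing something like this into their apps.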

A big thank you to all the participants who made this such a great discussion!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.


Filed under Tweet Jam

Data Governance: A Fundamental Aspect of IT

By E.G. Nadhan, HP

In an earlier post, I had explained how you can build upon SOA governance to realize Cloud governance.  But underlying both paradigms is a fundamental aspect that we have been dealing with ever since the dawn of IT—and that’s the data itself.

In fact, IT used to be referred to as “data processing.” Despite the continuing evolution of IT through various platforms, technologies, architectures and tools, at the end of the day IT is still processing data. The data, however, has taken multiple shapes and forms, both structured and unstructured, and Cloud Computing has opened up new opportunities to process and store it. There has been a need for data governance since the day data processing was born; today, that need has taken on a whole new dimension.

“It’s the economy, stupid,” was a campaign slogan, coined to win a critical election in the United States in 1992. Today, the campaign slogan for governance in the land of IT should be, “It’s the data, stupid!”

Let us challenge ourselves with a few questions. Consider them the what, why, when, where, who and how of data governance.

What is data governance? It is the mechanism by which we ensure that the right corporate data is available to the right people, at the right time, in the right format, with the right context, through the right channels.

Why is data governance needed? The Cloud, social networking and user-owned devices (BYOD) have acted as catalysts, triggering unprecedented data growth in recent years. We need to control and understand the data we are dealing with in order to process it effectively and securely.

When should data governance be exercised? Well, when shouldn’t it be? Data governance kicks in at the source, where the data enters the enterprise. It continues across the information lifecycle, as data is processed and consumed to address business needs. And it is also essential when data is archived and/or purged.

Where does data governance apply? It applies to all business units and across all processes. Data governance has a critical role to play at the point of storage, the final checkpoint before data is stored as “golden” in a database. Data governance also applies across all layers of the architecture:

  • Presentation layer where the data enters the enterprise
  • Business logic layer where the business rules are applied to the data
  • Integration layer where data is routed
  • Storage layer where data finds its home
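As a rough illustration of that final checkpoint, a steward's rules for a data domain might be applied before a record is accepted as "golden." This is a minimal sketch; the domain name, rules, and fields are invented for the example:

```python
# Hypothetical steward-registered governance rules, keyed by data domain.
RULES = {
    "customer": [
        ("email has '@'", lambda r: "@" in r.get("email", "")),
        ("country is known", lambda r: r.get("country") in {"US", "ES", "IN"}),
    ],
}

def admit_to_golden_store(domain, record):
    """Run the domain's rules; return (admitted, names_of_failed_rules)."""
    failed = [name for name, rule in RULES.get(domain, []) if not rule(record)]
    return (not failed, failed)

ok, failed = admit_to_golden_store("customer", {"email": "a@b.com", "country": "US"})
print(ok)       # True
ok, failed = admit_to_golden_store("customer", {"email": "nope", "country": "US"})
print(failed)   # ["email has '@'"]
```

The point is simply that the rules live in one governed place per domain, rather than being duplicated ad hoc wherever data happens to be written.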

Who does data governance apply to? It applies to all business leaders, consumers, generators and administrators of data. It is a good idea to identify stewards for the ownership of key data domains. Stewards must ensure that their data domains abide by the enterprise architectural principles, and they should continuously analyze the impact of various business events on their domains.

How is data governance applied? Data governance must be exercised at the enterprise level with federated governance to individual business units and data domains. It should be proactively exercised when a new process, application, repository or interface is introduced.  Existing data is likely to be impacted.  In the absence of effective data governance, data is likely to be duplicated, either by chance or by choice.

In our data universe, “informationalization” yields valuable intelligence that enables effective decision-making and analysis. However, even having the best people, process and technology is not going to yield the desired outcomes if the underlying data is suspect.

How about you? How is the data in your enterprise? What governance measures do you have in place? I would like to know.

A version of this blog post was originally published on HP’s Journey through Enterprise IT Services blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has more than 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project, and is also the founding co-chair for the Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.


Filed under Cloud, Cloud/SOA

#ogChat Summary – 2013 Security Priorities

By Patty Donovan, The Open Group

Totaling 446 tweets, yesterday’s 2013 Security Priorities Tweet Jam (#ogChat) saw a lively discussion on the future of security in 2013 and became our most successful tweet jam to date. In case you missed the conversation, here’s a recap of yesterday’s #ogChat!

The event was moderated by former CNET security reporter Elinor Mills, and there was a total of 28 participants including:

Here is a high-level snapshot of yesterday’s #ogChat:

Q1 What’s the biggest lesson learned by the security industry in 2012? #ogChat

The consensus among participants was that 2012 was a year of going back to basics. Many basic vulnerabilities within organizations still need to be addressed, and they affect every aspect of an organization.

  • @Dana_Gardner Q1 … Security is not a product. It’s a way of conducting your organization, a mentality, affects all. Repeat. #ogChat #security #privacy
  • @Technodad Q1: Biggest #security lesson of 2102: everyone is in two security camps: those who know they’ve been penetrated & those who don’t. #ogChat
  • @jim_hietala Q1. Assume you’ve been penetrated, and put some focus on detective security controls, reaction/incident response #ogChat
  • @c7five Lesson of 2012 is how many basics we’re still not covering (eg. all the password dumps that showed weak controls and pw choice). #ogChat

Q2 How will organizations tackle #BYOD security in 2013? Are standards needed to secure employee-owned devices? #ogChat

Participants debated over the necessity of standards. Most agreed that standards and policies are key in securing BYOD.

  • @arj Q2: No “standards” needed for BYOD. My advice: collect as little information as possible; use MDM; create an explicit policy #ogChat
  • @Technodad @arj Standards are needed for #byod – but operational security practices more important than technical standards. #ogChat
  • @AWildCSO Organizations need to develop a strong asset management program as part of any BYOD effort. Identification and Classification #ogChat
  • @Dana_Gardner Q2 #BYOD forces more apps & data back on servers, more secure; leaves devices as zero client. Then take that to PCs too. #ogChat #security
  • @taosecurity Orgs need a BYOD policy for encryption & remote wipe of company data; expect remote compromise assessment apps too @elinormills #ogChat

Q3 In #BYOD era, will organizations be more focused on securing the network, the device, or the data? #ogChat

There was disagreement here. Some emphasized focusing on protecting data, while others argued that it is the devices and networks that need protecting.

  • @taosecurity Everyone claims to protect data, but the main ways to do so remain protecting devices & networks. Ignores code sec too. @elinormills #ogChat
  • @arj Q3: in the BYOD era, the focus must be on the data. Access is gated by employee’s entitlements + device capabilities. #ogChat
  • @Technodad @arj Well said. Data sec is the big challenge now – important for #byod, #cloud, many apps. #ogChat
  • @c7five Organization will focus more on device management while forgetting about the network and data controls in 2013. #ogChat #BYOD

Q4 What impact will using 3rd party #BigData have on corporate security practices? #ogChat

Participants agreed that using third parties will force organizations to rely on security provided by those parties. They also acknowledged that data must be secure in transit.

  • @daviottenheimer Q4 Big Data will redefine perimeter. have to isolate sensitive data in transit, store AND process #ogChat
  • @jim_hietala Q4. 3rd party Big Data puts into focus 3rd party risk management, and transparency of security controls and control state #ogChat
  • @c7five Organizations will jump into 3rd party Big Data without understanding of their responsibilities to secure the data they transfer. #ogChat
  • @Dana_Gardner Q4 You have to trust your 3rd party #BigData provider is better at #security than you are, eh? #ogChat  #security #SLA
  • @jadedsecurity @Technodad @Dana_Gardner has nothing to do with trust. Data that isn’t public must be secured in transit #ogChat
  • @AWildCSO Q4: with or without bigdata, third party risk management programs will continue to grow in 2013. #ogChat

Q5 What will global supply chain security look like in 2013? How involved should governments be? #ogChat

Supply chains are an emerging security issue, and governments need to get involved. But consumers will also start to understand what they are responsible for securing themselves.

  • @jim_hietala Q5. supply chain emerging as big security issue, .gov’s need to be involved, and Open Group’s OTTF doing good work here #ogChat
  • @Technodad Q5: Governments are going to act- issue is getting too important. Challenge is for industry to lead & minimize regulatory patchwork. #ogChat
  • @kjhiggins Q5: Customers truly understanding what they’re responsible for securing vs. what cloud provider is. #ogChat

Q6 What are the biggest unsolved issues in Cloud Computing security? #ogChat

Cloud security is a big issue. Most agreed that Cloud security is opaque and needs to become more transparent. When Cloud providers claim they are secure, consumers and organizations put blind trust in them, making the problem worse.

  • @jadedsecurity @elinormills Q6 all of them. Corps assume cloud will provide CIA and in most cases even fails at availability. #ogChat
  • @jim_hietala Q6. Transparency of security controls/control state, cloud risk management, protection of unstructured data in cloud services #ogChat
  • @c7five Some PaaS cloud providers advertise security as something users don’t need to worry about. That makes the problem worse. #ogChat

Q7 What should be the top security priorities for organizations in 2013? #ogChat

Top security priorities varied. Priorities highlighted in the discussion included: creating a culture that promotes secure behavior; prioritizing security spending based on risk; knowing where the data resides; and bringing third-party risk management to the forefront.

  • @jim_hietala Q7. prioritizing security spend based on risks, protecting data, detective controls #ogChat
  • @Dana_Gardner Q7 Culture trumps technology and business. So make #security policy adherence a culture that is defined and rewarded. #ogChat #security
  • @kjhiggins Q7 Getting a handle on where all of your data resides, including in the mobile realm. #ogChat
  • @taosecurity Also for 2013: 1) count and classify your incidents & 2) measure time from detection to containment. Apply Lean principles to both. #ogChat
  • @AWildCSO Q7: Asset management, third party risk management, and risk based controls for 2013. #ogChat

A big thank you to all the participants who made this such a great discussion!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.


Filed under Tweet Jam

Take a Lesson from History to Integrate to the Cloud

By E.G. Nadhan, HP

In an earlier post for The Open Group Blog on the Top 5 tell-tale signs of SOA evolving to the Cloud, I outlined the various characteristics of SOA that serve as a foundation for the cloud computing paradigm. The steady growth of service-oriented practices and the continued adoption of cloud computing across enterprises have created the need to integrate out to the cloud. When doing so, we should look back at the evolution of integration solutions, which started with point-to-point solutions and matured over the years into integration brokers and enterprise service buses. We should take a lesson from history to ensure that this time around, when integrating to the cloud, we prevent undue proliferation of point-to-point solutions across the extended enterprise.

We must exercise the same due diligence and governance as we do for services within the enterprise. The risk of point-to-point solutions proliferating is heightened by the consumerization of IT and the ease with which individual business units can obtain such services.

Thus, here are five steps to ensure a more systemic approach when integrating to cloud-based service providers.

  1. Extend your SOA strategy to the Cloud. Review your current SOA strategy and extend this to accommodate cloud based as-a-service providers.
  2. Extend Governance around Cloud Services.   Review your existing IT governance and SOA governance processes to accommodate the introduction and adoption of cloud based as-a-service providers.
  3. Identify Cloud-based integration models. It is not one-size-fits-all: multiple integration models could apply to a cloud-based service provider, depending upon the enterprise integration architecture. These models include a) point-to-point solutions, b) cloud to on-premise ESB, and c) cloud-based connectors that take a service-centric approach to integrating cloud providers with enterprise applications and/or other cloud providers.
  4. Apply the right models to the right scenarios. Review the integration scenarios involved and map each to the most appropriate model.
  5. Sustain and evolve your services taxonomy. Provide enterprise-wide visibility to the taxonomy of services – both on-premise and those identified for integration with the cloud-based service providers. Continuously evolve these services to integrate to a rationalized set of providers who cater to the integration needs of the enterprise in the cloud.
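Step 5's enterprise-wide services taxonomy can start as something as simple as a registry recording, for each service, its provider and its integration model; duplicate integrations to the same provider then become visible and can be rationalized. The service names, providers, and models below are purely illustrative:

```python
# Illustrative taxonomy: service -> where it runs and how it is integrated.
TAXONOMY = {
    "payroll.run":  {"provider": "on-premise",     "model": "esb"},
    "crm.contacts": {"provider": "cloud-vendor-a", "model": "cloud-connector"},
    "crm.leads":    {"provider": "cloud-vendor-a", "model": "point-to-point"},
    "tax.rates":    {"provider": "cloud-vendor-b", "model": "point-to-point"},
}

def services_by_provider(taxonomy):
    """Group service names by provider, exposing rationalization candidates."""
    grouped = {}
    for name, meta in sorted(taxonomy.items()):
        grouped.setdefault(meta["provider"], []).append(name)
    return grouped

print(services_by_provider(TAXONOMY)["cloud-vendor-a"])
# Two separate integrations to one provider -- worth reviewing for consolidation.
```

A real registry would carry more metadata (owning business unit, SLA, data classification), but even this minimal view gives the enterprise-wide visibility the step calls for.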

The biggest challenge enterprises face in driving systemic adoption of cloud-based services comes from within their own business units. Multiple business units may unknowingly consume the same services from the same providers in different ways. Enterprises must therefore ensure that such point-to-point integrations do not proliferate as they did in the era preceding integration brokers.

By adopting service-oriented principles, enterprises can keep history from repeating itself as they integrate to the cloud.

How about your enterprise? How are you going about doing this? What is your approach to integrating to cloud service providers?

A version of this post was originally published on HP’s Enterprise Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project. Twitter handle @NadhanAtHP.


Filed under Cloud, Cloud/SOA

The Open Group Barcelona Conference – Early Bird Registration ends September 21

By The Open Group Conference Team

Early Bird registration for The Open Group Conference in Barcelona ends September 21. Register now and save!

The conference runs October 22-24, 2012. On Monday, October 22, the plenary theme is “Big Data – The Next Frontier in the Enterprise,” and speakers will address the challenges and solutions facing Enterprise Architecture within the context of the growth of Big Data. Topics to be explored include:

  • How does an enterprise adopt the means to contend with Big Data within its information architecture?
  • How does Big Data enable your business architecture?
  • What are the issues concerned with real-time analysis of the data resources on the cloud?
  • What are the information security challenges in the world of outsourced and massively streamed data analytics?
  • What is the architectural view of security for cloud computing? How can you take a risk-based approach to cloud security?

Plenary speakers include:

  • Peter Haviland, head of Business Architecture, Ernst & Young
  • Ron Tolido, CTO of Application Services in Europe, Capgemini; and Manuel Sevilla, chief technical officer, Global Business Information Management, Capgemini
  • Scott Radeztsky, chief technical officer, Deloitte Analytics Innovation Centers
  • Helen Sun, director of Enterprise Architecture, Oracle

On Tuesday, October 23, Dr. Robert Winter, Institute of Information Management, University of St. Gallen, Switzerland, will kick off the day with a keynote on EA Management and Transformation Management.

Tracks include:

  • Practice-driven Research on Enterprise Transformation (PRET)
  • Trends in Enterprise Architecture Research (TEAR)
  • TOGAF® and ArchiMate® Case Studies
  • Information Architecture
  • Distributed Services Architecture
  • Holistic Enterprise Architecture Workshop
  • Business Innovation & Technical Disruption
  • Security Architecture
  • Big Data
  • Cloud Computing for Business
  • Cloud Security and Cloud Architecture
  • Agile Enterprise Architecture
  • Enterprise Architecture and Business Value
  • Setting Up A Successful Enterprise Architecture Practice

For more information or to register: http://www.opengroup.org/barcelona2012/registration


Filed under Conference

Open Group Security Gurus Dissect the Cloud: Higher or Lower Risk?

By Dana Gardner, Interarbor Solutions

For some, any move to the Cloud — at least the public Cloud — means a higher risk for security.

For others, relying more on a public Cloud provider means better security. There’s more of a concentrated and comprehensive focus on security best practices that are perhaps better implemented and monitored centrally in the major public Clouds.

And so which is it? Is Cloud a positive or negative when it comes to cyber security? And what of hybrid models that combine public and private Cloud activities, how is security impacted in those cases?

We posed these and other questions to a panel of security experts at last week’s Open Group Conference in San Francisco to deeply examine how Cloud and security come together — for better or worse.

The panel: Jim Hietala, Vice President of Security for The Open Group; Stuart Boardman, Senior Business Consultant at KPN, where he co-leads the Enterprise Architecture Practice as well as the Cloud Computing Solutions Group; Dave Gilmour, an Associate at Metaplexity Associates and a Director at PreterLex Ltd.; and Mary Ann Mezzapelle, Strategist for Enterprise Services and Chief Technologist for Security Services at HP.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Is this notion of going outside the firewall fundamentally a good or bad thing when it comes to security?

Hietala: It can be either. Talking to security people in large companies, frequently what I hear is that with adoption of some of those services, their policy is either let’s try and block that until we get a grip on how to do it right, or let’s establish a policy that says we just don’t use certain kinds of Cloud services. Data I see says that that’s really a failed strategy. Adoption is happening whether they embrace it or not.

The real issue is how you do that in a planned, strategic way, as opposed to letting services like Dropbox and other kinds of Cloud Collaboration services just happen. So it’s really about getting some forethought around how do we do this the right way, picking the right services that meet your security objectives, and going from there.

Gardner: Is Cloud Computing good or bad for security purposes?

Boardman: It’s simply a fact, and it’s something that we need to learn to live with.

What I’ve noticed through my own work is a lot of enterprise security policies were written before we had Cloud, but when we had private web applications that you might call Cloud these days, and the policies tend to be directed toward staff’s private use of the Cloud.

Then you run into problems, because you read something in policy — and if you interpret that as meaning Cloud, it means you can’t do it. And if you say it’s not Cloud, then you haven’t got any policy about it at all. Enterprises need to sit down and think, “What would it mean to us to make use of Cloud services and to ask as well, what are we likely to do with Cloud services?”

Gardner: Dave, is there an added impetus for Cloud providers to be somewhat more secure than enterprises?

Gilmour: It depends on the enterprise that they’re actually supplying to. If you’re in a heavily regulated industry, you have a different view of what levels of security you need and want, and therefore what you’re going to impose contractually on your Cloud supplier. That means that the different Cloud suppliers are going to have to attack different industries with different levels of security arrangements.

The problem there is that the penalty regimes are always going to say, “Well, if the security lapses, you’re going to get off with two months of not paying” or something like that. That kind of attitude isn’t going to work for this kind of security.

What I don’t understand is exactly how secure Cloud provision is going to be enabled and governed under tight regimes like that.

An opportunity

Gardner: Jim, we’ve seen in the public sector that governments are recognizing that Cloud models could be a benefit to them. They can reduce redundancy. They can control and standardize. They’re putting in place some definitions, implementation standards, and so forth. Is the vanguard of correct Cloud Computing with security in mind being managed by governments at this point?

Hietala: I’d say that they’re at the forefront. Some of these shared government services, where they stand up Cloud and make it available to lots of different departments in a government, have the ability to do what they want from a security standpoint, not relying on a public provider, and get it right from their perspective and meet their requirements. They then take that consistent service out to lots of departments that may not have had the resources to get IT security right, when they were doing it themselves. So I think you can make a case for that.

Gardner: Stuart, being involved with standards activities yourself, does moving to the Cloud provide a better environment for managing, maintaining, instilling, and improving on standards than enterprise by enterprise by enterprise? As I say, we’re looking at a larger pool and therefore that strikes me as possibly being a better place to invoke and manage standards.

Boardman: Dana, that’s a really good point, and I do agree. Also, in the security field, we have an advantage in the sense that there are quite a lot of standards out there to deal with interoperability, exchange of policy, exchange of credentials, which we can use. If we adopt those, then we’ve got a much better chance of getting those standards used widely in the Cloud world than in an individual enterprise, with an individual supplier, where it’s not negotiation, but “you use my API, and it looks like this.”

Having said that, there are a lot of well-known Cloud providers who do not currently support those standards and they need a strong commercial reason to do it. So it’s going to be a question of the balance. Will we get enough specific weight of people who are using it to force the others to come on board? And I have no idea what the answer to that is.

Gardner: We’ve also seen that cooperation is an important aspect of security, knowing what’s going on on other people’s networks, being able to share information about what the threats are, remediation, working to move quickly and comprehensively when there are security issues across different networks.

Is that a case, Dave, where having a Cloud environment is a benefit? That is to say more sharing about what’s happening across networks for many companies that are clients or customers of a Cloud provider rather than perhaps spotty sharing when it comes to company by company?

Gilmour: There is something to be said for that, Dana. Part of the issue, though, is that companies are individually responsible for their data. They’re individually responsible to a regulator or to their clients for their data. The question then becomes that as soon as you start to share a certain aspect of the security, you’re de facto sharing the weaknesses as well as the strengths.

So it’s a two-edged sword. One of the problems we have is that until we mature a little bit more, we won’t be able to actually see which side is the sharpest.

Gardner: So our premise that Cloud is good and bad for security is holding up, but I’m wondering whether the same things that make you a risk in a private setting — poor adhesion to standards, no good governance, too many technologies that are not being measured and controlled, not instilling good behavior in your employees and then enforcing that — wouldn’t this be the same either way? Is it really Cloud or not Cloud, or is it good security practices or not good security practices? Mary Ann?

No accountability

Mezzapelle: You’re right. It’s a little bit of that “garbage in, garbage out,” if you don’t have the basic things in place in your enterprise, which means the policies, the governance cycle, the audit, and the tracking, because it doesn’t matter if you don’t measure it and track it, and if there is no business accountability.

David said it — each individual company is responsible for its own security, but I would say that it’s the business owner that’s responsible for the security, because they’re the ones that ultimately have to answer that question for themselves in their own business environment: “Is it enough for what I have to get done? Is the agility more important than the flexibility in getting to some systems or the accessibility for other people, as it is with some of the ubiquitous computing?”

So you’re right. If it’s an ugly situation within your enterprise, it’s going to get worse when you do outsourcing, out-tasking, or anything else you want to call within the Cloud environment. One of the things that we say is that organizations not only need to know their technology, but they have to get better at relationship management, understanding who their partners are, and being able to negotiate and manage that effectively through a series of relationships, not just transactions.

Gardner: If data and sharing data is so important, it strikes me that a Cloud component is going to be part of that, especially if we’re dealing with business processes across organizations, doing joins, comparing and contrasting data, crunching it and sharing it, making data actually part of the business, a revenue generation activity. All of that seems prominent and likely.

So to you, Stuart, what is the issue now with data in the Cloud? Is it good, bad, or just the same double-edged sword, and it just depends how you manage and do it?

Boardman: Dana, I don’t know whether we really want to be putting our data in the Cloud, so much as putting the access to our data into the Cloud. There are all kinds of issues you’re going to run up against, as soon as you start putting your source information out into the Cloud, not the least privacy and that kind of thing.

A bunch of APIs

What you can do is simply say, “What information do I have that might be interesting to people? If it’s a private Cloud in a large organization elsewhere in the organization, how can I make that available to share?” Or maybe it’s really going out into public. What a government, for example, can be thinking about is making information services available, not just what you go and get from them that they already published. But “this is the information,” a bunch of APIs if you like. I prefer to call them data services, and to make those available.

So, if you do it properly, you have a layer of security in front of your data. You’re not letting people come in and do joins across all your tables. You’re providing information. That does require you then to engage your users in what is it that they want and what they want to do. Maybe there are people out there who want to take a bit of your information and a bit of somebody else’s and mash it together, provide added value. That’s great. Let’s go for that and not try and answer every possible question in advance.

Gardner: Dave, do you agree with that, or do you think that there is a place in the Cloud for some data?

Gilmour: There’s definitely a place in the Cloud for some data. I get the impression that something like the insurance industry model is going to emerge from this, where you’ll have a secondary Cloud. You’ll have secondary providers who will provide to the front-end providers. They might do things like archiving and that sort of thing.

Now, if you have that situation where your contractual relationship is two steps away, then you have to be very confident and certain of your cloud partner, and it has to actually therefore encompass a very strong level of governance.

The other issue you have is that you’ve got then the intersection of your governance requirements with that of the cloud provider’s governance requirements. Therefore you have to have a really strongly — and I hate to use the word — architected set of interfaces, so that you can understand how that governance is actually going to operate.

Gardner: Wouldn’t data perhaps be safer in a cloud than if they have a poorly managed network?

Mezzapelle: There is data in the Cloud and there will continue to be data in the Cloud, whether you want it there or not. The best organizations are going to start understanding that they can’t control it that way and that perimeter-like approach that we’ve been talking about getting away from for the last five or seven years.

So what we want to talk about is data-centric security, where you understand, based on role or context, who is going to access the information and for what reason. I think there is a better opportunity for services like storage, whether it’s for archiving or for near term use.

There are also other services that you don’t want to have to pay for 12 months out of the year, but that you might need independently. For instance, when you’re running a marketing campaign, you already share your data with some of your marketing partners. Or if you’re doing your payroll, you’re sharing that data through some of the national providers.

Data in different places

So there already is a lot of data in a lot of different places, whether you want Cloud or not, but the context is, it’s not in your perimeter, under your direct control, all of the time. The better you get at managing it wherever it is specific to the context, the better off you will be.

Hietala: It’s a slippery slope [when it comes to customer data]. That’s the most dangerous data to stick out in a Cloud service, if you ask me. If it’s personally identifiable information, then you get the privacy concerns that Stuart talked about. So to the extent you’re looking at putting that kind of data in a Cloud, look at the Cloud service and try to determine whether you can apply encryption and other sensible security controls, to ensure that if that data gets loose, you’re not ending up in the headlines of The Wall Street Journal.

Gardner: Dave, you said there will be different levels on a regulatory basis for security. Wouldn’t that also play with data? Wouldn’t there be different types of data and therefore a spectrum of security and availability to that data?

Gilmour: You’re right. If we come back to Facebook as an example, even if it’s data about our known customers, it’s information they have put out there of their own free will. The data that they give us, they have given to us for a purpose, and it is not for us then to distribute that data or make it available elsewhere. The fact that it may be the same data is not relevant to the discussion.

Three-dimensional solution

That’s where I think we are going to end up with not just one layer or two layers. We’re going to end up with a sort of a three-dimensional solution space. We’re going to work out exactly which chunk we’re going to handle in which way. There will be significant areas where these things crossover.

The other thing we shouldn’t forget is that data includes our software, and that’s something that people forget. Software nowadays is out in the Cloud, under current ways of running things, and you don’t even always know where it’s executing. So if you don’t know where your software is executing, how do you know where your data is?

It’s going to have to be handled one way or another, and I think it’s going to be shades of gray, because it cannot be black and white. The question is going to be, what’s the threshold shade of gray that’s acceptable?

Gardner: Mary Ann, to this notion of the different layers of security for different types of data, is there anything happening in the market that you’re aware of that’s already moving in that direction?

Mezzapelle: The experience that I have is mostly in some of the business frameworks for particular industries, like healthcare and what it takes to comply with the HIPAA regulation, or in the financial services industry, or in consumer products where you have to comply with the PCI regulations.

There has continued to be an issue around information lifecycle management, which is categorizing your data. Within a company, you might have had a document that you coded private, confidential, top secret, or whatever. So you might have had three or four levels for a document.
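A classification scheme like the one Mezzapelle mentions can be represented as a simple lookup from level to minimum handling controls. The level names and control flags here are invented for illustration, not any particular standard:

```python
# Minimal sketch of information lifecycle classification: each level maps
# to the minimum controls a document at that level requires.
# Level names and rules are hypothetical.
CLASSIFICATION_CONTROLS = {
    "public":       {"encrypt_at_rest": False, "cloud_ok": True},
    "private":      {"encrypt_at_rest": True,  "cloud_ok": True},
    "confidential": {"encrypt_at_rest": True,  "cloud_ok": True},
    "top_secret":   {"encrypt_at_rest": True,  "cloud_ok": False},
}

def handling_rules(level: str) -> dict:
    """Look up the minimum controls for a document at this classification level."""
    return CLASSIFICATION_CONTROLS[level]

print(handling_rules("confidential"))
```

In practice the hard part is not the table but, as she goes on to say, knowing where every instance of the data actually lives.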

You’ve already talked about how complex it’s going to be as you move into trying to understand not only that data, but the fact that a name like Mary Ann Mezzapelle happens to be in five or six different business systems and over 100 instances around the world.

That’s the importance of something like an Enterprise Architecture that can help you understand that you’re not just talking about the technology components, but the information, what it means, and how it is prioritized or critical to the business, which sometimes comes up in a business continuity plan from a system point of view. That’s where I’ve advised clients to start looking at how they connect the business criticality with a piece of information.

One last thing. Those regulations don’t necessarily mean that you’re secure. They make for good basic health, but that doesn’t mean you’re ultimately protected. You have to do a risk assessment based on your own environment, the bad actors you expect, and the priorities that follow from that.

Leaving security to the end

Boardman: I just wanted to pick up here, because Mary Ann spoke about Enterprise Architecture. One of my bugbears — and I call myself an enterprise architect — is that we have a terrible habit of leaving security to the end. We don’t architect security into our Enterprise Architecture. It’s a techie thing, and we’ll fix it at the back. There are also people in the security world who are techies, and they think they will do it that way as well.

There was an activity to look at bringing the SABSA Methodology from the security world together with TOGAF®, and a white paper on it was published a few weeks ago.

The Open Group has been doing some really good work on bringing security right in to the process of EA.

Hietala: In the next version of TOGAF, which has already started, there will be a whole emphasis on making sure that security is better represented in some of the TOGAF guidance. That’s ongoing work here at The Open Group.

Gardner: As I listen, it sounds as if the in-the-Cloud-or-out-of-the-Cloud security continuum is perhaps the wrong way to look at it. If you have a lifecycle approach to services and to data, then you’ll have a way to approach data uses for certain instances and certain requirements, and that would then apply across a variety of private, public, and hybrid Clouds.

Is that where we need to go, perhaps have more of this lifecycle approach to services and data that would accommodate any number of different scenarios in terms of hosting access and availability? The Cloud seems inevitable. So what we really need to focus on are the services and the data.

Boardman: That’s part of it. That needs to be tied in with the risk-based approach. So if we have done that, we can then pick up on that information and we can look at a concrete situation, what have we got here, what do we want to do with it. We can then compare that information. We can assess our risk based on what we have done around the lifecycle. We can understand specifically what we might be thinking about putting where and come up with a sensible risk approach.

You may come to the conclusion in some cases that the risk is too high and the mitigation too expensive. In others, you may say, no, because we understand our information and we understand the risk situation, we can live with that, it’s fine.

Gardner: It sounds as if we are coming at this as an underwriter for an insurance company. Is that the way to look at it?

Current risk

Gilmour: That’s eminently sensible. You have the mortality tables, you have the current risk, and you just work the two together and work out the premium. That’s probably a very good paradigm to guide us, actually, in how we should intellectually approach the problem.

Mezzapelle: One of the problems is that we don’t have those actuarial tables yet. That’s a little bit of an issue for a lot of people when they talk about, “I’ve got $100 to spend on security. Where am I going to spend it this year? Am I going to spend it on firewalls? Am I going to spend it on information lifecycle management assessment? What am I going to spend it on?” That’s some of the research we’ve been doing at HP: trying to get that into something that’s more of a statistic.

So, when you have a particular project that does a certain kind of security implementation, you can see what the business return on it is and how it actually lowers risk. We found that it’s better to spend your money on getting a better system to patch your systems than it is to do some other kind of content filtering or something like that.
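One common way to make that comparison statistical is annualized loss expectancy (ALE = single-loss expectancy × annual rate of occurrence) and risk reduction per dollar spent. The figures below are invented, and the conclusion that patching wins simply mirrors Mezzapelle’s point rather than any real data:

```python
# Annualized Loss Expectancy: ALE = SLE (single-loss expectancy)
# * ARO (annual rate of occurrence). All figures are hypothetical.
def ale(sle: float, aro: float) -> float:
    return sle * aro

# Two candidate controls, each reducing the ARO of the same loss scenario.
controls = {
    "patch_management":  {"cost": 60.0, "sle": 50_000.0,
                          "aro_before": 0.40, "aro_after": 0.10},
    "content_filtering": {"cost": 40.0, "sle": 50_000.0,
                          "aro_before": 0.40, "aro_after": 0.35},
}

for name, c in controls.items():
    reduction = ale(c["sle"], c["aro_before"]) - ale(c["sle"], c["aro_after"])
    print(f"{name}: risk reduced by ${reduction:,.0f}, "
          f"or ${reduction / c['cost']:,.2f} per dollar spent")
```

The same arithmetic underlies the OpenFAIR-style risk analysis mentioned earlier in the discussion, though OpenFAIR decomposes frequency and magnitude further.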

Gardner: Perhaps what we need is the equivalent of an Underwriters Laboratories (UL) for permeable organizational IT assets, where the security stamp of approval comes in high or low. Then you could get your insurance insight. Maybe that’s something for The Open Group to look into. Any thoughts about how standards and a consortium approach would come into that?

Hietala: I don’t know about the UL for all security things. That sounds like a risky proposition.

Gardner: It could be fairly popular and remunerative.

Hietala: It could.

Mezzapelle: An unending job.

Hietala: I will say we have one active project in the Security Forum that is looking at trying to allow organizations to measure and understand risk dependencies that they inherit from other organizations.

So if I’m outsourcing a function to XYZ Corporation, what risk am I inheriting from them by virtue of their doing some IT processing for me? It could be a Cloud provider, or it could be somebody doing a business process for me. So there’s work going on there.
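One simple way to aggregate inherited risk of that kind, under the strong assumption that providers suffer incidents independently, is the complement of the no-incident probability. The per-provider probabilities below are made up for illustration:

```python
# If provider i has an independent probability p_i of a compromising
# incident per year, the chance that at least one of them exposes you
# is 1 - product(1 - p_i). Probabilities here are hypothetical.
def inherited_risk(provider_probs: list[float]) -> float:
    p_none = 1.0
    for p in provider_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Outsourcing to a Cloud provider, a payroll bureau, and a marketing partner.
print(f"{inherited_risk([0.05, 0.02, 0.10]):.3f}")
```

Real dependency measurement is harder than this, of course: incidents across providers are rarely independent, which is exactly why the work Hietala describes is needed.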

I heard just last week about an NSF-funded project here in the U.S. to do the same sort of thing, to look at trying to measure risk in a predictable way. So there are things going on out there.

Gardner: We have to wrap up, I’m afraid, but Stuart, it seems as if currently it’s the larger public Cloud providers, the likes of Amazon and Google among others, that might be playing the role of all of these entities we are talking about. They are their own self-insurer. They are their own underwriter. They are their own risk assessor, like a UL. Do you think that’s going to continue to be the case?

Boardman: No, I think that as Cloud adoption increases, you will have a greater weight of consumer organizations who will need to do that themselves. And it’s not just responsibility, it’s also accountability. At the end of the day, you’re always accountable for the data that you hold. It doesn’t matter where you put it or how many other parties it gets subcontracted out to.

The weight will change

So there’s a need to have that, and as the adoption increases, there’s less fear and more, “Let’s do something about it.” Then, I think the weight will change.

Plus, of course, there are other parties coming into this world that Amazon has created. I’d imagine that HP is probably one of them as well, but all the big names in IT are moving in here, and I suspect that for those companies there’s a differentiator in knowing how to do this properly, given their history of enterprise involvement.

So yeah, I think it will change. That’s no offense to Amazon, etc. I just think that the balance is going to change.

Gilmour: Yes. I think that’s how it has to go. The question that then arises is, who is going to police the policeman, and how is that going to happen? Every company is going to be using the Cloud. Even the Cloud suppliers are using the Cloud. So how is it going to work? It’s one of these ever-decreasing circles.

Mezzapelle: At this point, I think it’s going to be more evolution than revolution, but I’m also one of the people who’ve been in that part of the business — IT services — for the last 20 years and have seen it morph in a little bit different way.

Stuart is right that there’s going to be a convergence of the consumer-driven, cloud-based model, which Amazon and Google represent, with an enterprise approach that corporations like HP are representing. It’s somewhere in the middle where we can bring the service level commitments, the options for security, the options for other things that make it more reliable and risk-averse for large corporations to take advantage of it.

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.


Setting Expectations and Working within Existing Structures the Dominant Themes for Day 3 of San Francisco Conference

By The Open Group Conference Team

Yesterday concluded The Open Group Conference San Francisco. Key themes that stood out on Day 3, as well as throughout the conference, included the need for a better understanding of business expectations and existing structures.

Jason Bloomberg, president of ZapThink, began his presentation by using an illustration of a plate of spaghetti and drawing an analogy to Cloud Computing. He compared spaghetti to legacy applications and illustrated the way that enterprises are currently moving to the Cloud: by taking the plate of spaghetti and physically putting it in the Cloud.

A lot of companies that have adopted Cloud Computing have done so without a comprehensive understanding of their current organization and enterprise assets, according to Mr. Bloomberg. A legacy application that is not engineered to operate in the Cloud will not yield the hyped benefits of elasticity and infinite scalability. And Cloud adoption without well thought-out objectives will never reach the vague goals of “better ROI” or “reduced costs.”

Mr. Bloomberg urged the audience to start with the business problem in order to understand what the right adoption will be for your enterprise. He argued that it’s crucial to ask, “What does your application require?” Do you require scalability? Elasticity? A private, public or hybrid Cloud? Without knowing a business’s expected outcomes, enterprise architects will be hard pressed to help them achieve their goals.

Understand your environment

Chris Lockhart, consultant at Working Title Management & Technology Consultants, shared his experiences helping a Fortune 25 company with an outdated technology model support Cloud-centric services. Lockhart noted that for many large companies, Cloud has been the fix-it solution for poorly architected enterprises. But oftentimes, after the business tells architects to build a model for Cloud adoption, the plan presented and the business expectations do not align.

After working on this project, Mr. Lockhart learned that the greatest problem for architects is “people with unset and unmanaged expectations.” After the Enterprise Architecture team realized that they had limited power with their recommendations and strategic roadmaps, they acted as negotiators, often facilitating communication between different departments within the business. This is where architects began to display their true value to the organization, illustrated by the following statement made by a business executive within the organization: “Architects are seen as being balanced and rounded individuals who combine a creative approach with a caring, thoughtful disposition.”

The key takeaways from Mr. Lockhart’s experience were:

  • Recognize the limitations
  • Use the same language
  • Work within existing structures
  • Frameworks and models are important to a certain extent
  • Don’t talk products
  • Leave architectural purity in the ivory tower
  • Don’t dictate – low threat level works better
  • Recognize that EA doesn’t know everything
  • Most of the work was dealing with people, not technology

Understand your Cloud Perspective

Steve Bennett, senior enterprise architect at Oracle, discussed the best way to approach Cloud Computing in his session, entitled “A Pragmatic Approach to Cloud Computing.” While architects understand and create value-driven approaches, most customers simply don’t think this way, Mr. Bennett said. Often the business side of the enterprise hears about the revolutionary benefits of the Cloud, but they usually don’t take a pragmatic approach to implementing it.

Mr. Bennett went on to compare two types of Cloud adopters: the “Dilberts” and the “Neos” (from The Matrix). Dilberts often pursue monetary savings when moving to the Cloud and are late adopters, while Neos pursue business agility and can be described as early adopters, again highlighting the importance of understanding who is driving the implementation before architecting a plan.


Overlapping Criminal and State Threats Pose Growing Cyber Security Threat to Global Internet Commerce, Says Open Group Speaker

By Dana Gardner, Interarbor Solutions

This special BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference this January in San Francisco.

The conference will focus on how IT and enterprise architecture support enterprise transformation. Speakers in conference events will also explore the latest in service oriented architecture (SOA), cloud computing, and security.

We’re here now with one of the main speakers, Joseph Menn, Cyber Security Correspondent for the Financial Times and author of Fatal System Error: The Hunt for the New Crime Lords Who are Bringing Down the Internet.

Joe has covered security since 1999, for the Financial Times and, before that, for the Los Angeles Times. Fatal System Error is his third book; he also wrote All the Rave: The Rise and Fall of Shawn Fanning’s Napster.

As a lead-in to his Open Group presentation, entitled “What You’re Up Against: Mobsters, Nation-States, and Blurry Lines,” Joe explores the current cyber-crime landscape, the underground cyber-gang movement, and the motives behind governments collaborating with organized crime in cyber space. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Have we entered a new period where just balancing risks and costs isn’t a sufficient bulwark against burgeoning cyber crime?

Menn: Maybe you can make your enterprise a little trickier to get into than the other guy’s enterprise, but crime pays very, very well, and in the big picture, their ecosystem is better than ours. They do capitalism better than we do. They specialize to a great extent. They reinvest in R&D.

On our end, on the good guys’ side, it’s hard if you’re a chief information security officer (CISO) or a chief security officer (CSO) to convince the top brass to pay more. You don’t really know what’s working and what isn’t. You don’t know if you’ve really been had by something that we call advanced persistent threat (APT). Even the top security minds in the country can’t be sure whether they’ve been had or not. So it’s hard to know what to spend on.

More efficient

The other side doesn’t have that problem. They’re getting more efficient in the same way that they used to lead technical innovation. They’re leading economic innovation. The freemium model is best evidenced by crimeware kits like ZeuS, where you can get versions that are pretty effective and will help you steal a bunch of money for free. Then, if you like that, you can pay extra for the add-ons: the latest and greatest that are sure to get through the antivirus systems.

Gardner: When you say “they,” who you are really talking about?

Menn: They, the bad guys? It’s largely Eastern European organized crime. In some countries, they can be caught. In other countries they can’t be caught, and there really isn’t any point in trying.

It’s a geopolitical issue, which is something that is not widely understood, because in general, officials don’t talk about it. Working on my book, and in reporting for the newspapers, I’ve met really good cyber investigators for the Secret Service and the FBI, but I’ve yet to meet one that thinks he’s going to get promoted for calling a press conference and announcing that they can’t catch anyone.

So the State Department, meanwhile, keeps hoping that the other side is going to turn over a new leaf, but they’ve been hoping that for 10 or more years, and it hasn’t happened. So it’s incumbent upon the rest of us to call a spade a spade here.

What’s really going on is that Russian intelligence and, depending on who is in office at a given time, Ukrainian authorities, are knowingly protecting some of the worst and most effective cyber criminals on the planet.

Gardner: And what would be their motivation?

Menn: As a starting point, the level of garden-variety corruption over there is absolutely mind-blowing. More than 50 percent of Russian citizens responding to surveys say that they have paid a bribe to somebody in the past 12 months. But it’s gone well beyond that.

The same resources, human and technical, that are used to rob us blind are also being used in what is fairly called cyber war. The same criminal networks that are after our bank accounts were, for example, used in denial-of-service (DOS) attacks on Georgian and Estonian websites belonging to government, major media, and Estonian banks.

It’s the same guy, and it’s a “look-the-other-way” thing. You can do whatever crime you want, and when we call upon you to serve Mother Russia, you will do so. And that has accelerated. Just in the past couple of weeks, with the disputed elections in Russia, you’ve seen mass DOS attacks against opposition websites, mainstream media websites, and LiveJournal. It’s a pretty handy tool to have at your disposal. I provide all the evidence that would be needed to convince reasonable people in my book.

Gardner: In your book you use the terms “bringing down the Internet.” Is this all really a threat to the integrity of the Internet?

Menn: Well, integrity is the key word there. No, I don’t think anybody is about to stop us all from the privilege of watching skateboarding dogs on YouTube. What I mean by that is the trust in the Internet as it has come to be used: not the way it was designed, but the way it is used now, for online banking, ecommerce, and, heaven help us, for increasingly storing corporate and government secrets in the cloud. That is in very, very great trouble.

Not a prayer

I don’t think that now you can even trust transactions not to be monitored and pilfered. The latest, greatest versions of ZeuS get past multi-factor authentication and are not detected by any antivirus that’s out there. So consumers don’t have a prayer, in the words of Art Coviello, CEO of RSA, and corporations aren’t doing much better.

So the way the Internet is being used now is in very, very grave trouble and not reliable. That’s what I mean by it. If they turned all the botnets in the world on a given target, that target is gone. For multiple root servers and DNS, they could do some serious damage. I don’t know if they could stop the whole thing, but you’re right, they don’t want to kill the golden goose. I don’t see a motivation for that.

Gardner: If we look at organized crime in historical context, we found that there is a lot of innovation over the decades. Is that playing out on the Internet as well?

Menn: Sure. The mob does well in any place where there is a market for something and there isn’t an effective regulatory framework that sustains it: Prohibition back in the day, prostitution, gambling, and that sort of thing.

… The Russian and Ukrainian gangs went to extortion as an early model, and ironically, some of the first websites that they extorted were the offshore gambling firms. They were cash rich, they had pretty weak infrastructure, and they were wary about going to the FBI. They started by attacking those sites in 2003-04 and then moved on to more garden-variety companies. Some of them paid off, and some said, “This is going to look a little awkward in our SEC filings,” and they didn’t pay off.

Once the cyber gang got big enough, sooner or later, they also wanted the protection of traditional organized crime, because those people had better connections inside the intelligence agencies and the police force and could get them protection. That’s the way it worked. It was sort of an organic alliance, rather than “Let’s develop this promising area.”

… That is what happens. Initially it was garden-variety payoffs and protection. Then, around 2007, with the attack on Estonia, these guys started proving their worth to the Kremlin, and others saw that with the attacks that ran through their system.

This has continued to evolve very rapidly. Now DOS attacks are routinely used as a tool for political repression all around the world: Vietnam, Iran, and everywhere else, you’ll see critics silenced by DOS attacks. In most cases, it’s not the spy agencies themselves, but their contract agents. They just go to their friends in the similar gangs and say, “Hey, do this.” What’s interesting is that both Russia and China, which we haven’t talked about as much, are in this gray area now.

In China, hacking really started out as an expression of patriotism. Some of the biggest attacks, Code Red being one of them, were against targets in countries that were perceived to have slighted China or had run into some sort of territorial flap with China, and, lo and behold, they got hacked.

In the past several years, the same guys behind this sort of patriotic hacking, the attacks on the Western defense establishment that we are finally reading a lot about, have gone off and decided to enrich themselves as well. There were actually disputes in some of the major Chinese hacking groups. Some people said it was unethical to just go after money, and some of these early groups split over that.

In Russia, it went the other way. It started out with just a bunch of greedy criminals, and then they said, “Hey — we can do even better and be protected. You have better protection if you do some hacking for the motherland.” In China, it’s the other way. They started out hacking for the motherland, and then added, “Hey — we can get rich while serving our country.”

So they’re both sort of in the same place, and unfortunately it makes it pretty close to impossible for law enforcement in [the U.S.] to do anything about it, because it gets into political protection. What you really need is White House-level dealing with this stuff. If President Obama is going to talk to his opposite numbers about Chinese currency, Russian support of something we don’t like, or oil policy, this has got to be right up there too — or nothing is going to happen at all.

Gardner: What about the pure capitalism side, stealing intellectual property (IP) and taking over products in markets with the aid of these nefarious means? How big a deal is this now for enterprises and commercial organizations?

Menn: It is much, much worse than anybody realizes. U.S. counterintelligence officials a few weeks ago finally put out a report saying that Russia and China are deliberately stealing our IP, the IP of our companies. That’s an open secret. It’s been happening for years. You’re right, the man in the street doesn’t realize this, because companies aren’t used to fessing up. Therefore, there is little outrage and little pressure for retaliation or diplomatic engagement on these issues.

I’m cautiously optimistic that that is going to change a little bit. This year the Securities and Exchange Commission (SEC) gave very detailed guidance about when you have to disclose that you’ve been hacked. If there is a material impact to your company, you have to disclose it, even if the full extent is unknown.

Gardner: So the old adage of shining light on this probably is in the best interest of everyone. Is the message then keeping this quiet isn’t necessarily the right way to go?

Menn: Not only is it not the right way to go, but it’s safer to come out of the woods and fess up now. The stigma is almost gone. If you really blow the PR like Sony, then you’re going to suffer some, but I haven’t heard a lot of people say, “Boy, Google is run by a bunch of stupid idiots. They got hacked by the Chinese.”

It’s the definition of an asymmetrical fight here. There is no company that’s going to stand up against the might of the Chinese military, and nobody is going to fault them for getting nailed. Where we should fault them is for covering it up.

I think you should give the American people some credit. They realize that you’re not the bad guy if you get nailed. As I said, nobody thinks that Google has a bunch of stupid engineers. It is somewhere between extremely difficult and impossible to ward off “zero-days” and the dedicated teams working on social engineering, because TCP/IP is fundamentally broken, and it ain’t your fault.

 [These threats] are an existential threat not only to your company, but to our country and to our way of life. It is that bad. One of the problems is that in the U.S., executives tend to think a quarter or two ahead. If your source code gets stolen, your blueprints get taken, nobody might know that for a few years, and heck, by then you’re retired.

With the new SEC guidelines and some national plans in the U.K. and in the U.S., that’s not going to cut it anymore. Executives will be held accountable. This is some pretty drastic stuff. The things that you should be thinking about, if you’re in an IT-based business, include figuring out the absolutely critical crown jewel one, two, or three percent of your stuff, and keeping it off network machines.

Short-term price

Gardner: So we have to think differently, don’t we?

Menn: Basically, regular companies have to start thinking like banks, and banks have to start thinking like intelligence agencies. Everybody has to level up here.

Gardner: What do the intelligence agencies have to start thinking about?

Menn: The discussions that are going on now obviously include greatly increased monitoring, pushing responsibility for seeing suspicious stuff down to private enterprise, and greater information sharing between private enterprise and government officials.

But, there’s some pretty outlandish stuff that’s getting kicked around, including looking the other way if you, as a company, sniff something out in another country and decide to take retaliatory action on your own. There’s some pretty sea-change stuff that’s going on.

Gardner: So that would be playing offense as well as defense?

Menn: In the Defense Authorization Act that just passed, for the first time, Congress officially blesses offensive cyber-warfare, which is something we’ve already been doing, just quietly.

We’re entering some pretty new areas here, and one of the things that’s going on is that the cyber-warfare stuff, which is happening, is basically run by intelligence folks rather than by a bunch of lawyers worrying about collateral damage and the like, and there’s almost no oversight, because intelligence agencies in general get little oversight.

Gardner: Just quickly looking to the future, we have some major trends. We have an increased movement toward mobility, cloud, big data, social. How do these big shifts in IT impact this cyber security issue?

Menn: Well, there are some that are clearly dangerous, and there are some things that are a mixed bag. Certainly, the inroads of social networking into the workplace are bad from a security point of view. Perhaps worse is the consumerization of IT, the bring-your-own-device trend, which isn’t going to go away. That’s bad, although there are obviously mitigating things you can do.

The cloud itself is a mixed bag. Certainly, in theory, it could be made more secure than what you have on premise. If you’re turning it over to the very best of the very best, they can do a lot more things than you can in terms of protecting it, particularly if you’re a smaller business.

If you look at the large-scale banks and people with health records and that sort of thing that really have to be ultra-secure, they’re not going to do this yet, because the procedures are not really set up to their specs yet. That will likely come in the future. But cloud security, in my opinion, is not there yet. So that’s a mixed blessing.

Radical steps

You need to think strategically about this, and that includes some pretty radical steps. There are those who say there are two types of companies out there — those that have been hacked and those that don’t know that they’ve been hacked.

Everybody needs to take a look at this stuff beyond their immediate corporate needs and think about where we’re heading as a society. And to the extent that people are already expert in this stuff, or can become expert in it, they need to share that knowledge. That will often mean saying “Yes, we got hacked” publicly, but it also means educating those around them about the severity of the threat.

One of the reasons I wrote my book, and spent years doing it, is not because I felt that I could tell every senior executive what they needed to do. I wanted to educate a broader audience, because there are some pretty smart people, even in Washington, who have known about this for years and have been unable to do anything about it. We haven’t really passed anything that’s substantial in terms of legislation.

As a matter of political philosophy, I feel that if enough people on the street realize what’s going on, then quite often leaders will get in front of them and at least attempt to do the right thing. Senior executives should be thinking about educating their customers, their peers, the general public, and Washington to make sure that the stuff that passes isn’t as bad as it might otherwise be.

************

If you are interested in attending The Open Group’s upcoming conference, please register here: http://www3.opengroup.org/event/open-group-conference-san-francisco/registration

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.


2012 Open Group Predictions, Vol. 1

By The Open Group

Foreword

By Allen Brown, CEO

2011 was a big year for The Open Group, thanks to the efforts of our members and our staff – you all deserve a very big thank you. There have been so many big achievements that to list them all here would mean we would never get to our predictions. Significantly, though, The Open Group continues to grow, and this year the number of enterprise members passed the 400 mark, which means that around 30,000 people are involved, some more so than others, from all over the world.

Making predictions is always risky but we thought it might be fun anyway. Here are three trends that will wield great influence on IT in 2012 and beyond:

  • This year we experienced the consumerization of IT accelerating the pace of change for the enterprise at an astonishing rate as business users embraced new technologies that transformed their organizations. As this trend continues in 2012, the enterprise architect will play a critical role in supporting this change and enabling the business to realize their goals.
  • Enterprise architecture will continue its maturity in becoming a recognized profession. As the profession matures, employers of enterprise architects and other IT professionals, for that matter, will increasingly look for industry recognized certifications.
  • As globalization continues, security and compliance will be increasing issues for companies delivering products or services and there will be a growing spotlight on what might be inside IT products. Vendors will be expected to warrant that the products they purchase and integrate into their own products come from a trusted source and that their own processes can be trusted in order not to introduce potential threats to their customers. At the same time, customers will be increasingly sensitive to the security and dependability of their IT assets. To address this situation, security will continue to be designed in from the outset and be tightly coupled with enterprise architecture.

In addition to my predictions, other Open Group staff members also wanted to share their predictions for 2012 with you:

Security

By Jim Hietala, VP of Security

Cloud security in 2012 becomes all about point solutions to address specific security pain points. Customers are realizing that to achieve an acceptable level of security, whether for IaaS, SaaS, or PaaS, they need to apply controls in addition to the native platform controls from the Cloud service provider. In 2012, this will manifest as early Cloud security technologies target specific and narrow security functionality gaps. Specific areas where we see this playing out include data encryption, data loss prevention, identity and access management, and others.

Cloud

By Chris Harding, Director of Interoperability

There is a major trend towards shared computing resources that are “on the Cloud” – accessed by increasingly powerful and mobile personal computing devices but decoupled from the users.

This may bring some much-needed economic growth in 2012, but history shows that real growth can only come from markets based on standards. Cloud portability and interoperability standards will enable the development of re-usable components as commodity items, but the need for them is not yet appreciated. And even if the vendors wanted these standards for Cloud Computing, they do not yet have the experience to create good ones. But by the end of the year, we should understand Cloud Computing better and will perhaps have made a start on the standardization that will lead to growth in the years ahead.

Here are some more Cloud predictions from my colleagues in The Open Group Cloud Work Group: http://blog.opengroup.org/2011/12/19/cloud-computing-predictions-for-2012/

Business Architecture

By Steve Philp, Professional Certification

There are a number of areas for 2012 where Business Architects will be called upon to engage in transforming the business and applying technologies such as Cloud Computing, social networking and big data. Therefore, the need to have competent Business Architects is greater than ever. This year organizations have been recruiting and developing Business Architects and the profession as a whole is now starting to take shape. But how do you establish who is a practicing Business Architect?

In response to requests from our membership, next year The Open Group will incorporate a Business Architecture stream into The Open Group Certified Architect (Open CA) program. There has already been significant interest in this stream from both organizations and practitioners alike. This is because Open CA is a skills and experience based program that recognizes, at different levels, those individuals who are performing in a Business Architecture role. I believe this initiative will further help to develop the profession over the next few years and especially in 2012.


Transforming your business operating model for outsourcing and off-shoring with strategic Cloud Computing

By Mark Skilton, Capgemini

Strategic planning has traditionally been a game of numbers and decision scenarios. The make-versus-buy-versus-alliance-versus-acquisition process has remained the tool of the trade in building options to improve capital efficiencies or revenue contribution and gross operating profit. Many non-core and some core services may be moved to a buy-in model or to a shared service that reduces operating costs. Alliances have been forged with outsourcing and out-tasking of IT and business services to further improve the conditions of business performance. These are well-established practices, but how does Cloud Computing change this strategy?

A consideration now growing in many markets is how outsourcing and off-shoring have been affected by the emergence of Cloud Computing as an alternative to traditional hosting of services.

Traditional hosting of services involves on-premise and off-premise choices of key network, storage, computer software applications and business services. Typically this includes the movement of technical and staff resources to an off-premise model managed by a third party. This may further include a shift from on-shore to off-shore locations of these services, which is driven by the desire for operational cost improvements, as well as access to managed resources and skills.

Cloud Computing is also a shift to a kind of outsourced and off-shored model, which may be located on-premise or off-premise, and may offer a private, community or public Cloud service. But how does this actually alter the balance of strategic choice in a business and its chosen markets? Cloud Computing is a business operating model shift, as well as a technology transition.

Cloud changes the competitive dynamics of a market because it changes the competitive barriers to entry and choice. Some examples of key business driver shifts brought about by Cloud Computing can include:

Cloud impact on gross operating profit (GOP)

  • Lowers cost of asset ownership – asset investment can be shifted from a “one” to a “many” model
  • Lowers barriers to provisioning — self-service provisioning enabling a new kind of on-demand purchasing
  • Lowers collaborative barriers, enabling convenience to exchange ideas, brokering and business transactions

Cloud impact on revenue contribution

  • Increases speed of change to transform business activities because you can take new products and services to market faster or can expand into new markets faster
  • Increases revenue share through new products and services provisioning, and rapid market entry, to sell commodity or custom products and services

Cloud impact on risk and cost of ownership overhead

  • Increases access-control and certification security risks
  • Increases the compliance and audit costs and risks from movement of services on- and off-shore to specified or unspecified locations
  • Increases cost of knowledge acquisition and learning to manage Cloud
  • Creates changes in lock-in and impediments to portability and interoperability, as a commitment to Cloud platforms may come with associated conditions of service and limits on accessibility to move and change providers

Cloud Computing is a very direct challenge to current outsourcing and off-shoring models in a number of fundamental ways. These represent huge opportunities, but also areas that need to be managed and risk-assessed in the strategic planning process.

  • Self-service changes the “window” through which management of services is requested. In outsourcing, this may be facilitated through portals, service desks and service account communication; but in Cloud, self-service means that direct contact and ownership can be exercised remotely. It may also mean that different consumers and buyers of Cloud services do not necessarily contact the IT provider directly.
  • On-demand collaboration services change the way sourcing and selection of services are achieved. Online catalogues of predefined services, and options to seek out other Cloud services and solutions, alter the scope and range of sourcing solutions, which are no longer constrained to the particular outsourcing or multi-sourcing situation. New contractual conditions and alternative sourcing and innovation strategies are introduced by the influence of Cloud Computing models.
  • Cloud service management changes the way the ITSM help desk and service monitoring work with Cloud Computing. The role of the service desk changes: it no longer only considers requests, issues and problem resolutions; it also needs to be aware of the catalogues and availability management of the Cloud environment to answer service-level requests and changes. It moves from a request service to a demand-and-supply management service.

Self-service enables business and IT users to select the Cloud-hosted services they need to expand or change as their business needs change, rather than go through a provisioning cycle with local or central IT.

The on-demand collaboration service model improves the quality of support, as Cloud-hosted services are defined through a catalogue and an account management process, enabling business and IT users to get better visibility and control of usage and requests. Conversely, it enables variations and maverick buying to be monitored, to encourage further common IT service reuse and the development of new capabilities based on actual usage demand patterns. Hitherto, commercial contracts locked customers and vendors into longer-term contractual solutions, limiting options for change. Cloud Computing catalogues and services aim to create a looser coupling between buyers, consumers and users of IT services.

Cloud service management changes the concept of request-and-response service into a marketplace driven perspective of services.
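The self-service and demand-monitoring shift described above can be sketched in a few lines. This is a toy illustration, not any particular vendor’s API; all service names, quotas and prices are hypothetical:

```python
# Toy sketch of self-service provisioning from a Cloud service catalogue.
# All offering names and attributes are hypothetical.
CATALOGUE = {
    "small-vm": {"vcpus": 1, "monthly_cost": 15},
    "large-vm": {"vcpus": 8, "monthly_cost": 90},
}

usage_log = []  # gives demand-and-supply management visibility of actual usage


def request_service(user: str, name: str):
    """Provision on demand -- no ticket to central IT, but every request is logged."""
    offering = CATALOGUE.get(name)
    if offering is None:
        return None  # not in the catalogue: a candidate for "maverick buying" review
    usage_log.append((user, name))
    return offering


print(request_service("alice", "small-vm"))   # {'vcpus': 1, 'monthly_cost': 15}
print(request_service("bob", "gpu-cluster"))  # None -- outside the catalogue
print(usage_log)                              # [('alice', 'small-vm')]
```

The point of the sketch is the inversion it captures: the service desk no longer sits between the user and the provisioning cycle; instead the catalogue mediates the request, and the usage log is what turns a request service into a demand-and-supply management service.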

The shifting onshore/off-shore model

Cloud Computing changes the concept of outsourcing and off-shoring from a physical exchange of services into one of virtual services, whose location becomes a “one to many” paradigm that may combine internal and external marketplaces. To illustrate this point, consider the following scenario:

Current information technology hardware and software assets and staff skills may be typically consolidated into a business unit or regional data centers which are connected by a corporate network.

A small, central, corporate IT function may coordinate policy and strategy, which is largely distributed down to the many operating companies and individual IT functions to meet local market and business service requirements. Currently, controlling the data center access for the business units and selected partners for security and certification is essential to controlling data center service operations and compliance regulations.

The usage and development of business applications and infrastructure are focused onshore by business unit, region and individual operating company. Large-scale corporate systems such as ERP, CRM and SCM, along with secure certified systems, are developed in each data center and may be replicated across other regional data centers. Significant investment in virtualization has typically already been completed or is in progress, and is used to address the operational efficiencies of the data centers, while external market forces continue to change rapidly with product launches, demand changes within each market and region, and seasonal competitive pressures.

Moving to Cloud Computing can potentially redefine the need for each region and business unit to develop certain types of IT service onshore. Common services hosted in a secure Cloud data center offer the possibility of moving to an off-shore shared model for many business units. Individual market and business unit agility is still essential for competitive response, but this can be supported by targeting Cloud Computing services at specific business activity needs. The off-shore move also enables service management and capabilities to be invested in shared regions, further improving the operating model’s organizational efficiency.

Mark Skilton, Director at Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing that achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include the development of new Cloud Computing Model standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models; he also contributed to the second edition of the Handbook of Global Outsourcing and Offshoring through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.


Cloud Conference — and Unconference

By Dr. Chris Harding, The Open Group

The Wednesday of The Open Group Conference in San Diego included a formal Cloud Computing conference stream. This was followed in the evening by an unstructured CloudCamp, which made an interesting contrast.

The Cloud Conference Stream

The Cloud conference stream featured presentations on Architecting for Cloud and Cloud Security, and included a panel discussion on the considerations that must be made when choosing a Cloud solution.

In the first session of the morning, we had two presentations on Architecting for Cloud. Both considered TOGAF® as the architectural context. The first, from Stuart Boardman of Getronics, explored the conceptual difference that Cloud makes to enterprise architecture, and the challenge of communicating an architecture vision and discussing the issues with stakeholders in the subsequent TOGAF® phases. The second, from Serge Thorn of Architecting the Enterprise, looked at the considerations in each TOGAF® phase, but in a more specific way. The two presentations showed different approaches to similar subject matter, which proved a very stimulating combination.

This session was followed by a presentation from Steve Else of EA Principals in which he shared several use cases related to Cloud Computing. Using these, he discussed solution architecture considerations, and put forward the lessons learned and some recommendations for more successful planning, decision-making, and execution.

We then had the first of the day’s security-related presentations. It was given by Omkhar Arasaratnam of IBM and Stuart Boardman of Getronics. It summarized the purpose and scope of the Security for the Cloud and SOA project that is being conducted in The Open Group as a joint project of The Open Group’s Cloud Computing Work Group, the SOA Work Group, and Security Forum. Omkhar and Stuart described the usage scenarios that the project team is studying to guide its thinking, the concepts that it is developing, and the conclusions that it has reached so far.

The first session of the afternoon was started by Ed Harrington, of Architecting the Enterprise, who gave an interesting presentation on current U.S. Federal Government thinking on enterprise architecture, showing clearly the importance of Cloud Computing to U.S. Government plans. The U.S. is a leader in the use of IT for government and administration, so we can expect that its conclusions – that Cloud Computing is already making its way into the government computing fabric, and that enterprise architecture, instantiated as SOA and properly governed, will provide the greatest possibility of success in its implementation – will have a global impact.

We then had a panel session, moderated by Dana Gardner with his usual insight and aplomb, that explored the considerations that must be made when choosing a Cloud solution — custom or shrink-wrapped — and whether different forms of Cloud Computing are appropriate to different industry sectors. The panelists represented different players in the Cloud solutions market – customers, providers, and consultants – so that the topic was covered in depth and from a variety of viewpoints. They were Penelope Gordon of 1Plug Corporation, Mark Skilton of Capgemini, Ed Harrington of Architecting the Enterprise, Tom Plunkett of Oracle, and TJ Virdi of the Boeing Company.

In the final session of the conference stream, we returned to the topic of Cloud Security. Paul Simmonds, a member of the Board of the Jericho Forum®, gave an excellent presentation on de-risking the Cloud through effective risk management, in which he explained the approach that the Jericho Forum has developed. The session was then concluded by Andres Kohn of Proofpoint, who addressed the question of whether data can be more secure in the Cloud, considering public, private and hybrid Cloud environments.

CloudCamp

The CloudCamp was hosted by The Open Group but run as a separate event, facilitated by CloudCamp organizer Dave Nielsen. There were around 150-200 participants, including conference delegates and other people from the San Diego area who happened to be interested in the Cloud.

Dave started by going through his definition of Cloud Computing. Perhaps he should have known better – starting a discussion on terminology and definitions can be a dangerous thing to do with an Open Group audience. He quickly got into a good-natured argument from which he eventually emerged a little bloodied, metaphorically speaking, but unbowed.

We then had eight “lightning talks”. These were five-minute presentations covering a wide range of topics, including how to get started with Cloud (Margaret Dawson, Hubspan), supplier/consumer relationship (Brian Loesgen, Microsoft), Cloud-based geographical mapping (Ming-Hsiang Tsou, San Diego University), a patterns-based approach to Cloud (Ken Klingensmith, IBM), efficient large-scale data processing (Alex Rasmussen, San Diego University), using desktop spare capacity as a Cloud resource (Michael Krumpe, Intelligent Technology Integration), cost-effective large-scale data processing in the Cloud (Patrick Salami, Temboo), and Cloud-based voice and data communication (Chris Matthieu, Tropo).

The participants then split into groups to discuss topics proposed by volunteers. There were eight topics altogether. Some of these were simply explanations of particular products or services offered by the volunteers’ companies. Others related to areas of general interest such as data security and access control, life-changing Cloud applications, and success stories relating to “big data”.

I joined the groups discussing Cloud software development on Amazon Web Services (AWS) and Microsoft Azure. These sessions had excellent information content which would be valuable to anyone wishing to get started in – or already engaged in – software development on these platforms. They also brought out two points of general interest. The first is that the dividing line between IaaS and PaaS can be very thin. AWS and Azure are in theory on opposite sides of this divide; in practice they provide the developer with broadly similar capabilities. The second point is that in practice your preferred programming language and software environment is likely to be the determining factor in your choice of Cloud development platform.

Overall, the CloudCamp was a great opportunity for people to absorb the language and attitudes of the Cloud community, to discuss ideas, and to pick up specific technical knowledge. It gave an extra dimension to the conference, and we hope that this can be repeated at future events by The Open Group.

Cloud and SOA are a topic of discussion at The Open Group Conference, San Diego, which is currently underway.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF® practitioner.


Seeing above the Clouds

By Mark Skilton, Capgemini

Genie out of the bottle

I recently looked back at some significant papers that influenced my thinking on Cloud Computing as part of a review of current strategic trends. A paper published in February 2009 at the University of California, Berkeley, “Above the Clouds: A Berkeley View of Cloud Computing”, stands out as the first of many papers to draw out the issues around the promise of Cloud Computing and the technology barriers to achieving secure, elastic services. The key issue unfolding at that time was the transfer of risk that resulted from moving to a Cloud environment, and the obstacles around security, performance and licensing that would need to be overcome. But the genie was out of the bottle, as early successful adopters could see the cost savings and rapid one-to-many monetization benefits of on-demand services.

Worlds reloaded – Welcome to the era of multiplicity

A second key moment I can recall was the realization that the exchange of services was no longer a simple request and response. For sure, social networks had demonstrated huge communities of collaboration, with online “personas” changing individual and business network interactions. But something else had happened, less obvious but more profound. This change was made most evident by the proliferation of mobile computing, which greatly expanded the original on-premise move to off-premise services. A key paper by Intel Research titled “CloneCloud”, published around that same time, exemplified this shift. Services could be cloned and moved into the Cloud, redefining the real potential of how work gets done using Cloud Computing. The key point was that storage, processing transactions, media streaming or complex calculations no longer had to be executed within a physical device; they could be provided as a service from a remote source, a virtual Cloud service. But more significant was the term “multiplicity” in this concept. We see this every day as we download apps, stream video and transact orders. You could do not just a few but multiple tasks simultaneously, and pick and choose the services and results.

New thinking, new language

This signaled a big shift away from the old style of thinking about business services, which had conditioned us to think of service-oriented requests in static, tiered, rigid ways. Those business processes and services missed this new bigger picture. Just take a look at the phenomenon called hyperlocal services, which offer location-specific on-demand information, or at how crowdsourcing can dramatically transform purchasing choices and collaboration incentives. Traditional ways of measuring, modeling and running business operations underutilize this potential and undervalue what is possible in these new collaborative networks. The new multiplicity-based world of Cloud-enabled networks means you can augment yourself and your company’s assets in ways that change the shape of your industry. What is needed is a new language to describe how this shift feels and works, and how advances in your business portfolio can be realized with these modern ideas, examining current methods and standards of strategy visualization, metrics and design to evolve a new expression of this potential.

Future perfect?

Some two years have passed, and what has been achieved? Certainly we have seen a huge proliferation of services into Cloud hosting environments. Large strategic movements in private data centers seek to develop private Cloud services, bringing together social media and social networking through Cloud technologies. But what is needed now is a new connection between the potential of these technologies and the vision of the Internet, as the growth of social-graph associations and wider communities and ecosystems emerges in the movement’s wake.

With every significant disruptive change comes the need for a new language to help describe the new world, and open standards and industry forums will help drive this. The old language focuses on the previous potential, so a new way to visualize, define and use the new realities can help the big shift towards the potential above the Cloud.

This post was simultaneously published on the BriefingsDirect blog by Dana Gardner.

Mark Skilton, Director at Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing that achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include the development of new Cloud Computing Model standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models; he also contributed to the second edition of the Handbook of Global Outsourcing and Offshoring through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.


Security & architecture: Convergence, or never the twain shall meet?

By Jim Hietala, The Open Group

Our Security Forum chairman, Mike Jerbic, introduced a concept to The Open Group several months ago that is worth thinking a little about. Oversimplifying his ideas a bit, the first point is that much of what’s done in architecture is about designing for intention — that is, thinking about the intended function and goals of information systems, and architecting with these in mind. His second related point has been that in information security management, much of what we do tends to be reactive, and tends to be about dealing with the unintended consequences (variance) of poor architectures and poor software development practices. Consider a few examples:

Signature-based antivirus, which relies upon malware being seen in the wild and captured, and on signatures being distributed to A/V software around the world to pattern-match and stop the specific attack. Highly reactive. The same is true for signature-based IDS/IPS and anomaly-based systems.
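The reactivity is easy to see in a minimal sketch of hash-based signature matching (the payloads and signature list here are hypothetical; real engines also use byte patterns and heuristics, but the dependence on previously captured samples is the same):

```python
import hashlib

# Hypothetical signature database: digests of malware already "seen in the wild".
# Detection can never be more current than this list.
KNOWN_MALWARE_SIGNATURES = {
    hashlib.sha256(b"malicious-payload-v1").hexdigest(),
    hashlib.sha256(b"malicious-payload-v2").hexdigest(),
}


def is_known_malware(file_bytes: bytes) -> bool:
    """Flag a file only if its digest matches a previously captured sample."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_SIGNATURES


print(is_known_malware(b"malicious-payload-v1"))  # True: seen before
print(is_known_malware(b"malicious-payload-v3"))  # False: novel variant, undetected
```

A variant that differs by a single byte produces a different digest and sails through, which is exactly the window that zero-day attacks exploit.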

Data Loss (or Leak) Prevention, which for the most part tries to spot sensitive corporate information being exfiltrated from a corporate network. Also very reactive.
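The same reactive posture shows up in DLP’s core mechanism: pattern-matching outbound content against known shapes of sensitive data. A toy sketch, with hypothetical patterns (real products add document fingerprinting, dictionaries and statistical classifiers):

```python
import re

# Hypothetical DLP patterns for sensitive data leaving the network.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}


def scan_outbound(message: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound message."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(message)]


print(scan_outbound("Invoice attached, card 4111-1111-1111-1111"))  # ['credit_card']
print(scan_outbound("Lunch at noon?"))                              # []
```

Note that the scan only fires on data it already knows the shape of, and only as the data is leaving; it does nothing about why the sensitive data was reachable in the first place.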

Vulnerability management, which is almost entirely reactive. The cycle of “Scan my systems, find vulnerabilities, patch or remediate, and repeat” exists entirely to find the weak spots in our environments. This cycle almost ensures that more variance will be headed our way in the future, as each new patch potentially brings with it uncertainty and variance in the form of new bugs and vulnerabilities.
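The cycle, and the way each pass seeds the next, can be caricatured in a few lines (hosts, CVE identifiers and the regression are all hypothetical):

```python
# Minimal sketch of the reactive "scan, find, patch, repeat" cycle.
systems = {
    "web-01": {"CVE-2011-0001", "CVE-2011-0002"},
    "db-01": {"CVE-2011-0003"},
}
# Each patch may carry its own new variance: here, fixing one CVE
# introduces a regression on the same host.
patch_side_effects = {"CVE-2011-0002": {"CVE-2012-0099"}}


def scan(inventory):
    """Step 1: find the weak spots -- report known vulnerabilities per system."""
    return {host: sorted(vulns) for host, vulns in inventory.items() if vulns}


def remediate(inventory):
    """Step 2: patch what the scan found, possibly introducing new bugs."""
    for host, vulns in inventory.items():
        introduced = set()
        for cve in vulns:
            introduced |= patch_side_effects.get(cve, set())
        inventory[host] = introduced  # old issues closed, new variance arrives


findings = scan(systems)  # scan and find
remediate(systems)        # patch or remediate
print(scan(systems))      # repeat: {'web-01': ['CVE-2012-0099']}
```

The second scan is never empty for long: the loop exists precisely because the environment keeps generating variance faster than architecture removes it.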

The fact that each of these security technology categories even exists has everything to do with poor architectural decisions made in years gone by, or with inadequate ongoing software development and QA practices.

Intention versus variance. Architects tend to be good at the former; security professionals have (of necessity) had to be good at managing the consequences of the latter.

Can the disciplines of architecture and information security do a better job of co-existence? What would that look like? Can we get to the point where security is truly “built in” versus “bolted on”?

What do you think?

P.S. The Open Group has numerous initiatives in the area of security architecture. Look for an updated Enterprise Security Architecture publication from us in the next 30 days; plus we have ongoing projects to align TOGAF™ and SABSA, and to develop a Cloud Security Reference Architecture. If there are other areas where you’d like to see guidance developed in the area of security architecture, please contact us.

An IT security industry veteran, Jim Hietala is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.
