Tag Archives: IaaS

Open Group Security Gurus Dissect the Cloud: Higher or Lower Risk

By Dana Gardner, Interarbor Solutions

For some, any move to the Cloud — at least the public Cloud — means a higher risk for security.

For others, relying more on a public Cloud provider means better security. There’s more of a concentrated and comprehensive focus on security best practices that are perhaps better implemented and monitored centrally in the major public Clouds.

And so which is it? Is Cloud a positive or negative when it comes to cyber security? And what of hybrid models that combine public and private Cloud activities, how is security impacted in those cases?

We posed these and other questions to a panel of security experts at last week’s Open Group Conference in San Francisco to deeply examine how Cloud and security come together — for better or worse.

The panel: Jim Hietala, Vice President of Security for The Open Group; Stuart Boardman, Senior Business Consultant at KPN, where he co-leads the Enterprise Architecture Practice as well as the Cloud Computing Solutions Group; Dave Gilmour, an Associate at Metaplexity Associates and a Director at PreterLex Ltd.; and Mary Ann Mezzapelle, Strategist for Enterprise Services and Chief Technologist for Security Services at HP.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Is this notion of going outside the firewall fundamentally a good or bad thing when it comes to security?

Hietala: It can be either. Talking to security people in large companies, frequently what I hear is that with adoption of some of those services, their policy is either let’s try and block that until we get a grip on how to do it right, or let’s establish a policy that says we just don’t use certain kinds of Cloud services. Data I see says that that’s really a failed strategy. Adoption is happening whether they embrace it or not.

The real issue is how you do that in a planned, strategic way, as opposed to letting services like Dropbox and other kinds of Cloud Collaboration services just happen. So it’s really about getting some forethought around how do we do this the right way, picking the right services that meet your security objectives, and going from there.

Gardner: Is Cloud Computing good or bad for security purposes?

Boardman: It’s simply a fact, and it’s something that we need to learn to live with.

What I’ve noticed through my own work is that a lot of enterprise security policies were written before we had Cloud, back when we had private web applications that you might call Cloud these days, and the policies tend to be directed toward staff’s private use of the Cloud.

Then you run into problems, because you read something in policy — and if you interpret that as meaning Cloud, it means you can’t do it. And if you say it’s not Cloud, then you haven’t got any policy about it at all. Enterprises need to sit down and think, “What would it mean to us to make use of Cloud services?” and to ask as well, “What are we likely to do with Cloud services?”

Gardner: Dave, is there an added impetus for Cloud providers to be somewhat more secure than enterprises?

Gilmour: It depends on the enterprise that they’re actually supplying to. If you’re in a heavily regulated industry, you have a different view of what levels of security you need and want, and therefore what you’re going to impose contractually on your Cloud supplier. That means that the different Cloud suppliers are going to have to attack different industries with different levels of security arrangements.

The problem there is that the penalty regimes are always going to say, “Well, if the security lapses, you’re going to get off with two months of not paying” or something like that. That kind of attitude isn’t going to work for this kind of security.

What I don’t understand is exactly how secure Cloud provision is going to be enabled and governed under tight regimes like that.

An opportunity

Gardner: Jim, we’ve seen in the public sector that governments are recognizing that Cloud models could be a benefit to them. They can reduce redundancy. They can control and standardize. They’re putting in place some definitions, implementation standards, and so forth. Is the vanguard of correct Cloud Computing with security in mind being managed by governments at this point?

Hietala: I’d say that they’re at the forefront. Some of these shared government services, where they stand up Cloud and make it available to lots of different departments in a government, have the ability to do what they want from a security standpoint, not relying on a public provider, and get it right from their perspective and meet their requirements. They then take that consistent service out to lots of departments that may not have had the resources to get IT security right, when they were doing it themselves. So I think you can make a case for that.

Gardner: Stuart, being involved with standards activities yourself, does moving to the Cloud provide a better environment for managing, maintaining, instilling, and improving on standards than enterprise by enterprise by enterprise? As I say, we’re looking at a larger pool and therefore that strikes me as possibly being a better place to invoke and manage standards.

Boardman: Dana, that’s a really good point, and I do agree. Also, in the security field, we have an advantage in the sense that there are quite a lot of standards out there to deal with interoperability, exchange of policy, exchange of credentials, which we can use. If we adopt those, then we’ve got a much better chance of getting those standards used widely in the Cloud world than in an individual enterprise, with an individual supplier, where it’s not negotiation, but “you use my API, and it looks like this.”

Having said that, there are a lot of well-known Cloud providers who do not currently support those standards and they need a strong commercial reason to do it. So it’s going to be a question of the balance. Will we get enough specific weight of people who are using it to force the others to come on board? And I have no idea what the answer to that is.

Gardner: We’ve also seen that cooperation is an important aspect of security, knowing what’s going on on other people’s networks, being able to share information about what the threats are, remediation, working to move quickly and comprehensively when there are security issues across different networks.

Is that a case, Dave, where having a Cloud environment is a benefit? That is to say, more sharing about what’s happening across networks among the many companies that are clients or customers of a Cloud provider, rather than the perhaps spotty sharing that happens company by company?

Gilmour: There is something to be said for that, Dana. Part of the issue, though, is that companies are individually responsible for their data. They’re individually responsible to a regulator or to their clients for their data. The question then becomes that as soon as you start to share a certain aspect of the security, you’re de facto sharing the weaknesses as well as the strengths.

So it’s a two-edged sword. One of the problems we have is that until we mature a little bit more, we won’t be able to actually see which side is the sharpest.

Gardner: So our premise that Cloud is good and bad for security is holding up, but I’m wondering whether the same things that make you a risk in a private setting — poor adherence to standards, no good governance, too many technologies that are not being measured and controlled, not instilling good behavior in your employees and then enforcing that — wouldn’t this be the same either way? Is it really Cloud or not Cloud, or is it good security practices or not good security practices? Mary Ann?

No accountability

Mezzapelle: You’re right. It’s a little bit of that “garbage in, garbage out,” if you don’t have the basic things in place in your enterprise, which means the policies, the governance cycle, the audit, and the tracking, because it doesn’t matter if you don’t measure it and track it, and if there is no business accountability.

Dave said it — each individual company is responsible for its own security, but I would say that it’s the business owner that’s responsible for the security, because they’re the ones that ultimately have to answer that question for themselves in their own business environment: “Is it enough for what I have to get done? Is the agility more important than the flexibility in getting to some systems or the accessibility for other people, as it is with some of the ubiquitous computing?”

So you’re right. If it’s an ugly situation within your enterprise, it’s going to get worse when you do outsourcing, out-tasking, or anything else you want to call it within the Cloud environment. One of the things that we say is that organizations not only need to know their technology, but they have to get better at relationship management, understanding who their partners are, and being able to negotiate and manage that effectively through a series of relationships, not just transactions.

Gardner: If data and sharing data is so important, it strikes me that the Cloud component is going to be part of that, especially if we’re dealing with business processes across organizations, doing joins, comparing and contrasting data, crunching it and sharing it, making data actually part of the business, a revenue-generation activity; all of that seems prominent and likely.

So to you, Stuart, what is the issue now with data in the Cloud? Is it good, bad, or just the same double-edged sword, and it just depends how you manage and do it?

Boardman: Dana, I don’t know whether we really want to be putting our data in the Cloud, so much as putting the access to our data into the Cloud. There are all kinds of issues you’re going to run up against, as soon as you start putting your source information out into the Cloud, not the least privacy and that kind of thing.

A bunch of APIs

What you can do is simply say, “What information do I have that might be interesting to people? If it’s a private Cloud in a large organization elsewhere in the organization, how can I make that available to share?” Or maybe it’s really going out into public. What a government, for example, can be thinking about is making information services available, not just what you go and get from them that they already published. But “this is the information,” a bunch of APIs if you like. I prefer to call them data services, and to make those available.

So, if you do it properly, you have a layer of security in front of your data. You’re not letting people come in and do joins across all your tables. You’re providing information. That does require you then to engage your users in what it is that they want and what they want to do. Maybe there are people out there who want to take a bit of your information and a bit of somebody else’s and mash it together, provide added value. That’s great. Let’s go for that and not try and answer every possible question in advance.
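As a rough illustration of the data-services idea Boardman describes, here is a minimal Python sketch. The dataset, the field names, and the sales_by_region service are hypothetical, invented for this example; the point is only that consumers receive curated information through a narrow service rather than join access to the underlying tables.

    from collections import defaultdict

    # Internal source data stays behind the service boundary and is never
    # exposed directly to consumers.
    _ORDERS = [
        {"customer_id": 101, "region": "EMEA", "amount": 250.0},
        {"customer_id": 102, "region": "APAC", "amount": 75.0},
        {"customer_id": 103, "region": "EMEA", "amount": 120.0},
    ]

    def sales_by_region():
        """A 'data service': return only the aggregate a consumer needs,
        never the raw rows or the customer identifiers."""
        totals = defaultdict(float)
        for order in _ORDERS:
            totals[order["region"]] += order["amount"]
        return dict(totals)

    if __name__ == "__main__":
        print(sales_by_region())  # {'EMEA': 370.0, 'APAC': 75.0}

In a real deployment a function like this would sit behind an authenticated API, so only the curated output, not the source data, ever leaves the organization’s control.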

Gardner: Dave, do you agree with that, or do you think that there is a place in the Cloud for some data?

Gilmour: There’s definitely a place in the Cloud for some data. I get the impression that something like the insurance industry is going to come out of this, where you’ll have a secondary Cloud. You’ll have secondary providers who will provide to the front-end providers. They might do things like archiving and that sort of thing.

Now, if you have that situation where your contractual relationship is two steps away, then you have to be very confident and certain of your cloud partner, and it has to actually therefore encompass a very strong level of governance.

The other issue you have is that you’ve got then the intersection of your governance requirements with that of the cloud provider’s governance requirements. Therefore you have to have a really strongly — and I hate to use the word — architected set of interfaces, so that you can understand how that governance is actually going to operate.

Gardner: Wouldn’t data perhaps be safer in a Cloud than on a poorly managed network?

Mezzapelle: There is data in the Cloud and there will continue to be data in the Cloud, whether you want it there or not. The best organizations are going to start understanding that they can’t control it with that perimeter-like approach that we’ve been talking about getting away from for the last five or seven years.

So what we want to talk about is data-centric security, where you understand, based on role or context, who is going to access the information and for what reason. I think there is a better opportunity for services like storage, whether it’s for archiving or for near term use.
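Mezzapelle’s data-centric, role-and-context model can be sketched in a few lines of Python. The roles, purposes, and sensitivity levels below are invented for illustration only; the idea is simply that every access decision weighs who is asking, for what stated reason, and how sensitive the requested data is.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        role: str         # e.g. "marketing_analyst"
        purpose: str      # the stated reason for access
        sensitivity: str  # classification of the requested data

    # Hypothetical policy: (role, purpose) -> highest sensitivity allowed
    _POLICY = {
        ("marketing_analyst", "campaign_reporting"): "internal",
        ("payroll_processor", "payroll_run"): "confidential",
    }

    _LEVELS = ["public", "internal", "confidential", "restricted"]

    def is_allowed(req: AccessRequest) -> bool:
        """Grant access only if this role, for this purpose, is cleared
        for data of this sensitivity."""
        ceiling = _POLICY.get((req.role, req.purpose), "public")
        return _LEVELS.index(req.sensitivity) <= _LEVELS.index(ceiling)

    print(is_allowed(AccessRequest("marketing_analyst", "campaign_reporting", "internal")))    # True
    print(is_allowed(AccessRequest("marketing_analyst", "campaign_reporting", "restricted")))  # False

The same decision logic applies whether the data sits inside the perimeter or with a Cloud provider, which is the point of moving from perimeter-centric to data-centric security.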

There are also other services that you don’t want to have to pay for 12 months out of the year, but that you might need independently. For instance, when you’re running a marketing campaign, you already share your data with some of your marketing partners. Or if you’re doing your payroll, you’re sharing that data through some of the national providers.

Data in different places

So there already is a lot of data in a lot of different places, whether you want Cloud or not, but the context is, it’s not in your perimeter, under your direct control, all of the time. The better you get at managing it wherever it is specific to the context, the better off you will be.

Hietala: It’s a slippery slope [when it comes to customer data]. That’s the most dangerous data to stick out in a Cloud service, if you ask me. If it’s personally identifiable information, then you get the privacy concerns that Stuart talked about. So to the extent you’re looking at putting that kind of data in a Cloud, look at the Cloud service and try to determine whether you can apply some encryption and the sensible security controls to ensure that if that data gets loose, you’re not ending up in the headlines of The Wall Street Journal.
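One way to read Hietala’s encryption point is to keep the keys on premises and hand the provider only ciphertext. Below is a minimal Python sketch using the third-party cryptography package; the record contents are fictional, and key management (rotation, escrow, access control) is deliberately out of scope here.

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # kept on premises, never sent to the provider
    cipher = Fernet(key)

    record = b'{"name": "Jane Doe", "ssn": "000-00-0000"}'  # fictional PII
    encrypted = cipher.encrypt(record)

    # Only the ciphertext is handed to the Cloud store; without the key,
    # a leaked copy reveals nothing useful.
    print(encrypted)
    print(cipher.decrypt(encrypted) == record)  # True when read back with the key

Whether this approach is sufficient depends on the sensitivity of the data and the controls the provider already offers; it is a sketch of the idea, not a compliance recipe.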

Gardner: Dave, you said there will be different levels on a regulatory basis for security. Wouldn’t that also play with data? Wouldn’t there be different types of data and therefore a spectrum of security and availability to that data?

Gilmour: You’re right. If we come back to Facebook as an example, Facebook data, even if it’s data about our known customers, is stuff that they have put out there of their own will. The data that they give us, they have given to us for a purpose, and it is not for us then to distribute that data or make it available elsewhere. The fact that it may be the same data is not relevant to the discussion.

Three-dimensional solution

That’s where I think we are going to end up with not just one layer or two layers. We’re going to end up with a sort of a three-dimensional solution space. We’re going to work out exactly which chunk we’re going to handle in which way. There will be significant areas where these things cross over.

The other thing we shouldn’t forget is that data includes our software, and that’s something that people forget. Software nowadays is out in the Cloud, under current ways of running things, and you don’t even always know where it’s executing. So if you don’t know where your software is executing, how do you know where your data is?

It’s going to have to be just handled one way or another, and I think it’s going to be one of these things where it’s going to be shades of gray, because it cannot be black and white. The question is going to be, what’s the threshold shade of gray that’s acceptable?

Gardner: Mary Ann, to this notion of the different layers of security for different types of data, is there anything happening in the market that you’re aware of that’s already moving in that direction?

Mezzapelle: The experience that I have is mostly in some of the business frameworks for particular industries, like healthcare and what it takes to comply with the HIPAA regulation, or in the financial services industry, or in consumer products where you have to comply with the PCI regulations.

There has continued to be an issue around information lifecycle management, which is categorizing your data. Within a company, you might have had a document that you coded private, confidential, top secret, or whatever. So you might have had three or four levels for a document.

You’ve already talked about how complex it’s going to be as you move into trying to understand, not only for that data, that the name Mary Ann Mezzapelle happens to be in five or six different business systems across over 100 instances around the world.

That’s the importance of something like an Enterprise Architecture that can help you understand that you’re not just talking about the technology components, but the information, what they mean, and how they are prioritized or critical to the business, which sometimes comes up in a business continuity plan from a system point of view. That’s where I’ve advised clients on where they might start looking at how they connect the business criticality with a piece of information.

One last thing. Those regulations don’t necessarily mean that you’re secure. They make for good basic health, but that doesn’t mean that it’s ultimately protected. You have to do a risk assessment based on your own environment and the bad actors that you expect and the priorities based on that.

Leaving security to the end

Boardman: I just wanted to pick up here, because Mary Ann spoke about Enterprise Architecture. One of my bugbears — and I call myself an enterprise architect — is that we have a terrible habit of leaving security to the end. We don’t architect security into our Enterprise Architecture. It’s a techie thing, and we’ll fix that at the back. There are also people in the security world who are techies and they think that they will do it that way as well.

I don’t know how long ago it was published, but there was an activity to look at bringing the SABSA Methodology from security together with TOGAF®. There was a white paper published a few weeks ago.

The Open Group has been doing some really good work on bringing security right in to the process of EA.

Hietala: In the next version of TOGAF, work on which has already started, there will be a whole emphasis on making sure that security is better represented in some of the TOGAF guidance. That’s ongoing work here at The Open Group.

Gardner: As I listen, it sounds as if the in-the-Cloud-or-out-of-the-Cloud security continuum is perhaps the wrong way to look at it. If you have a lifecycle approach to services and to data, then you’ll have a way in which you can approach data uses for certain instances, certain requirements, and that would then apply to a variety of different private Cloud, public Cloud, and hybrid Cloud scenarios.

Is that where we need to go, perhaps have more of this lifecycle approach to services and data that would accommodate any number of different scenarios in terms of hosting, access, and availability? The Cloud seems inevitable. So what we really need to focus on are the services and the data.

Boardman: That’s part of it. That needs to be tied in with the risk-based approach. So if we have done that, we can then pick up on that information and we can look at a concrete situation, what have we got here, what do we want to do with it. We can then compare that information. We can assess our risk based on what we have done around the lifecycle. We can understand specifically what we might be thinking about putting where and come up with a sensible risk approach.

You may come to the conclusion in some cases that the risk is too high and the mitigation too expensive. In others, you may say, no, because we understand our information and we understand the risk situation, we can live with that, it’s fine.

Gardner: It sounds as if we are coming at this as an underwriter for an insurance company. Is that the way to look at it?

Current risk

Gilmour: That’s eminently sensible. You have the mortality tables, you have the current risk, and you just work the two together and work out what’s the premium. That’s probably a very good paradigm to give us guidance as to how we should intellectually approach the problem.

Mezzapelle: One of the problems is that we don’t have those actuarial tables yet. That’s a little bit of an issue for a lot of people when they talk about, “I’ve got $100 to spend on security. Where am I going to spend it this year? Am I going to spend it on firewalls? Am I going to spend it on information lifecycle management assessment? What am I going to spend it on?” That’s some of the research that we have been doing at HP: trying to get that into something that’s more of a statistic.

So, when you have a particular project that does a certain kind of security implementation, you can see what the business return on it is and how it actually lowers risk. We found that it’s better to spend your money on getting a better system to patch your systems than it is to do some other kind of content filtering or something like that.

Gardner: Perhaps what we need is the equivalent of an Underwriters Laboratories (UL) for permeable organizational IT assets, where the security stamp of approval comes in high or low. Then, you could get your insurance insight. Maybe that’s something for The Open Group to look into. Any thoughts about how standards and a consortium approach would come into that?

Hietala: I don’t know about the UL for all security things. That sounds like a risky proposition.

Gardner: It could be fairly popular and remunerative.

Hietala: It could.

Mezzapelle: An unending job.

Hietala: I will say we have one active project in the Security Forum that is looking at trying to allow organizations to measure and understand risk dependencies that they inherit from other organizations.

So if I’m outsourcing a function to XYZ corporation, it’s about being able to measure what risk I’m inheriting from them by virtue of them doing some IT processing for me. It could be a Cloud provider or it could be somebody doing a business process for me, whatever. So there’s work going on there.

I heard just last week about an NSF-funded project here in the U.S. to do the same sort of thing, to look at trying to measure risk in a predictable way. So there are things going on out there.

Gardner: We have to wrap up, I’m afraid, but Stuart, it seems as if currently it’s the larger public Cloud providers, the likes of Amazon and Google among others, that might be playing the role of all of these entities we are talking about. They are their own self-insurer. They are their own underwriter. They are their own risk assessor, like a UL. Do you think that’s going to continue to be the case?

Boardman: No, I think that as Cloud adoption increases, you will have a greater weight of consumer organizations who will need to do that themselves. You look at the question that it’s not just responsibility, but it’s also accountability. At the end of the day, you’re always accountable for the data that you hold. It doesn’t matter where you put it and how many other parties they subcontract that out to.

The weight will change

So there’s a need to have that, and as the adoption increases, there’s less fear and more, “Let’s do something about it.” Then, I think the weight will change.

Plus, of course, there are other parties coming into this world, the world that Amazon has created. I’d imagine that HP is probably one of them as well, but all the big names in IT are moving in here, and I suspect that for those companies there’s also a differentiator in knowing how to do this properly, given their history of enterprise involvement.

So yeah, I think it will change. That’s no offense to Amazon, etc. I just think that the balance is going to change.

Gilmour: Yes. I think that’s how it has to go. The question that then arises is, who is going to police the policeman and how is that going to happen? Every company is going to be using the Cloud. Even the Cloud suppliers are using the Cloud. So how is it going to work? It’s one of these ever-decreasing circles.

Mezzapelle: At this point, I think it’s going to be more evolution than revolution, but I’m also one of the people who’ve been in that part of the business — IT services — for the last 20 years and have seen it morph in a little bit different way.

Stuart is right that there’s going to be a convergence of the consumer-driven, cloud-based model, which Amazon and Google represent, with an enterprise approach that corporations like HP are representing. It’s somewhere in the middle where we can bring the service level commitments, the options for security, the options for other things that make it more reliable and risk-averse for large corporations to take advantage of it.

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.


Filed under Cloud, Cloud/SOA, Conference, Cybersecurity, Information security, Security Architecture

SOCCI: Behind the Scenes

By E.G. Nadhan, HP

Cloud Computing standards, like other standards, go through a series of evolutionary phases similar to the ones I outlined in the Top 5 phases of IaaS standards evolution. IaaS standards, in particular, take longer than their SaaS and PaaS counterparts because a balance is required between service orientation and the core infrastructure components in Cloud Computing.

This balance is why today’s announcement of the release of the industry’s first technical standard, Service Oriented Cloud Computing Infrastructure (SOCCI), is significant.

As one of the co-chairs of this project, here is some insight into the manner in which The Open Group went about creating the definition of this standard:

  • Step One: Identify the key characteristics of service orientation, as well as those for the cloud as defined by the National Institute of Standards and Technology (NIST). Analyze these characteristics and the resulting synergies through the application of service orientation in the cloud. Compare and contrast their evolution from the traditional environment through service orientation to the Cloud.
  • Step Two: Identify the key architectural building blocks that enable the Operational Systems Layer of the SOA Reference Architecture and the Cloud Reference Architecture that is in progress.
  • Step Three: Map these building blocks across the architectural layers while representing the multi-faceted perspectives of various viewpoints including those of the consumer, provider and developer.
  • Step Four: Define a Motor Cars in the Cloud business scenario: You, the consumer, are downloading auto-racing videos through an environment managed by a Service Integrator, which requires the use of services for software, platform and infrastructure along with traditional technologies. Provide a behind-the-curtains perspective on the business scenario where the SOCCI building blocks slowly but steadily come to life.
  • Step Five: Identify the key connection points with the other Open Group projects in the areas of architecture, business use cases, governance and security.

The real test of a standard is in its breadth of adoption. This standard can be used in multiple ways by the industry at large in order to ensure that the architectural nuances are comprehensively addressed. It could be used to map existing Cloud-based deployments to a standard architectural template. It can also serve as an excellent set of Cloud-based building blocks that can be used to build out a new architecture.

Have you taken a look at this standard? If not, please do so. If so, where and how do you think this standard could be adopted? Are there ways that the standard can be improved in future releases to make it better suited for broader adoption? Please let me know your thoughts.

This blog post was originally posted on HP’s Grounded in the Cloud Blog.

HP Distinguished Technologist E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project.


Filed under Cloud, Cloud/SOA, Semantic Interoperability, Service Oriented Architecture, Standards

First Technical Standard for Cloud Computing – SOCCI

By E.G. Nadhan, HP

The Open Group just announced the availability of its first Technical Standard for the Cloud – Service Oriented Cloud Computing Infrastructure Framework (SOCCI), which outlines the concepts and architectural building blocks necessary for infrastructures to support SOA and Cloud initiatives. HP has played a leadership role in the definition and evolution of this standard within The Open Group.


As a platinum member of The Open Group, HP’s involvement started with the leadership of the Service Oriented Infrastructure project that I helped co-chair. As the Cloud Computing Working Group started taking shape, I suggested expanding this project into the working group, which resulted in the formation of the Service Oriented Cloud Computing Infrastructure project. This project was co-chaired by Tina Abdollah of IBM and myself and operated under the auspices of both the SOA and Cloud Computing Working Groups.

Infrastructure has been traditionally provisioned in a physical manner. With the evolution of virtualization technologies and application of service-orientation to infrastructure, it can now be offered as a service. SOCCI is the realization of an enabling framework of service-oriented components for infrastructure to be provided as a service in the cloud.

“Service Oriented Cloud Computing Infrastructure (SOCCI) is a classic intersection of multiple paradigms in the industry – infrastructure virtualization, service-orientation and the cloud – an inevitable convergence,” said Tom Hall, Global Product Marketing Manager, Cloud and SOA Applications, HP Enterprise Services. “HP welcomes the release of the industry’s first cloud computing standard by The Open Group. This standard provides a strong foundation for HP and The Open Group to work together to evolve additional standards in the SOA and Cloud domains.”

This standard can be leveraged in one or more of the following ways:

  • Comprehend service orientation and Cloud synergies
  • Extend adoption of traditional and service-oriented infrastructure in the Cloud
  • Leverage consumer, provider and developer viewpoints
  • Incorporate SOCCI building blocks into Enterprise Architecture
  • Implement Cloud-based solutions using different infrastructure deployment models
  • Realize business solutions referencing the SOCCI Business Scenario
  • Apply Cloud governance considerations and recommendations

The Open Group also announced the availability of the SOA Reference Architecture, a blueprint for creating and evaluating SOA solutions.

Standards go through a series of evolution phases, as I outline in my post on Evolution of IaaS standards. The announcement of the SOCCI Technical Standard will give some impetus to the evolution of IaaS standards in the Cloud, which currently sit somewhere between the experience and consensus phases.

It was a very positive experience co-chairing the evolution of the SOCCI standard within The Open Group, working with other member companies from several enterprises with varied perspectives.

Have you taken a look at this standard? If not, please do so. And for those who have, where and how do you think this standard could be adopted? Are there ways that the standard can be improved in future releases to make it better suited for broader adoption? Please let me know!

This blog post was originally posted on HP’s Enterprise Services Blog.

HP Distinguished Technologist E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project.


Filed under Cloud, Cloud/SOA, Service Oriented Architecture, Standards

Cloud Computing predictions for 2012

By The Open Group Cloud Work Group Members 

With 2012 fast approaching, Cloud Computing will remain a hot topic for IT professionals everywhere. The Open Group Cloud Work Group worked on various initiatives in 2011, including the Cloud Computing Survey, which explored the business impact and primary drivers for Cloud within organizations, and the release of Cloud Computing for Business, a guide that examines how enterprises can derive the greatest business benefits from Cloud Computing from a business process standpoint.

As this year comes to an end, here are a few predictions from various Cloud Work Group members.

Non-IT executives will increasingly use the term “Cloud” in regular business conversations

By Penelope Gordon, 1 Plug

In 2012, the number of non-IT business executives seeking ways to leverage Cloud will increase, and consequently references to Cloud Computing will increasingly appear in general business publications.

This increase in Cloud references will in part be due to the availability of consumer-oriented Cloud services such as email and photo sharing. For example, the October 2011 edition of the Christian Science Monitor included an article titled “Five things you need to know about ‘the cloud’” by Chris Gaylord that discussed Cloud services in the same vein as mobile phone capabilities. Another factor behind the increase, unintentionally highlighted in this article, is the overuse and consequent dilution of the term “Cloud”: Web services and applications running on Cloud infrastructure are not necessarily themselves Cloud services.

The most important factor behind the increase will be the relevance of Cloud – especially the SaaS, BPaaS, and cloud-enabled BPO variants – to these executives. In contrast to SOA, Cloud Computing buying decisions related to business process enablement can be very granular and incremental and can thus be made independently of the IT Department – not that I advocate bypassing IT input. Good governance ensures both macro-level optimization and interoperability.

New business models in monetizing your Information-as-a-Service

By Mark Skilton, Capgemini

Personal data is rapidly becoming less restricted to individual control and management as we see exponential growth in the use of digital media and social networking to exchange ideas, conduct business and enable whole markets, products and services to be accessible. This has significant ramifications not only for individuals and organizations to maintain security and protection over what is public and private; it also represents a huge opportunity to understand both small and big data and the “interstitial connecting glue” – the metadata within and at the edge of Clouds that are like digital smoke trails of online community activities and behaviors.

At the heart of this is the “value of information” to organizations that can extract and understand how to maximize this information and, in turn, monetize it. This can range from simply profiling customers who “like” products and services to creating secure backup Cloud services that can be retrieved in times of need and in support of emergency services. The point is that new metadata combinations are possible through the aggregation of data inside and outside of organizations to create new value.

There are many new opportunities to create new business models that recognize this new wave of Information-as-a-Service (IaaS) as the Cloud moves further into new value model territories.

Small and large enterprise experiences when it comes to Cloud

By Pam Isom, IBM

The Cloud Business Use Case (CBUC) team is in the process of developing and publishing a paper that is focused on the subject of Cloud for Small-Medium-Enterprises (SMEs). The CBUC team is the same team that contributed to the book Cloud Computing for Business with a concerted focus on Cloud business benefits, use cases, and justification. When it comes to small and large enterprise comparisons of Cloud adoption, some initial observations are that the increased agility associated with Cloud helps smaller organizations with rapid time-to-market and, as a result, attracts new customers in a timely fashion. This faster time-to-market not only helps SMEs gain new customers who otherwise would have gone to competitors, but prevents those competitors from becoming stronger – enhancing the SME’s competitive edge. Larger enterprises might be more willing to have a dedicated IT organization that is backed with support staff and they are more likely to establish full-fledged data center facilities to operate as a Cloud service provider in both a public and private capacity, whereas SMEs have lower IT budgets and tend to focus on keeping their IT footprint small, seeking out IT services from a variety of Cloud service providers.

A recent study conducted by Microsoft surveyed more than 3,000 small businesses across 16 countries with the objective of understanding whether they have an appetite for adopting Cloud Computing. One of the findings was that within three years, “43 percent of workloads will become paid cloud services.” This is one of many statistics that stress the significance of Cloud for small businesses in this example, and the predictions for larger enterprises as Cloud providers and consumers are just as profound.

Penelope Gordon specializes in adoption strategies for emerging technologies, and portfolio management of early stage innovation. While with IBM, she led innovation, strategy, and product development efforts for all of IBM’s product and service divisions; and helped to design, implement, and manage one of the world’s first public clouds.

Mark Skilton is Global Director for Capgemini, Strategy CTO Group, Global Infrastructure Services. His role includes strategy development, competitive technology planning including Cloud Computing and on-demand services, global delivery readiness and creation of Centers of Excellence. He is currently author of the Capgemini University Cloud Computing Course and is responsible for Group Interoperability strategy.

Pamela K. Isom is the Chief Architect for complex cloud integration and application innovation services such as serious games. She joined IBM in June 2000 and currently leads efforts that drive Smarter Planet efficiencies throughout client enterprises using, and oftentimes enhancing, their Enterprise Architecture (EA). Pamela is a Distinguished Chief/Lead IT Architect with The Open Group, where she leads the Cloud Business Use Cases Work Group.


Filed under Cloud, Cloud/SOA, Enterprise Transformation, Semantic Interoperability

Facebook – the open source data center

By Mark Skilton, Capgemini

The recent announcement by Facebook of its decision to publish its data center specifications as open source illustrates a new emerging trend in commoditization of compute resources.

Key features of the new facility include:

  • The Oregon facility announced to the world press in April 2011 is 150,000 sq. ft., a $200 million investment. At any one time, the total of Facebook’s 500-million-user capacity could be hosted in this one site. Another Facebook data center facility is scheduled to open in 2012 in North Carolina. There may possibly be future data centers in Europe or elsewhere if required by the Palo Alto, Calif.-based company.
  • The Oregon data center enables Facebook to reduce its energy consumption per unit of computing power by 38%
  • The data center has a PUE of 1.07, well below the EPA-defined state-of-the-art industry average of 1.5. This means 93% of the energy from the grid makes it into every Open Compute Server (a quick calculation follows this list).
  • Removed centralized chillers, eliminated traditional inline UPS systems and removed a 480V to 208V transformation
  • Ethernet-powered LED lighting and passive cooling infrastructure reduce energy spent on running the facility
  • New second-level “evaporative cooling system”, a multi-layer method of managing room temperature and air filtration
  • Launch of the “Open Compute Project” to share the data center design as Open Source. The aim is to encourage collaboration of data center design to improve overall energy consumption and environmental impact. Other observers also see this as a way of reducing component sourcing costs further, as most of the designs are low-cost commodity hardware
  • The servers are 38% more efficient and 24% lower cost
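The 93% figure in the PUE bullet above follows directly from the definition of Power Usage Effectiveness; a rough calculation, ignoring rounding:

    PUE = total facility energy / IT equipment energy

    fraction of grid energy reaching the IT equipment ≈ 1 / PUE = 1 / 1.07 ≈ 0.93, i.e. about 93%

The remaining 7% or so is the overhead of cooling, power distribution, lighting and other facility loads.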

While this can be simply described as a major Cloud services company seeing its data centers as a commodity and non-core to its services business, it is perhaps a sign of a more significant shift in the Cloud Computing industry in general.

Facebook making its data centers specifications open source demonstrates that IaaS (Infrastructure as a Service) utility computing is now seen as a commodity and non-differentiating to companies like Facebook and anyone else who wants cheap compute resources.

What becomes essential is that the efficiencies of operation in provisioning and delivering these services are now the key differentiator.

Furthermore, it can be seen that it’s a trend towards what you do with the IaaS storage and compute. How we architect solutions that develop software as a service (SaaS) capabilities becomes the essential differentiator. It is how business models and consumers can maximize these benefits, which increases the importance of architecture and solutions for Cloud. This is key for The Open Group’s vision of “Boundaryless Information Flow™”. It’s how Cloud architecture services are architected, and how architects who design effective Cloud services that use these commodity Cloud resources and capabilities make the difference. Open standards and interoperability are critical to the success of this. How solutions and services are developed to build private, public or hybrid Clouds are the new differentiation. This does not ignore the fact that world-class data centers and infrastructure services are vital of course, but it’s now the way they are used to create value that becomes the debate.

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include the publication of widely syndicated Return on Investment models on Cloud Computing that achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include development of new Cloud Computing Model standards and best practices on the subject of Cloud Computing impact on Outsourcing and Off-shoring models, and he contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring published through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.


Filed under Cloud/SOA

PODCAST: Cloud Computing panel forecasts transition phase for Enterprise Architecture

By Dana Gardner, Interarbor Solutions

Listen to this recorded podcast here: BriefingsDirect-Open Group Cloud Panel Forecasts Transition Phase for Enterprise IT

The following is the transcript of a sponsored podcast panel discussion on newly emerging Cloud models and their impact on business and government, from The Open Group Conference, San Diego 2011.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

We now present a sponsored podcast discussion coming to you live from The Open Group 2011 Conference in San Diego. We’re here the week of February 7, and we have assembled a distinguished panel to examine the expectation of new types of cloud models and perhaps cloud specialization requirements emerging quite soon.

By now, we’re all familiar with the taxonomy around public cloud, private cloud, software as a service (SaaS), platform as a service (PaaS), and my favorite, infrastructure as a service (IaaS), but we thought we would do you all an additional service and examine, firstly, where these general types of cloud models are actually gaining use and allegiance, and we’ll look at vertical industries and types of companies that are leaping ahead with cloud, as we now define it. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Then, second, we’re going to look at why one-size-fits-all cloud services may not fit so well in a highly fragmented, customized, heterogeneous, and specialized IT world.

How many of the cloud services that come with a true price benefit, usually at scale and cheap, will be able to replace what is actually on the ground in many complex and unique enterprise IT organizations?

What’s more, we’ll look at the need for cloud specialization, based on geographic and regional requirements, as well as based on the size of these user organizations, which of course can vary from 5 to 50,000 seats. Can a few types of cloud work for all of them?

Please join me now in welcoming our panel. Here to help us better understand the quest for “fit for purpose” cloud balance and to predict, at least for some time, the considerable mismatch between enterprise cloud wants and cloud provider offerings, we’re here with Penelope Gordon, the cofounder of 1Plug Corporation, based in San Francisco. Welcome, Penelope.

Penelope Gordon: Thank you.

Gardner: We’re also here with Mark Skilton. He is the Director of Portfolio and Solutions in the Global Infrastructure Services with Capgemini in London. Thank you for coming, Mark.

Mark Skilton: Thank you.

Gardner: Ed Harrington joins us. He is the Principal Consultant in Virginia for the UK-based Architecting the Enterprise organization. Thank you, Ed.

Ed Harrington: Thank you.

Gardner: Tom Plunkett is joining us. He is a Senior Solution Consultant with Oracle in Huntsville, Alabama.

Tom Plunkett: Thank you, Dana.

Gardner: And lastly, we’re here with TJ Virdi. He is a Computing Architect in the CAS IT System Architecture Group at Boeing, based in Seattle. Welcome.

TJ Virdi: Thank you.

Gardner: Let me go first to you, Mark Skilton. One size fits all has rarely worked in IT. If it has, it has been limited in its scope and, most often, leads to an additional level of engagement to make it work with what’s already there. Why should cloud be any different?

Three areas

Skilton: Well, Dana, from personal experience, there are probably three areas of adaptation of cloud into businesses. For sure, there are horizontal common services to which what you might call the homogeneous cloud solution could be applied, common to a number of business units or operations across a market.

But, we’re starting to increasingly see the need for customization to meet the vertical competitive needs of a company or the decisions within that large company. So, differentiation and business models are still there; they are still there in the platform cloud as they were in the pre-cloud era.

But, the key thing is that we’re seeing a different kind of potential that a business can do now with cloud — a more elastic, explosive expansion and contraction of a business model. We’re seeing fundamentally the operating model of the business growing, and the industry can change using cloud technology.

So, there are two things going on: the business and the technologies are changing because of the cloud.

Gardner: Well, for us to understand where it fits best, and perhaps not so good, is to look at where it’s already working. Ed, you talked about the federal government. They seem to be going like gangbusters in the cloud. Why so?

Harrington: Perceived cost savings, primarily. The (US) federal government has done some analysis. In particular, the General Services Administration (GSA), has done some considerable analysis on what they think they can save by going to, in their case, a public cloud model for email and collaboration services. They’ve issued a $6.7 million contract to Unisys as the systems integrator, with Google being the cloud services supplier.

So, the debate over the benefits of cloud, versus the risks associated with cloud, is still going on quite heatedly.

Gardner: How about some other verticals? Where is this working? We’ve seen in some pharma, health-care, and research environments, which have a lot of elasticity, it makes sense, given that they have very variable loads. Any other suggestions on where this works, Tom?

Plunkett: You mentioned variable workloads. Another place where we are seeing a lot of customers approach cloud is when they are starting a new project. Because then, they don’t have to migrate from the existing infrastructure. Instead everything is brand new. That’s the other place where we see a lot of customers looking at cloud, your greenfields.

Gardner: TJ, any verticals that you are aware of? What are you seeing that’s working now?

Virdi: It’s probably not related to any particular vertical market, but I think what we are really looking for is speed to put new products into the market or evolve the products that we already have, and how to optimize business operations as well as reduce cost. These may be parallel to any vertical industries, where all these things are probably going to be working as a cloud solution.

Gardner: We’ve heard the application of “core and context” to applications, but maybe there is an application of core and context to cloud computing, whereby there’s not so much core and lot more context. Is that what you’re saying so far?

Unstructured data

Virdi: In a sense, you would have to measure not only the structured documents or structured data, but unstructured data as well. How to measure that and create a new product or solution is the really cool thing you would be looking for in the cloud. And it has proved pretty easy to put a new solution into the market. So, speed is also the big thing in there.

Gardner: Penelope, use cases or verticals where this is working so far?

Gordon: One example in talking about core and context is when you look in retail. You can have two retailers like a Walmart or a Costco, where they’re competing in the same general space, but are differentiating in different areas.

Walmart is really differentiating on the supply chain, and so it’s not a good candidate for public cloud computing solutions. We did discuss that it might possibly be a candidate for private cloud computing.

But that’s really where they’re going to invest in differentiating, as opposed to a Costco, where it makes more sense for them to invest in their relationship with their customers and their relationship with their employees. They’re going to put more emphasis on those business processes, and they might be more inclined to outsource some of the aspects of their supply chain.

A specific example within retail is pricing optimization. A lot of grocery stores need to do pricing optimization checks once a quarter, or perhaps once a year in some of their areas. It doesn’t make sense for smaller grocery store chains to have that kind of IT capability in house. So, that’s a really great candidate, when you are looking at a particular vertical business process to outsource to a cloud provider who has specific industry domain expertise.

Gardner: So for small and medium businesses (SMBs) that would be more core for them than others.

Gordon: Right. That’s an example, though, where you’re talking about what I would say is a particular vertical business process. Then you’re talking about a monetization strategy on the part of the provider, where they are looking more at a niche strategy, rather than a commodity play, where they are doing a horizontal infrastructure platform.

Gardner: Ed, you had a thought?

Harrington: Yeah, and it’s along the SMB dimension. We’re seeing a lot of cloud uptake in the small businesses. I work for a 50-person company. We have one “sort of” IT person and we do virtually everything in the cloud. We’ve got people in Australia and Canada, here in the States, headquartered in the UK, and we use cloud services for virtually everything across that. I’m associated with a number of other small companies and we are seeing big uptake of cloud services.

Gardner: Allow me to be a little bit of a skeptic, because I’m seeing these reports from analyst firms on the tens of billions of dollars in potential cloud market share and double-digit growth rates for the next several years. Is this going to come from just peripheral application context activities, mostly SMBs? What about the core in the enterprises? Does anybody have an example of where cloud is being used in either of those?

Skilton: In the telecom sector, which is very IT intensive, I’m seeing the emergence of their core business of delivering service to a large end user or multiple end user channels, using what I call cloud brokering.

Front-end cloud

So, if that’s where you’re going with your question: certainly in the telecom sector we’re seeing the emergence of front-end cloud, customer relationship management (CRM) type systems and also back-end content delivery engines using cloud.

The fundamental shift away from the service oriented architecture (SOA) era is that we’re seeing more business-driven self-service, more deployment of services as a business model, which is a big difference in the shift to the cloud. Particularly in telco, we’re seeing almost an explosion in that particular sector.

Gordon: A lot of companies don’t even necessarily realize that they’re using cloud services, particularly when you talk about SaaS. There are a number of SaaS solutions that are becoming more and more ubiquitous. If you look at large enterprise company recruiting sites, often you will see Taleo down at the bottom. Taleo is a SaaS. So, that’s a cloud solution, but it’s just not thought necessarily of in that context.

Gardner: Right. Tom?

Plunkett: Another place we’re seeing a lot of growth with regards to private clouds is actually on the defense side. The Defense Department is looking at private clouds, but they also have to deal with this core and context issue. We’re in San Diego today. The requirements for a shipboard system are very different from the land-based systems.

Ships have to deal with narrow bandwidth and going disconnected. They also have to deal with coalition partners or perhaps they are providing humanitarian assistance and they are dealing even with organizations we wouldn’t normally consider military. So, they have to deal with lots of information assurance issues and have completely different governance concerns than we normally think about for public clouds.

Gardner: However, in the last year or two, the assumption has been that this is something that’s going to impact every enterprise, and everybody should get ready. Yet, I’m hearing mostly this creeping in through packaged applications on an on-demand basis, SMBs, greenfield organizations, perhaps where high elasticity is a requirement.

What would be necessary for these cloud providers to be able to bring more of the core applications the large enterprises are looking for? What’s the new set of requirements? As I pointed out, we have had a general category of SaaS and development, elasticity, a handful of infrastructure services. What’s the next set of requirements that’s going to make it palatable for these core activities and these large enterprises to start doing this? Let me start with you, Penelope.

Gordon: It’s an interesting question and it was something that we were discussing in a session yesterday afternoon. There was a gentleman there from a large telecommunications company, and from his perspective, trust was a big issue. To him, part of it was just an immaturity of the market, specifically talking about what the new style of cloud is and that branding. Some of the aspects of cloud have been around for quite some time.

Look at Linux adoption as an analogy. A lot of companies started adopting Linux, but it was for peripheral applications and peripheral services, some web services that weren’t business critical. It didn’t really get into the core enterprise until much later.

We’re seeing some of that with cloud. It’s just a much bigger issue with cloud, especially as you start looking at providers wanting to move up the food chain and provide greater value. This means that they have to have more industry knowledge and more specialization. It becomes more difficult for large enterprises to trust a vendor to have that kind of knowledge.

No governance

Another aspect that came up in the afternoon is that, at this point, talking about public cloud specifically is not the same as saying it’s a public utility. We talk about “public utility,” but there is no governance, at this point, to say, “Here is certification that these companies have been tested to meet certain delivery standards.” Until that exists, it’s going to be difficult for some enterprises to get over that trust issue.

Gardner: Assuming that the trust and security issues are worked out over time, that experience leads to action, it leads to trust, it leads to adoption, and we have already seen that with SaaS applications. We’ve certainly seen it with the federal government, as Ed pointed out earlier.

Let’s just put that aside as one of the requirements that’s already on the drawing board and that we probably can put a checkmark next to at some point. What’s next? What about customization? What about heterogeneity? What about some of these other issues that are typical in IT, Mark Skilton?

Skilton: One of the under-played areas is PaaS. We hear about technology lock-in caused by the use of the cloud: either you put too much data in, or you customize parameters and lose the elastic features of that cloud.

As to your question about what vendors or providers need to do to help the customer use the cloud, the two things we’re seeing are: one, more of an appliance strategy, where clients can buy modular capabilities, so the licensing and solutioning issues are more contained. The client can look at it in a more modular, appliance sort of way. Think of it as cloud in a box.

The second thing we need to see is much more offering of transition services, transformation services, to accelerate the use of the cloud in a safe way, and I think that’s something we really need to push hard on. There’s a great quote from a client: “It’s not the destination, it’s the journey to the cloud that I need to see.”

Gardner: You mentioned PaaS. We haven’t seen too much yet with a full mature offering of the full continuum of PaaS to IaaS. That’s one where new application development activities and new integration activities would be built of, for, and by the cloud and coordinated between the dev and the ops, with the ops being any number of cloud models — on-premises, off-premises, co-lo, multi-tenancy, and so forth.

So what about that? Is that another requirement, that there is continuity between the PaaS and the infrastructure and deployment, Tom?

Plunkett: We’re getting there. PaaS is going to be a real requirement going forward, simply because that’s going to provide us the flexibility to reach some of those core applications we were talking about before. The further you get away from the context, the more you’re focusing on what the business is really focused on, and that’s going to be the core, which is going to require effective PaaS.

Gardner: TJ.

More regulatory

Virdi: I want to second that, but at the same time, we’re looking at more regulatory issues and other kinds of licensing and configuration issues as well. Addressing those also makes it a little easier to use the cloud. You don’t really have to buy up front; you can go on demand. Licenses need to be flexible enough that you can just put the product or business solution into the market, test the water, and then go further from there.

Gardner: Penelope, where do you see any benefit of having a coordinated or integrated platform and development test and deploy functions? Is that going to bring this to a more core usage in large enterprises?

Gordon: It depends. I see a lot more of the buying of cloud moving out to the non-IT line-of-business executives. If that accelerates, there is going to be less and less focus on the platform level. Companies are really separating now what is differentiating and core to their business from the rest of it.

There’s going to be less emphasis on, “Let’s do our scale development on a platform level” and more, “Let’s really seek out those vendors that are going to enable us to effectively integrate, so we don’t have to do double entry of data between different solutions. Let’s look out for the solutions that allow us to apply the governance and that effectively let us tailor our experience with these solutions in a way that doesn’t impinge upon the provider’s ability to deliver in a cost effective fashion.”

That’s going to become much more important. So, a lot of the development onus is going to be on the providers, rather than on the actual buyers.

Gardner: Now, this is interesting. On one hand, we have non-IT people, business people, specifying, acquiring, and using cloud services. On the other hand, we’re perhaps going to see more PaaS and new application development, be it custom or more of a SaaS-type offering that’s brought in with a certain level of adjustment and integration. But these are going on without necessarily any coordination. At some point, they are even going to come together. It’s inevitable, another round of integration, perhaps.

Mark Skilton, is that what you see, that we have not just one cloud approach but multiple approaches and then some need to rationalize?

Skilton: There are two key points. There’s a missing architecture practice that needs to be there, which is workload analysis, so that you design applications to fit specific infrastructure containers and you’ve got a bridge between the application service and the infrastructure service. There needs to be a piece of work by enterprise architects that starts to bring that together as a deliberate design for applications to be able to operate in the cloud, and the PaaS platform is a perfect environment for that.
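
To make that workload-analysis step concrete, here is a minimal sketch of the kind of matching exercise being described, pairing an application workload profile with an infrastructure container profile. The attribute names and values are illustrative assumptions, not drawn from any Open Group standard.

    # A minimal sketch of a workload-to-container matching exercise.
    # Attribute names (memory_gb, elastic, data_residency) are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class WorkloadProfile:
        name: str
        memory_gb: int
        elastic: bool          # does the application need to scale on demand?
        data_residency: str    # e.g. "EU", "US", or "any"

    @dataclass
    class ContainerProfile:
        name: str
        max_memory_gb: int
        supports_elasticity: bool
        regions: frozenset

    def matches(workload: WorkloadProfile, container: ContainerProfile) -> bool:
        """Return True if the infrastructure container can host the workload."""
        return (workload.memory_gb <= container.max_memory_gb
                and (not workload.elastic or container.supports_elasticity)
                and (workload.data_residency == "any"
                     or workload.data_residency in container.regions))

    crm = WorkloadProfile("crm-frontend", memory_gb=8, elastic=True, data_residency="EU")
    paas = ContainerProfile("paas-eu-west", max_memory_gb=32,
                            supports_elasticity=True, regions=frozenset({"EU"}))
    print(matches(crm, paas))  # True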

The second thing is that there’s a lack of policy management in terms of technical governance, and because of the lack of understanding, there needs to be more of a matching exercise going on. The key thing is that that needs to evolve.

Part of the work we’re doing in The Open Group with the Cloud Computing Work Group is to develop new standards and methodologies that bridge those gaps between infrastructure, PaaS, platform development, and SaaS.

Gardner: We already have the Trusted Technology Forum. Maybe soon we’ll see an open trusted cloud technology forum.

Skilton: I hope so.

Gardner: Ed Harrington, you mentioned earlier that the role of the enterprise architect is going to benefit from cloud. Do you see what we just described in terms of dual tracks, multiple inception points, heterogeneity, perhaps overlap and redundancy? Is that where the enterprise architect flourishes?

Shadow IT

Harrington: I think we talked about line-of-business management getting involved in acquiring cloud services. If you think we’ve got this thing called “shadow IT” today, wait a few years. We’re going to have a huge problem with shadow IT.

From the architect’s perspective, there’s a lot to be involved with and a lot to play with, as I said in my talk. There’s an awful lot of analysis to be done: what value is the proposed cloud solution going to supply to the organization in business terms, versus the risk associated with it? Enterprise architects deal with change, and that’s what we’re talking about. We’re talking about change, and change will inherently involve risk.

Gardner: TJ.

Virdi: All these business decisions are going to be coming upstream, and business executives need to be more aware of how the cloud could be utilized as a delivery model. Enterprise architects and people with a technical background need to educate them and guide them to make the right decisions and choose the proper solutions.

It has an impact on how you want to use the cloud, as well as on how you get out of it, in case you want to move to different cloud vendors or providers. All those things come into play upstream rather than downstream.

Gardner: We all seem to be resigned to this world of, “Well, here we go again. We’re going to sit back and wait for all these different cloud things to happen. Then, we’ll come in, like the sheriff on the white horse, and try to rationalize.” Why not try to rationalize now before we get to that point? What could be done from an architecture standpoint to head off mass confusion around cloud? Let me start at one end and go down the other. Tom?

Plunkett: One word: governance. We talked about the importance of governance increasing as the IT industry went into SOA. Well, cloud is going to make it even more important. Governance throughout the lifecycle, not just at the end, not just at deployment, but from the very beginning.

Gardner: TJ.

Virdi: In addition to governance, you probably have to figure out how you plan to adopt the cloud. You don’t want to start with a big bang. You want to start in incremental steps, small steps, and test out what you really want to do. If that works, then go and do the other things after that.

Gardner: Penelope, how about following the money? Doesn’t where the money flows in and out of organizations tend to have a powerful impact on motivating people or getting them moving towards governance or not?

Gordon: I agree, and toward that end, it’s enterprise architects. Enterprise architects need to break out of the idea of focusing on the boundary between IT and the business, and instead talk to the business in business terms.

One way of doing that which I have seen to be effective is to look at it from the standpoint of portfolio management. Where you were familiar with financial portfolio management, now you are looking at a service portfolio, as well as at your overall business and all of your business processes as a portfolio. How can you optimize at a macro level across the portfolio of all the investment decisions you’re making and how the various processes and services are enabled? Then, it comes down to, as you said, a money issue.

Gardner: Perhaps one way to head off what we seem to think is an inevitable cloud chaos situation is to invoke more shared services, get people to consume services and think about how to pay for them along the way, regardless of where they come from and regardless of who specified them. So back to SOA, back to ITIL, back to the blocking and tackling that’s just good enterprise architecture. Anything to add to that, Mark?

Not more of the same

Skilton: I think it’s a mistake to just describe this as more of the same. ITIL, in my view, needs to change to take into account self-service dynamics. ITIL is kind of a provider-side service management process; it’s something that you do to people. Cloud changes that direction to the other way around, and I think that’s something that needs to be addressed.

Also, fundamentally, the data center and network strategies need to be in place to adopt cloud. From my experience, data center transformation or refurbishment strategies, or next-generation networks, tend to be done as a separate exercise from the applications area. So a strong recommendation from me would be to drive a clear cloud route map to your data center.

Gardner: So, perhaps a regulating effect on the self-selection of cloud services would be that the network isn’t designed for it and it’s not going to help.

Skilton: Exactly.

Gardner: That’s one way to govern your cloud. Ed Harrington, any other further thoughts on working towards a cloud future without the pitfalls?

Harrington: Again, the governance, certification of some sort. I’m not in favor of regulation, but I am in favor of some sort of third party certification of services that consumers can rely upon safely. But, I will go back to what I said earlier. It’s a combination of governance, treating the cloud services as services per se, and enterprise architecture.

Gardner: What about the notion that was brought up earlier about private clouds being an important on-ramp to this? If I were a public cloud provider, I would do my market research on what’s going on in the private clouds, because I think they are going to be incubators to what might then become hybrid and ultimately a full-fledged third-party public cloud providing assets and services.

What can we learn from looking at what’s going on with private cloud now, seemingly a lot of trying to reduce cost and energy consumption, but what does that tell us about what we should expect in the next few years? Again, let’s start with you, Tom.

Plunkett: What we’re seeing with private cloud is that it’s actually impacting governance, because one of the things that you look at with private cloud is chargeback between different internal customers. This is forcing these organizations to deal with complex money and business issues that they don’t really like to deal with.

Nowadays, it’s mostly vertical applications, where you’ve got one owner who is paying for everything. Now, we’re actually going back to, as we were talking about earlier, dealing with some of the tricky issues of SOA.
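
As a rough illustration of the chargeback mechanics being described, with invented rates and usage figures rather than any particular vendor’s model, metering and allocation might look something like this:

    # A rough sketch of private-cloud chargeback: metered usage is priced per
    # resource and allocated to the internal business unit that consumed it.
    # The rates and usage records below are invented for the example.
    USAGE = [
        {"business_unit": "marketing", "resource": "vm_hours",   "quantity": 720},
        {"business_unit": "marketing", "resource": "storage_gb", "quantity": 500},
        {"business_unit": "logistics", "resource": "vm_hours",   "quantity": 1440},
    ]
    RATES = {"vm_hours": 0.12, "storage_gb": 0.05}  # internal transfer prices

    def chargeback(usage, rates):
        """Aggregate metered usage into a per-business-unit internal invoice."""
        invoices = {}
        for record in usage:
            cost = record["quantity"] * rates[record["resource"]]
            invoices[record["business_unit"]] = invoices.get(record["business_unit"], 0.0) + cost
        return {unit: round(total, 2) for unit, total in invoices.items()}

    print(chargeback(USAGE, RATES))  # {'marketing': 111.4, 'logistics': 172.8}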

Gardner: TJ, private cloud as an incubator. What we should expect?

Securing your data

Virdi: Configuration and change management, and how in the private cloud we adapt to them and support different customer segments, are really the key. This could be utilized in the public cloud too. It also comes down to how you secure your information and data, your business knowledge. How you want to secure that is key, and that’s why the private cloud is there. If we can adapt or mimic the same kinds of controls in the public cloud, maybe we’ll see more adoption of the public cloud too.

Gardner: Penelope, any thoughts on that, the private to public transition?

Gordon: I also look at it in a little different way. For example, in the U.S., you have the National Security Agency (NSA). For a lot of what you would think of as their non-differentiating processes, for example payroll, they can’t use ADP. They can’t use that SaaS for payroll, because they can’t allow the identities of their employees to become publicly known.

Anything that involves their employee data and all the rest of the information within the agency has to be kept within a private cloud. But, they’re actively looking at private cloud solutions for some of the other benefits of cloud.

In one sense, I look at it and say that private cloud adoption tells a provider that this is an area that’s not a candidate for a public-cloud solution. But private clouds could also be another channel for public cloud providers to better monetize what they’re doing, rather than just focusing on public cloud solutions.

Gardner: So, then, you’re saying this is a two-way street. Just as we could foresee someone architecting a good private cloud and then looking to take that out to someone else’s infrastructure, you’re saying there are a lot of public services that, for regulatory or other reasons, might then need to come back in and be privatized or kept within the walls. Interesting.

Mark Skilton, any thoughts on this public-private tension and/or benefit?

Skilton: I asked an IT service director what it was like running a cloud service for the account. This is a guy who had previously been running hosting and management, with many years of experience.

The surprising thing was that he was quite shocked that the disciplines he previously had for escalating errors and doing planned maintenance, monitoring, billing, and charging back to the customer were fundamentally changing, because it all had to be done more in real time. You have to fix before it fails. You can’t just wait for it to fail. You have to have a much more disciplined approach to running a private cloud.

The lesson we’re learning in running private clouds for our clients is the need for much more of a running-IT-as-a-business ethos and approach. We find that if customers try to do it themselves, they may find that difficult, because they are used to buying it as a service, or they have to change their enterprise architecture and support-service disciplines to operate the cloud.

Gardner: Perhaps yet another way to offset the potential for cloud chaos in the future is to develop the core competencies within the private-cloud environment, and do it sooner rather than later. This is where you can cut your teeth or get your chops, any number of metaphors come to mind, but this is something that sounds like a priority. Would you agree with that, Ed, that coming up with a private-cloud capability is important?

Harrington: It’s important, and it’s probably going to dominate for the foreseeable future, especially in areas that organizations view as core. They view them as core because they believe they provide some sort of competitive advantage or, as Penelope was saying, for security reasons. ADP is a good example. ADP going into the NSA and setting up a private cloud for them, I think, would be a really good thing.

Trust a big issue

But, I also think that trust is still a big issue and it’s going to come down to trust. It’s going to take a lot of work to have anything that is perceived by a major organization as core and providing differentiation to move to other than a private cloud.

Gardner: TJ.

Virdi: Private clouds actually allow you to make the business more modular. Your capabilities are going to be a little bit more modular, and interoperability testing can happen in the private cloud. Then you can actually use those same kinds of modular functions in the public cloud and work with other commercial off-the-shelf (COTS) vendors that package these as new, holistic solutions.

Gardner: Does anyone consider the impact of mergers and acquisitions on this? We’re seeing the economy pick up, at least in some markets, and we’re certainly seeing globalization, a very powerful trend with us still. We can probably assume, if you’re a big company, that you’re going to get bigger through some sort of merger and acquisition activity. Does a cloud strategy ameliorate the pain and suffering of integration in these business mergers, Tom?

Plunkett: Well, not to speak on behalf of Oracle, but we’ve gone through a few mergers and acquisitions recently, and I do believe that having a cloud environment internally helps quite a bit. Specifically, TJ made the earlier point about modularity. When we’re looking at modules, they’re easier to integrate. It’s easier to recompose services, and you get all the benefits of SOA, really.

Gardner: TJ, mergers and acquisitions in cloud.

Virdi: It really helps. At the same time, we were talking about legal and regulatory compliance. The EU and Japan require you to keep personally identifiable information (PII) within their geographical areas. Cloud could provide a way to manage that without hosting the data in the same place where you have your own business.
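
To make the data-residency point concrete, here is a simplified sketch of routing PII to a region-pinned store so that it never leaves the required jurisdiction. The endpoints and the residency table are illustrative assumptions, not references to real services.

    # A simplified sketch of keeping PII inside the required jurisdiction by
    # routing each record to a region-pinned storage endpoint. The URLs and
    # the residency table are invented for illustration.
    RESIDENCY_ENDPOINTS = {
        "EU": "https://eu-west.storage.example.com",
        "JP": "https://ap-northeast.storage.example.com",
        "default": "https://us-east.storage.example.com",
    }

    def storage_endpoint_for(record: dict) -> str:
        """Pick the storage endpoint that satisfies the record's residency rule."""
        region = record.get("data_residency", "default")
        return RESIDENCY_ENDPOINTS.get(region, RESIDENCY_ENDPOINTS["default"])

    employee = {"name": "A. Example", "data_residency": "EU"}
    print(storage_endpoint_for(employee))  # https://eu-west.storage.example.com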

Gardner: Penelope, any thoughts, or maybe even on a slightly different subject, of being able to grow rapidly vis-à-vis cloud experience and expertise and having architects that understand it?

Gordon: Some of this comes back to the discussions we were having about the extra discipline that comes into play if you are going to effectively consume and provide cloud services: you become much more rigorous about your change management and your configuration management, and you then apply that out at a larger, process level.

So, if you define certain capabilities within the business in a much more modular fashion, then, when you go through that growth and add on people, you have documented procedures and processes. It’s much easier to bring someone in and say, “You’re going to be a product manager, and that job role is fungible across the business.”

That kind of thinking, the cloud constructs applied up at a business architecture level, enables a kind of business expansion that we are looking at.

Gardner: Mark Skilton, thoughts about being able to manage growth, mergers and acquisitions, even general business agility vis-à-vis more cloud capabilities.

Skilton: Right now, I’m involved in merging in a cloud company that we bought last May, and I would say yes and no. The “no” part is that I’m trying to bundle the service we acquired into each product, so that it adds competitive advantage to the services we are offering, and I’ve had a problem trying to bundle it into our existing portfolio. I’ve got to work out how it will fit and deploy in our own cloud. So, that’s still a complexity problem.

Faster launch

But the upside is that I can bundle the service we acquired, because we wanted that additional capability, and rework its design for cloud computing. We can then launch that bundle of new services into the market faster.

It’s kind of a mixed blessing with cloud. With our own cloud services, we acquire these new companies, but we still have the same IT integration problem to then exploit that capability we’ve acquired.

Gardner: That might be a perfect example of where cloud is or isn’t. When you run into the issue of complexity and integration, it doesn’t compute, so to speak.

Skilton: It’s not plug and play yet, unfortunately.

Gardner: Ed, what do you think about this growth opportunity, mergers and acquisitions, a good thing or bad thing?

Harrington: It’s a challenge. As Mark presented it, it’s got two sides. It depends a lot on how close the organizations are, how close their service portfolios are, to what degree each of the organizations has adopted the cloud, and whether that is going to cause conflict as well. So I think there is potential.

Skilton: Each organization in the commercial sector can have different standards, and then you still have that interoperability problem to work through before you realize the benefit: the post-merger integration issue.

Gardner: We’ve been discussing the practical requirements of various cloud computing models, looking at core and context issues where cloud models would work, where they wouldn’t. And, we have been thinking about how we might want to head off the potential mixed bag of cloud models in our organizations and what we can do now to make the path better, but perhaps also make our organizations more agile, service oriented, and able to absorb things like rapid growth and mergers.

I’d like to thank you all for joining and certainly want to thank our guests. This is a sponsored podcast discussion coming to you from The Open Group’s 2011 Conference in San Diego. We’re here the week of February 7, 2011. A big thank you now to Penelope Gordon, cofounder of 1Plug Corporation. Thanks.

Gordon: Thank you.

Gardner: Mark Skilton, Director of Portfolio and Solutions in the Global Infrastructure Services with Capgemini. Thank you, Mark.

Skilton: Thank you very much.

Gardner: Ed Harrington, Principal Consultant in Virginia for the UK-based Architecting the Enterprise.

Harrington: Thank you, Dana.

Gardner: Tom Plunkett, Senior Solution Consultant with Oracle. Thank you.

Plunkett: Thank you, Dana.

Gardner: TJ Virdi, the Computing Architect in the CAS IT System Architecture group at Boeing.

Virdi: Thank you.

Gardner: I’m Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for joining, and come back next time.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


Filed under Cloud/SOA, Enterprise Architecture

An SOA Unconference

By Dr. Chris Harding, The Open Group

Monday at The Open Group Conference in San Diego was a big day for Interoperability, with an Interoperability panel session, SOA and Cloud conference streams, meetings of SOA and UDEF project teams, and a joint meeting with the IEEE on next-generation UDEF. Tuesday was quieter, with just one major interoperability-related session: the SOACamp. The pace picks up again today, with a full day of Cloud meetings, followed by a Thursday packed with members’ meetings on SOA, Cloud, and Semantic Interoperability.

Unconferences

The SOACamp was an unstructured meeting, based on the CloudCamp Model, for SOA practitioners and people interested in SOA to ask questions and share experiences.

CloudCamp is an unconference where early adopters of Cloud Computing technologies exchange ideas. The CloudCamp organization is responsible for these events. They are frequent and worldwide; 19 events have been held or arranged so far for the first half of 2011 in countries including Australia, Brazil, Canada, India, New Zealand, Nigeria, Spain, Turkey, and the USA. The Open Group has hosted CloudCamps at several of its Conferences, and is hosting one at its current conference in San Diego today.

What is an unconference? It is an event that follows an unscripted format in which topics are proposed and presented by volunteers, with the agenda being made up on the fly to address whatever the attendees most want to discuss. This format works very well for Cloud, and we thought we would give it a try for SOA.

The SOA Hot Topics

So what were the SOA hot topics? Volunteers gave 5-minute “lightning talks” on five issues, which were then considered as the potential agenda items for discussion:

  • Does SOA Apply to Cloud service models?
  • Vendor-neutral framework for registry/repository access to encourage object re-use
  • Fine-grained policy-based authorization for exposing data in the Cloud
  • Relation of SOA to Cloud Architecture
  • Are all Cloud architectures SOA architectures?

The greatest interest was in the last two of these, and they were taken together as a single agenda item for the whole meeting: SOA and Cloud Architecture. The third topic, fine-grained policy-based authorization for exposing data in the Cloud, was considered to be more Cloud-related than SOA-related, and it was agreed to keep it back for the CloudCamp the following day. The other two topics, SOA and Cloud service models and a vendor-neutral framework for registry/repository access, were considered by separate subgroups meeting in parallel.

The discussions were lively and raised several interesting points.

SOA and Cloud Architecture

Cloud is a consumption and delivery model for SOA, but Cloud and SOA services are different. All Cloud services are SOA services, but not all SOA services are Cloud services, because Cloud services have additional requirements for Quality of Service (QoS) and for delivery and consumption.

Cloud requires a different approach to QoS. Awareness of the run-time environment and elasticity is crucial for Cloud applications.

Cloud architectures are service-oriented, but they need additional architectural building blocks, particularly for QoS. They may be particularly likely to use a REST-ful approach, but this is still service-oriented.

A final important point is that, within a service-oriented architecture, the Cloud is transparent to the consumer. The service consumer ultimately should not care whether a service is on the Cloud.
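
A small sketch of that transparency, assuming hypothetical service and class names: the consumer programs against one service interface, and whether the implementation happens to be on-premises or in the Cloud is a deployment decision rather than a code change.

    # A sketch of "the Cloud is transparent to the consumer": the consumer
    # depends only on the service contract; the binding to an on-premises or
    # cloud implementation is hidden. Names below are illustrative.
    from abc import ABC, abstractmethod

    class OrderService(ABC):
        @abstractmethod
        def place_order(self, customer_id: str, item: str) -> str:
            ...

    class OnPremisesOrderService(OrderService):
        def place_order(self, customer_id: str, item: str) -> str:
            return f"order for {item} queued on the local bus for {customer_id}"

    class CloudOrderService(OrderService):
        def place_order(self, customer_id: str, item: str) -> str:
            # In practice this would call an elastic, metered cloud endpoint.
            return f"order for {item} accepted by a cloud endpoint for {customer_id}"

    def consumer(service: OrderService) -> None:
        # The consumer neither knows nor cares which implementation it was given.
        print(service.place_order("cust-42", "widget"))

    consumer(OnPremisesOrderService())
    consumer(CloudOrderService())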

Vendor-Neutral Framework for Registry/Repository Access

The concept of vendor-neutral access to SOA registries and repositories is good, but it requires standard data models and protocols to be effective.

The Open Group SOA ontology has proved a good basis for a modeling framework.

Common methods for vendor-neutral access could help services in the Cloud connect to multiple registries and repositories.
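
A brief sketch of what such vendor-neutral access might look like, assuming a hypothetical common entry model of name, endpoint, and version: per-vendor adapters translate their native formats into that model so one client can query several registries.

    # A sketch of vendor-neutral registry access: each adapter maps a vendor's
    # native repository format into one common entry model. The vendor formats
    # and field names are invented for illustration.
    from abc import ABC, abstractmethod

    class RegistryAdapter(ABC):
        @abstractmethod
        def list_services(self):
            """Return entries as dicts with keys: name, endpoint, version."""

    class VendorARegistry(RegistryAdapter):
        def list_services(self):
            raw = [{"svcName": "billing", "url": "https://a.example/billing", "ver": "1.2"}]
            return [{"name": r["svcName"], "endpoint": r["url"], "version": r["ver"]}
                    for r in raw]

    class VendorBRegistry(RegistryAdapter):
        def list_services(self):
            raw = [("orders", "https://b.example/orders", "2.0")]
            return [{"name": n, "endpoint": e, "version": v} for n, e, v in raw]

    def catalog(registries):
        """Merge entries from all registries into a single, uniform catalogue."""
        return [entry for registry in registries for entry in registry.list_services()]

    print(catalog([VendorARegistry(), VendorBRegistry()]))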

Does SOA Apply to Cloud Service Models?

The central idea here is that the cloud service models – Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) – could be defined as services in the SOA sense, with each of them exposing capabilities through defined interfaces.

This would require standards in three key areas: metrics/QoS, brokering/subletting, and service prioritization.
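
As a rough, assumption-laden sketch of how standardized metrics could support brokering: if each offering published comparable QoS figures, a broker could score them against a consumer’s priorities. The metric names, weights, and offerings below are invented for illustration, not an existing standard.

    # A sketch of brokering over cloud service models using comparable QoS
    # metrics. Offerings, metric names, and weights are invented examples.
    OFFERINGS = [
        {"name": "iaas-provider-a",
         "qos": {"availability": 0.9990, "latency_ms": 40, "unit_price": 0.10}},
        {"name": "iaas-provider-b",
         "qos": {"availability": 0.9995, "latency_ms": 60, "unit_price": 0.14}},
    ]

    def score(offering, weights):
        """Combine QoS metrics into a single brokering score (higher is better)."""
        q = offering["qos"]
        return (weights["availability"] * q["availability"]
                + weights["latency"] * (1.0 / q["latency_ms"])
                + weights["price"] * (1.0 / q["unit_price"]))

    weights = {"availability": 0.5, "latency": 0.3, "price": 0.2}
    best = max(OFFERINGS, key=lambda offering: score(offering, weights))
    print(best["name"])  # iaas-provider-a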

Is The Open Group an appropriate forum for setting and defining Cloud customer and provider standards? It has a standards development capability. The key determining factor is the availability of member volunteers with the relevant expertise.

Are Unconferences Good for Discussing SOA?

Cloud is an emerging topic while SOA is a mature one, and this affected the nature of the discussions. The unconference format is great for enabling people to share experience in new topic areas. The participants really wanted to explore new developments rather than compare notes on SOA practice, and the result was that the discussion mostly focused on the relation of SOA to the Cloud. This wasn’t what we expected, but it resulted in some good discussions and exposed interesting ideas.

So is the unconference format a good one for SOA discussions? Yes it is – if you don’t need to produce a particular result. Just go with the flow, and let it take you and SOA to interesting new places.

Cloud and SOA are a topic of discussion at The Open Group Conference, San Diego, which is currently underway.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF practitioner.


Filed under Cloud/SOA