Open Group Security Gurus Dissect the Cloud: Higher or Lower Risk?

By Dana Gardner, Interarbor Solutions

For some, any move to the Cloud — at least the public Cloud — means a higher risk for security.

For others, relying more on a public Cloud provider means better security. There’s more of a concentrated and comprehensive focus on security best practices that are perhaps better implemented and monitored centrally in the major public Clouds.

And so which is it? Is Cloud a positive or negative when it comes to cyber security? And what of hybrid models that combine public and private Cloud activities, how is security impacted in those cases?

We posed these and other questions to a panel of security experts at last week’s Open Group Conference in San Francisco to deeply examine how Cloud and security come together — for better or worse.

The panel: Jim Hietala, Vice President of Security for The Open Group; Stuart Boardman, Senior Business Consultant at KPN, where he co-leads the Enterprise Architecture Practice as well as the Cloud Computing Solutions Group; Dave Gilmour, an Associate at Metaplexity Associates and a Director at PreterLex Ltd.; and Mary Ann Mezzapelle, Strategist for Enterprise Services and Chief Technologist for Security Services at HP.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Is this notion of going outside the firewall fundamentally a good or bad thing when it comes to security?

Hietala: It can be either. Talking to security people in large companies, frequently what I hear is that with adoption of some of those services, their policy is either let’s try and block that until we get a grip on how to do it right, or let’s establish a policy that says we just don’t use certain kinds of Cloud services. The data I see says that’s really a failed strategy. Adoption is happening whether they embrace it or not.

The real issue is how you do that in a planned, strategic way, as opposed to letting services like Dropbox and other kinds of Cloud Collaboration services just happen. So it’s really about getting some forethought around how do we do this the right way, picking the right services that meet your security objectives, and going from there.

Gardner: Is Cloud Computing good or bad for security purposes?

Boardman: It’s simply a fact, and it’s something that we need to learn to live with.

What I’ve noticed through my own work is that a lot of enterprise security policies were written before we had Cloud, back when we had private web applications that you might call Cloud these days, and the policies tend to be directed toward staff’s private use of the Cloud.

Then you run into problems, because you read something in policy — and if you interpret that as meaning Cloud, it means you can’t do it. And if you say it’s not Cloud, then you haven’t got any policy about it at all. Enterprises need to sit down and think, “What would it mean to us to make use of Cloud services and to ask as well, what are we likely to do with Cloud services?”

Gardner: Dave, is there an added impetus for Cloud providers to be somewhat more secure than enterprises?

Gilmour: It depends on the enterprise that they’re actually supplying to. If you’re in a heavily regulated industry, you have a different view of what levels of security you need and want, and therefore what you’re going to impose contractually on your Cloud supplier. That means that the different Cloud suppliers are going to have to attack different industries with different levels of security arrangements.

The problem there is that the penalty regimes are always going to say, “Well, if the security lapses, you’re going to get off with two months of not paying” or something like that. That kind of attitude isn’t going to work in this kind of security environment.

What I don’t understand is exactly how secure Cloud provision is going to be enabled and governed under tight regimes like that.

An opportunity

Gardner: Jim, we’ve seen in the public sector that governments are recognizing that Cloud models could be a benefit to them. They can reduce redundancy. They can control and standardize. They’re putting in place some definitions, implementation standards, and so forth. Is the vanguard of correct Cloud Computing with security in mind being managed by governments at this point?

Hietala: I’d say that they’re at the forefront. Some of these shared government services, where they stand up Cloud and make it available to lots of different departments in a government, have the ability to do what they want from a security standpoint, not relying on a public provider, and get it right from their perspective and meet their requirements. They then take that consistent service out to lots of departments that may not have had the resources to get IT security right, when they were doing it themselves. So I think you can make a case for that.

Gardner: Stuart, being involved with standards activities yourself, does moving to the Cloud provide a better environment for managing, maintaining, instilling, and improving on standards than enterprise by enterprise by enterprise? As I say, we’re looking at a larger pool and therefore that strikes me as possibly being a better place to invoke and manage standards.

Boardman: Dana, that’s a really good point, and I do agree. Also, in the security field, we have an advantage in the sense that there are quite a lot of standards out there to deal with interoperability, exchange of policy, exchange of credentials, which we can use. If we adopt those, then we’ve got a much better chance of getting those standards used widely in the Cloud world than in an individual enterprise, with an individual supplier, where it’s not negotiation, but “you use my API, and it looks like this.”

Having said that, there are a lot of well-known Cloud providers who do not currently support those standards and they need a strong commercial reason to do it. So it’s going to be a question of the balance. Will we get enough critical mass of people who are using it to force the others to come on board? And I have no idea what the answer to that is.

Gardner: We’ve also seen that cooperation is an important aspect of security, knowing what’s going on on other people’s networks, being able to share information about what the threats are, remediation, working to move quickly and comprehensively when there are security issues across different networks.

Is that a case, Dave, where having a Cloud environment is a benefit? That is to say more sharing about what’s happening across networks for many companies that are clients or customers of a Cloud provider rather than perhaps spotty sharing when it comes to company by company?

Gilmour: There is something to be said for that, Dana. Part of the issue, though, is that companies are individually responsible for their data. They’re individually responsible to a regulator or to their clients for their data. The question then becomes that as soon as you start to share a certain aspect of the security, you’re de facto sharing the weaknesses as well as the strengths.

So it’s a two-edged sword. One of the problems we have is that until we mature a little bit more, we won’t be able to actually see which side is the sharpest.

Gardner: So our premise that Cloud is good and bad for security is holding up, but I’m wondering whether the same things that make you a risk in a private setting — poor adhesion to standards, no good governance, too many technologies that are not being measured and controlled, not instilling good behavior in your employees and then enforcing that — wouldn’t this be the same either way? Is it really Cloud or not Cloud, or is it good security practices or not good security practices? Mary Ann?

No accountability

Mezzapelle: You’re right. It’s a little bit of that “garbage in, garbage out,” if you don’t have the basic things in place in your enterprise, which means the policies, the governance cycle, the audit, and the tracking, because it doesn’t matter if you don’t measure it and track it, and if there is no business accountability.

David said it — each individual company is responsible for its own security, but I would say that it’s the business owner that’s responsible for the security, because they’re the ones that ultimately have to answer that question for themselves in their own business environment: “Is it enough for what I have to get done? Is the agility more important than the flexibility in getting to some systems or the accessibility for other people, as it is with some of the ubiquitous computing?”

So you’re right. If it’s an ugly situation within your enterprise, it’s going to get worse when you do outsourcing, out-tasking, or anything else you want to call within the Cloud environment. One of the things that we say is that organizations not only need to know their technology, but they have to get better at relationship management, understanding who their partners are, and being able to negotiate and manage that effectively through a series of relationships, not just transactions.

Gardner: If data and sharing data is so important, it strikes me that the Cloud component is going to be part of that, especially if we’re dealing with business processes across organizations, doing joins, comparing and contrasting data, crunching it and sharing it, making data an actual part of the business and a revenue-generating activity. All of that seems prominent and likely.

So to you, Stuart, what is the issue now with data in the Cloud? Is it good, bad, or just the same double-edged sword, and it just depends how you manage and do it?

Boardman: Dana, I don’t know whether we really want to be putting our data in the Cloud, so much as putting the access to our data into the Cloud. There are all kinds of issues you’re going to run up against as soon as you start putting your source information out into the Cloud, not least privacy and that kind of thing.

A bunch of APIs

What you can do is simply say, “What information do I have that might be interesting to people? If it’s a private Cloud in a large organization, how can I make that available to share elsewhere in the organization?” Or maybe it’s really going out into public. What a government, for example, can be thinking about is making information services available, not just what you go and get from them that they already published. But “this is the information,” a bunch of APIs if you like. I prefer to call them data services, and to make those available.

So, if you do it properly, you have a layer of security in front of your data. You’re not letting people come in and do joins across all your tables. You’re providing information. That does require you then to engage your users in what it is that they want and what they want to do. Maybe there are people out there who want to take a bit of your information and a bit of somebody else’s and mash it together, provide added value. That’s great. Let’s go for that and not try and answer every possible question in advance.
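Boardman’s “data services” idea — a security layer in front of the data, exposing curated information rather than raw tables — can be sketched as a thin service layer. This is a hypothetical illustration; the store, fields, and roles are all invented for the example.

```python
# Sketch of a "data service" layer: callers get curated views of the data,
# never direct access to the underlying table. All names are hypothetical.

# The raw store the organization does NOT expose directly.
_CUSTOMER_TABLE = [
    {"id": 1, "name": "Alice", "email": "alice@example.com", "region": "EU"},
    {"id": 2, "name": "Bob", "email": "bob@example.com", "region": "US"},
]

def customers_per_region():
    """Public data service: aggregate counts only, no personal fields."""
    counts = {}
    for row in _CUSTOMER_TABLE:
        counts[row["region"]] = counts.get(row["region"], 0) + 1
    return counts

def customer_profile(customer_id, requester_role):
    """Restricted data service: returns a view filtered by the caller's role."""
    for row in _CUSTOMER_TABLE:
        if row["id"] == customer_id:
            if requester_role == "support":
                return {"id": row["id"], "name": row["name"]}  # email withheld
            raise PermissionError("role not entitled to this view")
    return None

print(customers_per_region())  # {'EU': 1, 'US': 1}
```

The point of the design is that consumers — inside or outside the enterprise — can only ask the questions the service chooses to answer, which is what keeps “mash it together with somebody else’s data” from becoming “run joins across all my tables.”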

Gardner: Dave, do you agree with that, or do you think that there is a place in the Cloud for some data?

Gilmour: There’s definitely a place in the Cloud for some data. I get the impression that something like the insurance industry is going to emerge out of this, where you’ll have a secondary Cloud. You’ll have secondary providers who will provide to the front-end providers. They might do things like archiving and that sort of thing.

Now, if you have that situation where your contractual relationship is two steps away, then you have to be very confident and certain of your cloud partner, and it has to actually therefore encompass a very strong level of governance.

The other issue you have is that you’ve got then the intersection of your governance requirements with that of the cloud provider’s governance requirements. Therefore you have to have a really strongly — and I hate to use the word — architected set of interfaces, so that you can understand how that governance is actually going to operate.

Gardner: Wouldn’t data perhaps be safer in a cloud than if they have a poorly managed network?

Mezzapelle: There is data in the Cloud and there will continue to be data in the Cloud, whether you want it there or not. The best organizations are going to start understanding that they can’t control it with the perimeter-like approach that we’ve been talking about getting away from for the last five or seven years.

So what we want to talk about is data-centric security, where you understand, based on role or context, who is going to access the information and for what reason. I think there is a better opportunity for services like storage, whether it’s for archiving or for near term use.

There are also other services that you don’t want to have to pay for 12 months out of the year, but that you might need independently. For instance, when you’re running a marketing campaign, you already share your data with some of your marketing partners. Or if you’re doing your payroll, you’re sharing that data through some of the national providers.

Data in different places

So there already is a lot of data in a lot of different places, whether you want Cloud or not, but the context is, it’s not in your perimeter, under your direct control, all of the time. The better you get at managing it wherever it is specific to the context, the better off you will be.
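Mezzapelle’s data-centric model — deciding access per request from who is asking and in what context, rather than from where the request enters the network — might be sketched like this. The labels, roles, and policy table are illustrative assumptions, not a real product’s API.

```python
# Sketch of a data-centric access decision: the check travels with the data
# request, not with the network perimeter. Policy entries are illustrative.

POLICY = {
    # (data sensitivity label, requester role) -> contexts where access is allowed
    ("payroll", "hr"):        {"corporate-network", "managed-device"},
    ("payroll", "auditor"):   {"corporate-network"},
    ("marketing", "partner"): {"corporate-network", "partner-portal"},
}

def may_access(label, role, context):
    """Allow only if this role may see this data label in this context."""
    return context in POLICY.get((label, role), set())

print(may_access("payroll", "hr", "managed-device"))       # True
print(may_access("payroll", "partner", "partner-portal"))  # False
```

Because the decision depends only on the data’s label and the requester’s role and context, the same check works whether the data sits inside the perimeter, at a marketing partner, or with a payroll provider.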

Hietala: It’s a slippery slope [when it comes to customer data]. That’s the most dangerous data to stick out in a Cloud service, if you ask me. If it’s personally identifiable information, then you get the privacy concerns that Stuart talked about. So to the extent you’re looking at putting that kind of data in a Cloud, look at the Cloud service and try to determine whether you can apply encryption and sensible security controls to ensure that if that data gets loose, you don’t end up in the headlines of The Wall Street Journal.
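One concrete control of the kind Hietala describes is pseudonymizing direct identifiers before records leave your perimeter, sketched here with keyed hashing (HMAC) from the Python standard library. This is an illustrative sketch: the key and record are invented, and for fields you need to read back in the clear you would use real symmetric encryption via a vetted library rather than hashing.

```python
import hashlib
import hmac

# Sketch: replace direct identifiers with keyed hashes before sending records
# to a cloud service. The key stays on-premises; without it, the tokens can't
# be linked back to a person. (Hashing is one-way -- for reversible fields
# you'd use real encryption instead.)

SECRET_KEY = b"keep-this-on-premises"  # illustrative only

def pseudonymize(value: str) -> str:
    """Deterministic keyed token for a PII field."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "alice@example.com", "purchase_total": 129.95}
cloud_safe = {
    "email": pseudonymize(record["email"]),  # token, not the address
    "purchase_total": record["purchase_total"],
}

# The same input always maps to the same token, so joins and analytics in the
# cloud still work without exposing the identifier itself.
assert cloud_safe["email"] == pseudonymize("alice@example.com")
```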

Gardner: Dave, you said there will be different levels on a regulatory basis for security. Wouldn’t that also play with data? Wouldn’t there be different types of data and therefore a spectrum of security and availability to that data?

Gilmour: You’re right. If we come back to Facebook as an example, even if it’s data about our known customers, it’s stuff that they have put out there of their own free will. The data that they give us, they have given to us for a purpose, and it is not for us then to distribute that data or make it available elsewhere. The fact that it may be the same data is not relevant to the discussion.

Three-dimensional solution

That’s where I think we are going to end up with not just one layer or two layers. We’re going to end up with a sort of a three-dimensional solution space. We’re going to work out exactly which chunk we’re going to handle in which way. There will be significant areas where these things crossover.

The other thing we shouldn’t forget is that data includes our software, and that’s something that people forget. Software nowadays is out in the Cloud, under current ways of running things, and you don’t even always know where it’s executing. So if you don’t know where your software is executing, how do you know where your data is?

It’s going to have to be just handled one way or another, and I think it’s going to be one of these things where it’s going to be shades of gray, because it cannot be black and white. The question is going to be, what’s the threshold shade of gray that’s acceptable?

Gardner: Mary Ann, to this notion of the different layers of security for different types of data, is there anything happening in the market that you’re aware of that’s already moving in that direction?

Mezzapelle: The experience that I have is mostly in some of the business frameworks for particular industries, like healthcare and what it takes to comply with the HIPAA regulation, or in the financial services industry, or in consumer products where you have to comply with the PCI regulations.

There has continued to be an issue around information lifecycle management, which is categorizing your data. Within a company, you might have had a document that you coded private, confidential, top secret, or whatever. So you might have had three or four levels for a document.

You’ve already talked about how complex it’s going to be as you move into trying to understand, not only for that data, that the name Mary Ann Mezzapelle happens to be in five or six different business systems, in over 100 instances around the world.

That’s the importance of something like an Enterprise Architecture that can help you understand that you’re not just talking about the technology components, but the information, what they mean, and how they are prioritized or critical to the business, which sometimes comes up in a business continuity plan from a system point of view. That’s where I’ve advised clients on where they might start looking to how they connect the business criticality with a piece of information.

One last thing. Those regulations don’t necessarily mean that you’re secure. They make for good basic health, but that doesn’t mean that it’s ultimately protected. You have to do a risk assessment based on your own environment and the bad actors that you expect and the priorities based on that.
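The classification levels Mezzapelle mentions (private, confidential, top secret, and so on) only become useful when handling rules can compare against them. A minimal sketch, with level names that are examples only:

```python
from enum import IntEnum

# Minimal sketch of document classification with ordered levels, so that
# policy checks can compare sensitivities. Level names are illustrative.

class Classification(IntEnum):
    PUBLIC = 0
    PRIVATE = 1
    CONFIDENTIAL = 2
    TOP_SECRET = 3

def cleared_for(doc_level: Classification, clearance: Classification) -> bool:
    """A reader may open a document at or below their clearance level."""
    return clearance >= doc_level

print(cleared_for(Classification.CONFIDENTIAL, Classification.PRIVATE))  # False
print(cleared_for(Classification.PRIVATE, Classification.CONFIDENTIAL))  # True
```

Ordering the levels is what lets the same label follow a piece of information across the “five or six different business systems” she describes, instead of each system re-deciding sensitivity on its own.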

Leaving security to the end

Boardman: I just wanted to pick up here, because Mary Ann spoke about Enterprise Architecture. One of my bugbears — and I call myself an enterprise architect — is that, we have a terrible habit of leaving security to the end. We don’t architect security into our Enterprise Architecture. It’s a techie thing, and we’ll fix that at the back. There are also people in the security world who are techies and they think that they will do it that way as well.

I don’t know how long ago it was published, but there was an activity to look at bringing the SABSA Methodology from security together with TOGAF®. There was a white paper published a few weeks ago.

The Open Group has been doing some really good work on bringing security right in to the process of EA.

Hietala: In the next version of TOGAF, which has already started, there will be a whole emphasis on making sure that security is better represented in some of the TOGAF guidance. That’s ongoing work here at The Open Group.

Gardner: As I listen, it sounds as if the “in the Cloud or out of the Cloud” security continuum is perhaps the wrong way to look at it. If you have a lifecycle approach to services and to data, then you’ll have a way in which you can approach data uses for certain instances and certain requirements, and that would then apply to a variety of private Cloud, public Cloud, and hybrid Cloud scenarios.

Is that where we need to go, perhaps have more of this lifecycle approach to services and data that would accommodate any number of different scenarios in terms of hosting access and availability? The Cloud seems inevitable. So what we really need to focus on are the services and the data.

Boardman: That’s part of it. That needs to be tied in with the risk-based approach. So if we have done that, we can then pick up on that information and we can look at a concrete situation, what have we got here, what do we want to do with it. We can then compare that information. We can assess our risk based on what we have done around the lifecycle. We can understand specifically what we might be thinking about putting where and come up with a sensible risk approach.

You may come to the conclusion in some cases that the risk is too high and the mitigation too expensive. In others, you may say, no, because we understand our information and we understand the risk situation, we can live with that, it’s fine.

Gardner: It sounds as if we are coming at this as an underwriter for an insurance company. Is that the way to look at it?

Current risk

Gilmour: That’s eminently sensible. You have the mortality tables, you have the current risk, and you just work the two together and work out what’s the premium. That’s probably a very good paradigm to give us guidance actually as to how we should approach intellectually the problem.

Mezzapelle: One of the problems is that we don’t have those actuarial tables yet. That’s a little bit of an issue for a lot of people when they talk about, “I’ve got $100 to spend on security. Where am I going to spend it this year? Am I going to spend it on firewalls? Am I going to spend it on information lifecycle management assessment? What am I going to spend it on?” Some of the research we have been doing at HP tries to get that into something that’s more of a statistic.

So, when you have a particular project that does a certain kind of security implementation, you can see what the business return on it is and how it actually lowers risk. We found that it’s better to spend your money on getting a better system to patch your systems than it is to do some other kind of content filtering or something like that.
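The budget question Mezzapelle raises (“$100 to spend, where?”) is often framed with annualized loss expectancy: ALE = expected incidents per year × loss per incident, and a control’s benefit is the ALE it removes. A toy comparison in that spirit, with made-up figures rather than HP research data:

```python
# Toy annualized-loss-expectancy (ALE) comparison for two security
# investments. All rates and dollar figures are illustrative.

def ale(annual_rate: float, loss_per_incident: float) -> float:
    """Annualized Loss Expectancy = expected incidents/year * cost/incident."""
    return annual_rate * loss_per_incident

def risk_reduction(before_rate: float, after_rate: float, loss: float) -> float:
    """ALE removed by a control that lowers the incident rate."""
    return ale(before_rate, loss) - ale(after_rate, loss)

# Option A: better patching halves exploit incidents (2.0 -> 1.0 per year).
# Option B: content filtering trims malware incidents (0.5 -> 0.4 per year).
benefit_patching = risk_reduction(2.0, 1.0, 50_000)   # 50000.0
benefit_filtering = risk_reduction(0.5, 0.4, 80_000)  # 8000.0

print(benefit_patching > benefit_filtering)  # True
```

With numbers like these, the patching investment removes far more expected loss per dollar, which is the shape of the conclusion Mezzapelle describes; the hard part in practice is getting defensible rate and loss estimates, the “actuarial tables” the panel says don’t yet exist.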

Gardner: Perhaps what we need is the equivalent of an Underwriters Laboratories (UL) for permeable organizational IT assets, where the security stamp of approval comes in high or low. Then you could get your insurance insight. Maybe that’s something for The Open Group to look into. Any thoughts about how standards and a consortium approach would come into that?

Hietala: I don’t know about the UL for all security things. That sounds like a risky proposition.

Gardner: It could be fairly popular and remunerative.

Hietala: It could.

Mezzapelle: An unending job.

Hietala: I will say we have one active project in the Security Forum that is looking at trying to allow organizations to measure and understand risk dependencies that they inherit from other organizations.

So if I’m outsourcing a function to XYZ corporation, being able to measure what risk I am inheriting from them by virtue of them doing some IT processing for me. It could be a Cloud provider, or it could be somebody doing a business process for me, whatever. So there’s work going on there.

I heard just last week about an NSF-funded project here in the U.S. to do the same sort of thing, to look at trying to measure risk in a predictable way. So there are things going on out there.

Gardner: We have to wrap up, I’m afraid, but Stuart, it seems as if currently it’s the larger public Cloud providers, the likes of Amazon and Google, among others, that might be playing the role of all of these entities we are talking about. They are their own self-insurer. They are their own underwriter. They are their own risk assessor, like a UL. Do you think that’s going to continue to be the case?

Boardman: No, I think that as Cloud adoption increases, you will have a greater weight of consumer organizations who will need to do that themselves. You look at the question that it’s not just responsibility, but it’s also accountability. At the end of the day, you’re always accountable for the data that you hold. It doesn’t matter where you put it and how many other parties they subcontract that out to.

The weight will change

So there’s a need to have that, and as the adoption increases, there’s less fear and more, “Let’s do something about it.” Then, I think the weight will change.

Plus, of course, there are other parties coming into this world, the world that Amazon has created. I’d imagine that HP is probably one of them as well, but all the big names in IT are moving in here, and I suspect that also for those companies there’s a differentiator in knowing how to do this properly in their history of enterprise involvement.

So yeah, I think it will change. That’s no offense to Amazon, etc. I just think that the balance is going to change.

Gilmour: Yes. I think that’s how it has to go. The question that then arises is, who is going to police the policeman and how is that going to happen? Every company is going to be using the Cloud. Even the Cloud suppliers are using the Cloud. So how is it going to work? It’s one of these ever-decreasing circles.

Mezzapelle: At this point, I think it’s going to be more evolution than revolution, but I’m also one of the people who’ve been in that part of the business — IT services — for the last 20 years and have seen it morph in a little bit different way.

Stuart is right that there’s going to be a convergence of the consumer-driven, cloud-based model, which Amazon and Google represent, with an enterprise approach that corporations like HP are representing. It’s somewhere in the middle where we can bring the service level commitments, the options for security, the options for other things that make it more reliable and risk-averse for large corporations to take advantage of it.

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

San Francisco Conference Observations: Enterprise Transformation, Enterprise Architecture, SOA and a Splash of Cloud Computing

By Chris Harding, The Open Group 

This week I have been at The Open Group conference in San Francisco. The theme was Enterprise Transformation which, in simple terms, means changing how your business works to take advantage of the latest developments in IT.

Evidence of these developments is all around. I took a break and went for coffee and a sandwich, to a little cafe down on Pine and Leavenworth that seemed to be run by and for the Millennial generation. True to type, my server pulled out a cellphone with a device attached through which I swiped my credit card; an app read my screen-scrawled signature and the transaction was complete.

Then dinner. We spoke to the hotel concierge, she tapped a few keys on her terminal and, hey presto, we had a window table at a restaurant on Fisherman’s Wharf. No lengthy phone negotiations with the Maitre d’. We were just connected with the resource that we needed, quickly and efficiently.

The power of ubiquitous technology to transform the enterprise was the theme of the inspirational plenary presentation given by Andy Mulholland, Global CTO at Capgemini. Mobility, the Cloud, and big data are the three powerful technical forces that must be harnessed by the architect to move the business to smarter operation and new markets.

Jeanne Ross of the MIT Sloan School of Management shared her recipe for architecting business success, with examples drawn from several major companies. Indomitable and inimitable, she always challenges her audience to think through the issues. This time we responded with, “Don’t small companies need architecture too?” Of course they do, was the answer, but the architecture of a big corporation is very different from that of a corner cafe.

Corporations don’t come much bigger than Nissan. Celso Guiotoko, Corporate VP and CIO at the Nissan Motor Company, told us how Nissan are using enterprise architecture for business transformation. Highlights included the concept of information capitalization, the rationalization of the application portfolio through SOA and reusable services, and the delivery of technology resource through a private cloud platform.

The set of stimulating plenary presentations on the first day of the conference was completed by Lauren States, VP and CTO Cloud Computing and Growth Initiatives at IBM. Everyone now expects business results from technical change, and there is huge pressure on the people involved to deliver results that meet these expectations. IT enablement is one part of the answer, but it must be matched by business process excellence and values-based culture for real productivity and growth.

My role in The Open Group is to support our work on Cloud Computing and SOA, and these activities took all my attention after the initial plenary. If you had thought, five years ago, that no technical trend could possibly generate more interest and excitement than SOA, Cloud Computing would now be proving you wrong.

But interest in SOA continues, and we had a SOA stream including presentations of forward thinking on how to use SOA to deliver agility, and on SOA governance, as well as presentations describing and explaining the use of key Open Group SOA standards and guides: the Open Group Service Integration Maturity Model (OSIMM), the SOA Reference Architecture, and the Guide to using TOGAF for SOA.

We then moved into the Cloud, with a presentation by Mike Walker of Microsoft on why Enterprise Architecture must lead Cloud strategy and planning. The “why” was followed by the “how”: Zapthink’s Jason Bloomberg described Representational State Transfer (REST), which many now see as a key foundational principle for Cloud architecture. But perhaps it is not the only principle; a later presentation suggested a three-tier approach with the client tier, including mobile devices, accessing RESTful information resources through a middle tier of agents that compose resources and carry out transactions (ACT).

In the evening we had a CloudCamp, hosted by The Open Group and conducted as a separate event by the CloudCamp organization. The original CloudCamp concept was of an “unconference” where early adopters of Cloud Computing technologies exchange ideas. Its founder, Dave Nielsen, is now planning to set up a demo center where those adopters can experiment with setting up private clouds. This transition from idea to experiment reflects the changing status of mainstream cloud adoption.

The public conference streams were followed by a meeting of the Open Group Cloud Computing Work Group. This is currently pursuing nine separate projects to develop standards and guidance for architects using cloud computing. The meeting in San Francisco focused on one of these – the Cloud Computing Reference Architecture. It compared submissions from five companies, also taking into account ongoing work at the U.S. National Institute of Standards and Technology (NIST), with the aim of creating a base from which to create an Open Group reference architecture for Cloud Computing. This gave a productive finish to a busy week of information gathering and discussion.

Ralph Hitz of Visana, a health insurance company based in Switzerland, made an interesting comment on our reference architecture discussion. He remarked that we were not seeking to change or evolve the NIST service and deployment models. This may seem boring, but it is true, and it is right. Cloud Computing is now where the automobile was in 1920. We are pretty much agreed that it will have four wheels and be powered by gasoline. The business and economic impact is yet to come.

So now I’m on my way to the airport for the flight home. I checked in online, and my boarding pass is on my cellphone. Big companies, as well as small ones, now routinely use mobile technology, and my airline has a frequent-flyer app. It’s just a shame that they can’t manage a decent cup of coffee.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF practitioner.

Comments Off

Filed under Cloud, Cloud/SOA, Conference, Enterprise Architecture, Enterprise Transformation, Service Oriented Architecture, Standards

Setting Expectations and Working within Existing Structures the Dominant Themes for Day 3 of San Francisco Conference

By The Open Group Conference Team

Yesterday concluded The Open Group Conference San Francisco. Key themes that stood out on Day 3, as well as throughout the conference, included the need for a better understanding of business expectations and existing structures.

Jason Bloomberg, president of ZapThink, began his presentation by using an illustration of a plate of spaghetti and drawing an analogy to Cloud Computing. He compared spaghetti to legacy applications and displayed the way that enterprises are currently moving to the Cloud – by taking the plate of spaghetti and physically putting it in the Cloud.

A lot of companies that have adopted Cloud Computing have done so without a comprehensive understanding of their current organization and enterprise assets, according to Mr. Bloomberg. A legacy application that is not engineered to operate in the Cloud will not yield the hyped benefits of elasticity and infinite scalability. And Cloud adoption without well-thought-out objectives will never reach the vague goals of “better ROI” or “reduced costs.”

Mr. Bloomberg urged the audience to start with the business problem in order to understand what the right adoption will be for your enterprise. He argued that it’s crucial to think about the question “What does your application require?” Do you require scalability? Elasticity? A private, public or hybrid Cloud? Without knowing a business’s expected outcomes, enterprise architects will be hard-pressed to help them achieve their goals.

Understand your environment

Chris Lockhart, consultant at Working Title Management & Technology Consultants, shared his experiences helping a Fortune 25 company with an outdated technology model support Cloud-centric services. Lockhart noted that for many large companies, the Cloud has been the fix-it solution for poorly architected enterprises. But oftentimes, after the business tells architects to build a model for Cloud adoption, the plan presented and the business expectations do not align.

After working on this project Mr. Lockhart learned that the greatest problem for architects is “people with unset and unmanaged expectations.” After the Enterprise Architecture team realized that they had limited power with their recommendations and strategic roadmaps, they acted as negotiators, often facilitating communication between different departments within the business. This is where architects began to display their true value to the organization, illustrated by the following statement made by a business executive within the organization: “Architects are seen as being balanced and rounded individuals who combine a creative approach with a caring, thoughtful disposition.”

The key takeaways from Mr. Lockhart’s experience were:

  • Recognize the limitations
  • Use the same language
  • Work within existing structures
  • Frameworks and models are important to a certain extent
  • Don’t talk products
  • Leave architectural purity in the ivory tower
  • Don’t dictate – low threat level works better
  • Recognize that EA doesn’t know everything
  • Most of the work was dealing with people, not technology

Understand your Cloud Perspective

Steve Bennett, senior enterprise architect at Oracle, discussed the best way to approach Cloud Computing in his session, entitled “A Pragmatic Approach to Cloud Computing.” While architects understand and create value-driven approaches, most customers simply don’t think this way, Mr. Bennett said. Often the business side of the enterprise hears about the revolutionary benefits of the Cloud, but it usually doesn’t take a pragmatic approach to implementing it.

Mr. Bennett went on to compare two types of Cloud adopters – the “Dilberts” and the “Neos” (from The Matrix). Dilberts often pursue monetary savings when moving to the Cloud and are late adopters, while Neos pursue business agility and can be described as early adopters – again highlighting the importance of understanding who is driving the implementation before architecting a plan.

Comments Off

Filed under Cloud, Cloud/SOA, Cybersecurity, Enterprise Architecture, Enterprise Transformation

The Open Group San Francisco Conference: Day 1 Highlights

By The Open Group Conference Team

With the end of the first day of the conference, here are a few key takeaways from Monday’s keynote sessions:

The Enterprise Architect: Architecting Business Success

Jeanne Ross, Director & Principal Research Scientist, MIT Center for Information Systems Research

Ms. Ross began the plenary discussing the impact of enterprise architecture on the whole enterprise. According to Ms. Ross, “we live in a digital economy, and in order to succeed, we need to excel in enterprise architecture.” She went on to say that the current “plan, build, use” model has led to a lot of application silos. Ms. Ross also mentioned that enablement doesn’t work well; while capabilities are being built, they are grossly underutilized within most organizations.

Enterprise architects need to think about what capabilities their firms will exploit – in both the short and long term. Ms. Ross went on to present case studies from Aetna, Protection 1, USAA, Pepsi America and the Commonwealth of Australia. In each of these examples, architects provided the following business value:

  • Helped senior executives clarify business goals
  • Identified architectural capability that can be readily exploited
  • Presented options and their implications for business goals
  • Built capabilities incrementally

A well-received quote from Ms. Ross during the Q&A portion of the session was, “Someday, CIOs will report to EA – that’s the way it ought to be!”

How Enterprise Architecture is Helping Nissan IT Transformation

Celso Guiotoko, Corporate Vice President and CIO, Nissan Motor Co., Ltd.

Mr. Guiotoko presented the steps that Nissan took to improve the efficiency of its information systems. The company adopted BEST, an IT mid-term plan that helped lead enterprise transformation within the organization. BEST comprised the following components:

  • Business Alignment
  • Enterprise Architecture
  • Selective Sourcing
  • Technology Simplification

Guided by BEST and led by strong Enterprise Architecture, Nissan saw the following results:

  • Reduced cost per user from 1.09 to 0.63
  • 230,000 return with 404 applications reduced
  • Improved solution deployment time
  • Significantly reduced hardware costs

Nissan recently created the next IT mid-term plan called “VITESSE,” which stands for value information, technology, simplification and service excellence. Mr. Guiotoko said that VITESSE will help the company achieve its IT and business goals as it moves toward the production of zero-emissions vehicles.

The Transformed Enterprise

Andy Mulholland, Global CTO, Capgemini

Mr. Mulholland began the presentation by discussing what parts of technology comprise today’s enterprise and asking the question, “What needs to be done to integrate these together?” Enterprise technology is changing rapidly, and the consumerization of IT is only increasing. Mr. Mulholland presented a statistic from Gartner predicting that up to 35 percent of enterprise IT expenditures will be managed outside of the IT department’s budget by 2015. He then referenced the PC revolution, when enterprises were too slow to adapt to employees’ needs and adoption of technology.

There are three core technology clusters and standards that are emerging today in the form of Cloud, mobility and big data, but there are no business process standards to govern them. In order not to repeat the mistakes of the PC revolution, organizations need to move from an inside-out model – starting with the activities and problems within the enterprise and looking outward – to an outside-in model that looks at those problems from the outside in. Outside-in, Mulholland argued, will increase productivity and lead to innovative business models, ultimately enabling your enterprise to keep up with current technology trends.

Making Business Drive IT Transformation through Enterprise Architecture

Lauren States, VP & CTO of Cloud Computing and Growth Initiatives, IBM Corp.

Ms. States began her presentation by describing today’s enterprise – flat, transparent and collaborative. In order to empower this emerging type of enterprise, she argued that CEOs need to consider data a strategic initiative.

Giving the example of the CMO within the enterprise to reflect how changing technologies affect that role, she stated, “CMOs are overwhelmingly underprepared for the data explosion and recognize a need to invest in and integrate technology and analytics.” CIOs and architects need to use business goals and strategy to set the expectations of IT. Ms. States also said that organizations need to focus on enabling growth, productivity and cultural change – factors that are all related and lead to enterprise transformation.

*********

The conference will continue tomorrow with overarching themes that include enterprise transformation, security and SOA. For more information about the conference, please go here: http://www3.opengroup.org/sanfrancisco2012

Comments Off

Filed under Cloud, Cloud/SOA, Data management, Enterprise Architecture, Enterprise Transformation, Semantic Interoperability, Standards

OSIMM Goes de Jure: The First International Standards on SOA

By Heather Kreger, CTO International Standards, IBM

I was very excited to see OSIMM pass its ratification vote within the International Organization for Standardization (ISO) on January 8, 2012, becoming the first International Standard on SOA. This is the culmination of a two-year process that I’ve been driving for The Open Group in ISO/IEC JTC1. Having the OSIMM standard recognized globally is a huge validation of the work that The Open Group and the SOA Work Group have been doing over the past few years since OSIMM first became an Open Group standard in 2009. Even though the process for international standard ratification is a lengthy one, it has been worth the effort, and we’ve already submitted additional Open Group standards to ISO. For those of you interested in the process, read on…

How it works

In order for OSIMM to become an international standard, The Open Group had to first be approved as an “Approved Reference Organization” and “Publicly Available Specification” (PAS) Submitter, in a vote by every JTC1 country.

What does this REALLY mean? It means that Open Group standards can be referenced by international standards, and that The Open Group can submit standards to ISO/IEC and ask for them to follow the PAS process, which ratifies standards as submitted as International Standards if they pass the international vote. Each country votes and comments on the specification, and if there are comments, there is a ballot resolution meeting, with potentially an update to the submitted specification. This all sounds straightforward until you mix in The Open Group’s timeline for approving updates to standards with the JTC1 process. In the end, this takes about a year.

Why drag you through this? I just wanted you to appreciate what an accomplishment OSIMM V2 (ISO/IEC 16680) is for The Open Group. The SOA Governance Framework Standard is now following the same process. The SOA Ontology and new SOA Reference Architecture Standards have also been submitted to ISO’s SOA Work Group (in SC38) as input to the normal working group process.

The OSIMM benefit

Let’s also revisit OSIMM, since it’s been a while since OSIMM V1 was first standardized in 2009. OSIMM V2 is technically equivalent to OSIMM V1, although we made some clarifications to answer comments from the PAS process and added an appendix positioning OSIMM relative to the other maturity models in ISO/IEC JTC1.

OSIMM leverages proven best practices to allow consultants and IT practitioners to assess an organization’s readiness and maturity level for adopting services in SOA and Cloud solutions. It defines a process to create a roadmap for incremental adoption that maximizes business benefits at each stage along the way. The model consists of seven levels of maturity and seven dimensions of consideration that represent significant views of business and IT capabilities where the application of SOA principles is essential for the deployment of services. OSIMM acts as a quantitative model to aid in assessment of the current state and desired future state of SOA maturity. OSIMM also provides an extensible framework for understanding the value of implementing a service model, as well as a comprehensive guide for achieving an organization’s desired level of service maturity.

There are a couple of things I REALLY like about OSIMM, especially for those new to SOA:

First, it’s an easy, visual way to grasp the full breadth of what SOA is – from no services, to simple, single, hand-developed services, to dynamically created services. In fact, the first three levels of maturity are “pre-services” approaches we all know and use (i.e., object-oriented and component-based). With this, everyone can find what they are using… even if they are not using services at all.

Second, it’s a self-assessment. You use it to gauge your own use of services today and where you want to be. You can reassess to “track” your progress (sort of like weight loss) in employing services. Because you have to customize the indicators, and the weights of the maturity scores will differ according to what is important to your company, it doesn’t make sense to compare scores between two companies. In addition, every company has a different target goal. So, no, sorry, you cannot brag that you are more mature than your arch competitor! However, some of the process assessments in ISO/IEC SC7 ARE for just that, so check out the OSIMM appendix for links and pointers!

Which brings me to my third point–there is no “right” level of maturity. The most mature level doesn’t make sense for most companies.  OSIMM is a great tool to force your business and IT staff into a discussion to agree together on what the current level is and what the right level is for them – everyone on the same page.

Finally, it’s flexible. You can add indicators and adjust weightings to make it an accurate reflection of the needs of your business AND IT departments. You can skip levels and be at different levels of maturity for different business dimensions. You work on advancing the use of services in the dimension that gives you the most business value; you don’t have to give them all “equal attention” or get them to the same level.
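That scoring scheme – custom indicators per dimension, weighted by what matters to your organization, with a separate target level for each dimension – can be sketched in a few lines. The seven dimension names below follow the standard, but the indicator scores, weights, and targets are invented purely for illustration; a real assessment would substitute your own.

```python
# Illustrative OSIMM-style self-assessment. Dimension names follow the
# standard; every score, weight, and target here is a made-up example.

# Each dimension gets a list of (indicator score 1-7, weight) pairs,
# where 1 = Silo and 7 = Dynamically Re-Configurable Services.
assessment = {
    "Business":                    [(5, 0.6), (4, 0.4)],
    "Organization & Governance":   [(3, 1.0)],
    "Method":                      [(4, 0.5), (5, 0.5)],
    "Application":                 [(4, 0.7), (3, 0.3)],
    "Architecture":                [(5, 1.0)],
    "Information":                 [(2, 0.5), (4, 0.5)],
    "Infrastructure & Management": [(3, 0.8), (5, 0.2)],
}

def dimension_maturity(indicators):
    """Weighted average of indicator scores (1-7) for one dimension."""
    total_weight = sum(weight for _, weight in indicators)
    return sum(score * weight for score, weight in indicators) / total_weight

def assess(current, targets):
    """Report each dimension's current maturity against its own target."""
    for dimension, indicators in current.items():
        level = dimension_maturity(indicators)
        gap = targets[dimension] - level
        print(f"{dimension}: {level:.1f} (target {targets[dimension]}, gap {gap:+.1f})")

# Targets differ per dimension -- OSIMM does not demand equal attention,
# and the most mature level is rarely the right goal.
targets = {
    "Business": 5, "Organization & Governance": 4, "Method": 5,
    "Application": 4, "Architecture": 6, "Information": 4,
    "Infrastructure & Management": 4,
}
assess(assessment, targets)
```

Reassessing later with the same indicators and weights gives the “weight loss” style progress tracking described above; changing the weights between runs would make the scores incomparable, which is exactly why scores can’t be compared between companies.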

Resources

The following resources are available if you are interested in learning more about the OSIMM V2 Standard:

IBM is also presenting next week during The Open Group Conference in San Francisco, which will discuss how to extend OSIMM for your organization.

Heather Kreger is IBM’s lead architect for Smarter Planet, Policy, and SOA Standards in the IBM Software Group, with 15 years of standards experience. She has led the development of standards for Cloud, SOA, Web services, Management and Java in numerous standards organizations, including W3C, OASIS, DMTF, and The Open Group. Heather is currently co-chair for The Open Group’s SOA Work Group and liaison for the Open Group SOA and Cloud Work Groups to ISO/IEC JTC1 SC7 SOA SG and INCITS DAPS38 (US TAG to ISO/IEC JTC1 SC38). Heather is also the author of numerous articles and specifications, as well as the book Java and JMX: Building Manageable Systems, and most recently was co-editor of Navigating the SOA Open Standards Landscape Around Architecture.

1 Comment

Filed under Cloud/SOA, Service Oriented Architecture, Standards

SOCCI: Behind the Scenes

By E.G. Nadhan, HP

Cloud Computing standards, like other standards, go through a series of evolutionary phases similar to the ones I outlined in the Top 5 phases of IaaS standards evolution. IaaS standards, in particular, take longer than their SaaS and PaaS counterparts because a balance is required between service-orientation and the core infrastructure components in Cloud Computing.

This balance is why today’s announcement of the release of the industry’s first technical standard, Service Oriented Cloud Computing Infrastructure (SOCCI), is significant.

As one of the co-chairs of this project, here is some insight into the manner in which The Open Group went about creating the definition of this standard:

  • Step One: Identify the key characteristics of service orientation, as well as those for the cloud as defined by the National Institute of Standards and Technology (NIST). Analyze these characteristics and the resulting synergies through the application of service orientation in the cloud. Compare and contrast their evolution from the traditional environment through service orientation to the Cloud.
  • Step Two: Identify the key architectural building blocks that enable the Operational Systems Layer of the SOA Reference Architecture and the Cloud Reference Architecture that is in progress.
  • Step Three: Map these building blocks across the architectural layers while representing the multi-faceted perspectives of various viewpoints including those of the consumer, provider and developer.
  • Step Four: Define a Motor Cars in the Cloud business scenario: You, the consumer, are downloading auto-racing videos through an environment managed by a Service Integrator, which requires the use of services for software, platform and infrastructure along with traditional technologies. Provide a behind-the-curtains perspective on the business scenario where the SOCCI building blocks slowly but steadily come to life.
  • Step Five: Identify the key connection points with the other Open Group projects in the areas of architecture, business use cases, governance and security.

The real test of a standard is in its breadth of adoption. This standard can be used in multiple ways by the industry at large in order to ensure that the architectural nuances are comprehensively addressed. It could be used to map existing Cloud-based deployments to a standard architectural template. It can also serve as an excellent set of Cloud-based building blocks that can be used to build out a new architecture.

Have you taken a look at this standard? If not, please do so. If so, where and how do you think this standard could be adopted? Are there ways that the standard can be improved in future releases to make it better suited for broader adoption? Please let me know your thoughts.

This blog post was originally posted on HP’s Grounded in the Cloud Blog.

HP Distinguished Technologist, E.G.Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project.

Comments Off

Filed under Cloud, Cloud/SOA, Semantic Interoperability, Service Oriented Architecture, Standards

First Technical Standard for Cloud Computing – SOCCI

By E.G. Nadhan, HP

The Open Group just announced the availability of its first Technical Standard for the Cloud - Service Oriented Cloud Computing Infrastructure Framework (SOCCI), which outlines the concepts and architectural building blocks necessary for infrastructures to support SOA and Cloud initiatives. HP has played a leadership role in the definition and evolution of this standard within The Open Group.


As a platinum member of The Open Group, HP’s involvement started with the leadership of the Service Oriented Infrastructure project that I helped co-chair. As the Cloud Computing Working Group started taking shape, I suggested expanding this project into the working group, which resulted in the formation of the Service Oriented Cloud Computing Infrastructure project. This project was co-chaired by Tina Abdollah of IBM and myself and operated under the auspices of both the SOA and Cloud Computing Working Groups.

Infrastructure has been traditionally provisioned in a physical manner. With the evolution of virtualization technologies and application of service-orientation to infrastructure, it can now be offered as a service. SOCCI is the realization of an enabling framework of service-oriented components for infrastructure to be provided as a service in the cloud.

“Service Oriented Cloud Computing Infrastructure (SOCCI) is a classic intersection of multiple paradigms in the industry – infrastructure virtualization, service-orientation and the cloud – an inevitable convergence,” said Tom Hall, Global Product Marketing Manager, Cloud and SOA Applications, HP Enterprise Services. “HP welcomes the release of the industry’s first cloud computing standard by The Open Group. This standard provides a strong foundation for HP and The Open Group to work together to evolve additional standards in the SOA and Cloud domains.”

This standard can be leveraged in one or more of the following ways:

  • Comprehend service orientation and Cloud synergies
  • Extend adoption of traditional and service-oriented infrastructure in the Cloud
  • Leverage consumer, provider and developer viewpoints
  • Incorporate SOCCI building blocks into Enterprise Architecture
  • Implement Cloud-based solutions using different infrastructure deployment models
  • Realize business solutions referencing the SOCCI Business Scenario
  • Apply Cloud governance considerations and recommendations

The Open Group also announced the availability of the SOA Reference Architecture, a blueprint for creating and evaluating SOA solutions.

Standards go through a series of evolutionary phases, as I outline in my post on the Evolution of IaaS standards. The announcement of the SOCCI Technical Standard will give some impetus to the evolution of IaaS standards in the Cloud, somewhere between the experience and consensus phases.

It was a very positive experience co-chairing the evolution of the SOCCI standard within The Open Group, working with member companies from several enterprises with varied perspectives.

Have you taken a look at this standard?  If not, please do so.  And for those who have, where and how do you think this standard could be adopted?  Are there ways that the standard can be improved in future releases to make it better suited for broader adoption?  Please let me know!

This blog post was originally posted on HP’s Enterprise Services Blog.

HP Distinguished Technologist, E.G.Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for the Open Group Cloud Computing Governance project.

1 Comment

Filed under Cloud, Cloud/SOA, Service Oriented Architecture, Standards

Enterprise Architects and Paradigm Shifts

By Stuart Boardman, KPN

It’s interesting looking back at what people have written over the course of the year and seeing which themes appear regularly in their blogs. I thought I’d do the same with my own posts for The Open Group and see whether I could pull some of it together. I saw that the recurring themes for me have been dealing with uncertainty, the changing nature of the enterprise and the influence of information technology from outside the enterprise – and all of this in relation to the practice of enterprise architecture. I also explored the mutual influences these themes have on each other.

Unsurprisingly I’m not alone in picking up on these themes. At the risk of offending anyone I don’t mention, I note that Serge Thorn, Raghuraman Krishnamurthy and Len Fehskens have given their own perspectives on The Open Group’s Blog on some or all of these themes. And of course there’s plenty of writing on these themes going on in the blogosphere at large. In one sense I think writing about this is part of a process of trying to understand what’s going on in the world.

After some reflection, it seems to me that all of this converges in what tends to be called “social business.” For better or worse, there is no fixed definition of the term. I would say it describes a way of working where, both within and across organizations, hierarchies and rules are being replaced by networks and collaboration. The concept of the enterprise in such a system is then definitively extended to include a whole ecosystem of customers and suppliers as well as investors and beneficiaries. Any one organization is just a part of the enterprise – a stakeholder. And of course the enterprise will look different depending on the viewpoint of a particular stakeholder. That should be a familiar concept anyway for an enterprise architect. That one participant can be a stakeholder in multiple enterprises is not really new – it’s just something we now have no choice but to take into account.

Within any one organization, social business means that creativity and strategy development take place at and across multiple levels. We can speak of networked, podular or fractal forms of organization. It also means a lot of other things with wider economic, social and political implications, but that’s not my focus here.

Another important aspect is the relationship with newer developments in information and communication technology. We can’t separate social business from the technology which has helped it to develop and which in turn is stimulated by its existence and demands. I don’t mean any one technology, and I won’t even insist on restricting it to information technology. But it’s clear that there is at least a high degree of synergy between newer IT developments and social business. In other words, the more an organization becomes a social business, the more its business will involve the use of information technology – not as a support function but as an essential part of how it does its business. Moreover, exactly this use of IT is not, and cannot be, (entirely) under its own control.

A social business therefore demonstrates, in all aspects of the enterprise, fuzzy boundaries and a higher level of what I call entropy (uncertainty, rate of change, sensitivity to change). It means we need new ways of dealing with complexity, which fortunately is a topic a lot of people are looking at. It means that simplicity is not in every case a desirable goal and that, scary as it may seem, we may actually need to encourage entropy (in some places) in order to develop the agility to respond to change – effectively and without making any unnecessary long term assumptions.

So, if indeed the world is evolving to such a state, what can enterprise architects do to help their own organizations become successful social businesses (social governments – whatever)?

Enterprise Architecture is a practice that is founded in communication. To support and add value to that communication we have developed analysis methods and frameworks, which help us model what we learn and, in turn, communicate the results. Enterprise Architects work across organizations to understand how the activities of the participants relate to the strategy of the organization and how the performance of each person/group’s activities can optimally support and reinforce everyone else’s. We don’t do their work for them and don’t, if we do our work properly, have any sectional interests. We are the ultimate generalists, specialized in bringing together all those aspects, in which other people are the experts. We’re therefore ideally placed to facilitate the development of a unified vision and a complementary set of practices. OK, that sounds a bit idealistic. We know reality is never perfect but, if we don’t have ideals, we’d be hypocrites to be doing this work anyway. Pragmatism and ideals can be a positive combination.

Yes, there’s plenty of work to do to adapt our models to this new reality. Our goals, the things we try to achieve with EA will not be different. In some significant aspects, the results will be – if only because of the scope and diversity of the enterprise. We’ll certainly need to produce some good example EA artifacts to show what these results will look like. I can see an obvious impact in business architecture and in governance – most likely other areas too. But the issues faced in governance may be similar to those being tackled by The Open Group’s Cloud Governance project. And business architecture is long due for expansion outside of the single organization, so there’s synergy there as well. We can also look outside of our own community for inspiration – in the area of complexity theory, in business modeling, in material about innovation and strategy development and in economic and even political thinking about social business.

We’ll also be faced with organizational challenges. EA has for too long and too often been seen as the property of the IT department. That’s always been a problem anyway, but to face the challenges of social business, EA must avoid the slightest whiff of sectional interest and IT centrism. And, ironically, the best hope for the IT department in this scary new world may come from letting go of what it does not need to control and taking on a new role as a positive enabler of change.

There could hardly be a more appropriate time to be working on TOGAF Next. What an opportunity!

Stuart Boardman is a Senior Business Consultant with KPN where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity. 

5 Comments

Filed under Business Architecture, Cloud, Cloud/SOA, Enterprise Architecture, Enterprise Transformation, Semantic Interoperability

Capgemini’s CTO on How Cloud Computing Exposes the Duality Between IT and Business Transformation

By Dana Gardner, Interarbor Solutions

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference this month in San Francisco.

The conference will focus on how IT and enterprise architecture support enterprise transformation. Speakers in conference events will also explore the latest in service oriented architecture (SOA), cloud computing, and security.

We’re now joined by one of the main speakers, Andy Mulholland, the Global Chief Technology Officer and Corporate Vice President at Capgemini. In 2009, Andy was voted one of the top 25 most influential CTOs in the world by InfoWorld. And in 2010, his CTO Blog was voted best blog for business managers and CIOs for the third year running by Computer Weekly.

Capgemini is about to publish a white paper on cloud computing. It draws distinctions between what cloud means to IT, and what it means to business — while examining the complex dual relationship between the two.

As a lead-in to his Open Group conference presentation on the transformed enterprise, Andy draws on the paper and further drills down on one of the decade’s hottest technology and business trends, cloud computing, and how it impacts business and IT. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Why do business people think they have a revolution on their hands, while IT people look at cloud computing as an evolution of infrastructure efficiency?

Mulholland: We define the role of IT and give it the responsibility and the accountability in the business in a way that is quite strongly related to internal practice. It’s all about how we manage the company’s transactions, how we reduce the cost, how we automate business process, and generally try to make our company a more efficient internal operator.

When you look at cloud computing through that set of lenses, you’re going to see … the technologies from cloud computing, principally virtualization, [as] ways to improve how you deliver the current server-centric, application-centric environment.

However, business people … reflect on it in terms of the change in society and the business world, which we all ought to recognize because that is our world, around the way we choose what we buy, how we choose to do business with people, how we search more, and how we’ve even changed that attitude.

Changed our ways

There’s a whole list of things that we simply just don’t do anymore because we’ve changed the way we choose to buy a book, the way we choose and listen to music and lots of other things.

So we see this as a revolution in the market or, more particularly, a revolution in how cloud can serve in the market, because everybody uses some form of technology.

So then the question is not the role of the IT department in the enterprise — it’s the role technology should be playing in the extended enterprise in doing business.

Gardner: What do we need to start doing differently?

Mulholland: Let’s go to a conversation this morning with a client. It’s always interesting to touch reality. This particular client is looking at the front end of a complex ecosystem around travel, and was asked this standard question by our account director: Do you have a business case for the work we’re discussing?

The reply from the CEO is very interesting. He fixed him with a very cold glare and he said, “If you were able to have 20 percent more billable hours without increasing your cost structure, would you be bothered to even think about the business case?”

The answer in that particular case was that they were talking about 10,000 or more additional travel instances a year — with no increase in their cost structure. In other words, the whole idea had nothing to do with cost. Their argument was about revenue increase and market share increase, and they thought that they would make better margins, because it would actually decrease their cost base or spread it more widely.

That’s the whole purpose of this revolution and that’s the purpose the business schools are always pushing, when they talk about innovative business models. It means innovate your business model to look at the market again from the perspective of getting into new markets, getting increased revenue, and maybe designing things that make more money.

Using technology externally

We’re always hooked on this idea that we’ve used technology very successfully internally, but now we should be asking the question about how we’re using technology externally when the population as a whole uses that as their primary method of deciding what they’re going to buy, how they’re going to buy it, when they’re going to buy it, and lots of other questions.

… A popular book recently has been The Power of Pull, and the idea is that we’re really seeing a decentralization of the front office in order to respond to and follow the market and the opportunities and the events in very different ways.

The Power of Pull says that I do what my market is asking me and I design business process or capabilities to be rapidly orchestrated through the front office around where things want to go, and I have linkage points, application programming interface (API) points, where I take anything significant and transfer it back.

But the real challenge is — and it was put to me today in the client discussion — that their business was designed around 1970 computer systems, augmented slowly around that, and they still felt that. Today, their market and their expectations of the industry that they’re in were that they would be designed around the way people were using their products and services and the events and that they had to make that change.

To do that, they have to transform the organization, and that’s where we start to spot the difference. We start to spot the idea that your own staff, your customers, and other suppliers are all working externally with information, processes, and services accessible to all on an Internet market or architecture.

So when we talk about business architecture, it’s as relevant today as it ever was in terms of interpreting a business.

Set of methodologies

But when we start talking about architecture, The Open Group Architecture Framework (TOGAF®) is a set of methodologies on the IT side — closely coupled, stateful design principles suited to client-server type systems. In this new model, when we talk about clouds, mobility, people traveling around and connecting by wireless, etc., we have a stateless, loosely coupled environment.

The whole purpose of The Open Group is, in fact, to help devise new ways for being able to architect methods to deliver that. That’s what stands behind the phrase, “a transformed enterprise.”

… If we go back to the basic mission of The Open Group, which is boundarylessness of this information flow, the boundary has previously been defined by a computer system updating another computer system in another company around traditional IT type procedural business flow.

Now, we’re talking about the idea that the information flow is around an ecosystem in an unstructured way. Not a structured file-to-file type transfer, not a structured architecture of who does what, when, and how, but the whole change model in this is unstructured.

Gardner: It’s important to point out here, Andy, that the stakes are relatively high. Who in the organization can be the change agent that can make that leap between the duality view of cloud that IT has, and these business opportunists?

Mulholland: The CEOs are quite noticeably reading the right articles, hearing the right information from business schools, etc., and they’re getting this picture that they’re going to have new business models and new capabilities.

So the drive end is not hard. The problem that is usually encountered is that the IT department’s definition and role interferes with them being able to play the role they want.

What we’re actually looking for is the idea that IT, as we define it today, is some place else. You have to accept that it exists, it will exist, and it’s hugely important. So please don’t take those principles and try to apply them outside.

The real question here is when you find those people who are doing the work outside — and I’ve yet to find any company where it hasn’t been the case — and the question should be how can we actually encourage and manage that innovation sensibly and successfully?

What I mean by that is that if everybody goes off and does their own thing, once again, we’ll end up with a broken company. Why? Because the whole purpose of an enterprise is to leverage success rapidly. If someone is very successful over there, you really need to know, and you need to leverage that again as rapidly as you can across the rest of the organization. If it doesn’t work, you need to stop it quickly.

Changing roles

In modeling those capabilities, the question is: where is the governance structure? So we hear titles like Chief Innovation Officer — again, it’s slightly surprising how that may come up. But we see the model coming from both directions. There are reforming CIOs, for sure, who have recognized this and are changing their role and position accordingly, sometimes formally, sometimes informally.

The other way around, there are people coming from other parts of the business, taking the title and driving them. I’ve seen Chief Strategy Officers taking the role. I’ve seen the head of sales and marketing taking the role.

Certainly, recognizing the technology possibilities should be coming from the direction of the technology capabilities within the current IT department. The capability of what that means might be coming differently. So it’s a very interesting balance at the moment, and we don’t know quite the right answer.

What I do know is that it’s happening, and the quick-witted CIOs are understanding that it’s a huge opportunity for them to fix their role and embrace a new area, and a new sense of value that they can bring to their organization.

Gardner: Returning to the upcoming Capgemini white paper, it adds a sense of urgency at the end on how to get started. It suggests appointing a leader, first for the inside-out element of cloud and transformation, and then a second, perhaps separate, leader for the outside-in element, reflecting the business transformation and the opportunity in external business and markets. It also suggests a strategic road map that involves both business and technology, and then it suggests getting a pilot going.

How does this transition become something that you can manage?

Mulholland: The question is, do you know who is responsible? If you don’t, you’d better figure out how you’re going to make someone responsible, because in any situation, someone has to be deciding what we’re going to do and how we’re going to do it.

Having defined that, there are very different business drivers, as well as different technology drivers, between the two. Clearly, whoever takes those roles will reflect a very different way that they will have to run that element. So a duality is recognized in that comment.

On the other hand, no business can survive by going off in half-a-dozen directions at once. You won’t have the money. You won’t have the brand. You won’t have anything you’d like. It’s simply not feasible.

So, the object of the strategic roadmap is to reaffirm the idea of what kind of business we’re trying to be and do. That’s the glimpse of what we want to achieve.

There has to be a strategy. Otherwise, you’ll end up with way too much decentralization and people making up their own version of the strategy, which they can fairly easily do and fairly easily mount from someone else’s cloud to go and do it today.

So the purpose of the duality is to make sure that the two roles, the two different groups of technology, the two different capabilities they reflect to the organization, are properly addressed, properly managed, and properly have a key authority figure in charge of them.

Enablement model

The business strategy is to make sure that the business knows how the enablement model these two offer can be directed to where the shareholders will make money out of the business, because that is ultimately the success factor they’re looking for to drive them forward.

************

If you are interested in attending The Open Group’s upcoming conference, please register here: http://www3.opengroup.org/event/open-group-conference-san-francisco/registration

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

3 Comments

Filed under Cloud, Cloud/SOA, Enterprise Transformation, Semantic Interoperability

SF Conference to Explore Architecture Trends

By The Open Group Conference Team

In addition to exploring the theme of “Enterprise Transformation,” speakers at The Open Group San Francisco conference in January will explore a number of other trends related to enterprise architecture and the profession, including trends in service oriented architectures and business architecture. 

The debate about the role of EA in the development of high-level business strategy is a long-running one. EA clearly contributes to business strategy, but does it formulate, plan or execute on business strategy? If the scope of EA is limited to IT alone, it could have a diminished role in business strategy and Enterprise Transformation going forward.

EA professionals will have the opportunity to discuss and debate these questions and hear from peers about their practical experiences, including the following tracks:

  • Establishing Value Driven EA as the Enterprise Embarks on Transformation (EA & Enterprise Transformation Track) – Madhav Naidu, Lead Enterprise Architect, Ciena Corp., US; and Mark Temple, Chief Architect, Ciena Corp.
  • Building an Enterprise Architecture Practice Foundation for Enterprise Transformation Execution  (EA & Business Innovation Track) – Frank Chen, Senior Manager & Principal Enterprise Architect, Cognizant, US
  • Death of IT: Rise of the Machines (Business Innovation & Technological Disruption: The Challenges to EA Track) –  Mans Bhuller, Senior Director, Oracle Corporation, US
  • Business Architecture Profession and Case Studies (Business Architecture Track) – Mieke Mahakena, Capgemini; and Peter Haviland, Chief Architect/Head of Business Architecture, Ernst & Young
  • Constructing the Architecture of an Agile Enterprise Using the MSBI Method (Agile Enterprise Architecture Track) – Nick Malike, Senior Principal Enterprise Architect, Microsoft Corporation, US
  • There’s a SEA Change in Your Future: How Sustainable EA Enables Business Success in Times of Disruptive Change (Sustainable EA Track)  – Leo Laverdure & Alex Conn, Managing Partners, SBSA Partners LLC, US
  • The Realization of SOAs Using the SOA Reference Architecture (Tutorials) – Nikhil Kumar, President, Applied Technology Solutions, US
  • SOA Governance: Thinking Beyond Services (SOA Track) – Jed Maczuba, Senior Manager, Accenture, US

In addition, a number of conference tracks will explore issues and trends related to the enterprise architecture profession and role of enterprise architects within organizations.  Tracks addressing professional concerns include:

  • EA: Professionalization or Marketing Needed? (Professional Development Track) – Peter Kuppen, Senior Manager, Deloitte Consulting BV, Netherlands
  • Implementing Capabilities With an Architecture Practice (Setting up a Successful EA Practice Track) – Mike Jacobs, Director and Principal Architect, OptumInsight; and Joseph May, Director, Architecture Center of Excellence, OptumInsight
  • Gaining and Retaining Stakeholder Buy-In: The Key to a Successful EA Practice (Setting up a Successful EA Practice Track) – Russ Gibfried, Enterprise Architect, CareFusion Corporation, US
  • The Virtual Enterprise Architecture Team (Nature & Role of the Enterprise Architecture) – Nicholas Hill, Principal Enterprise Architect, Consulting Services, FSI, Infosys; and Musharal Mughal, Director of EA, Manulife Financials, Canada

Our Tutorials track will also provide practical guidance for attendees interested in learning more about how to implement architectures within organizations. Topics will include tutorials on subjects such as TOGAF®, ArchiMate®, Service Oriented Architectures, and architecture methods and techniques.

For more information on EA conference tracks, please visit the conference program on our website.

Comments Off

Filed under Cloud/SOA, Enterprise Architecture, Enterprise Transformation, Semantic Interoperability, Service Oriented Architecture

Security and Cloud Computing Themes to be explored at The Open Group San Francisco Conference

By The Open Group Conference Team

Cybersecurity and Cloud Computing are two of the most pressing trends facing enterprises today. The Open Group Conference San Francisco will feature tracks on both, where attendees can learn about the latest developments in each discipline and hear practical advice, both for implementing secure architectures and for moving enterprises into the Cloud. Below are some of the highlights and featured speakers from both tracks.

Security

The San Francisco conference will provide an opportunity for practitioners to explore the theme of “hacktivism,” the use and abuse of IT to drive social change, and its potential impact on business strategy and Enterprise Transformation.  Traditionally, IT security has focused on protecting the IT infrastructure and the integrity of the data held within.  However, in a rapidly changing world where hacktivism is an enterprise’s biggest threat, how can enterprise IT security respond?

Featured speakers and panels include:

  • Steve Whitlock, Chief Security Strategist, Boeing, “Information Security in the Internet Age”
  • Jim Hietala, Vice President, Security, The Open Group, “The Open Group Security Survey Results”
  • Dave Hornford, Conexiam, and Chair, The Open Group Architecture Forum, “Overview of TOGAF® and SABSA® Integration White Paper”
  • Panel – “The Global Supply Chain: Presentation and Discussion on the Challenges of Protecting Products Against Counterfeit and Tampering”

Cloud Computing

According to Gartner, Cloud Computing is now entering the “trough of disillusionment” on its hype cycle. It is critical that organizations better understand the practical business, operational and regulatory issues associated with the implementation of Cloud Computing in order to truly maximize its potential benefits.

Featured speakers and panels include:

  • David JW Gilmour, Metaplexity Associates, “Architecting for Information Security in a Cloud Environment”
  • Chris Lockhart, Senior Enterprise Architect, UnitedHeal, “Un-Architecture: How a Fortune 25 Company Solved the Greatest IT Problem”
  • Penelope Gordon, Cloud and Business Architect, 1Plug Corporation, “Measuring the Business Performance of Cloud Products”
  • Jitendra Maan, Tata Consultancy, “Mobile Intelligence with Cloud Strategy”
  • Panel – “The Benefits, Challenges and Survey of Cloud Computing Interoperability and Portability”
    • Mark Skilton, Capgemini; Kapil Bakshi, Cisco; Jeffrey Raugh, Hewlett-Packard

Please join us in San Francisco for these speaking tracks, as well as those on our featured theme of Enterprise Transformation and the role of enterprise architecture. For more information, please go to the conference homepage: http://www3.opengroup.org/sanfrancisco2012

2 Comments

Filed under Cloud, Cloud/SOA, Cybersecurity, Information security, Security Architecture, Semantic Interoperability, TOGAF

2012 Open Group Predictions, Vol. 2

By The Open Group

Continuing on the theme of predictions, here are a few more, which focus on enterprise architecture, business architecture, general IT and Open Group events in 2012.

Enterprise Architecture – The Industry

By Leonard Fehskens, VP of Skills and Capabilities

Looking back at 2011 and looking forward to 2012, I see growing stress within the EA community as both the demands being placed on it and the diversity of opinions within it increase. While this stress is not likely to fracture the community, it is going to make it much more difficult for both enterprise architects and the communities they serve to make sense of EA in general, and its value proposition in particular.

As I predicted around this time last year, the conventional wisdom about EA continues to spin its wheels.  At the same time, there has been a bit more progress at the leading edge than I had expected or hoped for. The net effect is that the gap between the conventional wisdom and the leading edge has widened. I expect this to continue through the next year as progress at the leading edge is something like the snowball rolling downhill, and newcomers to the discipline will pronounce that it’s obvious the Earth is both flat and the center of the universe.

What I had not expected is the vigor with which the loosely defined concept of business architecture has been adopted as the answer to the vexing challenge of “business/IT alignment.” The big idea seems to be that the enterprise comprises “the business” and IT, and enterprise architecture comprises business architecture and IT architecture. We already know how to do the IT part, so if we can just figure out the business part, we’ll finally have EA down to a science. What’s troubling is how much of the EA community does not see this as an inherently IT-centric perspective that will not win over the “business community.” The key to a truly enterprise-centric concept of EA lies inside that black box labeled “the business” – a black box that accounts for 95% or more of the enterprise.

As if to compensate for this entrenched IT-centric perspective, the EA community has lately adopted the mantra of “enterprise transformation”, a dangerous strategy that risks promising even more when far too many EA efforts have been unable to deliver on the promises they have already made.

At the same time, there is a growing interest in professionalizing the discipline, exemplified by the membership of the Association of Enterprise Architects (AEA) passing 20,000, TOGAF® 9 certifications passing 10,000, and the formation of the Federation of Enterprise Architecture Professional Organizations (FEAPO). The challenge that we face in 2012 and beyond is bringing order to the increasing chaos that characterizes the EA space. The biggest question looming seems to be whether this should be driven by IT. If so, will we be honest about this IT focus and will the potential for EA to become a truly enterprise-wide capability be realized?

Enterprise Architecture – The Profession

By Steve Nunn, COO of The Open Group and CEO of the Association of Enterprise Architects

It’s an exciting time for enterprise architecture, both as an industry and as a profession. There are an abundance of trends in EA, but I wanted to focus on three that have emerged and will continue to evolve in 2012 and beyond.

  • A Defined Career Path for Enterprise Architects: Today, there is no clear career path for the enterprise architect. I’ve heard this from college students, IT and business professionals and current EAs. Up until now, the skills necessary to succeed and the roles within an organization that an EA can and should fill have not been defined. It’s imperative that we determine the skill sets EAs need and the path for EAs to acquire these skills in a linear progression throughout their career. Expect this topic to become top priority in 2012.
  • Continued EA Certification Adoption: Certification will continue to grow as EAs seek ways to differentiate themselves within the industry and to employers. Certifications and memberships through professional bodies such as the Association of Enterprise Architects will offer value to members and employers alike by identifying competent and capable architects. This growth will also be supported by EA certification adoption in emerging markets like India and China, as those countries continue to explore ways to build value and quality for current and prospective clients, and to establish more international credibility.
  • Greater Involvement from the Business: As IT investments become business driven, business executives controlling corporate strategy will need to become more involved in EA and eventually drive the process. Business executive involvement will be especially helpful when outsourcing IT processes, such as Cloud Computing. Expect to see greater interest from executives and business schools that will implement coursework and training to reflect this shift, as well as increased discussion on the value of business architecture.

Business Architecture – Part 2

By Kevin Daley, IBM and Vice-Chair of The Open Group Business Forum

Several key technologies have reached a tipping point in 2011 that will move them out of the investigation and validation by enterprise architects and into the domain of strategy and realization for business architects. Five areas where business architects will be called upon for participation and effort in 2012 are related to:

  • Cloud: This increasingly adopted and disruptive technology will help increase the speed of development and change. The business architect will be called upon to ensure the strategic relevancy of transformation in a repeatable fashion as cycle times and rollouts happen faster.
  • Social Networking / Mobile Computing: Prevalent consumer usage, global user adoption and improvements in hardware and security make this a trend that cannot be ignored. The business architect will help develop new strategies as organizations strive for new markets and broader demographic reach.
  • Internet of Things: This concept from 2000 is reaching critical mass as more and more devices become communicative. The business architect will be called on to facilitate the conversation and design efforts between operational efforts and technologies managing the flood of new and usable information.
  • Big Data and Business Intelligence: Massive amounts of previously untapped data are being exposed, analyzed and made insightful and useful. The business architect will be utilized to help contain the complexity of business possibilities while identifying tactical areas where the new insights can be integrated into existing technologies to optimize automation and business process domains.
  • ERP Resurgence and Smarter Software: Software purchasing looks to continue its 2011 trend towards broader, more intuitive and feature-rich software and applications.  The business architect will be called upon to identify and help drive getting the maximum amount of operational value and output from these platforms to both preserve and extend organizational differentiation.

The State of IT

By Dave Lounsbury, CTO

The twin horses of mobility and consumerization, both galloping at full tilt within the IT industry right now, will have a profound effect on the industry throughout 2012. Key to these trends is the increased use of personal devices, as well as favorite consumer Cloud services and social networks, which drive a rapidly growing comfort among end users with both data and computational power being everywhere. This comfort raises end users’ expectations: they will increasingly want to control how they access and use their data, and with what devices. That expectation of control and access will increasingly be brought from home to the workplace.

This has profound implications for core IT organizations. There will be less reliance on core IT services, and with that an increased expectation of “I’ll buy the services, you show me how to knit them in” as the prevalent user approach to IT – thus requiring increased attention to standards conformance. IT departments will change from being the only service providers within organizations to being a guiding force when it comes to core business processes, with IT budgets being impacted. I see a rapid tipping point in this direction in 2012.

What does this mean for corporate data? The matters of scale that have been a part of IT—the overarching need for good architecture, security, standards and governance—will now apply to a wide range of users and their devices and services. Security issues will loom larger. Data, apps and hardware are coming from everywhere, and companies will need to develop criteria for knowing whether systems are robust, secure and trustworthy. Governments worldwide will take a close look at this in 2012, but industry must take the lead to keep up with the pace of technology evolution, such as The Open Group and its members have done with the OTTF standard.

Open Group Events in 2012

By Patty Donovan, VP of Membership and Events

In 2012, we will continue to connect with members globally through all mediums available to us – our quarterly conferences, virtual and regional events and social media. Through coordination with our local partners in Brazil, China, France, Japan, South Africa, Sweden, Turkey and the United Arab Emirates, we’ve been able to increase our global footprint and connect members and non-members who may not have been able to attend the quarterly conferences with the issues facing today’s IT professionals. These events, in conjunction with our efforts in social media, have led to a rise in member participation and helped further develop The Open Group community, and we hope to see continued growth in the coming year and beyond.

We’re always open to new suggestions, so if you have a creative idea on how to connect members, please let me know! Also, please be sure to attend the upcoming Open Group Conference in San Francisco, which is taking place on January 30 through February 3. The conference will address enterprise transformation as well as other key issues in 2012 and beyond.

9 Comments

Filed under Business Architecture, Cloud, Cloud/SOA, Data management, Enterprise Architecture, Semantic Interoperability, Standards

Save the Date—The Open Group Conference San Francisco!

By Patty Donovan, The Open Group

It’s that time again to start thinking ahead to The Open Group’s first conference of 2012 to be held in San Francisco, January 30 – February 3, 2012. Not only do we have a great venue for the event, the Intercontinental Mark Hopkins (home of the famous “Top of the Mark” sky lounge—with amazing views of all of San Francisco!), but we have stellar line up for our winter conference centered on the theme of Enterprise Transformation.

Enterprise Transformation is a theme that is increasingly being used by organizations of all types to represent the change processes they implement in response to internal and external business drivers. Enterprise Architecture (EA) can be a means to Enterprise Transformation, but in most enterprises today it is not, because EA is still largely limited to the IT department, and transformation must go beyond the IT department to be successful. The San Francisco conference will focus on the role that both IT and EA can play within the Enterprise Transformation process, including the following:

  • The differences between EA and Enterprise Transformation and how they relate  to one another
  • The use of EA to facilitate Enterprise Transformation
  • How EA can be used to create a foundation for Enterprise Transformation that the Board and business-line managers can understand and use to their advantage
  • How EA facilitates transformation within IT, and how such transformation supports the transformation of the enterprise as a whole
  • How EA can help the enterprise successfully adapt to “disruptive technologies” such as Cloud Computing and ubiquitous mobile access

In addition, we will be featuring a line-up of keynotes by some of the top industry leaders to discuss Enterprise Transformation, as well as themes around our regular tracks of Enterprise Architecture and Professional Certification, Cloud Computing and Cybersecurity. Keynoting at the conference will be:

  • Joseph Menn, author and cybersecurity correspondent for the Financial Times (Keynote: What You’re Up Against: Mobsters, Nation-States and Blurry Lines)
  • Celso Guiotoko, Corporate Vice President and CIO, Nissan Motor Co., Ltd. (Keynote: How Enterprise Architecture is helping NISSAN IT Transformation)
  • Jeanne W. Ross, Director & Principal Research Scientist, MIT Center for Information Systems Research (Keynote: The Enterprise Architect: Architecting Business Success)
  • Lauren C. States, Vice President & Chief Technology Officer, Cloud Computing and Growth Initiatives, IBM Corp. (Keynote: Making Business Drive IT Transformation Through Enterprise Architecture)
  • Andy Mulholland, Chief Global Technical Officer, Capgemini (Keynote: The Transformed Enterprise)
  • William Rouse, Executive Director, Tennenbaum Institute at Georgia Institute of Technology (Keynote: Enterprise Transformation: An Architecture-Based Approach)

For more on the conference tracks or to register, please visit our conference registration page. And stay tuned throughout the next month for more sneak peeks leading up to The Open Group Conference San Francisco!

1 Comment

Filed under Cloud, Cloud/SOA, Cybersecurity, Data management, Enterprise Architecture, Semantic Interoperability, Standards

Cloud Computing predictions for 2012

By The Open Group Cloud Work Group Members 

With 2012 fast approaching, Cloud Computing will remain a hot topic for IT professionals everywhere. The Open Group Cloud Work Group worked on various initiatives in 2011, including the Cloud Computing Survey, which explored the business impact and primary drivers for Cloud within organizations, and the release of Cloud Computing for Business, a guide that examines how enterprises can derive the greatest business benefits from Cloud Computing from a business process standpoint.

As this year comes to an end, here are a few predictions from various Cloud Work Group members.

Non-IT executives will increasingly use the term “Cloud” in regular business conversations

By Penelope Gordon, 1 Plug

In 2012, the number of non-IT business executives seeking ways to leverage Cloud will increase, and consequently references to Cloud Computing will increasingly appear in general business publications.

This increase in Cloud references will be due in part to the availability of consumer-oriented Cloud services such as email and photo sharing. For example, the October 2011 edition of the Christian Science Monitor included an article by Chris Gaylord titled “Five things you need to know about ‘the cloud’” that discussed Cloud services in the same vein as mobile phone capabilities. Another factor behind the increase, unintentionally highlighted in that article, is the overuse – and consequent dilution – of the term “Cloud”: Web services and applications running on Cloud infrastructure are not necessarily themselves Cloud services.

The most important factor behind the increase will be the relevance of Cloud – especially the SaaS, BPaaS, and Cloud-enabled BPO variants – to these executives. In contrast to SOA, Cloud Computing buying decisions related to business process enablement can be very granular and incremental, and can thus be made independently of the IT department – not that I advocate bypassing IT input: good governance ensures both macro-level optimization and interoperability.

New business models in monetizing your Information-as-a-Service

By Mark Skilton, Capgemini

Personal data is rapidly becoming less restricted to individual control and management as we see exponential growth in the use of digital media and social networking to exchange ideas, conduct business and make whole markets, products and services accessible. This has significant ramifications not only for individuals and organizations seeking to maintain security and protection over what is public and private; it also represents a huge opportunity to understand both small and big data and the “interstitial connecting glue” – the metadata within and at the edge of Clouds that is like the digital smoke trails of online community activities and behaviors.

At the heart of this is the “value of information” to organizations that can extract and understand how to maximize this information and, in turn, monetize it. This can range from profiling customers who “like” products and services to creating secure backup Cloud services that can be retrieved in times of need and in support of emergency services. The point is that new metadata combinations are possible through the aggregation of data inside and outside of organizations to create new value.

There are many new opportunities to create new business models that recognize this new wave of Information-as-a-Service (IaaS) as the Cloud moves further into new value model territories.

Small and large enterprise experiences when it comes to Cloud

By Pam Isom, IBM

The Cloud Business Use Case (CBUC) team is in the process of developing and publishing a paper focused on Cloud for Small-to-Medium Enterprises (SMEs). The CBUC team is the same team that contributed to the book Cloud Computing for Business, with a concerted focus on Cloud business benefits, use cases, and justification. When it comes to comparing Cloud adoption in small and large enterprises, some initial observations are that the increased agility associated with Cloud helps smaller organizations achieve rapid time-to-market and, as a result, attract new customers in a timely fashion. This faster time-to-market not only helps SMEs gain new customers who otherwise would have gone to competitors, but prevents those competitors from becoming stronger – enhancing the SMEs’ competitive edge. Larger enterprises are more willing to maintain a dedicated IT organization backed with support staff, and are more likely to establish full-fledged data center facilities to operate as Cloud service providers in both a public and private capacity, whereas SMEs have lower IT budgets and tend to keep their IT footprint small, seeking out IT services from a variety of Cloud service providers.

A recent study conducted by Microsoft surveyed more than 3,000 small businesses across 16 countries to understand whether they have an appetite for adopting Cloud Computing. One of the findings was that within three years, “43 percent of workloads will become paid cloud services.” This is one of many statistics that stress the significance of Cloud for small businesses, and the predictions for larger enterprises as Cloud providers and consumers are just as profound.

Penelope Gordon specializes in adoption strategies for emerging technologies, and portfolio management of early stage innovation. While with IBM, she led innovation, strategy, and product development efforts for all of IBM’s product and service divisions; and helped to design, implement, and manage one of the world’s first public clouds.

Mark Skilton is Global Director for Capgemini, Strategy CTO Group, Global Infrastructure Services. His role includes strategy development, competitive technology planning including Cloud Computing and on-demand services, global delivery readiness and creation of Centers of Excellence. He is currently author of the Capgemini University Cloud Computing Course and is responsible for Group Interoperability strategy.

Pamela K. Isom is the Chief Architect for complex Cloud integration and application innovation services such as serious games. She joined IBM in June 2000 and currently leads efforts that drive Smarter Planet efficiencies throughout client enterprises using, and oftentimes enhancing, their Enterprise Architecture (EA). Pamela is a Distinguished Chief/Lead IT Architect with The Open Group, where she leads the Cloud Business Use Cases Work Group.

3 Comments

Filed under Cloud, Cloud/SOA, Enterprise Transformation, Semantic Interoperability

Taking Decisions In The Face Of Uncertainty (Responsible Moments)

By Stuart Boardman, KPN

Ruth Malan recently tweeted a link to a piece by Alistair Cockburn about the Last Responsible Moment concept (LRM) in Lean Software Development. I’ve been out of software development for a while now but I could guess what that might mean in an “agile” context and wondered how it might apply to problems I’ve been considering recently in Enterprise Architecture. Anyway, Alistair Cockburn is an interesting writer who would be deservedly famous even if he’d never done anything after writing the most practical and insightful book ever written about use cases. So I read on. The basic idea of the LRM is that in order to deal with uncertainty you avoid taking deterministic decisions until just before it would become irresponsible (for cost or delivery reasons) not to take them. Or to put it another way, don’t take decisions you don’t yet need to take if the result will be to constrain your options but do be ready to take them when it’s dangerous to wait longer.

Alistair’s not a big fan of LRM. He makes the following statement: “If you keep all decisions open until the hypothetical LRM, then your brain will be completely cluttered with open decisions and you won’t be able to keep track of them all.” Later in the discussion, he modifies this a bit but it certainly struck a chord with me. I’ve argued recently in this column that the degree of uncertainty (I called this entropy) in which enterprise architects have to operate is only increasing and that this in turn is due to three factors: the increasing rate of change happening in or affecting the enterprise; the increasing complexity of the environment in which the enterprise exists; and the decreasing extent to which any one enterprise can control that environment. This in turn increases the level of complexity in decision making. I’ll come back to these factors later but if you give me the benefit of the doubt for the moment, you can see that there’s actually a pretty good argument for taking any decision you can reasonably take (i.e. one which does not unjustifiably constrain everything else), as early as you can – in order to minimize complexity as you go along.

This is not (repeat not) a dogma. If it’s totally unclear what decision you should take, you’d probably be better off waiting for more information – and a last responsible moment will undoubtedly arrive.

So assuming you gave me the benefit of the doubt, you might now reasonably be thinking that this is all very well in theory, but how can we actually put it into practice? To do that we need first to look at the three sources of complexity I mentioned:

  • That the rate of change is increasing is pretty much a truism. Some change is due to market forces such as competition, availability/desirability of new capabilities, withdrawal of existing capabilities or changes in the business models of partners and suppliers. Some change is due to regulation (or deregulation) or to indirect factors such as changing demographics. Factors such as social media and Cloud are perhaps more optional but are certainly disruptive – and themselves constantly in change.
  • The increase in complexity of the environment is largely due to the increase in the number of partners and to more or less formal value networks (extended enterprise), to an increased number of delivery channels and to lack of standardization at both the supply and delivery ends.
  • The decrease in control (or more accurately in exclusive and total control) arises from all forms of shared services that the enterprise one way or another makes use of. This can be Cloud (in which case we talk about multi-tenancy) or social media (in which case we talk about anarchy), but equally the extended enterprise network, where not only do our partners and suppliers have other customers, but they also have their own partners and suppliers who have other customers. A consequence of most of this is that you can’t expect to be consulted before change decisions are made.

At best you will be notified well enough in advance of it happening. So you need to take that into account in what you implement.

Each of these factors may affect what the organization is – its core values, its key value propositions, its strategy. They may also affect how it carries out its business – its key activities and processes, its partners and even its customers. And they can affect how those activities and processes are implemented, which by the way can in turn drive change at the strategic level – it’s not just one way traffic – but this is a subject worthy of its own blog.

The point is that, if we want to be able to deal with this, to make sensible decisions in a non-deterministic environment, we would do well to address them where they first manifest themselves in order to avoid a geometric expansion of complexity further on. I’m inclined to think this is primarily in the business architecture (assuming we all accept that business architecture is not just a collection of process models). Almost all of the factors are encountered first here and subsequently reflected possibly in strategy and nearly always on the implementation side. If we make the reasonable assumption that the implementation side will encounter its own complexities, we can at least keep that manageable by not passing on all the possible options from the business architecture.

I said almost all factors are encountered first in the business architecture. The most obvious exceptions I can think of are the Infrastructure as a Service and Platform as a Service variants on Cloud. There’s a good case to be made that the effects of these are primarily felt within IT (strategy and implementation). But wherever we start, the principle doesn’t change – start the analysis at the first point of impact.

The next thing we need to do is look for ways to a) reduce the level of entropy in the part of the system we start with and b) understand how to make decisions that don’t create unnecessary lock in.  There’s not enough space in a blog to go into this in detail but it’s worth mentioning some new and some established techniques.

My attention has recently been drawn (by Verna Allee and others) to the study of networks of things, organizations and people. This in turn makes a lot of use of visualizations. These enable us to “see” the level of entropy around the particular element we’re focusing on – without the penalty of losing sight of the big picture. An example that I found useful is by Eric Berlow. Another concept in this area involves identifying what are referred to as communities (the idea came out of the study of social networks) – clusters of related elements that are only loosely coupled to other communities. These techniques allow us to reduce the scope (and therefore complexity) of the problem we’re trying to solve at any one time without falling into the trap of assuming it’s entirely self-contained.
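As a toy illustration of the community idea (not from the original article – the element names, coupling weights and threshold below are invented), one simple approach is to drop weak links and take the connected components of what remains:

```python
# Hypothetical sketch: find loosely coupled "communities" by dropping
# weak links and taking the connected components of what remains.

# Toy network: edges between elements, weighted by coupling strength.
edges = [
    ("CRM", "Billing", 5), ("Billing", "Orders", 4), ("Orders", "CRM", 5),
    ("HR", "Payroll", 5), ("Payroll", "Expenses", 4), ("Expenses", "HR", 5),
    ("Orders", "Payroll", 1),  # the single loose coupling between clusters
]

def communities(edges, threshold):
    """Keep only edges at or above the coupling threshold, then return
    the connected components of what remains."""
    adj = {}
    for a, b, w in edges:
        adj.setdefault(a, set())
        adj.setdefault(b, set())
        if w >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    seen, result = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:                  # depth-first walk of one component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        result.append(comp)
    return result

# With the weak Orders-Payroll link dropped, two communities remain.
print(communities(edges, threshold=2))
```

Real network-analysis tooling (e.g. modularity-based community detection) is considerably more sophisticated, but the effect is the same: a scoped cluster you can reason about without pretending it is entirely self-contained.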

A few blogs ago I mentioned an idea of Robert Phipps’s, where he visualizes the various forces within an organization as vectors. Each vector represents some factor driving (or even constraining) change. Those can be formal or informal organizational groupings (people), stakeholders both within and external to the organization, economic factors around supply or revenue, changes in the business model or even in technology. In that blog I used this as a way of illustrating entropy but Robert is actually looking at ways of applying measures to these vectors in order to be able to establish their actual force (and direction) and therefore their impact on change. Turning an apparently random factor into something knowable reduces the level of entropy and makes us more confident about taking decisions early – and therefore in turn reduces the entropy at a later stage.
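As a hypothetical sketch of the vector idea (the drivers, dimensions and numbers here are invented for illustration, not taken from Robert Phipps’s work), each change driver can be expressed as a vector and summed into a resultant force:

```python
# Hypothetical toy model: each change driver is a 2-D vector (strength along
# two invented dimensions, e.g. cost pressure vs. innovation pull).
# Summing them gives the net direction and force of change.
import math

drivers = {
    "regulation":        (3.0, -1.0),
    "new competitor":    (1.0,  4.0),
    "legacy constraint": (-2.0, -1.5),
}

# Resultant vector: component-wise sum of all drivers.
net = (sum(v[0] for v in drivers.values()),
       sum(v[1] for v in drivers.values()))
magnitude = math.hypot(*net)                        # overall force
direction = math.degrees(math.atan2(net[1], net[0]))  # overall direction
print(f"net force {magnitude:.2f} at {direction:.1f} degrees")
```

Measuring the individual vectors is of course the hard part; the point is only that once a driver becomes quantifiable, its contribution to the net force on the organization becomes knowable.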

One more example: Ruth Malan and Dana Bredemeyer produced a paper last year in which they examined the idea that organizations can make the most use of the creativity of their personnel by replacing the traditional hierarchical and compartmentalized structures with what they called a fractal approach. The idea is that patterns of strategy creation are reflected in all parts of an organization, thus making strategy integral to an organization rather than merely dictated from “above”. It has the added benefit of making the overall complexity more manageable. Architects belong in each fractal both as creators and interpreters of strategy. I can’t possibly do this long paper justice here but I wanted to mention an additional thought I had. What can also help architects is to look for these fractals even in formally hierarchical organizations. There’s a great chance that they really exist and are just waiting for someone to pay them attention.

Having achieved focus on a manageable area and gathered as much meaningful data as possible, we can then apply some basic (but often forgotten or ignored) design principles. Think of separation of concerns, low coupling, high cohesion. All that starts by focusing on the core purpose of the element(s) of the architecture we’ve zoomed in on. And folks, the good news is that this will all have to wait for another occasion.

The very last thing I want to say is something I tend to hammer on about. You have to take some risks. No creative, successful organization does not take risks. You need a degree of confidence about the level and potential impact of the risk but at the end of the day you’ll have to make a decision anyway. Even if you believe that everything is potentially knowable, you know that we often don’t have the information available to achieve that. So you take a gamble on something that seems to deliver value and where the risk is at worst manageable. And by doing that you reduce the total entropy in the system and make taking other decisions easier.

Stuart Boardman is a Senior Business Consultant with KPN where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity. 

1 Comment

Filed under Business Architecture, Cloud/SOA, Enterprise Architecture

Twtpoll results from The Open Group Conference, Austin

The Open Group set up two informal Twitter polls this week during The Open Group Conference, Austin. If you wondered about the results, or just want to see what our Twitter followers think about some topline issues in the industry in very simple terms, see our twtpoll.com results below.

On Day One of the Conference, when the focus of the discussions was on Enterprise Architecture, we polled our Twitter followers about the profession of EA: Do you think we will see a shortage of enterprise architects within the next decade? Why or why not?

The results were split right down the middle. A sampling of responses:

  • “Yes, if you mean good enterprise architects. No, if you are just referring to those who take the training but have no clue.”
  • “Yes, retirement of Boomers; not enough professionalization.”
  • “Yes, we probably will. EA is becoming more and more important because of fast-changing economies which request fast company change.”
  • “No: budgets, not a priority.”
  • “No. Over just one year, I can see the significant increase of the number of people who are talking EA and realizing the benefits of EA practices.”
  • “No, a majority of companies will still be focusing on short-term improvement because of ongoing current economic status, etc. EA is not a priority.”

On Day Two, while we focused on security, we queried our Twitter followers about data security protection: What type of data security do you think provides the most comprehensive protection of PII? Again, the results were split evenly into thirds:

What do you think of our informal poll results? Do you agree? Disagree? And why?

And let us know if you have thoughts on this one: Do you think SOA is essential for Cloud implementation?

Want some survey results you can really sink your teeth into? View the results of The Open Group’s State of the Industry Cloud Survey. Download the slide deck from The Open Group Bookstore, or read a previous blog post about it.

The Open Group Conference, Austin is now in member meetings. Join us in Taipei or San Francisco for our next Conferences! Hear best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Comments Off

Filed under Cloud/SOA, Cybersecurity, Enterprise Architecture

Understanding security aspects of Cloud initiatives

By Stuart Boardman, Getronics; and Omkhar Arasaratnam, IBM

The Open Group recently published a whitepaper, An Architectural View Of Security For Cloud, which is the first in a series being produced by the Security For The Cloud and SOA project. In this whitepaper we introduce a method that helps organizations to model and therefore understand the security aspects of their Cloud initiatives.

Security is still often cited as the biggest concern about the Cloud. This topic was even raised during the recent survey by The Open Group on Cloud Computing. But does the concern reflect a genuine level of risk? If so, in what way and under what circumstances? It would be irresponsible not to take this seriously, but right now we’re suffering from a “here be dragons” mentality. Despite all the good work done by The Open Group, the Cloud Security Alliance (CSA) and others, we still see far too much discussion of this kind: “The biggest single security threat in the Cloud is…” This helps no one, because these are generalizations and every organization’s situation is specific. (This is borne out by other surveys, by the way.) The result is FUD (fear, uncertainty and doubt) and therefore stagnation. And as people lose patience with that, the reaction is sometimes the taking of inappropriate risks.

One of the challenges in understanding Cloud-based architectures is that each party, whether it is primarily a consumer or primarily a provider, is part of an ecosystem of different entities, providing and consuming Cloud services. The view of the architecture for each player may be different but each of them must take the entire ecosystem into account and not just its own part. When you couple this with the fact that there are so many possible types of Cloud service and delivery, and so many different kinds of data one might expose in the Cloud, it’s clear that there is no one generic model for Cloud. You need to understand the particular situation you are in or can foresee being in. That can be quite complex.

The Open Group’s Security for the Cloud and SOA project is developing a security reference architecture, which will help architects and security specialists to develop their view and understanding of their situations. Using the architecture and the associated method and combining this with the advice coming from other groups such as CSA or The Open Group Jericho Forum®, you can create a comprehensible view of a complex situation, determine risks, test your solution options and set up controls to manage all this in a production situation.

The fundamentals of our approach are architectural building blocks, security principles and a scenario-driven modeling method. We have defined a set of principles but also take into account identity principles from the CSA – and in the future we will work to combine all these effectively with the recently published Jericho Forum Identity Commandments. Policy-driven security is for us a basic principle, and it is itself how most other principles are supported. By using the method to model responsibility for the building blocks, you can understand how policy is managed across the ecosystem and make an informed analysis of risks, mitigations and opportunities.

In the whitepaper, we illustrate the approach for the area of identity, entitlement and access management policy. We use a scenario involving one consumer organization and three SaaS providers supporting travel booking. We look at three situations which might apply depending on the capabilities and flexibility of the various parties. Here’s an example of how responsibility for the building blocks is distributed in one of these situations and how open standards can help to support that.

This happens to be the situation which best supports the principles we highlight in the whitepaper. In other situations you can see exactly how principles are compromised. That helps an organization weigh up risks and benefits. Take a look at the whitepaper and let us know what you think. We’re happy with any input we receive. More whitepapers will follow soon extending the method to other areas of security. Later on we’ll start building realizations that will, we hope, help to promote the use of open standards and bring us closer to Boundaryless Information Flow™. We’re also running an “architectural decisions rodeo” at The Open Group Conference, Austin (July 18-22) during which we will discuss and document key architectural decisions regarding Cloud security.

Omkhar Arasaratnam is a Certified Senior Security Architect with IBM. He is a member of the IBM Security Architecture Board and the IBM Cloud Computing Security Architecture Board, and co-leads The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project. He is also actively involved in the International Organization for Standardization (ISO) JTC1/SC38 Study Group on Cloud Computing. Omkhar is an accomplished author and technical editor of several IBM, John Wiley & Sons, and O’Reilly publications, and has five pending patents in the field of information technology. He has worldwide responsibility for security architecture in some of IBM’s Cloud Computing services.

Stuart Boardman is a Senior Business Consultant with Getronics Consulting where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead with Omkhar Arasaratnam of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity.

1 Comment

Filed under Cloud/SOA

Facebook – the open source data center

By Mark Skilton, Capgemini

The recent announcement by Facebook of its decision to publish its data center specifications as open source illustrates a new emerging trend in commoditization of compute resources.

Key features of the new facility include:

  • The Oregon facility, announced to the world press in April 2011, is 150,000 sq. ft. and represents a $200 million investment. At any one time, the whole of Facebook’s 500-million-user capacity could be hosted at this one site. Another Facebook data center facility is scheduled to open in 2012 in North Carolina, and there may be future data centers in Europe or elsewhere if required by the Palo Alto, Calif.-based company
  • The Oregon data center enables Facebook to reduce its energy consumption per unit of computing power by 38%
  • The data center has a PUE of 1.07, well below the EPA-defined state-of-the-art industry average of 1.5. This means 93% of the energy from the grid makes it into every Open Compute Server.
  • Removed centralized chillers, eliminated traditional inline UPS systems and removed a 480V to 208V transformation
  • Ethernet-powered LED lighting and passive cooling infrastructure reduce energy spent on running the facility
  • A new second-level “evaporative cooling system,” a multi-layer method of managing room temperature and air filtration
  • Launch of the “Open Compute Project” to share the data center design as Open Source. The aim is to encourage collaboration of data center design to improve overall energy consumption and environmental impact. Other observers also see this as a way of reducing component sourcing costs further, as most of the designs are low-cost commodity hardware
  • The servers are 38% more efficient and 24% lower cost
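The PUE and efficiency figures above are easy to check: PUE (power usage effectiveness) is total facility energy divided by IT equipment energy, so the fraction of grid energy that actually reaches the servers is simply its reciprocal. A quick sketch:

```python
# PUE = total facility energy / IT equipment energy, so the share of grid
# energy reaching the servers is 1 / PUE.
facility_pue = 1.07
it_fraction = 1 / facility_pue
print(f"Open Compute facility: {it_fraction:.0%} of grid energy reaches the servers")

# For comparison, the EPA-defined state-of-the-art industry average cited above.
industry_pue = 1.5
print(f"Industry average:      {1 / industry_pue:.0%}")
```

1/1.07 is roughly 93%, matching the claim in the post, against roughly 67% for a facility at the 1.5 industry average.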

While this can be simply described as a major Cloud services company seeing their data centers as commodity and non-core to their services business, this perhaps is something of a more significant shift in the Cloud Computing industry in general.

Facebook making its data centers specifications open source demonstrates that IaaS (Infrastructure as a Service) utility computing is now seen as a commodity and non-differentiating to companies like Facebook and anyone else who wants cheap compute resources.

What becomes essential is operational efficiency: how these services are provisioned and delivered is now the key differentiator.

Furthermore, there is a trend towards what you do with the IaaS storage and compute. How we architect solutions that deliver Software as a Service (SaaS) capabilities becomes the essential differentiator. It is how business models and consumers can maximize these benefits that increases the importance of architecture and solutions for Cloud. This is key for The Open Group’s vision of “Boundaryless Information Flow™”: how Cloud services are architected, and how architects design effective Cloud services using these commodity Cloud resources and capabilities, makes the difference. Open standards and interoperability are critical to the success of this. How solutions and services are developed to build private, public or hybrid Clouds is the new differentiation. This does not ignore the fact that world-class data centers and infrastructure services are, of course, vital, but it is now the way they are used to create value that becomes the debate.

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing, which achieved 50,000 hits on CIO.com, and publication in the British Computer Society 2010 Annual Review. His current activities include development of new Cloud Computing Model standards and best practices on the subject of Cloud Computing’s impact on outsourcing and off-shoring models; he also contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring, published through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.

2 Comments

Filed under Cloud/SOA

Twtpoll results from The Open Group Conference, London

The Open Group set up three informal Twitter polls this week during The Open Group Conference, London. If you wondered about the results, or just want to see what our Twitter followers think about some topline issues in the industry in very simple terms, see our twtpoll.com graphs below.

On Day One of the Conference, when the focus of the discussions was on Enterprise Architecture, we polled our Twitter followers about the perceived value of certification. Your response was perhaps disappointing, but unsurprising:

On Day Two, while we focused on security during The Open Group Jericho Forum® Conference, we queried you about what you see as the biggest organizational security threat. Out of four stated choices, plus the opportunity to fill in your own answer, the results were clear: two specific areas of security keep you up at night the most.

And finally, during Day Three’s emphasis on Cloud Computing, we asked our Twitter followers about the types of Cloud they’re using or are likely to use.

What do you think of our informal poll results? Do you agree? Disagree? And why?

Want some survey results you can really sink your teeth into? View the results of The Open Group’s State of the Industry Cloud Survey. Read our blog post about it, or download the slide deck from The Open Group bookstore.

The Open Group Conference, London is in member meetings for the rest of this week. Join us in Austin, July 18-22, for our next Conference! Hear best practices and case studies on Enterprise Architecture, Cloud, Security and more, presented by preeminent thought leaders in the industry.

Comments Off

Filed under Certifications, Cloud/SOA, Cybersecurity, Enterprise Architecture

What does the Amazon EC2 downtime mean?

By Mark Skilton, Capgemini

The recent announcement of the Amazon EC2 outage in April this year triggers some thoughts about this very high-profile topic in Cloud Computing. How secure and available is your data in the Cloud?

While the outage was more a matter of the service-level availability of data and services promised in your Cloud provider's service level agreement (SLA), the recent, potentially more concerning theft of Epsilon e-mail data — and, as I write this, the breaking news of the Sony data theft — further highlights this big topic in Cloud Computing.

My initial reaction on hearing about the outage was that it was due to over-allocation driven by high demand in the US East region, which led to a cascading system failure. I subsequently read that Amazon attributed it to a network glitch, which triggered storage backups to automatically create more copies than needed, consuming the available elastic block storage. This, I theorized, in turn created the supply unavailability problem.
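The theorized failure mode can be sketched in a few lines. This is purely illustrative (the capacity figures and the function are my own, not Amazon's actual numbers or mechanism): when a network glitch marks many volumes unreachable at once, each triggers an automatic re-mirror, and the copies can exhaust the zone's free storage faster than it is replenished.

```python
# Hypothetical numbers for illustration only.
FREE_CAPACITY = 100   # free storage units in an availability zone
VOLUME_SIZE = 5       # units each re-mirror copy consumes

def simulate_remirror_storm(volumes_marked_unreachable: int) -> int:
    """Return how many volumes end up stuck with no capacity to re-mirror."""
    free = FREE_CAPACITY
    stuck = 0
    for _ in range(volumes_marked_unreachable):
        if free >= VOLUME_SIZE:
            free -= VOLUME_SIZE   # re-mirror succeeds, consuming free space
        else:
            stuck += 1            # no room left: the volume stays degraded
    return stuck

# With these numbers, only 20 re-mirrors fit; the rest are stranded.
print(simulate_remirror_storm(30))  # prints 10
```

The point of the toy model is that the automation designed to protect data becomes the amplifier of the failure once free capacity is the binding constraint.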

From a business perspective, this focuses attention on the risks of relying on a single primary Cloud provider. Businesses like Quora.com and foursquare.com that were affected "live in the Cloud," so backup and secondary Cloud support needs are clearly important. Some of these are economic decisions, trade-offs between loss of business and business continuity. The outage highlights the vulnerability of these enterprises, even though a highly successful organization like Amazon makes such an event rare. Consumers of Cloud services need to consider mitigating actions such as disruption insurance; secondary backups; and assurances of SLAs, which are largely out of the hands of SMB market users. One result of outages at Cloud providers has been the emergence of a new market called "Cloud Backup," which is starting to gain favor with customers and providers as an added level of protection through service failover.
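When weighing those trade-offs, it helps to translate an SLA percentage into concrete downtime. A back-of-envelope sketch (the 99.95% figure below is an example input, not any particular provider's commitment):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes per year a provider may be down and still meet its SLA."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# A "three and a half nines" SLA still permits roughly 4.4 hours of
# downtime a year, which frames the business case for a secondary backup.
print(round(allowed_downtime_minutes(99.95), 1))  # prints 262.8
```

Comparing that allowance against the hourly cost of lost business is one way to decide whether disruption insurance or a secondary Cloud is the cheaper hedge.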

While these are concerning issues, I believe most outage issues can be addressed by exercising due diligence in the procurement and use of any service that involves a third party. I've expanded the definition of due diligence in Cloud Computing to include at least six key processes that any prospective Cloud buyer should be aware of and make contingencies for, as you would with any purchase of a business-critical service:

  • Security management
  • Compliance management
  • Service Management (ITSM and License controls)
  • Performance management
  • Account management
  • Ecosystem standards management
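The six areas above lend themselves to a simple tracking structure. The sketch below is my own illustration (the keys and the pass/fail scheme are not an Open Group standard) of how a prospective buyer might record which due-diligence areas remain open for a candidate provider:

```python
# The six due-diligence areas from the list above.
DUE_DILIGENCE_AREAS = [
    "security management",
    "compliance management",
    "service management (ITSM and license controls)",
    "performance management",
    "account management",
    "ecosystem standards management",
]

def outstanding_areas(assessment: dict) -> list:
    """Return the areas not yet signed off for a candidate provider."""
    return [a for a in DUE_DILIGENCE_AREAS if not assessment.get(a, False)]

# Example: only two areas reviewed so far; four remain outstanding.
review = {"security management": True, "performance management": True}
print(outstanding_areas(review))
```

Even a checklist this crude forces the contingency conversation before signing, rather than after an outage.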

I don’t think publishing a bill of rights for consumers is enough to insure against failure. One thing that Cloud Computing design has taught me is that part of the architectural shift brought about by Cloud is the emergence of automation as an implicit part of the operating model design to enable elasticity. This automation may have been a factor, ironically, in the Amazon situation, but overall the benefits of Cloud far outweigh the downsides, which can be re-engineered and resolved.

A useful guide to some of this business impact can be found in a new book on Cloud Computing for business that The Open Group plans to publish this quarter. The book addresses many of these challenges in understanding and driving the value of Cloud Computing in the language of business; its chapters cover business use of the Cloud, including risk management. Check The Open Group website for more information on The Open Group Cloud Computing Work Group and the Cloud publications in the bookstore at http://www.opengroup.org.

Cloud Computing is a key topic of discussion at The Open Group Conference, London, May 9-13, which is currently underway.

Mark Skilton, Director at Capgemini, is Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing that achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include the development of new Cloud Computing model standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models; he also contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.

1 Comment

Filed under Cloud/SOA