
The Open Group Panel Explores How the Big Data Era Now Challenges the IT Status Quo

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group panel explores how the Big Data era now challenges the IT status quo, or view the on-demand video recording of this discussion here: http://new.livestream.com/opengroup/events/1838807.

We recently assembled a panel of experts to explore how Big Data changes the status quo for architecting the enterprise. The bottom line from the discussion is that large enterprises should not just wade into Big Data as an isolated function, but should anticipate the strategic effects and impacts of Big Data — as well as the simultaneous complicating factors of Cloud Computing and mobile — as soon as possible.

The panel consisted of Robert Weisman, CEO and Chief Enterprise Architect at Build The Vision; Andras Szakal, Vice President and CTO of IBM’s Federal Division; Jim Hietala, Vice President for Security at The Open Group; and Chris Gerty, Deputy Program Manager at the Open Innovation Program at NASA. I served as the moderator.

And this special thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on “Big Data — the transformation we need to embrace today.”

Threaded factors

An interesting thread for me throughout the conference was to factor where Big Data begins and plain old data, if you will, ends. Of course, it’s going to vary quite a bit from organization to organization.

But Gerty from NASA, part of our panel, provided a good example: It’s when you run out of gas with your old data methods, and your ability to deal with the data — and it’s not just the size of the data itself.

Therefore, Big Data means doing things differently — not just to manage the velocity, volume, and variety of the data, but to think about data fundamentally differently. And we need to think about security, risk, and governance. If it’s a “boundaryless organization” when it comes to your data, whether as a product, a service, or a resource, then which data should be exposed, which should be opened, and which should be very closely guarded all need to be factored, determined, and implemented.

Here are some excerpts from the on-stage discussion:

Dana Gardner: You mentioned that Big Data to you is not a factor of the size, because NASA’s dealing with so much. It’s when you run out of steam, as it were, with the methodologies. Maybe you could explain more. When do you know that you’ve actually run out of steam with the methodologies?

Gerty: When we collect data, we have some sort of goal in mind of what we might get out of it. When we put the pieces from the data together, either it doesn’t fit as well as you thought, or you are successful and you continue to do the same thing, gathering archives of information.

At that point, where you realize there might even be something else that you want to do with the data, different from what you planned originally, that’s when we have to pivot a little bit and say, “Now I need to treat this as a living archive. It’s a ‘it may live beyond me’ type of thing.” At that point, I think you treat it as setting up the infrastructure for being used later, whether it be by you or someone else. That’s an important transition to make, and it might be what one could define as Big Data.

Gardner: Andras, does that square with where you are in your government interactions — that data now becomes a different type of resource, and that you need to know when to do things differently?

Szakal: The importance of data hasn’t changed. The data itself, the veracity of the data, is still important. Transactional data will always need to exist. The difference is that you certainly have the three or four Vs, depending on how you look at it, but the importance of data is in its veracity and in your ability to understand or use that data before its shelf life runs out.

Gardner: Bob, we’ve seen the price points on storage go down so dramatically. We’ve seem people just decide to hold on to data that they wouldn’t have before, simply because they can and they can afford to do so. That means we need to try to extract value and use that data. From the perspective of an enterprise architect, how are things different now, vis-à-vis this much larger set of data and variety of data, when it comes to planning and executing as architects?Some data has a shelf life that’s long lived. Other data has very little shelf life, and you would use different approaches to being able to utilize that information. It’s ultimately not about the data itself, but it’s about gaining deep insight into that data. So it’s not storing data or manipulating data, but applying those analytical capabilities to data.

Weisman: One of the major issues is that organizations are normally holding two orders of magnitude more data than they need. It’s a huge overhead, both in terms of the applications architecture, which has a code base larger than it should be, and the technology architecture, which is supporting a horrendous number of servers and a whole bunch of technology that they don’t need.

The issue for the architect is to figure out what data is useful, institute a governance process so that you can have data lifecycle management and proper disposition, focus the organization on the information, data, and knowledge that will provide business value, and help the organization innovate and gain a competitive advantage.
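
Weisman’s lifecycle point lends itself to a small illustration. Below is a minimal sketch, in Python, of the kind of retention rule an architect might encode once governance has classified the data; the dataset classes and retention periods are hypothetical, not anything the panel prescribes.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policies; real rules come out of the
# organization's data governance process.
RETENTION = {
    "transactional": timedelta(days=365 * 7),  # long shelf life
    "telemetry": timedelta(days=90),           # short shelf life
    "derived": timedelta(days=30),             # cheap to regenerate
}

def disposition(dataset_class: str, created: datetime) -> str:
    """Return 'retain', 'dispose', or 'review' for a dataset under its policy."""
    policy = RETENTION.get(dataset_class)
    if policy is None:
        return "review"  # unclassified data goes back to governance
    age = datetime.now(timezone.utc) - created
    return "retain" if age < policy else "dispose"

created = datetime(2013, 1, 1, tzinfo=timezone.utc)
print(disposition("telemetry", created))  # 'dispose' once 90 days have passed
```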

Can’t afford it

And in terms of government, just improve service delivery, because there’s waste right now on information infrastructure, and we can’t afford it anymore.

Gardner: So it’s difficult to know what to keep and what not to keep. I’ve actually spoken to a few people lately who want to keep everything, just because they want to mine it, and they are willing to spend the money and effort to do that.

Jim Hietala, when people do get to this point of trying to decide what to keep, what not to keep, and how to architect properly for that, they also need to factor in security. It shouldn’t come late in the process; it should come early. What are some of the precepts that you think are important in applying good security practices to Big Data?

Hietala: One of the big challenges is that many of the big-data platforms weren’t built from the get-go with security in mind. So some of the controls that you’ve had available in your relational databases, for instance, you move over to the Big Data platforms and the access control authorizations and mechanisms are not there today.

Planning the architecture, and looking at bringing in third-party controls to give you the security mechanisms that you are used to in your older platforms, is something that organizations are going to have to do. It’s really an evolving and emerging thing at this point.

Gardner: There are a lot of unknown unknowns out there, as we discovered with our tweet chat last month. Some people think that data is just data, and you apply the same security to it. Do you think that’s the case with Big Data? Is it just another follow-through of what you always did with data in the first place?

Hietala: I would say yes, at a conceptual level, but it’s like what we saw with virtualization. When there was a mad rush to virtualize everything, many of those traditional security controls didn’t translate directly into the virtualized world. The same thing is true with Big Data.

When you’re talking about those volumes of data, applying encryption and various other security controls, you have to think about how those things are going to scale. That may require new solutions and new technologies.
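
To make that scaling concern concrete, here is a minimal sketch using Python’s cryptography package. The per-partition key scheme is our assumption for illustration, one common way to keep key management proportional to the number of partitions rather than to individual records; the panel names no specific design.

```python
from cryptography.fernet import Fernet

# One key per partition, so key management grows with the number of
# partitions, not with the number of records. Illustrative only.
partition_keys = {p: Fernet.generate_key() for p in range(4)}

def encrypt_record(partition: int, record: bytes) -> bytes:
    """Encrypt a single record with its partition's key."""
    return Fernet(partition_keys[partition]).encrypt(record)

def decrypt_record(partition: int, token: bytes) -> bytes:
    return Fernet(partition_keys[partition]).decrypt(token)

ciphertext = encrypt_record(0, b"customer-event-123")
assert decrypt_record(0, ciphertext) == b"customer-event-123"
```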

Gardner: Chris Gerty, when it comes to that governance, security, and access control, are there any lessons that you’ve learned about balancing the best of openness with the ability to manage the spigot?

Gerty: Spigot is probably a dangerous term to use, because it implies that all data is treated the same. The sooner that you can tag the data as either sensitive or not, mostly coming from the person or team that’s developed or originated the data, the better.
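
Gerty’s advice, tagging data at the source, maps naturally onto a small provenance envelope attached at ingest. The sketch below is illustrative; the field names and the single boolean sensitivity flag are our assumptions, and a real scheme would likely use graded classifications.

```python
import hashlib
import json
from datetime import datetime, timezone

def ingest(payload: bytes, originator: str, sensitive: bool) -> dict:
    """Wrap raw data with provenance and a sensitivity tag at ingest time."""
    return {
        "sha256": hashlib.sha256(payload).hexdigest(),  # content fingerprint
        "originator": originator,                       # who produced the data
        "sensitive": sensitive,                         # decided at the source
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

record = ingest(b"telemetry frame", originator="sensor-team", sensitive=False)
print(json.dumps(record, indent=2))
```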

Kicking the can

Once you have it on a hard drive, once you get crazy about storing everything, if you don’t know where it came from, you’re forced to put it into a secure environment. And that’s just kicking the can down the road. It’s really a disservice to people who might use the data in a useful way to address their problems.

We constantly have satellites that are made for one purpose. They send all the data down. It’s controlled either for security or for intellectual property (IP), so someone can write a paper. Then, after the project doesn’t get funded or it just comes to a nice graceful close, there is that extra step, which is almost a responsibility of the originators, to make it useful to the rest of the world.

Gardner: Let’s look at Big Data through the lens of some other major trends right now. Let’s start with Cloud. You mentioned that at NASA, you have your own private Cloud that you’re using a lot, of course, but you’re also now dabbling in commercial and public Clouds. Frankly, the price points that these Cloud providers are offering for storage and data services are pretty compelling.

So we should expect more data to go to the Cloud. Bob, from your perspective, as organizations and architects have to think about data in this hybrid Cloud, on-premises and off-premises, moving back and forth, what do you think enterprise architects need to start thinking about in terms of managing that and planning for the right destination of data, based on the right mix of other requirements?

Weisman: It’s a good question. As you said, the price point is compelling, but the security and privacy of the information is something else that has to be taken into account. Where is that information going to reside? You have to have very stringent service-level agreements (SLAs) and in certain cases, you might say it’s a price point that’s compelling, but the risk analysis that I have done means that I’m going to have to set up my own private Cloud.

Right now, everybody’s saying the public Cloud is going to be the way to go. Vendors are going to have to be very sensitive to that, and many are, at this point in time, addressing a lot of the needs of some of their large client bases. So it’s not one-size-fits-all, and it’s about more than just a price for a service. Architecture can bring down the price pretty dramatically, even within an enterprise.

Gardner: Andras, how do the Cloud and Big Data come together in a way that’s intriguing to you?

Szakal: Actually it’s a great question. We could take the rest of the 22 minutes talking on this one question. I helped lead the President’s Commission on Big Data that Steve Mills from IBM and — I forget the name of the executive from SAP — led. We intentionally tried to separate Cloud from Big Data architecture, primarily because we don’t believe that, in all cases, Cloud is the answer to all things Big Data. You have to define the architecture that’s appropriate for your business needs.

However, it also depends on where the data is born. Take many of the investments IBM has made in enterprise marketing management, for example Coremetrics, and several of these services that we now offer for helping customers gain deep insight into how their retail market or supply chain behaves.

Born in the Cloud

All of that information is born in the Cloud. But if you’re talking about actually using Cloud as infrastructure and moving around huge sums of data or constructing some of these solutions on your own, then some of the ideas that Bob conveyed are absolutely applicable.

I think it becomes prohibitive to do that, and easier to stand up a hybrid environment for managing the amount of data. But I think you have to consider whether your data is real-time data, whether it’s data you could apply some of these new technologies to, like Hadoop and MapReduce-type solutions, or whether it’s traditional data warehousing.

Data warehouses are going to continue to exist and they’re going to continue to evolve technologically. You’re always going to use a subset of data in those data warehouses, and it’s going to be an applicable technology for many years to come.
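
For readers who haven’t met the Hadoop MapReduce model Szakal mentions, here is the pattern in miniature: single-process Python standing in for what a cluster would distribute across many machines.

```python
from collections import defaultdict
from functools import reduce

def map_phase(records):
    # Map: emit (key, value) pairs from each input record.
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: fold each group of values down to one result.
    return {k: reduce(lambda a, b: a + b, vs) for k, vs in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big insight"])))
print(counts)  # {'big': 2, 'data': 1, 'insight': 1}
```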

Gardner: So suffice it to say, an enterprise architect who is well versed in both Cloud infrastructure requirements, technologies, and methods, as well as Big Data, will probably be in quite high demand. That specialization in one or the other isn’t as valuable as being able to cross-pollinate between them.

Szakal: Absolutely. It’s about enabling our architects and finding individuals who have this unique set of skills: analytics, mathematics, and business. Those individuals are going to be the future architects of the IT world, because analytics and Big Data are going to be integrated into everything that we do and become part of the business processing.

Gardner: Well, that’s a great segue to the next topic that I am interested in, and it’s around mobility as a trend and also application development. The reason I lump them together is that I increasingly see developers being tasked with mobile first.

When you create a new app, you have to remember that this is going to run in the mobile tier and you want to make sure that the requirements, the UI, and the complexity of that app don’t go beyond the ability of the mobile app and the mobile user. This is interesting to me, because data now has a different relationship with apps.

We used to think of apps as creating data and then the data would be stored and it might be used or integrated. Now, we have applications that are simply there in order to present the data and we have the ability now to present it to those mobile devices in the mobile tier, which means it goes anywhere, everywhere all the time.

Let me start with you, Jim, because it’s security and risk, but it’s also just rethinking the way we use data in a mobile tier. If we can do it safely, and that’s a big IF, how important should it be for organizations to start thinking about making this data available to all of these devices and pouring it out into that mobile tier as much as possible?

Hietala: In terms of enabling the business, it’s very important. There are a lot of benefits that accrue from accessing your data from whatever device you happen to be on. To me, it is that question of “if,” because now there’s a whole lot of problems to be solved relative to the data floating around anywhere on Android, iOS, or whatever the platform is, and the organization being able to lock down its data on those devices, forgetting about whether it’s the organization’s device or my device. There’s a set of issues around that that the security industry is just starting to get its arms around today.

Mobile ability

Gardner: Chris, any thoughts about this mobile ability that the data gets more valuable the more you can use it and apply it, and then the more you can apply it, the more data you generate that makes the data more valuable, and we start getting into that positive feedback loop?

Gerty: Absolutely. It’s almost an appreciation of what more people could do and get to the problem. We’re getting to the point where, if it’s available on your desktop, you’re going to find a way to make it available on your device.

Those same security questions probably need to be answered anyway, but making it mobile-compatible is almost an acknowledgment that there will be someone who wants to use it. So let me go that extra step to make it compatible and see what I get from them. It’s more of a cultural benefit that you get from making things compatible with mobile.

Gardner: Any thoughts about what developers should be thinking in trying to bring the fruits of Big Data, through these analytics, to more users rather than just the BI folks or those who are good at SQL queries? Does this change the game by making an application on a mobile device simple and powerful, yet able to access this real-time, updated treasure trove of data?

Gerty: I always think of the astronaut on the moon. He’s got a big, bulky glove and he might have a heads-up display in front of him, but he really needs exactly the right piece of information at the right moment, dealing with bandwidth issues, dealing with the environment, a foggy helmet, whatever.

It’s very analogous to what the day-to-day professional will use trying to find out that quick e-mail he needs to know or which meeting to go to — which one is more important — and it all comes down to putting your developer in the shoes of the user. So anytime you can get interaction between the two, that’s valuable.

Weisman: From an Enterprise Architecture point of view, my background is mainly defense and government, and defense mobile computing has been around for decades. So you’ve always been dealing with that.

The main thing is that, in many cases, if they’re coming up with information, the whole presentation layer is turning into another architecture domain, with information visualization and also with your security controls and an integrated identity-management capability.

It’s like you were saying about the astronaut getting it right. He doesn’t need to know everything that’s happening in the world. He needs his heads-up display to show the stuff that’s relevant to him.

So it’s getting the right information to the person, in an authorized manner, in a way that he can visualize and make sense of that information, be it straight data, analytics, or whatever. The presentation layer, ergonomics, and visual communication are going to become very important in the future for that. It also solves a lot of problems: rather than doing it at the application level, you’re doing it entirely in one layer.

Governance and security

Gardner: So clearly the implications of data are cutting across how we think about security, how we think about UI, how we factor in mobility. What we now think about in terms of governance and security, we have to do differently than we did with older data models.

Jim Hietala, what about the impact on spurring people toward more virtualized desktop delivery, if you don’t want to have the data on that end device, if you want to solve some of the issues around control and governance, and if you want to be able to manage just how much data gets into that UI, not too much, not too little?

Do you think that some of these concerns we’re addressing will push people to look even harder, maybe more aggressively, at desktop and application virtualization, as they say, keep it on the server, deliver out just the deltas?

Hietala: That’s an interesting point. I’ve run across a startup in the last month or two that is doing just that. The whole value proposition is to virtualize the environment. You get virtual gold images. You don’t have to worry about what’s actually happening on the physical device, and you know when the devices connect. The security threat goes away. So we may see more of that as a solution to that.

Gardner: Andras, do you see that some of the implications of Big Data, far-fetched as it may be, are propelling people to cultivate their servers more and virtualize their apps, their data, and their desktops right up to the end devices?

Szakal: Yeah, I do. I see IBM providing solutions for virtual desktop, but I think it was really a security question you were asking. You’re certainly going to see an additional number of virtualized desktop environments.

Ultimately, our network still is not stable enough, or at a high enough bandwidth, to really make that a useful exercise for all but the most menial users in the enterprise. From a security point of view, there is still a lot to be solved.

And part of the challenge in the Cloud environment that we see today is the proliferation of virtual machines (VMs) and the inability to actually contain the security controls within those machines and across these machines from an enterprise perspective. So we’re going to see more solutions proliferate in this area and to try to solve some of the management issues, as well as the security issues, but we’re a long ways away from that.

Gerty: Big Data, by itself, isn’t magical. It doesn’t have the answers just by being big. If you need more, you need to pry deeper into it. That’s the example. They realized early enough that they were able to make something good.

Gardner: Jim Hietala, any thoughts about examples that illustrate where we’re going and why this is so important?

Hietala: Being a security guy, I tend to talk about scare stories, horror stories. One example from last year struck me: one of the major retailers here in the U.S. hit the news for having predicted, through customer purchase behavior, when people were pregnant.

They could look and see, based upon a basket of 20 items, that if you’re buying 15 of these and your purchase behavior has changed, they can tell that. The privacy implications of that are somewhat concerning.

An example was that this retailer was sending out coupons related to somebody being pregnant. The teenage girl, who was pregnant, hadn’t told her family yet. The father found the coupons. There was alarm in the household, and at the local retail store, when the father went and confronted them.

Privacy implications

There are privacy implications from the use of Big Data. When you get powerful new technology in marketing people’s hands, things sometimes go awry. So I’d throw that out just as a cautionary tale that there is that aspect to this. When you can see across people’s buying transactions, things like that, there are privacy considerations that we’ll have to think about, and that we really need to think about as an industry and a society.


SOA Provides Needed Support for Enterprise Architecture in Cloud, Mobile, Big Data, Says Open Group Panel

By Dana Gardner, BriefingsDirect

There’s been a resurgent role for service-oriented architecture (SOA) as a practical and relevant ingredient for effective design and use of Cloud, mobile, and big data technologies.

To find out why, The Open Group recently gathered an international panel of experts to explore the concept of “architecture is destiny,” especially when it comes to hybrid services delivery and management. The panel shows how SOA is proving instrumental in allowing the needed advancements over highly distributed services and data, when it comes to scale, heterogeneity support, and governance.

The panel consists of Chris Harding, Director of Interoperability at The Open Group, based in the UK; Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture Projects within The Open Group, based in Michigan; and Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group, based in Sweden. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

The full podcast can be found here.

Here are some excerpts:

Gardner: Why this resurgence in the interest around SOA?

Harding: My role in The Open Group is to support the work of our members on SOA, Cloud computing, and other topics. We formed the SOA Work Group back in 2005, when SOA was a real emerging hot topic, and we set up a number of activities and projects. They’re all completed.

I was thinking that the SOA Work Group would wind down, move into maintenance mode, and meet once every few months or so, but we still get a fair attendance at our regular web meetings.

In fact, we’ve started two new projects and we’re about to start a third one. So, it’s very clear that there is still an interest, and indeed a renewed interest, in SOA from the IT community within The Open Group.

Larger trends

Gardner: Nikhil, do you believe that this has to do with some of the larger trends we’re seeing in the field, like Cloud Software as a Service (SaaS)? What’s driving this renewal?

Kumar: What I see driving it is three things. One is the advent of the Cloud and mobile, which requires a lot of cross-platform delivery of consistent services. The second is emerging technologies, mobile, big data, and the need to be able to look at data across multiple contexts.

The third thing that’s driving it is legacy modernization. A lot of organizations are now a lot more comfortable with SOA concepts. I see it in a number of our customers. I’ve just been running a large Enterprise Architecture initiative in a Fortune 500 customer.

At each stage, and at almost every point in that, they’re now comfortable. They feel that SOA can provide the ability to rationalize multiple platforms. They’re restructuring organizational structures, delivery organizations, as well as targeting their goals around a service-based platform capability.

So legacy modernization is a back-to-the-future kind of thing that has come back and is getting adoption. The way it’s being implemented is using RESTful services, as well as SOAP services, which is different from traditional SOA, say from the last version, which was mostly SOAP-driven.
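
Kumar’s modernization pattern, exposing an existing capability through lightweight RESTful services, is easy to picture. Here is a minimal sketch using Flask; the legacy lookup function and the route are hypothetical stand-ins, not anything from the discussion.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def legacy_lookup_policy(policy_id: str) -> dict:
    # Stand-in for a legacy (perhaps SOAP-fronted) system of record.
    return {"id": policy_id, "status": "active"}

@app.route("/policies/<policy_id>")
def get_policy(policy_id):
    # Expose the legacy capability as a lightweight RESTful resource.
    return jsonify(legacy_lookup_policy(policy_id))

if __name__ == "__main__":
    app.run(port=8080)
```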

Gardner: Mats, do you think that what’s happened is that the marketplace and the requirements have changed and that’s made SOA more relevant? Or has SOA changed to better fit the market? Or perhaps some combination?

Gejnevall: I think that the Cloud is really a service-delivery platform. Companies discover that, to be able to use Cloud services, the SaaS things, they need to look at SOA as their internal way of developing things as well. They understand they need to do the architecture internally, and if they’re going to use lots of external Cloud services, they might as well use SOA to do that.

Also, if you look at the Cloud suppliers, they also need to do their architecture in some way and SOA probably is a good vehicle for them. They can use that paradigm and also deliver what the customer wants in a well-designed SOA environment.

Gardner: Let’s drill down on the requirements around the Cloud and some of the key components of SOA. We’re certainly seeing, as you mentioned, the need for cross support for legacy and Cloud types of services, using a variety of protocols, transports, and integration types. We already heard about REST for lightweight approaches and, of course, there will still be the need for object brokering and some of the more traditional enterprise integration approaches.

This really does sound like the job for an Enterprise Service Bus (ESB). So let’s go around the panel and look at this notion of an ESB. Some people, a few years back, didn’t think it was necessary or a requirement for SOA, but it certainly sounds like it’s the right type of functionality for the job.

Loosely coupled

Harding: I believe so, but maybe we ought to consider that in the Cloud context, you’re not just talking about within a single enterprise. You’re talking about a much more loosely coupled, distributed environment, and the ESB concept needs to take account of that in the Cloud context.

Gardner: Nikhil, any thoughts about how to manage this integration requirement around the modern SOA environment and whether ESBs are more or less relevant as a result?

Kumar: In the context of the Cloud, we really see SOA and the concept of service contracts coming to the fore. In that scenario, ESBs play a role as a broker within the enterprise. When we talk about the interaction across Cloud-service providers and Cloud consumers, what we’re seeing is that the service provider has its own concept of an ESB within its own internal context.

If you want your Cloud services to be really reusable, the concept of the ESB then becomes more for the routing and the mediation of those services, once they’re provided to the consumer. There’s a kind of separation of concerns between the concept of a traditional ESB and a Cloud ESB, if you want to call it that.

The Cloud context involves more of the need to be able to support, enforce, and apply governance concepts and audit concepts, the capabilities to ensure that the interaction meets quality of service guarantees. That’s a little different from the concept that drove traditional ESBs.

That’s why you’re seeing API management platforms like Layer 7, Mashery, or Apigee and other kinds of product lines. They’re also coming into the picture, driven by the need to support the way Cloud providers are provisioning their services. As Chris put it, you’re looking beyond the enterprise. Who owns it? That’s where the role of the ESB is different from the traditional concept.

Most Cloud platforms have cost factors associated with locality. If you have truly global enterprises and services, you need to factor in the ability to deal with safe-harbor issues, and you need to factor in variations in law in terms of security governance.

The platforms that are evolving are starting to provide this out of the box. The service consumer or service provider needs to be able to support those. That’s going to become the role of the ESB in the future: to be able to consume a service, to assert quality-of-service guarantees, and to manage constraints on data-in-flight and data-at-rest.
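
The routing, mediation, quality-of-service, and audit duties Kumar assigns to a Cloud-era ESB can be sketched in a few lines. Everything below, the service registry, the latency bound, the audit record, is illustrative rather than a reference to any product mentioned here.

```python
import time

# Toy registry of callable services; a real broker would discover
# endpoints and enforce contracts agreed with each provider.
SERVICES = {"quote": lambda req: {"symbol": req["symbol"], "price": 42.0}}

AUDIT_LOG = []

def mediate(service: str, request: dict, max_latency_s: float = 0.5) -> dict:
    """Route a request, time it against a QoS bound, and audit the call."""
    start = time.monotonic()
    response = SERVICES[service](request)
    elapsed = time.monotonic() - start
    AUDIT_LOG.append({"service": service,
                      "elapsed_s": elapsed,
                      "within_qos": elapsed <= max_latency_s})
    return response

print(mediate("quote", {"symbol": "IBM"}))
```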

Gardner: Mats, are there other aspects of the concept of ESB that are now relevant to the Cloud?

Entire stack

Gejnevall: One of the reasons SOA didn’t really take off in many organizations three, four, or five years ago was the need to buy the entire stack of SOA products that the consultancies were asking companies to buy: an ESB, governance tools, business process management tools, quite large investments just to get your foot in the door of doing SOA.

These days, you can buy the entire stack in the Cloud and start playing with it. I did some searches on it today and found a company where you can play with the entire stack, including business tools and everything like that, for zero dollars. Then you can grow and use more and more of it in your business, but you can start to see if this is something for you.

In the past, the suppliers or the consultants told you that you could do it. You couldn’t really try it out yourself. You needed both the software and the hardware in place. The money to get started is much lower today. That’s another reason people might be thinking about it these days.

Gardner: It sounds as if there’s a new type of on-ramp to SOA values, and the componentry that supports SOA is now being delivered as a service. On top of that, you’re also able to consume it in a pay-as-you-go manner.

Harding: That’s a very good point, but there are two contradictory trends we are seeing here. One is the kind of trend that Mats is describing, where the technology you need to handle a complex stack is becoming readily available in the Cloud.

And the other is the trend that Nikhil mentioned: to go for a simpler style, which a lot of people term REST, for accessing services. It will be interesting to see how those two tendencies play out against each other.

Kumar: I’d like to make a comment on that. The approach for the on-ramp is really one of the key differentiators of the Cloud, because you have the agility and the lack of capital investment (CAPEX) required to test things out.

But as we are evolving with Cloud platforms, I’m also seeing, in a lot of Platform-as-a-Service (PaaS) vendor scenarios, that they’re building the ESB into the stack itself. They’re providing it in their Cloud fabric. A couple of large players have already done that.

For example, Azure provides that in its forward-looking vision. I am sure IBM and Oracle have already started down that path. A lot of the players are going to provide it as a core capability.

Pre-integrated environment

Gejnevall: Another interesting thing is that they could get a whole environment that’s pre-integrated. Usually, when you buy these things from a vendor, a lot of times they don’t fit together that well. Now, there’s an effort to make them work together.

But some people have put these open-source tools together and put them out on the Cloud, which gives them a pretty cheap platform for themselves. Then, they can sell it at a reasonable price, because of the integration of all these things.

Gardner: The Cloud model may be evolving toward an all-inclusive offering. But SOA, by its definition, advances interoperability, to plug and play across existing, current, and future sets of service possibilities. Are we talking about SOA being an important element of keeping Clouds dynamic and flexible — even open?

Kumar: We can think about the OSI 7 Layer Model. We’re evolving in terms of complexity, right? So from an interoperability perspective, we may talk SOAP or REST, for example, but the interaction with AWS, Salesforce, SmartCloud, or Azure would involve using APIs that each of these platforms provide for interaction.

Lock-in

So you could have an AMI, which is an image on the Amazon Web Services environment, for example, and that could support a LAMP stack or an open-source stack. How you interact with it, how you monitor it, how you cluster it, all of those aspects now start factoring in specific APIs, and so that’s the lock-in.

From an architect’s perspective, I look at it as we need to support proper separation of concerns, and that’s part of [The Open Group] SOA Reference Architecture. That’s what we tried to do, to be able to support implementation architectures that support that separation of concerns.

There’s another factor that we need to understand from the context of the Cloud, especially for mid-to-large sized organizations, and that is that the Cloud service providers, especially the large ones — Amazon, Microsoft, IBM — encapsulate infrastructure.

If you were to go to Amazon, Microsoft, or IBM and use their IaaS networking capabilities, you’d have one of the largest WAN networks in the world, and you wouldn’t have to pay a dime to establish that infrastructure. Not in terms of the cost of the infrastructure, not in terms of the capabilities required, nothing. So that’s an advantage that the Cloud is bringing, which I think is going to be very compelling.

The other thing is that, from an SOA context, you’re now able to look at it and say, “Well, I’m dealing with the Cloud, and what all these providers are doing is make it seamless, whether you’re dealing with the Cloud or on-premise.” That’s an important concept.

Now, each of these providers and different aspects of their stacks are at significantly different levels of maturity. Many of these providers may find that their stacks do not interoperate with themselves either, within their own stacks, just because they’re using different run times, different implementations, etc. That’s another factor to take in.

From an SOA perspective, the Cloud has become very compelling, because I’m dealing, let’s say, with a Salesforce.com and I want to use that same service within the enterprise, let’s say, an insurance capability for Microsoft Dynamics or for SugarCRM. If that capability is exposed to one source of truth in the enterprise, you’ve now reduced the complexity and have the ability to adopt different Cloud platforms.

What we are going to start seeing is that the Cloud is going to shift from being just one à-la-carte solution for everybody. It’s going to become something similar to what we used to deal with in the enterprise context. You had multiple applications, which you service-enabled to reduce complexity and provide one service-based capability, instead of an application-centered approach.

You’re now going to move the context to the Cloud, to your multiple Cloud solutions, and maybe many implementations in a nontrivial environment for the same business capability, but they are now exposed to services in the enterprise SOA. You could have Salesforce. You could have Amazon. You could have an IBM implementation. And you could pick and choose the source of truth and share it.
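
One way to read Kumar’s “pick and choose the source of truth” idea is as a service façade: the enterprise codes against one contract, and providers are swapped behind it. A minimal sketch follows, with hypothetical provider names and a toy customer-records contract.

```python
from typing import Protocol

class CustomerRecords(Protocol):
    # The single service contract the enterprise codes against.
    def get_customer(self, customer_id: str) -> dict: ...

class SalesforceBackend:
    def get_customer(self, customer_id: str) -> dict:
        # In reality this would call the provider's API.
        return {"id": customer_id, "source": "salesforce"}

class InHouseBackend:
    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "in-house"}

def make_customer_service(provider: str) -> CustomerRecords:
    backends = {"salesforce": SalesforceBackend, "in-house": InHouseBackend}
    return backends[provider]()

service = make_customer_service("salesforce")
print(service.get_customer("C-42"))
```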

So a lot of the core SOA concepts will still apply and are still applying.

Another on-ramp

Gardner: Perhaps yet another on-ramp to the use of SOA is the app store, which allows for discovery and socialization of services, but at the same time provides governance and control?

Kumar: We’re seeing that with a lot of our customers. Typically, the vendors who support PaaS solutions associate app-store models with their platforms as a mechanism to gain market share.

The issue that you run into with that is that it’s okay if it’s on your cellphone, your iPad, your tablet PC, or whatever, but once you start having managed apps, for example Salesforce, or applications being deployed in an Azure or SmartCloud context, you have a high-risk scenario. You don’t know how well architected that application is. It’s just like going and buying an enterprise application.

When you deploy it in the Cloud, you really need to understand the PaaS platform in question to understand the implications in terms of dependencies and cross-dependencies across the apps that you have installed. These have real practical implications for maintainability and performance. We’ve seen that with at least two platforms in the last six months.

Governance becomes extremely important. Because of the low CAPEX implications to the business, the business is very comfortable with going and buying these applications and saying, “We can install X, Y, or Z and it will cost us two months and a few million dollars and we are all set.” Or maybe it’s a few hundred thousand dollars.

They don’t realize the implications in terms of interoperability, performance, and standard architectural quality attributes that can occur. There is a governance aspect from the context of the Cloud provisioning of these applications.

There is another aspect to it, which is governance of the run-time, more classic SOA governance, to measure, assert, and view the cost of these applications in terms of performance against your infrastructure resources and your security constraints. Also, are there scenarios where the application itself has a dependency on a daisy chain of multiple external applications, to trace the data?

In terms of the context of app stores, they’re almost like SaaS with a particular platform in mind. They provide the buyer with certain commitments from the platform manager or the platform provider, such as security. When you buy an app from Apple, there is at least a reputational expectation of security from the vendor.

What you do not always know is if that security is really being provided. There’s a risk there for organizations who are exposing mission-critical data to that.

The second thing is that there is still very much a place for the classic SOA registries and repositories in the Cloud; only now they serve a different purpose. Those registries and repositories are used either by service providers or by consumers to maintain the list of services they’re using internally.

Different paradigms

There are two different paradigms. The app store is a place where I can go and know that the gas I am going to get is 85 percent ethanol, whereas at home I also have to maintain some basic set of goods to make sure that I have my dinner on time. They’re different kinds of roles serving different kinds of purposes.

Above all, I think the thing that’s going to become more and more important in the context of the Cloud is that the functionality will be provided by the Cloud platform or the app you buy, but the governance will be a major IT responsibility, right from the time of picking the app, to the time of delivering it, to the time of monitoring it.

Gardner: How is The Open Group allowing architects to better exercise SOA principles, as they’re grappling with some of these issues around governance, hybrid services delivery and management, and the use and demand in their organizations to start consuming more Cloud services?

Harding: The architect’s primary concern, of course, has to be to meet the needs of the client and to do so in a way that is most effective and cost-effective. The Cloud gives the architect the ability to go out and get different components much more easily than hitherto.

There is a problem, of course, with integrating them and putting them together. SOA can provide part of the solution to that problem, in that it gives a principle of loosely coupled services. If you didn’t have that when you were trying to integrate different functionality from different places, you would be in a real mess.

What The Open Group contributes is a set of artifacts that enable the architect to think through how to meet the client’s needs in the best way when working with SOA and Cloud.

For example, the SOA Reference Architecture helps the architect understand what components might be brought into the solution. We have the SOA TOGAF Practical Guide, which helps the architect understand how to use TOGAF® in the SOA context.

We’re working further on artifacts in the Cloud space: the Cloud Computing Reference Architecture; a notational language for enabling people to describe Cloud ecosystems; and recommendations for Cloud interoperability and portability. We’re also working on recommendations for Cloud governance to complement the recommendations for SOA governance (the SOA Governance Framework Standards that we have already produced) and a number of other artifacts.

The Open Group’s real role is to support the architect and help the architect better meet the needs of the architect’s client.

From the very early days, SOA was seen as bringing a closer connection between the business and technology. A lot of those promises that were made about SOA seven or eight years ago are only now becoming possible to fulfill, and that business front is what that project is looking at.

We’re also producing an update to the SOA Reference Architectures. We have input the SOA Reference Architecture for consideration by the ISO Group that is looking at an International Standard Reference Architecture for SOA and also to the IEEE Group that is looking at an IEEE Standard Reference Architecture.

We hope that both of those groups will want to work along the principles of our SOA Reference Architecture and we intend to produce a new version that incorporates the kind of ideas that they want to bring into the picture.

We’re also thinking of setting up an SOA project to look specifically at assistance to architects building SOA into enterprise solutions.

So those are three new initiatives that should result in new Open Group standards and guides to complement, as I have described already, the SOA Reference Architecture, the SOA Governance Framework, the Practical Guides to using TOGAF for SOA.

We also have the Service Integration Maturity Model, which can be used to assess SOA maturity. We have a standard on service orientation applied to Cloud infrastructure, and we have a formal SOA Ontology.

Those are the things The Open Group has in place at present to assist the architect, and we are and will be working on three new things: version 2 of the Reference Architecture for SOA, SOA for business technology, and I believe shortly we’ll start on assistance to architects in developing SOA solutions.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


#ogChat Summary – Walled Garden Networks

By Patty Donovan, The Open Group

With hundreds of tweets flying at break-neck pace, yesterday’s #ogChat saw a very spirited discussion on the Internet’s movement toward a walled garden model. In case you missed the conversation, you’re in luck! Here’s a recap of yesterday’s #ogChat.

The full list of participants included:

Here is a high-level snapshot of yesterday’s #ogChat:

Q1 In the context of #WWW, why has there been a shift from the open Internet to portals, apps and walled environs? #ogChat

Participants generally agreed that the impetus behind the walled garden trend was led by two factors: companies and developers wanting more control, and a desire by users to feel “safer.”

  • @charleneli: Q1 Peeps & developers like order, structure, certainty. Control can provide that. But too much and they leave. #ogChat.
  • @Technodad: User info & contributions are raw material of walled sites-“If you’re not paying for the service, the product being sold is you”. #ogChat
  • @AlanWebber #ogChat Q1 – People feel safer inside the “Walls” but don’t realize what they are loosing

Q2 How has this trend affected privacy/control? Do users have enough control over their IDs/content within #walledgarden networks? #ogChat

This was a hot topic as participants debated the tradeoffs between great content and privacy controls. Questions of where data was used and leaked to also emerged, as walled gardens are known to have backdoors.

  • @AlanWebber: But do people understand what they are giving up inside the walls? #ogChat
  • @TheTonyBradley: Q2 — Yes and no. Users have more control than they’re aware of, but for many its too complex and cumbersome to manage properly. #ogchat
  • @jim_hietala: #ogChat Q2 privacy and control trade offs need to be made more obvious, visible
  • @zdFYRashid: Q2 users assume that #walledgarden means nothing leaves, so they think privacy is implied. They don’t realize that isn’t the case #ogchat
  • @JohnFontana: Q2 Notion is wall and gate is at the front of garden where users enter. It’s the back that is open and leaking their data #ogchat
  • @subreyes94: #ogchat .@DanaGardner More walls coming down through integration. FB and Twitter are becoming de facto login credentials for other sites

Q3 What has been the role of social and #mobile in developing #walledgardens? Have they accelerated this trend? #ogChat

Everyone agreed that social and mobile catalyzed the formation of walled garden networks. Many also gave a nod to location as a nascent driver.

  • @jaycross: Q3 Mobile adds your location to potential violations of privacy. It’s like being under surveillance. Not very far along yet. #ogChat
  • @charleneli: Q3: Mobile apps make it easier to access, reinforcing behavior. But also enables new connections a la Zynga that can escape #ogChat
  • @subreyes94: #ogChatQ3 They have accelerated the always-inside the club. The walls have risen to keep info inside not keep people out.
    • @Technodad: @subreyes94 Humans are social, want to belong to community & be in touch with others “in the group”. Will pay admission fee of info. #ogChat

Q4 Can people use the internet today without joining a walled garden network? What does this say about the current web? #ogChat

There were a lot of parallels drawn between real and virtual worlds. It was interesting to see that walled gardens provide a sense of exclusivity that humans seek out by nature. It was also interesting to see a generational gap emerge, as many participants cited their parents as not being part of a walled garden network.

  • @TheTonyBradley: Q4 — You can, the question is “would you want to?” You can still shop Amazon or get directions from Mapquest. #ogchat
  • @zdFYRashid: Q4 people can use the internet without joining a walled garden, but they don’t want to play where no one is. #ogchat
  • @JohnFontana: Q4 I believe we are headed to a time when people will buy back their anonymity. That is the next social biz. #ogchat

Q5 Is there any way to reconcile the ideals of the early web with the need for companies to own information about users? #ogChat

While walled gardens have started to emerge, the consumerization of the Internet and social media has really driven user participation and empowered users to create content within these walled gardens.

  • @JohnFontana: Q5 – It is going to take identity, personal data lockers, etc. to reconcile the two. Wall-garden greed heads can’t police themselves #ogchat
  • @charleneli: Q5: Early Web optimism was less about being open more about participation. B4 you needed to know HTML. Now it’s fill in a box. #ogChat
  • @Dana_Gardner: Q5 Early web was more a one-way street, info to a user. Now it’s a mix-master of social goo. No one knows what the goo is, tho. #ogChat
  • @AlanWebber: Q5, Once there are too many walls, people will begin to look on to the next (virtual) world. Happening already #ogChat

Q6 What #Web2.0 lessons learned should be implemented into the next iteration of the web? How to fix this? #ogChat

Identity was the most common topic with the sixth and final question. Single sign-on, personal identities on mobile phones/passports and privacy seemed to be the biggest issues facing the next iteration of the web.

  • @Technodad: Q6 Common identity is a key – need portable, mutually-recognized IDs that can be used for access control of shared info. #ogChat
  • @JohnFontana: Q6 Users want to be digital. Give them ways to do that safely and privately if so desired. #ogChat
  • @TheTonyBradley: Q6 — Single ID has pros and cons. Convenient to login everywhere with FB credentials, but also a security Achilles heel. #ogchat

Thank you to all the participants who made this such a great discussion!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.


Social Networks – Challenging an Open Internet? Walled Gardens Tweet Jam

By Patty Donovan, The Open Group

On July 10, The Open Group will host a special tweet jam to examine “walled gardens” and the effect of social media networks on the web.

The World Wide Web was originally intended to be an open platform – from the early forums for programmers exchanging code or listservs to today’s daily photo blogs or corporate websites providing product information. Information was meant to be free and available for public consumption, meaning any link on the World Wide Web could be accessed by anyone, anytime.

With the advent of Web 2.0, content no longer roams free. Increasingly, private companies and social networks, such as Facebook and Google Plus, have realized the value of controlling information and restricting the once open flow of the Internet. A link to a Facebook profile, for example, doesn’t lead to a member’s Facebook page, but instead to an invitation to join Facebook – a closed, member-only network where one must be inside the network to derive any benefit. And once one joins one of these “walled gardens,” personal content is shared in ways that are uncontrollable by the user.

As web data continues to explode and more and more information about Internet usage is gathered across sites, the pressure to “grow the gardens” with more personal data and content will continue to increase.

Please join us on July 10 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. BST for a tweet jam that will discuss the future of the web as it relates to information flow, identity management and privacy in the context of “walled garden” networks such as Facebook and Google. We welcome Open Group members and interested participants from all backgrounds to join the session and interact with our panel of experts, including:

To access the discussion, please follow the #ogChat hashtag next Tuesday during the allotted discussion time. Other hashtags we recommend using include:

  • Open Group Conference, Washington, D.C.: #ogDCA
  • Facebook: #fb (Twitter account: @facebook)
  • Google: #google (Twitter account: @google)
  • Identity management: #idM
  • Mobile: #mobile
  • IT security: #ITsec
  • Semantic web: #semanticweb
  • Walled garden: #walledgarden
  • Web 2.0: #web20

Below is a list of the questions that will be addressed during the hour-long discussion:

  1. In the context of the World Wide Web, why has there been a shift from the open Internet to portals, apps and walled environments?
  2. How has this trend affected privacy and control? Do users have enough control over their IDs and content within walled garden networks?
  3. What has been the role of social and mobile in developing walled gardens? Have they accelerated this trend?
  4. Can people use the Internet today without joining a walled garden network? What does this say about the current web?
  5. Is there any way to reconcile the ideals of the early web with the need for companies to own information about users?
  6. What Web 2.0 lessons learned should be implemented into the next iteration of the web?

And for those of you who are unfamiliar with tweet jams, here is some background information:

What Is a Tweet Jam?

A tweet jam is a one hour “discussion” hosted on Twitter. The purpose of the tweet jam is to share knowledge and answer questions on a chosen topic. Each tweet jam is led by a moderator (Dana Gardner) and a dedicated group of experts to keep the discussion flowing. The public (or anyone using Twitter interested in the topic) is free (and encouraged!) to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:

  • Have your first #ogChat tweet be a self-introduction: name, affiliation, occupation.
  • Start all other tweets with the question number you’re responding to and the #ogChat hashtag.
    • Sample: “Q4 People can still use the Internet without joining a walled garden, but their content exposure would be extremely limited #ogChat”
  • Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.
  • While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue!
  • A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.

If you have any questions prior to the event, please direct them to Rod McLeod (rmcleod at bateman-group dot com). We anticipate a lively chat on July 10 and hope you will be able to join!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the US.


The Open Group and MIT Experts Detail New Advances in ID Management to Help Reduce Cyber Risk

By Dana Gardner, The Open Group

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on Enterprise Architecture (EA), enterprise transformation, and securing global supply chains.

We’re joined in advance by some of the main speakers at the July 16 conference to examine the relationship between controlled digital identities and cyber risk management. Our panel explores how the technical and legal support of ID management best practices has been advancing rapidly. And we’ll see how individuals and organizations can better protect themselves through better understanding and management of their online identities.

The panelists are Jim Hietala, vice president of security at The Open Group; Thomas Hardjono, technical lead and executive director of the MIT Kerberos Consortium; and Dazza Greenwood, president of the CIVICS.com consultancy and lecturer at the MIT Media Lab. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is ID management, and how does it form a fundamental component of cybersecurity?

Hietala: ID management is really the process of identifying people who are logging onto computing services, verifying their identity, authenticating them, and authorizing them to access various services within a system. It’s something that’s been around in IT since the dawn of computing, and it keeps evolving in terms of new requirements and new issues for the industry to solve.

Particularly as we look at the emergence of cloud and software-as-a-service (SaaS) services, you have new issues for users in terms of identity, because we all have to create multiple identities for every service we access.

You have issues for the providers of cloud and SaaS services, in terms of how they provision and where they get authoritative identity information for users, and even for enterprises that have to federate identity across networks of partners. There are a lot of challenges there as well.
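To make that identify-authenticate-authorize sequence concrete, here is a minimal Python sketch. It is illustrative only, assuming a toy in-memory user store and permission table; the names are not from any particular product’s API.

import hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a salted password hash (PBKDF2 from the standard library).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

SALT = os.urandom(16)
USERS = {"alice": hash_password("correct horse battery staple", SALT)}
PERMISSIONS = {"alice": {"read:reports"}}

def authenticate(username: str, password: str) -> bool:
    # Verify the claimed identity against the stored credential.
    stored = USERS.get(username)
    return stored is not None and hmac.compare_digest(
        stored, hash_password(password, SALT))

def authorize(username: str, permission: str) -> bool:
    # Decide what the authenticated identity may access.
    return permission in PERMISSIONS.get(username, set())

if authenticate("alice", "correct horse battery staple"):
    print("authorized:", authorize("alice", "read:reports"))  # authorized: True

The federation problem Hietala raises is what happens when stores like USERS and PERMISSIONS no longer live in one system but must be asserted across organizational boundaries.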

Key theme

Figuring out who is at the other end of that connection is fundamental to all of cybersecurity. As we look at the conference that we’re putting on this month in Washington, D.C., a key theme is cybersecurity — and identity is a fundamental piece of that.

You can look at things that are happening right now: trojans, bank fraud, scammers and attackers wire-transferring money out of companies’ bank accounts, and other things you can point to.

There are failures in client security and in the customers’ security mechanisms on their devices, but I think there are also identity failures. Financial institutions need to adopt new approaches to prevent some of those sorts of things from happening. I don’t know if I’d use the word “rampant,” but they are clearly happening all over the place right now. So there is a high need to move quickly on some of these issues.

Gardner: Are we at a plateau? Or has ID management been a continuous progression over the past decade?

Hardjono: It’s been at least a decade since the industry began addressing identity and identity federation. Some in the audience might recall the Liberty Alliance, Project Liberty as it was in its early days.

One notable thing is that the industry’s efforts have been piecemeal, and the industry as a whole is now reaching the point where a true, correct identity is absolutely needed in transactions, at a time of so many so-called Internet scams.

Gardner: Dazza, is there a casual approach to this, or a professional need? By that, I mean that we see a lot of social media activities, Facebook for example, where people can have an identity and may or may not be verified. That’s sort of the casual side, but it sounds like what we’re really talking about is more for professional business or eCommerce transactions, where verification is important. In other words, is there a division between these two areas that we should consider before we get into it more deeply?

Greenwood: Rather than thinking of it as a division, a spectrum would be a more useful way to look at it. On one side, you have, as you mentioned, a very casual use of identity online, where it may be self-asserted. It may be that you’ve signed a posting or an email.

On the other side, of course, the Internet and other online services are being used to conduct very high-value, highly sensitive, or mission-critical interactions and transactions all the time. When you get toward that end of the spectrum, a lot more information is needed about the identity: authenticating that it really is that person, as Thomas was starting to foreshadow. The authorization, workflow permissions, and accesses are also incredibly important.

In the middle, you have a lot of gradations, based partly on the sensitivity of what’s happening and partly on culture and context. When you have people operating within organizations or contexts that are well-known and well-understood (where there is already a lot of not just technical, but business, legal, and cultural understanding of what happens if something goes wrong), the right kinds of supports and risk management processes are in place.

There are different ways that this can play out. It’s not always just a matter of higher security. It’s really higher confidence, and more trust based on a variety of factors. But the way you phrased it is a good way to enter this topic, which is, we have a spectrum of identity that occurs online, and much of it is more than sufficient for the very casual or some of the social activities that are happening.

Higher risk

But as the economy and our society move into a digital age, ever more fully and at ever-higher speeds, much more important, higher-risk, higher-value interactions are occurring. So we have to revisit how we have been addressing identity, giving it more attention and more careful design of the architectures and rules around it. Then we’ll be able to make that transition more gracefully and with less collateral damage, and really get to the benefits of going online.

Gardner: What’s happening to shore this up and pull it together? Let’s look at some of the big news.

Hietala: I think the biggest recent news is the U.S. National Strategy for Trusted Identities in Cyberspace (NSTIC) initiative. It clearly shows that a large government, the United States government, is focused on the issue and is willing to devote resources to furthering an ID management ecosystem and construct for the future. To me that’s the biggest recent news.

At a crossroads

Greenwood: We’re just now at a crossroads where, finally, industry, government, and increasingly the population in general are understanding that there is a different playing field. In the way we interact, the way we work, the way we do healthcare, the way we do education, the way our social groups cohere and communicate, big parts are happening online.

In some cases, it happens online through the entire lifecycle. What that means now is that a deeper approach is needed. Jim mentioned NSTIC as one example. There are a number of others to touch on that are occurring because of this profound transition, which requires a deeper treatment.

NSTIC is the U.S. government’s roadmap to go from its piecemeal approach to a coherent architecture and infrastructure for identity within the United States. It could provide a great model for other countries as well.

People can reuse their identity, and we can start to address what you’re talking about with identity theft (other people taking your ID) and, more to the point, how to prove you are who you said you were in order to get that ID back. That’s not always so easy after identity theft, because we don’t yet have an underlying, effective identity structure in the United States.

I just came back from a World Economic Forum meeting in the United Kingdom. I was very impressed by what their cabinet officers are doing with an identity-assurance scheme in large-scale procurement. It’s very consistent with the NSTIC approach in the United States. They can get tens of millions of their citizens using secure, well-authenticated identities across a number of transactions, while always keeping privacy, security, and individual autonomy at the forefront.

There are a number of technology and business milestones occurring as well. The Open Identity Exchange (OIX) is a great group that’s beginning to bring industry and other sectors together to look at their approaches and technology. We’ve had the Security Assertion Markup Language (SAML); Thomas is co-chair of its technical committee, and that’s getting a facelift.

That approach is being brought to mass scale with OpenID Connect, which combines OpenID and OAuth. A great number of technology innovations are coming online.

Legally, there are also some very interesting, newsworthy harbingers. Some of it is really just a deeper usage of statutes that were passed a few years ago: the Uniform Electronic Transactions Act and the Electronic Signatures in Global and National Commerce Act, among others, in the U.S.

There are the eSignature Directive and others in Europe and the rest of the world that have enabled interactions online and dealt with identity and signatures, but have left it to the private sector and to culture to determine which technologies, approaches, and solutions we’ll use.

Now, we’re getting not only one-off solutions, but architectures for a number of different solutions, so that whole sectors of the economy and segments of society can more fully go online. Practically everywhere you look, you see news and signs of this transition occurring. It’s an exciting time for people interested in identity.

Gardner: What’s most new and interesting from your perspective on what’s being brought to bear on this problem, particularly from a technology perspective?

Two dimensions

Hardjono: It’s along two dimensions. The first is within the Kerberos Consortium, where we have a number of people coming from the financial industry. They all have the same desire: to scale their services to the global market, basically signing up new customers abroad, outside the United States. In wanting to do so, they’re facing a question of identity: how do we assert that somebody in another country is truly who they say they are?

That introduces a number of difficult technical problems. The second dimension, closer to home and maybe at a smaller scale, is user consent. The OpenID Exchange and the OpenID Connect specifications have been completed, and people can do single sign-on using technology such as OAuth 2.0.

The next big thing is how an attribute provider (banks, telcos, and so on, who have data about me) can share that data with other partners in the industry, and across sectors of the industry, with my express consent, in a digital manner.
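As a rough illustration of the consent-gated attribute sharing Hardjono describes, here is a toy Python sketch in the spirit of OAuth 2.0- and UMA-style consent. It implements neither specification; the class and method names are assumptions for the example.

from dataclasses import dataclass, field

@dataclass
class AttributeProvider:
    attributes: dict                              # user -> {attribute: value}
    consents: set = field(default_factory=set)    # (user, partner, attribute)

    def grant_consent(self, user, partner, attribute):
        self.consents.add((user, partner, attribute))

    def withdraw_consent(self, user, partner, attribute):
        self.consents.discard((user, partner, attribute))

    def release(self, user, partner, attribute):
        # Release an attribute only if consent is currently on record.
        if (user, partner, attribute) not in self.consents:
            raise PermissionError("no consent on record")
        return self.attributes[user][attribute]

bank = AttributeProvider({"alice": {"age_over_18": True}})
bank.grant_consent("alice", "insurer", "age_over_18")
print(bank.release("alice", "insurer", "age_over_18"))   # True
bank.withdraw_consent("alice", "insurer", "age_over_18")
# A further release("alice", "insurer", "age_over_18") would now raise.

The point of the sketch is the lifecycle Hardjono emphasizes: consent is explicit, digital, and revocable.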

Gardner: Tell us a bit about the MIT Core ID approach and how this relates to the Jericho Forum approach.

Greenwood: I would defer to Jim of The Open Group to speak more authoritatively on the Jericho Forum, which is a part of The Open Group. But, in general, the Jericho Forum is a group of security experts from industry and beyond who have done some great work in the past on deperimeterized security and other foundational topics.

In the last few years, they’ve been really focused on identity, coming to realize that identity is at the center of what one would have to solve in order to have a workable approach to security. It’s necessary, but not sufficient, for security. We have to get that right.

To their credit, they’ve come up with a remarkably good list of simple, understandable principles, which they call the Jericho Forum Identity Commandments. I strongly commend them to everybody.

It puts forward a vision of an approach to identity which is very consistent with an approach that I’ve been exploring here at MIT for some years. A person would have a core identity, a core ID, and could from that create more than one persona. You may have a work persona, an eCommerce persona, maybe a social-networking persona, and so on. Some people may want a separate political persona.

You could cluster all of the accounts, interactions, services, attributes, and so forth directly related to each of those individual personas, and avoid the situation we’re almost blindly backing into right now, where, with a lot of the solutions in the market, the different aspects of your life merge, sometimes unintentionally or even counter-intentionally.

Good architecture

Sometimes, that’s okay. Sometimes, in fact, we need to be able to keep different parts of life separate. That’s part of privacy and can be part of security. It’s also just part of autonomy, and it’s good architecture. So Jericho Forum has the commandments.

Many years ago at MIT, we had a project called the Identity Embassy here in the Media Lab, where we put forward some simple prototypes and ideas for ways you could do that. Now, with all the recent activity we mentioned earlier toward full-scale usage of architectures for identity in the U.S. with NSTIC and around the world, we’re taking a stronger, deeper run at this problem.

Thomas and I have been collaborating across different parts of MIT. I’m putting out what we think is a very exciting and workable way that you can, in a high-security but also quite usable manner, have these core identifiers for individuals and inextricably link them to personas, while shielding the link back to the core ID and between the different personas, so that you can get the benefits when you want them, keeping the personas separate.

It also allows for many flexible business models and other personalization and privacy services as well, but we can get into that more in the fullness of time. In general, that’s what’s happening right now, and we couldn’t be more excited about it.
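One way to picture the core-ID-plus-personas idea is to derive each persona identifier one-way from the core ID, so that personas cannot be linked back to it, or to each other, without a key. This is a minimal sketch under that assumption, not the MIT or Jericho Forum design itself.

import hashlib, hmac, os

CORE_ID = os.urandom(32)        # the individual's core identifier
PERSONA_KEY = os.urandom(32)    # held by the individual or their agent

def persona_id(core_id: bytes, key: bytes, context: str) -> str:
    # HMAC is one-way: without the key, a persona reveals nothing
    # about the core ID, and personas cannot be correlated.
    return hmac.new(key, core_id + context.encode(), hashlib.sha256).hexdigest()

work = persona_id(CORE_ID, PERSONA_KEY, "work")
shopping = persona_id(CORE_ID, PERSONA_KEY, "ecommerce")
print(work != shopping)         # True: distinct, unlinkable personas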

Hardjono: For a global infrastructure for core identities to develop, we definitely need collaboration between the governments of the world and the private sector. Looking at this problem, we searched back in history for an analogy, and the best one we could find was the rollout of the DNS infrastructure and IP address assignment.

It’s not perfect and it has its critics, but the idea that you could split blocks of IP addresses and have them sold and resold by private industry really has allowed the Internet to scale. It is hitting limitations, but of course IPv6 is on the horizon; indeed, it’s here today.

So we were thinking along the same philosophy, where core identifiers could be arranged in blocks and handed out to the private sector, which could assign, sell, or manage them on behalf of people, whether Internet savvy or not (such as my mom). So we have a number of challenges in that phase.
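To make the DNS/IP analogy concrete, here is a toy sketch of a registry delegating identifier blocks to registrars, the way IP address blocks are delegated. It is purely illustrative; the block sizes and names are assumptions, not a proposed scheme.

class IdentifierRegistry:
    def __init__(self, block_size=1000):
        self.block_size = block_size
        self.next_start = 0
        self.allocations = {}                 # registrar -> (start, end)

    def allocate_block(self, registrar: str):
        # Hand the next contiguous block of core identifiers to a registrar,
        # which can then assign or manage them on behalf of individuals.
        start = self.next_start
        self.next_start += self.block_size
        self.allocations[registrar] = (start, start + self.block_size - 1)
        return self.allocations[registrar]

root = IdentifierRegistry()
print(root.allocate_block("registrar-a"))     # (0, 999)
print(root.allocate_block("registrar-b"))     # (1000, 1999)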

Gardner: Does this relate to the MIT Model Trust Framework System Rules project?

Greenwood: The Model Trust Framework System Rules project that we are pursuing at MIT is a very important aspect of what we’re talking about. Thomas and I have talked somewhat about the technical and practical aspects of core identifiers and core identities. There is a very important business and legal layer in there as well.

So these trust framework system rules are ways to begin to approach the complete interconnected set of dimensions necessary to roll out these kinds of schemes at the legal, business, and technical layers.

They come from very successful examples in the past, where organizations have federated identity using more traditional approaches such as SAML. Some examples of those trust framework system rules at the business, legal, and technical levels are available.

Right now it’s at CIVICS.com and soon, when we have our MIT model out under a Creative Commons approach, we’ll take a lot of the best of what’s come before, codified in a rational way. Business, legal, and technical rules can really be aligned in a more granular way to fit well, producing a model that we think will be very helpful for the identity solutions of today that are looking to federate according to NSTIC and similar models. It would be absolutely applicable to how the core identity and persona architecture and infrastructure that Thomas, I, and the Jericho Forum are postulating could come about.

Hardjono: Looking back 10-15 years, we engineers came up with all sorts of solutions and standardized them. What’s really missing is the business models, business cases, and of course the legal side.

How can a business make revenue out of managing identity-related aspects (management of attributes and so on), and how can it do so in a manner that doesn’t violate the user’s privacy? It still has to be user-centric, in the sense that the user needs to give consent and can withdraw consent. And we’re trying to develop an infrastructure where everybody is protected.

Gardner: The Open Group is a global organization focused on the collaborative process behind the establishment of standards. It sounds like these are important aspects that you can bring to your audience, to start the collaboration and discussion that could lead to fuller implementation. Is that the plan, and is that what we’re expecting to hear more of at the conference next month?

Hietala: It is the plan, and we do get a good mix at our conferences and events of folks from all over the world, from government organizations and large enterprises as well. So it tends to be a good mixing of thoughts and ideas from around the globe on whatever topic we’re talking about — in this case identity and cybersecurity.

At the Washington, D.C. conference, we have a mix of discussions. The kick-off speaker is a fellow by the name of Joel Brenner, who has written a book, America the Vulnerable, which I would recommend. He was inside the National Security Agency (NSA), and he’s been involved in fighting a lot of the cyber attacks. He has really good insight into what’s actually happening on the threat side and in defending against it. So that will be a very interesting discussion. [Read an interview with Joel Brenner.]

Then, on Monday, we have conference presentations in the afternoon looking at cybersecurity and identity, including Thomas and Dazza presenting on some of the projects that they’ve mentioned.

Cartoon videos

Then, we’re also bringing to that event, for the first time, a series of cartoon videos that were produced for the Jericho Forum. They describe a lot of the commandments that Dazza mentioned in a more approachable way, so they’re hopefully understandable to laymen and to folks without as much familiarity with all the identity mechanisms that are out there. That’s what we are hoping to do.

Gardner: Perhaps we could now better explain what NSTIC is and does?

Greenwood: The best person to speak about NSTIC in the United States right now is probably President Barack Obama, because he is the person who signed the policy. Our president and the administration have taken a needed and, I think, very well-conceived approach to getting industry involved with other stakeholders in creating the architecture that’s going to be needed for identity in the United States and as a model for the world, and also how to interact with other models.

Jeremy Grant is in charge of the program office, and he is very accessible. So if people want more information, they can find Jeremy online easily at nist.gov/nstic, and nstic.us also has more information.

In general, NSTIC is a strategy document and a roadmap for how a national ecosystem can emerge, comprising a governing body. They’re beginning to put that together this very summer, with 13 different stakeholder groups, each of which will self-organize and elect or appoint a representative: industry, government, state and local government, academia, privacy groups, individuals (which is terrific), and so forth.

That governance group will come up with more of the details in terms of what the accreditation and trust marks look like, and the types of technologies and approaches that will be favored, according to the general principles (which I hope everyone reads) within the NSTIC document.

At a lower level, Congress has appropriated more than $10 million to work with the White House on a number of pilots, each under a million and a half dollars for a year or two, where individual proofs of concept, technologies, or approaches to trust frameworks will be piloted and put out where they can be used in the market.

In general, by this time two months from now, we’ll know a lot more about the governing body, once it’s been convened, and about the pilots, once those contracts have been awarded and grants have been concluded. What we can say right now is that the way it’s going to come together is with trust framework system rules, the exact same type of entity that we are doing a model of, to help facilitate people’s understanding, with templates and well-thought-through structures that they can pull down and, in turn, use as a starting point.

Circle of trust

So it will happen industry by industry and sector by sector, but also what we call circle of trust by circle of trust. Folks will come up with their own specific rules to define exactly how they will meet these requirements. They can get a trust mark, be interoperable with other trust-framework-consistent rules, and eventually you’ll get a clustering of those, which will lead to an ecosystem.

The ecosystem is not one-size-fits-all. It’s a lot of systems that interoperate in a healthy way and can adapt and evolve over time. A lot more, as I said, is available on nstic.us and nist.gov/nstic, and it’s an exciting time. It’s certainly the best government document I have ever read, and I’ll be very excited to see how it comes out.

Gardner: What’s coming down the pike that’s going to make this yet more important?

Hietala: I would turn to the threat and attacks side of the discussion and say that, unfortunately, we’re likely to see more headlines of organizations being breached, of identities being lost, stolen, and compromised. I think it’s going to be more bad news that’s going to drive this discussion forward. That’s my take based on working in the industry and where it’s at right now.

Hardjono: I mentioned user consent going forward. I think this is increasingly becoming an important, if small, step for the industry to address and resolve, through efforts like the User-Managed Access (UMA) working group within the Kantara Initiative.

Folks are trying to solve the problem of how to share resources: how can I legitimately share not only my photos on Flickr, but also allow my bank to share some of my attributes with its partners, with my consent? It’s a small step, but a pretty important one.

Greenwood: Keep your eyes on UMA out of Kantara. Keep looking at OASIS, as well, and the work that’s coming with SAML and some of the Model Trust Framework System Rules.

Most important thing

In my mind, the most strategically important thing that will happen is OpenID Connect. They’re just finalizing the standard now, and there are some reference implementations. I’m very excited to work with MIT and with our friends and partners at the MITRE Corporation and elsewhere.

That’s going to allow massive numbers of individuals to have more ready access to identities that they can reuse in a great number of places. Right now, it’s a little bit catch-as-catch-can: you’ve got your Google ID, your Facebook ID, and a few others, and it’s not something that a lot of industries or others are really quite willing to accept or understand yet.

They’ve done a complete rethink of that, using the best lessons learned from SAML and a bunch of other federated technology approaches. I believe this one is going to change how identity is done and what’s possible.

They’ve done such a great job on it, I might add. It fits hand in glove with the types of Model Trust Framework System Rules approaches, with a layer of UMA on top, and it’s completely consistent with the architecture we’ve described: a future infrastructure where people would have a core ID and more than one persona, which could be expressed as OpenID Connect credentials that are reusable by design across great numbers of relying parties, getting us where we want to be with single sign-on.

So it’s an exciting time. If there’s one thing you have to look at, I’d say do a Google search and get updates on OpenID Connect, and watch how it evolves.
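For a feel of what a relying party checks when it accepts a reusable OpenID Connect credential, here is a minimal sketch of ID token claim validation, operating on already-decoded claims. A real deployment would first verify the token’s signature with a JWT library; the claim names (iss, aud, sub, exp) are standard, but the surrounding function is an illustrative assumption.

import time

def validate_id_token(claims: dict, expected_issuer: str, client_id: str) -> bool:
    aud = claims.get("aud")
    audiences = [aud] if isinstance(aud, str) else (aud or [])
    return (claims.get("iss") == expected_issuer      # issued by the expected IdP
            and client_id in audiences                # intended for this client
            and claims.get("exp", 0) > time.time())   # not expired

claims = {"iss": "https://idp.example.com", "aud": "my-client",
          "sub": "user-123", "exp": time.time() + 3600}
print(validate_id_token(claims, "https://idp.example.com", "my-client"))  # True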

************

For more information on The Open Group’s upcoming conference in Washington, D.C., please visit: http://www.opengroup.org/dc2012

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

Filed under Conference, Cybersecurity

Learn How Enterprise Architects Can Better Relate TOGAF and DoDAF to Bring Best IT Practices to Defense Contracts

By Dana Gardner, Interarbor Solutions

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on Enterprise Architecture (EA), enterprise transformation, and securing global supply chains.

We’re joined by one of the main speakers at the July 16 conference, Chris Armstrong, President of Armstrong Process Group, to examine how governments in particular are using various frameworks to improve their architectural planning and IT implementations.

Armstrong is an internationally recognized thought leader in EA, formal modeling, process improvement, systems and software engineering, requirements management, and iterative and agile development.

He represents the Armstrong Process Group at The Open Group, the Object Management Group (OMG), and the Eclipse Foundation. Armstrong also co-chairs The Open Group Architecture Framework (TOGAF®) and Model Driven Architecture (MDA) process modeling efforts, as well as the TOGAF 9 Tool Certification program, all at The Open Group.

At the conference, Armstrong will examine the use of TOGAF 9 to deliver Department of Defense (DoD) Architecture Framework or DoDAF 2 capabilities. And in doing so, we’ll discuss how to use TOGAF architecture development methods to drive the development and use of DoDAF 2 architectures for delivering new mission and program capabilities. His presentation will also be Livestreamed free from The Open Group Conference. The full podcast can be found here.

Here are some excerpts:

Gardner: TOGAF and DoDAF, where have they been? Where are they going? And why do they need to relate to one another more these days?

Armstrong: TOGAF [forms] a set of essential components for establishing and operating an EA capability within an organization. And it contains three of the four key components of any EA.

First, the method by which EA work is done, including how it touches other life cycles within the organization and how it’s governed and managed. Then, there’s a skills framework that talks about the skills and experiences that the individual practitioners must have in order to participate in the EA work. Then, there’s a taxonomy framework that describes the semantics and form of the deliverables and the knowledge that the EA function is trying to manage.

One-stop shop

One of the great things TOGAF has going for it is that, on the one hand, it’s designed to be a one-stop shop, providing everything that an end-user organization might need to establish an EA practice. But it does acknowledge that there are other components, predominantly in the various taxonomies and reference models, that end-user organizations may want to substitute or augment.

It turns out that TOGAF has a nice synergy with other taxonomies, such as DoDAF, as it provides the backdrop for how to establish the overall EA capability, how to exploit it, and put it into practice to deliver new business capabilities.

Frameworks such as DoDAF focus predominantly on the taxonomy: namely, the kinds of things we’re keeping track of, the semantic relationships, and perhaps some formalism on how they’re structured. There’s a little bit of method guidance within DoDAF, but not a lot. So we see the marriage of the two as a natural synergy.

Gardner: So their complementary natures allow for more particulars on the defense side, while TOGAF overall addresses the implementation method and skills for how this works best. Is this something new, or are we just learning to do it better?

Armstrong: I think we’re seeing the state of the industry advance, with governments, both in the United States and abroad, looking to embrace global industry standards for EA work. Historically, particularly in the U.S. government, a lot of defense agencies and their contractors have focused on a minimalistic compliance perspective with respect to DoDAF: in order to get paid for this work, or be authorized to do it, one of the requirements is that we must produce DoDAF deliverables.

People are doing that because they’ve been commanded to do it. We’re seeing a new level of awareness. There’s some synergy with what’s going on in the DoDAF space, particularly as it relates to migrating from DoDAF 1.5 to DoDAF 2.

Agencies need some method and technique guidance on exactly how to come up with the particular viewpoints that are going to be most relevant, and how to exploit what DoDAF has to offer in a way that advances the business, as opposed to solely being conforming or compliant.

Gardner: Have there been hurdles, perhaps cultural ones, because of the landscape of these different companies and their inability to have that boundaryless interaction? What’s been the hurdle? What’s prevented this from being more beneficial at that higher level?

Armstrong: Probably overall organizational and practitioner maturity. There certainly are a lot of very skilled organizations and individuals out there. However, we’re trying to get them all lined up with the best practice for establishing an EA capability and then operating it and using it to business strategic advantage, something that TOGAF defines very nicely and into which the DoDAF taxonomy and work products tie very effectively.

Gardner: Help me understand, Chris. Is the discussion you’ll be delivering on July 16 primarily for TOGAF people to better understand how to implement vis-à-vis DoDAF? Is it the other direction? Or is it a two-way street?

Two-way street

Armstrong: It’s a two-way street. One of the big things that particularly the DoD space has going for it is that there’s quite a bit of maturity in the notion of formally specified models, as DoDAF describes them, and the various views that DoDAF includes.

We’d like to think that, because of that maturity, the general TOGAF community can glean a lot of benefit from the experience they’ve had: what it takes to capture these architecture descriptions, and some of the finer points about managing those assets. People within the general TOGAF community are always looking for case studies and best practices that demonstrate that what other people are doing is something they can do as well.

We also think that the federal agency community also has a lot to glean from this. Again, we’re trying to get some convergence on standard methods and techniques, so that they can more easily have resources join their teams and immediately be productive and add value to their projects, because they’re all based on a standard EA method and framework.

One of the major changes between DoDAF 1 and DoDAF 2 is the focusing on fitness for purpose. In the past, a lot of organizations felt that it was their obligation to describe all architecture viewpoints that DoDAF suggests without necessarily taking a step back and saying, “Why would I want to do that?”

So it’s trying to make the agencies think more critically about how they can be the most agile; namely, what is the least amount of architecture description we can invest in that has the greatest possible value? Organizations now have the discretion to determine what fitness for purpose is.

Then, there’s the whole idea in DoDAF 2 that the architecture is supposed to be capability-driven. That is, you’re not just describing architecture because you have some tools that happen to be DoDAF-conforming; there is a new business capability that you’re trying to inject into the organization through capability-based transformation, which is going to involve people, process, and tools.

One of the nice things that TOGAF’s architecture development method has to offer is a well-defined set of activities and best practices for deciding how you determine what those capabilities are and how you engage your stakeholders to really help collect the requirements for what fitness for purpose means.

Gardner: As with the private sector, it seems that everyone needs to move faster. I see you’ve been working on agile development. With organizations like the OMG and Eclipse, does doing this well — bringing the best of TOGAF and DoDAF together — enable greater agility and speed when it comes to completing a project?

Different perspectives

Armstrong: Absolutely. When you talk about what agile means to the general community, you may get a lot of different perspectives and a lot of different answers. Ultimately, we at APG feel that agility is fundamentally about how well your organization responds to change.

If you take a step back, that’s really what we think is the fundamental litmus test of the goodness of an architecture. Whether it’s an EA, a segment architecture, or a system architecture, the architects need to think carefully and considerately about what things are almost certainly going to happen in the near future. They need to anticipate those changes and work them into the architecture in such a way that, when the changes occur, the architecture can respond in a timely, relevant fashion.

We feel that, while a lot of people think agile is just a pseudonym for not planning, not making commitments, and going around in circles forever, we call that chaos, another five-letter word. Agile, in our experience, really demands rigor and discipline.

Of course, a lot of the culture of the DoD brings that rigor and discipline to it, and so does the experience that community has had, in particular, with formally modeling architecture descriptions. That sets those government agencies up to act with agility much more than others.

Gardner: Do you know of anyone that has done it successfully or is in the process? Even if you can’t name them, perhaps you can describe how something like this works?

Armstrong: First, there has been some great work done by the MITRE organization through their collaboration with The Open Group. They’ve written a white paper that talks about which DoDAF deliverables are likely to be useful in specific architecture development method activities. We’re going to be using that as a foundation for the talk we’re giving at the conference in July.

The biggest thing that TOGAF has to offer is that a nascent organization that’s jumping into the DoDAF space may just look at it from an initial compliance perspective, saying, “We have to create an AV-1, and an OV-1, and a SvcV-5,” and so on.

Providing guidance

TOGAF will provide the guidance for what is EA. Why should I care? What kind of people do I need within my organization? What kind of skills do they need? What kind of professional certification might be appropriate to get all of the participants up on the same page, so that when we’re talking about EA, we’re all using the same language?

TOGAF also, of course, has a great emphasis on architecture governance and suggests that immediately, when you’re first propping up your EA capability, you need to put into your plan how you’re going to operate and maintain these architectural assets, once they’ve been produced, so that you can exploit them in some reuse strategy moving forward.

So, the preliminary phase of the TOGAF architecture development method provides those agencies best practices on how to get going with EA, including exactly how an organization is going to exploit what the DoDAF taxonomy framework has to offer.

Then, once an organization or a contractor is charged with doing some DoDAF work, because of a new program or a new capability, they would immediately begin executing Phase A: Architecture Vision, and follow the best practices that TOGAF has to offer.

Just what is that capability that we’re trying to describe? Who are the key stakeholders, and what are their concerns? What are their business objectives and requirements? What constraints are we going to be placed under?

Part of that is to create a high-level description of the current, or baseline, architecture and then the future target state, so that all parties have at least a coarse-grained idea of where we’re at right now and what our vision is of where we want to be.

Because this is really a high-level requirements and scoping set of activities, we expect it to be somewhat ambiguous. As the project unfolds, they’re going to discover details that may cause some adjustment to that final target.
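As a sketch of how a team might capture those Phase A outputs (the capability, stakeholders and their concerns, objectives, constraints, and the coarse baseline and target descriptions), here is a minimal Python data model. The structure is an illustrative assumption, not a schema prescribed by TOGAF or DoDAF.

from dataclasses import dataclass, field

@dataclass
class ArchitectureVision:
    capability: str
    stakeholders: dict = field(default_factory=dict)  # name -> list of concerns
    objectives: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    baseline: str = ""    # coarse-grained current state
    target: str = ""      # coarse-grained future state

vision = ArchitectureVision(
    capability="Common operating picture",
    stakeholders={"Program office": ["cost", "schedule"]},
    objectives=["Share situational data across agencies"],
    constraints=["Must conform to DoDAF 2 viewpoints"],
    baseline="Stovepiped legacy systems",
    target="Service-oriented data sharing",
)
print(vision.capability)

As the interview notes, these entries start coarse and ambiguous and get refined as the project unfolds.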

Internalize best practices

So, we’re seeing defense contractors internalize some of these best practices and really be prepared for the future, so that they can win the greatest amount of business and respond as rapidly and appropriately as possible, as well as exploit these best practices to effect greater business transformation across their enterprises.

Gardner: We mentioned that your discussion of these issues on July 16 will be Livestreamed for free, but you’re also doing some pre-conference and post-conference activities: webinars and other things. Tell us how this is all coming together and, for those who are interested, how they can take advantage of all of these.

Armstrong: We’re certainly very privileged that The Open Group has offered us this opportunity to share this content with the community. On Monday, June 25, we’ll be delivering a webinar that focuses on architecture change management in the DoDAF space, particularly how an organization migrates from DoDAF 1 to DoDAF 2.

I’ll be joined by a couple of other people from APG, David Rice, one of our Principal Enterprise Architects who is a member of the DoDAF 2 Working Group, as well as J.D. Baker, who is the Co-chair of the OMG’s Analysis and Design Taskforce, and a member of the Unified Profile for DoDAF and MODAF (UPDM) work group, a specification from the OMG.

We’ll be talking about things that organizations need to think about as they migrate from DoDAF 1 to DoDAF 2. We’ll be focusing on some of the key points of the DoDAF 2 meta-model, namely the rearrangement of the architecture viewpoints and the architecture partitions and how that maps from the classical DoDAF 1.5 viewpoint, as well as focusing on this notion of capability-driven architectures and fitness for purpose.

We also have the great privilege, after the conference, of delivering a follow-up webinar on implementation methods and techniques around advanced DoDAF architectures. In particular, we’re going to take a closer look at something some people may be interested in: tool interoperability, and how the DoDAF meta-model supports it through what’s called the Physical Exchange Specification (PES).

We’ll also take a closer look at the UPDM work I just mentioned, focusing on how we can use formal modeling languages based on OMG standards, such as UML, SysML, BPMN, and SoaML, to do very formal architectural modeling.

One of the big challenges with EA is that, at the end of the day, EA comes up with a set of policies, principles, assets, and best practices that talk about how the organization needs to operate and realize new solutions within that new framework. If EA doesn’t have a hand-off to the delivery method, namely systems engineering and solution delivery, then none of this architecture work makes a bit of difference.

Driving the realization

We’re going to be talking a little bit about how DoDAF-based architecture descriptions and TOGAF would drive the realization of those capabilities through traditional systems engineering and software development methods.

************

For more information on The Open Group’s upcoming conference in Washington, D.C., please visit: http://www.opengroup.org/dc2012

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

Filed under Conference, Enterprise Architecture, Enterprise Transformation, TOGAF®

Corporate Data, Supply Chains Remain Vulnerable to Cyber Crime Attacks, Says Open Group Conference Speaker

By Dana Gardner, Interarbor Solutions 

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on how security impacts the Enterprise Architecture, enterprise transformation, and global supply chain activities in organizations, both large and small.

We’re now joined on the security front by one of the main speakers at the conference, Joel Brenner, the author of America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare.

Joel is a former Senior Counsel at the National Security Agency (NSA), where he advised on legal and policy issues relating to network security. Mr. Brenner currently practices law in Washington at Cooley LLP, specializing in cyber security. Registration remains open for The Open Group Conference in Washington, DC beginning July 16.

Previously, he served as the National Counterintelligence Executive in the Office of the Director of National Intelligence, and as the NSA’s Inspector General. He is a graduate of University of Wisconsin–Madison, the London School of Economics, and Harvard Law School. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Your book came out last September, and it affirmed the notion that the United States, or at least open Western cultures and societies, are particularly vulnerable to being infiltrated, if you will, by cybercrime, espionage, and dirty corporate tricks.

Why are we particularly vulnerable, when we should be most adept at using cyber activities to our advantage?

Brenner: Let’s make a distinction here between the political-military espionage that’s gone on since pre-biblical times and the economic espionage that’s going on now, which in many cases has nothing at all to do with military, defense, or political issues.

The other stuff has been going on forever, but what we’ve seen in the last 15 or so years is a relentless espionage attack on private companies for reasons having nothing to do with political-military affairs or defense.

So the countries that are adept at cyber, but whose economies are relatively undeveloped compared to ours, are at a big advantage, because they’re not very lucrative targets for this kind of thing, and we are. Russia, for example, is paradoxical. While it has one of the most educated populations in the world and is deeply cultured, it has never been able to produce a commercially viable computer chip.

Not entrepreneurial

We’re not going to Russia to steal advanced technology. We’re not going to China to steal advanced technology. They’re good at engineering and they’re good at production, but so far, they have not been good at making themselves into an entrepreneurial culture.

That’s just one very cynical reason why we don’t do economic espionage against the people who are mainly attacking us, which are China, Russia, and Iran. I say attack in the espionage sense.

The other reason is that you’re stealing intellectual property when you’re doing economic espionage. It’s a bedrock proposition of American economics and political strategy around the world to defend the legal regime that protects intellectual property. So we don’t do that kind of espionage. Political-military stuff we’re real good at.

Gardner: Wouldn’t our defense rise to the occasion? Why hasn’t it?

Brenner: The answer has a lot to do with the nature of the Internet and its history. The Internet, as some of your listeners will know, was developed starting in the late ’60s by the predecessor of the Defense Advanced Research Projects Agency (DARPA), a brilliant operation which produced a lot of cool science over the years.

It was developed for a very limited purpose, to allow the collaboration of geographically dispersed scientists who worked under contract in various universities with the Defense Department’s own scientists. It was bringing dispersed brainpower to bear.

It was a brilliant idea, and the people who invented this, if you talk to them today, lament the fact that they didn’t build a security layer into it. They thought about it. But it wasn’t going to be used for anything else but this limited purpose in a trusted environment, so why go to the expense and aggravation of building a lot of security into it?

Until 1992, it was against the law to use the Internet for commercial purposes. Dana, this is just amazing to realize. That’s 20 years ago, a twinkling of an eye in the history of a country’s commerce. That means that 20 years ago, nobody was doing anything commercial on the Internet. Ten years ago, what were you doing on the Internet, Dana? Buying a book for the first time or something like that? That’s what I was doing, and a newspaper.

In the intervening decade, we’ve turned this sort of Swiss-cheese, cool network, which has brought us dramatic productivity and pleasure, into the backbone of virtually everything we do.

International finance, personal finance, command and control of military, manufacturing controls, the controls in our critical infrastructure, all of our communications, virtually all of our activities are either on the Internet or exposed to the Internet. And it’s the same Internet that was Swiss cheese 20 years ago and it’s Swiss cheese now. It’s easy to spoof identities on it.

So this gives attack a natural and profound advantage over defense on this network. That’s why we’re in the predicament we’re in.

Both directions

Gardner: Let’s also look at this notion of supply chain, because corporations aren’t just islands unto themselves. A business is really a compendium of other businesses, products, services, best practices, methodologies, and intellectual property that come together to create a value add of some kind. It’s not just attacking the end point, where that value is extended into the market. It’s perhaps attacking anywhere along that value chain.

What are the implications for this notion of the ecosystem vulnerability versus the enterprise vulnerability?

Brenner: Well, the supply chain problem really is rather daunting for many businesses, because supply chains are global now, and finished products have a tremendous number of elements. For example, this software: where was it written? Maybe it was written in Russia, or maybe somewhere in Ohio or in Nevada, but by whom? We don’t know.

There are two fundamentally different issues for supply chains, depending on the company. One is counterfeiting. That’s a bad problem. Somebody is trying to substitute shoddy goods under your name or the name of somebody you thought you could trust. That degrades performance and presents serious liability problems as a result.

The other problem is the intentional hooking, or compromising, of software or chips to do things that they’re not meant to do, such as allow backdoors and so on in systems, so that they can be attacked later. That’s a big problem for military and for the intelligence services all around the world.

The reason we have the problem is that nobody knows how to vet a computer chip or software to see that it won’t do these squirrelly things. We can test that stuff to make sure it will do what it’s supposed to do, but nobody knows how to test a computer chip or two million lines of software reliably to be sure that it won’t also do certain things we don’t want it to do.

You can put it in a sandbox or a virtual environment and you can test it for a lot of things, but you can’t test it for everything. It’s just impossible. In hardware and software, it is the strategic supply chain problem now. That’s why we have it.

If you have a worldwide supply chain, you have to have a worldwide supply chain management system. This is hard, and it means getting very specific. It includes not only managing a production process, but also the shipment process. A lot of squirrelly things happen on loading docks, and you have to have a way, not to bring perfect security to that (that’s impossible), but to make it much harder to attack your supply chain.
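One narrow, practical control in that direction is verifying that delivered software matches a signed manifest of expected hashes. As Brenner notes, this cannot prove the absence of hidden behavior; it only detects substitution after the manifest was made. A minimal sketch, with illustrative names:

import hashlib

def sha256_file(path: str) -> str:
    # Hash a file in chunks so large binaries don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict) -> list:
    # manifest: path -> expected SHA-256 hex digest.
    # Returns the paths whose current contents differ from the manifest.
    return [path for path, expected in manifest.items()
            if sha256_file(path) != expected]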

Notion of cost

Gardner: So many organizations today, given the economy and lagging growth, have looked to the lowest-cost procedures, processes, suppliers, and materials, and aren’t factoring in the risk and associated cost of these security issues. Do people need to reevaluate cost in the supply chain by factoring in the true risks we’re discussing?

Brenner: Yes, but of course, when the CEO and the CFO get together and start to figure this stuff out, they look at the return on investment (ROI) of additional security. It’s very hard to be quantitatively persuasive about that. That’s one reason why you may see some kinds of production coming back into the United States. How one evaluates that risk depends on the business you’re in and how much risk you can tolerate.

This is a problem not just for really sensitive hardware and software, special kinds of operations, or sensitive activities, but also for garden-variety things.

Gardner: We’ve seen other aspects of commerce in which we can’t lock down the process. We can’t know all the information, but what we can do is offer deterrence, perhaps in the form of legal recourse, if something goes wrong, if, in fact, decisions were made that violated contracts or ran against certain laws or trade practices.

Brenner: For a couple of years now, I’ve struggled with the question of why liability hasn’t played a bigger role in bringing more cybersecurity to our environment, and there are a number of reasons.

We’ve created liability for the loss of personal information, so you can quantify that risk. You have a statute that says there’s a minimum damage of $500 or $1,000 per person whose identifiable information you lose. You add up the number of files in the breach, add what the lawyers and the forensic people cost, and you come up with a calculation of what these things cost.
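Brenner’s arithmetic is easy to make concrete. Here is a back-of-the-envelope version of that breach-cost calculation; the legal and forensic figures are placeholder assumptions, not estimates.

def breach_cost(records: int, per_record: float,
                legal_fees: float, forensics: float) -> float:
    # Statutory per-record damages plus the response costs Brenner lists.
    return records * per_record + legal_fees + forensics

# 100,000 records at the $500 statutory minimum he mentions:
print(f"${breach_cost(100_000, 500, 250_000, 150_000):,.0f}")  # $50,400,000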

But when it comes to the loss of intellectual property, for a company that depends on that intellectual property, you have just a business risk, not a legal risk. You don’t have much of a legal risk at this point.

You may have a shareholder suit issue, but there hasn’t been an awful lot of that kind of litigation so far. So I don’t know. I’m not sure that’s quite the question you were asking me, Dana.

Gardner: My follow on to that was going to be where would you go to sue across borders anyway? Is there an über-regulatory or legal structure across borders to target things like supply chain, counterfeit, cyber espionage, or mistreatment of business practice?

Depends on the borders

Brenner: It depends on the borders you’re talking about. The Europeans have a highly developed legal and liability system. You can bring actions in European courts. So it depends what borders you mean.

If you’re talking about the border of Russia, you have very different legal issues. China has different legal issues, different from Russia, as well from Iran. There are an increasing number of cases where actions are being brought in China successfully for breaches of intellectual property rights. But you wouldn’t say that was the case in Nigeria. You wouldn’t say that was the case in a number of other countries where we’ve had a lot of cybercrime originating from.

So there’s no one solution here. You have to think in terms of all kinds of layered defenses. There are legal actions you can take sometimes, but the fundamental problem we’re dealing with is this inherently porous, Swiss-cheesy system. In the long run, we’re going to have to begin thinking about the gradual reengineering of the way the Internet works, or else this basic dynamic, in which lawbreakers have the advantage over law-abiding people, is not going to go away.

Think about what’s happened in cyber defenses over the last 10 years and how little they’ve evolved — even 20 years for that matter. They almost all require us to know the attack mode or the sequence of code in order to catch it. And we get better at that, but that’s a leapfrog business. That’s fundamentally the way we do it.

Whether we do it at the perimeter, inside, or even outside before the attack gets to the perimeter, that’s what we’re looking for — stuff we’ve already seen. That’s a very poor strategy for doing security, but that’s where we are. It hasn’t changed much in quite a long time and it’s probably not going to.

Gardner: Why is that the case? Is this not a perfect opportunity for a business-government partnership to come together and re-architect the Internet at least for certain types of business activities, permit a two-tier approach, and add different levels of security into that? Why hasn’t it gone anywhere?

Brenner: What I think you’re saying is different tiers or segments. We’re talking about the Balkanization of the Internet. I think that’s going to happen as more companies demand a higher level of protection, but this again is a cost-benefit analysis. You’re going to see even more Balkanization of the Internet as you see countries like Russia and China, with some success, imposing more controls over what can be said and done on the Internet. That’s not going to be acceptable to us.

Gardner: We’ve seen a lot with Cloud Computing and more businesses starting to go to third-party Cloud providers for their applications, services, data storage, even integration to other business services and so forth.

More secure

If there’s a limited number, or at least a finite number, of Cloud providers, and they can institute the proper security and take advantage of certain networks within networks, then wouldn’t that hypothetically make a Cloud approach more secure and better managed than the every-man-for-himself approach we have now in enterprises and small to medium-sized businesses (SMBs)?

Brenner: I think the short answer is yes. SMBs will achieve greater security by basically contracting it out to what are called Cloud providers. That’s because managing the patching of vulnerabilities, encryption, and other aspects is beyond what most small businesses and many medium-sized businesses can do, are willing to do, or can do cost-effectively.

For big businesses in the Cloud, it just depends on how good the big businesses’ own management of IT is as to whether it’s an improvement or not. But there are some problems with the Cloud.

People talk about security, but there are different aspects of it. You and I have been talking just now about security meaning the ability to prevent somebody from stealing or corrupting your information. But availability is another aspect of security. By definition, putting everything in one remote place reduces robustness, because if you lose that connection, you lose everything.

Consequently, it seems to me that backup issues are really critical for people who are going to the Cloud. Are you going to rely on your Cloud provider to provide the backup? Are you going to rely on the Cloud provider to provide all of your backup? Are you going to go to a second Cloud provider? Are you going to keep some information copied in-house?

What would happen if your information is good, but you can’t get to it? That means you can’t get to anything anymore. So that’s another aspect of security people need to think through.
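Those backup questions can be audited mechanically. Here is a small sketch that flags datasets with fewer than two independent copies; the policy format is an illustrative assumption.

def single_points_of_failure(backup_locations: dict) -> list:
    # backup_locations: dataset name -> list of places a copy is held.
    return [name for name, places in backup_locations.items()
            if len(set(places)) < 2]

policy = {
    "customer-db": ["cloud-provider-a", "in-house-tape"],
    "financials": ["cloud-provider-a"],   # only one copy
}
print(single_points_of_failure(policy))   # ['financials']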

Gardner: How do you know you’re doing the right thing? How do you know that you’re protecting? How do you know that you’ve gone far enough to ameliorate the risk?

Brenner: This is really hard. If somebody steals your car tonight, Dana, you go out to the curb or the garage in the morning, and you know it’s not there. You know it’s been stolen.

When somebody steals your algorithms, your formulas, or your secret processes, you’ve still got them. You don’t know they’re gone, until three or four years later, when somebody in Central China or Siberia is opening a factory and selling stuff into your market that you thought you were going to be selling — and that’s your stuff. Then maybe you go back and realize, “Oh, that incident three or four years ago, maybe that’s when that happened, maybe that’s when I lost it.”

What’s going out

So you don’t even know necessarily when things have been stolen. Most companies don’t do a good job. They’re so busy trying to find out what’s coming into their network, they’re not looking at what’s going out.

That’s one reason the stuff is hard to measure. Another is that ROI is very tough. On the other hand, there are lots of things where business people have to make important judgments in the face of risks and opportunities they can’t quantify, but we do it.

We’re right to want data whenever we can get it, because data generally means we can make better decisions. But we make decisions about investment in R&D all the time without knowing what the ROI is going to be and we certainly don’t know what the return on a particular R&D expenditure is going to be. But we make that, because people are convinced that if they don’t make it, they’ll fall behind and they’ll be selling yesterday’s products tomorrow.

Why is it that we have a bias toward that kind of risk, when it comes to opportunity, but not when it comes to defense? I think we need to be candid about our own biases in that regard, but I don’t have a satisfactory answer to your question, and nobody else does either. This is one where we can’t quantify that answer.
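Coming back to Brenner's point about watching what's going out: even a crude check of outbound volumes yields some of the measurement he says is missing. Below is a rough Python sketch that scans a hypothetical flow log (the columns host, dest, and bytes_out are invented for the example) and flags hosts far above their historical baseline; real egress monitoring would sit on top of your actual netflow or proxy data.

```python
# Sketch: flag hosts sending far more data out than their usual baseline.
# The CSV flow-log format (host, dest, bytes_out) is hypothetical.
import csv
from collections import defaultdict

def flag_egress_anomalies(log_path: str, baselines: dict, factor: float = 10.0) -> dict:
    """Return hosts whose total outbound bytes exceed `factor` times
    their historical daily baseline. Hosts with no baseline are skipped."""
    totals = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):          # columns: host, dest, bytes_out
            totals[row["host"]] += int(row["bytes_out"])
    return {
        host: sent
        for host, sent in totals.items()
        if sent > factor * baselines.get(host, float("inf"))
    }

# Example: an engineering workstation normally sends ~50 MB/day.
suspects = flag_egress_anomalies("flows.csv", {"eng-ws-042": 50_000_000})
for host, sent in suspects.items():
    print(f"ALERT: {host} sent {sent} bytes out today -- investigate")
```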

Gardner: It sounds as if people need to have a healthy dose of paranoia to tide them over across these areas. Is that a fair assessment?

Brenner: Well, let’s say skepticism. People need to understand, without actually being paranoid, that life is not always what it seems. There are people who are trying to steal things from us all the time, and we need to protect ourselves.

In many companies, you don’t see a willingness to do that, but that varies a great deal from company to company. Things are not always what they seem. That is not how we Americans approach life. We are trusting folks, which is why this is a great country to do business in and live in. But we’re having our pockets picked and it’s time we understood that.

Gardner: And, as we pointed out earlier, this picking of pockets is not just on our block, but could be any of our suppliers, partners, or other players in our ecosystem. If their pockets get picked, it ends up being our problem too.

Brenner: Yeah, I described this risk in my book, "America the Vulnerable," at great length, and in my practice here at Cooley, I deal with this every day. I find myself, Dana, giving briefings to businesspeople that 5, 10, or 20 years ago, you wouldn't have given to anybody who wasn't a diplomat or a military person going outside the country. Now this kind of cyber pilferage is an aspect of daily commercial life, I'm sorry to say.

************

For more information on The Open Group’s upcoming conference in Washington, D.C., please visit: http://www.opengroup.org/dc2012

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

2 Comments

Filed under Cloud, Cybersecurity, Supply chain risk

Open Group Security Gurus Dissect the Cloud: Higher or Lower Risk?

By Dana Gardner, Interarbor Solutions

For some, any move to the Cloud — at least the public Cloud — means a higher risk for security.

For others, relying more on a public Cloud provider means better security. There’s more of a concentrated and comprehensive focus on security best practices that are perhaps better implemented and monitored centrally in the major public Clouds.

And so which is it? Is Cloud a positive or negative when it comes to cyber security? And what of hybrid models that combine public and private Cloud activities? How is security impacted in those cases?

We posed these and other questions to a panel of security experts at last week’s Open Group Conference in San Francisco to deeply examine how Cloud and security come together — for better or worse.

The panel: Jim Hietala, Vice President of Security for The Open Group; Stuart Boardman, Senior Business Consultant at KPN, where he co-leads the Enterprise Architecture Practice as well as the Cloud Computing Solutions Group; Dave Gilmour, an Associate at Metaplexity Associates and a Director at PreterLex Ltd.; and Mary Ann Mezzapelle, Strategist for Enterprise Services and Chief Technologist for Security Services at HP.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Is this notion of going outside the firewall fundamentally a good or bad thing when it comes to security?

Hietala: It can be either. Talking to security people in large companies, frequently what I hear is that with adoption of some of those services, their policy is either let’s try and block that until we get a grip on how to do it right, or let’s establish a policy that says we just don’t use certain kinds of Cloud services. Data I see says that that’s really a failed strategy. Adoption is happening whether they embrace it or not.

The real issue is how you do that in a planned, strategic way, as opposed to letting services like Dropbox and other kinds of Cloud Collaboration services just happen. So it’s really about getting some forethought around how do we do this the right way, picking the right services that meet your security objectives, and going from there.
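One lightweight way to put that forethought into practice is a scorecard that checks each candidate Cloud service against your stated security objectives before anyone adopts it. The Python sketch below is illustrative only; the criteria and weights are invented, and your own objectives would drive the real list.

```python
# Sketch: score candidate Cloud services against security objectives
# before approving them. Criteria and weights are illustrative only.
REQUIREMENTS = {
    "encrypts_at_rest": 3,
    "encrypts_in_transit": 3,
    "supports_sso": 2,
    "audit_logging": 2,
    "data_residency_controls": 1,
}

def score_service(capabilities: dict) -> tuple:
    """Return (score, max_score, missing) for a candidate service."""
    score = sum(w for c, w in REQUIREMENTS.items() if capabilities.get(c))
    missing = [c for c in REQUIREMENTS if not capabilities.get(c)]
    return score, sum(REQUIREMENTS.values()), missing

candidate = {  # a hypothetical file-sharing service
    "encrypts_at_rest": True,
    "encrypts_in_transit": True,
    "supports_sso": False,
    "audit_logging": True,
    "data_residency_controls": False,
}
score, max_score, missing = score_service(candidate)
print(f"Score {score}/{max_score}; gaps: {', '.join(missing)}")
```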

Gardner: Is Cloud Computing good or bad for security purposes?

Boardman: It’s simply a fact, and it’s something that we need to learn to live with.

What I’ve noticed through my own work is a lot of enterprise security policies were written before we had Cloud, but when we had private web applications that you might call Cloud these days, and the policies tend to be directed toward staff’s private use of the Cloud.

Then you run into problems, because you read something in policy — and if you interpret that as meaning Cloud, it means you can’t do it. And if you say it’s not Cloud, then you haven’t got any policy about it at all. Enterprises need to sit down and think, “What would it mean to us to make use of Cloud services and to ask as well, what are we likely to do with Cloud services?”

Gardner: Dave, is there an added impetus for Cloud providers to be somewhat more secure than enterprises?

Gilmour: It depends on the enterprise that they’re actually supplying to. If you’re in a heavily regulated industry, you have a different view of what levels of security you need and want, and therefore what you’re going to impose contractually on your Cloud supplier. That means that the different Cloud suppliers are going to have to attack different industries with different levels of security arrangements.

The problem there is that the penalty regimes are always going to say, "Well, if the security lapses, you're going to get off with two months of not paying" or something like that. That kind of attitude isn't going to work in this kind of security environment.

What I don’t understand is exactly how secure Cloud provision is going to be enabled and governed under tight regimes like that.

An opportunity

Gardner: Jim, we’ve seen in the public sector that governments are recognizing that Cloud models could be a benefit to them. They can reduce redundancy. They can control and standardize. They’re putting in place some definitions, implementation standards, and so forth. Is the vanguard of correct Cloud Computing with security in mind being managed by governments at this point?

Hietala: I’d say that they’re at the forefront. Some of these shared government services, where they stand up Cloud and make it available to lots of different departments in a government, have the ability to do what they want from a security standpoint, not relying on a public provider, and get it right from their perspective and meet their requirements. They then take that consistent service out to lots of departments that may not have had the resources to get IT security right, when they were doing it themselves. So I think you can make a case for that.

Gardner: Stuart, being involved with standards activities yourself, does moving to the Cloud provide a better environment for managing, maintaining, instilling, and improving on standards than enterprise by enterprise by enterprise? As I say, we’re looking at a larger pool and therefore that strikes me as possibly being a better place to invoke and manage standards.

Boardman: Dana, that’s a really good point, and I do agree. Also, in the security field, we have an advantage in the sense that there are quite a lot of standards out there to deal with interoperability, exchange of policy, exchange of credentials, which we can use. If we adopt those, then we’ve got a much better chance of getting those standards used widely in the Cloud world than in an individual enterprise, with an individual supplier, where it’s not negotiation, but “you use my API, and it looks like this.”

Having said that, there are a lot of well-known Cloud providers who do not currently support those standards and they need a strong commercial reason to do it. So it’s going to be a question of the balance. Will we get enough specific weight of people who are using it to force the others to come on board? And I have no idea what the answer to that is.
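To ground the point about exchanging credentials: many of the standards Boardman alludes to come down to signed tokens that any compliant party can verify. Here is a minimal Python sketch using the PyJWT library; the shared secret, issuer, and claims are placeholders, and a real federation would more likely verify against an identity provider's published public key (e.g., RS256) rather than a shared secret.

```python
# Sketch: verify a standards-based credential (a JWT) from a partner
# identity provider. Secret, issuer, and claims are hypothetical.
import jwt  # PyJWT

SHARED_SECRET = "demo-secret"  # placeholder; real systems would use the IdP's public key

def verify_credential(token: str) -> dict:
    """Verify signature and issuer, returning the claims if valid."""
    return jwt.decode(
        token,
        SHARED_SECRET,
        algorithms=["HS256"],
        issuer="https://idp.partner.example",  # hypothetical issuer
    )

# Issue a token locally just to show the round trip.
token = jwt.encode(
    {"sub": "alice@partner.example", "role": "auditor",
     "iss": "https://idp.partner.example"},
    SHARED_SECRET,
    algorithm="HS256",
)
print(verify_credential(token))
```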

Gardner: We’ve also seen that cooperation is an important aspect of security, knowing what’s going on on other people’s networks, being able to share information about what the threats are, remediation, working to move quickly and comprehensively when there are security issues across different networks.

Is that a case, Dave, where having a Cloud environment is a benefit? That is to say more sharing about what’s happening across networks for many companies that are clients or customers of a Cloud provider rather than perhaps spotty sharing when it comes to company by company?

Gilmour: There is something to be said for that, Dana. Part of the issue, though, is that companies are individually responsible for their data. They're individually responsible to a regulator or to their clients for their data. The question then becomes: as soon as you start to share a certain aspect of your security, you're de facto sharing the weaknesses as well as the strengths.

So it’s a two-edged sword. One of the problems we have is that until we mature a little bit more, we won’t be able to actually see which side is the sharpest.

Gardner: So our premise that Cloud is good and bad for security is holding up, but I'm wondering whether the same things that make you a risk in a private setting — poor adherence to standards, no good governance, too many technologies that are not being measured and controlled, not instilling good behavior in your employees and then enforcing it — wouldn't be the same either way. Is it really Cloud or not Cloud, or is it good security practices or not good security practices? Mary Ann?

No accountability

Mezzapelle: You’re right. It’s a little bit of that “garbage in, garbage out,” if you don’t have the basic things in place in your enterprise, which means the policies, the governance cycle, the audit, and the tracking, because it doesn’t matter if you don’t measure it and track it, and if there is no business accountability.

David said it — each individual company is responsible for its own security, but I would say that it’s the business owner that’s responsible for the security, because they’re the ones that ultimately have to answer that question for themselves in their own business environment: “Is it enough for what I have to get done? Is the agility more important than the flexibility in getting to some systems or the accessibility for other people, as it is with some of the ubiquitous computing?”

So you’re right. If it’s an ugly situation within your enterprise, it’s going to get worse when you do outsourcing, out-tasking, or anything else you want to call within the Cloud environment. One of the things that we say is that organizations not only need to know their technology, but they have to get better at relationship management, understanding who their partners are, and being able to negotiate and manage that effectively through a series of relationships, not just transactions.

Gardner: If data and sharing data are so important, it strikes me that the Cloud component is going to be part of that, especially if we're dealing with business processes across organizations: doing joins, comparing and contrasting data, crunching it and sharing it, and making data itself part of the business, a revenue-generating activity. All of that seems prominent and likely.

So to you, Stuart, what is the issue now with data in the Cloud? Is it good, bad, or just the same double-edged sword, and it just depends how you manage and do it?

Boardman: Dana, I don’t know whether we really want to be putting our data in the Cloud, so much as putting the access to our data into the Cloud. There are all kinds of issues you’re going to run up against, as soon as you start putting your source information out into the Cloud, not the least privacy and that kind of thing.

A bunch of APIs

What you can do is simply say, "What information do I have that might be interesting to people?" If it's a private Cloud in a large organization, how can I make that available to share elsewhere in the organization? Or maybe it's really going out into public. What a government, for example, can be thinking about is making information services available, not just the things you go and get from them that they already published. But "this is the information," a bunch of APIs if you like. I prefer to call them data services, and to make those available.

So, if you do it properly, you have a layer of security in front of your data. You’re not letting people come in and do joins across all your tables. You’re providing information. That does require you then to engage your users in what is it that they want and what they want to do. Maybe there are people out there who want to take a bit of your information and a bit of somebody else’s and mash it together, provide added value. That’s great. Let’s go for that and not try and answer every possible question in advance.
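A minimal sketch of the data-services idea Boardman describes: publish a narrow, pre-aggregated API in front of the source records instead of letting consumers join across your tables. The FastAPI framework and the in-memory data below are illustrative choices, not a prescription.

```python
# Sketch: a data service that publishes an aggregate view, keeping the
# underlying records behind a controlled interface. Data is a stand-in.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Source data stays private to the service; consumers never see raw rows.
_ORDERS = [
    {"region": "EMEA", "amount": 1200},
    {"region": "EMEA", "amount": 800},
    {"region": "APAC", "amount": 500},
]

@app.get("/data-services/orders/summary")
def order_summary(region: str):
    """Expose totals per region; never the raw records."""
    rows = [o for o in _ORDERS if o["region"] == region]
    if not rows:
        raise HTTPException(status_code=404, detail="unknown region")
    return {"region": region, "order_count": len(rows),
            "total_amount": sum(o["amount"] for o in rows)}
```

Saved as service.py, this could be run with `uvicorn service:app`; consumers get totals, never the tables behind them, which is the "layer of security in front of your data" in miniature.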

Gardner: Dave, do you agree with that, or do you think that there is a place in the Cloud for some data?

Gilmour: There’s definitely a place in the Cloud for some data. I get the impression that there is going to drive out of this something like the insurance industry, where you’ll have a secondary Cloud. You’ll have secondary providers who will provide to the front-end providers. They might do things like archiving and that sort of thing.

Now, if you have that situation where your contractual relationship is two steps away, then you have to be very confident and certain of your cloud partner, and it has to actually therefore encompass a very strong level of governance.

The other issue you have is that you’ve got then the intersection of your governance requirements with that of the cloud provider’s governance requirements. Therefore you have to have a really strongly — and I hate to use the word — architected set of interfaces, so that you can understand how that governance is actually going to operate.

Gardner: Wouldn’t data perhaps be safer in a cloud than if they have a poorly managed network?

Mezzapelle: There is data in the Cloud and there will continue to be data in the Cloud, whether you want it there or not. The best organizations are going to start understanding that they can't control it with that perimeter-like approach we've been talking about getting away from for the last five or seven years.

So what we want to talk about is data-centric security, where you understand, based on role or context, who is going to access the information and for what reason. I think there is a better opportunity for services like storage, whether it’s for archiving or for near term use.
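A toy version of that data-centric model: the access decision is keyed to the data's sensitivity label and the requester's role, purpose, and location, rather than to a network perimeter. The roles, labels, and rules in this Python sketch are invented for illustration.

```python
# Sketch: attribute-based access decision keyed to the data itself,
# not to where it sits. Roles, labels, and rules are illustrative.
from dataclasses import dataclass

@dataclass
class Request:
    role: str          # e.g. "analyst", "payroll-partner"
    purpose: str       # e.g. "reporting", "payroll-run"
    location: str      # e.g. "corp-network", "partner-site"

POLICY = {
    # data label -> predicate over the request context
    "public":     lambda r: True,
    "marketing":  lambda r: r.purpose in ("reporting", "campaign"),
    "payroll":    lambda r: r.role == "payroll-partner" and r.purpose == "payroll-run",
    "restricted": lambda r: r.role == "analyst" and r.location == "corp-network",
}

def allowed(label: str, req: Request) -> bool:
    rule = POLICY.get(label)
    return bool(rule and rule(req))

print(allowed("payroll", Request("payroll-partner", "payroll-run", "partner-site")))  # True
print(allowed("payroll", Request("analyst", "reporting", "corp-network")))            # False
```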

There are also other services that you don’t want to have to pay for 12 months out of the year, but that you might need independently. For instance, when you’re running a marketing campaign, you already share your data with some of your marketing partners. Or if you’re doing your payroll, you’re sharing that data through some of the national providers.

Data in different places

So there already is a lot of data in a lot of different places, whether you want Cloud or not, but the context is, it’s not in your perimeter, under your direct control, all of the time. The better you get at managing it wherever it is specific to the context, the better off you will be.

Hietala: It’s a slippery slope [when it comes to customer data]. That’s the most dangerous data to stick out in a Cloud service, if you ask me. If it’s personally identifiable information, then you get the privacy concerns that Stuart talked about. So to the extent you’re looking at putting that kind of data in a Cloud, looking at the Cloud service and trying to determine if we can apply some encryption, apply the sensible security controls to ensure that if that data gets loose, you’re not ending up in the headlines of The Wall Street Journal.

Gardner: Dave, you said there will be different levels on a regulatory basis for security. Wouldn’t that also play with data? Wouldn’t there be different types of data and therefore a spectrum of security and availability to that data?

Gilmour: You’re right. If we come back to Facebook as an example, Facebook is data that, even if it’s data about our known customers, it’s stuff that they have put out there with their will. The data that they give us, they have given to us for a purpose, and it is not for us then to distribute that data or make it available elsewhere. The fact that it may be the same data is not relevant to the discussion.

Three-dimensional solution

That’s where I think we are going to end up with not just one layer or two layers. We’re going to end up with a sort of a three-dimensional solution space. We’re going to work out exactly which chunk we’re going to handle in which way. There will be significant areas where these things crossover.

The other thing we shouldn’t forget is that data includes our software, and that’s something that people forget. Software nowadays is out in the Cloud, under current ways of running things, and you don’t even always know where it’s executing. So if you don’t know where your software is executing, how do you know where your data is?

It’s going to have to be just handled one way or another, and I think it’s going to be one of these things where it’s going to be shades of gray, because it cannot be black and white. The question is going to be, what’s the threshold shade of gray that’s acceptable.

Gardner: Mary Ann, to this notion of the different layers of security for different types of data, is there anything happening in the market that you’re aware of that’s already moving in that direction?

Mezzapelle: The experience that I have is mostly in some of the business frameworks for particular industries, like healthcare and what it takes to comply with the HIPAA regulation, or in the financial services industry, or in consumer products where you have to comply with the PCI regulations.

There has continued to be an issue around information lifecycle management, which is categorizing your data. Within a company, you might have had a document that you coded private, confidential, top secret, or whatever. So you might have had three or four levels for a document.

You’ve already talked about how complex it’s going to be as you move into trying understand, not only for that data, that the name Mary Ann Mezzapelle, happens to be in five or six different business systems over a 100 instances around the world.

That’s the importance of something like an Enterprise Architecture that can help you understand that you’re not just talking about the technology components, but the information, what they mean, and how they are prioritized or critical to the business, which sometimes comes up in a business continuity plan from a system point of view. That’s where I’ve advised clients on where they might start looking to how they connect the business criticality with a piece of information.

One last thing. Those regulations don't necessarily mean that you're secure. Compliance makes for good basic health, but that doesn't mean that the data is ultimately protected. You have to do a risk assessment based on your own environment, the bad actors that you expect, and the priorities based on that.
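To make the lifecycle-management point concrete, here is a small Python sketch that maps the "three or four levels" of document classification Mezzapelle mentions to handling rules. The levels and rules are invented; the point is that the classification, not the location, drives the treatment.

```python
# Sketch: classification levels drive handling rules wherever the data
# lives, inside or outside the perimeter. Levels/rules are illustrative.
HANDLING = {
    "public":       {"encrypt": False, "cloud_ok": True,  "retention_years": 1},
    "confidential": {"encrypt": True,  "cloud_ok": True,  "retention_years": 7},
    "restricted":   {"encrypt": True,  "cloud_ok": False, "retention_years": 10},
}

def handling_for(doc: dict) -> dict:
    """Look up the handling rules for a document's classification."""
    try:
        return HANDLING[doc["classification"]]
    except KeyError:
        # Unlabeled data gets the strictest treatment by default.
        return HANDLING["restricted"]

doc = {"title": "Q3 payroll extract", "classification": "restricted"}
rules = handling_for(doc)
if not rules["cloud_ok"]:
    print(f"'{doc['title']}' must stay in-house and encrypted")
```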

Leaving security to the end

Boardman: I just wanted to pick up here, because Mary Ann spoke about Enterprise Architecture. One of my bugbears — and I call myself an enterprise architect — is that we have a terrible habit of leaving security to the end. We don't architect security into our Enterprise Architecture. It's a techie thing, and we'll fix that at the back. There are also people in the security world who are techies, and they think that they will do it that way as well.

I don’t know how long ago it was published, but there was an activity to look at bringing the SABSA Methodology from security together with TOGAF®. There was a white paper published a few weeks ago.

The Open Group has been doing some really good work on bringing security right in to the process of EA.

Hietala: In the next version of TOGAF, work on which has already started, there will be a whole emphasis on making sure that security is better represented in some of the TOGAF guidance. That's ongoing work here at The Open Group.

Gardner: As I listen, it sounds as if the "in the Cloud or out of the Cloud" security continuum is perhaps the wrong way to look at it. If you have a lifecycle approach to services and to data, then you'll have a way in which you can approach data uses for certain instances and certain requirements, and that would then apply across a variety of private Cloud, public Cloud, and hybrid Cloud scenarios.

Is that where we need to go, perhaps have more of this lifecycle approach to services and data that would accommodate any number of different scenarios in terms of hosting access and availability? The Cloud seems inevitable. So what we really need to focus on are the services and the data.

Boardman: That’s part of it. That needs to be tied in with the risk-based approach. So if we have done that, we can then pick up on that information and we can look at a concrete situation, what have we got here, what do we want to do with it. We can then compare that information. We can assess our risk based on what we have done around the lifecycle. We can understand specifically what we might be thinking about putting where and come up with a sensible risk approach.

You may come to the conclusion in some cases that the risk is too high and the mitigation too expensive. In others, you may say, no, because we understand our information and we understand the risk situation, we can live with that, it’s fine.
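Boardman's tradeoff can be sketched with the classic annualized-loss arithmetic: annualized loss expectancy (ALE) is the single-loss expectancy (SLE) times the annual rate of occurrence (ARO), and a mitigation is worth buying only when the risk it removes exceeds its cost. All the numbers below are invented for illustration.

```python
# Sketch: accept-or-mitigate decision using annualized loss expectancy.
# ALE = SLE (cost of one incident) * ARO (expected incidents per year).
def ale(sle: float, aro: float) -> float:
    return sle * aro

def worth_mitigating(sle: float, aro: float, aro_after: float,
                     mitigation_cost: float) -> bool:
    """Mitigate only if the annual risk reduction beats the annual cost."""
    reduction = ale(sle, aro) - ale(sle, aro_after)
    return reduction > mitigation_cost

# Invented numbers: a breach costs $400k, expected every 4 years (ARO 0.25);
# a $60k/yr control cuts the rate to once every 20 years (ARO 0.05).
print(worth_mitigating(400_000, 0.25, 0.05, 60_000))  # True: $80k reduction > $60k
```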

Gardner: It sounds as if we are coming at this as an underwriter for an insurance company. Is that the way to look at it?

Current risk

Gilmour: That’s eminently sensible. You have the mortality tables, you have the current risk, and you just work the two together and work out what’s the premium. That’s probably a very good paradigm to give us guidance actually as to how we should approach intellectually the problem.

Mezzapelle: One of the problems is that we don't have those actuarial tables yet. That's a little bit of an issue for a lot of people when they talk about, "I've got $100 to spend on security. Where am I going to spend it this year? Am I going to spend it on firewalls? Am I going to spend it on information lifecycle management assessment? What am I going to spend it on?" Some of the research that we have been doing at HP is trying to get that into something that's more of a statistic.

So, when you have a particular project that does a certain kind of security implementation, you can see what the business return on it is and how it actually lowers risk. We found that it’s better to spend your money on getting a better system to patch your systems than it is to do some other kind of content filtering or something like that.
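The $100 question Mezzapelle raises becomes a ranking problem once you can estimate risk reduction per dollar, which is exactly the kind of statistic her research is after. The figures in this sketch are invented placeholders.

```python
# Sketch: rank security spending options by estimated risk reduction
# per dollar. All figures are invented placeholders.
controls = [
    {"name": "patch management",     "cost": 40, "risk_reduction": 35},
    {"name": "content filtering",    "cost": 30, "risk_reduction": 10},
    {"name": "lifecycle assessment", "cost": 30, "risk_reduction": 18},
]

# Spend the first dollars where each dollar removes the most risk.
for c in sorted(controls, key=lambda c: c["risk_reduction"] / c["cost"], reverse=True):
    print(f"{c['name']}: {c['risk_reduction'] / c['cost']:.2f} risk units per $")
```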

Gardner: Perhaps what we need is the equivalent of an Underwriters Laboratories (UL) for permeable organizational IT assets, where the security stamp of approval comes in high or low. Then you could get your insurance insight. Maybe that's something for The Open Group to look into. Any thoughts about how standards and a consortium approach would come into that?

Hietala: I don’t know about the UL for all security things. That sounds like a risky proposition.

Gardner: It could be fairly popular and remunerative.

Hietala: It could.

Mezzapelle: An unending job.

Hietala: I will say we have one active project in the Security Forum that is looking at trying to allow organizations to measure and understand risk dependencies that they inherit from other organizations.

So if I’m outsourcing a function to XYZ corporation, being able to measure what risk am I inheriting from them by virtue of them doing some IT processing for me, could be a Cloud provider or it could be somebody doing a business process for me, whatever. So there’s work going on there.

I heard just last week about an NSF-funded project here in the U.S. to do the same sort of thing, to look at trying to measure risk in a predictable way. So there are things going on out there.

Gardner: We have to wrap up, I'm afraid, but Stuart, it seems as if currently it's the larger public Cloud providers, the likes of Amazon and Google among others, that might be playing the role of all of these entities we're talking about. They are their own self-insurer. They are their own underwriter. They are their own risk assessor, like a UL. Do you think that's going to continue to be the case?

Boardman: No, I think that as Cloud adoption increases, you will have a greater weight of consumer organizations who will need to do that themselves. Look at the question of not just responsibility, but also accountability. At the end of the day, you're always accountable for the data that you hold. It doesn't matter where you put it or how many other parties it gets subcontracted out to.

The weight will change

So there’s a need to have that, and as the adoption increases, there’s less fear and more, “Let’s do something about it.” Then, I think the weight will change.

Plus, of course, there are other parties coming into this world, the world that Amazon has created. I'd imagine that HP is probably one of them as well, but all the big names in IT are moving in here, and I suspect that for those companies their history of enterprise involvement is a differentiator in knowing how to do this properly.

So yeah, I think it will change. That’s no offense to Amazon, etc. I just think that the balance is going to change.

Gilmour: Yes. I think that’s how it has to go. The question that then arises is, who is going to police the policeman and how is that going to happen? Every company is going to be using the Cloud. Even the Cloud suppliers are using the Cloud. So how is it going to work? It’s one of these never-decreasing circles.

Mezzapelle: At this point, I think it’s going to be more evolution than revolution, but I’m also one of the people who’ve been in that part of the business — IT services — for the last 20 years and have seen it morph in a little bit different way.

Stuart is right that there’s going to be a convergence of the consumer-driven, cloud-based model, which Amazon and Google represent, with an enterprise approach that corporations like HP are representing. It’s somewhere in the middle where we can bring the service level commitments, the options for security, the options for other things that make it more reliable and risk-averse for large corporations to take advantage of it.

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

1 Comment

Filed under Cloud, Cloud/SOA, Conference, Cybersecurity, Information security, Security Architecture

Capgemini’s CTO on How Cloud Computing Exposes the Duality Between IT and Business Transformation

By Dana Gardner, Interarbor Solutions

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference this month in San Francisco.

The conference will focus on how IT and enterprise architecture support enterprise transformation. Speakers in conference events will also explore the latest in service oriented architecture (SOA), cloud computing, and security.

We’re now joined by one of the main speakers, Andy Mulholland, the Global Chief Technology Officer and Corporate Vice President at Capgemini. In 2009, Andy was voted one of the top 25 most influential CTOs in the world by InfoWorld. And in 2010, his CTO Blog was voted best blog for business managers and CIOs for the third year running by Computer Weekly.

Capgemini is about to publish a white paper on cloud computing. It draws distinctions between what cloud means to IT, and what it means to business — while examining the complex dual relationship between the two.

As a lead-in to his Open Group conference presentation on the transformed enterprise, Andy draws on the paper and further drills down on one of the decade’s hottest technology and business trends, cloud computing, and how it impacts business and IT. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: Why do business people think they have a revolution on their hands, while IT people look at cloud computing as an evolution of infrastructure efficiency?

Mulholland: We define the role of IT and give it the responsibility and the accountability in the business in a way that is quite strongly related to internal practice. It's all about how we manage the company's transactions, how we reduce the cost, how we automate business processes, and generally try to make our company a more efficient internal operator.

When you look at cloud computing through that set of lenses, you’re going to see … the technologies from cloud computing, principally virtualization, [as] ways to improve how you deliver the current server-centric, application-centric environment.

However, business people … reflect on it in terms of the change in society and the business world, which we all ought to recognize because that is our world: the way we choose what we buy, how we choose to do business with people, how we search for information, and how we've changed those habits.

Changed our ways

There’s a whole list of things that we simply just don’t do anymore because we’ve changed the way we choose to buy a book, the way we choose and listen to music and lots of other things.

So we see this as a revolution in the market or, more particularly, a revolution in how cloud can serve in the market, because everybody uses some form of technology.

So then the question is not the role of the IT department in the enterprise — it's the role technology should be playing in the extended enterprise in doing business.

Gardner: What do we need to start doing differently?

Mulholland: Let’s go to a conversation this morning with a client. It’s always interesting to touch reality. This particular client is looking at the front end of a complex ecosystem around travel, and was asked this standard question by our account director: Do you have a business case for the work we’re discussing?

The reply from the CEO is very interesting. He fixed him with a very cold glare and he said, “If you were able to have 20 percent more billable hours without increasing your cost structure, would you be bothered to even think about the business case?”

The answer in that particular case was that they were talking about 10,000 or more additional travel instances a year — with no increase in their cost structure. In other words, the business case had nothing to do with cost. Their argument was about revenue increase and market share increase, and they thought that they would make better margins, because it would actually decrease their cost base or spread it more widely.

That’s the whole purpose of this revolution and that’s the purpose the business schools are always pushing, when they talk about innovative business models. It means innovate your business model to look at the market again from the perspective of getting into new markets, getting increased revenue, and maybe designing things that make more money.

Using technology externally

We’re always hooked on this idea that we’ve used technology very successfully internally, but now we should be asking the question about how we’re using technology externally when the population as a whole uses that as their primary method of deciding what they’re going to buy, how they’re going to buy it, when they’re going to buy it, and lots of other questions.

… A popular book recently has been The Power of Pull, and the idea is that we’re really seeing a decentralization of the front office in order to respond to and follow the market and the opportunities and the events in very different ways.

The Power of Pull says that I do what my market is asking me and I design business process or capabilities to be rapidly orchestrated through the front office around where things want to go, and I have linkage points, application programming interface (API) points, where I take anything significant and transfer it back.

But the real challenge is — and it was put to me today in the client discussion — that their business was designed around 1970s computer systems, augmented slowly around that core, and they still felt that legacy. Today, the market's expectation of the industry they're in is that it would be designed around the way people are using their products and services and around events, and they knew they had to make that change.

To do that, they have to transform the organization, and that's where we start to spot the difference. We start to spot the idea that your own staff, your customers, and other suppliers are all working externally on information, process, and services accessible to all on an Internet market or architecture.

So when we talk about business architecture, it’s as relevant today as it ever was in terms of interpreting a business.

Set of methodologies

But when we start talking about architecture, The Open Group Architecture Framework (TOGAF) is a set of methodologies on the IT side — a closely coupled, stateful set of design principles suited to client-server type systems. In this new model, when we talk about clouds, mobility, and people traveling around and connecting by wireless, etc., we have a stateless, loosely coupled environment.

The whole purpose of The Open Group is, in fact, to help devise new ways for being able to architect methods to deliver that. That’s what stands behind the phrase, “a transformed enterprise.”

… If we go back to the basic mission of The Open Group, which is Boundaryless Information Flow, the boundary has previously been defined by a computer system updating another computer system in another company around a traditional IT-type procedural business flow.

Now, we’re talking about the idea that the information flow is around an ecosystem in an unstructured way. Not a structured file-to-file type transfer, not a structured architecture of who does what, when, and how, but the whole change model in this is unstructured.

Gardner: It’s important to point out here, Andy, that the stakes are relatively high. Who in the organization can be the change agent that can make that leap between the duality view of cloud that IT has, and these business opportunists?

Mulholland: The CEOs are quite noticeably reading the right articles, hearing the right information from business schools, etc., and they’re getting this picture that they’re going to have new business models and new capabilities.

So the drive end is not hard. The problem that is usually encountered is that the IT department’s definition and role interferes with them being able to play the role they want.

What we’re actually looking for is the idea that IT, as we define it today, is some place else. You have to accept that it exists, it will exist, and it’s hugely important. So please don’t take those principles and try to apply them outside.

The real question here is when you find those people who are doing the work outside — and I’ve yet to find any company where it hasn’t been the case — and the question should be how can we actually encourage and manage that innovation sensibly and successfully?

What I mean by that is that if everybody goes off and does their own thing, once again, we'll end up with a broken company. Why? Because the whole purpose of an enterprise is to leverage success rapidly. If someone is very successful over there, you really need to know, and you need to leverage that again as rapidly as you can to run the rest of the organization. If it doesn't work, you need to stop it quickly.

Changing roles

When we model the capabilities for that, the question is: where is the governance structure? So we hear titles like Chief Innovation Officer; again, it's slightly surprising how that may come up. But we see the model coming both ways. There are reforming CIOs for sure, who have recognized this and are changing their role and position accordingly, sometimes formally, sometimes informally.

The other way around, there are people coming from other parts of the business, taking the title and driving them. I’ve seen Chief Strategy Officers taking the role. I’ve seen the head of sales and marketing taking the role.

Certainly, the recognition of the technology possibilities should be coming from the technology capabilities within the current IT department. The understanding of what those capabilities mean for the business might be coming from a different direction. So it's a very interesting balance at the moment, and we don't know quite the right answer.

What I do know is that it’s happening, and the quick-witted CIOs are understanding that it’s a huge opportunity for them to fix their role and embrace a new area, and a new sense of value that they can bring to their organization.

Gardner: Returning to the upcoming Capgemini white paper, it adds a sense of urgency at the end on how to get started. It suggests that you appoint a leader, but a leader first for the inside-out element of cloud and transformation and then a second leader, a separate leader perhaps, for that outside-in or reflecting the business transformation and the opportunity for what’s going on in the external business and markets. It also suggests a strategic road map that involves both business and technology, and then it suggests getting a pilot going.

How does this transition become something that you can manage?

Mulholland: The question is: do you know who is responsible? If you don't, you'd better figure out how you're going to make someone responsible, because in any situation, someone has to be deciding what we're going to do and how we're going to do it.

Having defined that, there are very different business drivers, as well as different technology drivers, between the two. Clearly, whoever takes those roles will reflect a very different way that they will have to run that element. So a duality is recognized in that comment.

On the other hand, no business can survive by going off in half-a-dozen directions at once. You won’t have the money. You won’t have the brand. You won’t have anything you’d like. It’s simply not feasible.

So, the object of the strategic roadmap is to reaffirm the idea of what kind of business we’re trying to be and do. That’s the glimpse of what we want to achieve.

There has to be a strategy. Otherwise, you'll end up with way too much decentralization and people making up their own version of the strategy, which they can fairly easily do today, and fairly easily mount from someone else's cloud.

So the purpose of the duality is to make sure that the two roles, the two different groups of technology, the two different capabilities they reflect to the organization, are properly addressed, properly managed, and properly have a key authority figure in charge of them.

Enablement model

The business strategy is to make sure that the business knows how the enablement model that these two offer is capable of being directed to where the shareholders will make money out of the business, because that is ultimately the success factor they're looking for to drive them forward.

************

If you are interested in attending The Open Group’s upcoming conference, please register here: http://www3.opengroup.org/event/open-group-conference-san-francisco/registration

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

3 Comments

Filed under Cloud, Cloud/SOA, Enterprise Transformation, Semantic Interoperability

MIT’s Ross on How Enterprise Architecture and IT More Than Ever Lead to Business Transformation

By Dana Gardner, Interarbor Solutions

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference this month in San Francisco.

The conference will focus on how IT and enterprise architecture support enterprise transformation. Speakers in conference events will also explore the latest in service oriented architecture (SOA), cloud computing, and security.

We’re now joined by of the main speakers, Jeanne Ross, Director and Principal Research Scientist at the MIT Center for Information Systems Research. Jeanne studies how firms develop competitive advantage through the implementation and reuse of digitized platforms.

She is also the co-author of three books: IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Enterprise Architecture As Strategy: Creating a Foundation for Business Execution, and IT Savvy: What Top Executives Must Know to Go from Pain to Gain.

As a lead-in to her Open Group presentation on how adoption of enterprise architecture (EA) leads to greater efficiencies and better business agility, Ross explains how enterprise architects have helped lead the way to successful business transformations. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here.

Here are some excerpts:

Gardner: How do you measure or determine that enterprise architects and their practices are intrinsic to successful business transformations?

Ross: That’s a great question. Today, there remains kind of a leap of faith in recognizing that companies that are well-architected will, in fact, perform better, partly because you can be well-architected and perform badly. Or if we look at companies that are very young and have no competitors, they can be very poorly architected and achieve quite remarkably in the marketplace.

But what we can ascribe to architecture is that when companies have competition, then they can establish any kind of performance target they want, whether it’s faster revenue growth or better profitability, and then architect themselves so they can achieve their goals. Then, we can monitor that.

We do have evidence in repeated case studies of companies that set goals, defined an architecture, started to build the capabilities associated with that architecture, and did indeed improve their performance. We have wonderful case study results that should be very reaffirming. I accept that they are not conclusive.

Architectural maturity

We also have statistical support in some of the work we’ve done that shows that high performers in our sample of 102 companies, in fact, had greater architecture maturity. They had deployed a number of practices associated with good architecture.

Gardner: Is there something that’s new about this, rather than just trying to reengineer something?

Ross: Yes, the thing we’re learning about enterprise architecture is that there’s a cultural shift that takes place in an organization, when it commits to doing business in a new way, and that cultural shift starts with abandoning a culture of heroes and accepting a culture of discipline.

Nobody wants to get rid of the heroes in their company. Heroes are people who see a problem and solve it. But we do want to get past heroes sub-optimizing. What companies traditionally did before they started thinking about what architecture would mean, is they relied on individuals to do what seemed best and that clearly can sub-optimize in an environment that increasingly is global and requires things like a single face to the customer.

We really just need architecture to pull out unnecessary cost and to enable desirable reusability. And the architect is typically going to be the person representing that enterprise view and helping everyone understand the benefits of understanding that enterprise view, so that everybody who can easily or more easily see the local view is constantly working with architects to balance those two requirements.

Gardner: Is this a particularly good time, from your vantage point, to undertake enterprise architecture?

Ross: It’s a great time for most companies. There will be exceptions that I’ll talk about in a minute. One thing we learned early on in the research is that companies who were best at adopting architecture and implementing it effectively had cost pressures. What happens when you have cost pressures is that you’re forced to make tough decisions.

If you have all the money in the world, you’re not forced to make tough decisions. Architecture is all about making tough decisions, understanding your tradeoffs, and recognizing that you’re going to get some things that you want and you are going to sacrifice others.

If you don’t see that, if you just say, “We’re going to solve that by spending more money,” it becomes nearly impossible to become architected. This is why investment banks are invariably very badly architected, and most people in investment banks are very aware of that. It’s just very hard to do anything other than say, “If that’s important to us, let’s spend more money and let’s get it.” One thing you can’t get by spending more money is discipline, and architecture is very tightly related to discipline.

Tough decisions

In a tough economy, when competition is increasingly global and marketplaces are shifting, this ability to make tough decisions is going to be essential. Opportunities to save costs are going to be really valued, and architecture invariably helps companies save money. The ability to reuse, and thus rapidly seize the next related business opportunity, is also going to be highly valued.

The thing you have to be careful of is that if you see your markets disappearing, if your product is outdated, or your whole industry is being redefined, as we have seen in things like media, you have to be ready to innovate. Architecture can restrict your innovative gene, by saying, “Wait, wait, wait. We want to slow down. We want to do things on our platform.” That can be very dangerous, if you are really facing disruptive technology or market changes.

So you always have to have that eye out there that says, “When is what we built that’s stable actually constraining us too much? When is it preventing important innovation?” For a lot of architects, that’s going to be tough, because you start to love the architecture, the standards, and the discipline. You love what you’ve created, but if it isn’t right for the market you’re facing, you have to be ready to let it go and go seize the next opportunity.

Gardner: Perhaps this environment is the best of all worlds, because we have that discipline on the costs which forces hard decisions, as you say. We also have a lot of these innovative IT trends that would almost force you to look at doing things differently. I’m thinking again of cloud, mobile, the big data issues, and even social-media types of effects.

Ross: Absolutely. We should all look at it that way and say, “What a wonderful world we live in.” One of the companies that I find quite remarkable in their ability to, on the one hand, embrace discipline and architecture, and on the other hand, constantly innovate, is USAA. I’m sure I’ll talk about them a little bit at the conference.

This is a company that just totally understands the importance of discipline around customer service. They’re off the charts in their customer satisfaction.

They’re a financial services institution. Most financial services institutions just drool over USAA’s customer satisfaction ratings, but they’ve done this by combining this idea of discipline around the customer. We have a single customer file. We have an enterprise view of that customer. We constantly standardize those practices and processes that will ensure that we understand the customer and we deliver the products and services they need. They have enormous discipline around these things.

Simultaneously, they have people working constantly around innovation. They were the first company to see the need for depositing a check with your iPhone: take a picture of your check and it's automatically deposited into your account. They were nearly a year ahead of the next company that came up with that service.

The way they see it is that for any new technology that comes out, our customer will want to use it. We’ve got to be there the day after the technology comes out. They obviously haven’t been able to achieve that, but that’s their goal. If they can make deals with R&D companies that are coming up with new technologies, they’re going to make them, so that they can be ready with their product when the thing actually becomes commercial.

So it’s certainly possible for a company to be both innovative and responsive to what’s going on in the technology world and disciplined and cost effective around customer service, order-to-cash, and those other underlying critical requirements in your organization. But it’s not easy, and that’s why USAA is quite remarkable. They’ve pulled it off and they are a lesson for many other companies.

Gardner: Is The Open Group a good forum for your message and your research, and if so, why?

Ross: The Open Group is great for me, because there is so much serious thinking in The Open Group about what architecture is, how it adds value, and how we do it well. For me to touch base with people in The Open Group is really valuable, and for me to touch base to share my research and hear the push back, the debate, or the value add is perfect, because these are people who are living it every day.

Major themes

Gardner: Are there any other major themes that you’ll be discussing at the conference coming up that you might want to share with us?

Ross: One thing we have observed in our cases that is more and more important to architects is that the companies are struggling more than we realized with using their platforms well.

I’m not sure that architects or people in IT always see this. You build something that’s phenomenally good and appropriate for the business and then you just assume, that if you give them a little training, they’ll use it well.

That’s actually been a remarkable struggle for organizations. One of our research projects right now is called “Working Smarter on Your Digitized Platform.” When we go out, we find there aren’t very many companies that have come anywhere close to leveraging their platforms the way they might have imagined and certainly the way an architect would have imagined.

It’s harder than we thought. It requires persistent coaching. It’s not about training, but persistent coaching. It requires enormous clarity of what the organization is trying to do, and organizations change fast. Clarity is a lot harder to achieve than we think it ought to be.

The message for architects would be: here you are trying to get really good at being a great architect. To add value to your organization, you actually have to understand one more thing: how effectively are people in your company adopting the capabilities and leveraging them effectively? At some point, the value add of the architecture is diminished by the fact that people don’t get it. They don’t understand what they should be able to do.

We’re going to see architects spending a little more time understanding what their leadership is capable of and what capabilities they’ll be able to leverage in the organization, as opposed to which on a rational basis seem like a really good idea.

Getting started

Gardner: When you’re an organization and you’ve decided that you do want to transform and take advantage of unique opportunities for either technical disruption or market discipline, how do you go about getting more structure, more of an architecture?

Ross: That’s idiosyncratic to some extent, because in your dream world, what happens is that the CEO announces, “This is what we are going to be five years from now. This is how we are going to operate and I expect everyone to get on board.” The vision is clear and the commitment is clear. Then the architects can just say, and most architects are totally capable of this, “Oh, well then, here are the capabilities we need to build. Let’s just go build them and then we’ll live happily ever after.”

The problem is that’s rarely the way you get to start. Invariably, the CEO is looking at the need for some acquisitions, some new markets, and all kinds of pressures. The last thing you’re getting is some clarity around the vision of an operating model that would define your critical architectural capabilities.

What ends up happening instead is that architects recognize key business leaders who understand the need for reuse, standardization, process discipline, whatever it is, and they're very pragmatic about it. They say, "What do you need here to develop an enterprise view of the customer?" or "What's limiting your ability to move into the next market?"

And they have to pragmatically develop what the organization can use, as opposed to defining the organizational vision and then the big picture view of the enterprise architecture.

So in practice, it’s a much more pragmatic process than what we would imagine when we, for example, write books on how to do enterprise architecture. The best architects are listening very hard to who is asking for what kind of capability. When they see real demand and real leadership around certain enterprise capabilities, they focus their attention on addressing those, in the context of what they realize will be a bigger picture over time.

They can already see the unfolding bigger picture, but there’s no management commitment yet. So they stick to the capabilities that they are confident the organization will use. That’s the way they get the momentum to build. That is more art than science and it really distinguishes the most successful architects.

************

If you are interested in attending The Open Group’s upcoming conference, please register here: http://www3.opengroup.org/event/open-group-conference-san-francisco/registration

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.

2 Comments

Filed under Enterprise Architecture, Enterprise Transformation, Semantic Interoperability