
The Open Group Panel Explores How the Big Data Era Now Challenges the IT Status Quo

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group panel explores how the Big Data era now challenges the IT status quo, or view the on-demand video recording on this discussion here: http://new.livestream.com/opengroup/events/1838807.

We recently assembled a panel of experts to explore how Big Data changes the status quo for architecting the enterprise. The bottom line from the discussion is that large enterprises should not just wade into Big Data as an isolated function, but should anticipate the strategic effects and impacts of Big Data — as well as the simultaneous complicating factors of Cloud Computing and mobile — as soon as possible.

The panel consisted of Robert Weisman, CEO and Chief Enterprise Architect at Build The Vision; Andras Szakal, Vice President and CTO of IBM’s Federal Division; Jim Hietala, Vice President for Security at The Open Group, and Chris Gerty, Deputy Program Manager at the Open Innovation Program at NASA. I served as the moderator.

And this special thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on “Big Data — the transformation we need to embrace today.”

Threaded factors

An interesting thread for me throughout the conference was to factor where Big Data begins and plain old data, if you will, ends. Of course, it’s going to vary quite a bit from organization to organization.

But Gerty from NASA, part of our panel, provided a good example: It’s when you run out of gas with your old data methods, and your ability to deal with the data — and it’s not just the size of the data itself.

Therefore, Big Data means doing things differently — not just managing the velocity, volume, and variety of the data, but thinking about data fundamentally and differently. And we need to think about security, risk, and governance. If it’s a “boundaryless organization” when it comes to your data, whether as a product, a service, or a resource, then the control and management of which data should be exposed, which should be opened, and which should be closely guarded all need to be factored, determined, and implemented.

Here are some excerpts from the on-stage discussion:

Dana Gardner: You mentioned that Big Data to you is not a factor of the size, because NASA’s dealing with so much. It’s when you run out of steam, as it were, with the methodologies. Maybe you could explain more. When do you know that you’ve actually run out of steam with the methodologies?

Gerty: When we collect data, we have some sort of goal in mind of what we might get out of it. When we put the pieces from the data together, either it maybe doesn’t fit as well as you thought, or you’re successful and you continue to do the same thing, gathering archives of information.

At that point, where you realize there might even be something else that you want to do with the data, different from what you planned originally, that’s when we have to pivot a little bit and say, “Now I need to treat this as a living archive. It’s a ‘it may live beyond me’ type of thing.” At that point, I think you treat it as setting up the infrastructure for being used later, whether it be by you or someone else. That’s an important transition to make, and it might be what one could define as Big Data.

Gardner: Andras, does that square with where you are in your government interactions — that data now becomes a different type of resource, and that you need to know when to do things differently?

Szakal: The importance of data hasn’t changed. The data itself, the veracity of the data, is still important. Transactional data will always need to exist. The difference is that you have certainly the three or four Vs, depending on how you look at it, but the importance of data is in its veracity, and your ability to understand or to be able to use that data before the data’s shelf life runs out.

Some data has a shelf life that’s long lived. Other data has very little shelf life, and you would use different approaches to utilizing that information. It’s ultimately not about the data itself, but about gaining deep insight into that data. So it’s not about storing or manipulating data, but about applying analytical capabilities to the data.

Gardner: Bob, we’ve seen the price points on storage go down so dramatically. We’ve seen people just decide to hold on to data that they wouldn’t have before, simply because they can and they can afford to do so. That means we need to try to extract value and use that data. From the perspective of an enterprise architect, how are things different now, vis-à-vis this much larger set and variety of data, when it comes to planning and executing as architects?

Weisman: One of the major issues is that organizations are normally holding two orders of magnitude more data than they need. It’s a huge overhead, both in terms of the application architecture, which has a code base larger than it should be, and in terms of the technology architecture, which is supporting a horrendous number of servers and a whole bunch of technology they don’t need.

The issue for the architect is to figure out what data is useful and institute a governance process, so that you can have data lifecycle management and proper disposition, focus the organization on the information, data, and knowledge that is going to provide business value, and help them innovate and gain a competitive advantage.

Can’t afford it

And in terms of government, it’s about improving service delivery, because there’s waste right now in information infrastructure, and we can’t afford it anymore.

Gardner: So it’s difficult to know what to keep and what not to keep. I’ve actually spoken to a few people lately who want to keep everything, just because they want to mine it, and they are willing to spend the money and effort to do that.

Jim Hietala, when people do get to this point of trying to decide what to keep, what not to keep, and how to architect properly for that, they also need to factor in security. It shouldn’t come later in the process; it should come early. What are some of the precepts that you think are important in applying good security practices to Big Data?

Hietala: One of the big challenges is that many of the big-data platforms weren’t built from the get-go with security in mind. So some of the controls that you’ve had available in your relational databases, for instance, you move over to the Big Data platforms and the access control authorizations and mechanisms are not there today.

Planning the architecture, looking at bringing in third-party controls to give you the security mechanisms that you are used to in your older platforms, is something that organizations are going to have to do. It’s really an evolving and emerging thing at this point.

Gardner: There are a lot of unknown unknowns out there, as we discovered with our tweet chat last month. Some people think that the data is just data, and you apply the same security to it. Do you think that’s the case with Big Data? Is it just another follow-through of what you always did with data in the first place?

Hietala: I would say yes, at a conceptual level, but it’s like what we saw with virtualization. When there was a mad rush to virtualize everything, many of those traditional security controls didn’t translate directly into the virtualized world. The same thing is true with Big Data.

When you’re talking about those volumes of data, applying encryption and various other security controls, you have to think about how those things are going to scale. That may require new solutions, new technologies, and that sort of thing.

Gardner: Chris Gerty, when it comes to that governance, security, and access control, are there any lessons that you’ve learned about getting the best of openness while still having the ability to manage the spigot?

Gerty: Spigot is probably a dangerous term to use, because it implies that all data is treated the same. The sooner that you can tag the data as either sensitive or not, mostly coming from the person or team that’s developed or originated the data, the better.

Kicking the can

Once you have it on a hard drive, once you get crazy about storing everything, if you don’t know where it came from, you’re forced to put it into a secure environment. And that’s just kicking the can down the road. It’s really a disservice to people who might use the data in a useful way to address their problems.

We constantly have satellites that are made for one purpose. They send all the data down. It’s controlled either for security or for intellectual property (IP), so someone can write a paper. Then, after the project doesn’t get funded or it just comes to a nice graceful close, there is that extra step, which is almost a responsibility of the originators, to make it useful to the rest of the world.

Gardner: Let’s look at Big Data through the lens of some other major trends right now. Let’s start with Cloud. You mentioned that at NASA, you have your own private Cloud that you’re using a lot, of course, but you’re also now dabbling in commercial and public Clouds. Frankly, the price points that these Cloud providers are offering for storage and data services are pretty compelling.

So we should expect more data to go to the Cloud. Bob, from your perspective, as organizations and architects have to think about data in this hybrid on-premises and off-premises Cloud environment, moving back and forth, what do you think enterprise architects need to start thinking about in terms of managing that and planning for the right destination of data, based on the right mix of other requirements?

Weisman: It’s a good question. As you said, the price point is compelling, but the security and privacy of the information is something else that has to be taken into account. Where is that information going to reside? You have to have very stringent service-level agreements (SLAs) and in certain cases, you might say it’s a price point that’s compelling, but the risk analysis that I have done means that I’m going to have to set up my own private Cloud.

Right now, everybody’s saying the public Cloud is going to be the way to go. Vendors are going to have to be very sensitive to that, and many are, at this point in time, addressing a lot of the needs of some of their large client bases. So it’s not one-size-fits-all, and it’s more than just a price for service. Architecture can bring down the price pretty dramatically, even within an enterprise.

Gardner: Andras, how do the Cloud and Big Data come together in a way that’s intriguing to you?

Szakal: Actually it’s a great question. We could take the rest of the 22 minutes talking on this one question. I helped lead the President’s Commission on Big Data that Steve Mills from IBM and — I forget the name of the executive from SAP — led. We intentionally tried to separate Cloud from Big Data architecture, primarily because we don’t believe that, in all cases, Cloud is the answer to all things Big Data. You have to define the architecture that’s appropriate for your business needs.

However, it also depends on where the data is born. Take many of the investments IBM has made in enterprise marketing management, for example Coremetrics, and several of these services that we now offer for helping customers gain deep insight into how their retail market or supply chain behaves.

Born in the Cloud

All of that information is born in the Cloud. But if you’re talking about actually using Cloud as infrastructure and moving around huge sums of data or constructing some of these solutions on your own, then some of the ideas that Bob conveyed are absolutely applicable.

I think it becomes prohibitive to do that, and easier to stand up a hybrid environment for managing that amount of data. But you have to think about whether your data is real-time data, whether it’s data to which you could apply some of these new technologies, like Hadoop MapReduce-type solutions, or whether it’s traditional data warehousing.

Data warehouses are going to continue to exist and they’re going to continue to evolve technologically. You’re always going to use a subset of data in those data warehouses, and it’s going to be an applicable technology for many years to come.

Gardner: So suffice it to say, an enterprise architect who is well versed in both Cloud infrastructure requirements, technologies, and methods, as well as Big Data, will probably be in quite high demand. That specialization in one or the other isn’t as valuable as being able to cross-pollinate between them.

Szakal: Absolutely. It’s about enabling our architects and finding individuals who have this deep, unique set of skills: analytics, mathematics, and business. Those individuals are going to be the future architects of the IT world, because analytics and Big Data are going to be integrated into everything that we do and become part of the business processing.

Gardner: Well, that’s a great segue to the next topic that I am interested in, and it’s around mobility as a trend and also application development. The reason I lump them together is that I increasingly see developers being tasked with mobile first.

When you create a new app, you have to remember that this is going to run in the mobile tier and you want to make sure that the requirements, the UI, and the complexity of that app don’t go beyond the ability of the mobile app and the mobile user. This is interesting to me, because data now has a different relationship with apps.

We used to think of apps as creating data and then the data would be stored and it might be used or integrated. Now, we have applications that are simply there in order to present the data and we have the ability now to present it to those mobile devices in the mobile tier, which means it goes anywhere, everywhere all the time.

Let me start with you, Jim, because it’s security and risk, but it’s also just rethinking the way we use data in a mobile tier. If we can do it safely, and that’s a big IF, how important should it be for organizations to start thinking about making this data available to all of these devices and pouring it out into that mobile tier as much as possible?

Hietala: In terms of enabling the business, it’s very important. There are a lot of benefits that accrue from accessing your data from whatever device you happen to be on. To me, it is that question of “if,” because now there’s a whole lot of problems to be solved relative to the data floating around anywhere on Android, iOS, whatever the platform is, and the organization being able to lock down their data on those devices, forgetting about whether it’s the organization device or my device. There’s a set of issues around that that the security industry is just starting to get their arms around today.

Mobile ability

Gardner: Chris, any thoughts about this mobile effect, where the data gets more valuable the more you can use it and apply it, and then the more you apply it, the more data you generate, which makes the data more valuable, so we start getting into a positive feedback loop?

Gerty: Absolutely. It’s almost an appreciation of what more people could do and get to the problem. We’re getting to the point where, if it’s available on your desktop, you’re going to find a way to make it available on your device.

Those same security questions probably need to be answered anyway, but making it mobile compatible is almost an acknowledgment that there will be someone who wants to use it. So let me go that extra step to make it compatible and see what I get from them. It’s more of a cultural benefit that you get from making things compatible with mobile.

Gardner: Any thoughts about what developers should be thinking in trying to bring the fruits of Big Data, through these analytics, to more users, rather than just the BI folks or those who are good at SQL queries? Does this change the game by making an application on a mobile device simple and powerful, while accessing this real-time, updated treasure trove of data?

Gerty: I always think of the astronaut on the moon. He’s got a big, bulky glove and he might have a heads-up display in front of him, but he really needs to know exactly a certain piece of information at the right moment, while dealing with bandwidth issues, dealing with the environment, a foggy helmet, whatever.

It’s very analogous to what the day-to-day professional will use trying to find out that quick e-mail he needs to know or which meeting to go to — which one is more important — and it all comes down to putting your developer in the shoes of the user. So anytime you can get interaction between the two, that’s valuable.

Weisman: From an Enterprise Architecture point of view, my background is mainly defense and government, and defense mobile computing has been around for decades. So you’ve always been dealing with that.

The main thing is that, in many cases, when they’re coming up with information, the whole presentation layer is turning into another architecture domain, with information visualization and also with your security controls and an integrated identity management capability.

It’s like what you were saying about the astronaut getting it right. He doesn’t need to know everything that’s happening in the world. He needs to know about his heads-up display, the stuff that’s relevant to him.

So it’s about getting the right information to the right person in an authorized manner, in a way that he can visualize and make sense of that information, be it straight data, analytics, or whatever. The presentation layer, ergonomics, and visual communication are going to become very important in the future for that. There are also a lot of problems: rather than doing it at the application level, you’re doing it entirely in one layer.

Governance and security

Gardner: So clearly the implications of data are cutting across how we think about security, how we think about UI, how we factor in mobility. What we now think about in terms of governance and security, we have to do differently than we did with older data models.

Jim Hietala, what about the impact on spurring people toward more virtualized desktop delivery, if you don’t want to have the data on that end device, if you want to solve some of the issues around control and governance, and if you want to be able to manage just how much data gets into that UI, not too much, not too little?

Do you think that some of these concerns we’re addressing will push people to look even harder, maybe more aggressively, at desktop and application virtualization: as they say, keep it on the server and deliver out just the deltas?

Hietala: That’s an interesting point. I’ve run across a startup in the last month or two that is doing just that. The whole value proposition is to virtualize the environment. You get virtual gold images. You don’t have to worry about what’s actually happening on the physical device, and you know when the devices connect, the security threat goes away. So we may see more of that as a solution.

Gardner: Andras, do you see that some of the implications of Big Data, far-fetched as it may be, are propelling people to cultivate their servers more and virtualize their apps, their data, and their desktops right up to the end devices?

Szakal: Yeah, I do. I see IBM providing solutions for virtual desktop, but I think it was really a security question you were asking. You’re certainly going to see an additional number of virtualized desktop environments.

Ultimately, our network still is not stable enough or at a high enough bandwidth to really make that a useful exercise for all but the most menial users in the enterprise. From a security point of view, there is a lot still to be solved.

And part of the challenge in the Cloud environment that we see today is the proliferation of virtual machines (VMs) and the inability to actually contain the security controls within those machines and across these machines from an enterprise perspective. So we’re going to see more solutions proliferate in this area and to try to solve some of the management issues, as well as the security issues, but we’re a long ways away from that.

Gerty: Big Data, by itself, isn’t magical. It doesn’t have the answers just by being big. If you need more, you need to pry deeper into it. That’s the example. They realized early enough that they were able to make something good.

Gardner: Jim Hietala, any thoughts about examples that illustrate where we’re going and why this is so important?

Hietala: Being a security guy, I tend to talk about scare stories, horror stories. One example from last year struck me. One of the major retailers here in the U.S. hit the news for having predicted, through customer purchase behavior, when people were pregnant.

They could look and see, based on the purchase of roughly 20 key items, that if you were buying 15 of them and your purchase behavior had changed, they could tell that. The privacy implications of that are somewhat concerning.

An example was that this retailer was sending out coupons related to somebody being pregnant. The teenage girl, who was pregnant, hadn’t told her family yet. The father found the coupons. There was alarm in the household and at the local retail store, when the father went and confronted them.

Privacy implications

There are privacy implications from the use of Big Data. When you get powerful new technology in marketing people’s hands, things sometimes go awry. So I’d throw that out just as a cautionary tale that there is that aspect to this. When you can see across people’s buying transactions, things like that, there are privacy considerations that we’ll have to think about, and that we really need to think about as an industry and a society.

Open Group Panel Explores Changing Field of Risk Management and Analysis in the Era of Big Data

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Panel Explores Changing Field of Risk Management and Analysis in Era of Big Data

This is a transcript of a sponsored podcast discussion on the threats from, and promise of, Big Data in securing enterprise information assets, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference itself is focusing on “Big Data — the transformation we need to embrace today.”

We’re here now with a panel of experts to explore new trends and solutions in the area of risk management and analysis. We’ll learn how large enterprises are delivering risk assessments and risk analysis, and we’ll see how Big Data can be both a source of risk to protect against and a tool for better understanding and mitigating risks.

With that, please join me in welcoming our panel. We’re here with Jack Freund, PhD, the Information Security Risk Assessment Manager at TIAA-CREF. Welcome, Jack.

Jack Freund: Hello Dana, how are you?

Gardner: I’m great. Glad you could join us.

We are also here with Jack Jones, Principal of CXOWARE. He has more than nine years of experience as a Chief Information Security Officer and is the inventor of the Factor Analysis of Information Risk (FAIR) framework. Welcome, Jack.

Jack Jones: Thank you.

Gardner: And we’re also here with Jim Hietala, Vice President, Security for The Open Group. Welcome, Jim.

Jim Hietala: Thanks, Dana.

Gardner: All right, let’s start out with looking at this from a position of trends. Why is the issue of risk analysis so prominent now? What’s different from, say, five years ago? And we’ll start with you, Jack Jones.

Jones: The information security industry has struggled with getting the attention of and support from management and businesses for a long time, and it has finally come around to the fact that the executives care about loss exposure — the likelihood of bad things happening and how bad those things are likely to be.

It’s only when we speak of those issues in terms of risk that we make sense to those executives. And once we do that, we begin to gain some credibility and traction in terms of getting things done.

Gardner: So we really need to talk about this in the terms that a business executive would appreciate, not necessarily an IT executive.

Effects on business

Jones: Absolutely. They’re tired of hearing about vulnerabilities, hackers, and that sort of thing. It’s only when we can talk in terms of the effect on the business that it makes sense to them.

Gardner: Jack Freund, I should also point out that you have more than 14 years in enterprise IT experience. You’re a visiting professor at DeVry University and you chair a risk-management subcommittee for ISACA? Is that correct?

Freund: ISACA, yes.

Gardner: And do you agree?

Freund: The problem that we have as a profession, and I think it’s a big problem, is that we have allowed ourselves to escape the natural trend that the other IT professionals have already taken.

There was a time, years ago, when you could code in the basement, and nobody cared much about what you were doing. But now, largely speaking, developers and systems administrators are very focused on meeting the goals of the organization.

Security has been allowed to miss that boat a little. We have been allowed to hide behind this aura of a protector and of an alerter of terrible things that could happen, without really tying ourselves to the problem that the organizations are facing and how can we help them succeed in what they’re doing.

Gardner: Jim Hietala, how do you see things that are different now than a few years ago when it comes to risk assessment?

Hietala: There are certainly changes on the threat side of the landscape. Five years ago, you didn’t really have hacktivism or this notion of an advanced persistent threat (APT).

That highly skilled attacker taking aim at governments and large organizations didn’t really exist — or didn’t exist to the degree it does today. So that has changed.

You also have big changes to the IT platform landscape, all of which bring new risks that organizations need to really think about. The mobility trend, the Cloud trend, the big-data trend that we are talking about today, all of those things bring new risk to the organization.

As Jack Jones mentioned, business executives don’t want to hear, “I’ve got 15 vulnerabilities in the mobility part of my organization.” They want to understand the risk of bad things happening because of mobility, what we’re doing about it, and what’s happening to risk over time.

So it’s a combination of changes in the threats and attackers, as well as changes to the IT landscape, that means we have to take a different look at how we measure and present risk to the business.

Gardner: Because we’re at a big-data conference, do you share my perception, Jack Jones, that Big Data can be a source of risk and vulnerability, but also that the analytics and business intelligence (BI) tools we’re employing with Big Data can be used to alert you to risks and provide a strong tool for better understanding your true risk setting or environment?

Crown jewels

Jones: You are absolutely right. You think of Big Data and, by definition, it’s where your crown jewels, and everything that leads to crown jewels from an information perspective, are going to be found. It’s like one-stop shopping for the bad guy, if you want to look at it in that context. It definitely needs to be protected. The architecture surrounding it and its integration across a lot of different platforms and such, can be leveraged and probably result in a complex landscape to try and secure.

There are a lot of ways into that data, but you can also leverage that same Big Data architecture as an approach to information security. With log data and other threat and vulnerability data, you should be able to make some significant gains in terms of how well-informed your analyses and decisions are.

Gardner: Jack Freund, do you share that? How does Big Data fit into your understanding of the evolving arena of risk assessment and analysis?

Freund: If we fast-forward it five years, and this is even true today, a lot of people on the cutting edge of Big Data will tell you the problem isn’t so much building everything together and figuring out what it can do. They are going to tell you that the problem is what we do once we figure out everything that we have. This is the problem that we have traditionally had on a much smaller scale in information security. When everything is important, nothing is important.

Gardner: To follow up on that, where do you see the gaps in risk analysis in large organizations? In other words, what parts of organizations aren’t being assessed for risk and should be?

Freund: The big problem that exists largely today in the way that risk assessments are done is the focus on labels. We want to quickly address the low, medium, and high things and know where they are. But the problem is that there are inherent problems in the way that we think about those labels, without doing any of the analysis legwork.

I think what’s really missing is that true analysis. If the system goes offline, do we lose money? If the system becomes compromised, what are the cost-accounting things that will happen that allow us to figure out how much money we’re going to lose?

That analysis work is largely missing. That’s the gap. The gap is if the control is not in place, then there’s a risk that must be addressed in some fashion. So we end up with these very long lists of horrible, terrible things that can be done to us in all sorts of different ways, without any relevance to the overall business of the organization.

Every day, our organizations are out there selling products and offering services, which is, in and of itself, its own risky venture. So tying what we do from an information security perspective to that is critical, not just for the success of the organization, but for the success of our profession.

Gardner: So we can safely say that large companies are probably pretty good at cost-benefit analysis, or they wouldn’t be successful. Now, I guess we need to ask them to take that a step further and do a cost-risk analysis, but in business terms, being mindful that their IT systems might be a much larger part of that than they had once considered. Is that fair, Jack?

Risk implications

Jones: Businesses have been making these decisions, chasing the opportunity, but generally, without any clear understanding of the risk implications, at least from the information security perspective. They will have us in the corner screaming and throwing red flags in there, and talking about vulnerabilities and threats from one thing or another.

But, we come to the table with red, yellow, and green indicators, and on the other side of the table, they’ve got numbers. Well, here is what we expect to earn in revenue from this initiative, and the information security people are saying it’s crazy. How do you normalize the quantitative revenue gain versus red, yellow, and green?

Gardner: Jim Hietala, do you see it in the same red, yellow, green terms, or are there some other frameworks or standard methodologies that The Open Group is looking at to make this a bit more of a science?

Hietala: Probably four years ago, we published what we call the Risk Taxonomy Standard, which is based upon FAIR, the risk analysis framework that Jack Jones invented. So we’re big believers in bringing that level of precision to doing risk analysis. Having just gone through training for FAIR myself, as part of the standards effort that we’re doing around certification, I can say that it really brings a level of precision and a depth of analysis to risk analysis that has frequently been lacking in IT security and risk management.

Gardner: We’ve talked about how organizations need to be mindful that their risks are higher and different than in the past and we’ve talked about how standardization and methodologies are important, helping them better understand this from a business perspective, instead of just a technology perspective.

But I’m curious about the cultural and organizational perspective. Whose job should this fall under? Who is wearing the white hat in the company and can rally the forces of good and get all the bad things managed? Is this a single person, or a cultural or organizational mission? How do you make this work in the enterprise in a real-world way? Let’s go to you, Jack Freund.

Freund: The profession of IT risk management is changing. That profession will have to sit between the business and information security inclusive of all the other IT functions that make that happen.

In order to be successful sitting between these two groups, you have to be able to speak the language of both of those groups. You have to be able to understand profit and loss and capital expenditure on the business side. On the IT risk side, you have to be technical enough to do all those sorts of things.

But I think the sum total of those two things is probably only about 50 percent of the job of IT risk management today. The other 50 percent is communication. Finding ways to translate that language and to understand the needs and concerns of each side of that relationship is really the job of IT risk management.

To answer your question, I think it’s absolutely the job of IT risk management to do that. From my own experiences with the FAIR framework, I can say that using FAIR is the Rosetta Stone for speaking between those two groups.

Necessary tools

It gives you the tools necessary to speak in the insurance and risk terms that the business appreciates. And it gives you the ability to be as technical and, if you will, as nerdy as you need to be in order to talk to IT security and the other IT functions, so that everybody is on the same page and everyone feels like their concerns are represented in the risk-assessment functions that are happening.

Gardner: Jack Jones, can you add to that?

Jones: I agree with what Jack said wholeheartedly. I would add, though, that integration or adoption of something like this is a lot easier the higher up in the organization you go.

For CFOs traditionally, their neck is most clearly on the line for risk-related issues within most organizations. At least in my experience, if you get their ear on this and present the information security data analyses to them, they jump on board, they drive it through the organization, and it’s just brain-dead easy.

If you try to drive it up through the ranks, maybe you get an enthusiastic supporter in the information security organization, especially if it’s below the CISO level, and they try a grassroots sort of effort to bring it in, it’s a tougher thing. It can still work. I’ve seen it work very well, but, it’s a longer row to hoe.

Gardner: There has been a lot of research, and there have been many studies and surveys, on data breaches. What are some of the best sources, and maybe the not-so-good sources, for actually measuring this? How do you know if you’re doing it right? How do you know if you’re moving from yellow to green, instead of to red? To you, Jack Freund.

Freund: There are a couple of things in that question. The first is there’s this inherent assumption in a lot of organizations that we need to move from yellow to green, and that may not be the case. So, becoming very knowledgeable about the risk posture and the risk tolerance of the organization is a key.

That’s part of the official mindset of IT security. When you graduate an information security person today, they are minted knowing that there are a lot of bad things out there, and their goal in life is to reduce them. But that may not be the case. The case may very well be that things are okay now, and we have bigger fish to fry over here that we’re going to focus on. So that’s one thing.

The second thing, and it’s a very good question, is how we know that we’re getting better? How do we trend that over time? Overall, measuring that value for the organization has to be able to show a reduction of a risk or at least reduction of risk to the risk-tolerance levels of the organization.

Calculating and understanding that requires something that I always phrase as we have to become comfortable with uncertainty. When you are talking about risk in general, you’re talking about forward-looking statements about things that may or may not happen. So, becoming comfortable with the fact that they may or may not happen means that when you measure them today, you have to be willing to be a little bit squishy in how you’re representing that.

In FAIR and in other academic works, they talk about using ranges to do that. So things like high, medium, and low could be represented in terms of a minimum, a maximum, and a most likely value. And that tends to be very, very effective. People can respond to that fairly well.
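To make that idea of range-based estimates concrete, here is a minimal, illustrative Python sketch. It is not the official FAIR calculation; the variable names and dollar figures are invented for illustration only. It simply shows how a minimum, most-likely, and maximum estimate can be sampled instead of a single point value or a high/medium/low label.

```python
import random

# Hypothetical calibrated estimates, expressed as ranges rather than
# point values or "high/medium/low" labels. All numbers are made up.
loss_event_frequency = {"minimum": 0.1, "most_likely": 0.5, "maximum": 2.0}  # events per year
loss_magnitude_usd = {"minimum": 50_000, "most_likely": 250_000, "maximum": 2_000_000}

def sample(estimate):
    """Draw one value from a minimum / most-likely / maximum range (triangular)."""
    return random.triangular(estimate["minimum"], estimate["maximum"], estimate["most_likely"])

# One simulated year of loss exposure: event frequency times per-event magnitude.
annual_loss = sample(loss_event_frequency) * sample(loss_magnitude_usd)
print(f"One simulated annual loss: ${annual_loss:,.0f}")
```

Expressing estimates this way keeps the uncertainty visible, which is the point Freund makes about becoming comfortable with forward-looking statements that may or may not come true.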

Gathering data

Jones: With regard to the data sources, there are a lot of people out there doing these sorts of studies, gathering data. The problem that’s hamstringing that effort is the lack of a common set of definitions, nomenclature, and even taxonomy around the problem itself.

You will have one study that will have defined threat, vulnerability, or whatever differently from some other study, and so the data can’t be normalized. It really harms the utility of it. I see data out there and I think, “That looks like that can be really useful.” But, I hesitate to use it because I don’t understand. They don’t publish their definitions, approach, and how they went after it.

There’s just so much superficial thinking in the profession on this. Once we dig under the covers, too often I run into stuff that just can’t be defended. It doesn’t make sense, and therefore the data can’t be used. It’s an unfortunate situation.

I do think we’re heading in a positive direction. FAIR can provide a normalizing structure for that sort of thing. The VERIS framework, which, by the way, is also derived in part from FAIR, has also gained real traction in terms of the quality of the research they have done and the data they’re generating. We’re headed in the right direction, but we’ve got a long way to go.

Gardner: Jim Hietala, we’re seemingly looking at this on a company-by-company basis. But, is there a vertical industry slice or industry-wide slice where we could look at what’s happening to everyone and put some standard understanding, or measurement around what’s going on in the overall market, maybe by region, maybe by country?

Hietala: There are some industry-specific initiatives, and what’s really needed, as Jack Jones mentioned, are common definitions for things like breach, exposure, and loss, so that the data from one organization can be used in another, and so forth. I think about the financial services industry. I know that there is some information sharing, through an organization called the FS-ISAC, about what’s happening to financial services organizations in terms of attacks, losses, and those sorts of things.

There’s an opportunity for that on a vertical-by-vertical basis. But, like Jack said, there is a long way to go on that. In some industries, healthcare for instance, you are so far from that, it’s ridiculous. In the US here, the HIPAA security rule says you must do a risk assessment. So hospitals have done annual risk assessments, stuck the binder on the shelf, and not thought much about information security in between those annual risk assessments. That’s a generalization, but various industries are at different places on a continuum of maturity in their risk management approaches.

Gardner: As we get better with having a common understanding of the terms and the measurements and we share more data, let’s go back to this notion of how to communicate this effectively to those people that can use it and exercise change management as a result. That could be the CFO, the CEO, what have you, depending on the organization.

Do you have any examples? Can we look to an organization that’s done this right, examine their practices, the way they’ve communicated it, and some of the tools they’ve used, and say, “Aha, they’re headed in the right direction; maybe we could follow a little bit”? Let’s start with you, Jack Freund.

Freund: I have worked and consulted for various organizations that have done risk management at different levels. The ones that have embraced FAIR tend to be the ones that overall feel that risk is an integral part of their business strategy. And I can give a couple of examples of scenarios that have played out that I think have been successful in the way they have been communicated.

Coming to terms

The key to keep in mind with this is that one of the really important things is that when you’re a security professional, you’re again trained to feel like you need results. But, the results for the IT risk management professional are different. The results are “I’ve communicated this effectively, so I am done.” And then whatever the results are, are the results that needed to be. And that’s a really hard thing to come to terms with.

I’ve been involved in large-scale efforts to assess risk for a Cloud venture. We needed to move virtually every confidential record that we have to the Cloud in order to be competitive with the rest of our industry. If our competitors are finding ways to utilize the Cloud before us, we can lose out. So, we need to find a way to do that, and to be secure and compliant with all the laws and regulations and such.

Through that scenario, one of the things that came out was that key ownership became really, really important. We had the opportunity to look at the various control structures and we analyzed them using FAIR. What we ended up with was sort of a long-tail risk. Most people will probably do their job right over a long enough period of time. But, over that same long period of time, the odds of somebody making a mistake not in your favor are probably likely, but, not significantly enough so that you can’t make the move.

But, the problem became that the loss side, the side that typically gets ignored with traditional risk-assessment methodologies, was so significant that the organization needed to make some judgment around that, and they needed to have a sense of what we needed to do in order to minimize that.

That became a big point of discussion for us and it drove the conversation away from bad things could happen. We didn’t bury the lead. The lead was that this is the most important thing to this organization in this particular scenario.

So, let’s talk about things we can do. Are we comfortable with it? Do we need to make any sort of changes? What are some control opportunities? How much do they cost? This is a significantly more productive conversation than just, “Here is a bunch of bad things that happen. I’m going to cross my arms and say no.”

Gardner: Jack Jones, examples at work?

Jones: In an organization that I’ve been working with recently, their board of directors said they wanted a quantitative view of information security risk. They just weren’t happy with the red, yellow, green. So, they came to us, and there were really two things that drove them there. One was that they were looking at cyber insurance. They wanted to know how much cyber insurance they should take out, and how do you figure that out when you’ve got a red, yellow, green scale?

They were able to do a series of analyses on a population of the scenarios that they thought were relevant in their world, get an aggregate view of their annualized loss exposure, and make a better informed decision about that particular problem.

Gardner: I’m curious how prevalent cyber insurance is, and whether that is going to be a leveling effect in the industry, where people speak a common language, the equivalent of actuarial tables, but for enterprise security and cyber security?

Jones: One would dream and hope, but at this point, what I’ve seen out there in terms of the basis on which insurance companies are setting their premiums and such is essentially the same old “risk assessment” stuff that the industry has been doing poorly for years. It’s not based on data or any real analysis per se, at least what I’ve run into. What they do is set their premiums high to buffer themselves and typically cover as few things as possible. The question of how much value it’s providing the customers becomes a problem.

Looking to the future

Gardner: We’re coming up on our time limit. So let’s quickly look to the future. Is there such a thing as risk management as a service? Can we outsource this? Is there a way in which moving more of IT into Cloud or hybrid models would mitigate risk, because the Cloud provider would standardize? Then, many players in that environment, those who were buying those services, would be under that same umbrella. Let’s start with you, Jim Hietala. What’s the future of this, and what do the Cloud trends bring to the table?

Hietala: I’d start with a maxim that comes out of the financial services industry, which is that you can outsource the function, but you still own the risk. That’s an unfortunate reality. You can throw things out in the Cloud, but it doesn’t absolve you from understanding your risk and then doing things to manage it to transfer it if there’s insurance or whatever the case may be.

That’s just a reality. Organizations in the risky world we live in are going to have to get more serious about doing effective risk analysis. From The Open Group standpoint, we see this as an opportunity area.

As I mentioned, we’ve standardized the taxonomy piece of FAIR. And we really see an opportunity around the profession going forward to help the risk-analysis community by further standardizing FAIR and launching a certification program for a FAIR-certified risk analyst. That’s in demand from large organizations that are looking for evidence that people understand how to apply FAIR and use it in doing risk analyses.

Gardner: Jack Freund, looking into your crystal ball, how do you see this discipline evolving?

Freund: I always try to consider things as they exist within other systems. Risk is a system of systems. There are a series of pressures that are applied, and a series of levers that are thrown in order to release that sort of pressure.

Risk will always be owned by the organization that is offering that service. If we decide at some point that we can move to the Cloud and all these other things, we need to look to the legal system. There is a series of pressures that they are going to apply, and who is going to own that, and how that plays itself out.

If we look to the Europeans and the way that they’re managing risk and compliance, they’re still as strict as we in the United States think they may be about things, but there’s still a lot of leeway in a lot of the ways that laws are written. You’re still being asked to do things that are reasonable. You’re still being asked to do things that are standard for your industry. But we’d still like the ability to know what that is, and I don’t think that’s going to go away anytime soon.

Judgment calls

We’re still going to have to make judgment calls. We’re still going to have to do 100 things with a budget for 10 things. Whenever that happens, you have to make a judgment call. What’s the most important thing that I care about? And that’s why risk management exists, because there’s a certain series of things that we have to deal with. We don’t have the resources to do them all, and I don’t think that’s going to change over time. Regardless of whether the landscape changes, that’s the one that remains true.

Gardner: The last word to you, Jack Jones. It sounds as if we’re continuing down the path of being mostly reactive. Is there anything you can see on the horizon that would perhaps tip the scales, so that the risk management and analysis practitioners can really become proactive and head things off before they become a big problem?

Jones: If we were to take a snapshot at any given point in time of an organization’s loss exposure, how much risk they have right then, that’s a lagging indicator of the decisions they’ve made in the past, and their ability to execute against those decisions.

We can do some great root-cause analysis around that and ask how we got there. But we can also turn that coin around and ask how good we are at making well-informed decisions and then executing against them, and then ask what that implies from a risk perspective downstream.

If we understand the relationship between our current state and past and future states, and we have those linkages defined, especially if we have an analytic framework underneath it, we can do some marvelous what-if analysis.

What if this variable changed in our landscape? Let’s run a few thousand Monte Carlo simulations against that and see what comes up. What does that look like? Well, then let’s change this other variable and then see which combination of dials, when we turn them, make us most robust to change in our landscape.

But again, we can’t begin to get there, until we have this foundational set of definitions, frameworks, and such to do that sort of analysis. That’s what we’re doing with FAIR, but without some sort of framework like that, there’s no way you can get there.
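As a rough sketch of the kind of what-if analysis Jones describes, the following Python example runs a few thousand Monte Carlo draws over range-based estimates and compares a baseline scenario with a hypothetical one in which a new control halves event frequency. The distributions, figures, and percentile choices are assumptions made for illustration, not drawn from FAIR or from any real analysis.

```python
import random
import statistics

def simulate_annual_loss(freq, magnitude, runs=5_000):
    """Monte Carlo estimate of annualized loss exposure.

    freq and magnitude are (minimum, most_likely, maximum) ranges; each run
    draws an event frequency and a per-event loss magnitude and multiplies them.
    """
    losses = []
    for _ in range(runs):
        events = random.triangular(freq[0], freq[2], freq[1])
        per_event_loss = random.triangular(magnitude[0], magnitude[2], magnitude[1])
        losses.append(events * per_event_loss)
    return losses

# Baseline scenario versus a what-if where a hypothetical control halves frequency.
baseline = simulate_annual_loss(freq=(0.1, 0.5, 2.0), magnitude=(50e3, 250e3, 2e6))
with_control = simulate_annual_loss(freq=(0.05, 0.25, 1.0), magnitude=(50e3, 250e3, 2e6))

for label, losses in [("baseline", baseline), ("with control", with_control)]:
    p90 = statistics.quantiles(losses, n=10)[-1]  # rough 90th percentile
    print(f"{label:12s} median ${statistics.median(losses):,.0f}  p90 ${p90:,.0f}")
```

Turning the “dials” Jones mentions then becomes a matter of changing one input range at a time, re-running the simulation, and seeing which change makes the loss distribution most robust.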

Gardner: I am afraid we’ll have to leave it there. We’ve been talking with a panel of experts on how new trends and solutions are emerging in the area of risk management and analysis. And we’ve seen how new tools for communication and using Big Data to understand risks are also being brought to the table.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference in Newport Beach, California. I’d like to thank our panel: Jack Freund, PhD, Information Security Risk Assessment Manager at TIAA-CREF. Thanks so much Jack.

Freund: Thank you, Dana.

Gardner: We’ve also been speaking with Jack Jones, Principal at CXOWARE.

Jones: Thank you. It’s a pleasure to be here.

Gardner: And last, Jim Hietala, the Vice President for Security at The Open Group. Thanks.

Hietala: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions; your host and moderator through these thought leadership interviews. Thanks again for listening and come back next time.

The Open Group Conference Plenary Speaker Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Keynoter Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

This is a transcript of a sponsored podcast discussion on Big Data analytics and its role in business processes, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group® Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference will focus on big data and the transformation we need to embrace today.

We are here now with one of the main speakers at the conference, Michael Cavaretta, PhD, Technical Leader of Predictive Analytics for Ford Research and Advanced Engineering in Dearborn, Michigan.

We’ll see how Ford has exploited the strengths of big data analytics by directing them internally to improve business results. In doing so, they scour the metrics from the company’s best processes across myriad manufacturing efforts and through detailed outputs from in-use automobiles, all to improve and help transform their business.

Cavaretta has led multiple data-analytics projects at Ford to break down silos inside the company and to best define Ford’s most fruitful datasets. Ford has successfully aggregated customer feedback and extracted all the internal data to predict how new features and technologies will best improve its cars.

As a lead-in to his Open Group presentation, Michael and I will now explore how big data is fostering business transformation by allowing deeper insights into more types of data efficiently, and thereby improving processes, quality control, and customer satisfaction.

With that, please join me in welcoming Michael Cavaretta. Welcome to BriefingsDirect, Michael.

Michael Cavaretta: Thank you very much.

Gardner: Your upcoming presentation for The Open Group Conference is going to describe some of these new approaches to big data and how that offers some valuable insights into internal operations, and therefore making a better product. To start, what’s different now in being able to get at this data and do this type of analysis from, say, five years ago?

Cavaretta: The biggest difference has to do with the cheap availability of storage and processing power, where a few years ago people were very much concentrated on filtering down the datasets that were being stored for long-term analysis. There has been a big sea change with the idea that we should just store as much as we can and take advantage of that storage to improve business processes.

Gardner: That sounds right on the money, but how do we get here? How do we get to the point where we can start using these benefits from a technology perspective, as you say, better storage and networks and being able to move big datasets, that sort of thing, to wring out benefits? What’s the process behind the benefit?

Cavaretta: The process behind the benefits has to do with a sea change in the attitude of organizations, particularly IT within large enterprises. There’s this idea that you don’t need to spend so much time figuring out what data you want to store and worrying about the cost associated with it, and that you should think more about data as an asset. There is value in being able to store it, and being able to go back and extract different insights from it. This really comes from really cheap storage, access to parallel processing machines, and great software.

Gardner: It seems to me that for a long time, the mindset was that data is simply the output from applications, with applications being primary and the data being almost an afterthought. It seems like we’ve sort of flipped that. The data now is perhaps as important, maybe even more important, than the applications. Does that seem to hold true?

Cavaretta: Most definitely, and we’ve had a number of interesting engagements where people have thought about the data that’s being collected. When we talk to them about big data, storing everything at the lowest level of transactions, and what could be done with that, their eyes light up and they really begin to get it.

Gardner: I suppose earlier, when cost considerations and technical limitations were at work, we would just go for a tip-of-the-iceberg level. Now, as you say, we can get almost all the data. So is this a matter of getting at more data, different types of data, bringing in unstructured data, all of the above? How much are you really going after here?

Cavaretta: I like to talk to people about the possibility that big data provides and I always tell them that I have yet to have a circumstance where somebody is giving me too much data. You can pull in all this information and then answer a variety of questions, because you don’t have to worry that something has been thrown out. You have everything.

You may have 100 questions, and each one of the questions uses a very small portion of the data. Those questions may use different portions of the data, a very small piece, but they’re all different. If you go in thinking, “We’re going to answer the top 20 questions and we’re just going to hold data for that,” that leaves so much on the table, and you don’t get any value out of it.

Gardner: I suppose too that we can think about small samples or small datasets and aggregate them or join them. We have new software capabilities to do that efficiently, so that we’re able to not just look for big honking, original datasets, but to aggregate, correlate, and look for a lifecycle level of data. Is that fair as well?

Cavaretta: Definitely. We’re a big believer in mash-ups and we really believe that there is a lot of value in being able to take even datasets that are not specifically big-data sizes yet, and then not go deep, not get more detailed information, but expand the breadth. So it’s being able to augment it with other internal datasets, bridging across different business areas as well as augmenting it with external datasets.

A lot of times you can take something that is maybe a few hundred thousand records or a few million records, and then by the time you’re joining it, and appending different pieces of information onto it, you can get the big dataset sizes.
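
To make the mash-up idea concrete, here is a minimal sketch, not from the interview, of widening a modest internal dataset by joining other internal and external sources; the file names and column names are hypothetical, and Python with pandas is assumed:

    import pandas as pd

    # A few hundred thousand internal records, e.g. warranty claims per vehicle
    claims = pd.read_csv("warranty_claims.csv")      # vin, claim_date, region, component, cost

    # Another internal dataset from a different business area
    build = pd.read_csv("manufacturing_build.csv")   # vin, plant, build_date, shift

    # An external dataset, e.g. daily weather by region
    weather = pd.read_csv("regional_weather.csv")    # region, date, avg_temp, precip

    # Expand the breadth rather than the depth: append attributes onto each record
    wide = (claims
            .merge(build, on="vin", how="left")
            .merge(weather, left_on=["region", "claim_date"],
                   right_on=["region", "date"], how="left"))

    print(wide.shape)   # the joined table is much wider, and can grow much larger

Each individual file here is small, but the joined and appended result quickly approaches big-data sizes, which is the point Cavaretta makes above.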

Gardner: Just to be clear, you’re unique. The conventional wisdom for big data is to look at what your customers are doing, or just the external data. You’re really looking primarily at internal data, while also availing yourself of what external data might be appropriate. Maybe you could describe a little bit about your organization, what you do, and why this internal focus is so important for you.

Cavaretta: I’m part of a larger department that is housed over in the research and advanced-engineering area at Ford Motor Company, and we’re about 30 people. We work as internal consultants, kind of like Capgemini or Ernst & Young, but only within Ford Motor Company. We’re responsible for going out and looking for different opportunities from the business perspective to bring advanced technologies. So, we’ve been focused on the area of statistical modeling and machine learning for I’d say about 15 years or so.

And in that time, we've had a number of engagements where we've talked with different business customers, and people have said, "We'd really like to do this." Then, we'd look at the datasets they have and say, "Wouldn't it be great if we had captured this data earlier? Now we have to wait six months or a year."

These new technologies are really changing the game from that perspective. We can turn on the complete fire hose and not have to worry about that anymore. Everything is coming in. We can record it all. We don't have to worry about whether the data will support a particular analysis, because it's all there. That's really a big benefit of big-data technologies.

Gardner: If you’ve been doing this for 15 years, you must be demonstrating a return on investment (ROI) or a value proposition back to Ford. Has that value proposition been changing? Do you expect it to change? What might be your real value proposition two or three years from now?

Cavaretta: The real value proposition definitely is changing as things are being pushed down in the company to lower-level analysts who are really interested in looking at things from a data-driven perspective. From when I first came in to now, the biggest change has been when Alan Mulally came into the company, and really pushed the idea of data-driven decisions.

Before that, we were getting a lot of interest from people who were very focused on the data they had internally. Afterward, they got a lot of questions from their management and from upper-level directors and vice presidents saying, "We've got all these data assets. We should be getting more out of them." This strategic perspective has really changed a lot of what we've done in the last few years.

Gardner: As I listen to you, Michael, it occurs to me that you're applying this data-driven mentality more deeply. As you pointed out earlier, you're also going after all the data, all the information, whether that's internal or external.

In the case of an automobile company, you're looking at the factory, the dealers, what drivers are doing, and what the devices within the automobile are telling you, factoring that back into design relatively quickly, and then repeating the process. Are we getting to the point where this Holy Grail notion of a total feedback loop across the lifecycle of a major product like an automobile is really within our grasp? Are we getting there, or is this still kind of theoretical? Can we pull it all together and make it a science?

Cavaretta: The theory is there. The question has more to do with the actual implementation and the practicality of it. We're still talking about a lot of data; even with new advanced technologies and techniques, that's a lot of data to store, a lot of data to analyze, and a lot of data to make sure we can mash up appropriately.

And while I think the potential is there and the theory is there, there's also work involved in being able to get the data from multiple sources. Everything you can get back from the vehicle, fantastic. Now, if you marry that up with internal data, is it survey data, is it manufacturing data, is it quality data? Which things do you want to go after first? We can't do everything all at the same time.

Our perspective has been: let's make sure we identify the highest-value, greatest-ROI areas, and then begin to take some of the major datasets that we have, push them to get more detail, mash them up appropriately, and really prove out the value for the technologists.

Gardner: Clearly, there’s a lot more to come in terms of where we can take this, but I suppose it’s useful to have a historic perspective and context as well. I was thinking about some of the early quality gurus like Deming and some of the movement towards quality like Six Sigma. Does this fall within that same lineage? Are we talking about a continuum here over that last 50 or 60 years, or is this something different?

Cavaretta: That’s a really interesting question. From the perspective of analyzing data, using data appropriately, I think there is a really good long history, and Ford has been a big follower of Deming and Six Sigma for a number of years now.

The difference, though, is this idea that you don't have to worry so much upfront about getting the data. If you're doing this right, you have the data right there, and that has some great advantages. You don't have to wait until you've collected enough history to start looking for certain patterns. Then again, it also has some disadvantages, one of which is that you've got so much data that it's easy to find things that could be spurious correlations or models that don't make any sense.

The piece that is required is good domain knowledge, in particular when you're talking about making changes in the manufacturing plant. It's very appropriate to look at things and be able to talk with people who have 20 years of experience and say, "This is what we found in the data. Does this match your intuition?" Then, take that extra step.
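
To illustrate the spurious-correlation risk Cavaretta raises, here is a small, hypothetical Python sketch (not Ford's code): with enough candidate variables, pure noise will correlate with any target by chance, which is exactly why the domain-expert review step matters:

    import numpy as np

    rng = np.random.default_rng(0)
    n_rows, n_cols = 500, 2000               # 500 observations, 2,000 unrelated signals
    noise = rng.normal(size=(n_rows, n_cols))
    target = rng.normal(size=n_rows)         # the target is independent of every column

    corrs = np.array([np.corrcoef(noise[:, j], target)[0, 1] for j in range(n_cols)])
    strongest = np.argsort(np.abs(corrs))[-5:]
    for j in strongest:
        print(f"column {j}: r = {corrs[j]:+.3f}")
    # Several of these "relationships" look meaningful, yet all of them are noise.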

Gardner: Tell me a little about a day in the life of your organization and your team, to let us know what you do. How do you go about making more data available and then reaching some of these higher-level benefits?

Cavaretta: We’re very much focused on interacting with the business. Most of all, we do have to deal with working on pilot projects and working with our business customers to bring advanced analytics and big data technologies to bear against these problems. So we work in kind of what we call push-and-pull model.

We go out and investigate technologies and say these are technologies that Ford should be interested in. Then, we look internally for business customers who would be interested in that. So, we’re kind of pushing the technologies.

From the pull perspective, we've had so many successful engagements, and such good contacts and credibility within the organization, that we've had people come to us and say, "We've got a problem. We know this has been in your domain. Give us some help. We'd love to hear your opinions on this."

So we’ve pulled from the business side and then our job is to match up those two pieces. It’s best when we will be looking at a particular technology and we have somebody come to us and we say, “Oh, this is a perfect match.”

Those types of opportunities have been increasing in the last few years, and we’ve been very happy with the number of internal customers that have really been very excited about the areas of big data.

Gardner: Because this is The Open Group conference, with an audience that's familiar with the IT side of things, I'm curious how this relates to software and software development. Of course, there are so many more millions of lines of code in automobiles these days, with software becoming more important than just about anything else. Are you applying a lot of what you're doing to the software side of the house, or are agile practices, feedback loops, and performance-management issues a separate domain? Is there crossover here?

Cavaretta: There’s some crossover. The biggest area that we’ve been focused on has been picking information, whether internal business processes or from the vehicle, and then being able to bring it back in to derive value. We have very good contacts in the Ford IT group, and they have been fantastic to work with in bringing interesting tools and technology to bear, and then looking at moving those into production and what’s the best way to be able to do that.

A fantastic development has been this idea that we’re using some of the more agile techniques in this space and Ford IT has been pushing this for a while. It’s been fantastic to see them work with us and be able to bring these techniques into this new domain. So we’re pushing the envelope from two different directions.

Gardner: It sounds like those efforts will meet up at some point, given the complementary nature of your activities.

Cavaretta: Definitely.

Gardner: Let’s move on to this notion of the “Internet of things,” a very interesting concept that lot of people talk about. It seems relevant to what we’ve been discussing. We have sensors in these cars, wireless transfer of data, more-and-more opportunity for location information to be brought to bear, where cars are, how they’re driven, speed information, all sorts of metrics, maybe making those available through cloud providers that assimilate this data.

So let’s not go too deep, because this is a multi-hour discussion all on its own, but how is this notion of the Internet of things being brought to bear on your gathering of big data and applying it to the analytics in your organization?

Cavaretta: It is a huge area, not only from the internal process perspective, with RFID tags within the manufacturing plants and out on the plant floor, but also with all of the information that's being generated by the vehicle itself.

The Ford Energi generates about 25 gigabytes of data per hour. So you can imagine selling a couple of million vehicles in the near future, with that amount of data being generated. There are huge opportunities within that, and there are also some interesting opportunities having to do with opening up some of these systems to third-party developers. OpenXC is an initiative we have going on at Research and Advanced Engineering to do exactly that.

We have a lot of data coming from the vehicle. There's a huge number of sensors and processors being added to vehicles. There's data being generated there, as well as communication between the vehicle and your cell phone, and communication between vehicles.

There’s a group over at Ann Arbor Michigan, the University of Michigan Transportation Research Institute (UMTRI), that’s investigating that, as well as communication between the vehicle and let’s say a home system. It lets the home know that you’re on your way and it’s time to increase the temperature, if it’s winter outside, or cool it at the summer time. The amount of data that’s been generated there is invaluable information and could be used for a lot of benefits, both from the corporate perspective, as well as just the very nature of the environment.

Gardner: Just to put a stake in the ground on this, how much data do cars typically generate? Do you have a sense of what's typical now, on average?

Cavaretta: The Energi, according to the latest information that I have, generates about 25 gigabytes per hour. Different vehicles are going to generate different amounts, depending on the number of sensors and processors on the vehicle. But the biggest key has to do with not necessarily where we are right now but where we will be in the near future.

With the amount of information being generated from the vehicles, a lot of it is just internal stuff. The question is how much information should be sent back for analysis and for finding different patterns. That becomes really interesting as you look at external sensors, temperature, humidity. You can know when the windshield wipers go on, and then take that information and mash it up with other external data sources too. It's a very interesting domain.
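
As a rough back-of-the-envelope sketch of why "how much to send back" matters at fleet scale, the only figure below taken from the interview is the 25 GB-per-hour estimate; the fleet size, driving hours, and uplink fraction are illustrative assumptions:

    GB_PER_VEHICLE_HOUR = 25        # quoted in the interview for the Ford Energi
    DRIVING_HOURS_PER_DAY = 1.0     # assumed average per vehicle
    FLEET_SIZE = 2_000_000          # "a couple of million vehicles" (assumed)

    raw_pb_per_day = GB_PER_VEHICLE_HOUR * DRIVING_HOURS_PER_DAY * FLEET_SIZE / 1_000_000
    print(f"Raw data generated on-vehicle: about {raw_pb_per_day:,.0f} PB per day")

    uplink_fraction = 0.001         # assume only 0.1% is summarized and sent back
    print(f"Sent back for central analysis: about {raw_pb_per_day * uplink_fraction * 1000:,.0f} TB per day")

Even at a heavily filtered 0.1 percent, a fleet that size would push tens of terabytes a day back for analysis, which is why the selection question Cavaretta describes is so central.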

Gardner: So clearly, it’s multiple gigabytes per hour per vehicle and probably going much higher.

Cavaretta: Easily.

Gardner: Let’s move forward now for those folks who have been listening and are interested in bringing this to bear on their organizations and their vertical industries, from the perspective of skills, mindset, and culture. Are there standards, certification, or professional organizations that you’re working with in order to find the right people?

It’s a big question. Let’s look at what skills do you target for your group, and what ways you think that you can improve on that. Then, we’ll get into some of those larger issues about culture and mindset.

Cavaretta: The skills that we have in our department, in particular on our team, are in the area of computer science, statistics, and some good old-fashioned engineering domain knowledge. We’ve really gone about this from a training perspective. Aside from a few key hires, it’s really been an internally developed group.

The biggest advantage we have is that we can be very targeted with the training we do. There are such good tools out there, especially in the open-source realm, that we can spin things up at relatively low cost and low risk, and run a number of experiments in the area. That's really the way we push the technologies forward.

Gardner: Why The Open Group? Why is that a good forum for your message, and for your research here?

Cavaretta: The biggest reason is the focus on the enterprise. There are a lot of advantages and a lot of business cases when you look at large enterprises with many systems, where a relatively small improvement can make a large difference on the bottom line.

Talking with The Open Group gives me an opportunity to bring people on board with the idea that you should be looking at a difference in mindset. It's not, "Here's how the data is being generated; try to conceive of some questions we can use it for, and we'll store just that." Instead, let's take everything, worry about it later, and then find the value.

Gardner: I’m sure the viewers of your presentation on January 28 will be gathering a lot of great insights. A lot of the people that attend The Open Group conferences are enterprise architects. What do you think those enterprise architects should be taking away from this? Is there something about their mindset that should shift in recognizing the potential that you’ve been demonstrating?

Cavaretta: It’s important for them to be thinking about data as an asset, rather than as a cost. You even have to spend some money, and it may be a little bit unsafe without really solid ROI at the beginning. Then, move towards pulling that information in, and being able to store it in a way that allows not just the high-level data scientist to get access to and provide value, but people who are interested in the data overall. Those are very important pieces.

The last one is: how do you take a big-data project, where you're not storing data in the traditional business intelligence (BI) framework that an enterprise develops, and then connect it to the BI systems and provide value through those mash-ups? Those are really important areas that still need some work.

Gardner: Another big constituency within The Open Group community is business architects. Is there something about mindset and culture, getting back to that topic, that those business-level architects should consider? Do you really need to change the way you think about planning and resource allocation in a business setting, based on the fruits of what you're doing with big data?

Cavaretta: I really think so. The digital asset that you have can be monetized to change the way the business works, and that could be done by creating new assets that then can be sold to customers, as well as improving the efficiencies of the business.

This idea that everything is going to be very well-defined, and that a lot of work has to be put into making sure the data has high quality up front, needs to change somewhat. As you're pulling the data in, and as you're thinking about long-term storage, the challenge is more about access to the information than about just storing it.

Gardner: It's interesting that you brought up the notion that data becomes a product itself, and perhaps even a profit center.

Cavaretta: Exactly. There are many companies, especially large enterprises, that are looking at their data assets and wondering what they can do to monetize them, not only to pay for efficiency improvements but as a new revenue stream.

Gardner: We’re almost out of time. For those organizations that want to get started on this, are there any 20/20 hindsights or Monday morning quarterback insights you can provide. How do you get started? Do you appoint a leader? Do you need a strategic roadmap, getting this culture or mindset shifted, pilot programs? How would you recommend that people might begin the process of getting into this?

Cavaretta: We’re definitely a huge believer in pilot projects and proof of concept, and we like to develop roadmaps by doing. So get out there. Understand that it’s going to be messy. Understand that it maybe going to be a little bit more costly and the ROI isn’t going to be there at the beginning.

But get your feet wet. Start doing some experiments, and then, as those experiments turn from just experimentation into really providing real business value, that’s the time to start looking at a more formal aspect and more formal IT processes. But you’ve just got to get going at this point.

Gardner: I would think that the competitive forces are out there. If you are in a competitive industry, and those that you compete against are doing this and you are not, that could spell some trouble.

Cavaretta:  Definitely.

Gardner: We’ve been talking with Michael Cavaretta, PhD, Technical Leader of Predictive Analytics at Ford Research and Advanced Engineering in Dearborn, Michigan. Michael and I have been exploring how big data is fostering business transformation by allowing deeper insights into more types of data and all very efficiently. This is improving processes, updating quality control and adding to customer satisfaction.

Our conversation today comes as a lead-in to Michael's upcoming plenary presentation. He is going to be talking on January 28 in Newport Beach, California, as part of The Open Group conference.

You will hear more from Michael and others, the global leaders on big data who are gathering to talk about business transformation from big data at this conference. So a big thank you to Michael for joining us in this fascinating discussion. I really enjoyed it, and I look forward to your presentation on the 28th.

Cavaretta: Thank you very much.

Gardner: And I would encourage our listeners and readers to attend the conference or follow more of the threads in social media from the event. Again, it’s going to be happening from January 27 to January 30 in Newport Beach, California.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator through the thought leadership interviews. Thanks again for listening, and come back next time.


SOA Provides Needed Support for Enterprise Architecture in Cloud, Mobile, Big Data, Says Open Group Panel

By Dana Gardner, BriefingsDirect

There’s been a resurgent role for service-oriented architecture (SOA) as a practical and relevant ingredient for effective design and use of Cloud, mobile, and big data technologies.

To find out why, The Open Group recently gathered an international panel of experts to explore the concept of “architecture is destiny,” especially when it comes to hybrid services delivery and management. The panel shows how SOA is proving instrumental in allowing the needed advancements over highly distributed services and data, when it comes to scale, heterogeneity support, and governance.

The panel consists of Chris Harding, Director of Interoperability at The Open Group, based in the UK; Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture Projects within The Open Group, based in Michigan; and Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group, based in Sweden. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

The full podcast can be found here.

Here are some excerpts:

Gardner: Why this resurgence in the interest around SOA?

Harding: My role in The Open Group is to support the work of our members on SOA, Cloud computing, and other topics. We formed the SOA Work Group back in 2005, when SOA was a real emerging hot topic, and we set up a number of activities and projects. They’re all completed.

I was thinking that the SOA Work Group would wind down, move into maintenance mode, and meet once every few months or so, but we still get a fair attendance at our regular web meetings.

In fact, we’ve started two new projects and we’re about to start a third one. So, it’s very clear that there is still an interest, and indeed a renewed interest, in SOA from the IT community within The Open Group.

Larger trends

Gardner: Nikhil, do you believe that this has to do with some of the larger trends we’re seeing in the field, like Cloud Software as a Service (SaaS)? What’s driving this renewal?

Kumar: What I see driving it is three things. One is the advent of the Cloud and mobile, which requires a lot of cross-platform delivery of consistent services. The second is emerging technologies, mobile, big data, and the need to be able to look at data across multiple contexts.

The third thing that’s driving it is legacy modernization. A lot of organizations are now a lot more comfortable with SOA concepts. I see it in a number of our customers. I’ve just been running a large Enterprise Architecture initiative in a Fortune 500 customer.

At each stage, and at almost every point in that, they’re now comfortable. They feel that SOA can provide the ability to rationalize multiple platforms. They’re restructuring organizational structures, delivery organizations, as well as targeting their goals around a service-based platform capability.

So legacy modernization is a back-to-the-future kind of thing that has come back and is getting adoption. The way it’s being implemented is using RESTful services, as well as SOAP services, which is different from traditional SOA, say from the last version, which was mostly SOAP-driven.

Gardner: Mats, do you think that what’s happened is that the marketplace and the requirements have changed and that’s made SOA more relevant? Or has SOA changed to better fit the market? Or perhaps some combination?

Gejnevall: I think that the Cloud is really a service delivery platform. Companies discover that to be able to use the Cloud services, the SaaS things, they need to look at SOA as their internal development way of doing things as well. They understand they need to do the architecture internally, and if they’re going to use lots of external Cloud services, you might as well use SOA to do that.

Also, if you look at the Cloud suppliers, they also need to do their architecture in some way and SOA probably is a good vehicle for them. They can use that paradigm and also deliver what the customer wants in a well-designed SOA environment.

Gardner: Let’s drill down on the requirements around the Cloud and some of the key components of SOA. We’re certainly seeing, as you mentioned, the need for cross support for legacy, Cloud types of services, and using a variety of protocol, transports, and integration types. We already heard about REST for lightweight approaches and, of course, there will still be the need for object brokering and some of the more traditional enterprise integration approaches.

This really does sound like the job for an Enterprise Service Bus (ESB). So let’s go around the panel and look at this notion of an ESB. Some people, a few years back, didn’t think it was necessary or a requirement for SOA, but it certainly sounds like it’s the right type of functionality for the job.

Loosely coupled

Harding: I believe so, but maybe we ought to consider that in the Cloud context, you’re not just talking about within a single enterprise. You’re talking about a much more loosely coupled, distributed environment, and the ESB concept needs to take account of that in the Cloud context.

Gardner: Nikhil, any thoughts about how to manage this integration requirement around the modern SOA environment and whether ESBs are more or less relevant as a result?

Kumar: In the context of the Cloud, we really see SOA and the concept of service contracts coming to the fore. In that scenario, ESBs play a role as a broker within the enterprise. When we talk about the interaction across Cloud-service providers and Cloud consumers, what we're seeing is that the service provider has its own concept of an ESB within its own internal context.

If you want your Cloud services to be really reusable, the concept of the ESB then becomes more for the routing and the mediation of those services, once they’re provided to the consumer. There’s a kind of separation of concerns between the concept of a traditional ESB and a Cloud ESB, if you want to call it that.

The Cloud context involves more of the need to be able to support, enforce, and apply governance concepts and audit concepts, the capabilities to ensure that the interaction meets quality of service guarantees. That’s a little different from the concept that drove traditional ESBs.

That’s why you’re seeing API management platforms like Layer 7Mashery, or Apigee and other kind of product lines. They’re also coming into the picture, driven by the need to be able to support the way Cloud providers are provisioning their services. As Chris put it, you’re looking beyond the enterprise. Who owns it? That’s where the role of the ESB is different from the traditional concept.

Most Cloud platforms have cost factors associated with locality. If you have truly global enterprises and services, you need to factor in the ability to deal with safe-harbor issues, and you need to factor in variations in law in terms of security and governance.

The platforms that are evolving are starting to provide this out of the box. The service consumer or service provider needs to be able to support those. That's going to become the role of the ESB in the future: to be able to consume a service, to assert a quality-of-service guarantee, and to manage constraints on data-in-flight and data-at-rest.
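
As a minimal illustration of that brokering role, here is a hypothetical Python sketch, not any vendor's ESB product, of a mediation layer that routes a call to an external provider, asserts a quality-of-service target, and records the outcome for governance; the endpoint and SLA value are assumptions:

    import time
    import requests

    SLA_SECONDS = 2.0                                             # assumed QoS target
    PROVIDER_URL = "https://provider.example.com/v1/service"      # hypothetical endpoint

    def mediated_call(payload: dict) -> dict:
        start = time.perf_counter()
        resp = requests.post(PROVIDER_URL, json=payload, timeout=SLA_SECONDS)
        elapsed = time.perf_counter() - start
        resp.raise_for_status()

        # Governance concern: record whether the interaction met its QoS guarantee
        audit = {"provider": PROVIDER_URL, "latency_s": round(elapsed, 3),
                 "sla_met": elapsed <= SLA_SECONDS, "status": resp.status_code}
        print(audit)    # in practice this would go to an audit store, not stdout
        return resp.json()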

Gardner: Mats, are there other aspects of the concept of ESB that are now relevant to the Cloud?

Entire stack

Gejnevall: One of the reasons SOA didn't really take off in many organizations three, four, or five years ago was the need to buy the entire stack of SOA products that the consultancies were asking companies to buy: an ESB, governance tools, business process management tools, and so on. Those were quite large investments just to get your foot in the door of doing SOA.

These days, you can buy that kind of stuff in the Cloud and start playing with it. I did some searches on it today and found a company where you can play with the entire stack, including business tools and everything like that, for zero dollars. Then, you can grow and use more and more of it in your business, but you can start by seeing whether this is something for you.

In the past, the suppliers or the consultants told you that you could do it. You couldn’t really try it out yourself. You needed both the software and the hardware in place. The money to get started is much lower today. That’s another reason people might be thinking about it these days.

Gardner: It sounds as if there’s a new type of on-ramp to SOA values, and the componentry that supports SOA is now being delivered as a service. On top of that, you’re also able to consume it in a pay-as-you-go manner.

Harding: That’s a very good point, but there are two contradictory trends we are seeing here. One is the kind of trend that Mats is describing, where the technology you need to handle a complex stack is becoming readily available in the Cloud.

And the other is the trend that Nikhil mentioned: to go for a simpler style, which a lot of people term REST, for accessing services. It will be interesting to see how those two tendencies play out against each other.

Kumar: I’d like to make a comment on that. The approach for the on-ramp is really one of the key differentiators of the Cloud, because you have the agility and the lack of capital investment (CAPEX) required to test things out.

But as we're evolving with Cloud platforms, I'm also seeing, in a lot of Platform-as-a-Service (PaaS) vendor scenarios, that they're including the ESB in the stack itself. They're providing it in their Cloud fabric. A couple of large players have already done that.

For example, Azure provides that in the forward-looking vision. I am sure IBM and Oracle have already started down that path. A lot of the players are going to provide it as a core capability.

Pre-integrated environment

Gejnevall: Another interesting thing is that they could get a whole environment that’s pre-integrated. Usually, when you buy these things from a vendor, a lot of times they don’t fit together that well. Now, there’s an effort to make them work together.

But some people have put these open-source tools together themselves and put them out on the Cloud, which gives them a pretty cheap platform. Then, they can sell it at a reasonable price, because of the integration of all these things.

Gardner: The Cloud model may be evolving toward an all-inclusive offering. But SOA, by its definition, advances interoperability, to plug and play across existing, current, and future sets of service possibilities. Are we talking about SOA being an important element of keeping Clouds dynamic and flexible — even open?

Kumar: We can think about the OSI seven-layer model. We're evolving in terms of complexity, right? So from an interoperability perspective, we may talk SOAP or REST, for example, but the interaction with AWS, Salesforce, SmartCloud, or Azure would involve using the APIs that each of these platforms provides.

Lock-in

So you could have an AMI, which is an image in the Amazon Web Services environment, for example, and that could support a LAMP stack or an open-source stack. How you interact with it, how you monitor it, how you cluster it, all of those aspects now start factoring in specific APIs, and so that's the lock-in.

From an architect’s perspective, I look at it as we need to support proper separation of concerns, and that’s part of [The Open Group] SOA Reference Architecture. That’s what we tried to do, to be able to support implementation architectures that support that separation of concerns.

There’s another factor that we need to understand from the context of the Cloud, especially for mid-to-large sized organizations, and that is that the Cloud service providers, especially the large ones — Amazon, Microsoft, IBM — encapsulate infrastructure.

If you were to go to Amazon, Microsoft, or IBM and use their IaaS networking capabilities, you’d have one of the largest WAN networks in the world, and you wouldn’t have to pay a dime to establish that infrastructure. Not in terms of the cost of the infrastructure, not in terms of the capabilities required, nothing. So that’s an advantage that the Cloud is bringing, which I think is going to be very compelling.

The other thing is that, from an SOA context, you’re now able to look at it and say, “Well, I’m dealing with the Cloud, and what all these providers are doing is make it seamless, whether you’re dealing with the Cloud or on-premise.” That’s an important concept.

Now, each of these providers and different aspects of their stacks are at significantly different levels of maturity. Many of these providers may find that their stacks do not interoperate with themselves either, within their own stacks, just because they’re using different run times, different implementations, etc. That’s another factor to take in.

From an SOA perspective, the Cloud has become very compelling, because I'm dealing, let's say, with Salesforce.com, and I want to use that same service within the enterprise, let's say, an insurance capability, for Microsoft Dynamics or for SugarCRM. If that capability is exposed as one source of truth in the enterprise, you've now reduced the complexity and have the ability to adopt different Cloud platforms.

What we are going to start seeing is that the Cloud is going to shift from being just one à-la-carte solution for everybody. It’s going to become something similar to what we used to deal with in the enterprise context. You had multiple applications, which you service-enabled to reduce complexity and provide one service-based capability, instead of an application-centered approach.

You’re now going to move the context to the Cloud, to your multiple Cloud solutions, and maybe many implementations in a nontrivial environment for the same business capability, but they are now exposed to services in the enterprise SOA. You could have Salesforce. You could have Amazon. You could have an IBM implementation. And you could pick and choose the source of truth and share it.

So a lot of the core SOA concepts will still apply and are still applying.

Another on-ramp

Gardner: Perhaps yet another on-ramp to the use of SOA is the app store, which allows for discovery and socialization of services, but at the same time provides governance and control?

Kumar: We’re seeing that with a lot of our customers, typically the vendors who support PaaS solution associate app store models along with their platform as a mechanism to gain market share.

The issue you run into with that is that it's okay if it's on your cellphone, your iPad, your tablet PC, or whatever, but once you start having managed apps, for example on Salesforce, or applications being deployed in an Azure or SmartCloud context, you have a high-risk scenario. You don't know how well-architected that application is. It's just like going and buying an enterprise application.

When you deploy it in the Cloud, you really need to understand that particular Cloud PaaS platform to understand the implications in terms of dependencies and cross-dependencies across the apps you have installed. They have real practical implications in terms of maintainability and performance. We've seen that with at least two platforms in the last six months.

Governance becomes extremely important. Because of the low CAPEX implications to the business, the business is very comfortable with going and buying these applications and saying, “We can install X, Y, or Z and it will cost us two months and a few million dollars and we are all set.” Or maybe it’s a few hundred thousand dollars.

They don’t realize the implications in terms of interoperability, performance, and standard architectural quality attributes that can occur. There is a governance aspect from the context of the Cloud provisioning of these applications.

There is another aspect to it, which is governance in terms of the run-time, more classic SOA governance: to measure, assert, and view the cost of these applications in terms of performance, their demands on your infrastructure resources, and your security constraints. Also, are there scenarios where the application itself has a dependency on a daisy chain of multiple external applications, and can you trace the data?

In terms of the context of app stores, they’re almost like SaaS with a particular platform in mind. They provide the buyer with certain commitments from the platform manager or the platform provider, such as security. When you buy an app from Apple, there is at least a reputational expectation of security from the vendor.

What you do not always know is if that security is really being provided. There’s a risk there for organizations who are exposing mission-critical data to that.

The second thing is there is still very much a place for the classic SOA registries and repositories in the Cloud. Only the place is for a different purpose. Those registries and repositories are used either by service providers or by consumers to maintain the list of services they’re using internally.

Different paradigms

There are two different paradigms. The app store is a place where I can go and know that the gas I'm going to get is 85 percent ethanol, versus also having to maintain some basic set of goods at home to make sure I have my dinner on time. They're serving different kinds of roles and different kinds of purposes.

Above all, I think the thing that’s going to become more and more important in the context of the Cloud is that the functionality will be provided by the Cloud platform or the app you buy, but the governance will be a major IT responsibility, right from the time of picking the app, to the time of delivering it, to the time of monitoring it.

Gardner: How is The Open Group allowing architects to better exercise SOA principles, as they’re grappling with some of these issues around governance, hybrid services delivery and management, and the use and demand in their organizations to start consuming more Cloud services?

Harding: The architect’s primary concern, of course, has to be to meet the needs of the client and to do so in a way that is most effective and that is cost-effective. Cloud gives the architect a usability to go out and get different components much more easily than hitherto.

There is a problem, of course, with integrating them and putting them together. SOA can provide part of the solution to that problem, in that it gives a principle of loosely coupled services. If you didn’t have that when you were trying to integrate different functionality from different places, you would be in a real mess.

What The Open Group contributes is a set of artifacts that enable the architect to think through how to meet the client’s needs in the best way when working with SOA and Cloud.

For example, the SOA Reference Architecture helps the architect understand what components might be brought into the solution. We have the SOA TOGAF Practical Guide, which helps the architect understand how to use TOGAF® in the SOA context.

We’re working further on artifacts in the Cloud space, the Cloud Computing Reference Architecture, a notational language for enabling people to describe Cloud ecosystems on recommendations for Cloud interoperability and portability. We’re also working on recommendations for Cloud governance to complement the recommendations for SOA governance, the SOA Governance Framework Standards that we have already produced, and a number of other artifacts.

The Open Group’s real role is to support the architect and help the architect to better meet the needs of the architect client.

From the very early days, SOA was seen as bringing a closer connection between the business and technology. A lot of those promises that were made about SOA seven or eight years ago are only now becoming possible to fulfill, and that business front is what that project is looking at.

We’re also producing an update to the SOA Reference Architectures. We have input the SOA Reference Architecture for consideration by the ISO Group that is looking at an International Standard Reference Architecture for SOA and also to the IEEE Group that is looking at an IEEE Standard Reference Architecture.

We hope that both of those groups will want to work along the principles of our SOA Reference Architecture and we intend to produce a new version that incorporates the kind of ideas that they want to bring into the picture.

We’re also thinking of setting up an SOA project to look specifically at assistance to architects building SOA into enterprise solutions.

So those are three new initiatives that should result in new Open Group standards and guides to complement, as I have described already, the SOA Reference Architecture, the SOA Governance Framework, the Practical Guides to using TOGAF for SOA.

We also have the Service Integration Maturity Model, which can be used to assess SOA maturity. We have a standard on service orientation applied to Cloud infrastructure, and we have a formal SOA Ontology.

Those are the things The Open Group has in place at present to assist the architect, and we are and will be working on three new things: version 2 of the Reference Architecture for SOA, SOA for business technology, and I believe shortly we’ll start on assistance to architects in developing SOA solutions.

Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.


PODCAST: The Open Group FACE™ Consortium is Providing the Future of Airborne Systems

By The Open Group Staff

Recently, Judy Cerenzia, director of The Open Group Future Airborne Capability Environment (FACE™) Consortium, sat down with Defense IQ to talk about FACE and its support for open architectures. The interview is in conjunction with the Interoperable Open Architecture (IOA) Conference taking place in London from October 29-31, 2012.

In the podcast interview, Judy talks about the FACE Consortium, an aviation-focused professional group made up of U.S. industry suppliers, customers and users, and its work to create a technologically appropriate open FACE reference architecture, standards and business models that point the way to the warfighter of tomorrow. Judy also discusses the evolution of FACE standards and business guidelines and what that means to the marketplace.

About IOA 2012

The IOA Conference will take place October 29-31, 2012 in London. The conference looks to make open systems truly open by empowering attendees to base future platform architectures on publicly available standards. More information about IOA is available on its website, and registration is available here.


The Open Group Trusted Technology Forum is Leading the Way to Securing Global IT Supply Chains

By Dana Gardner, Interarbor Solutions

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on Enterprise Architecture (EA), enterprise transformation, and securing global supply chains.

We’re joined in advance by some of the main speakers at the conference to examine the latest efforts to make global supply chains for technology providers more secure, verified, and therefore trusted. We’ll examine the advancement of The Open Group Trusted Technology Forum (OTTF) to gain an update on the effort’s achievements, and to learn more about how technology suppliers and buyers can expect to benefit.

The expert panel consists of Dave Lounsbury, Chief Technical Officer at The Open Group; Dan Reddy, Senior Consultant Product Manager in the Product Security Office at EMC Corp.; Andras Szakal, Vice President and Chief Technology Officer at IBM’s U.S. Federal Group, and also the Chair of the OTTF, and Edna Conway, Chief Security Strategist for Global Supply Chain at Cisco. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why is this an important issue, and why is there a sense of urgency in the markets?

Lounsbury: The Open Group has a vision of boundaryless information flow, and that necessarily involves interoperability. But interoperability doesn’t have the effect that you want, unless you can also trust the information that you’re getting, as it flows through the system.

Therefore, it’s necessary that you be able to trust all of the links in the chain that you use to deliver your information. One thing that everybody who watches the news would acknowledge is that the threat landscape has changed. As systems become more and more interoperable, we get more and more attacks on the system.

As the value that flows through the system increases, there’s a lot more interest in cyber crime. Unfortunately, in our world, there’s now the issue of state-sponsored incursions in cyberspace, whether officially state-sponsored or not, but politically motivated ones certainly.

So there is an increasing awareness on the part of government and industry that we must protect the supply chain, both through increasing technical security measures, which are handled in lots of places, and in making sure that the vendors and consumers of components in the supply chain are using proper methodologies to make sure that there are no vulnerabilities in their components.

I’ll note that the demand we’re hearing is increasingly for work on standards in security. That’s top of everybody’s mind these days.

Reddy: One of the things that we’re addressing is the supply chain item that was part of the Comprehensive National Cybersecurity Initiative (CNCI), which spans the work of two presidents. Initiative 11 was to develop a multi-pronged approach to global supply chain risk management. That really started the conversation, especially in the federal government as to how private industry and government should work together to address the risks there.

In the OTTF, we’ve tried create a clear measurable way to address supply-chain risk. It’s been really hard to even talk about supply chain risk, because you have to start with getting a common agreement about what the supply chain is, and then talk about how to deal with risk by following best practices.

Szakal: One of the observations that I’ve made over the last couple of years is that this group of individuals, who are now part of this standards forum, have grown in their ability to collaborate, define, and rise to the challenges, and work together to solve the problem.

Standards process

Technology supply chain security and integrity are not a set of requirements or an initiative that had been taken on by a standards committee or standards group up to this point. The people who are participating in this aren't your traditional IT standards gurus. They had to learn the standards process. They had to understand how to approach the standardization of best practices, which is how we approach solving this problem.

It’s sharing information. It’s opening up across the industry to share best practices on how to secure the supply chain and how to ensure its overall integrity. Our goal has been to develop a framework of best practices and then ultimately take those codified best practices and instantiate them into a standard, which we can then assess providers against. It’s a big effort, but I think we’re making tremendous progress.

Gardner: Because The Open Group Conference is taking place in Washington, D.C., what’s the current perception in the U.S. Government about this in terms of its role?

Szakal: The government has always taken a prominent role, at least to help focus the attention of the industry.

Now that they’ve corralled the industry and they’ve got us moving in the right direction, in many ways, we’ve fought through many of the intricate complex technology supply chain issues and we’re ahead of some of the thinking of folks outside of this group because the industry lives these challenges and understands the state of the art. Some of the best minds in the industry are focused on this, and we’ve applied some significant internal resources across our membership to work on this challenge.

So the government is very interested in it. We’ve had collaborations all the way from the White House across the Department of Defense (DoD) and within the Department of Homeland Security (DHS), and we have members from the government space in NASA and DoD.

It’s very much a collaborative effort, and I’m hoping that it can continue to be so and be utilized as a standard that the government can point to, instead of coming up with their own policies and practices that may actually not work as well as those defined by the industry.

Conway: Our colleagues on the public side of the public-private partnership that is addressing supply-chain integrity have recognized that we need to do it together.

More importantly, you need only to listen to a statement, which I know has often been quoted, but it’s worth noting again from EU Commissioner Algirdas Semeta. He recently said that in a globalized world, no country can secure the supply chain in isolation. He recognized that, again quoting, national supply chains are ineffective and too costly unless they’re supported by enhanced international cooperation.

Mindful focus

The one thing that we bring to bear here is a mindful focus on the fact that we need a public-private partnership to comprehensively address supply chain integrity in our information and communications technology industry, internationally. That has been very important in our focus. We want to be a one-stop shop of best practices that the world can look at, so that we continue to benefit from commercial technology, which sells globally and is frequently built once or on a limited basis.

Combining that international focus and the public-private partnership is something that’s really coming home to roost in everyone’s minds right now, as we see security value migrating away from an end point and looking comprehensively at the product lifecycle or the global supply chain.

Lounsbury: I had the honor of testifying before the U.S. House Energy and Commerce Committee's Subcommittee on Oversight and Investigations, on the view from within the U.S. Government on IT security.

It was very gratifying to see that the government does recognize this problem. We had witnesses in from the DoD and the Department of Energy (DoE). I was there because I was one of the two voices from industry that the government wanted to tap into to get the industry's best practices into the government.

It was even more gratifying to see that the concerns that were raised in the hearings were exactly the ones that the OTTF is pursuing. How do you validate a long and complex global supply chain in the face of a very wide threat environment, recognizing that it can’t be any single country? Also, it really does need to be not a process that you apply to a point, but something where you have a standard that raises the bar for our security for all the participants in your supply chain.

So it was really good to know that we were on track and that the government, and certainly the U.S. Government, as we’ve heard from Edna, the European governments, and I suspect all world governments are looking at exactly how to tap into this industry activity.

Gardner: Where are we in the progression of the OTTF?

Lounsbury: In the last 18 months, there has been a tremendous amount of progress. The thing that I'll highlight is that early in 2012, the OTTF published a snapshot of the standard. A snapshot is what The Open Group uses to give a preview of what we expect the standard will contain. It has fleshed out two areas, one on tainted products and one on counterfeit products: the standards and best practices needed to secure a supply chain against those two vulnerabilities.

So that’s out there. People can take a look at that document. Of course, we would welcome their feedback on it. We think other people have good answers too. Also, if they want to start using that as guidance for how they should shape their own practices, then that would be available to them.

Normative guidance

That’s the top development topic inside the OTTF itself. Of course, in parallel with that, we’re continuing to engage in an outreach process and talking to government agencies that have a stake in securing the supply chain, whether it’s part of government policy or other forms of steering the government to making sure they are making the right decisions. In terms of exactly where we are, I’ll defer to Edna and Andras on the top priority in the group.

Gardner: Edna, what’s been going on at OTTF and where do things stand?

Conway: We decided that this was, in fact, a comprehensive effort that was going to grow over time and change as the challenges change. We began by looking at two primary areas, which were counterfeit and taint in that communications technology arena. In doing so, we first identified a set of best practices, which you referenced briefly inside of that snapshot.

Where we are today is adding the diligence, and extracting the knowledge and experience from the broad spectrum of participants in the OTTF, to establish a set of rigorous conformance criteria. Those criteria balance flexibility in how one goes about showing compliance with those best practices against assuring the end customer that there is sufficient rigor to ensure that certain requirements are met meticulously, but most importantly, comprehensively.

We have a practice right now where we’re going through each and every requirement or best practice and thinking through the broad spectrum of the development stage of the lifecycle, as well as the end-to-end nodes of the supply chain itself.

This is to ensure that there are requirements that would establish conformance that could be pointed to, by both those who would seek accreditation to this international standard, as well as those who would rely on that accreditation as the imprimatur of some higher degree of trustworthiness in the products and solutions that are being afforded to them, when they select an OTTF accredited provider.

Gardner: Andras, I’m curious where in an organization like IBM that these issues are most enforceable. Where within the private sector is the knowledge and the expertise to reside?

Szakal: Speaking for IBM, we recently celebrated our 100th anniversary in 2011. We’ve had a little more time than some folks to come up with a robust engineering and development process, which harkens back to the IBM 701 and the beginning of the modern computing era.

Integrated process

We have what we call the integrated product development process (IPD), which all products follow, and that includes hardware and software. And we have a very robust quality assurance team, the QSE team, which ensures that the folks are following the practices that are called out. Within each line of business, there exist specific requirements that apply more directly to the architecture of a particular product offering.

For example, the hardware group obviously has additional standards that they have to follow during the course of development that is specific to hardware development and the associated supply chain, and that is true with the software team as well.

The product development teams are integrated with the supply chain folks, and we have what we call the Secure Engineering Framework, of which I was an author and the Secure Engineering Initiative which we have continued to evolve for quite some time now, to ensure that we are effectively engineering and sourcing components and that we’re following these Open Trusted Technology Provider Standard (O-TTPS) best practices.

In fact, the work that we’ve done here in the OTTF has helped to ensure that we’re focused in all of the same areas that Edna’s team is with Cisco, because we’ve shared our best practices across all of the members here in the OTTF, and it gives us a great view into what others are doing, and helps us ensure that we’re following the most effective industry best practices.

Gardner: Dan, at EMC, is the Product Security Office something similar to what Andras explained for how IBM operates? Perhaps you could just give us a sense of how it’s done there?

Reddy: At EMC, in our Product Security Office, we house the enabling expertise to define how our product teams build their products securely. We’re interested in building that in as soon as possible throughout the entire lifecycle. We work with all of our product teams to measure where they are and to help them define their path forward as they look at each of the releases of their products. And we’ve done a lot of work in sharing our practices within the industry.

One of the things this standard does for us, especially in the area of dealing with the supply chain, is give us a way to communicate what our practices are to our customers. Customers are looking for that kind of assurance, and rather than having one-by-one conversations with each customer about what our practices are for a particular organization, this allows us to demonstrate measurement and conformance against a standard to our own customers.

Also, as we flip it around and take a look at our own suppliers, we want to be able to encourage suppliers, which may be small suppliers, to conform to a standard, as we go and select who will be our authorized suppliers.

Gardner: Dave, what would you suggest for those various suppliers around the globe to begin the process?

Publications catalog

Lounsbury: Obviously, the thing I would recommend right off is to go to The Open Group website, go to the publications catalog, and download the snapshot of the OTTF standard. That gives a good overview of the two areas of best practices for protection from tainted and counterfeit products we’ve mentioned on the call here.

That’s the starting point, but of course, the reason it’s very important for the commercial world to lead this is that commercial vendors face the commercial market pressures and have to respond to threats quickly. So the other part of this is how to stay involved and how to stay up to date?

And of course, The Open Group offers two ways to do that. You can come to our quarterly conferences, where we do regular presentations on this topic; in fact, the Washington meeting is themed on supply chain security.

Of course, the best way to do it is to actually be in the room as these standards evolve to meet the current and changing threat environment. So, joining The Open Group and joining the OTTF is absolutely the best way to be on the cutting edge of what’s happening, and to take advantage of the great information you get from the companies represented on this call, who have invested years and years, as Andras said, in developing their own best practices and learning from them.

Gardner: Edna, what’s on the short list of next OTTF priorities?

Conway: You’ve heard us talk about CNCI, and the fact that cybersecurity is on everyone’s minds today. So while taint embodies that to some degree, we probably need to think about partnering in a more comprehensive way under the resiliency and risk umbrella that you heard Dan talk about and really think about embedding security into a resilient supply chain or a resilient enterprise approach.

In fact, to give that some forethought, we have invited to the upcoming conference a colleague with whom I’ve worked for a number of years, a leading expert in enterprise resiliency and supply chain resiliency, to join us and share his thoughts.

He is a professor at MIT, and his name is Yossi Sheffi. Dr. Sheffi will be with us. It’s from that kind of information sharing, as we think in a more comprehensive way, that we begin to gather the expertise that today resides globally in different pockets, whether in academia, government, or private enterprise, and also begin to think about what the next generation is going to look like.

Resiliency, as it was known five years ago, is nothing like supply chain resiliency today, or where we want to take it in the future. You need only look at the U.S. national strategy for global supply chain security to understand that. When it was announced in January of this year at Davos by Secretary Napolitano of DHS, she made it quite clear that we’re now putting security at the forefront, and resiliency is a part of that security endeavor.

So that mindset is a change, given the ubiquitous reliance on communications for everything, everywhere, at all times: not only critical infrastructure, but private enterprise, as well as all of us on a daily basis today. Our communications infrastructure is essential to us.

Thinking about resiliency

Given that security has taken top ranking, we’re probably at the beginning of this stage of thinking about resiliency. It’s not just about continuity of supply, and not just about prevention of the kinds of cyber incidents that we’re worried about, but also about being cognizant of the nation-state or personal concerns that arise from parties who engage in malicious activity for political, religious, or other reasons.

Or, as you know, some of them are just interested in seeing whether or not they can challenge the system, and that causes loss of productivity and a loss of time. In some cases, there are devastating negative impacts to infrastructure.

Szakal: There’s another area, too, that I’m highly focused on but have kind of set aside, and that’s the continued development and formalization of the framework itself: continuing to collect best practices from the industry and providing methods by which vendors can submit and externalize those best practices. So those are a couple of areas that I think would keep me busy for the next 12 months easily.

Gardner: What do IT vendor companies gain if they do this properly?

Secure by Design

Szakal: Especially now, in this day and age, any time you approach security as part of the lifecycle, what we at IBM call Secure by Design, you’re going to be ahead of the market in some ways. You’re going to be in a better place. All of the best practices that we’ve defined are additive in effect. However, the very nature of technology as it exists today means it will probably be another 50 or so years before we see a perfect security paradigm in the way that we all think about it.

So the researchers are going to be ahead of all of the providers in many ways in identifying security flaws and helping us to remediate those practices. That’s part of what we’re doing here, trying to make sure that we continue to keep these practices up to date and relevant to the entire lifecycle of commercial off-the-shelf technology (COTS) development.

So that’s important, but you also have to be realistic about the best practices as they exist today. The bar is going to move as we address future challenges.

************

For more information on The Open Group’s upcoming conference in Washington, D.C., please visit: http://www.opengroup.org/dc2012

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.


Filed under Cybersecurity, Information security, OTTF, Supply chain risk

The Open Group and MIT Experts Detail New Advances in ID Management to Help Reduce Cyber Risk

By Dana Gardner, The Open Group

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on Enterprise Architecture (EA), enterprise transformation, and securing global supply chains.

We’re joined in advance by some of the main speakers at the July 16 conference to examine the relationship between controlled digital identities and cyber risk management. Our panel will explore how the technical and legal support for ID management best practices has been advancing rapidly. And we’ll see how individuals and organizations can better protect themselves through better understanding and management of their online identities.

The panelists are Jim Hietala, vice president of security at The Open Group; Thomas Hardjono, technical lead and executive director of the MIT Kerberos Consortium; and Dazza Greenwood, president of the CIVICS.com consultancy and lecturer at the MIT Media Lab. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is ID management, and how does it form a fundamental component of cybersecurity?

Hietala: ID management is really the process of identifying folks who are logging onto computing services, assessing their identity, looking at authenticating them, and authorizing them to access various services within a system. It’s something that’s been around in IT since the dawn of computing, and it’s something that keeps evolving in terms of new requirements and new issues for the industry to solve.

Particularly as we look at the emergence of cloud and software-as-a-service (SaaS) services, you have new issues for users in terms of identity, because we all have to create multiple identities for every service we access.

You have issues for the providers of cloud and SaaS services, in terms of how they provision, where they get authoritative identity information for the users, and even for enterprises who have to look at federating identity across networks of partners. There are a lot of challenges there for them as well.
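
To make those steps concrete, here is a minimal sketch in Python of the identify, authenticate, and authorize cycle Hietala describes. The in-memory user store, the salt handling, and the permission names are illustrative assumptions, not anything prescribed by The Open Group or by any particular identity product.

    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes) -> bytes:
        # Derive a password hash; PBKDF2 stands in for whatever a real IdP uses.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    # "Provisioning": the service records who a user is and what they may access.
    ALICE_SALT = os.urandom(16)
    USERS = {
        "alice": {
            "salt": ALICE_SALT,
            "password_hash": hash_password("correct horse battery staple", ALICE_SALT),
            "roles": {"billing:read"},
        }
    }

    def authenticate(username: str, password: str) -> bool:
        """Verify that the party logging on holds the registered credential."""
        user = USERS.get(username)
        if user is None:
            return False
        candidate = hash_password(password, user["salt"])
        return hmac.compare_digest(user["password_hash"], candidate)

    def authorize(username: str, permission: str) -> bool:
        """Check whether the identified user may access a given service."""
        user = USERS.get(username)
        return user is not None and permission in user["roles"]

    if authenticate("alice", "correct horse battery staple"):
        print("billing access:", authorize("alice", "billing:read"))

Federation, as Hietala notes, is essentially the same check delegated across organizational boundaries: the service trusts an external identity provider’s assertion instead of its own password table.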

Key theme

Figuring out who is at the other end of that connection is fundamental to all of cybersecurity. As we look at the conference that we’re putting on this month in Washington, D.C., a key theme is cybersecurity — and identity is a fundamental piece of that.

You can look at things that are happening right now in terms of trojans, bank fraud, scammers and attackers wire-transferring money out of companies’ bank accounts, and other things you can point to.

There are failures in client security and in the customers’ security mechanisms on client devices, but I think there are also identity failures. Financial institutions need new approaches to prevent some of those sorts of things from happening. I don’t know if I’d use the word “rampant,” but they are clearly happening all over the place right now. So I think there is a high need to move quickly on some of these issues.

Gardner: Are we at a plateau? Or has ID management been a continuous progression over the past decade?

Hardjono: So it’s been at least a decade since the industry began addressing identity and identity federation. Someone in the audience might recall the Liberty Alliance, or Project Liberty as it was known in its early days.

One notable thing about the industry is that the efforts have been somewhat piecemeal, and the industry as a whole is now reaching the point where a true, correct identity is absolutely needed in transactions, at a time of so many so-called Internet scams.

Gardner: Dazza, is there a casual approach to this, or a professional need? By that, I mean that we see a lot of social media activities, Facebook for example, where people can have an identity and may or may not be verified. That’s sort of the casual side, but it sounds like what we’re really talking about is more for professional business or eCommerce transactions, where verification is important. In other words, is there a division between these two areas that we should consider before we get into it more deeply?

Greenwood: Rather than thinking of it as a division, a spectrum would be a more useful way to look at it. On one side, you have, as you mentioned, a very casual use of identity online, where it may be self-asserted. It may be that you’ve signed a posting or an email.

On the other side, of course, the Internet and other online services are being used to conduct very high-value, highly sensitive, or mission-critical interactions and transactions all the time. When you get toward that end of the spectrum, a lot more information is needed about the identity, authenticating that it really is that person, as Thomas was starting to foreshadow. The authorization, workflow permissions, and accesses are also incredibly important.

In the middle, you have a lot of gradations, based partly on the sensitivity of what’s happening, and partly on culture and context as well. When people are operating within organizations or contexts that are well known and well understood, where there is already a lot of not just technical but also business, legal, and cultural understanding of what happens if something goes wrong, there are the right kinds of supports and risk-management processes.

There are different ways that this can play out. It’s not always just a matter of higher security. It’s really higher confidence, and more trust based on a variety of factors. But the way you phrased it is a good way to enter this topic, which is, we have a spectrum of identity that occurs online, and much of it is more than sufficient for the very casual or some of the social activities that are happening.

Higher risk

But as the economy and our society move into a digital age, ever more fully and at ever-higher speeds, much more important, higher-risk, higher-value interactions are occurring. So we have to revisit how we have been addressing identity, and give it more attention and more careful design of the architectures and rules around it. Then we’ll be able to make that transition more gracefully and with less collateral damage, and really get to the benefits of going online.

Gardner: What’s happening to shore this up and pull it together? Let’s look at some of the big news.

Hietala: I think the biggest recent news is the U.S. National Strategy for Trusted Identities in Cyberspace (NSTIC) initiative. It clearly shows that a large government, the United States government, is focused on the issue and is willing to devote resources to furthering an ID management ecosystem and construct for the future. To me, that’s the biggest recent news.

At a crossroads

Greenwood: We’re just now at a crossroads where finally industry, government, and increasingly the population in general are understanding that there is a different playing field. In the way that we interact, the way we work, the way we do healthcare, the way we do education, the way our social groups cohere and communicate, big parts are happening online.

In some cases, it happens online through the entire lifecycle. What that means now is that a deeper approach is needed. Jim mentioned NSTIC as one of those examples. There are a number of those to touch on that are occurring because of the profound transition that requires a deeper treatment.

NSTIC is the U.S. government’s roadmap to go from its piecemeal approach to a coherent architecture and infrastructure for identity within the United States. It could provide a great model for other countries as well.

People can reuse their identity, and we can start to address what you’re talking about with identity theft and other people taking your ID, and more to the point, how to prove you are who you said you were in order to get that ID back. That’s not always so easy after identity theft, because we don’t yet have an underlying effective identity structure in the United States.

I just came back from the United Kingdom, from a World Economic Forum meeting. I was very impressed by what their cabinet officers are doing with an identity-assurance scheme in large-scale procurement. It’s very consistent with the NSTIC approach in the United States. They can get tens of millions of their citizens using secure, well-authenticated identities across a number of transactions, while always keeping privacy, security, and individual autonomy at the forefront.

There are a number of technology and business milestones that are occurring as well. Open Identity Exchange (OIX) is a great group that’s beginning to bring industry and other sectors together to look at their approaches and technology. We’ve had Security Assertion Markup Language (SAML). Thomas is co-chair of the PC, and that’s getting a facelift.

That approach is being brought to mass scale with OpenID Connect, which builds on OpenID and OAuth. There are a great number of technology innovations that are coming online.

Legally, there are also some very interesting, newsworthy harbingers. Some of it is really just a deeper usage of statutes that were passed a few years ago, such as the Uniform Electronic Transactions Act and the Electronic Signatures in Global and National Commerce Act in the U.S.

There are the eSignature Directive and others in Europe and in the rest of the world that have enabled interactions online and dealt with identity and signatures, but have left to the private sector and to culture which technologies, approaches, and solutions will be used.

Now, we’re not only getting one-off solutions, but architectures for a number of different solutions, so that whole sectors of the economy and segments of society can more fully go online. Practically everywhere you look, you see news and signs of this transition that’s occurring, an exciting time for people interested in identity.

Gardner: What’s most new and interesting from your perspective on what’s being brought to bear on this problem, particularly from a technology perspective?

Two dimensions

Hardjono: It’s along two dimensions. The first one is within the Kerberos Consortium. We have a number of people coming from the financial industry. They all have the same desire, and that is to scale their services to the global market, basically to sign up new customers abroad, outside the United States. In wanting to do so, they’re facing a question of identity: how do we assert that somebody in a country is truly who they say they are?

That introduces a number of difficult technical problems. The second dimension, closer to home and maybe at a smaller scale, is user consent. The OpenID exchange and the OpenID Connect specifications have been completed, and people can do single sign-on using technology such as OAuth 2.0.

The next big thing is how an attribute provider, such as a bank or a telco, who has data about me, can share that data with other partners in the industry, and across sectors of the industry, with my expressed consent, in a digital manner.
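
As a rough illustration of how that consent-driven sharing works mechanically, the sketch below shows an OAuth 2.0 authorization-code exchange followed by an attribute request. The endpoints, client credentials, and scopes are placeholders rather than any real provider’s API, and real deployments add safeguards (PKCE, token validation, and so on) that are omitted here.

    import requests

    # Hypothetical endpoints standing in for an attribute provider such as a bank or telco.
    TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"
    USERINFO_ENDPOINT = "https://idp.example.com/oauth2/userinfo"

    def exchange_code_for_token(auth_code: str) -> dict:
        """After the user consents at the provider, trade the returned
        authorization code for an access token (RFC 6749, section 4.1)."""
        response = requests.post(
            TOKEN_ENDPOINT,
            data={
                "grant_type": "authorization_code",
                "code": auth_code,
                "redirect_uri": "https://client.example.com/callback",
                "client_id": "example-client-id",
                "client_secret": "example-client-secret",
            },
            timeout=10,
        )
        response.raise_for_status()
        # With OpenID Connect the response also carries an id_token for single sign-on.
        return response.json()

    def fetch_consented_attributes(access_token: str) -> dict:
        """The token is scoped by the user's consent, so only the agreed
        attributes come back from the attribute provider."""
        response = requests.get(
            USERINFO_ENDPOINT,
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()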

Gardner: Tell us a bit about the MIT Core ID approach and how this relates to the Jericho Forum approach.

Greenwood: I would defer to Jim of The Open Group to speak more authoritatively on the Jericho Forum, which is part of The Open Group. But, in general, the Jericho Forum is a group of security experts from industry and beyond who have done some great work in the past on de-perimeterized security and other foundational work.

In the last few years, they’ve been really focused on identity, coming to realize that identity is at the center of what one would have to solve in order to have a workable approach to security. It’s necessary, but not sufficient, for security. We have to get that right.

To their credit, they’ve come up with a remarkably good list of simple, understandable principles that they call the Jericho Forum Identity Commandments, which I strongly commend to everybody to read.

It puts forward a vision of an approach to identity that is very consistent with an approach that I’ve been exploring here at MIT for some years. A person would have a core identity, a core ID, and could from that create more than one persona. You may have a work persona, an eCommerce persona, maybe a social-networking persona, and so on. Some people may want a separate political persona.

You could cluster all of the accounts, interactions, services, attributes, and so forth directly related to each of those individual personas, and not be in the situation that we’re almost blindly backing into right now, where, with a lot of the solutions in the market, different aspects of your life will merge, sometimes unintentionally or even counter-intentionally.

Good architecture

Sometimes, that’s okay. Sometimes, in fact, we need to be able to keep different parts of life separate. That’s part of privacy and can be part of security. It’s also just part of autonomy, and it’s good architecture. So the Jericho Forum has the commandments.

Many years ago, at MIT, we had a project called the Identity Embassy here in the Media Lab, where we put forward some simple prototypes and ideas, ways you could do that. Now, with all the recent activity we mentioned earlier toward full-scale usage of architectures for identity in the U.S. with NSTIC and around the world, we’re taking a stronger, deeper run at this problem.

Thomas and I have been collaborating across different parts of MIT. We’re putting out what we think is a very exciting and workable way that you can, in a high-security manner, but also quite usably, have these core identifiers for individuals and link them to personas, while preventing that link from being traced back to the core ID or across the different personas, so that you can get the benefits when you want them while keeping the personas separate.

It also allows for many flexible business models and other personalization and privacy services, but we can get into that more in the fullness of time. In general, that’s what’s happening right now, and we couldn’t be more excited about it.
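
One way to picture the separation Greenwood describes, purely as an illustration and not as the actual MIT or Jericho Forum design, is to derive each persona identifier from the core ID with a keyed one-way function, so a relying party that sees one persona cannot walk back to the core ID or link it to the person’s other personas without the key.

    import hashlib
    import hmac
    import secrets

    CORE_ID = secrets.token_hex(16)           # the individual's private core identifier
    DERIVATION_KEY = secrets.token_bytes(32)  # held by the individual or their agent

    def derive_persona(core_id: str, persona_label: str) -> str:
        # A keyed one-way digest: easy to compute with the key, infeasible to invert.
        message = f"{core_id}:{persona_label}".encode()
        return hmac.new(DERIVATION_KEY, message, hashlib.sha256).hexdigest()

    work_persona = derive_persona(CORE_ID, "work")
    shopping_persona = derive_persona(CORE_ID, "ecommerce")
    social_persona = derive_persona(CORE_ID, "social")

    # Each persona can accumulate its own accounts and attributes; without the
    # derivation key, nothing ties the three identifiers to each other or to CORE_ID.
    print(len({work_persona, shopping_persona, social_persona}) == 3)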

Hardjono: For a global infrastructure for core identities to be able to develop, we definitely need collaboration between the governments of the world and the private sector. Looking at this problem, we were searching back in history to find an analogy, and the best analogy we could find was the rollout of a DNS infrastructure and the IP address assignment.

It’s not perfect and it’s got its critics, but the idea that you could split up blocks of IP addresses and have them sold and resold by private industry has really allowed the Internet to scale. It’s hitting limitations, but of course IPv6 is on the horizon; in fact, it’s here today.

So we were thinking along the same philosophy, where core identifiers could be arranged in blocks and handed out to the private sector, so that they can assign, sell, or manage them on behalf of people, whether Internet savvy or not, such as my mom. So we have a number of challenges in that phase.
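
By analogy with the IP-address model Hardjono invokes, and only as a toy illustration rather than any proposed registry, handing out blocks of core identifiers to private-sector registrars might look like this:

    from dataclasses import dataclass, field

    @dataclass
    class IdentifierRegistry:
        block_size: int = 1_000_000
        next_start: int = 0
        allocations: dict = field(default_factory=dict)  # registrar -> allocated block

        def allocate_block(self, registrar: str) -> range:
            """Hand a contiguous block of core identifiers to a registrar,
            who then assigns or manages them on behalf of individuals."""
            block = range(self.next_start, self.next_start + self.block_size)
            self.allocations[registrar] = block
            self.next_start += self.block_size
            return block

    registry = IdentifierRegistry()
    telco_block = registry.allocate_block("example-telco")
    bank_block = registry.allocate_block("example-bank")
    print(telco_block.start, telco_block.stop, bank_block.start, bank_block.stop)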

Gardner: Does this relate to the MIT Model Trust Framework System Rules project?

Greenwood: The Model Trust Framework System Rules project that we are pursuing at MIT is a very important aspect of what we’re talking about. Thomas and I have talked somewhat about the technical and practical aspects of core identifiers and core identities, but there is a very important business and legal layer in there as well.

So these trust framework system rules are ways to begin to approach the complete interconnected set of dimensions necessary to roll out these kinds of schemes at the legal, business, and technical layers.

They come from very successful examples in the past, where organizations have federated identity with more traditional approaches such as SAML. There are examples of those trust framework system rules available at the business, legal, and technical levels.

Right now it’s at CIVICS.com, and soon, when we have our model from MIT under a Creative Commons approach, we’ll take a lot of the best of what’s come before and codify it in a rational way, so that business, legal, and technical rules can be aligned in a more granular way and fit well together. We’ll put out a model that we think will be very helpful for the identity solutions of today that are looking to federate according to NSTIC and similar models. It absolutely would be applicable to the core identity and persona architecture and infrastructure that Thomas, I, and the Jericho Forum are postulating.

Hardjono: Looking back 10-15 years, we engineers came up with all sorts of solutions and standardized them. What’s really missing are the business models, the business cases, and of course the legal side.

How can a business make revenue out of the management of identity-related aspects, the management of attributes, and so on? And how can it do so in a manner that doesn’t violate the user’s privacy, but is still user-centric, in the sense that the user needs to give consent and can withdraw consent? We’re trying to develop an infrastructure where everybody is protected.

Gardner: The Open Group, being a global organization focused on the collaboration process behind the establishment of standards, sounds well placed to bring these aspects to your audience and to start the collaboration and discussion that could lead to fuller implementation. Is that the plan, and is that what we’re expecting to hear more of at the conference next month?

Hietala: It is the plan, and we do get a good mix at our conferences and events of folks from all over the world, from government organizations and large enterprises as well. So it tends to be a good mixing of thoughts and ideas from around the globe on whatever topic we’re talking about — in this case identity and cybersecurity.

At the Washington, D.C. conference, we have a mix of discussions. The kick-off speaker is a fellow by the name of Joel Brenner, who has written a book, America the Vulnerable, which I would recommend. He was inside the National Security Agency (NSA), and he’s been involved in fighting a lot of the cyber attacks. He has really good insight into what’s actually happening on the threat side and on defending against the threat. So that will be a very interesting discussion. [Read an interview with Joel Brenner.]

Then, on Monday, we have conference presentations in the afternoon looking at cybersecurity and identity, including Thomas and Dazza presenting on some of the projects that they’ve mentioned.

Cartoon videos

Then, we’re also bringing to that event, for the first time, a series of cartoon videos that were produced for the Jericho Forum. They describe a lot of the commandments that Dazza mentioned in a more approachable way, so they’re hopefully understandable to laymen and to folks without as much understanding of all the identity mechanisms that are out there. So, yeah, that’s what we’re hoping to do.

Gardner: Perhaps we could now better explain what NSTIC is and does?

Greenwood: The best person to speak about NSTIC in the United States right now is probably President Barack Obama, because he is the person who signed the policy. Our president and the administration have taken a needed and, I think, very well-conceived approach to getting industry involved with other stakeholders in creating the architecture that’s going to be needed for identity for the United States, as a model for the world, and also in how to interact with other models.

Jeremy Grant is in charge of the program office and he is very accessible. So if people want more information, they can find Jeremy online easily at nist.gov/nstic. And nstic.us also has more information.

In general, NSTIC is a strategy document and a roadmap for how a national ecosystem can emerge, comprising a governing body. They’re beginning to put that together this very summer, with 13 different stakeholder groups, each of which will self-organize and elect or appoint a person: industry, government, state and local government, academia, privacy groups, individuals (which is terrific), and so forth.

That governance group will come up with more of the details in terms of what the accreditation and trust marks look like, the types of technologies and approaches that would be favored according to the general principles I hope everyone reads within the NSTIC document.

At a lower level, Congress has appropriated more than $10 million to work with the White House on a number of pilots, each under a million and a half dollars for a year or two, where individual proofs of concept, technologies, or approaches to trust frameworks will be piloted and put out where they can be used in the market.

In general, by this time two months from now, we’ll know a lot more about the governing body, once it’s been convened, and about the pilots, once those contracts have been awarded and grants have been concluded. What we can say right now is that the way it’s going to come together is with trust framework system rules, the same exact type of entity that we are doing a model of, to help facilitate people’s understanding and to provide templates and well-thought-through structures that they can pull down and, in turn, use as a starting point.

Circle of trust

So it will be industry by industry, sector by sector, but also what we call circle of trust by circle of trust. Folks will come up with their own specific rules to define exactly how they will meet these requirements. They can get a trust mark, be interoperable with other trust-framework-consistent rules, and eventually you’ll get a clustering of those, which will lead to an ecosystem.

The ecosystem is not one-size-fits-all. It’s a lot of systems that interoperate in a healthy way and can adapt and evolve over time. A lot more, as I said, is available on nstic.us and nist.gov/nstic, and it’s exciting times. It’s certainly the best government document I have ever read, and I’ll be very excited to see how it comes out.

Gardner: What’s coming down the pike that’s going to make this yet more important?

Hietala: I would turn to the threat and attacks side of the discussion and say that, unfortunately, we’re likely to see more headlines of organizations being breached, of identities being lost, stolen, and compromised. I think it’s going to be more bad news that’s going to drive this discussion forward. That’s my take based on working in the industry and where it’s at right now.

Hardjono: I mentioned user consent going forward. I think this is increasingly becoming an important, if small, step for the industry to address and resolve, through efforts like the User-Managed Access (UMA) working group within the Kantara Initiative.

Folks are trying to solve the problem of how to share resources. How can I legitimately share not only my photos on Flickr, but also allow my bank to share some of my attributes with partners of the bank, with my consent? It’s a small step, but it’s a pretty important step.
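
The sketch below illustrates only the basic idea of consent-gated attribute release, not the UMA protocol itself; the users, partners, and attribute names are made up for the example.

    from datetime import datetime, timezone

    CONSENTS = {
        # (user, partner) -> set of attributes the user has agreed to share
        ("alice", "example-insurer"): {"postal_code", "account_age_years"},
    }

    ATTRIBUTES = {
        "alice": {"postal_code": "02139", "account_age_years": 7, "balance": 1234.56},
    }

    def release_attributes(user: str, partner: str, requested: set) -> dict:
        """Release only the intersection of what was requested and what the
        user has consented to share with this specific partner."""
        allowed = CONSENTS.get((user, partner), set())
        granted = requested & allowed
        audit = {"user": user, "partner": partner, "granted": sorted(granted),
                 "at": datetime.now(timezone.utc).isoformat()}
        print("audit:", audit)  # a real provider would log this durably
        return {name: ATTRIBUTES[user][name] for name in granted}

    def withdraw_consent(user: str, partner: str) -> None:
        """Consent is revocable: once withdrawn, nothing is released."""
        CONSENTS.pop((user, partner), None)

    print(release_attributes("alice", "example-insurer", {"postal_code", "balance"}))
    withdraw_consent("alice", "example-insurer")
    print(release_attributes("alice", "example-insurer", {"postal_code"}))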

Greenwood: Keep your eyes on UMA out of Kantara. Keep looking at OASIS, as well, and the work that’s coming with SAML and some of the Model Trust Framework System Rules.

Most important thing

In my mind the most strategically important thing that will happen is OpenID Connect. They’re just finalizing the standard now, and there are some reference implementations. I’m very excited to work with MIT, with our friends and partners at MITRE Corporation and elsewhere.

That’s going to allow masses of individuals to have more ready access to identities that they can reuse in a great number of places. Right now, it’s a little bit catch-as-catch-can. You’ve got your Google ID or Facebook ID, and a few others. It’s not something that a lot of industries or others are really quite willing to accept or understand yet.

They’ve done a complete rethink of that, and they use the best lessons learned from SAML and a bunch of other federated technology approaches. I believe this one is going to change how identity is done and what’s possible.

They’ve done such a great job on it, I might add. It fits hand in glove with the types of Model Trust Framework System Rules approaches, with a layer of UMA on top, and it is completely consistent with the architecture of a future infrastructure where people would have a core ID and more than one persona, which could be expressed as OpenID Connect credentials that are reusable by design across great numbers of relying parties, getting us where we want to be with single sign-on.

So it’s exciting times. If it’s one thing you have to look at, I’d say do a Google search and get updates on OpenID Connect and watch how that evolves.

************

For more information on The Open Group’s upcoming conference in Washington, D.C., please visit: http://www.opengroup.org/dc2012

Dana Gardner is president and principal analyst at Interarbor Solutions, an enterprise IT analysis, market research, and consulting firm. Gardner, a leading identifier of software and Cloud productivity trends and new IT business growth opportunities, honed his skills and refined his insights as an industry analyst, pundit, and news editor covering the emerging software development and enterprise infrastructure arenas for the last 18 years.


Filed under Conference, Cybersecurity