Monthly Archives: January 2013

First Open Group Webjam — Impact of Cloud Computing on our Resumes

By E.G. Nadhan, HP

The Open Group conducted its first-ever webjam within The Cloud Work Group last month. A webjam is an informal forum in which members of a work group who share a common interest hold an interactive brainstorming debate on a topic of their choice. Consider it to be a panel discussion — except everyone on the call is part of the panel! I coordinated the first webjam for The Cloud Work Group; the topic was “What will Cloud do to your resume?”

The webjam was attended by active members of The Cloud Work Group, including:

  • Sanda Morar and Som Balakrishnan from Cognizant Technologies
  • Raj Bhoopathi and E.G. Nadhan from HP
  • Chris Harding from The Open Group

We used this post on the ECIO Forum Blog to set the context for this webjam. Click here for the recording. Below is a brief summary of the key takeaways:

  • Cloud Computing is causing significant shifts that could impact the extent to which some roles exist in the future—especially the role of the CTO and the CIO. The CIO must become a cooperative integrator across a heterogeneous mix of technologies, platforms and services that are provisioned on or off the cloud.
  • Key Cloud characteristics—such as multi-tenancy, elasticity, scalability, etc.—are likely to be called out in resumes. There is an accelerated push for Cloud Architects who are supposed to ensure that aspects of the Cloud are consistently addressed across all architectural layers.
  • DevOps is expanding the role of the developer into operations. Developers’ resumes are more likely to call out this experience in Cloud Computing environments.
  • Business users are likely to call out their experience directly procuring Cloud services.
  • Application testers are more likely to address interoperability between the services provided—including the validation of the projected service levels—which could, in turn, show up on their resumes.
  • Operations personnel are likely to call out their experience with tools that can seamlessly monitor physical and virtual resources.

The recording provides much more detail.

I really enjoyed the webjam. It provided an opportunity to share the perspectives of individuals from numerous member companies of The Open Group on a topic germane to us as IT professionals as well as to The Cloud Work Group.

Are there other roles that are impacted? Are there any other changes to the content of the resumes in the future? Please listen to the recording and let me know your thoughts.

If you are a member of the Cloud Work Group, I look forward to engaging in an interesting discussion with you on other topics in this area!

A version of this blog post was originally published on HP’s Journey through Enterprise IT Services blog.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has more than 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.

 


Filed under Cloud, Cloud/SOA

Protecting Data is Good. Protecting Information Generated from Big Data is Priceless

By E.G. Nadhan, HP

This was the key message that came out of The Open Group® Big Data Security Tweet Jam on Jan 22 at 9:00 a.m. PT, which addressed several key questions centered on Big Data and security. Here is my summary of the observations made in the context of these questions.

Q1. What is Big Data security? Is it different from data security?

Big Data security is more about information security. Big Data typically lives outside the corporate perimeter, and IT is not yet prepared to adequately monitor the sheer volume of data involved, measured in brontobytes. Long-term storage periods could also violate compliance mandates. Note that storing Big Data in the Cloud changes the game, with increased risks of leaks, loss and breaches.

Information resulting from the analysis of the data is even more sensitive and therefore carries higher risk, especially when it is Personally Identifiable Information on the Internet of devices, which requires a balance between utility and privacy.

At the end of the day, it is all about governance or as they say, “It’s the data, stupid! Govern it.”

Q2. Any thoughts about security systems as producers of Big Data, e.g., voluminous systems logs?

Data gathered from information security logs is valuable, but the rules for protecting it are the same. Security logs are also a good source for detecting patterns of customer usage.

Q3. Most Big Data stacks have no built-in security. What does this mean for securing Big Data?

There is an added level of complexity because Big Data spans applications, networks and all endpoints. Standards for establishing identity, metadata and trust would go a long way. The quality of the data can also be a security issue: has it been tampered with? Are you being gamed? Note that enterprises have varying security needs around their business data.
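One concrete way to address the tampering question raised above is to attach a message authentication code to each record when it is produced and re-check it on ingestion. The sketch below is only illustrative; the key, function names and record format are hypothetical, and a real deployment would fetch keys from a key-management service rather than embedding them:

```python
import hmac
import hashlib

# Hypothetical shared key; in practice this comes from a key-management service.
SECRET_KEY = b"replace-with-a-managed-key"

def sign_record(record: bytes) -> str:
    """Compute an HMAC tag when the record is produced."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, tag: str) -> bool:
    """Recompute the tag on ingestion; a mismatch means the record was altered."""
    expected = hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A consumer that verifies tags before analysis can reject records that were modified in transit, which is one small step toward the trust standards mentioned above.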

Q4. How is the industry dealing with the social and ethical uses of consumer data gathered via Big Data?

Big Data is still nascent, and ground rules for handling the information are yet to be established. Privacy issues will be key when companies market to consumers. Organizations are seeking forgiveness rather than permission. Regulatory bodies are getting involved due to consumer pressure. Abuse of power from access to Big Data is likely to trigger more incentives to attack or embarrass. Note that ‘abuse’ to some is just business to others.

Q5. What lessons from basic data security and cloud security can be implemented in Big Data security?

Security testing is even more vital for Big Data. Limit access to specific devices, not just user credentials. Don’t assume security through obscurity for sensors producing Big Data inputs – they will be targets.

Q6. What are some best practices for securing Big Data? What are orgs doing now and what will organizations be doing 2-3 years from now?

Current best practices include:

  • Treat Big Data as your most valuable asset
  • Encrypt everything by default, with proper key management, policy enforcement and tokenized logs
  • Ask your Cloud and Big Data providers the right questions – ultimately, YOU are responsible for security
  • Assume data needs verification and cleanup before it is used for decisions if you are unable to establish trust with the data source
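To illustrate the “tokenized logs” practice above: sensitive values can be swapped for opaque tokens before a log line is written, with the token-to-value mapping held in a separate, better-protected store. This is a minimal sketch, not a production design; a real system would encrypt and persist the vault, and the class and field names here are invented for the example:

```python
import secrets

class Tokenizer:
    """Minimal tokenizer: sensitive values leave the log as opaque tokens."""

    def __init__(self):
        # token -> original value; in practice, an encrypted, access-controlled store
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

t = Tokenizer()
# The log line carries only the token; the real email stays in the vault.
log_line = f"user={t.tokenize('alice@example.com')} action=login"
```

Analysts can still correlate events by token, while detokenization remains a privileged operation.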

Future best practices:

  • Enterprises treat information like mere data today; in the future, they will respect it as their most valuable asset
  • CIOs will eventually become Chief Officer for Information

Q7. We’re nearing the end of today’s tweet jam. Any last thoughts on Big Data security?

Adrian Lane who participated in the tweet jam will be keynoting at The Open Group Conference in Newport Beach next week and wrote a good best practices paper on securing Big Data.

I have been part of multiple tweet chats specific to security, as well as one on Information Optimization. Recently, I also conducted the first Open Group webjam, internal to The Cloud Work Group. What I liked about this Big Data Security Tweet Jam is that it brought two key domains together, highlighting their intersection points. There were great contributions from subject matter experts, prompting participants to think about one domain in the context of the other.

In a way, this post is actually synthesizing valuable information from raw data in the tweet messages – and therefore needs to be secured!

What are your thoughts on the observations made in this tweet jam? What measures are you taking to secure Big Data in your enterprise?

I really enjoyed this tweet jam and would strongly encourage you to actively participate in upcoming tweet jams hosted by The Open Group.  You get to interact with a wide spectrum of knowledgeable practitioners listed in this summary post.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has more than 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.

 


Filed under Tweet Jam

The Open Group Conference Plenary Speaker Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Keynoter Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

This is a transcript of a sponsored podcast discussion on Big Data analytics and its role in business processes, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group® Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference will focus on big data and the transformation we need to embrace today.

We are here now with one of the main speakers at the conference, Michael Cavaretta, PhD, Technical Leader of Predictive Analytics for Ford Research and Advanced Engineering in Dearborn, Michigan.

We’ll see how Ford has exploited the strengths of big data analytics by directing them internally to improve business results. In doing so, they scour the metrics from the company’s best processes across myriad manufacturing efforts and through detailed outputs from in-use automobiles, all to improve and help transform their business.

Cavaretta has led multiple data-analytics projects at Ford to break down silos inside the company and to define Ford’s most fruitful datasets. Ford has successfully aggregated customer feedback and extracted internal data to predict how new features and technologies will best improve its cars.

As a lead-in to his Open Group presentation, Michael and I will now explore how big data is fostering business transformation by allowing deeper insights into more types of data efficiently, and thereby improving processes, quality control, and customer satisfaction.

With that, please join me in welcoming Michael Cavaretta. Welcome to BriefingsDirect, Michael.

Michael Cavaretta: Thank you very much.

Gardner: Your upcoming presentation for The Open Group Conference is going to describe some of these new approaches to big data and how they offer valuable insights into internal operations, and therefore help make a better product. To start, what’s different now in being able to get at this data and do this type of analysis, compared to, say, five years ago?

Cavaretta: The biggest difference has to do with the cheap availability of storage and processing power, where a few years ago people were very much concentrated on filtering down the datasets that were being stored for long-term analysis. There has been a big sea change with the idea that we should just store as much as we can and take advantage of that storage to improve business processes.

Gardner: That sounds right on the money, but how did we get here? How did we get to the point where we could start turning these technology advances, as you say, better storage, networks, being able to move big datasets, that sort of thing, into benefits? What’s the process behind the benefit?

Cavaretta: The process behind the benefits has to do with a sea change in the attitude of organizations, particularly IT within large enterprises. There’s this idea that you don’t need to spend so much time figuring out what data you want to store and worrying about the cost associated with it, and can think more about data as an asset. There is value in being able to store it and to go back and extract different insights from it. This comes from really cheap storage, access to parallel-processing machines, and great software.

Gardner: It seems to me that for a long time, the mindset was that data is simply the output from applications, with applications being primary and the data being almost an afterthought. It seems like we have sort of flipped that. The data now is perhaps as important, even more important, than the applications. Does that seem to hold true?

Cavaretta: Most definitely, and we’ve had a number of interesting engagements where people have thought about the data that’s being collected. When we talk to them about big data, storing everything at the lowest level of transactions, and what could be done with that, their eyes light up and they really begin to get it.

Gardner: I suppose earlier, when cost considerations and technical limitations were at work, we would just go for the tip of the iceberg. Now, as you say, we can get almost all the data. So, is this a matter of getting at more data, different types of data, bringing in unstructured data, all of the above? How much are you really going after here?

Cavaretta: I like to talk to people about the possibility that big data provides and I always tell them that I have yet to have a circumstance where somebody is giving me too much data. You can pull in all this information and then answer a variety of questions, because you don’t have to worry that something has been thrown out. You have everything.

You may have 100 questions, and each one of the questions uses a very small portion of the data. Those questions may use different portions of the data, a very small piece, but they’re all different. If you go in thinking, “We’re going to answer the top 20 questions and we’re just going to hold data for that,” that leaves so much on the table, and you don’t get any value out of it.

Gardner: I suppose too that we can think about small samples or small datasets and aggregate them or join them. We have new software capabilities to do that efficiently, so that we’re able to not just look for big honking, original datasets, but to aggregate, correlate, and look for a lifecycle level of data. Is that fair as well?

Cavaretta: Definitely. We’re a big believer in mash-ups and we really believe that there is a lot of value in being able to take even datasets that are not specifically big-data sizes yet, and then not go deep, not get more detailed information, but expand the breadth. So it’s being able to augment it with other internal datasets, bridging across different business areas as well as augmenting it with external datasets.

A lot of times you can take something that is maybe a few hundred thousand records or a few million records, and then by the time you’re joining it, and appending different pieces of information onto it, you can get the big dataset sizes.
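The breadth-expansion Cavaretta describes, joining a modest internal dataset with an external one on a shared key so each record gains more columns, can be sketched in a few lines of Python. The datasets, field names and values below are invented purely for illustration:

```python
# Hypothetical internal quality records and external weather records.
quality = [
    {"plant": "Dearborn", "date": "2013-01-02", "defects": 3},
    {"plant": "Dearborn", "date": "2013-01-03", "defects": 7},
]
weather = [
    {"plant": "Dearborn", "date": "2013-01-02", "humidity": 41},
    {"plant": "Dearborn", "date": "2013-01-03", "humidity": 78},
]

# Index the external set by its join key, then append its columns
# onto each internal record (a simple left join).
by_key = {(w["plant"], w["date"]): w for w in weather}
joined = [{**q, **by_key.get((q["plant"], q["date"]), {})} for q in quality]
```

At production scale the same join would run on a parallel platform rather than in memory, but the shape of the mash-up is the same: small sets, appended together, become a big one.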

Gardner: Just to be clear, you’re unique. The conventional wisdom for big data is to look at what your customers are doing, or just the external data. You’re really looking primarily at internal data, while also availing yourself of what external data might be appropriate. Maybe you could describe a little bit about your organization, what you do, and why this internal focus is so important for you.

Cavaretta: I’m part of a larger department that is housed over in the research and advanced-engineering area at Ford Motor Company, and we’re about 30 people. We work as internal consultants, kind of like Capgemini or Ernst & Young, but only within Ford Motor Company. We’re responsible for going out and looking for different opportunities from the business perspective to bring advanced technologies. So, we’ve been focused on the area of statistical modeling and machine learning for I’d say about 15 years or so.

And in this time, we’ve had a number of engagements where we’ve talked with different business customers, and people have said, “We’d really like to do this.” Then, we’d look at the datasets that they have and say, “Wouldn’t it be great if we had had this? Now we have to wait six months or a year.”

These new technologies are really changing the game from that perspective. We can turn on the complete fire-hose and say that we don’t have to worry about that anymore. Everything is coming in. We can record it all. We don’t have to worry that the data won’t support an analysis, because it’s all there. That’s really a big benefit of big-data technologies.

Gardner: If you’ve been doing this for 15 years, you must be demonstrating a return on investment (ROI) or a value proposition back to Ford. Has that value proposition been changing? Do you expect it to change? What might be your real value proposition two or three years from now?

Cavaretta: The real value proposition definitely is changing as things are being pushed down in the company to lower-level analysts who are really interested in looking at things from a data-driven perspective. From when I first came in to now, the biggest change has been when Alan Mulally came into the company, and really pushed the idea of data-driven decisions.

Before, we were getting a lot of interest from people who were really very focused on the data that they had internally. After that, they got a lot of questions from their management and from upper-level directors and vice presidents saying, “We’ve got all these data assets. We should be getting more out of them.” This strategic perspective has really changed a lot of what we’ve done in the last few years.

Gardner: As I listen to you, Michael, it occurs to me that you are applying this data-driven mentality more deeply. As you pointed out earlier, you’re also going after all the data, all the information, whether that’s internal or external.

In the case of an automobile company, you’re looking at the factory, the dealers, what drivers are doing, what the devices within the automobile are telling you, factoring that back into design relatively quickly, and then repeating this process. Are we getting to the point where this sort of Holy Grail notion of a total feedback loop across the lifecycle of a major product like an automobile is really within our grasp? Are we getting there, or is this still kind of theoretical? Can we pull it all together and make it a science?

Cavaretta: The theory is there. The question has more to do with the actual implementation and the practicality of it. We’re still talking about a lot of data. Even with new advanced technologies and techniques, that’s a lot of data to store, a lot of data to analyze, and a lot of data to make sure that we can mash up appropriately.

And while I think the potential and the theory are there, there is also work in being able to get the data from multiple sources. Everything you can get back from the vehicle, fantastic. Now, if you marry that up with internal data, is it survey data, is it manufacturing data, is it quality data? What things do you want to go after first? We can’t do everything at the same time.

Our perspective has been: let’s make sure that we identify the highest-value, greatest-ROI areas, and then begin to take some of the major datasets that we have, push them, and get more detail. Mash them up appropriately and really prove out the value for the technologists.

Gardner: Clearly, there’s a lot more to come in terms of where we can take this, but I suppose it’s useful to have a historic perspective and context as well. I was thinking about some of the early quality gurus like Deming and some of the movement towards quality like Six Sigma. Does this fall within that same lineage? Are we talking about a continuum here over that last 50 or 60 years, or is this something different?

Cavaretta: That’s a really interesting question. From the perspective of analyzing data, using data appropriately, I think there is a really good long history, and Ford has been a big follower of Deming and Six Sigma for a number of years now.

The difference, though, is this idea that you don’t have to worry so much upfront about getting the data. If you’re doing this right, you have the data right there, and this has some great advantages. You don’t have to wait until you get enough history to look for some of these patterns. Then again, it also has a disadvantage, which is that you’ve got so much data that it’s easy to find spurious correlations or models that don’t make any sense.

The piece that is required is good domain knowledge, in particular when you are talking about making changes in the manufacturing plant. It’s very appropriate to look at things and be able to talk with people who have 20 years of experience to say, “This is what we found in the data. Does this match what your intuition is?” Then, take that extra step.

Gardner: Tell me a little about sort of a day in the life of your organization and your team, to let us know what you do. How do you go about making more data available and then reaching some of these higher-level benefits?

Cavaretta: We’re very much focused on interacting with the business. Mostly, we work on pilot projects with our business customers to bring advanced analytics and big-data technologies to bear against these problems. We work in what we call a push-and-pull model.

We go out and investigate technologies and say these are technologies that Ford should be interested in. Then, we look internally for business customers who would be interested in that. So, we’re kind of pushing the technologies.

From the pull perspective, we’ve had so many successful engagements, and such good contacts and credibility within the organization, that we’ve had people come to us and say, “We’ve got a problem. We know this is in your domain. Give us some help. We’d love to hear your opinions on this.”

So we’re pulled from the business side, and our job is to match up those two pieces. It’s best when we’re looking at a particular technology and somebody comes to us, and we say, “Oh, this is a perfect match.”

Those types of opportunities have been increasing in the last few years, and we’ve been very happy with the number of internal customers that have really been very excited about the areas of big data.

Gardner: Because this is The Open Group Conference and an audience that’s familiar with the IT side of things, I’m curious how this relates to software and software development. Of course, there are so many more millions of lines of code in automobiles these days, with software being more important than just about everything else. Are you applying a lot of what you are doing to the software side of the house, or are the agile methods, feedback loops, and performance-management issues a separate domain, or is there crossover here?

Cavaretta: There’s some crossover. The biggest area that we’ve been focused on has been taking information, whether from internal business processes or from the vehicle, and bringing it back in to derive value. We have very good contacts in the Ford IT group, and they have been fantastic to work with in bringing interesting tools and technology to bear, and then looking at moving those into production and the best way to do that.

A fantastic development has been this idea that we’re using some of the more agile techniques in this space and Ford IT has been pushing this for a while. It’s been fantastic to see them work with us and be able to bring these techniques into this new domain. So we’re pushing the envelope from two different directions.

Gardner: It sounds like you will be meeting up at some point with a complementary nature to your activities.

Cavaretta: Definitely.

Gardner: Let’s move on to this notion of the “Internet of Things,” a very interesting concept that a lot of people talk about. It seems relevant to what we’ve been discussing. We have sensors in these cars, wireless transfer of data, more and more opportunity for location information to be brought to bear, where cars are, how they’re driven, speed information, all sorts of metrics, maybe making those available through cloud providers that assimilate this data.

So let’s not go too deep, because this is a multi-hour discussion all on its own, but how is this notion of the Internet of things being brought to bear on your gathering of big data and applying it to the analytics in your organization?

Cavaretta: It is a huge area, not only from the internal process perspective — RFID tags within the manufacturing plants, as well as out on the plant floor — but also all of the information that’s being generated by the vehicle itself.

The Ford Energi generates about 25 gigabytes of data per hour. So you can imagine selling a couple of million vehicles in the near future, with that amount of data being generated. There are huge opportunities within that, and there are also some interesting opportunities having to do with opening up some of these systems for third-party developers. OpenXC is an initiative that we have going on at Research and Advanced Engineering.

We have a lot of data coming from the vehicle. There’s a huge number of sensors and processors being added to vehicles. There’s data being generated there, as well as communication between the vehicle and your cell phone, and communication between vehicles.

There’s a group in Ann Arbor, Michigan, the University of Michigan Transportation Research Institute (UMTRI), that’s investigating that, as well as communication between the vehicle and, say, a home system. It lets the home know that you’re on your way, so it’s time to increase the temperature if it’s winter, or cool the house if it’s summer. The amount of data being generated there is invaluable and could be used for a lot of benefits, both from the corporate perspective and for the very nature of the environment.

Gardner: Just to put a stake in the ground on this, how much data do cars typically generate? Do you have a sense of what is now the case, on average?

Cavaretta: The Energi, according to the latest information that I have, generates about 25 gigabytes per hour. Different vehicles are going to generate different amounts, depending on the number of sensors and processors on the vehicle. But the biggest key has to do with not necessarily where we are right now but where we will be in the near future.

With the amount of information that’s being generated from the vehicles, a lot of it is just internal stuff. The question is how much information should be sent back for analysis and to find different patterns? That becomes really interesting as you look at external sensors, temperature, humidity. You can know when the windshield wipers go on, and then to be able to take that information, and mash that up with other external data sources too. It’s a very interesting domain.
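For a rough sense of the scale the figures above imply, here is a back-of-envelope calculation. The 25 GB/hour rate and the “couple of million vehicles” come from the interview; the one hour of driving per day is an assumption added for the example:

```python
gb_per_vehicle_hour = 25         # rate cited for the Ford Energi
vehicles = 2_000_000             # "a couple of million vehicles"
hours_driven_per_day = 1         # assumed average daily drive time (not from the interview)

daily_gb = gb_per_vehicle_hour * vehicles * hours_driven_per_day
daily_pb = daily_gb / 1_000_000  # decimal petabytes

print(f"{daily_pb:.0f} PB per day")  # tens of petabytes per day at fleet scale
```

Even under this conservative assumption, the fleet would produce on the order of 50 petabytes per day, which is why the question of how much to send back for analysis matters so much.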

Gardner: So clearly, it’s multiple gigabytes per hour per vehicle and probably going much higher.

Cavaretta: Easily.

Gardner: Let’s move forward now for those folks who have been listening and are interested in bringing this to bear on their organizations and their vertical industries, from the perspective of skills, mindset, and culture. Are there standards, certification, or professional organizations that you’re working with in order to find the right people?

It’s a big question. Let’s look at what skills you target for your group, and in what ways you think you can improve on that. Then, we’ll get into some of those larger issues about culture and mindset.

Cavaretta: The skills that we have in our department, in particular on our team, are in the area of computer science, statistics, and some good old-fashioned engineering domain knowledge. We’ve really gone about this from a training perspective. Aside from a few key hires, it’s really been an internally developed group.

The biggest advantage that we have is that we can go out and be very targeted with the training that we do. There are so many good tools out there, especially in the open-source realm, that we can spin things up with relatively low cost and low risk and do a number of experiments in the area. That’s really the way that we push the technologies forward.

Gardner: Why The Open Group? Why is that a good forum for your message, and for your research here?

Cavaretta: The biggest reason is the focus on the enterprise, where there are a lot of advantages and a lot of business cases. In large enterprises with many systems, a relatively small improvement can make a large difference on the bottom line.

Talking with The Open Group really gives me an opportunity to bring people on board with the idea that you should be looking at a difference in mindset. It’s not, “Here’s a way that data is being generated; try to conceive of some questions we can use it for, and we’ll store just that.” Instead, let’s take everything, worry about it later, and then find the value.

Gardner: I’m sure the viewers of your presentation on January 28 will be gathering a lot of great insights. A lot of the people that attend The Open Group conferences are enterprise architects. What do you think those enterprise architects should be taking away from this? Is there something about their mindset that should shift in recognizing the potential that you’ve been demonstrating?

Cavaretta: It’s important for them to be thinking about data as an asset, rather than as a cost. You may even have to spend some money, and it may feel a little unsafe without a really solid ROI at the beginning. Then, move towards pulling that information in and storing it in a way that allows not just the high-level data scientists to get access and provide value, but anyone who is interested in the data overall. Those are very important pieces.

The last one is how you take a big-data project, something you’re not storing in the traditional business intelligence (BI) framework that an enterprise develops, and then connect it to the BI systems and look at providing value through those mash-ups. Those are really important areas that still need some work.

Gardner: Another big constituency within The Open Group community are those business architects. Is there something about mindset and culture, getting back to that topic, that those business-level architects should consider? Do you really need to change the way you think about planning and resource allocation in a business setting, based on the fruits of things that you are doing with big data?

Cavaretta: I really think so. The digital asset that you have can be monetized to change the way the business works, and that could be done by creating new assets that then can be sold to customers, as well as improving the efficiencies of the business.

This idea that everything is going to be very well-defined, and that a lot of work is put into making sure that data has high quality, I think those things need to change somewhat. As you’re pulling the data in and thinking about long-term storage, the challenge is more about access to the information than about just storing it.

Gardner: Interesting that you brought up that notion that the data becomes a product itself and even a profit center perhaps.

Cavaretta: Exactly. There are many companies, especially large enterprises, that are looking at their data assets and wondering what they can do to monetize them, not only to pay for efficiency improvements but as a new revenue stream.

Gardner: We’re almost out of time. For those organizations that want to get started on this, are there any 20/20 hindsights or Monday-morning-quarterback insights you can provide? How do you get started? Do you appoint a leader? Do you need a strategic roadmap, getting this culture or mindset shifted, pilot programs? How would you recommend that people begin the process of getting into this?

Cavaretta: We’re definitely huge believers in pilot projects and proofs of concept, and we like to develop roadmaps by doing. So get out there. Understand that it’s going to be messy. Understand that it may be a little more costly and that the ROI isn’t going to be there at the beginning.

But get your feet wet. Start doing some experiments, and then, as those experiments turn from just experimentation into really providing real business value, that’s the time to start looking at a more formal aspect and more formal IT processes. But you’ve just got to get going at this point.

Gardner: I would think that the competitive forces are out there. If you are in a competitive industry, and those that you compete against are doing this and you are not, that could spell some trouble.

Cavaretta:  Definitely.

Gardner: We’ve been talking with Michael Cavaretta, PhD, Technical Leader of Predictive Analytics at Ford Research and Advanced Engineering in Dearborn, Michigan. Michael and I have been exploring how big data is fostering business transformation by allowing deeper insights into more types of data and all very efficiently. This is improving processes, updating quality control and adding to customer satisfaction.

Our conversation today comes as a lead-in to Michael’s upcoming plenary presentation. He is going to be talking on January 28 in Newport Beach, California, as part of The Open Group Conference.

You will hear more from Michael and other global leaders on big data, who will be gathering at this conference to talk about business transformation. So a big thank-you to Michael for joining us in this fascinating discussion. I really enjoyed it, and I look forward to your presentation on the 28th.

Cavaretta: Thank you very much.

Gardner: And I would encourage our listeners and readers to attend the conference or follow more of the threads in social media from the event. Again, it’s going to be happening from January 27 to January 30 in Newport Beach, California.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator through the thought leadership interviews. Thanks again for listening, and come back next time.


Filed under Conference, Uncategorized

Leveraging Social Media at The Open Group Newport Beach Conference (#ogNB)

By The Open Group Conference Team

By attending conferences hosted by The Open Group®, attendees are able to learn from industry experts, understand the latest technologies and standards and discuss and debate current industry trends. One way to maximize the benefits is to make technology work for you. If you are attending The Open Group Conference in Newport Beach next week, we’ve put together a few tips on how to leverage social media to make networking at the conference easier, quicker and more effective.

Using Twitter at #ogNB

Twitter is a real-time news-sharing tool that anyone can use. The official hashtag for the conference is #ogNB. This enables anybody, whether they are physically attending the event or not, to follow what’s happening at the Newport Beach conference in real-time and interact with each other.

Before the conference, be sure to update your Twitter account to monitor #ogNB and, of course, to tweet about the conference.

Using Facebook at The Open Group Conference in Newport Beach

You can also track what is happening at the conference on The Open Group Facebook page. We will be running another photo contest, where all entries will be uploaded to our page. Members and Open Group Facebook fans can vote by “liking” a photo. The photo with the most “likes” in each category will be named the winner. Submissions will be uploaded in real-time, so the sooner you submit a photo, the more time members and fans will have to vote for it!

For full details of the contest and how to enter, see The Open Group blog at: http://blog.opengroup.org/2013/01/22/the-open-group-photo-contest-document-the-magic-at-the-newport-beach-conference/

LinkedIn during The Open Group Conference in Newport Beach

Inspired by one of the sessions? Interested in what your peers have to say? Start a discussion on The Open Group LinkedIn Group page. We’ll also be sharing interesting topics and questions related to The Open Group Conference as it is happening. If you’re not a member already, requesting membership is easy. Simply go to the group page and click the “Join Group” button. We’ll accept your request as soon as we can!

Blogging during The Open Group Conference in Newport Beach

Stay tuned for daily conference recaps here on The Open Group blog. In case you missed a session or you weren’t able to make it to Newport Beach, we’ll be posting the highlights and recaps on the blog. If you are attending the conference and would like to submit a recap of your own, please contact opengroup (at) bateman-group.com.

If you have any questions about social media usage at the conference, feel free to tweet the conference team @theopengroup.


Filed under Uncategorized

Improving Signal-to-Noise in Risk Management

By Jack Jones, CXOWARE

One of the most important responsibilities of the information security professional (or any IT professional, for that matter) is to help management make well-informed decisions. Unfortunately, this has been an elusive objective when it comes to risk. Although we’re great at identifying control deficiencies, and we can talk all day long about the various threats we face, we have historically had a poor track record when it comes to measuring risk. There are a number of reasons for this, but in this article I’ll focus on just one: definition.

You’ve probably heard the old adage, “You can’t manage what you can’t measure.”  Well, I’d add to that by saying, “You can’t measure what you haven’t defined.” The unfortunate fact is that the information security profession has been inconsistent in how it defines and uses the term “risk.” Ask a number of professionals to define the term, and you will get a variety of definitions.

Besides inconsistency, another problem with the term “risk” is that many of the common definitions don’t fit the information security problem space or simply aren’t practical. For example, the ISO 27000 standard defines risk as “the effect of uncertainty on objectives.” What does that mean? Fortunately (or perhaps unfortunately), I must not be the only one with that reaction, because the ISO standard goes on to define “effect,” “uncertainty,” and “objectives” as follows:

  • Effect: A deviation from the expected — positive and/or negative
  • Uncertainty: The state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence or likelihood
  • Objectives: Can have different aspects (such as financial, health and safety, information security, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process)

NOTE: Their definition for “objectives” doesn’t appear to be a definition at all, but rather an example.

Although I understand, conceptually, the point this definition is getting at, my first concern is practical in nature. As a Chief Information Security Officer (CISO), I invariably have more to do than I have resources to apply. Therefore, I must prioritize; prioritization requires comparison, and comparison requires measurement. It isn’t clear to me how “uncertainty regarding deviation from the expected (positive and/or negative) that might affect my organization’s objectives” can be applied to measure, and thus compare and prioritize, the issues I’m responsible for dealing with.

This is just an example though, and I don’t mean to pick on ISO because much of their work is stellar. I could have chosen any of several definitions in our industry and expressed varied concerns.

In my experience, information security is about managing how often loss takes place, and how much loss will be realized when/if it occurs. That is our profession’s value proposition, and it’s what management cares about. Consequently, whatever definition we use needs to align with this purpose.

The Open Group’s Risk Taxonomy (shown below), based on Factor Analysis of Information Risk (FAIR), helps to solve this problem by providing a clear and practical definition for risk. In this taxonomy, Risk is defined as “the probable frequency and probable magnitude of future loss.”

[Figure: The Open Group Risk Taxonomy]

The elements below risk in the taxonomy form a Bayesian network that models risk factors and acts as a framework for critically evaluating risk. This framework has been evolving for more than a decade now and is helping information security professionals across many industries understand, measure, communicate and manage risk more effectively.
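That definition of risk, as probable frequency and probable magnitude, lends itself naturally to simulation. Below is a purely illustrative Monte Carlo sketch of expected annual loss; the distributions and dollar figures are invented for the example and are not part of the FAIR standard or The Open Group taxonomy:

```python
import random

# Illustrative sketch of "risk = probable frequency and probable
# magnitude of future loss." Frequency and magnitude parameters
# below are invented for this example.

random.seed(42)  # deterministic for the example

def simulate_annual_loss(events_per_year: float, loss_low: float, loss_high: float) -> float:
    """Simulate one year: count loss events, then draw a magnitude for each."""
    events = sum(1 for _ in range(365) if random.random() < events_per_year / 365)
    return sum(random.uniform(loss_low, loss_high) for _ in range(events))

# 10,000 simulated years with ~2 events/year, each costing $10k-$50k
trials = [simulate_annual_loss(2.0, 10_000, 50_000) for _ in range(10_000)]
expected_annual_loss = sum(trials) / len(trials)
print(round(expected_annual_loss))  # roughly 2 events * ~$30k average, so ~$60k/year
```

However crude, an estimate like this can be compared across issues, which is exactly the prioritization a CISO needs and that the “uncertainty on objectives” definition does not support.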

In the communications context, you have to have a very clear understanding of what constitutes signal before you can reliably separate it from the noise. The Open Group’s Risk Taxonomy gives us an important foundation for achieving a much clearer signal.

I will be discussing this topic in more detail next week at The Open Group Conference in Newport Beach. For more information on my session or the conference, visit: http://www.opengroup.org/newportbeach2013.

Jack Jones has been employed in technology for the past twenty-nine years, and has specialized in information security and risk management for twenty-two years. During this time, he’s worked in the United States military, in government intelligence, in consulting, and in the financial and insurance industries. Jack has over nine years of experience as a CISO, with five of those years at a Fortune 100 financial services company. His work there was recognized in 2006 when he received the ISSA Excellence in the Field of Security Practices award at that year’s RSA conference. In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012 he was honored with the CSO Compass award for leadership in risk management. He is also the author and creator of the Factor Analysis of Information Risk (FAIR) framework.


Filed under Cybersecurity

How Should we use Cloud?

By Chris Harding, The Open Group

How should we use Cloud? This is the key question at the start of 2013.

The Open Group® conferences in recent years have thrown light on “What is Cloud?” and “Should we use Cloud?” It is time to move on.

Cloud as a Distributed Processing Platform

The question is an interesting one, because the answer is not necessarily, “Use Cloud resources just as you would use in-house resources.” Of course, you can use Cloud processing and storage to replace or supplement what you have in-house, and many companies are doing just that. You can also use the Cloud as a distributed computing platform, on which a single application instance can use multiple processing and storage resources, perhaps spread across many countries.

It’s a bit like contracting a company to do a job, rather than hiring a set of people. If you hire a set of people, you have to worry about who will do what when. Contract a company, and all that is taken care of. The company assembles the right people, schedules their work, finds replacements in case of sickness, and moves them on to other things when their contribution is complete.

This not only makes things easier; it also enables you to tackle bigger jobs. Big Data is the latest technical phenomenon. Big Data can be processed effectively by parceling the work out to multiple computers, and Cloud providers are beginning to make the tools to do this available, using distributed file systems and map-reduce. We do not yet have “Distributed Processing as a Service,” but that will surely come.
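The map-reduce pattern can be illustrated with a small local sketch (Python used here purely for illustration). A real Cloud deployment would run the map tasks in parallel on many machines over a distributed file system; the shape of the computation is the same:

```python
from collections import Counter
from functools import reduce

# Toy map-reduce word count. The "map" step turns each document into
# partial counts; the "reduce" step merges the partials into one result.

def map_counts(document: str) -> Counter:
    return Counter(document.lower().split())

def reduce_counts(a: Counter, b: Counter) -> Counter:
    return a + b

documents = [
    "cloud computing and big data",
    "big data on the cloud",
]

partials = [map_counts(d) for d in documents]        # map phase
totals = reduce(reduce_counts, partials, Counter())  # reduce phase
print(totals["cloud"])  # prints 2
```

Because each map task touches only its own document, the work parcels out across as many machines as the data requires, which is what makes the pattern a natural fit for Cloud platforms.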

Distributed Computing at the Conference

Big Data is the main theme of the Newport Beach conference. The plenary sessions have keynote presentations on Big Data, including the crucial aspect of security, and there is a Big Data track that explores in depth its use in Enterprise Architecture.

There are also Cloud tracks that explore the business aspects of using Cloud and the use of Cloud in Enterprise Architecture, including a session on its use for Big Data.

Service orientation is generally accepted as a sound underlying principle for systems using both Cloud and in-house resources. The Service Oriented Architecture (SOA) movement focused initially on its application within the enterprise. We are now looking to apply it to distributed systems of all kinds. This may require changes to specific technology and interfaces, but not to the fundamental SOA approach. The Distributed Services Architecture track contains presentations on the theory and practice of SOA.

Distributed Computing Work in The Open Group

Many of the conference presentations are based on work done by Open Group members in the Cloud Computing, SOA and Semantic Interoperability Work Groups, and in the Architecture, Security and Jericho Forums. The Open Group enables people to come together to develop standards and best practices for the benefit of the architecture community. We have active Work Groups and Forums working on artifacts such as a Cloud Computing Reference Architecture, a Cloud Portability and Interoperability Guide, and a Guide to the use of TOGAF® framework in Cloud Ecosystems.

The Open Group Conference in Newport Beach

Our conferences provide an opportunity for members and non-members to discuss ideas together. This happens not only in presentations and workshops, but also in informal discussions during breaks and after the conference sessions. These discussions benefit future work at The Open Group. They also benefit the participants directly, enabling them to bring to their enterprises ideas that they have sounded out with their peers. People from other companies can often bring new perspectives.

Most enterprises now know what Cloud is. Many have identified specific opportunities where they will use it. The challenge now for enterprise architects is determining how best to do this, either by replacing in-house systems, or by using the Cloud’s potential for distributed processing. This is the question for discussion at The Open Group Conference in Newport Beach. I’m looking forward to an interesting conference!

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Filed under Cloud, Conference

Successful Enterprise Architecture using the TOGAF® and ArchiMate® Standards

By Henry Franken, BiZZdesign

The discipline of Enterprise Architecture was developed in the 1980s with a strong focus on the information systems landscape of organizations. Since those days, the scope of the discipline has slowly widened to include more and more aspects of the enterprise as a whole. This holistic perspective takes into account the concerns of a wide variety of stakeholders. Architects, especially at the strategic level, attempt to answer the question “How should we organize ourselves in order to be successful?”

An architecture framework is a foundational structure, or set of structures, which can be used for developing a broad range of different architectures, and which consists of a process and a modeling component. The TOGAF® framework and the ArchiMate® modeling language, both maintained by The Open Group®, are the two leading standards in this field.


Much has been written on this topic in online forums, whitepapers, and blogs. On the BiZZdesign blog we have published several series on EA in general and these standards in particular, with a strong focus on one question: What should we do to be successful with EA using the TOGAF framework and the ArchiMate modeling language? I would like to summarize some of our findings here:

Tip 1 One of the key success factors for EA is to deliver value early on. We have found that organizations that combine a long-term vision with incremental delivery (“think big, act small”) have a better chance of developing an effective EA capability.
 
Tip 2 Combine process and modeling: the TOGAF framework and the ArchiMate modeling language are a powerful combination. Deliverables in the architecture process are more effective when based on an approach that combines formal models with powerful visualization capabilities. What’s more, an architecture repository is a valuable asset that can be reused throughout the enterprise.
 
Tip 3 Use a tool! It is true that “a fool with a tool is still a fool.” In our teaching and consulting practice we have found, however, that adoption of a flexible and easy-to-use tool can be a strong driver in pushing an EA initiative forward.

There will be several interesting presentations on this subject at the upcoming Open Group conference (Newport Beach, CA, USA, January 28-31), ranging from theory to case practice, and focusing on getting started with EA as well as on advanced topics.

I will also present on this subject and will elaborate on the combined use of The Open Group standards for EA. I also gladly invite you to join me at the panel sessions. I look forward to seeing you there!

Henry Franken is the managing director of BiZZdesign and chair of The Open Group ArchiMate Forum. In that role, Henry led the development of the ArchiMate Version 2.0 standard. He is a speaker at many conferences and has co-authored several international publications and Open Group White Papers. Henry is a co-founder of the BPM-Forum. At BiZZdesign, he is responsible for research and innovation.


Filed under ArchiMate®, Enterprise Architecture, TOGAF®

#ogChat Summary – Big Data and Security

By Patty Donovan, The Open Group

The Open Group hosted a tweet jam (#ogChat) to discuss Big Data security. In case you missed the conversation, here is a recap of the event.

The Participants

A total of 18 participants joined in the hour-long discussion.

Q1 What is #BigData #security? Is it different from #data security? #ogChat

Participants seemed to agree that while Big Data security is similar to data security, it is more extensive. Two major factors to consider: sensitivity and scalability.

  • @dustinkirkland At the core it’s the same – sensitive data – but the difference is in the size and the length of time this data is being stored. #ogChat
  • @jim_hietala Q1: Applying traditional security controls to BigData environments, which are not just very large info stores #ogChat
  • @TheTonyBradley Q1. The value of analyzing #BigData is tied directly to the sensitivity and relevance of that data–making it higher risk. #ogChat
  • @AdrianLane Q1 Securing #BigData is different. Issues of velocity, scale, elasticity break many existing security products. #ogChat
  • @editingwhiz #Bigdata security is standard information security, only more so. Meaning sampling replaced by complete data sets. #ogchat
  • @Dana_Gardner Q1 Not only is the data sensitive, the analysis from the data is sensitive. Secret. On the QT. Hush, hush. #BigData #data #security #ogChat
    • @Technodad @Dana_Gardner A key point. Much #bigdata will be public – the business value is in cleanup & analysis. Focus on protecting that. #ogChat

Q2 Any thoughts about #security systems as producers of #BigData, e.g., voluminous systems logs? #ogChat

Most agreed that security systems should be setting an example for producing secure Big Data environments.
  • @dustinkirkland Q2. They should be setting the example. If the data is deemed important or sensitive, then it should be secured and encrypted. #ogChat
  • @TheTonyBradley Q2. Data is data. Data gathered from information security logs is valuable #BigData, but rules for protecting it are the same. #ogChat
  • @elinormills Q2 SIEM is going to be big. will drive spending. #ogchat #bigdata #security
  • @jim_hietala Q2: Well instrumented IT environments generate lots of data, and SIEM/audit tools will have to be managers of this #BigData #ogchat
  • @dustinkirkland @theopengroup Ideally #bigdata platforms will support #tokenization natively, or else appdevs will have to write it into apps #ogChat

Q3 Most #BigData stacks have no built in #security. What does this mean for securing #BigData? #ogChat

The lack of built-in security paints a target on Big Data. While not all enterprise data is sensitive, housing it insecurely runs the risk of compromise. Furthermore, security solutions need to be not only effective but also scalable, as data will continue to get bigger.

  • @elinormills #ogchat big data is one big hacker target #bigdata #security
    • @editingwhiz @elinormills #bigdata may be a huge hacker target, but will hackers be able to process the chaff out of it? THAT takes $$$ #ogchat
    • @elinormills @editingwhiz hackers are innovation leaders #ogchat
    • @editingwhiz @elinormills Yes, hackers are innovation leaders — in security, but not necessarily dataset processing. #eweeknews #ogchat
  • @jim_hietala Q3:There will be a strong market for 3rd party security tools for #BigData – existing security technologies can’t scale #ogchat
  • @TheTonyBradley Q3. When you take sensitive info and store it–particularly in the cloud–you run the risk of exposure or compromise. #ogChat
  • @editingwhiz Not all enterprises have sensitive business data they need to protect with their lives. We’re talking non-regulated, of course. #ogchat
  • @TheTonyBradley Q3. #BigData is sensitive enough. The distilled information from analyzing it is more sensitive. Solutions need to be effective. #ogChat
  • @AdrianLane Q3 It means identifying security products that don’t break big data – i.e. they scale or leverage #BigData #ogChat
    • @dustinkirkland @AdrianLane #ogChat Agreed, this is where certifications and partnerships between the 3rd party and #bigdata vendor are essential.

Q4 How is the industry dealing with the social and ethical uses of consumer data gathered via #BigData? #ogChat #privacy

Participants agreed that the industry needs to improve when it comes to dealing with the social and ethical uses of consumer data gathered through Big Data. If the data is easily accessible, hackers will be attracted. No matter what, the cost of a breach is far greater than the cost of any preventative solution.

  • @dustinkirkland Q4. #ogChat Sadly, not well enough. The recent Instagram uproar was well publicized but such abuse of social media rights happens every day.
    • @TheTonyBradley @dustinkirkland True. But, they’ll buy the startups, and take it to market. Fortune 500 companies don’t like to play with newbies. #ogChat
    • @editingwhiz Disagree with this: Fortune 500s don’t like to play with newbies. We’re seeing that if the IT works, name recognition irrelevant. #ogchat
    • @elinormills @editingwhiz @thetonybradley ‘hacker’ covers lot of ground, so i would say depends on context. some of my best friends are hackers #ogchat
    • @Technodad @elinormills A core point- data from sensors will drive #bigdata as much as enterprise data. Big security, quality issues there. #ogChat
  • @Dana_Gardner Q4 If privacy is a big issue, hacktivism may crop up. Power of #BigData can also make it socially onerous. #data #security #ogChat
  • @dustinkirkland Q4. The cost of a breach is far greater than the cost (monetary or reputation) of any security solution. Don’t risk it. #ogChat

Q5 What lessons from basic #datasecurity and #cloud #security can be implemented in #BigData security? #ogChat

The principles are the same, just on a larger scale. The biggest risks come from cutting corners due to the size and complexity of the data gathered. As hackers (like Anonymous) get better, security must keep pace, regardless of the data size.

  • @TheTonyBradley Q5. Again, data is data. The best practices for securing and protecting it stay the same–just on a more massive #BigData scale. #ogChat
  • @Dana_Gardner Q5 Remember, this is in many ways unchartered territory so expect the unexpected. Count on it. #BigData #data #security #ogChat
  • @NadhanAtHP A5 @theopengroup – Security Testing is even more vital when it comes to #BigData and Information #ogChat
  • @TheTonyBradley Q5. Anonymous has proven time and again that most existing data security is trivial. Need better protection for #BigData. #ogChat

Q6 What are some best practices for securing #BigData? What are orgs doing now, and what will orgs be doing 2-3 years from now? #ogChat

While some argued that encrypting everything is the key and others encouraged putting pressure on Big Data providers, most agreed that a multi-step security infrastructure is necessary. It’s not just the data that needs to be secured, but also the transportation and analysis processes.

  • @dustinkirkland Q6. #ogChat Encrypting everything, by default, at least at the fs layer. Proper key management. Policies. Logs. Hopefully tokenized too.
  • @dustinkirkland Q6. #ogChat Ask tough questions of your #cloud or #bigdata provider. Know what they are responsible for and who has access to keys. #ogChat
    • @elinormills Agreed–> @dustinkirkland Q6. #ogChat Ask tough questions of your #cloud or #bigdataprovider. Know what they are responsible for …
  • @Dana_Gardner Q6 Treat most #BigData as a crown jewel, see it as among most valuable assets. Apply commensurate security. #data #security #ogChat
  • @elinormills Q6 govt level crypto minimum, plus protect all endpts #ogchat #bigdata #security
  • @TheTonyBradley Q6. Multi-faceted issue. Must protect raw #BigData, plus processing, analyzing, transporting, and resulting distilled analysis. #ogChat
  • @Technodad If you don’t establish trust with data source, you need to assume data needs verification, cleanup before it is used for decisions. #ogChat
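The tokenization idea raised in the discussion can be sketched in miniature. The vault class and names below are invented for this illustration; production tokenization services add encryption at rest, key management, access control and audit logging:

```python
import secrets

# Hypothetical, illustrative tokenization vault. Sensitive values are
# swapped for random tokens before entering the analytics store; the
# token-to-value mapping lives in a separate, tightly controlled vault.

class TokenVault:
    def __init__(self) -> None:
        self._token_to_value: dict[str, str] = {}
        self._value_to_token: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        if value in self._value_to_token:      # same value, same token
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(8)  # 16 random hex characters
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
record = {"customer": vault.tokenize("Alice Example"), "spend": 120.50}
print(record["customer"].startswith("tok_"))  # prints True
print(vault.detokenize(record["customer"]))   # prints Alice Example
```

The point of the design is that the big-data platform only ever sees opaque tokens; a breach of the analytics store alone reveals nothing without separate access to the vault.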

A big thank you to all the participants who made this such a great discussion!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.


Filed under Tweet Jam

The Open Group Photo Contest: Document the Magic at the Newport Beach Conference!

By The Open Group Conference Team

It’s that time again! The Open Group is busily preparing for the Newport Beach Conference, taking place Jan. 28-31, 2013. As you begin packing, charge up your smartphones and bring your digital cameras: We’ll be hosting The Open Group Photo Contest once again! The prize is a free pass to attend any one of the Open Group conferences in 2013!

The contest is open to all Newport Beach Conference attendees. Here are the details for those of you who have yet to participate or need a refresher on our guidelines.

The categories will include:

  • The Real O.C. Award – any photo taken in or around Newport Beach.
  • The Newport Beach Conference Award – any photo taken during the conference. This includes photos of keynote speakers, candid photos of Open Group members, group sessions, etc.

Participants can submit photos via Twitter using the hashtag #ogPhoto, or via email to photo@opengroup.org.  Please include your full name and the photo’s category upon submission. The submission period will end on Friday, February 8 at 5:00 p.m. PT, with the winner to be announced the following week.

All photos will be uploaded to The Open Group’s Facebook page. Facebook members can vote by “liking” a photo; photos with the most “likes” in each category will win the contest. Photos will be uploaded in real-time, so the sooner you submit a photo, the more time members will have to vote on it.

Below are previous photo contest winners from the Barcelona Conference in 2012:

Modernista Award: For best photo taken in or around Barcelona

Winner: Craig Heath

[Photo by Craig Heath, Franklin Heath]

“Barcelona Sky from the Fundació Joan Miró”

Best of Barcelona Conference Award:  For any photo taken during conference activities

Winner: Leonardo Ramirez

[Photo by Leonardo Ramirez, DuxDiligens]

A flamenco dancer at the Tuesday night event


Filed under Conference

Questions for the Upcoming Big Data Security Tweet Jam on Jan. 22

By Patty Donovan, The Open Group

Last week, we announced our upcoming tweet jam on Tuesday, January 22 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. GMT, which will examine the impact of Big Data on security and how it will change the security landscape.

Please join us next Tuesday, January 22! The discussion will be moderated by Dana Gardner (@Dana_Gardner), ZDNet – Briefings Direct. We welcome Open Group members and interested participants from all backgrounds to join the session. Our panel of experts will include:

  • Elinor Mills, former CNET reporter and current director of content and media strategy at Bateman Group (@elinormills)
  • Jaikumar Vijayan, Computerworld (@jaivijayan)
  • Chris Preimesberger, eWEEK (@editingwhiz)
  • Tony Bradley, PC World (@TheTonyBradley)
  • Michael Santarcangelo, Security Catalyst Blog (@catalyst)

The discussion will be guided by these six questions:

  1. What is #BigData security? Is it different from #data #security? #ogChat
  2. Any thoughts about #security systems as producers of #BigData, e.g., voluminous systems logs? #ogChat
  3. Most #BigData stacks have no built in #security. What does this mean for securing BigData? #ogChat
  4. How is the industry dealing with the social and ethical uses of consumer data gathered via #BigData? #ogChat #privacy
  5. What lessons from basic data security and #cloud #security can be implemented in #BigData #security? #ogChat
  6. What are some best practices for securing #BigData? #ogChat

To join the discussion, please follow the #ogChat hashtag during the allotted discussion time. Other hashtags we recommend you use during the event include:

  • Information Security: #InfoSec
  • Security: #security
  • BYOD: #BYOD
  • Big Data: #BigData
  • Privacy: #privacy
  • Mobile: #mobile
  • Compliance: #compliance

For more information about the tweet jam, guidelines and general background information, please visit our previous blog post: http://blog.opengroup.org/2013/01/15/big-data-security-tweet-jam/

If you have any questions prior to the event or would like to join as a participant, please direct them to Rod McLeod (rmcleod at bateman-group dot com), or leave a comment below. We anticipate a lively chat and hope you will be able to join us!

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.

 


Filed under Tweet Jam

Big Data Security Tweet Jam

By Patty Donovan, The Open Group

On Tuesday, January 22, The Open Group will host a tweet jam examining the topic of Big Data and its impact on the security landscape.

Recently, Big Data has been dominating the headlines, with coverage of everything from how to manage and process it to the way it will impact your organization’s IT roadmap. As 2012 came to a close, analyst firm Gartner predicted that data will help drive IT spending to $3.8 trillion in 2014. Knowing the phenomenon is here to stay, enterprises face a new and daunting challenge: how to secure Big Data. Big Data security also raises other questions, such as: Is Big Data security different from data security? How will enterprises handle Big Data security? What is the best approach to Big Data security?

It’s yet to be seen if Big Data will necessarily revolutionize enterprise security, but it certainly will change execution – if it hasn’t already. Please join us for our upcoming Big Data Security tweet jam where leading security experts will discuss the merits of Big Data security.

Please join us on Tuesday, January 22 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. GMT for a tweet jam, moderated by Dana Gardner (@Dana_Gardner), ZDNet – Briefings Direct, that will discuss and debate the issues around big data security. Key areas that will be addressed during the discussion include: data security, privacy, compliance, security ethics and, of course, Big Data. We welcome Open Group members and interested participants from all backgrounds to join the session and interact with our panel of IT security experts, analysts and thought leaders led by Jim Hietala (@jim_hietala) and Dave Lounsbury (@Technodad) of The Open Group. To access the discussion, please follow the #ogChat hashtag during the allotted discussion time.

And for those of you who are unfamiliar with tweet jams, here is some background information:

What Is a Tweet Jam?

A tweet jam is a one-hour “discussion” hosted on Twitter. The purpose of the tweet jam is to share knowledge and answer questions on Big Data security. Each tweet jam is led by a moderator and a dedicated group of experts to keep the discussion flowing. The public (or anyone on Twitter interested in the topic) is encouraged to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:

  • Have your first #ogChat tweet be a self-introduction: name, affiliation, occupation.
  • Start all other tweets with the question number you’re responding to and the #ogChat hashtag.
    • Sample: “Q1 enterprises will have to make significant adjustments moving forward to secure Big Data environments #ogChat”
  • Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.
  • While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue!
  • A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.

If you have any questions prior to the event or would like to join as a participant, please direct them to Rod McLeod (rmcleod at bateman-group dot com). We anticipate a lively chat and hope you will be able to join!

 

Patricia Donovan is Vice President, Membership & Events, at The Open Group and a member of its executive management team. In this role she is involved in determining the company’s strategic direction and policy as well as the overall management of that business area. Patricia joined The Open Group in 1988 and has played a key role in the organization’s evolution, development and growth since then. She also oversees the company’s marketing, conferences and member meetings. She is based in the U.S.

1 Comment

Filed under Tweet Jam

The Death of Planning

By Stuart Boardman, KPN

If I were to announce that planning large scale transformation projects was a waste of time, you’d probably think I’d taken leave of my senses. And yet, somehow this thought has been nagging at me for some time now. Bear with me.

It’s not so long ago that we still had debates about whether complex projects should be delivered as a “big bang” or in phases. These days the big bang has pretty much been forgotten. Why is that? I think the main reason is the level of risk involved with running a long process and dropping it into the operational environment just like that. This applies to any significant change, whether related to a business model and processes or IT architecture or physical building developments. Even if it all works properly, the level of sudden organizational change involved may stop it in its tracks.

So it has become normal to plan the change as a series of phases. We develop a roadmap to get us from here (as-is) to the end goal (to-be). And this is where I begin to identify the problem.

A few months ago I spent an enjoyable and thought-provoking day with Jack Martin Leith (@jackmartinleith). Jack is a master at demystifying clichés, but when he announced his irritation with “change is a journey,” I could only respond, “but Jack, it is.” What Jack made me see is that, whilst the original usage was a useful insight, it’s become a cliché which is commonly completely misused. It results in some pretty frustrating journeys! To understand that, let’s take the analogy literally. Suppose your objective is to travel to San Diego but there are no direct flights from where you live. If the first step on your journey is a four-hour layover at JFK, that’s at best a waste of your time and energy. There’s no value in this step. A day in Manhattan might be a different story. We can (and do) deal with this kind of thing for journeys of a day or so, but imagine a journey that takes three or more years and all you see on the way is the inside of airports.

My experience has been that the same problem too often manifests itself in transformation programs. The first step may be logical from an implementation perspective, but it delivers no discernible value (tangible or intangible). It’s simply a validation that something has been done, as if, in our travel analogy, we were celebrating travelling the first 1000 kilometers, even if that put us somewhere over the middle of Lake Erie.

What would be better? An obvious conclusion that many have drawn is that we need to ensure every step delivers business value but that’s easier said than done.

Why is it so hard? The next thing Jack said helped me understand why. His point is that when you’ve taken the first step on your journey, it’s not just some intermediate station. It’s the “new now.” The new reality. The new as-is. And if the new reality is hanging around in some grotty airport trying to do your job via a Wi-Fi connection of dubious security and spending too much money on coffee and cookies… you get the picture.

The problem with identifying that business value is that we’re not focusing on the new now but on something much more long-term. We’re trying to interpolate the near term business value out of the long term goal, which wasn’t defined based on near term needs.

What makes this all the more urgent is the increasing rate and unpredictability of change – in all aspects of doing business. This has led us to shorter planning horizons and an increasing tendency to regard that “to be” as nothing more than a general sense of direction. We’re thinking, “If we could deliver the whole thing really, really quickly on the basis of what we know we’d like to be able to do now, if it were possible, then it would look like this” – but knowing all the time that by the time we get anywhere near that end goal, it will have changed. It’s pretty obvious then that a first step, whose justification is entirely based on that imagined end goal, can easily be of extremely limited value.

So why not put more focus on the first step? That’s going to be the “new now.” How about making that our real target? Something that the enterprise sees as real value and that is actually feasible in a reasonable time scale (whatever that is). Instead of scoping that step as an intermediate (and rather immature) layover, why not put all our efforts into making it something really good? And when we get there and people know how the new now looks and feels, we can all think afresh about where to go next. After all, a journey is not simply defined by its destination but by how you get there and what you see and do on the way. If the actual journey itself is valuable, we may not want to get to the end of it.

Now that doesn’t mean we have to forget all about where we might want to be in three or even five years — not at all. The long term view is still important in helping us to make smart decisions about shorter term changes. It helps us allow for future change, even if only because it lets us see how much might change. And that helps us make sound decisions. But we should accept that our three or five year horizon needs to be continually open to revision – not on some artificial yearly cycle but every time there’s a “new now.” And this needs to include the times where the new now is not something we planned but is an emergent development from within or outside of the enterprise or is due to a major regulatory or market change.

So, if the focus is all on the first step and if our innovation cycle is getting steadily shorter, what’s the value of planning anything? Relax, I’m not about to fire the entire planning profession. If you don’t plan how you’re going to do something, what your dependencies are, how to react to the unexpected, etc., you’re unlikely to achieve your goal at all. Arguably that’s just project planning.

What about program planning? Well, if the program is so exposed to change maybe our concept of program planning needs to change. Instead of the plan being a thing fixed in stone that dictates everything, it could become a process in which the whole enterprise participates – itself open to emergence. The more I think about it, the more appealing that idea seems.

In my next post, I’ll go into more detail about how this might work, in particular from the perspective of Enterprise Architecture. I’ll also look more at how “the new planning” relates to innovation, emergence and social business and at the conflicts and synergies between these concerns. In the meantime, feel free to throw stones and see where the story doesn’t hold up.

Stuart Boardman is a Senior Business Consultant with KPN where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity. 

7 Comments

Filed under Enterprise Architecture, Uncategorized

Flying in the Cloud by the Seat of Our Pants

By Chris Harding, The Open Group

In the early days of aviation, when instruments were unreliable or non-existent, pilots often had to make judgments by instinct. This was known as “flying by the seat of your pants.” It was exciting, but error prone, and accidents were frequent. Today, enterprises are in that position with Cloud Computing.

Staying On Course

Flight navigation does not end with programming the flight plan. The navigator must check throughout the flight that the plane is on course. Successful use of Cloud requires not only an understanding of what it can do for the business, but also continuous monitoring that it is delivering value as expected. A change of service level, for example, can have as much effect on a user enterprise as a change of wind speed on an aircraft.

The Open Group conducted a Cloud Return on Investment (ROI) survey in 2011. Then, 55 percent of those surveyed felt that Cloud ROI would be easy to evaluate and justify, although only 35 percent had mechanisms in place to do it. When we repeated the survey in 2012, we found that the proportion that thought it would be easy had gone down to 44 percent, and only 20 percent had mechanisms in place. This shows, arguably, more realism, but it certainly doesn’t show any increased tendency to monitor the value delivered by Cloud. In fact, it shows the reverse. The enterprise pilots are flying by the seats of their pants. (The full survey results are available at http://www.opengroup.org/sites/default/files/contentimages/Documents/cloud_roi_formal_report_12_19_12-1.pdf)

They Have No Instruments

It is hard to blame the pilots for this, because they really do not have the instruments. The Open Group published a book in 2011, Cloud Computing for Business, that explains how to evaluate and monitor Cloud risk and ROI, with spreadsheet examples. The spreadsheet is pretty much the state of the art in Cloud ROI instrumentation. Like a compass, it is robust and functional at a basic level, but it does not have the sophistication and accuracy of a satellite navigation system. If we want better navigation, we must have better systems.
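
To make this concrete, the kind of calculation such a spreadsheet performs can be sketched in a few lines: discount the yearly benefit and cost streams to present value, then compare them. This is an illustration only; the function, the discount rate and the figures are invented for this post and are not taken from the book’s model.

```python
# Hedged sketch of a basic Cloud ROI calculation. All numbers hypothetical.

def cloud_roi(annual_benefits, annual_costs, discount_rate=0.08):
    """Return (net present value, simple ROI) for yearly benefit and
    cost streams, with year 0 first in each list."""
    def pv(stream):
        # Discount each year's figure back to present value.
        return sum(x / (1 + discount_rate) ** t for t, x in enumerate(stream))
    npv_benefits, npv_costs = pv(annual_benefits), pv(annual_costs)
    return npv_benefits - npv_costs, (npv_benefits - npv_costs) / npv_costs

# Three-year example: migration cost up front, savings ramping up later.
npv, roi = cloud_roi(annual_benefits=[0, 120_000, 150_000],
                     annual_costs=[100_000, 40_000, 40_000])
print(f"NPV: {npv:,.0f}  ROI: {roi:.1%}")
```

Continuous monitoring, in this analogy, means re-running the model with actual rather than projected figures whenever a service level or price changes.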

There is scope for Enterprise Architecture tool vendors to fill this need. As the inclusion of Cloud in Enterprise Architectures becomes commonplace, and Cloud Computing metrics and their relation to ROI become better understood, it should be possible to develop the financial components of Enterprise Architecture modeling tools so that the business impact of the Cloud systems can be seen more clearly.

The Enterprise Flight Crew

But this is not just down to the architects. The architecture is translated into systems by developers, and the systems are operated by operations staff. All of these people must be involved in the procurement and configuration of Cloud services and their monitoring through the Cloud buyers’ life cycle.

Cloud is already bringing development and operations closer together. The concept of DevOps, a paradigm that stresses communication, collaboration and integration between software developers and IT operations professionals, is increasingly being adopted by enterprises that use Cloud Computing. This communication, collaboration and integration must involve – indeed must start with – enterprise architects, and it must include the establishment and monitoring of Cloud ROI models. All of these professionals must co-operate to ensure that the Cloud-enabled enterprise keeps to its financial course.

The Architect as Pilot

The TOGAF® architecture development method includes a phase (Phase G) in which the architects participate in implementation governance. The following Phase H is currently devoted to architecture change management, with the objectives of ensuring that the architecture lifecycle is maintained, the architecture governance framework is executed, and the Enterprise Architecture capability meets current requirements. Perhaps Cloud architects should also think about ensuring that the system meets its business requirements, and continues to do so throughout its operation. They can then revisit earlier phases of the architecture development cycle (always a possibility in TOGAF) if it does not.

Flying the Cloud

Cloud Computing compresses the development lifecycle, cutting the time to market of new products and the time to operation of new enterprise systems. This is a huge benefit. It implies closer integration of architecture, development and operations. But this must be supported by proper instrumentation of the financial parameters of Cloud services, so that the architecture, development and operations professionals can keep the enterprise on course.

Flying by the seat of the pants must have been a great experience for the magnificent men in the flying machines of days gone by, but no one would think of taking that risk with the lives of 500 passengers on a modern aircraft. The business managers of a modern enterprise should not have to take that risk either. We must develop standard Cloud metrics and ROI models, so that they can have instruments to measure success.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

10 Comments

Filed under Cloud/SOA

Data Governance: A Fundamental Aspect of IT

By E.G. Nadhan, HP

In an earlier post, I explained how you can build upon SOA governance to realize Cloud governance. But underlying both paradigms is a fundamental aspect that we have been dealing with ever since the dawn of IT—and that’s the data itself.

In fact, IT used to be referred to as “data processing.” Despite the continuing evolution of IT through various platforms, technologies, architectures and tools, at the end of the day IT is still processing data. However, the data has taken multiple shapes and forms—both structured and unstructured—and Cloud Computing has opened up new opportunities to process and store it. There has been a need for data governance since the day data processing was born, and today it has taken on a whole new dimension.

“It’s the economy, stupid,” was a campaign slogan, coined to win a critical election in the United States in 1992. Today, the campaign slogan for governance in the land of IT should be, “It’s the data, stupid!”

Let us challenge ourselves with a few questions. Consider them the what, why, when, where, who and how of data governance.

What is data governance? It is the mechanism by which we ensure that the right corporate data is available to the right people, at the right time, in the right format, with the right context, through the right channels.

Why is data governance needed? The Cloud, social networking and user-owned devices (BYOD) have acted as catalysts, triggering unprecedented data growth in recent years. We need to control and understand the data we are dealing with in order to process it effectively and securely.

When should data governance be exercised? Well, when shouldn’t it be? Data governance kicks in at the source, where the data enters the enterprise. It continues across the information lifecycle, as data is processed and consumed to address business needs. And it is also essential when data is archived and/or purged.

Where does data governance apply? It applies to all business units and across all processes. Data governance has a critical role to play at the point of storage—the final checkpoint before data is stored as “golden” in a database. Data governance also applies across all layers of the architecture:

  • Presentation layer where the data enters the enterprise
  • Business logic layer where the business rules are applied to the data
  • Integration layer where data is routed
  • Storage layer where data finds its home
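
A governance checkpoint of the kind described above can be sketched as a simple rule check applied before data is served or stored. The roles, domains and rule structure below are hypothetical, invented purely to illustrate the “right data to the right people in the right format” idea.

```python
# Hypothetical steward rules: which roles may consume which data domain,
# and in which approved formats. Purely illustrative.
STEWARD_RULES = {
    "customer": {"allowed_roles": {"sales", "support"}, "formats": {"json"}},
    "finance":  {"allowed_roles": {"finance"},          "formats": {"csv", "json"}},
}

def governance_check(domain, role, fmt):
    """Return (ok, reason): may this role consume this data domain
    in this format, per the steward's rules?"""
    rules = STEWARD_RULES.get(domain)
    if rules is None:
        return False, f"no steward registered for domain '{domain}'"
    if role not in rules["allowed_roles"]:
        return False, f"role '{role}' not authorized for '{domain}' data"
    if fmt not in rules["formats"]:
        return False, f"format '{fmt}' not approved for '{domain}' data"
    return True, "ok"

ok, _ = governance_check("customer", "sales", "json")        # permitted
denied, reason = governance_check("customer", "finance", "json")  # refused
```

In practice such checks would sit in the integration layer, so that every route to storage passes through the same gate.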

Who does data governance apply to? It applies to all business leaders, consumers, generators and administrators of data. It is a good idea to identify stewards who own key data domains. Stewards must ensure that their data domains abide by the enterprise architectural principles, and should continuously analyze the impact of various business events on their domains.

How is data governance applied? Data governance must be exercised at the enterprise level, with governance federated to individual business units and data domains. It should be applied proactively whenever a new process, application, repository or interface is introduced, since existing data is likely to be impacted. In the absence of effective data governance, data is likely to be duplicated, either by chance or by choice.
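
The duplication risk mentioned above is one place where governance can be partly automated. The sketch below groups records that collide on a normalized key and flags them for a steward to review; the normalization rules are an assumption for illustration, not a standard.

```python
# Minimal duplicate screening: records whose normalized (name, email)
# keys collide are flagged. Normalization rules are illustrative only.

def normalize(record):
    """Canonical key: lower-cased, whitespace-stripped name and email,
    so trivial variations of the same record collide."""
    return (record["name"].strip().lower(), record["email"].strip().lower())

def find_duplicates(records):
    """Group records by canonical key; return only groups of two or more."""
    groups = {}
    for rec in records:
        groups.setdefault(normalize(rec), []).append(rec)
    return {key: grp for key, grp in groups.items() if len(grp) > 1}

records = [
    {"name": "Ada Lovelace ", "email": "ada@example.com"},
    {"name": "ada lovelace",  "email": "Ada@example.com"},
    {"name": "Grace Hopper",  "email": "grace@example.com"},
]
dupes = find_duplicates(records)  # the two Ada records collide
```

Running such a screen when a new repository or interface is introduced catches duplication by chance; duplication by choice still needs a steward’s judgment.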

In our data universe, “informationalization” yields valuable intelligence that enables effective decision-making and analysis. However, even having the best people, process and technology is not going to yield the desired outcomes if the underlying data is suspect.

How about you? How is the data in your enterprise? What governance measures do you have in place? I would like to know.

A version of this blog post was originally published on HP’s Journey through Enterprise IT Services blog.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has more than 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project, and is also the founding co-chair for the Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.

1 Comment

Filed under Cloud, Cloud/SOA

2013 Open Group Predictions, Vol. 2

By The Open Group

Continuing on the theme of predictions, here are a few more, which focus on global IT trends, business architecture, OTTF and Open Group events in 2013.

Global Enterprise Architecture

By Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities

Cloud is no longer a bleeding-edge technology – most organizations are already well on their way to deploying it. However, Cloud implementations are resurrecting a perennial problem for organizations—integration. Now that Cloud infrastructures are being deployed, organizations are having trouble integrating different systems, especially with systems hosted by third parties outside their organization. What will happen when two, three or four technical delivery systems are hosted both on and off premises? This presents a looming integration problem.

As we see more and more organizations buying into cloud infrastructures, we’ll see an increase in cross-platform integration architectures globally in 2013. The role of the enterprise architect will become more complex. Architectures must not only ensure that systems are integrated properly, but architects also need to figure out a way to integrate outsourced teams and services and determine responsibility across all systems. Additionally, outsourcing and integration will lead to increased focus on security in the coming year, especially in healthcare and financial sectors. When so many people are involved, and responsibility is shared or lost in the process, gaping holes can be left unnoticed. As data is increasingly shared between organizations and current trends escalate, security will also become more and more of a concern. Integration may yield great rewards architecturally, but it also means greater exposure to vulnerabilities outside of your firewall.

Within the Architecture Forum, we will be working on improvements to the TOGAF® standard throughout 2013, as well as an effort to continue to harmonize the TOGAF specification with the ArchiMate® modelling language.  The Forum also expects to publish a whitepaper on application portfolio management in the new year, as well as be involved in the upcoming Cloud Reference Architecture.

In China, The Open Group is progressing well. In 2013, we’ll continue translating The Open Group website, books and whitepapers from English to Chinese. Partnerships and Open CA certification will remain in the forefront of global priorities, as well as enrolling TOGAF trainers throughout Asia Pacific as Open Group members. There are a lot of exciting developments arising, and we will keep you updated as we expand our footprint in China and the rest of Asia.

Open Group Events in 2013

By Patty Donovan, Vice President of Membership and Events

In 2013, the biggest change for us will be our quarterly summit. The focus will shift toward an emphasis on verticals. This new focus will debut at our April event in Sydney, where the vertical themes include Mining, Government and Finance. Additional vertical themes we plan to cover throughout the year include Healthcare, Transportation and Retail, to name a few. We will also continue to increase the number of our popular Livestream sessions, as we have seen an extremely positive reaction to them, as well as to our On-Demand sessions, which let you hear best-selling authors and industry leaders who participated as keynote and track speakers throughout the year.

Regarding social media, we made big strides in 2012 and will continue to make this a primary focus of The Open Group. If you haven’t already, please “like” us on Facebook, follow us on Twitter, join the chat (#ogChat) during one of our Security-focused Tweet Jams, and join our LinkedIn Group. And if you have the time, we’d love for you to contribute to The Open Group blog.

We’re always open to new suggestions, so if you have a creative idea on how we can improve your membership, Open Group events, webinars or podcasts, please let me know! Also, please be sure to attend the upcoming Open Group Conference in Newport Beach, Calif., taking place January 28-31. The conference will address Big Data.

Business Architecture

By Steve Philp, Marketing Director for Open CA and Open CITS

Business Architecture is still a relatively new discipline, but in 2013 I think it will continue to grow in prominence and visibility from an executive perspective. C-Level decision makers are not just looking at operational efficiency initiatives and cost reduction programs to grow their future revenue streams; they are also looking at market strategy and opportunity analysis.

Business Architects are extremely valuable to an organization when they understand market and technology trends in a particular sector. They can then work with business leaders to develop strategies based on the capabilities and positioning of the company to increase revenue, enhance their market position and improve customer loyalty.

Senior management recognizes that technology also plays a crucial role in how organizations can achieve their business goals. A major role of the Business Architect is to help merge technology with business processes to help facilitate this business transformation.

There are a number of key technology areas for 2013 where Business Architects will be called upon to engage with the business such as Cloud Computing, Big Data and social networking. Therefore, the need to have competent Business Architects is a high priority in both the developed and emerging markets and the demand for Business Architects currently exceeds the supply. There are some training and certification programs available based on a body of knowledge, but how do you establish who is a practicing Business Architect if you are looking to recruit?

The Open Group is trying to address this issue and has incorporated a Business Architecture stream into The Open Group Certified Architect (Open CA) program. There has already been significant interest in this stream from both organizations and practitioners alike. This is because Open CA is a skills- and experience-based program that recognizes, at different levels, those individuals who are actually performing in a Business Architecture role. You must complete a candidate application package and be interviewed by your peers. Achieving certification demonstrates your competency as a Business Architect and therefore will stand you in good stead for both next year and beyond.

You can view the conformance criteria for the Open CA Business Architecture stream at https://www2.opengroup.org/ogsys/catalog/X120.

Trusted Technology

By Sally Long, Director of Consortia Services

The interdependency of all countries on global technology providers, and technology providers’ dependencies on component suppliers around the world, are more evident than ever before. The need to work together in a vendor-neutral, country-neutral environment to assure there are standards for securing technology development and supply chain operations will become increasingly apparent in 2013. Securing the global supply chain cannot be done in a vacuum by a few providers or a few governments; it must be achieved by working together with all governments, providers, component suppliers and integrators, and it must be done through open standards and accreditation programs that demonstrate conformance to those standards and are available to everyone.

The Open Group’s Trusted Technology Forum is providing that open, vendor and country-neutral environment, where suppliers from all countries and governments from around the world can work together in a trusted collaborative environment, to create a standard and an accreditation program for securing the global supply chain. The Open Trusted Technology Provider Standard (O-TTPS) Snapshot (Draft) was published in March of 2012 and is the basis for our 2013 predictions.

We predict that in 2013:

  • Version 1.0 of the O-TTPS (Standard) will be published.
  • Version 1.0 will be submitted to the ISO PAS process in 2013, and will likely become part of the ISO/IEC 27036 standard, where Part 5 of that ISO standard is already reserved for the O-TTPS work.
  • An O-TTPS Accreditation Program – open to all providers, component suppliers and integrators – will be launched.
  • The Forum will continue the trend of increased member participation from governments and suppliers around the world.

4 Comments

Filed under Business Architecture, Conference, Enterprise Architecture, O-TTF, OTTF

2013 Open Group Predictions, Vol. 1

By The Open Group

A big thank you to all of our members and staff who have made 2012 another great year for The Open Group. There were many notable achievements this year, including the release of ArchiMate 2.0, the launch of the Future Airborne Capability Environment (FACE™) Technical Standard and the publication of the SOA Reference Architecture (SOA RA) and the Service-Oriented Cloud Computing Infrastructure Framework (SOCCI).

As we wrap up 2012, we couldn’t help but look towards what is to come in 2013 for The Open Group and the industries we’re a part of. Without further ado, here they are:

Big Data
By Dave Lounsbury, Chief Technical Officer

Big Data is on top of everyone’s mind these days. Consumerization, mobile smart devices, and expanding retail and sensor networks are generating massive amounts of data on behavior, environment, location and buying patterns – producing what is being called “Big Data.” In addition, as the use of personal devices and social networks continues to gain popularity, so does the expectation of anytime, anywhere access to such data and the computational power to use it. Organizations will turn to IT to restructure their services to meet the growing expectation of control over and access to data.

Organizations must embrace Big Data to drive their decision-making and to provide the optimal mix of services to customers. Big Data is becoming so big that the big challenge is how to use it to make timely decisions. IT naturally focuses on collecting data, so Big Data itself is not an issue. To allow humans to keep on top of this flood of data, industry will need to move away from programming computers to store and process data, and toward teaching computers how to assess large amounts of uncorrelated data and draw inferences from it on their own. We also need to start thinking about the skills people need in the IT world not only to handle Big Data, but to make it actionable. Do we need “Data Architects,” and if so, what would their role be?

In 2013, we will see the beginning of the Intellectual Computing era. IT will play an essential role in this new era and will need to help enterprises look at uncorrelated data to find the answer.

Security

By Jim Hietala, Vice President of Security

As 2012 comes to a close, some of the big developments in security over the past year include:

  • Continuation of hacktivism attacks.
  • An increase in significant and persistent threats targeting governments and large enterprises.
  • Progress by the notable U.S. National Strategy for Trusted Identities in Cyberspace, which in the second half of the year saw industry and government movement to address fundamental security issues.
  • Security breaches discovered by third parties rather than the affected organizations, which often had no idea that they had been breached. Data from the 2012 Verizon report suggests that 92 percent of companies breached were notified by a third party.
  • Acknowledgement from senior U.S. cybersecurity professionals that organizations fall into two groups: those that know they’ve been penetrated, and those that have been penetrated, but don’t yet know it.

In 2013, we’ll no doubt see more of the same on the attack front, plus increased focus on mobile attack vectors. We’ll also see more focus on detective security controls, reflecting greater awareness of the threat and of the reality that many large organizations have already been penetrated; responding appropriately therefore requires far more attention to detection and incident response.

We’ll also likely see the U.S. move forward with cybersecurity guidance from the executive branch, in the form of a Presidential directive. New national cybersecurity legislation seemed to come close to happening in 2012, and when it failed to become a reality, there were many indications that the administration would make something happen by executive order.

Enterprise Architecture

By Leonard Fehskens, Vice President of Skills and Capabilities

In preparation for looking back at 2012 and forward to 2013, I reviewed what I wrote last year about 2011 and 2012.

Probably the most significant thing from my perspective is that so little has changed. In fact, I think in many respects the confusion about what Enterprise Architecture (EA) and Business Architecture are about has gotten worse.

Stress within the EA community continues to grow as both the demands placed on it and the diversity of opinion within it increase. This year, I saw a lot more concern about the value proposition for EA, but not a lot of (read “almost no”) convergence on what that value proposition is.

Last year I wrote “As I expected at this time last year, the conventional wisdom about Enterprise Architecture continues to spin its wheels.”  No need to change a word of that. What little progress at the leading edge was made in 2011 seems to have had no effect in 2012. I think this is largely a consequence of the dust thrown in the eyes of the community by the ascendance of the concept of “Business Architecture,” which is still struggling to define itself.  Business Architecture seems to me to have supplanted last year’s infatuation with “enterprise transformation” as the means of compensating for the EA community’s entrenched IT-centric perspective.

I think this trend and the quest for a value proposition are symptomatic of the same thing — the urgent need for Enterprise Architecture to make its case to its stakeholder community, especially to the people who are paying the bills. Something I saw in 2011 that became almost epidemic in 2012 is conflation — the inclusion under the Enterprise Architecture umbrella of nearly anything with the slightest taste of “business” to it. This has had the unfortunate effect of further obscuring the unique contribution of Enterprise Architecture, which is to bring architectural thinking to bear on the design of human enterprise.

So, while I’m not quite mired in the slough of despond, I am discouraged by the community’s inability to advance the state of the art. In a private communication to some colleagues I wrote, “the conventional wisdom on EA is at about the same state of maturity as 14th century cosmology. It is obvious to even the most casual observer that the earth is both flat and the center of the universe. We debate what happens when you fall off the edge of the Earth, and is the flat earth carried on the back of a turtle or an elephant?  Does the walking of the turtle or elephant rotate the crystalline sphere of the heavens, or does the rotation of the sphere require the turtlephant to walk to keep the earth level?  These are obviously the questions we need to answer.”

Cloud

By Chris Harding, Director of Interoperability

2012 has seen the establishment of Cloud Computing as a mainstream resource for enterprise architects and the emergence of Big Data as the latest hot topic, likely to be mainstream for the future. Meanwhile, Service-Oriented Architecture (SOA) has kept its position as an architectural style of choice for delivering distributed solutions, and the move to ever more powerful mobile devices continues. These trends have been reflected in the activities of our Cloud Computing Work Group and in the continuing support by members of our SOA work.

The use of Cloud, Mobile Computing, and Big Data to deliver on-line systems that are available anywhere at any time is setting a new norm for customer expectations. In 2013, we will see the development of Enterprise Architecture practice to ensure the consistent delivery of these systems by IT professionals, and to support the evolution of creative new computing solutions.

IT systems are there to enable the business to operate more effectively. Customers expect constant on-line access through mobile and other devices. Business organizations work better when they focus on their core capabilities, and let external service providers take care of the rest. On-line data is a huge resource, so far largely untapped. Distributed, Cloud-enabled systems, using Big Data, and architected on service-oriented principles, are the best enablers of effective business operations. There will be a convergence of SOA, Mobility, Cloud Computing, and Big Data as they are seen from the overall perspective of the enterprise architect.

Within The Open Group, the SOA and Cloud Work Groups will continue their individual work, and will collaborate with other forums and work groups, and with outside organizations, to foster the convergence of IT disciplines for distributed computing.
