Tag Archives: The Open Group Conference

NASCIO Defines State of Enterprise Architecture at The Open Group Conference in Philadelphia

By E.G. Nadhan, HP

I have attended and blogged about many Open Group conferences. As at other conferences, the keynotes provide valuable insight into the key messages and the underlying theme, which for The Open Group Conference in Philadelphia is Enterprise Architecture and Enterprise Transformation. It is therefore no surprise that Eric Sweden, Program Director, Enterprise Architecture & Governance, NASCIO, will be delivering one of the keynotes, "State of the States: NASCIO on Enterprise Architecture". Sweden asserts that "Enterprise Architecture" provides an operating discipline for the creation, operation, continual re-evaluation and transformation of an "Enterprise." Not only do I agree with this assertion, but I would add that the proper creation, operation and continuous evaluation of the "Enterprise" systemically drive its transformation. Let's see how.

Creation. This phase involves defining the Enterprise Architecture (EA) in the first place. Most often, this means defining an architecture that factors in what is in place today while taking the future direction into account. TOGAF® (The Open Group Architecture Framework) provides a framework for developing this architecture from a business, application, data, infrastructure and technology standpoint, in alignment with the overall Architecture Vision and its associated architectural governance.

Operation. EA is not a done deal once it has been defined. It is vital that the defined EA is sustained consistently with the advent of new projects, new initiatives, new technologies and new paradigms. As the abstract states, EA is a comprehensive business discipline that drives business and IT investments. In addition to driving investments, the operation phase includes making the requisite changes to the EA as a result of those investments.

Continuous Evaluation. We live in a landscape of continuous change with innovative solutions and technologies constantly emerging. Moreover, the business objectives of the enterprise are constantly impacted by market dynamics, mergers and acquisitions. Therefore, the EA defined and in operation must be continuously evaluated against the architectural principles, while exercising architectural governance across the enterprise.

Transformation. EA is an operating discipline for the transformation of an enterprise. Enterprise Transformation is not a destination — it is a journey that needs to be managed — as characterized by Twentieth Century Fox CIO, John Herbert. To Forrester Analyst Phil Murphy, Transformation is like the Little Engine That Could — focusing on the business functions that matter. (Big Data – highlighted in another keynote at this conference by Michael Cavaretta — is a paradigm gaining a lot of ground for enterprises to stay competitive in the future.)

Global organizations are enterprises of enterprises, undergoing transformation while facing the challenges of systemic architectural governance. NASCIO has valuable insight into the challenges faced by the 50 "enterprises" represented by the individual United States: challenges that pit the need for healthy co-existence among the states against each state's desire to retain a degree of autonomy. I therefore look forward to this keynote to see how EA done right can drive the transformation of the Enterprise.

By the way, remember when Enterprise Architecture was done wrong close to the venue of another Open Group conference?

How does Enterprise Architecture drive the transformation of your enterprise? Please let me know.

A version of this blog post originally appeared on the HP Journey through Enterprise IT Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project.


Filed under Business Architecture, Cloud, Cloud/SOA, Conference, Enterprise Architecture, Enterprise Transformation, TOGAF®

As Platform 3.0 ripens, expect agile access and distribution of actionable intelligence across enterprises, says The Open Group panel

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here

This latest BriefingsDirect discussion, leading into The Open Group Conference on July 15 in Philadelphia, brings together a panel of experts to explore the business implications of the current shift to so-called Platform 3.0.

Known as the new model through which big data, cloud, and mobile and social — in combination — allow for advanced intelligence and automation in business, Platform 3.0 has so far lacked standards or even clear definitions.

The Open Group and its community are poised to change that, and we're here now to learn more about how to leverage Platform 3.0 as more than an IT shift, and as a business game-changer. It will be a big topic at next week's conference.

The panel: Dave Lounsbury, Chief Technical Officer at The Open Group; Chris Harding, Director of Interoperability at The Open Group; and Mark Skilton, Global Director in the Strategy Office at Capgemini. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

This special BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference, which is focused on enterprise transformation in the finance, government, and healthcare sectors. Registration to the conference remains open. Follow the conference on Twitter at #ogPHL. [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: A lot of people are still wrapping their minds around this notion of Platform 3.0, something that is a whole greater than the sum of the parts. Why is this more than an IT conversation or a shift in how things are delivered? Why are the business implications momentous?

Lounsbury: Well, Dana, there are a lot of IT changes or technical changes going on that are bringing together a lot of factors. They're turning into this sort of supersaturated solution of ideas and possibilities, and into this emerging idea that all of it represents a new platform. I think it's a pretty fundamental change.


If you look at history, not just the history of IT, but all of human history, you see that step changes in societies and organizations are frequently driven by communication or connectedness. Think about the evolution of speech or the invention of the alphabet or movable-type printing. These technical innovations that we’re seeing are bringing together these vast sources of data about the world around us and doing it in real time.

Further, we're starting to see a lot of rapid evolution in how you turn data into information and present that information in a way that people can make decisions on. Given all that, we're starting to realize we're on the cusp of another step of connectedness and awareness.

Fundamental changes

This really is going to drive some fundamental changes in the way we organize ourselves. Part of what The Open Group is doing in trying to bring Platform 3.0 together is to get ahead of this and make sure that we understand not just what technical standards are needed, but how businesses will need to adapt and evolve their business processes in order to take maximum advantage of this change in the way we look at information.

Harding: Enterprises have to keep up with the way things are moving in order to keep their positions in their industries. Enterprises can't afford to be working with yesterday's technology. It's a case of being able to understand the information they're presented with and make the best decisions.


We’ve always talked about computers being about input, process, and output. Years ago, the input might have been through a teletype, the processing on a computer in the back office, and the output on print-out paper.

Now, we’re talking about the input being through a range of sensors and social media, the processing is done on the cloud, and the output goes to your mobile device, so you have it wherever you are when you need it. Enterprises that stick in the past are probably going to suffer.

Gardner: Mark Skilton, the ability to manage data at greater speed and scale, the whole three Vs (velocity, volume, and value), could on its own perhaps be a game-changing shift in the market. The drive of mobile devices into the lives of both consumers and workers is also a very big deal.

Of course, cloud has been an ongoing evolution of emphasis towards agility and efficiency in how workloads are supported. But is there something about the combination of how these are coming together at this particular time that, in your opinion, substantiates The Open Group’s emphasis on this as a literal platform shift?

Skilton: It is exactly that in terms of the workloads. The world we’re now into is the multi-workload environment, where you have mobile workloads, storage and compute workloads, and social networking workloads. There are many different types of data and traffic today in different cloud platforms and devices.


It has to do with not just one solution, not one subscription model, because we're now into this subscription-model era … the subscription economy, as one group tends to describe it. Now, we're looking at not just providing the security and the infrastructure to deliver this kind of capability to a mobile device, as Chris was saying. The question is, how can you do this horizontally across other platforms? How can you integrate these things? This is something that is critical to the new order.

So Platform 3.0 is addressing this point by bringing all of this together. Just look at the numbers. Look at the scale that we're dealing with: 1.7 billion mobile devices sold in 2012, and an estimated 6.8 billion mobile subscriptions according to the International Telecommunications Union (ITU), equivalent to 96 percent of the world population.

Massive growth

We have had massive growth in the scale of mobile data traffic and internet data expansion. Mobile data traffic is increasing 18-fold from 2011 to 2016, reaching 130 exabytes annually. We passed 1 zettabyte of global online data storage back in 2010, and IP data traffic is predicted to pass 1.3 zettabytes by 2016, with internet video accounting for 61 percent of total internet data, according to Cisco studies.

These studies also predict that data center traffic, combining network and internet-based storage, will reach 6.6 zettabytes annually, and that nearly two-thirds of this will be cloud-based by 2016. This will only grow as social networking reaches nearly one in four people around the world, with 1.7 billion using at least one form of social networking in 2013, rising to one in three people, a 2.55 billion global audience, by 2017; another extraordinary figure, this one from an eMarketing.com study.

It is not surprising that many industry analysts are seeing 30 to 40 percent growth in the converging technologies of mobility, social computing, big data and cloud, and that the shift to B2C commerce, which passed $1 trillion in 2012, is just the start of a wider digital transformation.

These numbers speak volumes in terms of the integration, interoperability, and connection of the new types of business and social realities that we have today.

Gardner: Why should IT be thinking about this as a fundamental shift, rather than a modest change?

Lounsbury: A lot depends on how you define your IT organization. It’s useful to separate the plumbing from the water. If we think of the water as the information that’s flowing, it’s how we make sure that the water is pure and getting to the places where you need to have the taps, where you need to have the water, etc.

But the plumbing also has to be up to the job. It needs to have the capacity. It needs to have new tools to filter out the impurities from the water. There’s no point giving someone data if it’s not been properly managed or if there’s incorrect information.

What's going to happen in IT is that not only do we have to focus on the mechanics of the plumbing, where we see things like the big databases that have emerged in the open-source world and things of that nature, but there are also the analytics and the data-stewardship aspects of it.

We need to bring in mechanisms so the data is valid and kept up to date. We need to indicate its freshness to the decision makers. Furthermore, IT is going to be called upon, whether as part of enterprise IT or where end users drive the selection of analytic and recommendation tools, to take the data and turn it into information. One thing you can't do with business decision makers is overwhelm them with big rafts of data and expect them to figure it out.

You really need to present the information in a way that they can use to quickly make business decisions. That is an addition to the role of IT that may not have been there traditionally — how you think about the data and the role of what, in the beginning, was called data scientist and things of that nature.

Shift in constituency

Skilton: I'd just like to add to Dave's excellent points about how the shape of data has changed, and about why IT should get involved. We're seeing a shift in the constituency of who is using this data.

We have the Chief Marketing Officer, the Chief Procurement Officer and other key line-of-business managers taking more direct control over the uses of information technology that enable their channels and interactions through mobile, social and data analytics. We've got processes that were previously managed just by IT now being consumed by significant stakeholders and investors in the organization.

We have to recognize in IT that we are the masters of our own destiny. The information needs to be sorted into new types of mobile devices, new types of data intelligence, and ways of delivering this kind of service.

I read recently in the MIT Sloan Management Review an article that asked what the role of the CIO is. There is still the critical role of managing the security, compliance, and performance of these systems. But there's also a socialization of IT, and this is where positioning architectures that work across platforms is key to delivering real value to business users and the IT community.

Gardner: How do we prevent this from going off the rails?

Harding: This is a very important point. And to add to the difficulties, it's not only that a whole set of different people are getting involved with different kinds of information; there's also a step change in the speed with which all this is delivered. It's no longer the case that you can say, "Oh well, we need some kind of information system to manage this information. We'll procure it and get a program written," and a year later it will be in place delivering reports.

Now, people are looking to make sense of this information on the fly, if possible. It's really a case of having the standard technology platform, and also the systems for using it and the business processes around it, understood and in place.

Then you can do all these things quickly, build on what people have learned in the past, and not go off into all sorts of new experimental things that might not lead anywhere. It's a case of building up the standard platform and industry best practice. This is where The Open Group can really help things along, by being a recipient and a reflector of best practice and standards.

Skilton: Capgemini has been doing work in this area. I break it down into four levels of scalability. First is platform scalability: understanding what you can do with your current legacy systems when introducing cloud computing or big data, and the infrastructure that gives you what we call multiplexing of resources. We're very much seeing this idea of scalable platform resource management, and you see that a lot with the heritage of virtualization.

Second, going into networking, is network scalability. A lot of customers who have inherited old telecommunications networks are looking to introduce new MPLS-type scalable networks. The reason for this is that it's all about connectivity in the field. I meet a number of clients who say, "We've got this cloud service," or "This service is in a certain area of my country. If I move to another part of the country, or if I'm traveling, I can't get connectivity." That's the big issue of scaling.

Another level is application programming interfaces (APIs). What we're seeing now is an explosion of integration and application services using API connectivity, and these are creating huge opportunities for what Chris Anderson of Wired used to call the "long-tail effect." It is now a reality in terms of building the kind of social connectivity and data exchange that Dave was talking about.

Finally, there are the marketplaces. Companies need to think about what online marketplaces they need for digital branding, social branding, social networks, and awareness of their customers, suppliers, and employees. These four levels are where customers need to start their IT strategy thinking, and Platform 3.0 is right on target in trying to work out the strategies for each of these new levels of scalability.

Gardner: We’re coming up on The Open Group Conference in Philadelphia very shortly. What should we expect from that? What is The Open Group doing vis-à-vis Platform 3.0, and how can organizations benefit from seeing a more methodological or standardized approach to some way of rationalizing all of this complexity? [Registration to the conference remains open. Follow the conference on Twitter at #ogPHL.]

Lounsbury: We're still in the formational stages of the "third platform," or Platform 3.0, for The Open Group and for the industry. To some extent, we're starting pretty much at the ground floor with the Platform 3.0 Forum. We're leveraging a lot of the components that have been done previously through the work of the members of The Open Group in cloud, service-oriented architecture (SOA), and some of the work on the Internet of Things.

First step

Our first step is to bring those things together to make sure that we've got a foundation to depart from. The next step is that, through our Platform 3.0 Forum and its Steering Committee, we can ask people to talk about what their scenarios are for adoption of Platform 3.0.

Those can range from the technological aspects and what standards are needed to, taking a cue from our previous cloud working group, the business best practices needed to understand and then adopt some of these Platform 3.0 concepts and get your business using them.

What we're really working toward in Philadelphia is to set up an exchange of ideas among the people who can, from the buy side, bring in their use cases and, from the supply side, bring in their ideas about what the technology possibilities are, and to bring those together and start to shape a set of tracks where we can create business and technical artifacts that will help businesses adopt the Platform 3.0 concept.

Harding: We certainly also need to understand the business environment within which Platform 3.0 will be used. We’ve heard already about new players, new roles of various kinds that are appearing, and the fact that the technology is there and the business is adapting to this to use technology in new ways.

For example, we’ve heard about the data scientist. The data scientist is a new kind of role, a new kind of person, that is playing a particular part in all this within enterprises. We’re also hearing about marketplaces for services, new ways in which services are being made available and combined.

We really need to understand the actors in this new kind of business scenario. What are the pain points that people are having? What are the problems that need to be resolved in order to understand what kind of shape the new platform will have? That is one of the key things that the Platform 3.0 Forum members will be getting their teeth into.

Gardner: Looking to the future, we can think about how powerful data becomes when processed properly, with recommendations delivered to the right place at the right time. But we also recognize that there are limits to a manual, or even human-level, approach to that, scientist by scientist, analysis by analysis.

When we think about the implications of automation, there are already some early examples of bringing cloud, data, social, mobile, and granular interactions together in which we've begun to see how a recommendation engine can be brought to bear. I'm thinking of the Siri capability at Apple and even some of the examples of the Watson technology at IBM.

So, to our panel: are there unknown unknowns about where this will lead in terms of having extraordinary intelligence, a supercomputer or a data center of supercomputers, brought to bear on almost any problem instantly, with the result then delivered directly to a data center, a smartphone, any number of endpoints?

It seems that the potential here is mind boggling. Mark Skilton, any thoughts?

Skilton: What we're talking about is the next generation of the Internet. The advent of IPv6 and the explosion in multimedia services will start to drive it.

I think that in the future, we’ll be talking about a multiplicity of information that is not just about services at your location or your personal lifestyle or your working preferences. We’ll see a convergence of information and services across multiple devices and new types of “co-presence services” that interact with your needs and social networks to provide predictive augmented information value.

When you start to get much more information about the context of where you are, insight into what's happening, and the predictive nature of these services, they become much more embedded in everyday life, in real time, in the context of what you are doing.

I expect to see much more intelligent applications coming forward on mobile devices in the next 5 to 10 years, driven by this interconnected explosion of real-time processing of data, traffic, devices and social networking that we describe in the scope of Platform 3.0. This will add augmented intelligence, and it is something that's really exciting and a complete game-changer. I would call it the next killer app.

First-mover benefits

Gardner: There’s this notion of intelligence brought to bear rapidly in context, at a manageable cost. This seems to me a big change for businesses. We could, of course, go into the social implications as well, but just for businesses, that alone to me would be an incentive to get thinking and acting on this. So any thoughts about where businesses that do this well would be able to have significant advantage and first mover benefits?

Harding: Businesses always are taking stock. They understand their environments. They understand how the world that they live in is changing and they understand what part they play in it. It will be down to individual businesses to look at this new technical possibility and say, “So now this is where we could make a change to our business.” It’s the vision moment where you see a combination of technical possibility and business advantage that will work for your organization.

It’s going to be different for every business, and I’m very happy to say this, it’s something that computers aren’t going to be able to do for a very long time yet. It’s going to really be down to business people to do this as they have been doing for centuries and millennia, to understand how they can take advantage of these things.

So it’s a very exciting time, and we’ll see businesses understanding and developing their individual business visions as the starting point for a cycle of business transformation, which is what we’ll be very much talking about in Philadelphia. So yes, there will be businesses that gain advantage, but I wouldn’t point to any particular business, or any particular sector and say, “It’s going to be them” or “It’s going to be them.”

Gardner: Dave Lounsbury, a last word to you. In terms of some of the future implications and vision, where could this lead in the not-too-distant future?

Lounsbury: I’d disagree a bit with my colleagues on this, and this could probably be a podcast on its own, Dana. You mentioned Siri, and I believe IBM just announced the commercial version of its Watson recommendation and analysis engine for use in some customer-facing applications.

I definitely see these as the thin end of the wedge in filling the gap between the growth of data and the analysis of data. I can imagine, not in the next couple of years but in the next couple of technology cycles, that we'll see the concept of recommendations and analysis as a service, to bring it full circle to cloud. And keep in mind that all of case law is data, and all the medical textbooks ever written are data. Pick your industry, and there is a huge knowledge base that humans must currently keep on top of.

These approaches and advances in recommendation engines, driven by the availability of big data, are going to produce profound changes in the way knowledge workers do their jobs. That's something that businesses, including their IT functions, absolutely need to stay in front of to remain competitive in the next decade or so.


Filed under ArchiMate®, Business Architecture, Cloud, Cloud/SOA, Conference, Data management, Enterprise Architecture, Platform 3.0, Professional Development, TOGAF®

Three laws of the next Internet of Things – the new platforming evolution in computing

By Mark Skilton, Global Director at Capgemini

There is a wave of new devices and services growing in strength and extending the boundary of what is possible in today's internet-driven economy and lifestyle. A striking feature is the link between apps on smartphones and tablets and the ability to connect not just to websites but also to data-collection sensors and intelligent analysis of that information. A key driver has been the improvement in the cost-performance curve of information technology, not just in CPU and storage: the easy availability and affordability of highly powerful computing and mass storage in mobile devices, coupled with access to complex sensors, advanced optics and screen displays, results in a potentially truly immersive experience. This is a long way from the early days of the radio-frequency identity (RFID) tags that are the forerunner of this evolution. Digitization of information and the interpretation of its meaning is everywhere, moving into a range of industries and augmented services that create new possibilities and value. A key challenge is how to understand this growth of devices, sensors, content and services across the myriad platforms and permutations this can bring.

• Energy conservation
  o Through home and building energy management.

• Lifestyle activity
  o Motion-sensor accelerometers, ambient light sensors, moisture sensors, gyroscopes, proximity sensors.

• Lifestyle health
  o Heart rate, blood oxygen levels, respiratory rate and heart rate variability for cardiorespiratory monitoring are some of the potential that connected devices offer.

• Medical health
  o Biomedical sensing for patient care and elderly care management: heart, lung and kidney dialysis, medical valve and organ implants, orthopaedic implants and brain-image scanning. Devices can monitor elderly physical activity, blood pressure and other factors unobtrusively and proactively. These aim to drive improvements in prevention, testing, early detection, surgery and treatment, helping improve quality of life and address rising medical costs and the societal impact of an aging population.

• Transport
  o Precision global positioning, local real-time image perception and interpretation sensing, dynamic electromechanical control systems.

• Materials science, engineering and manufacturing
  o Strain gauges, stress sensors, precision lasers, micro- and nanoparticle engineering, cellular manipulation, gene splicing. 3D printing has the potential to revolutionize automated manufacturing, and through distributed services over the internet, manufacturing can potentially be accessed by anyone.

• Physical safety and security
  o Controlling children's access to their mobile phones via your PC is an example of parental protection of children, using web-based applications to monitor and control mobile and computing access. Another is keyless entry using your phone: Wi-Fi, Bluetooth and internet network apps and devices that automate the locking of physical doors and entry, remotely or in proximity.

• Remote activity and swarming robotics
  o The development of autonomous robotics to respond and support exploration and services in harsh or inaccessible environments; disabled support through robotic prosthetics and communication synthesis; swarming robots that fly or mimic group behavior and natural decision making.

These are just the tip of what is possible: the early commercial ventures that are starting to drive new ways to think about information technology and application services.

A key feature I noticed in all these devices is that they augment previous layers of technology by sitting on top of them and adding extra value. While the long shadow of the first-generation giants of the public internet (Apple, Google, Amazon) often gives the impression that succeeding means a controlled platform and an investment of millions, these new technologies use existing infrastructure and operate across a federated, distributed architecture that represents a new kind of platforming paradigm of multiple systems.

Perhaps a paradigm of new technology cycles is that as new tech arrives, it cannibalizes older technologies. Clearly nothing is immune to this trend, not even the cloud. I'll call it the evolution of a kind of technology laws (a feature I saw in Charles Fine's book Clockspeed, http://www.businessforum.com/clockspeed.html, but adapted here as a function of compound cannibalization and augmentation). I think Big Data is an example of such a shift in this direction, as augmented informatics enables major next-generation power plays for added-value services.

These devices and sensors can work with existing infrastructure services and resources, but they also create a new kind of computing architecture that involves many technologies, standards and systems: what was in earlier times called "system of systems" integration (examples are seen in the defence sector, http://www.bctmod.army.mil/SoSI/sosi.html, and in digital ecosystems in the government sector, http://www.eurativ.com/specialreport-skills/kroes-europe-needs-digital-ecosy-interview-517996).

While a sensor device can replace the existing thermostat in your house, or the lighting, or the access locks on your doors, these devices offer a new kind of augmented experience that provides information and insight, enabling better control of the wider environment and of the actions and decisions within a context.

This leads to a second feature of these devices: the ability to learn and adapt from inputs and the environment. This probably has an even larger impact than the first feature (using existing infrastructure), in that the ability to change outcomes is a revolution in information. The previous idea of static information and human sense-making of this data is being replaced by the active pursuit of automated intelligence from the machines we build. Earlier design paradigms that needed to define declarative services, what IT calls CRUD (Create, Read, Update, Delete), as predefined and managed transactions are being replaced by machine-learning algorithms that seek to build a second generation of intelligent services that alter their results and services with the passage of time and usage characteristics.
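As a rough illustration of that shift, here is a minimal Python sketch (entirely hypothetical, not drawn from any product mentioned in this post) contrasting a predefined CRUD-style setting with a service that adapts its behavior from usage over time:

```python
class StaticThermostat:
    """Declarative CRUD model: the setpoint changes only via an explicit update."""
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint

    def update(self, new_setpoint):  # the "U" in CRUD: a predefined transaction
        self.setpoint = new_setpoint


class AdaptiveThermostat:
    """Learning model: every manual override nudges the learned setpoint."""
    def __init__(self, setpoint=20.0, learning_rate=0.2):
        self.setpoint = setpoint
        self.learning_rate = learning_rate

    def record_override(self, chosen_temperature):
        # Exponential moving average: the outcome drifts toward observed behavior.
        self.setpoint += self.learning_rate * (chosen_temperature - self.setpoint)


static = StaticThermostat()
static.update(22.0)  # nothing changes unless someone issues the transaction

adaptive = AdaptiveThermostat()
for override in [22.0, 22.5, 22.0]:  # the user keeps turning the heat up
    adaptive.record_override(override)
print(round(adaptive.setpoint, 2))  # 21.06: the setpoint drifted with no explicit update
```

The point is not the algorithm (a production device would use something far richer) but the inversion: the service's behavior becomes an output of usage rather than an input to it.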

This leads me to a third effect, which became apparent in the discussion of lifestyle services versus medical and active device management. In the case of lifestyle devices, a key feature is the ability to blend in with personal activity to enable new insight into behavior and lifestyle choices, to passively and actively monitor or take action, not always to affect the behavior itself; that is, to provide an unobtrusive, ubiquitous presence. Moving this idea further, it is also about the way devices can merge into and become integrated within the context of the user or environmental setting. The use of biomedical devices to augment patient care and wellbeing is one example that can have a real and substantive impact on quality of life, as well as on the cost efficiency of care programs with an aging population to support.

An interesting side effect of these trends is the cultural dilemma these devices and sensors bring through the intrusion into personal data and privacy. Yet once the meaning and value of this telemetry on safety, health or material factors is perceived to be for the good of the individual and community, the adoption of such services may become more pronounced and reinforced: a virtuous circle of accelerated adoption, a key characteristic of successful growth and a kind of conditioning feedback that creates positive reinforcement. The ability of the device and sensor to maintain an unobtrusive, ubiquitous presence underpins all of this, and the overall effect is central to the idea of effective system-of-systems integration and Boundaryless Information Flow™ (The Open Group).

I see these trends as three laws of the next Internet of Things, describing a next-generation platforming strategy and evolution.

It's clear that sensors and devices are merging in a way that will cut across industries. Motion and temperature sensors in one industry will see application in another. Services from one industry may connect with other industries as combinations of these services, lifestyles and effects.


Formal and informal communities, both physical and virtual, will be connected through sensors and devices that pervade social, technological and commercial environments. This will drive further growth in the mass of data and digitized information, with the gradual semantic representation of this information into meaningful context. App services will develop increasing intelligence and awareness of the multiplicity of data, its content and metadata, adding new insight and services to the infrastructure fabric. This is a new platforming paradigm that may be constructed from one or many systems and architectures, from macro-level down to micro- and nano-level systems technologies.

The three laws I describe may be recast in a lighter, tongue-in-cheek way by comparing them to Isaac Asimov's famous three laws of robotics. This is just an illustration, but it implies that the sequence of laws in some fashion protects users, resources and the environment through an altruistic motive. That may be the case in some system feedback loops that seek this goal, but commercial, micro-economic considerations are often more likely the driver. However, I can't help thinking that this hints at what may be the first stepping stone toward the eventuality of such laws.

Three laws of the next generation of The Internet of Things – a new platforming architecture

Law 1. A device, sensor or service may operate in an environment if it can augment infrastructure.

Law 2. A device, sensor or service must be able to learn and adapt its response to the environment, as long as it is not in conflict with the First Law.

Law 3. A device, sensor or service must have an unobtrusive, ubiquitous presence, such that it does not conflict with the First or Second Laws.
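Read as a precedence hierarchy, the laws lend themselves to a simple ordered rule check. A minimal Python sketch of that reading (the predicates are hypothetical placeholders of my own, not part of any standard):

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    augments_infrastructure: bool  # Law 1: adds value on top of existing layers
    learns_and_adapts: bool        # Law 2: adapts its response to the environment
    unobtrusive: bool              # Law 3: ubiquitous but unobtrusive presence

def admissible(profile: DeviceProfile) -> bool:
    """Check the laws in order of precedence: a lower law only
    matters once the laws above it are satisfied."""
    if not profile.augments_infrastructure:  # Law 1 gates everything
        return False
    if not profile.learns_and_adapts:        # Law 2, subordinate to Law 1
        return False
    return profile.unobtrusive               # Law 3, subordinate to Laws 1 and 2

print(admissible(DeviceProfile(True, True, True)))   # True: a conforming device
print(admissible(DeviceProfile(True, False, True)))  # False: fails Law 2
```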

References

• Energy conservation
  o The example of Nest, http://www.nest.com, the learning thermostat founded by Tony Fadell, ex-iPod hardware designer and head of the iPod and iPhone division at Apple. The device monitors and learns about energy usage in a building, then adapts and controls the use of energy for improved carbon and cost efficiency.

• Lifestyle activity
  o Motion-sensor accelerometers, ambient light sensors, moisture sensors, gyroscopes, proximity sensors. Examples include UP by Jawbone, https://jawbone/up, and Fitbit, http://www.fitbit.com.

• Lifestyle health
  o Heart rate, blood oxygen levels, respiratory rate and heart rate variability for cardiorespiratory monitoring are some of the potential that connected devices such as Zensorium, http://www.zensorium.com, offer.

• Medical health
  o Biomedical sensing for patient care and elderly care management: heart, lung and kidney dialysis, medical valve and organ implants, orthopaedic implants and brain-image scanning. Devices can monitor elderly physical activity, blood pressure and other factors unobtrusively and proactively. http://www.nytimes.com/2010/07/29/garden/29parents.html?pagewanted-all These aim to drive improvements in prevention, testing, early detection, surgery and treatment, helping improve quality of life and address rising medical costs and the societal impact of an aging population.

• Transport
  o Precision global positioning, local real-time image perception and interpretation sensing, dynamic electromechanical control systems. Examples include Toyota's advanced IT systems that help drivers avoid road accidents, http://www.toyota.com/safety/, and the Google driverless car, http://www.forbes.com/sites/chenkamul/2013/01/22/fasten-your-seatbelts-googles-driverless-car-is-worth-trillions/.

• Materials science, engineering and manufacturing
  o Strain gauges, stress sensors, precision lasers, micro- and nanoparticle engineering, cellular manipulation, gene splicing. 3D printing has the potential to revolutionize automated manufacturing, and through distributed services over the internet, manufacturing can potentially be accessed by anyone.

• Physical safety and security
  o Alpha Blue, http://www.alphablue.co.uk: controlling children's access to their mobile phones via your PC is an example of parental protection of children, using web-based applications to monitor and control mobile and computing access.
  o Keyless entry using your phone: Wi-Fi, Bluetooth and internet network apps and devices that automate the locking of physical doors and entry, remotely or in proximity. Examples include Lockitron, https://www.lockitron.com.

• Remote activity and swarming robotics
  o The development of autonomous robotics to respond and support exploration and services in harsh or inaccessible environments. Examples include the NASA Mars Curiosity rover, whose active control programs determine remote actions on the red planet; with a one-way signal delay of 13 minutes, 48 seconds at EDL, it takes a round trip of approximately 30 minutes to detect and perhaps react to an event remotely from Earth. http://blogs.eas.int/mex/2012/08/05/time-delay-betrween-mars-and-earth/ http://www.nasa.gov/mission_pages/mars/main/imdex.html. Disabled support through robotic prosthetics and communication synthesis, http://disabilitynews.com/technology/prosthetic-robotic-arm-can-feel/. Swarming robots that fly or mimic group behavior: University of Pennsylvania, http://www.reuters.com/video/2012/03/20/flying-robot-swarms-the-future-of-search?videoId-232001151. Swarming robots that mimic nature and decision making: Natural Robotics Lab, The University of Sheffield, UK, http://www.sheffield.ac.uk/news/nr/sheffield-centre-robotic-gross-natural-robotics-lab-1.265434.

 Mark Skilton is Global Director for Capgemini, Strategy CTO Group, Global Infrastructure Services. His role includes strategy development, competitive technology planning including Cloud Computing and on-demand services, global delivery readiness and creation of Centers of Excellence. He is currently author of the Capgemini University Cloud Computing Course and is responsible for Group Interoperability strategy.


Filed under Cloud, Cloud/SOA, Conference, Data management, Platform 3.0

The era of "Internet-aware systems and services" – the multi-data, multi-platform, multi-device and multi-sensor world

By Mark Skilton, Global Director at Capgemini

Communications + Data protocols and the Next Internet of Things Multi-Platform solutions

Much of the discussion of the "Internet of Things" has been around industry-sector examples of device and sensor services; I have listed examples of these at the end of this paper. What is central to this emerging trend is not just sector point solutions but three key technical issues driving a new industry-sector digital services strategy that brings these together into a coherent whole:

  1. How combinations of systems and technology platforms are converging, enabling composite business processes that are mobile-, content- and transaction-rich, with near-real-time persistence and interactivity
  2. The development of "non-web-browser" protocols for the new sensor-driven machine data that are emerging, extending new types of data into internet-connected business and social integration
  3. The development of "connected systems" that move solutions into a new digital services model of multiple services across platforms, creating new business and technology services

I want to illustrate this by focusing on three topics:  multi-platforming strategies, communication protocols and examples of connected systems.

I want to show that this is not the simple "three- or four-step model" I often see, where mobile plus applications plus cloud equals a solution but results in silos of data and platform-integration challenges. New processing methods for big data platforms, distributed stream computing and in-memory database services, for example, are changing the nature of business analytics, in particular marketing and sales strategic planning and insight. New feedback systems collecting social and machine-learning data are creating new types of business growth opportunity in context-aware services that augment skills and services.

The major solutions in the digital ecosystem today incorporate an ever-growing mix of devices and platforms that offer new user experiences and organization. This can be seen across almost all industry sectors and horizontally between them. The diagram below is a simplistic view I use to illustrate the fundamental structures that are forming.

[Diagram: fundamental structures of the emerging digital ecosystem: multiple devices, sensors, communities, infrastructure and application services]

Multiple devices that offer simple-to-complex visualization and on-board application services.

Multiple sensors that can economically detect, measure and monitor most physical phenomena: light, heat, energy, chemical and radiological, in both non-biological and biological systems.

Physical and virtual communities of formal and informal relationships: human- and/or machine-based associations that search for and discover data and resources, and that can now work autonomously across an internet of many different types of data.

Physical and virtual infrastructure representing the servers, storage, databases, networks and other resources that can constitute one or more platforms and environments. This infrastructure is now more complex in that it is both distributed and federated across multiple domains: mobile platforms, cloud computing platforms, social network platforms, big data platforms and embedded sensor platforms. The sense of a single infrastructure is both correct and incorrect: it is a combined state and set of resources that may or may not be within the span of control of an individual or organization.

Single- and multi-tenanted application services that operate in transactional, semi-deterministic or non-deterministic ways, driving logical processing, formatting, interpretation, computation and other processing of data and results across one-to-many, many-to-one or many-to-many platforms and endpoints.

The key to thinking in multiple platforms is to establish the context of how these fundamental forces of platform services drive interactions across many industries, business and social networks and services. This is changing because those platforms are interconnected, altering the very basis of what defines a platform, from a single-platform to a multiple-platform concept.
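To make the taxonomy concrete, here is a minimal Python sketch (the class names and example devices are my own illustration, not a standard model) of how these elements compose into a multi-platform view:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Sensor:
    phenomenon: str            # e.g. "light", "heat", "motion"

@dataclass
class Device:
    name: str
    sensors: List[Sensor] = field(default_factory=list)

@dataclass
class Platform:
    kind: str                  # "mobile", "cloud", "social", "big data", "embedded"
    devices: List[Device] = field(default_factory=list)

@dataclass
class Ecosystem:
    # A digital service spans platforms rather than living inside one of them.
    platforms: List[Platform] = field(default_factory=list)

    def phenomena(self) -> Set[str]:
        """Everything the federated ecosystem can observe, across all platforms."""
        return {s.phenomenon
                for p in self.platforms
                for d in p.devices
                for s in d.sensors}

eco = Ecosystem([
    Platform("mobile", [Device("phone", [Sensor("motion"), Sensor("light")])]),
    Platform("embedded", [Device("thermostat", [Sensor("heat")])]),
])
print(eco.phenomena())  # one combined view over multiple platforms
```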

[Diagram: an example digital ecosystem pattern]

This diagram illustrates some of these relationships and arrangements. It is just one example of a digital ecosystem pattern; there can be other arrangements of these system use cases to meet different needs and outcomes.

I use this model to illustrate some of the key digital strategies to consider in empowering communities, driving value-for-money strategies, or establishing a joined-up device and sensor strategy for new mobile knowledge workers. This is particularly relevant to the decision-making processes of key business stakeholders today, from Sales, Marketing, Procurement, Design, Sourcing, Supply and Operations up to board level, as well as to IT strategy, service integration and engineering.

Taking one key stakeholder example: the Chief Marketing Officer (CMO) is interested in, and central to, strategic channel and product development and brand management. The CMO typically seeks to develop customer zones, supplier zones, marketplace trading communities, social networking communities and behavioral insight leadership. These are critical drivers for successful company presence, product and service brand, and market growth, as well as for managing and aligning IT cost and spend to what the business performance needs. This creates a new kind of digital marketing infrastructure to drive new customer and marketing value. The following diagram illustrates types of marketing services that raise questions over the types of platforms needed for single and multiple data sources, data quality and fidelity.

[Diagram: types of digital marketing services]

These interconnected issues affect the efficacy and relevancy of marketing services, which must work at the speed, timeliness and point of contact necessary to add and create customer and stakeholder value.

What all these new converged technologies have in common is communications: not just HTTP protocols, but a wider band of frequencies that is blurring together what can now be connected.

These protocols include Wi-Fi and other wireless systems and standards that are not just in the voice speech band but also in the collection and use of other types of telemetry relating to other senses and detectors.

All of these share common issues of device and sensor compatibility, discovery and pairing, and security compatibility and controls.

Examples of communication standards for multiple services:

  • Wireless: WLAN, Bluetooth, ZigBee, Z-Wave, Wireless USB
  • Proximity: smartcard, passive, active, vicinity card
  • Infrared: IrDA
  • Satellite: GPS
  • Mobile: 3G, 4G LTE, cell, femtocell, GSM, CDMA, WiMAX
  • RFID: RF, LF, HF bands
  • Encryption: WEP, WPA, WPA2, WPS, other

These communication protocols impact the design and connectivity of system-to-system services. The standards relate to the operability of the services that can be used in the context of a platform, and to how they are delivered and used by consumers and providers. How do the data and service connect with the platform? How does the service content get collected, formatted, processed and transmitted between the source and target platforms? How do these devices and sensors work to support extended and remote mobile and platform services? Which distributed workloads work best on a mobile platform or a sensor platform, and which are best distributed to a dedicated or shared platform that may be cloud-based or appliance-based, for example?

Answering these questions is key to providing a consistent and powerful digital service strategy that is flexible and capable of exploiting, scaling and operating with these new system and inter-system capabilities.
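As a toy illustration of the collect-format-transmit questions above, here is a minimal Python sketch (the envelope fields and schema tag are hypothetical, not any published standard) of a sensor reading being normalized into a platform-neutral JSON envelope before transmission:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    device_id: str
    phenomenon: str   # e.g. "temperature"
    value: float
    unit: str

def to_envelope(reading: SensorReading) -> str:
    """Wrap a raw reading in a platform-neutral envelope, so any target
    platform (mobile, cloud, big data) can consume the same format."""
    envelope = {
        "schema": "telemetry/v1",   # hypothetical schema tag for versioning
        "timestamp": time.time(),   # epoch seconds, stamped at collection time
        "payload": asdict(reading),
    }
    return json.dumps(envelope)

message = to_envelope(SensorReading("thermostat-42", "temperature", 21.5, "C"))
print(message)  # ready to transmit over whichever protocol the device supports
```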

This becomes central to a new generation of Internet-aware data and services, the digital ecosystem that delivers new business and consumer experiences on and across platforms.

This results in a new kind of user experience and presence strategy, one that moves the "single voice of the customer" and the "customer's single voice" to a new level that works across mobiles, tablets and other devices and sensors, translating and creating new forms of information and experience for consumers and providers. Combined with new sensors covering, for example, positional, physical and biomedical data, this becomes a reality in the new generation of digital services. Smartphones today have a price point that includes many built-in sensors: precision technologies measuring physical and biological data sources. Building these into new feedback and decision analytics creates a whole new set of possibilities in real-time and near-real-time augmented services, as well as new levels of insight into resource use and behavior.

The scale and range of data types (text, voice, video, image, semi-structured, unstructured, knowledge, metadata, contracts, IP) about social, business and physical environments have moved beyond the early days of RFID tags to encompass new internet-aware sensors, systems, devices and services. This is not just the "tabs and pads" of mobiles and tablets but a growing presence in the "boards, places and spaces" that make up physical environments, turning them into part of the interactive experience and sensory input of service interaction. It now extends to the massive scale of terrestrial communications that connect across the planet and beyond, in the case of NASA for example, but also right down to the micro, nano, pico and quantum levels in the case of molecular and nanotech engineering. All of these are now part of the modern technological landscape that is pushing the barriers of what is possible in today's digital ecosystem.

The conclusion is that strategic planning needs insight into the nature of the new infrastructures and applications that will support these new multi-system workloads and digital infrastructures. I illustrate this in what I call the "multi-platforming" framework that represents this emerging new ecosystem of services:

Digital Service = k ∑ Platforms + ∑ Connections

where k is a coefficient measuring how open or closed the service is, and its potential value.

Digital Ecosystem = e ∑ Digital Services

where e is a coefficient measuring how diverse and dynamic the ecosystem and its service participants are.
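For precision, the two relationships can be written out as follows; the summation index sets are my own notational assumption, since the post states the formulas only informally:

```latex
% S: a digital service composed from platforms P_i and connections C_j;
% k measures how open or closed the service is and its potential value.
S = k \sum_{i=1}^{n} P_i + \sum_{j=1}^{m} C_j

% E: a digital ecosystem composed from digital services S_l;
% e measures how diverse and dynamic the ecosystem and its participants are.
E = e \sum_{l=1}^{p} S_l
```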

I will explore the impact on enterprise architecture and digital strategy, and the emergence of a new kind of architecture I call Ecosystem Architecture, in future blogs.

Examples of new general industry-sector Internet of Things services

 Mark Skilton is Global Director for Capgemini, Strategy CTO Group, Global Infrastructure Services. His role includes strategy development, competitive technology planning including Cloud Computing and on-demand services, global delivery readiness and creation of Centers of Excellence. He is currently author of the Capgemini University Cloud Computing Course and is responsible for Group Interoperability strategy.


Filed under Cloud, Cloud/SOA, Conference, Data management, Platform 3.0

Driving Boundaryless Information Flow in Healthcare

By E.G. Nadhan, HP

I look forward with great interest to the upcoming Open Group conference on EA & Enterprise Transformation in Finance, Government & Healthcare in Philadelphia in July 2013. In particular, I am interested in the sessions planned on topics related to the healthcare industry. This industry is riddled with challenges: uncontrolled medical costs, legislative pressures, increased plan participation, and the improved longevity of individuals. Come to think of it, these challenges are not that different from those faced when defining a comprehensive enterprise architecture. Can the fundamental principles of Enterprise Architecture therefore be applied to the resolution of these challenges in the healthcare industry? The Open Group certainly thinks so.

Enterprise Architecture is a discipline, methodology, and practice for translating business vision and strategy into the fundamental structures and dynamics of an enterprise at various levels of abstraction. As defined by TOGAF®, enterprise architecture is developed through multiple phases, including Business Architecture and the Applications, Information (Data), and Technology Architectures, all in alignment with the overall Architecture Vision. The TOGAF Architecture Development Method (ADM) enables a systematic approach to addressing these challenges while simplifying the problem domain.
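For readers less familiar with the method, here is a minimal Python sketch of the ADM phase sequence (phase names as published in TOGAF 9; modeling the cycle as a list is purely my own illustration, not an Open Group artifact):

```python
PRELIMINARY = "Preliminary"
ADM_CYCLE = [
    "A: Architecture Vision",
    "B: Business Architecture",
    "C: Information Systems Architectures",  # covers applications and data
    "D: Technology Architecture",
    "E: Opportunities and Solutions",
    "F: Migration Planning",
    "G: Implementation Governance",
    "H: Architecture Change Management",
]

def next_phase(current: str) -> str:
    """After the Preliminary phase, the ADM cycles A through H; Phase H feeds
    back into a new iteration of Phase A. (Requirements Management runs
    continuously at the center of the cycle and is not modeled here.)"""
    if current == PRELIMINARY:
        return ADM_CYCLE[0]
    i = ADM_CYCLE.index(current)
    return ADM_CYCLE[(i + 1) % len(ADM_CYCLE)]

print(next_phase("H: Architecture Change Management"))  # "A: Architecture Vision"
```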

This approach to the development of Enterprise Architecture can be applied to the complex problem domain that manifests itself in healthcare. Thus, it is no surprise that The Open Group is sponsoring the Population Health Working Group, which has a vision to enable "boundary-less information flow" between the stakeholders that participate in healthcare delivery. Check out the presentation delivered by Larry Schmidt, Chief Technologist, Health and Life Sciences Industries, HP, US, at The Open Group conference in Philadelphia.

As a Platinum member of The Open Group, HP has co-chaired the release of multiple standards, including the first technical cloud standard. The Open Group is also leading the definition of the Cloud Governance Framework. Having co-chaired these projects, I look forward to the launch of the Population Health Working Group with great interest.

Given the role of information in today's landscape, "boundary-less information flow" between the stakeholders that participate in healthcare delivery is vital. At the same time, how about injecting a healthy dose of innovation, given that enterprise architects are best positioned for innovation (a post triggered by Forrester analyst Brian Hopkins's thoughts on this topic)? The Open Group, with its multifaceted representation from a wide array of enterprises, provides incredible opportunities for innovation in the context of the complex landscape of the healthcare industry. Take a look at the steps taken by HP Labs to innovate and improve patient care one day at a time.

I would strongly encourage you to attend Schmidt's session, as well as the Healthcare Transformation Panel moderated by Open Group CEO Allen Brown, at this conference.

How about you? What are some of the challenges that you are facing within the Healthcare industry today? Have you applied Enterprise Architecture development methods to problem domains in other industries? Please let me know.

Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.

A version of this blog post originally appeared on the HP Enterprise Services Blog.

HP Distinguished Technologist and Cloud Advisor, E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project.


Filed under Business Architecture, Cloud, Cloud/SOA, Conference, Enterprise Architecture, Healthcare, TOGAF®

Healthcare Transformation – Let’s be Provocative

By Jason Uppal, Chief Architect, QRS

Recently, I attended a one-day healthcare transformation event in Toronto. The master of ceremonies, a renowned doctor, asked the speakers to be provocative about how to tackle the issues in healthcare and healthcare delivery. After about eight speakers, I must admit I did not hear anything that social media would classify as "remarkable," either in terms of problem definition or solution direction; all the speeches emphasized the importance of better healthcare. I watched one video, Jess's Story, and I am convinced without discussion that we need a better way to deliver care.

I am an Engineer and not a Medical Doctor. In my profession, we spend 90% of our effort defining the problem and 10% solving it with known solution patterns. In this blog, I would like to define the healthcare delivery problem and offer a potential solution direction.

First, the Basic Facts

Table 1: Healthcare Spending and Quality

Country    1980 [$]   2007 [$]   2010 [$]   2012 [$]   Healthcare Quality Ranking
US         1106       6102       8233       8946       6
Canada     –          3165       4445       –          5
Germany    –          3005       4338       –          1

Note: $ figures represent per-capita spend per year; sources are public, and references can be made available on request. Healthcare Quality Ranking: the lower the number, the better. Dashes indicate years for which figures were not quoted.

Firstly, the obvious fact is that the US spends more on healthcare per capita and gets less for it. These figures, as well as many other studies, lead to the same conclusion.

Problem Definition, Option 1 – Straightforward reduction of healthcare costs: US healthcare represents roughly 18% of US GDP. Reducing spending will shrink the GDP, unless politicians spend the saved money somewhere else. This is not a good option: we all know the impact of austerity measures that do not alter the underlying process. Even closer to home, the recent sequester’s impact on air traffic at major US airports has resulted in terrible delays and has significantly inconvenienced the traveling public. We learned during the 1980s, when “reengineering” was a sexy term, that when we reduced labour by 30%, we simply hoped the remaining souls would figure out how to do the work with less. We all know what that approach did: fat paycheques for the CEO and senior management, while entire industries got wiped out.

Problem Definition, Option 2 – Reduce healthcare costs and issue health dividends: Let’s target reducing base healthcare spending to $4,000 per person per year. This would bring spending back to the 1980 level with inflation factored in. The remaining funds, $4,946 per capita ($8,946 – $4,000), would be issued as a health dividend – to the population as a tax credit and to providers as an incentive to keep those they care for healthy. This will not reduce healthcare spending and will have no impact on the GDP, but it will certainly improve the health of the biggest producers and consumers in the economy.
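
For concreteness, here is a minimal sketch of the dividend arithmetic in Python, using the per-capita figures quoted above. The 50/50 split between the population’s tax credit and the providers’ incentive is purely illustrative, since the post does not specify one.

```python
# A minimal sketch of the proposed health-dividend arithmetic, using the
# per-capita figures quoted above. The 50/50 split below is purely
# illustrative; the post does not specify how the dividend is divided.
current_spend = 8946  # US per-capita healthcare spend, 2012 [$]
target_base = 4000    # proposed per-capita base-care spend [$]

dividend = current_spend - target_base  # $4,946 per capita
population_share = 0.5                  # hypothetical split
tax_credit = dividend * population_share
provider_incentive = dividend - tax_credit

print(f"Health dividend per capita: ${dividend:,}")
print(f"Illustrative split -> tax credit: ${tax_credit:,.0f}, "
      f"provider incentive: ${provider_incentive:,.0f}")
```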

There is proof that this model could work to reduce overall cost and improve population health if both the population and providers are incented appropriately. Recently, I had an argument with my General Practitioner’s (GP) secretary, who wanted me to come to the office three times:

1) To receive the results of my blood test,

2) To have an annual physical check-up,

3) To remove a couple of annoying skin tags.

Each procedure was no more than 2 to 7 minutes long, yet they insisted it had to be three separate appointments. A total of 10 minutes of consultation across three procedures with my GP would have cost me an additional 7 hours of lost productivity: each of the two avoidable extra visits carries 2.0 hours of driving, a 0.5-hour wait and 1.0 hour of productivity lost to the distraction of the appointment. The reason for this behaviour is the way physicians are incented: they can bill the system more based on the number of visits alone, not based on what is good for both the patient and the provider.
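
A small sketch of that productivity-loss arithmetic, under the stated assumption that the drive, wait and distraction overheads apply to each of the two extra visits:

```python
# Reconstruction of the 7-hour estimate above. Assumes the per-visit
# overheads quoted in the post apply to each of the two avoidable
# extra visits (one visit would be needed in any case).
drive_hours = 2.0        # round-trip driving per visit
wait_hours = 0.5         # waiting-room time per visit
distraction_hours = 1.0  # productivity lost to schedule disruption per visit

overhead_per_visit = drive_hours + wait_hours + distraction_hours  # 3.5 hours
extra_visits = 2
total_lost = overhead_per_visit * extra_visits  # 7.0 hours

print(f"{total_lost:.1f} hours lost for roughly 10 minutes of consultation")
```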

Therefore, I will define the problem this way: reduce the cost of base care to $4,000 per capita and incent both the population and providers to stay healthy and to keep their customers healthy. Let the innovation begin. There is no shortage of very smart architects and engineers, or of very motivated providers who want to live up to their oath to “do no harm.”

Call to Action:

  • To help develop the next-generation healthcare delivery organization, we need healthcare’s equivalents of Mark Zuckerberg, Steve Jobs, Pierre Omidyar and Jeff Bezos – people who can think outside the box and bypass the current entitled establishment for the better.
  • We are taking the first step toward defining an alternative architecture – join us in Philadelphia on July 16th for a one-day active workshop.
  • Website: http://www.opengroup.org/philadelphia2013
  • Program Outline: http://www.opengroup.org/events/timetable/1548
    • Tuesday: Healthcare Transformation
    • Keynote Speaker: Dr. David Nash, Dean of the Jefferson School of Population Health, Thomas Jefferson University
    • Reactors Panel: Hear from other experts on what is possible
    • Workshops
      • Be part of organized workshops and learn from your fellow providers and enterprise architects on how to transform healthcare for the next generation
      • This is your trip to the Gemba

Jason Uppal, P.Eng., is the Chief Architect at QRS and was the first Master IT Architect certified by The Open Group, by direct review, in October 2005. He is now a Distinguished Chief Architect in the Open CA program. He holds an undergraduate degree in Mechanical Engineering, a graduate degree in Economics and a postgraduate diploma in Computer Science. Jason’s commitment to the Enterprise Architecture Life Cycle (EALC) has led him to focus on training (TOGAF®), education (UOIT) and mentoring services for his clients, as well as serving as the responsible individual for both Architecture and Portfolio & Project Management on a number of major projects.

Comments Off

Filed under Enterprise Architecture, Healthcare, Open CA

The Open Group Sydney – My Conference Highlights

By Mac Lemon, Managing Director Australia at Enterprise Architects


Well, the dust has settled with the conclusion of The Open Group ‘Enterprise Transformation’ Conference, held in Sydney, Australia for the first time on April 15-20. Enterprise Architects is proud to have been recognised by The Open Group as pivotal to the success of this event. A number of our clients, including NBN, Australia Post, QGC, RIO and Westpac, presented excellent papers on leading-edge approaches in strategy and architecture, and a number of EA’s own thought leaders – Craig Martin, Christine Stephenson and Ana Kukec – also delivered widely acclaimed papers.

Attendance at the conference was impressive and demonstrated that there is substantial appetite for a dedicated event focussed on the challenges of business and technology strategy and architecture. We saw many international visitors, both as delegates and presenting papers, and there is no question that a 2014 Open Group forum will be the stand-out event in the calendar for business and technology strategy and architecture professionals.

My top 10 take-outs from the conference include the following:

  1. The universal maturing in understanding of the criticality of Business Architecture, and the total convergence upon Business Capability Modelling as a cornerstone of Business Architecture;
  2. The improving appreciation of techniques for understanding and expressing business strategy and motivation, such as strategy maps, the business model canvas and business motivation modelling;
  3. That customer experience is emerging as a common driver for many transformation initiatives;
  4. While the process for establishing the case and roadmap for transformation appears well enough understood, the process for managing the blueprint through transformation is not, and generally remains a major program risk;
  5. The next version of TOGAF® should offer a material uplift in support for security architecture, which otherwise remains at low levels of maturity from a framework standardisation perspective;
  6. ArchiMate® is generating real interest as a preferred enterprise architecture modelling notation – and stronger alignment of the ArchiMate® and TOGAF® meta models in the next version of TOGAF® is highly anticipated;
  7. There is industry demand for recognised certification of architects to demonstrate learning alongside experience as the mark of a good architect. There remains an unsatisfied requirement for certification that falls in the gap between TOGAF® certification and Open CA certification;
  8. Australia can be proud of its position in having the second-highest per-capita TOGAF® certification rate globally, behind the Netherlands;
  9. While the topic of interoperability in government revealed many battle-scarred veterans convinced of the hopelessness of the cause, there remain an equal number of campaigners willing to tackle the challenge, and their free and frank exchange of views was entertaining enough to justify the price of a conference ticket;
  10. Unashamedly – Enterprise Architects remains in a league of its own in the concentration of strategy and architecture thought leadership in Australia, if not globally.

Mac Lemon is the Managing Director of Enterprise Architects Pty Ltd and is based in Melbourne, Australia.

This is an extract from Mac’s recent blog post on the Enterprise Architects web site which you can view here.

Comments Off

Filed under ArchiMate®, Business Architecture, Certifications, Conference, Enterprise Architecture, Enterprise Transformation, Professional Development, Security Architecture, TOGAF, TOGAF®

Connect at The Open Group Conference in Sydney (#ogSYD) via Social Media

By The Open Group Conference Team

By attending The Open Group’s conferences, attendees are able to learn from industry experts, understand the latest technologies and standards and discuss and debate current industry trends. One way to maximize the benefits is to make technology work for you. If you are attending The Open Group Conference in Sydney next week, we’ve put together a few tips on how to leverage social media to make networking at the conference easier, quicker and more effective.

Using Twitter at #ogSYD

Twitter is a real-time news-sharing tool that anyone can use. The official hashtag for the conference is #ogSYD. This enables anybody, whether they are physically attending the event or not, to follow what’s happening at The Open Group Conference in Sydney in real-time and interact with each other.

Before the conference, be sure to update your Twitter account to monitor #ogSYD and, of course, to tweet about the conference.

Using Facebook at The Open Group Conference in Sydney

You can also track what is happening at the conference on The Open Group Facebook Page. We will be posting photos from conference events throughout the week. If you’re willing to share your photos with us, we’re happy to post them to our page with a photo credit. Please email your photos, captions, full name and organization to photo (at) opengroup.org!

LinkedIn during The Open Group Conference in Sydney

Motivated by one of the sessions? Interested in what your peers have to say? Start a discussion on The Open Group LinkedIn Group page. We’ll also be sharing interesting topics and questions related to The Open Group Conference as it is happening. If you’re not a member already, requesting membership is easy. Simply go to the group page and click the “Join Group” button. We’ll accept your request as soon as we can!

Blogging during The Open Group Conference in Sydney

Stay tuned for conference recaps here on The Open Group blog. In case you missed a session or you weren’t able to make it to Sydney, we’ll be posting the highlights and recaps on the blog. If you are attending the conference and would like to submit a recap of your own, please contact ukopengroup (at) hotwirepr.com.

If you have any questions about social media usage at the conference, feel free to tweet the conference team @theopengroup.

Comments Off

Filed under Conference

The Open Group Conference in Sydney Plenary Sessions Preview

By The Open Group Conference Team

Taking place April 15-18, 2013, The Open Group Conference in Sydney will bring together industry experts to discuss the evolving role of Enterprise Architecture and how it transforms the enterprise. As the conference quickly approaches, let’s take a deeper look into the plenary sessions that kick-off day one and two. And if you haven’t already, register for The Open Group Conference in Sydney today!

Enterprise Transformation and the Role of Open Standards

By Allen Brown, President & CEO, The Open Group

Enterprise transformation seems to be gathering momentum within the Enterprise Architecture community.  The term, enterprise transformation, suggests the process of fundamentally changing an enterprise.  Sometimes the transformation is dramatic but for most of us it is a steady process. Allen will kick off the conference by discussing how to set expectations, the planning process for enterprise transformation and the role of standards, and provide an overview of ongoing projects by The Open Group’s members.

TOGAF® as a Powerful Tool to Kick-Start Business Transformation

By Peter Haviland, Chief Business Architect, and Martin Keywood, Partner, Ernst & Young

Business transformation is a tricky beast. It requires many people to work together toward a singular vision, and even more people to stay aligned to an often multi-year execution program throughout which personal and organizational priorities will change. As a firm with considerable Business Architecture and transformation experience, Ernst & Young (EY) deploys multi-disciplinary teams of functional and technical experts and uses a number of approaches, anchored on the TOGAF framework, to address these issues. This is necessary to get a handle on the complexity inherent in today’s business environment, so that stakeholders are aligned and remain actively engaged, past investments in both processes and systems can be maximized, and transformation programs are set up for success and can be driven with sustained momentum.

In this session, Peter and Martin will take us through EY’s Transformation Design approach – an approach that, within 12 weeks, can define a transformation vision, get executives on board, create a high-level multi-domain architecture, broadly outline transformation alternatives and provide initial estimates of the work packages needed to achieve the transformation. They will also share case studies and metrics from applying the approach in the financial services, oil and gas, and professional services sectors. The session should interest executives looking to increase buy-in amongst their peers, as well as professionals charged with stakeholder engagement and alignment. It will also show how to use the TOGAF framework in this situation.

Building a More Cohesive Organization Using Business Architecture

By Craig Martin, COO & Chief Architect, Enterprise Architects

In shifting the focus away from Enterprise Architecture being seen purely as an IT discipline, organizations are beginning to formalize the development of Business Architecture practices and outcomes. The Open Group has differentiated between business, IT and enterprise architects through various working groups and certification tracks. However, industry at present is grappling to understand where the discipline of Business Architecture resides in the business and what value it can provide separate from the traditional project-based business analysis focus.

Craig will provide an overview of some of the critical questions being asked by businesses and how these are addressed through Business Architecture. Using both method as well as case study examples, he will show an approach to building more cohesion across the business landscape. Craig will focus on the use of business motivation models, strategic scenario planning and capability based planning techniques to provide input into the strategic planning process.

Other plenary speakers include:

  • Capability Based Strategic Planning in Transforming a Mining Environment by David David, EA Manager, Rio Tinto
  • Development of the National Broadband Network IT Architecture – A Greenfield Telco Transformation by Roger Venning, Chief IT Architect, NBN Co. Ltd
  • Business Architecture in Finance Panel moderated by Chris Forde, VP Enterprise Architecture, The Open Group

More details about the conference can be found here: http://www.opengroup.org/sydney2013

1 Comment

Filed under Conference

3 Steps to Proactively Address Board-Level Security Concerns

By E.G. Nadhan, HP

Last month, I shared the discussions that ensued in a Tweet Jam conducted by The Open Group on Big Data and Security, where the key takeaway was: protecting data is good; protecting information generated from Big Data is priceless. Security concerns around Big Data have grown to the extent that they have become a board-level concern, as explained in this article in ComputerWorldUK. Board-level concerns must be addressed proactively by enterprises. To do so, enterprises must provide the business justification for the proactive steps needed to address them.


At The Open Group Conference in Sydney in April, the session “Which information risks are shaping our lives?” by Stephen Singam, Chief Technology Officer, HP Enterprise Security Services, Australia, provides great insight on this topic. In this session, Singam analyzes current and emerging information risks while recommending a proactive approach to address them head-on with adversary-centric solutions.

Here are the three steps that enterprises must take to proactively address security concerns:

Computing the cost of cyber-crime

The HP Ponemon 2012 Cost of Cyber Crime Study revealed that cyber attacks more than doubled over a three-year period, with the financial impact increasing by nearly 40 percent. Here are the key takeaways from this research:

  • Cyber-crimes continue to be costly. The average annualized cost of cyber-crime for the 56 organizations studied is $8.9 million per year, with a range of $1.4 million to $46 million.
  • Cyber attacks have become common occurrences. In 2012, the benchmarked companies experienced 102 successful attacks per week in total – about 1.8 successful attacks per company per week across the 56 organizations.
  • The most costly cyber-crimes are those caused by denial of service, malicious insiders and web-based attacks.

When computing the cost of cyber-crime, enterprises must address the direct, indirect and opportunity costs that result from the loss or theft of information, disruption to business operations, revenue loss and destruction of property, plant and equipment. The following phases of combating cyber-crime must also be factored in to comprehensively determine the total cost (a simple cost roll-up sketch follows this list):

  1. Detection of patterns of behavior indicating an impending attack through sustained monitoring of the enabling infrastructure
  2. Investigation of the security violation upon occurrence to determine the underlying root cause and take appropriate remedial measures
  3. Incident response to address the immediate situation at hand, communicate the occurrence of the attack and raise all applicable alerts
  4. Containment of the attack by controlling its proliferation across the enterprise
  5. Recovery from the damages incurred as a result of the attack to ensure ongoing business operations based upon the business continuity plans in place
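
As referenced above, here is a minimal sketch of such a cost roll-up in Python. The cost categories and phase names follow the post; the dollar figures and hours are placeholders for illustration, not data from the Ponemon study.

```python
# A minimal sketch of a total-cost-of-cyber-crime roll-up along the lines
# described above. Cost categories and phase names follow the post; the
# dollar figures and hours are illustrative placeholders only.
incident_costs = {
    "direct": 250_000,       # e.g., forensic services, legal fees
    "indirect": 120_000,     # e.g., internal labor across the five phases
    "opportunity": 400_000,  # e.g., lost revenue, business disruption
}

phase_effort_hours = {
    "detection": 120,
    "investigation": 200,
    "incident response": 80,
    "containment": 60,
    "recovery": 150,
}

total_cost = sum(incident_costs.values())
total_hours = sum(phase_effort_hours.values())
print(f"Total incident cost: ${total_cost:,}; combined effort: {total_hours} hours")
```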

Identifying proactive steps that can be taken to address cyber-crime

  1. “Better get security right,” says HP Security Strategist Mary Ann Mezzapelle in her keynote on Big Data and Security at The Open Group Conference in Newport Beach. Asserting that proactive risk management is the most effective approach, Mezzapelle challenged enterprises to proactively question the presence of shadow IT, data ownership, usage of security tools and standards while taking a comprehensive approach to security end-to-end within the enterprise.
  2. In this ZDNet article, Art Gilliland suggested learning from cyber criminals and understanding their methods, since the very frameworks enterprises strive to comply with (such as ISO and PCI) set a low bar for security that adversaries capitalize on.
  3. Andy Ellis discussed managing risk with psychology instead of brute force in his keynote at the 2013 RSA Conference.
  4. At the same conference, in another keynote, world-renowned game designer and inventor of SuperBetter, Jane McGonigal, suggested that the “collective intelligence” gaming generates can be applied to combat security concerns.
  5. In this interview, Bruce Schneier, renowned security guru and author of several books including Liars & Outliers, suggested: “Bad guys are going to invent new stuff — whether we want them to or not.” Should we take a cue from Hollywood and consider the inception of an OODA loop into the security hacker’s mind?

The Balancing Act

Can enterprises afford to take such proactive steps? Or more importantly, can they afford not to?

Enterprises must define their risk-management strategy and determine the proactive steps that best align with their business objectives and information security standards. This will enable organizations to better assess the cost of executing such measures. While the actual cost is likely to vary by enterprise, inaction is not an acceptable alternative. Like all other critical corporate initiatives, these proactive measures must receive the board-level attention they deserve.

Enterprises must balance the cost of executing such proactive measures against the potential cost of data loss and reputational harm. This will ensure that the right proactive measures are taken with executive support.

How about you?  Has your enterprise taken the steps to assess the cost of cybercrime?  Have you considered various proactive steps to combat cybercrime?  Share your thoughts with me in the comments section below.

HP Distinguished Technologist E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle: @NadhanAtHP.

1 Comment

Filed under Conference

Join us for The Open Group Conference in Sydney – April 15-18

By The Open Group Conference Team

The Open Group is busy gearing up for the Sydney conference, which will take place on April 15-18, 2013. With over 2,000 Associate of Enterprise Architects (AEA) members in Australia, Sydney is an ideal setting for industry experts from around the world to gather and discuss the evolution of Enterprise Architecture and its role in transforming the enterprise. Be sure to register today!

The conference offers roughly 60 sessions on a variety of topics, including:

  • Cloud infrastructure as an enabler of innovation in enterprises
  • Simplifying data integration in the government and defense sectors
  • Merger transformation with TOGAF® framework and ArchiMate® modeling language
  • Measuring and managing cybersecurity risks
  • Pragmatic IT road-mapping with ArchiMate modeling language
  • The value of Enterprise Architecture certification within a professional development framework

Plenary speakers will include:

  • Allen Brown, President & CEO, The Open Group
  • Peter Haviland, Chief Business Architect, with Martin Keywood, Partner, Ernst & Young
  • David David, EA Manager, Rio Tinto
  • Roger Venning, Chief IT Architect, NBN Co. Ltd
  • Craig Martin, COO & Chief Architect, Enterprise Architects
  • Chris Forde, VP Enterprise Architecture, The Open Group

The full conference agenda is available here. Tracks include:

  • Finance & Commerce
  • Government & Defense
  • Energy & Natural Resources

And topics of discussion include, but are not limited to:

  • Cloud
  • Business Transformation
  • Enterprise Architecture
  • Technology & Innovation
  • Data Integration/Information Sharing
  • Governance & Security
  • Architecture Reference Models
  • Strategic Planning
  • Distributed Services Architecture

Upcoming Conference Submission Deadlines

Would you like a chance to speak at an Open Group conference? There are upcoming deadlines for speaker-proposal submissions for the conferences in Philadelphia and London. To submit a proposal to speak, click here.

Venue                       Industry Focus                              Submission Deadline
Philadelphia (July 15-17)   Healthcare, Finance, Government & Defense   April 5, 2013
London (October 21-23)      Finance, Government, Healthcare             July 8, 2013

The agendas for Philadelphia and London are filling up fast, so it is important to submit proposals as early as possible. Proposals received after the deadline dates will still be considered, space permitting; if not, they may be carried over to a future conference. Priority will be given to proposals received by the deadline dates and to proposals that include an end-user organization, at least as a co-presenter.

Comments Off

Filed under Conference

Beyond Big Data

By Chris Harding, The Open Group

The big bang that started The Open Group Conference in Newport Beach was, appropriately, a presentation related to astronomy. Chris Gerty gave a keynote on Big Data at NASA, where he is Deputy Program Manager of the Open Innovation Program. He told us how visualizing deep space and its celestial bodies created understanding and enabled new discoveries. Everyone who attended felt inspired to explore the universe of Big Data during the rest of the conference. And that exploration – as is often the case with successful space missions – left us wondering what lies beyond.

The Big Data Conference Plenary

The second presentation on that Monday morning brought us down from the stars to the nuts and bolts of engineering. Mechanical devices require regular maintenance to keep functioning. Processing the mass of data generated during their operation can improve safety and cut costs. For example, airlines can overhaul aircraft engines when it needs doing, rather than on a fixed schedule that has to be frequent enough to prevent damage under most conditions, but might still fail to anticipate failure in unusual circumstances. David Potter and Ron Schuldt lead two of The Open Group’s initiatives: Quantum Lifecycle Management (QLM) and the Universal Data Element Framework (UDEF). They explained how a semantic approach to product lifecycle management can facilitate the big-data processing needed to achieve this aim.

Chris Gerty was then joined by Andras Szakal, vice-president and chief technology officer at IBM US Federal IMT, Robert Weisman, chief executive officer of Build The Vision, and Jim Hietala, vice-president of Security at The Open Group, in a panel session on Big Data that was moderated by Dana Gardner of Interarbor Solutions. As always, Dana facilitated a fascinating discussion. Key points made by the panelists included: the trend to monetize data; the need to ensure veracity and usefulness; the need for security and privacy; the expectation that data warehouse technology will exist and evolve in parallel with map/reduce “on-the-fly” analysis; the importance of meaningful presentation of the data; integration with cloud and mobile technology; and the new ways in which Big Data can be used to deliver business value.

More on Big Data

In the afternoons of Monday and Tuesday, and on most of Wednesday, the conference split into streams. These have presentations that are more technical than the plenary, going deeper into their subjects. It’s a pity that you can’t be in all the streams at once. (At one point I couldn’t be in any of them, as there was an important side meeting to discuss the UDEF, which is in one of the areas that I support as forum director). Fortunately, there were a few great stream presentations that I did manage to get to.

On the Monday afternoon, Tom Plunkett and Janet Mostow of Oracle presented a reference architecture that combined Hadoop and NoSQL with traditional RDBMS, streaming, and complex event processing, to enable Big Data analysis. One application that they described was to trace the relations between particular genes and cancer. This could have big benefits in disease prediction and treatment. Another was to predict the movements of protesters at a demonstration through analysis of communications on social media. The police could then concentrate their forces in the right place at the right time.

Jason Bloomberg, president of Zapthink – now part of Dovel – is always thought-provoking. His presentation featured the need for governance vitality to cope with ever changing tools to handle Big Data of ever increasing size, “crowdsourcing” to channel the efforts of many people into solving a problem, and business transformation that is continuous rather than a one-time step from “as is” to “to be.”

Later in the week, I moderated a discussion on Architecting for Big Data in the Cloud. We had a well-balanced panel made up of TJ Virdi of Boeing, Mark Skilton of Capgemini and Tom Plunkett of Oracle. They made some excellent points. Big Data analysis provides business value by enabling better understanding, leading to better decisions. The analysis is often an iterative process, with new questions emerging as answers are found. There is no single application that does this analysis and provides the visualization needed for understanding, but there are a number of products that can be used to assist. The role of the data scientist in formulating the questions and configuring the visualization is critical. Reference models for the technology are emerging but there are as yet no commonly-accepted standards.

The New Enterprise Platform

Jogging is a great way of taking exercise at conferences, and I was able to go for a run most mornings before the meetings started at Newport Beach. Pacific Coast Highway isn’t the most interesting of tracks, but on Tuesday morning I was soon up in Castaways Park, pleasantly jogging through the carefully-nurtured natural coastal vegetation, with views over the ocean and its margin of high-priced homes, slipways, and yachts. I reflected as I ran that we had heard some interesting things about Big Data, but it is now an established topic. There must be something new coming over the horizon.

The answer to what this might be was suggested in the first presentation of that day’s plenary. Mary Ann Mezzapelle, security strategist for HP Enterprise Services, talked about the need to get security right for Big Data and the Cloud. But her scope was actually wider. She spoke of the need to secure the “third platform” – the term coined by IDC to describe the convergence of social, cloud and mobile computing with Big Data.

Securing Big Data

Mary Ann’s keynote was not about the third platform itself, but about what should be done to protect it. The new platform brings with it a new set of security threats, and the increasing scale of operation makes it increasingly important to get the security right. Mary Ann presented a thoughtful analysis founded on a risk-based approach.

She was followed by Adrian Lane, chief technology officer at Securosis, who pointed out that Big Data processing using NoSQL has a different architecture from traditional relational data processing, and requires different security solutions. This does not necessarily mean new techniques; existing techniques can be used in new ways. For example, Kerberos may be used to secure inter-node communications in map/reduce processing. Adrian’s presentation completed the Tuesday plenary sessions.

Service Oriented Architecture

The streams continued after the plenary. I went to the Distributed Services Architecture stream, which focused on SOA.

Bill Poole, enterprise architect at JourneyOne in Australia, described how to use the graphical architecture modeling language ArchiMate® to model service-oriented architectures. He illustrated this using a case study of a global mining organization that wanted to consolidate its two existing bespoke inventory management applications into a single commercial off-the-shelf application. It’s amazing how a real-world case study can make a topic come to life, and the audience certainly responded warmly to Bill’s excellent presentation.

Ali Arsanjani, chief technology officer for Business Performance and Service Optimization, and Heather Kreger, chief technology officer for International Standards, both at IBM, described the range of SOA standards published by The Open Group and available for use by enterprise architects. Ali was one of the brains that developed the SOA Reference Architecture, and Heather is a key player in international standards activities for SOA, where she has helped The Open Group’s Service Integration Maturity Model and SOA Governance Framework to become international standards, and is working on an international standard SOA reference architecture.

Cloud Computing

To start Wednesday’s Cloud Computing streams, TJ Virdi, senior enterprise architect at The Boeing Company, discussed use of TOGAF® to develop an Enterprise Architecture for a Cloud ecosystem. A large enterprise such as Boeing may use many Cloud service providers, enabling collaboration between corporate departments, partners, and regulators in a complex ecosystem. Architecting for this is a major challenge, and The Open Group’s TOGAF for Cloud Ecosystems project is working to provide guidance.

Stuart Boardman of KPN gave a different perspective on Cloud ecosystems, with a case study from the energy industry. An ecosystem may not necessarily be governed by a single entity, and the participants may not always be aware of each other. Energy generation and consumption in the Netherlands is part of a complex international ecosystem involving producers, consumers, transporters, and traders of many kinds. A participant may be involved in several ecosystems in several ways: a farmer, for example, might consume energy, have wind turbines to produce it, and also participate in food production and transport ecosystems.

Penelope Gordon of 1-Plug Corporation explained how choice and use of business metrics can impact Cloud service providers. She worked through four examples: a start-up Software-as-a-Service provider requiring investment, an established company thinking of providing its products as cloud services, an IT department planning to offer an in-house private Cloud platform, and a government agency seeking budget for government Cloud.

Mark Skilton, director at Capgemini in the UK, gave a presentation titled “Digital Transformation and the Role of Cloud Computing.” He covered a very broad canvas of business transformation driven by technological change, and illustrated his theme with a case study from the pharmaceutical industry. New technology enables new business models, giving competitive advantage. Increasingly, the introduction of this technology is driven by the business, rather than the IT side of the enterprise, and it has major challenges for both sides. But what new technologies are in question? Mark’s presentation had Cloud in the title, but also featured social and mobile computing, and Big Data.

The New Trend

On Thursday morning I took a longer run, to and around Balboa Island. With only one road in or out, its main street of shops and restaurants is not a through route, and the island has the feel of a real village. The SOA Work Group Steering Committee had found an excellent, and reasonably priced, Italian restaurant there the previous evening. There is a clear resurgence of interest in SOA, partly driven by the use of service orientation – the principle, rather than particular protocols – in Cloud Computing and other new technologies. That morning I took the track round the shoreline, and was reminded a little of Dylan Thomas’s “fishingboat-bobbing sea.” Fishing here is for leisure rather than livelihood, but I suspected that the fishermen, like those of Thomas’s little Welsh village, spend more time in the bar than on the water.

I thought about how the conference sessions had indicated an emerging trend. This is not a new technology but the combination of four current technologies to create a new platform for enterprise IT: Social, Cloud, and Mobile computing, and Big Data. Mary Ann Mezzapelle’s presentation had referenced IDC’s “third platform.” Other discussions had mentioned Gartner’s “Nexus of Forces” – the combination of Social, Cloud and Mobile computing with information – which Gartner says is transforming the way people and businesses relate to technology, and will become a key differentiator of business and technology management. Mark Skilton had included these same four technologies in his presentation. Great minds, and analyst corporations, think alike!

I thought also about the examples and case studies in the stream presentations. Areas as diverse as healthcare, manufacturing, energy and policing are using the new technologies. Clearly, they can deliver major business benefits. The challenge for enterprise architects is to maximize those benefits through pragmatic architectures.

Emerging Standards

On the way back to the hotel, I remarked again on what I had noticed before, how beautifully neat and carefully maintained the front gardens bordering the sidewalk are. I almost felt that I was running through a public botanical garden. Is there some ordinance requiring people to keep their gardens tidy, with severe penalties for anyone who leaves a lawn or hedge unclipped? Is a miserable defaulter fitted with a ball and chain, not to be removed until the untidy vegetation has been properly trimmed, with nail clippers? Apparently not. People here keep their gardens tidy because they want to. The best standards are like that: universally followed, without use or threat of sanction.

Standards are an issue for the new enterprise platform. Apart from the underlying standards of the Internet, there really aren’t any. The area isn’t even mapped out. Vendors of Social, Cloud, Mobile, and Big Data products and services are trying to stake out as much valuable real estate as they can. They have no interest yet in boundaries with neatly-clipped hedges.

This is a stage that every new technology goes through. Then, as it matures, the vendors understand that their products and services have much more value when they conform to standards, just as properties have more value in an area where everything is neat and well-maintained.

It may be too soon to define those standards for the new enterprise platform, but it is certainly time to start mapping out the area, to understand its subdivisions and how they inter-relate, and to prepare the way for standards. Following the conference, The Open Group has announced a new Forum, provisionally titled Open Platform 3.0, to do just that.

The SOA and Cloud Work Groups

Thursday was my final day of meetings at the conference. The plenary and streams presentations were done. This day was for working meetings of the SOA and Cloud Work Groups. I also had an informal discussion with Ron Schuldt about a new approach for the UDEF, following up on the earlier UDEF side meeting. The conference hallways, as well as the meeting rooms, often see productive business done.

The SOA Work Group discussed a certification program for SOA professionals, and an update to the SOA Reference Architecture. The Open Group is working with ISO and the IEEE to define a standard SOA reference architecture that will have consensus across all three bodies.

The Cloud Work Group had met earlier to further the TOGAF for Cloud ecosystems project. Now it worked on its forthcoming white paper on business performance metrics. It also – though this was not on the original agenda – discussed Gartner’s Nexus of Forces, and the future role of the Work Group in mapping out the new enterprise platform.

Mapping the New Enterprise Platform

At the start of the conference we looked at how to map the stars. Big Data analytics enables people to visualize the universe in new ways, reach new understandings of what is in it and how it works, and point to new areas for future exploration.

As the conference progressed, we found that Big Data is part of a convergence of forces. Social, mobile, and Cloud Computing are being combined with Big Data to form a new enterprise platform. The development of this platform, and its roll-out to support innovative applications that deliver more business value, is what lies beyond Big Data.

At the end of the conference we were thinking about mapping the new enterprise platform. This will not require sophisticated data processing and analysis. It will take discussions to create a common understanding, and detailed committee work to draft the guidelines and standards. This work will be done by The Open Group’s new Open Platform 3.0 Forum.

The next Open Group conference is in the week of April 15, in Sydney, Australia. I’m told that there’s some great jogging there. More importantly, we’ll be reflecting on progress in mapping Open Platform 3.0, and thinking about what lies ahead. I’m looking forward to it already.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.

2 Comments

Filed under Conference

Complexity from Big Data and Cloud Trends Makes Architecture Tools like ArchiMate and TOGAF More Powerful, Says Expert Panel

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: Complexity from Big Data and Cloud Trends Makes Architecture Tools like ArchiMate and TOGAF More Powerful, Says Expert Panel, or read the transcript here.

We recently assembled a panel of Enterprise Architecture (EA) experts to explain how such simultaneous and complex trends as big data, Cloud Computing, security, and overall IT transformation can be helped by the combined strengths of The Open Group Architecture Framework (TOGAF®) and the ArchiMate® modeling language.

The panel consisted of Chris Forde, General Manager for Asia-Pacific and Vice President of Enterprise Architecture at The Open Group; Iver Band, Vice Chair of The Open Group ArchiMate Forum and Enterprise Architect at The Standard, a diversified financial services company; Mike Walker, Senior Enterprise Architecture Adviser and Strategist at HP and former Director of Enterprise Architecture at Dell; Henry Franken, Chairman of The Open Group ArchiMate Forum and Managing Director at BIZZdesign; and Dave Hornford, Chairman of the Architecture Forum at The Open Group and Managing Partner at Conexiam. I served as the moderator.

This special BriefingsDirect thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on “Big Data — the transformation we need to embrace today.” [Disclosure: The Open Group and HP are sponsors of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Is there something about the role of the enterprise architect that is shifting?

Walker: There is less of a focus on the traditional things we have come to associate with EA, such as standards, governance and policies, and more on emerging areas such as soft skills, Business Architecture, and strategy.

To this end, I see a lot of work directly with the executive chain to understand the key value drivers for the company and rationalize where they want to take their business. So we’re moving into a business-transformation role in this practice.

At the same time, we’ve got to be mindful of the disruptive external technology forces coming in as well. EA can’t be divorced from the other aspects of architecture. So the role that enterprise architects play becomes more and more important and elevated in the organization.

Two examples of this disruptive technology being focused on at the conference are Big Data and Cloud Computing. Both are having an impact on our businesses, not because of some new business idea, but because technology is available to enhance or provide new capabilities to the business. EAs still have to understand these new technology innovations and determine how they apply to the business.

We need really good enterprise architects, and they are difficult to find. There is a shortage right now, especially given how much focus is being put on the EA department to deliver sound architectures.

Not standalone

Gardner: We’ve been talking a lot here about Big Data, but usually that’s not just a standalone topic. It’s Big Data and Cloud, mobile and security.

So with these overlapping and complex relationships among multiple trends, why are EA and things like the TOGAF framework and the ArchiMate modeling language especially useful?

Band: One of the things that has been clear for a while now is that people outside of IT no longer necessarily have to go through the technology function to avail themselves of these technologies. Whether they ever had to is really a question as well.

One of the things that EA is doing, especially in the practice I work in, is using approaches like the ArchiMate modeling language to effect clear communication between the business, IT, partners and other stakeholders. That’s what I do in my daily work, overseeing our major systems-modernization efforts. I work with major partners, some of which are offshore.

I’m increasingly called upon to make sure that we have clear processes for making decisions and clear ways of visualizing the different choices in front of us. We can’t always unilaterally dictate the choice, but we can make the conversation clearer by using frameworks like the TOGAF standard and the ArchiMate modeling language, which I use virtually every day in my work.

Hornford: The fundamental benefit of these tools is the organization realizing its capability and strategy. I just came from a session where a fellow quoted a Harvard study, which said that around a third of executives thought their company was good at executing on its strategy. He highlighted that this means two-thirds are not good at executing on their strategy.

If you’re not good at executing on your strategy and you’ve got Big Data, mobile, consumerization of IT and Cloud, where are you going? What’s the correct approach? How does this fit into what you were trying to accomplish as an enterprise?

An enterprise architect who is doing their job brings together the strategy, goals and objectives of the organization, along with its capabilities and the techniques that are available – whether offshoring, onshoring, Cloud, or Big Data – so that the organization is able to move forward to where it needs to be, as opposed to where it will randomly walk to.

Forde: One of the things that has come out in several of the presentations is capability-based planning, an EA technique for getting your arms around all of this from a business-driver perspective. Just to polish what Dave said a little bit, it’s about connecting all of those things. We see enterprises talking about a capability-based view of things on that basis.

Gardner: Let’s get a quick update. The TOGAF framework, where are we and what have been the highlights from this particular event?

Minor upgrade

Hornford: In the last year, we published a minor upgrade, TOGAF version 9.1, which was based upon cleaning up inconsistencies in the language of the TOGAF documentation. What we’re working on right now is a significant new release – the next version of the TOGAF standard – which divides the TOGAF documentation to make it more consumable, more consistent and more useful.

Today, the TOGAF standard has guidance on how to do something mixed into the framework of what you should be doing. We’re peeling those apart. With them peeled apart, we won’t have guidance that is tied to classic application architecture in a world of Cloud.

What we have found in working with the Banking Industry Architecture Network (BIAN) on banking architecture, Sherwood Applied Business Security Architecture (SABSA) on security architecture, and the TeleManagement Forum, is that the concepts in the TOGAF framework work across industries and across trends. We need to move the guidance into a place where we can be far nimbler about how to tie Cloud to a current strategy, or consumerization of IT to on-shoring.

Franken: The ArchiMate modeling language turned two last year. The ArchiMate 1.0 standard is the language for modeling out the core of your EA; the ArchiMate 2.0 standard added two extensions to align it better with the process of EA as described in the TOGAF standard.

The first is the ability to model motivation: why you’re doing EA, the stakeholders, and the goals that drive them. The second extension is the ability to model implementation and migration planning.

So with the core EA and these two extensions, together with the TOGAF standard process, you have a good basis for getting EA to work in your organization.

Gardner: Mike, fill us in on some of your thoughts about the role of information architecture vis-à-vis the larger business architect and enterprise architect roles.

Walker: Information architecture is an interesting topic in that it hasn’t been getting a whole lot of attention until recently.

Information architecture is an aspect of Enterprise Architecture that enables an information strategy or business solution through the definition of the company’s business information assets, their sources, structure, classification and associations that will prescribe the required application architecture and technical capabilities.

Information architecture is the bridge between the Business Architecture world and the application and technology architecture activities.

The reason I say that is because information architecture is a business-driven discipline that details the information strategy of the company. As we know, and as we’ve heard in the conference keynotes – in the NASA, Big Data, and security presentations, for example – the preservation and classification of that information is vital to understanding what your architecture should be.

Least matured

From an industry perspective, this is one of the least matured areas as far as being incorporated into a formal discipline goes. The TOGAF standard actually has a phase dedicated to it: Data Architecture. Still, there are lots of opportunities to grow and to incorporate additional methods, models and tools from the enterprise information management discipline.

Enterprise information management not only captures traditional topic areas like master data management (MDM), metadata and unstructured information architecture, but also focuses on information governance and on the architecture patterns and styles implemented in MDM, Big Data, and so on. There is a great deal of opportunity there.

As for the role of information architects, I’m seeing more and more traction in the industry as a whole. I’ve worked with an entire group focused on information architecture, building up an enterprise information management practice so that we can take our top-line business strategies and understand what architectures we need to put in place.

This is a critical enabler for global companies, because they are often restricted by regulation, typically imposed at a national or regional level. This means we have to understand the information before we build our architecture. It’s not about the application, but rather the data it processes, moves, or transforms.

Gardner: Up until not too long ago, the conventional thinking was that applications generate data. Then you treat the data in some way so that it can be used, perhaps by other applications, but that the data was secondary to the application.

But there’s some shift in that thinking now more toward the idea that the data is the application and that new applications are designed to actually expand on the data’s value and deliver it out to mobile tiers perhaps. Does that follow in your thinking that the data is actually more prominent as a resource perhaps on par with applications?

Walker: You’re spot on, Dana. Before the commoditization of these technologies, which resided on premises, we could get away with starting at the application layer and working our way back, because we had access to the source code or hardware behind our firewalls. We could throw servers at the problem, and we used to put firewalls in front of the data to solve the problem with infrastructure. So we didn’t have to treat information as a first-class citizen. Times have changed, though.

Information access and processing is now democratized, and it’s being pushed as the first point of presentment. A lot of the time this is on a mobile device, and even then it’s not a corporate mobile device, but your personal device. So how do you handle that data?

It’s the same way with Cloud, and I’ll give you a great example. I was working as an adviser for a company that was looking at its Cloud strategy. They had made a big bet on one of the big infrastructure and Cloud-service providers. They looked first at the features and functions that the Cloud provider could offer, and not necessarily at the information requirements. They ran into two major issues, each essentially a showstopper, and had to pull off that infrastructure.

The first was in that specific Cloud provider’s terms of service around intellectual property (IP) ownership: essentially, the company would have been forced to sign away its IP rights.

Big business

As you know, IP is big business these days, so that was a showstopper. The second issue was that it broke core regulatory requirements around being able to discover information.

So focusing on the application to make sure it meets your functional needs is important. However, we should take a step back, look at the information first, and make sure that the requirements of the people in your organization who can’t say no are satisfied.

Gardner: Is data architecture different from EA and Business Architecture, or is it a subset? What’s the relationship, Dave?

Hornford: Data architecture is part of an EA. I won’t use the word subset, because a subset starts to imply that it is a distinct thing you can look at on its own. You cannot look at your Business Architecture without understanding your information architecture. When you think about Big Data: cool, we’ve got this pile of data in the corner. Where did it come from? Can we use it? Do we actually have legitimate rights, as Mike highlighted, to use this information? Are we allowed to mix it, and who mixes it?

When we look at how our businesses are optimized, they normally optimize around work product – what the organization delivers. That’s very easy: you can see who consumes your work product. With information, you often have no idea who consumes it. So now we have provenance, we have source, and, as we move to global companies, we have the trends around consumerization, Cloud, and ever-tightening cycle times.

Gardner: Of course, the end game for a lot of the practitioners here is to create that feedback loop of a lifecycle approach, rapid information injection and rapid analysis that could be applied. So what are some of the ways that these disciplines and tools can help foster that complete lifecycle?

Band: The disciplines and tools can facilitate the right conversations among different stakeholders. One of the things that we’re doing at The Standard is building cadres equally balanced between people in business and IT.

We’re training them in information management, going through a particular curriculum, and having them study for an information management certification that introduces a lot of these different frameworks and standard concepts.

Creating cadres

We want to create these cadres to be able to solve tough and persistent information management problems that affect all companies in financial services, because information is a shared asset. The purpose of the frameworks is to ensure proper stewardship of that asset across disciplines and across organizations within an enterprise.

Hornford: The core comes from the two standards that we have, the ArchiMate standard and the TOGAF standard. The TOGAF standard has, from its early roots, focused on the components of EA and on building a consistent method for understanding what I’m trying to accomplish, where I am, and where I need to be to reach my goal.

When we bring in the ArchiMate standard, I have a language, a descriptor, a visual descriptor that allows me to cross all of those domains in a consistent description, so that I can do that traceability. When I pull in this lever or I have this regulatory impact, what does it hit me with, or if I have this constraint, what does it hit me with?

If I don’t do this – if I don’t use the framework of the TOGAF standard, or the discipline of formal modeling in the ArchiMate standard – we’re going to do it anecdotally. We’re going to trip. We’re going to fall. We’re going to have a never-ending series of surprises, as Mike highlighted.

“Oh, terms of service. I am violating the regulations. Beautiful. Let’s take that to our executive and tell him right as we are about to go live that we have to stop, because we can’t get where we want to go, because we didn’t think about what it took to get there.” And that’s the core of EA in the frameworks.

Walker: To build on what Dave just talked about, and going back to your first question, Dana, about the value statement on TOGAF from a business perspective: the business value of the TOGAF standard is a repeatable, predictable process for building out architectures that properly manages risk and reliably produces value.

The TOGAF framework provides a methodology to ask what problems you’re trying to solve and where you are trying to go with your business opportunities or challenges. That leads to Business Architecture, which is really a rationalization, in technical or architectural terms, of the distillation of the corporate strategy.

From there, what you want to understand is information: how does it translate, and what information architecture do we need to put in place? You get into all sorts of things around risk management, and it goes on from there to what we were talking about earlier: information architecture.

If the TOGAF standard is applied properly, you can achieve the same result every time. That is what interests business stakeholders, in my opinion. And the ArchiMate modeling language is great because, as we talked about, it provides very rich visualizations, so that people can not only show a picture, but tie information together. Unlike other aspects of architecture, information architecture is less about the boxes and more about the lines.

Quality of the individuals

Forde: Building on what Dave was saying earlier, and also what Iver was saying: while the process, the methodology, and the tools are of interest, it’s the discipline and the quality of the individuals doing the work that matter.

Iver talked about how the conversation is shifting and the practice is improving to build communications groups that have a discipline to operate around. What I hear implied, and what I know specifically occurs, is that we end up with assets that are well described and reusable.

And there is a point at which you reach a critical mass where these assets become an accelerator for decision making. So the ability of the enterprise, and of the decision makers in the enterprise at the right level, to respond is improved, because they have a well-disciplined foundation beneath them.

They have a set of assets that are reasonably well known, at the right level of granularity for them to absorb the information, and the conversation is structured so that the technical people and the business people are in the right room together to talk about the problems.

This is actually a fairly sophisticated set of operations that I am describing, and it doesn’t happen overnight, but it is definitely one of the things that we see occurring with our members in certain cases.

Hornford: I want to build on what Chris said. It’s actually the word “asset.” While he was talking, I was thinking about how people have talked about information as an asset. Most of us don’t know what information we have, how it’s collected, or where it is, but we know we’ve got a valuable asset.

I’ll use an analogy. I have a factory some place in the world that makes stuff. Is that an asset? If I know that my factory is able to produce a particular set of goods and it’s hooked into my supply chain here, I’ve got an asset. Before that, I just owned a thing.

I was very encouraged listening to what Iver talked about. We’re building cadres. We’re building out this approach, and I have seen this. I hadn’t been using that word, but now I’m stealing it. It’s how people build effective teams: not by taking a couple of specialists and putting them in an ivory tower, but by providing the method and the discipline for how we converse, so that we can have a consistent conversation.

When I tie it with some of the tools from the Architecture Forum and the ArchiMate Forum, I’m able to consistently describe it, so that I now have an asset I can identify, consume and produce value from.

Business context

Forde: And this is very different from data modeling. We are not talking about entity-relationship junk at the technical-detail level, or third normal form and that kind of stuff. We’re talking about a conversation that occurs around the business context of what needs to go on, supported by the right level of technical detail when you need to go there in order to clarify.


Filed under ArchiMate®, Enterprise Architecture, TOGAF®

The Open Group Panel Explores How the Big Data Era Now Challenges the IT Status Quo

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group panel explores how the Big Data era now challenges the IT status quo, or view the on-demand video recording of this discussion here: http://new.livestream.com/opengroup/events/1838807.

We recently assembled a panel of experts to explore how Big Data changes the status quo for architecting the enterprise. The bottom line from the discussion is that large enterprises should not just wade into Big Data as an isolated function, but should anticipate the strategic effects and impacts of Big Data — as well as the simultaneous complicating factors of Cloud Computing and mobile — as soon as possible.

The panel consisted of Robert Weisman, CEO and Chief Enterprise Architect at Build The Vision; Andras Szakal, Vice President and CTO of IBM’s Federal Division; Jim Hietala, Vice President for Security at The Open Group; and Chris Gerty, Deputy Program Manager at the Open Innovation Program at NASA. I served as the moderator.

And this special thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on “Big Data — the transformation we need to embrace today.”

Threaded factors

An interesting thread for me throughout the conference was to factor where Big Data begins and plain old data, if you will, ends. Of course, it’s going to vary quite a bit from organization to organization.

But Gerty from NASA, part of our panel, provided a good example: It’s when you run out of gas with your old data methods, and your ability to deal with the data — and it’s not just the size of the data itself.

Therefore, Big Data means doing things differently — not just to manage the velocity, the volume, and the variety of the data, but to really think about data fundamentally and differently. And we need to think about security, risk, and governance. If it’s a “boundaryless organization” when it comes to your data, either as a product, a service, or a resource, then the control and management of which data should be exposed, which should be opened, and which should be very closely guarded all need to be factored, determined, and implemented.

Here are some excerpts from the on-stage discussion:

Dana Gardner: You mentioned that Big Data to you is not a factor of the size, because NASA’s dealing with so much. It’s when you run out of steam, as it were, with the methodologies. Maybe you could explain more. When do you know that you’ve actually run out of steam with the methodologies?

Gerty: When we collect data, we have some sort of goal in mind of what we might get out of it. When we put the pieces from the data together, either it doesn’t fit as well as you thought, or you’re successful and you continue to do the same thing, gathering archives of information.

At that point, when you realize there might even be something else that you want to do with the data, different from what you planned originally, that’s when you have to pivot a little bit and say, “Now I need to treat this as a living archive. It may live beyond me.” At that point, you treat it as setting up the infrastructure for being used later, whether by you or someone else. That’s an important transition to make, and it might be what one could define as Big Data.

Gardner: Andras, does that square with where you are in your government interactions — that data now becomes a different type of resource, and that you need to know when to do things differently?

Szakal: The importance of data hasn’t changed. The data itself, the veracity of the data, is still important. Transactional data will always need to exist. The difference is that you now have the three or four Vs, depending on how you look at it, but the importance lies in the data’s veracity and in your ability to understand and use that data before its shelf life runs out.

Some data has a shelf life that’s long-lived. Other data has very little shelf life, and you would use different approaches to utilize that information. It’s ultimately not about the data itself, but about gaining deep insight into that data. So it’s not storing data or manipulating data, but applying analytical capabilities to it.

Gardner: Bob, we’ve seen the price points on storage go down so dramatically. We’ve seen people decide to hold on to data that they wouldn’t have before, simply because they can and they can afford to do so. That means we need to try to extract value and use that data. From the perspective of an enterprise architect, how are things different now, vis-à-vis this much larger set and variety of data, when it comes to planning and executing as architects?

Weisman: One of the major issues is that organizations normally hold two orders of magnitude more data than they need. It’s a huge overhead, both in terms of the application architecture, with a code base larger than it should be, and the technology architecture, which supports a horrendous number of servers and a whole bunch of technology they don’t need.

The issue for the architect is to figure out what data is useful, institute a governance process so that you can have data lifecycle management and proper disposition, focus the organization on the information, data, and knowledge that will provide business value, and help the organization innovate and gain a competitive advantage.

Can’t afford it

And in terms of government, just improve service delivery, because there’s waste right now on information infrastructure, and we can’t afford it anymore.

Gardner: So it’s difficult to know what to keep and what not to keep. I’ve actually spoken to a few people lately who want to keep everything, just because they want to mine it, and they are willing to spend the money and effort to do that.

Jim Hietala, when people do get to this point of trying to decide what to keep, what not to keep, and how to architect properly for that, they also need to factor in security. It shouldn’t come later in the process; it should come early. What are some of the precepts that you think are important in applying good security practices to Big Data?

Hietala: One of the big challenges is that many of the big-data platforms weren’t built from the get-go with security in mind. So some of the controls that you’ve had available in your relational databases, for instance, you move over to the Big Data platforms and the access control authorizations and mechanisms are not there today.

Planning the architecture, and looking at bringing in third-party controls to give you the security mechanisms that you are used to in your older platforms, is something that organizations are going to have to do. It’s really an evolving and emerging thing at this point.

Gardner: There are a lot of unknown unknowns out there, as we discovered with our tweet chat last month. Some people think that data is just data, and you apply the same security to it. Do you think that’s the case with Big Data? Is it just another follow-through of what you always did with data in the first place?

Hietala: I would say yes, at a conceptual level, but it’s like what we saw with virtualization. When there was a mad rush to virtualize everything, many of those traditional security controls didn’t translate directly into the virtualized world. The same thing is true with Big Data.

When you’re talking about those volumes of data, applying encryption and applying various security controls, you have to think about how those things are going to scale. That may require new solutions and new technologies.

Gardner: Chris Gerty, when it comes to that governance, security, and access control, are there any lessons that you’ve learned about getting the best of openness while still having the ability to manage the spigot?

Gerty: Spigot is probably a dangerous term to use, because it implies that all data is treated the same. The sooner that you can tag the data as either sensitive or not, mostly coming from the person or team that’s developed or originated the data, the better.

Kicking the can

Once you have it on a hard drive, once you get crazy about storing everything, if you don’t know where it came from, you’re forced to put it into a secure environment. And that’s just kicking the can down the road. It’s really a disservice to people who might use the data in a useful way to address their problems.

We constantly have satellites that are made for one purpose. They send all the data down. It’s controlled either for security or for intellectual property (IP), so someone can write a paper. Then, after the project doesn’t get funded or it just comes to a nice graceful close, there is that extra step, which is almost a responsibility of the originators, to make it useful to the rest of the world.
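
To make that tagging idea concrete, here is a minimal sketch, in Python, of what classifying data at the source might look like; the labels, field names, and origin string are illustrative assumptions, not a NASA schema:

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    IP_RESTRICTED = "ip_restricted"            # intellectual-property controls
    SECURITY_RESTRICTED = "security_restricted"

@dataclass(frozen=True)
class TaggedRecord:
    payload: bytes
    origin: str               # the person or team that originated the data
    sensitivity: Sensitivity
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# The originating team classifies the record as it is produced, so nobody
# downstream is forced to lock everything away "just in case."
reading = TaggedRecord(
    payload=b"...telemetry...",
    origin="satellite-x/instrument-3",
    sensitivity=Sensitivity.PUBLIC,
)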

Gardner: Let’s look at Big Data through the lens of some other major trends right now. Let’s start with Cloud. You mentioned that at NASA, you have your own private Cloud that you’re using a lot, of course, but you’re also now dabbling in commercial and public Clouds. Frankly, the price points that these Cloud providers are offering for storage and data services are pretty compelling.

So we should expect more data to go to the Cloud. Bob, from your perspective, as organizations and architects have to think about data in this hybrid Cloud world, on-premises and off-premises, moving back and forth, what do you think enterprise architects need to start thinking about in terms of managing that and planning for the right destination of data, based on the right mix of other requirements?

Weisman: It’s a good question. As you said, the price point is compelling, but the security and privacy of the information is something else that has to be taken into account. Where is that information going to reside? You have to have very stringent service-level agreements (SLAs) and in certain cases, you might say it’s a price point that’s compelling, but the risk analysis that I have done means that I’m going to have to set up my own private Cloud.

Right now, everybody’s saying the public Cloud is going to be the way to go. Vendors are going to have to be very sensitive to that, and many are, at this point in time, addressing a lot of the needs of some of their large client bases. So it’s not one-size-fits-all, and it’s more than just a price for service. Architecture can bring down the price pretty dramatically, even within an enterprise.

Gardner: Andras, how do the Cloud and Big Data come together in a way that’s intriguing to you?

Szakal: Actually, it’s a great question. We could take the rest of the 22 minutes talking about this one question. I helped lead the President’s Commission on Big Data that Steve Mills from IBM and — I forget the name of the executive from SAP — led. We intentionally tried to separate Cloud from Big Data architecture, primarily because we don’t believe that, in all cases, Cloud is the answer to all things Big Data. You have to define the architecture that’s appropriate for your business needs.

However, it also depends on where the data is born. Take many of the investments IBM has made in enterprise marketing management, for example Coremetrics; several of these services that we now offer help customers gain deep insight into how their retail market or supply chain behaves.

Born in the Cloud

All of that information is born in the Cloud. But if you’re talking about actually using Cloud as infrastructure and moving around huge sums of data or constructing some of these solutions on your own, then some of the ideas that Bob conveyed are absolutely applicable.

I think it becomes prohibitive to do that, and easier to stand up a hybrid environment for managing the amount of data. But you have to think about whether your data is real-time data, data to which you could apply some of these new technologies, like Hadoop MapReduce-type solutions, or traditional data warehousing.

Data warehouses are going to continue to exist and they’re going to continue to evolve technologically. You’re always going to use a subset of data in those data warehouses, and it’s going to be an applicable technology for many years to come.

Gardner: So suffice it to say, an enterprise architect who is well versed in both Cloud infrastructure requirements, technologies, and methods, as well as Big Data, will probably be in quite high demand. That specialization in one or the other isn’t as valuable as being able to cross-pollinate between them.

Szakal: Absolutely. It’s about enabling our architects and finding individuals who have this unique set of skills: analytics, mathematics, and business. Those individuals are going to be the future architects of the IT world, because analytics and Big Data are going to be integrated into everything that we do and become part of the business processing.

Gardner: Well, that’s a great segue to the next topic that I am interested in, and it’s around mobility as a trend and also application development. The reason I lump them together is that I increasingly see developers being tasked with mobile first.

When you create a new app, you have to remember that this is going to run in the mobile tier and you want to make sure that the requirements, the UI, and the complexity of that app don’t go beyond the ability of the mobile app and the mobile user. This is interesting to me, because data now has a different relationship with apps.

We used to think of apps as creating data and then the data would be stored and it might be used or integrated. Now, we have applications that are simply there in order to present the data and we have the ability now to present it to those mobile devices in the mobile tier, which means it goes anywhere, everywhere all the time.

Let me start with you, Jim, because it’s security and risk, but it’s also rethinking the way we use data in a mobile tier. If we can do it safely, and that’s a big IF, how important should it be for organizations to start thinking about making this data available to all of these devices, pouring it out into that mobile tier as much as possible?

Hietala: In terms of enabling the business, it’s very important. There are a lot of benefits that accrue from accessing your data from whatever device you happen to be on. To me, it is that question of “if,” because now there’s a whole lot of problems to be solved relative to the data floating around anywhere on Android, iOS, or whatever the platform is, and the organization being able to lock down its data on those devices, forgetting about whether it’s the organization’s device or my device. There’s a set of issues around that that the security industry is just starting to get its arms around today.

Mobile ability

Gardner: Chris, any thoughts about this mobile ability that the data gets more valuable the more you can use it and apply it, and then the more you can apply it, the more data you generate that makes the data more valuable, and we start getting into that positive feedback loop?

Gerty: Absolutely. It’s almost an appreciation of what more people could do and get to the problem. We’re getting to the point where, if it’s available on your desktop, you’re going to find a way to make it available on your device.

Those same security questions probably need to be answered anyway, but making it mobile-compatible is almost an acknowledgment that there will be someone who wants to use it. So let me go that extra step to make it compatible and see what I get from them. It’s more of a cultural benefit that you get from making things compatible with mobile.

Gardner: Any thoughts about what developers should be thinking in trying to bring the fruits of Big Data, through these analytics, to more users rather than just the BI folks or those who are good at SQL queries? Does this change the game by making an application on a mobile device simple and powerful, but accessing this real-time, updated treasure trove of data?

Gerty: I always think of the astronaut on the moon. He’s got a big, bulky glove, and he might have a heads-up display in front of him, but he really needs to know exactly a certain piece of information at the right moment, dealing with bandwidth issues, dealing with the environment, a foggy helmet, whatever.

It’s very analogous to what the day-to-day professional faces, trying to find that quick e-mail he needs to see, or which meeting to go to — which one is more important — and it all comes down to putting your developer in the shoes of the user. So anytime you can get interaction between the two, that’s valuable.

Weisman: From an Enterprise Architecture point of view, my background is mainly defense and government, and mobile computing in defense has been around for decades. So you’ve always been dealing with that.

The main thing is that, in many cases, when they’re coming up with information, the whole presentation layer is turning into another architecture domain, with information visualization and also your security controls, including an integrated identity-management capability.

It’s like what you were saying about the astronaut getting it right. He doesn’t need to know everything that’s happening in the world. He needs to see, on his heads-up display, the stuff that’s relevant to him.

So it’s about getting the right information to the right person, in an authorized manner, in a way that he can visualize and make sense of that information, be it straight data, analytics, or whatever. The presentation layer, ergonomics, and visual communication are going to become very important in the future for that. There are also a lot of problems. Rather than doing it at the application level, you’re doing it entirely in one layer.

Governance and security

Gardner: So clearly the implications of data are cutting across how we think about security, how we think about UI, how we factor in mobility. What we now think about in terms of governance and security, we have to do differently than we did with older data models.

Jim Hietala, what about the impact on spurring people toward more virtualized desktop delivery, if you don’t want to have the data on that end device, if you want to solve some of the issues around control and governance, and if you want to be able to manage just how much data gets into that UI, not too much, not too little?

Do you think that some of these concerns that we’re addressing will push people to look even harder, maybe more aggressively, at desktop and application virtualization: as they say, keep it on the server and deliver out just the deltas?

Hietala: That’s an interesting point. I’ve run across a startup in the last month or two that is doing just that. The whole value proposition is to virtualize the environment. You get virtual gold images. You don’t have to worry about what’s actually happening on the physical device, and you know when the devices connect. The security threat goes away. So we may see more of that as a solution.

Gardner: Andras, do you see that some of the implications of Big Data, far-fetched as it may be, are propelling people to cultivate their servers more and virtualize their apps, their data, and their desktops right up to the end devices?

Szakal: Yeah, I do. I see IBM providing solutions for virtual desktop, but I think it was really a security question you were asking. You’re certainly going to see an additional number of virtualized desktop environments.

Ultimately, our networks are still not stable enough, or at a high enough bandwidth, to really make that a useful exercise for all but the most menial users in the enterprise. From a security point of view, there is a lot still to be solved.

And part of the challenge in the Cloud environment that we see today is the proliferation of virtual machines (VMs) and the inability to actually contain the security controls within and across those machines from an enterprise perspective. So we’re going to see more solutions proliferate in this area, trying to solve some of the management issues as well as the security issues, but we’re a long way away from that.

Gerty: Big Data, by itself, isn’t magical. It doesn’t have the answers just by being big. If you need more, you need to pry deeper into it. That’s the example: they realized early enough that they were able to make something good.

Gardner: Jim Hietala, any thoughts about examples that illustrate where we’re going and why this is so important?

Hietala: Being a security guy, I tend to talk about scare stories, horror stories. One example from last year struck me: one of the major retailers here in the U.S. hit the news for having predicted, through customer purchase behavior, when people were pregnant.

They could look and see, based upon a basket of 20 things, that if you’re buying 15 of these and your purchase behavior has changed, they can tell. The privacy implications of that are somewhat concerning.

An example was that this retailer was sending out coupons related to somebody being pregnant. A teenage girl who was pregnant hadn’t told her family yet. Her father found the coupons. There was alarm in the household, and at the local retail store, when the father went and confronted them.

Privacy implications

There are privacy implications from the use of Big Data. When you get powerful new technology in marketing people’s hands, things sometimes go awry. So I’d throw that out just as a cautionary tale that there is that aspect to this. When you can see across people’s buying transactions, things like that, there are privacy considerations that we’ll have to think about, and that we really need to think about as an industry and a society.


Filed under Conference

Open Group Panel Explores Changing Field of Risk Management and Analysis in the Era of Big Data

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Panel Explores Changing Field of Risk Management and Analysis in Era of Big Data

This is a transcript of a sponsored podcast discussion on the threats from, and the promise of, Big Data in securing enterprise information assets, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference itself is focusing on “Big Data — the transformation we need to embrace today.”

We’re here now with a panel of experts to explore new trends and solutions in the area of risk management and analysis. We’ll learn how large enterprises are delivering risk assessments and risk analysis, and we’ll see how Big Data can be both a source of risk and a tool for better understanding and mitigating risks.

With that, please join me in welcoming our panel. We’re here with Jack Freund, PhD, the Information Security Risk Assessment Manager at TIAA-CREF. Welcome, Jack.

Jack Freund: Hello Dana, how are you?

Gardner: I’m great. Glad you could join us.

We are also here with Jack Jones, Principal of CXOWARE. He has more than nine years of experience as a Chief Information Security Officer and is the inventor of the Factor Analysis of Information Risk (FAIR) framework. Welcome, Jack.

Jack Jones: Thank you. And we’re also here with Jim Hietala, Vice President, Security for The Open Group. Welcome, Jim.

Jim Hietala: Thanks, Dana.

Gardner: All right, let’s start out with looking at this from a position of trends. Why is the issue of risk analysis so prominent now? What’s different from, say, five years ago? And we’ll start with you, Jack Jones.

Jones: The information security industry has struggled with getting the attention of and support from management and businesses for a long time, and it has finally come around to the fact that the executives care about loss exposure — the likelihood of bad things happening and how bad those things are likely to be.

It’s only when we speak of those terms or those issues in terms of risk, that we make sense to those executives. And once we do that, we begin to gain some credibility and traction in terms of getting things done.

Gardner: So we really need to talk about this in the terms that a business executive would appreciate, not necessarily an IT executive.

Effects on business

Jones: Absolutely. They’re tired of hearing about vulnerabilities, hackers, and that sort of thing. It’s only when we can talk in terms of the effect on the business that it makes sense to them.

Gardner: Jack Freund, I should also point out that you have more than 14 years in enterprise IT experience. You’re a visiting professor at DeVry University and you chair a risk-management subcommittee for ISACA? Is that correct?

Freund: ISACA, yes.

Gardner: And do you agree?

Freund: The problem that we have as a profession, and I think it’s a big problem, is that we have allowed ourselves to escape the natural trend that the other IT professionals have already taken.

There was a time, years ago, when you could code in the basement, and nobody cared much about what you were doing. But now, largely speaking, developers and systems administrators are very focused on meeting the goals of the organization.

Security has been allowed to miss that boat a little. We have been allowed to hide behind this aura of a protector and an alerter of terrible things that could happen, without really tying ourselves to the problems the organizations are facing and asking how we can help them succeed in what they’re doing.

Gardner: Jim Hietala, how do you see things that are different now than a few years ago when it comes to risk assessment?

Hietala: There are certainly changes on the threat side of the landscape. Five years ago, you didn’t really have hacktivism or this notion of an advanced persistent threat (APT).

That highly skilled attacker taking aim at governments and large organizations didn’t really exist, or didn’t exist to the degree it does today. So that has changed.

You also have big changes to the IT platform landscape, all of which bring new risks that organizations need to really think about. The mobility trend, the Cloud trend, the big-data trend that we are talking about today, all of those things bring new risk to the organization.

As Jack Jones mentioned, business executives don’t want to hear, “I’ve got 15 vulnerabilities in the mobility part of my organization.” They want to understand the risk of bad things happening because of mobility, what we’re doing about it, and what’s happening to risk over time.

So it’s a combination of changes in the threats and attackers, as well as just changes to the IT landscape, that we have to take a different look at how we measure and present risk to the business.

Gardner: Because we’re at a Big Data conference, do you share my perception, Jack Jones, that Big Data can be a source of risk and vulnerability, but that the analytics and business intelligence (BI) tools we’re employing with Big Data can also be used to alert you to risks and provide a strong tool for better understanding your true risk setting or environment?

Crown jewels

Jones: You are absolutely right. You think of Big Data and, by definition, it’s where your crown jewels, and everything that leads to crown jewels from an information perspective, are going to be found. It’s like one-stop shopping for the bad guy, if you want to look at it in that context. It definitely needs to be protected. The architecture surrounding it and its integration across a lot of different platforms and such, can be leveraged and probably result in a complex landscape to try and secure.

There are a lot of ways into that data, but if you can leverage that same Big Data architecture as an approach to information security, with log data and other threat and vulnerability data, you should be able to make some significant gains in terms of how well-informed your analyses and your decisions are.

Gardner: Jack Freund, do you share that? How does Big Data fit into your understanding of the evolving arena of risk assessment and analysis?

Freund: If we fast-forward it five years, and this is even true today, a lot of people on the cutting edge of Big Data will tell you the problem isn’t so much building everything together and figuring out what it can do. They are going to tell you that the problem is what we do once we figure out everything that we have. This is the problem that we have traditionally had on a much smaller scale in information security. When everything is important, nothing is important.

Gardner: To follow up on that, where do you see the gaps in risk analysis in large organizations? In other words, what parts of organizations aren’t being assessed for risk and should be?

Freund: The big problem that exists today in the way risk assessments are done is the focus on labels. We want to quickly address the low, medium, and high things and know where they are. But there are inherent problems in the way that we think about those labels without doing any of the analysis legwork.

What’s really missing is that true analysis. If the system goes offline, do we lose money? If the system becomes compromised, what are the cost-accounting consequences that allow us to figure out how much money we’re going to lose?

That analysis work is largely missing. That’s the gap. Today the assumption is that if a control is not in place, then there’s a risk that must be addressed in some fashion. So we end up with these very long lists of horrible, terrible things that can be done to us in all sorts of different ways, without any relevance to the overall business of the organization.

Every day, our organizations are out there selling products and offering services, which, in and of itself, is its own risky venture. So tying what we do from an information security perspective to that is critical, not just for the success of the organization, but for the success of our profession.

Gardner: So we can safely say that large companies are probably pretty good at cost-benefit analysis, or they wouldn’t be successful. Now, I guess we need to ask them to take that a step further and do a cost-risk analysis, in business terms, being mindful that their IT systems might be a much larger part of that than they had once considered. Is that fair, Jack?

Risk implications

Jones: Businesses have been making these decisions, chasing the opportunity, but generally, without any clear understanding of the risk implications, at least from the information security perspective. They will have us in the corner screaming and throwing red flags in there, and talking about vulnerabilities and threats from one thing or another.

But, we come to the table with red, yellow, and green indicators, and on the other side of the table, they’ve got numbers. Well, here is what we expect to earn in revenue from this initiative, and the information security people are saying it’s crazy. How do you normalize the quantitative revenue gain versus red, yellow, and green?

Gardner: Jim Hietala, do you see it in the same red, yellow, green or are there some other frameworks or standard methodologies that The Open Group is looking at to make this a bit more of a science?

Hietala: Probably four years ago, we published what we call the Risk Taxonomy Standard, which is based upon FAIR, the risk analysis framework that Jack Jones invented. So we’re big believers in bringing that level of precision to doing risk analysis. Having just gone through training for FAIR myself, as part of the standards effort that we’re doing around certification, I can say that it brings a level of precision and a depth of analysis to risk analysis that has frequently been lacking in IT security and risk management.

Gardner: We’ve talked about how organizations need to be mindful that their risks are higher and different than in the past and we’ve talked about how standardization and methodologies are important, helping them better understand this from a business perspective, instead of just a technology perspective.

But, I’m curious about a cultural and organizational perspective. Whose job should this fall under? Who is wearing the white hat in the company and can rally the forces of good and make all the bad things managed? Is this a single person, a cultural, an organizational mission? How do you make this work in the enterprise in a real-world way? Let’s go to you, Jack Freund.

Freund: The profession of IT risk management is changing. That profession will have to sit between the business and information security inclusive of all the other IT functions that make that happen.

In order to be successful sitting between these two groups, you have to be able to speak the language of both of those groups. You have to be able to understand profit and loss and capital expenditure on the business side. On the IT risk side, you have to be technical enough to do all those sorts of things.

But I think the sum total of those two things is probably only about 50 percent of the job of IT risk management today. The other 50 percent is communication. Finding ways to translate that language and to understand the needs and concerns of each side of that relationship is really the job of IT risk management.

To answer your question, I think it’s absolutely the job of IT risk management to do that. From my own experiences with the FAIR framework, I can say that using FAIR is the Rosetta Stone for speaking between those two groups.

Necessary tools

It gives you the tools necessary to speak in the insurance and risk terms that the business appreciates. And it gives you the ability to be as technical and, if you will, as nerdy as you need to be in order to talk to IT security and the other IT functions, to make sure everybody is on the same page and everyone feels that their concerns are represented in the risk-assessment functions that are happening.

Gardner: Jack Jones, can you add to that?

Jones: I agree with what Jack said wholeheartedly. I would add, though, that integration or adoption of something like this is a lot easier the higher up in the organization you go.

Traditionally, it’s CFOs whose necks are most clearly on the line for risk-related issues within most organizations. At least in my experience, if you get their ear on this and present the information security data analyses to them, they jump on board, they drive it through the organization, and it’s just brain-dead easy.

If you try to drive it up through the ranks, maybe you get an enthusiastic supporter in the information security organization, especially if it’s below the CISO level, and they try a grassroots sort of effort to bring it in, it’s a tougher thing. It can still work. I’ve seen it work very well, but, it’s a longer row to hoe.

Gardner: There has been a lot of research, and there have been many studies and surveys, on data breaches. What are some of the best sources, or maybe not-so-good sources, for actually measuring this? How do you know if you’re doing it right? How do you know if you’re moving from yellow to green, instead of to red? To you, Jack Freund.

Freund: There are a couple of things in that question. The first is there’s this inherent assumption in a lot of organizations that we need to move from yellow to green, and that may not be the case. So, becoming very knowledgeable about the risk posture and the risk tolerance of the organization is a key.

That’s part of the official mindset of IT security. When you graduate an information security person today, they are minted knowing that there are a lot of bad things out there and that their goal in life is to reduce them. But that may not be the case. The case may very well be that things are okay now, and we have bigger fish to fry over here that we’re going to focus on. So that’s one thing.

The second thing, and it’s a very good question, is how we know that we’re getting better? How do we trend that over time? Overall, measuring that value for the organization has to be able to show a reduction of a risk or at least reduction of risk to the risk-tolerance levels of the organization.

Calculating and understanding that requires something that I always phrase as we have to become comfortable with uncertainty. When you are talking about risk in general, you’re talking about forward-looking statements about things that may or may not happen. So, becoming comfortable with the fact that they may or may not happen means that when you measure them today, you have to be willing to be a little bit squishy in how you’re representing that.

In FAIR and in other academic works, they talk about using ranges to do that. So things like high, medium, and low could be represented in terms of a minimum, a maximum, and a most likely value. And that tends to be very, very effective. People can respond to that fairly well.
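
As a rough sketch of what a ranged estimate looks like in code (the numbers are invented, and the choice of a triangular distribution is an assumption; FAIR tooling often uses PERT-style distributions instead), a minimum, most likely, maximum triple can be sampled rather than collapsed into a single point:

import random

# Hypothetical calibrated estimate: loss-event frequency per year.
LEF_MIN, LEF_MODE, LEF_MAX = 1.0, 3.0, 12.0

def sample_lef() -> float:
    # random.triangular takes (low, high, mode); the triangular distribution
    # is the simplest way to honor a (min, most likely, max) estimate.
    return random.triangular(LEF_MIN, LEF_MAX, LEF_MODE)

samples = sorted(sample_lef() for _ in range(10_000))
print(f"median: {samples[5_000]:.1f} events/year")
print(f"90th percentile: {samples[9_000]:.1f} events/year")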

Gathering data

Jones: With regard to the data sources, there are a lot of people out there doing these sorts of studies, gathering data. The problem that’s hamstringing that effort is the lack of a common set of definitions, nomenclature, and even taxonomy around the problem itself.

You will have one study that has defined threat, vulnerability, or whatever differently from some other study, and so the data can’t be normalized. It really harms its utility. I see data out there and I think, “That looks like it could be really useful.” But I hesitate to use it, because they don’t publish their definitions or their approach, and I don’t understand how they went after it.

There’s just so much superficial thinking in the profession on this. When we dig under the covers, too often I run into stuff that just can’t be defended. It doesn’t make sense, and therefore the data can’t be used. It’s an unfortunate situation.

I do think we’re heading in a positive direction. FAIR can provide a normalizing structure for that sort of thing. The VERIS framework, which, by the way, is also derived in part from FAIR, has also gained real traction in terms of the quality of the research they have done and the data they’re generating. We’re headed in the right direction, but we’ve got a long way to go.

Gardner: Jim Hietala, we’re seemingly looking at this on a company-by-company basis. But, is there a vertical industry slice or industry-wide slice where we could look at what’s happening to everyone and put some standard understanding, or measurement around what’s going on in the overall market, maybe by region, maybe by country?

Hietala: There are some industry-specific initiatives and what’s really needed, as Jack Jones mentioned, are common definitions for things like breach, exposure, loss, all those, so that the data sources from one organization can be used in another, and so forth. I think about the financial services industry. I know that there is some information sharing through an organization called the FS-ISAC about what’s happening to financial services organizations in terms of attacks, loss, and those sorts of things.

There’s an opportunity for that on a vertical-by-vertical basis. But, like Jack said, there’s a long way to go. In some industries, healthcare for instance, you’re so far from that it’s ridiculous. Here in the US, the HIPAA security rule says you must do a risk assessment. So hospitals do annual risk assessments, stick the binder on the shelf, and don’t think much about information security in between those annual risk assessments. That’s a generalization, but various industries are at different places on a continuum of maturity in their risk management approaches.

Gardner: As we get better with having a common understanding of the terms and the measurements and we share more data, let’s go back to this notion of how to communicate this effectively to those people that can use it and exercise change management as a result. That could be the CFO, the CEO, what have you, depending on the organization.

Do you have any examples? Can we look to an organization that’s done this right, examine its practices, the way it has communicated, and some of the tools it has used, and say, “Aha, they’re headed in the right direction; maybe we could follow a little bit”? Let’s start with you, Jack Freund.

Freund: I have worked and consulted for various organizations that have done risk management at different levels. The ones that have embraced FAIR tend to be the ones that overall feel that risk is an integral part of their business strategy. And I can give a couple of examples of scenarios that have played out that I think have been successful in the way they have been communicated.

Coming to terms

The key thing to keep in mind is that when you’re a security professional, you’re trained to feel like you need results. But the results for the IT risk management professional are different. The result is, “I’ve communicated this effectively, so I am done.” And then whatever the outcomes are, are the outcomes that needed to be. And that’s a really hard thing to come to terms with.

I’ve been involved in large-scale efforts to assess risk for a Cloud venture. We needed to move virtually every confidential record that we have to the Cloud in order to be competitive with the rest of our industry. If our competitors are finding ways to utilize the Cloud before us, we can lose out. So, we need to find a way to do that, and to be secure and compliant with all the laws and regulations and such.

Through that scenario, one of the things that came out was that key ownership became really, really important. We had the opportunity to look at the various control structures, and we analyzed them using FAIR. What we ended up with was sort of a long-tail risk. Most people will probably do their job right over a long enough period of time. But over that same long period of time, the odds of somebody making a mistake not in your favor are probably likely, though not significantly enough that you can’t make the move.

But, the problem became that the loss side, the side that typically gets ignored with traditional risk-assessment methodologies, was so significant that the organization needed to make some judgment around that, and they needed to have a sense of what we needed to do in order to minimize that.

That became a big point of discussion for us and it drove the conversation away from bad things could happen. We didn’t bury the lead. The lead was that this is the most important thing to this organization in this particular scenario.

So, let’s talk about things we can do. Are we comfortable with it? Do we need to make any sort of changes? What are some control opportunities? How much do they cost? This is a significantly more productive conversation than just, “Here is a bunch of bad things that happen. I’m going to cross my arms and say no.”

Gardner: Jack Jones, examples at work?

Jones: In an organization that I’ve been working with recently, their board of directors said they wanted a quantitative view of information security risk. They just weren’t happy with the red, yellow, green. So, they came to us, and there were really two things that drove them there. One was that they were looking at cyber insurance. They wanted to know how much cyber insurance they should take out, and how do you figure that out when you’ve got a red, yellow, green scale?

They were able to do a series of analyses on a population of the scenarios that they thought were relevant in their world, get an aggregate view of their annualized loss exposure, and make a better informed decision about that particular problem.

Gardner: I’m curious how prevalent cyber insurance is, and is that going to be a leveling effect in the industry where people speak a common language the equivalent of actuarial tables, but for security in enterprise and cyber security?

Jones: One would dream and hope, but at this point, what I’ve seen out there in terms of the basis on which insurance companies are setting their premiums and such is essentially the same old “risk assessment” stuff that the industry has been doing poorly for years. It’s not based on data or any real analysis per se, at least what I’ve run into. What they do is set their premiums high to buffer themselves and typically cover as few things as possible. The question of how much value it’s providing the customers becomes a problem.

Looking to the future

Gardner: We’re coming up on our time limit. So, let’s quickly look to the future. Is there such thing as risk management as a service? Can we outsource this? Is there a way in which moving more of IT into Cloud or hybrid models would mitigate risk, because the Cloud provider would standardize? Then, many players in that environment, those who were buying those services, would be under that same umbrella? Let’s start with you Jim Hietala. What’s the future of this and what do the Cloud trends bring to the table?

Hietala: I’d start with a maxim that comes out of the financial services industry: you can outsource the function, but you still own the risk. That’s an unfortunate reality. You can throw things out into the Cloud, but it doesn’t absolve you from understanding your risk and then doing things to manage it, or to transfer it if there’s insurance, or whatever the case may be.

That’s just a reality. Organizations in the risky world we live in are going to have to get more serious about doing effective risk analysis. From The Open Group standpoint, we see this as an opportunity area.

As I mentioned, we’ve standardized the taxonomy piece of FAIR. And we really see an opportunity around the profession going forward to help the risk-analysis community by further standardizing FAIR and launching a certification program for a FAIR-certified risk analyst. That’s in demand from large organizations that are looking for evidence that people understand how to apply FAIR and use it in doing risk analyses.

Gardner: Jack Freund, looking into your crystal ball, how do you see this discipline evolving?

Freund: I always try to consider things as they exist within other systems. Risk is a system of systems. There are a series of pressures that are applied, and a series of levers that are thrown in order to release that sort of pressure.

Risk will always be owned by the organization that is offering that service. If we decide at some point that we can move to the Cloud and all these other things, we need to look to the legal system. There is a series of pressures that they are going to apply, and who is going to own that, and how that plays itself out.

If we look to the Europeans and the way that they’re managing risk and compliance, they’re still as strict as we in the United States think they may be about things, but there’s still a lot of leeway in the way that laws are written. You’re still being asked to do things that are reasonable and standard for your industry. But we’d still like the ability to know what that is, and I don’t think that’s going to go away anytime soon.

Judgment calls

We’re still going to have to make judgment calls. We’re still going to have to do 100 things with a budget for 10 things. Whenever that happens, you have to make a judgment call. What’s the most important thing that I care about? And that’s why risk management exists, because there’s a certain series of things that we have to deal with. We don’t have the resources to do them all, and I don’t think that’s going to change over time. Regardless of whether the landscape changes, that’s the one that remains true.

Gardner: The last word to you, Jack Jones. It sounds as if we’re continuing down the path of being mostly reactive. Is there anything you can see on the horizon that would perhaps tip the scales, so that the risk management and analysis practitioners can really become proactive and head things off before they become a big problem?

Jones: If we were to take a snapshot at any given point in time of an organization’s loss exposure, how much risk they have right then, that’s a lagging indicator of the decisions they’ve made in the past, and their ability to execute against those decisions.

We can do some great root-cause analysis around that and ask how we got there. But we can also turn that coin around and ask how good we are at making well-informed decisions and then executing against them, asking what that implies from a risk perspective downstream.

If we understand the relationship between our current state and our past and future states, and we have those linkages defined, especially if we have an analytic framework underneath it, we can do some marvelous what-if analysis.

What if this variable changed in our landscape? Let’s run a few thousand Monte Carlo simulations against that and see what comes up. What does that look like? Well, then let’s change this other variable and see which combination of dials, when we turn them, makes us most robust to change in our landscape.

But again, we can’t begin to get there, until we have this foundational set of definitions, frameworks, and such to do that sort of analysis. That’s what we’re doing with FAIR, but without some sort of framework like that, there’s no way you can get there.
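
For illustration only, here is a minimal sketch of that kind of what-if Monte Carlo analysis; the ranges, dollar figures, and distribution choice are invented for the example rather than taken from FAIR or the panel:

import random

N = 100_000  # Monte Carlo trials

def annual_loss(lef, magnitude):
    # lef and magnitude are (min, most_likely, max) triples; draw an event
    # count for the year, then a loss amount for each event.
    events = round(random.triangular(lef[0], lef[2], lef[1]))
    return sum(
        random.triangular(magnitude[0], magnitude[2], magnitude[1])
        for _ in range(events)
    )

def simulate(lef, magnitude):
    losses = sorted(annual_loss(lef, magnitude) for _ in range(N))
    return losses[N // 2], losses[int(N * 0.95)]  # median, 95th percentile

magnitude = (50_000, 200_000, 2_000_000)    # loss per event, dollars
baseline = simulate((1, 3, 12), magnitude)  # current landscape
what_if = simulate((0, 1.5, 6), magnitude)  # a control halves event frequency
print(f"baseline: median ${baseline[0]:,.0f}, 95th pct ${baseline[1]:,.0f}")
print(f"what-if:  median ${what_if[0]:,.0f}, 95th pct ${what_if[1]:,.0f}")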

Gardner: I am afraid we’ll have to leave it there. We’ve been talking with a panel of experts on how new trends and solutions are emerging in the area of risk management and analysis. And we’ve seen how new tools for communication and using Big Data to understand risks are also being brought to the table.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference in Newport Beach, California. I’d like to thank our panel: Jack Freund, PhD, Information Security Risk Assessment Manager at TIAA-CREF. Thanks so much Jack.

Freund: Thank you, Dana.

Gardner: We’ve also been speaking with Jack Jones, Principal at CXOWARE.

Jones: Thank you. Thank you, pleasure to be here.

Gardner: And last, Jim Hietala, the Vice President for Security at The Open Group. Thanks.

Hietala: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions; your host and moderator through these thought leadership interviews. Thanks again for listening and come back next time.


Filed under Security Architecture

On Demand Broadcasts from Day One at The Open Group Conference in Newport Beach

By The Open Group Conference Team

Since not everyone could make the trip to The Open Group Conference in Newport Beach, we’ve put together a recap of day one’s plenary speakers. Stay tuned for more recaps coming soon!

Big Data at NASA

In his talk titled, “Big Data at NASA,” Chris Gerty, deputy program manager, Open Innovation Program, National Aeronautics and Space Administration (NASA), discussed how Big Data is being interpreted by the next generation of rocket scientists. Chris presented a few lessons learned from his experiences at NASA:

  1. A traditional approach is not always the best approach. A tried and proven method may not translate. Creating more programs for more data to store on bigger hard drives is not always effective. We need to address the never-ending challenges that lie ahead in the shift of society to the information age.
  2. A plan for openness. Based on a government directive, Chris’ team looked to answer questions by asking the right people. For example, NASA asked the people gathering data on a satellite to determine what data was the most important, which enabled NASA to narrow focus and solve problems. Furthermore, by realizing what can also be useful to the public and what tools have already been developed by the public, open source development can benefit the masses. Through collaboration, governments and citizens can work together to solve some of humanity’s biggest problems.
  3. Embrace the enormity of the universe. Look for Big Data where no one else is looking by deploying sensors and information-gathering tools in new places. If we continue to be scared of Big Data, we will resist gathering more of it. By finding Big Data where it has yet to be discovered, we can solve problems and innovate.

To view Chris’s presentation, please watch the broadcast session here: http://new.livestream.com/opengroup/Gerty-NPB13

Bringing Order to the Chaos

David Potter, chief technical officer at Promise Innovation, and Ron Schuldt, senior partner at UDEF-IT, LLC, discussed how The Open Group’s evolving Quantum Lifecycle Management (QLM) standard, coupled with its complementary Universal Data Element Framework (UDEF) standard, helps bring order to the terminology chaos that faces Big Data implementations.

The QLM standard provides a framework for the aggregation of lifecycle data from a multiplicity of sources to add value to the decision making process. Gathering mass amounts of data is useless if it cannot be analyzed. The QLM framework provides a means to interpret the information gathered for business intelligence. The UDEF allows each piece of data to be paired with an unambiguous key to provide clarity. By partnering with the UDEF, the QLM framework is able to separate itself from domain-specific semantic models. The UDEF also provides a ready-made key for international language support. As an open standard, the UDEF is data model independent and as such supports normalization across data models.

One example of a successful implementation comes from Compassion International. The organization needed to find a balance between information that should be kept internal (e.g., payment information) and information that should be shared with its international sponsors. In this instance, the UDEF was used as a structured process for harmonizing the terms used in IT systems between funding partners.

The beauty of the QLM framework and UDEF integration is that they are flexible and can be applied to any product, domain and industry.
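To make the keying idea concrete, here is a minimal sketch in Python of how a shared, unambiguous key lets two systems agree that differently named fields mean the same thing. The field names and ID strings are invented for illustration; they are not actual UDEF identifiers.

    # Illustrative only: pair each system's local field name with a shared,
    # unambiguous key so terms can be harmonized across systems.
    # The ID strings are invented, not real UDEF identifiers.
    SHARED_KEYS = {
        "crm.customer_nm": "a.1_4.7",   # hypothetical key for "person name"
        "erp.client_name": "a.1_4.7",   # same concept, different local name
        "erp.invoice_amt": "c.9_2.1",   # hypothetical key for "payment amount"
    }

    def same_concept(field_a: str, field_b: str) -> bool:
        """Two fields denote the same concept if they map to the same key."""
        key_a = SHARED_KEYS.get(field_a)
        key_b = SHARED_KEYS.get(field_b)
        return key_a is not None and key_a == key_b

    print(same_concept("crm.customer_nm", "erp.client_name"))  # True
    print(same_concept("crm.customer_nm", "erp.invoice_amt"))  # False

Because the key, not the local name, carries the meaning, the same mechanism can back translation tables for international language support.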

To view David and Ron’s presentation, please watch the broadcast session here: http://new.livestream.com/opengroup/potter-NPB13

Big Data – Panel Discussion

In a panel moderated by Dana Gardner of Interarbor Solutions, panelists Robert Weisman (Build The Vision), Andras Szakal (IBM), Jim Hietala (The Open Group) and Chris Gerty (NASA) discussed the implications of Big Data and what it means for business architects and enterprise architects.

Big Data is not about the size of the data but about analyzing it. Robert mentioned that most organizations store more data than they need or use; from an enterprise architect’s perspective, it’s important to focus on the analysis of the data and to provide information that will ultimately aid the business in some way. When it comes to security, Jim explained that newer Big Data platforms are not built with security in mind. While data is data, many security controls don’t translate to new platforms or scale with the influx of data.

Cloud Computing is Big Data-ready, and price can be compelling, but there are significant security and privacy risks. Robert brought up the argument over public and private Cloud adoption, and said, “It’s not one size fits all.” But can Cloud and Big Data come together? Andras explained that Cloud is not the almighty answer to Big Data. Every organization needs to find the Enterprise Architecture that fits its needs.

The fruits of Big Data can be useful to more than just business intelligence professionals. With the trend of mobility and application development in mind, Chris suggested that developers keep users in mind. Big Data can be used to tell us many different things, but it’s about finding out what is most important and relevant to users in a way that is digestible.

Finally, the panel discussed how Big Data is bringing about big changes in almost every aspect of an organization. It is important not to generalize, but to customize: every enterprise needs its own architecture to fit its needs. Each organization finds importance in different facets of the data gathered, and security is different at every organization. With all that in mind, the panel agreed that focusing on the analytics is the key.

To view the panel discussion, please watch the broadcast session here: http://new.livestream.com/opengroup/events/1838807

Comments Off

Filed under Conference

Capturing The Open Group Conference in Newport Beach

By The Open Group Conference Team

It is time to announce the winners of the Newport Beach Photo Contest! For those of you who were unable to attend, conference attendees submitted some of their best photos for a chance to win one free pass to one of The Open Group’s global conferences over the next year – a prize valued at more than $1,000/€900.

Southern California is known for its palm trees and warm sandy beaches. While Newport Beach is most recognized for its high-end real estate and its association with the popular television show “The OC,” enterprise architects invaded the beach and boating town for The Open Group Conference.

The contest ended Friday at noon PDT, and it is time to announce the winners…

Best of The Open Group Conference in Newport Beach – For any photo taken during conference activities

The winner is Henry Franken, BiZZdesign!

A busy BiZZdesign exhibitor booth

The Real OC Award – For best photo taken in or around Newport Beach

The winner is Andrew Josey, The Open Group!

A local harbor in Newport Beach, Calif.

Thank you to all those who participated in this contest – whether it was submitting one of your own photos or voting for your favorites. Please visit The Open Group’s Facebook page to view all of the submissions and conference photos.

We’re always trying to improve our programs, so if you have any feedback regarding the photo contest, please email photo@opengroup.org or leave a comment below. We’ll see you in Sydney!

Comments Off

Filed under Conference

The Open Group Conference Plenary Speaker Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

By Dana Gardner, Interarbor Solutions

Listen to the recorded podcast here: The Open Group Keynoter Sees Big-Data Analytics as a Way to Bolster Quality, Manufacturing and Business Processes

This is a transcript of a sponsored podcast discussion on Big Data analytics and its role in business processes, in conjunction with The Open Group Conference in Newport Beach.

Dana Gardner: Hello, and welcome to a special thought leadership interview series coming to you in conjunction with The Open Group® Conference on January 28 in Newport Beach, California.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host and moderator throughout these business transformation discussions. The conference will focus on big data and the transformation we need to embrace today.

We are here now with one of the main speakers at the conference: Michael Cavaretta, PhD, Technical Leader of Predictive Analytics for Ford Research and Advanced Engineering in Dearborn, Michigan.

We’ll see how Ford has exploited the strengths of big data analytics by directing them internally to improve business results. In doing so, they scour the metrics from the company’s best processes across myriad manufacturing efforts and through detailed outputs from in-use automobiles, all to improve and help transform their business.

Cavaretta has led multiple data-analytics projects at Ford to break down silos inside the company and best define Ford’s most fruitful datasets. Ford has successfully aggregated customer feedback and extracted internal data to predict how new features and technologies will best improve its cars.

As a lead-in to his Open Group presentation, Michael and I will now explore how big data is fostering business transformation by allowing deeper insights into more types of data efficiently, and thereby improving processes, quality control, and customer satisfaction.

With that, please join me in welcoming Michael Cavaretta. Welcome to BriefingsDirect, Michael.

Michael Cavaretta: Thank you very much.

Gardner: Your upcoming presentation for The Open Group Conference is going to describe some of these new approaches to big data and how they offer valuable insights into internal operations, and therefore a better product. To start, what’s different now in being able to get at this data and do this type of analysis, compared with, say, five years ago?

Cavaretta: The biggest difference has to do with the cheap availability of storage and processing power, where a few years ago people were very much concentrated on filtering down the datasets that were being stored for long-term analysis. There has been a big sea change with the idea that we should just store as much as we can and take advantage of that storage to improve business processes.

Gardner: That sounds right on the money, but how did we get here? How did we get to the point where we could start using these benefits from a technology perspective (better storage, networks, being able to move big datasets, that sort of thing) to wring out business benefits? What’s the process behind the benefit?

Cavaretta: The process behind the benefits has to do with a sea change in the attitude of organizations, particularly IT within large enterprises. There’s this idea that you don’t need to spend so much time figuring out what data you want to store and worrying about the costs associated with it, and can instead think more about data as an asset. There is value in being able to store it, and in being able to go back and extract different insights from it. This comes from really cheap storage, access to parallel-processing machines, and great software.

Gardner: It seems to me that for a long time, the mindset was that data is simply the output from applications, with applications being primary and the data being almost an afterthought. It seems like we’ve sort of flipped that. The data now is perhaps as important, or even more important, than the applications. Does that seem to hold true?

Cavaretta: Most definitely, and we’ve had a number of interesting engagements where people have thought about the data that’s being collected. When we talk to them about big data, storing everything at the lowest level of transactions, and what could be done with that, their eyes light up and they really begin to get it.

Gardner: I suppose earlier, when cost considerations and technical limitations were at work, we would just go for a tip-of-the-iceberg level. Now, as you say, we can get almost all the data. So is this a matter of getting at more data, different types of data, bringing in unstructured data, all of the above? How much are you really going after here?

Cavaretta: I like to talk to people about the possibilities that big data provides, and I always tell them that I have yet to encounter a circumstance where somebody has given me too much data. You can pull in all this information and then answer a variety of questions, because you don’t have to worry that something has been thrown out. You have everything.

You may have 100 questions, and each one of the questions uses a very small portion of the data. Those questions may use different portions of the data, a very small piece, but they’re all different. If you go in thinking, “We’re going to answer the top 20 questions and we’re just going to hold data for that,” that leaves so much on the table, and you don’t get any value out of it.

Gardner: I suppose too that we can think about small samples or small datasets and aggregate them or join them. We have new software capabilities to do that efficiently, so we’re able not just to look for big, honking, original datasets, but to aggregate, correlate, and look at a lifecycle level of data. Is that fair as well?

Cavaretta: Definitely. We’re big believers in mash-ups, and we really believe there is a lot of value in being able to take even datasets that are not specifically big-data sized yet and, rather than going deep for more detailed information, expanding the breadth. It’s being able to augment them with other internal datasets, bridging across different business areas, as well as augmenting them with external datasets.

A lot of times you can take something that is maybe a few hundred thousand records or a few million records, and by the time you’re joining it and appending different pieces of information onto it, you can get to big-dataset sizes.
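A minimal sketch of that kind of breadth-expanding mash-up, using pandas; the datasets, keys, and column names here are hypothetical, not Ford’s actual data:

    # Illustrative only: each join adds attributes (breadth) rather than
    # rows (depth), so a modest dataset grows toward big-data sizes.
    import pandas as pd

    vehicles = pd.DataFrame({"vin": ["V1", "V2"], "model": ["sedan", "suv"]})
    surveys  = pd.DataFrame({"vin": ["V1", "V2"], "satisfaction": [4, 5]})
    weather  = pd.DataFrame({"vin": ["V1", "V2"], "avg_temp_c": [21.0, 3.5]})

    # Append survey results (internal) and weather (external) onto vehicles.
    mashup = vehicles.merge(surveys, on="vin").merge(weather, on="vin")
    print(mashup)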

Gardner: Just to be clear, you’re unique. The conventional wisdom for big data is to look at what your customers are doing, or just the external data. You’re really looking primarily at internal data, while also availing yourself of what external data might be appropriate. Maybe you could describe a little bit about your organization, what you do, and why this internal focus is so important for you.

Cavaretta: I’m part of a larger department that is housed over in the research and advanced-engineering area at Ford Motor Company, and we’re about 30 people. We work as internal consultants, kind of like Capgemini or Ernst & Young, but only within Ford Motor Company. We’re responsible for going out and looking for different opportunities from the business perspective to bring advanced technologies. So, we’ve been focused on the area of statistical modeling and machine learning for I’d say about 15 years or so.

And in this time, we’ve had a number of engagements where we’ve talked with different business customers, and people have said, “We’d really like to do this.” Then we’d look at the datasets they have and say, “Wouldn’t it be great if we had had this? Now we have to wait six months or a year.”

These new technologies are really changing the game from that perspective. We can turn on the complete fire hose and not have to worry about that anymore. Everything is coming in; we can record it all. We don’t have to worry about whether the data supports a given analysis, because it’s all there. That’s really a big benefit of big-data technologies.

Gardner: If you’ve been doing this for 15 years, you must be demonstrating a return on investment (ROI) or a value proposition back to Ford. Has that value proposition been changing? Do you expect it to change? What might be your real value proposition two or three years from now?

Cavaretta: The real value proposition definitely is changing, as things are being pushed down in the company to lower-level analysts who are really interested in looking at things from a data-driven perspective. From when I first came in until now, the biggest change came when Alan Mulally arrived at the company and really pushed the idea of data-driven decisions.

Before, we were getting a lot of interest from people who were very focused on the data they had internally. After that, they had a lot of questions from their management and from upper-level directors and vice presidents saying, “We’ve got all these data assets. We should be getting more out of them.” This strategic perspective has really changed a lot of what we’ve done in the last few years.

Gardner: As I listen to you, Michael, it occurs to me that you are applying this data-driven mentality more deeply. As you pointed out earlier, you’re also going after all the data, all the information, whether that’s internal or external.

In the case of an automobile company, you’re looking at the factory, the dealers, what drivers are doing, and what the devices within the automobile are telling you, factoring that back into design relatively quickly, and then repeating this process. Are we getting to the point where this Holy Grail notion of a total feedback loop across the lifecycle of a major product like an automobile is really within our grasp? Are we getting there, or is this still kind of theoretical? Can we pull it all together and make it a science?

Cavaretta: The theory is there. The question has more to do with the actual implementation and the practicality of it. We’re still talking about a lot of data, where even with new advanced technologies and techniques, that’s a lot of data to store, a lot of data to analyze, and a lot of data to make sure we can mash up appropriately.

And while I think the potential is there and the theory is there, there is also work in being able to get the data from multiple sources. Everything you can get back from the vehicle is fantastic. Now you marry that up with internal data. Is it survey data, manufacturing data, quality data? What do you want to go after first? We can’t do everything all at the same time.

Our perspective has been to make sure we identify the highest-value, greatest-ROI areas, and then begin to take some of the major datasets we have, push them for more detail, mash them up appropriately, and really prove out the value of the technologies.

Gardner: Clearly, there’s a lot more to come in terms of where we can take this, but I suppose it’s useful to have a historic perspective and context as well. I was thinking about some of the early quality gurus like Deming and some of the movement toward quality like Six Sigma. Does this fall within that same lineage? Are we talking about a continuum here over the last 50 or 60 years, or is this something different?

Cavaretta: That’s a really interesting question. From the perspective of analyzing data, using data appropriately, I think there is a really good long history, and Ford has been a big follower of Deming and Six Sigma for a number of years now.

The difference, though, is this idea that you don’t have to worry so much upfront about getting the data. If you’re doing this right, you have the data right there, and this has some great advantages: you don’t have to wait until you’ve built up enough history to look for certain patterns. Then again, it also has a disadvantage, which is that you’ve got so much data that it’s easy to find things that could be spurious correlations or models that don’t make any sense.

The piece that is required is good domain knowledge, in particular when you are talking about making changes in the manufacturing plant. It’s very appropriate to look at things and be able to talk with people who have 20 years of experience to say, “This is what we found in the data. Does this match what your intuition is?” Then, take that extra step.

Gardner: Tell me a little about a day in the life of your organization and your team, to let us know what you do. How do you go about making more data available and then reaching some of these higher-level benefits?

Cavaretta: We’re very much focused on interacting with the business. Most of what we do involves working on pilot projects with our business customers to bring advanced analytics and big-data technologies to bear against their problems. We work in what we call a push-and-pull model.

We go out and investigate technologies and say these are technologies that Ford should be interested in. Then, we look internally for business customers who would be interested in that. So, we’re kind of pushing the technologies.

From the pull perspective, we’ve had so many successful engagements, and such good contacts and credibility within the organization, that we’ve had people come to us and say, “We’ve got a problem. We know this is in your domain. Give us some help. We’d love to hear your opinions on this.”

So we pull from the business side, and then our job is to match up those two pieces. It’s best when we’re looking at a particular technology and somebody comes to us, and we say, “Oh, this is a perfect match.”

Those types of opportunities have been increasing in the last few years, and we’ve been very happy with the number of internal customers that have really been very excited about the areas of big data.

Gardner: Because this is The Open Group conference, with an audience that’s familiar with the IT side of things, I’m curious how this relates to software and software development. There are so many more millions of lines of code in automobiles these days, with software becoming more important than just about everything else. Are you applying a lot of what you’re doing to the software side of the house, or are the agile methods, feedback loops, and performance-management issues a separate domain? Is there crossover here?

Cavaretta: There’s some crossover. The biggest area we’ve been focused on has been picking up information, whether from internal business processes or from the vehicle, and being able to bring it back in to derive value. We have very good contacts in the Ford IT group, and they have been fantastic to work with in bringing interesting tools and technology to bear, and then looking at moving those into production and the best way to do that.

A fantastic development has been this idea that we’re using some of the more agile techniques in this space and Ford IT has been pushing this for a while. It’s been fantastic to see them work with us and be able to bring these techniques into this new domain. So we’re pushing the envelope from two different directions.

Gardner: It sounds like you will be meeting up at some point, given the complementary nature of your activities.

Cavaretta: Definitely.

Gardner: Let’s move on to this notion of the “Internet of things,” a very interesting concept that a lot of people talk about. It seems relevant to what we’ve been discussing. We have sensors in these cars, wireless transfer of data, and more and more opportunity for location information to be brought to bear (where cars are, how they’re driven, speed information, all sorts of metrics), maybe making those available through cloud providers that assimilate this data.

So let’s not go too deep, because this is a multi-hour discussion all on its own, but how is this notion of the Internet of things being brought to bear on your gathering of big data and applying it to the analytics in your organization?

Cavaretta: It is a huge area, not only from the internal-process perspective, with RFID tags within the manufacturing plants and out on the plant floor, but also in all of the information that’s being generated by the vehicle itself.

The Ford Energi generates about 25 gigabytes of data per hour, so you can imagine selling a couple of million vehicles in the near future with that amount of data being generated. There are huge opportunities within that, and there are also some interesting opportunities in opening up some of these systems to third-party developers. OpenXC is an initiative we have going on at Research and Advanced Engineering.

We have a lot of data coming from the vehicle. There’s a huge number of sensors and processors being added to vehicles. There’s data being generated there, as well as communication between the vehicle and your cell phone, and communication between vehicles.

There’s a group in Ann Arbor, Michigan, the University of Michigan Transportation Research Institute (UMTRI), that’s investigating that, as well as communication between the vehicle and, say, a home system: letting the home know that you’re on your way, so it’s time to raise the temperature if it’s winter outside, or to cool the house in the summertime. The amount of data being generated there is invaluable and could be used for a lot of benefits, both from the corporate perspective and for the environment at large.

Gardner: Just to put a stake in the ground on this, how much data do cars typically generate? Do you have a sense of what the average is now?

Cavaretta: The Energi, according to the latest information that I have, generates about 25 gigabytes per hour. Different vehicles are going to generate different amounts, depending on the number of sensors and processors on the vehicle. But the biggest key has to do with not necessarily where we are right now but where we will be in the near future.

With the amount of information being generated by the vehicles, a lot of it is just internal stuff. The question is how much information should be sent back for analysis and for finding different patterns. That becomes really interesting as you look at external sensors: temperature, humidity. You can know when the windshield wipers go on, and then take that information and mash it up with other external data sources too. It’s a very interesting domain.

Gardner: So clearly, it’s multiple gigabytes per hour per vehicle and probably going much higher.

Cavaretta: Easily.
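For a rough sense of scale, a back-of-the-envelope sketch: the only figure below taken from the interview is the 25 GB per hour; the fleet size and daily drive time are assumptions for illustration.

    # Back-of-the-envelope scale check. Only the 25 GB/hour figure comes
    # from the interview; fleet size and drive time are assumptions.
    gb_per_hour = 25
    fleet_size = 2_000_000      # assumed vehicles sold
    hours_per_day = 1           # assumed average daily drive time

    daily_gb = gb_per_hour * fleet_size * hours_per_day
    print(f"~{daily_gb / 1e6:.0f} petabytes per day")  # ~50 PB/day

Even if only a small fraction of that is sent back for analysis, the filtering question Cavaretta raises becomes the central design decision.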

Gardner: Let’s move forward now for those folks who have been listening and are interested in bringing this to bear on their organizations and their vertical industries, from the perspective of skills, mindset, and culture. Are there standards, certification, or professional organizations that you’re working with in order to find the right people?

It’s a big question. Let’s look at what skills you target for your group, and in what ways you think you can improve on that. Then we’ll get into some of those larger issues about culture and mindset.

Cavaretta: The skills that we have in our department, in particular on our team, are in the area of computer science, statistics, and some good old-fashioned engineering domain knowledge. We’ve really gone about this from a training perspective. Aside from a few key hires, it’s really been an internally developed group.

The biggest advantage we have is that we can go out and be very targeted with the training we do. There are such good tools out there, especially in the open-source realm, that we can spin things up with relatively low cost and low risk and do a number of experiments in the area. That’s really the way we push the technologies forward.

Gardner: Why The Open Group? Why is that a good forum for your message, and for your research here?

Cavaretta: The biggest reason is the focus on the enterprise. There are a lot of advantages and a lot of business cases in looking at large enterprises, where there are a lot of systems and where a company can take a relatively small improvement and have it make a large difference on the bottom line.

Talking with The Open Group really gives me an opportunity to bring people on board with the idea that you should be looking at a difference in mindset. It’s not “Here’s a way that data is being generated; try to conceive of some questions we can use it for, and we’ll store that too.” It’s “Let’s just take everything, worry about it later, and then find the value.”

Gardner: I’m sure the viewers of your presentation on January 28 will be gathering a lot of great insights. A lot of the people that attend The Open Group conferences are enterprise architects. What do you think those enterprise architects should be taking away from this? Is there something about their mindset that should shift in recognizing the potential that you’ve been demonstrating?

Cavaretta: It’s important for them to be thinking about data as an asset rather than as a cost. You may even have to spend some money, and it may feel a little unsafe without really solid ROI at the beginning. Then, move toward pulling that information in and storing it in a way that allows not just the high-level data scientists to get access and provide value, but anyone who is interested in the data overall. Those are very important pieces.

The last one is how you take a big-data project, where you’re not storing data in the traditional business intelligence (BI) framework that an enterprise develops, and then connect it to the BI systems and look at providing value through those mash-ups. Those are really important areas that still need some work.

Gardner: Another big constituency within The Open Group community is the business architects. Is there something about mindset and culture, getting back to that topic, that those business-level architects should consider? Do you really need to change the way you think about planning and resource allocation in a business setting, based on the fruits of the things you’re doing with big data?

Cavaretta: I really think so. The digital asset that you have can be monetized to change the way the business works, and that could be done by creating new assets that then can be sold to customers, as well as improving the efficiencies of the business.

The idea that everything is going to be very well-defined, with a lot of work put into making sure the data has high quality, needs to change somewhat, I think. As you’re pulling the data in and thinking about long-term storage, it’s more about access to the information than about the problem of just storing it.

Gardner: Interesting that you brought up that notion that the data becomes a product itself and even a profit center perhaps.

Cavaretta: Exactly. There are many companies, especially large enterprises, that are looking at their data assets and wondering what they can do to monetize them, not only to pay for efficiency improvements but as a new revenue stream.

Gardner: We’re almost out of time. For those organizations that want to get started on this, are there any 20/20-hindsight or Monday-morning-quarterback insights you can provide? How do you get started? Do you appoint a leader? Do you need a strategic roadmap, a culture or mindset shift, pilot programs? How would you recommend people begin the process of getting into this?

Cavaretta: We’re definitely huge believers in pilot projects and proofs of concept, and we like to develop roadmaps by doing. So get out there. Understand that it’s going to be messy. Understand that it may be a little more costly and that the ROI isn’t going to be there at the beginning.

But get your feet wet. Start doing some experiments, and then, as those experiments turn from just experimentation into really providing real business value, that’s the time to start looking at a more formal aspect and more formal IT processes. But you’ve just got to get going at this point.

Gardner: I would think that the competitive forces are out there. If you are in a competitive industry, and those that you compete against are doing this and you are not, that could spell some trouble.

Cavaretta:  Definitely.

Gardner: We’ve been talking with Michael Cavaretta, PhD, Technical Leader of Predictive Analytics at Ford Research and Advanced Engineering in Dearborn, Michigan. Michael and I have been exploring how big data is fostering business transformation by allowing deeper insights into more types of data, all very efficiently. This is improving processes, quality control, and customer satisfaction.

Our conversation today comes as a lead-in to Michael’s upcoming plenary presentation. He is going to be talking on January 28 in Newport Beach, California, as part of The Open Group conference.

You will hear more from Michael and other global leaders on big data who will be gathering to talk about business transformation at this conference. So a big thank-you to Michael for joining us in this fascinating discussion. I really enjoyed it, and I look forward to your presentation on the 28th.

Cavaretta: Thank you very much.

Gardner: And I would encourage our listeners and readers to attend the conference or follow more of the threads in social media from the event. Again, it’s going to be happening from January 27 to January 30 in Newport Beach, California.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator through the thought leadership interviews. Thanks again for listening, and come back next time.

1 Comment

Filed under Conference, Uncategorized

Leveraging Social Media at The Open Group Newport Beach Conference (#ogNB)

By The Open Group Conference Team

By attending conferences hosted by The Open Group®, attendees are able to learn from industry experts, understand the latest technologies and standards and discuss and debate current industry trends. One way to maximize the benefits is to make technology work for you. If you are attending The Open Group Conference in Newport Beach next week, we’ve put together a few tips on how to leverage social media to make networking at the conference easier, quicker and more effective.

Using Twitter at #ogNB

Twitter is a real-time news-sharing tool that anyone can use. The official hashtag for the conference is #ogNB. This enables anybody, whether they are physically attending the event or not, to follow what’s happening at the Newport Beach conference in real-time and interact with each other.

Before the conference, be sure to update your Twitter account to monitor #ogNB and, of course, to tweet about the conference.

Using Facebook at The Open Group Conference in Newport Beach

You can also track what is happening at the conference on The Open Group Facebook page. We will be running another photo contest, where all of the entries will be uploaded to our page. Members and Open Group Facebook fans can vote by “liking” a photo. The photo with the most “likes” in each category will be named the winner. Submissions will be uploaded in real time, so the sooner you submit a photo, the more time members and fans will have to vote for it!

For full details of the contest and how to enter see The Open Group blog at: http://blog.opengroup.org/2013/01/22/the-open-group-photo-contest-document-the-magic-at-the-newport-beach-conference/

LinkedIn during The Open Group Conference in Newport Beach

Inspired by one of the sessions? Interested in what your peers have to say? Start a discussion on The Open Group LinkedIn Group page. We’ll also be sharing interesting topics and questions related to The Open Group Conference as it is happening. If you’re not a member already, requesting membership is easy. Simply go to the group page and click the “Join Group” button. We’ll accept your request as soon as we can!

Blogging during The Open Group Conference in Newport Beach

Stay tuned for daily conference recaps here on The Open Group blog. In case you missed a session or you weren’t able to make it to Newport Beach, we’ll be posting the highlights and recaps on the blog. If you are attending the conference and would like to submit a recap of your own, please contact opengroup (at) bateman-group.com.

If you have any questions about social media usage at the conference, feel free to tweet the conference team @theopengroup.

Comments Off

Filed under Uncategorized

Improving Signal-to-Noise in Risk Management

By Jack Jones, CXOWARE

One of the most important responsibilities of the information security professional (or any IT professional, for that matter) is to help management make well-informed decisions. Unfortunately, this has been an elusive objective when it comes to risk. Although we’re great at identifying control deficiencies, and we can talk all day long about the various threats we face, we have historically had a poor track record when it comes to risk. There are a number of reasons for this, but in this article I’ll focus on just one: definition.

You’ve probably heard the old adage, “You can’t manage what you can’t measure.”  Well, I’d add to that by saying, “You can’t measure what you haven’t defined.” The unfortunate fact is that the information security profession has been inconsistent in how it defines and uses the term “risk.” Ask a number of professionals to define the term, and you will get a variety of definitions.

Besides inconsistency, another problem regarding the term “risk” is that many of the common definitions don’t fit the information security problem space or simply aren’t practical. For example, the ISO27000 standard defines risk as, “the effect of uncertainty on objectives.” What does that mean? Fortunately (or perhaps unfortunately), I must not be the only one with that reaction because the ISO standard goes on to define “effect,” “uncertainty,” and “objectives,” as follows:

  • Effect: A deviation from the expected — positive and/or negative
  • Uncertainty: The state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence or likelihood
  • Objectives: Can have different aspects (such as financial, health and safety, information security, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process)

NOTE: Their definition of “objectives” doesn’t appear to be a definition at all, but rather an example.

Although I understand, conceptually, the point this definition is getting at, my first concern is practical in nature. As a Chief Information Security Officer (CISO), I invariably have more to do than I have resources to apply. Therefore, I must prioritize; prioritization requires comparison, and comparison requires measurement. It isn’t clear to me how “uncertainty regarding deviation from the expected (positive and/or negative) that might affect my organization’s objectives” can be applied to measure, and thus compare and prioritize, the issues I’m responsible for dealing with.

This is just an example though, and I don’t mean to pick on ISO because much of their work is stellar. I could have chosen any of several definitions in our industry and expressed varied concerns.

In my experience, information security is about managing how often loss takes place, and how much loss will be realized when/if it occurs. That is our profession’s value proposition, and it’s what management cares about. Consequently, whatever definition we use needs to align with this purpose.

The Open Group’s Risk Taxonomy (shown below), based on Factor Analysis of Information Risk (FAIR), helps to solve this problem by providing a clear and practical definition for risk. In this taxonomy, Risk is defined as, “the probable frequency and probable magnitude of future loss.”

(Figure: The Open Group Risk Taxonomy)

The elements below risk in the taxonomy form a Bayesian network that models risk factors and acts as a framework for critically evaluating risk. This framework has been evolving for more than a decade now and is helping information security professionals across many industries understand, measure, communicate and manage risk more effectively.
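To see why this definition is measurable, consider a minimal sketch of risk as “probable frequency and probable magnitude of future loss.” The distributions and parameters below are invented for illustration; this is a generic Monte Carlo sketch, not a reference implementation of the taxonomy.

    # Illustrative only: risk as probable frequency x probable magnitude
    # of loss. Distributions and parameters are invented for illustration.
    import random

    def annual_loss(freq_per_year, loss_mu, loss_sigma, runs=10_000):
        """Simulate many years; return mean and 95th-percentile total loss."""
        totals = []
        for _ in range(runs):
            t, total = 0.0, 0.0
            while True:
                t += random.expovariate(freq_per_year)  # time to next event
                if t > 1.0:                             # past one year
                    break
                total += random.lognormvariate(loss_mu, loss_sigma)
            totals.append(total)
        totals.sort()
        return sum(totals) / runs, totals[int(0.95 * runs)]

    mean, p95 = annual_loss(freq_per_year=0.5, loss_mu=11.0, loss_sigma=1.2)
    print(f"expected annual loss ~${mean:,.0f}; 95th percentile ~${p95:,.0f}")

Because frequency and magnitude are explicit, two issues can be compared and prioritized on the same dollar scale, which is exactly what the ISO wording above makes difficult.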

In the communications context, you have to have a very clear understanding of what constitutes signal before you can effectively and reliably separate it from the noise. The Open Group’s Risk Taxonomy gives us an important foundation for achieving a much clearer signal.

I will be discussing this topic in more detail next week at The Open Group Conference in Newport Beach. For more information on my session or the conference, visit: http://www.opengroup.org/newportbeach2013.

Jack Jones has been employed in technology for the past twenty-nine years, and has specialized in information security and risk management for twenty-two years. During this time, he’s worked in the United States military, government intelligence, consulting, and the financial and insurance industries. Jack has over nine years of experience as a CISO, with five of those years at a Fortune 100 financial services company. His work there was recognized in 2006, when he received the ISSA Excellence in the Field of Security Practices award at that year’s RSA conference. In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012 he was honored with the CSO Compass award for leadership in risk management. He is also the author and creator of the Factor Analysis of Information Risk (FAIR) framework.

1 Comment

Filed under Cybersecurity