Tag Archives: Internet of Things

Managing Your Vulnerabilities: A Q&A with Jack Daniel

By The Open Group

With hacks and security breaches becoming more prevalent every day, it’s incumbent on organizations to determine the areas where their systems may be vulnerable and take action to better handle those vulnerabilities. Jack Daniel, a strategist with Tenable Network Security who has been active in securing networks and systems for more than 20 years, says that if companies start implementing vulnerability management on an incremental basis and use automation to help them, they can hopefully reach a point where they’re not constantly handling vulnerability crises.

Daniel will be speaking at The Open Group Baltimore event on July 20, presenting on “The Evolution of Vulnerability Management.” In advance of that event, we recently spoke to Daniel to get his perspective on hacker motivations, the state of vulnerability management in organizations today, the human problems that underlie security issues and why automation is key to better handling vulnerabilities.

How do you define vulnerability management?

Vulnerability detection is where this started. News would break years ago of some vulnerability, some weakness in a system—a fault in the configuration or software bug that allows bad things to happen. We used to do a hit-or-miss job of it; it didn’t have to be rushed at all. Depending on where you were or what you were doing, you might not be targeted—it would take months after something was released before bad people would start doing things with it. As criminals discovered there was money to be made in exploiting vulnerabilities, attackers became motivated by more than just notoriety. The early hacker scene that was disruptive or did criminal things was largely motivated by notoriety. As people realized they could make money, it became a problem, and that’s when we turned to management.

You have to manage finding vulnerabilities, detecting vulnerabilities and resolving them, which usually means patching but not always. There are a lot of ways to resolve or mitigate without actually patching, but the management aspect is discovering all the weaknesses in your environment—and that’s a really broad brush, depending on what you’re worried about. That could be that you’re not compliant with PCI if you’re taking credit cards, or it could be that bad guys can steal your database full of credit card numbers or intellectual property.

It’s finding all the weaknesses in your environment, the vulnerabilities, tracking them, resolving them and then continuing to track as new ones appear to make sure old ones don’t reappear. Or if they do reappear, what in your corporate process is allowing bad things to happen over and over again? It’s continuously doing this.

The pace of bad things has accelerated, the motivations of the actors have forked in a couple of directions, and doing a good job of vulnerability management really requires gathering data of different qualities, being able to make assessments about it and then applying what you know to the most effective use of your resources, whether it’s time or money or employees, to fix what you can.

What are the primary motivations you’re seeing with hacks today?

There are a whole bunch of them, but they fall into a couple of big buckets. One common one is financial—these are the people who are stealing credit cards, stealing credentials so they can do bank wire fraud, or finding some other way to get at money. There are a variety of financial motivators.

There are also some others, depending on who you are. There’s the so-called ‘hacktivist,’ which used to be a thing in the early days of hacking but has now become more widespread. These are folks like the Syrian Electronic Army, or the various Turkish groups that through the years have done website defacements. These people are not trying to steal money; they’re trying to embarrass you, they’re trying to promote a message. It may be, as with the Syrian Electronic Army, that they’re trying to support the ruler of whatever’s left of Syria. So there are political motivations. Anonymous—or people calling themselves ‘Anonymous,’ which is a whole other conversation—did a lot of destructive things under the banner of hacktivism, striking out at corporations they thought were unjust or unfair, or acting politically.

Intellectual property theft would be the third big one, I think. Generally the finger is pointed at China, but it’s unfair to say they’re the only ones stealing trade secrets. People within your own country or your own market or region are stealing trade secrets continuously, too.

Those are the three big ones—money, hacktivism and intellectual property theft. It trickles down. One of the things that has come up more often over the past few years is that people get attacked because of who they’re connected to. It’s a smaller portion of it and one that’s overlooked, but it’s a message that people need to hear. For example, in the Target breach, it is claimed that the initial entry point was through the heating and air-conditioning vendor’s computer systems and their access to the HVAC systems inside a Target facility, and, from there, the attackers were able to get through. There are other stories of organizations that have been targeted because of who they do business with. That’s usually a case of trying to attack somebody that’s well-secured where there’s not an easy way in, so you find out who does their heating and air-conditioning or who manages their remote data centers, and you attack those people and then come in.

How is vulnerability management different from risk management?

It’s a subset of risk management. Risk management, when done well, gives a scope of a very large picture and helps you drill down into the details, but it has to factor in things above and beyond the more technical details of what we typically think of as vulnerability management. Certainly they work together—you have to find what’s vulnerable and then make assessments as to how you’re going to address your vulnerabilities, and that ideally should be done in a risk-based manner. Because as much as the Verizon Data Breach Report and others say you have to fix everything, the reality is that not only can we not fix everything, we can’t fix a lot of it immediately, so you really have to prioritize. You have to have information to prioritize things, and that’s a challenge for many organizations.

Your session at The Open Group Baltimore event is on the evolution of vulnerability management—where does vulnerability management stand today and where does it need to go?

One of my opening slides sums it up—it used to be easy, and it’s not anymore. Like a lot of other things in security, it’s sort of a buzz phrase that has never really taken off the way it needs to at the enterprise level, as part of the operationalization of security. Security needs to be a component of running your organization and needs to be factored into a number of things.

The information security industry has a challenge and a history of being a department in the middle and being obstructionist, which I think is well deserved. But the real challenge is to cooperate more. We have to get a lot more information, which means working well with the rest of the organization, particularly networking and systems administrators, and having conversations with them about the data, the environment and what we discover as problems, without being the judgmental know-it-all security people. That is our stereotype. The adversaries are often far more cooperative than we are. In a lot of criminal forums, people will be fairly supportive of other people in their community—they’ll go up to the trade-secret level and stop—but if somebody’s not cutting into their profits, rumor is these people are cooperating and collaborating.

Within an organization, you need to work cross-organizationally. Information sharing is a very real piece of it. That’s not necessarily vulnerability management, but when you step into risk analysis and how you manage your environment, knowing what vulnerabilities you have is one thing; knowing which vulnerabilities people are actually going to do bad things with requires information sharing, and that’s an industry-wide challenge. It’s a challenge within our organizations, and outside it’s a real challenge across the enterprise, across industry, across government.

Why has that happened in the Security industry?

One is the stereotype—a lot of teams are very siloed, a lot of teams have their fiefdoms—that’s just human nature.

Another problem that everyone in security and technology faces is that we talk to all sorts of people, have all sorts of great conversations, learn amazing things and see amazing things, and a lot of it is under formal or informal NDA. If it weren’t for friend-of-a-friend contacts, there would be dramatically less information sharing. A lot of the sanitized information that comes out is too sanitized to be useful. The Verizon Data Breach Report pointed out that there are similarities in attacks but they don’t line up with industry verticals as you might expect them to, so we have that challenge.

Another serious challenge we have in security, especially in the research community, is that there’s total distrust of the government. The Snowden revelations have severely damaged the technology and security community’s faith in the government and willingness to cooperate with it. Further damaging that are the discussions about criminalizing many security tools—because the people in Congress don’t understand these things. We have a president who claims to be technologically savvy, and he is more than any before him, but he still doesn’t get it, and he’s got advisors who don’t get it. So we have a great distrust of the government, which has been earned, despite the fact that any one of us in the industry knows folks at various agencies—whether the FBI or intelligence agencies or the military—who are fantastic people: brilliant, hardworking, patriotic. But the entities themselves are political entities, and that causes a lot of distrust in information sharing.

And there are just a lot of people who want to keep information proprietary. This is not unique to security. There are a couple of different types of people in organizations—there are people who strive to make themselves irreplaceable. As a manager, you’ve got to get those people out of your environment because they’re just poisonous. There are other people who strive to make it so that they can walk away at any time and it will be a minor inconvenience for someone to pick up their notes and run. Those are the type of people you should hang onto for dear life because they share information, they build knowledge, they build relationships. That’s just human nature. In security I don’t think there are enough people who are about building those bridges, building those communication paths, sharing what they’ve learned and trying to advance the cause. I think there are still too many who hoard information as a tool or a weapon.

Security is fundamentally a human problem amplified by technology. If you don’t address the human factors in it, you can have technological controls, but it still has to be managed by people. Human nature is a big part of what we do.

You advocate for automation to help with vulnerability management. Can automation catch the threats when hackers are becoming increasingly sophisticated and use bots themselves? Will this become a war of bot vs. bot?

A couple of points about automation. Our adversaries are using automation against us. We need to use automation to fight them, and we need to use as much automation as we can rely on to improve our situation. But at some point, we need smart people working on hard problems, and that’s not unique to security at all. The more you automate, the more you have to look at whether your automation processes are actually improving things. If you’ve ever seen a big retailer or grocery store that has a person working full-time to manage the self-checkout line, that’s failed automation. That’s just one example. Or there’s a power or network outage at a hospital where everything is regulated, medications are regulated, and nobody can get their medications because the network’s down. Then you have patients suffering until somebody does something. They have manual systems to fall back on, and eventually some poor nurse has to spend an entire shift doing data entry because the systems failed so badly.

Automation doesn’t solve the problems—you have to automate the right things in the right ways, and the goal is to do the menial tasks in an automated fashion so you spend fewer human cycles on them. As a system or network administrator, you run into the same repetitive tasks over and over, so you write scripts to do them or buy a tool to automate them. The same applies here: you want to filter through as much of the data as you can, because one of the things that modern vulnerability management requires is a lot of data. It requires a ton of data, and it’s very easy to fall into an information-overload situation. Where the tools can help is by filtering it down and reducing the amount of stuff that gets put in front of people to make decisions about, and that’s challenging. It’s a balance that requires continuous tuning—you don’t want it to miss anything, so you want it to tell you everything that’s questionable, but it can’t throw too many things at you that aren’t actually problems or people give up and ignore the alerts. That was allegedly part of a couple of the major breaches last year. Alerts were triggered but nobody paid attention because they get tens of thousands of alerts a day as opposed to one big alert. One alert is hard to ignore—40,000 alerts and you just turn it off.
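
As a rough illustration of the kind of filtering Daniel describes, here is a minimal sketch (the scoring formula, field names and numbers are hypothetical, not drawn from any particular product) that ranks scanner findings by severity, asset criticality and known exploitability, and passes only a short, actionable list to a human:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str               # affected asset
    cvss: float             # severity score reported by the scanner
    asset_criticality: int  # 1 (low) to 5 (business-critical), from the asset inventory
    exploit_known: bool     # is a public exploit or active campaign known?

def risk_score(f: Finding) -> float:
    """Combine severity, asset value and threat intelligence into one number."""
    score = f.cvss * f.asset_criticality
    return score * 2 if f.exploit_known else score

def triage(findings: list[Finding], budget: int = 20) -> list[Finding]:
    """Return only the top findings a team can realistically act on today,
    instead of forwarding tens of thousands of raw alerts."""
    return sorted(findings, key=risk_score, reverse=True)[:budget]

if __name__ == "__main__":
    raw = [
        Finding("db01", 9.8, 5, True),
        Finding("kiosk7", 9.8, 1, False),
        Finding("hr-laptop", 4.3, 2, False),
    ]
    for f in triage(raw, budget=2):
        print(f.host, round(risk_score(f), 1))
```

The point is not this particular formula; as Daniel notes, any weighting needs continuous tuning. The point is that the machine does the sorting and a person only sees a list short enough to act on.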

What’s the state of automated solutions today?

It’s pretty good if you tune it, but it takes maintenance. There isn’t an Easy Button, to use the Staples tagline, and anyone promising one is probably not being honest with you. But if you understand your environment and tune the vulnerability management and patch management tools (and a lot of them are administrative tools), you can automate a lot of it and reduce the pain dramatically. It does require a couple of very hard first steps. The first step is knowing what’s in your environment, knowing what’s crucial in your environment and understanding what you have, because if you don’t know what you’ve got, you won’t be able to defend it well. It is pretty good, but it does take a fair amount of effort to get to where you can make the best of it. Some organizations are certainly there, and some are not.

What do organizations need to consider when putting together a vulnerability management system?

One word: visibility. They need to understand that they need to be able to see and know what’s in the environment—everything that’s in their environment—and get good information on those systems. There needs to be visibility into a lot of systems that you don’t always have good visibility into. That means your mobile workforce with their laptops; that means mobile devices that are on the network, which are probably there whether they belong or not; and that means understanding what’s on your network that’s not being actively managed, like Windows systems that might not be in Active Directory or Red Hat systems that aren’t being managed by Satellite or whatever systems you use to manage them.

Knowing everything that’s in the environment and its role in the system—that’s a starting point. Then understanding what’s critical in the environment and how to prioritize it. The first step is really understanding your own environment and having visibility into the entire network—and that can extend to Cloud services if you’re using a lot of Cloud services. One of the conversations I’ve been having lately, since the latest Akamai report, is about IPv6. Most Americans are ignoring it even at the corporate level, and a lot of folks think you can still ignore it because we’re still routing most of our traffic over the IPv4 protocol. But IPv6 is active on just about every network out there; it’s just whether or not we actively measure and monitor it. The Akamai report said something that a lot of folks have been saying for years: this is really a problem. Even though adoption is pretty low, what you see if you start monitoring for it is people communicating over IPv6, intentionally or unintentionally—often unintentionally, because everything’s enabled—so there’s often a whole swath of your network that people are ignoring. And you can’t have those huge blind spots in the environment, you just can’t. The vulnerability management program has to take into account that sort of overall view of the environment. Then once you’re there, you need a lot of help to solve the vulnerabilities, and that’s back to the human problem.
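
To make the visibility point concrete, here is a small, hypothetical sketch (the addresses and the inventory are invented for illustration) that compares discovered hosts against the list of managed assets and flags anything unmanaged, calling out the IPv6 hosts nobody may be monitoring:

```python
import ipaddress

# Hypothetical data: what a discovery scan found vs. what the asset inventory says is managed.
discovered = ["10.0.4.17", "2001:db8::1f", "10.0.4.99", "2001:db8::2a"]
managed = {"10.0.4.17", "2001:db8::1f"}

for addr in discovered:
    ip = ipaddress.ip_address(addr)
    family = "IPv6" if ip.version == 6 else "IPv4"
    if addr not in managed:
        # Unmanaged hosts, especially IPv6 ones nobody is watching, are the blind spots described above.
        print(f"UNMANAGED {family} host: {addr}")
```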

What should Enterprise Architects look for in an automated solution?

It really depends on the corporate need. They need to figure out whether or not the systems they’re looking at are going to find most or all of their network, discover all of the weaknesses and then help them prioritize those. For example, can your systems do vulnerability analysis on newly discovered systems with little or no input? Can you automate detection? Can you automate confirmation of findings somehow? Can you interact with other systems? There’s a piece, too—what does the rest of your environment look like? Are there ways into it? Does your vulnerability management system work with or understand all the things you’ve got? What if you have some unique network gear that your vulnerability management system can’t tell you the vulnerabilities in? There are German companies that like to use operating systems other than Windows and garden-variety Linux distributions. Does it work in your environment, will it give you good coverage in your environment and can it take a lot of the mundane out of it?

How can companies maintain Boundaryless Information Flow™–particularly in an era of the Internet of Things–but still manage their vulnerabilities?

The challenge is that a lot of people push back against high information flow because they can’t make sense of it; they can’t ingest the data, they can’t do anything with it. It’s the challenge of accepting and sharing a lot of information. It doesn’t matter whether it’s vulnerability management or log analysis or patch management or systems administration or backup or anything else—the challenge is that networks have systems that share a lot of data, but until you add context, it’s not really information. What we’re interested in for vulnerability management is different from what your automated backup system is interested in. The challenge is having systems that can share information outbound, share information inbound and then act rationally on only that which is relevant to them. That’s a real challenge, because information overload is a problem that people have been complaining about for years, and it’s accelerating at a stunning rate.

You say Internet of Things, and I get a little frustrated when people treat that as a monolith, because at one end an Internet-enabled microwave or stove has one set of challenges—they’re built on garbage commodity hardware with no maintenance ability at all. There are other things that people consider Internet of Things because they’re Internet-enabled, but they’re running Windows or a more mature Linux stack that has full management, and somebody’s managing them. So there’s a huge gap between the managed IoT and the unmanaged, and the unmanaged is just adding low-power machines to environments in ways that will amplify things like distributed denial of service (DDoS) attacks. As it is, a lot of consumers have home routers that are being used to attack other people and run DDoS attacks. A lot of the commercial stuff is being cleaned up, but a lot of the inexpensive home routers that people have are being misused, and if those are misconfigured or attacked with worms that change their settings, everything on the network can be made to participate in an attack.

The thing with the evolution of vulnerability management is that we’re trying to drive people to a continuous monitoring situation. That’s where the federal government has gone, that’s where a lot of industries are, and it’s a challenge to go from infrequent or even frequent big scans to watching things continuously. The key is to take incremental steps: instead of having a big, massive vulnerability project every quarter or every month, the goal is to get to where it’s part of the routine and you’re taking small remediation measures on a daily or regular basis. There will still be times when Microsoft or Oracle comes out with a big patch that requires a bigger tool-up, but you need to do this continuously and reach the point where you do small pieces of the task all the time rather than one big task. The goal is to get to where you’re blowing out birthday candles rather than putting out forest fires.
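
As a toy illustration of that incremental routine (the export file names and format are hypothetical), the sketch below diffs today’s scan results against yesterday’s, so the daily work is a small delta of new and resolved findings rather than a quarterly mountain:

```python
import json

def load_findings(path: str) -> set[str]:
    """Load a set of finding identifiers from a scanner export (hypothetical JSON format)."""
    with open(path) as f:
        return {item["id"] for item in json.load(f)}

# Hypothetical exports from two consecutive daily scans.
yesterday = load_findings("scan-2015-07-19.json")
today = load_findings("scan-2015-07-20.json")

print("New findings to triage today:", sorted(today - yesterday))
print("Resolved since yesterday:", sorted(yesterday - today))
print("Still open (watch for issues that keep reappearing):", len(today & yesterday))
```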

Jack Daniel, a strategist at Tenable Network Security, has over 20 years of experience in network and system administration and security, and has worked in a variety of practitioner and management positions. A technology community activist, he supports several information security and technology organizations. Jack is a co-founder of Security BSides, serves on the boards of three Security BSides non-profit corporations, and helps organize Security BSides events. Jack is a regular, featured speaker at ShmooCon, SOURCE Boston, DEF CON, RSA and other marquee conferences. Jack is a CISSP, holds the CCSK, and is a Microsoft MVP for Enterprise Security.

Join the conversation – @theopengroup #ogchat #ogBWI


Catching Up with The Open Group Internet of Things Work Group

By The Open Group

The Open Group’s Internet of Things (IoT) Work Group is involved in developing open standards that will allow product and equipment management to evolve beyond the traditional limits of product lifecycle management. Meant to incorporate the larger systems management that will be required by the IoT, these standards will help to handle the communications needs of a network that may encompass products, devices, people and multiple organizations. Formerly known as the Quantum Lifecycle Management (QLM) Work Group, its name was recently changed to the Internet of Things Work Group to more accurately reflect its current direction and focus.

We recently caught up with Work Group Chairman Kary Främling to discuss its two new standards, both of which are geared toward the Internet of Things, and what the group has been focused on lately.

Over the past few years, The Open Group’s Internet of Things Work Group (formerly the Quantum Lifecycle Management Work Group) has been working behind the scenes to develop new standards related to the nascent Internet of Things and how to manage the lifecycle of these connected products, or as General Electric has referred to it, the “Industrial Internet.”

What their work ultimately aims to do is help manage all the digital information within a particular system—for example, vehicles, buildings or machines. By creating standard frameworks for handling this information, these systems and their related applications can be better run and supported during the course of their “lifetime,” with the information collected serving a variety of purposes, from maintenance to improved design and manufacturing to recycling and even refurbishing them.

According to Work Group Chairman Kary Främling, CEO of ControlThings and Professor of Practice in Building Information Modeling at Aalto University in Finland, the group has been working with companies such as Caterpillar and Fiat, as well as refrigerator and machine tool manufacturers, to enable machines and equipment to send sensor and status data on how machines are being used and maintained to their manufacturers. Data can also be provided to machine operators so they are also aware of how the machines are functioning in order to make changes if need be.

For example, Främling says that one application of this system management loop is in HVAC systems within buildings. By building Internet capabilities into the system, a ventilation system—or air-handling unit—can be controlled via a smartphone from the moment it’s turned on inside a building. The system can provide data and alerts about how well it’s operating, and whether there are any problems, to facilities management or whoever needs the information. Främling also says that the system can provide information to both the maintenance company and the system manufacturer so they can collect data from the machines on performance, operations and other indicators. This allows users to determine things as simple as when an air filter may need changing or whether there are systematic problems with different machine models.

According to Främling, the ability to monitor systems in this way has already helped ventilation companies make adjustments to their products.

“What we noticed was there was a certain problem with certain models of fans in these machines. Based on all the sensor readings on the machine, I could deduce that the air extraction fan had broken down,” he said.

The ability to detect such problems via sensor data as they are happening can be extremely beneficial to manufacturers because they can more easily and more quickly make improvements to their systems. Another advantage afforded by machines with Web connectivity, Främling says, is that errors can also be corrected remotely.

“There’s so much software in these machines nowadays, so just by changing parameters you can make them work better in many ways,” he says.

In fact, Främling says that the Work Group has been working on systems such as these for a number of years already—well before the term “Internet of Things” became part of industry parlance. They first worked on a system for a connected refrigerator in 2007 and even worked on systems for monitoring how vehicles were used before then.

One of the other things the Work Group is focused on is working with the Open Platform 3.0 Forum, since there are many synergies between the two groups. For instance, the Work Group provided a number of the use cases for the Forum’s recent business scenarios.

“I really see what we are doing is enabling the use cases and these information systems,” Främling says.

Two New Standards

In October, the Work Group also published two new standards, which are among the first standards to be developed for the Internet of Things (IoT). A number of companies and universities across the world have been instrumental in developing the standards, including Aalto University in Finland, BIBA, Cambridge University, Infineon, InMedias, Politecnico di Milano, Promise Innovation, SAP and Trackway Ltd.

Främling likens these early IoT standards to what the HTML and HTTP protocols did for the Internet. For example, the Open Data Format (O-DF) Standard provides a common language for describing any kind of IoT object, much like HTML provided a language for the Web. The Open Messaging Interface (O-MI) Standard, on the other hand, describes a set of operations that enables users to read information about particular systems and then ask those systems for that information, much like HTTP. Write operations then allow users to also send information or new values to the system, for example, to update the system.
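To sketch how the two standards fit together (the element and attribute names below follow the general shape described here but should be read as illustrative, not as the normative schemas), an O-MI read request wraps an O-DF description of the object and the data items the caller wants values for:

```python
import xml.etree.ElementTree as ET

# Illustrative only: an O-MI "read" request carrying an O-DF payload.
envelope = ET.Element("omiEnvelope", version="1.0", ttl="10")
read = ET.SubElement(envelope, "read", msgformat="odf")
msg = ET.SubElement(read, "msg")

# O-DF payload: identify the object and the InfoItem whose value we want.
objects = ET.SubElement(msg, "Objects")
ahu = ET.SubElement(objects, "Object")
ET.SubElement(ahu, "id").text = "AirHandlingUnit-17"    # hypothetical device identifier
ET.SubElement(ahu, "InfoItem", name="ExtractFanSpeed")  # hypothetical data point

print(ET.tostring(envelope, encoding="unicode"))
```
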

Users can also subscribe to information contained in other systems. For instance, Främling described a scenario in which he was able to create a program that allowed him to ask his car what was wrong with it via a smartphone when the “check engine” light came on. He was then able to use a smartphone application to send an O-MI message to the maintenance company with the error code and his location. Using an O-MI subscription the maintenance company would be able to send a message back asking for additional information. “Send these five sensor values back to us for the next hour and you should send them every 10 seconds, every 5 seconds for the temperature, and so on,” Främling said. Once that data is collected, the service center can analyze what’s wrong with the vehicle.
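
A subscription like the one Främling describes could, under the same illustrative assumptions, be expressed by adding an interval and a callback address to the read request, so the responding system keeps pushing fresh values back on a schedule instead of answering once:

```python
import xml.etree.ElementTree as ET

# Illustrative only: a read request with an interval acts as a standing subscription.
envelope = ET.Element("omiEnvelope", version="1.0", ttl="3600")
read = ET.SubElement(
    envelope, "read",
    msgformat="odf",
    interval="10",                                  # deliver new values every 10 seconds
    callback="http://example.com/maintenance-app",  # hypothetical receiver of the updates
)
msg = ET.SubElement(read, "msg")
objects = ET.SubElement(msg, "Objects")
car = ET.SubElement(objects, "Object")
ET.SubElement(car, "id").text = "Vehicle-ABC123"    # hypothetical vehicle identifier
for item in ("EngineTemperature", "ErrorCode"):     # hypothetical sensor values to stream
    ET.SubElement(car, "InfoItem", name=item)

print(ET.tostring(envelope, encoding="unicode"))
```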

Främling says O-MI messages can easily be set up on-the-fly for a variety of connected systems with little programming. The standard also allows users to manage mobility and firewalls. O-MI communications are also run over systems that are already secure to help prevent security issues. Those systems can include anything from HTTP to USB sticks to SMTP, as well, Främling says.

Främling expects that these standards can also be applied to multiple types of functionalities across different industries, for example for connected systems in the healthcare industry or to help manage energy production and consumption across smart grids. With both standards now available, the Work Group is beginning to work on defining extensions for the Data Format so that vocabularies specific to certain industries, such as healthcare or manufacturing, can also be developed.

In addition, Främling expects that as protocols such as O-MI make it easier for machines to communicate amongst themselves, they will also be able to begin to optimize themselves over time. Cars, in fact, are already using this kind of capability, he says. But for other systems, such as buildings, that kind of communication is not happening yet. He says in Finland, his company has projects underway with manufacturers of diesel engines, cranes, elevators and even in Volkswagen factories to establish information flows between systems. Smart grids are also another potential use. In fact his home is wired to provide consumption rates in real-time to the electric company, although he says he does not believe they are currently doing anything with the data.

“In the past we used to speak about these applications for pizza or whatever that can tell a microwave oven how long it should be heated and the microwave oven also checks that the food hasn’t expired,” Främling said.

And while your microwave may not yet be able to determine whether your food has reached its expiration date, these recent developments by the Work Group are helping to bring the IoT vision to fruition by making it easier for systems to begin the process of “talking” to each other through a standardized messaging system.

Kary Främling is currently CEO of the Finnish company ControlThings, as well as Professor of Practice in Building Information Modeling (BIM) at Aalto University, Finland. His main research topics are information management practices and applications for BIM and product lifecycle management in general. His main areas of competence are distributed systems, middleware, multi-agent systems, autonomously learning agents, neural networks and decision support systems. He is one of the worldwide pioneers in the Internet of Things domain, where he has been active since 2000.

@theopengroup; #ogchat


The Emergence of the Third Platform

By Andras Szakal, Vice President and Chief Technology Officer, IBM U.S. Federal

By 2015 there will be more than 5.6 billion personal devices in use around the world. Personal mobile computing, business systems, e-commerce, smart devices and social media are generating an astounding 2.5 billion gigabytes of data per day. Non-mobile, network-enabled intelligent devices, often referred to as the Internet of Things (IoT), are poised to explode to over 1 trillion devices by 2015.

Rapid innovation and astounding growth in smart devices is driving new business opportunities and enterprise solutions. Many of these new opportunities and solutions are based on deep insight gained through analysis of the vast amount of data being generated.

The expansive growth of personal and pervasive computing power continues to drive innovation that is giving rise to a new class of systems and a pivot to a new generation of computing platform. Over the last fifty years, two generations of computing platform have dominated the business and consumer landscape. The first generation was dominated by the monolithic mainframe, while distributed computing and the Internet characterized the second generation. Cloud computing, Big Data/Analytics, the Internet of Things (IoT), mobile computing and even social media are the core disruptive technologies that are now converging at the crossroads of the emergence of a third generation of computing platform.

This will require new approaches to enterprise and business integration and interoperability. Industry bodies like The Open Group must help guide customers through the transition by facilitating customer requirements, documenting best practices, establishing integration standards and transforming the current approach to Enterprise Architecture to adapt to the changes in how organizations will build, use and deploy the emerging third generation of computing platform.

Enterprise Computing Platforms

An enterprise computing platform provides the underlying infrastructure and operating environment necessary to support business interactions. Enterprise systems are often composed of complex application interactions necessary to support business processes, customer interactions and partner integration. These interactions, coupled with the underlying operating environment, define an enterprise systems architecture.

The hallmark of successful enterprise systems architecture is a standardized and stable systems platform. This is an underlying operating environment that is stable, supports interoperability, and is based on repeatable patterns.

Enterprise platforms have evolved from the monolithic mainframes of the 1960s and 1970s through the advent of the distributed systems in the 1980s. The mainframe-based architecture represented the first true enterprise operating platform, referred to henceforth as the First Platform. The middleware-based distributed systems that followed and ushered in the dawn of the Internet represented the second iteration of platform architecture, referred to as the Second Platform.

While the creation of the Internet and the advent of web-based e-commerce are of historical significance, the underlying platform was still predominantly based on distributed architectures and therefore is not recognized as a distinct change in platform architecture. However, Internet-based e-commerce and service-based computing considerably contributed to the evolution toward the next distinct version of the enterprise platform. This Third Platform will support the next iteration of enterprise systems, which will be born out of multiple simultaneous and less obvious disruptive technology shifts.

The Convergence of Disruptive Technologies

The emergence of the third generation of enterprise platforms is manifested at the crossroads of four distinct, almost simultaneous, disruptive technology shifts: cloud computing, mobile computing, big data-based analytics and the IoT. The use of applications based on these technologies, such as social media and business-driven insight systems, has contributed to both the convergence and the rate of adoption.

These technologies are dramatically changing how enterprise systems are architected, how customers interact with business, and the rate and pace of development and deployment across the enterprise. This is forcing vendors, businesses and governments to shift their systems architectures to accommodate integrated services that leverage cloud infrastructure, while integrating mobile solutions and supporting the analysis of the vast amount of data being generated by mobile solutions and social media. All this is happening while maintaining the integrity of the evolving business capabilities, processes and transactions that require integration with business systems such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM).

Cloud computing and the continued commoditization of computer storage are key facilitating elements of this convergence. Cloud computing lowers the complexity of enterprise computing through virtualization and automated infrastructure provisioning, while solid-state and software-based Internet storage has made big data practical and affordable. Cloud computing solutions continue to evolve and offer innovative services like Platform as a Service (PaaS)-based development environments that integrate directly with big data solutions. Higher density, cloud-based and solid-state storage continue to lower the cost and complexity of storage and big data solutions.

The emergence of the smartphone and enterprise mobile computing is a key impetus for the emergence of big data solutions and an explosion of innovative storage technologies. The modern mobile platform, with all its rich applications, device sensors, and access to social networks, is almost single-handedly responsible for the explosion of data and the resulting rush to provide solutions to analyze and act on the insight contained in the vast ocean of personalized information. In turn, this phenomenon has created a big data market ecosystem based on the premise that open data is the new natural resource.

The emergence of sensor-enabled smartphones has foreshadowed the potential value of making everyday devices interconnected and intelligent by adding network-based sensors that allow devices to enhance their performance by interacting with their environment, and through collaboration with other devices and enterprise systems in the IoT. For example, equipment manufacturers are using sensors to gain insight into the condition of fielded equipment. This approach reduces the mean time to failure and pinpoints manufacturing quality issues and potential design flaws. This system of sensors also integrates with the manufacturer’s internal supply chain systems to identify needed parts and optimize the distribution process. In turn, the customer benefits by avoiding equipment downtime through scheduling maintenance before a part fails.

Over time, the IoT will require an operating environment for devices that integrates with existing enterprise business systems. But this will require that smart devices effectively integrate with cloud-based enterprise business systems and enterprise customer engagement systems, as well as the underlying big data infrastructure responsible for gleaning insight from the data this vast network of sensors will generate. While each of these disruptive technology shifts has evolved separately, they share a natural affinity for interaction, collaboration and enterprise integration that can be used to optimize an enterprise’s business processes.

Evolving Enterprise Business Systems

Existing enterprise systems (ERP, CRM, Supply Chain, Logistics, etc.) are still essential to the foundation of a business or government and form Systems of Record (SoR) that embody core business capabilities and the authoritative processes based on master data records. The characteristics of SoR are:

  • Encompass core business functions
  • Transactional in nature
  • Based on structured databases
  • Authoritative source of information (master data records)
  • Access is regulated
  • Changes follow a rigorous governance process.

Mobile systems, social media platforms, and Enterprise Market Management (EMM) solutions form another class of systems called Systems of Engagement (SoE). Their characteristics are:

  • Interact with end-users through open collaborative interfaces (mobile, social media, etc.)
  • High percentage of unstructured information
  • Personalized to end-user preferences
  • Context-based analytical business rules and processing
  • Access is open and collaborative
  • Evolves quickly and according to the needs of the users.

The emergence of the IoT is embodied in a new class of system, Systems of Sensors (SoS), which includes pervasive computing and control. Their characteristics are:

  • Based on autonomous network-enabled devices
  • Devices that use sensors to collect information about the environment
  • Interconnected with other devices or enterprise engagement systems
  • Changing behavior based on intelligent algorithms and environmental feedback
  • Developed through formal product engineering process
  • Updates to device firmware follow a continuous lifecycle.

The Third Platform

The Third Platform is a convergence of cloud computing, big data solutions, mobile systems and the IoT integrated into the existing enterprise business systems.

Figure 1: The Three Classes of Systems within the Third Platform

The successful implementation and deployment of enterprise SoR has been embodied in best practices, methods, frameworks, and techniques that have been distilled into enterprise architecture. The same level of rigor and pattern-based best practices will be required to ensure the success of solutions based on Third Platform technologies. Enterprise architecture methods and models need to evolve to include guidance, governance, and design patterns for implementing business solutions that span the different classes of system.

The Third Platform builds upon many of the concepts that originated with Service-Oriented Architecture (SOA) and dominated the closing stanza of the Second Platform era. The rise of the Third Platform provides the technology and environment to enable greater maturity of service integration within an enterprise.

The Open Group Service Integration Maturity Model (OSIMM) standard[1] provides a way in which an organization can assess its level of service integration maturity. Adoption of the Third Platform inherently addresses many of the attributes necessary to achieve the highest levels of service integration maturity defined by OSIMM. It will enable new types of application architecture that can support dynamically reconfigurable business and infrastructure services across a wide variety of devices (SoS), internal systems (SoR), and user engagement platforms (SoE).

Solution Development

These new architectures and the underlying technologies will require adjustments to how organizations approach enterprise IT governance, to lower the barrier to entry for implementing and integrating the technologies. Current adoption requires extensive expertise to implement, integrate, deploy and maintain the systems. First movers have shown the rest of the industry the realm of the possible, and have reaped the rewards of the early adopter.

The influence of cloud and mobile-based technologies has changed the way in which solutions will be developed, delivered, and maintained. SoE-based solutions interact directly with customers and business partners, which necessitates a continuous delivery of content and function to align with the enterprise business strategy.

Most cloud-based services employ a roll-forward test and delivery model. A roll-forward model allows an organization to address functional inadequacies and defects in almost real-time, with minimal service interruptions. The integration and automation of development and deployment tools and processes reduces the risk of human error and increases visibility into quality. In many cases, end-users are not even aware of updates and patch deployments.

This new approach to development and operations, deployment and maintenance is referred to as DevOps, which combines development and operations tools, governance and techniques into a single tool set and management practice. This allows the business to dictate not only the requirements but also the rate and pace of change, aligned to the needs of the enterprise.

[1] The Open Group Service Integration Maturity Model (OSIMM), Open Group Standard (C117), published by The Open Group, November 2011; refer to: www.opengroup.org/bookstore/catalog/c117.htm


Figure 2: DevOps: The Third Platform Solution Lifecycle

The characteristics of an agile DevOps approach are:

  • Harmonization of resources and practices between development and IT operations
  • Automation and integration of the development and deployment processes
  • Alignment of governance practices to holistically address development and operations with business needs
  • Optimization of the DevOps process through continuous feedback and metrics.

In contrast to SoE, SoR have a slower velocity of delivery. Such systems are typically released on fixed, pre-planned release schedules. Their inherent stability of features and capabilities necessitates a more structured and formal development approach, which traditionally equates to fewer releases over time. Furthermore, the impact changes to SoR have on core business functionality limits the magnitude and rate of change an organization is able to tolerate. But the emergence of the Third Platform will continue to put pressure on these core business systems to become more agile and flexible in order to adapt to the magnitude of events and information generated by mobile computing and the IoT.

As the technologies of the Third Platform coalesce, organizations will need to adopt hybrid development and delivery models based on agile DevOps techniques that are tuned appropriately to the class of system (SoR, SoE or SoS) and aligned with an acceptable rate of change.

DevOps is a key attribute of the Third Platform that will shift the fundamental management structure of the IT department. The Third Platform will usher in an era where one monolithic IT department is no longer necessary or even feasible. The line between business function and IT delivery will become imperceptible as this new platform evolves. The lines of business will become intertwined with the enterprise IT functions, ultimately leading to the IT department and business capability becoming synonymous. The recent emergence of Enterprise Market Management organizations is an example, where the marketing capabilities and the IT delivery systems are managed by a single executive: the Enterprise Marketing Officer.

The Challenge

The emergence of a new enterprise computing platform will usher in opportunity and challenge for businesses and governments that have invested in the previous generation of computing platforms. Organizations will be required to invest in both expertise and technologies to adopt the Third Platform. Vendors are already offering cloud-based Platform as a Service (PaaS) solutions that will provide integrated support for developing applications across the three evolving classes of systems – SoS, SoR, and SoE. These new development platforms will continue to evolve and give rise to new application architectures that were unfathomable just a few years ago. The emergence of the Third Platform is sure to spawn an entirely new class of dynamically reconfigurable intelligent applications and devices where applications reprogram their behavior based on the dynamics of their environment.

Almost certainly this shift will result in infrastructure and analytical capacity that will facilitate the emergence of cognitive computing which, in turn, will automate the very process of deep analysis and, ultimately, evolve the enterprise platform into the next generation of computing. This shift will require new approaches, standards and techniques for ensuring the integrity of an organization’s business architecture, enterprise architecture and IT systems architectures.

To effectively embrace the Third Platform, organizations will need to ensure that they have the capability to deliver boundaryless systems through integrated services composed of components that span the three classes of systems. This is where communities like The Open Group can help to document architectural patterns that support agile DevOps principles and tooling as the Third Platform evolves.

Technical standardization of the Third Platform has only just begun; for example, standardization of the cloud infrastructure has only recently crystallized around OpenStack. Mobile computing platform standardization remains fragmented across many vendor offerings, even with the support of rigid developer ecosystems and open-sourced runtime environments. The standardization and enterprise support for SoS is still nascent but underway within groups like the AllSeen Alliance and The Open Group’s QLM Work Group (now the Internet of Things Work Group).

Call to Action

The rate and pace of innovation, standardization, and adoption of Third Platform technologies is astonishing but needs the guidance and input from the practitioner community. It is incumbent upon industry communities like the Open Group to address the gaps between traditional Enterprise Architecture and an approach that scales to the Internet timescales being imposed by the adoption of the Third Platform.

The question is not whether Third Platform technologies will dominate the IT landscape, but rather how quickly this pivot will occur. Along the way, the industry must apply open standards processes to guard against fragmentation into multiple incompatible technology platforms.

The Open Group has launched a new forum to address these issues. The Open Group Open Platform 3.0™ Forum is intended to provide a vendor-neutral environment where members share knowledge and collaborate to develop standards and best practices necessary to help guide the evolution of Third Platform technologies and solutions. The Open Platform 3.0 Forum will provide a place where organizations can help illuminate their challenges in adopting Third Platform technologies. The Open Platform 3.0 Forum will help coordinate standards activities that span existing Open Group Forums and ensure a coordinated approach to Third Platform standardization and development of best practices.

Innovation itself is not enough to ensure the value and viability of the emerging platform. The Open Group can play a unique role through its focus on Boundaryless Information Flow™ to facilitate the creation of best practices and integration techniques across the layers of the platform architecture.

Andras Szakal, VP and CTO, IBM U.S. Federal, is responsible for IBM’s industry solution technology strategy in support of the U.S. Federal customer. Andras was appointed IBM Distinguished Engineer and Director of IBM’s Federal Software Architecture team in 2005. He is an Open Group Distinguished Certified IT Architect, IBM Certified SOA Solution Designer and a Certified Secure Software Lifecycle Professional (CSSLP). Andras holds undergraduate degrees in Biology and Computer Science and a Master’s degree in Computer Science from James Madison University. He has been a driving force behind IBM’s adoption of government IT standards as a member of the IBM Software Group Government Standards Strategy Team and the IBM Corporate Security Executive Board focused on secure development and cybersecurity. Andras represents the IBM Software Group on the Board of Directors of The Open Group and currently holds the Chair of The Open Group Certified Architect (Open CA) Work Group. More recently, he was appointed chair of The Open Group Trusted Technology Forum and leads the development of The Open Trusted Technology Provider Framework.


IT Trends Empowering Your Business is Focus of The Open Group London 2014

By The Open Group

The Open Group, the vendor-neutral IT consortium, is hosting an event in London October 20th-23rd at Central Hall, Westminster. The theme of this year’s event is how new IT trends are empowering improvements in business and facilitating enterprise transformation.

Objectives of this year’s event:

  • Show the need for Boundaryless Information Flow™, which would result in more interoperable, real-time business processes throughout all business ecosystems
  • Examine the use of developing technology such as Big Data and advanced data analytics in the financial services sector: to minimize risk, provide more customer-centric products and identify new market opportunities
  • Provide a high-level view of the Healthcare ecosystem that identifies entities and stakeholders which must collaborate to enable the vision of Boundaryless Information Flow
  • Detail how the growth of “The Internet of Things” with online currencies and mobile-enabled transactions has changed the face of financial services, and poses new threats and opportunities
  • Outline some of the technological imperatives for Healthcare providers, with the use of The Open Group Open Platform 3.0™ tools to enable products and services to work together and deploy emerging technologies freely and in combination
  • Describe how to develop better interoperability and communication across organizational boundaries and pursue global standards for Enterprise Architecture for all industries

Key speakers at the event include:

  • Allen Brown, President & CEO, The Open Group
  • Magnus Lindkvist, Futurologist
  • Hans van Kesteren, VP & CIO Global Functions, Shell International, The Netherlands
  • Daniel Benton, Global Managing Director, IT Strategy, Accenture

Registration for The Open Group London 2014 is open and available to members and non-members. Please register here.

Join the conversation via Twitter – @theopengroup #ogLON


The Open Group Boston 2014 Preview: Talking People Architecture with David Foote

By The Open Group

Among all the issues that CIOs, CTOs and IT departments are facing today, staffing is likely near the top of the list of what’s keeping them up at night. Sure, there’s dealing with constant (and disruptive) technological changes and keeping up with the latest tech and business trends, such as having a Big Data, Internet of Things (IoT) or a mobile strategy, but without the right people with the right skills at the right time it’s impossible to execute on these initiatives.

Technology jobs are notoriously difficult to fill–far more difficult than positions in other industries where roles and skillsets may be much more static. And because technology is rapidly evolving, the roles for tech workers are also always in flux. Last year you may have needed an Agile developer, but today you may need a mobile developer with secure coding ability and in six months you might need an IoT developer with strong operations or logistics domain experience—with each position requiring different combinations of tech, functional area, solution and “soft” skillsets.

According to David Foote, IT Industry Analyst and co-founder of IT workforce research and advisory firm Foote Partners, the mash-up of HR systems and ad hoc people management practices most companies have been using for years to manage IT workers has become frighteningly ineffective. He says that to cope in today’s environment, companies need to architect their people infrastructure similar to how they have been architecting their technical infrastructure.

“People Architecture” is the term Foote has coined to describe the application of traditional architectural principles and practices that may already be in place elsewhere within an organization and applying them to managing the IT workforce. This includes applying such things as strategy and capability roadmaps, phase gate blueprints, benchmarks, performance metrics, governance practices and stakeholder management to human capital management (HCM).

HCM components for People Architecture typically include job definition and design, compensation, incentives and recognition, skills demand and acquisition, job and career paths, professional development and work/life balance.

Part of the dilemma for employers right now, Foote says, is that there is very little job title standardization in the marketplace and too many job titles floating around IT departments today. “There are too many dimensions and variability in jobs now that companies have gotten lost from an HR perspective. They’re unable to cope with the complexity of defining, determining pay and laying out career paths for all these jobs, for example. For many, serious retention and hiring problems are showing up for the first time. Work-around solutions used for years to cope with systemic weaknesses in their people management systems have stopped working,” says Foote. “Recruiters start picking off their best people and candidates are suddenly rejecting offers and a panic sets in. Tensions are palpable in their IT workforce. These IT realities are pervasive.”

Twenty-five years ago, Foote says, defining roles in IT departments was easier. But then the Internet exploded and technology became far more customer-facing, shifting basic IT responsibilities from highly technical people deep within companies to roles requiring more visibility and transparency within and outside the enterprise. Large chunks of IT budgets moved into the business lines while traditional IT became more of a business itself.

According to Foote, IT roles became siloed not just by technology but by functional areas such as finance and accounting, operations and logistics, sales, marketing and HR systems, and by industry knowledge and customer familiarity. Then the IT professional services industry rapidly expanded to compete with its customers for talent in the marketplace. Even the architect role changed: an Enterprise Architect today can specialize in applications, security or data architecture, among others, or focus on a specific industry such as energy, retail or healthcare.

Foote likens the fragmentation of IT jobs and skillsets that’s happening now to the emergence of IT architecture 25 years ago. Just as technical architecture practices emerged to help make sense of the disparate systems rapidly growing within companies and how best to determine the right future tech investments, a people architecture approach today helps organizations better manage an IT workforce spread through the enterprise with roles ranging from architects and analysts to a wide variety of engineers, developers and project and program managers.

“Technical architecture practices were successful because—when you did them well—companies achieved an understanding of what they have systems-wise and then connected it to where they were going and how they were going to get there, all within a process inclusive of all the various stakeholders who shared the risk in the outcome. It helped clearly define enterprise technology capabilities and gave companies more options and flexibility going forward,” according to Foote.

“Right now employers desperately need to incorporate in human capital management systems and practices the same straightforward, inclusive architecture approaches companies are already using in other areas of their businesses. This can go a long way toward not just lessening staffing shortages but also executing more predictably and being more agile in the face of constant uncertainties and the accelerating pace of change. Ultimately this translates into a more effective workforce, whether they are full-timers or the contingent workforce of part-timers, consultants and contractors.

“It always comes down to your people. That’s not a platitude but a fact,” insists Foote. “If you’re not competitive in today’s labor marketplace and you’re not an employer where people want to work, you’re dead.”

One industry that he says has gotten it right is the consulting industry. “After all, their assets walk out the door every night. Consulting groups within firms such as IBM and Accenture have been good at architecting their staffing because it’s their job to get out in front of what’s coming technologically. Because these firms must anticipate customer needs before they get the call to implement services, they have to be ahead of the curve in already identifying and hiring the bench strength needed to fulfill demand. They do many things right to hire, develop and keep the staff they need in place.”

Unfortunately, many companies take too much of a just-in-time approach to their workforce, so they are always managing staffing from a position of scarcity rather than looking ahead, Foote says. But this is changing, in part because companies are tired of never having the people they need and of never being able to execute predictably.

The key is to put a structure in place that addresses a strategy around what a company needs and when. This applies not just to the hiring process, but also to compensation, training and advancement.

“Architecting anything allows you to be able to, in a more organized way, be more agile in dealing with anything that comes at you. That’s the beauty of architecture. You plan for the fact that you’re going to continue to scale and continue to change systems, the world’s going to continue to change, but you have an orderly way to manage the governance, planning and execution of that, the strategy of that and the implementation of decisions knowing that the architecture provides a more agile and flexible modular approach,” he said.

Foote says organizations such as The Open Group can lend themselves to facilitating People Architecture in a couple of different ways: first, through extending the principles of architecture to human capital management, and second, through vendor-independent, expertise- and experience-driven certifications, such as TOGAF® or Open CA and Open CITS, that help companies define core competencies for people and provide opportunities for training and career advancement.

“I'm pretty bullish on many vendor-independent certifications in general, particularly where a defined book of knowledge exists that's achieved wide acceptance in the industry. And that's what you've got with The Open Group. Nobody's challenging the architectural framework supremacy of TOGAF that I'm aware of. In fact, large vendors with their own certifications participated actively in developing the framework and applying it very successfully to their business models,” he said.

Although the process of implementing People Architecture can be difficult and may take several years to master (much like Enterprise Architecture), Foote says it is making a huge difference for companies that implement it.

To learn more about People Architecture and models for implementing it, plan to attend Foote’s session at The Open Group Boston 2014 on Tuesday July 22. Foote’s session will address how architectural principles are being applied to human capital so that organizations can better manage their workforces from hiring and training through compensation, incentives and advancement. He will also discuss how career paths for EAs can be architected. Following the conference, the session proceedings will be available to Open Group members and conference attendees at www.opengroup.org.

Join the conversation – #ogchat #ogBOS

David Foote is an IT industry research pioneer, innovator, and one of the most quoted industry analysts on global IT workforce trends and the multiple facets of the human side of technology value creation. His two decades of groundbreaking research and analysis of IT-business cross-skilling and technology/business management integration, along with his leadership in innovative IT skills demand and compensation benchmarking, have earned him a place on a short list of thought leaders in IT human capital management.

A former Gartner and META Group analyst, David leads the research and analytical practice groups at Foote Partners that reach 2,300 customers on six continents.

Filed under architecture, Conference, Open CA, Open CITS, Professional Development, Standards, TOGAF®, Uncategorized

The Onion & The Open Group Open Platform 3.0™

By Stuart Boardman, Senior Business Consultant, KPN Consulting, and Co-Chair of The Open Group Open Platform 3.0™

The onion is widely used as an analogy for complex systems – from IT systems to mystical world views.

It’s a good analogy. From the outside it’s a solid whole but each layer you peel off reveals a new onion (new information) underneath.

And a slice through the onion looks quite different from the whole…

What (and how much) you see depends on where and how you slice it.

The Open Group Open Platform 3.0™ is like that. Use cases for Open Platform 3.0 reveal multiple participants and technologies (Cloud Computing, Big Data Analytics, Social Networks, Mobility and The Internet of Things) working together to achieve goals that vary by participant. Each participant's goals represent a different slice through the onion.

The Ecosystem View
We commonly use the idea of peeling off layers to understand large ecosystems, which could be Open Platform 3.0 systems like the energy smart grid but could equally be the workings of a large cooperative or the transport infrastructure of a city. We want to know what is needed to keep the ecosystem healthy and what the effects could be of the actions of individuals on the whole and therefore on each other. So we start from the whole thing and work our way in.

The Service at the Centre of the Onion

If you’re the provider or consumer (or both) of an Open Platform 3.0 service, you’re primarily concerned with your slice of the onion. You want to be able to obtain and/or deliver the expected value from your service(s). You need to know as much as possible about the things that can positively or negatively affect that. So your concern is not the onion (ecosystem) as a whole but your part of it.

Right in the middle is your part of the service. The first level out from that consists of other participants with whom you have a direct relationship (contractual or otherwise). These are the organizations that deliver the services you consume directly to enable your own service.

One level out from that (level 2) are participants with whom you have no direct relationship but on whose services you are still dependent. It's common in Platform 3.0 that your partners too will consume other services in order to deliver their services (see the use cases we have documented). You need to know as much as possible about this level, because whatever happens here can have a positive or negative effect on you.

One level further from the centre we find indirect participants who don't necessarily deliver any part of the service but whose actions may well affect the rest. They could just be indirect materials suppliers. They could also be part of a completely different value network in which your level 1 or 2 “partners” participate. You can't expect to understand this level in detail, but you know that how that value network performs can affect your partners' strategy or even their very existence. The knock-on impact on your own strategy can be significant.

We can conceive of more levels but pretty soon a law of diminishing returns sets in. At each level further from your own organization you will see less detail and more variety. That in turn means that there will be fewer things you can actually know (with any certainty) and not much more that you can even guess at. That doesn’t mean that the ecosystem ends at this point. Ecosystems are potentially infinite. You just need to decide how deep you can usefully go.
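
To make the levels concrete, here is a minimal sketch (with an entirely hypothetical set of participants, not taken from any Open Platform 3.0 use case) that treats the ecosystem as a dependency graph and computes each participant's level – its distance from your own service at the centre – using a breadth-first search:

```python
from collections import deque

# Who depends directly on whom, seen from the perspective of "my-service".
# Participant names are purely illustrative.
depends_on = {
    "my-service": ["payment-provider", "hosting-platform"],  # level 1: direct relationships
    "payment-provider": ["card-network"],                    # level 2: partners of partners
    "hosting-platform": ["energy-supplier"],
    "card-network": ["regulator"],                           # level 3: indirect participants
    "energy-supplier": [],
    "regulator": [],
}

def ecosystem_levels(centre: str) -> dict[str, int]:
    """Return each participant's distance (level) from the centre of the onion."""
    levels = {centre: 0}
    queue = deque([centre])
    while queue:
        participant = queue.popleft()
        for partner in depends_on.get(participant, []):
            if partner not in levels:            # first time we reach this participant
                levels[partner] = levels[participant] + 1
                queue.append(partner)
    return levels

if __name__ == "__main__":
    for participant, level in sorted(ecosystem_levels("my-service").items(), key=lambda x: x[1]):
        print(f"level {level}: {participant}")
```

The useful depth is wherever the diminishing returns described above set in; beyond that point, the levels simply remind you which partners' partners you might still want to keep an eye on.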

Limits of the Onion
At a certain point one hits the limits of an analogy. If everybody sees their own organization as the centre of the onion, what we actually have is a bunch of different, overlapping onions.

And you can’t actually make onions overlap, so let’s not take the analogy too literally. Just keep it in mind as we move on. Remember that our objective is to ensure the value of the service we’re delivering or consuming. What we need to know therefore is what can change that’s outside of our own control and what kind of change we might expect. At each visible level of the theoretical onion we will find these sources of variety. How certain of their behaviour we can be will vary – with a tendency to the less certain as we move further from the centre of the onion. We’ll need to decide how, if at all, we want to respond to each kind of variety.

But that will have to wait for my next blog. In the meantime, here are some ways people look at the onion.

Stuart Boardman is a Senior Business Consultant with KPN Consulting, where he leads the Enterprise Architecture practice and consults to clients on Cloud Computing, Enterprise Mobility and The Internet of Everything. He is Co-Chair of The Open Group Open Platform 3.0™ Forum and was Co-Chair of the Cloud Computing Work Group's Security for the Cloud and SOA project, and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by KPN, the Information Security Platform (PvIB) in The Netherlands and his previous employer, CGI, as well as several Open Group white papers, guides and standards. He is a frequent speaker at conferences on the topics of Open Platform 3.0 and Identity.

Filed under Cloud, Cloud/SOA, Conference, Enterprise Architecture, Open Platform 3.0, Service Oriented Architecture, Standards, Uncategorized

Future Technologies

By Dave Lounsbury, The Open Group

The Open Group is looking toward the future – what will happen in the next five to ten years?

Those who know us think of The Open Group as being all about consensus, creating standards that are useful to the buy and supply side by creating a stable representation of industry experience – and they would be right. But in order to form this consensus, we must keep an eye on the horizon to see if there are areas that we should be talking about now. The Open Group needs to keep its eyes on the future in order to keep pace with businesses looking to gain business advantage by incorporating emerging technologies. According to the McKinsey Global Institute[1], “leaders need to plan for a range of scenarios, abandoning assumptions about where competition and risk could come from and not to be afraid to look beyond long-established models.”

To make sure we have this perspective, The Open Group has started a series of Future Technologies workshops. We initiated this at The Open Group Conference in Philadelphia with the goal of identifying emerging business and technical trends that change the shape of enterprise IT.  What are the potential disruptors? How should we be preparing?

As always at The Open Group, we look to our membership to guide us. We assembled a fantastic panel of experts on the topic who offered up insights into the future:

  • Dr. William Lafontaine, VP High Performance Computing, Analytics & Cognitive Markets at IBM Research: Global Technology Outlook 2013.
  • Mike Walker, Strategy and Enterprise Architecture Advisor at HP: An Enterprise Architecture’s Journey to 2020.

If you were not able to join us in Philadelphia, you can view the Livestream session on-demand.

Dr. William Lafontaine shared aspects of the company's Global Technology Outlook 2013, naming the top trends that the company is keeping top of mind, starting with a confluence of social, mobile, analytics and cloud.

According to Lafontaine and his colleagues, businesses must prepare for not “mobile also” but “mobile first.” In fact, there will be companies that will exist in a mobile-only environment.

  • Growing scale/lower barrier to entry – More data is being created, but also more people are able to create ways of taking advantage of this data, such as companies that excel at the personal interface. Multimedia analytics will become a growing concern for businesses that will be receiving swells of information as video and images.
  • Increasing complexity – The confluence of Social, Mobile, Cloud and Big Data/Analytics will result in masses of data coming from newer, more “complex” places, such as scanners, mobile devices and other “Internet of Things” devices. Yet these complex and varied streams of data are more consumable and will have an end product that is more easily delivered to clients or users. Smaller businesses are also moving closer toward enterprise complexity. For example, when you swipe your credit card, you will also be shown additional purchasing opportunities based on your past spending habits. These can include alerts for nearby coffee shops that serve your favorite tea or for local bookstores that sell mysteries or your favorite genre.
  • Fast pace – According to Lafontaine, ideas will be coming to market faster than ever. He introduced the concept of the Minimum Buyable Product, which means taking an idea (sometimes barely formed) to inventors to test its capabilities and to evaluate it as quickly as possible. Processes that once took months or years can now take weeks. Lafontaine used the MOOC innovator Coursera as an example: eighteen months ago, it had no clients and existed in zero countries. Now it's serving over 4 million students in over 29 countries around the world. Deployment of open APIs will become a strategic tool for the creation of value.
  • Contextual overload – Businesses have more data than they know what to do with: our likes and dislikes, how we like to engage with our mobile devices, our ages, our locations, along with traditional data of record. Over the next five years, businesses will be attempting to make sense of it.
  • Machine learning – Cognitive systems will form the “third era” of computing. We will see businesses using machines capable of complex reasoning and interaction to extend human cognition. Examples include a “medical sieve” for medical imaging diagnosis, systems used by legal firms to suggest defense and prosecution arguments, and next-generation call centers.
  • IT shops need to be run as a business – Mike Walker spoke about how the business of IT is fundamentally changing and that end-consumers are driving corporate behaviors.  Expectations have changed and the bar has been raised.  The tolerance for failure is low and getting lower.  It is no longer acceptable to tell end-consumers that they will be receiving the latest product in a year.  Because customers want their products faster, EAs and businesses will have to react in creative ways.
  • Build a BRIC house: According to Forrester, $2.1 trillion will be spent on IT in 2013, with “apps and the US leading the charge.” Walker emphasized the importance of building information systems, products and services that support the BRIC areas of the world (Brazil, Russia, India and China), since they comprise nearly a third of global GDP. Hewlett-Packard is banking big on “The New Style of IT”: Cloud, risk management and security, and information management. This is the future of business and IT, says Meg Whitman, CEO and president of HP. All of the company's products and services presently pivot around these three concepts.
  • IT is the business: Gartner found that 67% of all EA organizations are either starting (39%), restarting (7%) or renewing (21%). There’s a shift from legacy EA, with 80% of organizations focused on how they can leverage EA to either align business and IT standards (25%), deliver strategic business and IT value (39%) or enable major business transformation (16%).

Good as these views are, they only represent two data points on a line that The Open Group wants to draw out toward the end of the decade. So we will be continuing these Future Technologies sessions to gather additional views, with the next session being held at The Open Group London Conference in October.  Please join us there! We’d also like to get your input on this blog.  Please post your thoughts on:

  • Perspectives on what business and technology trends will impact IT and EA in the next 5-10 years
  • Points of potential disruption – what will change the way we do business?
  • What actions should we be taking now to prepare for this future?

[1] McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy. May 2013

Dave Lounsbury is The Open Group's Chief Technology Officer, previously VP of Collaboration Services. Dave holds three U.S. patents and is based in the U.S.

Filed under Cloud, Enterprise Architecture, Future Technologies, Open Platform 3.0