Tag Archives: Cloud services

The Open Group London 2014 Preview: A Conversation with RTI’s Stan Schneider about the Internet of Things and Healthcare

By The Open Group

RTI is a Silicon Valley-based messaging and communications company focused on helping to bring the Industrial Internet of Things (IoT) to fruition, and was recently named “The Most Influential Industrial Internet of Things Company” in an Appinions study published in Forbes. RTI’s EMEA Manager, Bettina Swynnerton, will be discussing the impact that the IoT and connected medical devices will have on hospital environments and the Healthcare industry at The Open Group London event, October 20-23. We spoke to RTI CEO Stan Schneider in advance of the event about the Industrial IoT and the areas where he sees Healthcare being impacted the most by connected devices.

Earlier this year, industry research firm Gartner declared the Internet of Things (IoT) to be the most hyped technology around, having reached the pinnacle of the firm’s famed “Hype Cycle.”

Despite the hype around consumer IoT applications—from FitBits to Nest thermostats to fashionably placed “wearables” that may begin to appear in everything from jewelry to handbags to kids’ backpacks—Stan Schneider, CEO of IoT communications platform company RTI, says that 90 percent of what we’re hearing about the IoT is not where the real value will lie. Most of the media coverage and hype is about the “Consumer” IoT, such as Google Glass or sensors in refrigerators that tell you when the milk’s gone bad. However, most of the real value of the IoT will come from what GE has termed the “Industrial Internet”—applications working behind the scenes to keep industrial systems operating more efficiently, says Schneider.

“In reality, 90 percent of the real value of the IoT will be in industrial applications such as energy systems, manufacturing advances, transportation or medical systems,” Schneider says.

However, the reality today is that the IoT is quite new. As Schneider points out, most companies are still trying to figure out what their IoT strategy should be. There isn’t that much active building of real systems at this point.

“Most companies, at the moment, are just trying to figure out what the Internet of Things is. I can do a webinar on ‘What is the Internet of Things?’ or ‘What is the Industrial Internet of Things?’ and get hundreds and hundreds of people showing up, most of whom don’t have any idea. That’s where most companies are. But there are several leading companies that very much have strategies, and there are a few that are even executing their strategies,” he said. According to Schneider, these companies include GE, which he says has a 700+ person team currently dedicated to building its Industrial IoT platform, as well as companies such as Siemens and Audi, which already have some applications working.

For its part, RTI is actively involved in trying to help define how the Industrial Internet will work and how companies can take disparate devices and make them work with one another. “We’re a nuts-and-bolts, make-it-work type of company,” Schneider notes. As such, openness and standards are critical not only to RTI’s work but to the success of the Industrial IoT in general, says Schneider. RTI is currently involved in as many as 15 different industry standards initiatives.

IoT Drivers in Healthcare

Although RTI is involved in IoT initiatives in many industries, from manufacturing to the military, Healthcare is one of the company’s main areas of focus. For instance, RTI is working with GE Healthcare on the software for its CAT scanners. GE chose RTI’s DDS (Data Distribution Service) product because it lets GE standardize on a single communications platform across product lines.

Schneider says there are three big drivers that are changing the medical landscape when it comes to connectivity: the evolution of standalone systems into distributed systems, the connection of devices to improve patient outcomes, and the replacement of dedicated wiring with networks.

The first driver is that medical devices that have been standalone devices for years are now being built on new distributed architectures. This gives practitioners and patients easier access to the technology they need.

For example, RTI customer BK Medical, a medical device manufacturer based in Denmark, is in the process of changing their ultrasound product architecture. They are moving from a single-user physical system to a wirelessly connected distributed design. Images will now be generated in and distributed by the Cloud, thus saving significant hardware costs while making the systems more accessible.

According to Schneider, ultrasound machine architecture hasn’t really changed in the last 30 or 40 years. Today’s ultrasound machines are still wheeled in on a cart. That cart contains a wired transducer, image-processing hardware and software, and a monitor. If someone wants to keep an image—for example, images of a fetus in utero—it has to be carried out on physical media. Years ago that meant a Polaroid picture; today the images are saved to a CD and handed to the patient.

In contrast, BK’s new systems will be completely distributed, Schneider says. Doctors will be able to carry a transducer that looks more like a cellphone with them throughout the hospital. A wireless connection will upload the imaging data into the cloud for image calculation. With a distributed scenario, only one image processing system may be needed for a hospital or clinic. It can even be kept in the cloud off-site. Both patients and caregivers can access images on any display, wherever they are. This kind of architecture makes the systems much cheaper and far more efficient, Schneider says. The days of the wheeled-in cart are numbered.

The second IoT driver in Healthcare is connecting medical devices together to improve patient outcomes. Most hospital devices today are completely independent and standalone. So, if a patient is hooked up to multiple monitors, the only thing that really “connects” those devices today is a piece of paper at the end of a hospital bed that shows how each should be functioning. Nurses are supposed to check these devices on an hourly basis to make sure they’re working correctly and the patient is OK.

Schneider says this approach is error-ridden. First, the nurse may be too busy to do a good job checking the devices. Worse, any number of things can set off alarms whether there’s something wrong with the patient or not. As anyone who has ever visited a friend or relative in the hospital can attest, alarms go off constantly, making it difficult to determine when someone is really in distress. In fact, one of the biggest problems in hospital settings today, Schneider says, is a phenomenon known as “alarm fatigue.” Single devices simply can’t reliably tell if there’s some minor glitch in the data or if the patient is in real trouble. As a result, 80 percent of all device alarms in hospitals are turned off. Meaningless alarms fatigue personnel, so they either ignore or turn off the alarms…and people can die.

To deal with this problem, new technologies are being created that will connect devices together on a network. Multiple devices can then work in tandem to figure out when something is really wrong. If the machines are networked, alarms can be set to go off only when multiple indicators show distress rather than just one. For example, if oxygen levels drop on both an oxygen monitor on someone’s finger and on a respiration monitor, the alarm is far more likely to signal a real patient problem than if only one source shows a problem. Schneider says the algorithms to fix these problems are reasonably well understood; the barrier is the lack of networking to tie all of these machines together.
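
To make the idea concrete, here is a minimal sketch of the kind of multi-device alarm fusion Schneider describes. The device names, thresholds and code are illustrative assumptions only, not RTI’s DDS API or any vendor’s actual alarm algorithm.

```python
# Illustrative sketch of networked alarm fusion: raise an alarm only when
# independent monitors agree that the patient is in distress. The device
# names and thresholds below are hypothetical, not from any real system.

from dataclasses import dataclass

@dataclass
class VitalReading:
    spo2_percent: float       # from a fingertip pulse oximeter
    respiration_rate: float   # breaths per minute, from a respiration monitor

def should_alarm(reading: VitalReading,
                 spo2_floor: float = 90.0,
                 resp_floor: float = 8.0) -> bool:
    """Alarm only if BOTH monitors indicate low oxygenation/respiration.

    A single out-of-range value is treated as a probable sensor glitch,
    which is the 'alarm fatigue' problem the networked design addresses.
    """
    low_spo2 = reading.spo2_percent < spo2_floor
    low_resp = reading.respiration_rate < resp_floor
    return low_spo2 and low_resp

# A dropped pulse-oximeter signal alone does not alarm...
print(should_alarm(VitalReading(spo2_percent=0.0, respiration_rate=14)))  # False
# ...but corroborating readings from both devices do.
print(should_alarm(VitalReading(spo2_percent=85.0, respiration_rate=6)))  # True
```

In a real networked design the readings would arrive over a publish/subscribe bus rather than as local function arguments, but the fusion logic is the point: it takes a network to correlate the devices at all.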

The third area of change in the industrial medical Internet is the transition to networked systems from dedicated wired designs. Surgical operating rooms offer a good example. Today’s operating room is a maze of wires connecting screens, computers, and video. Videos, for instance, come from dynamic x-ray imaging systems, from ultrasound navigation probes and from tiny cameras embedded in surgical instruments. Today, these systems are connected via HDMI or other specialized cables. These cables are hard to reconfigure. Worse, they’re difficult to sterilize, Schneider says. Thus, the surgical theater is hard to configure, clean and maintain.

In the future, the mesh of special wires can be replaced by a single, high-speed networking bus. Networks make the systems easier to configure and integrate, easier to use and accessible remotely. A single, easy-to-sterilize optical network cable can replace hundreds of wires. As wireless gets faster, even that cable can be removed.

“By changing these systems from a mesh of TV-cables to a networked data bus, you really change the way the whole system is integrated,” he said. “It’s much more flexible, maintainable and sharable outside the room. Surgical systems will be fundamentally changed by the Industrial IoT.”

IoT Challenges for Healthcare

Schneider says there are numerous challenges facing the integration of the IoT into existing Healthcare systems—from technical challenges to standards and, of course, security and privacy. But one of the biggest challenges facing the industry, he believes, is plain old fear. In particular, there is a lot of fear within the industry of choosing the wrong path and, in effect, “walking off a cliff” by heading in the wrong direction. Getting beyond that fear and taking risks, he says, will be necessary to move the industry forward.

In a practical sense, the other thing currently holding back integration is the sheer number of connected devices already used in medicine, he says. Manufacturers each have their own systems and obviously have a vested interest in keeping their equipment in hospitals, so many have been reluctant to develop standards or become standards-compliant and push interoperability forward, Schneider says.

This is, of course, not just a Healthcare issue. “We see it in every single industry we’re in. It’s a real problem,” he said.

Legacy systems are also a problematic area. “You can’t just go into a Kaiser Permanente and rip out $2 billion worth of equipment,” he says. Integrating new systems with existing technology is a process of incremental change that takes time and vested leadership, says Schneider.

Cloud Integration a Driver

Although many of these technologies are not yet very mature, Schneider believes that the fundamental industry driver is Cloud integration. In Schneider’s view, the Industrial Internet is ultimately a systems problem. As with the ultrasound machine example from BK Medical, it’s not that an existing ultrasound machine doesn’t work just fine today, Schneider says, it’s that it could work better.

“Look what you can do if you connect it to the Cloud—you can distribute it, you can make it cheaper, you can make it better, you can make it faster, you can make it more available, you can connect it to the patient at home. It’s a huge system problem. The real overwhelming striking value of the Industrial Internet really happens when you’re not just talking about the hospital but you’re talking about the Cloud and hooking up with practitioners, patients, hospitals, home care and health records. You have to be able to integrate the whole thing together to get that ultimate value. While there are many point cases that are compelling all by themselves, realizing the vision requires getting the whole system running. A truly connected system is a ways out, but it’s exciting.”

Open Standards

Schneider also says that openness is absolutely critical for these systems to ultimately work. Just as agreement on HTTP as a standard protocol running over the Internet Protocol (IP) drove the Web, a new, device-appropriate protocol will be necessary for the Internet of Things to work. Consensus will be necessary, he says, so that systems can talk to each other and connectivity will work. The Industrial Internet will push that out to the Cloud and beyond, he says.

“One of my favorite quotes is from IBM,” he says. “IBM said, ‘It’s not a new Internet, it’s a new Web.’” By that, they mean that the industry needs new, machine-centric protocols to run over the same Internet hardware and base IP protocol, Schneider said.

Schneider believes that this new web will eventually evolve to become the new architecture for most companies. However, for now, particularly in hospitals, it’s the “things” that need to be integrated into systems and overall architectures.

One example where this level of connectivity will make a huge difference, he says, is in predictive maintenance. Once a system can “sense” or predict that a machine may fail or that a part needs to be replaced, there will be a huge economic impact and cost savings. For instance, he said Siemens uses acoustic sensors to monitor the state of its wind generators. By placing sensors next to the bearings in the machine, they can literally “listen” for squeaky wheels and thus figure out whether a turbine may soon need repair. These analytics let them know when a bearing must be replaced before the turbine shuts down. Of course, the infrastructure will first need to connect all of these “things” to each other and to the cloud, so there will need to be a lot of system-level changes in architectures.
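
As a rough illustration of the “listen for squeaky wheels” idea, the sketch below flags a bearing when its acoustic signature grows noticeably louder than a healthy baseline. All names, figures and thresholds are hypothetical; real condition-monitoring analytics are far more sophisticated (spectral analysis, trend models and so on).

```python
# Hypothetical sketch of bearing-health monitoring from acoustic samples.
# Real systems use much richer signal processing; this just shows the
# flag-before-failure idea with a simple RMS amplitude threshold.

import math
from typing import Sequence

def rms(samples: Sequence[float]) -> float:
    """Root-mean-square amplitude of an acoustic sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def bearing_needs_service(samples: Sequence[float],
                          baseline_rms: float,
                          wear_factor: float = 2.0) -> bool:
    """Flag the bearing when its noise level exceeds a multiple of the
    healthy baseline, so it can be replaced before the turbine shuts down."""
    return rms(samples) > wear_factor * baseline_rms

healthy = [0.1, -0.12, 0.09, -0.11, 0.1]
worn = [0.35, -0.4, 0.38, -0.33, 0.36]
baseline = rms(healthy)
print(bearing_needs_service(healthy, baseline))  # False
print(bearing_needs_service(worn, baseline))     # True
```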

Standards, of course, will be key to getting these architectures to work together. Schneider believes standards development for the IoT will need to be tackled from both horizontal and vertical standpoints. Both generic communication standards and industry-specific standards, such as how to integrate an operating room, must evolve.

“We are a firm believer in open standards as a way to build consensus and make things actually work. It’s absolutely critical,” he said.

Stan Schneider is CEO at Real-Time Innovations (RTI), the Industrial Internet of Things communications platform company. RTI is the largest embedded middleware vendor and has an extensive footprint in all areas of the Industrial Internet, including Energy, Medical, Automotive, Transportation, Defense, and Industrial Control. Stan has published over 50 papers in both the academic and industry press. He speaks widely at events and conferences on topics including networked medical devices for patient safety, the future of connected cars, the role of the DDS standard in the IoT, the evolution of power systems, and understanding the various IoT protocols. Before RTI, Stan managed a large Stanford robotics laboratory, led an embedded communications software team and built data acquisition systems for automotive impact testing. Stan completed his PhD in Electrical Engineering and Computer Science at Stanford University, and holds a BS and MS from the University of Michigan. He is a graduate of Stanford’s Advanced Management College.

 


Filed under architecture, Cloud, digital technologies, Enterprise Architecture, Healthcare, Internet of Things, Open Platform 3.0, Standards, Uncategorized

First Open Group Webjam — Impact of Cloud Computing on our Resumes

By E.G. Nadhan, HP

The Open Group conducted its first-ever webjam within The Cloud Work Group last month. A webjam is an informal way for the members of a particular work group with a common interest to hold an interactive brainstorming debate on a topic of their choice. Consider it to be a panel discussion — except everyone on the call is part of the panel! I coordinated the first webjam for The Cloud Work Group — the topic was “What will Cloud do to your resume?”

The webjam was attended by active members of The Cloud Work Group, including:

  • Sanda Morar and Som Balakrishnan from Cognizant Technologies
  • Raj Bhoopathi and E.G. Nadhan from HP
  • Chris Harding from The Open Group

We used this post on the ECIO Forum Blog to set the context for this webjam. Click here for the recording. Below is a brief summary of the key takeaways:

  • Cloud Computing is causing significant shifts that could impact the extent to which some roles exist in the future—especially the role of the CTO and the CIO. The CIO must become a cooperative integrator across a heterogeneous mix of technologies, platforms and services that are provisioned on or off the cloud.
  • Key Cloud characteristics—such as multi-tenancy, elasticity, scalability, etc.—are likely to be called out in resumes. There is an accelerated push for Cloud Architects who are supposed to ensure that aspects of the Cloud are consistently addressed across all architectural layers.
  • DevOps is expanding the role of the developer into operations. Developers’ resumes are more likely to call out this experience in Cloud Computing environments.
  • Business users are likely to call out their experience directly procuring Cloud services.
  • Application testers are more likely to address interoperability between the services provided—including the validation of the projected service levels—which could, in turn, show up on their resumes.
  • Operations personnel are likely to call out their experience with tools that can seamlessly monitor physical and virtual resources.

The recording provides much more detail.

I really enjoyed the webjam. It provided an opportunity to share the perspectives of individuals from numerous member companies of The Open Group on a topic germane to us as IT professionals as well as to The Cloud Work Group.

Are there other roles that are impacted? Are there other ways resume content will change in the future? Please listen to the recording and let me know your thoughts.

If you are a member of the Cloud Work Group, I look forward to engaging in an interesting discussion with you on other topics in this area!

A version of this blog post was originally published on HP’s Journey through Enterprise IT Services blog.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has more than 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project, and is also the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.

 


Filed under Cloud, Cloud/SOA

How Should We Use Cloud?

By Chris Harding, The Open Group

How should we use Cloud? This is the key question at the start of 2013.

The Open Group® conferences in recent years have thrown light on “What is Cloud?” and “Should we use Cloud?” It is time to move on.

Cloud as a Distributed Processing Platform

The question is an interesting one, because the answer is not necessarily, “Use Cloud resources just as you would use in-house resources.” Of course, you can use Cloud processing and storage to replace or supplement what you have in-house, and many companies are doing just that. You can also use the Cloud as a distributed computing platform, on which a single application instance can use multiple processing and storage resources, perhaps spread across many countries.

It’s a bit like contracting a company to do a job, rather than hiring a set of people. If you hire a set of people, you have to worry about who will do what when. Contract a company, and all that is taken care of. The company assembles the right people, schedules their work, finds replacements in case of sickness, and moves them on to other things when their contribution is complete.

This doesn’t only make things easier, it also enables you to tackle bigger jobs. Big Data is the latest technical phenomenon. It can be processed effectively by parceling the work out to multiple computers. Cloud providers are beginning to make the tools to do this available, using distributed file systems and map-reduce. We do not yet have “Distributed Processing as a Service” – but that will surely come.
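
For readers unfamiliar with the pattern, the following minimal sketch shows the shape of a map-reduce computation: counting words across documents in a single process. Cloud platforms run the same map, shuffle and reduce phases spread across many machines over a distributed file system; the function names below are illustrative only, not any particular provider’s API.

```python
# Minimal single-process illustration of the map-reduce pattern.
# Cloud platforms distribute the map and reduce steps across many
# machines over a distributed file system; the shape is the same.

from collections import defaultdict
from typing import Iterable, Tuple

def map_phase(document: str) -> Iterable[Tuple[str, int]]:
    """Emit (word, 1) for every word in a document."""
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs: Iterable[Tuple[str, int]]) -> dict:
    """Group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups: dict) -> dict:
    """Sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["big data needs big clusters", "clusters process big data"]
pairs = (pair for doc in documents for pair in map_phase(doc))
print(reduce_phase(shuffle(pairs)))
# {'big': 3, 'data': 2, 'needs': 1, 'clusters': 2, 'process': 1}
```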

Distributed Computing at the Conference

Big Data is the main theme of the Newport Beach conference. The plenary sessions have keynote presentations on Big Data, including the crucial aspect of security, and there is a Big Data track that explores in depth its use in Enterprise Architecture.

There are also Cloud tracks that explore the business aspects of using Cloud and the use of Cloud in Enterprise Architecture, including a session on its use for Big Data.

Service orientation is generally accepted as a sound underlying principle for systems using both Cloud and in-house resources. The Service Oriented Architecture (SOA) movement focused initially on its application within the enterprise. We are now looking to apply it to distributed systems of all kinds. This may require changes to specific technology and interfaces, but not to the fundamental SOA approach. The Distributed Services Architecture track contains presentations on the theory and practice of SOA.

Distributed Computing Work in The Open Group

Many of the conference presentations are based on work done by Open Group members in the Cloud Computing, SOA and Semantic Interoperability Work Groups, and in the Architecture, Security and Jericho Forums. The Open Group enables people to come together to develop standards and best practices for the benefit of the architecture community. We have active Work Groups and Forums working on artifacts such as a Cloud Computing Reference Architecture, a Cloud Portability and Interoperability Guide, and a Guide to the use of TOGAF® framework in Cloud Ecosystems.

The Open Group Conference in Newport Beach

Our conferences provide an opportunity for members and non-members to discuss ideas together. This happens not only in presentations and workshops, but also in informal discussions during breaks and after the conference sessions. These discussions benefit future work at The Open Group. They also benefit the participants directly, enabling them to bring to their enterprises ideas that they have sounded out with their peers. People from other companies can often bring new perspectives.

Most enterprises now know what Cloud is. Many have identified specific opportunities where they will use it. The challenge now for enterprise architects is determining how best to do this, either by replacing in-house systems, or by using the Cloud’s potential for distributed processing. This is the question for discussion at The Open Group Conference in Newport Beach. I’m looking forward to an interesting conference!

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Filed under Cloud, Conference

Flying in the Cloud by the Seat of Our Pants

By Chris Harding, The Open Group

In the early days of aviation, when instruments were unreliable or non-existent, pilots often had to make judgments by instinct. This was known as “flying by the seat of your pants.” It was exciting, but error prone, and accidents were frequent. Today, enterprises are in that position with Cloud Computing.

Staying On Course

Flight navigation does not end with programming the flight plan. The navigator must check throughout the flight that the plane is on course. Successful use of Cloud requires not only an understanding of what it can do for the business, but also continuous monitoring that it is delivering value as expected. A change of service level, for example, can have as much effect on a user enterprise as a change of wind speed on an aircraft.

The Open Group conducted a Cloud Return on Investment (ROI) survey in 2011. Then, 55 percent of those surveyed felt that Cloud ROI would be easy to evaluate and justify, although only 35 percent had mechanisms in place to do it. When we repeated the survey in 2012, we found that the proportion that thought it would be easy had gone down to 44 percent, and only 20 percent had mechanisms in place. This shows, arguably, more realism, but it certainly doesn’t show any increased tendency to monitor the value delivered by Cloud. In fact, it shows the reverse. The enterprise pilots are flying by the seats of their pants. (The full survey results are available at http://www.opengroup.org/sites/default/files/contentimages/Documents/cloud_roi_formal_report_12_19_12-1.pdf)

They Have No Instruments

It is hard to blame the pilots for this, because they really do not have the instruments. The Open Group published a book in 2011, Cloud Computing for Business, that explains how to evaluate and monitor Cloud risk and ROI, with spreadsheet examples. The spreadsheet is pretty much the state-of-the-art in Cloud ROI instrumentation.  Like a compass, it is robust and functional at a basic level, but it does not have the sophistication and accuracy of a satellite navigation system. If we want better navigation, we must have better systems.
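
By way of illustration only, the snippet below captures the kind of arithmetic such spreadsheet models perform: comparing the return on an in-house system against a cloud alternative. Every figure and parameter name is invented for the example; the book’s actual models track many more cost and risk drivers.

```python
# Simplified, hypothetical Cloud ROI comparison in the spirit of the
# spreadsheet models in "Cloud Computing for Business". All figures are
# invented for illustration; real models track many more cost drivers.

def roi(benefit: float, cost: float) -> float:
    """Classic ROI: net gain divided by cost."""
    return (benefit - cost) / cost

def cloud_annual_cost(monthly_service_fee: float,
                      migration_cost: float,
                      years: int = 3) -> float:
    """Average yearly cost of a cloud service, amortizing migration."""
    return monthly_service_fee * 12 + migration_cost / years

in_house_annual_cost = 250_000.0     # hardware, licences, ops staff (assumed)
annual_business_benefit = 400_000.0  # value the system delivers (assumed)

cloud_cost = cloud_annual_cost(monthly_service_fee=12_000.0,
                               migration_cost=90_000.0)

print(f"In-house ROI: {roi(annual_business_benefit, in_house_annual_cost):.0%}")
print(f"Cloud ROI:    {roi(annual_business_benefit, cloud_cost):.0%}")
# Monitoring means recomputing these as service levels and fees change.
```

The point of such instrumentation is not the single calculation but the ongoing recalculation: the same model, re-run as service levels and prices change, is what keeps the enterprise on course.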

There is scope for Enterprise Architecture tool vendors to fill this need. As the inclusion of Cloud in Enterprise Architectures becomes commonplace, and Cloud Computing metrics and their relation to ROI become better understood, it should be possible to develop the financial components of Enterprise Architecture modeling tools so that the business impact of the Cloud systems can be seen more clearly.

The Enterprise Flight Crew

But this is not just down to the architects. The architecture is translated into systems by developers, and the systems are operated by operations staff. All of these people must be involved in the procurement and configuration of Cloud services and their monitoring through the Cloud buyers’ life cycle.

Cloud is already bringing development and operations closer together. The concept of DevOps, a paradigm that stresses communication, collaboration and integration between software developers and IT operations professionals, is increasingly being adopted by enterprises that use Cloud Computing. This communication, collaboration and integration must involve – indeed must start with – enterprise architects, and it must include the establishment and monitoring of Cloud ROI models. All of these professionals must co-operate to ensure that the Cloud-enabled enterprise keeps to its financial course.

The Architect as Pilot

The TOGAF® architecture development method includes a phase (Phase G) in which the architects participate in implementation governance. The following Phase H is currently devoted to architecture change management, with the objectives of ensuring that the architecture lifecycle is maintained, the architecture governance framework is executed, and the Enterprise Architecture capability meets current requirements. Perhaps Cloud architects should also think about ensuring that the system meets its business requirements, and continues to do so throughout its operation. They can then revisit earlier phases of the architecture development cycle (always a possibility in TOGAF) if it does not.

Flying the Cloud

Cloud Computing compresses the development lifecycle, cutting the time to market of new products and the time to operation of new enterprise systems. This is a huge benefit. It implies closer integration of architecture, development and operations. But this must be supported by proper instrumentation of the financial parameters of Cloud services, so that the architecture, development and operations professionals can keep the enterprise on course.

Flying by the seat of the pants must have been a great experience for the magnificent men in the flying machines of days gone by, but no one would think of taking that risk with the lives of 500 passengers on a modern aircraft. The business managers of a modern enterprise should not have to take that risk either. We must develop standard Cloud metrics and ROI models, so that they can have instruments to measure success.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Filed under Cloud/SOA

Data Governance: A Fundamental Aspect of IT

By E.G. Nadhan, HP

In an earlier post, I explained how you can build upon SOA governance to realize Cloud governance. But underlying both paradigms is a fundamental aspect that we have been dealing with ever since the dawn of IT—and that’s the data itself.

In fact, IT used to be referred to as “data processing.” Despite the continuing evolution of IT through various platforms, technologies, architectures and tools, at the end of the day IT is still processing data. However, the data has taken multiple shapes and forms—both structured and unstructured. And Cloud Computing has opened up opportunities to process and store structured and unstructured data. There has been a need for data governance since the day data processing was born, and today, it’s taken on a whole new dimension.

“It’s the economy, stupid,” was a campaign slogan, coined to win a critical election in the United States in 1992. Today, the campaign slogan for governance in the land of IT should be, “It’s the data, stupid!”

Let us challenge ourselves with a few questions. Consider them the what, why, when, where, who and how of data governance.

What is data governance? It is the mechanism by which we ensure that the right corporate data is available to the right people, at the right time, in the right format, with the right context, through the right channels.

Why is data governance needed? The Cloud, social networking and user-owned devices (BYOD) have acted as catalysts, triggering unprecedented data growth in recent years. We need to control and understand the data we are dealing with in order to process it effectively and securely.

When should data governance be exercised? Well, when shouldn’t it be? Data governance kicks in at the source, where the data enters the enterprise. It continues across the information lifecycle, as data is processed and consumed to address business needs. And it is also essential when data is archived and/or purged.

Where does data governance apply? It applies to all business units and across all processes. Data governance has a critical role to play at the point of storage—the final checkpoint before data is stored as the “golden” record in a database. Data governance also applies across all layers of the architecture:

  • Presentation layer where the data enters the enterprise
  • Business logic layer where the business rules are applied to the data
  • Integration layer where data is routed
  • Storage layer where data finds its home

Who does data governance apply to? It applies to all business leaders, consumers, generators and administrators of data. It is a good idea to identify stewards who own key data domains. Stewards must ensure that their data domains abide by the enterprise architectural principles, and should continuously analyze the impact of various business events on their domains.

How is data governance applied? Data governance must be exercised at the enterprise level with federated governance to individual business units and data domains. It should be proactively exercised when a new process, application, repository or interface is introduced.  Existing data is likely to be impacted.  In the absence of effective data governance, data is likely to be duplicated, either by chance or by choice.

In our data universe, “informationalization” yields valuable intelligence that enables effective decision-making and analysis. However, even having the best people, process and technology is not going to yield the desired outcomes if the underlying data is suspect.

How about you? How is the data in your enterprise? What governance measures do you have in place? I would like to know.

A version of this blog post was originally published on HP’s Journey through Enterprise IT Services blog.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has more than 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project, and is also the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.


Filed under Cloud, Cloud/SOA

2013 Open Group Predictions, Vol. 1

By The Open Group

A big thank you to all of our members and staff who have made 2012 another great year for The Open Group. There were many notable achievements this year, including the release of ArchiMate 2.0, the launch of the Future Airborne Capability Environment (FACE™) Technical Standard and the publication of the SOA Reference Architecture (SOA RA) and the Service-Oriented Cloud Computing Infrastructure Framework (SOCCI).

As we wrap up 2012, we couldn’t help but look towards what is to come in 2013 for The Open Group and the industries we’re a part of. Without further ado, here are our predictions:

Big Data
By Dave Lounsbury, Chief Technical Officer

Big Data is on top of everyone’s mind these days. Consumerization, mobile smart devices, and expanding retail and sensor networks are generating massive amounts of data on behavior, environment, location, buying patterns and more, producing what is being called “Big Data”. In addition, as the use of personal devices and social networks continues to gain popularity, so does the expectation of having access to such data, and the computational power to use it, anytime, anywhere. Organizations will turn to IT to restructure its services to meet this growing expectation of control over, and access to, data.

Organizations must embrace Big Data to drive their decision-making and to provide the optimal mix of services to customers. Big Data is becoming so big that the big challenge is how to use it to make timely decisions. IT naturally focuses on collecting data, so Big Data itself is not the issue. To allow humans to keep on top of this flood of data, industry will need to move away from programming computers to store and process data, and toward teaching computers how to assess large amounts of uncorrelated data and draw inferences from that data on their own. We also need to start thinking about the skills that people need in the IT world not only to handle Big Data, but to make it actionable. Do we need “Data Architects,” and if so, what would their role be?

In 2013, we will see the beginning of the Intellectual Computing era. IT will play an essential role in this new era and will need to help enterprises look at uncorrelated data to find the answer.

Security

By Jim Hietala, Vice President of Security

As 2012 comes to a close, some of the big developments in security over the past year include:

  • Continuation of hacktivism attacks.
  • Increase of significant and persistent threats targeting government and large enterprises. The notable U.S. National Strategy for Trusted Identities in Cyberspace started to make progress in the second half of the year in terms of industry and government movement to address fundamental security issues.
  • Security breaches were discovered by third parties, where the organizations affected had no idea that they were breached. Data from the 2012 Verizon report suggests that 92 percent of companies breached were notified by a third party.
  • Acknowledgement from senior U.S. cybersecurity professionals that organizations fall into two groups: those that know they’ve been penetrated, and those that have been penetrated, but don’t yet know it.

In 2013, we’ll no doubt see more of the same on the attack front, plus increased focus on mobile attack vectors. We’ll also see more focus on detective security controls, reflecting greater awareness of the threat and of the reality that many large organizations have already been penetrated; responding appropriately therefore requires far more attention to detection and incident response.

We’ll also likely see the U.S. move forward with cybersecurity guidance from the executive branch, in the form of a Presidential directive. New national cybersecurity legislation seemed to come close to happening in 2012, and when it failed to become a reality, there were many indications that the administration would make something happen by executive order.

Enterprise Architecture

By Leonard Fehskens, Vice President of Skills and Capabilities

Preparatory to my looking back at 2012 and forward to 2013, I reviewed what I wrote last year about 2011 and 2012.

Probably the most significant thing from my perspective is that so little has changed. In fact, I think in many respects the confusion about what Enterprise Architecture (EA) and Business Architecture are about has gotten worse.

The stress within the EA community continues to grow as both the demands being placed on it and the diversity of opinion within it increase. This year, I saw a lot more concern about the value proposition for EA, but not a lot of (read “almost no”) convergence on what that value proposition is.

Last year I wrote “As I expected at this time last year, the conventional wisdom about Enterprise Architecture continues to spin its wheels.”  No need to change a word of that. What little progress at the leading edge was made in 2011 seems to have had no effect in 2012. I think this is largely a consequence of the dust thrown in the eyes of the community by the ascendance of the concept of “Business Architecture,” which is still struggling to define itself.  Business Architecture seems to me to have supplanted last year’s infatuation with “enterprise transformation” as the means of compensating for the EA community’s entrenched IT-centric perspective.

I think this trend and the quest for a value proposition are symptomatic of the same thing — the urgent need for Enterprise Architecture to make its case to its stakeholder community, especially to the people who are paying the bills. Something I saw in 2011 that became almost epidemic in 2012 is conflation — the inclusion under the Enterprise Architecture umbrella of nearly anything with the slightest taste of “business” to it. This has had the unfortunate effect of further obscuring the unique contribution of Enterprise Architecture, which is to bring architectural thinking to bear on the design of human enterprise.

So, while I’m not quite mired in the slough of despond, I am discouraged by the community’s inability to advance the state of the art. In a private communication to some colleagues I wrote, “the conventional wisdom on EA is at about the same state of maturity as 14th century cosmology. It is obvious to even the most casual observer that the earth is both flat and the center of the universe. We debate what happens when you fall off the edge of the Earth, and is the flat earth carried on the back of a turtle or an elephant?  Does the walking of the turtle or elephant rotate the crystalline sphere of the heavens, or does the rotation of the sphere require the turtlephant to walk to keep the earth level?  These are obviously the questions we need to answer.”

Cloud

By Chris Harding, Director of Interoperability

2012 has seen the establishment of Cloud Computing as a mainstream resource for enterprise architects and the emergence of Big Data as the latest hot topic, likely to be mainstream for the future. Meanwhile, Service-Oriented Architecture (SOA) has kept its position as an architectural style of choice for delivering distributed solutions, and the move to ever more powerful mobile devices continues. These trends have been reflected in the activities of our Cloud Computing Work Group and in the continuing support by members of our SOA work.

The use of Cloud, Mobile Computing, and Big Data to deliver on-line systems that are available anywhere at any time is setting a new norm for customer expectations. In 2013, we will see the development of Enterprise Architecture practice to ensure the consistent delivery of these systems by IT professionals, and to support the evolution of creative new computing solutions.

IT systems are there to enable the business to operate more effectively. Customers expect constant on-line access through mobile and other devices. Business organizations work better when they focus on their core capabilities, and let external service providers take care of the rest. On-line data is a huge resource, so far largely untapped. Distributed, Cloud-enabled systems, using Big Data, and architected on service-oriented principles, are the best enablers of effective business operations. There will be a convergence of SOA, Mobility, Cloud Computing, and Big Data as they are seen from the overall perspective of the enterprise architect.

Within The Open Group, the SOA and Cloud Work Groups will continue their individual work, and will collaborate with other forums and work groups, and with outside organizations, to foster the convergence of IT disciplines for distributed computing.


Filed under Business Architecture, Cloud, Cloud/SOA, Cybersecurity, Enterprise Architecture

Discover the World’s First Technical Cloud Computing Standard… for the Second Time

By E.G. Nadhan, HP

Have you heard of the first technical standard for Cloud Computing—SOCCI (pronounced “saw-key”)? Wondering what it stands for? It stands for Service Oriented Cloud Computing Infrastructure.

Whether you are just beginning to deploy solutions in the cloud or already have cloud solutions deployed, SOCCI can be applied to each organization’s particular situation. Wherever you are on the spectrum of cloud adoption, the standard offers a well-defined set of architecture building blocks with specific roles outlined in detail. Thus, the standard can be used in multiple ways, including:

  • Defining the service oriented aspects of your infrastructure in the cloud as part of your reference architecture
  • Validating your reference architecture to ensure that these building blocks have been appropriately addressed

The standard provides you an opportunity to systematically perform the following in the context of your environment:

  • Identify synergies between service orientation and the cloud
  • Extend adoption of  traditional and service-oriented infrastructure in the cloud
  • Apply the consumer, provider and developer viewpoints on your cloud solution
  • Incorporate foundational building blocks into enterprise architecture for infrastructure services in the cloud
  • Implement cloud-based solutions using different infrastructure deployment models
  • Realize business solutions referencing the business scenario analyzed in this standard

Are you going to be SOCCI’s first application? Are you among the cloud innovators—opting not to wait when the benefits can be had today?

Incidentally, I will be presenting this standard for the second time at the HP Discover Conference in Frankfurt on December 5, 2012. I plan on discussing this standard, as well as its application in a hypothetical business scenario, so that we can collectively brainstorm on how it could apply in different business environments.

In an earlier tweet chat on cloud standards, I tweeted: “Waiting for standards is like waiting for Godot.” After the #DT2898 session at HP Discover 2012, I expect to tweet, “Waiting for standards may be like waiting for Godot, but waiting for the application of a standard does not have to be so.”

A version of this blog post originally appeared on the Journey through Enterprise IT Services Blog.

HP Distinguished Technologist and Cloud Advisor E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise-level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project. Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.


Filed under Cloud, Cloud/SOA