Tag Archives: architecture

“New Now” Planning

By Stuart Boardman, KPN

In my last post I introduced the idea of “the new now,” which I borrowed from Jack Martin Leith. I suggested that the planning of large transformation projects needs to focus more on the first step than on the end goal, because that first step, once taken, will be the “new now” – the reality with which the organization will have to work. There were some interesting comments that have helped me further develop my ideas. I also got pointed, via Twitter, to this interesting and completely independent piece that comes to very similar conclusions.

I promised to try to explain how this might work in practice, so here goes…

As I see it, we would start our transformation program by looking at both the first step and the long term vision more or less in parallel.

In order to establish what that first step should be, we need to ask what we want the “new now” to look like. If we could have a “new now” – right now – what would that be? In other words, what is it that we can’t do at the moment that we believe we really need to be able to do? This is a question that should be asked as broadly as possible across the organization. There are three reasons for that:

  1. We’ll probably come across a variety of opinions and we’ll need to know why they vary and why people think they are important, if we are to define something feasible and useful. It’s also possible that out of this mixture of views something altogether different may emerge.
  2. Changes in the relatively near future will tend to be changes to operational practices and those are best determined and managed by the part of the organization that performs them (see Stafford Beer’s Viable System Model and associated work by Patrick Hoverstadt and others).
  3. Everyone’s going to experience the “new now” (that’s why we call it the “new now”), so it would be good not to just drop it on them as if this were a new form of big bang. If we involve them now, they’ll know what’s coming and be more likely to accept it than if they were just “informed.” And at least we’ll know how people will react if the “new now” doesn’t meet their particular wishes.

This process addresses, I hope, both Ron van den Burg’s comment about different people having different “horizons” and an interesting observation made by Mark Skilton at The Open Group Conference in Newport Beach that at any one time an organization may have a large number of “strategies” in play.

The longer term perspective is about vision and strategy. What is the vision of the enterprise and what does it want to become? What are the strategies to achieve that? That’s something typically determined at the highest levels of an organization, even though one might hope these days that the whole organization would be able to contribute. For the moment, we’ll regard it as a board decision.

Maybe the board is perfectly happy and doesn’t need to change the vision or strategy. In that case we’re not talking about transformation, so let’s assume they do see a need to change something. A strategic change doesn’t necessarily have to affect the entire organization. It may be that the way a particular aspect of the enterprise’s mission is performed needs to be changed. Nonetheless if it’s at a strategic level it’s going to involve a transformation.

Now we can lay the “new now” and the long term vision next to each other and see how well they fit. Is the first step indeed a step towards the vision? If not, we need to understand why. Traditionally we would tend to say the first step must then be wrong. That’s a possibility, but it’s equally possible that the long-term view is simply too long-term and is missing key facts about the organization. The fact alone that the two don’t fit may indicate a disconnect within the organization and require a different change altogether. So simply by performing this action, we are addressing one of the risks to a transformation project. If we had simply defined the first step based on the long-term vision, we’d probably have missed it. If, however, the fit is indeed good, then we know we have organizational buy-in for the transformation.

Once we have broad alignment, we need to re-examine the first step for feasibility. It mustn’t be more ambitious than we can deliver within a reasonable time and budget. Nothing new there. What is different is that while we require the first step to be aware of the long term vision, we don’t expect it to put a platform in place for everything the future may bring. That’s exactly what it shouldn’t do, because the only thing we know for certain is that we need to be adaptable to change.

What about the second step? We’ve delivered the first step. We’re at the “new now.” How does that feel? Where would we like to be now? This is essentially an iteration of the process we used for the first step. There’s a strong chance that we’ll get a different result than we would have had if we’d planned this second step back at the beginning. After all, we have a new “now,” so our starting state is something that we couldn’t experience back then. We also need to revisit the vision/strategy aspect. The world (the Environment in VSM terms) will not have stood still in the meantime. One would hope that our vision wasn’t so fragile that it would change drastically, but at the very least we need to re-validate it.

So now we can compare the new next step and the (revised) vision, just as we did with our first step. And then we move on.

So what this process comes down to is essentially a series of movements to a “new now.” After each movement we have a new reality. So yes, we’re still planning. We’re just not making hard plans for fuzzy objectives. Our planning process is as flexible as our results need to be. Of course that doesn’t mean we can’t start thinking about step two before we actually arrive at step one but these plans only become concrete when we know what the “new now” feels like and therefore exactly what the following “new now” should be.

In their comments on the previous blog both Matt Kern and Peter Bakker made the reasonable points that without a plan, you’re probably not going to get funding. The other side of the coin is that these days (and actually for a few years now) it’s increasingly difficult to get funding for multi-year transformation processes, exactly because the return on investment takes too long – and is too uncertain. That’s exactly what I’m trying to address. The fundamental concept of “new now” planning is that something of agreed value is delivered within an acceptable timescale. Isn’t that more likely to get funding?

Once again, I’d be delighted to see people’s reaction to these ideas. I’m 100 percent certain they can be improved.

Stuart Boardman is a Senior Business Consultant with KPN where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity. 


Filed under Enterprise Architecture

The Death of Planning

By Stuart Boardman, KPN

If I were to announce that planning large scale transformation projects was a waste of time, you’d probably think I’d taken leave of my senses. And yet, somehow this thought has been nagging at me for some time now. Bear with me.

It’s not so long ago that we still had debates about whether complex projects should be delivered as a “big bang” or in phases. These days the big bang has pretty much been forgotten. Why is that? I think the main reason is the level of risk involved with running a long process and dropping it into the operational environment just like that. This applies to any significant change, whether related to a business model and processes or IT architecture or physical building developments. Even if it all works properly, the level of sudden organizational change involved may stop it in its tracks.

So it has become normal to plan the change as a series of phases. We develop a roadmap to get us from here (as-is) to the end goal (to-be). And this is where I begin to identify the problem.

A few months ago I spent an enjoyable and thought-provoking day with Jack Martin Leith (@jackmartinleith). Jack is a master at demystifying clichés, but when he announced his irritation with “change is a journey,” I could only respond, “but Jack, it is.” What Jack made me see is that, whilst the original usage was a useful insight, it’s become a cliché that is now commonly misused. It results in some pretty frustrating journeys! To understand that, let’s take the analogy literally. Suppose your objective is to travel to San Diego but there are no direct flights from where you live. If the first step on your journey is a 4-hour layover at JFK, that’s at best a waste of your time and energy. There’s no value in this step. A day in Manhattan might be a different story. We can (and do) deal with this kind of thing for journeys of a day or so, but imagine a journey that takes three or more years and all you see on the way is the inside of airports.

My experience has been that the same problem too often manifests itself in transformation programs. The first step may be logical from an implementation perspective, but it delivers no discernible value (tangible or intangible). It’s simply a validation that something has been done, as if, in our travel analogy, we were celebrating travelling the first 1000 kilometers, even if that put us somewhere over the middle of Lake Erie.

What would be better? An obvious conclusion that many have drawn is that we need to ensure every step delivers business value but that’s easier said than done.

Why is it so hard? The next thing Jack said helped me understand why. His point is that when you’ve taken the first step on your journey, it’s not just some intermediate station. It’s the “new now.” The new reality. The new as-is. And if the new reality is hanging around in some grotty airport trying to do your job via a Wi-Fi connection of dubious security and spending too much money on coffee and cookies… you get the picture.

The problem with identifying that business value is that we’re not focusing on the new now but on something much more long-term. We’re trying to interpolate the near term business value out of the long term goal, which wasn’t defined based on near term needs.

What makes this all the more urgent is the increasing rate and unpredictability of change – in all aspects of doing business. This has led us to shorter planning horizons and an increasing tendency to regard that “to be” as nothing more than a general sense of direction. We’re thinking, “If we could deliver the whole thing really, really quickly on the basis of what we know we’d like to be able to do now, if it were possible, then it would look like this” – but knowing all the time that by the time we get anywhere near that end goal, it will have changed. It’s pretty obvious then that a first step, whose justification is entirely based on that imagined end goal, can easily be of extremely limited value.

So why not put more focus on the first step? That’s going to be the “new now.” How about making that our real target? Something that the enterprise sees as real value and that is actually feasible in a reasonable time scale (whatever that is). Instead of scoping that step as an intermediate (and rather immature) layover, why not put all our efforts into making it something really good? And when we get there and people know how the new now looks and feels, we can all think afresh about where to go next. After all, a journey is not simply defined by its destination but by how you get there and what you see and do on the way. If the actual journey itself is valuable, we may not want to get to the end of it.

Now that doesn’t mean we have to forget all about where we might want to be in three or even five years — not at all. The long term view is still important in helping us to make smart decisions about shorter term changes. It helps us allow for future change, even if only because it lets us see how much might change. And that helps us make sound decisions. But we should accept that our three or five year horizon needs to be continually open to revision – not on some artificial yearly cycle but every time there’s a “new now.” And this needs to include the times where the new now is not something we planned but is an emergent development from within or outside of the enterprise or is due to a major regulatory or market change.

So, if the focus is all on the first step and if our innovation cycle is getting steadily shorter, what’s the value of planning anything? Relax, I’m not about to fire the entire planning profession. If you don’t plan how you’re going to do something, what your dependencies are, how to react to the unexpected, etc., you’re unlikely to achieve your goal at all. Arguably that’s just project planning.

What about program planning? Well, if the program is so exposed to change maybe our concept of program planning needs to change. Instead of the plan being a thing fixed in stone that dictates everything, it could become a process in which the whole enterprise participates – itself open to emergence. The more I think about it, the more appealing that idea seems.

In my next post, I’ll go into more detail about how this might work, in particular from the perspective of Enterprise Architecture. I’ll also look more at how “the new planning” relates to innovation, emergence and social business and at the conflicts and synergies between these concerns. In the meantime, feel free to throw stones and see where the story doesn’t hold up.



Filed under Enterprise Architecture, Uncategorized

Flying in the Cloud by the Seat of Our Pants

By Chris Harding, The Open Group

In the early days of aviation, when instruments were unreliable or non-existent, pilots often had to make judgments by instinct. This was known as “flying by the seat of your pants.” It was exciting, but error prone, and accidents were frequent. Today, enterprises are in that position with Cloud Computing.

Staying On Course

Flight navigation does not end with programming the flight plan. The navigator must check throughout the flight that the plane is on course.  Successful use of Cloud requires, not only an understanding of what it can do for the business, but also continuous monitoring that it is delivering value as expected. A change of service-level, for example, can have as much effect on a user enterprise as a change of wind speed on an aircraft.

The Open Group conducted a Cloud Return on Investment (ROI) survey in 2011. Then, 55 percent of those surveyed felt that Cloud ROI would be easy to evaluate and justify, although only 35 percent had mechanisms in place to do it. When we repeated the survey in 2012, we found that the proportion that thought it would be easy had gone down to 44 percent, and only 20 percent had mechanisms in place. This shows, arguably, more realism, but it certainly doesn’t show any increased tendency to monitor the value delivered by Cloud. In fact, it shows the reverse. The enterprise pilots are flying by the seats of their pants. (The full survey results are available at http://www.opengroup.org/sites/default/files/contentimages/Documents/cloud_roi_formal_report_12_19_12-1.pdf)

They Have No Instruments

It is hard to blame the pilots for this, because they really do not have the instruments. The Open Group published a book in 2011, Cloud Computing for Business, that explains how to evaluate and monitor Cloud risk and ROI, with spreadsheet examples. The spreadsheet is pretty much the state-of-the-art in Cloud ROI instrumentation.  Like a compass, it is robust and functional at a basic level, but it does not have the sophistication and accuracy of a satellite navigation system. If we want better navigation, we must have better systems.
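
To make the idea of ROI instrumentation a little more concrete, here is a minimal sketch, in Python, of the kind of calculation such a spreadsheet performs. The cost figures and the function name are invented for illustration; this is not the model published in Cloud Computing for Business.

```python
# A deliberately simple, hypothetical Cloud ROI calculation.
# All figures below are illustrative assumptions, not survey or book data.

def cloud_roi(on_premise_annual_cost: float,
              cloud_annual_cost: float,
              migration_cost: float,
              years: int) -> float:
    """Return ROI as a fraction: net saving over the period divided by the
    up-front migration investment."""
    annual_saving = on_premise_annual_cost - cloud_annual_cost
    net_benefit = annual_saving * years - migration_cost
    return net_benefit / migration_cost

# Example: saving 200,000 a year for 3 years against a 250,000 migration outlay.
print(f"{cloud_roi(500_000, 300_000, 250_000, 3):.0%}")  # -> 140%
```

The navigation point still applies: a figure like this is only trustworthy if its inputs (service levels, usage, prices) are re-measured throughout the flight, which is exactly the ongoing monitoring the survey suggests most enterprises are not doing.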

There is scope for Enterprise Architecture tool vendors to fill this need. As the inclusion of Cloud in Enterprise Architectures becomes commonplace, and Cloud Computing metrics and their relation to ROI become better understood, it should be possible to develop the financial components of Enterprise Architecture modeling tools so that the business impact of the Cloud systems can be seen more clearly.

The Enterprise Flight Crew

But this is not just down to the architects. The architecture is translated into systems by developers, and the systems are operated by operations staff. All of these people must be involved in the procurement and configuration of Cloud services and their monitoring through the Cloud buyers’ life cycle.

Cloud is already bringing development and operations closer together. The concept of DevOps, a paradigm that stresses communication, collaboration and integration between software developers and IT operations professionals, is increasingly being adopted by enterprises that use Cloud Computing. This communication, collaboration and integration must involve – indeed must start with – enterprise architects, and it must include the establishment and monitoring of Cloud ROI models. All of these professionals must co-operate to ensure that the Cloud-enabled enterprise keeps to its financial course.

The Architect as Pilot

The TOGAF® architecture development method includes a phase (Phase G) in which the architects participate in implementation governance. The following Phase H is currently devoted to architecture change management, with the objectives of ensuring that the architecture lifecycle is maintained, the architecture governance framework is executed, and the Enterprise Architecture capability meets current requirements. Perhaps Cloud architects should also think about ensuring that the system meets its business requirements, and continues to do so throughout its operation. They can then revisit earlier phases of the architecture development cycle (always a possibility in TOGAF) if it does not.

Flying the Cloud

Cloud Computing compresses the development lifecycle, cutting the time to market of new products and the time to operation of new enterprise systems. This is a huge benefit. It implies closer integration of architecture, development and operations. But this must be supported by proper instrumentation of the financial parameters of Cloud services, so that the architecture, development and operations professionals can keep the enterprise on course.

Flying by the seat of the pants must have been a great experience for the magnificent men in the flying machines of days gone by, but no one would think of taking that risk with the lives of 500 passengers on a modern aircraft. The business managers of a modern enterprise should not have to take that risk either. We must develop standard Cloud metrics and ROI models, so that they can have instruments to measure success.

Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. He is a member of the BCS, the IEEE and the AEA, and is a certified TOGAF practitioner.


Filed under Cloud/SOA

PODCAST: The Open Group FACE™ Consortium is Providing the Future of Airborne Systems

By The Open Group Staff

Recently, Judy Cerenzia, director of The Open Group Future Airborne Capability Environment (FACE™) Consortium, sat down with Defense IQ to talk about FACE and its support for open architectures. The interview is in conjunction with the Interoperable Open Architecture (IOA) Conference taking place in London, October 29-31, 2012.

In the podcast interview, Judy talks about the FACE Consortium, an aviation-focused professional group made up of U.S. industry suppliers, customers and users, and its work to create a technologically appropriate open FACE reference architecture, standards and business models that point the way to the warfighter of tomorrow. Judy also discusses the evolution of FACE standards and business guidelines and what that means to the marketplace.

About IOA 2012

The IOA Conference will take place October 29-31, 2012 in London. The conference looks to make open systems truly open by empowering attendees to base future platform architectures on publicly available standards. More information about IOA is available on its website, and registration is available here.


Filed under Conference, FACE™

Secrets Behind the Rapid Growth of SOA

By E.G. Nadhan, HP

Service Oriented Architecture has been around for more than a decade and has steadily matured over the years with increasing levels of adoption. Cloud computing, a paradigm that is founded upon the fundamental service oriented principles, has fueled SOA’s adoption in recent years. ZDNet blogger Joe McKendrick calls out a survey by Companies and Markets in one of his blog posts – SOA market grew faster than expected.

Some of the statistics from this survey as referenced by McKendrick include:

  • SOA represents a total global market value of $5.518 billion, up from $3.987 billion in 2010 – or a 38% growth.
  • The SOA market in North America is set to grow at a compound annual growth rate (CAGR) of 11.5% through 2014. (A quick check of the arithmetic behind these figures follows the list.)
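
As a quick sanity check, the quoted growth figure can be reproduced from the two market values, and the CAGR can be turned into a multi-year multiplier. The numbers below are taken directly from the survey as cited; the three-year horizon is simply an illustrative assumption.

```python
# Reproducing the quoted figures (market values in US$ billions, from the survey).
previous, current = 3.987, 5.518
growth = current / previous - 1
print(f"{growth:.1%}")  # -> 38.4%, the "38% growth" quoted above

# Compounding the 11.5% North American CAGR over an illustrative three years:
cagr = 0.115
print(f"{(1 + cagr) ** 3:.2f}x")  # -> about 1.39x the starting market size
```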

So, what are the secrets of the success that SOA seems to be enjoying?  During the past decade, I can recall a few skeptics who were not so sure about SOA’s adoption and growth.  But I believe there are 5 “secrets” behind the success story of SOA that should put such skepticism to rest:

  1. Architecture. Service oriented architectures have greatly facilitated a structured approach to enterprise architecture (EA) at large. Despite debates over the scope of EA and SOA, the fact remains that service orientation is an integral part of the foundational factors considered by the enterprise architect. If anything, it has also acted as a catalyst for giving more visibility to the need for well-defined enterprise architecture to be in place for the current and desired states.
  2. Application. Service orientation has promoted standardized interfaces that have enabled the continued existence of multiple applications in an integrated, cohesive manner. Thanks to an SOA-based approach, integration mechanisms are no longer held hostage to proprietary formats and legacy platforms (a minimal sketch of this contract-based decoupling follows the list).
  3. Availability. Software vendors have taken the initiative to make their functionality available through services. How many times have you heard a software vendor suggest Web services as the de facto method for integrating with other systems? Single-click generation of a Web service is a very common feature across most of the software tools used for application development.
  4. Alignment. SOA has greatly facilitated and realized increased alignment from multiple fronts including the following:
    • Business to IT. The definition of application and technology services is really driven by the business need in the form of business services.
    • Application to Infrastructure. SOA strategies for the enterprise have gone beyond the application layer to the infrastructure, resulting in greater alignment between the application being deployed and the supporting infrastructure. Infrastructure services are an integral part of the comprehensive set of services landscape for an enterprise.
    • Platforms and technology. Interfaces between applications are much less dependent on the underlying technologies or platforms, resulting in increased alignment between various platforms and technologies. Interoperability has been taken to new levels across the extended enterprise.
  5. Adoption. SOA has served as the cornerstone for new paradigms like cloud computing. Increased adoption of SOA has also resulted in the evolution of multiple industry standards for SOA and has also led to the evolution of standards for infrastructure services to be provisioned in the cloud. Standards do take time to evolve, but when they do, it is a tacit endorsement by the IT industry of the maturity of the underlying phenomenon — in this case, SOA.
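
To make the “standardized interfaces” of points 2 and 3 a little more tangible, here is a minimal, hypothetical sketch of contract-based decoupling. The service name, operation and classes are invented for illustration; the only claim is the structural one: the consumer depends on a declared contract, not on any particular provider or platform.

```python
# A minimal sketch of contract-based decoupling (all names are hypothetical).
from abc import ABC, abstractmethod


class CustomerLookupService(ABC):
    """The service contract: one operation with typed input and output."""

    @abstractmethod
    def get_customer(self, customer_id: str) -> dict:
        ...


class LegacyMainframeAdapter(CustomerLookupService):
    """One possible provider; a SaaS or packaged application could implement
    the same contract without the consumer changing a line."""

    def get_customer(self, customer_id: str) -> dict:
        # In reality this would call the legacy system; stubbed for the sketch.
        return {"id": customer_id, "name": "Example Customer"}


def billing_process(service: CustomerLookupService, customer_id: str) -> str:
    # The consumer is written against the contract, not against a provider.
    return service.get_customer(customer_id)["name"]


print(billing_process(LegacyMainframeAdapter(), "42"))  # -> Example Customer
```

Whether the contract is expressed as WSDL, a REST description or a language-level interface as here, it is this decoupling that lets legacy, packaged and SaaS providers coexist behind the same service.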

Thus, the application of service oriented principles across the enterprise has increased SOA’s adoption, spurred by the availability of readily exposed services across all architectural layers and resulting in increased alignment between business and IT.

What about you? What factors come to your mind as SOA success secrets? Is your SOA experience in alignment with the statistics from the report McKendrick referenced? I would be interested to know.

Reposted with permission from CIO Magazine.

HP Distinguished Technologist E.G. Nadhan has over 25 years of experience in the IT industry across the complete spectrum of selling, delivering and managing enterprise level solutions for HP customers. He is the founding co-chair for The Open Group SOCCI project and is also the founding co-chair for The Open Group Cloud Computing Governance project. Twitter handle: @NadhanAtHP.


Filed under Cloud/SOA

Three Things I Wish I Had Known When I Started My Career

By Leonard Fehskens, The Open Group

It being the time of year for commencement speeches, Patty Donovan asked if I could offer some advice to graduates entering the Enterprise Architecture profession.

She specifically asked what three things I wished I had known when I began my career, and it’s impossible to resist the setup.  I wish I had known:

1)   What stocks to buy and sell when

2)   Which managers at what companies to work for

3)   Which personal relationships to pursue and which to avoid

Had I known these things, my life would likely have been free of much unproductive stress.

OK; that’s not really helpful advice; these aren’t things that one can actually know in advance.

But there are some things that I sort of knew when I got out of school, that in retrospect have proven to be far more important than I imagined at the time.  They are:

1)   Things, especially big things, only get done by collaborating with other people.

2)   Be open to other perspectives.

3)   Nothing in the real world is linear or one-dimensional.

4)   You have to be able to commit, and be prepared to deal with the consequences.

Let’s explore each of these in turn.

Things, especially big things, only get done by collaborating with other people

This seems pretty obvious, but we never seem to take it into account.  Unless you’re a genius of staggering magnitude, your success is going to be largely dependent on your ability to work with other people.

If you majored in some aspect of information systems, unless you minored in psychology or sociology, it’s unlikely you took more than one or two elective courses in one or the other. If you’re lucky, the company you work for will send you on a two or three day “team building exercise” every few years. If you’re really lucky, you may get sent to a week-long “executive development program” in leadership or “organizational dynamics.” These sorts of development programs used to be much more common, but are now much harder to cost-justify. My experience with these things was that they were often interesting, though some of the exercises were a bit contrived. But the key problem was that whatever one might learn from them was easily forgotten without any subsequent coaching and reinforcement, washed out by the implicit assumption that how to collaborate as part of a team is something we all knew how to do intuitively.

So what we’re left with is “learning by doing,” and it’s clear from experience that this basically means picking up habits that, without expert coaching, will be a random mix of both good and bad.  What can we do about this?

Most organizations have an HR policy about staff development plans, and while people are rarely held accountable for not carrying out such a plan, a sensible request to take advantage of the policy will also rarely be refused.  Don’t neglect any opportunity you get to develop your “soft skills” or “people skills.”

 Be open to other perspectives

A thoughtfully open mind—the ability to recognize good ideas and not so good ideas, especially when they’re someone else’s ideas—is probably one of the most useful and most difficult faculties to develop.

It’s a cliché that truly effective communication is difficult.  In practice I have found this often means that we don’t understand why someone takes a position different from ours, and without that understanding, it is too easy to discount that position.  This is compounded by our predisposition, especially among techno-dweeb-weenies, to focus on differences rather than similarities, something Freud called the “narcissism of small differences.”

Fred Brooks (“The Mythical Man-Month,” “The Design of Design”) has long argued that the chief or lead architect is responsible for ensuring the “conceptual integrity” of a design, but this doesn’t mean that all the ideas have to come from that architect. Nobody has all the answers. It is the architect’s responsibility to synthesize worthwhile contributions, wherever they come from, into an integrated whole.

 Nothing in the real world is linear or one-dimensional

When I moved on to a new position after leading an architecture team for several years at Digital Equipment Corporation, the team gave me two rubber stamps as a token of their appreciation.  One said “It depends …”, and the other said “Yes, but …”.

Though it’s almost never possible, or sensible, to rank anything non-trivial on a single linear scale, we try to do this all the time.  Simple models of complex things do not make those things simple.  Acting as if they do is called “magical thinking,” for a reason.

So there’s almost never going to be a clearly best answer.  The best we can do is understand what the tradeoffs are, and make them knowingly and deliberately.

 You have to be able to commit, and be prepared to deal with the consequences

Each of the above three lessons tends to complicate things, and complications tend to delay decision-making and commitment to a particular way forward.  While successful architects understand that delayed binding is often an effective design strategy, they also understand that they will never have all the information they need to make a fully informed decision, and finally, and most importantly, that you can’t postpone decisions indefinitely.  They seem to have a knack for understanding which decisions really need to be made when, and how to connect the information they do have into a coherent context for making those decisions.

But they also have contingency plans, and ways to tell as early as possible whether they need to use them.  In a genuinely supportive environment, it will be OK to reconsider a decision, but only if you do so as soon as you realize that you need to.

So, don’t make decisions any sooner than they must be made, but don’t make them any later either, and make sure you don’t “paint yourself into a corner.”

 Len Fehskens is Vice President of Skills and Capabilities at The Open Group. He is responsible for The Open Group’s activities relating to the professionalization of the discipline of enterprise architecture. Prior to joining The Open Group, Len led the Worldwide Architecture Profession Office for HP Services at Hewlett-Packard. Len is based in the US.


Filed under Enterprise Architecture, Professional Development

2012 Open Group Predictions, Vol. 1

By The Open Group

Foreword

By Allen Brown, CEO

2011 was a big year for The Open Group, thanks to the efforts of our members and our staff – you all deserve a very big thank you. There have been so many big achievements that to list them all here would mean we would never get to our predictions. Significantly though, The Open Group continues to grow, and this year the number of enterprise members passed the 400 mark, which means that around 30,000 people are involved, some more so than others, from all over the world.

Making predictions is always risky but we thought it might be fun anyway. Here are three trends that will wield great influence on IT in 2012 and beyond:

  • This year we experienced the consumerization of IT accelerating the pace of change for the enterprise at an astonishing rate as business users embraced new technologies that transformed their organizations. As this trend continues in 2012, the enterprise architect will play a critical role in supporting this change and enabling the business to realize their goals.
  • Enterprise architecture will continue its maturity in becoming a recognized profession. As the profession matures, employers of enterprise architects and other IT professionals, for that matter, will increasingly look for industry recognized certifications.
  • As globalization continues, security and compliance will be increasing issues for companies delivering products or services and there will be a growing spotlight on what might be inside IT products. Vendors will be expected to warrant that the products they purchase and integrate into their own products come from a trusted source and that their own processes can be trusted in order not to introduce potential threats to their customers. At the same time, customers will be increasingly sensitive to the security and dependability of their IT assets. To address this situation, security will continue to be designed in from the outset and be tightly coupled with enterprise architecture.

In addition to my predictions, other Open Group staff members also wanted to share their predictions for 2012 with you:

Security

By Jim Hietala, VP of Security

Cloud security in 2012 becomes all about point solutions to address specific security pain points. Customers are realizing that to achieve an acceptable level of security, whether for IaaS, SaaS, or PaaS, they need to apply controls in addition to the native platform controls from the Cloud service provider. In 2012, this will manifest as early Cloud security technologies target specific and narrow security functionality gaps. Specific areas where we see this playing out include data encryption, data loss prevention, identity and access management, and others.

Cloud

By Chris Harding, Director of Interoperability

There is a major trend towards shared computing resources that are “on the Cloud” – accessed by increasingly powerful and mobile personal computing devices but decoupled from the users.

This may bring some much-needed economic growth in 2012, but history shows that real growth can only come from markets based on standards. Cloud portability and interoperability standards will enable development of re-usable components as commodity items, but the need for them is not yet appreciated. And, even if the vendors wanted these standards for Cloud Computing, they do not yet have the experience to create good ones.  But, by the end of the year, we should understand Cloud Computing better and will perhaps have made a start on the standardization that will lead to growth in the years ahead.

Here are some more Cloud predictions from my colleagues in The Open Group Cloud Work Group: http://blog.opengroup.org/2011/12/19/cloud-computing-predictions-for-2012/

Business Architecture

By Steve Philp, Professional Certification

There are a number of areas for 2012 where Business Architects will be called upon to engage in transforming the business and applying technologies such as Cloud Computing, social networking and big data. Therefore, the need to have competent Business Architects is greater than ever. This year organizations have been recruiting and developing Business Architects and the profession as a whole is now starting to take shape. But how do you establish who is a practicing Business Architect?

In response to requests from our membership, next year The Open Group will incorporate a Business Architecture stream into The Open Group Certified Architect (Open CA) program. There has already been significant interest in this stream from both organizations and practitioners alike. This is because Open CA is a skills and experience based program that recognizes, at different levels, those individuals who are performing in a Business Architecture role. I believe this initiative will further help to develop the profession over the next few years and especially in 2012.


Filed under Business Architecture, Cloud, Cybersecurity, Enterprise Architecture, Enterprise Transformation, Semantic Interoperability, Uncategorized

New Open Group Guide Shows Enterprise Architects How to Maximize SOA Business Value with TOGAF®

By Awel Dico, Bank of Montreal

Service Oriented Architecture (SOA) has promised many benefits for both IT and business. As a result, it has been widely adopted as an architectural style among both private business and government enterprises. Despite SOA’s popularity, however, relatively few of these enterprises are able to measure and demonstrate the value of SOA to their organization. What is the problem and why is it so hard to demonstrate that SOA can deliver the much needed business value it promises? In this post I will point out some root causes for this problem and highlight how The Open Group’s new guide, titled “Using TOGAF® to Define and Govern Service-Oriented Architectures,” can help organizations maximize their return on investment with SOA.

The main problem is rooted in the way SOA adoption is approached. In most cases, organizations approach SOA by limiting the scope to individual solution implementation projects – using it purely as a tool to group software functions into services described by some standard interface. As a result, each SOA implementation is disconnected and void of the larger business problem context. This creates disconnected, technology-focused SOA silos that are difficult to manage and govern. Reuse of services across business lines, arguably one of the main advantages of SOA, in turn becomes very limited if not impossible without increased cost of integration.

SOA calls for a standards-based service infrastructure that requires big investment. I have seen many IT organizations struggle to establish a common SOA infrastructure, but fail to do so. The main reason for this failure is again the way SOA is approached in those organizations; limiting SOA’s scope to solution projects makes it hard for individual projects to justify the investment in service infrastructure. As a result, they fall back on tactical implementations that cannot be reused by other projects down the road.

The other culprit is that many organizations think SOA can be applied to all situations – failing to realize that there are cases when SOA is not a good approach at all. An SOA approach is not cheap, and trying to fit it to all situations results in an increased cost without any ROI.

Fortunately, there’s a solution to this problem. The Open Group SOA Work Group recently developed a short guide on how to use TOGAF® to define and govern SOA. The guide’s main goal is to enable enterprises to deliver the expected business value from their SOA initiatives. What’s great about TOGAF® in helping organizations approach SOA is the fact that it’s an architecture-style-agnostic and flexible framework that can be customized to various enterprise needs, architectural scopes and styles. In a nutshell, the guide recommends the incorporation of the SOA style in the EA framework through customization and enhancement of TOGAF® 9.

How does this solve the problem I pointed out above? Well, here’s how:

SOA, as an architectural style, becomes recognized as part of the organization’s overall Enterprise Architecture instead of being linked only to individual projects. The guide advises identifying SOA principles and establishing supporting architectural capabilities in the Preliminary Phase of TOGAF®. It also recommends establishing SOA governance and linking it to both IT and EA governance in the enterprise. These architecture capabilities take the heavy lifting away from the solution projects and ensure that any SOA initiative delivers business value to the enterprise. This means SOA projects in the enterprise share a larger enterprise context, and each project adds value to the whole enterprise business in an incremental, reusable fashion.

When TOGAF® is applied at the strategic level, SOA concepts can be incorporated into the strategy by identifying the business areas or segments in the enterprise that benefit from an SOA approach. Likewise, the strategy could point out the areas in which SOA is not adding any value to the business. This allows users to identify the expected key metrics from the start and focus their SOA investment on high value projects. This also makes sure that each smaller SOA project is initiated in the context of larger business objectives and, as such, can add measurable business value.

In summary, this short and concise guide links all the moving parts (such as SOA principles, SOA governance, Reference Architectures, SOA maturity, SOA Meta-model, etc.) and I think it is a must-read for any enterprise architect using TOGAF® as their organization’s EA framework and SOA as an architectural style. If you are wondering how these architectural elements fit together, I recommend you look at the guide and customize or extend its key concepts to your own situation. If you read it carefully, you will understand why SOA projects must have larger enterprise business context and how this can be done by customizing TOGAF® to define and govern your own SOA initiatives.

To download the guide for free, please visit The Open Group’s online bookstore.

Awel Dico, Ph.D., is Enterprise Architect for the Bank of Montreal. He is currently working on enterprise integration architecture and establishing best practice styles and patterns for bank-wide services integration. In the past he has consulted on various projects, worked with many teams across the organization and contributed to many architecture initiatives, including: leading mid-tier service infrastructure architecture; developing enterprise SOA principles, guidelines and standards; developing the SOA Service Compliance process; developing and applying architectural patterns; researching technology and industry trends; and contributing to the development of the bank’s Enterprise Reference Architecture blueprint. In addition, Dr. Dico currently co-chairs The Open Group SOA Work Group and The Open Group SOA/TOGAF Practical Guide Project. He also co-supervises PhD candidates in the Software Engineering track of the Computer Science program at Addis Ababa University. Dr. Dico is also a founder of a community college helping students in rural areas of Ethiopia.


Filed under Service Oriented Architecture

Enterprise Architecture & Emerging Markets

By Balasubramanian Somasundram, Honeywell Technology Solutions Ltd.

Recently I came across an interesting announcement by the SaaS vendor NetSuite on two-tier ERP, and an analyst’s observations on the same. The analyst mentioned that the industry is moving in cycles: from multiple ERP suites across a company’s locations, to flattening those differences by having a single corporate standard ERP, and now back to multiple ERP stacks with the advent of SaaS options.

The crux of this phenomenon is how we manage the technology selection across a globally distributed organization with diversified complexities. I see it as an interesting challenge for the Enterprise Architecture practice to solve.

Enterprise Architecture, when governed from the global/corporate headquarters of a company, needs to balance the needs of the global and local entities. Often those needs conflict, and it requires a lot of experience and courage to balance both. The local architecture needs are most often triggered by factors such as:

  • Cost – Need to have a cost-effective solution at an emerging region
  • Size – Need to have a lightweight solution rather than a heavyweight (ERP)
  • Regulatory/Compliance Requirements – Need to comply with local laws
  • Business Processes – Need to accommodate business process variations or cater to different customer segments

In the event of choosing a local solution that is not a corporate standard, there is a need to govern those architecture exceptions, including integrating the two different solutions so they can be managed cohesively. The two-tier ERP mentioned above is a typical example of this scenario.

If we visualize Enterprise Architecture as a series of layers – Business/Information/Technology/Application Architectures – the verticals/segments across those layers would define the organizational units/locations (location-specific or organizational-unit-specific Enterprise Architectures).

The location verticals, when influenced by the above factors, could lead to new technology selections such as Cloud Computing and Software-as-a-Service. While this practice can improve autonomy at the local level, if unmanaged it could soon lead to spaghetti architectures. The most important side effect of localized adoption of cloud computing or mobile is increased fragmentation (of data/process/technology). And that would defeat the purpose of Enterprise Architecture.

In another constructive scenario, if these standalone solutions need to exchange information with corporate information systems, again EA has a role to play by arbitrating the integration by the use of standards and guidelines.

As Serge Thorn articulated a few weeks ago on The Open Group blog, it’s time to review our EA practices and make amendments to the core frameworks and processes to face the challenges emerging from technology mega trends (Cloud/Mobile) and evolving business models (emerging markets).

Balasubramanian Somasundaram is an Enterprise Architect with Honeywell Technology Solutions Ltd, Bangalore, a division of Honeywell Inc, USA. Bala has been with Honeywell Technology Solutions for the past five years and contributed in several technology roles. His current responsibilities include Architecture/Technology Planning and Governance, Solution Architecture Definition for business-critical programs, and Technical oversight/Review for programs delivered from Honeywell IT India center. With more than 12 years of experience in the IT services industry, Bala has worked with a variety of technologies with a focus on IT architecture practice. His current interests include Enterprise Architecture, Cloud Computing and Mobile Applications. He periodically writes about emerging technology trends that impact the Enterprise IT space on his blog. Bala holds a Master of Science in Computer Science from MKU University, India.


Filed under Enterprise Architecture

The Open Group updates Enterprise Security Architecture, guidance and reference architecture for information security

By Jim Hietala, The Open Group

One of two key focus areas for The Open Group Security Forum is security architecture. The Security Forum has several ongoing projects in this area, including our TOGAF® and SABSA integration project, which will produce much needed guidance on how to use these frameworks together.

When the Network Application Consortium ceased operating a few years ago, The Open Group agreed to bring the intellectual property from the organization into our Security Forum, along with extending membership to the former NAC members. While the NAC did great work in information security, one publication from the NAC stood out as a highly valuable resource. This document, Enterprise Security Architecture (ESA): A Framework and Template for Policy-Driven Security, was originally published by the NAC in 2004, and provided valuable guidance to IT architects and security architects. At the time it was first published, the ESA document filled a void in the IT security community by describing important information security functions, and how they related to each other in an overall enterprise security architecture. ESA was at the time unique in describing information security architectural concepts, and in providing examples in a reference architecture format.

The IT environment has changed significantly over the past several years since the original publication of the ESA document. Major changes that have affected information security architecture in this time include the increased usage of mobile computing devices, increased need to collaborate (and federation of identities among partner organizations), and changes in the threats and attacks.

Members of the Security Forum, having realized the need to revisit the document and update its guidance to address these changes, have significantly rewritten the document to provide new and revised guidance. Significant changes to the ESA document have been made in the areas of federated identity, mobile device security, designing for malice, and new categories of security controls including data loss prevention and virtualization security.

In keeping with the many changes to our industry, The Open Group Security Forum has now updated and published a significant revision to the Enterprise Security Architecture (O-ESA), which you can access and download (for free, minimal registration required) here; or purchase a hardcover edition here.

Our thanks to the many members of the Security Forum (and former NAC members) who contributed to this work, and in particular to Stefan Wahe who guided the revision, and to Gunnar Peterson, who managed the project and provided significant updates to the content.

Jim Hietala, an IT security industry veteran, is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.


Filed under Security Architecture

SOA is not differentiating, Cloud Computing is

By Mark Skilton, Capgemini

Warning: I confess at the start of this blog that I chose a deliberately evocative title to try to get your attention, and I guess I did if you are reading this now. The couple of blogs I have written to date, with what I believed were finely honed words on current lessons learnt and the future of technology, created little reaction, so I thought I’d try a more direct approach and head straight for a pressing matter of architectural and strategic concern.

Service Oriented Architecture (SOA) is now commonplace across all software development lifecycles and has entered the standard language of information technology design. We hear “service oriented” and “service enabled” handed out as standard phrases and common terms of reference. The point is that the processes and practices of SOA are industrial and not differentiating, as everyone is using them, either from a design standpoint or as a business systems service approach. They enable standardization and abstraction of services in the design and build stages to align with key business and technology strategy goals, and enable technology to be developed or utilized that meets specific technical or business service requirements.

SOA practices are prerequisites to good design practice. SOA is a foundation of ITIL Service Management processes and is to be found in diverse software engineering methods, from Business Process Management Systems (BPMS) to rapid Model Driven Architecture design techniques that build composite web-enabled services. SOA is seen as a key method along the journey to industrialization, supporting consolidation and rationalization as well as lean engineering techniques to optimize the business and systems landscape. SOA provides good development practice in defining user requirements that deliver what the user wants, and in translating these into understanding how best to build agile, decoupled and flexible architectural solutions.

My point is that these methods are now mainstream, and merely putting SOA into your proposal or listing it as a capability is no longer going to be a “deal clincher” or a key “business differentiator”. The counterview I hear from SOA practitioners is that SOA is not just the standardized service practices but also the way in which differentiating services can be identified. But that’s the rub. If SOA treats every requirement or design as a service problem, where is the difference?

A possible answer lies in how SOA is used. Today, and in the future, the business differentiator will be the way the SOA method is used. But not all SOA methods are equal, so what is needed to turn SOA method differentiation into business benefit?

Enter Cloud Computing, with its origins in utility computing, ubiquitous web services and the Internet. The definition of Cloud Computing, much as in the early days of Service Orientation, is still evolving, as is the understanding of the boundaries and types of services it encompasses. But the big disruptive step change has been the new business model that Cloud Computing has introduced.

Cloud Computing has introduced automatic provisioning, self-service, and automatic load balancing and scaling of resources. Building on virtualization principles, it has extended into on-demand metering and billing consumption models, large-scale computing resource data centers, and large-scale distributed businesses on the web using the power of the Internet to reach and run new business models. I can hear industry observers say this is just a consequence of the timely convergence of pervasive technology network standards, rapidly falling compute and storage costs, and the massive “hockey stick” growth of bandwidth, smart devices and wide-scale adoption of web-based services.
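
As a rough, hypothetical illustration of two of the mechanics listed above, the sketch below shows an auto-scaling decision and a metered bill. The thresholds, rates and function names are invented; real Cloud platforms expose these capabilities through their own provisioning and billing APIs.

```python
# A hypothetical sketch of auto-scaling and pay-per-use metering.
from dataclasses import dataclass


@dataclass
class Pool:
    instances: int          # instances currently running
    cpu_utilization: float  # average utilization across the pool, 0.0-1.0


def autoscale(pool: Pool, target: float = 0.6, min_n: int = 1, max_n: int = 20) -> int:
    """Return the instance count needed to bring utilization back to the target."""
    desired = round(pool.instances * pool.cpu_utilization / target) or 1
    return max(min_n, min(max_n, desired))


def metered_bill(instance_hours: float, rate_per_hour: float) -> float:
    """Pay-per-use: the bill is simply consumption multiplied by the hourly rate."""
    return instance_hours * rate_per_hour


# A pool of 4 instances running hot at 90% utilization is scaled out to 6,
# and 300 instance-hours at a hypothetical $0.12/hour comes to $36.00.
pool = Pool(instances=4, cpu_utilization=0.9)
print(autoscale(pool))           # -> 6
print(metered_bill(300, 0.12))   # -> 36.0
```

It is this scale-on-demand, pay-per-use behavior, rather than the service interfaces themselves, that the rest of this post argues is the genuinely disruptive business model.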

But this is a step change that moves beyond the simple view that it’s just “another technology phase”.

Put another way: it has brought back-office computing resources and on-demand Software as a Service models into a dynamic new business model that changes the way business and IT work. It has “merged” physical and logical services into a new on-demand marketplace model, where hitherto it was “good practice” to design these as separate consumer and provider services. All that’s changed.

But does SOA fully realize these aspects of a Cloud Computing Architecture? Answer these three simple questions:

  • Do the logical service contracts define how multi-tenant environments need to work to support many concurrent service users?
  • Does SOA enable automatic balancing and scaling to be considered if the initial set of declarative conditions in the service contract doesn’t “fit” the new operating conditions that need scaling up or down?
  • Does SOA recognize the wider marketplace and ecosystem dynamics that may result in evolving consumer/producer patterns that are dynamic and not static, driving new sourcing behaviors and usage patterns that may involve using services through a portal with no contract?

For sure, ecosystem principles are axiomatic in that they will drive standards for containers, protocols and semantics, which SOA standards are perfectly placed to adopt as boundary conditions for service contracts in a Service Portfolio. But my illustrations here are intended to broaden the debate as to how to engage SOA as a differentiator when it meets a “new kid on the block” like Cloud, which is rapidly morphing into new models “as we speak”, extending into social networks, mobile services and location-aware integration.

My real intention is to raise awareness and interest in these subjects and in the activities that The Open Group is engaged in to address them. I sincerely hope you can follow these up as further reading and investigation with The Open Group; and of course, do feel free to comment and contact me.

Cloud Computing and SOA are key topics of discussion at The Open Group Conference, London, May 9-13, which is underway. 

Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing, which achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include the development of new Cloud Computing model standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models, and he contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring, published through his involvement with the Warwick Business School UK Specialist Masters Degree Program in Information Systems Management.

3 Comments

Filed under Cloud/SOA

Exploring Synergies between TOGAF® and Frameworx

By Andrew Josey, The Open Group

A joint team of The Open Group and the TM Forum has recently completed a technical report exploring the synergies and identifying integration points between the TM Forum Frameworx and TOGAF® specifications.

The results of this activity are now available as a 110-page technical report published by The Open Group and TM Forum, together with a Quick Reference Guide spreadsheet (available with the report).

The technical report focuses on mapping TOGAF® to the Frameworx frameworks: the Business Process Framework (eTOM), the Information Framework (SID) and the Application Framework (TAM). The purpose of this mapping is to assess the differences in their contents, their complementary areas and their areas of application, with the TOGAF® Enterprise Continuum in mind.

Identified Synergies

A summary of the identified synergies is as follows:

  1. Immediate synergies have been identified between the TOGAF Architecture Development Method (ADM) phases Preliminary, A, B, C and the Common Systems Architecture of the Enterprise Continuum. This document addresses the TOGAF ADM phases from Preliminary to Phase C. The synergies between business services (formerly known as NGOSS contracts) and the Common Systems Architecture will be dealt with in a separate document.
  2. TOGAF® provides an Architecture Repository structure that can smoothly accommodate the mapping of TM Forum assets; this feature can be leveraged to identify and derive the added value of the content.
  3. TM Forum assets can be classified as either Industry Architectures or Common Systems Architectures in (TOGAF®) Enterprise Continuum language. TOGAF® provides a widely accepted methodology for leveraging these architectures in the development of an enterprise architecture.
  4. Professionals who use TM Forum assets will find templates and guidelines in TOGAF® that facilitate the transformation of those assets into deliverables for a specific project or program.
  5. TOGAF® concepts as defined in the TOGAF® Architecture Content Framework provide clear definitions of which artifacts have to be developed from TM Forum assets in order to make an architecture construct consistent and comprehensive.

The full report can be obtained from The Open Group and TM Forum websites. At The Open Group, you can download it here.

Andrew Josey is Director of Standards within The Open Group, responsible for the Standards Process across the organization. Andrew leads the standards development activities within The Open Group Architecture Forum, including the development and maintenance of TOGAF® 9, and the TOGAF® 9 People certification program. He also chairs the Austin Group, the working group responsible for development and maintenance of the POSIX 1003.1 standard that forms the core volumes of the Single UNIX® Specification. He is the ISO project editor for ISO/IEC 9945 (POSIX). He is a member of the IEEE Computer Society’s Golden Core and is the IEEE P1003.1 chair and the IEEE PASC Functional chair of Interpretations. Andrew is based in the UK.

1 Comment

Filed under Enterprise Architecture, TOGAF®

The Business Case for Enterprise Architecture

By Balasubramanian Somasundaram, Honeywell Technology Solutions Ltd.

Well, contrary to this blog post’s title, I am not going to talk about the finer details of preparing a business case for an Enterprise Architecture initiative. Rather, I am going to talk about “What makes the client ask for a business case?”

Here is a little background…

Statistics suggest that only 5% of companies practice Enterprise Architecture. And most of those are successful leaders in their businesses, not just in IT.

When I attended Zachman’s conference last year, I was surprised to see Zachman being cynical about the realization of EA in the industry. He, in fact, went on to add that it may take 10-20 years to see EA truly alive in companies.

I am also closely watching some Enterprise Architects’ blogs, and I don’t see much conviction in posts with titles like ‘Enterprise Architecture is a Joke’, ‘Enterprise Architects only do PowerPoint presentations’ and ‘There are not enough skilled architects’.

In the recent past, when I was evangelizing EA among the top IT leadership, I often got questions on ‘short-term quick hits that can be achieved by EA’. That’s a tough one to answer!

Now the question is: why is there a lack of faith in IT?

And many of us know the answer: because teams often fail to deliver, despite spending a lot of cash, effort and energy. The harsh reality is that IT itself does not believe it can deliver something significant, valuable and comprehensive.

If IT doesn’t believe in itself, how can we expect business to believe in us, to treat us like partners and not as order takers?

Now, getting to metrics… I happened to read this revealing Datamonitor whitepaper on the EDS site. Though the intent of the paper is to analyze maintenance issues versus adopting new innovations in existing applications, I found something very relevant and interesting to our topic of discussion here.

Some of the observations are:

  • IT departments that are overwhelmed by application maintenance do not see the benefit of planning
  • Datamonitor believes that skepticism of these overwhelmed decision makers can be largely attributed to a sense of ‘hopelessness’ or ‘burn out’ over formalized IT strategies.
  • Such decision makers are operating in a state of survival rather than one of enthusiastic optimism
  • IT departments see the value of planning primarily in the ‘build’ phase and not in the ‘run’ phase. They don’t really care much about the ‘lifecycle’ of those applications in the ‘planning’ phase.
  • This compounds the maintenance complexity and inhibits the company from embarking on new initiatives, creating a vicious cycle.

What a resounding observation!

As someone said, adopting EA is like a lifestyle change, like following a fitness regimen. It cannot be realized without discipline and commitment to change! The problem is not with EA but with the way we look at it!

Balasubramanian Somasundaram is an Enterprise Architect with Honeywell Technology Solutions Ltd, Bangalore, a division of Honeywell Inc, USA. Bala has been with Honeywell Technology Solutions for the past five years and has contributed in several technology roles. His current responsibilities include architecture and technology planning and governance, solution architecture definition for business-critical programs, and technical oversight and review for programs delivered from the Honeywell IT India center. With more than 12 years of experience in the IT services industry, Bala has worked with a variety of technologies with a focus on IT architecture practice. His current interests include Enterprise Architecture, Cloud Computing and Mobile Applications. He periodically writes about emerging technology trends that impact the Enterprise IT space on his blog. Bala holds a Master of Science in Computer Science from MKU University, India.

20 Comments

Filed under Enterprise Architecture

World-class EA

By Mick Adams, Capgemini UK

World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers value for industry and society.

I know that’s a big, bold claim, but there are enough dreamers and doers making this happen right now. World-class EA tackles big industry issues and offers big, brave solutions. The Open Group has already published several whitepapers on this… banking, anyone? No problem… public services? Absolutely. World-class EA tackles these industry verticals and a bunch of others to describe a truly holistic model that unlocks value. Take a look at the World Class EA White Paper available in The Open Group’s online bookstore. Highlights of the whitepaper include:

  • Selection of industry drivers and potential architecture response
  • Suggested maturity model to calibrate organizations
  • Example of applying a maturity rating
  • Set of templates and suggested diagrams to provision TOGAF® 9 content

The work is ongoing; it’s not definitive yet. We are looking for more problem definitions and solutions to drive a collective global mindset forward and to ensure that IT delivers benefits across the entire value chain. If we agree on what the problems are, prioritize them and work on them in a wholly collegiate manner, the industry will be in a better place as a consequence. My view is that The Open Group is the only viable platform to provision BIG IT to industry and society.

The Open Group India is running an event soon that I hope will further refine world-class EA. The IT industry in India is red hot and thriving at the moment. I’ve been lucky enough to work with some of the boldest and most innovative entrepreneurial people in the world who happen to come from India. There is a passion for learning and contribution on the subcontinent like no other. At The Open Group India event, we will discuss:

  • Defining the BIG IT topics for today
  • Insights about IT and EA
  • Providing/provisioning demonstrable value to make a difference

The countdown has begun to The Open Group India Conference. If you want to know what’s happening in architecture right now, or want to influence what could happen to our industry in India or globally, come along.

World-class EA will be a topic of discussion at The Open Group India Conference in Chennai (March 7), Hyderabad (March 9) and Pune (March 11). Join us for best practices and case studies in the areas of Enterprise Architecture, Security, Cloud and Certification, presented by preeminent thought leaders in the industry.

As a member of Capgemini’s global architecture leadership, Mick Adams has been involved in the development of some of the world’s largest enterprise architectures and has managed Capgemini’s contributions to The Open Group Architecture Forum for over two years. He has wide industry experience, but his architecture work is currently focused on Central Government(s) and oil super-majors.

1 Comment

Filed under Enterprise Architecture

What’s the use of getting certified?

By Mieke Mahakena, Capgemini

After a day discussing business architecture methods and certification at The Open Group Conference in San Diego last week, I had to step back and consider whether what I have been doing still adds value. It seems to me that there is still much resistance to certification: “I don’t need to be certified; I have my college degree.” Or, “I have so much experience. Why should I need to prove anything?”

But let me ask you a question. Suppose you need to have surgery. The surgeon tells you that he hasn’t got a medical license, but you shouldn’t worry because he is so experienced. Would you let him perform surgery on you? I wouldn’t! So, if we expect others to be able to prove their skills before we hire them to work for us, shouldn’t the same apply to business architects? In our profession, mistakes can have severe consequences. As such, it is only reasonable for customers to demand some kind of impartial proof of our professional skills.

To become a good surgeon you not only need a good education, you also need a lot of practical experience. The same goes for the IT and architecture professions: your skills develop with every new practical experience. This brings us to the importance of the ITAC and ITSC certifications. Both programs define the skills necessary for a certain profession and use a well-defined certification process to ensure that the candidate has the experience needed to develop those skills.

During The Open Group India Conference in March, you will be able to learn more about these certification programs and find out if they can bring value to you and your organization.

Certification will be a topic of discussion at The Open Group India Conference in Chennai (March 7), Hyderabad (March 9) and Pune (March 11). Join us for best practices and case studies in the areas of Enterprise Architecture, Security, Cloud and Certification, presented by preeminent thought leaders in the industry.

Find out more about the ITSC program by joining our webinar on Thursday, March 3.

Mieke Mahakena is an architect and architecture trainer at Capgemini Academy and label lead for the architecture training portfolio. She is the chair of the Business Forum at The Open Group, working on business architecture methods and certification. She is based in the Netherlands.

5 Comments

Filed under Certifications, Enterprise Architecture

Security & architecture: Convergence, or never the twain shall meet?

By Jim Hietala, The Open Group

Our Security Forum chairman, Mike Jerbic, introduced a concept to The Open Group several months ago that is worth thinking a little about. Oversimplifying his ideas a bit, the first point is that much of what’s done in architecture is about designing for intention — that is, thinking about the intended function and goals of information systems, and architecting with these in mind. His second related point has been that in information security management, much of what we do tends to be reactive, and tends to be about dealing with the unintended consequences (variance) of poor architectures and poor software development practices. Consider a few examples:

Signature-based antivirus, which relies upon malware being seen in the wild and captured, and on signatures being distributed to A/V software around the world to pattern-match and stop the specific attack. Highly reactive. The same is true for signature-based IDS/IPS and anomaly-based systems.

Data Loss (or Leak) Prevention, which for the most part tries to spot sensitive corporate information being exfiltrated from a corporate network. Also very reactive.

Vulnerability management, which is almost entirely reactive. The cycle of “Scan my systems, find vulnerabilities, patch or remediate, and repeat” exists entirely to find the weak spots in our environments. This cycle almost ensures that more variance will be headed our way in the future, as each new patch potentially brings with it uncertainty and variance in the form of new bugs and vulnerabilities.

The fact that each of these security technology categories even exists has everything to do with poor architectural decisions made in years gone by, or with inadequate ongoing software development and QA practices.

Intention versus variance. Architects tend to be good at the former; security professionals have (of necessity) had to be good at managing the consequences of the latter.

Can the disciplines of architecture and information security do a better job of co-existence? What would that look like? Can we get to the point where security is truly “built in” versus “bolted on”?
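As a toy illustration of “built in” versus “bolted on” (my own sketch; the class and method names are invented), compare a value type that enforces its intended range by construction with a signature-style filter applied after the fact:

```java
// Purely illustrative; names are invented for this sketch.
public final class BuiltInVersusBoltedOn {

    /** "Built in": the intended range is part of the type, so variance is rejected by design. */
    static final class TransferAmount {
        final long cents;
        TransferAmount(long cents) {
            if (cents <= 0 || cents > 100_000_000L) {
                throw new IllegalArgumentException("amount outside intended range");
            }
            this.cents = cents;
        }
    }

    /** "Bolted on": a reactive filter that tries to spot known-bad input after the fact. */
    static boolean looksMalicious(String input) {
        // Signature-style check; it only catches patterns we have already seen in the wild.
        return input.contains("<script>") || input.toLowerCase().contains("drop table");
    }

    public static void main(String[] args) {
        System.out.println(new TransferAmount(5_000).cents);    // accepted: within the intended range
        System.out.println(looksMalicious("DROP TABLE users")); // caught only because it matches a known signature
    }
}
```

The first approach designs for intention; the second manages variance after it has already arrived.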

What do you think?

P.S. The Open Group has numerous initiatives in the area of security architecture. Look for an updated Enterprise Security Architecture publication from us in the next 30 days; we also have ongoing projects to align TOGAF™ and SABSA, and to develop a Cloud Security Reference Architecture. If there are other areas where you would like to see security architecture guidance developed, please contact us.

An IT security industry veteran, Jim Hietala is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.

Comments Off

Filed under Cybersecurity