By Stuart Boardman, KPN
Ruth Malan recently tweeted a link to a piece by Alistair Cockburn about the Last Responsible Moment concept (LRM) in Lean Software Development. I’ve been out of software development for a while now but I could guess what that might mean in an “agile” context and wondered how it might apply to problems I’ve been considering recently in Enterprise Architecture. Anyway, Alistair Cockburn is an interesting writer who would be deservedly famous even if he’d never done anything after writing the most practical and insightful book ever written about use cases. So I read on. The basic idea of the LRM is that in order to deal with uncertainty you avoid taking deterministic decisions until just before it would become irresponsible (for cost or delivery reasons) not to take them. Or to put it another way, don’t take decisions you don’t yet need to take if the result will be to constrain your options but do be ready to take them when it’s dangerous to wait longer.
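To make the idea concrete, here's a toy sketch (my own illustration, not something from Alistair's piece – the dates and margins are invented) of the LRM as a simple calculation: the latest date you can still decide responsibly is the deadline minus the time it takes to act on the decision.

```python
from datetime import date, timedelta

def last_responsible_moment(deadline: date, lead_time_days: int,
                            safety_margin_days: int = 0) -> date:
    """Latest date a decision can still responsibly be taken, given how
    long acting on it will take (plus an optional safety margin)."""
    return deadline - timedelta(days=lead_time_days + safety_margin_days)

# Example: delivery is due 1 December, the chosen option needs 30 days to
# implement, and we keep a 5-day buffer for the unexpected.
lrm = last_responsible_moment(date(2012, 12, 1), lead_time_days=30,
                              safety_margin_days=5)
print(lrm)  # 2012-10-27: deferring the decision beyond this is irresponsible
```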
Alistair’s not a big fan of LRM. He makes the following statement: “If you keep all decisions open until the hypothetical LRM, then your brain will be completely cluttered with open decisions and you won’t be able to keep track of them all.” Later in the discussion he modifies this a bit, but it certainly struck a chord with me. I’ve argued recently in this column that the degree of uncertainty (I called this entropy) in which enterprise architects have to operate is only increasing, and that this is due to three factors: the increasing rate of change happening in or affecting the enterprise; the increasing complexity of the environment in which the enterprise exists; and the decreasing extent to which any one enterprise can control that environment. This in turn increases the complexity of decision making. I’ll come back to these factors later, but if you give me the benefit of the doubt for the moment, you can see that there’s actually a pretty good argument for taking any decision you can reasonably take (i.e. one which does not unjustifiably constrain everything else) as early as you can – in order to minimize complexity as you go along.
This is not (repeat not) a dogma. If it’s totally unclear what decision you should take, you’d probably be better off waiting for more information – and a last responsible moment will undoubtedly arrive.
So, assuming you gave me the benefit of the doubt, you might now reasonably be thinking that this is all very well in theory, but how can we actually put it into practice? To do that, we first need to look at the three sources of complexity I mentioned:
- That the rate of change is increasing is pretty much a truism. Some change is due to market forces such as competition, availability/desirability of new capabilities, withdrawal of existing capabilities or changes in the business models of partners and suppliers. Some change is due to regulation (or deregulation) or to indirect factors such as changing demographics. Factors such as social media and Cloud are perhaps more optional but are certainly disruptive – and are themselves constantly changing.
- The increase in complexity of the environment is largely due to the increase in the number of partners and to more or less formal value networks (extended enterprise), to an increased number of delivery channels and to lack of standardization at both the supply and delivery ends.
- The decrease in control (or, more accurately, in exclusive and total control) arises from all the forms of shared services the enterprise makes use of in one way or another. This can be Cloud (in which case we talk about multi-tenancy) or social media (in which case we talk about anarchy), but equally the extended enterprise network, where not only do our partners and suppliers have other customers, they also have their own partners and suppliers, who in turn have other customers. A consequence of most of this is that you can’t expect to be consulted before change decisions are made. At best you will be notified well enough in advance of them happening. So you need to take that into account in what you implement.
Each of these factors may affect what the organization is – its core values, its key value propositions, its strategy. They may also affect how it carries out its business – its key activities and processes, its partners and even its customers. And they can affect how those activities and processes are implemented, which by the way can in turn drive change at the strategic level – it’s not just one-way traffic – but that is a subject worthy of its own blog.
The point is that, if we want to be able to deal with all this – to make sensible decisions in a non-deterministic environment – we would do well to address these factors where they first manifest themselves, in order to avoid a geometric expansion of complexity further on. I’m inclined to think this is primarily in the business architecture (assuming we all accept that business architecture is not just a collection of process models). Almost all of the factors are encountered first there and are subsequently reflected, possibly in strategy and nearly always on the implementation side. If we make the reasonable assumption that the implementation side will encounter its own complexities, we can at least keep those manageable by not passing on all the possible options from the business architecture.
I said almost all factors are encountered first in the business architecture. The most obvious exceptions I can think of are the Infrastructure as a Service and Platform as a Service variants on Cloud. There’s a good case to be made that the effects of these are primarily felt within IT (strategy and implementation). But wherever we start, the principle doesn’t change – start the analysis at the first point of impact.
The next thing we need to do is look for ways to a) reduce the level of entropy in the part of the system we start with and b) understand how to make decisions that don’t create unnecessary lock-in. There’s not enough space in a blog to go into this in detail, but it’s worth mentioning some new and some established techniques.
My attention has recently been drawn (by Verna Allee and others) to the study of networks of things, organizations and people. This in turn makes a lot of use of visualizations. These enable us to “see” the level of entropy around the particular element we’re focusing on – without the penalty of losing sight of the big picture. An example that I found useful is by Eric Berlow. Another concept in this area involves identifying what are referred to as communities (the idea came out of the study of social networks): clusters of related elements, which are only loosely coupled to other communities. These techniques allow us to reduce the scope (and therefore complexity) of the problem we’re trying to solve at any one time without falling into the trap of assuming it’s entirely self-contained.
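As a rough sketch of the community idea – the tooling and the element names here are my own assumptions, not anything from Verna Allee's or Eric Berlow's work – a standard community-detection algorithm can pick out the loosely coupled clusters in a dependency network:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical enterprise elements and the dependencies between them.
G = nx.Graph()
G.add_edges_from([
    ("billing", "invoicing"), ("billing", "payments"), ("invoicing", "payments"),
    ("crm", "marketing"), ("crm", "sales"), ("marketing", "sales"),
    ("payments", "crm"),  # a single loose link between two tight clusters
])

# Each community is a cluster of tightly related elements, only loosely
# coupled to the rest; each can be analysed on its own without pretending
# it is entirely self-contained.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```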
A few blogs ago I mentioned an idea of Robert Phipps’s, where he visualizes the various forces within an organization as vectors. Each vector represents some factor driving (or even constraining) change. Those can be formal or informal organizational groupings (people), stakeholders both within and external to the organization, economic factors around supply or revenue, changes in the business model or even in technology. In that blog I used this as a way of illustrating entropy but Robert is actually looking at ways of applying measures to these vectors in order to be able to establish their actual force (and direction) and therefore their impact on change. Turning an apparently random factor into something knowable reduces the level of entropy and makes us more confident about taking decisions early – and therefore in turn reduces the entropy at a later stage.
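A minimal sketch of how such vector measures might be combined – the forces and their magnitudes below are invented for illustration, and Robert's actual method may well differ – is simply vector addition:

```python
import numpy as np

# Each organizational force gets an (assumed) magnitude and direction.
forces = {
    "stakeholder pressure": np.array([3.0, 1.0]),
    "regulatory change":    np.array([1.0, 2.5]),
    "legacy constraints":   np.array([-2.0, -0.5]),  # a constraining force
}

# The resultant tells us the net push on change: its strength and direction.
resultant = sum(forces.values())
magnitude = np.linalg.norm(resultant)
direction = np.degrees(np.arctan2(resultant[1], resultant[0]))
print(f"net force: magnitude {magnitude:.2f}, direction {direction:.1f} degrees")
# Once an apparently random factor has a measurable force and direction,
# it stops being a source of entropy and becomes an input to a decision.
```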
One more example: Ruth Malan and Dana Bredemeyer produced a paper last year in which they examined the idea that organizations can make the most use of the creativity of their personnel by replacing the traditional hierarchical and compartmentalized structures with what they called a fractal approach. The idea is that patterns of strategy creation are reflected in all parts of an organization, thus making strategy integral to an organization rather than merely dictated from “above”. It has the added benefit of making the overall complexity more manageable. Architects belong in each fractal both as creators and interpreters of strategy. I can’t possibly do this long paper justice here but I wanted to mention an additional thought I had. What can also help architects is to look for these fractals even in formally hierarchical organizations. There’s a great chance that they really exist and are just waiting for someone to pay them attention.
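For what it's worth, here's a loose sketch of the fractal idea as a data structure – entirely my own illustration, not Ruth and Dana's model – in which every unit, at every level, applies the same strategy-creation pattern to its own scope rather than receiving strategy dictated from above:

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    subunits: list["Unit"] = field(default_factory=list)

    def create_strategy(self, context: str) -> dict:
        # The same pattern recurs at every level of the organization.
        return {
            "unit": self.name,
            "strategy": f"respond to {context} within {self.name}'s scope",
            "subunits": [u.create_strategy(context) for u in self.subunits],
        }

org = Unit("enterprise", [Unit("division A", [Unit("team A1")]), Unit("division B")])
print(org.create_strategy("new delivery channel"))
```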
Having achieved focus on a manageable area and gathered as much meaningful data as possible, we can then apply some basic (but often forgotten or ignored) design principles. Think of separation of concerns, low coupling, high cohesion. All that starts by focusing on the core purpose of the element(s) of the architecture we’ve zoomed in on. And folks, the good news is that this will all have to wait for another occasion.
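As a quick taste in the meantime, here's a bare-bones sketch (all the names are invented) of low coupling and separation of concerns: the pricing logic depends only on a narrow interface, not on where the rates actually come from, so either side can change without disturbing the other.

```python
from typing import Protocol

class RateSource(Protocol):          # the only coupling point
    def rate_for(self, partner: str) -> float: ...

class FixedRates:                    # one concern: where rates come from
    def __init__(self, rates: dict[str, float]) -> None:
        self._rates = rates
    def rate_for(self, partner: str) -> float:
        return self._rates.get(partner, 1.0)

class Pricer:                        # another concern: how prices are computed
    def __init__(self, source: RateSource) -> None:
        self._source = source
    def price(self, partner: str, base: float) -> float:
        return base * self._source.rate_for(partner)

print(Pricer(FixedRates({"acme": 0.9})).price("acme", 100.0))  # 90.0
```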
The very last thing I want to say is something I tend to hammer on about. You have to take some risks. No creative, successful organization does not take risks. You need a degree of confidence about the level and potential impact of the risk but at the end of the day you’ll have to make a decision anyway. Even if you believe that everything is potentially knowable, you know that we often don’t have the information available to achieve that. So you take a gamble on something that seems to deliver value and where the risk is at worst manageable. And by doing that you reduce the total entropy in the system and make taking other decisions easier.
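If you like, the gamble can be written down as a toy decision rule – entirely my own framing, with invented numbers: take the risk when the expected value is positive and the worst case stays within what you can absorb.

```python
def take_the_gamble(value_if_success: float, loss_if_failure: float,
                    p_success: float, worst_case_budget: float) -> bool:
    # Positive expected value, and a failure we could survive.
    expected = p_success * value_if_success - (1 - p_success) * loss_if_failure
    return expected > 0 and loss_if_failure <= worst_case_budget

print(take_the_gamble(500_000, 100_000, p_success=0.6, worst_case_budget=150_000))
# True: the upside outweighs the downside, and the worst case is manageable.
```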
Stuart Boardman is a Senior Business Consultant with KPN where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity.