Conant-Ashby Theorem

I came across the Conant-Ashby Theorem recently. It states:

Every good regulator of a system must be a model of that system.

It got stuck in my head longer than it logically should – I guessed that there must be something to it that I’m just not seeing. Then we worked on a gamification project, and it started making sense. In plain English it could mean: To tune a system you need a good analytics model of that system.1

Why is this important?

During the design phase of a project we rarely think about designing control mechanisms that allow us to fine-tune our designs when they are out in the world. Instead, we unleash variables in the world that run wild and soon disappear from sight.

Design with an analytics model upfront

Analytics not only tell us if our designs are doing their jobs – designing with a good analytics model in mind gives us levers to pull when they are not.
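As a toy sketch of what a ‘lever’ could look like in practice (all the names and numbers here are hypothetical, not from any real project), imagine a gamified feature where we track a completion-rate metric and nudge a difficulty parameter toward a target – the regulator works because it embodies a simple model of the system:

```python
# Toy sketch (hypothetical names): a regulator that tunes a design
# parameter using a simple model of the system it controls.

def tune_difficulty(difficulty, completion_rate, target=0.7, gain=0.5):
    """Nudge the difficulty lever toward the target completion rate.

    The 'model' here is the assumption that completion rate falls
    as difficulty rises - the regulator encodes that relationship.
    """
    error = completion_rate - target
    # Too many players finishing? Raise difficulty. Too few? Lower it.
    return max(0.0, difficulty + gain * error)

# Example: a completion rate of 0.9 is above the 0.7 target,
# so the regulator raises difficulty from 1.0 to 1.1.
print(round(tune_difficulty(1.0, 0.9), 2))
```

The point isn’t the maths – it’s that if the metric and the lever aren’t designed in upfront, there is nothing to tune later.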


  1. Paraphrase by Phil Barrett

Making models

Good consultants spend more time thinking about their clients’ challenges than the clients themselves – and making models is a big part of this.

It follows that at the start of each project we spend a lot of time making models. It’s the best way to get under the skin of a project. Models are great at focusing a group. We do it to understand situations better, and it helps our teams visualise and think through a design challenge from all angles. Model making sits at the core of product discovery.

The models I’m referring to here aren’t deliverables. They could be, if we polished them up, but in most cases that’s a waste of valuable time. I see models as necessary discardables: their purpose is to move you along, and without them you’re not going anywhere.

Systems Approaches to Managing Change highlights two interesting things about models that are worth considering.

Models are subjective

As with any model, viewpoints are inevitably partial in the sense of being both incomplete and of being viewed from a particular or partisan perspective necessarily based on its particular purpose.

Subjectivity is good here: it gives you a place to start when faced with a blank canvas. The sole reason for making models is to change them. Good models change shape as soon as you start showing them to people, because the partisan perspectives start breaking down when more minds look at a problem.

Models are wrong

Constructing a model is a practical way of visualising the key elements of a problem. Statistician George Box said, ‘Essentially, all models are wrong, but some are useful.’ Models are always wrong in that they don’t serve as detailed illustrations of the problem. This is also why they’re right. The simpler you can make the model, the easier it is to understand a problem.

Models lose their power when they grow too complex, and when they are ‘finished’ they no longer draw ideas out of us.

Making better models

It turns out that the two weaknesses of models are also their strengths. It takes courage to take the first step and risk being subjective and wrong – but when you start exploring, you can’t start out any other way.

The Planning Fallacy

Jared Spool’s Beans and Noses caused quite a stir when it resurfaced and worked its way around our office. It’s the perfect metaphor to describe the consultant’s dilemma when working with big clients:

The idea is blindingly simple, actually. Every so often, you’ll run into someone with beans who has, for no good reason, decided to put them up their own nose. Way up there. In a place where beans should not go.

What do you do?

Lean Enterprise: How High Performance Organizations Innovate at Scale takes a more diplomatic approach, framing it as the planning fallacy:

Due to a cognitive bias known as the planning fallacy, executives tend to “make decisions based on delusional optimism rather than on a rational weighing of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or to deliver the expected returns — or even to be completed.”

The planning fallacy is a means of managing uncertainty by spinning ‘scenarios of success’ at the outset of a project. It’s easy to fall for this delusion because we don’t like thinking about failure, or admitting that we may be acting before having explored different options sufficiently. What can we do about this? The Lean Startup process can help:

Because the Lean Startup process is relatively cheap, in an enterprise context we can pursue multiple possible business models simultaneously using the Principle of Optionality.

The Principle of Optionality simply means that by investing limited amounts of time and money in small experiments we can investigate more ideas simultaneously:

…the principles of constraining time and resources, thus limiting downside, and building a minimum viable product to test your value hypothesis as soon as possible with real customers should be applied at the start of every endeavor.

The Principle of Optionality: building and testing minimum viable products simultaneously. Most will fail, but the probability of a big win increases.1
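The claim that ‘most will fail, but the probability of a big win increases’ is a straightforward consequence of running many small, independent bets. A back-of-the-envelope check, with an assumed (illustrative) 10% win rate per experiment:

```python
# Back-of-the-envelope check (assumed numbers): if each small experiment
# has an independent 10% chance of a big win, running more experiments
# raises the odds that at least one of them succeeds.

def p_at_least_one_win(n_experiments, p_win=0.1):
    """Probability that at least one of n independent experiments wins."""
    return 1 - (1 - p_win) ** n_experiments

for n in (1, 5, 10):
    print(n, round(p_at_least_one_win(n), 2))
# 1 experiment  -> 0.10
# 5 experiments -> 0.41
# 10 experiments -> 0.65
```

Each individual experiment still probably fails – the portfolio is what wins, which is why constraining the downside of each bet matters.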

All roads lead to a minimum viable product, and testing a minimum viable product with real customers is a rational means of weighing up the gains, losses, and probabilities when things are uncertain. It is the antidote to delusional optimism, and will increase the likelihood that beans end up in the ground.


  1. Diagram interpreted and redrawn from Lean Enterprise: How High Performance Organizations Innovate at Scale. Originally from Antifragile: Things That Gain from Disorder by Nassim Nicholas Taleb.


The Robot Curve

Marty Neumeier’s Metaskills: Five Talents for the Robotic Age is well worth a read if, like me, you often wonder what it is exactly that you do, and how it has changed over the years. It turns out that we are locked in a race to stay ahead of the machines with creativity our competitive advantage… for now. Neumeier conjures up the Robot Curve to illustrate why cultivating creativity is the only way for us to stay ahead of the machines:

The Robot Curve is a waterfall of opportunity that flows endlessly from the creative to the automated.

As work becomes routinised, and then mechanised, its value decreases regardless of how complex the task is for a human to perform.

Thinking about my own career: I used to do things that required a fair amount of skill, but some were repetitive. In time they were partly automated by new frameworks and boilerplates. I wasn’t too happy about it, claiming ‘I can do it better’ and experiencing it as a threat rather than an opportunity.

Seizing the opportunities that emerge from automation requires that we take more risk in our approach to life, work, and learning. We have to be more honest, open, and robust – antifragile. If we don’t, we’ll start the slow descent down the Robot Curve. Moving too far down the curve, our skills become ‘brittle’ – soon to be overtaken by machines rising up from the bottom of the curve.

But this does not mean that machines will replace us. Neumeier comforts – and cautions:

We have an unfounded fear that machines will someday start thinking like humans. What we should really fear is that humans have already started thinking like machines.

Friction, gaps, and countermeasures

In Lean Enterprise: How High Performance Organizations Innovate at Scale the authors introduce the concept of friction. The idea first appeared in On War by Carl von Clausewitz. He wrote about the uncertainty faced by actors in rapidly changing environments acting on limited information about the environment as a whole.

Basically, von Clausewitz describes friction as the accumulation of unexpected events that prevents reality from unfolding as we expect it to. It is an excellent metaphor for understanding the behaviour of any human organisation, including ourselves:

Friction is ultimately a consequence of the human condition – the fact that organisations are composed of people with independent wills and limited information. Thus friction cannot be overcome.

In The Art of Action: How Leaders Close the Gaps between Plans, Actions, and Results, Stephen Bungay writes that friction creates three gaps:

  1. a knowledge gap – resulting from acting on incomplete information,
  2. an alignment gap – resulting from misunderstanding between people,
  3. an effects gap – resulting from outcomes that differ from what we expected.

Bungay continues that in complex systems with a lot of friction, scientific management techniques don’t work – they make the gaps bigger. The premise of Lean Enterprise is that big enterprises can use lean and agile practices1 as countermeasures to reduce friction by creating cultures of continuous improvement.

Accepting friction as a fact of life frees you from trying to make it go away. Instead, identify which friction is healthy and which is unhealthy, and when you find the latter, start implementing your own countermeasures to improve the situation – but don’t expect to close the gap.


  1. From Lean Enterprise on lean and agile practices: However, too often these are adopted as rituals or “best practices” but are not seen for what they really are – countermeasures that are effective within a particular context in the pursuit of a particular goal.