December 20, 2023

Dave Snowden on abductive reasoning, estuarine mapping, AI and human capability, and weak signal detection (AC Ep24)

“Human beings learn more from anomalies than they do from anything else.”

– Dave Snowden

About Dave Snowden

Dave Snowden is Founder of The Cynefin Centre and Founder and Chief Scientific Officer of The Cynefin Company. He is the creator of the highly influential Cynefin Framework as well as SenseMaker®, the world’s first distributed ethnography tool. He has held roles as Extraordinary Professor and Visiting Professor at eight universities globally, and is the author of numerous influential articles, including, as lead author, Managing complexity (and chaos) in times of crisis, created with the European Commission, and an article featured on the cover of Harvard Business Review that won the Academy of Management award for best paper.

Website: www.thecynefin.co

LinkedIn: Dave Snowden

Twitter: @snowded

 

What you will learn

  • Exploring abduction beyond traditional logic, and its impact on innovation and education (03:43)
  • Understanding human cognition and contrasting AI energy demands (06:44)
  • Harnessing collective intelligence for decision-making (10:08)
  • Optimism in collective decision-making and technology’s role (11:58)
  • Rethinking research through natural sciences and energy studies (14:53)
  • Innovative approaches in physics and strategy (16:33)
  • Estuary metaphor in complexity and situational assessment (22:03)
  • Integrating theory and practice in social sciences and humanities (24:37)
  • Navigating AI’s challenges and leveraging complexity principles (27:51)
  • Exploring AI’s impact and control in organizational contexts (32:01)

Episode Resources

Transcript

Ross Dawson: Dave, it’s a great pleasure to have you on Amplifying Cognition.

Dave Snowden: Pleasure to be with you again.

Ross: The frame here is to amplify cognition. One of the best things we can do is probably to unpack some of your cognitive processes, and how those apply in a highly complex world. As a starting point, you point to abductive reasoning as something which, amongst other things, humans are good at, or can be good at, and AI is not. For a broader audience, it would be a lovely frame: what is abductive reasoning, and where are we potentially hitting its limits today?

Dave: The interesting thing about abduction is it probably isn’t limited, whereas induction is, but let’s do a high-level summary. You would normally talk about three types of logic: deductive – if A then B; inductive – all the cases of A have B, so there’s some association between them, although the danger of false correlation is very high there; and then abduction, which is sometimes known as a logic of hunches. Another definition would be: what’s the shortest distance between apparently unconnected things? You can look at roughly three approaches to abduction, all of which, by the way, are completely compatible with each other. These are three lenses.
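
To make the three logics concrete, here is a toy sketch (ours, not from the episode): deduction applies a known rule, induction generalizes a rule from observed cases, and abduction proposes a candidate explanation to test. All the names, rules, and facts are invented for illustration.

```python
# Toy illustration of the three logics; rules and facts are invented.

# Deduction: if A then B; given A, conclude B.
RULES = {"it_rained": "ground_is_wet"}

def deduce(fact: str):
    """Apply a known rule: A holds, so B follows."""
    return RULES.get(fact)

# Induction: generalize "A goes with B" from observed cases.
# The danger of false correlation lives here.
def induce(observations):
    rule = {}
    for a, b in observations:
        if a in rule and rule[a] != b:
            rule.pop(a)          # contradicting case: drop the rule
        else:
            rule[a] = b
    return rule

# Abduction: given a surprising observation, offer the hunch that
# would explain it - a hypothesis to test, not a proof.
EXPLANATIONS = {"ground_is_wet": ["it_rained", "sprinkler_ran"]}

def abduce(observation: str):
    candidates = EXPLANATIONS.get(observation, [])
    return candidates[0] if candidates else None

print(deduce("it_rained"))               # -> ground_is_wet
print(induce([("A", "B"), ("A", "B")]))  # -> {'A': 'B'}
print(abduce("ground_is_wet"))           # -> it_rained
```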

One is, if you go back to the original American pragmatists, which is where the idea comes from, it’s about hypothesis generation, suspension of belief, seeing things, and finding hypotheses. In the Batesons’ work, both Gregory’s and his daughter Nora’s, and we’ve done a lot of podcasts on this one, it’s about metaphor and the use of metaphor to see things from a different perspective. For example, I’m currently working on some of the ways in which animals and indigenous people optimize food searches when they don’t know where the food is. That gives you a whole new insight. In fact, I’m writing a paper about it at the moment. So the use of metaphor to throw ideas across.

The third one, which is the one we’re focused on, also recognizes the fact that music and drawing predate any real substantial development of language in humans. Although you can see their original utility, the reason they develop up to the heights of, and I’ll reveal my prejudices here, Caravaggio and Wagner, is not because we enjoy them; it is because abstraction allows us to see things from different perspectives. It breaks us away from the material, and we make sudden, unexpected connections. Now, in evolutionary terms, that has major advantages. It has a downside in that it makes us prone to conspiracy theories. But it means we don’t need training datasets. Because actually, and that’s important, we’re not making probability forecasts based on what’s happened before; we’re gaining genuinely new insights and ways of looking at things.

But it is why we have significant worries. For example, in Australia, the overemphasis on STEM education may actually destroy human innovation. In epigenetic terms, after three generations, you can lose a capability. That’s a wider worry.

Ross: One thing I seemed to read in one of your papers is that as the world becomes more complex, or there’s more information, it becomes more challenging for human cognition to do abductive reasoning well.

Dave: I thought you’d say it the other way around. Abduction develops in humans because it actually means we can handle significantly more information. Take the classic example: if you give radiologists a batch of X-rays and, on the final X-ray, you put a picture of a gorilla in plain sight, 40 times the size of a cancer nodule, 83% of radiologists will not see it. Now, you might think that’s bad news, but 99.9% of the time there will not be a gorilla in an X-ray. We don’t want to waste energy on it. Human beings evolved cognitively to reduce energy consumption in making decisions.

If you look at Andy Clark’s work and my work, we often scaffold our knowledge into the narratives of the society that surrounds us—this is distributed consciousness. Human beings have evolved to make sense of information at scale as a species, but not to be completely logical in each individual, particular decision. Like bats, we scan less than 5% of the data before we make a decision. We’re constantly hallucinating based on our own experience, other people’s experience, or imagination, and those things constantly interact with each other. We’re not intelligent cameras. We only pay attention if there’s an anomaly.

If you’re walking down the street, you don’t think about walking. But if you stumble, you start to think about it. Now, in evolutionary terms, that makes a lot of sense. If you think about the early hominoids on the savannahs of Africa, something large and yellow with very sharp teeth runs toward you at high speed, do you want to artistically scan all available data, look up a catalog of the flora and fauna of the African Veldt, and then have to decide it’s a lion, or look at best practice case studies on how to avoid lions? By that time, the only book of any use to you will be the book of Jonah from the Old Testament, which is the only example I found of an escape manual from Jesse Attractive Enlarge Carnival, written by a survivor.

We evolved to handle huge volumes of information by distributing consciousness into our collective experiences as a species. We actually save energy that way. Now, if you contrast that with AI: every time you produce a training dataset for any AI machine, it’s the carbon footprint of a transatlantic flight because of the amount of energy it has to consume.

Ross: Yes. One of the details is around the nature of human cognition in a changing information environment. I’d say, yes, we are well-suited if we can use our cognition and our perception effectively, but a lot of people are getting trapped in, as you say, very narrow tunnel vision, and not necessarily having the right scope. We want to balance between breadth and narrowness, as you say, depending on context, and that’s something not everyone can achieve in their current information environment.

Dave: I think that way of phrasing it is part of the issue. No individual is capable of achieving it. For example, what’s called the problem of abduction, which is what we had to address when I was working for DARPA, is: I have an intuitive insight, you have an intuitive insight, Fred has an intuitive insight – which of us is right? Now, you can resolve that in a power dynamic. But some of our work, for example, and this is in the European Union Field Guide, is to use your workforce as a sensor network.

You present an idea to the whole workforce, and they reply within three minutes – this is called high abstraction metadata. Taking that high abstraction point, you then look for patterns in the interpretation, and you’ll find that 17% have seen a gorilla. If you find that 17%, you’re open to talking with them. If they walked in the door and said, “I’ve seen a gorilla in the X-rays,” you’d ignore them.
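
As a rough illustration of the sensor-network pattern Dave describes – collect fast interpretations from everyone, then surface the meaningful minority rather than the consensus – here is a minimal sketch. The labels, thresholds, and function names are our assumptions for illustration, not SenseMaker’s actual workings.

```python
from collections import Counter

def minority_signals(responses, low=0.05, high=0.30):
    """Surface interpretations held by a meaningful minority.

    A near-unanimous reading carries little news; a tiny fraction may
    be noise. The band in between (e.g. the 17% who 'saw a gorilla')
    is where you go and talk to people.
    """
    counts = Counter(responses)
    total = len(responses)
    return [
        (label, n / total)
        for label, n in counts.most_common()
        if low <= n / total <= high
    ]

# Hypothetical usage: each employee replies within minutes with one label.
replies = ["on_track"] * 80 + ["hidden_cost"] * 17 + ["illegal"] * 3
print(minority_signals(replies))   # -> [('hidden_cost', 0.17)]
```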

There are mechanisms by which we can use collective intelligence. We didn’t evolve to make decisions as individuals. Don’t tell anybody in cognitive neuroscience that you take Malcolm Gladwell’s book ‘Blink’ seriously, otherwise I’d never invite you to dinner again, right? We’re really bad as individual decision-makers, but we’re actually very good in extended families and in clans.

Ross: As you say, in the past, we had effectively collective intelligence. But as we scale up, particularly to large organizations or societies, one of the challenges is having the right mechanisms or structures for collective intelligence. There’s a long way to go to both have those in place and have them more broadly adopted.

Dave: I’m a bit more optimistic on that side. There are areas where I’m fairly pessimistic, but this, I’m more optimistic. One interesting hypothesis is we know that hordes could gather faster than the fastest horse could gallop if you go back to the steppes. We know there is actually no argument, that’s because we’re really good at picking out pheromones. It’s not shamanistic behavior. It’s pheromones that spread fast. We know that pheromones are a key to determining human trust. Historically, we’ve had lots of ways. We can make decisions collectively, often of which we’re unaware.

Now, one of the things we can do with technology, as I say, is create employees as sensor networks. We’ve now completed the experimental phases of using schoolchildren as sensors into their communities, as ethnographers, every week. One of my ambitions, which we’re seeking funding for, is to have every 17-year-old in every school in the world act as an ethnographer into their community, which means we get real-time access to how people are thinking at a school-district level. We’ve proved we can do that in Sweden, in Wales, in Colombia, and elsewhere.

Technology has given us a mechanism to be the equivalent of the pheromone trace. But we have to use it wisely, because if you look at what happens with the internet, you get these clusterings of perverse narrative structures, because everybody’s fussed about algorithms to tell you what’s true or false. What you should focus on is making sure what comes into it in the first place is accurate. That’s a lot easier, and that’s what we’re doing with things like the school program.

Ross: That’s fantastic. Is there anywhere to find out more, so I can put it in the show notes? I’d love any references on that school program.

Dave: Yes, our website thecynefin.co, under citizen engagement, which is where that comes from.

Ross: Okay. I want to dig into the theme of estuarine mapping, which you describe, I believe, as more important than your Cynefin framework, which is extremely widely used. I’d love to start with where the metaphor of estuarine mapping comes from, and then to lay out, at a high level, what it is and how people can frame it.

Dave: The way my company does research is that we’ve rejected the concept of the empirical case study. Because my background is in physics and philosophy, from the point of view of physics, no social scientist ever has enough data to form any valid conclusion anyway, and management scientists tend to be even worse: they go for very limited datasets, and you get false correlations. Also, the world is changing very rapidly. We’re probably going to see two more plagues in my lifetime, and I’m 70 next year. The idea that you can take a series of cases studied by academics over the past five years and create a recipe for success from that, I think, is just fundamentally flawed.

You can see that in the Feynman cycle – nobody can find a way to do that. It also tends to be context-free. What we do instead is, given an issue or problem, we go away and study it in the natural sciences. That normally takes about five to six years. Then you start to experiment with it, and that normally takes another three to four years. Then you get something you can make into a tool or create an open-source method from. It’s like going from theoretical physics to experimental physics to engineering; that’s the way we do it.

Estuarine mapping came about when a few bits and pieces started to come together. One is we knew that energy was critical. We knew from the work of Clark and Seth and others, and this links in with Friston, that energy minimization was key in human evolution.

Then every year I go to this thing called HowTheLightGetsIn at Hay-on-Wye, which is really worth everybody knowing about. There’s the main book festival – Hay-on-Wye is known as the town of books. It’s just inside Wales on the English border, and it’s nothing but secondhand bookshops. Take a Kindle there, and it will probably get broken. Every year, they have a 10-day book festival, which is great for kids because they get to meet their favorite authors. In parallel with that, there’s a four-day festival of music and philosophy, where every 140 minutes there’s another debate among three philosophers or scientists or politicians, and occasionally half-day courses. It’s this wonderful festival, and I’ve got a lot of ideas from it.

One year I hit a lecture on constructor theory in physics. This is quite exciting. It’s come out of quantum mechanics. It’s the first attempt to describe a system as a whole, rather than find the smallest possible particle. I quite like quarks. I think we went far enough with quarks; we should have stopped there. They come in threes. They’re cubed. The concept is called ‘the science of can and can’t’. The first thing you do is identify what can’t change, so gravity can’t change. Then you identify constructors. A constructor is something that transforms things, and I don’t mean that in the sense of transformation programs, I mean it in the physics sense: it transforms things while remaining substantially unchanged in the act of transformation. At a very high level, the basic summary is: map the counterfactuals – the things which can’t be; map the constructors; and whatever has the lowest energy gradient will win.

There’s a seminal paper they wrote applying constructor theory to evolution. If you see evolution, it’s the most energy-efficient. It’s a very different way of looking at it. It’s a brilliant paper, I think. I picked that map. We’re also using the constraint-based approach to the complexity that comes from Gerardo and others. We started when actually we might be to do something like this. We started to say, let’s identify the constraints and the constructors. Technically, you can call a constructor a constraint, or vice versa. But in terms of making things practical, it makes a lot of difference.

We go to somebody and say, look, there are three things. First of all, there are actors. An actor is generally a role or a function, very rarely a person. Secondly, there are constraints. Constraints can contain things, or they can connect things. Then there are constructors, and constructors transform things: by passage – a process or ritual does that; by presence – sometimes everything changes if something else is present; or by contagion – they create imitation. Every executive can get that. We then brainstorm that, and it goes onto a grid between the energy cost of change and the time to change. There’s a key principle here: if nobody can agree on something, they break it down until they can agree. One of the reasons we’re doing that is we want an accurate situational assessment, which isn’t compromised.

Human beings are incapable of assessing a situation objectively if they’re thinking about what they should do next. If you’re in a strategy session, you’re choosing the evidence to support your own forward action. We’ve done a lot of work to prevent that, and this does that, because nobody can say what we should do. It’s all about: well, what is there? Break it down until we agree on what it is. Where is it placed on the grid? Break it down until we agree on that. What we’re doing is mapping where we can change things and where we can’t. Everything on the northeast of that grid, near the upper right, is in effect a counterfactual: the energy cost of change and the time to change are too high. Then we draw a liminal area next to that, which means we can’t change this, but somebody might allow us to, and the vulnerable area is at the bottom.

Then the really simple thing which comes next is you just take… there are nine different action types, in which you take a constraint or a constructor and you either reduce or increase the energy cost of change – that’s a project – or you stabilize it, you monitor it, you destroy it, you make it a conditional change; there’s a whole catalog of these. And you don’t ever say what we’re going to do. What you do is try to change the energy gradients of the landscape. The way I summarized it once: you want the energy cost of virtue to be less than the energy cost of sin before you even start to intervene.
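
A minimal sketch of the grid logic as described above: each constraint or constructor is placed by energy cost of change and time to change, and the zone (counterfactual, liminal, vulnerable, or the affordance landscape) falls out of where it lands. The numeric thresholds and field names are illustrative assumptions, not part of the published method.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """A constraint or constructor mapped in an estuarine exercise."""
    name: str
    energy_cost: float   # relative energy cost of change, 0..1
    time: float          # relative time to change, 0..1

def zone(e: Element, cf=1.5, liminal=1.2, vulnerable=0.3) -> str:
    """Place an element on the energy/time grid.

    Northeast of the counterfactual line, change costs too much for
    too long: don't try. Just inside it is liminal: you can't change
    it, but somebody might allow you to. The bottom band is
    vulnerable: so cheap and quick to change it may flip without
    warning. Everything else is the affordance landscape where you
    can play.
    """
    score = e.energy_cost + e.time
    if score >= cf:
        return "counterfactual"
    if score >= liminal:
        return "liminal"
    if e.energy_cost <= vulnerable and e.time <= vulnerable:
        return "vulnerable"
    return "affordance"

print(zone(Element("quarterly reporting ritual", 0.4, 0.5)))  # affordance
print(zone(Element("national regulation", 0.9, 0.9)))         # counterfactual
```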

Now, it’s proved hugely valuable. It’s taken off in six months which has never happened before. It’s proved hugely valuable in reducing conflict in strategy and without basically saying, Look, do this before you do something conventional because if you do the classic systems thinking approach, you get everybody together in a room, agree where we’d like to be, you’re probably going to end up in the counterfactual domain almost by definition because you’re going to get idealistic. This is called an affordance landscape as well. Everything southwest of that line is where you can play.

Now, the reason we call it estuarine is that I didn’t want people to confuse constructor theory with constructal law, which is Bejan’s. I’m not wild about that anyway; it’s thermodynamics, not quantum mechanics. In Deleuzian terms, it’s arborescent, so it assumes everything is always flowing in one direction, and branching and branching and branching. One day, in intense frustration with a couple of consultants who love this stuff – it’s the Enlightenment myth of steady progress to an ideal future – I said, it’s not a river, for God’s sake, it’s an estuary.

In an estuary, the water goes out, and the water comes back. You can cross it at the turn of the tide, but not when the tide is flooding. There are sandbanks that change every day. The brackishness of the estuary indicates the speed of flow. The granite cliffs only have to be checked every 20 years. That’s a powerful metaphor for what this is: a situational assessment tool. Cynefin is a decision-support framework. It’s in its 25th year now, and it’s getting more and more adoption. It is not a complexity framework; it uses complexity. This is a pure complexity framework. It takes the complexity principle that you scale by decomposition and recombination: it decomposes until you reach agreement, then recombines so that you get novelty.

Ross: I’ll put notes in the show notes for those who want to delve deeper. Just taking a step back, David Deutsch created the constructor theory. It goes to the heart of his work and makes a distinction between what is possible and what is not possible. If it is possible, then that is all the plain space that we have, which we can explore in a whole variety of maze and be able to construct or find ways and mechanisms to be able to create that. What I’d like to dig into just a little bit is the distinction between the hard sciences and the social sciences. You describe some of the structures where you’re taking the mechanisms or the approaches of physics. But these are also, you’re applying this to some lot less structured or easily quantifiable structures like social sciences. How does that map from those hard science approaches through to the social sciences?

Dave: We call it theory-informed practice – I’m going back to the ’70s now, when we said practice makes perfect. Now, there’s a problem in social science, and there’s a major crisis in psychology at the moment, because people are trying to replicate the original experiments, and in the main they don’t replicate. If you look at adult development theory, which is a particular bête noire of mine, I now agree completely with Nora Bateson that it’s eugenic in nature. Everybody has an evidence base to support completely different models of development stages, because what they do is they have a hypothesis, they test the hypothesis on people, and lo and behold, the hypothesis is confirmed.

In the natural sciences, other people repeat your experiments. You may have the theory, but other people can check the theory, and then people practice it. So if you haven’t got replicable experiments by third-party agents, the best you’ve got is explanatory power. There’s nothing wrong with that. For example, we use Deleuze in epistemology a lot, particularly assemblage theory. But I can map assemblage theory to a strange attractor in complexity science. What we’re doing is saying that the way human beings make decisions, the way systems work, gives you a hard core that you can rely on. You don’t have to rely on cases. So you start with that, and then you test experimentally to see if you can achieve results, but always consistent with the theory.

On the other hand, I would say that, even more than the social sciences, the humanities have explanatory power, and that’s what you see in philosophy and anthropology. You won’t see anybody with respect in philosophy or anthropology trying to pretend they’re real scientists through surveys. Warren Bennis called it physics envy, which is a delightful play on words. The use of the humanities, with their explanatory power and abductive reasoning capability, combined with physics to give you a scaffolding around how things work, is a much safer way of going forward in conditions of uncertainty.

Ross: Absolutely. This takes us back to the beginning, in the sense of abductive reasoning being a human attribute and AI using very different structures. There are dangers in using AI to support our cognition, decisions, framing, thinking, and experience, but there are also, we would hope, some opportunities. I’d love you to frame the current divide between what we call AI, in terms of its thinking structures, and human cognition, and where those could be brought together to create something better than the sum of the parts.

Dave: I wouldn’t be worried if it was going to be intelligent. But it isn’t. It’s just a super-fast set of algorithms. That’s what’s really scary about it. There’s a book by Neal Stephenson (he and I have worked together with the Singapore government) called ‘Dodge in Hell‘, which generally is a bad book apart from the first three chapters which are brilliant which basically positive future in which only the rich can have their information curated. Everybody else is sold to an algorithm. There’s an example of somebody crucifying themselves because they sell to a religious group who want as an example. To be quite frank, that’s pretty close to where we are, there are AI bots now that tailor a lie to people and give it to them on social media as they’re approaching the ballot box.

This stuff is significantly scary. The recent debacle really worries me, because the people who wanted to inhibit, or at least know how to control, these things lost out to the religious AI people who, within the American tradition of the rapture, think AI is miraculously going to save humanity. That really worries me. Also, these machines can’t think abductively. And by the way, I’m dyslexic. Dyslexics think abductively all the time, and we can’t understand why other people haven’t seen the connections. I can’t read a book a line at a time other than with significant effort, because I’m looking for patterns. What we can do, and this is where we did our original work for DARPA, is focus on the big thing in AI: what are the training datasets?

We originally developed the software SenseMaker to create epistemically balanced training datasets, to avoid the problems you see in the stochastic parrots paper, which was written by a Google employee and published by an ex-Google employee because Google didn’t like what she and the co-authors said. RAG isn’t enough; the real focus needs to be on training datasets. Now, if you do that – and this is stage three of estuarine mapping, which we’re going to come onto next year or the year after – then I can create what are called anticipatory triggers. I can use past terrorist examples, I can use examples of people finding novel solutions to poverty, and so on, to trigger humans very quickly to pay attention to something which is showing a similar emergent pattern.

Now, there are two things that start to replace scenario planning, because scenario planning relies on historical data in some way or other; it’s not just imagination. One is that estuarine mapping is a new foresight tool, because whatever has the lowest energy gradient is what’s likely to happen next, and that’s probably more predictable than a scenario. The other is to create training datasets from history at a micro level – this is the decomposition – that can trigger alerts, so that human beings will pay attention to anomalies before they become anything more than anomalies.

That’s a key principle in complexity. It’s called weak signal detection. You want to see things very early so you can amplify the good things and dampen the bad things. Conventional scanning only comes to them late. With the right training datasets and the right algorithms, we can hugely improve that capability in humans. There are things we can do with it. But the main danger at the moment, to be honest, is that human beings will become dependent on it, and human beings like magical reasoning. I can smell snow coming in, but my children can’t. It doesn’t take long for humans to lose capability.

Ross: Yes. Over-reliance is one of my biggest fears, and we’re moving toward it. In a lot of what you’re describing, the curation of the training data is fundamental. Is there anything in terms of the interfaces, or how we pull things from the large language models, that can also be useful?

Dave: We’re currently experimenting. We’ve been doing this now for a couple of companies; we’re about to release it for other people to use with measuring attitudes to the use of AI in companies. Because attitudes are lead indicators, compliance is a lag indicator. What we’re starting to do is to test the degree to which your employees can discriminate between real data and AI data because if you lose the discriminatory capacity, then you’ve got a problem. What we’re looking at is a rolling program in which people are constantly seeing anomalies because they suddenly discover, “Oh, my God, that wasn’t a real person or Oh, my god, that was a person.”

Human beings learn more from anomalies than they do from anything else. That’s what we call microstimulation and micronudging. That’s going to be critical: the ability to understand the limits and the capabilities. That’s one thing. Political control is the thing that worries me most, because I don’t see it happening. But that’s a wider danger. The big danger with geoengineering is that somebody like Musk will just decide to do it without any control. That’s the problem we’ve got: we’ve got an environment in which super-rich individuals from a very particular culture, with a very particular type of cognitive bias, are determining the world, and that’s really scary. We’re being run by autistics.

Ross: Certainly. To round out, are there any recommendations you would make to listeners which could be simply to dig deeper into your work or anything else that they can do to enhance their cognition in this extraordinary world we live in today?

Dave: The key thing for me is that people need to read broadly, and we need to stop reading the shallow, skimming books – the airport-bookstall books that tell you how to use cognitive neuroscience to change your personality type. If you read those, you deserve everything you get. But look at Andy Clark’s latest book; I could give you a whole list of books – sorry, Andy Clark’s was the one I finished the other day. There’s eminently approachable basic reading in cognitive neuroscience and in physics. Helgoland is a really good introduction to quantum mechanics; you can get the essence of it. You need to read broadly and widely. Read DeLanda. I don’t expect anybody to read Deleuze in the original, but DeLanda translates him in a way that is authentic to the original. Read broadly, think broadly, talk broadly, and do not allow your information sources to be controlled and limited for you.

Ross: That’s fantastic and very important advice. Thank you for your time, your insights, and all your work, Dave. I really appreciate it.

Dave: It’s a pleasure.
