“For me, decision-making is not just a company making decisions in a certain way; it’s also what they are enabling in the world, for society, businesses, organizations, and countries: what decisions they are allowing them to make to be more resilient.”
– Roger Spitz
About Roger Spitz
Roger is an international bestselling author of the four-book collection “The Definitive Guide to Thriving on Disruption”, President of Techistential, a climate and foresight strategy practice, Chair of the Disruptive Futures Institute, and a frequent keynote speaker globally. Roger was previously Global Head of Technology M&A at BNP Paribas and has two decades of experience leading investment banking and venture capital businesses.
Websites:
Book: The Definitive Guide to Thriving on Disruption (four-book series)
Instagram: @disrupt_futures
Twitter: @disrupt_futures
What you will learn
- Tech Existentialism and how it’s changing the nature of decision-making (03:09)
- Value chain of decision-making and how it progresses from descriptive to prescriptive (03:50)
- Highlighting the difference between complicated and complex systems (05:45)
- The impact of social media and information on decision-making (06:58)
- Potential risks of over-reliance on machines (07:45)
- The cost of making wrong assumptions and what has to be done (13:13)
- The need to take a step back from information overload (15:11)
- Growing interest and demand for capacity building in areas related to foresight and resilience (19:26)
- Importance of having strategic and emergent agility (24:52)
- Introduction to Climate Intelligence and its impact on decision-making (26:41)
- Three important things when spotting weak signals (32:35)
Transcript
Roger Spitz: Amazing to be with you, Ross.
Ross: One of our many common areas of interest is the future of strategic decision-making. Why do we need to be thinking about the future of strategic decision-making?
Roger: That’s a very fine point. What’s changing? Because we’re humans, we have a brain, we make decisions. There are a lot of things that I know you’re also very tuned into. For the anecdote, before I answer: Thriving on Disruption is meeting Thriving on Overload. Those two elements, disruption and overload, are contributing to having to consider decision-making differently. We box it into a few things. Firstly, it’s simply that the exclusivity of decision-making is no longer necessarily just for humans. Whether computers understand the decisions they make, whether they’re imitating the brain, is almost a separate debate, insofar as the outcomes of what they might do have implications for decisions that may not be taken by humans. So the first thing is really that: the delegated authority, which we call Tech Existentialism. What is technology and existentialism? What is that world where we no longer have that exclusivity?
The second thing is the decision-making value chain itself. Think of getting information, like the OODA loop: you get information, you get signals; that’s kind of descriptive. The computers analyze it, it’s data analytics, computers have been doing this for decades, fine. Then there’s a bit more predictive. That’s algorithm-augmented: machine learning, pattern recognition. You can process tons of data in drug discovery, all the things you know; you can test and process millions and billions of things and decide what could make sense for a particular drug, which humans cannot do. So that’s predictive; it’s supporting decision-making. Now, the thing that interests me most is that value chain where decision-making is moving to prescriptive by machines. Prescriptive is really deciding the preferred option. It’s having that agency, or at least authority with action triggers, to make autonomous decisions.
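To make that progression concrete, here is a minimal sketch of the descriptive-to-prescriptive value chain Roger describes, including the human-in-the-loop versus autonomous-trigger distinction that comes up later in the conversation. All function and variable names are illustrative assumptions, not anyone’s actual system:

```python
# Illustrative sketch of the decision-making value chain:
# descriptive -> predictive -> prescriptive, with an optional autonomous
# action trigger. Every name here is hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    name: str
    expected_value: float

def descriptive(signals: list[float]) -> dict:
    """Summarize what happened (classic data analytics)."""
    return {"mean": sum(signals) / len(signals), "n": len(signals)}

def predictive(summary: dict) -> list[Option]:
    """Project what could happen (pattern recognition), yielding options."""
    base = summary["mean"]
    return [Option("hold", base), Option("act", base * 1.2)]

def prescriptive(options: list[Option]) -> Option:
    """Recommend the preferred option."""
    return max(options, key=lambda o: o.expected_value)

def decide(signals: list[float], autonomous: bool,
           human_review: Callable[[Option], bool]) -> str:
    recommendation = prescriptive(predictive(descriptive(signals)))
    if autonomous:
        # Machine has an action trigger: the recommendation IS the decision.
        return f"executed: {recommendation.name}"
    # Otherwise the human keeps the final say over the machine's prescription.
    approved = human_review(recommendation)
    return f"{'accepted' if approved else 'rejected'}: {recommendation.name}"

print(decide([1.0, 2.0, 3.0], autonomous=False, human_review=lambda o: True))
```

The point of the sketch is the last step: prescriptive output only stops being decision support and becomes decision-making when the action trigger is switched on.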
Now your focus, Thriving on Overload: how much information is there to process? Do you need support for that? How good is the support you’re getting from computers? So that’s two elements: the exclusivity that’s being delegated, and the moving up the value chain. We quite like to look at a framework which is obviously very helpful for sense-making and responses to different environments: Dave Snowden’s Cynefin Framework.
Ross: Yes.
Roger: He distinguishes, for your listeners, what is complicated and what the responses to complicated are. With complicated, you can rely on experts, you have known unknowns, it’s a more linear, predictable environment, and cause and effect can be anticipated. Then complex is different. It’s nonlinear, so it’s less predictable or not predictable at all, like the Amazon River: if you change something, how does it affect everything else? Complicated examples are how you send a probe to Mars and all that; you can do the calculations and get expertise on how to fix a plane. In the complex environment, cause and effect can’t necessarily be determined; it’s an emergent discovery mode, trial and error. Cause and effect can’t necessarily be established, as we said; it’s nonlinear, and with multiple drivers of change you cannot necessarily establish that it’s just A or B, etc.
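For quick reference, here is a compact summary of the two domains as discussed here; this is our paraphrase of Dave Snowden’s framework, not an official rendering:

```python
# Our paraphrase of the two Cynefin domains Roger contrasts.
cynefin = {
    "complicated": {
        "unknowns": "known unknowns",
        "causality": "linear; cause and effect can be anticipated",
        "response": "sense -> analyze -> respond; rely on experts",
        "example": "sending a probe to Mars, fixing a plane",
    },
    "complex": {
        "unknowns": "unknown unknowns",
        "causality": "nonlinear; cause and effect clear only in retrospect",
        "response": "probe -> sense -> respond; emergent trial and error",
        "example": "the Amazon River ecosystem, hyperconnected markets",
    },
}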
That’s a lot of the real world, and more and more so in a world that is hyper-connected and technological, where things are very much playing off each other and can be self-reinforcing, and the speed of that can be extraordinary. If you look at Silicon Valley Bank, one interesting aspect, and we can put aside the many aspects of the hundreds of case studies in the post-mortem, but one specifically that I think is interesting to our discussion, is that it’s probably the first time social media and information worked the way they do now; we didn’t have that in the 2008 crisis. Instantaneously, if one VC mentioned something to a customer, or startup, or whatever, that was pretty much amplified within minutes to tens if not hundreds of millions, or potentially billions, of people.
That instantaneity, the warp speed of many, many things, basically means that decision-making for the person in that complex environment is different. When you add all that together, my worry and my concern, in terms of what it means for the future of decision-making, is this: machines will continue their course and will hopefully have improved ethics and safeguards, and we should make the most of the augmentation, because there are benefits to AI as well. But putting that aside, my worry is that the focus is so much on the machines that we forget humanity, what we as humans need to do so that decision-making remains effective.
What are humans good at in decision-making? And then we’ll kind of wrap up, I don’t want to monopolize it, I’d love to get your thoughts and reactions as well. Kinesthetic intelligence: you use the temporal lobe, timing, your senses, and all that. Emotional intelligence: we have instinct, intuition, feeling, self-awareness, consciousness, and you don’t need algorithms; it’s not the prefrontal cortex where you have that emotional intelligence. Now, the one thing with what we’re talking about, though, is that for strategic decision-making, which is our topic, in the complex quadrant of Dave Snowden’s Cynefin Framework, where there are those unknown unknowns, computers aren’t that good yet, because it’s nonlinear, because there’s no data, and the future is unpredictable because cause and effect does not determine it, etc. But with natural language processing and machine learning, AI is quite good at being emergent, because everything is immediately taken into account.
The machines are learning fast and potentially encroaching, or even if they’re not, they are being delegated the authority to process things and make decisions even where it should really be the humans who keep the edge. The problem is that humans are not upgrading themselves to the reality of our nonlinear, complex, unpredictable world. If we’re not changing the way leadership teams and our minds are cabled, if we’re not changing the educational systems, we’re not good at making sense of the complex; we’re not good at processing and making decisions in those environments because we’re linear, we’re just not capable like that. Long story short, when I think about the future of decision-making, I think more about what humans need to do to upgrade their capabilities in the context of Tech Existentialism, where we don’t have exclusivity over decision-making, where we’re de-skilling by delegating authority to machines, and where those machines are moving up the value chain and learning every day.
Ross: Yes.
Roger: Are we?
Ross: Yes, it’s an extraordinary time to be looking at that. Let’s unpack it. I completely agree that this has to be about augmenting human cognition, and the augmentation of human cognition could be partly in terms of delegation where relevant. But it comes back, in a way, to the prescriptive that you described, where the analysis is provided by machines. But prescriptive isn’t delegation; it is prescribing something, and you can still choose whether or not to take that medicine.
Roger: If you’re capable of making that decision.
Ross: Yes. That’s the nub of it: precisely when we start to have, for example, and this is probably a sequencing issue, first AI provides a recommendation, hopefully with justification or some kind of parameters, then humans say yes or no, or modify it. Now there are various other sequences, ones where humans provide analysis and then get AI to assess different scenarios, or things like that. But what’s the nub? What is the best way, or ways, in which we can take that prescriptive approach, where you can get rich multivariate analysis from AI, but where it still remains ultimately a human decision?
Roger: I completely agree with you; prescriptive is not autonomously taking a decision, it’s providing the options to make a decision. Where I would add nuances is that we’re less good at bluffing than machines are. Several things can happen with that prescriptive, which theoretically is not the decision-making but which can quickly become a proxy for decision-making. Number one: are we overwhelmed by the complexity? Do we understand what the machine has done, what the machine is prescribing, and what to do with it, and do we have the situational awareness and understanding to make a decision?
We can see it every day: with the pandemic, with a lot of recent geopolitical events, in this environment which is the reality of a world that is nonlinear, unpredictable, and complex, the cost of relying on assumptions is going through the roof. We used to make the wrong assumptions all the time, but the cost of those assumptions was less severe. Now the cost when the assumptions of the narrative are wrong is much higher, so we have to have a different mode, acknowledging the real nature of our world, including in our decision-making. Are we able to? Are we changing the incentives, the governance structures, the educational systems to allow us to sufficiently understand what machines are doing, their limitations, whether it’s garbage coming out, to understand the situations well enough? And are we sometimes delegating things where we may be better able to make decisions than the machines? Even machines themselves can make mistakes or can be wrong through their algorithms, and God knows what happens then.
I’m not dismissive of technology or AI, because there are a lot of miracles happening through it. Often there are tensions, dualities, and paradoxes. But let’s move to your question about how one manages that decision-making, and what humans do; we can unpack that. For me, I use technology constantly, and I guess that’s what you’re driving at. There are uses where technology augments us, whether it’s note-taking, or translations with machine learning, or transcriptions, or indirectly drug discovery, because the pharmas are using those platforms today. Indirectly, one benefits when you use it every day, at the personal level, at the societal level, etc.
What I then try and do, and this is what I think humanity needs to do, is focus a lot on what we call the triple A. The first A is being more anticipatory: thinking about the future, because we need to imagine broader scenarios, and we need the time to think of things differently, to unwind.
We need to take a breath. There’s a lot of noise and overload, as your book correctly points out. You need to isolate time to think about different things, or to think differently from the linear approach, and I think foresight and futures are a good way of doing that; it’s quite broad. You need to isolate the mind, sometimes even spending time doing other things, and invest 10-20% of your time to connect the dots and see things differently. Compared with always looking at things the same way and only developing more expertise, being T-shaped and benefiting from the compounding returns of investing that time pays off. You have to carve out the time, the breathing, and the thinking differently, whether it’s Eastern philosophy with Shoshin, the Beginner’s Mind, or meditation. And then you need imagination and inspiration to accept that the world may not be as it is, and to be able to challenge that. But this is very different from the educational systems, leadership, governance structures, and incentives that we have, which give you a carrot to learn things for which there are ready-made solutions and answers.
Ross: Let’s get into the real world. These are all important topics, and unpacking them as you have in your books is really valuable. You and I both work with boards and executive teams. You get personalities, you get greater or lesser openness to different ideas or to new technologies. What’s the reality? Have you seen any boards or executive teams with superior, advanced, or at least interesting processes of decision-making? Or do you have any interesting anecdotes to recount about how you have tried to get groups to adopt better decision-making processes?
Roger: Yes, you’re right, there’s a lot to unpack. I like the provocation around getting real. A few things. First of all, it’s hard. You’re talking about changing mindsets, going against how we’re cabled to think, which is maybe more linearly. It’s hard, no doubt about that. That’s why these are complex challenges, and that’s why we’ve deferred them for decades, even though we knew the consequences of some of them. No doubt about the challenges. The second element is that at present, unfortunately, quite a lot of organizations, countries, and institutions are still in business-as-usual operating mode, relying on the usual assumptions and treating the world as controllable, predictable, and linear.
Having said all that, I find there are more countries beefing up what they’re already not bad at in terms of being anticipatory, like Singapore, Canada, and the Scandinavian countries, and those who aren’t are starting to realize the importance of it and to be more intentional about it. The US, maybe to a degree, and others.
Ross: Is this around policy?
Roger: Yes, around policy; and organizations likewise, around capacity building for better resilience and foresight strategy. I see it myself. One of the reasons I’m able to do what I do today, which five years ago would have been a bit more esoteric, is that demand. It’s not that there weren’t futurists or very bright people like you doing this for a long time, but for someone coming out of nowhere, out of 20 years of M&A, to be able to fit in and get traction for capacity building, for anticipatory governance, for resilience, for strategic foresight, I’m not sure these kinds of topics had this much demand before. You’ve always had a very small number of organizations that were cabled for that. I’m finding there’s more and more interest, and not just in keynote talks, but in executive programs and pathways to capacity building.
It’s not easy; just because you do a good session, even if you’re effective and people are well-intentioned, doesn’t mean the rest is easy. But I’m finding a lot of interest in this. I’m even finding the same for education, for that matter. I’m seeing private groups of educators asking me to do sessions for them, to think about how to better equip their students to be capable with these topics. If you take the leverage points of Donella Meadows, when you’re dealing with education, and then policy, and then the corporates, little by little you’re doing a lot along the value chain of the drivers for change. It does come back to your question: being anticipatory, which is one of our triple A’s, and strategic foresight and those capabilities, I’m seeing more and more demand for, and it’s better to have the awareness and the demand than not. I think we’re less and less reliant on a few well-known institutions that have this way of being anticipatory in their DNA. I think it’s becoming more relevant to more people.
Ross: One of the challenges with foresight is anticipation. You can be in a mindset of anticipation, and you can apply foresight methodologies internally or get some great people externally and so on. Originally, the whole thing was scenarios to strategy: we’ve got some great scenarios, how do we turn those into strategy? That’s something I’m interested in. The strategy is, again, around the decisions. There are plenty of organizations that go out and get insights about the world far more than other organizations and leaders do; they’re more open, they look for things, they get help, they build foresight teams and so on. But still, how does that flow through to an actual decision? That’s still the nub. You’ve introduced more complexity; in a way, you’ve made it harder, because you’re bringing in more complexity. From that anticipatory frame, or the actions you take, how does that then flow through into a decision-making process, and a decision at the end of it?
Roger: That’s one of the reasons why we don’t focus only on that. For us, capacity building is not just foresight strategy or capabilities because, to your point, you need to think about the outcomes. The simple answer is that incentives determine outcomes. That’s not me, it’s Munger, of course. Ultimately, there are certain things in terms of the way leadership is cabled, the type of things that achieve alignment, the ways of achieving certain outcomes, that come through incentives and other means. If an organization is serious about changing how people think and behave, and is outcome-focused rather than ticking the box of “we have a team that does foresight” or whatever, clearly you go to the core of it. That’s how we think about things.
Anticipatory is this kind of foresight, futures thinking. We then add anti-fragile, borrowed from Nassim Taleb, which is how we look at the organization. How rigid is it? How are decisions made? How resilient to shocks is it? How does risk management work in terms of asymmetries? Are people still thinking that if it’s just 1%, it’s fine? If that 1% ends the world or your company, it’s an existential question; you need to look at the asymmetrical risk. Anti-fragility, which includes ways of having skin in the game, is linked to decision-making and to incentives, and actually has very important elements, which we unpack quite considerably in our frameworks in volume two. Even though we didn’t invent anti-fragility, I think we applied it quite well in how it ties in with anticipatory and futures thinking. Then the third piece is, again, decision-making; scenario analysis and all that is very important, but everything is there to inform decision-making today.
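To make the asymmetry concrete, here is a quick back-of-the-envelope illustration (ours, not from the book) of why a “just 1%” chance of ruin compounds into an existential question:

```python
# A 1% chance of ruin looks negligible per exposure, but it compounds:
# survival probability after n independent exposures is 0.99**n.
for n in (1, 10, 50, 100, 300):
    print(f"after {n:>3} exposures: {0.99**n:6.1%} chance of survival")
# After 100 exposures you survive ~36.6% of the time; after 300, ~4.9%.
# Expected value is meaningless once a single tail event is terminal,
# which is why asymmetric (existential) risks need separate treatment.
```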
Coming back to emergence: if you take complexity, everything is emergent, yet today’s reality exists, so how do you constantly zoom in and zoom out? What strategic and emergent agility do you have to zoom in from your longer-term envisioning to what is emerging today? That, again, has different elements that organizations can take on, in terms of decision-making, decentralization, training of people, or the right incentives. There’s no quick fix. But I do personally believe that if you have incentives focused on the right outcomes, if people are more aware of it, if you bring in talent that’s more open to adapting, and if you have an understanding of being anticipatory and anti-fragile, plus the agility to make emergent decisions today, reconciling long-term objectives instead of constant firefighting, because you’re anticipating, you’re seeing the world differently, etc., then those combinations of things, with agency and alignment, are more likely than not to help. But yes, it’s not easy, and not everybody is prepared to work through an understanding of the different facets.
Ross: Can you give an example? Let’s talk specifics. Presumably, either something you’ve seen where there’s public information, or if there’s a client where you can disguise the details, just some facet of whether it worked or hasn’t worked of how organizations or leadership teams can change, or improve. What are the specifics of how this has happened and how we’ve seen shifting in decision-making?
Roger: Let’s take something concrete. I’m, for instance, on the Climate Intelligence Council of a startup called Cervest, which is an AI company focused on climate intelligence. What’s climate intelligence? Climate intelligence allows you to make decisions today on specific assets or investments based on very long-term uncertainties: what the state of the climate, the weather, and the resilience needed might be in 5, 10, 15, or 20 years. It’s long term, it’s uncertain, and you basically need to make decisions today within that. It fits quite well with the different parameters. They have 100-plus climate scientists and they’ve developed AI software. It’s open source, and they spend a lot of time educating clients and the world in many different ways, with support from people like me and others, around the world views and understanding what’s happening.
They then support with disclosures: understanding what’s happening with the regulators in terms of future disclosures that would provide feedback loops, what good and bad disclosures look like, and helping companies prepare their accounts with the required disclosures. As part of that, they evaluate and provide a dashboard which helps you, without predicting, because we don’t have that certainty, to map out and plan the different possibilities for your assets over the next 5, 10, or 15 years across the different possible climate eventualities, and to see what your competitors and your assets are doing.
If you’re a hotel chain with 200 hotels, you need to decide today. We all understand that mitigation alone is not enough for climate; we need to make our asset base, and our lives, more resilient and adaptable. What does that mean if I have a supply chain, if I am a city with infrastructure, if I’m a hotel chain with 200 hotels, if I’m about to make an acquisition? This hits all of that. If you take the leverage points of Donella Meadows in a complex, systemic world, it’s looking at all the different facets. As a CEO or board or management team, you can see the different exposures and different risks. You understand it’s not 100% sure that any of it will happen, but a lot of information can be processed and mapped out, you see the different scenarios, and you can decide how to make yourself resilient or not for the different eventualities. That might sound a bit basic, it’s just investments, but actually it’s huge, because it’s about the resilience of the world.
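The shape of that decision problem can be sketched in a few lines. To be clear, this is not Cervest’s methodology or API; the assets, scores, and scenarios below are invented purely to illustrate mapping a portfolio against climate eventualities:

```python
# Hypothetical climate-scenario mapping for an asset portfolio.
# All names and numbers are invented for illustration only.

assets = {"hotel_coastal": 0.9, "hotel_inland": 0.3, "warehouse_river": 0.7}
# exposure score per asset (0 = insulated, 1 = highly exposed)

scenarios = {  # plausible climate pathways over a 5-20 year horizon
    "low_warming": 0.5,
    "moderate_warming": 1.0,
    "high_warming": 1.6,
}  # hazard multiplier per scenario

for scenario, multiplier in scenarios.items():
    print(f"\n{scenario}:")
    for asset, exposure in assets.items():
        risk = min(exposure * multiplier, 1.0)
        action = "adapt/divest" if risk > 0.8 else ("monitor" if risk > 0.4 else "hold")
        print(f"  {asset}: risk {risk:.2f} -> {action}")
```

The value is not prediction but comparability: the same asset base viewed across eventualities, so decisions about resilience can be made today.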
If there’s flooding; we’ve had big winds in San Francisco, where suddenly even Salesforce Tower and buildings built by the best architects and engineers in the world had glass falling from 80-story or 30-story buildings because of how strong the wind was, leaks everywhere, things collapsing. And this is California; there are places with more extreme weather. So if you multiply that across the world, for insurance, for decisions, for companies, it’s potentially trillions of dollars that are exposed, not to mention the number of lives and all that.
For me, decision-making is not just a company making decisions in a certain way; it’s also what they are enabling in the world, for society, businesses, organizations, and countries: what decisions they are allowing them to make to be more resilient. I think this is quite a good example. I’m not pushing for them; I’m rooting for them, but I’m not lobbying for them in that sense. But it really hits the different elements: long-term uncertainty, emergent decisions today that need to be concrete, and building resilience into our systems and thinking. If you have the information and the mindset and you can see, you’re able to make the decisions; you’re getting the data to then decide what decisions to make. It’s not a black box in terms of what information is provided. As an example, I think it’s not a bad one.
Ross: Yes, certainly; in terms of climate, there’s a lot of uncertainty, the impacts are massive, and decisions now are critical. As you suggest, having both the right sorts of information and the frameworks to assess it can be extraordinarily valuable, and the scope of the information is critical. To round out: you obviously, I would say, Thrive on Overload; you soak in lots of information, you make sense of it, you are a foresight practitioner among other things. So what are three things you do in your practices and in how you work with information that might be useful for other people to consider in how they interface with information?
Roger: When I saw your work, I was very interested, because first of all, the “thriving on” framing was something I had approached as well. It’s probably one of the biggest challenges today, irrespective of whether one uses AI for support or not: simply the noise from everything else. To your point, as you professionalize foresight and futures, the trick is that you’re seeing even more things, because you want to go broad, you’re going to be on the fringe of everything you know.
I would say the three things are the following, for me personally. Number one, when I’m looking to spot weak signals, is to start allowing the themes to connect the shifting dots. Serendipity has a big role to play in that, which is why, for all the wonders of AI, serendipity is often where I get the greatest ideas. I find them great; whether the outside world does is another matter. But the ideas I’m proud of and happy about often come from serendipity. That’s just scanning a heck of a lot and seeing what patterns and clusters emerge, not in the AI sense of having processed a million random things, but in terms of you, Ross, or Sarah, or me, Roger, connecting the shifting dots.
That first thing is broad: we’re going outside our fields of expertise and everything we know, going wide, deep, etc. The second thing is thinking about the next-thought implications of that. At that point, I’m almost intentionally trying to drop the information so as not to be overloaded. I’m trying to intentionally isolate myself, whether through meditation or the Beginner’s Mind, Shoshin, from Eastern philosophy, to get a blank page to imagine and be inspired by the multiple possibilities and think about the next-thought implications of what that means. There is no data on the future. That is important for me. That is where I think humans, to our earlier discussion, need to enhance our capabilities of imagining the next-thought implications, to enhance preparation and avoid being surprised by things while rushed or unprepared.
The third thing is scanning continuously, in a way, to evaluate and compare how things are changing. Again, it’s very subjective; sometimes it’s even inspiration. It’s not necessarily just monitoring data and creating feedback loops from what the computer says; it’s thinking about how new things feel, how they might compound, how they might have impact, which then links to the next-thought implications. For me, it’s connecting the shifting dots, imagining the next-thought implications, and thinking about that in a dynamic world, because the world is constantly updating itself, so our ideas, thoughts, and perspectives of the world also need to be emergent and constantly updated.
Ross: Yes, that’s great. As I was referring to before, the humanity of it; this is what humans are good at, the way our brains work around cognition. What’s fantastic to pull out from this is the way our minds can bring things together. Thanks so much for your time and your insights today, Roger; it’s been fantastic. Just to round out, where should people go if they want to find out more about your work?
Roger: Thanks a lot for that. To follow all the content, whether on LinkedIn, YouTube, etc., follow the Disruptive Futures Institute; the handles on Twitter and Instagram are @disrupt_futures, but Google “Disruptive Futures Institute” and you’ll be fine. If you’re specifically interested in learning more about the books, there’s a dedicated website, thrivingondisruption.com. We also have detailed articles on every single one of the volumes so you know what’s in them, as well as podcasts and a lot of other information. So: thrivingondisruption.com for the guidebooks, and then generally follow the Disruptive Futures Institute. We’re trying to beef up how much is free and available for anybody.
Ross: Fabulous, great resources; that’ll all be in the show notes. Thank you, Roger.
Roger: My pleasure, Ross. Have a good rest of the day. Enjoy it.