“We wanted to see what the effect of AI might be on forecasting accuracy… to our surprise, we find that even when the model gives biased or noisy advice, human forecasters still improve—something we didn’t expect.”

– Philipp Schoenegger

“I kind of call these Gen AI systems a mirror. Pose it a question, play with scenarios, and see what comes out. It’s like an accelerant for thinking—pushing the boundaries of what’s possible.”

– Nikolas Badminton

“Future thinking is an everyday practice. It’s about becoming more aware of what’s happening around us, sensing signals, and collectively imagining what’s next.”

– Sylvia Gallusser

“The question of the future isn’t ‘How creative are you?’ but ‘How are you creative?’ Because what we can imagine, we can create—and we have a responsibility to build a better future.”

– Jack Uldrich


About Philipp Schoenegger, Nikolas Badminton, Sylvia Gallusser, & Jack Uldrich

Philipp Schoenegger is a researcher at London School of Economics working at the intersection of judgement, decision-making, and applied artificial intelligence. He is also a professional forecaster, working as a forecasting consultant for the Swift Centre as well as a ‘Pro Forecaster’ for Metaculus, providing probabilistic forecasts and detailed rationales for a variety of major organizations.

Nikolas Badminton is the Chief Futurist of the Futurist Think Tank. He is a world-renowned futurist speaker, award-winning author, and executive advisor, with clients including Disney, Google, J.P. Morgan, Microsoft, NASA, and many other leading companies. He is author of Facing Our Futures and host of the Exponential Minds podcast.

Sylvia Gallusser is Founder and CEO of Silicon Humanism, a futures thinking and strategic foresight consultancy. Previous roles include a variety of strategic roles at Accenture, Head of Technology at Business France North America, General Manager at French Tech Hub, and Co-founder at big bang factory. She is also a frequent keynote speaker and author of speculative fiction.

Jack Uldrich is a leading futurist, author, and speaker who helps organizations gain the critical foresight they need to create a successful future. His work is based on the principles of unlearning as a strategy to survive and thrive in an era of unparalleled change. He is the author of 9 books including Business As Unusual.

What you will learn

  • How AI-augmented predictions enhance human forecasting
  • The surprising impact of biased AI advice on accuracy
  • Why generative AI acts as a mirror for future thinking
  • The role of signal scanning in spotting emerging trends
  • How creativity and imagination shape the future
  • The evolving nature of community in an AI-driven world
  • Why unlearning is key to adapting in a changing era

Episode Resources

Transcript

Ross Dawson: Now, it’s wonderful to see the work which you’re doing. Speaking of which, recently, you were the lead author of a paper, AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy.

So first of all, perhaps just describe the paper at a high level, and then we can dig into some of the specifics.

Philipp Schoenegger: Yeah. So the basic idea of this paper is: how can we improve human forecasting?

Human judgmental forecasting is basically the idea that you can query a group of interested people, sometimes laypeople, about future events and then aggregate their predictions to arrive at surprisingly accurate estimates of future outcomes.
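The aggregation idea Philipp describes can be sketched in a few lines. The median aggregator and the Brier score below are standard choices in the judgmental-forecasting literature, not details taken from the paper:

```python
from statistics import median

def aggregate_forecasts(probabilities):
    """Aggregate individual probability forecasts (0..1) with the median,
    which is robust to a few extreme forecasters."""
    return median(probabilities)

def brier_score(forecast, outcome):
    """Squared error between a probability forecast and the 0/1 outcome;
    lower is better (0 = perfect, 0.25 = always answering 50%)."""
    return (forecast - outcome) ** 2

# Five forecasters on "Will event X happen?"
crowd = [0.60, 0.70, 0.55, 0.90, 0.65]
crowd_forecast = aggregate_forecasts(crowd)   # 0.65

# Suppose the event happens (outcome = 1):
crowd_error = brier_score(crowd_forecast, 1)  # ~0.1225 for the crowd
baseline_error = brier_score(0.5, 1)          # 0.25 for the 50% baseline
```

The crowd median beats the 50%-on-everything baseline Philipp mentions later, which is exactly the comparison the accuracy scores make.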

This goes back to work on Superforecasting by Philip Tetlock, and there are a lot of different approaches on how one might go about improving human prediction capabilities.

There might be some training, such as what was called The Ten Commandments of Forecasting, on how you can be a better forecaster. Or there might be conversations where different forecasters talk to each other and exchange their views.

And we want to look at how we can—how we could—think about improving human forecasting with AI.

I think one of the main strengths of the current generation of large language models is the interactive nature of the back and forth, having a highly competent model that people can interact with and query whenever they want really.

They might ask the model, “Please help me on this question. What’s the answer?” They might also just say, “Here’s what I think. Please critique it.”

And so this opens up for human forecasters a whole host of different interactions, and we wanted to see what the effect of this might be on forecasting accuracy.

Ross: So that’s fascinating. I suppose one of the starting points is thinking about these forecasters. Just so people can be clear: human forecasting in complex domains has been superior to AI forecasting, because the models don’t yet have those capabilities.

So you’re saying humans are better than AI alone, and the results of the paper suggest that humans augmented by AI are superior to either humans alone or AI alone.

Philipp: Based on the papers I have published so far, yes, but depending on when this airs, there might be another paper coming out that adds another twist to this.

But yes, in early work, we find that just a simple GPT-4 forecaster underperforms a human crowd, and on top of that, it underperforms simply forecasting 50% on every question.

But in this paper, we find that we can give people the opportunity to interact with a large language model, which in this case was GPT-4 Turbo, and we prompted it specifically to provide superforecasting-style advice.

So our main treatment had a prompt that explained The Ten Commandments of Superforecasting and instructed the model to provide estimates that take account of the base rate.

So you look at how often things like this have typically happened, quantify uncertainty, and identify branch points in reasoning.
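As an illustration of the kind of treatment prompt Philipp describes, a superforecasting-style system prompt might look like the sketch below. The wording, the function name, and the message format are my own paraphrase, not the exact text or code used in the study:

```python
# A hypothetical system prompt in the spirit of the paper's
# "superforecasting" treatment -- not the text used in the study.
SUPERFORECASTER_PROMPT = """\
You are an assistant to a human forecaster. For each question:
1. Start from the base rate: how often have events like this
   typically happened?
2. Quantify your uncertainty with an explicit probability.
3. Identify the branch points in your reasoning, i.e. the
   assumptions that, if wrong, would most change your estimate.
Give a probability estimate and a short rationale."""

def build_messages(question: str) -> list[dict]:
    """Package the prompt and a user question in the chat format
    most LLM APIs expect."""
    return [
        {"role": "system", "content": SUPERFORECASTER_PROMPT},
        {"role": "user", "content": question},
    ]

msgs = build_messages("Will the Dow Jones close above 40,000 in December?")
```

The biased treatment mentioned next would swap in a prompt with the opposite instructions: ignore base rates and answer with high confidence near 0% or 100%.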

But then we also looked at what happens if the large language model doesn’t give good advice. What if it gives what we call biased advice? It might be more noisy advice.

So what if the model is told to not think about the base rate—not think about how often things like this happen—to be overconfident, to basically give very high or very low estimates, and be very confident?

And to our surprise, we find that actually, these two approaches similarly effectively improve forecasting accuracy, which is not what we expected.

Ross: So I think that this is a really interesting point because, essentially, this is about human cognition.

It is human cognition taking very complex domains and coming up with a forecast of a probability of an event or a specific outcome in a defined timeframe.

So in this case, the interaction with the AI is a way of enhancing human cognition—they are basically making better sense of the world.

And I guess one of the things that is more distinctive about your approach is, as you say, you could allow them to use anything, any ways of interacting, as opposed to a specific dynamic.

So in this case, it was all human-directed. There was no AI direction. It is AI as a tool, with humans, I suppose, seeking to augment their own ways of thinking about this challenge.

Philipp: Yes, that’s right.

And, of course, being human, the vast majority—at least a sizable amount—of participants simply asked the model a question, right?

They just said, “Well, what’s the question? What would be the closing value for the Dow Jones at the end of December?” and they just copied it in and saw what the model did.

But then many others did not, and they had their own view. They typed in, “Well, I think that’s the answer. What do you think?” or “Please critique this.”

And I think these kinds of interactions are especially promising going forward because there’s also this whole literature on the different impact of AI augmentation on differently skilled participants, differently skilled workers.

In my understanding, the literature is currently mixed, with studies finding different results.

We didn’t find a specific effect here, but other work finds that when the model just gives the answer, low performers typically tend to do better because, you know, they take a lot from the answer, and the model is probably better than them.

But if the model is instructed to give guidance only, low performers tend to not be able to pick up on the guidance and follow it.

But I think there is still a lot of interesting work to be done before we can pin this down because there’s so much diversity in which models are being used.

Nikolas Badminton: I do a lot of research. With every keynote, I go deep for my clients. You know, on the client side, I go into the industry. I call people in the industry. I read a ton of academic research behind the industry: stuff on the edge academically, as well as what’s in the mainstream and what’s being done.

And also, you know, those sort of edge players. When I start to move forward and start to create some new thoughts, then I can sort of start to play around with scenarios. And this is what’s become really interesting to me.

I know that you talk a lot about the augmentation of capability through the use of things like generative AI and the such like. This has been something that I’ve been playing with quite a lot—not only from the generation of textual content but also the exploration from a visual perspective as a helping mechanism to take us in whole new directions as well.

I mean, in my work, it’s like signals to trends, to scenarios, and to stories. I’ve really been trying to push the boundaries of what scenario exploration is with platforms like ChatGPT, Claude, and Gemini, and starting to see what we can do to look at positive and dystopian scenarios, which was obviously part of the work that I was doing, a part in Facing Our Futures.

That book was completed a couple of years ago with zero Gen AI help. And actually, very little Gen AI help is going to be in my next book either because, contractually, you’re not allowed to do this.

So what we have—what we can do—is start to explore the mirror. I kind of call these Gen AI systems a mirror. Pose it a question. Pose it some scenarios. Try to work out and see what comes out of it.

And generally, what I find is maybe I’m talking about energy and ecological ecosystems, and I’ll pose a question, “What if renewable energy is pushed to the side, green initiatives are canceled, and we go full tilt into a maximalist fossil fuel society?”

In preparation for this chat, I went into that to delve even deeper into the mechanisms behind that. And it’s sort of interesting—you get this mirror of like, “Oh yeah, I kind of expect that, you know, the answers to come from that.”

Okay, let’s push that out to 2050. Yeah, it’s kind of an accelerant and whatever. It’s kind of interesting when you start to think about the reference points of all these systems and where they’re getting it from.

Where something like Claude and ChatGPT actually feels like they’ve been drinking from the same fountain, and Gemini just seems to be a little bit freaky.

So it’s super interesting. As I went into it, it was like poetic and dystopic.

For example, I asked this: “Describe a world in 2100 where environmentally friendly, non-carbon fuel solutions are discarded.”

And I went on and on in a prompt, very directional. The others would be like, “Here’s a list of things that happen”—very cold. I didn’t ask it to write in a particular style of a publication or anything like that.

And then Gemini just came out with this. And this is fabulous:

“The year is 2100. The gamble on renewables failed spectacularly. Big Oil, whispering sweet nothings of energy independence and economic growth, won the hearts and minds of a desperate world. The result? A planet drowning in its own fumes.”

And I kind of love that poetic nature.

Gemini, I think, is sort of the unsung hero a little bit, right? In the scheme of things, suddenly, we’re getting something interesting that starts to talk about the geopolitical chessboard, tech on steroids, violence, and exodus.

And it’s like—whoa.

Ross: A lot of it, I think, is about sensitizing ourselves to signals so that we are more likely to notice the things that are relevant or important or point to things that might change in the future.

And that’s what futurists do. But how can we, I suppose, convey this as a capability or skill that others can learn and develop, so that they’ll be able to see and sense signals that, you know, point to change?

Sylvia Gallusser: It’s a very interesting thing with signals. It’s like raw material. It’s something that anybody can apprehend, and that’s what makes future thinking something that really anybody can work with and develop as a personal skill.

Because it’s about becoming more aware of what is going on around us. And that’s why I think it works really in tandem with the first step, which is about always knowing more, always understanding more about the long-term landscape, and then being more aware of the variations.

And this can go from analyzing behaviors of people around you—like, what changed during the pandemic? Were people more polite, more civilized? Did we see new behaviors, new words?

Maybe also studying popular culture is a very interesting aspect because if you see what is going on in the media—TV series, movies, books—you also sense a lot of what people are attracted to. What new changes are starting when there’s this kind of enthusiasm for a new book; sometimes, that means something.

So how can you get more aware of this? It’s really an everyday practice, and I like to say two things: it’s a personal practice, and it’s a collective practice.

That’s something you can really train yourself to do all the time—just reading the news, being aware of what is around you, just having your sensors open to the world around. And once again, it’s all senses. It’s about listening. It’s about observing people around you. It’s a different taste in the air. It’s really multi-sensory here.

Why I say it’s also collective is that, you know, the futurist community is very active. It’s not that big; it’s small. But it’s very interconnected.

And there are a lot of platforms to be able to exchange around signals. They call it sometimes signal swarming or signal scanning—you have different names for it—but the idea is that futurists love to exchange around that topic, to meet and say, “Hey, this week, what did you notice?”

And once again, this STEEPLE aspect is interesting because when you’re on your own, coming maybe from one industry or one profession, maybe you have a kind of bias toward one dimension or another.

Like, I’m coming from technology, so at first, I would really focus on everything around new technology and so on. But I guess someone who’s a psychologist might have a different opinion. An economist might see things differently.

So coming together as a collective, as a community, is really interesting for enhancing and amplifying the way you connect with those signals around you.

And finally, I would say, on top of it being collective, what’s interesting when you want to bring a group, a population, a company, or a corporation to work around future thinking is to build the capability to do this.

It’s very simple. It can start with just an Excel file. It doesn’t need something very fancy.

But just bring people to come to see what signals are and get them to understand the texture of it: what does it look like? What does it sound like? And they start to log their own signals.

And then you already have a big base of signals of change in a corporation. That’s a great first way to enter the field of foresight.
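Sylvia’s point that a signal log can start with just an Excel file translates directly into practice. Here is a minimal sketch of such a log as a CSV file; the column names and the example entry are my own illustration, not a scheme from the conversation:

```python
import csv
from datetime import date

# Illustrative columns -- any shared spreadsheet with a similar
# shape works just as well.
FIELDS = ["date", "signal", "source", "steeple_category", "implication"]

def log_signal(path, signal, source, category, implication):
    """Append one observed signal of change to a shared CSV log,
    writing the header row first if the file is new."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # empty file: write the header first
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "signal": signal,
            "source": source,
            "steeple_category": category,
            "implication": implication,
        })

log_signal("signals.csv",
           "Neighborhood repair cafe opened",
           "local news",
           "Social",
           "Shift toward repair over replacement")
```

Tagging each entry with a STEEPLE category makes the bias Sylvia mentions visible: if every logged signal is Technological, the group knows where to widen its scanning.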

Ross: So one of the other things you were talking about was putting yourself in the scenario.

And I suppose part of the practice is to create a useful scenario that helps you think about new things or envisage things that shape your current actions.

But as individuals, what are ways in which we can, I suppose, conceive of and bring ourselves—or enter into—I think you used the word meditation there.

And, you know, I’d love to hear about that. What is that practice? How do we put ourselves, immerse ourselves in these useful future scenarios?

Sylvia: Absolutely. Once again, you know, it can be very personal and intimate, or it can be something more collective.

So I try to address both aspects because I think they can work really well together. You can develop your own future-thinking practice as an everyday discipline, let’s say.

A few years ago, I wrote an article about mental stretching exercises you can practice to work on that. It can go from dealing with different perspectives, trying to develop empathy, putting yourself in the shoes of someone else, and imagining a story.

You know what? Actually, learning new languages and learning new cultures is also a great way to practice this perspective change and seeing things in different ways.

Reading, listening, and learning about fiction, for me, has been an immense way to stretch myself to see futures that are possible and not necessarily dystopian.

That’s why I love to talk about science fiction, because we tend to see science fiction as something very dystopian and very scary, and not necessarily the good way to start for people who are scared about the future.

But I would say there is more and more interesting science fiction now that creates future worlds that are not necessarily negative. They can be really engaging and develop a plot that carries its problems in the narration, without the world-building itself being negative.

Like the story, to be interesting, needs to always have something of a dilemma or something of a complexity or a knot to it.

But it can be interpersonal stories, not necessarily in the world-building around it.

So I think science fiction and future fiction really offer us ways to think about the future.

So, for example, here is the way we do it collectively with groups, with those meditative exercises I was talking about.

A really great way we’ve been doing it in the past was around the future of the home.

Because during the pandemic, the home evolved dramatically, and not just the structure but also the way we reorganized life within it.

And I like to talk about the structures and the intangibles that happen in the home.

So what we would do, for example, in terms of envisioning meditations with a few groups, was really to imagine yourself waking up in the future home you live in, maybe 10 years from now, 20 years from now.

How do you wake up? What is the first trigger? What happens?

Is it a wake-up call? Is it natural lighting? Do you still live in a bedroom?

Like, we really start just—what do you smell? What do you think? What do you feel? How does it sound?

So five senses meditation is really effective.

Changing perspective, as I was saying, and so on.

So these are different tools we would use to bring people to get into that state of the future and then go throughout a day in the life.

Like, okay, what do you do from your bed? Then do you go to breakfast? Do you go to your bathroom?

How does the bathroom look? Is it interactive? Do you live alone? Do you live with other people in a community?

And just—it starts asking so many questions that people naturally get their minds to wander around the future home.

And that was a really great tool to get a sense of that new type of space that could exist.

And how they would like that home to be.

Because, once again, it is also about developing what would be our preferable future, our favorite futures, and building them.

Jack Uldrich: And I’ve spent a lot of time as a futurist with the concept of unlearning.

It’s not that people in organizations can’t understand the future is going to change. What we have a really difficult time doing is letting go of the way we’ve always done things.

And so I think when we’re talking about the future of work, to me, work does give most humans this intrinsic value, and they feel as though they’re an integral part of a community.

And so I think there will always be this innate need to be doing something—not just for yourself but on behalf of something bigger.

And when I say bigger, typically I’m thinking of community. You just want to do something for, of course, yourself, your immediate family, but then your neighborhood and your community.

And so as I think about the long-term future, one of the things I’m really excited about is—first, I’m going to go dark, but I think there’s going to be a bright side to this.

One of the things that I think is happening right now that’s not getting enough attention, as a futurist, is that the internet is breaking.

In the sense that there’s so much misinformation and disinformation out there that we can no longer trust our eyes and our ears in this world of artificial intelligence.

And I think that’s going to become increasingly murkier, and it’s going to be really destabilizing to a lot of people and organizations.

So what’s the one thing we still can trust? The small groups that are right in front of us.

And so I think one of the things we’re going to see in a future of AI is an increased importance on small communities.

There’s some really compelling science that says the most cohesive units are about 150 people in size.

And this is true in the military, educational units, and other things like that.

And I think that we might start seeing that, but it’s going to look different than in the past.

Like, I’m not suggesting that we’re all going to look like Amish communities here in the U.S., where we’re saying no to technology and doing things the old-fashioned way.

But the new communities of the future are—and now I’m just thinking out loud—something I want to spend more time thinking about.

Like, what will that look like? What will the roles be, and what skills will be needed, in this new future?

And again, I don’t have any answers right now, just more questions and thinking.

But it’s one of these scenarios I could see playing out that might catch a lot of people by surprise.

Ross: Yeah, very much so. I mean, we are a community-based species, and the nature of community has changed from what it was.

And I think, you know, thinking about the future of humanity, I think a future of community and how that evolves is actually a very useful frame to round out.

Jack, what advice can you share with our listeners on how to think about the future? I suppose you did a little at the beginning.

But, I mean, do you have any concluding thoughts on how people can usefully think about the extraordinary change in the world today?

Jack: Yeah, the first thing I would say is this—and I was just doing a short video on this.

Ever since we’ve been in grade school, most of us have been asked the question or graded on the question of How creative are you?

And if you ask most people, like on a scale of one to ten, to just answer that question, they’ll do it.

But you know what I always tell people? That’s a bad question.

The question of the future isn’t How creative are you? It is How are you creative?

Each and every one of us is creative in our own way. And as a futurist, I take that really seriously.

We do have the ability to create our own future, but we first have to understand that we are creative, and most people don’t think of themselves that way.

So how do you nurture creativity?

And this is where I’m trying to spend a lot of my time as a futurist. This is where the ideas of unlearning and humility come in.

But I would say it starts with curiosity and questions, and that’s why I like getting out under the night stars and just being reminded of how little I actually know.

But then, it’s in that space of curiosity that imagination begins to flow.

And there’s this wonderful quote from Einstein—most people would say he was one of the more brilliant minds of the 20th century. He said, Imagination is more important than knowledge.

Like, why did Einstein, this great scientist, say that?

And I think—and I don’t have proof of this—that everything around us today was first imagined into existence.

It was imagined into existence by the human mind.

The very first tool. The very first farm implement.

And then farming as an industry, and then civilizations and cities and commerce and democracy and communism.

They were all imagined first into existence.

And so, what we can imagine, we can, in fact, create.

And that’s why I’m still optimistic as a futurist—this idea that we’re not passive agents, that we can create a future.

And I just like to remind people that our future can, in fact, be incredibly fucking bright.

The idea that we can have cleaner water and sustainable energy and affordable housing and better education and preventive health care.

We can address inequality. We can address these issues.

People just have to be reminded of this.

And so, at the end of the day, that’s why I get fired up, and I don’t think I’ll ever sort of lose the title of futurist, because until my last breath, I’m going to be, hopefully, reminding people that we can create—and we have a responsibility to create—a better future.

Let me just end on this.

I think the best question we can ask ourselves right now comes from Jonas Salk, the inventor of the polio vaccine.

And he said, Are we being good ancestors?

And I think the answer right now is, we’re not.

But we still have the ability to be better ancestors.

And maybe if I could just say one last thing—I also spend a lot of time helping people just embrace ambiguity and paradox.

And here’s the truth: the world is getting worse.

In terms of climate change, the rise of authoritarianism, inequality—you could say things are going bad.

But at the same time, on the other hand, you could say the world is getting demonstrably better.

It has never been a better time to be alive as a human.

The likelihood that you’re going to die of starvation or war or not be able to read—never been lower.

So the world is also getting better.

But the operative question becomes: How can we make the world even better?

And that’s where we have to spend our time.

And that’s why we need creativity, curiosity, and imagination—to create that better future.
