“Not everyone can see with dragonfly eyes, but can we create tools that help enable people to see with dragonfly eyes?”
– Anthea Roberts
About Anthea Roberts
Anthea Roberts is Professor at the School of Regulation and Global Governance at the Australian National University (ANU) and a Visiting Professor at Harvard Law School. She is also the Founder, Director and CEO of Dragonfly Thinking. Her latest book, Six Faces of Globalization, was selected as one of the Best Books of 2021 by The Financial Times and Fortune Magazine. She has won numerous prestigious awards and has been named "The World's Leading International Law Scholar" by the League of Scholars.
Website:
LinkedIn Profile:
University Profile:
What you will learn
- Exploring the concept of dragonfly thinking
- Creating tools to see complex problems through many lenses
- Shifting roles from generator to director and editor with AI
- Understanding metacognition in human-AI collaboration
- Addressing cultural biases in large language models
- Applying structured analytic techniques to real-world decisions
- Navigating the cognitive industrial revolution with AI
Episode Resources
People
Companies/Organizations
Books
- Is International Law International? by Anthea Roberts
- Six Faces of Globalization by Anthea Roberts
Technical Terms
Transcript
Ross Dawson: Anthea, it is a delight to have you on the show.
Anthea Roberts: Thank you very much for having me.
Ross: So you have a very interesting company called Dragonfly Thinking, and I’d like to delve into that and dive deep. But first of all, I’d like to hear the backstory of how you came to see the idea and create the company.
Anthea: Well, it's probably an unusual route to creating a startup. I came to this with no technology background, and two years ago, if you had told me I would start a tech startup, I would never have thought that was very likely, and no one around me would have, either.
My other hat that I wear when I'm not doing the company is as a professor of global governance at the Australian National University and a repeat visiting professor at Harvard. I've traditionally worked on international law, global governance, and, more recently, economics, security, and pushback against globalization.
I moved into a very interdisciplinary role, where I ended up doing a lot of work with different policymakers. Part of what I realized I was doing as I moved around these fields was creating what the intelligence agencies call structured analytic techniques: methods for understanding complex, ambiguous, evolving situations.
For instance, in my last book, I used one technique to understand the pushback against economic globalization through six narratives, looking at a complex problem from multiple sides. Another was a risk, reward, and resilience framework to integrate perspectives and make decisions. All of this, though, I had done completely analog.
Then the large language models came out. I was working with Sam Bide, a younger colleague who was more technically competent than I was. One day, he decided to teach one of my frameworks to ChatGPT. On a Saturday morning, he excitedly sent me a message saying, "That framework is really transferable!"
I replied, "I made it to be really transferable."
He said, "No, no, it's really transferable."
We started going back and forth on this. At the time, Sam was moving into policy, and he created a persona called "Robo Anthea." He and other policymakers would ask Robo Anthea questions. It had my published academic scholarship, but also my unpublished work.
At a very early stage, I had this confronting experience of having a digital twin. Some people asked, "Weren't you horrified or worried about copyright infringement?" But I didn't have that reaction. I thought it was amazingly interesting.
What could happen if you took structured techniques and worked with this extraordinary form of cognition? It allowed us to apply these techniques to areas I knew nothing about. It also let me hand this skill off to other people.
I leaned into it completely, on one condition: we changed the name from Robo Anthea to Dragonfly Thinking. It was both less creepy for me and a better metaphor. This way of seeing complex problems from many different sides is a dragonfly's ability.
I think I'm a dragonfly, but I believe there are many dragonflies out there. I wanted to create a platform for this kind of thinking, where dragonflies could "swarm" around and develop ideas together.
Ross: Just explain the dragonfly concept.
Anthea: We took the concept from some work done by Philip Tetlock. When the CIA wanted to determine who was best at understanding complex problems, they found that traditional experts performed poorly.
These experts tended to have one lens of analysis, which they overemphasized. This caused them to overlook some things and get blindsided by others.
In contrast, Tetlock found a group of individuals who were much better forecasters. They were incredibly diverse: 70% better than traditional experts, and 35% better than the CIA itself, even without access to classified material.
The one thing they had in common was that they saw the world through dragonfly eyes. Dragonfly eyes have thousands of lenses instead of one, allowing them to create an almost 360-degree view of reality. This predictive ability makes dragonflies some of the best predators in the world.
These qualities (seeing through multiple lenses, integrating perspectives, and stress-testing) are exactly what we need for complex problems.
- We need to see problems from many lenses: different perspectives, disciplines, and cognitive approaches.
- We must integrate this into a cohesive understanding to make decisions.
- We need to stress-test it by thinking about complex systems, dynamics, and future scenarios, so we can act with foresight despite uncertainty.
The AI part of this is critical because not everyone can see with dragonfly eyes. The question becomes: can we create tools to enable people to do so?
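The three dragonfly steps above could be sketched as a simple pipeline of model calls. This is a hypothetical illustration, not Dragonfly Thinking's actual implementation; the lens list and scenarios are invented, and `ask_model` is a stub standing in for a real LLM API so the structure runs without network access:

```python
from typing import Dict

# Hypothetical lenses and scenarios; a real system would tailor these.
LENSES = ["economic", "security", "political", "technological"]
SCENARIOS = ["status quo holds", "rapid escalation", "gradual de-escalation"]

def ask_model(prompt: str) -> str:
    # Stub standing in for an LLM API call; it echoes the prompt so the
    # pipeline's structure can be inspected without any API access.
    return f"[model response to: {prompt}]"

def dragonfly_analysis(problem: str) -> Dict[str, object]:
    # Step 1: see the problem through many lenses.
    views = {lens: ask_model(f"Analyze '{problem}' through a {lens} lens.")
             for lens in LENSES}
    # Step 2: integrate the views into one cohesive assessment.
    synthesis = ask_model("Integrate these views into one assessment:\n"
                          + "\n".join(views.values()))
    # Step 3: stress-test the assessment against future scenarios.
    stress = {s: ask_model(f"How does this assessment hold up if {s}?\n{synthesis}")
              for s in SCENARIOS}
    return {"views": views, "synthesis": synthesis, "stress_tests": stress}

result = dragonfly_analysis("pushback against economic globalization")
```

The point of the sketch is the shape, not the prompts: one call per lens, one integrating call, one call per scenario.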
Ross: There are so many things I'd like to dive into, but just to get the big picture: this is obviously human-AI collaboration. These are complex problems where humans have the fullest context and decision-making ability, complemented by AI.
What does that interface look like? How do humans develop the skills to use AI effectively?
Anthea: I think this is one of the most interesting and evolving questions. In the kind of complex cognition we deal with, we aim to co-create with the LLMs as partners.
What I've noticed is that you shift roles. Instead of being the primary generator, you become the director or manager, deciding how you want the LLM to operate. You also take on a role as an editor or co-editor, moving back and forth.
This means humans stay in the loop but in a different way.
Another important aspect is recognizing where humans and AI excel. Not everyone is good at identifying when they're better at a task versus when the AI is.
For instance, AI can hold a level of cognitive complexity that humans often cannot. In our risk, reward, and resilience framework, humans may overfocus on risk or reward. Some can hold the drivers of risk, reward, and resilience but can't manage the interconnections.
AI can offload some of this cognitive load. The key is creating an interface that lets you focus on specific elements, cognitively "offload" them, and continue building.
Anthea: That's not easy to do with a basic chat interface, for example. This is why I think the way we interact with LLMs, and the UI/UX, will evolve significantly. It's about figuring out when the AI leads, when you lead, and how you co-create.
Something like ChatGPT's Canvas mode is a great example. It allows real-time editing and co-creation of individual sentences, which feels like a glimpse into where this technology is heading.
Ross: Yes, I completely agree on the metacognition aspect. That's becoming central to my work: seeing your own cognition and recognizing the AI's cognition as well. You need to pull back, observe the systemic cognition between humans and AI, and figure out how to allocate tasks effectively.
Anthea: I completely agree. Over the last year and a half, I've realized that almost all of my work is metacognitive. I rarely tell people what to think, but I have an ability to analyze how people think: how groups think, what paradigms they operate in, and where disagreements occur at higher levels of abstraction.
It turns out those second- and third-order abstractions about how to think are exactly what we can teach into these models and apply across many areas.
Initially, I thought I was just applying my own metacognitive approaches on top of the AI. Now I realize I also need a deep understanding of what's happening inside the models themselves.
For instance, agentic workflows can introduce biases or particular ways of operating. You need cognitive awareness not just of your relationship with the AI but also of how the model itself operates.
Another challenge is managing the sheer volume of output from the AI. There's often a deluge of information, and you have to practice discernment to avoid being overwhelmed.
Now, I'm also starting to think about how to simplify these tools so that people with different levels of cognitive complexity can easily access and use them. That's where a product manager would come in: to streamline what I do and make it less intimidating for others.
If you combine this with interdisciplinary agents, looking at problems from different perspectives and working with experts, it's metacognition layered on metacognition. I think this will be one of the defining challenges of our time: how we process this complexity without becoming overwhelmed or outsourcing too much of our thinking.
Ross: Yes, absolutely. As a startup, you do have to choose your audiences carefully. Focusing on highly complex problems makes sense because the value is so high, and it's an underserved market.
On that note, I'm curious about the interfaces. Are you incorporating visual elements? Or is it primarily text-based, step-by-step interactions?
Anthea: I tend to be a highly visual and metaphorical thinker, so I'm drawn to visuals to help with this. Visual representations can often capture complex concepts more intuitively and succinctly than words.
We're currently experimenting with ways to visually represent concepts like complex systems diagrams, interventions, causes, consequences, and effects.
I also think the idea of artifacts is crucial. You see this with tools like Claude, Canvas, and others. It's about moving beyond a chat interface and creating something that can store, build upon, and expand ideas over time.
Another idea I'm exploring is "daemons" or personas: AI agents that act like specialists sitting on your shoulder. You could invoke an economics expert, a political science expert, or even a writing coach to give you critiques or perspectives.
This leads to new challenges, like saving and version control when collaborating not just with an AI but with other humans and their AIs. These are open questions, but I expect significant progress in the next few years as we move beyond the dominance of chat interfaces.
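The daemon idea could be sketched as persona-specific system prompts invoked on demand. This is a hypothetical illustration, not the product's actual design; the persona names and prompts are invented, and `ask_model` is a stub standing in for a real LLM call:

```python
# Invented specialist personas, each defined by a system prompt.
PERSONAS = {
    "economist": "You are an economics expert; critique incentives and trade-offs.",
    "political_scientist": "You are a political scientist; critique institutional assumptions.",
    "writing_coach": "You are a writing coach; critique clarity and structure.",
}

def ask_model(system_prompt: str, user_text: str) -> str:
    # Stub: a real implementation would send both strings to an LLM API.
    return f"[{system_prompt}] critique of: {user_text}"

def invoke_daemon(name: str, draft: str) -> str:
    # Look up the specialist persona and run its critique over the draft.
    return ask_model(PERSONAS[name], draft)

critique = invoke_daemon("writing_coach", "a draft paragraph on AI governance")
```

The design choice is that the "daemon" is just a named system prompt: cheap to add, easy to swap, and invokable at any point in the workflow.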
Ross: Harrison Chase, CEO of LangChain, talks about cognitive architectures, which I think aligns perfectly with what you're doing. You're creating systems where human and AI agents work together to enhance cognition.
Anthea: Exactly. I read a paper recently on metacognition, on knowing when humans make better decisions versus when the AI does. It showed that humans often made poor decisions about when to intervene, while the AI did better when deciding whether to involve humans.
That's fascinating and shows how much work we need to do on understanding these architectures.
Ross: Are there any specific cognitive architecture archetypes youâre exploring or see potential in?
Anthea: I haven't made as much progress on that yet, beyond observing the shift from humans being primary generators to directors and editors.
One thing I've been thinking about is how our culture celebrates certain roles, like the athlete on the field, the actor on stage, or the writer, while undervaluing the coach, director, or editor.
With AI, we're moving into a world where the AI takes on those celebrated roles, and we become the coach, director, or editor. For instance, if you were creating an AI agent to represent a famous athlete, you wouldn't ask the athlete to articulate their skills; they often can't. You'd ask the coach.
Yet, culturally, we valorize the athlete, not the coach. This redistribution of roles will be fascinating to watch.
Similarly, we've historically overvalued STEM knowledge compared to the humanities and social sciences. Now we're seeing a shift where those disciplines, like philosophy and argumentation, become crucial in the AI age.
Ross: Yes, absolutely. The framing and broader context are where humans shine, especially when AI has inherent limitations despite its generative capabilities.
Anthea: Exactly. AI models are generative, but they're ultimately limited and contained. Humans bring the broader perspective, but we also get tired and cranky in ways the models don't.
Ross: Earlier, you mentioned intelligence agencies as a core audience. How do their needs differ in terms of delivering these interfaces?
Anthea: We're still in the early stages, with pilots launching early next year. I've worked with government agencies for a long time, so I know there are differences.
AI adoption in institutions is much slower than the technology itself. Governments and big enterprises are risk-averse, concerned about safety, transparency, and bias.
For intelligence agencies, I expect theyâll want models that are fully disconnected from the internet, with heightened security requirements.
I'm also fascinated by the Western and English-language biases in current frontier models. Down the track, I'd like to explore Chinese, Arabic, and French models to understand how different training data and reinforcement learning influence outcomes. This could enhance cross-cultural diplomacy, intelligence, and understanding.
We're already seeing ideas like the wisdom of the silicon crowd, where multiple models are combined for better predictions. But I think it's not just about combining models; it's about embracing their diverse cultural perspectives.
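A minimal sketch of the silicon-crowd idea, under the assumption that each model reports a probability forecast for the same question and the crowd estimate is simply the median; the model names and forecast numbers here are made up for illustration:

```python
from statistics import median

def aggregate_forecasts(forecasts):
    # forecasts: mapping of model name -> probability in [0, 1].
    probs = list(forecasts.values())
    if not all(0.0 <= p <= 1.0 for p in probs):
        raise ValueError("probabilities must lie in [0, 1]")
    # The median resists a single wildly-off model better than the mean.
    return median(probs)

# Made-up forecasts from three models for the same yes/no question:
crowd = aggregate_forecasts({"model_a": 0.62, "model_b": 0.58, "model_c": 0.71})
# crowd == 0.62 (the middle value)
```

Taking the median rather than the mean is one common aggregation choice; richer schemes weight models by track record or, as suggested above, deliberately sample models trained in different languages and cultures.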
Ross: Yes, and I've seen papers on the biases in LLMs based on language and cultural training data. That's such a fascinating and underexplored area.
Anthea: Absolutely. The first book I wrote, Is International Law International?, explored how international law isn't uniform. Lawyers in China, Russia, and the US operate with different languages, universities, and assumptions.
We're going to see the same thing with LLMs. Western and Chinese models may each have their own bell curves, but they'll be very different. It's a dynamic we haven't fully grappled with yet.
Ross: And that interplay between polarization and convergence will be key.
Anthea: Exactly. Social media polarizes, creating barbells: hollowing out the middle and amplifying extremes. In contrast, LLMs tend to squash toward a bell curve, centering on the median.
Within a language area, LLMs can be anti-polarizing. But between language-based models, we'll see significant polarization: different bell curves reinforcing different realities.
Understanding this interplay will be critical as we move forward.
Ross: This has been an incredible conversation, Anthea. What excites you most about the future, whether in your company, your work, or the world at large?
Anthea: I've fallen completely down the AI rabbit hole. As someone without a tech background, I now find myself reading AI papers constantly; it's like a new enlightenment or cognitive industrial revolution.
The speed, scale, and cognitive extension AI enables are extraordinary. I feel like Iâm living through a transformative moment that will redefine education, research, and so many other fields.
It's exciting, turbulent, and challenging, but I just can't look away.
Ross: I couldn't agree more. It's a privilege to be alive at this moment, experiencing what it means to think and be human in an age of such transformation.
Thank you for everything you're doing, Anthea.
Anthea: Thank you for having me.