“Collective intelligence is the ability of a group to solve a wide range of problems, and it’s something that also seems to be a stable collective ability.”
– Anita Williams Woolley
“When you get a response from a language model, it’s a bit like a response from a crowd of people. It’s shaped by the collective judgments of countless individuals.”
– Jason Burton
“Rather than just artificial general intelligence (AGI), I prefer the term augmented collective intelligence (ACI), where we design processes that maximize the synergy between humans and AI.”
– Gianni Giacomelli
“We developed Conversational Swarm Intelligence to scale deliberative processes while maintaining the benefits of small group discussions.”
– Louis Rosenberg

About Anita Williams Woolley, Jason Burton, Gianni Giacomelli, & Louis Rosenberg
Anita Williams Woolley is the Associate Dean of Research and Professor of Organizational Behavior at Carnegie Mellon University's Tepper School of Business. She received her doctorate from Harvard University, and her subsequent research includes seminal work on collective intelligence in teams, first published in Science. Her current work focuses on collective intelligence in human-computer collaboration, with projects funded by DARPA and the NSF examining how AI enhances synchronous and asynchronous collaboration in distributed teams.
Jason Burton is an assistant professor at Copenhagen Business School and an Alexander von Humboldt Research fellow at the Max Planck Institute for Human Development. His research applies computational methods to studying human behavior in a digital society, including reasoning in online information environments and collective intelligence.
Gianni Giacomelli is the Founder of Supermind.Design and Head of Design Innovation at MIT’s Center for Collective Intelligence. He previously held a range of leadership roles in major organizations, most recently as Chief Innovation Officer at global professional services firm Genpact. He has written extensively for media and in scientific journals and is a frequent conference speaker.
Louis Rosenberg is CEO and Chief Scientist of Unanimous AI, which amplifies the intelligence of networked human groups. He earned his PhD from Stanford and has been awarded over 300 patents for virtual reality, augmented reality, and artificial intelligence technologies. He has founded a number of successful companies, including Unanimous AI, Immersion Corporation, Microscribe, and Outland Research. His new book, Our Next Reality, on the AI-powered metaverse, is out in March 2024.
What you will learn
- Understanding the power of collective intelligence
- How teams think smarter than individuals
- The role of AI in amplifying human collaboration
- Memory, attention, and reasoning in group decision-making
- Why large language models reflect collective intelligence
- Designing synergy between humans and AI
- Scaling conversations with conversational swarm intelligence
Episode Resources
Concepts & Frameworks
- Transactive Memory Systems
- Reinforcement Learning from Human Feedback (RLHF)
- Conversational Swarm Intelligence
- Augmented Collective Intelligence (ACI)
- Artificial General Intelligence (AGI)
Transcript
Anita Williams Woolley: Individual intelligence is a concept most people are familiar with. When we’re talking about general human intelligence, it refers to a general underlying ability for people to perform across many domains. Empirically, it has been shown that measures of individual intelligence predict a person’s performance over time. It is a relatively stable attribute.
For a long time, when we thought about intelligence in teams, we considered it in terms of the total intelligence of the individual members combined—the aggregate intelligence. However, in our work, we challenged that notion by conducting studies that showed some attributes of the collective—the way individuals coordinated their inputs, worked together, and amplified each other’s contributions—were not directly predictable from simply knowing the intelligence of the individual members.
Collective intelligence is the ability of a group to solve a wide range of problems. It also appears to be a stable collective ability. Of course, in teams and groups, you can change individual members, and other factors may alter collective intelligence more readily than individual intelligence. However, we have observed that it remains fairly stable over time, enabling greater capability.
In some cases, collective intelligence can be high or low. When a group has high collective intelligence, it is more capable of solving complex problems.
I believe you also asked about artificial intelligence, right? When computer scientists work on ways to endow a machine with intelligence, they essentially provide it with the ability to reason, take in information, perceive things, identify goals and priorities, adapt, and change based on the information it receives. Humans do this quite naturally, so we don’t really think about it.
Without artificial intelligence, a machine only does what it is programmed to do and nothing more. It can still perform many tasks that humans cannot, particularly computational ones. However, with artificial intelligence, a computer can make decisions and draw conclusions that even its own programmers may not fully understand the basis of. That is where things get really interesting.
Ross Dawson: We’ll probably come back to that. Here at Amplifying Cognition, we focus on understanding the nature of cognition. One fascinating area of your work examines memory, attention, and reasoning as fundamental elements of cognition—not just on an individual level, but as collective memory, collective attention, and collective reasoning.
I’d love to understand: What does this look like? How do collective memory, collective attention, and collective reasoning play into aggregate cognition?
Anita: That’s an important question. Just as we can intervene to improve collective intelligence, we can also intervene to improve collective cognition.
Memory, attention, and reasoning are three essential functions that any intelligent system—whether human, computer, or a human-computer collaboration—needs to perform. When we talk about these in collectives, we are often considering a superset of humans and human-computer collaborations. Research on collective cognition has been running parallel to studies on collective intelligence for a couple of decades.
The longest-standing area of research in this field is on collective memory. A specific construct within this area is transactive memory systems. Some of my colleagues at Carnegie Mellon, including Linda Argote, have conducted significant research in this space. The idea is that a strong collective memory—through a well-constructed transactive memory system—allows a group to manage and use far more information than they could individually.
Over time, individuals within a group may specialize in remembering different information. The group then develops cues to determine who is responsible for retaining which information, reducing redundancy while maximizing collective recall. As the system forms, the total capacity of information the group can manage grows considerably.
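The specialization-and-routing idea behind a transactive memory system can be sketched in a few lines of code. This is a toy illustration only, not a model from the research discussed here; the class name, topics, and member names are all hypothetical:

```python
# Toy sketch of a transactive memory system: each member specializes
# in a topic, and the group routes encoding and recall to whoever is
# responsible for that topic, so total capacity grows with the number
# of members instead of being capped by any one person's memory.

class TransactiveMemory:
    def __init__(self):
        self.specialists = {}   # topic -> responsible member
        self.store = {}         # (member, topic) -> remembered facts

    def assign(self, member, topic):
        """Cue: the group agrees this member owns this topic."""
        self.specialists[topic] = member
        self.store.setdefault((member, topic), [])

    def encode(self, topic, fact):
        """New information is routed to the responsible specialist."""
        member = self.specialists[topic]
        self.store[(member, topic)].append(fact)

    def recall(self, topic):
        """The group retrieves by asking the specialist, not everyone."""
        member = self.specialists[topic]
        return member, self.store[(member, topic)]


group = TransactiveMemory()
group.assign("Ana", "budgets")
group.assign("Ben", "clients")
group.encode("budgets", "Q3 spend capped at 50k")
group.encode("clients", "Acme renewal due in June")

print(group.recall("budgets"))  # ('Ana', ['Q3 spend capped at 50k'])
```

The key design point is the cue table (`specialists`): members do not duplicate each other's knowledge, they remember who knows what, which is exactly the redundancy reduction described above.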
Similarly, with transactive attention, we consider the total attentional capacity of a group working on a problem. Coordination is crucial—knowing where each person’s focus is, when focus should be synchronized, when attention should be divided across tasks, and how to avoid redundancies or gaps. Effective transactive attention allows groups to adapt as situations change.
Collective reasoning is another fascinating area with a significant body of research. However, much of this research has been conducted in separate academic pockets. Our work aims to integrate these various threads to deepen our understanding of how collective reasoning functions.
At its foundation, collective reasoning involves goal setting. A reasoning system must identify the gap between a desired state and the current state, then conceptualize what needs to be done to close that gap. A major challenge in collective reasoning is establishing a shared understanding of the group’s objectives and priorities.
If members are not aligned on goals, they may decide that their time is better spent elsewhere. Thus, goal-setting and alignment are foundational to collective reasoning, ensuring that members remain engaged and motivated over time.
Ross: One of the interesting insights from your paper is that large language models (LLMs) themselves are an expression of collective intelligence. I don’t think that’s something everyone fully realizes. How does that work? In what way are LLMs a form of collective intelligence?
Jason Burton: Sure. The most obvious way to think about it is that LLMs are machine learning systems trained on massive amounts of text. Companies developing these language models source their text from the internet—scraping the open web, which contains natural language encapsulating the collective knowledge of countless individuals.
Training a machine learning system to predict text based on this vast pool of collective knowledge is essentially a distilled form of crowdsourcing. When you query a language model, you aren’t getting a direct answer from a traditional relational database. Instead, you receive a response that reflects the most common patterns of answers given by people in the past.
Beyond this, language models undergo further refinement through reinforcement learning from human feedback (RLHF). The model presents multiple response options, and humans select the best one. Over time, the system learns human preferences, meaning that every response is shaped by the collective judgments of numerous individuals.
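The core mechanism of learning from pairwise human preferences can be sketched with a Bradley-Terry style update, a standard building block in RLHF reward modeling. This is a minimal illustration under simplifying assumptions (scalar scores per response, no neural network); the names and numbers are invented, not any lab's actual pipeline:

```python
# Minimal sketch of preference learning from pairwise feedback.
# Each comparison "A was preferred over B" nudges A's scalar score up
# and B's down, so scores come to reflect the crowd's collective
# judgments, as described above.

import math
from collections import defaultdict

def update_scores(scores, winner, loser, lr=0.5):
    """One Bradley-Terry style gradient step on a pairwise comparison."""
    # Probability the current scores assign to the observed preference.
    p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
    # Push winner up and loser down in proportion to the surprise.
    scores[winner] += lr * (1.0 - p_win)
    scores[loser] -= lr * (1.0 - p_win)

scores = defaultdict(float)
# Simulated crowd of raters: 8 of 10 prefer the "concise" response.
comparisons = [("concise", "rambling")] * 8 + [("rambling", "concise")] * 2
for winner, loser in comparisons:
    update_scores(scores, winner, loser)

print(scores["concise"] > scores["rambling"])  # True
```

After training on the comparisons, the "concise" response scores higher, mirroring how RLHF bakes the majority preference of many raters into a single reward signal.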
In this way, querying a language model is like consulting a crowd of people who have collectively shaped the model’s responses.
Gianni Giacomelli: I view this through the lens of augmentation—augmenting collective intelligence by designing organizational structures that combine human and machine capabilities in synergy. Instead of thinking of AI as just a tool or humans as just sources of data, we need to look at how to structure processes that allow large groups of people and machines to collaborate effectively.
In 2023, many became engrossed with AI itself, particularly generative AI, which in itself is an exercise in collective intelligence. These systems were trained on human-generated knowledge. But looking at AI in isolation limits our understanding. Rather than just artificial general intelligence (AGI), I prefer the term augmented collective intelligence (ACI), where we design processes that maximize the synergy between humans and AI.
Louis Rosenberg: There are two well-known principles of human behavior: one is collective intelligence—the idea that groups can be smarter than individuals if their input is harnessed effectively. The other is conversational deliberation—where groups generate ideas, debate, surface insights, and solve problems through discussion.
However, scaling these processes is difficult. If you put 500 people in a chat room, it becomes chaotic. Research shows that the ideal conversation size is five to seven people. To address this, we developed Conversational Swarm Intelligence, using AI agents in small human groups to facilitate discussions and relay key insights across overlapping subgroups. This allows us to scale deliberative processes while maintaining the benefits of small group discussions.
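The subgroup layout behind this approach can be sketched as follows. This is a hypothetical illustration of the general idea, not Unanimous AI's actual design: the pod size and the ring topology connecting pods are illustrative choices:

```python
# Sketch of conversational-swarm-style structure: split a large group
# into discussion pods near the 5-7 person sweet spot, then link
# neighboring pods so an agent can relay key insights between them.

def make_pods(participants, pod_size=6):
    """Partition participants into pods of at most pod_size."""
    return [participants[i:i + pod_size]
            for i in range(0, len(participants), pod_size)]

def relay_links(pods):
    """Connect each pod to the next in a ring so insights propagate."""
    n = len(pods)
    return [(i, (i + 1) % n) for i in range(n)]

people = [f"p{i}" for i in range(500)]
pods = make_pods(people)
links = relay_links(pods)

print(len(pods))                  # 84 pods for 500 people
print(max(len(p) for p in pods))  # 6
```

Each of the 500 participants only ever converses with five or six others, yet the relay links let insights flow across the whole group, which is the scaling property described above.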