About Gianni Giacomelli
Gianni Giacomelli is the Founder and Head of Design Innovation at MIT’s Center for Collective Intelligence. He previously held a range of leadership roles in major organizations, most recently as Chief Innovation Officer at the global professional services firm Genpact. He has written extensively for media and scientific journals and is a frequent conference speaker.
Ross Dawson: Gianni, it’s a delight to have you on the show.
Gianni Giacomelli: It is fantastic to be here. Thanks for having me.
Ross: So we have a shared passion for how AI can enhance our thinking. And there are many approaches: you can have a mindset, you can have practices, you can have tools and techniques. So, as the big frame: now that we have this wonderful generative AI, how should we go about thinking? How do we start making it help us to think better?
Gianni: It’s a big question, and probably one of the defining questions as we start recording this at the beginning of 2024. I’m going to provide one view, which may be one of many, but mine is indebted to the work I’ve done over the years with Thomas Malone at MIT’s Center for Collective Intelligence. It’s the view of augmentation of collective intelligence. Augmentation means not treating humans as just a crowdsourcing exercise for machines, and not treating machines as just a set of technologies, but really looking at the design — the organizational design — of the combination and the synergy between the two. That sounds obvious to people who have been exposed to many tools in the recent past. But when you start peeling the onion and looking into how you really make it happen — at an individual level, but even more importantly at an organizational level, in processes that string people together — it is actually a lot less obvious. So maybe the first answer to your question is: we should try to step back and look at the forest, instead of just looking at the tree.
In 2023, everybody got engrossed with artificial intelligence, which in itself — the generative AI kind — is an exercise in collective intelligence. Those machines were trained on us, on the things that humans have been accumulating for many years. But if you look at them in isolation, I don’t think we get to where we want to get to. A bunch of people talk about artificial general intelligence, AGI; I really like to talk about ACI, augmented collective intelligence, which is a state in which we design practices, processes, and tools that enable that synergy between large groups of humans and large groups of machines. There’s a lot of design space there. I think we’re going to get to a place where we can really amplify our collective cognition by doing that job right — almost a process and organizational design job — using the technologies we have now, and the practices developed by my MIT colleagues and others in the world; we’re not the only ones. We have a lot of those, and I think we can bring them to bear.
Ross: Absolutely. An organization, hopefully, is collective intelligence: a bunch of people, with processes and communication, who together are hopefully somewhat collectively intelligent. But now we have these tools, which any of us within an organization can use, and we can find ways to scale them and build them into processes. To get to that collective intelligence, though, I think a very significant starting point is the individual: how can an individual augment themselves in a particular role? So I’d love to come back, in a way, to how we create that collective intelligence. But starting with an individual — maybe an entrepreneur, maybe working in an organization — who of course has access to generative AI in various guises: what should they be doing to start to think better, act better, and make better decisions?
Gianni: I think it’s a bunch of practical things, and we’ll get to the tools and the practices in a second. But the first thing that needs to be done is to change our frame of reference a little bit. Most of us, especially in the West, but in recent times even in the East, have been almost imprinted with this notion that we need to be smart, that our brain has to be better. And obviously we learn, and we push ourselves, and we apply a bunch of techniques and all that kind of stuff. But one of the things I have realized over the years — and I did my share of big jobs, being a Chief Innovation Officer in a large IT services company and such, and the work on collective intelligence helped me with this — is that our brain is not just what we have in our skull.
Essentially, what we have in our skull is a catalyst, a point where a bunch of signals from the outside come and mingle. If you think about the brain that way, then you realize that in order to optimize its functioning and intelligence, you need to use the whole ecosystem of things that exists around you. And we live in a time where we have access to information in a way that’s unprecedented. You and I are about the same age, I guess — remember when we were kids? At best, we would go to the library. That was the way we had to augment our intelligence. Then we went to school, and there were teachers and pupils, and maybe we could work with each other. Now we have all this stuff around us. So the first point is really to stop thinking of your brain as just the neural stuff you have in your skull.
And so the mind — the extended mind — is not just a philosophical concept. Maybe we’re stepping into the practicality now: there are a bunch of steps you can take to actually make yourself more intelligent, the moment you stop thinking that your brain is only yours. I’ve done a lot of work over the years on how to do that, and I think there are at least four steps people should take. They apply at an individual level, but also at an organizational level; we’ll start with the individual.
And the other thing I suggest when we do this is to go, obviously, for diversity. I work in academia a lot, and academics tend to be people who know a lot about very few things — that’s the job. If you’re like most people, you will want a diversity of perspectives. The obvious example is Steve Jobs, who knew a bunch of things about computer science, but also marketing, design, and a bunch of things that ran through culture. So if you want to really become smart, try to deliberately find nodes — but find nodes that push you into different dimensions of the space. I found it for myself: I work in software and consulting, I used to work at BCG, at SAP, etc., and then at some point I found this whole space of design, which has its own techniques and methods and people, and that really completely opened my mind. So that’s the first thing to do.
Ross: I just want to sneak in there, because this very much echoes what’s in Thriving on Overload: this idea of finding the right people, sources, and information as inputs. Cognition — human cognition, or any cognition — is built around information going in, something happening, and then, hopefully, useful actions coming out. So can AI help us with what you’ve just described: identifying the nodes, and ensuring that those are sufficiently diverse? Are there specific techniques or approaches to do that?
Gianni: Yeah. First of all, the short answer is a resounding yes — and we haven’t seen anything yet. What can we do today? You can ask machines. A lot of AI has already been embedded in our search mechanisms for some time. If you know how to search properly — I know you had Marshall Kirkpatrick on your podcast a while back, for example — you can find, through simple algorithms, people on Twitter and elsewhere. These days you can even go to Perplexity.ai or ChatGPT and have a conversation. Let’s take an example. Say, “Hey, I’m interested in the development of artificial intelligence, and I want to follow people who have a good and interesting point of view on how artificial intelligence augments human resources.” You can have that conversation with Perplexity, which by now has a copilot function as well, or with ChatGPT, and ask, “Who are the people out there?” It will know some of those people. And then you go into LinkedIn and start looking at what they have tagged, and so on.
It’s actually important that the machine first helps you break up the semantic space into its sub-components. This is one of the techniques in design: so often, we think of a problem as a bounded whole, but when you start peeling back the layers, you find that the problem has individual components. For example, when you say “AI for human resources,” what does that mean? It doesn’t just mean the thing for payroll; it might be employee engagement, it might be skills taxonomies, it might be collaboration tools. With the machine, you can actually break up the space, and then, in those sub-spaces, you go and find the people. That’s already the first thing people should do.
Ross: I just want to dig a little bit into that. You’ve talked about mapping semantic spaces, and I think that’s really a fundamental thing in cognition: what are you thinking about? All right, let’s map that out; let’s find the elements of it. As you said, that’s relevant in finding people in a space, and it’s relevant in many other aspects of cognition. So can we dig a little deeper? There are many approaches, but what are one or two that people can take to start mapping a semantic space?
Gianni: I think this falls under the umbrella of falling in love with the problem before falling in love with the solution, which is a very simple, fundamental concept in design. And most of us don’t do it right — we don’t do it individually, and we don’t do it in groups. We like to run to the solution; we’ve been trained like that. Teachers would look at how quickly we found the solution. Actually, you first need to find the problem. So how do you do that? First of all, you can have a conversation with machines. I think we kind of forget this, because we haven’t been trained in an apprenticeship model. I was born in Florence, Italy, and when I go back I sometimes bump into the places where Leonardo was trained. The guy went into the workshop and talked to people, and Leonardo’s master would engage him in conversation: “What is light?” Leonardo was one of the people who figured out how to depict light in a very different way — that’s the reason you have the Mona Lisa. The Mona Lisa is awesome because light is used in a different way, and in that period that wasn’t normal. So what do you do? You don’t just say “art is art.” You say: no, no, let’s decompose it. What is art? There are the shapes, there’s the motion that is depicted, there are the colors and the techniques, and there is the light. At some point, that conversation must have happened between Leonardo and his master, or his peers.
So use these machines as a conversation tool. Basically ask them, “What is in this?” Break it down into its individual sub-components — that’s the first thing to do, and they do a really good job at it. One of the things about generative AI is that it knows the latent semantic space of whatever you’re talking about. We humans kind of intuit it, but they do a really good job of mapping it, and if you push them and say, “Well, this is not good enough,” they will go further, and then you explode it. They do it in an almost effortless manner, which is a little disconcerting at times. And then, as a human, you can say, “Well, now explode the first point, and the third point, and the fifth point — tell me what’s in there.” It’s like a nested doll: in the end, reality has all these sub-structures that we’re typically oblivious to, because it’s too much for our mind to take in. But when you start the job of identifying nodes, boy, you want to go in and disassemble that network structure. So that’s the first step — a very simple thing that most people should be able to do.
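The conversational move Gianni describes — ask the model to break a topic into sub-components, then “explode” each one like a nested doll — can be sketched as a small helper. A minimal sketch under stated assumptions: the `ask` callable stands in for whatever LLM client you use (OpenAI, Perplexity, etc.), and the prompt wording and the stub model below are illustrative inventions, not anything quoted in the conversation.

```python
def decomposition_prompt(topic: str) -> str:
    """Build a prompt asking the model to break a topic into sub-components."""
    return (
        f"Break the topic '{topic}' down into its individual sub-components. "
        "Return one sub-component per line, nothing else."
    )

def decompose(topic: str, ask, depth: int = 1) -> dict:
    """Recursively map a semantic space.

    `ask` is any callable taking a prompt string and returning the model's
    text reply -- an LLM client adapter you supply yourself.
    Returns a nested dict: {sub_component: {its sub_components: ...}}.
    """
    if depth == 0:
        return {}
    reply = ask(decomposition_prompt(topic))
    subtopics = [line.strip("- ").strip() for line in reply.splitlines() if line.strip()]
    # "Explode" each sub-component one level further, like opening a nested doll.
    return {sub: decompose(sub, ask, depth - 1) for sub in subtopics}

# Stub "model" so the sketch runs offline; swap in a real client in practice.
def fake_llm(prompt: str) -> str:
    canned = {
        "AI for human resources": "payroll automation\nemployee engagement\nskill taxonomies",
    }
    for topic, answer in canned.items():
        if topic in prompt:
            return answer
    return ""

tree = decompose("AI for human resources", fake_llm, depth=2)
```

The human's role survives in the interface: you choose which branches of the returned tree to explode further, rather than accepting one flat summary.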
Ross: Yes, absolutely. So going back — we’ve gone off on a couple of tangents, as we said we would. We started on the steps: the first was looking at the nodes, and I’m not sure whether diversity was the second one? So let’s continue from where I interrupted.
Gianni: Diversity was still part of finding the nodes — most of us tend to stay within our bounded spaces, but again, you can talk to the machine and ask what the adjacent spaces are once you know which kinds of things are really important. And just to finish on this before we move to the second step: we always want to stand on the shoulders of giants; we never want to start from scratch — that’s one of the ideas of collective intelligence. So one thing that’s super important is to go into the node identification also with the lens of concepts and theories from other people. For example, managerial theories — say, Christensen’s disruption theory. That is one way to break down the space. All theories — all scientific theories, including management theories — are one way of looking at the world and disassembling it. It’s like a lens. Somebody gave me an ultraviolet lamp the other day, which was a lot of fun: when you look at the world through an ultraviolet lamp, you see all sorts of different things. Managerial theories are like that lamp. When you ask that very question — “AI for HR, through the lens of disruption theory” — you can have that conversation with the machine, and it will give you a bunch of perspectives you didn’t think you had. I think that’s really important, because generative AI does a good job with the semantic space, but it still doesn’t do a fantastic job in the symbolic space — it has some representation of the world, but not the entire one. If you start pushing it and say, “Look at it through the perspective of, I don’t know, blue ocean strategy,” it will actually disassemble the space, and then you can go and find the people and the material.
So, to finish that thought: that creates diversity.
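The “lens” move above — re-asking the same question through a named theory — is mechanical enough to script. A sketch under my own assumptions: the lens list and prompt wording are illustrative choices, not anything prescribed in the conversation, and the prompts would be sent to whatever model you use.

```python
# Illustrative lenses; swap in whichever theories fit your domain.
LENSES = [
    "Christensen's disruption theory",
    "blue ocean strategy",
    "Ikigai",
]

def lens_prompt(topic: str, lens: str) -> str:
    """Ask the model to disassemble a topic through one theoretical lens."""
    return (
        f"Look at '{topic}' through the lens of {lens}. "
        "What sub-problems and perspectives does this lens reveal? "
        "Then name people who have written about each."
    )

# One prompt per lens -- each lens is an 'ultraviolet lamp' on the same space.
prompts = [lens_prompt("AI for human resources", lens) for lens in LENSES]
```

The design choice mirrors Gianni's point about human responsibility: the machine answers each prompt, but the human curates `LENSES` and decides which lens's output to pursue.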
Ross: So potentially an individual, depending on their role, may choose a set of different models or frames or lenses that are most relevant to their work.
Gianni: Probably starting with models that you’re familiar with — and there are many of them. You can ask the machine. At the moment, interestingly, the machines are optimized to economize their computing, so they will give you simple answers. But if you ask, “What are the models I could apply to this thing?” they will tell you: “Here are 15 of them.” Then you say, “Okay, start with the first one — tell me how you see the problem through it.” And then you look at the people who are behind those models. That’s a very simple thing to do, a fun thing to do, and it’s already an insight in itself. So the second step — and most people miss this — is that to have really insightful conversations, you need to do something for those nodes.
Obviously, if you use Google search or something, you’re literally paying with the advertising dollars that sit behind your clicks. But for more intentional, intelligent conversations — look at the relationship that you and I have. We met each other online; you live in Australia, I live in Berlin, Germany, and we’ve never met in person, yet we work on a bunch of the same stuff and follow each other. I’ve been following your work for a long time and appreciate it; I give you feedback, you give feedback to me, I refer you to people. It’s important to always pay it forward in these communities, in these networks, if you really want to have conversations with people as opposed to just scraping. Scraping is fine, by the way — for most people it may be sufficient. But if you want to go deeper, you need to do something for the nodes out there. And what I try to do, and what you do — you’re a very good example — is publish a lot. It takes you time to do what you do; it takes you time to do what you’re doing now; it takes time to put out the insights you put out on social media. In a way you’re paying it forward, and people will pay it back in the form of, “Yeah, I want to talk to you. Yeah, I’m going to have a conversation with you.” So the second pillar is really how to set up incentives so that the network fires. The equivalent in our brain is hormones: your brain would be very happy to sleep all day, but it has some powerful things like oxytocin and adrenaline, and boy, it fires when it gets those. The equivalent in networks is incentives. They can be norms, they can be culture, they can be a bunch of things; at an individual level, it’s: do something for the next person, or for a bunch of people. Somebody said, if you cannot code, at least you should be able to write.
And I think that’s a very good thing to do, and many of us do it — and obviously we get something back. So that was the second pillar.
Ross: One thing that’s fairly obvious, again in terms of the alignment of our ideas, is that what you’re describing is, for me, the living networks — and the question is how you bring them to life. You create value for others, you make connections, and out of that good things happen; you participate in this higher-order organism. And part of that frame now is that it’s not just humans in the network: there’s also AI. AI is the interstitial piece, but AI constitutes nodes as well.
Gianni: Look, incentives can also be the 20 bucks a month you pay to OpenAI — you give them an incentive, and they give you something. There are all sorts of levels: at level one, you get some stuff for free — sometimes good, sometimes empty calories. Then you go up and pay for services. Then you go up further and pay with your time — the time you put into creating something really unique or novel, sharing it with others, and engaging with them — and that gives you an increasing amount of interaction with the network; you get deeper and deeper into new things. Maybe for some people it’s enough to stay at level one, and maybe people like you and me need to go all the way to level three, because we’re trying to discover new things — to go where we don’t think people have been before. So that was the second step.
The third one: all these networks have an inclination to become a little insular. Some super fascinating work has been done, for example, around a guy called Karl Friston — the free energy principle and the concept of active inference. We are built to survive; that’s the first thing we want to do. And to survive, we balance doing and resting: the more you do, the more energy you use; the more you rest, the less energy you use, but also the less you gain. So what do you do? You need to forage — you need to go out and forage. That’s the basic idea behind building information feeders into those networks, because those networks are quite happy to stay in their little bubble, where there’s less cognitive dissonance and everybody kind of agrees with each other — but then, obviously, they ossify. Going out and looking for new information is expensive in terms of cognitive resources, and sometimes it jars you, because you find people whose opinions you don’t necessarily like. But you need to build that stuff in. Marshall, who you had on the pod — through X, the old Twitter, he gets all sorts of feeds, including stuff he wouldn’t have gone looking for deliberately. And I’ve been using Feedly for the longest time; we obviously all have our good pods and good newsletters. It’s not enough to find the nodes — you also need to put their firehose into your backyard, otherwise you don’t get anything out of it.
Increasingly, I think we’ll get good help from artificial intelligence here. One of the things I really long for — and I’d give it a good chance of happening in 2024 — is something that summarizes the firehose. We all work in spaces where we have a ton of stuff coming in; I’m sure you, like me, are subscribed to more newsletters and podcasts than you can consume. So the objective is: can a machine summarize what comes from those feeds, so that you can consume it better? I don’t actually believe the machine will do a fantastic job of summarizing the really juicy things — unless it’s trained in a certain way, and by the way, there is work in that direction — and I very often find AI-based summaries a little bland. But what it can do is give you almost the equivalent of a table of contents, so that you as a human can decide: oh, the third point and the seventh point are really interesting, I want to go and see those. And that makes you a lot more efficient in your consumption of knowledge. That’s a huge point. You can find the nodes, you can pay it forward, you can do all that stuff — and then you have a firehose and you can’t really process all of it; everybody has a job and only so much time. So summarization is going to be important: you can already use machines to point you at the right sub-components of the corpus, and then you do the job that humans do, which is going in and finding the connections. Retrieval-augmented generation will do a good job there in the future — increasingly, it already does.
But I still feel that for some time we will need humans going in and connecting not just the semantic dots, but really the symbolic dots — the core connections and analogies, beyond what machines will find. So that’s the third.
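The “table of contents, not a summary” idea can be sketched without any AI at all: reduce the firehose to numbered headlines so a human can pick which items to read in full. The field names (`title`, `source`) and the sample feed are my own assumptions about what a feed record might carry, purely for illustration.

```python
def table_of_contents(items: list[dict]) -> str:
    """Render feed items as a numbered table of contents.

    Each item is assumed to carry 'title' and 'source' keys. The human
    scans this list and decides which numbers to open, instead of
    reading (or trusting a bland machine summary of) everything.
    """
    lines = [
        f"{i}. {item['title']} ({item['source']})"
        for i, item in enumerate(items, start=1)
    ]
    return "\n".join(lines)

def pick(items: list[dict], numbers: list[int]) -> list[dict]:
    """Return only the items the human flagged as worth reading in full."""
    return [items[n - 1] for n in numbers]

# Hypothetical feed records standing in for a real newsletter/RSS firehose.
feed = [
    {"title": "Mixture-of-experts explained", "source": "newsletter A"},
    {"title": "Industry gossip roundup", "source": "newsletter B"},
    {"title": "RAG evaluation benchmarks", "source": "blog C"},
]
toc = table_of_contents(feed)
chosen = pick(feed, [1, 3])
```

In a fuller version, an LLM call could generate the one-line `title` for each item; the division of labor stays the same — the machine indexes, the human selects and connects.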
Ross: Absolutely. And this is where I sometimes get a little frustrated, because a lot of the conversation is about how we get the best information — but information only has value when, as you suggest, we make it part of our own understanding or knowledge, our own comprehension. I frame this as knowledge creation: the mental models or frameworks we have in our minds. When we get new information, we can integrate it, assimilate it, and improve our mental models on the basis of it. So I am fascinated by how we can use AI to build better mental models — partly by making our mental models more visible — so that new information means we understand better and make better decisions, rather than just being presented with better-quality information.
Gianni: Let’s double-click on that, because I think it’s fundamental — it’s one of the things that will make us smarter or not. There’s a big fork in the road. Part of the answer is that we still need to work hard at it. In the work I’ve done at MIT and in other places, you realize pretty quickly that humans can form a dependency on the machines: we just click and hope the thing will be broken down for us into perfect pieces, with the minimum amount of effort on our part. First of all, I don’t think that’s right in general, because it will make us complacent and lazy. But it also doesn’t do the job today. To go back to an earlier point: I can talk to a machine, but to break down a space, I need to engage with it. “AI and human resources — break it down. Now tell me who’s been writing about this; tell me who I should follow,” and so on. You need to engage with the machine to get the data. And the second part is that it’s your decision as a human which lenses to apply. You may be familiar with the concept of ‘Ikigai’ — a Japanese concept about the meaning of life, which is really interesting from a career-guidance standpoint. It says you need to find the space at the intersection of your knowledge, your passion, and what the world needs — because in the end you get paid for it, intrinsically or extrinsically. It’s a lot of work. The machine knows what ‘Ikigai’ is, and it may even tell you the concept is relevant.
But it doesn’t take the decision for you — and you should take that decision. You as a human, as a team, as an organization are in charge of deciding which mental models to use. The machine can tell you which mental models exist, but it’s your job to decide which ones to apply.
Ross: So, the fourth point?
Gianni: The fourth point is the collaboration element. The work we’ve done over the years at MIT was intended to be cross-disciplinary — spanning organizational and management science, but also neuroscience and computer science. And if you think about all the things we’ve just talked about, they mirror, at some level, how the brain works.
So the fourth aspect is collaboration. Once you have the nodes, you’ve set the incentives, and you have all these feeders, you need to engage with the whole thing. The easiest example, sitting here at the beginning of 2024: everybody talks about mixtures of experts in AI. A mixture of experts basically has different agents with different characteristics — different models — engage with each other. If your objective is to get to AGI at some point, to make the machine smarter, one of the things people are saying now — especially based on the success of GPT-4 — is that GPT-4 is itself a combination of models. It’s not just a GPT-3+++; it’s a bunch of comparable models combined, interacting and collaborating with each other. So even on the artificial side, the way people are trying to solve the problem is not just more data and more parameters — it’s “let’s take eight models and orchestrate an architecture of collaboration between them,” so you can get them into dialogue, dialectic, and so on. That’s a super important concept, and we’ve built a couple of things on that foundation that I’ll tell you about in a second. You need to have collaboration infrastructures that are built for that.
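The orchestration pattern described here — several agents with different characteristics answering the same question, plus an aggregation step — can be sketched with plain callables standing in for the models. This is an illustration of the general idea only, not a description of how GPT-4 or any production mixture-of-experts system is actually built; the stub experts and the majority-vote aggregation are my own simplifications.

```python
from collections import Counter

def orchestrate(question: str, experts: dict) -> tuple[str, dict]:
    """Ask every expert the same question, then aggregate.

    `experts` maps a name to a callable(question) -> answer.
    Aggregation here is a simple majority vote; a production
    mixture-of-experts router instead selects which experts to
    activate per query (or per token) rather than polling all of them.
    """
    answers = {name: expert(question) for name, expert in experts.items()}
    winner, _ = Counter(answers.values()).most_common(1)[0]
    return winner, answers

# Stub experts with different 'characteristics'; real ones would be
# separate models or differently-prompted LLM calls.
experts = {
    "optimist": lambda q: "yes",
    "pessimist": lambda q: "no",
    "analyst": lambda q: "yes",
}
verdict, votes = orchestrate("Should we fund this promotion?", experts)
```

Keeping the individual `votes` alongside the `verdict` preserves the dialectic Gianni values: the disagreeing expert's answer stays visible instead of being averaged away.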
If you’re in a company, people often talk about, “Well, I have Teams, or I have Slack, or something like that — what’s the ROI of that stuff?” For a bunch of people, that’s like asking: what is the ROI of a telephone? I was having a conversation the other day with some folks about the ROI of cellular phones. There are a lot of studies showing that in Africa, without cellular phones, agriculture would be much worse off, because they help connect farmers. But it’s interesting that in the West, in developed economies, we very often take that stuff for granted — and then say, “The CIO will implement some collaboration technology.” That isn’t the right way to think about it.
So first of all there are the tools, but there’s also the change management related to the tools — getting people to adopt them. Which means, for example, that if you’re an individual professional in a company — suppose you lead a team, or you’re the leader of an organization — you need to lead by example in the usage of those tools. Don’t wait for your teams to use them; get in there ahead of them. I’ve seen it, and I’ve done it many times in my career, because that was part of my job as an innovation officer: to create systems that then innovate by themselves, even while I’m sleeping. You sometimes see business leaders — P&L owners, those kinds of people, not the CEO — say, “Well, give me the tools.” But what the good ones do instead is get into those channels and say: “Don’t call me — I’m not going to answer the phone. Come here and write your question here, and I’ll answer it here.” One, everybody sees the question and the answer. Two, everybody knows that next time we have that kind of conversation, we won’t keep it in a dark hole between you and me — because once we hang up the phone, nobody will ever know we had it. So that element of change management, on top of choosing the right collaboration tools and getting them implemented, is vital to the ROI. In the end, the organizations that do this well exhibit a higher level of collective intelligence than the others, where there are silos and cliques and people talking behind the curtains, and you don’t quite know what’s going on.
So the fourth aspect is really this collaboration element. And AI, if it works as advertised, will help a lot here. It already does some of the basic things: it transcribes video calls and gives you summaries, so at least it tells you what you talked about. On that basis, people who haven’t been in your calls can decide, “Oh yeah, this is interesting, the summary is enough for me,” or, “No, I want to go in and listen to that thing.” This is huge. We don’t realize it, but this blurring of the boundary between synchronous and asynchronous collaboration is one of the defining elements of the evolution of our collective intelligence in 2024. People never fully appreciated that we lived in a time when synchronous collaboration was pretty much invisible to everybody who wasn’t in the room. Now that’s not the case anymore. And on the asynchronous side — I don’t know if you’ve tried ChatGPT with the voice interface — increasingly you’re going to be able to build agents you can query in real time during a meeting: “Tell me what you know about what that team has done. In marketing: what did the finance team say last time we did this kind of promotion?” And the machine will tell you, in real time: “Here is all the stuff we’ve seen in the last two years; here are the cases where they gave us funding, or didn’t, and the parameters.” That’s asynchronous knowledge made synchronous, while humans collaborate in real time. To me, that is one of the things I’m most engrossed by, because I think it will fundamentally change how our collective brain works — literally, space-time compression, where the knowledge is searchable and queryable and you can collaborate with the corpus. I don’t know if you’ve ever read Iain Banks?
I mean, the work of Iain Banks, you have all these agents to basically learn and then you interact with them.
Ross: I love that. As you point out, the merging of the synchronous and the asynchronous is changing now. Yeah, absolutely, transforming collective intelligence, with potential most organizations haven't tapped. If you're a leader, being able to design for that matters. Early in COVID, I was running quite a few workshops with leaders of large organizations, bringing to their attention the distinction between synchronous and asynchronous, which is not always evident to them. But as you say, now that the two start to merge, that changes the nature of what an organization can be. So I think we'll probably, in due course, have to do a follow-up episode, since we're only just beginning to dig into this stuff. But just to come back to the beginning: collective intelligence. You can think of multi-agent systems as this idea of human agents and AI agents and how those can come together, but also this question of how AI can facilitate collective intelligence, be that in an organization or potentially beyond organizational boundaries. So just to wrap this up, what are some of the most tantalizing aspects today of where we can go with this?
Gianni: It's a really good question, because there's a lot that will happen that we don't know of yet. So one thing I would say is just stay tuned to what's happening, because there are these wildcards, right? I mean, Apple is coming up with their goggles-type device; what is that going to do? That's multimodal, and it also brings, you know, a collapsing of space, for example. You have all these layers. At least two things make me feel like we are on the verge of a real inflection. One is what you said before: the work that you do with organizations, I think, is hugely important. Obviously this is a podcast, so it doesn't translate well visually, but think of this as a two-dimensional space. On one axis you have the number of people: zero people, one person, many people. On the other axis you have the number of machines: zero machines, one machine, multiple machines. We've been mostly designing in the space of one person, one machine, you know, GPT-3 with one person. There's a ton of design space in the many-people, many-machines corner. What we did with the MIT Ideator, for example, which was a tool to help people think through, almost like a design-thinking facilitator in a box, if you will: that was one person, one machine.
But increasingly, for example, you could build things like that where you don't have only one machine; you have almost an analogue of multiple personas. You say to the machine, "I want to do something in HR with artificial intelligence." And then you take multiple personas: you take the persona of the CHRO, the persona of the head of hiring, the persona of the business partner who interfaces with the business, the persona of the entry-level employee, the persona of the senior employee. And you make all those personas participants in a virtual workshop. So that's multiple machines, maybe one human, and then you do a design workshop like that. But then you take a bunch of humans and do a design workshop like that with multiple personas and multiple people. I think there's an enormous design space there.
And the interesting thing is, this design space is not technology-constrained anymore, because of the tools that we have. I don't know if we have show notes here, but we may actually add a couple of links. Especially with GPTs, right, the release of GPTs that was done. And I think the marketplace for GPTs is coming this week, for example. A lot of people will be able to do a lot of the things that we just talked about. So that's the lower-hanging fruit, and I think we'll see that playing out in the next two, three months. And I think there's an even bigger picture that we can talk about, maybe at a separate time. I still think there's something really fundamental happening here, in which you build an ecosystem out of the things that we talked about for individual augmentation. Suppose you have somebody like, I don't know, the Gates Foundation; I'm just making an example.
Building what at MIT we call a supermind out of all the things that we just talked about: building a supermind that basically does the four things that we talked about, right, identify the nodes, give incentives to the nodes, feed the ecosystem, and create a collaboration environment, and does it at a level that is much bigger than the individual person. You put enough resources there, and computational resources, so that the machines autonomously look for nodes, autonomously look for ways to feed the right people in the networks, finding and feeding information in a way that is not just based on advertising's empty calories, right? And then enabling, say, a machine knocking on Ross's door and saying, "Ross, you should talk to Jeremy, because he's been writing some stuff that is related to the stuff that you've been writing." Can you imagine if we did that? Again, I mentioned the Gates Foundation because they're trying to solve very complicated, complex problems. Can we build machines and architectures that do that autonomously for each of the spaces where we really need important solutions? Obviously financial sustainability, the environment, et cetera. That, Ross, doesn't feel like it's five years ahead anymore, right? When I started working in this space, it felt like we needed a very different technology paradigm. I don't think that's the case anymore. I think it's more a matter of organizational design: putting the processes in the right place is what's going to change things.
Ross: Absolutely. As you say, in everything you've described, there's some pretty wild and wacky stuff that we've talked about, if you really think about it. But as you say, this is all possible. A lot of it is the underlying tools of generative AI, and part of it is now just the processes, the interfaces, the organizational design. And that is all there for the taking, as it were. So yes, as you suggest, 2024 and beyond is going to be pretty interesting. So Gianni, where can people go to find out more about your work?
Gianni: Super easy: Supermind.Design is where I try to work out in the open. And obviously on LinkedIn there's a bunch of stuff I try to put out there. I used to do a lot more on Twitter; I'm less keen on that these days, maybe just me being a little despondent. But again, LinkedIn and Supermind.Design. And I really encourage people, because for the work that we do it's important that people show up and say, "I didn't like what you said," or "I think you missed a piece," et cetera. So I really encourage people to reach out.
Ross: Fantastic. Thank you so much for your time and your insights, Gianni.
Gianni: It was awesome. Thank you for the work you do, Ross.