July 18, 2023

Jerry Michalski on ethical cyborgs, amplifying uniqueness, peak knowledge, and fractal conversations (AC Ep2)

“I believe that more people would be eager to jump in and think together if the thinking were fun and led to something truly productive and useful. For me, that’s a significant aspect of amplifying cognition.”

Jerry Michalski

About Jerry Michalski
On this episode we learn from the incredible connector Jerry Michalski. His fascinating career is hard to summarise: he played a central role in the emerging digital economy as the long-time managing editor of Esther Dyson’s Release 1.0 newsletter. He now leads the Relationship Economy eXpedition (REX) and is an advisor, facilitator, and speaker at the Institute for the Future, with a deep focus on trust and relationships.

What you will learn

  • The potential of generative AI and other advanced models in enhancing human capabilities (03:12)
  • Difference between Cyborg and Centaur (04:25)
  • Exploring boundaries and augmentation in the age of ChatGPT (05:33)
  • Balancing individuality and AI in knowledge management (07:10)
  • Using AI to explore diverse perspectives (09:14)
  • The danger of the loss of distinction between fact and fiction (10:37)
  • The significance of collective intelligence among cyborgs and the urgency to address ethical considerations (11:44)
  • Embracing contagious and viral ideas while avoiding oversimplification (18:20)
  • Dealing with an ethical cyborg vs. an ethical person (20:26)
  • Ethical concerns related to AI research and the potential misuse of open-source models (21:52)
  • Navigating arguments and perspectives with ChatGPT (25:46)
  • Amplifying cognition as the by-product of collective thinking and knowledge sharing (33:20)

Episode Resources

Transcript

Ross Dawson: Jerry, it’s amazing to have you back on the show.

Jerry Michalski: It’s very exciting to have another conversation with you. Thanks for the invite.

Ross: It’s almost two years since you were one of the first guests on the show, a very obvious guest then, and a very obvious guest with whom to relaunch Amplifying Cognition. What are you thinking about? What are you doing? What are you delving into these days?

Jerry: It’s funny; we were just comparing notes a little bit, and it seems like my path is converging with your path even as we speak. It’s very fun, because I realized not that long ago that I’m more of a cyborg than anybody I know, because I externalize more of what I think into this Brain software that I use. I find it incredibly useful and usable. Even though it’s called the Brain, it has no AI in it, so for me it has not been an experience of using generative AI or any of the models we’re talking about here. But, oh my gosh, those things are all completely complementary.

My general notion is that the future of work is cyborg; we’re going to have to learn how to meld well with technology. That means we’re probably going to have to figure out how the tools work and how to incorporate them into our lives. But also, the ethics of this stuff is really important. The other piece of what I’m working on is standing up a community of cyborgs who are trying to work together to figure out, Hey, what does this next generation of work look like? And how do we do it in some ethical way so that maybe our efforts are making the world a better place instead of destroying it?

Ross: How would you define cyborg?

Jerry: I was torn between cyborg and centaur. Centaur does not roll off the tongue, and people don’t know what centaurs are. But, oh my gosh, cyborg immediately brings to mind Arnold Schwarzenegger in Terminator 2, which is totally the wrong image. That’s funny, and I like that that’s the first thing, because I don’t mean the robots from Skynet that are going to come kill us all; I just mean extensions to human capacity. I’m not even talking about biological extensions. At this point, I’m mostly talking about software. But that biological stuff is just on the horizon. The brain-machine interface stuff isn’t that far off. I don’t know where it’s going to go. A lot of it is going to be for making up deficits; that’s mostly where prosthetics go, when somebody loses capacity. I think we’re a longer way from “I think something, and it’s manifested in the world,” but even that’s not that far off. For now, though, we need to integrate better with software.

Ross: We talked about cyborgs in a work context; I want to delve into that. But perhaps let’s pull back as well: what happens when we become cyborgs?

Jerry: It’s funny, because when this whole ChatGPT thing got exciting and heated up, I was having a conversation with my friend, Pete Kaminski. I said to him: Pete, are you losing your boundaries? Are you having any boundary issues? Because one of the things that comes up right away is, if you share information or start a conversation with ChatGPT, where do you end and where does it start? If you take the results of a query and turn them into your essay, what did it create? What did you create?

There are a lot of interesting questions about where the borders of the participants are anymore. That’s just one of several different layers of things that start to show up. The other one, obviously, is: is my job going to be automated? There’s a phrase here I really like: augment versus replace. Doug Engelbart famously gave us the augmentation of humans; that was his goal. I think it’s a fantastic goal. I don’t know where we lost his thread, but we’re busy trying to automate jobs out of existence when I think what we should be doing is making tasks go away while helping people do more powerful things together.

Ross: Let’s say an organization of today suddenly has ChatGPT. There are two frames: individuals and organizations. It seems to me that it started with individuals, so let’s start there. Let’s say a person says: all right, I will make myself a cyborg so I can be better at my job. How does that work now?

Jerry: It’s interesting, because I probably have a not-quite-unique but quirky outlook on this, because I’ve been feeding this mind map for 25 and a half years. I have a highly developed public external web of everything I believe in, and one of the questions coming up right now is: is notetaking obsolete? Should we stop taking Tiago Forte’s Building a Second Brain course or things like it? To my mind, because of my personal experience, it’s an extremely dangerous course of action to give up on personal notetaking and conceptualizing things ourselves and decide: I’m just going to ask ChatGPT and it’s going to give me the answer, because it’s going to increasingly know everything and be able to organize things, like magically coming up with the eight categories that perfectly map to some domain I’m curious about or trying to write about.

The tools are scarily powerful at doing things exactly like that. I’m very interested in that boundary between individual notetaking, note sharing with other people to build some kind of collective intelligence, and how all of that folds in with this new set of intelligences that are outside of us but are only smart because they’ve swallowed everything humans have ever written.

Ross: I think one of the really important pieces here is our uniqueness. We’re all absolutely unique humans, and one of the things that makes us so is that we think uniquely. We need to accentuate our uniqueness and how it is we think. That’s the diversity of mental models. Cognitive diversity is what we look for in an organization; we don’t want everyone to think exactly the same. If we all outsource our thinking to GPT, then, in fact, we will all be thinking exactly the same. To amplify our own uniqueness, as you say, we need to have our own mental models, which means capturing our own thoughts and our notes and how they fit together in our own unique way. That’s pretty important.

Jerry: I had a fun conversation with somebody yesterday on a walk, where he was saying that it seems like generative art is converging on a particular aesthetic. Maybe that aesthetic will change over time as the tools get finer-grained and better, but he was worried that we were going to turn everything into pudding, basically; that’s like an intellectual gray goo scenario, where all of a sudden everything winds up being kind of the same. I think that humans do provide some of the spice and uniqueness in the mix. But I also argued for some of these AIs, where you can tell the AI to take very different perspectives. One way to bring more voices into the room is to ask your AI to represent indigenous ways of knowing, or a particular indigenous group, and say: Hey, you speak for this perspective, because no one in this room understands it; let’s see if that helps us think of something a bit differently than we normally would.
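For readers who want to try the pattern Jerry describes, here is a minimal sketch of perspective prompting. It assumes the OpenAI Python SDK (v1.x) and an API key in the environment; the model name, system prompt, and helper function are our own illustrations, not anything from the episode.

```python
# Minimal sketch of perspective prompting, assuming the OpenAI Python SDK (v1.x)
# and an OPENAI_API_KEY in the environment. Names here are illustrative.
from openai import OpenAI

client = OpenAI()

def ask_from_perspective(question: str, perspective: str) -> str:
    """Ask the model to answer while representing a stated perspective."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    f"Respond from the perspective of {perspective}. "
                    "Say explicitly where you are uncertain or at risk of "
                    "stereotyping, rather than inventing specifics."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Bring a voice into the room that no participant holds.
print(ask_from_perspective(
    "How should a team think about long-term stewardship of shared knowledge?",
    "a scholar of indigenous ways of knowing",
))
```

The caution baked into the system prompt matters: as the conversation turns to next, these models hallucinate and carry the biases of their training canon, so the output is a thinking aid, not a representative voice.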

Ross: That’s a great use. Just yesterday, I was sharing this new software which apparently can predict music hits with 97% accuracy. Again, that’s pretty dangerous: if suddenly the only thing we get is what is supposed to be a hit, we lose the rest. But I don’t think that’s going to happen. I think our musical tastes are diverse enough, and we will express our uniqueness in what we listen to.

Jerry: There’s this problem several people are worried about: the outputs of generative AI are going to be fed into the search engines and become the new inputs for everything, and then the snake will eat its own tail. It’ll be like the famous ouroboros, and that is not out of the question. One of the dangers in that scenario is that we start to lose the difference between fact and fiction, because we will be feeding hallucinations into the system as if they were facts, and then all hell breaks loose. One of the early posts I saw asked whether we’re already past peak knowledge. Is this the end of the Golden Age, when we suddenly have query engines where we can search everything, and most everything humans have written is now in the system? But oh, now we’re breaking that.

Ross: This comes back almost precisely to the cyborg piece, as in, we both independently came up with the phrase “how to be a better cyborg.” How, Jerry, could we become better cyborgs?

Jerry: Part of it is understanding how the tools work and what the limitations are, and not becoming the lawyer who submitted a brief he had fact-checked using the same tool that generated the hallucinations, and who got really embarrassed in public a month or two ago. You don’t want to be that guy. There are a lot of ways to avoid those errors. Understanding how the tools work and what their limitations are lets you use them well to generate creative first drafts of things.

One of the enemies of mankind is the blank sheet of paper. So many people are given an assignment, and they sit down, and it’s just like: no. You write two words, ball it up, and throw it in the trash. And here, all of a sudden, you can have six variants of something put in front of you. We need to become better editors of generated texts. Then the other piece of being a better cyborg is not about being a lonely cyborg: what does it mean to be in a collective of cyborgs? What does it mean to be in a cyborg space? What does it mean to co-inhabit cyborg intelligence with other people and other intelligences that are just going to get faster and better at this thing? I think it’s really urgent that we figure out the collaboration side of this, so we don’t think of it only as: well, they gave everybody a better spreadsheet and now everybody’s making a lot of spreadsheets. This is different; this is different in type.

The third thing I would bring in is the ethics of it: boy, it’s easy to misuse these tools in so many ways. Unless we understand, A, how they work and what they’re doing, and, B, have some relatively strong notion ourselves of what is right and what is wrong to do, this is going to evolve without us. There’s one school of thought, which Bill Joy voiced years ago: there is no more privacy; forget about it; privacy is overrated. And the other realm is what the EU is doing right now with new privacy regulations. They’re really working hard to figure out how to protect us from having our data just sucked out of our lives and used by other people to manipulate us, which is what capitalism wants to do.

It’s not as easy as: I’m going to get good at Photoshop, Final Cut, or whatever, and become an ace with some software. I point to those kinds of people as the early cyborgs. If there’s any piece of software where you no longer think of the commands, maybe you’re a spreadsheet ace who builds these massive, incredible models with pivot tables and who knows what, and you’ve internalized the software so well that it doesn’t even come to consciousness, you’re down this road of cyborgness. But this is more complicated than that, because the issues are so important and because we can now collaborate and communicate about all of those issues better.

Ross: There are a few layers to being an ethical cyborg. One is being aware of the concept of ethics in the first place. Another is the desire to be ethical. Another is knowing how to do it. As people become cyborgs, they gain, among other things, greater power. This amplifies our capabilities, which arguably makes ethics more important. How do we go through those layers of making people aware that they could or should be approaching the world ethically, learning the principles, and actually putting them into practice?

Jerry: If you’ll permit, you just reminded me of a story from long ago; then there’s another thread about the word consumer that I’ll bring in. I went to Wharton Business School a really long time ago and was on the Dean’s Advisory Board in my second year. I said: Gosh, it’s really nice that we have a six-week-long ethics course that’s mandatory. But when you’re in the ethics course, everything looks like an ethics case, and you answer everything ethically, because you’re in the ethics course, duh. The only way to teach ethics is to hide it in the curriculum through every course; you must redesign courses everywhere so that one of the tasks in any course is for a student to stand up and say: Hey, we could do this, but it would be wrong, and here’s why.

Number one, I think we need to figure out how to make people aware, and how to… hide the broccoli is the wrong metaphor, but basically, make sure that people have drills. In the Toyota Production System, any worker on the line could stop the line because of quality; they taught that, and it worked. Even in Japan, where you don’t want to stand up because the nail that pokes up gets hammered down, it worked, it really worked, because there was this sense of shared responsibility for the whole process. Awesome.

Then the second thing is that there are ways in which we do unethical things we don’t even notice, because we’ve normalized them so much. My whole journey started 30-35 years ago, when I realized I don’t like the word consumer. I can point to a couple of briefings, around ’93-’94, where I realized this word really bothers me and it’s a major issue. Then later, maybe a decade later, I realized we had consumerized every sector of human activity, which meant we were treating people as consumers to control and manipulate, as opposed to citizens with whom to engage in that activity. I can go in a hundred directions from that point. But being aware that that is a problem, and that the things you’re busy coding or doing might actually be contributing to the problem instead of fixing it, is another piece of this puzzle.

I’m very interested in provoking and maybe facilitating some of those conversations, so that we can all start to realize we’ve got choices. Maybe between your community and my community, and several other people’s communities, we collect up enough voices to have an effect when legislation is being drawn up, when companies try to do things, etc. That would be a great thing.

Ross: Just picking up on that, it suggests that the path to the ethical cyborg runs significantly through conversations.

Jerry: It’s very social, shockingly collegial and social; it’s lovely. Very much.

Ross: Yes, well, we can’t put everybody in the world in an ethics workshop. To your point, that’s not necessarily the best way to get there.

Jerry: Right. One of the things… the three words I’ve heard kill more good ideas than any others are: it won’t scale. What we usually have in mind when we say scale is industrial scale. When Intel is building a new fab, they put up a dozen lines for production, tweak all the variables, pick the best-yielding line, and say: replicate exactly all the settings on all the devices on this line, and you will get lots and lots of chips out the other end. Human systems are not like that whatsoever. We are flaky, we are fluky, and we are weird. But we are also very social.

I prefer the term adaptive or fractal scale, by which I mean lots of conversations can happen at lots of scales, down to four people or three people at a time, and the same thoughts can be had over and over and over again. It doesn’t bother anybody, and that scales, because when I say scale, I mean influencing or touching a whole lot of people. I don’t mean telling everybody to do the same thing to get the same results. Those are two very different ways of thinking about scale. So if we want something contagious: the free hugs meme is contagious. It’s viral in a very cheap way, because once you’ve seen a video of somebody giving free hugs, you’re like: Oh, that’s cool. It’s in your brain now. How do we take these issues and, without oversimplifying them, make them that kind of palpable in our lives?

Ross: What do we need to understand when we are dealing with an ethical cyborg as opposed to an ethical person?

Jerry: That’s a crazy, interesting question. For example, if you ask GPT to name 10 philosophers, it’ll name 10 dead white guys, a couple of whom might be alive. You have to say: Hey, GPT, name 10 Islamic philosophers. Oops, those will still be guys. But if you ask about other sorts of indigenous wisdom, or about women, it will come up with a great list of people. You need to prompt it, though, because there’s a bias built into the system’s web of meaning: the Western canon, the human canon, contains so much bias anyway, and it’s extremely hard to purge that out of the system. An awareness of the bias, and some ways to circumvent, neutralize, or improve on it, are crucially important. That’s just one of the ways we have to walk into this.

Ross: For example, when I was looking at my map of intelligence and how we view intelligence, GPT was an incredibly useful tool for finding female and diverse perspectives on intelligence. The trouble was that much of it was hallucinated, so getting it fact-checked was pretty tough. But at least it identified some people I should be looking into.

Jerry: There are also questions about whether there should be a moratorium on all this research, whether we need to stop it. I’m unclear if that’s even possible. I can easily imagine that there are bad actors in the world, and there are many of them out there. Even though a big piece of my lifetime message is trust, there are a bunch of bad actors out there taking some of these open-source models and building the thing you just sort of humorously said might exist: a malevolent artificial intelligence.

It could be powerful. These things are hard to gate. Without much imagination, I can envision too many scenarios that worry me, but I don’t see any way to put the brakes on this. We need to infect more people with good intentions in using these tools, so that they can find and stop the people who are using them badly.

Ross: Overlaying a few of the themes we’ve talked about: we have organizations of cyborgs, we have collective intelligence, and we have the ethics of organizations of cyborgs. Essentially, how do we build that into something that is both effective and has a positive impact on the world?

Jerry: I think one of the things that’s hard to imagine is that software is instantly and cheaply replicable. Not only could I have a smart agent out there doing my work in the world, I could have a hundred of them or a thousand of them; it really comes down to how many of them I can manage and how they connect. I can easily imagine that somebody’s working on that problem of how you delegate work across software agents in different ways to create, basically, a robot army for any person who can step in and control them. That’s kind of crazymaking. That’s popping straight out of some good science fiction novels into our present reality.

We have to figure out what collective intelligence, hive mind, or collaborative sense-making looks like, and what we would like it to be. Is it like Wikipedia, where there’s a canonical page with the right answer for each thing? Like: here’s the page for carbon, and this is what is allowed to be on the page for carbon; you have to duke it out on the talk pages behind the page, and only the result gets seen? Or is it an overlapping hive mind, where different constituencies wind up saying, here’s what we believe, and here’s what they believe, in a way that lets us compare notes but doesn’t force us to blend everything into the gray goo?

Because the moment we’re all forced to come up with the one canonical answer to everything (and here I’m torn, because I think truth matters and facts matter), the moment we’re all forced into the pressure cooker of having the same answer, the consensus answer, meaning the answer everybody agrees to, those answers will be, A, targets of all sorts of bad pressure and, B, probably worthless; they’ll wind up becoming unusable.

Ross: Thinking about, for example, Bridgewater Associates, where everyone’s encouraged to be very difficult and contradict others, and Andreessen Horowitz, where they have red teaming on major decisions: this is all about disagreement ultimately leading to a decision. In a world of cyborgs, do we then get individual cyborgs holding opposing views that get resolved between them, or are the opposing views within the one cyborg? How is that configured?

Jerry: Who knows? There’s a term called steelmanning. You’ve heard of the straw man, right? The straw man argument is one you put up just to knock down. Steelmanning is when you know your opponent’s argument better than they do: you can represent the logic of their argument so well that they would agree with it. You can tell ChatGPT to go do this; you can tell it to take both sides of an argument and present them as if it were debating itself, no big problem there. I think it may become easier to do these sorts of things.

What’s interesting to me is the boundary between facts and logic on one side, and faith, politics, and argument on the other. A lot of what’s happening out there is arguments based on faith, or on things that have no basis in data or results; they’re just assumptions, and assumptions such that, if the other side slowed down and agreed to abide by the data, their argument would probably melt. They’re very likely to be unwilling to do that, right? Nobody wants their argument to fall apart. So this space is going to get really contentious. We have to worry about, and try to figure out, how to navigate the waters where stories meet factual or causal narratives. By the way, in a fight between emotions and facts, emotions win every time. There could be this interesting battle between fact and fiction that we’re entering, on top of everything else we’ve talked about.
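As a rough illustration of the steelmanning use Jerry describes above, here is a hypothetical sketch that asks a model to argue both sides of a claim. The prompt wording and model name are assumptions; only the technique itself comes from the conversation.

```python
# Sketch of steelmanning with an LLM, assuming the same OpenAI SDK setup
# as the earlier example. Prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

def steelman_both_sides(claim: str) -> str:
    """Ask the model to present the strongest case for and against a claim."""
    prompt = (
        f"Claim: {claim}\n\n"
        "First, steelman the case FOR this claim: state its strongest "
        "argument so well that a proponent would endorse your wording. "
        "Then steelman the case AGAINST it in the same way. "
        "Do not declare a winner."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A claim from earlier in this conversation makes a good test case.
print(steelman_both_sides("Personal notetaking is obsolete now that we have LLMs."))
```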

Ross: Yes. I just think it would be good to come back in a year or two to see whether you or I, or others, have any structures to facilitate this. Pulling back to the leader: let’s say you’re talking to a CEO. Are you going to say to him or her, you are now the leader of an organization of cyborgs? How should they be thinking about and enabling that? Is it a cyborg organization? Or is it an organization of cyborgs? How does this work?

Jerry: You know, it’s really very interesting. On the one hand, in the US we have an association called AARP, the American Association of Retired Persons, which is now an obsolete name because nobody’s really retiring, etc. They claim to speak for people aged 60 and over, or something like that. They don’t speak for me, because every time I get mail from them, I tear it up and throw it away. But they’re also doing nothing to actually communicate with people. They’re a big, centralized organization and a huge lobbyist in our capital, DC. But they don’t represent the people they claim to represent, as opposed to Alcoholics Anonymous, where the structure is just given, people set up groups, and there’s a protocol and a method for going through the process. No money changes hands, which gives it a certain kind of authenticity, veracity, and importance, because the work being done there is really important work for humans, right? Those are two opposite kinds of organizations.

Part of what I’m playing with is: how do you do a highly decentralized organization that has some sense of rituals, connection, and meaning, and some agreement on what is right and wrong to do? Which is hard; I don’t think that’s easy work. But if you can set those things up, then everybody doesn’t have to be ruled from the center. I’m stunned by the fact that Facebook now has more monthly active users than the populations of China and India combined, and they are ruled by a single person who has dictatorial powers over them all, because that’s how he structured the shares. That really is incredible to me. You could consider that to be the largest country on Earth. We’re already in those waters. That’s done.

Ross: This is a leaderless organization. How do you configure or architect a leaderless organization that achieves objectives and alignment, and where the cyborgs collectively go and get it done?

Jerry: Yes. People are working on different parts of this, like the IndieWeb, the Fediverse, and several others working on federated, distributed things. Crypto people would say: over here, over here, blockchain is distributed, and so forth. I’m just not a fan of what’s happened over there; I don’t think it’s contributing to the kinds of puzzles we’re thinking about here. I’m having trouble explaining what a shared memory looks like. I can tell you what Wikipedia is, and you know what Wikipedia is: an encyclopedia built on open-source software that runs wiki-style on these servers and is funded by donations. We can explain exactly what Wikipedia is. But what makes that easy is that it’s only an encyclopedia. I can’t use Wikipedia to tell you a story of why I think the global financial crisis happened, right?

I have a thesis about that, and I put some videos online about the GFC just to try to storytell, to explain, building on evidence, why it happened. We don’t have a place to share those sorts of things so that we build a shared understanding about them. In a fit of pique, with a little bit of humor, I bought thebigfungus.org, where there isn’t much; it’s just a placeholder site. But I think the big fungus is a nice metaphor for the shared knowledge web, because mycelial links and mushrooms are just really great metaphors for almost everything. It works really well for shared knowledge. What does that look like? When we start thinking together, what does that look like?

And I have a second funny way of looking at it. When Zuckerberg renamed Facebook as Meta and went on his Metaverse binge, spending tens of billions of dollars on worthless research (sorry, people on that project), I bought the domain thebetterverse, I think it’s “.com”, because I don’t think any of that floating around in 3D with an avatar head is going to lead to a better universe. But if we figured out this shared wisdom, if we knew what we knew and made it better over time, we could get to a better verse. For me, all this talk about cyborgs, software, data, and the future brings in some of those really big issues, because I think this is as large a transformation as programming was when we first got into coding, and we can see how much that transformed the world.

Ross: We could go on talking about this forever. We need Episode Three before long.

Jerry: I feel like I’m a helium balloon in this conversation where you’re like, Okay, let’s talk about this thing and then I’m like, Yes, but, and then I feel like I’m floating up in the chair, like, up a couple of floors.

Ross: Let’s try to round it out by grinding it down to the ground.

Jerry: Yes. 

Ross: The theme is amplifying cognition. We have incredible brains; how do we amplify them? How do we make them better? Part of it is becoming cyborgs with AI and other machines and technologies. What would you suggest to people who want to amplify their cognition, or be a cyborg, or be a better cyborg? What are some of the steps? What is the journey that we need to be on?

Jerry: A couple of hours ago, I was in a conversation where the other side of the argument was that creating bodies of documents that make sense is too complicated, and most people won’t engage in it. I was like: but there’s Wikipedia. It doesn’t matter that only a few people did the organizing; everyone else who touches it gets to benefit from it. No? It was sort of disheartening, because I believe that more people would be eager to jump in and think together if the thinking were fun and led to something really productive and useful. For me, that’s a big part of amplifying cognition. It’s like: Hey, folks, let’s think together, let’s learn to think, and let’s step into some ways of sharing what we know and what we believe, even sharing the wildest guesses that are probably wrong. It would be interesting then to compare notes and say, well, it’s wrong because of this. Okay, great: then you can change your mind and make that explicit out in this shared memory of some sort.

I think that starts by just learning to do notetaking, and then figuring out how to manifest what you see in some way that other people might be able to use. It could be in Obsidian, it could be in Roam; there’s a whole bunch of thinking tools or mapping tools. I happen to use the Brain and really like it, but I’m extremely aware that it’s not for everyone. But then, how do we collect this up so that it’s a larger artifact that all humans can benefit from? It’s a little bit like the library in the Foundation series way back when: we’re going to destroy civilization, so we need to build a library somewhere far enough away that it survives the destruction, so that we can rebuild ourselves later on. I’m not quite at that plot point. But it sometimes feels like a project sort of like that.

Ross: Yes. The conversation is perhaps the most wonderful thing in the universe, but there’s also writing, or words, or visuals: anything that takes our thoughts out of our heads in a way that others can engage with them is the foundation of collective intelligence. Then I think part of it is also how we use AI in that piece of capturing, integrating, and expressing what it is we are thinking. But I absolutely agree that people do need to be capturing things.

Jerry: It is fun. I have a whole series of lessons from using my Brain for 25 years. One of the lessons is that using the Brain forces me, or switches me, into System 2 thinking all the time. In Thinking, Fast and Slow, Danny Kahneman says System 1 is your instinctive response, your quick answer, and System 2 is when you have to slow down, piece things out, and make them make sense. What happens to me is something floats by in the info torrent, and I’m like: Oh, that’s worth remembering. Okay, good. Then come the questions: Where does it go? What do I name it? What can I learn from it? What is it connected to? I’ve gotten to where I can do that little loop very quickly. That is a piece of the kind of thinking I’m talking about. Too many of us are just overwhelmed by the info torrent. We’re drowning in the info flood, and every year somebody invents a new tool, like Snapchat, TikTok, or what have you, that we all seem to have to go get on. All of this is flow, and we don’t have good tools to capture the good stuff and put it someplace where it’ll last a little longer.
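Jerry’s capture loop (name it, place it, connect it) maps naturally onto a small linked-note structure. The sketch below is hypothetical, written only to make the loop concrete; it is not a description of how TheBrain stores anything.

```python
# Hypothetical sketch of the capture loop as a tiny linked-note graph.
# It illustrates the System 2 questions (what do I name it? what is it
# connected to?); it does not describe how TheBrain itself works.
from dataclasses import dataclass, field

@dataclass
class Note:
    name: str
    source: str = ""                         # where it floated by
    links: set = field(default_factory=set)  # names of connected notes

class MindMap:
    def __init__(self) -> None:
        self.notes = {}

    def capture(self, name, source="", connect_to=()):
        """Name it, store it, and wire it to the notes it relates to."""
        note = self.notes.setdefault(name, Note(name, source))
        for other in connect_to:
            self.notes.setdefault(other, Note(other))
            note.links.add(other)
            self.notes[other].links.add(name)  # links run both ways

    def neighbors(self, name):
        return self.notes[name].links if name in self.notes else set()

# Something floats by in the info torrent; placing it forces System 2 thinking.
m = MindMap()
m.capture("peak knowledge", source="this episode",
          connect_to=["generative AI", "ouroboros"])
print(m.neighbors("generative AI"))  # {'peak knowledge'}
```

Nothing here replaces a tool like Obsidian, Roam, or TheBrain; the point is that the loop Jerry describes is a small, repeatable act of structuring, not a heavyweight process.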

Ross: You are setting up a community of ethical cyborgs, is that right?

Jerry: That’s the goal. It’s not set up yet. But I’ve been having several of those conversations just this week, and it feels like we’re on parallel paths here.

Ross: Well, I’ll help get the word out when that comes out. Anything that you want to point people to who want to know more about what you do?

Jerry: Sure. I’m easily found at jerrymichalski.com and jerrysbrain.com. The community that I started three years ago, at the start of the lockdown, is called openglobalmind.com; you can join those conversations. We have several standing calls every week. We haven’t written a lot of code, but we’ve turned over all these issues to the point where we’re getting somewhere with our understanding of the shape of the problem and whom to go talk to about what. So those are some places to find me. And if you go to jerrysbrain.com, you can browse my Brain for free by clicking on Launch Jerry’s Brain.

Ross: Yep. It will all be in the show notes. Thank you for all of the wonderful work that you do, Jerry.

Jerry: Ross, same here, and it’s just exciting to see how similar our thinking is.
