September 13, 2023

Genevieve Bell on the history and relevance of Cybernetics, frameworks for the past, present and future, and decolonizing AI (AC Ep10)

AI was never a technology; it was always a research agenda, and more than a research agenda, it’s a thought exercise, or conjecture, that says you can automate thinking if you can describe it in smaller pieces.

Genevieve Bell

About Genevieve Bell

Genevieve is Distinguished Professor at the Australian National University, and Director of the School of Cybernetics and the 3Ai (Autonomy, Agency and Assurance) Institute at the university. Her many roles and honors include Senior Fellow at Intel, SRI International Engelbart Distinguished Fellow, Non-Executive Director of the Commonwealth Bank of Australia Board, and Officer of the Order of Australia.

What you will learn

  • Exploring the essence of cybernetics (03:27)
  • Tracing the influence of cybernetics on diverse fields (11:53)
  • Approach of the School of Cybernetics at the Australian National University (16:03)
  • Decolonizing AI and understanding its historical roots (22:32)
  • The intricate power dynamics and interests at play in the establishment of AI (28:06)
  • A more inclusive view of AI (29:25)
  • Embracing plural futures (32:28)

Transcript

Ross Dawson: Genevieve, it’s a delight to have you on the show.

Genevieve Bell: It’s great to be here Ross and I’ve brought my coffee.

Ross: Oh! Very important to start in the morning.

Genevieve: Absolutely.

Ross: You are the director of a new school of cybernetics at the Australian National University. Please tell us, what is cybernetics?

Genevieve: It’s like you’re setting an exam question – Discuss! I’m the Inaugural Director of the School of Cybernetics here at the Australian National University. It’s the first new school the University has started in about 40 years, and I have to say the upside and downside of that are the same thing: upside, it’s been so long that no one remembers how to do it; downside, it’s been so long that no one remembers how to do it. There’s something about starting something new that’s always incredibly appealing to me. The school is new, but the idea isn’t.

What is cybernetics and why should you care? Cybernetics has lots of histories, but the one that’s important here starts in the United States in the 1940s. It starts with a whole collection of conversations that were bubbling along during World War II and in its immediate aftermath. In that period, in addition to all the complexity and horror that was World War II, there were also conversations going on about the role of technology in all of that. What does it mean to think about increasingly complex technological objects? Part of what was happening in World War II was the rise of computing, an ancestor to the computing we know now.

We were in this transition where computers were ceasing to be people who do maths and becoming machines that do maths. The machines do it much faster, but in ways that are more energy intensive and more human-labor intensive, and they are quite large. People were desperately trying to find the right language to describe what these computational objects would be. What are they going to be? We’re not talking about the slick machines that you and I are talking on now; we’re talking about things that were the size of multiple shipping containers, and they were loud and smelly. I love the idea that computers smelled. They were really mechanical, and there was quite a lot of labor involved. People were trying to find a way to talk about all of that, because talking about them as calculators didn’t get it done. It certainly didn’t encapsulate the possibility.

You have all kinds of people in the UK, the US, and a little bit in Europe who start talking about these things as brains; giant brains, electronic brains, brains, brains, basically. As a result, there’s this really interesting intersection of conversations about cognition, about sentience, about intelligence, about learning, about thinking, and about these thinking, computing, brain machines, and all those conversations start to come together. It’s probably because of the people who were having them. They all knew each other. They’re all moving in similar kinds of circles around universities and various government agencies, and there are conferences going on, because that hasn’t changed in the last 50 years. It’s always good going back to those old conferences.

There were a couple of epic conference series happening on the East Coast of the United States from about 1943-44 to 1950. Those bring together people out of philosophy, what we would now think of as neurobiology, physiology, maths, linguistics, and, in my field, anthropology, and they’re talking about how thinking works, how consciousness works, and how sentience works. In parallel, computers are getting more sophisticated. One of these conversations is being funded by a philanthropic organization interested in interdisciplinary thinking; they didn’t quite call it that, but that’s what they were doing. They brought all these people together to talk about this intersection of thinking and being, how animals think and humans think, and what machines could do.

It’s got all these incredibly clunky names, and all these amazing people are there. Gregory Bateson and Margaret Mead, amazing anthropologists; F.S.C. Northrop, who’s a philosopher at Yale, I think; and other people like McCulloch and Pitts, who were on the edges of what we would now think of as computing. Into that morass, unexpectedly, a book is published. It’s unexpected because this book became a New York Times bestseller. That’s great, except it’s all mathematics, and mathematics books don’t usually become New York Times bestsellers, but this one does. It’s 160 pages. It is dense, and it has equations on many of those pages. The thing that captures everyone’s attention is the name of the book, because the book is called Cybernetics.

Cybernetics was a word that didn’t really exist up until that point in English. The author of this book, a man named Norbert Wiener, who was a mathematician, a philosopher, and a difficult human being, had a fairly complicated life; we’d call him a polymath now, but at the time he was simply roaming intellectually fairly widely, and he had a theory about the world. He’s interested in what’s going to happen as these thinking machines, these computers, get more powerful, how humans are going to relate to them, and what’s going to be the nature of that relationship. Importantly, how could you theorize it mathematically? And if you could, where would that get you?

He coins the term cybernetics to be the study of the communication between computational objects and human beings, or in his language, the study of the relationship between the machine and the animal. He’s interested in how we would think about that. There wasn’t a word for that in the 1940s. How on earth would you talk about computers, when we’d barely started calling them that, and their relationship with humans? There’s a sort of language blankness there. The language he goes back to is Greek. He learned Greek as a little boy; his dad was a linguist who insisted that Norbert learn all kinds of languages, so he read Greek as a child. The word he searches for in Greek, and the word he ends up pulling through into the 20th century, is ‘kubernetes’, which means steering, and steering specifically of a boat.

I imagine he was thinking of a small boat, because of the word’s origins in Greek mythology; the kind of boat where you can feel the water through the edges of the boat, where you can see the water everywhere, where as you adjust the rudder, you know the boat is moving. That interplay between you, the boat, and the water is a very clear system, and what it means to navigate, steer, and nudge is a very clear feedback loop, which is what he was also really interested in. That word ‘kubernetes’ has a second meaning, which I’m sure he was also playing on, which is about governing systems. Somewhere in his mind, he’s managed to find this word that doesn’t sit easily in English, meaning both to steer and to govern, specifically of systems, and particularly of human systems and something else.

From that Greek ‘kubernetes’, he creates the word cybernetics. Cybernetics, 1940s: the book comes out, becomes a national bestseller, attracts everyone’s attention, and becomes a shorthand for talking about the future of computing, and specifically the future of computing with humans. In the 21st century, if you start to think to yourself, we desperately need a different set of conversations about the world we are building; about who we are with, through, and around machinery; about what it means to have AI in our lives; about how we want to think about us and these computational objects; and about the environment in which we find ourselves, literally both the physical world and, more broadly, the environment and ecosystem, then you want words to describe all of that. We looked around, and my colleagues and I went, “Oh look, that word from the 1940s, we should bring that forward, because that word still works!” That’s how we became a School of Cybernetics. The too-long-didn’t-read version is that cybernetics is a word borrowed from the Greek to describe and theorize the relationship between humans and computational objects, where the ecosystem is important. Or, to put it another way, cybernetics is a way of talking about a complex dynamic system with humans, computation, and environment at its core.
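
For readers who want the steering metaphor made concrete, here is a minimal sketch, in Python, of the kind of negative feedback loop being described: sense the error between the heading you have and the heading you want, and nudge the rudder in proportion. The function name, gain, and numbers are illustrative assumptions, not anything from the episode.

# A minimal sketch of a cybernetic (negative feedback) steering loop.
# All names and constants are illustrative, not from the episode.
def steer(heading: float, goal: float, gain: float = 0.5, steps: int = 20) -> float:
    """Repeatedly sense the error and nudge the rudder toward the goal heading."""
    for _ in range(steps):
        error = goal - heading   # sense: how far off course are we?
        rudder = gain * error    # decide: a nudge proportional to the error
        heading += rudder        # act: the boat responds, and we sense again
    return heading

print(round(steer(heading=0.0, goal=90.0), 2))  # prints 90.0: the loop converges

The point is not the arithmetic but the loop itself: a system, its environment, and a corrective signal flowing between them, which is exactly the steering-and-governing sense of ‘kubernetes’.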

Ross: That’s fantastic! It’s important to have that reference frame for where we are today. As you say, it’s finding what we’ve already got and being able to use that. A lot of my lineage of thinking is tied into that. Gregory Bateson is one of the people who has most influenced me, and Buckminster Fuller’s idea of the trim tab is deeply tied to the idea of steering.

Genevieve: Yes, he’s tied up in those conversations too. Once you start unpicking this moment that gets labeled as cybernetics, and all the people involved in that conversation and in the Macy conferences, which get rebranded from their interminably long name to just the Macy Conferences on Cybernetics, everyone turns up in that conversation in the first half of the 20th century. You have Vannevar Bush, who goes on to give us the memex machine; Licklider, who goes on to run ARPA’s computing research; von Neumann, who’s building the ENIAC. You and I both know Bateson, absolutely. And then it just goes on to be the wellspring of almost everything.

Pick anything interesting, as far as I’m concerned, in the second half of the 20th century, and there is some cybernetician lurking in the background. You’ve got Stafford Beer, who gives us all the organizational development stuff, and who also profoundly influences Brian Eno, and in turn Bowie, Talking Heads, and Kraftwerk, that whole genealogy. Eno talks about being influenced by cybernetics, which is crazy and fabulous. You’ve got the branch that springs out of Silicon Valley, which in some ways is the one I’m closest to: Bateson, but then also Stewart Brand, Terry Winograd, the Symbolic Systems program at Stanford, frankly the internet, and Doug Engelbart at SRI, all those kinds of pieces.

You’ve got other branches that turn up through art, with people like Jasia Reichardt, and then all of the early digital and animated art. You’ve got multiple other tangents; there is absolutely one in American anthropology, in the shift from the Boasian school of anthropology, where culture is just a list of things, to culture as a system, or, as Geertz would say, a web of meaning, a web of significance in which we are suspended. That’s very cybernetic. You just start to realize that it runs through all of these things. Importantly for you and me, all the protagonists who turned up in 1956 at Dartmouth, at the AI conference, were influenced by the cybernetics conferences. Some of them had been there. Claude Shannon, founder of information theory, is also in the Macy conferences. There’s this very clear sense that you can’t have AI without having had cybernetics first, that you can’t have a notion of wanting to theorize the future of computing without having had this prior conversation. For me, there’s something about wanting to reassert that history, and the intention of that particular collection of voices, because there’s a politics to all of it too. They’re very clear in those conferences in the late 1940s that a future of technology that unfolds unchecked is problematic, that we need to be having conversations about power, morality, and ethics, and that we need to be thinking about whether there are ways of computation unfolding that aren’t just about war and aiming machinery.

Ross: There’s this idea of cybernetics, which is, as you say, influencing so much, including the technology domain, the arts, system dynamics, and systems thinking. Ultimately, cybernetics is the most comprehensive systems perspective you can have; I’d say it covers what’s alive and what’s not alive, and how all of those fit together. That brings us to 2023, where we have a blossoming of AI and extraordinary possibilities. We always think from the perspective that humans are the inventors. We’ve invented incredible machines, and now we have relationships with them, all kinds of different relationships, out of which these systems flow. Today, we have many choices, perspectives, ways of thinking, and initiatives with AI and humans. We’d love to hear about your work now and what the School of Cybernetics is doing to help frame these conversations and move them in a positive direction.

Genevieve: Look, it’s such a good question, Ross. When we approach these conversations, we bring three very different techniques to bear as we imagine how we should encounter the 21st century. One of them is shaped by where we find ourselves. The Australian National University has a distinctive mission. It’s a university created by a federal government at almost the same time as the Macy conferences were unfolding, with a very clear sense that it was a university designed to build capacity and capability. It was all about how we equip people to handle the world in which they find themselves, not just looking backward, but forward.

In some ways, part of the mission of the school is to build capability and capacity. I believe you have to do that in lots of different ways. There are some classic tools: research the thing. For me, there’s also the ‘go teach it.’ Then there’s certainly the ‘go engage with the world and bring a different conversation to bear.’ For us, step one of this school is that it really is about critical thinking and critical doing, that sense of how you engage to build capability and capacity. I know that sounds like a total word salad for the 21st century, but it’s really important to us.

That means we have multiple education programs ongoing. We have a tiny master’s program with about 15 to 20 students a year. It’s cohort-driven and designed to be painfully interdisciplinary. People come from really different places and have different lived experiences. Right now, they hate us desperately; August to September is always bad, because it suddenly feels like, ‘Oh, God, that’s such a lot you’re making us think and do.’ But it’s really about how we create a cohort of people who have a different set of tools and a different way of talking. The same goes for the PhD program. But I also know you can’t expect everyone to come to a university and spend one or three years here. So it’s also a little bit about how we take what we know and push it out into the world in different kinds of formats. Come spend a day with us, and we’ll come spend a day with you. Are there shorter kinds of conversations? That piece, where it’s always about how you equip people with a critical set of questions, a framework, a way of actively engaging with the present and the future, feels hugely important to me.

The second piece is about what the nature of those questions would be and where they are going to come from. The first set of questions are historical. It’s hugely important to know where something comes from and what was happening at that moment in time. Things always have a history, a backstory, and knowing who those people were and what was in their heads is hugely useful. It reminds you that these things will also inevitably have politics. They tend to have a point of view, and being able to get your hands around that is hugely useful. We teach our students and ourselves to ask the ‘Where did it come from, and why was it that?’ set of questions.

The second set is about how you critically interrogate it in the present. How do you think about what makes something a system now? You’re right; cybernetics is a systems approach. We are very much invested in thinking about things as systems. I tend to think that the 21st century is going to be the century of systems. If the 20th century was about nations, states, and economies, we still have to talk about those, but we also have to talk about systems more broadly as an analytical lens, as a way of making sense of and seeing the world. You and I probably both like James C. Scott’s ‘Seeing Like a State’, but I think it’s a bit of ‘seeing like a system’ that feels important at the moment for me, and an underserved skill.

Ask about history, see the world as systems in the present, and then there’s the future-looking piece, which I also believe is important. We have a responsibility to create a set of narratives about the future that feel more hopeful than the present. More hopeful in an optimistic, not naïve, sense: where we can see toward, and start telling stories about, a future or futures that are more hopeful, more just, more sustainable, more fair, and more equitable. Then make sure that those stories become things that guide us in the present, so that we make different kinds of choices now.

So: a school that’s invested in building capability; a school that has a set of approaches, historical, present, and future; and then a piece about recognizing that it’s not just about capability and questions, but also about where you stand and what you want to be interested in. For us, that meant an abiding interest in the current technological apparatus, so absolutely AI, but also water systems, and digital systems more broadly. I’ve been doing something on the history of telegraphy, because I’m interested in the first systems that created the world we move in today. We’re just finishing up some work looking at ideas of the metaverse and the virtual. We have projects looking at relationships with robots through the lens of dance, empathy, and affection. We have things going on, on any given day, about a myriad of different things. For me, it’s those three pieces: where we want to turn up to build capability; how we do that through these different kinds of approaches; and then picking a set of places to go apply it, so it’s not just abstract, but real.

Ross: That’s incredibly broad-ranging. If we think about today, the technology we’ve created plus humans, that’s the world we live in. There are many systems, and we need to untangle them so we can have some understanding and a path forward. One phrase that you have used, which might inform this, is “decolonizing AI”. I’d love to hear what that means and how it fits into this conversation.

Genevieve: The idea of decolonization is a theoretical and pragmatic approach to various kinds of political, social, and technical systems. It’s an approach that has its roots probably 30 or 40 years ago, coming out of various forms of critical social science. The idea was to be able to see that the colonial act, the act of colonizing or taking things over, tends to be a profound enactment of power, a rearrangement of human bodies and often political systems, done with a degree of violence, an erasure of the things that came before, and a profound tidying up of things.

The colonial act is always, in this particular theoretical view, understood as being about power, about a violent rearrangement of things, the erasure of things, and the assertion and imposition of very particular kinds of order. Those orders usually flow out of the West, and they carry very particular ideas about the self, about the relationship between self and others, and about time and space. Decolonization has two quite different senses. One is what happened through the radical rejection of colonial regimes in lots of different countries over the years. How did people reject British rule in Africa and other parts of the world? How did they reject other forms of colonial rule?

In the academic and intellectual space, it became: how do we reject or critically examine some of those same moves? The impositions of power, the rearrangements of bodies, the totalizing ideas about time and space: how do you unpick those things? Then, depending on which school of decolonization you’re interested in, it might be about how you read through the silences or read against the grain, or do things like subaltern studies, which was people like Ranajit Guha from India: how do you read Indian resistance to British colonial rule through British colonial documents that portrayed it as senseless? If you assume it was sense-full, how do you make sense of those activities by reading through the gaps? So there are multiple strategies inside decolonization.

In decolonizing AI, there are a couple of things you might want to do. One is to be critically aware of the historical articulations of AI. For me, that’s usually about knowing at least two things. One is the first constitutive document of AI, the grand proposal that gets written to Rockefeller in ’55, and the importance of the sentence that basically commences the project description for the conference, which runs something like: the study is to proceed on the basis of the conjecture that every aspect of learning or intelligence can be so precisely described that a machine can be made to simulate it. Sitting in that sentence are all sorts of things. Number one, it assumes learning and intelligence can be precisely described, which is really interesting. Number two, it suggests that you can break those things down into small pieces. Number three, if you can do that, you can then automate it. AI was never a technology; it was always a research agenda, and more than a research agenda, it’s a thought exercise, or conjecture, that says you can automate thinking if you can describe it in smaller pieces, which you and I…

Ross: You deconstruct humans.

Genevieve: Not even humans, because humans are never mentioned in that sentence. It’s about the learning.

Ross: Right. True.

Genevieve: Other things were at play at that moment in time. One way of decolonizing AI is just to go back to the original words and say, “Look, it’s clear this was never about a technology. This is about a research agenda and a stance that says you can automate learning if you can describe it precisely enough,” which is classic industrialization and automation; pick Weber, Taylor, Fordism; break it down into little itty-bitty pieces and make it into a production line. That’s interesting. Likewise, it’s important to go back there and ask who was in the room and what their agendas were, right?

In 1955-56, the people who frame up that conjecture include Claude Shannon and Nathaniel Rochester, who are, respectively, at Bell Labs, in charge of the largest R&D shop in the US at the time, and the leading designer of mainframes at IBM. Two well-established industrial leaders at the top of their game, inside the companies that are building the future in 1955. (Sorry, while my machine beeps at us.) Then on the opposite side of that are two junior academics, Minsky and McCarthy, who are just finishing up their PhDs, who have an idea about neural networks, and who are looking for industry partners.

The money comes from the Rockefeller Foundation. Going back to 1955-56 and asking yourself, “Who is establishing AI? What do they think they’re up to when they’re doing it?” is one way of giving ourselves a different perspective on its seeming inevitability in the 21st century: it’s actually from the 50s, tied up with large companies who are building computers and want a story about the future, and with academics who are looking for the next thing to study. That’s a way of critiquing its inevitability and giving back some of the messiness sitting underneath, so that you can also see where the power is being asserted, and whose power is being asserted.

Ross: I can see from what you’re saying that the history piece is significant in being able to understand the present. I’ve gone back and seen a lot of that history. There are many branches. Engelbart took this in a different direction, and many others have taken more or less humanistic perspectives along the way. What we have today is very diverse and multi-faceted, but with roots that come from that. Now it is a bit of a beast, one that we can hopefully shift and frame toward a more positive future in time.

Genevieve: But for me, one way of thinking about how to decolonize it is to make clear where the first ideas came from and whose interests they were serving. Then you can ask what pieces get erased there. What’s missing from that conversation? You want to go back and ask: what are the earlier, other versions of computing and automation? How do we go back and look at everything from Karakuri, the tradition of robotics in Japan, to Islamic engineers like Al-Jazari or the Banū Mūsā brothers, who were building automata over 1,000 years ago? What is their sense of what they are doing? It’s not about something being precisely described so that a machine can be made to simulate it; it’s much more about notions of beauty, grace, enacting ritual, or pleasure.

Being able to unpick the logic of industrialization or capitalism, or to suggest there are other kinds of formulations sitting inside those ideas, or other histories, is intensely useful. Or indeed to say: look, a whole other way you could critique all of this would be not about that particular history, but about the work that maths and statistical models do inside contemporary AI. What are the other ways you might want to approach that? How do we get past ideas about automation and machines simulating things to thinking about other kinds of loci? How would you talk about responsibility instead of simulation? How would you talk about different ideas, not of replication, but of expansion? Or, as you sometimes talk about, how do we imagine ideas of support and entanglement, not erasure and replacement?

All of those become other ways of unpacking it. I get to look at some of my extraordinary colleagues like Angie Abdilla, Mikaela Jade, and Tyson Yunkaporta, who are starting to push, in Australia, on what it would mean to think about Indigenous AI, to think about artificial intelligence through the lens of kinship and responsibility, not automation and replacement. Then you just start to see all these incredible other possibilities. That’s hugely hopeful and generative, not just Skynet going live.

Ross: Absolutely. Just rounding out: this is so much about amplifying cognition, but also about amplifying humanity, which is our ability not just to think but to be at our best. I’d love to hear any frames around where we could be going and what we should be doing now, so that our human-technology systems amplify humanity at its best and create a better future.

Genevieve: That’s such a good question, Ross. For me, it’s a little bit about how we make all of that plural. That sounds like a terrible cop-out, but I tend to imagine we should want multiple futures, because I don’t think everyone wants the same one, or that cognition turns up in the same way everywhere. There’s a little bit about how we imagine and make space for lots of different amplifications, and not always reifying the human. There are a lot of cultures around the world where the relationship between humans and the environment is hugely important and technology is incidental to it, or where it’s not even humans as individuals, but collectives, that are important. So there’s something for me about how we ensure that in those conversations we’re talking not just about individuals, but about different kinds of collectives and communities and different sorts of relationships. I find that piece of the puzzle wonderfully delightful.

Then, in terms of the kinds of moments in the present where we get to do that: look, one of the conversations I find myself coming back to at the moment, and I don’t quite know what to do with, is about how we right-size the environmental footprint of all this technology we’re talking about. I know that doesn’t sound cool, but it does feel like a critical conversation we need to have. I’ve been struck, as the stuff around generative AI has unfolded over the last 10 months, by how much more we have talked about the energy footprint of these systems than we ever have in the past. I take that as a good thing: that we know, as we discuss these crazy computational budgets, that it’s not just that the technology itself is amazing, it’s that its use of energy is also remarkable, and not necessarily in a good way. There’s something for me about how we have a conversation about our futures with these computational objects that is about right-sizing both the role of the computation and what we need to do to make that real.

Whether that’s about saying we need to push much harder against our impulse to automate everything. We could automate everything, but we don’t have enough electricity or water to make that work, so we’re going to be more selective. That feels to me like an interesting conversation to have about the present. Then, you’re right, how do we imagine the role of technology as infinitely more expansive than simply describing every aspect of something so precisely that a machine can be made to simulate it? That feels to me like a terrible answer. It’s funny, because even back in the 50s, those guys were interested in art and music, in whether machines could make art and music, and in how we’d think about that. I’m interested in how we imagine not just the computation, but the relationship between us and the computation. How do we imagine that as being generative, as being not just about efficiencies and productivity but about other kinds of discourses? Amplifying cognition is a much lovelier way of framing a future than saying we should have higher productivity, because that always feels to me like a not-good answer.

There’s something about celebrating all the things that technology has delivered for us that we aren’t so good at leaning into, whether that’s cognition or creativity, or things that are a little less heavy. I seem to get asked a lot: what’s the most important technology of the 20th century? I used to think to myself, I don’t know; electricity is probably the right answer, and I’m sure the internet was what I was supposed to say. But I inevitably ended up saying things like television or elastic. Elastic stopped our pants from falling down, and television gave us something to do and something to enjoy. I don’t want to lose, in the stories we tell about the future, some of the little pleasures in life that aren’t necessarily about things as heavy as cognition or creativity, but are sometimes just about being able to read a trashy novel, or hang around with your friends and get nothing done. Those things feel important too.

Ross: Absolutely! Your work is really important at the moment in providing a massively bigger frame than most people are taking around technologies and how they’re used. We tend to get lost in the present; that’s just the nature of who we are, as individuals and as a society. The breadth of the perspective in your framing of cybernetics, and in what you’re doing, is fantastic. I want to delve more into it, and hopefully many others will as well. Where can people go to find out more about your work and the school?

Genevieve: Excellent. They can find us on the internet the usual way. If you type Cybernetics and ANU, we will pop up. I suspect if you type cybernetics into Google at the moment…

Ross: It does actually.

Genevieve: We might still be the first thing that turns up. We have a newsletter you can subscribe to. We refresh the content on our website pretty often, and you can see our various projects there. We’re just going through a cycle of one-day short courses; we’re going to teach the first one tomorrow. By the time this launches, we will have already taught it, but you can sign up for new ones on that same website. Stay tuned, because as we go into 2024, she says, making the face, “Oh God! We’re going into 2024,” I’m sure we’re going to be even better about sharing our work and starting to investigate other ways of showing what we do. As is always the case, I don’t want to have these conversations in a vacuum with myself. If people think what’s there is interesting, feel free to reach out to us. You can also find us on socials these days; Instagram and LinkedIn are your best bets. We have abandoned X, which is a very strange thing for someone who loved Twitter deeply for many, many years, but as a school, at least, we have found it to be a less productive site for us moving forward.

Ross: Fair enough. Thank you very much for your time and your insights, Genevieve. We love the work you’re doing.

Genevieve: Thank you, Ross. We’re very happy. It’s nice to get to see you.

