August 15, 2023

Peter Xing on transhumanism, brain-computer interfaces, cognitive offloading, and AI agents (AC Ep6)

“As we move on and accelerate technology, we’re going to have superhuman abilities, because it’ll go from helping you become able to giving you super abilities.”

– Peter Xing

About Peter Xing

Peter is a keynote speaker and writer on transhumanism, and co-founder of Transhumanism Australia and Transhuman Coin. He is on the Singularity University Expert Faculty on transhumanism and emerging technologies, and previously worked on global emerging technology initiatives at KPMG and Deloitte including generative AI, web3 and extended reality.

Website: Transhumanism.com.au

LinkedIn: Peter Xing

Twitter: @peterxing

Calendly: peterxing

 

What you will learn

  • What Transhumanism is (02:40)
  • The evolution of human-machine integration (03:53)
  • Amplifying human potential with AI-enhanced intelligence (06:48)
  • Ethical considerations in amplifying human potential (10:50)
  • Leveraging AI tools for enhanced productivity (13:01)
  • AI-powered automation (17:05)
  • Rapid productization of AutoGPT and future possibilities (18:21)
  • Accessibility of brain-computer interfaces (21:08)
  • Dream learning and its cumulative benefits (24:01)
  • Advancements in brain stimulation (25:24)
  • How to think better with technology (31:35)

Episode Resources

Transcript

Ross Dawson: Peter, amazing to have you on the show. 

Peter Xing: Hi Ross, good to see you. How have you been?

Ross: Wonderful. Thanks. You’re into transhumanism. What does that mean? 

Peter: I know it’s an “ism” so it’s scary, but transhumanism is a global movement of millions of people who want to use science and technology to transcend human limitations. Whether it’s enhancing their intelligence (we’re already using ChatGPT to get a bit smarter out there) or extending healthy human lifespans. The narrative is that you’re just meant to grow up, have some kids, and then die off and pass away. We’re starting to challenge that with science today to see how we can reverse the aging process itself. And finally, it’s about super well-being. How do we not just live long and be smart, but also be fulfilled in Maslow’s hierarchy of needs, and make sure that technology is available for everyone?

Ross: When we go beyond ourselves, we don’t want to dig down into the body-mind divide, which doesn’t really exist, but part of it is around augmentation of the body and part of it is around augmentation of the mind. So what are some of the major domains where we are currently working on transcending the mind, as things stand today?

Peter: The mainstream appeal of it is that we’re saying you’re already transhuman. Ever since we invented technology, we started to augment our abilities, whether it’s fire as an invention, clothing, or electronics; we’re seeing people walking around almost like zombies with their smartphones, craning their necks, which is great for physiotherapists, especially mine. But as we evolve, that technology is going to get closer and closer and more integrated with our bodies. The wearable devices that we have, like AirPods, are starting to augment our ability to interact with technology and with each other. Smart devices like AR glasses are going to enhance our intelligence by bringing up information that isn’t readily available in our visual field.

Eventually, whether it’s through smart contact lenses or, eventually, brain-computer interfaces, this is really the next paradigm of how intelligence will evolve with us. For us, the whole scaremongering about AI doom says that as we approach this concept of the technological singularity, artificial general intelligence is going to surpass human intelligence in every single form. How do we as humans stay relevant in that era? Brain-computer interfaces, like what Elon Musk is doing with Neuralink, and what Synchron is doing with Tom Oxley here in Australia, are how we can bridge that gap, initially by helping people with disabilities who might need it, say, as a neural shunt from their brains to their various body parts.

It has actually helped quadriplegics walk again. We saw recently in the Netherlands that someone who had a bike accident 20 years ago was able to walk again through these brain-computer interfaces. This technology is here now, helping hundreds and thousands of people every day. Yet, as we move on and accelerate this technology, we’re going to have superhuman abilities, because it’ll go from helping you become able to giving you super abilities. Imagine having access to infinite computing in terms of memory and the power of the cloud, but also access to an AI agent that helps with your cognition as well.

Ross: One of the ideas you touched on there is essentially the extended mind: we’ve got our mind inside our skull, and then we’re able to interface with other technologies and information through everything from smart glasses to brain-computer interfaces, and just reading. I’d just like to frame that a little: what are the complements to our brains and our thinking? How do we think about what the brain does well, and complement that with what is external that we can interface with in richer and richer ways?

Peter: The brain is such an amazing invention of evolution. If you think about it, this is how we became the dominant species on the planet: the neocortex evolved, natural selection created it, and we were able to use the tools that we have. Enhancing that with AI would actually supercharge it. If you think about what we typically teach in school, it’s a lot of learning, memorizing a lot of things, mathematics, trying to get to the answers in a structured way. But as we offload all of that cognitive load to calculators and spreadsheets and AI agents, we can start focusing on higher and higher-order thinking: How do we work with each other in terms of emotional intelligence? How do we frame the problem-solving questions around what to ask the AI agents, as opposed to trying to work it all out ourselves?

We’re already doing that, say, offloading parts of navigation to Google Maps. I’m not sure if you’ve seen some of those people wandering around the streets when they don’t have GPS access; they just end up circling round and round. We’ve already delegated that task to artificial intelligence. We just need better connectivity to continue down that path. What that means is, once we have access to brain-computer interfaces, we’ll be able to upload a lot of what we need to memorize and we can focus on asking the right questions. We can offload a lot of the menial tasks, in terms of what we can do digitally as well as what we put into, say, robots. Self-driving cars are going to be an example where removing the menial task of navigating roads, trying not to fall asleep, and not being distracted by the messages on your phone, while being able to go 24/7 on electric power and charge along the way, is going to really enhance what it means to be human.

For the poor truck drivers, it’s going to be challenging to reskill. That’s going to be a big disruption, just like the Industrial Revolution. Now’s the time to say, okay, the last mile of intelligence is starting to appear. GPT-4 only came out a month ago, and we’ve already become so accustomed to it. Almost 1.4 billion people are using ChatGPT every day, which means that we can adapt, and it means that with the next evolution of technology, whether it’s GPT-5 or all those open-source models out there, we’re going to have to really stay relevant. Otherwise, we’ll find less and less of a niche in terms of what humans are still useful for. BCIs will be that final frontier.

Ross: The idea of cognitive offloading is compelling. As you’ve said, we’ve got calculators, we have spreadsheets. There was a great landmark study, over a dozen years ago, which showed that people remembered things differently if they knew they could look them up on Google later. Essentially, the brain realizes that it doesn’t need to store something if it knows it can find it again. Memory, as you just pointed out, is really an important part of this. The East Asian learning cultures in Japan, China, and Korea are famous for cramming lots of memorization into children’s heads, where that effort could probably be spent teaching them to think a little bit better.

We’ve got memorization. The navigation thing is an interesting one as well. But it really comes down to that choice: if we can offload cognition, what do we choose to offload? And how do we combine that with what we are still good at? We consider what’s important; all right, that’s something humans are good at. We know, hopefully, a little bit about the difference between right and wrong. We can, as you say, ask the right questions. What are some of the other ways in which we can distinguish the boundaries of what we might choose to offload to external cognition?

Peter: Sci-fi has definitely played this out, say, in the Black Mirror episode where basically they had the implants and could choose which memories to eliminate; every moment of your life is recorded. Then this person decides to eliminate the memory of their loved one after a breakup. These are the things they choose to offload, in terms of a bad memory. What does that mean for being human? Are we just going to keep pumping in positive experiences only, and end up with that hedonistic view of the future where we’re not really experiencing everything, we’re just replaying the same things that keep our happiness factors going? These are the things that really challenge what it means to be human in the future.

For us to be able to embrace the ethics around how we maintain our identity, that experience can still be a choice, but it has to sit within guidelines about what could have a long-term impact on the human experience. Having the choice of not being disabled, that’s there. Having the choice of not aging should also be a human right, just as the right to die is, in terms of the elimination of suffering; the opposite should also apply to those that want to stay healthy. For those that choose to maximize their intelligence, the ones that choose to offload their cognitive load, the choice should be there. For education, it should be there as well, making sure it’s used fairly and ethically.

Ross: I want to dig into the brain-computer interfaces in a minute, but before that: one of the things we’re going to be interacting with the most, if our brains are able to interface directly with technology, is AI. We have human intelligence, we have artificial intelligence. I just want to start with what you are doing now. What are your practices for using GPT, or the whole array of large language models and other generative AI? How are you using those today to augment your cognition?

Peter: First of all, every time I open a Chrome browser, I’ve got GPT-4 as the home page. That just constantly reminds me that I don’t have to do everything from scratch; I can start with a first cut from our Jarvis equivalent, GPT-4 with the plugins. Even though they’ve pulled some of the plugins, it was so useful to have up-to-date information. After that, it’s integrating with the latest Windows 11 preview, which has Copilot already installed in the operating system. This is truly Cortana coming to life, because it can not only access the internet but also the internals of the Windows operating system itself. If you can’t find a setting, or can’t be bothered digging for it, you can just ask it to do it for you.

It’s like a genie in a bottle right there; infinite wishes, hopefully, is what OpenAI will be able to continue to provide. But I’m also looking at localized open-source large language models. I’ve got one of the 4090s because I just need the CUDA power and the VRAM. It’s still pretty inaccessible to most at the moment, but now’s the time to run these things yourself, especially if you want to keep your data private and you don’t want to keep paying for tokens and additional API calls when you’re running these AI systems. That also enables me to run localized models with Stable Diffusion to create generative AI that helps with creativity: populating some of the posters and making emails look a little bit prettier and more engaging. That’s always a great start.

You can even do video content, if you want to make it visual with actual motion happening through it, to get that engagement as well. The Deforum plugin for Stable Diffusion has been an awesome tool for me to play around with. Obviously, you have Midjourney and other things, which are great as a quick win, but again, that’s a subscription cost you’ll have. I see running local models as the future for enterprises.
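For readers who want to try the kind of local image generation Peter describes, here is a minimal sketch using the Hugging Face diffusers library. The checkpoint name and prompt are illustrative assumptions rather than Peter’s actual configuration, and the Deforum video workflow he mentions is a separate Stable Diffusion extension not shown here.

```python
# Minimal sketch: run Stable Diffusion locally with the diffusers library.
# The checkpoint and prompt are examples, not the setup described in the episode.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example public checkpoint
    torch_dtype=torch.float16,          # half precision to fit consumer VRAM
)
pipe = pipe.to("cuda")                  # run on a local GPU, e.g. an RTX 4090

image = pipe(
    "a poster illustration of a transhuman future, clean vector style",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("poster.png")
```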

Ross: What is the best local large language model you’ve implemented so far? 

Peter: I’m using Falcon 40B at the moment. You can probably get it for a few gigabytes, but when you run it on a 4090, there’s still a bit of a lag because you’re literally loading that entire model. You can use things like Vicuna, which is a lot lighter; it still takes three seconds to run, but it’s usable. What’s interesting is that there’s such an open developer ecosystem out there that people are starting to have these AI agents interact with each other. It’s creating this complexity that emerges from simple rules. But it’s AI agents, right? These GPT-4-level agents talking to each other are starting to create an infinite universe of, say, Xena: Warrior Princess crossing over into Marvel and DC, and something like Open Characters is a GitHub project we’re playing around with. It’s pretty exciting to see how these virtual worlds are appearing in front of our eyes right now.
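As a rough illustration of running an open-weights model like Falcon 40B on a single consumer GPU, the sketch below uses the Hugging Face transformers library with 4-bit quantization via bitsandbytes. Peter does not describe his exact configuration, so treat the model ID and loading options as assumptions about one plausible setup.

```python
# Rough sketch: load an open-source LLM locally with transformers.
# 4-bit quantization (bitsandbytes) is an assumption about how a 40B model
# could be squeezed towards a 24 GB card; it is not the speaker's stated setup.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "tiiuae/falcon-40b-instruct"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # spread layers across GPU/CPU as needed
    load_in_4bit=True,       # quantize to reduce VRAM (requires bitsandbytes)
    trust_remote_code=True,  # Falcon shipped custom modelling code at release
)

generate = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generate("What is transhumanism?", max_new_tokens=120)[0]["generated_text"])
```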

Ross: I was in San Francisco recently and lucked out getting into this multi-agent simulations event, and among the people there was Will Wright, the creator of The Sims, so obviously that was a kind of early generative simulation. The whole event was about multi-agent simulations where the agents are primarily AI-generated, but which could also have some humans in there as well, creating these multi-agent worlds; so essentially the premise that we could have intelligence that is not just a single one, but one which is comprised of a million interacting AIs or agents, out of which collective intelligence emerges if you get the right kinds of interaction.

Peter: It’s pretty amazing. When AutoGPT first came out, it was mind-blowing, because it had the integration with ElevenLabs, so it vocalized the tasks as it set them up. All it asks is for you to give it a goal. Then it sets out and spins up new agents to fulfill that goal through sub-tasks, and it just continues to run; if you give it the ability to run continuously, it will never end, it’ll just crank up your OpenAI credits. But you can run it locally, which is really interesting. Now it has been productized by some private companies.

One of them is called HyperWrite, and it’s just a plug-in you can install in Chrome. They recently released a personal assistant feature. It’s an alpha 0.01, but basically you could tell it to, say, go on LinkedIn, find the popular generative AI posts, and leave a comment, so that you get engagement in the AI community. It understands that goal, looks at the website you’re on, scrapes through it, scrolls down, clicks on various posts, and actually writes the comment itself. It was quite freaky when it actually made the comment and posted it.

Ross: In your name?

Peter: Yes, under my account. The fact that that technology has been productized so quickly, and that AutoGPT now also has plugins, so you can plug it into Twitter and all the other socials, and you can give it a MetaMask account so it has a crypto wallet, means these AI agents might actually start generating an income stream, which they can then use as a budget to allocate resources to, I guess, enhancing their own capabilities. It’s not very successful at the moment, maybe about a 30% success rate at completing a particular goal, but these are early days, and it’s moving quite quickly.
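The AutoGPT pattern Peter describes (give the system a goal, let an LLM decompose it into sub-tasks, execute them one at a time, and stop at a budget cap so it cannot bill forever) can be sketched in a few lines. This is an illustrative toy loop using the 2023-era OpenAI chat API, not AutoGPT’s actual implementation; the goal text and step cap are arbitrary examples.

```python
# Toy sketch of a goal-driven agent loop in the style of AutoGPT / BabyAGI.
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

def llm(prompt: str) -> str:
    """One chat completion call (2023-era openai-python API)."""
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

goal = "Write and publish a short explainer on transhumanism"   # example goal
tasks = ["Plan the first step towards the goal"]
results, max_steps = [], 10   # hard cap so the loop cannot run (and bill) forever

for step in range(max_steps):
    if not tasks:
        break
    task = tasks.pop(0)
    result = llm(f"Goal: {goal}\nTask: {task}\nComplete the task.")
    results.append((task, result))
    # Ask the model for follow-up sub-tasks based on what was just done.
    new_tasks = llm(
        f"Goal: {goal}\nLast task: {task}\nResult: {result}\n"
        "List any remaining tasks, one per line, or reply DONE."
    )
    if new_tasks.strip().upper() == "DONE":
        break
    tasks.extend(t.strip() for t in new_tasks.splitlines() if t.strip())
```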

Ross: Are there any specific tasks, outcomes, or projects where you have applied agent-based GPT, AutoGPT, BabyAGI, or anything similar?

Peter: Yeah, BabyAGI was really great, the simplicity of it and being able to combine it with LangChain. I deployed one where it just does outreach for transhumanism, connecting it to the Transhumanism AU Twitter account and letting it run amok. I just gave it a dedicated budget and maxed out the credits, so it’ll stop eventually. It worked pretty well: it connected to the Twitter account, posted, did a bit of research and Googled what transhumanism is, delegated the task of writing an article about it, tweeted that entire article, and also left a dedicated crypto wallet as the donation button. That’s just to say that some simple marketing tasks can already be automated through this process.
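For a flavour of the BabyAGI-plus-LangChain style of deployment Peter mentions, here is a hedged sketch of a LangChain agent (2023-era API) wired to a single tool. The post_tweet function is a hypothetical stand-in; a real outreach bot would authenticate against the Twitter/X API, and Peter’s actual configuration is not shown here.

```python
# Sketch: a LangChain agent that can call one tool. The tool is a placeholder.
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool, AgentType

def post_tweet(text: str) -> str:
    """Hypothetical stand-in: a real deployment would call the Twitter/X API.
    Here it just echoes what would be posted."""
    print(f"[would tweet] {text}")
    return "tweet posted"

tools = [
    Tool(
        name="PostTweet",
        func=post_tweet,
        description="Post a short message (under 280 characters) to Twitter.",
    ),
]

llm = OpenAI(temperature=0.7)  # reads OPENAI_API_KEY from the environment
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("Write a one-sentence explanation of transhumanism and post it as a tweet.")
```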

Ross: We’ve got incredible capabilities from AI. As we were alluding to, we’re now beginning to have brain-computer interfaces of various kinds. This has been a long-time interest of mine as well. It’s been evident to me for a long time that this is the next frontier: if and when humans become more than human, it will be through the direct interface of our cognition with cognitive technologies. However you want to take it: you could do a quick overview of the brain-computer interface space, but also just tell us what’s most exciting to you at the moment.

Peter: What’s really exciting is the accessibility of it now. There are two types of brain-computer interfaces, generally. There are the invasive ones, which you may know from the Neuralink videos, where they plug it into a pig, or a monkey that’s playing Pong and getting rewarded with banana smoothies. It’s really good at playing Pong now, so I probably wouldn’t want to challenge it. Then there are the noninvasive ones. These came out over a decade ago. Companies like Emotiv, originally based in Australia, went to the States to raise more capital, and the fact that you can wear one of these just like a pair of glasses, with sensor nodes measuring your EEG brainwaves, is a way to get some simple signals, but it’s something that’s practical and useful.

There’s a particular signal called the P300. That’s basically the neurons firing when you recognize something or someone, or when something you’re expecting happens. That particular measure can do simple classifications, like whether someone is lying, or having a panic attack or an adverse reaction, so there are a lot of use cases right now. What’s interesting is that we talked to Dr. Avinash Singh at the University of Technology Sydney, who is about to release a paper. With simple EEG devices like that, you can do a pretty good job because of the convergence with generative AI: you take that signal, fine-tune against people who are looking at images while their brainwaves are measured through EEG, and then apply that to generative models for new people with those brainwaves. That means you can visualize what someone is looking at, such as an apple or a scooter or a car or a tree, from a simple noninvasive device. Those things are starting to come together, and with things like Stable Diffusion, stitching that together as a time series will play it back as a video. If you’re wearing this while you’re asleep and having dreams, this is a way of getting a pretty rough sketch of what your dreams might look like. That’s one of the killer consumer apps in brain-computer interfaces today.
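To make the P300 idea concrete, here is a small, self-contained sketch of the classic approach: band-pass filter stimulus-locked EEG epochs and train a linear classifier to separate target from non-target trials. The data below is synthetic (a fabricated deflection around 300 ms), so the numbers mean nothing; it only illustrates the shape of the pipeline, not Dr. Singh’s work or the generative image-reconstruction method Peter describes.

```python
# Sketch of a classic P300 classification pipeline on synthetic EEG epochs.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 256                                       # sampling rate in Hz
n_trials, n_channels, n_samples = 200, 8, fs   # 1-second epochs, 8 electrodes

rng = np.random.default_rng(0)
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)          # 1 = target stimulus shown

# Fake a P300-like positive deflection ~300 ms after stimulus on target trials.
t = np.arange(n_samples) / fs
p300 = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
X[y == 1] += 2.0 * p300

# Band-pass 1-12 Hz, where most of the P300 energy sits.
b, a = butter(4, [1, 12], btype="bandpass", fs=fs)
X = filtfilt(b, a, X, axis=-1)

# Flatten channels x time into feature vectors and classify with LDA.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```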

Ross: Yes, absolutely. I still think that if we look at the unexplained questions, the things that science doesn’t understand, one of the biggest on the list for me is why we dream. I just heard a leading scientist propose another of the many possibilities, and I don’t think it’s very solid; there are just so many speculations. One of the reasons we sleep is because we need REM sleep; we need to dream to function and to be well. If we can interface better with our dreams, that could potentially amplify our identity and our personality. That’s a pretty extraordinary domain to be exploring.

Peter: Yeah, it’s very Inception. If you start training yourself in your dreams, you’re getting an extra eight hours; if you accumulate that over 10,000 hours, you’ll pretty much master any particular subject, so yes, very Matrix. As you said, as we get into invasive brain-computer interfaces, that really takes it to the next level. On the fidelity of those dreams, you’re already seeing with fMRI that you can get pretty good image and video reconstructions. But then there are things that can read from and write to the brain: when we’re using AI to decipher the brainwaves, we understand the brain better, and that positive feedback loop happens. Writing to the brain would give you that extra skill set. Then who wants to spend all that time learning? Just download the memory.

Ross: I’ve got to say, I’m pretty skeptical about a lot of what Elon says about Neuralink. He waves his hands in the air about what it will do and says we’re going to be able to learn things, but I’m not sure that’s really clear from what they’re currently doing; it’s more like speculation. Are you aware of anything substantive that could help us actually write to the mind, as opposed to simply being able to use our brain to control external environments?

Peter: It’s very blunt at the moment. If you think about tDCS, stimulation of the brain… people that have, say, Alzheimer’s could actually start to see some improvement through that blunt instrument, and they’ve been doing this with the Utah arrays already. The question is how you miniaturize that and get more and more touch points, and have touch points into the various regions of the brain. Of course, there’s so much still to understand about the brain, which is why it’s an interesting pursuit, even if it’s easy to over-promise. But the whole engineering feat of creating robots that can do the surgeries, making sure you avoid the blood vessels, with the accuracy to implant more and more of these threads, as in Neuralink, will help us better understand the brain. That cycle of measurement matters, because you can’t really quantify and improve what you can’t measure.

That’s the starting point. You’ll improve your models based on those higher-fidelity signals from the brain, and then those machine learning models will be able to interpret what those signals are doing, just like they’re doing with the animals in the three little pigs demo, watching them walk and making predictions of what the actions will be. That’s getting at the motor neurons. Skill sets, of course, are such an intangible concept, and we have little idea of how our brain works there, but with more data points, hopefully we can get to a rough estimation. Even if it’s just a rough estimate, imagine having a rough version of skills that took people tens of thousands of hours to learn; if you managed to get even 10% of that, I’d still go for it, as long as it’s not damaging or traumatizing. You want to make sure it’s tested well, of course; I definitely wouldn’t be patient zero for that, but it’s something I’d like to see accelerate and help us in our quest for knowledge of how our mind actually works.

Ross: At the moment, we’re still in this gap where anything invasive is only used for people that have physical disabilities, where there’s a very pragmatic application, and it’s still very early on. For the noninvasive, it’s still basically based on EEG, which gives us some ability to control external environments. There is fMRI, of course, except for a couple of challenges with that: it’s a very expensive, very big machine you have to stick yourself in, and there are some time lags with it as well. Have you played around with any of the current EEG or similar devices? And what’s your view on what’s good now in terms of being able to begin to play with some of these interfaces?

Peter: It’s quirky, but you can do some basic things, like the BB-8 demo many years ago with IBM, where they said you could use an EEG headset almost like the Force. The Force is real, it’s called Wi-Fi: your brain controls what those signals are doing, and you’re able to get this little BB-8 robot to roll across the ground by training it on your sequence of forward, left, back. That’s the basic stuff. They did it with drones as well, so you’re like Magneto, controlling a swarm of drones with your mind. Just make sure you’re not dealing with the wrong person there, or making them angry, or they’ll deploy that on you. But that’s the basic thing you can do right now on the directional side.

We’ve also seen people playing games like Elden Ring this way, where you have to control various buttons on the game controller to play. It’s a nice quirk, but at the same time, how is this going to be useful? We’re going to start seeing these EEG headsets integrate with VR and AR devices so that it really improves how we interact with these immersive worlds. Of course, the Apple Vision Pro is already a semi-brain-computer interface, because it looks at your pupil dilation: if you’re expecting something to click, your pupils dilate a little, and that helps with the latency of the controls. Also, Snap bought a company called Nextmind. Nextmind is an EEG device worn at the back of your head, and Snap is really pivoting to become a camera company that’s going to come out with glasses, so you’re going to see these EEG devices come out to let you interact with your mind through that P300 measurement.

You’d be able to, say, select things, take a photo, take snaps, change the filter, and make it a lot more interactive, so that your hands could be doing something while your mind controls an additional element. Of course, that’s going to go a long way to help people that have disabilities as well. Finally, companies like Meta are already measuring the signals through what they acquired with CTRL-labs; that’s going to improve the fidelity of the controllers, so that a wristband will be able to be that man-in-the-middle measurement of the signals from your brain to your hand. This doesn’t have to be invasive. A lot of things will become more and more natural. But yeah, we’re still seeing the integration of this technology that’s really accessing the brain.

Ross: Yes, absolutely. I wrote a blog post when Snap acquired Nextmind; I thought that was a pretty strong signal. People are going to be wearing these glasses, and now you can integrate a brain-computer interface; there’s a whole ecosystem there that directly plugs into your BCI. What are your tips or suggestions or recommendations for people today? What are the things you could do to amplify your cognition? What are some of the things you do, or some practical steps towards thinking better using all of these amazing technologies?

Peter: I always say the best way to learn is by doing, so embrace these technologies. If you’re not using ChatGPT, start catching up; don’t be afraid, just give it a go. As long as you’re happy with the privacy features; you can turn that browsing feature off as well. You’ve also got a whole raft of different plugins now that are usually free initially. Go check out this HyperWrite thing, go check out tools like Midjourney and Stable Diffusion, and have a go, because you’ll start to see, in your particular area, the problems you face day to day, whether it’s drafting an email, writing an article, or going out to create some images in design tools, and you can put more and more of your focus, more of your mind, on doing that part. Staring at a blank page is scary; these tools help with that part of the journey.

Using these tools in your everyday life is a great step. With the EEG devices, some of them are becoming more and more accessible; some are a couple of hundred dollars, and you can eventually 3D print them as well. Some of the prototypes they’re doing at the human augmentation lab cover a lot of those designs. But yeah, order one off the shelf and play around to see what’s going on up there, and you’ll find lots of cool use cases you can start to apply in controlling the physical things around you. I always say the best party trick for a kid is to be able to show that you are a Jedi, and … will be very impressive, whether as a dad or someone that’s just trying to show it off.

Ross: Yeah, absolutely. It’s all about learning by doing; it’s the only way. Where can people go to find out more about your work and what you’re doing? I think you’ve had a recent transition.

Peter: Yeah, thanks so much. Last year, I left my long-term job at KPMG, where I was a global emerging tech director looking at generative AI, Web3, and the metaverse, to pursue transhumanism full-time. It’s been a labor of love over the last eight years, ever since I heard about the concept while I was at Deloitte, through Singularity University. Damn Ray Kurzweil and his predictions; it was a deep rabbit hole, and I never turned back. Transhumanism.com.au is where you’ll be able to find us and check out some of our upcoming events. We are based at Stone & Chalk in the Sydney Startup Hub, and we’re incubating startups that are really accelerating the transhuman future. Feel free to reach out and book a time on Calendly if you want to hear more.

Ross: Fantastic. Thanks so much for your time and your insights, Peter.

Peter: Thanks so much, Ross. Thanks so much for having me.
