“AI is going to change humanity into possibly a new species; we could call it a new form of humanity, which is different from what we have today.”
– Pedro Uria-Recio
About Pedro Uria-Recio
Pedro Uria-Recio is a highly experienced analytics and AI executive. He was until recently the Chief Analytics and AI Officer at True Corporation, Thailand’s leading telecom company, and is about to announce his next position. He is also the author of the recently launched book Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity. He was previously a consultant at McKinsey and is on the Forbes Tech Council.
Websites:
LinkedIn: www.linkedin.com/in/uriarecio
Medium: @uriarecio
YouTube: @uriarecio
Book: Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity
What you will learn
- Exploring the evolution of AI from past to present
- Discussing the concept of human-AI interlacing
- Examining advancements in brain-computer interfaces
- Understanding AI’s role in future education systems
- Highlighting the importance of adaptability and critical thinking
- Predicting the long-term impacts of AI on humanity
- Emphasizing the need for an entrepreneurial mindset in an AI-driven world
Episode Resources
- Artificial intelligence (AI)
- Generative AI
- Large language models
- OpenAI
- Artificial General Intelligence (AGI)
- Brain-computer interfaces (BCIs)
- Neuralink
- Elon Musk
- Blue Brain Project
- Mind emulation
- GitHub Copilot
- Prompt engineering
Book
Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity
Transcript
Ross Dawson: It’s wonderful to have you on the show, Pedro.
Pedro Uria-Recio: Wonderful. Thank you, Ross. Thank you very much for inviting me. It's a pleasure to be here with you.
Ross: So you’ve got a book, Machines of Tomorrow, which I think has a pretty vast scope, in terms of humanity and machines and where that might go on a pretty grand scale. But one of the central themes there is how humans and AI will be interlaced. And we’d love to just hear more about where you see that now, and how you see that evolving over the next years.
Pedro: Wonderful. So in this book, Machines of Tomorrow, what I try to do is explain artificial intelligence from a human-history point of view: from the moment artificial intelligence started to be created, or started to be designed, from those aspirations humans had a very long time ago to create a copy of themselves, a machine like ourselves, through to the present, to 2024, with generative AI and what is happening right now with OpenAI and so on. And also looking into the future: what is going to happen in the next few decades and in the very long term, how it is going to be, and how artificial intelligence is central to human history, particularly in the future.
One of the aspects that is most important, most central, in this book is the concept of interlacing, which means that humans are going to interlace with artificial intelligence; we are going to become more intimately related. At this moment, we have our phones, and we are using our phones for everything: we can call people who are far away, in other places; we use them for our daily life. The fact that the phone is outside your body is almost incidental. In the future, it is going to be inside our bodies. It's going to be inseparable. We're going to be interlaced with artificial intelligence; we're going to be interlaced with electronics.
And there are a lot of technologies being developed at this moment that point in this direction. One of them is all the cyborg technologies, with brain-computer interfaces possibly the most critical one; then we have robotics; then you have applications of AI to medicine and biology: how we can be modified so that we live longer, we don't have cancer, we might see in the dark, et cetera. So one of the aspects, not the only one, but one of the aspects of this book is that AI is going to change humanity into possibly a new species; we could call it a new species, a new form of humanity, which is different from what we have today. And that will happen in the long term. It is difficult to know where and how.
Ross: So let's start in the present. Of course, there are many phases to this, and I'm interested in looking first of all at the next year or two. In some ways we're already interlaced: a lot of people are using these generative AI tools in particular as part of, embedded into, their thinking processes. They're already arguably interlaced into their thinking and ways of working. So let's start with the next year or two: what do you think are the next steps? And then maybe the next two to five years: what are the next technologies, and how do you see those panning out, whether that be brain-computer interfaces or something else?
Pedro: It is difficult to predict what is going to happen in the short term and the long term; I tried to escape from giving dates in the book. But there are things that probably are going to happen in the shorter term. Let me cover a few of them, maybe a laundry list. One of the things that is going to happen in the short term is that large language models are going to become more intelligent. These are some of the steps that a lot of researchers and startups are taking toward creating artificial general intelligence, because that's clearly the direction in which we are going: creating intelligence that is more intelligent than large language models.
Large language models are not that intelligent, right? One critical aspect, which is counterintuitive, is providing these large language models with logic. Large language models cannot really think; even if you ask ChatGPT a simple mathematical or logic question, it is quite likely to give you the wrong answer. How can we make large language models split a complex problem into small pieces, solve each one of those pieces first, and then put those results together so that they get to the bigger answer? This is the same way we work, right? This is how a programmer writes code. This is how a writer writes a book. This is how an engineer solves a problem.
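(As an illustration of the split-solve-recombine pattern Pedro describes, here is a minimal sketch in Python. The `ask_llm` helper is hypothetical, a stand-in for whatever LLM API is used; only the decomposition structure is the point, not any particular provider.)

```python
# Minimal sketch of the split-solve-recombine pattern for LLM problem solving.
# `ask_llm` is a hypothetical helper standing in for any chat-completion call.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this to your LLM provider of choice.")

def solve_with_decomposition(problem: str) -> str:
    # 1. Ask the model to break the problem into smaller sub-problems.
    plan = ask_llm(
        "Break this problem into 3-5 smaller sub-problems, one per line:\n"
        + problem
    )
    sub_problems = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Solve each sub-problem on its own.
    partial_answers = [
        ask_llm(f"Solve this sub-problem step by step:\n{sub}")
        for sub in sub_problems
    ]

    # 3. Recombine the partial results into one final answer.
    return ask_llm(
        f"Original problem: {problem}\n"
        "Combine these partial results into one coherent final answer:\n"
        + "\n---\n".join(partial_answers)
    )
```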
Ross: One of the key points I find, though, is that in current AI developments, as you say, logic has been one of the relative deficiencies of large language models. But what we're seeing just in the last month or so is that the new models coming out are building in logic through the use of multi-agent modeling, chain-of-thought, tree-of-thought, and a whole set of techniques layered on top that are making that logic better.
However, this, in a way, is again looking to replace humans and saying, 'Okay, well, humans are able to think logically and AI can't, so let's build that capability into the AI.' In terms of interlacing, though, we want to be able to build logic systems where humans and AI are interlaced in that process of logic, not simply create chains of structures where the AI itself can fully undertake logic tasks. That may be part of it, but the more interesting part is where humans and AI are together, solving logical problems.
Pedro: This is an important aspect. To the extent that we are able to interlace more, to apply AI to ourselves, to become more intimately connected to AI, not only in use cases but also in our own bodies, in our own existence; to the extent that we are more integrated with it, the more likely we are to be successful, and the more likely we are to survive.
And this is one of the other controversial aspects of the book: the idea that this interlacing might not be a bad thing, that this interlacing might actually be our way to continue evolving and to keep getting better and better. One of the examples I mention in the book, and this is something that is controversial (the book is controversial), is what happened to the Native Americans when the Europeans arrived. Take the example of Mexico: Mexico today has a very large population that is Native American or mixed; over 60% of the population of Mexico is mixed. You go to the United States, and almost nobody is mixed; Native Americans are a very, very small percentage of people. What happened? Well, in Mexico they merged with the Europeans and they survived, and in the United States they didn't merge with the Europeans and they didn't survive. You can judge that from an ethical perspective, from today's point of view, which would be historically anachronistic, and I don't want to get into that, but that's what happened. So if you compare that with AI: yes, the more we integrate with it, the better it will be for us.
Ross: I certainly take that premise; I've written about very similar ideas for a couple of decades. But what we're really interested in today is the mechanics of it. How specifically, now and in the coming years, can we best integrate? Or what are the pathways, at least?
Pedro: There are a number of ideas. And this is something that is just happening; it's not that I promote these. I am just an observer; I'm observing the world and saying what I see. One of the things I try to do in the book is escape from the idea of what is good and what is bad. I don't know what is good and what is bad; I'm just telling you what I see.
But one of the things that is going to be quite relevant is brain-computer interfaces. Brain-computer interfaces are, in effect, a way of connecting a brain, which produces electrical signals, with a computer. And this is something that has been happening since the 90s: people who lost mobility through an accident were able to regain some ability through a brain-computer interface. They were able to move a robotic arm, or move a whole robot, or communicate, almost telepathically, with somebody else through a computer system. There are examples of people who did that in the early 2000s and even in the 90s.
Now, probably one of the most well-known startups working on this is Elon Musk's Neuralink. Neuralink is precisely that: a brain-computer interface. There are three kinds of brain-computer interfaces. Some are very invasive, because you have to put electronics literally inside your brain; others are external; and Neuralink is somewhere in the middle, in which they have to put electrodes in your head, but not inside the brain, rather on the surface of it. Elon Musk has obtained approval for what he has done; he has just started doing tests with humans, and there has been a lot of controversy. I don't know if this particular startup will succeed or not, but the reality is that this is an area of work that will continue.
And then think about this: our brain is limited. One of the reasons is that our head has a certain volume and we cannot make it bigger; our intellectual capacity is limited. Imagine that you could offload a problem to a computer system, a super large computer, and get the solution back from it. Imagine that through a brain-computer interface you could communicate with other people, and you could put the intelligence of multiple people together, working together on a problem. Imagine that you could operate, through a brain-computer interface, another body, or a robotic body. All of this takes us into science fiction, and I have to use a lot of science fiction in the book to tell the story.
However, I am also describing the real science of things that are working today. And one of the things you realize is that science fiction actually influences science a lot. One of the reasons is that a lot of these entrepreneurs and scientists were reading science fiction when they were children. Elon Musk is an example; he has talked about Asimov in many of his interviews. So yes, it's difficult to read the future, and you can make mistakes predicting it. But there is a possibility that these kinds of technologies will continue evolving and will take us to scenarios that we cannot really imagine today.
Ross: I'm most interested in what we can see a clear pathway to today. One of the key points, of course, around invasive brain-computer interfaces is that it will be quite a long time before people who are not disabled choose to have electrodes or other devices in their brain. Noninvasive BCIs, on the other hand, are at the moment getting some quite interesting outcomes. So looking at the short term, what can we do with noninvasive BCIs? What are the practical applications in the next few years, particularly of noninvasive BCIs, for being able to interlace humans and AI…
Pedro: One of the applications that I have seen is communication. People can communicate through electrodes; they can exchange words, at a slow speed, a very, very low speed, but that is something that works. Brain-computer interfaces have been used for that, and it does work. I think the difference between those that are more intrusive and those that are external is the strength of the signal and the noise that you get from the brain. That is what a lot of scientists are working on. I'm not a biologist; I am a chief data officer, and I work on business applications of AI. I wrote a book that takes a much wider scope because I'm interested in this, but that is not my daily work. But yes, telepathy, we could call it telepathy: communicating without speaking is one of the things that has been tested with non-intrusive brain-computer interfaces.
Ross: One of the other things you touch on is education. So, again keeping to the shorter and potentially medium term, what are the ways in which you see humans plus AI in the education context?
Pedro: Education is interesting because, well, I have taught at university, and I have talked to students from roughly 23 to 30, I would say, so undergrads and postgrads. One of the things I have realized is that there are a lot of students whom I ask to do a presentation or to write something, and they do it with AI, and you see clearly that they did it with AI. I mean, you see it clearly because you can feel they don't understand it. And then when they go to the whiteboard to explain it to you, they just read it, and then you say: okay, if you didn't write it, and now you're just reading it, what did you learn by doing this?
So I think AI is a sort of double-edged sword. It has two possibilities: it can be used for good, and it can be used for ill. When it comes to education, of course you can use artificial intelligence to help problem-solve a particular topic. I did that in this book: sometimes I didn't know what to write, and then I asked, 'Hey, how can I say this? What can I call this chapter?' and so on. I think that can add a lot of value. You can use artificial intelligence to simulate environments for science students. For mathematics students, you can use AI to find algorithms that solve a problem much more effectively than a human would design them. I know a friend who works at a startup, and his job is creating algorithms that are just much more efficient than what a mathematician would produce.
The problem is what happens if we use AI in education as a substitute for thinking; that is something you wouldn't want. You don't want AI replacing thinking, coming back to what we discussed a few seconds ago.
Ross: So what are the ways, then? Again, we want to be as specific as possible, whether for students, for educators, or for systems. What are the specific things we can do to make AI more a tool of learning, as opposed to…
Pedro: I'll give you an example about programming that is more in the work environment, in the business environment of a company, and then maybe we can translate it to education, because I think it should work in exactly the same way.
When you look at programmers or data scientists today, until now a lot of them were doing low-level work, designing algorithms and coding parts of the program themselves. They started their career as a coder, coding things; that's how they started. Then they became a manager, and suddenly they took on the responsibility of supervising another person who was doing the easier parts while they were still coding themselves. They were also responsible for somebody else's work, and they had to supervise it and be critical about it. And then they became a hiring manager with a team of developers, and then they would not code anymore: they had to make sense of what the team coded, make sure it was right, make sure there were no integration problems, define a strategy, all of that.
Now, that is what, in the short term, AI is going to be for a lot of programmers, and not only programmers: lawyers, data scientists, accountants too. These AI systems are going to be junior colleagues that you have to supervise. GitHub Copilot, for example, is going to be that programmer creating code, and you have to supervise it: you have to make sure it works, make sure it doesn't have cybersecurity problems, make sure it is efficient, make sure it really does what you want that piece of code to do, without bugs. The AI is going to help you create the code and do all the simple tasks, but you are the one who is thinking, and you are the one who is in charge.
Same thing with a lawyer. I was with a lawyer the other day, and he was telling me, 'AI is taking 70% of my work, and I'm worried about my junior employees, because now I'm not going to need a junior lawyer anymore.' Until now, there was a kind of agreement: you allow these junior lawyers to make some mistakes so that they learn, and one day they can become a partner of the firm. But a lot of that you are not going to need anymore.

Now, when we take this to education, what it means is that for a lot of people who are studying now, the first job is no longer going to be programmer; it's going to be manager. And the employees they are going to have to manage will be copilots and AI agents collaborating with each other on software programs. One is going to be building this module, another is going to be building that module, another is going to handle cybersecurity, and you are going to have, say, ten AI agents, or hundreds of AI agents, all of them working on a software project. But some human has to supervise it, and that's what you're going to be when you get out of university. And this is coming very fast, not like before; I'm not going to tell you ten years, it's going to be very fast.

So I believe that critical thinking is something that should not be outsourced to AI, or to anybody else for that matter; you should make your own decisions. Critical thinking is going to be very, very important, and I hope the education system will promote it and not destroy it. Because otherwise, if you get into a dystopian view (the book covers dystopian views as well), a dystopian world in which humans are not responsible for economic output, consider that education has always been focused on making people ready for production, ready from an economic standpoint, so that the economy can keep running. If humans are no longer in charge of production, no longer necessary for production, and AI is the only thing necessary for production, then education will lose that purpose. What remains is education as a tool for social conformity; it has always partly been about social conformity, sustaining whatever political system exists. We cannot even imagine what the political systems of the future will be, but education would remain so that people conform to those systems and are integrated with them without creating too much trouble.
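(A minimal sketch of the supervision pattern Pedro describes: several AI agents each draft a module, and a human reviewer stays in charge, accepting or rejecting each piece. The `agent_write_module` call is a hypothetical stand-in for a code-generation agent; the point is the human-in-the-loop structure, not any particular framework.)

```python
# Illustrative human-in-the-loop sketch: AI agents draft modules,
# and a human supervisor reviews and accepts or rejects each one.
# `agent_write_module` is a hypothetical stand-in for an LLM coding agent.

from dataclasses import dataclass

@dataclass
class Draft:
    module: str
    code: str

def agent_write_module(module: str, spec: str) -> Draft:
    # Placeholder for a call to an LLM-based coding agent.
    return Draft(module=module, code=f"# generated code for {module}\n# spec: {spec}\n")

def human_review(draft: Draft) -> bool:
    # The human stays in charge: check correctness, security, efficiency.
    answer = input(f"Accept module '{draft.module}'? [y/n] ")
    return answer.strip().lower() == "y"

def build_project(spec: str, modules: list[str]) -> dict[str, str]:
    accepted: dict[str, str] = {}
    for module in modules:
        draft = agent_write_module(module, spec)
        if human_review(draft):
            accepted[module] = draft.code
        else:
            print(f"Rejected '{draft.module}'; send it back to the agent for revision.")
    return accepted
```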
Ross: Thank you for your points here about how you describe the future of work. It's all by design: we make the choices, and we should be deciding how the future of work unfolds. And I think part of the question is what the specific structures, architectures, and roles for humans are in these multi-agent systems; but, crudely, what you described, being a manager of all of these, is I think a pretty good idea.
So I want to come back in a moment, just to round out, to what specific skills people can develop; I've tried to keep this quite grounded so far. But for a moment, let's go out to the big picture. Part of what you cover in the book is how biology could change, so perhaps discuss the big picture of how human, or other, biology might evolve in a world of AI?
Pedro: I have no idea. I mean, until now evolution has been driven by the vehicle of reproduction: you reproduce, and whoever comes out of that reproduction is a little bit different, and some of them will survive and some of them will not.
Now the thing is that, nowadays, there might be another vehicle, in which modifications are made on yourself. Where that will lead us is impossible to know. One of the technologies that I discuss in the book sounds very science fiction, but it has been done with rats: something called 'mind emulation'. Mind emulation is basically scanning the structure of a part of a brain and reproducing it in a computer. This basically means that your brain would be running outside biology, which is a form of simulating your brain.
But think about it: if this could be done at scale, with enough level of detail, you could have a brain simulated in a computer; you could have a form of immortality. It sounds very strong, but there is a project called the Blue Brain Project in Switzerland where they have tried to do that with rats, and they have been able to simulate small parts of the brains of rats. Obviously not at the level of detail that would allow you to keep a consciousness, or to keep all the magic of the brain, but it's an area of research, and, believe it or not, there are researchers doing that.
Ross: So let's round out. In this world where humans and AI will be interlaced, and where we will be able to change who we are, what are the skills we need to develop to prosper? And how do we develop those skills?
Pedro: So we go back to education, because what you learn when you're a child, in the first years of your life, really defines who you will be in the future. We talked about problem-solving and analytical skills; I think they are still super important. A lot of people say they are not, but I believe they are. The second one is adaptability, and the third one is entrepreneurship.

Adaptability is already important, but in the next ten years it is going to become even more important. AI is automating tasks, and those tasks are jobs. So if you are starting your career doing one task, for example accounting, and at some point AI automates all of that, you don't need accountants anymore; you just need a Chief Financial Officer. Well, the person who went into accounting will have to find something else to do, because they're not going to get that job anymore. Maybe they start doing, I don't know, something about human-AI interfaces; imagine that becomes quite big. Right now people are talking about prompt engineering. Wherever there is an opportunity in the market, you find that kind of job, and that lasts for a number of years, and then at some point prompt engineering is no longer required. People talk about prompt engineering quite a lot, as if it is going to be the next big profession, but prompt engineering is not something an intelligent person cannot learn in a weekend, or in a quite short time. So then you have to change your job. And imagine that you have to change jobs every few years: a lot of people will be able to change jobs every few years, but a lot of people will not. And that's the problem, because the people who are not able to change jobs every few years, who don't have this flexibility, will be unemployable, because they're not learning fast enough.
Ross: So how do we move from not being adaptable to being more adaptable? What is that process? What do we do?
Pedro: It's about being curious. It's about learning. It's about having the idea that you have to remain in charge of your own future, and that you are not simply entitled to be taken care of by others. I think that's quite important. It's about always trying to go the extra mile, and it is very difficult. It's a psychological trait, but I think it is quite important.
And in line with this is entrepreneurship. The idea is this: think about all the technological revolutions we have had, the IT revolution, the industrial revolution, the mechanization of agriculture, all of these. They all created more jobs than they destroyed; we have many more jobs now than we had years ago. But there are two reasons why this time may be different. The first is that those things were tools, and what we are discussing now is that AI is the first tool that has the potential to become an equal, which is what we call Artificial General Intelligence. Will that happen or not, I don't know. But to the degree that it happens, the ability of human beings to remain employable decreases; to the degree that it happens, the potential for net job destruction is bigger. The second reason AI might destroy jobs is the speed of change. If the speed of change is so fast that people just cannot adapt, they cannot learn fast enough, they cannot find new jobs fast enough. They start doing accounting, they have to do another thing maybe one or two years later, and then they just get stuck. Some people will not be able to keep doing that year after year.
So these are two reasons why AI in the medium or long term might not lead to a net creation of jobs. Again, we don't know what the future is going to be. In the short term, yes, it is going to create a lot of jobs, many more jobs than it will destroy, but what about the long term? That's why entrepreneurship is important: because in the future a lot of people are going to be working not because they need to but because they want to, and those are the entrepreneurs. Those are the people who say, 'Hey, you know, I have this vision in mind. This is what I want to do with my life, and I'm going to do it. And yes, it's possible that an AI could do it, I don't know, but I want to do it myself.'
There will be people in the future who own assets and own factories, and I think they will say, 'Yes, I know that an AI could be doing my job of managing all my assets, all my things, but I want to do it myself, because it's mine.' That idea of doing things because we want to, which I call entrepreneurship in a way, I think is going to become very, very important. And again, all three of these things are mindsets. The first, critical thinking, is: I want to think for myself because I want to. The second, adaptability, is: I will find my own way, I will adapt. And the third is: I do things because I want to. I think that is quite important for remaining relevant.
And then you can go to the tactics, and the tactics will change every couple of years. I have been doing this for a while. We're talking about my book, but my job is Chief Data Officer, and I work with developers, data scientists, engineers, all these kinds of people. I think these are the skills, and these are certainly the traits of the people I hire: I want people who have these mindsets, not so much skills, but mindsets.
Ross: Absolutely. So where can people find out more about your work, Pedro?
Pedro: The book is available on Amazon. It is called Machines of Tomorrow. You can find it there in paperback and in electronic format, or you can put my name into a search engine and you will find a lot of things. The website is machinesoftomorrow.ai, and if you want to contact me, it's IamPedro@machinesoftomorrow.ai, so quite the same. If you Google my name, you will find all of this. It's quite easy.
Ross: Excellent. All right. Thank you for your time and your insights.
Pedro: Thank you very, very much, Ross. It has been a pleasure. I hope this is useful and interesting to the people who are listening. In some aspects I try to go very long-term, and I try to go deep into the philosophy of all this, but the book is full of details about what is happening right now, examples of current startups and current scientists that sustain these ideas. Will the future happen exactly like this? It certainly will not; it is impossible to predict the future. But I think it's a very plausible avenue for our future.
Ross: Great. Thank you, Pedro.
Pedro: Thank you very much.