“There’s no point trying to teach ourselves to be better calculators because machines are much better calculators. We should be playing to our strengths and handing over those tasks, many of which we never actually like doing, to the machines that are better at them.”
– Toby Walsh
About Toby Walsh
Toby Walsh is Chief Scientist at UNSW.ai, the University of NSW’s new AI Institute. He is Laureate Fellow and Scientia Professor of Artificial Intelligence at UNSW Sydney. His many honors include winning the prestigious Humboldt Prize, the NSW Premier’s Prize for Excellence in Engineering and ICT, and the ACP Research Excellence award. He appears regularly in global media including ABC, BBC, CNN, NPR, New Scientist, and many others, with a profile piece in The New York Times featuring his prominent work on the Campaign to Stop Killer Robots. He is the author of four books on AI, with his most recent, Faking It: Artificial Intelligence in a Human World, just out.
What you will learn
- Human intelligence vs. artificial intelligence (03:36)
- The alien nature of AI intelligence (08:06)
- The complexity of defining intelligence (09:38)
- ChatGPT and its deceptive design choices (12:27)
- The symbiotic relationship between human creativity and AI capabilities (15:40)
- The role of probabilities in large language models (19:27)
- Growing sophistication and personalization in large language models (21:26)
- Outsourcing human tasks to AI as its roles evolve (23:43)
- Human advantages over AI (25:40)
- Amplifying human strengths and recognizing AI’s differences (29:37)
- Preference for human judgment over machine precision (33:07)
Episode Resources
Books
Faking It: Artificial Intelligence in a Human World by Toby Walsh
Machines Behaving Badly: The Morality of AI by Toby Walsh
2062: The World that AI Made by Toby Walsh
It’s Alive!: Artificial Intelligence from the Logic Piano to Killer Robots by Toby Walsh
Transcript
Ross Dawson: Toby, it is absolutely awesome to have you on the show.
Toby Walsh: It’s good to see you again, Ross.
Ross: Toby, you have just come out with a new book Faking It, which gives some really powerful insights for people into understanding where AI is today. One of the interesting things is that you started the story by questioning whether artificial intelligence was the right term, and then you concluded that it is a good description of what we’re dealing with. How did you come to evolve your thoughts about that?
Toby: Ross, the term mainly comes with a lot of baggage. It was invented by John McCarthy in 1956. It was a pleasure, indeed, to know John McCarthy, and he came up with the name, as far as I can tell, because it was just different from anything else that was in use at the time; it was not cybernetics, which might otherwise have been the name chosen. It’s problematic, and it comes with quite a bit of baggage. It is an invitation for people to make jokes about natural stupidity and other such things. I must admit, I remember in the early days of AI, when I told people I was doing AI, they would often think artificial insemination, and confusion followed. Intelligence is not a very well defined concept itself, so naming something after it is problematic in that sense.
But over time, indeed just in the last couple of years, I’ve come to think it was actually quite a fortuitous, quite a good choice. Because it is about trying to build intelligent machines that replicate the sorts of things that humans do that require intelligence. But that other word that’s there, the one that doesn’t get as much attention from most people, the artificial word, actually has a really important role to play. Artificial intelligence is going to be artificial, quite different from human intelligence. One of the arguments in my book Faking It: AI in a Human World is that we’re going to be increasingly fooled and deceived into thinking it’s like our human intelligence.
That is a natural conceit, because our experience of intelligence is the one that we get when we open our eyes in the morning and start thinking. It is natural for us to suppose that artificial intelligence might be similar to human intelligence. But certainly, the early indications that we’ve got from the limited AI that we’ve been able to build so far are that it has a very different flavor. There are many reasons to suppose it is going to be very different; a bunch of really important characteristics are going to be different. AI works in a different fashion; it hasn’t evolved in the way that human intelligence has.
It has several natural advantages to offer. It’s going to work at electronic speeds, not biological speeds: circuits work at billions of instructions per second, while the brain works at tens or hundreds of operations per second. Our brain is limited by the size of our skull; we can’t have any larger brains and still be born. Artificial intelligence is not limited in that way. We can just connect it up to more and more memory; there are no limitations. It doesn’t have to forget things. Certainly, as you get older, you discover there are more things you’ve forgotten than things you’ve ever remembered. There are several such characteristics, but my suggestion is that it will be different.
Certainly, the way that AI works today seems to have a different flavor. There are some really fascinating examples. Take computer vision systems, an example of AI. There are what are called adversarial examples: ways of fooling computer vision systems, just as there are ways of fooling the human vision system. Optical illusions are a catalog of ways that you can fool the human vision system into seeing things that aren’t there. You can do the same with computer vision systems.
But what’s interesting is that computer vision systems are fooled in completely different ways. I can take a picture of a stop sign and give it to an autonomous car, change just one pixel, and it becomes a go sign. As a human looking at the image, you can’t even see the pixel that has been changed, and yet you’ve fooled the computer vision system. It is clearly seeing, understanding, and perceiving the world in a completely different way than the human vision system does. It’s easy for us to be fooled by that, to think it’s going to work in the same ways, and to think that AI is going to be like human intelligence, whereas actually, I think it’s really important to remember it’s going to be quite artificial.
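To make the idea concrete: single-pixel attacks like the one Toby describes do exist, and a closely related, simpler-to-show technique is the fast gradient sign method (FGSM), which nudges every pixel by an imperceptible amount in the direction that most increases the classifier’s error. Below is a minimal sketch assuming PyTorch; the tiny untrained network and the random “image” are stand-ins for a real vision system, so the point is the mechanism, not the specific result.

```python
# Minimal adversarial-example sketch (FGSM), assuming PyTorch.
# The classifier and the "photo" are toy stand-ins, not a real system.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(          # stand-in classifier with two classes,
    nn.Flatten(),               # e.g. class 0 = "stop", class 1 = "go"
    nn.Linear(3 * 32 * 32, 2),
)

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in photo
label = torch.tensor([0])       # the true class: "stop"

# Compute the gradient of the loss with respect to the pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Shift every pixel by a visually imperceptible epsilon in whichever
# direction increases the loss the most.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

print("prediction before:", model(image).argmax().item())
print("prediction after: ", model(adversarial).argmax().item())
```

Against a trained network, the perturbed image often flips the predicted class even though a human cannot see any difference, which is exactly the divergence from human vision that Toby is pointing to.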
Ross: Yes, I and some others have suggested that AI could stand for alien intelligence: it is, in some ways, intelligence, but it is alien to human intelligence, as you say, a completely different form.
Toby: There is one other really important characteristic to point out as well, because again, it is a source of huge amounts of confusion and huge amounts of fear: the machines, the AI that we’ve built today, have nothing like sentience, nothing like consciousness. In that sense, it is quite alien, because we’re not used to intelligence being disconnected from consciousness. Consciousness is an intimate part of our intelligence, and of intelligence across the animal kingdom as a whole, the other intelligent beings as well. We are not the only conscious, sentient, intelligent life on the planet. But we have now got something, and alien is quite a good way of describing it, which is intelligent but not, as far as we can tell, at least today, sentient in any way at all.
Ross: Yes, this comes back to the definitional issues around intelligence. It is almost misleading to use the term because no one has ever really defined it properly, as far as I can find, or in any way that we can agree on. But the theme of your book is really that we have this, let’s call it, alien intelligence, or different intelligence anyway, but it is pretending to be human intelligence, or people are trying to make it out to be like human intelligence, so that we start, as you say, to be misled into thinking that it is conscious, or thinking that it actually is intelligent when often it isn’t. It is purporting, in many ways, to be something it actually is not.
Toby: Yes, there is a natural conceit. We do that all the time: we project ourselves onto inanimate, non-sentient things, so we do it, unsurprisingly, with AI. But in some cases it is also a more conscious deceit. Indeed, one of the arguments in the book is that, in some sense, it’s an original sin that goes back to the very beginning of the field. The very first person who arguably thought about building thinking machines was the remarkable Alan Turing, who invented computing and wrote what is generally considered to be the very first scientific paper about artificial intelligence, where he talked about what happens if we do build machines that might be intelligent. He proposed what is now known as the Turing test, which he called the imitation game.
Many people have heard of this. He said it’s a bit hard to answer the question of whether a machine is intelligent because, as you said, we don’t know what the word intelligent means. He suggested rephrasing it: if you can’t tell it apart from something that’s intelligent, then, this is Occam’s razor, you might as well suppose it is intelligent. He proposed the idea of the imitation game, now called the Turing test, in which you would sit down and have a conversation with a computer and a person at the other end of a terminal, and if you couldn’t tell which was the computer and which was the person, then you might as well say that the AI was intelligent.
If you actually look at that idea, it’s a game of deceit, where the computer is trying to pass for a human. Interestingly enough, Alan Turing proposed an example set of questions that a judge might ask in performing one of these tests. The questions were all about deceit, about pretending: for example, the computer would pretend to make an arithmetic mistake so you wouldn’t think it was a computer, you’d think it was a human. It would face the kinds of questions that only humans would know the answers to, and pretend accordingly.
Ross: That is interesting. As you say, the Turing test is the reference point, and it is essentially deceit. It becomes quite clear through your book that almost every aspect of AI involves a form of deceit: faking being a person, the way many companies pretend they are using AI but actually have humans doing the work, or faking creativity in various guises. It is a relevant frame.
Toby: Fantastic. A really topical example is ChatGPT. It is a wonderful example. It’s a very impressive tool, remarkably fluent. But there are a couple of design choices that really do deceive. Anyone who has used ChatGPT knows this AI chatbot has captured people’s imagination. When you sit down and use it, you type in a query, you say, write me a poem in the style of Shakespeare about my dog, and it will go off and do quite a good job of that. But first it blinks as though it’s thinking at you, and then it types out the answer, types out the poem, word by word, as though it were a human typing. The reality is that it has the whole answer already; it could just flash it up on the screen. It takes milliseconds to generate; it doesn’t have to write it out slowly like a human. That is a bit of a deceit. It’s a beautiful design choice, which makes you think it’s a bit like a human talking and typing the answer out to me. It is not surprising that people are fooled by this.
Ross: Yes. As you say, there are different ways in which AI is designed to make us think of it as human, to anthropomorphize it.
Toby: Yes.
Ross: One of the very interesting examples you used was an AI invention generator called DABUS. The person who created it said that it essentially was an AI inventor and tried to file patents naming the AI as the inventor. You pointed out that, in fact, the person invented the system, and it was really just an assistant to him. It was a human-plus-AI endeavor, as opposed to something that you could attribute fully to the AI.
Toby: Indeed, yes. It’s a very interesting example. There were court cases brought in the US and in Australia where, briefly, before the initial judgment was overruled, the AI was actually allowed to be named on the patent as the inventor, but that has now been overturned, at least in the US and Australia. Again, we have returned to the position where only humans are allowed to be named as inventors. But the system is an interesting example of how humans can be helped. These are really powerful tools for helping people do things that we initially thought required quite a bit of intelligence. There’s perhaps nothing more emblematic of intelligence than coming up with something that’s patentable: there is a certain bar there, it must be novel, you must have done something truly creative, otherwise you wouldn’t be allowed the patent.
DABUS helped Stephen Thaler, the guy who wrote the program, come up with a couple of ideas for which patents have been filed. One is a fractal light: you turn this light on and it flashes in a fractal way, meaning there is no repetition in it; the frequencies keep on changing. That will attract our attention, obviously, because it is not flashing like a lighthouse or in any rhythmic way; it keeps disturbing our perception of it. It would actually be quite a good way of attracting people’s attention. The other example, interestingly both inventions are fractal, is a fractal container. The idea is that the surface of this container has a fractal dimension to it. If you know something about fractals, that means it has a huge, if truly fractal in fact infinite, surface area. If you want something where you can heat up the contents very easily, then having a large surface area relative to the volume is very useful.
What’s interesting is how AI programs like these are being used by people to help invent stuff. What people do is get the program to define what you might call a design space: a set of ideas and building blocks that you put together. Of course, the great strength of a computer is that it will be very exhaustive and try things in all the possible ways. Our human intuitions might stop us from trying some of the more extreme, unusual ways of putting these things together, but the computer won’t be inhibited in those ways; it puts all these things together in interesting combinations. The problem is that it is a huge, actually infinite, design space. You’ve got to tame it in some way. You’ve got to say, what are the interesting ways of putting things together, and then we come to this ill-defined word, interesting.
This is where there was a synergy between the human and the AI: the question of what’s an interesting, promising direction to follow was outsourced to the human. If I’m trying to build up this idea, is it going to be a fractal container or a fractal light? He pushed it in those two directions. Then it’s like, okay, let us explore a bit more about in what way the light is fractal. He kept deciding which of the many possible combinations of concepts to go off and follow, because it was a hugely branching, in fact infinite, search space to explore. It was playing to the strengths of the computer, this exhaustive ability to put things together irrespective of how silly they might sound, and of the human, who was bringing the judgment and the taste about what might be interesting, what might be a promising direction to follow.
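This generate-then-judge loop is easy to sketch. The building blocks, and the stand-in for human taste, below are hypothetical illustrations of the workflow Toby describes, not DABUS’s actual internals.

```python
# Toy sketch of machine-exhaustive generation plus human judgment.
# Building blocks and the "taste" filter are hypothetical stand-ins.
from itertools import product

properties = ["fractal", "rhythmic", "smooth", "perforated"]
objects = ["light", "container", "antenna", "hinge"]
aspects = ["surface", "flashing pattern", "texture"]

# The computer's strength: enumerate every combination, however silly,
# with none of the inhibitions a human designer would bring.
candidates = [f"a {p} {o} (its {a})"
              for p, o, a in product(properties, objects, aspects)]
print(len(candidates), "raw combinations")

def looks_promising(idea: str) -> bool:
    """Stand-in for the human in the loop: judgment and taste.
    Faked here as a keyword filter; in reality the inventor steered
    the search toward the directions *he* found interesting."""
    return "fractal" in idea and ("light" in idea or "container" in idea)

for idea in filter(looks_promising, candidates):
    print("worth exploring:", idea)
```

The real design space is infinite rather than a few dozen items, which is why the pruning step, the human judgment, carries so much of the weight.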
Ross: This is, in a way, describing combinatorial ideation. Kary Mullis, who won the Nobel Prize in Chemistry for inventing PCR, said that basically there are no new ideas, it is just a matter of combining existing ones in new ways. If your brain is not too strait-laced and goes off in different directions, it can certainly draw some connections; that’s what inventors do, and that is what creative people do. It turns out large language models are very good at this, because the scope of what they’ve been trained on lets them find different things and see how those combine.
Toby: Indeed. One of the important characteristics of large language models, as you quickly discover if you play with them, is that they are stochastic, actually slightly random. They say things that are probable, but they don’t say only the thing that’s most probable, because it would actually be very boring if they did. If you run a large language model a second time, it will say something slightly different.
Again, it is an interesting design choice. There are these probabilities: a large language model is actually computing the probability of the sentence, and there are other ways, perhaps, to finish that sentence. OpenAI, the company behind ChatGPT, chose not to surface those probabilities, which has led to one of the problems we have with large language models: people say they hallucinate, they make stuff up, and they do so in a really confident way, because OpenAI chose not to tell us the probability. The model could have said: I’m 99% certain, this sentence is almost always going to finish this way, this is highly probable, this is likely to be true; as opposed to: for this one, if you run me again, I’ll give you a completely different answer. The output could have been color-coded; indeed, some new large language models are starting to do that, using color cues to indicate which things are absolutely certain and which things are less probable.
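As a toy illustration of what surfacing those probabilities might look like, here is a hand-rolled next-word distribution in plain Python; the words and numbers are invented, but the mechanics, sample from the distribution and then report how confident that choice was, show how low-probability completions could be flagged.

```python
# Toy sketch: a next-word distribution with the probability surfaced.
# The candidate words and their probabilities are invented.
import random

random.seed(42)

# Hypothetical model output for: "The capital of Australia is ..."
next_word_probs = {
    "Canberra": 0.62,
    "Sydney": 0.25,
    "Melbourne": 0.09,
    "Perth": 0.04,
}

# Sample in proportion to probability; this stochasticity is why a
# second run of an LLM can finish the same sentence differently.
words, probs = zip(*next_word_probs.items())
choice = random.choices(words, weights=probs, k=1)[0]

# The design choice Toby describes: show the confidence, so a 0.62
# completion reads differently from a 0.04 one.
p = next_word_probs[choice]
note = "high confidence" if p > 0.5 else "low confidence, may be made up"
print(f"{choice} (p = {p:.2f}, {note})")
```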
Ross: A long time ago, I talked about the idea of a serendipity dial for recommendations: either you want no accidents, or you want lots of happy accidents, lots of randomness in what you’re offered. I suppose that’s analogous to temperature, as it is used in the OpenAI API and other systems, where you can vary how random or otherwise the outputs are, so that can be a choice depending on what you’re trying to achieve.
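Temperature is exactly such a dial: the model’s raw scores (logits) are divided by the temperature before the softmax, so low values concentrate probability on the top token and high values spread it out. A minimal sketch, with made-up logits:

```python
# Minimal sketch of temperature scaling; the logits are made up.
import math

def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    top = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - top) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5, 0.1]  # hypothetical scores for four tokens

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}:", ", ".join(f"{p:.2f}" for p in probs))
# Low T: nearly all the mass on the top token ("no accidents").
# High T: a flatter distribution ("lots of happy accidents").
```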
Toby: Yes. We’re going to see growing sophistication and personalization in large language models. First of all, you can change the style of the way they speak; you can fine-tune them, train them on your emails and text messages, to speak like you. But equally, you can tune whether they’re going to be highly creative, much more stochastic, or much more sober and conservative in what they say. There are going to be dials that we start surfacing that will allow us to make those sorts of choices. In fact, I have predicted that we will end up choosing our large language models on things like their politics. There are lots of choices that are political choices or social choices, and you can train a large language model to have a particular politics, a particular line, a particular place on the political spectrum. I have predicted that we are going to end up choosing our large language models like we choose our newspapers, because they align with our personal politics.
Ross: I am very interested in this idea of humans plus AI. You’ve laid out how humans and AI are different, and how AI is not human intelligence as we know it. When we look at the landscape of how AI can amplify the capabilities of humans: humans want to achieve things; we have different tasks, missions, and intentions. What is the scope of the ways in which AI can amplify who we are and what we can do?
Toby: In many ways. It comes down to the fact that AI is different from human intelligence, and therefore we have different strengths and weaknesses. We already recognize that computers are much better at doing arithmetic and calculations, which we have already outsourced. I remember being taught log tables at school. I’ve never used a log table in my life; it was a total waste of my time at school, though it taught me about numbers, I suppose. I’ve got a calculator on my watch, I’ve got a calculator on my phone, I’m never without a calculator. The calculator is much better: it makes fewer mistakes and does things much quicker than I ever would. We can outsource those things to computers. Increasingly, we are going to find other things that computers do better, which we will outsource to them as well.
Actually, what I take away from the success of large language models is that we have overestimated quite a bit of human intelligence: there is a huge amount of human communication, like writing a business letter, that requires minimal intelligence. It is quite formulaic, we have now taught those formulas to machines, and they are really good at it. I’ve written my last business letter: I just put the four dot points into ChatGPT and said, write me a polished, polite business letter that covers these topics. Of course, the irony is that very soon that business letter is going to land on a desk at that business, and they’re not going to bother to read it, because it’s too many words. They’re going to put it into ChatGPT and say, summarize into four bullet points whatever Toby has just written to me about.
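Doing that programmatically looks something like the sketch below, assuming the OpenAI Python client; the model name and bullet points are placeholders, and the client interface has changed across versions, so treat the details as indicative rather than authoritative.

```python
# Hedged sketch: bullet points in, polished business letter out.
# Assumes the OpenAI Python client; interface details vary by version.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

points = [  # placeholder dot points
    "confirm receipt of the invoice",
    "query the delivery date",
    "thank them for their patience",
    "request a follow-up call next week",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write me a polished, polite business letter "
                   "that covers these topics:\n- " + "\n- ".join(points),
    }],
)
print(response.choices[0].message.content)
```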
Ross: This goes to information theory, which Claude Shannon created, and which looks at redundancy: what can you take out and still retain the message? As you suggest, you can take out most of human communication and still get the message across.
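Shannon’s redundancy is directly measurable: formulaic text compresses far better than random data. A small sketch using Python’s standard zlib module, with the sample text invented for illustration:

```python
# Sketch: redundancy made visible through compressibility.
# The formulaic sample text is invented for illustration.
import os
import zlib

formulaic = (
    b"Dear Sir or Madam, I am writing to follow up on the matter we "
    b"discussed. Please do not hesitate to contact me should you "
    b"require any further information. Yours faithfully, Toby. "
) * 5

random_bytes = os.urandom(len(formulaic))  # no redundancy at all

for name, data in [("formulaic prose", formulaic),
                   ("random bytes", random_bytes)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compresses to {ratio:.0%} of original size")
# Formulaic prose shrinks dramatically (high redundancy); random data
# barely shrinks, which is Shannon's point in miniature.
```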
Toby: It does. But to go back to your question, because I think it’s really important for understanding what is going to happen in the next century, understanding our relationship to the machines we are building: they have some strengths, and we have other strengths. Machines don’t have our social intelligence; they don’t have our emotional intelligence. It’s unclear to me that they necessarily ever will; at the very least, they are uniquely disadvantaged in that respect. If you and I have a conversation, I can think: wait a second, before I say that, if someone said that to me, how would I feel? Would I be a bit upset? Then I’m probably not going to say it to Ross. I can reflect because I have a similar emotional life to you; I share a similar biology. Machines can’t do that, because they don’t share our biology. They don’t have emotions; they don’t have an inner life, as far as we can tell, like you and I do. That puts them at a real disadvantage in terms of empathy and emotional intelligence.
Yes, of course, they can fake it. Indeed, they already are. If you tell ChatGPT that it answered a question wrongly, it says, oh, I’m very sorry, I won’t do that again. But by the tenth time it says that to you, you realize it actually always makes those sorts of mistakes, and that the apology is a bit vacuous; it’s not as meaningful as it is from a human, who is going to be embarrassed and is going to follow up by actually not doing it again. I think there are places where we are going to have unique advantages over the machines, and those areas of advantage are, in many respects, the ones most important to us as humans.
At the end of the day, we are social animals. That, in many respects more than intelligence, is what got us to where we are, what got us to be, for better or for worse, the dominant species on the planet: we came together into tribes and then villages and towns and cities and did stuff together in a cooperative way. Of course, we invented language and knowledge along the way to help amplify that. But it is the way that we work together that has allowed us to be as powerful as we are, and it’s something that, in many respects, gives us the most pleasure and satisfaction in life. If there were a few modest gifts amongst all the pain of the pandemic, one was that we realized that coming together and spending time with people was really important to us. When that was taken away from us, even though we had all these virtual tools, like the ones I am now talking to you with, they were not the same. I’d have had even more pleasure from our conversation if we were actually sitting in the same room together. Hopefully, one day we will be.
Ross: Absolutely! It makes me think the book is about “Faking It” in the sense that artificial intelligence is being made out to be like human intelligence, in many ways. What you’re suggesting is that we should perhaps even amplify the difference between artificial intelligence and human intelligence: not trying to make it the same as human intelligence, but pushing it in a different direction. Because we already have human intelligence; why try to emulate it?
Toby: There are pleasurable ways to make more of it as well.
Ross: Yes, absolutely. Is it fair to say that we should try to make it more different from human intelligence rather than try to make it seem the same?
Toby: Yes, I think there are two things. One is that we should try and amplify the strengths of each player in this game. It also tells us where we should be focusing our energy. There’s no point trying to teach ourselves to be better calculators because machines are much better calculators; we should be playing to our strengths and handing over those tasks, many of which we never actually like doing, to the machines that are better at them. The other really important follow-on is that we should also be careful to distinguish AI from human intelligence.
For example, one of the ideas in the book is the idea of Turing red flag laws. This is named after Alan Turing, whom we talked about earlier, and the red flag is an homage to the red flags that people used to have to walk in front of motor vehicles when they were first invented, to warn people about the strange new technology that was coming around the corner and might startle the people on horseback. People needed to be warned that a strange new technology was entering their lives, one that might cause potential harm as well as bring significant benefits.
Similarly, I think we should have red flags over AI. For example, if you ring up a business and it’s not a human in the call center but an AI answering your call, which is increasingly going to be the case, and increasingly it is going to answer in an interactive way, not the telephone tree of today but literally listening to what you say and responding intelligently to it, then you should be told, because there’s a real deceit going on. The most valuable thing we have as humans is our time, an infinite amount of which can be wasted by machines. I think we have a basic human right to know whether we’re dealing with another human, and therefore someone we should be treating with kindness and respect, or a machine.
That’s the challenge. I always wonder whether I should tell my daughter to say please and thank you to Alexa. On one level, you shouldn’t, because it’s completely wasted breath. You are wasting your time; Alexa doesn’t care, there’s no caring in Alexa, so there’s no point saying please or thank you to it. But what does it say about us if we end up in a world where we’re commanding machines, and maybe we say please and thank you less to humans as a consequence, because we get out of the habit? So I still tell her she should say please and thank you, even if she thinks it might be a computer.
Ross: Yes, this is one of the modern etiquette dilemmas. Just to round out with a bit of future gazing: we have AI which is better than humans at a bunch of things; hopefully, there are some things that humans continue to be best at; and there are many, many things where humans and AI will come together, where, as you say, each plays to its strengths. What should we look for? How can we shape this evolving world in a way that serves us best?
Toby: I would add a third category: things that machines could do better than humans but that we will still have only humans do. I think it’s really important to realize that there are things that, even though machines would do a better job of them, we should not hand over; that’s not the world we should be in. As an example, there are some pretty high-stakes places where people are already starting to think about replacing humans with machines: making sentencing decisions, making decisions about welfare payments. I’m not suggesting that we might not include machines in the loop to help us sift through voluminous amounts of information, but removing humans completely from those sorts of decisions will take us to the world that people have warned us about.
I don’t want to wake up in a world where the computer judge says, well, Toby, you’re going to jail for six years. I want to be able to throw myself on the mercy of a human judge, who might understand the circumstances that led me to that sad situation where I’m standing in front of the bench facing the prospect of a long jail term. I suspect most of us don’t want to wake up in that world, even if you could put demonstrable evidence in front of me that the machine was doing a much more systematic and much more reliable job than a human judge. Because, let’s be frank here, humans are terrible at making decisions; we are full of unconscious biases.
There’s a wonderful, somewhat disputed, study about Israeli judges: you’re more likely to get parole if they’ve just had their lunch or their tea, and less likely if they’re just about to go for lunch or tea, because their blood sugar is low. We all know that people are like this. Nevertheless, I would still prefer to throw myself on that fallible decision-making than end up in a world where it’s an algorithm without any empathy that is making those decisions. I think that’s a world most of us would prefer not to be in. As I said, this is the third category, where we will decide not to hand decisions to machines even though, arguably, the machines would do a better job.
Ross: What you’ve described is one of the fundamental junctures: which of those paths do we go down? You’ve made a strong case, and it aligns with a lot of your work, for ensuring that humans make the decisions that matter. You have a book coming out shortly after we’re recording this episode. Where can people go to find out about your work and your book? There are a number of initiatives you are involved in; it will all be in the show notes, of course, but where can people go to learn more from you?
Toby: You can find my new book, as well as my previous books explaining the history of AI and where it has got to today, at all leading bookstores; they are published by Black Inc. The new book is called Faking It: Artificial Intelligence in a Human World. Before that there was a book looking at some of the ethical challenges we’ve touched on today, called Machines Behaving Badly, and one before that doing the stargazing we briefly did, looking at what happens when machines might match many human capabilities, called 2062: 2062 being when machines might start to equal, in some sense, human intelligence, although, as we’ve discussed, it’s a much more nuanced question than that; we each bring different strengths and weaknesses, different characteristics, because we have quite different designs. You can follow me on Twitter @TobyWalsh; I tweet quite a lot, and although, sadly, Mr. Musk is spoiling the place, there’s still nothing quite as good for discovering what’s happening in this exciting field. I have a blog, The Future of AI, at futureofai.blogspot.com. And I do a lot of media; people complain that I’m always on TV or the radio, so look out for me there.
Ross: Fabulous. Thank you so much for your time and insights, Toby. I am very glad that your humane perspectives and views are so influential today as we collectively shape the future of AI. I think it’s really important that we have your kinds of voices heard in the discussions and debates because that’s going to push us more toward the better path.
Toby: Thank you. That is very kind of you. The other important part of what you said there was: as we shape. One of the messages in this book, and in all my books is about making the right choices. The reason I write these books is to try and help inform people so that we do make some good choices. Technology is not destiny. It’s about making some good choices today.
Ross: Fantastic. Thank you, Toby.