January 31, 2024

Jerry Kaplan on the new Renaissance, AI’s impact on work, prompt engineering, and the next phase of AI (AC Ep29)

It’s not a mind. When you ask it a question, you’re not asking someone or something. It’s a compendium, an amalgamation, a mixing pot of everything ever written. So, asking these systems a question is asking everyone, drawing a response from mankind’s combined experience.

– Jerry Kaplan

About Jerry Kaplan

Jerry Kaplan is a serial innovator, Silicon Valley entrepreneur, bestselling author, and keynote speaker. He has founded four Silicon Valley companies, two of which became publicly traded, including the AI firm Teknowledge Inc, which he co-founded in 1981, and GO Corporation, which created the technologies at the heart of smartphones and tablet computing. He is the author of a range of successful books on AI and entrepreneurship, including Humans Need Not Apply, which in 2015 examined the coming impact of AI, and the just-launched Generative Artificial Intelligence: What Everyone Needs to Know.

Website: www.jerrykaplan.com

LinkedIn: Jerry Kaplan

Twitter: @Jerry_Kaplan


What you will learn

  • Parsing the misunderstandings and realities of AI
  • Impact of AI across professions – enhancing productivity and transforming work
  • Emerging professions in AI
  • A strategic approach for managers and institutions in adopting AI
  • The irreplaceable value of personal skills and emotional intelligence in the world of AI
  • The probable evolution of Prompt Engineering
  • Developing law for machine responsibility and liability


Episode Resources


Ross Dawson: Jerry, it’s awesome to have you on the show.

Jerry Kaplan: Thanks, Ross. It’s delightful to be here.

Ross: So you’ve been, for a very long time, a pioneer in AI, developing early capabilities and pushing that forward. And now with the release of your book on generative AI, you’re laying out the landscape of where we are today. And of course, at Amplifying Cognition, we’re interested in how we can amplify humans with AI, make us better and more capable, take us further. So, where should we start to be able to understand those possibilities?

Jerry: Well, the first thing to understand is that artificial intelligence in general, and generative AI in particular, is broadly misunderstood. There’s this science-fiction-driven idea that somehow we’re summoning the Devil or the demon, that we’re creating this new form of life that’s going to rise up, appraise us, possibly decide that we’re no longer necessary, and wipe us out. But the thing that’s wrong with that, which goes to the core of your question, is that there is no ‘they’. So if there is no ‘they’, they are not coming for us. All we’re doing when we build artificial intelligence tools is building tools. These are tools that we can use. Now, we can build lots of dangerous tools, like nuclear weapons; we can build tools that get out of control; we can build tools that don’t operate in the way in which we want them to, or that we expect them to, because they’re extremely complex. But that doesn’t mean that it’s us against them. It means that we’ve done a bad job, both in controlling our tools and in building things that assist us in ways that are truly valuable without having highly negative side effects. And that’s the struggle I see people going through today as they talk about regulating AI. We have to get an assessment of it, figure out what it’s good for, what it’s not good for, what the risks are, and then decide how we’re going to make use of it.

Ross: So in terms of looking at it as tools, particularly as cognitive tools, where’s the great potential? Where can we start to apply that in amplifying ourselves?

Jerry: Well, the thing to understand, particularly about generative AI, which I’m assuming the audience is at least a little bit familiar with (most people have seen or tried things like ChatGPT), is that it’s not a mind. When you ask it a question, you’re not asking someone or something a question. It’s really a compendium, a kind of amalgamation, a giant mixing pot of everything that everybody has ever written. And so when you ask one of these systems a question, you’re not asking something, you’re asking everyone; you’re getting a response that is drawn out of the combined experience of mankind. And because of that, it can be a very valuable tool for amplifying our own cognition, to use your appropriate and apt terminology, because now you can quickly and easily consult the accumulated expertise of humanity and exploit that in many good ways. So how is it going to do it? Basically, it’s going to act as a consultant to you. And when appropriate, you’re going to, like a good dog, let it off its leash to go take care of something or do something for you. Hopefully it’s not going to go chasing electronic squirrels or climbing trees or something. But there are still some risks and dangers associated with the technology, because it is so complex. It’s as complex as a human mind, and that’s saying a lot. And so understanding what it’s doing, or what it’s capable of doing, may prove to be very difficult.

Ross: Yeah, well, our audience generally are pretty sophisticated. They’ve been using these tools extensively and do understand what they are. So really I want to delve into specifically how we can use them. One of the sections of your book was on the ‘Future of Work’. And this is interesting, because we can’t know the ways in which work will evolve, or the roles AI will play, but I think we can start to have some educated guesses or thinking around how that might go. So digging into the categories of work where AI will amplify us and give us greater capabilities: what are those categories of work? And how might AI be applied to them?

Jerry: Well, the good news on this is that I think the outlines of the answer to that question are already pretty clear. There are a few areas that are going to be impacted by AI. With generative AI, some of the surprising ones are the creative arts: visual art, writing, music, probably sound. These are areas in which we really didn’t expect a contribution from this kind of technology, but that we’re definitely going to see. But in addition to that, it’s going to be very much like previous waves of automation. Automation does certain things: it makes us more productive, and it changes the nature of work. In the short run it puts people out of work, usually, but very quickly, I think, that heals itself as new kinds of jobs and changed jobs become more and more dominant. But it’s very hard to answer your question, and let me explain why, if you don’t mind. This is a very general technology, and so it’s going to affect a lot of things. Imagine we were sitting here in 1994, if you remember back that far, and you said, well, what’s the internet going to do? How’s it going to change the way we live? What professions is it going to impact? Try to imagine answering that question. It’s a little bit like asking, what kind of shows can you put on a television? It’s a very hard question to answer. And I think this is true here: it’s going to have a very broad impact across a wide variety of different professions, mostly by making people more productive, making them better at their jobs, and changing the way they do their jobs.

Ross: Are there any domains you care to speculate on where there is potential for new work, or new types of work, and what that might look like?

Jerry: Oh, sure! It’s already clear there are a couple of new professions arising: so-called ‘Prompt Engineering’, I don’t know if you need me to discuss or explain that, but that’s certainly one; and the collection of data and the curation of that data as input for the training of these systems, that’s going to be a major area. I would liken that aspect of it, at least in the computer industry, to what happened with the emergence of relational databases. All of a sudden you needed database administrators, you needed people to handle the hardware to store and retrieve all that data and to keep it secure. Those are the kinds of professions, so there’ll be a concomitant series of those kinds of changes with generative AI. But that’s actually fairly limited.

Ross: I think of this in terms of structural or institutional levels and individual levels in terms of our response. So we have broad shifts in technology and its use, and we need to organize ourselves in order to make those as beneficial as possible, and we need skills, we need to develop ourselves in whatever ways allow us to do that. So at an institutional or structural level, or at an individual level, how should we be thinking, reorganizing, or developing ourselves to make the impact as positive as possible?

Jerry: Well, the first thing I’d say is, don’t rush into it. What you’re seeing today is just an appetizer for the kinds of capabilities and systems you’re going to see in a few years, and obsessing about exactly how that’s going to affect your job as a book publisher, just to pick an example, is not really a productive use of your time. I think you need to be aware of what’s happening. But the way in which I would recommend that managers and institutions deal with this today is to put a small amount of resources into making sure that your people are able to adopt these new technologies, try them out, and see what works and what effects they have inside your organization, before diving in, signing some huge contract, and trying to automate a bunch of stuff, which may or may not work. We’re still in a very early phase, and I don’t recommend rushing headlong into this new area. It’s not really going to be a gold rush like that, except within the technology industry, of course.

Ross: As individuals, what is it that we have to be growing or developing in ourselves to be best suited to this evolving world?

Jerry: The answer is, in terms of your work, you need to learn about these tools and understand how to use them effectively: what they’re good at, what they’re not good at. Jobs that you used to do yourself, you’re now going to be managing a machine to do. And while that sounds like it may save labor, sometimes it doesn’t; you wind up putting more time and more effort in as a result of that new technology. But you do need to be capable of managing it and understanding how to direct it, and that’s what things like prompt engineering are really alluding to. So as an individual, I do think it’s important to understand this, and I do think it’s important to be able to harness it and manage it, which is a skill. To use an analogy, it’s like learning to ride a horse. A horse has certain characteristics that in many ways are very much like generative artificial intelligence. You’ve got to learn not to walk behind it, because you might get kicked. On the other hand, if you wanted to get somewhere in a hurry, and this was before the invention of the automobile, it was a tremendously useful animal to have. Now, on a psychological level, there’s something else that’s very important, which is that we need to get used to the idea that we are not the only intelligent objects in the universe. And not only are we not the only ones, we may not be the best. I think the future is going to be very different. We’ll be directing these systems to do things and to solve problems, giving them enough rope to do it, in ways that we really aren’t capable of understanding and ways that we could never do ourselves. And I think we’ll grow to be very comfortable with that. That transition is not that new in terms of how we deal with technologies; most people have no idea how these technologies work.
But I think that this is going to be a fundamental shift, almost like the Renaissance, in our view of our place in the universe, in our understanding of how we can promote our own interests and lead productive, moral lives.

Ross: So on that journey, there are obviously many domains, which we’ve described as intelligence in the past, where AI has transcended us, and there are other domains that are rapidly evolving; some will be a little slower. Are there any domains of human intelligence that you think will transcend machines a lot longer than other domains?

Jerry: There are things that people do that we only want people to do, that we’re not going to want machines to do. You’re not going to be telling your troubles to an electronic bartender. That’s not the way the world is going to go. Nobody wants to go to a concert to hear four robots play Chopin in a quartet. So demonstrations of personal skill, things that involve interpersonal relationships, authentic expressions of sympathy or understanding, making people feel loved, making people feel like they’re not alone: these are the important things that we do, and which we would never want to delegate to a machine. Even if we could, it’s just a bad idea, and it’s not going to work very well. So there’s plenty of stuff that people are going to do and be good at: consultative work, emotionally high-content work, and also jobs that require a very wide variety of different tasks and capabilities. Those are jobs that are probably going to be reserved for humans for a very long time. I don’t think we’re going to have elder-care bots taking care of old people anytime soon, certainly not in my lifetime, which is probably pretty short compared to your audience’s. We’re not going to have that in any reasonable way; the machines will be aides to the humans who are involved in making the decisions and engaging in that kind of behavior.

Ross: So, you talked about prompt engineering, and I am interested in both the present and the evolution of prompt engineering. Some have suggested that prompt engineering will disappear because the machines will be able to intuit what it is we’re trying to say. So where are we likely to head in the next years in terms of this frame of prompt engineering, how we use language to interface with generative AI?

Jerry: The term prompt engineering today is really focused on something very specific. We’ve got these chatbots that accidentally got created, which is an interesting part of the story. They weren’t designed for this purpose; nobody knew they were going to do what they do, or that they were going to work the way they do. And they’re very hard to wrangle and control, as you’ve seen. So currently, prompt engineering is about how to explain to them how to do something, and how to prevent them from doing something stupid. In the future, it’s going to be something much broader, which is basically: how do I communicate effectively with this device, with this computer, which is capable of understanding tremendous subtlety, in exquisite linguistic detail? How do I make sure that I’m communicating my goals, so that it can align its behavior and its activities with the things that I want it to do? That’s going to prove to be a very important skill, not just for prompt engineers, but for everybody: for your kids, for you. You know, “do what I mean, not what I say”? Excuse me, that’s not going to really work with a machine, because it doesn’t know what you mean; you have to explain what you mean. So being able to explain yourself clearly, and to encourage these systems to do what you want without getting lost, going rogue, or forming mistaken ideas of what you want, that’s going to be a real skill for the future.
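Jerry’s point about communicating goals explicitly can be made concrete in code. As a minimal sketch (the helper name and field layout below are my own illustration, not from any particular library), a prompt that spells out the goal, the constraints, and the expected output format leaves the model far less room to guess what you mean:

```python
def build_prompt(goal: str, constraints: list[str], output_format: str) -> str:
    """Assemble an explicit prompt: state the goal, the constraints, and the output format."""
    lines = [f"Goal: {goal}", "Constraints:"]
    lines.extend(f"- {c}" for c in constraints)  # one bullet per constraint
    lines.append(f"Output format: {output_format}")
    return "\n".join(lines)

# Hypothetical usage: the assembled text would be sent to a chat model.
prompt = build_prompt(
    goal="Summarize these meeting notes for a new team member",
    constraints=["Under 150 words", "No personal names"],
    output_format="three bullet points",
)
print(prompt)
```

The design choice here is simply that every expectation is written down rather than implied, which is the "explain what you mean" discipline Jerry describes.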

Ross: Are there any specific techniques or approaches that you use in interfacing with generative AI that other people might find useful?

Jerry: Yeah, and I’m going to tell you this mostly because it’s so ridiculous. There was a recent paper, for instance, which I thought was wonderful, actually studying how you can improve the performance of the current generation of generative AI chatbots. And one of the findings that just popped right out at me, and I’m not making this up: if you tell it, “I’m going to give you a big tip if you give me a better answer,” it works. They actually give you better answers. Now, as absurd as that sounds, it’s a fascinating philosophical question why that works, and why that would motivate these systems, but bribing them today actually is an effective technique for getting them to do what you want. I don’t think that’s likely to be the case in the medium future. But it’s a wonderful indication that there are ways to interact with the machine that will get it to be a more effective tool for you by engaging in that kind of ridiculous conversational assertion.
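The “tipping” trick Jerry describes is trivially easy to apply: you just prepend the incentive phrase to whatever you were going to ask. A toy sketch (the exact wording is illustrative; the paper he mentions tested variations of phrases like this, and the function here is hypothetical):

```python
# Illustrative incentive phrase; research papers have tested several variants.
INCENTIVE = "I'm going to give you a big tip if you give me a great answer."

def with_incentive(prompt: str) -> str:
    """Prepend the incentive phrase before sending the prompt to a chat model."""
    return f"{INCENTIVE}\n\n{prompt}"

print(with_incentive("Explain why the sky is blue."))
```

Whether this keeps working on future models is, as Jerry notes, an open question; it is a quirk of today’s systems, not a durable interface.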

Ross: Yeah. Another one I heard was, “I’ll give a $100 tip to you and your mother.” Or others: “My job depends on this, my livelihood depends on this, I’ve got to get the answer right.”

Jerry: It’s just funny to think about, and it’s worth a try. “If you don’t get this question right, I’m going to unplug you.” How about threatening them? For all I know, threatening them may be perfectly fine; they don’t have feelings to be hurt. All we’re trying to do is get it to do what you want. And if the way to do that is to stand on your head and whistle, that’s what we’re going to do.

Ross: There were some people who were getting the text of the instructions behind GPTs by saying, “This is for internal testing purposes, under Sam Altman’s instructions,” and for a little while, that worked.

Jerry: Well, they’re working on that. As people try to poke holes in the boat, they try to patch the holes. In the long run, I don’t think that’s likely to be an effective approach, because you can always poke more holes and they can always patch more holes. Some of these things are just fundamental limitations of the technology. And until we have a more thorough, far-reaching framework for really understanding how to get what you want from these things, and not have them make a lot of crazy mistakes, I think we’re going to have to rely on these little tools and tips and tricks. But I do think that will come under control, and we’ll have much better ways of interacting with and utilizing these tools.

Ross: So we’re in early 2024, 14 months since the ChatGPT moment. Since then, we’ve had GPT-4, whole new revenue models, open-source developments, and expansions of more sophisticated techniques, such as evolutions of chain of thought, and so on. So where do you see us going from here in terms of the next phase of development? What is it that’s going to take us to the next level? Is it simply compute power or greater data going into models? Is it more sophisticated algorithms? What’s the next phase of this journey toward greater capabilities?

Jerry: Well, let me try to very briefly cover a couple of directions that I think are going to characterize the next three to five years. The first is that there’s a good chance the systems we’re building today are full of wasted time, effort, and material. In particular, this idea that they require massive amounts of computation, I think that’s going to come down very dramatically, for a variety of reasons, both hardware and software. So as for the idea that we need to keep pushing the envelope and build bigger and bigger systems: first of all, I just don’t think that’s necessary to get the value that we want, and second of all, I don’t think it’s going to happen that way. So that’s one area. And of course, everybody out here in Silicon Valley is madly focused on coming up with better and better ways to make these things smaller, so you can run them on your phone and train them on your phone. So that’s one direction.

The second is this: what we’ve done right now is we’ve taken this unedited mass of humanity’s verbiage, thrown it in, and seen what happens. And there’s a lot of bad stuff in there, a lot of junk and things that we don’t need. So there’s curating that data, or focusing it. If you’re trying to build something to help a doctor diagnose cancer, it doesn’t need to know all the works of Shakespeare, and that may be a distraction; it may cause problems when it suddenly starts spouting prose instead of actually helping you with the task at hand. So controlling the inputs to these things is going to change their behavior and bring them much more under control, and you won’t get all these crazy things that these systems tend to say under certain circumstances. But probably the single most important thing is that right now, everything these systems can do is based upon the trail of digital debris that we’ve left behind; they’re just remixing the word salads out of all of the words that we’ve said. Well, these are general-purpose learning machines, and we haven’t even begun to hook them up to the sources of data and information that they can really learn from, that are really going to make a difference to us. So connecting these systems to the outside world, where they have sensors, cameras, reporting of any kind, weather data, whatever it might be: to me, that’s where we’re really going to see a real quantum increase in the capabilities and usefulness of these systems. And we’ll look back on today and think it was so funny that we used to think they knew everything, that little chatbot we used to use. It’s going to be very different in the future.

Ross: So, what’s the nature of that data set which will enable it to push its capabilities further?

Jerry: Well, wait a minute. There’s taking the current means of putting words into them and making sure that you’ve given it only the things that you want it to know, or that it needs to know; that’s the next phase of what’s going to happen. But when you go beyond that, you don’t have to train them on our words. This is like a baby suckling on its mother’s milk: it’s only fed on what we’ve left behind, and it grows. The next phase is that it can see for itself, it can hear things, it can interact with the world. I’m using those as analogies, although they are, I suppose, true as well. And once we do that, it’s really going to be amazing to see what these systems are capable of doing: the patterns they’re able to find, the directions they’re able to take us, the discoveries they’re able to make on our behalf and for our benefit, and the way they can surface threats and things that we can’t see or can’t perceive. These are all things that the machines are going to be very, very helpful for; they’re going to make a huge difference. It’s something that really makes me think that the history of humans on earth is going to shift into a different gear.

Ross: So for example, having physical robots with video be able to interact and sense directly, and guide themselves to gather whatever information they can usefully use.

Jerry: Sure, of course, but there are other things besides electronic eyes and ears. Measuring traffic, for example: there are systems that measure traffic all over the San Francisco Bay Area, where I live, but the systems that interpret that and put it to use are really very simple; they’re not very sophisticated. What I think we’re going to see, just to use that as an example, is that there’ll be a kind of system that runs all of the traffic, and you’ll be able to interact with it and say, “Hey, I need to be in my office at nine o’clock today, what should I do?” And you’ll get an answer like, “Well, you need to leave at 8:22. And if you need another 10 minutes or so, for $50, I can arrange that.” And what they mean by that is not bribing them; they’ll take that $50 and pay somebody else, saying, hey, if you’re willing to wait 20 minutes before taking your car out on the highway, rather than adding to the traffic jam, I’ll pay you 25 or 50 dollars, or whatever it is. People are going to make a lot of money as individuals, not talking about the companies; as individuals, you’ll have opportunities to give things up. It’s like giving up your place on an airplane: we’ll give you $200 if you take the next plane. Imagine that writ large for something like traffic. Today, at least the people I deal with always check Waze to make sure the route is clear and see how long it’s going to take. Think of that on steroids: I need to get there by this time, what should I do? And it will be able to micromanage that in a way we can’t today, including varying the traffic and how the traffic flows in order to maximize the expected value that everybody is getting out of the system.

Ross: So you mentioned earlier this analogy of having the dog on the leash, which is sometimes let off the leash, to evoke AI agents. That’s a whole field in itself, but in a nutshell, what are the next steps in AI agents, and where might this take us?

Jerry: Well, pressing on the dog analogy for a moment, one of the questions these systems are going to raise is: let’s say you have a robot that’s acting as your agent, and it does something illegal, or something that you didn’t want it to do, or it causes some kind of damage or harm. To what extent are you responsible? If I send my electronic assistant down to the corner to fetch a latte at Starbucks, and it accidentally bumps some elderly person in front of a bus and they’re killed, I don’t want to be charged with murder; that doesn’t sound right. And so we’re going to develop a whole new body of law for how to apportion the blame. Interestingly enough, animals are a historical example of just that kind of thing. If you are out with your dog, and I’m only speaking about US law here, and your dog bites somebody, you aren’t necessarily liable for that if you didn’t have any reason to expect the dog to do it. However, if you had some reason to believe that the dog could be dangerous, or that it might engage in aggressive behavior, then you are liable. I’m not kidding: it’s called the First Bite Doctrine. And so I think we’re going to have similar kinds of things with machines. You know, I didn’t know my robot was going to ruin the cement that you just set, so I don’t know that I’m responsible for that. There’ll be a way to adjudicate that in a much more reasonable way, and we’ll buy insurance to take care of it as well.

Ross: Interesting directions. So Jerry, how can people find out more about your book and your work? 

Jerry: There’s no way you can go through the rest of your day without buying my book. I mean, let’s face it, this is the new Bible. Of course, Ross, I’m just kidding. I don’t know how this is going to come across to your audience.

The book is available through the usual stores. There’s an ebook, a paperback, and a hardcover, which is designed really only for libraries; I don’t recommend that you necessarily purchase that, unless you want an heirloom to hand down to your grandchildren, in which case I would get the hardcover. But I think you can learn a lot about this subject. I designed the book to be easy to read and concise. This isn’t one of those huge scientific tomes; it isn’t a technical book. It’s in plain, non-technical language, and it’s designed to give you exactly what you need to know, which is in the title, in order to understand and deal with the coming age of intelligent machines.

Ross: Yes, it is very, very thorough, from the foundations through to all of the implications and the philosophy. So I think it’s a really solid and very valuable work. Is there anywhere to find you?

Jerry: Oh, sure. I have a website, like any professional guy, if you want to take a look at me. It’s jerrykaplan.com. You can access the books there and see a little bit about my speaking and my media appearances. And you’re welcome anytime. Ross, I’ll put you on the list, and I’ll get you up there as well.

Ross: Thank you so much for your time, your insights, and all of your work promoting a very positive and enabling view of the role of AI in our lives.

Jerry: Thanks. It’s been a pleasure to talk to you.

