“The goodness of what humans desire, AI will do that; the bad players, these tools will also amplify that. It’s for us to determine the course of how these technologies will be used.”
– Jeremiah Owyang
About Jeremiah Owyang
Jeremiah is an industry analyst based in Silicon Valley and an advisor to Fortune 500 companies on Digital Business, as well as an entrepreneur, investor, and the host of tech events, including some of the current major AI events in the San Francisco Bay Area. He has a strong global profile and has appeared in publications including The Wall Street Journal, The New York Times, USA Today, and Fast Company.
Websites: web-strategist.com
LinkedIn: Jeremiah Owyang
Facebook: Jeremiah Owyang
Instagram: @jowyang
Twitter: @jowyang
What you will learn
- How the local AI scene is thriving and offers valuable opportunities for enthusiasts (02:53)
- AI’s potential to amplify humanity and reshape society (04:40)
- Recognizing the fear of AI replacing humans and its underlying causes (06:36)
- The potential for a mutually beneficial division of labor between AI and humans (08:20)
- The Centaur concept, as a fusion of human and AI capabilities (08:44)
- Critical role of organizational infrastructure in AI adoption (10:02)
- Highlighting the current fervor and interest in AI across corporations (13:30)
- The challenge of AI Integration in go-to-market (16:19)
- The importance of embracing curiosity and staying informed about AI tools and concepts (18:08)
- Real-world examples of AI utility (21:37)
- Introducing the concept of foundational models and their evolving role in AI technology (22:37)
- Addressing the potential future of AI that involves extensive data access (25:10)
- The centralization of AI and the race for data (28:37)
- The importance of business models in AI ethics (29:07)
- The critical considerations for enterprises embarking on AI projects (31:23)
Episode Resources
Transcript
Ross Dawson: Jeremiah, fantastic to have you on the show.
Jeremiah Owyang: Ross, I’m delighted to be here. Thank you.
Ross: You’re deep, deep into AI. I’d love to just get the big-picture perspective on what you’re seeing happening and what the potential is, this year, next year, and beyond.
Jeremiah: Sure. I’ve been living in Silicon Valley since the dot-com era, so I’ve seen approximately five tech trends, and I haven’t seen a movement this big, perhaps, since the dot-com era. There’s notable excitement and energy all across Silicon Valley and San Francisco; you can touch it, you can feel it. I attend a minimum of three AI events per week so I can stay abreast of the rapid changes that are happening. Most of the AI startups and foundational-model companies are in the Bay Area, so it’s happening here, plus the big tech giants, who are all moving into AI.
I also host an event series for AI startups called the Llama Lounge. It’s a clever name; hundreds of people sign up, and over ten different startups demo at each one. Also, I have been an investor in AI startups since 2017, and I’m working with a VC firm. I’m doing other things for corporate executives as well. I’m definitely entrenched. Ross, in June, there were 84 AI events. In July, the “slow month”, there were 69 AI events. Those are just the public events that we know about. There are private events, and co-working mansions, and events with the tech CEOs. There is so much happening, and I’m excited to come share that knowledge with you today.
Ross: Fantastic. We’re particularly interested in Humans plus AI. Humans are wonderful, AI has extraordinary capabilities. For the big picture frame, how should we be thinking about Humans plus AI, and how humans can amplify their capabilities with AI?
Jeremiah: I think that the verb “amplify” is correct. There is a book written by Reid Hoffman, co-written with a friend of mine, called Impromptu, that talks about AI amplifying humanity. That is the right lens for this. All the tools and technologies we’ve built throughout the course of human history have done that, from fire to splitting the atom, and now AI. I do believe AI is at that level; it is quite significantly going to change society in many ways. The goodness of what humans desire, this tool will do that; the bad players, these tools will also amplify that.
It’s for us to determine the course of how these technologies will be used. But there’s something different here, where the experts I know believe that we will see AGI (Artificial General Intelligence) equal to human intelligence within the decade. This is the first time, Ross, that we’ve actually created a new species in a way. I think that’s something quite amazing and shocking. These are tools that will amplify what we desire as humans, what we already do.
Ross: If we frame AI as a new species, as you put it, a novel type of intelligence, one of the key points is that it’s not replicating human intelligence. Some AI has been trying to model human intelligence and neural structures, and other approaches have taken other pathways. It becomes a different type of intelligence. I suppose if we are looking at how we can complement or collaborate, then a lot of it is around that interface between different types of intelligence. How can we best engineer that interface or collaboration between human intelligence and artificial intelligence?
Jeremiah: That’s a great question. I think that we can use artificial intelligence to do the chores and the repetitive tasks that we no longer desire to do. Let’s acknowledge that there’s a lot of fear that AI will replace humans. But when we dig deeper into what people are fearful of, they’re more fearful of the income loss they’ll face in some of the repetitive roles. It’s not always the work they sought to do in their career; it’s just where they’ve landed, and they’re doing tasks that are repeated over and over. If it’s just using your keyboard and repeating the same messages over and over, that is really not endearing to the human spirit. This is where AI can complement us, so we can level up and do tasks that require more empathy or connection with humans, or unlock new creative outlets.
Ross: One thing is a division of labor. All right, the human does this, the AI does that, or robots do this.
What is more interesting is when we are collaborating on tasks. This could be anything like, ‘I’m trying to build a new product.’ There are many elements within that where humans and AI can collaborate. Another could be strategic thinking. In terms of how we build these together, rather than dividing, separating, and conquering, where is it that we can come together to collaborate effectively, particularly on higher-level thinking?
Jeremiah: Yes, those are great things. AI is great at finding patterns in unstructured data, which humans struggle with. Humans are often able to unlock new forms of thinking in creative ways that machine learning or gen-AI is not currently capable of. Those are the opportunities where we segment the division of labor. I want to reference that I had the opportunity to interview Garry Kasparov, the chess Grandmaster and champion, at IBM of all places, and his thinking is that we want to look for the centaur. He believes the best chess player in the world will be a human, and she would also be using AI. He wants to create a league where humans with AI compete against other humans with AI in a chess battle. He believes that would produce the greatest chess player ever. It’s not just a human or just an AI; it’s that centaur, that mixture of the species coming together. And I think Garry is right.
Ross: Garry specifically says, in our context, that it’s not about how good the AI is or how good the human is; it is about the process. The quality of the process is what determines the ability of that centaur, the human and AI working together, to be more effective. This comes down to the idea of the process of bringing together humans plus AI. Thinking about it from a large-company perspective, how is it that we can design processes that bring together humans and AI to create this centaur that can transcend either humans or AI individually?
Jeremiah: In August, I went to the largest AI business conference that’s independent from a tech vendor. There were 2,500 business leaders who are leading AI at large companies and government organizations, most of them American, I want to add. One of the biggest challenges right now is that the organization is not even set up correctly to prepare for AI. What I found is that there are about three different models in which I’m seeing AI being grouped. The first one is product innovation. The second one is go-to-market, which is marketing, sales, customer care, and partnerships. The third would be loosely called Enterprise, which is operations, finance, IT, legal, and security.
Those three groups are what I tend to see; there could be a fourth group, an overarching group that would run a center of excellence for AI and/or define ethics and purpose that would cascade across all three of those groups. Bucketing them at those high levels matches what I’m seeing, and I’ve confirmed that with other leaders. Note that they span multiple departments because AI is enterprise-wide. Now, this is just the context; let me set this up. In that room, one of the speakers, who was leading analytics at a large makeup company, a beauty company, polled the room and said, ‘How many of you have a center of excellence?’ Out of the 2,000 people, only 20 raised their hands. That tells us something quite interesting.
We saw this, by the way, in Web 2.0 when I was a Forrester analyst: the social media presence of a corporation reflects the company’s internal culture. From the way the social media accounts were rolled out, you could tell how that company was organized. The same thing is starting to happen with AI. If a company is not organized correctly, and there is not a single source of truth for data, data modeling, cleaning the data, plus an ethics layer, and then making sure the data is fed back, and this is all before it even touches any foundational model or machine learning, then you have multiple versions of AI, and it will be fragmented.
A fragmented organization results in a fragmented data set or fragmented data strategy, which results in a fragmented AI experience across any of those three groups. That’s the biggest challenge that companies have right now; it’s not so much about machine learning skills or the ability to write prompts, it’s that they’re not set up correctly from the infrastructure at the get-go, in most cases. That even includes the large tech giants. They’re so large, and they push innovation out to the fringes of the organization, that their data is spread across the organization. The big soft skill here, Ross, is organizational leadership across departments; that is the most important thing that is needed right now, before they can even think about prompt engineering or using the tools.
Ross: It’s around having common data governance, common data models and architecture, and then coordination across whatever AI models sit on top of that.
Jeremiah: Correct. Thank you for succinctly articulating the exact steps; I’m going to rely on you for that. Ross, in addition, there’s a lot of heat and interest right now in AI. Every corporation wants this; however, a few weeks ago, I visited a colocation center in Santa Clara. For those that don’t know, a colo is where corporations house their servers, and I visited one that was focused on AI. Now, big companies are at a crossroads: do they go to the giant hyperscalers, like Amazon, Google, or Microsoft, and give them data so those vendors can train their models against your own customer data? It’s like paying rent to somebody while they sleep in your bed; that’s basically how they think about it. Or do they train their own models with their own data in their own private colocation centers, or on-premises data centers and servers, where their data is safe?
Now, the latter option is quite expensive. Right now, there is a wait time of thirty to fifty weeks for Nvidia chips. Yes, there are cheaper versions out there, but that’s a long wait, and even with that wait, most of those chips have already been pre-purchased by the hyperscalers. Then you have to have the power and the servers. Right now, a full-stack AI server is 1.25 million dollars, if you can get it. The capital expenditure for a big corporation to lean into this, plus staff and ongoing maintenance, is a bet of millions of dollars. And that’s just for one AI, for one of those products or groups that we talked about, let alone for the enterprise.
The issue here is that even if they get the organizational alignment and the set of criteria that you just listed out, their project could still be a business failure and the corporation may lose interest and appetite in a few years, resulting in a net negative project. That’s another business model issue that also has to be contended with.
Ross: Generally, what are the parameters that would suggest whether the enterprise should be looking at using off-the-shelf models and off-the-shelf training, as opposed to being able to build their own models?
Jeremiah: Regulated industries: I have been speaking to the heads of AI at financial and pharma companies. They’re more likely to grab off-the-shelf open source right now; the common model, surprisingly, would be Llama or Llama 2, which is built by Facebook of all people, and/or Falcon, and they can download those from a repository like Hugging Face. There are other players out there offering bundled suites that would do this on a safe cloud or a private cloud away from the big hyperscalers, or set it up on-premises. There are other ways to do that. That would require a significant commitment from the C-suite to set up, unless there was an IT unit already ready to deploy it.
In most cases, a marketing group or a sales group will not have time to wait for the enterprise to do that; that could take months if not years. They’re more likely to go use a cloud offering from Salesforce and/or Adobe, which are now offering AI, in addition to the three hyperscalers I mentioned earlier. That’s what is most likely to happen. As a result, you’ll see fragmentation between the go-to-market team, which I broke out earlier, and the product team, which is more likely to have it on-premises because they have the infrastructure, and then you have a breakage. This results in a broken customer experience, because the product might have AI integrated, but when it’s time for customer care or marketing, their systems are not talking to each other and the customer is going to be quite frustrated. They don’t care which department the AI belongs to; they just want their problems fixed.
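[Note: as a rough illustration of the open-source route Jeremiah describes, the sketch below shows one common way to pull a model such as Llama 2 from Hugging Face and run a prompt locally using the transformers library. The model name, prompt, and hardware shortcuts are illustrative assumptions, not anything specified in the episode; the gated Llama 2 weights also require accepting Meta's license on Hugging Face first.]

```python
# A minimal sketch, not Jeremiah's setup: download an open-source chat model
# from Hugging Face and run one prompt locally.
# Assumes `pip install transformers torch` and, for gated Llama 2 weights,
# a Hugging Face login with the license accepted. Model name is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # could equally be a Falcon checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # add device_map/quantization as hardware allows

prompt = "Summarize the key risks of sending customer data to a third-party model vendor."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```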
Ross: Interesting. There are a lot of architectural or structural issues which do need to be led, as you suggest, from the top of the enterprise. One of the things that you’ve said is that in this world, we need to become a master at using AI tools. What is that process? Is it all up for us individually to go out and learn how to engage? Do enterprises need to roll out education programs? How do we become masters at using AI?
Jeremiah: I believe the listeners of your show are curious. Even if you’re not technical, you follow Ross, and you’re going to explore new ideas because Ross is your leader. I am sure people here are already trying some of the most basic tools, and if not, you should become familiar with them. It’s also important for you to train your kids on these things; if they’re not yet teenagers, you do it with them and you’re careful. I do this with my young children. We’re doing prompts and creating stories for fun; the kids are using gen-AI and understanding how it works. That same attitude of curiosity, practiced safely, should be applied to you as well. Just like you learned the Internet, email, apps on your mobile phone, then social media, and then maybe Web3, now you need to learn this next technology set; there’s no question.
Now, this one has a very simple interface; the tools are quite easy to use, aside from Midjourney. Most of them are just text-based chats. Yes, personal exploration is required for you to stay current, in all things in life, and this one is coming at us quite quickly. I also invested in a continuing education class from an esteemed university; it actually did not cost me much, around USD 600 total, which is a tax write-off, and I paid out of pocket to do it. That’s something I’m willing to do to make sure that I’m current. For those working at companies, you should request that, use your educational credits, and/or ask HR to offer classes. There’s no shortage of classes now, including free ones from Khan Academy, LinkedIn Learning, and beyond. There is no shortage of content to learn from. Those are the ways that needs to happen.
Ross: That’s very sound advice; getting in there and doing it is the only way to learn. You’ve been talking a lot about AI agents in quite a few different contexts.
Let’s take a step back. What’s an AI agent? Why is it important? Then we can dig in from there.
Jeremiah: Yes, great question. What is an AI agent? You might have seen the science fiction movie The Matrix, where there are independent agents, good agents and bad agents. There are some that would help the main character, Neo, and there are quite a few that are antagonists against the main character, Agent Smith in particular, in all of his forms. Those agents, good or bad, operate independently with very little human oversight. They are like living creatures. There’s a term sometimes used for them, Baby Artificial General Intelligence, otherwise known as baby AGI, like infants, because they’re the precursor to intelligence at the regular human level. These tools need little oversight. The easiest way to try them is to use AgentGPT. You can just put that into a search tool, find the site, and try it out with or without a login. There are different variations; you can purchase additional credits.
The most common use cases are things like booking a complicated travel experience. For example, Ross, you travel quite a bit; you know how travel works. But imagine somebody who is short on time or new to travel could say, ‘Book me a trip to San Francisco from Bondi Beach’, and it would list out all the things you need: passports, vaccinations; then it would go find flights, then find hotels. It would do all these things with little human intervention across multiple different sites. Then, what it surprisingly does in some cases: I asked it to help me improve my cardiovascular fitness, and it actually started to code. It started to code an app in Python to track my fitness. I didn’t ask it to do that; it just started generating code, which I grabbed, and if I had those technical skills, I could get it to build the app. It’s doing all those things without any human intervention. I hope that gives you a definition and an example. These are rising at a rapid pace.
Now, there’s another technology set, which you are all quite familiar with, called a foundational model. The most common one is called GPT. The foundational model is trained on human knowledge and intelligence, then it’s tokenized, and it creates new variations, and anticipates what our needs are. Now, those foundational models are starting to also become like autonomous agents. You can see those markets are starting to merge. I did a diagram called the AI tech stack; you can go search for it and find it. The foundational models currently are separated. But having spoken to some of the CEOs of those companies, you can see that they are quickly moving towards an AGI, which means they would all have that.
Long story short, to summarize: autonomous agents are the precursor to Artificial General Intelligence equal to human capability. They’re being developed quite rapidly. They would be living next to us and supporting us. I imagine, Ross, that we would have different autonomous agents, just as we have multiple email accounts or social network accounts, as an example. Ross, you’ll probably have a personal intelligence agent; you’ll also have one for your personal business. If you were working at a company, they would assign one to you and probably take it away from you post-employment. You might have one provided by your healthcare provider that just focuses on that, with a very dedicated set of data that’s regulated, typically, by governments. There might be wacky, fun ones out there as well that do things for personal interests. Right there, I can imagine three to five different personal agents working alongside you; you have a pocket of experts, doctors, MBAs, and geniuses at your disposal, working for you while you sleep.
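[Note: to make the "little human intervention" idea concrete, here is a minimal sketch of the plan-and-execute loop that autonomous agents such as AgentGPT broadly follow. The call_llm helper, the prompts, and the trip example are hypothetical stand-ins for illustration, not any specific product's API.]

```python
# A rough sketch of an autonomous agent's plan-and-execute loop, not any
# particular product. call_llm() is a hypothetical stand-in for whichever
# foundational-model API you use; it returns canned text here so the sketch
# runs standalone.
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (a hosted API, a local Llama 2, etc.)."""
    return "1. Check passport and visas\n2. Find flights\n3. Book a hotel"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    # Step 1: ask the model to break the goal into concrete sub-tasks.
    plan = call_llm(f"Break this goal into numbered steps: {goal}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]

    # Step 2: work through each sub-task, feeding prior results back in as
    # context. A real agent would also call tools (search, booking APIs,
    # code execution) at this point, all without further human input.
    results = []
    for step in steps[:max_steps]:
        results.append(call_llm(
            f"Goal: {goal}\nDone so far: {results}\nNow do this step: {step}"
        ))
    return results

if __name__ == "__main__":
    for output in run_agent("Book me a trip to San Francisco from Bondi Beach"):
        print(output)
```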
Ross: Okay, that’s a compelling vision if we can make it work the way we want it to work. One of the first questions that comes to mind is, again, the interface between the human and the agent. Will this be something where we can just use text or speech to be able to tell it and it interprets it? Will it ask us questions to clarify? How do we make sure that the agent is truly aligned with what we want, does understand our intentions even if we’re not good communicators? How do we get that alignment with the agent and ourselves?
Jeremiah: What I’m going to say now is going to unnerve some people, but others will find it a wonderful solution. Let’s see, Ross, which way you think on this. Having spoken to the leaders who are building these things, two things are going to happen. One is that it’s going to look back through your historical data, which means you will expose all your emails, and it will already find your public social media. You have published quite a few things on your amazing website, including your awesome frameworks. It would already grab that information, and as you allow API access to your personal apps, it would get that too.
Secondly, it would compare that to other people who are like you. You and I have a common friend, Chris Saad, who’s a thought leader in his own right when it comes to technology, and I consider you my very smart peer; we’re similar in many ways when it comes to the business content we produce and think about. From these different data sets, your personal historical data and that of people like you, it can start to anticipate what your needs are and what you’re thinking. By the way, that’s not new; social networks and Google have been doing that for 25 years. Google is ancient, 25 years, and Facebook has been around since 2004, about 19 years. All right, that’s part one. That’s not that new. But we’re going to expose a lot of information.
Part two is where people get a little nervous. In some of the foundational models, they will be listening to and recording everything that you’re doing in real time, with your permission. Some of them will have the microphone on at all times so they can listen to the context of what’s happening. Of course, this needs to be done legally, with rights and permissions, so it can understand what your needs are. Maybe there’s a camera on, so it can understand your facial expressions; I can see you right now as we’re recording and get real-time feedback even though we’re in different parts of the globe. That’s a very important piece of data, and the AI will have that as well, including voice inflections, background noise, and how much sleep you’ve had. The more information you give the AI in real time, the more accurately it will be able to understand the context and predict. Then, finally, you would give it explicit prompts, as you mentioned. Ross, given those three phases I talked about, where do you lean on this: optimistic or pessimistic for that future?
Ross: It completely depends on how it is architected, and the ownership. If this is run by a current tech giant, I would be extremely cautious. If we’re able to build this into a decentralized system where I have a reasonable degree of data ownership or control, then I’m all for it. That’s one of the challenges; I’ve believed so much in decentralized data sovereignty and all of these things for a couple of decades now. We’ve really seen very little progress in the big picture.
I think the promise of what you describe is incredible in terms of how we can amplify ourselves. The challenge is whether we can do this without it being run by tech giants, of whom we can ask whether they really have our interests at heart.
Jeremiah: 100% agreed. That’s a bigger topic; it could fill a whole podcast on its own. In short, I do see this AI movement heading towards centralization. It’s already centralized, aside from some of the open-source models, but what happens is that those open-source platforms become very strong. Even Hugging Face hosts training data, right? That’s already a centralized database in a way. That’s one issue. Big corporations are the ones that can afford to do the training, and that already concentrates it. Whoever gets the most data has the most accurate model, so there’s a race to get the data.
There are ways you can segment your data to make sure it doesn’t get overly shared. But what’s key is the business model. This is where Facebook let us down: their business model is a free product. Now, if these AI agents, as we just discussed, are a premium model and we pay, we know who the actual customer is. The issue is that rich people benefit first; they get compounding benefits, while those in emerging markets who don’t have that money fall behind on the innovation curve. Then we create yet another tiered society. This is why, again, going back to ‘is AI revolutionary?’: in many ways, it’s amplifying and echoing what already exists in society. I just want to make it clear that we shouldn’t cast blame on the tools only; they’re just doing what we have already been doing in society.
Ross: Yes. That’s a great point. The vision you described is compelling. The reality is that even if it is run by tech giants, it is such a compelling proposition that most people will go along for that ride.
Jeremiah: Yes, convenience and price are… When I was a Forrester analyst, we researched privacy. We asked people, ‘Is privacy important to you in the era of Web 2.0 and digital?’ People said, ‘Yes, very important.’ Then we said, ‘How many of you have looked at your security settings?’ Almost none, under 1%. How many have read the Terms of Service, which, of course, is becoming more challenging over time? We can at least digest them now with GPT. But that aside, how many of you are willing to pay for a social network or email? No, no, no, I want free. That’s an issue that we have.
Ross: That’s a lot of advice for individuals, but let’s bring this to enterprise leaders. What is your advice to enterprise leaders in a world where AI is changing the nature of work and the nature of value creation, and what needs to be put in place to understand the value that humans can still bring to this world?
Jeremiah: Ross, that’s such a big question. At a high level, any AI project that’s rolled out needs to solve an existing pain for the enterprise. Look at where there’s a breakage, perhaps in customer care, or a marketing breakage, or a sales breakage; those are where you want to use AI to solve problems, because those are the only programs that will be sustained over three years, and we’re going to need that for it to be successful. Not just skunkworks. There’s been a trend recently in corporate innovation programs where many of them are not separate budgets; in fact, they roll up to an existing P&L, a product team in most cases, where the project can land and be incorporated, versus a skunkworks. The age of skunkworks is over; a lot of those skunkworks projects, those innovation outposts, got destroyed during the pandemic. Now we’re tying it back to business goals. That’s step one.
Step two is having a clear… I’m not even sure. I watched presentations at this Ai4 conference from Deloitte, which wants to sell consulting services around setting up AI centers of excellence. They have a wonderful framework and process, and it was very idealistic about getting all your stakeholders. But at the end of the day, there’s a real challenge here, Ross, because the data is owned by each business unit, and the customer relationship is passed from department to department. It’s not clear who the sovereign data owner is because there are so many people involved. Can there be a single data owner across the enterprise? Is that the CIO? The Chief Digital Officer? The Chief Strategy Officer? The Chief AI Officer, which is now a title, by the way? Even though those roles are supposed to be horizontal across business functions, it’s not clear who that individual is and whether they can even keep the data aligned. That’s the second thing to figure out: data alignment.
Those two things, aligning it to a real business problem and data alignment, are the two biggest things that you need. The third thing is tying purpose to the human side of that. When I think of how enterprises need to engage in this space, we have a mission, which is AI for business and humanity; in some of the projects I’m doing related to enterprise, that is the mission statement. This means you need to be careful about how you communicate this to employees, especially lower-level employees, who are extremely sensitive to the topic of AI; in particular, many entry-level tasks will be automated and replicated by AI because they are repeatable processes. So instilling humanity, from employee to executive, plus the partners, plus your customers and greater society, is required; you have to look at that ring effect of how AI impacts all of those stakeholders. Just as we did for sustainability in many cases, where you had to look at those different rings and how you align them for the organization, that same process needs to happen for AI.
Ross: That’s fantastic! I just have to say, it was a big question and that was a fantastic answer in terms of having the value and the intent. The data alignment issue, the way you’ve raised it, is really interesting, and it’s something that is coming to the fore in the world of AI. I love that you’ve ended with the focus on humanity, which has to be at the center. Where should people go to find out more about your work?
Jeremiah: It’s been a delight to spend time with you, Ross. You’ve asked such great questions. I’m available on most social channels as my first initial and last name, which is JOwyang. I also have a blog called Web Strategist and a newsletter, and I’m available across those multiple channels.
Ross: Fantastic! Thank you so much for your time and insights. It’s been a great conversation.
Jeremiah: I’m so grateful for you. Thank you.