“Generative AI is the first technology with an almost natural propensity to build a symbiotic relationship with us. But symbiosis isn’t always mutualistic; it can be parasitic, where AI benefits to the detriment of humans. How we deploy AI will determine which path we take.”
– Alexandra Diening
“AI provides dual affordances: it can automate our work or augment our abilities. The key challenge is deciding where to draw the line. In low-stakes tasks, automation makes sense. But in high-stakes decision-making, human intuition is irreplaceable.”
– Mohammad Hossein Jarrahi
“We talk a lot about lifelong learning, but we also need to embrace lifelong forgetting. If we keep piling new knowledge on top of outdated thinking, we won’t evolve. The future isn’t about ‘us vs. them’; it’s about humans and AI co-evolving together.”
– Erica Orange
“AI isn’t just changing how we work; it’s changing what it means to be human. We are interlacing with technology more deeply than ever, and in the future, AI won’t just be something we use; it will be something we integrate into ourselves.”
– Pedro Uria Recio

About Alexandra Diening, Mohammad Hossein Jarrahi, Erica Orange, & Pedro Uria Recio
Alexandra Diening is Co-founder & Executive Chair of the Human-AI Symbiosis Alliance. She has held a range of senior executive roles, including Global Head of Research & Insights at EPAM Systems. Over her career she has helped transform over 150 digital innovation ideas into products, brands, and business models that have attracted $120 million in funding. She holds a PhD in cyberpsychology and is the author of Decoding Empathy: An Executive’s Blueprint for Building Human-Centric AI and A Strategy for Human-AI Symbiosis.
Mohammad Hossein Jarrahi is Associate Professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill. He has won numerous awards for his teaching and his papers, including for his article “Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making.” His wide-ranging research spans many aspects of the social and organizational implications of information and communication technologies.
Erica Orange is a futurist, speaker, and author, and Executive Vice President and Chief Operating Officer of leading futurist consulting firm The Future Hunters. She has spoken at TEDx, keynoted over 250 conferences around the world, and been featured in news outlets including Wired, NPR, Time, Bloomberg, and CBS This Morning. Her book AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth is out in September 2024.
Pedro Uria Recio is a highly experienced analytics and AI executive. He was until recently Chief Analytics and AI Officer at True Corporation, Thailand’s leading telecom company, and is about to announce his next position. He is also the author of the recently launched book Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity. He was previously a consultant at McKinsey and is on the Forbes Tech Council.
What you will learn
- Understanding human-AI symbiosis and its impact
- Why AI can be mutualistic or parasitic
- The crucial role of human intuition in AI decision-making
- How automation and augmentation shape the future of work
- Rethinking AI deployment beyond traditional software models
- The need for lifelong forgetting to adapt to AI advancements
- How AI could transform humanity through deep integration
Transcript
Ross Dawson: So, you’ve recently established the Human-AI Symbiosis Alliance, and that sounds very, very interesting. But before we dig into that, I’d like to hear a bit of the backstory. How did you come to be on this journey?
Alexandra Diening: It’s a long journey. I’ll try to make it short and interesting.
I entered the world of AI almost two decades ago through a very unconventional path: neuroscience. I’m a neuroscientist by training, and my focus was on understanding how the brain works. Naturally, if you want to process all the neuroscience data, you can’t do it alone. You inevitably have to touch upon AI. That was my gateway into the field.
As I started working with AI, I gained a basic understanding of how it operates from a technical perspective as a scientific discipline. At that time, there weren’t many people working in this kind of AI, so the industry naturally pulled me in. I started working in the business application of AI, progressively shifting from neuroscience to AI deployment within a business context. I worked with Fortune 500 companies across life sciences, retail, finance, and many more industries.
That was my entry, my “chapter one,” into the world of AI. But as I began deploying AI within real businesses, I started noticing patterns. Sometimes AI projects succeeded, and sometimes they failed. I realized that success was most often achieved when we doubled down on human-centricity. That was an easy concept for me to grasp because cognitive science is my foundation.
This human-centric approach became even more important with the emergence of generative AI. AI was no longer just in the background, crunching data and steering our decisions without us realizing it. AI has been around for quite some time, but suddenly, we could interact with it directly, almost like an agent. We could communicate with it using our language. It could capture emotions, build relationships with us, and augment our capabilities. It was no longer just a tool; it was becoming a social-technological actor.
This realization led us to our hypothesis: generative AI is the first technology with an almost natural, almost default propensity to form a symbiotic relationship with humans. It’s not just a tool that does or doesn’t do something; it’s a mutual interaction.
The term “symbiosis” sounds very romantic, particularly because of the way pop culture has shaped our understanding of it. But in nature, symbiosis manifests across a spectrum of outcomes. It can be highly positive and mutualistic, where both parties benefit: humans improve, and AI gets better. However, it can also be parasitic, where one party benefits to the detriment of the other.
This pattern became clear to me, especially as generative AI adoption increased. I saw the emergence of what I call “parasitic AI,” and that realization started costing me sleep. I was no longer proud of the AI world we were building.
At the time, I was working for a multibillion-dollar tech company, and I doubled down on advocating for responsible AI and human-centric practices. But even with all the support in the world, I quickly realized that corporate agendas and business impediments limited the impact I could make. That’s why we established the Human-AI Symbiosis Alliance.
Our goal is twofold: first, to educate people that AI can be parasitic. It’s not just a happy story, and it’s not simply about AI taking over; it’s about how we deploy it. Second, we want to teach and empower companies to steer AI development away from parasitism and toward mutualistic AI.
Ross: We are deeply immersed in digital environments, and these systems are becoming increasingly human-like. You mentioned the idea of positive symbiosis. Achieving that requires well-designed systems and an understanding of how humans behave. What do you see as the foundational leverage points that can shift us toward a positive and constructive symbiosis between humans and AI?
Alexandra: The most important realization is that AI is not a living entity. It’s just a large dataset. It doesn’t have consciousness, intent, or agency. Instead of seeing AI as something that will inherently harm us, we need to take responsibility for how we deploy it.
Of course, we need to ensure AI is properly regulated, that it is trained on unbiased data, and that we establish appropriate guardrails. But there’s another chapter of the conversation that very few people talk about, and it keeps me up at night: the way we deploy AI.
Deploying AI in a way that doesn’t harm individuals or companies is critical. No company wants to build parasitic AI within its environment. The main issue in deployment is literacy: many software engineering companies are now venturing into AI without realizing that AI development is fundamentally different from traditional software development.
You cannot deploy AI the same way you deploy web pages or apps. It has a completely different lifecycle, set of activities, and expertise requirements. Raising awareness about this difference is crucial.
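As a minimal sketch of that lifecycle difference (the function names and drift threshold below are illustrative assumptions, not any specific MLOps toolkit): a shipped web page stays shipped, while a deployed model has to be continuously monitored and refreshed as the world drifts away from its training data.

```python
import random

def drift_score(live_batch, reference) -> float:
    """Stand-in for a distribution-shift measure between live inputs
    and the data the model was trained on."""
    return random.uniform(0.0, 0.3)

def retrain(model, live_batch):
    """Stand-in for a retraining job triggered by detected drift."""
    print("drift detected: retraining on fresh data...")
    return model

def serve(model, production_batches, reference, drift_limit=0.2):
    """Unlike a static app release, serving a model is an ongoing loop:
    monitor live data, detect drift, retrain, redeploy."""
    for batch in production_batches:
        if drift_score(batch, reference) > drift_limit:
            model = retrain(model, batch)
    return model

# Toy run: three "production batches" checked against a reference sample.
serve(model=object(), production_batches=[[1], [2], [3]], reference=[0])
```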
Beyond that, we need frameworks: structured processes that guide responsible AI deployment. We also need to recognize that AI is not just a technology we implement; it’s a symbiotic relationship we must architect. That means not only enhancing employee efficiency in the short term but also ensuring that AI doesn’t erode human skills over time. Otherwise, we risk creating a workforce that is highly efficient but, in the long run, less capable.
Another crucial element is measurement. The traditional ways we measure technology success, primarily through productivity and efficiency, are outdated for AI. We need to consider additional factors, such as how AI impacts innovation, employee well-being, and a company’s brand relationships. Instead of being shortsighted, we need a long-term focus on AI’s broader impact.
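One way to picture such broader measurement, as a sketch only (the five dimensions and their equal weighting are assumptions drawn from this conversation, not an established framework):

```python
from dataclasses import dataclass

# Hypothetical scorecard: dimension names and equal weighting are
# illustrative assumptions, not an established measurement standard.

@dataclass
class AIDeploymentScorecard:
    productivity_gain: float   # classic efficiency measure, 0..1
    innovation_rate: float     # normalized rate of new ideas shipped
    employee_wellbeing: float  # survey index, 0..1
    skill_retention: float     # are human skills eroding over time? 0..1
    brand_trust: float         # customer/brand relationship index, 0..1

    def overall(self) -> float:
        """Efficiency is one of five equally weighted dimensions,
        not the sole measure of success."""
        dims = (self.productivity_gain, self.innovation_rate,
                self.employee_wellbeing, self.skill_retention,
                self.brand_trust)
        return sum(dims) / len(dims)

print(AIDeploymentScorecard(0.9, 0.4, 0.6, 0.3, 0.7).overall())  # ≈ 0.58
```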
Finally, AI brings entirely new risks, many of which are unprecedented. A very personal and tragic example is the case of a teenager who took his own life after interacting with an AI chatbot.
When I used to warn clients about the importance of setting the right level of anthropomorphism and properly guarding AI to prevent harm, it often felt abstract. But now, unfortunately, we have a very tangible example of how things can go wrong.
The key takeaway is that building a responsible, mutualistic AI requires expertise, proper architectural planning, accurate measurement frameworks, and a heightened awareness of risks. If we get those things right, we can steer AI away from parasitism and toward a future where it genuinely benefits society.
Ross: In this section, we hear from Mohammad Hossein Jarrahi, Associate Professor at the University of North Carolina at Chapel Hill, from Episode 62.
Ross: So, you have been focusing on human-AI symbiosis. I’d love to hear how you came to believe this is where you should be focusing your energy and attention.
Mohammad Hossein Jarrahi: It was 2017, and I was stuck in traffic. An IBM engineer was being interviewed on NPR, and they were asking him a bunch of questions about the future of AI.
This was before the rise of ChatGPT and what I would call the consumerization of AI. As I was sitting in traffic with not much to do, something clicked. The engineer was providing examples that fit into three categories: uncertainty, complexity, and equivocality.
As soon as I got home, I immediately started sketching out an article and finished writing it within two weeks. The idea was that we, as humans, have very unique capabilities, but we tend to underestimate them. At the same time, the smart technologies we see today (at that time, primarily powered by deep learning) are inherently different from previous information technologies.
This means we need a completely different paradigm to understand how humans and AI can work together. AI isn’t going to make us extinct, but we shouldn’t treat it as just another infrastructure technology, like Skype or other traditional communication tools.
That’s when I realized that the term human-AI symbiosis, which comes from biology, was a perfect way to describe how two sources of intelligence can work together.
Ross: That concept is very much aligned with my work and the people I engage with. The key question is, how do we make it happen? There are quite a few people exploring this path, but we don’t yet have all the answers.
What are some of the pathways that could move us toward effective human-AI symbiosis?
Mohammad: It really depends on the context. That’s the crux of the issue I’ve been exploring in my articles.
The question of how much we can delegate to AI isn’t black and white. It exists on a spectrum between automation and augmentation. AI provides dual affordances: it can automate tasks or augment human capabilities.
Automation means AI performs tasks autonomously with minimal supervision. Augmentation, on the other hand, keeps humans deeply involved, making them more efficient and effective.
The balance between automation and augmentation depends on the context:
- In low-stakes decision-making, we see more automation. Many mundane tasks can be offloaded to algorithms.
- In high-stakes decision-making, such as in medicine, human experts need to stay in the loop for accountability reasons. These scenarios require more augmentation than automation, as the sketch after this list illustrates.
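A minimal sketch of that stakes-based split (the 1-5 stakes scale, the reversibility flag, and the threshold are assumptions made for this illustration):

```python
from enum import Enum

class Mode(Enum):
    AUTOMATE = "automate"  # AI acts with minimal supervision
    AUGMENT = "augment"    # AI assists; a human stays in the loop

def choose_mode(stakes: int, reversible: bool) -> Mode:
    """Low-stakes, reversible decisions can be offloaded to algorithms;
    high-stakes ones keep human experts in the loop for accountability."""
    if stakes <= 2 and reversible:
        return Mode.AUTOMATE
    return Mode.AUGMENT

print(choose_mode(stakes=1, reversible=True))   # Mode.AUTOMATE, e.g. email triage
print(choose_mode(stakes=5, reversible=False))  # Mode.AUGMENT, e.g. medical diagnosis
```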
Machines are excellent at handling tasks that are repetitive, data-centric, and do not require intuition or emotional intelligence. However, humans excel at exception handling: making nuanced judgment calls.
For example, consider loan applications:
- AI can efficiently process thousands of applications at once, using data to determine approvals.
- However, if an application is denied, a human might review it and see contextual factors; perhaps the applicant had financial troubles in the past but has shown stability in recent years. This kind of intuitive decision-making is something AI struggles with, as the sketch below shows.
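A minimal sketch of this exception-handling pattern (every name and threshold here, LoanApplication, score_application, approve_at, is a hypothetical stand-in, not a real underwriting system):

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    credit_score: int            # e.g., a 300-850 style score
    years_stable_income: float

def score_application(app: LoanApplication) -> float:
    """Stand-in for a trained model's approval probability."""
    base = app.credit_score / 850
    bonus = min(app.years_stable_income / 10, 0.2)  # recent stability helps
    return min(base + bonus, 1.0)

def triage(app: LoanApplication, approve_at: float = 0.85) -> str:
    """Automate clear approvals at scale; route everything else to a
    human reviewer, who can weigh context the model misses."""
    if score_application(app) >= approve_at:
        return "auto-approve"
    return "human-review"

print(triage(LoanApplication("a-001", 790, 6.0)))  # auto-approve
print(triage(LoanApplication("a-002", 520, 4.5)))  # human-review
```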
That’s why, when it comes to organizational decision-making, AI shouldn’t be the sole authority. Stakeholder interests are often in conflict: what benefits shareholders may harm employees or customers. AI tends to optimize for one metric, but a human leader must strike a balance among competing priorities.
Ross: I think a lot about the architecture of AI integration.
Keeping humans in the loop is important, but where should humans be involved? That depends on the organization, decision type, and context.
Are there structured ways we can design points of human involvement, whether in exceptions, approvals, or shaping judgment?
Mohammad: The simplest answer is that humans should be involved whenever intuition is required.
In my article on human-AI symbiosis, I described two decision-making styles:
- Analytical decision-making: data-driven and highly structured. AI has largely mastered this area.
- Intuition-based decision-making: often subconscious, difficult to quantify, and essential in complex scenarios.
For example, in algorithmic management, AI can assist managers, but the higher you go in an organization, the more important intuition becomes. Research in management and psychology has shown that holistic decision-making, which accounts for multiple stakeholders, relies heavily on intuition.
If AI only optimizes decisions based on data, it risks missing broader considerations, such as company culture, long-term brand impact, or ethical concerns. That’s why judgment calls must remain in human hands.
Ross: Next, we hear from Erica Orange, futurist and author of AI + The New Human Frontier, from Episode 59.
Ross: What will allow us to master AI and ensure it benefits humanity?
Erica Orange: That’s such a great question. I often talk about the difference between lifelong learning and lifelong forgetting.
It’s common to hear that we should all be lifelong learners, constantly acquiring new knowledge to stay relevant. But if we keep layering new information on top of outdated thinking, we won’t truly evolve.
We must also become lifelong forgetters: letting go of outdated assumptions, biases, and ways of working.
I often tell my clients and audiences to identify one or two things they’re holding onto that no longer serve them. It could be a belief, a work habit, or an outdated mental model. The faster we embrace forgetting, the more space we free up for new ways of thinking.
Another key point is to embrace the “AND” mindset instead of thinking in polarized extremes.
We live in a world of hyper-polarization: social media echo chambers and tribalism reinforce “us vs. them” thinking. But the future isn’t either-or; it’s about “and.”
For example, when discussing humans and AI, there’s often fear of an “AI takeover.” But AI isn’t replacing us; it’s collaborating with us. The reality is one of coexistence and co-evolution.
The same applies to progress and stagnation, chaos and creativity, imagination and inertia: these forces always exist together.
Ross: Finally, we hear from Pedro Uria Recio, author of Machines of Tomorrow, from Episode 50.
Pedro Uria Recio: In Machines of Tomorrow, I explore AI through human history.
From ancient aspirations of creating human-like machines to today’s generative AI revolution, AI has always been intertwined with our progress.
One of the book’s key concepts is interlacing: the idea that humans and AI will become more intimately connected.
Right now, we use smartphones for everything. The fact that they exist outside our bodies is almost incidental; in the future, they will be inside us.
Brain-computer interfaces, robotics, and AI-driven medicine will interlace humans and AI, potentially transforming humanity into a new species.
This shift won’t happen overnight, but AI will be central to our evolution.
Ross: That wraps up this episode. Thank you to all our guests for their incredible insights on human-AI symbiosis.