Episodes

  • The Science of Disagreeing Better (ft. author Julia Minson)
    Apr 14 2026

    We live in a moment where disagreement feels dangerous.

    Politics is polarized. Social media amplifies outrage. Inside companies, dissent is often muted — not because people agree, but because they assume speaking up will damage relationships or reputations.

    But what if most of that fear is wrong?

    Julia Minson, decision scientist at Harvard Kennedy School, studies the psychology of disagreement. Her research on “conversational receptiveness” reveals something counterintuitive: people systematically overestimate how much disagreement will harm a relationship and underestimate how much thoughtful dissent earns respect.

    That miscalculation has consequences.

    When leaders avoid disagreement, bad ideas survive. When teams confuse persuasion with understanding, trust erodes. When we treat conflict as a character flaw rather than a cognitive process, we weaken our institutions.

    In this episode, we explore why humans are wired to assume they’re objectively right, how subtle language shifts can dramatically increase receptiveness, and why polarization may be less about ideology and more about judgment errors.

    And in an era where AI systems increasingly summarize, mediate, and even “assist” in conflict, what happens if our tools inherit our biases? And if healthy disagreement is essential to good decision-making, how do we preserve it inside organizations that prize alignment over friction?

    This isn’t a conversation about compromise.

    It’s about whether we still know how to disagree in ways that make us smarter.

    27 m
  • The Storytelling Revolution: Why Humanity's Earliest Innovation Still Matters (ft. author Kevin Ashton)
    Mar 26 2026

    In this episode of FUTUREPROOF., we sit down with Kevin Ashton—the technologist who coined the term Internet of Things and helped usher in the smartphone era—to talk about something even more foundational than AI.

    Stories.

    In his new book, The Story of Stories, Kevin traces a million-year arc—from the first fires where early humans gathered, to the invention of writing and printing, to electricity, electronics, and the smartphone. His thesis is provocative: language did not create stories. Stories created language.

    Every major storytelling revolution has followed a simple pattern: it increases the number of people who can tell stories—and the number of people who can hear them.

    For the first time in history, anyone can tell stories to everyone.

    But there’s a catch.

AI cannot understand meaning, yet algorithms now determine which stories we see — amplifying bias, shaping belief, and influencing behavior at scale. The power of storytelling has never been more democratized — or more intermediated.

    We explore:

    • Why storytelling is innate, not cultural
    • The eight great revolutions of human communication
    • Why machines can generate content but not meaning
    • The risks of algorithmic amplification
    • The role of critical thinking in a post-scarcity information world
    • Whether the next storytelling revolution is technological—or cognitive

    This conversation isn’t about nostalgia.
    It’s about understanding the oldest human technology in a moment when the newest one is accelerating everything.

    If we think in stories—and we always will—the question becomes:
    Who shapes the stories that shape us?

    24 m
  • The Workforce Is *Not* AI-Ready (ft. Ben Tasker, AI education leader)
    Mar 31 2026

    Everyone says they’re “AI-first.”

    Very few organizations are AI-ready.

    In this episode of FUTUREPROOF., we sit down with Ben Tasker, who is leading one of the largest workforce-scale AI education efforts in the public utility sector — upskilling 36,000 employees while advising global organizations on certification and governance.

    Ben calls this moment the “AI Between Times.” The tools are evolving rapidly, but the AI-driven economy they promise hasn’t fully stabilized. That gap creates risk — and opportunity.

    We unpack what actually breaks when companies try to move beyond pilot projects:

    • Why buying AI tools is easy — and building internal capability isn’t
    • The tension between augmentation and displacement
    • What the 70/30 rule means in cost-constrained environments
    • Why governance must precede implementation
    • And how AI fluency is quietly becoming a new form of institutional power

    Ben argues that AI strategy lives or dies at the human level. Not because technology isn’t powerful, but because incentives, culture, and leadership determine whether that power compounds or fractures an organization.

    This conversation isn’t about hype cycles.

    It’s about whether institutions can transform fast enough — without breaking trust in the process.

    Because the future of work won’t be defined by who bought the best tools.

    It will be defined by who prepared their people.

    23 m
  • GLP-1s, AI, and the New Health Economy (ft. Rajiv Leventhal, health analyst)
    Mar 10 2026

    Healthcare is colliding with technology faster than most people realize.

    In this episode of FUTUREPROOF., I sit down with analyst Rajiv Leventhal, who covers the intersection of healthcare, pharma, and tech, to unpack three forces reshaping the system at once: AI, GLP-1 weight loss drugs, and the mental health impact of digital life.

    We start with AI as a health tool. Nearly a quarter of ChatGPT’s global weekly users now use it for health-related prompts. That’s not a niche behavior. It’s a mainstream one. The question isn’t whether people will turn to AI for medical guidance. They already are.

    The real tension is trust and liability. General-purpose AI tools aren’t bound by HIPAA in the same way healthcare providers are. Yet they’re increasingly acting as digital concierges — answering late-night pediatric questions, explaining lab results, and helping people prepare for appointments in a system where access is strained.

    And that system is strained. Even in major cities, patients can wait months — sometimes a year — to see specialists. When access gaps widen, alternative tools step in. AI isn’t replacing doctors. It’s filling holes.

    We then turn to GLP-1 drugs and the weight-loss explosion. What began as diabetes treatment became a cultural and commercial wave driven by social media, FDA approvals, and aggressive advertising. But beneath the surface is a regulatory gray market of compounded versions, patent battles, and telehealth platforms monetizing demand.

    Finally, we tackle social media’s impact on mental health. The evidence linking heavy use — especially among teens — to anxiety and depression is growing, even if causation remains complex. Is this a regulation problem? A parental problem? A public health issue? Or another example of technology moving faster than governance?

    This episode isn’t about hype.

    It’s about what happens when broken systems create openings — and tech companies move into the space.

    Because when trust erodes and access declines, people don’t wait.

    They improvise.

    27 m
  • Less DEI, more FAIRness (ft. author Lily Zheng)
    Feb 24 2026

    For years, organizations have poured millions into DEI training.

    And yet most employees still report discrimination. Promotion gaps persist. Trust remains uneven.

    So what’s going on?

    In this episode of FUTUREPROOF., I sit down with Lily Zheng — strategist and author of Fixing Fairness — to interrogate a hard truth: much of what we call DEI doesn’t work. Not because fairness is unpopular. Not because inclusion is misguided. But because we keep trying to fix people instead of fixing systems.

    Lily introduces the FAIR framework — Fairness, Access, Inclusion, and Representation — and argues that the real leverage isn’t in workshops. It’s in incentives, evaluation criteria, hiring processes, and executive accountability.

    We explore:

    • Why standalone DEI training can backfire
    • The “missing stair” metaphor — and how organizations normalize dysfunction
    • The Cobra Effect of poorly designed diversity incentives
    • Why representation is ultimately about trust, not optics
    • What meritocracy gets wrong about itself
    • And why rebranding DEI won’t solve structural problems

    At a moment when DEI faces political backlash and corporate retrenchment, Lily makes a counterintuitive claim: the future of workplace inclusion will be more rigorous, more measured, and more accountable — not less.

    This is a systems conversation.

    Not about slogans.
    Not about performative commitments.
    About incentives, power, and what actually moves outcomes.

    If you care about leadership, governance, and the second-order effects of institutional design, this episode will challenge you.

    32 m
  • Soft Skills Are the Hard Advantage in the AI Era (ft. Bushra Khan)
    Feb 17 2026

    For years, we treated emotional intelligence like a cultural add-on.

    Nice to have.
    Important, maybe.
    But not central to performance.

    That framing doesn’t survive the AI era.

In this episode of FUTUREPROOF., I sit down with Dr. Bushra Khan, founder of Leading with BK, to examine what actually differentiates leaders as automation closes the knowledge gap. When AI can draft, analyze, summarize, and even simulate difficult conversations, the advantage shifts. It moves from what you know to how you show up.

    Bushra has spent over 15 years helping leaders translate emotional intelligence from buzzword into operating system. We talk about why “soft skills” should be understood as strategic skills, how negativity bias quietly distorts leadership judgment, and why loneliness inside high-performing teams is less about remote work and more about emotional avoidance.

    We also explore some uncomfortable tensions:

    • If AI amplifies leaders, what exactly is it amplifying?
    • When does candor become bluntness — and erode trust instead of building it?
    • Why do leaders underestimate the emotional consequences of automation?
    • What does bravery look like when decisions are both rational and painful?

    Bushra argues that most organizations are still trying to fix people instead of fixing environments. They invest in workshops while ignoring incentives. They push productivity while neglecting psychological safety. They assume proximity equals connection.

    But as AI takes over more technical tasks, influence becomes the real differentiator. And influence is emotional before it is analytical.

    This conversation isn’t about positivity or platitudes. It’s about leadership under pressure — layoffs, automation, rapid skills shifts — and what it takes to signal trust and authority through noise.

    Because the future of work won’t just test our systems.

    It will test our emotional maturity.

    28 m
  • The ROI of Not Being a Robot (ft. author & VaynerX exec Claude Silver)
    Feb 3 2026

    What if the most undervalued leadership skill in the AI era isn’t technical fluency—but emotional presence?

    This episode of FUTUREPROOF. features Claude Silver, the world’s first Chief Heart Officer and the No. 2 executive at VaynerX, joining the show to unpack why authenticity, empathy, and belonging are no longer “nice-to-haves,” but strategic advantages.

    Claude’s 2025 book, Be Yourself at Work, challenges the long-standing belief that professionalism requires emotional distance. Instead, she argues that in a world defined by AI, automation, and burnout, the leaders who win are the ones who lead with heart—intentionally, skillfully, and without performative fluff.

    We explore:

    • Why “authenticity” has been misunderstood—and how to practice it without oversharing or losing authority
    • What leading with heart actually looks like inside a 2,000-person global organization
    • How emotional skills become power skills as AI absorbs more technical work
    • The difference between fitting in and true belonging—and why that gap is costing companies talent and trust
    • How leaders can balance emotional bravery with emotional efficiency in an always-on, high-pressure world

    This is a conversation about leadership after the old playbook breaks—and what replaces it when humanity becomes the edge.

    25 m
  • How People Endure When Systems Collapse (ft. Trevor Reed, author & Russia detainee)
    Feb 10 2026

    This episode of FUTUREPROOF. is different.

My guest is Trevor Reed, a former U.S. Marine who was wrongfully detained and abused in a Russian penal colony for nearly three years and freed in a high-profile prisoner exchange in 2022 — and who then made a decision few could comprehend: he voluntarily went to Ukraine to fight against the same system that imprisoned him.

    In this conversation, Trevor reflects on what captivity does to the human mind, how survival reshapes your definition of justice, and why freedom—real freedom—can’t be taken for granted once you’ve lost it.

    We talk about:

    • What daily life inside a Russian penal colony is actually like—and how close he came to dying there
    • The mental discipline required to survive prolonged isolation, hunger, and uncertainty
    • The emotional toll of being turned into a geopolitical bargaining chip
    • Why revenge eventually gave way to a deeper definition of justice
    • The surreal contrast between everyday life and active war zones in Ukraine
    • Being critically wounded by a landmine—and what it means to survive twice
    • How his understanding of freedom, responsibility, and humanity has fundamentally changed

    This is not a conversation about politics.
    It’s a conversation about power, resilience, moral injury, and what it means to remain human when systems fail you.

    Trevor’s memoir, Retribution: A Former US Marine's Harrowing Journey from Wrongful Imprisonment in Russia to the Front Lines of the Ukrainian War, is not an easy read—but it is an important one. And this conversation is not comfortable—but it is necessary.

    25 m