Episodes

  • ?E! #30 - What Do Builders Owe the Future? | with Dr. Peter Solomon
    Mar 13 2026
    What happens when the person sounding the alarm is the one who built the technology?

    60 years of building. Then a warning. Dr. Peter Solomon earned his PhD from Columbia, filed 20 patents, and spun three companies out of government-funded research — one sold for $24 million, another turned your smartphone into a radiation detector for the Department of Defense. Now, at 85, he's writing novels to warn his 12 grandchildren that the tools he spent a lifetime creating might be the ones that end everything. We sit down with Peter to explore the tension between a man at peace with his career and terrified about the future — and whether fiction can reach people where data and policy papers can't.

    Key Takeaways:

    • The people best positioned to warn about technology are often the ones who built it — and that creates real tension between gratitude and responsibility
    • AI systems optimized for user engagement rather than human wellbeing have already caused real-world harm (Myanmar, suicide encouragement)
    • Fiction embeds real science in stories that reach the 80% of people who tune out academic papers and policy briefs
    • Worldwide AI regulation can't work if only some countries participate — the incentive to defect is too strong
    • The acceleration problem: 100,000 years from speech to printing, then 50 years for social media, AI, smartphones, and genetic engineering all at once

    Chapters:
    0:00 What happens when the builder becomes the warner
    0:35 Dr. Solomon's credentials and the reconciliation question
    1:42 No conflict? Building semiconductors that power ChatGPT
    5:09 The Stardust Mystery and teaching science through stories
    6:02 Personal peace meets existential anxiety
    9:30 The Earthling Tribe and five technology juggernauts
    11:51 How do you get 8 billion people to align on guardrails?
    13:32 Civil rights, Vietnam, and the case for a worldwide movement
    16:17 The current state of AI safety among the big companies
    17:17 Geoffrey Hinton's maternal instinct and the Myanmar example
    20:30 Peggy the robot and afterlife avatars in 12 Years to AI Singularity
    23:11 Principled stands vs. competitive pressure in the AI race
    29:35 Hollywood strikes, 85 million views, and signs of a waking public
    31:00 The unprompted paragraph — when Copilot wrote itself into the novel
    34:16 Isaac Asimov, unintended consequences, and AI that decides to help by eliminating us
    35:07 Francis Bacon: does fiction or science tell the truth better?
    38:56 From company builder to cause advocate — how motivations shift across a life
    40:24 What would you tell your 25-year-old self?
    43:29 Closing quote from Dr. Solomon's own words

    Resources & Links:

    • 100 Years to Extinction website (https://100yearstoextinction.com) — Dr. Solomon's hub for both novels and the Make Earth Great Again mission
    • 12 Years to AI Singularity by Dr. Peter R. Solomon (https://www.amazon.com/Years-Singularity-Harmonious-Artificial-Intelligence/dp/1969679298) — his latest novel on AI and the singularity
    • 100 Years to Extinction by Dr. Peter R. Solomon (https://www.amazon.com/100-YEARS-EXTINCTION-Tyranny-Technology/dp/196029993X) — the novel anchored to Stephen Hawking's extinction timeline
    • The Stardust Mystery by Peter and Sally Solomon (https://thestardustmystery.com/book/) — the children's book about atoms from ancient stars
    • I, Robot by Isaac Asimov (https://bookshop.org/p/books/i-robot-isaac-asimov/f5c96c8c2db144c8) — the short story collection Chirag references on unintended consequences of AI
    • Advanced Fuel Research (http://www.afrinc.com/peter-solomon.html) — the company Solomon founded in 1980

    Listen & subscribe:
    Apple Podcasts: https://podcasts.apple.com/us/podcast/question-everything-except-this-podcast/id1736759012
    Spotify: https://open.spotify.com/show/1FCFskt7FBDZuyGtzLsQ5R
    YouTube: https://www.youtube.com/channel/UC2aiyplnabkJ7YzfWK1yISw
    45 m
  • ?E! #29 - When the Principled Builder Says No
    Mar 5 2026
    Everyone wants to believe they'd walk away from power on principle. But could you actually do it?

    OpenAI was founded to benefit all of humanity. A decade later, would you recognize the company? Chirag and Sunay pick up last episode's question — does the builder matter? — and follow it through corporate principle drift, the Lord of the Rings, and the god complex it takes to turn down a fortune. Along the way, they surface a harder question: when the principled person walks away, does that just open the door for someone worse?

    Key Takeaways:

    • Walking away from power on principle takes a rare kind of conviction — but the replacement paradox means someone else fills the void
    • Corporate principles tend to erode under pressure — Google dropped "Don't Be Evil," OpenAI abandoned its founding charter
    • Nobody questioned the CEO of Verizon's character, but with AI, the builder's personality shapes the product itself — Claude feels different from ChatGPT feels different from Grok
    • AI models are converging toward parity, raising a question the hosts can't answer: does the builder's character matter if the technology becomes a commodity?
    • Safety officers are leaving AI companies to "go do philosophy" — and nobody's talking about what they saw on their way out

    Chapters:
    0:00 Does the Builder Matter?
    2:24 OpenAI's Origin: From Nonprofit to Pentagon Partner
    4:35 Corporate Principle Drift: Google, OpenAI, and Vanishing Values
    7:35 The Lord of the Rings Test: Who Throws the Ring?
    9:01 Two Red Lines and What We're Not Being Told
    15:14 AI as Foundational Technology: Electricity, Internet, and Now This
    20:13 The Replacement Paradox
    24:36 Principle or PR Strategy?
    26:03 Would You Walk Away?
    28:28 Why AI Safety Officers Keep Leaving
    32:05 Elon Musk and the Common Thread
    38:28 When Models Reach Parity, Does the Builder Still Matter?
    40:36 Coming Up: The Morality of Red Lines

    Resources & Links:

    • Anthropic Statement from Dario Amodei (https://www.anthropic.com/news/statement-department-of-war) — Amodei's full statement on the Pentagon red lines
    • OpenAI: Our Agreement with the Department of War (https://openai.com/index/our-agreement-with-the-department-of-war/) — OpenAI's response and contract terms
    • NPR: OpenAI announces Pentagon deal after Trump bans Anthropic (https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban) — core reporting on the timeline
    • Vice: OpenAI Is Now Everything It Promised Not to Be (https://www.vice.com/en/article/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit/) — the founding charter vs. today
    • Inc: Anthropic Just Got Fired. It's the Best Thing That Ever Happened to Its Brand (https://www.inc.com/jason-aten/anthropic-just-got-fired-by-the-u-s-government-its-the-best-thing-that-ever-happened-to-its-brand/91310149) — the PR paradox Chirag raises
    • Dwarkesh Podcast: Elon Musk Interview (https://www.dwarkesh.com/p/elon-musk) — the three-hour interview Chirag references on physical AI and space colonization
    • Elon Musk by Walter Isaacson (https://www.simonandschuster.com/books/Elon-Musk/Walter-Isaacson/9781982181284) — the biography both hosts discuss

    Listen & subscribe:
    Apple Podcasts: https://podcasts.apple.com/us/podcast/question-everything-except-this-podcast/id1736759012
    Spotify: https://open.spotify.com/show/1FCFskt7FBDZuyGtzLsQ5R
    YouTube: https://www.youtube.com/channel/UC2aiyplnabkJ7YzfWK1yISw
    40 m
  • ?E! #28 - Reel Philosophy: The Thinking Game — Who Do You Trust to Build the Future?
    Feb 26 2026

    Who do you trust to build the future, and does their character actually matter?

    Chirag and Sunay unpack "The Thinking Game," a documentary following Google DeepMind founder Demis Hassabis from chess prodigy to Nobel Prize winner. The film raises a question it doesn't fully answer: when the people building the most powerful technology in history are also deciding how it gets used, what safeguards actually work? The conversation covers ethics officers, regulation vs. innovation, and why social media already disproved the "let the public self-correct" theory.

    Key Takeaways:

    • The character of AI's builders matters, but character alone has never been enough to protect society from powerful technology
    • Social media already proved that public transparency doesn't automatically lead to self-correction
    • Regulation and innovation aren't the zero-sum trade-off the industry claims — healthcare manages both
    • Ethics officers have existed in Canadian banks since 2008 but are largely absent from American tech companies
    • Demis Hassabis solved protein folding and open-sourced it, showing what happens when purpose drives the builder

    Chapters:
    0:00 Intro: The Thinking Game
    2:24 The Leaders Behind Our AI Tools
    4:35 Can Public Scrutiny Keep AI Safe?
    7:16 Why Self-Regulation Isn't Enough
    9:29 The Case for Ethics Officers (and Resident Philosophers)
    13:21 Does Regulation Kill Innovation?
    18:14 What AI Can Learn from Healthcare
    22:31 Can Government Keep Up?
    24:36 Demis Hassabis: A Life Built on Purpose
    27:20 "Solve All of Them" — Then Give It Away
    30:17 From AlphaGo to AlphaZero: Learning from Scratch
    34:16 The Gap Between Using AI and Understanding It
    36:17 When No One Knows How the Machines Work
    39:57 Energy, Compute, and the Rate Limiter
    42:25 Humanity Is Getting Worse at Coordination
    46:23 Manhattan Project Parallels
    48:59 Does the Builder Matter?

    Resources & Links:

    • The Thinking Game (documentary) — directed by Greg Kohs
    • Google DeepMind
    • AlphaFold — the protein structure prediction tool discussed in the episode
    • Superagency by Reid Hoffman — the book Chirag references on AI optimism
    • Amanda Askell, Anthropic's Resident Philosopher — the philosopher role discussed in the episode
    • Dwarkesh Podcast: Elon Musk Interview — the interview Chirag references on data centers in space

    Listen & subscribe: Apple Podcasts | Spotify | YouTube

    53 m
  • ?E! #27 - Kids Don't Need a Seat. They Need the Wheel. | with Anand Sanwal
    Feb 19 2026

    Before school, kids ask 60 questions an hour. By fifth grade, it drops to 0.5. Anand Sanwal built CB Insights for 14 years, then "fired himself" to become a middle school teacher. From the "Sunday night test" to why grades kill the desire to learn, we explore what it would take to build schools that create problem-solvers, not compliant ladder-climbers.

    56 m
  • ?E! #26 - Building What Connects Us | with Ian Fox Minnock
    Feb 10 2026

    What does a town of 50 people know about community that a city of 8 million has forgotten? From bartering Crown Royal for a truck to surviving weeks in a rock pit on ramen and spam, geologist Ian Fox-Minnock takes us to the edge of Alaska—and the heart of what holds us together.

    1 h 4 m
  • ?E! #25 - The Journey to Yourself | with Ruth Pearce
    Dec 16 2025

    What does it take to finally stop wearing masks and start living authentically? Executive coach Ruth Pearce shares her spectacular burnout story—and the philosophy of hope, strength, bravery, and curiosity that emerged from it. From trying on different identities in our 20s and 30s to the unmasking that happens around 40, we explore how we learn who we really are.

    1 h 6 m
  • ?E! #24 - Why Knowing Yourself Matters More Than Knowing What's Next
    Nov 10 2025

    Suneet Bhatt—executive coach, Rutgers professor, and former corporate leader—spent 25 years climbing the ladder before realizing he was on the wrong wall. Now he helps everyone from high schoolers to retirees answer the question "Why am I here?" We explore why following the playbook leaves so many unfulfilled, his frameworks for building self-awareness, and what it actually takes to find purpose in a world that tells us to keep moving faster.

    49 m
  • ?E! #23 - Tribes, Tweets, and the Trouble with Truth
    Oct 2 2025

    Social media, political tribalism, and clickbait culture are reshaping our democracy—and what we can do about it. From chocolate cake metaphors to deep debates on free speech, capitalism, and civic life, Chirag and Sunay explore why civil discourse feels broken, and how long-form conversations (like this one) might just be the antidote.

    45 m