The Existential Hope Podcast

By: Foresight Institute

The Existential Hope Podcast features in-depth conversations with people working on positive, high-tech futures. We explore how the future could be much better than today—if we steer it wisely.


Hosts Allison Duettmann and Beatrice Erkers from the Foresight Institute invite the scientists, founders, and philosophers shaping tomorrow’s breakthroughs: AI, nanotech, longevity biotech, neurotech, space, smarter governance, and more.


About Foresight Institute: For 40 years, the independent nonprofit Foresight Institute has mapped how emerging technologies can serve humanity. Its Existential Hope program is its North Star: mapping the futures worth aiming for and the breakthroughs needed to reach them. This podcast is that exploration in public. Follow along and help tip the century toward success.


Explore more:

  • Transcript, listed resources, and more: https://www.existentialhope.com/podcasts
  • Follow on X

Hosted on Acast. See acast.com/privacy for more information.

Science, Social Sciences
Episodes
  • How your personal moral compass helps you build a better world | SJ Beard
    Feb 19 2026

    To make the future go well, we might not need a perfect model of its end state or an abstract philosophical theory to guide us. Can your own sense of “the right thing to do” actually help make the world better?

    In this episode, we talk with SJ Beard, researcher at the Centre for the Study of Existential Risk and author of the book “Existential Hope”.

    Some of the topics we discuss:

    • How to shift our focus from "preventing the end of the world" to actively building a future worth living.
    • Why aiming for a “happy ever after” state of the world might be dangerous, and why improving the world one generation at a time is less likely to backfire.
    • Relying on our own sense of “the right thing to do” as a practical guide to make the world better.
    • Why decisions about AI and global risk need input from a broad mix of people and their real-world experiences, not just experts at the top.
    • Why building AI with compassion and curiosity about human values may be safer than giving it a rigid list of rules to follow.


    Timestamps:

    [01:31] SJ’s background in philosophy and existential risk

    [02:02] Why write a book on existential hope?

    [04:43] Defining existential hope, and its relationship with existential risks and existential anxiety

    [11:09] Human agency without the guilt

    [13:59] Why there are no truly "natural" disasters

    [16:49] Why we shouldn’t try to build a perfect utopia

    [19:05] Protopia: is iterative improvement enough?

    [22:19] Defining progress: what does it mean to "get better"?

    [26:13] Protopia vs. viatopia: setting goals and achieving a great future

    [29:48] Existential safety as a collective project

    [35:06] Using participatory tools to make global decisions

    [36:32] Making existential hope reasonably demanding

    [40:06] Can we achieve systemic change in a tech-focused world?

    [46:00] Concrete socio-technical projects for AI safety

    [49:02] Aligning AI by building its character

    [51:45] The importance of history in building a good future

    [54:24] Key 17th-century ideas that are shaping modern society

    [58:20] Cultivating "humanity as a virtue"

    [01:04:37] Lessons from nuclear near-misses: the example of Petrov

    [01:09:20] The trade-offs of a humanistic, bottom-up approach to decision-making

    [01:12:16] Literacy vs. orality: how ideas become simplified

    [01:16:45] Meme culture and the transmission of deep context

    [01:18:48] How writing the book changed SJ’s mind

    [01:21:38] SJ Beard’s vision for existential hope

    1 hr 26 min
  • Raising science ambition: how to identify the highest-impact research for an AI world | Anastasia Gamick
    Feb 4 2026

    Most scientists do “safe” research to secure their next grant. But what if more of them worked on the most important problems instead?

    In this episode, we talk with Anastasia Gamick, co-founder of Convergent Research, about how to raise our level of ambition for what science can actually achieve.

    Convergent Research incubates Focused Research Organizations: small, startup-style teams that build critical “public good” technology that both academia and for-profit companies overlook.

    We discuss:

    • What makes a research project truly high-impact in an AI-driven world
    • Concrete examples of these projects: maps of brain synapses, software that’s provably safe, drug screening, good data for AI-powered scientific research, and more
    • How to prioritize defensive technology, such as biosafety tools, instead of just pushing every frontier as fast as possible
    • How young scientists can find the work that matters most for the future


    [00:00] Cold open

    [01:52] Introducing Anastasia Gamick and the mission of Convergent Research

    [02:44] Defining Focused Research Organizations (FROs) and their unique characteristics

    [09:46] Backcasting from 2075: what research to prioritize now to prepare for the intelligence age

    [19:08] The four types of projects Convergent decides not to fund

    [25:35] Biological and ecological dark matter: why we need better datasets for AI science

    [28:28] Why academia and industry aren’t incentivized to build tech capabilities for the public good

    [29:32] Defining “moonshot projects”: how boring drug screening creates massive downstream impact

    [32:56] The future of neuroscience: capturing videos of synapses firing

    [35:46] How the FRO model is catching on internationally

    [36:25] Steering vs. accelerating: selecting defense-dominant technology

    [41:22] Increasing human agency and how scientists can choose high-impact research areas

    [46:51] The evolution of scientific funding and the role of new philanthropy

    [48:05] Finding existential hope in the community of future-builders

    49 min
  • Jason Crawford on how technology expands human choice and control
    Jan 21 2026

    Our fast-paced world isn’t spinning out of control; we’re actually becoming more capable of steering it than ever before. Throughout history, technological progress has expanded human agency: our ability to choose our destiny rather than be subject to the whims of nature.

    Jason Crawford, founder of the Roots of Progress Institute, joins the podcast to discuss The Techno-Humanist Manifesto, his book exploring a philosophy of progress centered on human life and wellbeing.

    In our conversation, we dive into the core arguments of the manifesto:

    • How we are more in control of our lives than ever before
    • Why we should reframe the goal of “stopping climate change” into “controlling climate change” and work toward installing a “thermostat for the Earth”
    • The value of nature and its interaction with humanity
    • Allowing ourselves to celebrate human achievement and industrial civilization
    • The concept of “solutionism”: a kind of optimism that acknowledges risks while keeping a proactive attitude toward solving problems
    • Why two common fears around the slowing of progress – that we could run out of natural resources or new ideas – are actually unfounded
    • The possibility that AI represents a transformation as significant as the Industrial Revolution or the invention of agriculture
    • How to rebuild a culture of progress in the 21st century, from reforming scientific institutions to creating new, non-dystopian science fiction


    Chapters:

    [00:00] Cold open

    [01:30] Intro: Jason Crawford and the Techno-Humanist Manifesto

    [04:10] Defining progress as the expansion of human agency

    [06:16] How to use our newfound agency to live a meaningful life

    [10:07] Climate control: installing a “thermostat” for the Earth

    [13:26] Anthropocentrism and the value of nature

    [19:41] Ode to man: celebrating human achievement

    [20:53] Solutionism: believing in our problem-solving abilities to tackle risks

    [26:26] Why pessimism sounds smart but misses the solution space

    [31:29] The myth of finite natural resources and the power of knowledge

    [34:27] Why we are getting better at finding ideas faster than they get harder to find

    [39:03] The Intelligence Age: a new mode of production

    [41:19] Amplifying human agency in an AI-driven world

    [43:09] Developing a healthy relationship with AI and attention

    [46:28] The culture of progress and why we soured on the future

    [50:10] Building the infrastructure for a global progress movement

    [53:54] A 20-year vision for progress studies in the mainstream

    [57:33] High-leverage regulations for progress: from nuclear to supersonic flight

    [58:57] Jason Crawford’s existential hope vision


    1 hr 1 min
No reviews yet