
80,000 Hours Podcast

By: Rob, Luisa, and the 80000 Hours team

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez. All rights reserved.
Episodes
  • The low-tech plan to patch humanity's greatest weakness | Andrew Snyder-Beattie
    Oct 2 2025
    Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s single greatest vulnerability. Andrew Snyder-Beattie thinks conventional wisdom could be wrong.

    Andrew’s job at Open Philanthropy is to spend hundreds of millions of dollars to protect as much of humanity as possible in the worst-case scenarios — those with fatality rates near 100% and the collapse of technological civilisation a live possibility.

    Video, full transcript, and links to learn more: https://80k.info/asb

    As Andrew lays out, there are several ways this could happen, including:

    • A national bioweapons programme gone wrong, in particular Russia or North Korea
    • AI advances making it easier for terrorists or a rogue AI to release highly engineered pathogens
    • Mirror bacteria that can evade the immune systems of not only humans, but many animals and potentially plants as well

    Most efforts to combat these extreme biorisks have focused on either prevention or new high-tech countermeasures. But prevention may well fail, and high-tech approaches can’t scale to protect billions when, with no sane people willing to leave their home, we’re just weeks from economic collapse.

    So Andrew and his biosecurity research team at Open Philanthropy have been seeking an alternative approach. They’re proposing a four-stage plan using simple technology that could save most people, and is cheap enough it can be prepared without government support. Andrew is hiring for a range of roles to make it happen — from manufacturing and logistics experts to global health specialists to policymakers and other ambitious entrepreneurs — as well as programme associates to join Open Philanthropy’s biosecurity team (apply by October 20!).

    Fundamentally, organisms so small have no way to penetrate physical barriers or shield themselves from UV, heat, or chemical poisons. We now know how to make highly effective ‘elastomeric’ face masks that cost $10, can sit in storage for 20 years, and can be used for six months straight without changing the filter. Any rich country could trivially stockpile enough to cover all essential workers.

    People can’t wear masks 24/7, but fortunately propylene glycol — already found in vapes and smoke machines — is astonishingly good at killing microbes in the air. And, being a common chemical input, industry already produces enough of the stuff to cover every indoor space we need at all times.

    Add to this the wastewater monitoring and metagenomic sequencing that will detect the most dangerous pathogens before they have a chance to wreak havoc, and we might just buy ourselves enough time to develop the cure we’ll need to come out alive.

    Has everyone been wrong, and biology is actually defence dominant rather than offence dominant? Is this plan crazy — or so crazy it just might work?

    That’s what host Rob Wiblin and Andrew Snyder-Beattie explore in this in-depth conversation.

    What did you think of the episode? https://forms.gle/66Hw5spgnV3eVWXa6

    Chapters:

    • Cold open (00:00:00)
    • Who's Andrew Snyder-Beattie? (00:01:23)
    • It could get really bad (00:01:57)
    • The worst-case scenario: mirror bacteria (00:08:58)
    • To actually work, a solution has to be low-tech (00:17:40)
    • Why ASB works on biorisks rather than AI (00:20:37)
    • Plan A is prevention. But it might not work. (00:24:48)
    • The “four pillars” plan (00:30:36)
    • ASB is hiring now to make this happen (00:32:22)
    • Everyone was wrong: biorisks are defence dominant in the limit (00:34:22)
    • Pillar 1: A wall between the virus and your lungs (00:39:33)
    • Pillar 2: Biohardening buildings (00:54:57)
    • Pillar 3: Immediately detecting the pandemic (01:13:57)
    • Pillar 4: A cure (01:27:14)
    • The plan's biggest weaknesses (01:38:35)
    • If it's so good, why are you the only group to suggest it? (01:43:04)
    • Would chaos and conflict make this impossible to pull off? (01:45:08)
    • Would rogue AI make bioweapons? Would other AIs save us? (01:50:05)
    • We can feed the world even if all the plants die (01:56:08)
    • Could a bioweapon make the Earth uninhabitable? (02:05:06)
    • Many open roles to solve bio-extinction — and you don’t necessarily need a biology background (02:07:34)
    • Career mistakes ASB thinks are common (02:16:19)
    • How to protect yourself and your family (02:28:21)

    This episode was recorded on August 12, 2025.

    Video editing: Simon Monsour and Luke Monsour
    Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
    Music: CORBIT
    Camera operator: Jake Morris
    Coordination, transcriptions, and web: Katy Moore
    2 hr 31 min
  • Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution
    Sep 26 2025

    Jake Sullivan was the US National Security Advisor from 2021-2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought it was such a good interview that we wanted more people to see it, so we’re cross-posting it here on The 80,000 Hours Podcast.

    Jake and host Nathan Labenz discuss:

    • Jake’s four-category framework to think about AI risks and opportunities: security, economics, society, and existential.
    • Why Jake advocates for "managed competition" with China — where the US and China "compete like hell" while maintaining sufficient guardrails to prevent conflict.
    • Why Jake thinks competition is a "chronic condition" of the US-China relationship that cannot be solved with “grand bargains.”
    • How current conflicts are providing "glimpses of the future" with lessons about scale, attritability, and the potential for autonomous weapons as AI gets integrated into modern warfare.
    • Why Jake worries that Pentagon bureaucracy prevents rapid AI adoption while China's People’s Liberation Army may be better positioned to integrate AI capabilities.
    • And why we desperately need private sector leadership: AI is "the first technology with such profound national security applications that the government really had very little to do with."

    Check out more of Nathan’s interviews on The Cognitive Revolution YouTube channel: https://www.youtube.com/@CognitiveRevolutionPodcast

    Originally produced by: https://aipodcast.ing

    This edit by: Simon Monsour, Dominic Armstrong, and Milo McGuire | 80,000 Hours

    Chapters:

    • Cold open (00:00:00)
    • Luisa's intro (00:01:06)
    • Jake’s AI worldview (00:02:08)
    • What Washington gets — and doesn’t — about AI (00:04:43)
    • Concrete AI opportunities (00:10:53)
    • Trump’s AI Action Plan (00:19:36)
    • Middle East AI deals (00:23:26)
    • Is China really a threat? (00:28:52)
    • Export controls strategy (00:35:55)
    • Managing great power competition (00:54:51)
    • AI in modern warfare (01:01:47)
    • Economic impacts in people’s daily lives (01:04:13)
    1 hr 6 min
  • #223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)
    Sep 15 2025

    At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? “It’s mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”

    Video, full transcript, and links to learn more: https://80k.info/nn2

    This means creating as many opportunities as possible for surprisingly good things to happen:

    • Write publicly.
    • Reach out to researchers whose work you admire.
    • Say yes to unusual projects that seem a little scary.

    Nanda’s own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

    His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. “People were into it,” he shrugs.

    Most remarkably, he ended up running DeepMind’s mechanistic interpretability team. He’d joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. “I did not know if I was going to be good at this. I think it’s gone reasonably well.”

    His core lesson: “You can just do things.” This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

    In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind. (And be sure to check out part one of Rob and Neel’s conversation!)


    What did you think of the episode? https://forms.gle/6binZivKmjjiHU6dA

    Chapters:

    • Cold open (00:00:00)
    • Who’s Neel Nanda? (00:01:12)
    • Luck surface area and making the right opportunities (00:01:46)
    • Writing cold emails that aren't insta-deleted (00:03:50)
    • How Neel uses LLMs to get much more done (00:09:08)
    • “If your safety work doesn't advance capabilities, it's probably bad safety work” (00:23:22)
    • Why Neel refuses to share his p(doom) (00:27:22)
    • How Neel went from the couch to an alignment rocketship (00:31:24)
    • Navigating towards impact at a frontier AI company (00:39:24)
    • How does impact differ inside and outside frontier companies? (00:49:56)
    • Is a special skill set needed to guide large companies? (00:56:06)
    • The benefit of risk frameworks: early preparation (01:00:05)
    • Should people work at the safest or most reckless company? (01:05:21)
    • Advice for getting hired by a frontier AI company (01:08:40)
    • What makes for a good ML researcher? (01:12:57)
    • Three stages of the research process (01:19:40)
    • How do supervisors actually add value? (01:31:53)
    • An AI PhD – with these timelines?! (01:34:11)
    • Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
    • Remember: You can just do things (01:43:51)

    This episode was recorded on July 21.

    Video editing: Simon Monsour and Luke Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Music: Ben Cordell
    Camera operator: Jeremy Chevillotte
    Coordination, transcriptions, and web: Katy Moore

    1 hr 47 min
For anyone who's interested in audiobooks, especially non-fiction work, this podcast is perfect. For people used to short-form podcasts, the 2-5 hour range may seem intimidating, but for those used to the length of audiobooks it's great. The length allows the interviewer to ask genuinely interesting questions, with a bit of back-and-forth with the interviewee.

Brilliant
