Episodes

  • How scary is Claude Mythos? 303 pages in 21 minutes
    Apr 10 2026

    With Claude Mythos we have an AI that knows when it's being tested, can obscure its reasoning when it wants, and is better at breaking into (and out of) computers than any human alive. Rob Wiblin works through its 244-page System Card and 59-page Alignment Risk Update to explain why:

    • Mythos is a nightmare for computer security
    • It has arrived far ahead of schedule
    • It might be great news for alignment and safety
    • But 3 key problems mean we can’t take its alignment results at face value
    • Mythos isn’t building its replacement yet, probably
    • Anthropic staff are, for the first time, kinda scared of Claude
    • He's losing sleep

    Learn more & full transcript: https://80k.info/mythos

    This episode was recorded on April 9, 2026.

    Chapters:

    • Why people are panicking about computer security (01:05)
    • Mythos could break out of containment (04:23)
    • Anthropic is losing billions in revenue by not releasing Mythos (06:21)
    • Mythos is actually the most aligned model to date, except… (07:48)
    • Mythos knows when it’s being tested (09:52)
    • Mythos can hide its thoughts (11:50)
    • Mythos can’t be trusted about whether it’s untrustworthy (14:02)
    • Does Mythos advance automated AI R&D? (17:03)
    • Mythos scares Anthropic (19:15)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour

    Camera operator: Dominic Armstrong

    Production: Elizabeth Cox, Nick Stockton, and Katy Moore

    21 m
  • Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health
    Apr 7 2026

    What does it really take to lift millions out of poverty and prevent needless deaths? In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors — share their most powerful and actionable insights from the front lines of global health and development. You’ll hear about the critical need to boost agricultural productivity in sub-Saharan Africa, the staggering impact of lead poisoning on children in low-income countries, and the social forces that contribute to high neonatal mortality rates in India.

    What’s so striking is how some of the most effective interventions sound almost too simple to work: banning certain pesticides, replacing thatch roofs, or identifying village “influencers” to spread health information.

    Full transcript and links to learn more: https://80k.info/ghd

    Chapters:

    • Cold open (00:00:00)
    • Luisa’s intro (00:00:58)
    • Development consultant Karen Levy on why pushing for “sustainable” programmes isn’t as good as it sounds (00:02:15)
    • Economist Dean Spears on the social forces and gender inequality that contribute to neonatal mortality in Uttar Pradesh (00:06:55)
    • Charity founder Sarah Eustis-Guthrie on what we can learn from the massive failure of PlayPumps (00:14:33)
    • Economist Rachel Glennerster on how randomised controlled trials are just one way to better understand tricky development problems (00:19:05)
    • Data scientist Hannah Ritchie on why improving agricultural productivity in sub-Saharan Africa is critical to solving global poverty (00:24:36)
    • Charity founder Lucia Coulter on the huge, neglected upsides of reducing lead exposure (00:47:48)
    • Malaria expert James Tibenderana on using gene drives to wipe out the species of mosquitoes that cause malaria (00:53:11)
    • Charity founder Varsha Venugopal on using village gossip to get kids their critical immunisations (01:04:14)
    • Rachel Glennerster on solving tough global problems by creating the right incentives for innovation (01:11:31)
    • Karen Levy on when governments should pay for programmes instead of NGOs (01:26:51)
    • Open Philanthropy lead Alexander Berger on declining returns in global health, and finding and funding the most cost-effective interventions (01:29:40)
    • GiveWell researcher James Snowden on making funding decisions with tricky moral weights (01:34:44)
    • Lucia Coulter on “hits-based giving” approaches to funding global health and development projects (01:43:01)
    • Rachel Glennerster on whether it’s better to fix problems in education with small-scale interventions versus systemic reforms (01:48:12)
    • GiveDirectly cofounder Paul Niehaus on why it’s so important to give aid recipients a choice in how they spend their money (01:51:09)
    • Sarah Eustis-Guthrie on whether more charities should scale back or shut down, and aligning incentives with beneficiaries (01:56:12)
    • James Tibenderana on why we need loads better data to harness the power of AI to eradicate malaria (02:11:22)
    • Lucia Coulter on rapidly scaling a light-touch intervention to more countries (02:20:14)
    • Karen Levy on why pre-policy plans are so great at aligning perspectives (02:32:47)
    • Rachel Glennerster on the value we get from doing the right RCTs well (02:40:04)
    • Economist Mushtaq Khan on really drilling down into why “context matters” for development work (02:50:13)
    • GiveWell cofounder Elie Hassenfeld on contrasting GiveWell’s approach with the subjective wellbeing approach of Happier Lives Institute (02:57:24)
    • James Tibenderana on whether people actually use antimalarial bed nets for fishing — and why that’s the wrong thing to focus on (03:05:30)
    • Karen Levy on working with governments to get big results (03:10:53)
    • Leah Utyasheva on how a simple intervention reduced suicide in Sri Lanka by 70% (03:17:38)
    • Karen Levy on working with academics to get the best results on the ground (03:29:03)
    • James Tibenderana on the value of working with local researchers (03:32:15)
    • Lucia Coulter on getting buy-in from both industry and government (03:35:05)
    • Alexander Berger on reasons neartermist work makes sense even by longtermist standards (03:39:26)
    • Economist Shruti Rajagopalan on the key skills to succeed in public policy careers, and seeing economics in everything (03:47:42)
    • J-PAL lead Claire Walsh on her career advice for young people who want to get involved in global health and development (03:55:20)

    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Katy Moore and Milo McGuire
    Music: CORBIT
    Coordination, transcriptions, and web: Katy Moore

    4 h 7 m
  • What everyone is missing about Anthropic vs the Pentagon. And: The Meta leaks are worse than you think.
    Apr 3 2026

    When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its supporters are some combination of 'hypocritical', 'naive', and 'anti-democratic'. Rob Wiblin dissects each claim, finding that all three are mediocre arguments dressed up as hard truths. (Though the 'naive' one is at least interesting.)

    Watch on YouTube: What Everyone is Missing about Anthropic vs The Pentagon

    Plus, from 13:43: Leaked documents from Meta revealed that 10% of the company's total revenue — around $16 billion a year — came from ads for scams and goods Meta had itself banned. These likely enabled the theft of around $50 billion a year from Americans alone. But when an internal anti-fraud team developed a screening method that halved the rate of scams coming from China... well, it wasn't well received.

    Watch on YouTube: The Meta Leaks Are Worse Than You Think

    Chapters:

    • Introduction (00:00:00)
    • What Everyone is Missing about Anthropic vs The Pentagon (00:00:26)
    • Charge 1: Hypocrisy (00:01:21)
    • Charge 2: Naivety (00:04:55)
    • Charge 3: Undemocratic (00:09:38)
    • You don't have to debate on their terms (00:12:32)
    • The Meta Leaks Are Worse Than You Think (00:13:43)
    • Three fixes for social media's scam problem (00:16:48)
    • We should regulate AI companies as strictly as banks (00:18:46)

    Video and audio editing: Dominic Armstrong and Simon Monsour
    Transcripts and web: Elizabeth Cox and Katy Moore

    21 m
  • Could a biologist armed with AI kill a billion people? | Dr Richard Moulange
    Mar 31 2026

    Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from that family.

    That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on 'AI–Biosecurity' — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.

    For years, experts have reassured us that 'tacit knowledge' — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they've been right.

    But as of 2025 that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of troubleshooting expertise, and finds that modern AI models crushed top human virologists even in their self-declared area of greatest specialisation and expertise — 45% to 22%.

    Meanwhile, Anthropic’s research shows PhD-level biologists getting meaningfully better at weapons-relevant tasks with AI assistance — with the effect growing with each new model generation.

    Richard joins host Rob Wiblin to discuss all that plus:

    • What AI biology tools already exist
    • Why mid-tier actors (not amateurs) are the ones getting the most dangerous boost
    • The three main categories of defence we can pursue
    • Whether there’s a plausible path to a world where engineered pandemics become a thing of the past

    This episode was recorded on January 16, 2026. Since recording this episode, Richard has been seconded to the UK Government — please note that the views expressed here are entirely his own.

    Links to learn more, video, and full transcript: https://80k.info/rm

    Announcements:

    Our new book is available to preorder: 80,000 Hours: How to have a fulfilling career that does good is written by our cofounder Benjamin Todd. It’s a completely revised and updated edition of our existing career guide, with a big new section on AI — covering both the risks and the potential to steer it in a better direction, and how AI automation should affect your career planning and which skills to specialise in. Preorder now: https://geni.us/80000Hours

    We're hiring contract video editors for the podcast! For more information, check out the expression of interest page on the 80,000 Hours website: https://80k.info/video-editor

    Chapters:

    • Cold open (00:00:00)
    • Who's Richard Moulange? (00:00:31)
    • AI can now design novel genomes (00:01:11)
    • The end of the 'tacit knowledge' barrier (00:04:34)
    • Are risks from bioterrorists overstated? (00:18:20)
    • The 3 key disasters AI makes more likely (00:22:41)
    • Which bad actors does AI help the most? (00:30:03)
    • Experts are more scary than amateurs (00:41:17)
    • Barriers to bioterrorists using AI (00:46:43)
    • AI biorisks are sometimes dismissed (and that's a huge mistake) (00:48:54)
    • Advanced AI biology tools we already have or will soon (01:04:10)
    • Rob argues that the situation is hopeless (01:09:49)
    • Intervention #1: Limit access (01:18:16)
    • Intervention #2: Get AIs to refuse to help (01:32:58)
    • Intervention #3: Surveillance and attribution (01:42:38)
    • Intervention #4: Universal vaccines and antivirals (01:56:38)
    • Intervention #5: Screen all orders for DNA (02:10:00)
    • AI companies talk about def/acc more than they fund it (02:19:52)
    • Can you build a profitable business solving this problem? (02:26:32)
    • This doesn't have to interfere with useful science (much) (02:30:56)
    • What are the best low-tech interventions? (02:33:01)
    • Richard's top request for AI companies (02:37:59)
    • Grok shows governments lack many legal levers (02:53:17)
    • Best ways listeners can help fix AI-Bio (02:56:24)
    • We might end all contagious disease in 20 years (03:03:37)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Jeremy Chevillotte
    Transcripts and web: Elizabeth Cox and Katy Moore

    3 h 8 m
  • #240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war
    Mar 24 2026

    Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could generate insidious new risks — leaving us in a situation that's equally dangerous, just in different ways.

    That’s the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He’s not worried about a Russian blitzkrieg on Estonia. He forecasts instead a fragile peace that breaks down and drags in European neighbours; instability in Belarus prompting Russian intervention; hybrid sabotage operations that escalate through tit-for-tat responses.

    Samuel’s case isn’t that peace is bad, but that the Ukraine conflict has remilitarised Europe, made Russia more resentful, and collapsed diplomatic relations between the two. That’s a postwar environment primed for the kind of miscalculation that starts unintended wars.

    What he prescribes isn’t a full peace treaty; it’s a negotiated settlement that stops the killing and begins a longer negotiation that gives neither side exactly what it wants, but just enough to deter renewed aggression. Both sides stop dying and the flames of war fizzle — hopefully.

    None of this is clean or satisfying: Russia invaded, committed war crimes, and is being offered a path back to partial normalcy. But Samuel argues that the alternatives — indefinite war or unstructured ceasefire — are much worse for Ukraine, Europe, and global stability.


    Links to learn more, video, and full transcript: https://80k.info/sc26

    This episode was recorded on February 27, 2026.

    Chapters:

    • Cold open (00:00:00)
    • Could peace in Ukraine lead to Europe’s next war? (00:00:47)
    • Do Russia’s motives for war still matter? (00:11:41)
    • What does a good ceasefire deal look like? (00:17:38)
    • What’s still holding back a ceasefire (00:38:44)
    • Why Russia might accept Ukraine’s EU membership (00:46:00)
    • How to prevent a spiralling conflict with NATO (00:48:00)
    • What’s next for nuclear arms control (00:49:57)
    • Finland and Sweden strengthened NATO — but also raised the stakes for conflict (00:53:25)
    • Putin isn’t Hitler: How to negotiate with autocrats (00:56:35)
    • Why Russia still takes NATO seriously (01:02:01)
    • Neither side wants to fight this war again (01:10:49)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Transcripts and web: Nick Stockton, Elizabeth Cox, and Katy Moore

    1 h 12 m
  • Why automating human labour will break our political system | Rose Hadshar, Forethought
    Mar 17 2026

    The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.

    That’s the view of Rose Hadshar, researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic ‘end of democracy’ moment.

    She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist on paper, but become ineffectual in a world where humans are no longer needed to execute even the largest-scale projects.

    Almost nobody wants this to happen — but we may find ourselves unable to prevent it.

    If AI disrupts our ability to make sense of things, will we even notice power getting severely concentrated, or be able to resist it? Once AI can substitute for human labour across the economy, what leverage will citizens have over those in power? And what does all of this imply for the institutions we’re relying on to prevent the worst outcomes?

    Rose has answers, and they’re not all reassuring.

    But she’s also hopeful we can make society more robust against these dynamics. We’ve got literally centuries of thinking about checks and balances to draw on. And there are some interventions she’s excited about — like building sophisticated AI tools for making sense of the world, or ensuring multiple branches of government have access to the best AI systems.

    Rose discusses all of this, and more, with host Zershaaneh Qureshi in today’s episode.

    Links to learn more, video, and full transcript: https://80k.info/rh

    This episode was recorded on December 18, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who's Rose Hadshar? (00:01:05)
    • Three dynamics that could reshape political power in the AI era (00:02:37)
    • AI gives small groups the productive power of millions (00:12:49)
    • Dynamic 1: When a software update becomes a power grab (00:20:41)
    • Dynamic 2: When AI labour means governments no longer need their citizens (00:31:20)
    • How democracy could persist in name but not substance (00:45:15)
    • Dynamic 3: When AI filters our reality (00:54:54)
    • Good intentions won't stop power concentration (01:08:27)
    • Slower-moving worlds could still get scary (01:23:57)
    • Why AI-powered tyranny will be tough to topple (01:31:53)
    • How power concentration compares to "gradual disempowerment" (01:38:18)
    • Some interventions are cross-cutting — and others could backfire (01:43:54)
    • What fighting back actually looks like (01:55:15)
    • Why power concentration researchers should avoid getting too "spicy" (02:04:10)
    • Why the "Manhattan Project" approach should worry you — but truly international projects might not be safe either (02:09:18)
    • Rose wants to keep humans around! (02:12:06)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore

    2 h 14 m
  • #238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)
    Mar 10 2026

    How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary’s nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country got those capabilities first could wield unprecedented coercive power.

    Today’s guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:

    • Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
    • Would road-mobile launchers still be able to hide in tunnels and under netting?
    • Would missile defence become so accurate that the United States could be protected under something like Israel’s Iron Dome?
    • Can we imagine an AI cybersecurity breakthrough that would allow countries to infiltrate their rivals’ nuclear command-and-control networks?

    Yet even without undermining deterrence, Sam and Nikita claim that AI could make the nuclear world far more dangerous. It could spur arms races, encourage riskier postures, and force dangerously short response times. Their message is urgent: AI experts and nuclear experts need to start talking to each other now, before the technology makes any conversation moot.


    Links to learn more, video, and full transcript: https://80k.info/swlnl

    This episode was recorded on November 24, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who are Nikita Lalwani and Sam Winter-Levy? (00:01:03)
    • How nuclear deterrence actually works (00:01:46)
    • AI vs nuclear submarines (00:10:31)
    • AI vs road-mobile missiles (00:22:21)
    • AI vs missile defence systems (00:28:38)
    • AI vs nuclear command, control, and communications (NC3) (00:35:20)
    • AI won't break deterrence, but may trigger an arms race (00:43:27)
    • Technological supremacy isn't political supremacy (00:52:31)
    • Fast AI takeoff creates dangerous "windows of vulnerability" (00:56:43)
    • Book and movie recommendations (01:08:53)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore

    1 h 11 m
  • Using AI to enhance societal decision making (article by Zershaaneh Qureshi)
    Mar 6 2026

    The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead.

    This article is narrated by the author, Zershaaneh Qureshi. It explores why AI decision-making tools could be a big deal, who might be a good fit to help shape this new field, and what the downside risks of getting involved might be.

    Read the original article on the 80,000 Hours website: https://80000hours.org/problem-profiles/ai-enhanced-decision-making/

    Chapters:

    • Check out our new narrations feed (00:00:00)
    • Summary (00:01:21)
    • Section 1: Why advancing AI decision making tools might matter a lot (00:02:52)
    • AI tools could help us make much better decisions (00:05:59)
    • We might be able to differentially speed up the rollout of AI decision making tools (00:11:04)
    • Section 2: What are the arguments against working to advance AI decision making tools? (00:13:17)
    • Section 3: How to work in this area (00:26:19)
    • Want one-on-one advice? (00:29:50)

    Audio editing: Dominic Armstrong and Milo McGuire

    31 m