• What AI companies can do today to help with the most important century
    Feb 20 2023

    Major AI companies can increase or reduce global catastrophic risks.

    https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/

    18 mins
  • Jobs that can help with the most important century
    Feb 10 2023

    People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well.

    https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/

    31 mins
  • Spreading messages to help with the most important century
    Jan 25 2023

    For people who want to help improve our prospects for navigating transformative AI, and have an audience (even a small one).

    https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/

    20 mins
  • How we could stumble into AI catastrophe
    Jan 13 2023

    Hypothetical stories where the world tries, but fails, to avert a global disaster.

    https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe

    29 mins
  • Transformative AI issues (not just misalignment): an overview
    Jan 5 2023

    An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI.

    https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/

    25 mins
  • Racing Through a Minefield: the AI Deployment Problem
    Dec 22 2022

    Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?

    https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/

    21 mins
  • High-level hopes for AI alignment
    Dec 15 2022

    A few ways we might get very powerful AI systems to be safe.

    https://www.cold-takes.com/high-level-hopes-for-ai-alignment/

    24 mins
  • AI safety seems hard to measure
    Dec 8 2022

    Four analogies for why "We don't see any misbehavior by this AI" isn't enough.

    https://www.cold-takes.com/ai-safety-seems-hard-to-measure/

    22 mins