LessWrong Curated Podcast

By: LessWrong
Summary

Audio version of the posts shared in the LessWrong Curated newsletter.
© 2024 LessWrong Curated Podcast
Episodes
  • “Announcing ILIAD — Theoretical AI Alignment Conference” by Nora_Ammann, Alexander Gietelink Oldenziel
    Jun 6 2024
    Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. We are pleased to announce ILIAD — a 5-day conference bringing together 100+ researchers to build strong scientific foundations for AI alignment.

    ***Apply to attend by June 30!***

    • When: Aug 28 - Sep 3, 2024
    • Where: @Lighthaven (Berkeley, US)
    • What: A mix of topic-specific tracks and unconference-style programming, with 100+ attendees. Topics will include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, and more to be announced.
    • Who: Currently confirmed speakers include: Daniel Murfet, Jesse Hoogland, Adam Shai, Lucius Bushnaq, Tom Everitt, Paul Riechers, Scott Garrabrant, John Wentworth, Vanessa Kosoy, Fernando Rosas and James Crutchfield.
    • Costs: Tickets are free. Financial support is available on a needs basis.
    See our website here. For any questions, email iliadconference@gmail.com

    About ILIAD

    ILIAD is a 100+ person conference about alignment with a mathematical focus. The theme is ecumenical. [...]

    ---

    First published:
    June 5th, 2024

    Source:
    https://www.lesswrong.com/posts/r7nBaKy5Ry3JWhnJT/announcing-iliad-theoretical-ai-alignment-conference

    ---

    Narrated by TYPE III AUDIO.

    4 mins
  • “Non-Disparagement Canaries for OpenAI” by aysja, Adam Scholl
    May 31 2024
    Since at least 2017, OpenAI has asked departing employees to sign offboarding agreements which legally bind them to permanently—that is, for the rest of their lives—refrain from criticizing OpenAI, or from otherwise taking any actions which might damage its finances or reputation.[1]

    If they refused to sign, OpenAI threatened to take back (or make unsellable) all of their already-vested equity—a huge portion of their overall compensation, which often amounted to millions of dollars. Given this immense pressure, it seems likely that most employees signed.

    If they did sign, they became personally liable forevermore for any financial or reputational harm they later caused. This liability was unbounded, so had the potential to be financially ruinous—if, say, they later wrote a blog post critical of OpenAI, they might in principle be found liable for damages far in excess of their net worth.

    These extreme provisions allowed OpenAI to systematically silence criticism [...]

    The original text contained 4 footnotes which were omitted from this narration.

    ---

    First published:
    May 30th, 2024

    Source:
    https://www.lesswrong.com/posts/yRWv5kkDD4YhzwRLq/non-disparagement-canaries-for-openai

    ---

    Narrated by TYPE III AUDIO.

    5 mins
  • “MIRI 2024 Communications Strategy” by Gretta Duleba
    May 30 2024
    As we explained in our MIRI 2024 Mission and Strategy update, MIRI has pivoted to prioritize policy, communications, and technical governance research over technical alignment research. This follow-up post goes into detail about our communications strategy.

    The Objective: Shut it Down[1]

    Our objective is to convince major powers to shut down the development of frontier AI systems worldwide before it is too late. We believe that nothing less than this will prevent future misaligned smarter-than-human AI systems from destroying humanity. Persuading governments worldwide to take sufficiently drastic action will not be easy, but we believe this is the most viable path.

    Policymakers deal mostly in compromise: they form coalitions by giving a little here to gain a little somewhere else. We are concerned that most legislation intended to keep humanity alive will go through the usual political processes and be ground down into ineffective compromises.

    The only way we [...]

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    May 29th, 2024

    Source:
    https://www.lesswrong.com/posts/tKk37BFkMzchtZThx/miri-2024-communications-strategy

    ---

    Narrated by TYPE III AUDIO.

    14 mins

What listeners say about LessWrong Curated Podcast

Average customer ratings (1 rating)
  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars
