Episodes

  • AI Tools, Privilege, and Work Product: Recent Court Decisions
    Mar 12 2026

    In this episode, Katherine Forrest and Scott Caravello examine two recent federal court decisions on whether AI-generated materials are protected by the attorney-client privilege and the work product doctrine. They break down those decisions, United States v. Heppner and Warner v. Gilbarco, explaining how and why the outcomes diverged, the different factual footings, and what these decisions may (or may not) mean for future disputes.

    Learn More About Paul, Weiss’s Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence

    28 m
  • Moltbook, Part 2: Agentic AI and Cybersecurity
    Mar 5 2026

    In this episode, Katherine Forrest and Scott Caravello continue their conversation on Moltbook—this time with a special guest. John Carlin, Chair of the firm's Cybersecurity & Data Protection and National Security & CFIUS practice groups, joins for a closer look at the cybersecurity risks of the agentic social network. In their wide-ranging discussion, the trio covers a host of concerns, from exposed credentials to hypothetical botnet threats to issues stemming from Moltbook’s vibe-coded origins.

    20 m
  • Hyperscalers: Where the Cloud Touches Ground
    Feb 26 2026

    In this episode, Katherine Forrest and Scott Caravello go inside the material world of digital minds. Our hosts explain how companies operate the massive data centers serving as the physical foundation for AI, break down the staggering energy demands behind them, and consider what powering the future might mean—by way of gigawatts and governance.

    18 m
  • Embodied Intelligence: A Potential Physical Path to AGI
    Feb 19 2026

    In this episode, Katherine Forrest and Scott Caravello examine one of China's approaches to achieving artificial general intelligence (AGI), drawing on reports from Georgetown's Center for Security and Emerging Technology (CSET). They discuss the country's focus on embodied AI and robotics as a potential path to AGI, multilevel government initiatives supporting this development, a large-scale social simulator project in Wuhan, and significant investments in power grid and data center infrastructure.

    18 m
  • Claws and Effect: Inside the Agent-Only Internet
    Feb 12 2026

    In this episode, Katherine Forrest and Scott Caravello trace how a "vibe-coded" project became Moltbook, a social network for AI agents. Our hosts unpack its lobster-themed lore and early community drama, consider whether the site represents truly autonomous agent activity or human direction, and assess the cybersecurity risks.

    27 m
  • Memory: Market Rates and Model Weights
    Feb 5 2026

In this episode, Katherine Forrest and Scott Caravello take us down “memory lane” to explain the importance of high-bandwidth memory (HBM) and RAM to AI development. Our hosts also give us a rundown of potential challenges ahead, unpacking developments in the memory market, including plans for additional capacity and lobster-style RAM pricing.

    18 m
  • Small Language Models: The Case for Less
    Jan 29 2026

In this episode, Katherine Forrest and Scott Caravello explore small language models (“SLMs”) and their potential implications for task specialization, speed, and confidentiality. Our hosts also share some recent research on expectations surrounding SLM adoption and growth.

    18 m
  • Confessions of a Large Language Model
    Jan 22 2026

In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers’ proposed “confessions” framework, designed to monitor for and detect dishonest outputs. They break down the researchers’ proof-of-concept results and the framework’s resilience to reward hacking, along with its limits in connection with hallucinations. Then they turn to Google DeepMind’s “Distributional AGI Safety,” exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors’ proposed four-layer safety stack.

    23 m