Episodes

  • AI Safety Meet Trust & Safety with Ravi Iyer and David Sullivan
    Oct 7 2025

    David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC’s Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI.

    They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals.

    You’ll “like” (bad pun intended) this one.

    Leo Wu provided excellent research assistance to prepare for this podcast.

    Read more from David:

    https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/

    https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/

    Read more from Ravi:

    https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design

    https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not?r=2alyy0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

    Read more from Kevin:

    https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach

    Hosted on Acast. See acast.com/privacy for more information.

    47 m
  • Rapid Response: California Governor Newsom Signs SB-53
    Sep 30 2025
    In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB-53, the frontier AI transparency (and more) law that California Governor Gavin Newsom signed into law on September 29.

    Hosted on Acast. See acast.com/privacy for more information.

    36 m
  • The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)
    Sep 30 2025

    Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance.

    The trio recorded this podcast live at the Institute for Humane Studies’s Technology, Liberalism, and Abundance Conference in Arlington, Virginia.


    Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower

    Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/

    Hosted on Acast. See acast.com/privacy for more information.

    43 m
  • AI and Young Minds: Navigating Mental Health Risks with Renee DiResta and Jess Miers
    Sep 23 2025
    Alan Rozenshtein, Renee DiResta, and Jess Miers discuss the distinct risks that generative AI systems pose to children, particularly in relation to mental health. They explore the balance between the benefits and harms of AI, emphasizing the importance of media literacy and parental guidance. Recent developments in AI safety measures and ongoing legal implications are also examined, highlighting the evolving landscape of AI regulation and liability.

    Hosted on Acast. See acast.com/privacy for more information.

    59 m
  • AI Copyright Lawsuits with Pam Samuelson
    Sep 16 2025

    On today's Scaling Laws episode, Alan Rozenshtein sat down with Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley, School of Law, to discuss the rapidly evolving legal landscape at the intersection of generative AI and copyright law. They dove into the recent district court rulings in lawsuits brought by authors against AI companies, including Bartz v. Anthropic and Kadrey v. Meta. They explored how different courts are treating the core questions of whether training AI models on copyrighted data is a transformative fair use and whether AI outputs create a “market dilution” effect that harms creators. They also touched on other key cases to watch and the role of the U.S. Copyright Office in shaping the debate.

    Mentioned in this episode:

    • "How to Think About Remedies in the Generative AI Copyright Cases"
    • by Pam Samuelson in Lawfare
    • Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith
    • Bartz v. Anthropic
    • Kadrey v. Meta Platforms
    • Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc.
    • U.S. Copyright Office, Copyright and Artificial Intelligence, Part 3: Generative AI Training


    Hosted on Acast. See acast.com/privacy for more information.

    59 m
  • AI and the Future of Work: Joshua Gans on Navigating Job Displacement
    Sep 11 2025

    Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education.


    Select works by Gans include:

    A Quest for AI Knowledge (https://www.nber.org/papers/w33566)

    Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)

    How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)

    Hosted on Acast. See acast.com/privacy for more information.

    58 m
  • The State of AI Safety with Steven Adler
    Sep 9 2025

    Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.

    You can read Steven’s Substack here: https://stevenadler.substack.com/

    Thanks to Leo Wu for research assistance!

    Hosted on Acast. See acast.com/privacy for more information.

    47 m
  • Contrasting and Conflicting Efforts to Regulate Big Tech: EU v. US
    Sep 2 2025

    Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing contrasting and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and US. The trio start with an assessment of the EU’s use of the Brussels Effect, a term coined by Anu, to shape AI development. They then explore the US’s increasingly interventionist industrial policy with respect to key sectors, especially tech.

    Read more:

    Anu’s op-ed in The New York Times

    The Impact of Regulation on Innovation by Philippe Aghion, Antonin Bergeaud & John Van Reenen

    Draghi Report on the Future of European Competitiveness

    Hosted on Acast. See acast.com/privacy for more information.

    46 m