Episodes

  • Can AI Enable Human Agency?, with Tomicah Tilleman
    Mar 13 2026
    Tomicah Tilleman, President at Project Liberty Institute, joins the show. Tomicah offers a unique perspective on regulating emerging technology given his time as a venture capitalist and head of policy at Andreessen Horowitz and Haun Ventures. His contemporary focus is on identifying “policy solutions that enable human agency and human flourishing in an AI-powered world.” It’s a tall order that he breaks down with Kevin Frazier, a Senior Fellow at the Abundance Institute, Adjunct Research Fellow at the Cato Institute, and a Senior Editor at Lawfare.

    Hosted on Acast. See acast.com/privacy for more information.

    46 m
  • Live from Ashby: Taking a Long View on AI Governance with Austin Carson and Caleb Watney
    Mar 10 2026

    Kevin Frazier hangs out with Caleb Watney of the Institute for Progress and Austin Carson of SeedAI at the Ashby Workshops to discuss the long-run policy foundations needed for the AI Age.


    Rather than focusing on near-term regulation, the conversation explores how AI challenges existing assumptions about state capacity, research funding, talent pipelines, and institutional design. Caleb and Austin unpack concepts like meta-science, public compute infrastructure, immigration policy, and congressional expertise—and explain why these “boring” policy areas may matter more for AI outcomes than headline-grabbing rules.


    The episode also examines how AI policy discourse has evolved in Washington, what lessons policymakers should draw from efforts like the National AI Research Resource, and why many AI governance failures may ultimately be failures of institutions rather than intent.


    58 m
  • Scaling Laws x AI Summer: Who Controls the Machine God?
    Mar 6 2026

    Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and senior editor at Lawfare, were joined by Dean Ball, senior fellow at the Foundation for American Innovation and author of the Hyperdimensional newsletter, and Timothy B. Lee, author of the Understanding AI newsletter, for a joint crossover episode of the Scaling Laws and AI Summer podcasts about the escalating dispute between Anthropic and the Pentagon over AI usage restrictions in military contracts.


    The conversation covered the timeline of the Anthropic-Pentagon dispute and Secretary Hegseth's supply chain risk designation; the legal basis for the designation under 10 U.S.C. § 3252 and whether it was intended to apply to domestic companies; the role of personality and politics in the dispute; OpenAI's competing Pentagon contract and debate over whether its terms actually match Anthropic's red lines; public opinion polling showing bipartisan concern about AI mass surveillance and autonomous weapons; the broader question of what the government-AI industry relationship should look like; the prospect of partial or full nationalization of AI capabilities; and whether frontier AI models are actually decisive for military applications.


    58 m
  • In Defense of Optimism with Packy McCormick
    Mar 3 2026

    Packy McCormick, founder of Not Boring and Not Boring Capital, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the power of narratives in tech, the intersection of investing and policy, and what it means to build frameworks for the future in an age of rapid technological change.



    46 m
  • The Pentagon Goes to War With Anthropic
    Feb 27 2026

    An impasse is coming to a head. The resolution is unknown. The Department of Defense has made clear that Anthropic has until 5:01pm ET today, February 27th, 2026, to permit its use of Claude for any lawful purpose. CEO Dario Amodei doubled down on his insistence that Anthropic's tools should not be used for mass domestic surveillance or the operation of lethal autonomous weapons. The Pentagon's spokesman agrees that such usage would indeed be unlawful, and yet the two parties cannot come to terms. If the DOD is to be taken at its word, the likely result is that Anthropic will be labeled a supply chain risk, an unprecedented decision with huge business ramifications. Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, joins Kevin Frazier, Senior Fellow at the Abundance Institute and a Senior Editor at Lawfare, to break this all down.


    You can also read more on this weighty issue via Alan’s two recent Lawfare pieces here and here.


    46 m
  • Can AI Make AI Regulation Cheaper?, with Cullen O'Keefe and Kevin Frazier
    Feb 24 2026

    Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy.


    The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.



    52 m
  • Claude's Constitution, with Amanda Askell
    Feb 20 2026

    Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, spoke with Amanda Askell, head of personality alignment at Anthropic, about Claude's Constitution: a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training.


    The conversation covered how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of "case law," and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications.



    Duration not yet known
  • Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman
    Feb 17 2026

    Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation.

    They discuss:


    • Why traditional regulation struggles with rapid AI innovation.
    • The concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI.
    • Critiques of hybrid governance: concerns about a “race to the bottom,” the limits of soft law on catastrophic risks, and how liability frameworks interact with governance.
    • What success looks like for Ashby Workshops and the future of adaptive AI policy design.


    Whether you’re a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.


    55 m