Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein - 621

Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization, and he has previously been featured in The New Yorker for his work on invisibility cloaks, clothing patterned to evade object detectors. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations behind adding these watermarks, how they work, and the different ways a watermark could be deployed, as well as the political and economic incentive structures around the adoption of watermarking and future directions for that line of work. We also discuss Tom’s research into data leakage, particularly in Stable Diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction.
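For listeners curious about the mechanics before diving in: a widely discussed watermarking scheme of the kind Goldstein's group works on re-partitions the vocabulary at every generation step into a keyed, pseudorandom "green list" and "red list", nudges the model toward green tokens, and later detects the watermark with a statistical test on the green-token count. The sketch below is a simplified illustration, not the authors' implementation; the hash construction, secret key string, and resampling-based "sampler" are all placeholder assumptions.

```python
import hashlib
import math
import random

VOCAB_SIZE = 50_000
GREEN_FRACTION = 0.5  # fraction of the vocabulary placed on the "green list"

def is_green(prev_token: int, token: int) -> bool:
    # The green list is re-derived at every position by hashing the previous
    # token together with a secret key, so it is pseudorandom to outsiders
    # but reproducible by anyone holding the key. (Hash choice and key are
    # illustrative assumptions, not any specific implementation.)
    digest = hashlib.sha256(f"secret-key|{prev_token}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION

def watermarked_next(prev_token: int, rng: random.Random) -> int:
    # Stand-in for an LLM sampler whose green-token logits get a positive
    # bias: here we simply resample until a green token comes up (a "hard"
    # version of the soft-watermark idea).
    token = rng.randrange(VOCAB_SIZE)
    while not is_green(prev_token, token):
        token = rng.randrange(VOCAB_SIZE)
    return token

def detection_z_score(tokens: list[int]) -> float:
    # Under the null hypothesis "no watermark", each token lands on the
    # green list with probability GREEN_FRACTION, so the green count is
    # binomial; a large z-score signals watermarked text.
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    mean = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - mean) / std

rng = random.Random(0)
watermarked = [rng.randrange(VOCAB_SIZE)]
for _ in range(200):
    watermarked.append(watermarked_next(watermarked[-1], rng))
unmarked = [rng.randrange(VOCAB_SIZE) for _ in range(201)]

print(f"watermarked z = {detection_z_score(watermarked):.1f}")  # far above 0
print(f"unmarked    z = {detection_z_score(unmarked):.1f}")     # near 0
```

Because detection needs only the key and the token sequence, not the model or its logits, anyone holding the key can check a passage of text, which is central to the deployment scenarios discussed in the episode.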