• Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726

  • Apr 7 2025
  • Duration: 52 min
  • Podcast

  • Summary

  • Today, we're joined by Maohao Shen, a PhD student at MIT, to discuss his paper, “Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search.” We dig into how Satori leverages reinforcement learning to improve language model reasoning—enabling model self-reflection, self-correction, and exploration of alternative solutions. We explore the Chain-of-Action-Thought (COAT) approach, which uses special tokens—continue, reflect, and explore—to guide the model through distinct reasoning actions, allowing it to navigate complex reasoning tasks without external supervision. We also break down Satori's two-stage training process: format tuning, which teaches the model to understand and use the special action tokens, and reinforcement learning, which optimizes reasoning through trial-and-error self-improvement. We cover key techniques such as “restart and explore,” which allows the model to self-correct and generalize beyond its training domain. Finally, Maohao reviews Satori's performance, how it compares to other models, the reward design, the benchmarks used, and the surprising observations made during the research. The complete show notes for this episode can be found at https://twimlai.com/go/726.
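To make the COAT idea concrete, below is a minimal, hypothetical sketch of what an action-token decoding loop might look like. The token strings, the stubbed generate_step function, and the coat_decode helper are all illustrative assumptions for this sketch; they are not taken from the Satori paper or its released code.

```python
# Illustrative sketch of a Chain-of-Action-Thought (COAT) style decoding loop.
# All names here (token strings, generate_step, coat_decode) are assumptions
# made for illustration, not the Satori implementation.

CONTINUE, REFLECT, EXPLORE = "<|continue|>", "<|reflect|>", "<|explore|>"

def generate_step(context: str) -> tuple[str, str]:
    """Stand-in for one model call: returns (action_token, reasoning_text).
    A real implementation would sample both from the policy language model."""
    # Toy policy: reflect once on the work so far, then produce an answer.
    if REFLECT not in context:
        return REFLECT, "Let me double-check the previous step."
    return CONTINUE, "The answer is 42."

def coat_decode(question: str, max_steps: int = 8) -> str:
    """Roll out reasoning as a sequence of (action token, thought) segments."""
    context = question
    for _ in range(max_steps):
        action, thought = generate_step(context)
        context += f"\n{action} {thought}"
        if action == EXPLORE:
            # "Restart and explore"-style move (simplified): abandon the
            # current line of reasoning and branch from the question again.
            context = question + f"\n{action} Trying an alternative approach."
        elif action == CONTINUE and "answer" in thought.lower():
            break  # terminal step reached
    return context

if __name__ == "__main__":
    print(coat_decode("Q: What is 6 * 7?"))
```

In this toy version the "policy" is hard-coded; in the two-stage recipe described in the episode, format tuning would first teach the model to emit such action tokens, and reinforcement learning would then shape when to reflect or explore.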

What listeners say about Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726

Average customer ratings
