Episodes

  • Beyond Language Modeling: An Exploration of Multimodal Pretraining
    Mar 6 2026
    In this episode, we discuss Beyond Language Modeling: An Exploration of Multimodal Pretraining by Shengbang Tong, David Fan, John Nguyen, Ellis Brown, Gaoyue Zhou, Shengyi Qian, Boyang Zheng, Théophane Vallaeys, Junlin Han, Rob Fergus, Naila Murray, Marjan Ghazvininejad, Mike Lewis, Nicolas Ballas, Amir Bar, Michael Rabbat, Jakob Verbeek, Luke Zettlemoyer, Koustuv Sinha, Yann LeCun, Saining Xie. The paper investigates native multimodal foundation models by training from scratch on diverse visual and language data using the Transfusion framework. Key findings include the effectiveness of a Representation Autoencoder for unified visual representation, synergy between vision and language data, the emergence of world modeling from unified pretraining, and the role of Mixture-of-Experts in efficient multimodal scaling. The study also reveals a scaling asymmetry, with vision requiring more data than language, which MoE architectures can balance to enable truly unified multimodal models.
    14 min
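    For a concrete picture of the Mixture-of-Experts routing the episode touches on, here is a minimal NumPy sketch of a top-1 MoE layer. The sizes, the gating scheme, and the absence of load balancing are illustrative assumptions, not the paper's configuration.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class Top1MoE:
        """Toy top-1 mixture-of-experts layer: a learned gate routes each
        token to a single expert, so compute per token stays constant as
        expert capacity is added."""
        def __init__(self, d_model, n_experts):
            self.gate = rng.normal(0.0, 0.02, (d_model, n_experts))
            self.experts = [rng.normal(0.0, 0.02, (d_model, d_model))
                            for _ in range(n_experts)]

        def __call__(self, x):                        # x: (n_tokens, d_model)
            choice = (x @ self.gate).argmax(axis=-1)  # one expert per token
            out = np.empty_like(x)
            for e, w in enumerate(self.experts):
                hit = choice == e
                out[hit] = x[hit] @ w                 # only routed tokens touch expert e
            return out

    layer = Top1MoE(d_model=16, n_experts=4)
    tokens = rng.normal(size=(8, 16))                 # e.g. interleaved image/text tokens
    print(layer(tokens).shape)                        # (8, 16)
    ```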
  • Mode Seeking meets Mean Seeking for Fast Long Video Generation
    Mar 4 2026
    In this episode, we discuss Mode Seeking meets Mean Seeking for Fast Long Video Generation by Shengqu Cai, Weili Nie, Chao Liu, Julius Berner, Lvmin Zhang, Nanye Ma, Hansheng Chen, Maneesh Agrawala, Leonidas Guibas, Gordon Wetzstein, Arash Vahdat. The paper presents a training paradigm that combines mode seeking and mean seeking, using a Decoupled Diffusion Transformer to decouple local video fidelity from long-term coherence. It employs a global Flow Matching head, trained on limited long videos, for narrative structure, and a local Distribution Matching head, aligned with a frozen short-video teacher, for local realism. This approach enables fast synthesis of minute-scale videos that maintain both high-quality local detail and coherent long-range motion, significantly improving the fidelity–horizon trade-off.
    9 min
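    A rough NumPy sketch of how the two objectives described above can coexist: a mean-seeking flow-matching loss for the global head and a mode-seeking teacher-matching loss for the local head. The stand-in heads, the frozen teacher, and the loss weighting are assumptions; the paper's Decoupled Diffusion Transformer and distribution-matching training are considerably more involved.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for the two heads and the frozen short-video teacher; the
    # real models are Transformers operating on video latents.
    def global_head(xt, t): return 0.5 * xt
    def local_head(xt, t):  return 0.3 * xt
    def teacher(xt, t):     return 0.3 * xt + 0.01

    x0 = rng.normal(size=(4, 32))       # noise
    x1 = rng.normal(size=(4, 32))       # stand-in long-video latents
    t = rng.uniform()
    xt = (1 - t) * x0 + t * x1          # rectified-flow-style interpolation

    # Mean seeking: regress the straight-line velocity on the few available
    # long videos to capture global narrative structure.
    fm_loss = np.mean((global_head(xt, t) - (x1 - x0)) ** 2)

    # Mode seeking: match the frozen short-video teacher locally to keep
    # per-frame realism (a crude stand-in for distribution matching).
    dm_loss = np.mean((local_head(xt, t) - teacher(xt, t)) ** 2)

    print(fm_loss + 0.5 * dm_loss)      # combined loss; the weighting is assumed
    ```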
  • Recursive Language Models
    Mar 4 2026
    In this episode, we discuss Recursive Language Models by Alex L. Zhang, Tim Kraska, Omar Khattab. The paper introduces Recursive Language Models (RLMs), a novel inference approach that enables large language models to handle extremely long prompts by recursively processing prompt snippets. RLMs significantly extend effective context length by up to 100 times and outperform standard LLMs and existing long-context methods on multiple tasks without increasing computational cost. Additionally, the authors develop RLM-Qwen3-8B, a recursive model that notably improves performance over its base model and rivals GPT-5 on several long-context benchmarks.
    9 min
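    The recursive idea lends itself to a toy sketch: split an over-long prompt into snippets, recurse until each snippet fits the model's budget, then answer over the merged notes. `call_llm`, the character budget, and the halving strategy are hypothetical stand-ins, not the paper's actual procedure.
    ```python
    MAX_CHARS = 2000  # assumed per-call context budget

    def call_llm(prompt: str) -> str:
        return f"[answer based on {len(prompt)} chars]"  # plug in a real client

    def rlm_answer(question: str, context: str) -> str:
        if len(context) <= MAX_CHARS:                    # fits: answer directly
            return call_llm(f"Context:\n{context}\n\nQuestion: {question}")
        mid = len(context) // 2
        notes = [rlm_answer(f"Summarize what is relevant to: {question}", half)
                 for half in (context[:mid], context[mid:])]  # recurse on snippets
        return rlm_answer(question, "\n".join(notes))    # answer over merged notes

    print(rlm_answer("Who is mentioned?", "lorem ipsum " * 1000))
    ```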
  • PaperBanana: Automating Academic Illustration for AI Scientists
    Feb 10 2026
    In this episode, we discuss PaperBanana: Automating Academic Illustration for AI Scientists by Dawei Zhu, Rui Meng, Yale Song, Xiyu Wei, Sujian Li, Tomas Pfister, Jinsung Yoon. The paper presents PaperBanana, an autonomous framework that generates publication-ready academic illustrations using advanced vision-language and image generation models. It coordinates specialized agents to retrieve references, plan, render, and refine images through self-critique. Evaluated on a new benchmark from NeurIPS 2025 diagrams, PaperBanana outperforms existing methods in faithfulness, clarity, and aesthetics, and also effectively creates high-quality statistical plots.
    9 min
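    A schematic Python sketch of the retrieve → plan → render → self-critique loop the summary describes; every stage function below is a dummy placeholder, not PaperBanana's real interface.
    ```python
    def retrieve_references(method_text):
        return ["ref-diagram-1"]                   # stand-in retrieval agent

    def plan_layout(method_text, refs):
        return {"boxes": ["input", "model", "output"], "refs": refs}

    def render_figure(plan):
        return f"<image: {' -> '.join(plan['boxes'])}>"  # stand-in image model

    def critique(image, plan):
        return 0.95, "labels are legible"          # stand-in VLM self-critique

    def illustrate(method_text, max_rounds=3, target=0.9):
        refs = retrieve_references(method_text)
        plan = plan_layout(method_text, refs)
        image = render_figure(plan)
        for _ in range(max_rounds):                # refine until critique passes
            score, feedback = critique(image, plan)
            if score >= target:
                break
            plan = plan_layout(method_text + "\nRevise: " + feedback, refs)
            image = render_figure(plan)
        return image

    print(illustrate("We propose a three-stage encoder-decoder pipeline."))
    ```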
  • World-Gymnast: Training Robots with Reinforcement Learning in a World Model
    Feb 10 2026
    In this episode, we discuss World-Gymnast: Training Robots with Reinforcement Learning in a World Model by Ansh Kumar Sharma, Yixiang Sun, Ninghao Lu, Yunzhe Zhang, Jiarao Liu, Sherry Yang. The paper introduces World-Gymnast, a method that fine-tunes robot policies using reinforcement learning within a video-based world model conditioned on vision and language. This approach significantly outperforms traditional supervised finetuning and simulator-based RL in real-robot tasks, achieving up to 18x and 2x improvements, respectively. World-Gymnast also enables training on diverse instructions and novel scenes, offering a promising path for scalable robot learning outside controlled environments.
    8 min
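    To make "reinforcement learning in a world model" concrete, here is a toy REINFORCE loop in which the environment step is a learned predictor rather than a simulator or a real robot. The world model, reward, and linear policy are illustrative stand-ins, not the paper's training recipe.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class ToyWorldModel:
        """Stand-in for a video world model: predicts the next observation
        from the current one, the action, and a language instruction."""
        def step(self, obs, action, instruction):
            return 0.9 * obs + 0.1 * action

    def reward(obs):
        return -float(np.linalg.norm(obs))    # stand-in task reward

    theta = np.zeros((4, 4))                  # mean of a linear Gaussian policy
    wm = ToyWorldModel()
    for _ in range(200):                      # every rollout happens "in imagination"
        obs = rng.normal(size=4)
        grad, ret = np.zeros_like(theta), 0.0
        for _ in range(10):
            mean = theta @ obs
            act = mean + rng.normal(size=4)
            grad += np.outer(act - mean, obs) # grad of log-prob, unit-variance Gaussian
            obs = wm.step(obs, act, "pick up the cube")
            ret += reward(obs)
        theta += 1e-3 * ret * grad            # REINFORCE update (no baseline)
    ```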
  • On the generalization of language models from in-context learning and finetuning: a controlled study
    Jan 5 2026
    In this episode, we discuss On the generalization of language models from in-context learning and finetuning: a controlled study by Andrew K. Lampinen, Arslan Chaudhry, Stephanie C. Y. Chan, Cody Wild, Diane Wan, Alex Ku, Jörg Bornschein, Razvan Pascanu, Murray Shanahan, James L. McClelland. The paper compares the generalization and deductive reasoning abilities of large language models when learning through fine-tuning versus in-context learning, finding that in-context learning generally enables more flexible generalization. It introduces novel datasets to rigorously test these differences by isolating new factual information from pretraining knowledge. Additionally, the authors propose enhancing fine-tuning by including in-context reasoning traces, which improves the models' reasoning and generalization performance across multiple benchmarks.
    8 min
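    The proposed augmentation can be sketched in a few lines: prompt the model in-context to spell out what follows from each new fact, then fine-tune on those traces alongside the facts themselves. `generate` and the prompt wording are assumptions, not the paper's exact recipe.
    ```python
    def generate(prompt: str) -> str:
        return "[model-written inferences]"   # plug in a real sampler here

    def augment_with_traces(facts):
        examples = []
        for fact in facts:
            examples.append({"text": fact})   # the raw fact itself
            trace = generate(                 # elicit in-context reasoning
                f"Fact: {fact}\nList what logically follows from this fact, "
                "step by step."
            )
            examples.append({"text": f"Fact: {fact}\n{trace}"})
        return examples                       # fine-tune on facts + traces

    print(augment_with_traces(["A zorp is a kind of bird."]))
    ```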
  • Memory-V2V: Augmenting Video-to-Video Diffusion Models with Memory
    Jan 29 2026
    In this episode, we discuss Memory-V2V: Augmenting Video-to-Video Diffusion Models with Memory by Dohun Lee, Chun-Hao Paul Huang, Xuelin Chen, Jong Chul Ye, Duygu Ceylan, Hyeonho Jeong. The paper addresses the challenge of maintaining cross-consistency in multi-turn video editing using video-to-video diffusion models. It introduces Memory-V2V, a framework that enhances existing models by incorporating an explicit memory through an external cache of previously edited videos. This approach enables iterative video editing with improved consistency across multiple rounds of user refinements.
    8 min
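    A minimal sketch of the external-cache idea: store each accepted edit keyed by its instruction embedding, and retrieve the most similar past edits to condition the next round. The retrieval scheme and the `edit_video` signature are assumptions, not Memory-V2V's actual interfaces.
    ```python
    import numpy as np

    class EditMemory:
        """External cache of previously edited videos, keyed by embeddings."""
        def __init__(self):
            self.keys, self.videos = [], []

        def add(self, key, video):
            self.keys.append(key)
            self.videos.append(video)

        def retrieve(self, query, k=2):          # top-k by dot-product similarity
            if not self.keys:
                return []
            sims = np.array(self.keys) @ query
            return [self.videos[i] for i in np.argsort(sims)[-k:]]

    def edit_video(source, instruction, past_edits):
        # Stand-in for the diffusion editor, conditioned on cached results.
        return f"edit({source!r}, {instruction!r}, past={len(past_edits)})"

    memory, video = EditMemory(), "input.mp4"
    for turn, instr in enumerate(["make it snow", "add a red car"]):
        emb = np.random.default_rng(turn).normal(size=8)  # stand-in text embedding
        video = edit_video(video, instr, memory.retrieve(emb))
        memory.add(emb, video)                   # cache this round for later turns
    print(video)
    ```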
  • Self-Rewarding Language Models
    Jan 8 2026
    In this episode, we discuss Self-Rewarding Language Models by Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston. The paper proposes training language models to give themselves feedback using a self-rewarding approach, bypassing the limitations of human-labeled reward models. By iteratively fine-tuning Llama 2 70B with this method, the model improves both its instruction-following and self-assessment abilities. The resulting model surpasses several top systems, demonstrating the potential for continual self-improvement in AI agents.
    9 min
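    One self-rewarding iteration, reduced to a toy loop: sample several responses per prompt, have the model score them with a judge prompt, and turn the best and worst into a preference pair for a DPO-style update. `generate` and `dpo_update` are hypothetical stand-ins for the model's sampler and trainer.
    ```python
    def generate(model, prompt: str) -> str:
        return "0"                              # plug in a real sampler here

    def dpo_update(model, preference_pairs):
        return model                            # plug in a real DPO trainer here

    def self_reward_iteration(model, prompts, n_samples=4):
        pairs = []
        for prompt in prompts:
            candidates = [generate(model, prompt) for _ in range(n_samples)]
            scored = []
            for cand in candidates:             # LLM-as-a-judge on its own outputs
                verdict = generate(model,
                    f"Rate this response from 0 to 10.\n"
                    f"Prompt: {prompt}\nResponse: {cand}\nScore:")
                scored.append((float(verdict), cand))
            scored.sort(key=lambda s: s[0])
            pairs.append((prompt, scored[-1][1], scored[0][1]))  # (prompt, chosen, rejected)
        return dpo_update(model, pairs)         # next iteration trains on these pairs

    self_reward_iteration(model=None, prompts=["Explain tides briefly."])
    ```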