Instruction Tuning & RLHF
In this episode, we explore how large language models learned to follow instructions—and why this shift turned raw text generators into reliable AI assistants. We trace the move from early, unaligned models to instruction-tuned systems shaped by human feedback.
We explain supervised fine-tuning, reward models, and reinforcement learning from human feedback (RLHF), showing how human preference became the key signal for usefulness, safety, and control. The episode also looks at the limits of RLHF and how newer, automated alignment methods aim to scale instruction learning more efficiently.
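For listeners who want the underlying math, here is a minimal sketch of the two standard objectives in InstructGPT-style RLHF (the notation is our own shorthand, not taken from the episode): a reward model r_theta is fit to human preference pairs, and the policy pi_phi is then optimized against that reward with a KL penalty keeping it close to the supervised fine-tuned model.

```latex
% Reward model: fit to human preference pairs, where y_w is the preferred
% response and y_l the rejected one for prompt x (Bradley-Terry style loss).
\mathcal{L}_{\mathrm{RM}}(\theta) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim D}\!\left[
      \log \sigma\!\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)
  \right]

% Policy optimization: maximize the learned reward while penalizing
% divergence from the SFT policy, weighted by beta.
\max_{\phi}\;
  \mathbb{E}_{x\sim D,\; y\sim \pi_\phi(\cdot\mid x)}
  \big[\, r_\theta(x, y) \,\big]
  \;-\; \beta\,
  \mathrm{KL}\!\big(\pi_\phi(\cdot\mid x)\,\big\|\,\pi_{\mathrm{SFT}}(\cdot\mid x)\big)
```

The KL term is what the episode refers to as keeping the model "controllable": it lets human preference steer behaviour without letting the policy drift arbitrarily far from the supervised baseline.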
This episode covers:
- Why early LLMs struggled with instructions
- Supervised instruction tuning (SFT)
- RLHF and reward modeling
- Helpfulness, truthfulness, and safety trade-offs
- Bias, cost, and scalability of alignment
- The future of automated alignment
This episode is part of the Adapticx AI Podcast. Listen via the link provided or search “Adapticx” on Apple Podcasts, Spotify, Amazon Music, or most podcast platforms.
Sources and Further Reading
Additional references and extended material are available at:
https://adapticx.co.uk