Optimising for Trouble – Game Theory and AI Safety | with Jobst Heitzig
What happens when an AI system faithfully follows a flawed goal? In this episode, we explore how even well-designed algorithms can produce dangerous outcomes — from amplifying hate speech to mismanaging infrastructure — simply by optimising a reward function which, like all reward functions, fails to encode all that matters. We discuss the hidden risks of reinforcement learning, why over-optimisation can backfire, and how game theory helps us rethink what it means for AI to act "rationally" in complex, real-world environments.
Jobst Heitzig is a mathematician at the Potsdam Institute for Climate Impact Research and an expert in AI safety and decision design.