The Impact of Inference: Performance
Traditional performance engineering assumed deterministic response times: identical inputs produced near-identical execution times, and while optimizations reduced latency, variance was minimal. AI inference flips that model upside down. Latency now depends on model size, tokenization, batching strategies, and generation settings, so identical inputs may produce very different response times. The new dimension of performance is variance: not just how fast the system responds, but how response times distribute across requests, how many tokens per second are processed, and how efficient each response is relative to its cost.
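As a rough illustration of what measuring that variance looks like in practice, here is a minimal Python sketch. The call_model function is a hypothetical stand-in for a real inference endpoint; the point is that the harness records latency and throughput per request and reports percentiles rather than a single average:

```python
import random
import statistics
import time

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real inference endpoint.

    Simulates the per-request variability of LLM generation.
    """
    time.sleep(random.uniform(0.05, 0.30))  # latency varies per request
    return " ".join(["token"] * random.randint(20, 200))

def profile_inference(prompts: list[str]) -> None:
    """Record latency and tokens/sec per request, then report the distribution."""
    latencies, throughputs = [], []
    for prompt in prompts:
        start = time.perf_counter()
        output = call_model(prompt)
        elapsed = time.perf_counter() - start
        tokens = len(output.split())  # rough proxy; use the model's tokenizer in practice
        latencies.append(elapsed)
        throughputs.append(tokens / elapsed)
    # Percentile cut points expose the variance a single mean hides.
    cuts = statistics.quantiles(latencies, n=100)  # 99 cut points: cuts[49] is p50
    print(f"p50={cuts[49]:.3f}s  p95={cuts[94]:.3f}s  p99={cuts[98]:.3f}s")
    print(f"mean throughput={statistics.mean(throughputs):.1f} tokens/sec")

if __name__ == "__main__":
    profile_inference([f"prompt {i}" for i in range(100)])
```

In a real harness, the same per-request records would also feed cost-per-token tracking, since averages alone mask the tail behavior users actually experience.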
In this episode of Pop Goes the Stack, Lori MacVittie, Joel Moses, and special guest Nina Forsyth dive into the impact of AI inference on measuring performance. They discuss why it's time to rethink performance observability and where infrastructure optimization, agent-to-agent interactions, and robust measurement techniques fit in. Listen in to learn how traditional approaches must evolve to manage this multi-dimensional puzzle.