SE Radio 703: Sahaj Garg on Low Latency AI
In this episode, Sahaj Garg, CTO of wispr.ai, joins SE Radio host Robert Blumen to talk about the challenges of building low-latency AI applications. They discuss latency's effect on consumer behavior and on interactive applications, how to measure latency, and how scale affects it. Sahaj and Robert then shift to AI-specific themes, including whether "AI" means LLMs or something broader, and look at the latency requirements and challenges of different subtypes of AI applications. The final part of the episode explores techniques for managing latency in AI: trade-offs of speed versus accuracy and latency versus cost; choosing the right model; quantization; distillation; and guess-and-validate approaches.
Brought to you by IEEE Computer Society and IEEE Software magazine.