Evaluating AI Systems: Metrics, Methods, and Measurement Gaps
A deep dive into the metrics and methodologies essential for robust AI evaluation. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance.

The conversation highlights gaps between emerging benchmarks and real-world regulatory needs.

Speaker: Agnès Delaborde (Laboratoire national de métrologie et d'essais – LNE)
Interviewer: Lihui Xu, Programme Specialist, Ethics of AI Unit, UNESCO


Hosted on Ausha. See ausha.co/privacy-policy for more information.
