AI incidents, audits, and the limits of benchmarks
AI is moving quickly from research to real-world deployment, and when things go wrong, the consequences are no longer hypothetical. In this episode, Sean McGregor, co-founder of the AI Verification & Evaluation Research Institute and founder of the AI Incident Database, joins Chris and Dan to discuss AI safety, verification, evaluation, and auditing. They explore why benchmarks often fall short, what red-teaming at DEF CON reveals about machine learning risks, and how organizations can better assess and manage AI systems in practice.
Featuring:
- Sean McGregor – LinkedIn
- Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
- Daniel Whitenack – Website, GitHub, X
Links:
- AI Verification & Evaluation Research Institute
- AI Incident Database
- 38th convening of IAAI
- BenchRisk
- State of Global AI Incident Reporting
Upcoming Events:
- Register for upcoming webinars here!