AI Testing: How to Ensure Quality in Non-Deterministic Systems with Adam Sandman
How do you ensure software quality when the system you're testing doesn't give the same output twice? That's the core challenge facing every QA team building or testing AI-powered applications today, and it's breaking all the rules we've relied on for decades.
In this episode of the TestGuild Automation Podcast, I sit down with Adam Sandman, co-founder of Inflectra, to dig into what non-deterministic AI testing actually means in practice, why traditional pass/fail testing no longer cuts it, and what quality professionals need to do differently right now.
We cover:
- Why AI-generated code is raising the stakes for QA teams while budgets stay flat
- The fundamental difference between deterministic and non-deterministic systems — and why it changes everything about how you test
- How to set acceptable risk thresholds for AI systems (hint: it depends on whether you're building an e-commerce chatbot or an air traffic control system)
- Why testers who embrace AI as a tool — not a threat — will be the ones leading their organizations forward
- How a live demo failure at a conference inspired Inflectra's new non-deterministic testing tool, SureWire
If you're a tester, QA manager, or automation engineer trying to figure out how to keep up with AI-driven development without losing your mind — or your job — this one's for you.