How to Trust AI on the Battlefield
In this episode of From the Crows’ Nest, host Ken Miller unpacks one of the key challenges with using artificial intelligence and machine learning (AI/ML) in combat: How can human agents trust AI in a live, complex military operation?
Jeff Druce, Senior Scientist, Human-Centered AI at Charles River Analytics, is at the heart of trying to answer this question. Jeff says that neural networks are inherently opaque: a system can perform millions of computations in seconds while leaving the user in the dark about how it arrived at a particular recommendation or action. He tells Ken that their RELAX (Reinforcement Learning with Adaptive Explainability) research effort aims to give AI systems ways to explain their decision making to human operators.
Jeff says that efforts to improve transparency and trust in these AI tools are key. He argues that the bottleneck for AI adoption may soon come not from the technology plateauing, but from operators being unprepared and ill-equipped to use it effectively.
To learn more about today’s topics or to stay updated on EMSO and EW developments, visit our homepage.