In this episode, we tackle one of the most critical challenges facing MedTech today: how to reconcile the established, process-based world of medical software regulation, namely IEC 62304, with the disruptive force of Artificial Intelligence. As AI-driven diagnostics and therapeutic systems become more prevalent, manufacturers, regulators, and quality professionals must navigate the significant friction between standards built for deterministic code and the new realities of probabilistic, data-driven systems.
We begin by deconstructing the two competing paradigms. First, we explore the core philosophy of IEC 62304, a standard built for traditional software where a rigorous, documented process is a proxy for safety. We then introduce the new frontier defined by the soon-to-be-approved draft technical specification ISO/IEC DTS 42119-3, which provides a modern toolkit of "V&V analysis" designed specifically for the unique nature of AI. We break down its three core pillars: Formal Methods for providing mathematical proof of properties like robustness; Simulation for testing system behavior in complex virtual environments; and Evaluation for assessing trustworthiness through novel metrics like calibration error and explainability (XAI).
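To make "calibration error" concrete, here is a minimal sketch of Expected Calibration Error (ECE), one common way to measure whether a model's stated confidence matches how often it is actually right. The 10-bin scheme, function name, and toy data are illustrative assumptions on our part, not requirements of ISO/IEC DTS 42119-3.

```python
# Minimal sketch of Expected Calibration Error (ECE).
# Bin count and variable names are illustrative, not prescribed by any standard.
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Weighted average gap between predicted confidence and observed accuracy."""
    confidences = np.asarray(confidences)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            avg_conf = confidences[in_bin].mean()   # model's stated confidence
            avg_acc = correct[in_bin].mean()        # how often it was right
            ece += in_bin.mean() * abs(avg_conf - avg_acc)
    return ece

# Toy example: a model that is 90% confident but only 60% accurate is miscalibrated.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9, 0.9],
                                 [1, 1, 1, 1, 1],
                                 [1, 1, 1, 0, 0]))
```

A low ECE is the kind of objective, quantitative evidence the Evaluation pillar is designed to produce.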
The heart of our discussion lies in the inconsistencies and conceptual gaps between these standards. We dissect the regulatory paradox created by applying IEC 62304's "100% probability of failure" rule to AI models that have a known, measurable accuracy. We also expose the critical "data gap" in the traditional software lifecycle, which fails to explicitly govern data collection and model training, the very processes that create an AI's intelligence. Furthermore, we explore new categories of risk unique to AI, such as "model drift," where performance degrades over time as real-world data changes, and "functional insufficiency," a concept borrowed from the automotive industry's SOTIF standard, where a system can cause harm even without a technical fault.
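As a concrete illustration of how model drift can be watched for in production, here is a minimal sketch using the Population Stability Index (PSI) to compare a feature's distribution at training time with live data. The bin count and the 0.2 alert threshold are common conventions we have assumed for illustration; they are not values taken from IEC 62304 or ISO/IEC DTS 42119-3.

```python
# Minimal sketch of one common model-drift check: Population Stability Index (PSI)
# comparing a feature's distribution at training time vs. in production.
import numpy as np

def population_stability_index(reference, current, n_bins=10):
    """Higher PSI means production data has shifted away from the training data."""
    edges = np.quantile(reference, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # cover the full real line
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    eps = 1e-6                                       # avoid log(0) and division by zero
    return float(np.sum((cur_frac - ref_frac) * np.log((cur_frac + eps) / (ref_frac + eps))))

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 5000)        # distribution the model learned from
production_feature = rng.normal(0.5, 1.2, 5000)      # real-world data has shifted
psi = population_stability_index(training_feature, production_feature)
print(f"PSI = {psi:.2f} -> investigate drift" if psi > 0.2 else f"PSI = {psi:.2f} -> stable")
```

A monitoring check of this kind is one way post-market surveillance can catch the silent performance degradation the episode describes.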
To find a path forward, we look to adjacent safety-critical industries. We analyze the lessons from the automotive sector, which has evolved from functional safety (ISO 26262) to address AI-specific challenges with SOTIF (ISO 21448) and the new ISO/PAS 8800. We also draw insights from the avionics industry's DO-178C standard, highlighting its emphasis on architectural partitioning to contain risk and the demand for complete, auditable evidence.
Finally, we present a practical framework for harmonization. This isn't about replacing the old with the new, but integrating them. We demonstrate how the V&V analysis techniques from ISO/IEC DTS 42119-3 can generate the objective evidence needed to satisfy IEC 62304's requirements in an AI context. The episode culminates in the concept of a "braided argument of assurance"—the strategy of building a defensible safety case not from a single claim of perfection, but from multiple, interwoven strands of evidence from formal methods, simulation, evaluation, and robust process controls.
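To show what a "braided argument of assurance" might look like in practice, here is a minimal sketch of a safety case that traces several independent strands of evidence back to the requirements they support. The clause references, artifact names, and data structure are hypothetical illustrations of the idea, not a template from IEC 62304 or ISO/IEC DTS 42119-3.

```python
# Minimal sketch of a "braided" safety case: one top-level claim supported by
# several interwoven strands of evidence rather than a single proof of perfection.
# Requirement labels and artifact names are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class EvidenceStrand:
    technique: str       # e.g. "formal methods", "simulation", "evaluation", "process"
    artifact: str        # the objective evidence produced
    supports: list[str]  # requirements or clauses it helps satisfy

@dataclass
class SafetyCase:
    claim: str
    strands: list[EvidenceStrand] = field(default_factory=list)

    def coverage(self) -> dict[str, list[str]]:
        """Map each cited requirement to the techniques that back it."""
        mapping: dict[str, list[str]] = {}
        for strand in self.strands:
            for req in strand.supports:
                mapping.setdefault(req, []).append(strand.technique)
        return mapping

case = SafetyCase(
    claim="The AI diagnostic function is acceptably safe for its intended use",
    strands=[
        EvidenceStrand("formal methods", "robustness proof within bounded inputs", ["system verification"]),
        EvidenceStrand("simulation", "virtual cohort test report", ["system verification"]),
        EvidenceStrand("evaluation", "calibration and XAI assessment", ["risk control"]),
        EvidenceStrand("process", "data management and training records", ["development planning"]),
    ],
)
print(case.coverage())
```

The point of the structure is visible in the output: each requirement is backed by more than one independent technique, so no single strand has to carry the entire argument.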
Join us for a deep dive into the future of medical device compliance, as we provide a strategic roadmap for navigating the complex intersection of traditional regulation and artificial intelligence, ensuring that the next generation of medical devices is not only innovative but demonstrably safe and effective.
For listeners interested in a more detailed analysis, the full Briefing Report is available for purchase. Please contact us at contact@complear.com for more information.