Episodes

  • 209: USCAP 2026: Digital Pathology 101 With Hamamatsu
    Mar 23 2026

    What makes digital pathology feel so hard to enter, even for smart people already working around it?

    In this special USCAP conversation, Stephanie Fullerton from Hamamatsu turns the tables and interviews me about Digital Pathology 101 — the book I wrote for people who are starting or continuing their digital pathology journey.

    We talk about why the book is not meant to be an exhaustive manual, but a practical framework. A way to help people see the full picture, ask better questions, and understand how the pieces of digital pathology fit together.

    One of the biggest themes in this conversation is that digital pathology is a team effort. It is not just pathology. It involves scanners, software, image analysis, engineers, vendors, and people who often do not speak the same professional language.

    That matters because sometimes getting the right answer starts with asking the right question.

    We also talk about the challenge of translating expert knowledge into beginner-friendly language, why vendors often become guides as labs go through digital transformation, and why I think a shared vocabulary can make implementations smoother and more collaborative. Toward the end, we shift into the fun side of USCAP: signed book giveaways, stickers, pins, and ways to make connections at the conference.

    Topics discussed

    • [00:03] Why Stephanie interviewed me this time, and the idea behind Digital Pathology 101
    • [01:07] What the book is actually for: a framework, not a one-size-fits-all manual
    • [04:07] The hardest part of writing for beginners without talking down to them
    • [06:26] Why digital pathology implementation feels like a mountain, and how to lower the barrier
    • [08:15] Why a shared vocabulary matters in digital pathology teams
    • [09:44] Translating between pathologists, engineers, vendors, and marketing
    • [11:26] Why vendors and partners often become guides during digital transformation
    • [12:33] Who the book is for, including students and early-career professionals
    • [13:33] Book signing, giveaways, and where to find me at USCAP
    • [19:05] Stickers, pins, and why small things can help start real conversations at conferences

    Resources mentioned

    • Digital Pathology 101
    • Hamamatsu Booth 312 at #USCAP2026 in San Antonio, Texas
    • My histology and microscopy videos on YouTube

    Support the show

    Get the "Digital Pathology 101" FREE E-book and join us!

    14 m
  • 205: What Makes AI Useful in Pathology Beyond the Demo?
    Mar 21 2026

    What happens when AI looks strong in a paper, but the workflow still isn’t ready?

    In DigiPath Digest #40, I reviewed five recent papers across kidney pathology, oral and maxillofacial pathology, glioma biomarker prediction, digital twins in neuro-oncology, and a major European colorectal cancer cohort. A common theme kept coming back: good performance is not the same thing as real-world readiness.

    We started with kidney biopsies and the challenge of assessing interstitial fibrosis and tubular atrophy, where AI shows promise but still does not fully agree with humans. That led into a bigger point I keep seeing in digital pathology: our “ground truth” is often based on human interpretation, and human interpretation has variability too.
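
    If you want to see what that variability looks like in numbers, here is a minimal sketch using Cohen's kappa, the standard interobserver agreement statistic. The grades below are invented for illustration, not data from the paper:

    ```python
    # Illustrative only: measuring interobserver variability with Cohen's kappa.
    # The IFTA grades below are invented, not data from the paper discussed.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical interstitial fibrosis/tubular atrophy grades (0-3) assigned
    # to the same 12 kidney biopsies by two pathologists.
    pathologist_a = [0, 1, 1, 2, 2, 2, 3, 3, 1, 0, 2, 3]
    pathologist_b = [0, 1, 2, 2, 1, 2, 3, 2, 1, 0, 2, 3]

    kappa = cohen_kappa_score(pathologist_a, pathologist_b)
    print(f"Cohen's kappa: {kappa:.2f}")
    # Agreement is below 1.0 even between experts; whichever read becomes the
    # AI's "ground truth" carries that noise into training and evaluation.
    ```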

    From there, I looked at AI in oral and maxillofacial pathology, where the field is still early and one major bottleneck is the lack of strong public datasets. Then I discussed a systematic review on adult-type gliomas showing that multimodal models performed better than unimodal ones, which makes sense when you think about how pathologists actually work: we do not diagnose from one input alone.

    I also covered a systematic review on digital twins in neuro-oncology. The idea is exciting, but the paper makes it clear that reproducibility, public code, multimodal integration, and external validation are still limiting factors.

    And finally, I talked about a paper I really liked: a large European colorectal cancer cohort built across 26 biobanks in 12 countries. That kind of harmonized, quality-checked dataset matters. A lot. Because better AI starts with better data.

    In this episode, I discuss:

    • Why AI vs human comparisons are harder than they first look
    • The “gold standard paradox” in pathology
    • Why multimodal AI keeps outperforming unimodal models
    • What is holding digital twins back from broader use
    • Why curated multicenter datasets are so important for digital pathology research

    Resources mentioned:

    • Digital Pathology 101 pdf copy
    • Pathology AI Makeover Course
    • DigiPath Digest AI-powered paper summaries

    Papers discussed:

    • https://pubmed.ncbi.nlm.nih.gov/41830415/
    • https://pubmed.ncbi.nlm.nih.gov/41826004/
    • https://pubmed.ncbi.nlm.nih.gov/41824546/
    • https://pubmed.ncbi.nlm.nih.gov/41823607/
    • https://pubmed.ncbi.nlm.nih.gov/41820399/


    Support the show

    Get the "Digital Pathology 101" FREE E-book and join us!

    33 m
  • 196: DigiPath Digest #39 - If AI Sees More Than We Do, What Makes It Clinically Trustworthy?
    Mar 9 2026

    If AI can detect patterns we cannot see, how do we know when its answers are clinically trustworthy?

    In this episode of DigiPath Digest #39, I explore a big-picture question in digital pathology and medical AI. Many models now match or even exceed human performance in specific diagnostic tasks. But most of that evidence comes from controlled or retrospective datasets. So what happens when we try to bring these tools into real clinical workflows?

    I review four recent papers that help frame this challenge and point toward the next steps for trustworthy AI in healthcare.

    You will hear about the role of prospective validation, real-world effectiveness, transparent reporting standards, and multimodal data integration as recurring themes across these studies.

    Key Highlights

    00:00 – Introduction
    What do we do when AI detects signals that humans cannot see? The core challenge is verifying those outputs before trusting them in clinical decision making.

    03:32 – AI Across the Healthcare Continuum
    A narrative review shows AI achieving clinician-level performance in well-defined imaging tasks, including digital pathology. But most evidence comes from retrospective or controlled environments, and prospective validation remains limited.

    08:34 – Multi-Omics and AI in Gastric Biopsy Diagnostics
    Morphology alone cannot fully capture molecular heterogeneity or predict disease progression. Integrating genomics, proteomics, metabolomics, and other omics with AI is shifting gastric pathology toward data-driven precision gastroenterology.

    13:38 – Hyperspectral Imaging for Real-Time Surgical Guidance
    Spectral imaging can analyze tissue composition during surgery without staining, freezing, or contact with the tissue. Studies show promising sensitivity for detecting malignancy and supporting intraoperative decision making.

    17:20 – REFINE Reporting Guideline for Foundation Models and LLMs
    An international consensus guideline introduces a 44-item reporting checklist to standardize how AI studies are described. The goal is transparent, reproducible, and comparable research in medical AI.

    22:35 – Big Takeaway
    AI should be viewed as clinical decision support, not a replacement for clinicians. Real-world validation, ethical governance, and reproducible research standards will determine how these tools enter pathology workflows.

    References (Articles Discussed)

    Artificial Intelligence in Healthcare: From Diagnosis to Rehabilitation
    https://pubmed.ncbi.nlm.nih.gov/41755929/

    Transforming Gastric Biopsy Diagnostics: Integrating Omics Technologies and Artificial Intelligence
    https://pubmed.ncbi.nlm.nih.gov/41751306/

    From Image-Guided Surgery to Computer-Assisted Real-Time Diagnosis with Hyperspectral and Multispectral Imaging
    https://pubmed.ncbi.nlm.nih.gov/41750768/

    REFINE Reporting Guideline for Foundation and Large Language Models in Medical Research
    https://pubmed.ncbi.nlm.nih.gov/41762555/

    If you enjoy staying current with digital pathology and AI research, this episode will help you connect the dots between promising algorithms and practical clinical adoption.

    Support the show

    Get the "Digital Pathology 101" FREE E-book and join us!

    27 m
  • 191: Hallucinations, Agents, and AI in Pathology
    Mar 2 2026

    Clinical Artificial Intelligence in 2026: Accuracy, Education, and Guardrails

    Artificial intelligence is evolving fast in medicine. But how accurate is it? And are we building it safely?

    In this episode of DigiPath Digest, I review five new studies shaping digital pathology, radiology, burn diagnostics, and agent-based large language model systems. We discuss accuracy gains, hallucination filtering, education challenges, and why safeguards are essential before clinical deployment.

    Clear. Practical. Evidence-based.

    ⏱ Topics & Timestamps

    [00:02] Introduction
    Weekly journal club on digital pathology and artificial intelligence.

    [05:13] Hallucination Filtering in Radiology
    Using Discrete Semantic Entropy to detect hallucination-prone responses in Vision Language Models.
    Accuracy improved from 51.7 percent to 76.3 percent after filtering high-entropy answers.
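
    The filtering recipe itself is simple to sketch. This is my illustration of the general idea, not the paper's code: sample the same question several times, group equivalent answers, and abstain when the entropy over the groups is high. The string-normalization equivalence check stands in for the paper's semantic clustering, and the threshold is made up:

    ```python
    # Sketch of entropy-based hallucination filtering (not the paper's code).
    import math
    from collections import Counter

    def semantic_entropy(answers: list[str]) -> float:
        """Entropy over groups of equivalent answers; high = unstable model."""
        # Stand-in for real semantic clustering (e.g., mutual entailment):
        groups = Counter(a.strip().lower() for a in answers)
        n = len(answers)
        return -sum((c / n) * math.log2(c / n) for c in groups.values())

    # Ask the vision-language model the same question several times.
    samples = ["No pneumothorax", "no pneumothorax", "Small left pneumothorax"]
    THRESHOLD = 0.8  # made-up cutoff; the paper tunes this on validation data

    if semantic_entropy(samples) <= THRESHOLD:
        print("Low entropy, answer accepted:", samples[0])
    else:
        print("High entropy, response flagged as hallucination-prone")
    ```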

    [15:04] Artificial Intelligence in Pathology Training
    Supervised use during residency.
    Balancing artificial intelligence adoption with preservation of morphological analysis and critical thinking.

    [20:12] Colorectal Cancer Lymph Node Detection
    Two-stage classification and segmentation model in Whole Slide Imaging.
    Recall 1.0. Specificity 0.935. Dice coefficient 0.818.
    Artificial intelligence as a second opinion.
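
    For readers newer to segmentation metrics, the Dice coefficient quoted above is easy to compute. This is the standard definition, not the paper's implementation:

    ```python
    # Dice coefficient for binary segmentation masks (standard definition).
    import numpy as np

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
        intersection = np.logical_and(pred, truth).sum()
        total = pred.sum() + truth.sum()
        return 2.0 * intersection / total if total else 1.0

    # Toy 4x4 masks: predicted metastasis region vs. annotated region.
    pred  = np.array([[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
    truth = np.array([[0,1,1,0],[0,1,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
    print(f"Dice: {dice(pred, truth):.3f}")  # 2*3 / (4+3) ≈ 0.857
    ```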

    [25:04] Burn Depth Prediction with Artificial Intelligence
    Tissue Doppler Elastography and Harmonic B-mode ultrasound combined with artificial intelligence.
    90 to 95 percent accuracy in human subjects.

    [31:20] Agent-Based Large Language Model Systems
    OpenManus and Manus evaluated in clinical simulations.
    Up to 60.3 percent accuracy. High computational cost.
    89.9 percent of hallucinations filtered by safeguards.

    [40:08] Patient Access to Pathology Images
    Why viewing pathology slides can empower patients and improve communication.

    Resources

    1. https://pubmed.ncbi.nlm.nih.gov/41720937/
    2. https://pubmed.ncbi.nlm.nih.gov/41720644/
    3. https://pubmed.ncbi.nlm.nih.gov/41716065/
    4. https://pubmed.ncbi.nlm.nih.gov/41709317/
    5. https://pubmed.ncbi.nlm.nih.gov/41708802/

    Support the show

    Get the "Digital Pathology 101" FREE E-book and join us!

    30 m
  • 190: Can a Better Stain Improve AI in Pathology?
    Feb 24 2026

    What if one of the biggest sources of diagnostic variability in prostate cancer isn’t the pathologist—but the stain we’ve trusted for decades?

    In this episode, I speak with Professor Ingrid Carlbom, founder of CADESS.AI, about a different way to approach prostate cancer grading—by rethinking staining, segmentation, and AI decision support from the ground up. We explore why 30–40% interobserver variability persists in Gleason grading and how optimized stains combined with explainable AI can significantly reduce that uncertainty.

    Ingrid shares her journey from applied mathematics and computer science into pathology, the skepticism she faced in 2008, and why CADESS.AI chose not to “optimize H&E,” but instead developed a Picrosirius red + hematoxylin stain designed specifically for computational pathology. We discuss how grading at the gland and cellular level improves reproducibility, why explainability matters for trust, and what it really takes to build both stain and software as a single diagnostic workflow.
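
    A stain built for computation pairs naturally with color deconvolution, the classic way to unmix stains before segmentation. Here is a rough illustration with scikit-image; to be clear, this is not the CADESS.AI pipeline, and the Picrosirius red vector below is a placeholder, not a calibrated measurement (the hematoxylin vector is the standard published one):

    ```python
    # Rough illustration of stain unmixing (Ruifrok-Johnston color deconvolution).
    # NOT the CADESS.AI pipeline; the Picrosirius red vector is a placeholder.
    import numpy as np
    from skimage.color import separate_stains

    # Rows = optical-density RGB vectors, one per stain.
    stain_od = np.array([
        [0.65, 0.70, 0.29],   # hematoxylin (standard published vector)
        [0.10, 0.80, 0.58],   # "Picrosirius red" placeholder, not calibrated
        [0.00, 0.00, 0.00],   # third channel, filled in below
    ])
    stain_od[2] = np.cross(stain_od[0], stain_od[1])   # orthogonal residual channel
    stain_od /= np.linalg.norm(stain_od, axis=1, keepdims=True)
    stains_from_rgb = np.linalg.inv(stain_od)          # matrix separate_stains expects

    tile = np.random.rand(64, 64, 3)                   # stand-in for a slide tile
    channels = separate_stains(tile, stains_from_rgb)
    hematoxylin, sirius = channels[..., 0], channels[..., 1]
    # Gland and nucleus segmentation can then run on clean per-stain channels
    # instead of raw RGB, which is where an optimized stain pays off.
    ```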

    This conversation challenges long-held assumptions—and asks whether improving data quality should come before building smarter algorithms.


    Highlights:

    • [00:00–01:08] The problem: 30–40% disagreement in prostate cancer grading
    • [01:08–03:03] Ingrid’s path from applied math to digital pathology
    • [03:03–04:58] Early skepticism toward AI in pathology and fear of replacement
    • [04:58–08:56] Why H&E limits segmentation—and how a new stain changes that
    • [10:55–15:09] Clinical testing: non-inferiority, AI assistance, and NCCN risk stratification
    • [19:47–22:59] Explainable UI: color-coded glands and pathologist override
    • [26:16–27:29] Why grading glands (not whole slides) reduces variability
    • [38:09–41:47] Regulatory challenges of combined stain + AI devices
    • [45:52–48:55] The future of optimized stains in routine pathology


    Resources from This Episode

    • CADESS.AI – Prostate cancer decision support system
    • NCCN prostate cancer risk stratification guidelines

    Support the show

    Get the "Digital Pathology 101" FREE E-book and join us!

    56 m
  • 189: Digital Pathology Deployment Decoded: The Rigorous 4-Phase Framework
    Feb 24 2026

    Sometimes a paper comes out that’s so practical and relevant to what we do in digital pathology that I know we have to talk about it.

    In this episode, I dive into “A Guide for the Deployment, Validation and Accreditation of Clinical Digital Pathology Tools” from Geneva University Hospital (HUG) — one of the most useful, real-world frameworks I’ve seen for bringing digital pathology tools safely into clinical practice.

    If you’ve ever built an AI model and wondered, “Now what?”, this episode is for you.
    Because building the model is often the easy part — deployment is where things get complex.

    This guide breaks the process into four practical phases every lab can follow:

    1️⃣ Pre-Development – Define your clinical need, project scope, and validation plan before writing a single line of code.
    2️⃣ Development – Build and integrate the algorithm in a production-ready environment.
    3️⃣ Validation & Hardening – Turn your research code into a reliable, secure, and compliant clinical tool.
    4️⃣ Production & Monitoring – Keep the tool validated and performing consistently over time.
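
    To make the phases-as-gates idea concrete, here is a toy sketch of how a lab might encode the framework; this is my illustration, not HUG's tooling, and the exit checks are placeholders:

    ```python
    # Toy sketch of four-phase gated deployment (my illustration, not HUG's tooling).
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Phase:
        name: str
        exit_criteria: list[Callable[[], bool]]  # all must pass to advance

        def passed(self) -> bool:
            return all(check() for check in self.exit_criteria)

    # Placeholder checks; a real pipeline would query docs, CI, and QC systems.
    phases = [
        Phase("Pre-Development", [lambda: True]),          # need + validation plan signed off
        Phase("Development", [lambda: True]),              # integrated in production environment
        Phase("Validation & Hardening", [lambda: True]),   # security, compliance, frozen test set
        Phase("Production & Monitoring", [lambda: True]),  # drift dashboards live
    ]

    for phase in phases:
        if not phase.passed():
            raise RuntimeError(f"Blocked at {phase.name}; do not advance")
        print(f"{phase.name}: exit criteria met")
    ```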

    We also discuss what makes qualification, validation, and accreditation different — and why that order really matters.
    You’ll hear about the multidisciplinary team behind these deployments, especially the deployment engineer (DE) — the technical linchpin who turns AI research into clinical reality.

    I share the story of HUG’s H. pylori detection tool, which cut diagnostic time by 26% while maintaining a 0% false negative rate. The team’s secret? Careful planning, quality control, and continuous user feedback — not just great code.

    Other highlights include:

    • Why integration often takes longer than building the AI model itself
    • How to avoid invalidating your validation data
    • What continuous performance monitoring looks like in real labs
    • And why every lab still needs to do local validation, even with proven tools

    If you’re working on digital or computational pathology tools — or just want to understand how AI safely moves from research to routine diagnostics — this episode will give you a roadmap grounded in real experience.

    🎧 Listen now to learn how to move from algorithm to accreditation, step by step.

    And if you’re just getting started in digital pathology, I’d love to give you my free eBook, Digital Pathology 101: All You Need to Know to Start and Continue Your Digital Pathology Journey.
    You’ll find the link to download it in the show notes.

    See you in the episode!

    Support the show

    Get the "Digital Pathology 101" FREE E-book and join us!

    23 m
  • 188: AI in Pathology: Biomarkers, Multimodal Data & the Patient
    Feb 21 2026

    Is AI in pathology actually improving diagnosis — or just adding complexity?

    In DigiPath Digest #37, we reviewed four recent publications covering AI-based biomarker quantification in glioblastoma, real-world digital workflow integration in prostate cancer, multimodal AI combining histopathology and genomics, and patient perspectives on AI in cancer diagnostics.

    This episode connects technical performance with something equally important: trust.

    Episode Highlights

    [00:02] Community & updates
    Digital Pathology 101 free PDF, upcoming patient-focused book, and global attendance.

    [04:07] AI-based image analysis in glioblastoma
    AI showed strong consistency with pathologists when quantifying Ki-67, P53, and PHH3.
    Significant biological correlations (Ki-67 ↔ PHH3, PHH3 ↔ P53) were detected by AI — not by manual assessment.
    Takeaway: computational quantification improves precision.
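
    For context, "quantifying Ki-67" computationally boils down to counting: the proliferation index is positive tumor nuclei over total nuclei. A minimal sketch (not the paper's method; the classifier output is invented):

    ```python
    # Minimal Ki-67 proliferation index from per-nucleus classifier calls
    # (illustrative; not the method used in the paper discussed).
    def ki67_index(nucleus_is_positive: list[bool]) -> float:
        """Fraction of tumor nuclei staining positive for Ki-67."""
        return sum(nucleus_is_positive) / len(nucleus_is_positive)

    # Hypothetical classifier output over one hotspot region:
    calls = [True] * 37 + [False] * 163               # 37 positive of 200 nuclei
    print(f"Ki-67 index: {ki67_index(calls):.1%}")    # 18.5%
    # The AI advantage is counting thousands of nuclei the same way every time,
    # which is where the precision gains come from.
    ```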

    [09:28] Real-world digital workflow + AI in prostate cancer (France)
    AI-pathologist concordance:
    • 93.2% (high probability cancer detection)
    • 99.0% (low probability slides)
    Gleason concordance: 76.6%
    10% failure rate due to pre-analytical artifacts.
    Takeaway: infrastructure and sample quality still matter.

    [15:58] Multimodal AI (MARBIX framework)
    Combines whole slide images + immunogenomic data in a shared latent space using binary “monograms.”
    Performance in lung cancer: 85–89% vs 69–76% unimodal models.
    Takeaway: integrated data improves case retrieval and similarity reasoning.
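
    The shared-latent-space idea can be sketched generically: embed each modality, fuse into one vector, and retrieve similar cases by cosine similarity. This is a generic late-fusion illustration, not the MARBIX architecture; the encoders, file names, and dimensions are invented:

    ```python
    # Generic late-fusion sketch (NOT the MARBIX architecture; names invented).
    import numpy as np

    rng = np.random.default_rng(0)

    def embed_wsi(slide_path: str) -> np.ndarray:      # stand-in for a WSI encoder
        return rng.normal(size=128)

    def embed_omics(profile_path: str) -> np.ndarray:  # stand-in for an omics encoder
        return rng.normal(size=128)

    def fuse(slide_path: str, profile_path: str) -> np.ndarray:
        """Concatenate per-modality embeddings; normalize for cosine retrieval."""
        v = np.concatenate([embed_wsi(slide_path), embed_omics(profile_path)])
        return v / np.linalg.norm(v)

    query = fuse("case_001.svs", "case_001_omics.csv")
    archive = {f"case_{i:03d}": fuse(f"case_{i:03d}.svs", f"case_{i:03d}_omics.csv")
               for i in range(2, 6)}
    best = max(archive, key=lambda k: float(query @ archive[k]))
    print("Most similar archived case:", best)
    ```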

    [22:13] AI-powered paper summary subscription introduced
    Structured summaries for busy professionals who want more than abstracts.

    [26:17] Patient roundtable on AI in pathology (Belgium)
    Patients expect:
    • Better accuracy
    • Faster turnaround
    • Stronger collaboration

    Trust is high when:
    • Algorithms use diverse datasets
    • Pathologists retain final responsibility

    Clinical validity mattered more than full algorithm transparency.
    Privacy concerns focused more on insurer misuse than cloud transfer.

    Key Takeaways

    • AI improves biomarker precision in glioblastoma.
    • Digital pathology implementation works — but pre-analytics can limit AI performance.
    • Multimodal AI represents the next meaningful step in precision diagnostics.
    • Patients are not afraid of AI — they want validation, oversight, and governance.
    • Human–AI collaboration remains central.

    If you’re working in digital pathology, computational pathology, or precision oncology, this episode connects evidence, implementation, and patient perspective.

    Support the show

    Get the "Digital Pathology 101" FREE E-book and join us!

    21 m
  • 184: Digital Pathology Guidelines: What Every Lab Must Get Right
    Feb 20 2026

    What actually needs to be in place before digital pathology can replace the microscope?

    In this episode of DigiPath Digest, I walk through the 2026 Polish Society of Pathologists guidelines and translate them into practical steps for real pathology labs. This isn’t theory. It’s about hardware fidelity, data integrity, validation, and AI integration — and what each of these actually requires in daily workflow.

    We talk about scanner resolution standards (≤0.26 μm per pixel), 4K monitor calibration, visually lossless compression (20:1), scalable storage, pathologist-driven validation, and what “non-inferiority” truly means.

    Digital pathology is not just a change of medium. It’s an operational shift.

    Episode Highlights

    [00:02] Community & growth
    1,600+ new newsletter subscribers, 10,000+ Facebook members, and free Digital Pathology 101 book access.

    [07:20] The 4 pillars of adoption
    Hardware fidelity · Data integrity · Clinical validation · Future integration.

    [08:30] Hardware requirements
    40x equivalent scanning (≤0.26 μm/px), 4K monitors, >300 cd/m² luminance, 10-bit color depth.

    [12:00] Workflow & throughput
    200–300 slides/day per scanner, automated focus control, urgent case prioritization.

    [17:25] Storage & archiving
    ~1 GB per slide. Active archive (6–24 months). Long-term retention (10–20 years). GDPR compliance & TLS encryption.
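
    Those figures turn into a sizing estimate in a few lines; the arithmetic below is mine, and the inputs should be swapped for your lab's real volumes:

    ```python
    # Back-of-the-envelope storage sizing from the guideline's figures
    # (my arithmetic; replace the inputs with your lab's actual volumes).
    GB_PER_SLIDE = 1.0        # ~1 GB per slide
    SLIDES_PER_DAY = 250      # mid-range of 200-300 slides/day per scanner
    WORKDAYS_PER_YEAR = 250

    active_months = 24        # upper end of the 6-24 month active archive
    retention_years = 20      # upper end of the 10-20 year retention window

    yearly_tb = GB_PER_SLIDE * SLIDES_PER_DAY * WORKDAYS_PER_YEAR / 1000
    print(f"New data per scanner per year: {yearly_tb:.1f} TB")            # 62.5 TB
    print(f"Active archive (fast tier): {yearly_tb * active_months / 12:.0f} TB")
    print(f"Long-term retention: {yearly_tb * retention_years:.0f} TB")    # 1250 TB
    ```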

    [23:09] Validation philosophy
    Pathologist-centered validation.
    Two phases:
    • Familiarization (~20 retrospective cases)
    • Dual review with discrepancy tracking
    Goal: digital must be non-inferior to glass.
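
    As a toy illustration of the dual-review phase (not the guideline's protocol), discrepancy tracking reduces to comparing each case's glass and digital reads and logging every disagreement for adjudication:

    ```python
    # Toy dual-review discrepancy tracker (illustrative, not the guideline's protocol).
    cases = [  # (case_id, glass_diagnosis, digital_diagnosis), invented examples
        ("C1", "benign", "benign"),
        ("C2", "carcinoma", "carcinoma"),
        ("C3", "atypia", "benign"),
    ]

    discrepancies = [(cid, g, d) for cid, g, d in cases if g != d]
    concordance = 1 - len(discrepancies) / len(cases)
    print(f"Glass vs. digital concordance: {concordance:.1%}")
    for cid, g, d in discrepancies:
        print(f"  Adjudicate {cid}: glass={g!r} vs digital={d!r}")
    # Roughly speaking, non-inferiority means the digital discrepancy rate
    # must be no worse than what repeated glass reads produce.
    ```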

    [29:03] AI in digital pathology
    AI supports quantification (Ki-67, HER2, ER/PR, PD-L1), tumor detection, and future multimodal predictions — but pathologists remain central.

    [33:26] Intraoperative telepathology
    <5-minute scan-to-view time.
    Minimum 100 Mbps upload.
    Redundancy and safety protocols required.
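
    A quick sanity check (my arithmetic, ignoring protocol overhead) shows why 100 Mbps is the floor for a five-minute scan-to-view target:

    ```python
    # Why ~100 Mbps upload is the floor for a <5-minute scan-to-view target
    # (my arithmetic; real transfers add protocol overhead).
    slide_gb = 1.0                                # ~1 GB frozen-section slide
    uplink_mbps = 100                             # guideline minimum upload speed

    transfer_s = slide_gb * 8_000 / uplink_mbps   # GB -> megabits, then / Mbps
    print(f"Transfer alone: {transfer_s:.0f} s (~{transfer_s / 60:.1f} min)")
    # ~80 s of the 5-minute budget, leaving the rest for scanning and rendering.
    ```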

    [34:50] Can digital cameras replace scanners?
    Hybrid workflows exist. Regulatory compliance still applies.

    [38:19] Adoption checklist summary
    Certified scanners (CE-IVD/FDA), calibrated monitors, scalable storage, phased validation, and documented QC.

    Key Takeaways

    • Digital pathology adoption is a structured process — not just buying a scanner.
    • Validation is individualized and tissue-specific.
    • Infrastructure and quality control are as important as image quality.
    • AI enhances reproducibility and quantification but does not replace pathologists.
    • Regulatory compliance and data governance are non-negotiable.

    Support the show

    Get the "Digital Pathology 101" FREE E-book and join us!

    34 m