
Learning Bayesian Statistics

By: Alexandre Andorra

Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is?

Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow.

When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible.

So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect black matter in outer space, forecast elections or understand how diseases spread and can ultimately be stopped. But this show is not only about successes -- it's also about failures, because that's how we learn best.

So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners!

My name is Alex Andorra by the way. By day, I'm a senior data scientist. By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages PyMC and ArviZ. I also love Nutella, but I don't like talking about it -- I prefer eating it.

So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and unlock exclusive Bayesian swag on Patreon!

© 2025 Alexandre Andorra
Episodes
  • #157 Amortized Inference & BayesFlow in Practice, with Stefan Radev
    May 6 2026

    Support & Resources
    → Support the show on Patreon
    → Bayesian Modeling Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!


    Takeaways:

    Q: What is simulation-based inference and what does "sim-to-real" mean?
    A: Simulation-based inference (SBI) uses a mechanistic simulator as an epistemic tool: you train a neural network on a large number of labeled simulations and then deploy it on real, unlabeled data. The "sim-to-real" framing captures the key asymmetry -- your network never sees real data during training, only simulations, but it generalizes to real observations at inference time. This is the opposite of the more common "synthetic-for-ML" approach, where fake data is used purely to augment real training data.
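
    To make the sim-to-real loop concrete, here is a minimal toy sketch of the idea -- plain NumPy and scikit-learn, not the BayesFlow API, with a point estimate standing in for the full neural posterior that real amortized inference learns:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def simulator(theta, n=50):
        # Mechanistic model: noisy observations centered on theta.
        return rng.normal(loc=theta, scale=1.0, size=n)

    # Training phase: the network only ever sees labeled simulations.
    thetas = rng.normal(0.0, 2.0, size=5000)  # parameter draws from the prior
    X_sim = np.array([[x.mean(), x.std()] for x in map(simulator, thetas)])
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_sim, thetas)

    # Inference phase: real, unlabeled data the network never saw in training.
    x_real = rng.normal(1.3, 1.0, size=50)  # stands in for observed data
    print(net.predict([[x_real.mean(), x_real.std()]]))  # amortized estimate, ~1.3
    ```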

    Q: What is the amortized inference agent skill and what does it do?
    A: It's an open-source AI agent skill, co-developed by Stefan and Alexandre, that teaches an AI coding agent to run a complete, state-of-the-art amortized inference workflow. Because amortized inference is recent enough that it's underrepresented in LLM training data, vanilla agents tend to get it wrong. The skill injects the right methodology: it guides the agent to set up the simulator, choose the right network architecture, run a pilot, train with appropriate diagnostics, and produce an actionable report -- without the user needing to know the details.

    Q: What is calibration coverage and why should you never skip it?
    A: Calibration coverage tells you whether your posterior uncertainty is honest -- whether your credible intervals actually contain the true parameter at the right frequency. A model can show poor parameter recovery yet still be well-calibrated (because it's falling back on the prior), or it can appear to recover parameters while being poorly calibrated. Running calibration diagnostics both in-sample and out-of-sample is especially revealing for hierarchical models, which often appear to underfit in-sample but generalize much better out-of-sample thanks to shrinkage.
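
    A hedged sketch of what such a coverage check computes, using a toy conjugate model where the posterior is available in closed form (an illustration only, not the BayesFlow diagnostics API):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, sigma, prior_mu, prior_sd = 20, 1.0, 0.0, 2.0
    trials, hits = 2000, 0
    for _ in range(trials):
        theta = rng.normal(prior_mu, prior_sd)  # draw the "true" parameter from the prior
        x = rng.normal(theta, sigma, size=n)    # simulate a dataset under that truth
        # Conjugate normal-normal posterior for the mean.
        post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
        post_mu = post_var * (prior_mu / prior_sd**2 + x.sum() / sigma**2)
        lo, hi = stats.norm.ppf([0.05, 0.95], post_mu, np.sqrt(post_var))
        hits += lo <= theta <= hi               # did the 90% interval cover the truth?
    print(f"empirical 90% coverage: {hits / trials:.3f}")  # ~0.90 if calibrated
    ```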

    Full takeaways here

    Chapters:
    00:00:00 How does amortized inference fit into the Bayesian workflow?
    00:12:03 What does "sim-to-real" mean in simulation-based inference?
    00:15:57 Why is amortized inference particularly suited to psychology and neuroscience?
    00:21:51 What is the amortized inference agent skill?
    00:39:00 What is calibration coverage and how do you interpret it?
    00:41:50 How do you decide what to do next after your first training run?
    00:44:53 How do actionable insights make Bayesian workflows more usable?
    00:49:08 What are the unique challenges of hierarchical models in amortized inference?
    01:00:51 What is the current state of BayesFlow's support for hierarchical models?
    01:05:00 What are the main failure modes of amortized inference and how do you handle model misspecification?

    Thank you to my Patrons for making this episode possible!

    Links from the show

    1 hr and 19 mins
  • How to Design Better Experiments with Expected Information Gain
    May 1 2026

Today's clip is from Episode 156 featuring Adam Foster. In this conversation, Adam explains Expected Information Gain (EIG) -- the scoring function at the heart of optimal Bayesian experimental design.


The core idea: when designing an experiment, you need a way to compare possible designs and pick the best one. EIG is that score -- it tells you how much information you expect to gain about your model parameters from a given design. The higher the EIG, the better the design.

Adam builds intuition for EIG from two directions that sound completely different but lead to the same place. First, the Bayesian angle: simulate datasets from your prior predictive distribution, run inference on each, measure how much uncertainty dropped, and average across datasets. Second, a classic puzzle -- the 12 prisoners balance scale problem -- where the best weighing strategy turns out to be the one that makes all three outcomes (tip left, tip right, balance) equally likely. This maximizes outcome entropy, which is exactly what EIG does: it steers you toward designs where every possible result narrows down your hypotheses as fast as possible.
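
    As a quick numerical check of that entropy intuition (my own illustration, not code from the episode): information per weighing is the entropy of the outcome distribution, and the uniform distribution over the three outcomes maximizes it.

    ```python
    import numpy as np

    def entropy_bits(p):
        # Shannon entropy of an outcome distribution, in bits.
        p = np.asarray(p, dtype=float)
        return -(p * np.log2(p)).sum()

    print(entropy_bits([1/3, 1/3, 1/3]))  # ~1.585 bits: all three outcomes equally likely
    print(entropy_bits([0.5, 0.4, 0.1]))  # ~1.361 bits: a less informative weighing
    ```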

The takeaway: good experimental design isn't about intuition or convention -- it's about making your data work as hard as possible, and EIG gives you a rigorous way to do that.

    Get the full discussion here

    Support & Resources
    → Support the show on Patreon
    → Bayesian Modeling Course (first 2 lessons free)


Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

    6 mins
  • #156 Bayesian Experimental Design & Active Learning, with Adam Foster
    Apr 25 2026

    Support & Resources
    → Support the show on Patreon
    → Bayesian Modeling Course (first 2 lessons free)

Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!

Takeaways:

    Q: What is Bayesian experimental design and what problem does it solve?
    A: It's the practice of using a Bayesian model to decide how to collect data before you collect it. Most statistical thinking starts with a fixed dataset. Bayesian experimental design sits upstream -- you have control over experimental parameters (which questions to ask, which reagents to mix, which conditions to test) and you want to choose them optimally. The Bayesian angle is to ask: what new data would most reduce my current uncertainty?

    Q: When should you actually use Bayesian experimental design?
    A: When two conditions hold: you have active control over how data is collected (not just passive observation), and you have a Bayesian model whose prior predictive distribution gives a reasonable picture of what typical data might look like. It's especially valuable when data collection is expensive or irreversible -- when the "committal step" of running an experiment has real cost, it's worth doing the analysis first.

    Q: What is expected information gain (EIG) and why is it central to Bayesian experimental design?
A: EIG is the score you assign to a candidate experimental design -- the amount of information you expect to gain about your model parameters by running an experiment with that design. You compute it by simulating datasets from your prior predictive, doing Bayesian inference on each, and averaging how much the uncertainty decreased. What's remarkable is that you can derive the same quantity from two completely different starting points -- reducing parameter uncertainty, or maximizing outcome uncertainty while correcting for noise -- and arrive at the same formula. That convergence is why EIG keeps being rediscovered independently across fields.
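
    A hedged nested Monte Carlo sketch of that recipe, on a toy linear-Gaussian model where the design d scales how strongly the data depend on the parameter (the model and all names are illustrative assumptions, not code from the episode):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def eig(d, n_outer=2000, n_inner=500):
        # EIG(d) = E[ log p(y | theta, d) - log p(y | d) ], estimated with
        # an outer Monte Carlo over prior-predictive draws and an inner
        # Monte Carlo over fresh prior draws for the marginal p(y | d).
        theta = rng.normal(0.0, 1.0, size=n_outer)     # prior draws
        y = rng.normal(theta * d, 1.0)                 # prior predictive at design d
        log_lik = stats.norm.logpdf(y, theta * d, 1.0)
        theta_in = rng.normal(0.0, 1.0, size=n_inner)  # inner draws for the marginal
        log_marg = stats.norm.logpdf(y[:, None], theta_in[None, :] * d, 1.0)
        log_marg = np.logaddexp.reduce(log_marg, axis=1) - np.log(n_inner)
        return (log_lik - log_marg).mean()

    for d in (0.1, 1.0, 3.0):
        print(f"design d={d}: EIG ~ {eig(d):.3f} nats")  # bigger |d| is more informative here
    ```

    For this toy model the exact answer is 0.5 * log(1 + d^2), so the estimates can be checked against 0.005, 0.347, and 1.151 nats.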

    Full takeaways here


    Chapters:
    00:00:00 What is Bayesian experimental design and why does it matter?
    00:06:02 What problem does Bayesian experimental design actually solve?
    00:08:54 When should practitioners use Bayesian experimental design?
    00:12:00 Is Bayesian experimental design changing how scientists work in practice?
    00:15:04 What are the limitations of Bayesian experimental design?
    00:17:55 What is expected information gain (EIG) and how does it work?
    00:21:05 How do you compute expected information gain in practice?
    00:23:48 What is active learning and how does it connect to Bayesian experimental design?
    00:41:02 What is active learning by disagreement?
    00:48:57 What is deep adaptive design and when should you use it?
    00:56:02 How is Bayesian experimental design applied in protein dynamics and quantum chemistry?
    01:01:58 What does a practical Bayesian experimental design workflow look like?

    Thank you to my Patrons for making this episode possible!

    Links from the show

    1 hr and 17 mins