Learning Bayesian Statistics

By: Alexandre Andorra

Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply understand what Bayesian inference is?

Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow.

When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible.

So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections, or understand how diseases spread and can ultimately be stopped. But this show is not only about successes -- it's also about failures, because that's how we learn best.

So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners!

My name is Alex Andorra, by the way. By day, I'm a senior data scientist. By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages PyMC and ArviZ. I also love Nutella, but I don't like talking about it – I prefer eating it.

So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and unlock exclusive Bayesian swag on Patreon!

2025 Alexandre Andorra
Science
Episodes
  • Pricing Under Uncertainty: A Bayesian Workflow
    Apr 16 2026

    Today's clip is from Episode 152 of the podcast, featuring Daniel Saunders. In this conversation, Daniel explores how Bayesian decision theory handles real-world risk aversion beyond the textbook maximum expected utility framework.

    The key insight: classical Bayesian decision theory assumes risk neutrality, but in practice, people and businesses are risk-averse. Using a pricing optimization example, Daniel shows how uncertainty varies dramatically across price points—lower prices have predictable demand, while higher prices create wide uncertainty in profits. This asymmetry matters when you want safer decisions.

    Daniel introduces exponential utility functions—a technique from economics that models diminishing returns on money. By adjusting a risk-aversion parameter, you can see how increasing risk aversion shifts optimal decisions away from high-uncertainty, high-profit scenarios toward more predictable outcomes.
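To make the mechanism concrete, here is a minimal Monte Carlo sketch with made-up numbers (not the model from the episode): profit is simulated at several candidate prices under a hypothetical demand model whose noise grows with price, and a CARA (exponential) utility U(x) = (1 - e^(-ax))/a is averaged over the simulations.

```python
import math
import random

random.seed(42)

# Hypothetical demand model (made-up numbers, not the episode's model):
# expected demand falls with price, while demand noise grows with price,
# so higher prices mean higher expected profit but wider uncertainty.
def simulate_profits(price, n=10_000):
    profits = []
    for _ in range(n):
        demand = max(0.0, random.gauss(100 - 5 * price, 3 * price))
        profits.append(price * demand)
    return profits

def exponential_utility(profit, a):
    # CARA utility U(x) = (1 - exp(-a*x)) / a; risk-neutral as a -> 0
    if a == 0:
        return profit
    return (1.0 - math.exp(-a * profit)) / a

def optimal_price(a, prices=(4, 6, 8, 10)):
    def expected_utility(p):
        profits = simulate_profits(p)
        return sum(exponential_utility(x, a) for x in profits) / len(profits)
    return max(prices, key=expected_utility)

for a in (0.0, 0.01, 0.05):
    print(f"risk aversion a={a}: optimal price = {optimal_price(a)}")
```

With a = 0 the maximizer picks the highest expected profit regardless of spread; as a grows, prices whose profit distribution has a fat left tail lose expected utility first, so the chosen price drifts toward safer, lower options.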

    The broader lesson: optimal decision-making requires separating the modeling process from the decision process, allowing you to build in constraints and risk adjustments that pure expected utility maximization would miss.

    Get the full discussion here

    Support & Resources
    → Support the show on Patreon: https://www.patreon.com/c/learnbayesstats
    → Bayesian Modeling Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

    5 m
  • #155 Probabilistic Programming for the Real World, with Andreas Munk
    Apr 8 2026

    Support & Resources
    → Support the show on Patreon: https://www.patreon.com/c/learnbayesstats
    → Bayesian Modeling Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

    Takeaways:

    Q: Why is bridging deep learning and probabilistic programming so important?
    A: Deep learning is extraordinarily good at fitting complex functions, but it throws away uncertainty. Probabilistic programming keeps uncertainty explicit throughout. Combining the two – as in inference compilation – lets you get the expressiveness of neural networks while still doing proper Bayesian inference.

    Q: What is inference compilation and how does it relate to amortized inference?
    A: Amortized inference is the general idea of training a model upfront so you don't have to run expensive inference from scratch every single time. Inference compilation is a specific form of amortized inference where a neural network is trained to propose good posterior samples for a given probabilistic program – essentially learning to do inference rather than computing it fresh each query.
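To make the amortization idea concrete, here is a deliberately tiny sketch (my own toy example, not PyProb): for a conjugate Gaussian model, the "inference network" can be just a linear map fit by least squares on prior-predictive simulations. Training happens once; every subsequent query is a single multiply-add.

```python
import random

random.seed(0)

# Toy model: theta ~ N(0, 1), observation x | theta ~ N(theta, 0.5^2).
# The exact posterior mean here is x * 1 / (1 + 0.25) = 0.8 * x.
SIGMA = 0.5

# --- Training phase (done once, upfront) --------------------------------
# Simulate (theta, x) pairs from the joint distribution, then fit a linear
# "inference network" theta_hat = w*x + b by least squares.  Regressing
# theta on x over joint samples approximates E[theta | x].
pairs = []
for _ in range(50_000):
    theta = random.gauss(0, 1)
    x = random.gauss(theta, SIGMA)
    pairs.append((theta, x))

n = len(pairs)
mx = sum(x for _, x in pairs) / n
mt = sum(t for t, _ in pairs) / n
cov = sum((x - mx) * (t - mt) for t, x in pairs) / n
var = sum((x - mx) ** 2 for _, x in pairs) / n
w = cov / var
b = mt - w * mx

# --- Query phase (cheap, repeated) --------------------------------------
def posterior_mean(x_obs):
    return w * x_obs + b

print(f"learned weight {w:.3f} (exact Bayes answer: 0.800)")
print(f"posterior mean for x=2.0: {posterior_mean(2.0):.3f}")
```

In inference compilation proper, the linear map is replaced by a neural network trained on simulations from the probabilistic program, and it proposes full posterior samples rather than just a mean, but the training-once, querying-cheaply structure is the same.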

    Q: What is PyProb and what problems does it solve?
    A: PyProb is a probabilistic programming library designed specifically to support amortized inference workflows. It lets you write probabilistic models in Python and then train inference networks on top of them, making methods like inference compilation practical for real-world simulators and scientific models.

    Q: What are probabilistic surrogate networks and why do they matter?
    A: A probabilistic surrogate network is a learned approximation of a complex, expensive simulator that preserves uncertainty. Instead of running a costly simulation thousands of times, you train a surrogate that can answer probabilistic queries much faster – crucial for applications like risk modeling where speed and uncertainty quantification both matter.
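Here is a minimal sketch of the surrogate idea, with a hypothetical simulator and a lookup table standing in for a real neural network: the surrogate records the output *distribution* at each input, not just a point estimate, so queries stay probabilistic.

```python
import math
import random

random.seed(7)

# Stand-in for an expensive stochastic simulator (hypothetical): the
# output distribution depends on the input x, with noise growing in x.
def expensive_simulator(x):
    return random.gauss(2.0 * x + 1.0, 0.5 + 0.5 * x)

# --- Build the surrogate once -------------------------------------------
# For each grid point, run the simulator a few thousand times and record
# the mean and std of its output.  A neural surrogate would interpolate
# between inputs; this lookup table is the simplest version of the idea.
grid = [i * 0.5 for i in range(9)]          # x in 0.0 .. 4.0
table = {}
for x in grid:
    ys = [expensive_simulator(x) for _ in range(2000)]
    mean = sum(ys) / len(ys)
    std = math.sqrt(sum((y - mean) ** 2 for y in ys) / len(ys))
    table[x] = (mean, std)

# --- Query the surrogate cheaply ----------------------------------------
def surrogate_sample(x):
    nearest = min(grid, key=lambda g: abs(g - x))
    mean, std = table[nearest]
    # uncertainty is preserved, not discarded: we sample, not predict
    return random.gauss(mean, std)

samples = [surrogate_sample(2.0) for _ in range(1000)]
print(f"surrogate at x=2.0: mean ~ {sum(samples) / 1000:.2f}")
```

The payoff is exactly the trade described above: after the upfront cost of building the table, each query is a dictionary lookup and one random draw instead of a full simulator run, and the answer still comes with a spread.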

    Chapters:

    00:00:00 Introduction to Bayesian Inference and Its Barriers
    00:03:51 Andreas Munk's Journey into Statistics
    00:10:09 Bridging the Gap: Bayesian Inference in Real-World Applications
    00:15:56 Deep Learning Meets Probabilistic Programming
    00:22:05 Understanding Inference Compilation and Amortized Inference
    00:28:14 Exploring PyProb: A Tool for Amortized Inference
    00:33:55 Probabilistic Surrogate Networks and Their Applications
    00:38:10 Building Surrogate Models for Probabilistic Programming
    00:45:44 The Challenge of Bayesian Inference in Enterprises
    00:52:57 Communicating Uncertainty to Stakeholders
    01:01:09 Democratizing Bayesian Inference with Evara
    01:06:27 Insurance Pricing and Latent Variables
    01:16:41 Modeling Uncertainty in Predictions
    01:20:29 Dynamic Inference and Decision-Making
    01:23:17 Updating Models with Actual Data
    01:26:11 The Future of Bayesian Sampling in Excel
    01:31:54 Navigating Business Challenges and Growth
    01:36:40 Exploring Language Models and Their Applications
    01:38:35 The Quest for Better Inference Algorithms
    01:41:01 Dinner with Great Minds: A Thought Experiment

    Thank you to my Patrons for making this episode possible!

    1 h y 54 m
  • Bitesize | "What Would Have Happened?" - Bayesian Synthetic Control Explained
    Apr 2 2026

    Today's clip is from Episode 154 of the podcast, with Thomas Pinder.

    In this conversation, Thomas Pinder explains how Bayesian methods naturally lend themselves to causal modeling, and why that matters for real-world business decisions. The key insight is that causal questions in industry are rarely black and white: instead of a single treatment effect, you get a full posterior distribution, credible intervals, and the ability to communicate the probability that an effect is positive, which is far more useful to stakeholders than a p-value.

    Thomas then dives into Bayesian Synthetic Control, a reframing of the classic synthetic control method from a constrained optimization problem into a Bayesian regression problem. Rather than optimizing weights on a simplex, you place a Dirichlet prior on the regression coefficients, which turns out to be not just mathematically elegant but practically richer: you can express prior beliefs about how many control units are informative, set the concentration parameter accordingly, or place a gamma hyperprior on that parameter and let the data decide. The result is a more flexible, less fragile counterfactual, implemented cleanly in PyMC or NumPyro.
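As a rough, dependency-free illustration of the reframing (simulated data and a crude importance sampler, not the PyMC/NumPyro implementation discussed in the episode): weights drawn from a Dirichlet prior live on the simplex by construction, and reweighting those draws by the pre-period likelihood yields a posterior over weights and hence a Bayesian counterfactual.

```python
import math
import random

random.seed(3)

T_PRE, T_POST, K = 30, 10, 3        # pre/post periods, control units
TRUE_W = [0.6, 0.3, 0.1]            # weights used to simulate the data
NOISE = 0.5                         # noise scale, assumed known here

# Simulated data (hypothetical): K control series, treated unit equal to
# a fixed weighted combination of them plus noise in the pre-period.
controls = [[random.gauss(10 + k, 2) for _ in range(T_PRE + T_POST)]
            for k in range(K)]
treated_pre = [sum(w * controls[k][t] for k, w in enumerate(TRUE_W))
               + random.gauss(0, NOISE) for t in range(T_PRE)]

def dirichlet(alpha, k):
    # Dirichlet(alpha, ..., alpha) draw via normalized Gamma variates
    g = [random.gammavariate(alpha, 1.0) for _ in range(k)]
    s = sum(g)
    return [x / s for x in g]

def log_lik(w):
    # Gaussian pre-period likelihood of the weighted control combination
    ll = 0.0
    for t in range(T_PRE):
        mu = sum(w[k] * controls[k][t] for k in range(K))
        ll += -0.5 * ((treated_pre[t] - mu) / NOISE) ** 2
    return ll

# Crude importance sampler over the simplex: Dirichlet(1,...,1) prior
# draws, reweighted by the likelihood.  Proper MCMC would do this better;
# the point is only that the weights are a posterior, not an optimum.
draws = [dirichlet(1.0, K) for _ in range(20_000)]
lls = [log_lik(w) for w in draws]
m = max(lls)
iw = [math.exp(ll - m) for ll in lls]
Z = sum(iw)
post_w = [sum(iw[i] * draws[i][k] for i in range(len(draws))) / Z
          for k in range(K)]
print("posterior-mean weights:", [round(w, 2) for w in post_w])

# Bayesian counterfactual: what the treated unit would have looked like
# in the post-period without treatment, under the posterior-mean weights.
counterfactual = [sum(post_w[k] * controls[k][t] for k in range(K))
                  for t in range(T_PRE, T_PRE + T_POST)]
```

Raising the Dirichlet concentration above 1 encodes a belief that many controls contribute a little; pushing it below 1 favors sparse solutions where a few controls dominate, which is the prior-elicitation flexibility described above.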

    Get the full discussion here

    Support & Resources
    → Support the show on Patreon: https://www.patreon.com/c/learnbayesstats
    → Bayesian Modeling Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

    5 m