Episodes

  • Bitesize | "What Would Have Happened?" - Bayesian Synthetic Control Explained
    Apr 2 2026

    Today's clip is from Episode 154 of the podcast, with Thomas Pinder.

    In this conversation, Thomas Pinder explains how Bayesian methods naturally lend themselves to causal modeling, and why that matters for real-world business decisions. The key insight is that causal questions in industry are rarely black and white: instead of a single treatment effect, you get a full posterior distribution, credible intervals, and the ability to communicate the probability that an effect is positive, which is far more useful to stakeholders than a p-value.
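    As a toy illustration of that last point (hypothetical numbers, not from the episode): given posterior draws of a treatment effect, the probability the effect is positive is just the fraction of draws above zero, and a credible interval comes straight from the quantiles of the draws.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws for a treatment effect (e.g. revenue lift),
# standing in for samples from a fitted Bayesian model.
effect_samples = rng.normal(loc=0.8, scale=1.0, size=10_000)

# Stakeholder-friendly summaries, computed directly from the draws:
p_positive = (effect_samples > 0).mean()                 # P(effect > 0)
ci_low, ci_high = np.quantile(effect_samples, [0.03, 0.97])  # 94% credible interval

print(f"P(effect > 0) = {p_positive:.2f}")
print(f"94% credible interval: [{ci_low:.2f}, {ci_high:.2f}]")
```

    No test statistic or p-value is involved: the summary is a direct probability statement about the quantity stakeholders care about.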

    Thomas then dives into Bayesian Synthetic Control, a reframing of the classic synthetic control method from a constrained optimization problem into a Bayesian regression problem. Rather than optimizing weights on a simplex, you place a Dirichlet prior on the regression coefficients, which turns out to be not just mathematically elegant but practically richer: you can express prior beliefs about how many control units are informative, set the concentration parameter accordingly, or let a gamma hyperprior on that parameter let the data decide. The result is a more flexible, less fragile counterfactual, implemented cleanly in PyMC or NumPyro.
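    To make the simplex point concrete, here is a NumPy-only sketch (toy numbers, not the episode's actual PyMC/NumPyro implementation): every draw from a Dirichlet is non-negative and sums to one by construction, so each draw of the coefficients yields a convex combination of control units, exactly the constraint classic synthetic control enforces via optimization. In a real model the Dirichlet would be the prior inside a Bayesian regression, with inference updating the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 control units observed over 20 pre-treatment periods
# (random walks standing in for real outcome series).
n_controls, n_periods = 5, 20
controls = rng.normal(size=(n_periods, n_controls)).cumsum(axis=0)

# Dirichlet over the regression coefficients: weights live on the simplex.
# A concentration below 1 encodes a prior belief that only a few control
# units are informative; a Gamma hyperprior on it would let the data decide.
alpha = np.full(n_controls, 0.5)
weights = rng.dirichlet(alpha, size=1_000)   # (1000, 5): one weight vector per draw

# Each weight draw implies one counterfactual trajectory for the treated unit.
counterfactuals = controls @ weights.T       # (20, 1000)

# The simplex constraint holds for every draw, by construction.
assert np.allclose(weights.sum(axis=1), 1.0)
assert (weights >= 0).all()
```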

    Get the full discussion here

    Support & Resources
    → Support the show on Patreon: https://www.patreon.com/c/learnbayesstats
    → Bayesian Modeling Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

    5 m
  • #154 Bayesian Causal Inference at Scale, with Thomas Pinder
    Mar 25 2026

    • Support & get perks!

    Bayesian Modeling course (first 2 lessons free)

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

    Takeaways:

    Q: Why was GPJax created and how does it benefit researchers?
    A: GPJax was developed to provide a high-performance, flexible framework for Gaussian processes (GPs) within the JAX ecosystem. It allows researchers to move beyond black-box implementations and easily experiment with custom kernels and model structures while leveraging JAX’s automatic differentiation and GPU acceleration.

    Q: What are the primary advantages of using Gaussian processes for data modeling?
    A: Gaussian processes are highly effective at modeling complex, nonlinear relationships in data. Unlike many machine learning methods that only provide a point estimate, GPs offer built-in uncertainty quantification, which is essential for understanding the reliability of predictions in research and industry.

    Q: How does the GPJax and NumPyro integration enhance probabilistic modeling?
    A: The integration allows users to treat GPJax models as components within a larger NumPyro probabilistic program. This combination enables the use of advanced sampling techniques like NUTS (No-U-Turn Sampler), making it easier to build and fit complex hierarchical models that include Gaussian processes.

    Q: What are the main challenges when applying Gaussian processes to high-dimensional data?
    A: High-dimensional data significantly complicates GP modeling due to the curse of dimensionality and the cubic scaling of computational costs. In high dimensions, defining meaningful distance metrics for kernels becomes harder, often requiring specialized techniques like sparse GPs or dimensionality reduction to remain tractable.
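    To make the uncertainty-quantification point concrete, here is a minimal exact GP regression in plain NumPy (a generic textbook sketch, not GPJax code, with made-up data): the posterior yields both a mean and a pointwise standard deviation, and the standard deviation grows away from the training points.

```python
import numpy as np

def rbf_kernel(xa, xb, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel, a common default for GP regression."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Tiny synthetic dataset (illustrative only).
rng = np.random.default_rng(1)
x_train = np.array([-2.0, -1.0, 0.0, 1.5, 2.5])
y_train = np.sin(x_train) + 0.1 * rng.normal(size=x_train.size)
x_test = np.linspace(-3, 3, 50)

noise = 0.1**2
K = rbf_kernel(x_train, x_train) + noise * np.eye(x_train.size)
K_s = rbf_kernel(x_train, x_test)
K_ss = rbf_kernel(x_test, x_test)

# Exact GP posterior over f(x_test): mean and covariance given the data.
K_inv_y = np.linalg.solve(K, y_train)
mean = K_s.T @ K_inv_y
cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
std = np.sqrt(np.clip(np.diag(cov), 0, None))  # pointwise predictive uncertainty
```

    The two `np.linalg.solve` calls are also where the cubic scaling mentioned above comes from: they factor an n-by-n matrix, which is what sparse GP approximations are designed to avoid.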

    Full takeaways here!

    Chapters:

    11:40 What is GPJax and how does it simplify Gaussian Process modeling?
    15:48 How are Bayesian methods used for experimentation and causal inference in industry?
    18:40 How do you implement Bayesian Synthetic Control?
    32:17 What is Bayesian Synthetic Difference-in-Differences?
    39:44 What are the research applications and supported methods for the GPJax library?
    45:47 What are the primary software and computational bottlenecks when scaling Gaussian Processes?
    49:02 What are the real-world industrial applications of Gaussian Process models?
    54:36 How is Bayesian modeling applied to soccer and sports analytics?
    58:43 What is the future development roadmap for the GPJax ecosystem?
    01:05:37 What is Impulso and how does it integrate into a Bayesian modeling workflow?
    01:13:42 How do you balance Bayesian computational overhead with industrial latency requirements?
    01:20:26 Why is there optimism that scalable Bayesian methods for causal inference are now within reach?

    Thank you to my Patrons for making this episode possible!

    Links from the show here!

    1 h 26 m
  • #153 The Neuroscience of Philanthropy, with Cherian Koshy
    Mar 11 2026

    • Support & get perks!

    • Bayesian Modeling course (first 2 lessons free)

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

    Takeaways:

    Q: Is generosity a natural human trait?
    A: Yes, generosity is hardwired in our brains and is essential for social interaction.

    Q: Why do people say they care about causes but not act on it?
    A: There is often a disconnect between stated care for causes and actual action. Understanding the conditions under which generosity aligns with a person's identity is crucial for bridging this gap.

    Q: How should fundraising efforts be approached?
    A: Fundraising should primarily focus on belief updating rather than mere persuasion.

    Q: What are the benefits of being generous?
    A: Generosity has significant mental and physical health benefits, as the brain's reward systems activate when we give, making us feel good.

    Q: How do our beliefs relate to our actions?
    A: Our beliefs about ourselves strongly influence our actions and decisions, including our decision to be generous.

    Q: Can generosity impact a community?
    A: Yes, generosity can be a powerful tool for improving community dynamics.

    Q: How can technology like AI assist institutions with donors?
    A: AI could help institutions remember donors better, improving the donor-institution relationship.

    Chapters:

    00:00 What's the role of Behavioral Science in Philanthropy?
    19:57 What is The Neuroscience of Generosity?
    24:40 How can we best understand Donor Decision-Making?
    32:14 How can we reframe Beliefs and Actions?
    35:39 What is the role of Identity in Habit Formation?
    38:06 What is the Generosity Gap in Philanthropy?
    45:06 How can we reduce Friction in Donation Processes?
    48:27 What is the role of AI and Trust in Nonprofits?
    52:11 How can we build Predictive Models for Donor Behavior?
    55:41 What is the role of Empathy in Sales and Stakeholder Engagement?
    01:00:46 How can we best align ideas with Stakeholder Beliefs?
    01:02:06 How can we explore Generosity and Memory?

    Thank you to my Patrons for making this episode possible!

    Links from the show:

    • Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026! https://www.fieldofplay.co.uk/
    • Bayesian workflow agent skill
    • Neurogiving, The Science of Donor Decision-Making
    • Cherian's website
    • Cherian's press kit
    • LBS #89 Unlocking the Science of Exercise, Nutrition & Weight Management, with Eric Trexler
    1 h 9 m
  • Bitesize | How To Model Risk Aversion In Pricing?
    Mar 4 2026

    Today's clip is from Episode 152 of the podcast, with Daniel Saunders.

    In this conversation, Daniel Saunders explains how to incorporate risk aversion into Bayesian price optimization. The key insight is that uncertainty around expected profit is asymmetric across price points: low prices yield more predictable (if modest) returns, while high prices introduce much wider uncertainty. Rather than simply maximizing expected profit, you can pass profit through an exponential utility function that models diminishing returns, a well-established idea from economics.

    This adds an adjustable risk aversion parameter to the optimization: as risk aversion increases, the model shifts toward more conservative price recommendations, trading off potentially large but uncertain gains for outcomes with tighter, more reliable distributions.
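    A rough sketch of that idea (toy numbers, not the episode's actual model): with a CARA-style exponential utility, the preferred price flips from the high-mean, high-variance option to the safer one as the risk-aversion parameter grows.

```python
import numpy as np

def exponential_utility(profit, risk_aversion):
    """CARA utility: concave in profit, so large but uncertain gains
    are discounted. As risk_aversion -> 0 it approaches plain profit."""
    return (1.0 - np.exp(-risk_aversion * profit)) / risk_aversion

rng = np.random.default_rng(7)

# Hypothetical posterior profit draws at two candidate prices: the low
# price is modest but predictable, the high price is better on average
# but far more uncertain.
profit_low = rng.normal(loc=100.0, scale=10.0, size=50_000)
profit_high = rng.normal(loc=120.0, scale=80.0, size=50_000)

for a in (0.001, 0.02):
    eu_low = exponential_utility(profit_low, a).mean()
    eu_high = exponential_utility(profit_high, a).mean()
    best = "high" if eu_high > eu_low else "low"
    print(f"risk aversion {a}: prefer {best} price")
```

    At low risk aversion the higher expected profit dominates; at high risk aversion the tighter, more reliable distribution wins, which is exactly the conservative shift described above.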

    Get the full discussion here

    • Join this channel to get access to perks:
    https://www.patreon.com/c/learnbayesstats

    • Intro to Bayes Course (first 2 lessons free): https://topmate.io/alex_andorra/503302
    • Advanced Regression Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

    4 m
  • #152 A Bayesian decision theory workflow, with Daniel Saunders
    Feb 26 2026

    • Support & get perks!

    • Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com

    Intro to Bayes and Advanced Regression courses (first 2 lessons free)

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

    Chapters:

    00:00 The Importance of Decision-Making in Data Science

    06:41 From Philosophy to Bayesian Statistics

    14:57 The Role of Soft Skills in Data Science

    18:19 Understanding Decision Theory Workflows

    22:43 Shifting Focus from Accuracy to Business Value

    26:23 Leveraging PyTensor for Optimization

    34:27 Applying Optimal Decision-Making in Industry

    40:06 Understanding Utility Functions in Regulation

    41:35 Introduction to a Bayesian Decision Theory Workflow

    42:33 Exploring Price Elasticity and Demand

    45:54 Optimizing Profit through Bayesian Models

    51:12 Risk Aversion and Utility Functions

    57:18 Advanced Risk Management Techniques

    01:01:08 Practical Applications of Bayesian Decision-Making

    01:06:54 Future Directions in Bayesian Inference

    01:10:16 The Quest for Better Inference Algorithms

    01:15:01 Dinner with a Polymath: Herbert Simon

    Thank you to my Patrons for making this episode possible!

    Links from the show:

    • Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026! https://www.fieldofplay.co.uk/
    • A Bayesian decision theory workflow
    • Daniel's website, LinkedIn and GitHub
    • LBS #124 State Space Models & Structural Time Series, with Jesse Grabowski
    • LBS #123 BART & The Future of Bayesian Tools, with Osvaldo Martin
    • LBS #74 Optimizing NUTS and Developing the ZeroSumNormal Distribution, with Adrian Seyboldt
    • LBS #76 The Past, Present & Future of Stan, with Bob Carpenter
    1 h 19 m
  • BITESIZE | How Do Diffusion Models Work?
    Feb 19 2026

    Today's clip is from Episode 151 of the podcast, with Jonas Arruda.

    In this conversation, Jonas Arruda explains how diffusion models generate data by learning to reverse a noise process. The idea is to start from a simple distribution like Gaussian noise and gradually remove noise until the target distribution emerges. This is done through a forward process that adds noise to clean parameters and a backward process that learns how to undo that corruption. A noise schedule controls how much noise is added or removed at each step, guiding the transformation from pure randomness back to meaningful structure.
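    The forward process described above has a well-known closed form. The sketch below uses a standard DDPM-style linear noise schedule (an assumption for illustration, not necessarily the exact schedule discussed in the episode): the clean sample is mixed with Gaussian noise according to the cumulative schedule, so early steps are mostly signal and late steps are nearly pure noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear variance schedule beta_t; alpha_bars[t] is the cumulative
# fraction of signal variance remaining after t noising steps.
T = 1_000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def forward_noise(x0, t, rng):
    """Closed form of the forward process: jump straight to step t
    by mixing the clean sample with fresh Gaussian noise."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

x0 = np.ones(1_000)                     # a trivial "clean" sample for illustration
x_early = forward_noise(x0, 5, rng)     # mostly signal
x_late = forward_noise(x0, T - 1, rng)  # close to pure Gaussian noise
```

    The backward process is the learned part: a network trained to undo this corruption one step at a time, guided by the same schedule.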

    Get the full discussion here

    • Join this channel to get access to perks:
    https://www.patreon.com/c/learnbayesstats

    • Intro to Bayes Course (first 2 lessons free): https://topmate.io/alex_andorra/503302
    • Advanced Regression Course (first 2 lessons free): https://topmate.io/alex_andorra/1011122

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

    4 m
  • #151 Diffusion Models in Python, a Live Demo with Jonas Arruda
    Feb 12 2026

    • Support & get perks!

    • Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com

    Intro to Bayes and Advanced Regression courses (first 2 lessons free)

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

    Chapters:
    00:00 Exploring Generative AI and Scientific Modeling
    10:27 Understanding Simulation-Based Inference (SBI) and Its Applications
    15:59 Diffusion Models in Simulation-Based Inference
    19:22 Live Coding Session: Implementing BayesFlow for SBI
    34:39 Analyzing Results and Diagnostics in Simulation-Based Inference
    46:18 Hierarchical Models and Amortized Bayesian Inference
    48:14 Understanding Simulation-Based Inference (SBI) and Its Importance
    49:14 Diving into Diffusion Models: Basics and Mechanisms
    50:38 Forward and Backward Processes in Diffusion Models
    53:03 Learning the Score: Training Diffusion Models
    54:57 Inference with Diffusion Models: The Reverse Process
    57:36 Exploring Variants: Flow Matching and Consistency Models
    01:01:43 Benchmarking Different Models for Simulation-Based Inference
    01:06:41 Hierarchical Models and Their Applications in Inference
    01:14:25 Intervening in the Inference Process: Adding Constraints
    01:25:35 Summary of Key Concepts and Future Directions

    Thank you to my Patrons for making this episode possible!

    Links from the show:

    - Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026!
    - Jonas's Diffusion for SBI Tutorial & Review (Paper & Code)
    - The BayesFlow Library
    - Jonas on LinkedIn
    - Jonas on GitHub
    - Further reading for more mathematical details: Holderrieth & Erives
    - 150 Fast Bayesian Deep Learning, with David Rügamer, Emanuel Sommer & Jakob Robnik
    - 107 Amortized Bayesian Inference with Deep Neural Networks, with Marvin Schmitt

    1 h 36 m
  • #150 Fast Bayesian Deep Learning, with David Rügamer, Emanuel Sommer & Jakob Robnik
    Jan 28 2026

    • Support & get perks!

    • Proudly sponsored by PyMC Labs! Get in touch at alex.andorra@pymc-labs.com

    • Intro to Bayes and Advanced Regression courses (first 2 lessons free)

    Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!


    Chapters:

    00:00 Scaling Bayesian Neural Networks
    04:26 Origin Stories of the Researchers
    09:46 Research Themes in Bayesian Neural Networks
    12:05 Making Bayesian Neural Networks Fast
    16:19 Microcanonical Langevin Sampler Explained
    22:57 Bottlenecks in Scaling Bayesian Neural Networks
    29:09 Practical Tools for Bayesian Neural Networks
    36:48 Trade-offs in Computational Efficiency and Posterior Fidelity
    40:13 Exploring High Dimensional Gaussians
    43:03 Practical Applications of Bayesian Deep Ensembles
    45:20 Comparing Bayesian Neural Networks with Standard Approaches
    50:03 Identifying Real-World Applications for Bayesian Methods
    57:44 Future of Bayesian Deep Learning at Scale
    01:05:56 The Evolution of Bayesian Inference Packages
    01:10:39 Vision for the Future of Bayesian Statistics

    Thank you to my Patrons for making this episode possible!

    Come meet Alex at the Field of Play Conference in Manchester, UK, March 27, 2026!

    Links from the show:


    David Rügamer:
    * Website
    * Google Scholar
    * GitHub

    Emanuel Sommer:
    * Website
    * GitHub
    * Google Scholar

    Jakob Robnik:
    * Google Scholar
    * GitHub
    * Microcanonical Langevin paper
    * LinkedIn

    1 h 20 m