Episodes

  • State of AI Risk with Peter Slattery
    Apr 16 2025

    Understanding AI Risks with Peter Slattery

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Peter Slattery, behavioral scientist and lead researcher at MIT’s FutureTech lab, where he spearheads the groundbreaking AI Risk Repository project. Together, they dive into the complex and often overlooked risks of artificial intelligence—ranging from misinformation and malicious use to systemic failures and existential threats.

    Peter shares the intellectual and emotional journey behind categorizing over 1,000 documented AI risks, how his team built a risk taxonomy from 17,000+ sources, and why shared understanding and behavioral science are critical for navigating the future of AI.

    This one is a must-listen for anyone curious about AI safety, behavioral science, and the future of technology that’s moving faster than most of us can track.

    --

    LINKS:

    • Peter's LinkedIn Profile
    • MIT FutureTech Lab: futuretech.mit.edu
    • AI Risk Repository


    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 h 10 m
  • Enter the AI Lab
    Mar 20 2025

    Enter the AI Lab: Insights from LinkedIn Polls and AI Literature Reviews

    In this episode of the Behavioral Design Podcast, hosts Samuel Salzer and Aline Holzwarth explore how AI is shaping behavioral design processes—from discovery to testing. They revisit insights from past LinkedIn polls, analyzing audience perspectives on which phases of behavioral design are best suited for AI augmentation and where human expertise remains crucial.

    The discussion then shifts to AI-driven literature reviews, comparing the effectiveness of various AI tools for synthesizing research. Samuel and Aline assess the strengths and weaknesses of different platforms, diving into key performance metrics like quality, speed, and cost, and debating the risks of over-reliance on AI-generated research without human oversight.

    The episode also introduces Nuance’s AI Lab, highlighting upcoming projects focused on AI-driven behavioral science innovations. The conversation concludes with a Behavioral Redesign series case study on Peloton, offering a fresh take on how AI and behavioral insights can reshape product experiences.

    If you're interested in the intersection of AI, behavioral science, and research methodologies, this episode is packed with insights on where AI is excelling—and where caution is needed.


    LINKS:

    • Nuance AI Lab: Website


    TIMESTAMPS:
    00:00 Introduction and Recap of Last Year's AI Polls
    06:27 AI's Strengths in Literature Review
    15:12 Emerging AI Tools for Research
    19:31 Evaluating AI Tools for Literature Reviews
    23:57 Comparing Chinese and American AI Tools
    26:01 Evaluating Literature Review Outputs
    28:12 Critical Analysis and Human Oversight
    35:19 The Worst Performing Model
    37:21 Introducing Nuance's AI Lab
    38:51 Behavioral Redesign Series: Peloton Example
    45:21 Podcast Highlights and Future Guests

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    48 m
  • When to AI, and When Not to AI with Eric Hekler
    Mar 6 2025

    When to AI, and When Not to AI with Eric Hekler

    "People are different. Context matters. Things change."

    In this episode of the Behavioral Design Podcast, Aline is joined by Eric Hekler, professor at UC San Diego, to explore the nuances of AI in behavioral science and health interventions. Eric’s mantra—emphasizing the importance of individual differences, context, and change—serves as a foundation for the conversation as they discuss when AI enhances behavioral interventions and when human judgment is indispensable.

    The discussion explores just-in-time adaptive interventions (JITAI), the efficiency trap of AI, and the jagged frontier of AI adoption—where machine learning excels and where it falls short. Eric shares his expertise on control systems engineering, human-AI collaboration, and the real-world challenges of scaling adaptive health interventions. The episode also explores teachable moments, the importance of domain knowledge, and the need for AI to support rather than replace human decision-making.

    The conversation wraps up with a quickfire round, where Eric debates AI’s role in health coaching, mental health interventions, and optimizing human routines.

    LINKS:

    • Eric Hekler


    TIMESTAMPS:
    02:01 Introduction and Correction
    05:21 The Efficiency Trap of AI
    08:02 Human-AI Collaboration
    11:04 Conversation with Eric Hekler
    14:12 Just-in-Time Adaptive Interventions
    15:19 System Identification Experiment
    28:27 Control Systems vs. Machine Learning
    39:44 Challenges with Classical Machine Learning
    43:16 Translating Research to Real-World Applications
    49:49 Community-Based Research and Context Matters
    59:46 Quickfire Round: To AI or Not to AI
    01:08:27 Final Thoughts on AI and Human Evolution

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 h 7 m
  • Sci-Fi and AI: Exploring Annie Bot with Sierra Greer
    Feb 20 2025

    Sci-Fi and AI: Exploring Annie Bot with Sierra Greer

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel dive into the ethical, emotional, and societal complexities of AI companionship with special guest Sierra Greer, author of Annie Bot. This thought-provoking novel explores AI-human relationships, autonomy, and the blurred line between artificial intelligence and the human experience.

    Sierra shares her inspiration for Annie Bot and how sci-fi can serve as a lens to explore real-world ethical dilemmas in AI development.

    • The conversation covers reinforcement learning in AI and how it mirrors human conditioning.
    • It explores the gender dynamics embedded in AI design and the ethical implications of AI companions.
    • It also examines real-life cases of people forming deep emotional bonds with AI chatbots.

    The episode rounds out with a lively quickfire round, where Sierra debates whether AI should replace lost loved ones, act as conversational assistants for introverts, or intervene in human arguments.

    This is a must-listen for fans of sci-fi, behavioral science, and those fascinated by the future of AI companionship and emotional intelligence.


    LINKS:

    • Sierra Greer website
    • Annie Bot – Official Book Page
    • Goodreads Profile


    TIMESTAMPS:

    01:43 AI Companions: A Controversial Opinion

    05:48 Exploring Sci-Fi and AI in Literature

    07:42 Introducing Sierra Greer and Her Book

    09:12 Reinforcement Learning Explained

    15:47 Diving into the World of Annie Bot

    23:17 Power Dynamics and Human-Robot Relationships

    32:31 Humanity and Artificial Intelligence

    41:31 Autonomy vs. Agreeableness in Relationships

    43:20 Reinforcement Learning in AI and Humans

    46:13 Ethics and Gaslighting in AI

    48:57 Gender Dynamics in AI Design

    57:18 AI Companions and Human Relationships

    01:06:45 Quickfire Round: To AI or Not to AI

    01:12:39 Final Thoughts and Controversial Opinions

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 h 7 m
  • AI and Behavioral Science in Public Policy with Laura de Molière
    Feb 6 2025

    AI and Behavioral Science in Public Policy with Laura de Moliere

    In this episode of the Behavioral Design Podcast, host Samuel Salzer is joined by Laura de Moliere, a behavioral scientist with deep expertise in applying behavioral insights to public policy. As the former head of behavioral science at the UK Cabinet Office, Laura has worked at the intersection of behavioral science and policymaking during some of the most high-stakes moments in recent history, including Brexit and COVID-19.

    Samuel and Laura explore the evolving role of AI in behavioral science, reflecting on how AI can enhance decision-making, improve policymaking, and surface unintended consequences. Laura shares her AI “aha moment”—when she realized the potential of large language models to support policymakers in making more behaviorally informed decisions.

    The discussion also covers the promises and perils of AI in behavioral science, the potential of synthetic users to test interventions, and the growing challenge of balancing AI’s capabilities with human biases and policymaking needs. The episode wraps up with a playful quickfire round, where Laura debates the use of AI in everything from tax optimization to gamified urinals.

    This episode is a must-listen for anyone interested in the intersection of AI, behavioral science, and public policy, offering a nuanced and thought-provoking perspective on the future of AI in decision-making.

    LINKS:

    Laura de Moliere:

    • LinkedIn Profile

    • INCASE Framework on Unintended Consequences


    TIMESTAMPS:

    00:00 A Surprise Gift

    05:38 Reflections on 2025

    09:28 AI and Behavioral Science

    19:29 Introducing Laura de Moliere

    21:30 Start of Laura interview

    33:08 Applying Behavioral Science to AI and Government

    35:16 Behavioral Science and AI: Use Cases and Impacts

    36:32 Understanding and Interacting with AI Models

    47:43 Synthetic Users and Their Potential

    01:01:08 Quickfire Round: To AI or Not to AI

    01:06:35 Controversial Opinions on AI

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 h 11 m
  • Predicting 2025 and Beyond with Jared Peterson
    Jan 22 2025

    Predictions for 2025: AI, AGI, and the Future of Behavioral Science with Jared Peterson

    In this episode of the Behavioral Design Podcast, host Samuel is joined by Jared Peterson, a behavioral scientist and expert in decision science at Nuance Behavior. Together, they explore some of the most pressing questions and exciting developments at the intersection of AI, behavioral science, and the future of human-centered design.

    The conversation highlights key advancements from 2024, including the rise of multimodal AI, breakthroughs in AI agents, and the transformative use of AI in scientific research. Samuel and Jared share bold predictions for 2025, tackling questions like:

    • Will AI agents become trusted coworkers?
    • Can AI revolutionize science?
    • And how should we navigate the hype surrounding artificial general intelligence (AGI)?

    The discussion is packed with hot takes, nuanced perspectives, and thoughtful reflections, including Jared’s controversial prediction about the future of AI in predicting research replicability.

    This episode is a must-listen for anyone curious about the rapidly evolving AI landscape and its implications for behavioral science, creativity, and society at large.

    For questions or comments, email samuel@nuancebehavior.com

    LINKS:

    • Jared's website
    • Jared's LinkedIn
    • The Science of Context
    • A Failure to Disagree

    TIMESTAMPS

    00:00 – Meet Jared Peterson: Behavioral Scientist and AI Expert

    01:01 – Reflections on 2024: Key Breakthroughs and Predictions

    03:36 – The Multimodal Evolution of AI

    10:06 – AI Surpassing Human Benchmarks

    21:25 – The Rise of AI Agents and Synthetic Content

    35:18 – Musical Turing Test: AI vs. Eurovision

    43:26 – Predictions for 2025: AI Coworkers and Beyond

    44:06 – AI Coworkers: The Future of Work?

    51:11 – AI in Science: Revolutionizing Research

    01:05:56 – The Hype and Reality of AGI

    01:10:42 – Adoption Challenges and Future Predictions

    01:25:40 – Final Thoughts and Controversial Predictions

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 h 35 m
  • Psychological Targeting & AI with Sandra Matz
    Jan 8 2025

    Exploring Psychological Targeting and the Power of AI with Sandra Matz

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel sit down with Sandra Matz, leading expert on psychological targeting and Associate Professor of Business at Columbia Business School.

    As a computational social scientist, Sandra uses Big Data analytics and experimental methods to study human behavior, uncovering how psychological traits influence business outcomes like financial well-being, consumer satisfaction, and team performance.

    The conversation covers how digital footprints from social media, GPS data, and more are leveraged to create psychological profiles, shaping everything from advertisements to decision-making. Sandra provides unique insights into the controversial Cambridge Analytica case and discusses the democratization of personalized content generation through tools like ChatGPT.

    Whether you're curious about personality psychology, the ethics of data privacy, or the evolving role of AI, this episode is a must-listen.

    LINKS:

    • Sandra Matz:

      • Sandra's Website
      • Her New Book: Mindmasters
    • Relevant Research and Resources:

      • Cambridge Analytica and the Evolution of Psychological Targeting
      • The Social Dilemma Documentary
      • Big Five Personality Model Explained
      • Moral Foundations Theory Overview


    TIMESTAMPS:
    02:03 – Personality Tests
    09:23 – ChatGPT Gift Experiment
    19:50 – Introducing Sandra Matz
    21:35 – Understanding Psychological Targeting
    24:27 – Real-World Examples and Implications
    34:58 – Cambridge Analytica and Data Privacy
    39:38 – The Social Dilemma and Personality Representation
    41:19 – Understanding Personality Traits
    43:49 – Dynamic Personality and Context
    46:26 – AI's Role in Psychological Targeting
    50:32 – Generative AI and Personalized Content
    58:40 – Ethical Considerations and Future of AI
    01:11:40 – Final Thoughts and Sandra’s New Book

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 h 16 m
  • Behavior Change Score with Roos van Duijnhoven
    Dec 11 2024

    Behavior Change Score with Roos van Duijnhoven

    In this special episode of the Behavioral Design Podcast, host Samuel continues the mini-series featuring expert practitioners from the Nuance Behavior team. This week’s guest is Roos van Duijnhoven, a behavioral scientist with a deep passion for designing human-centered digital solutions that drive meaningful behavior change.

    Samuel and Roos explore a wide range of topics, including the Behavior Change Score Framework, strategies for improving onboarding and retention in digital health products, and the importance of focusing on real-world behavior (‘big E’ engagement) versus in-app behavior (‘little e’ engagement). They also dive into insights from Nuance Behavior’s ‘Behavior Change Score Report,’ which evaluates fitness apps and provides actionable lessons for designing more effective digital interventions.

    This episode offers a treasure trove of insights for anyone interested in applying behavioral science to digital product design and health interventions!


    LINKS:

    • Roos's LinkedIn
    • The Behavior Change Score Report
    • Nuance Behavior Website
    • Engagement with Heather Cole-Lewis


    TIMESTAMPS

    00:36 Meet Roos van Duijnhoven

    01:06 Recap of the Susan Murphy episode

    07:31 Insights from the Behavior Change Score Report

    20:14 Big E vs. Little e Engagement: Real-World vs. In-App Behavior

    26:31 Controversial Opinions: Electric Bicycles

    29:32 Conclusion and Farewell

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    32 m