Humans of Martech Podcast by Phil Gamache

Humans of Martech

By: Phil Gamache
Listen for free

Future-proofing the humans behind the tech. Follow Phil Gamache and Darrell Alfonso on their mission to help future-proof the humans behind the tech and have successful careers in the constantly expanding universe of martech. ©2026 Humans of Martech Inc.
Categories: Economics, Career Success, Marketing, Marketing & Sales
Episodes
  • 213: John Whalen: The next marketing advantage is pre-testing ideas on synthetic users
    Mar 31 2026
    What’s up everyone, today we have the pleasure of sitting down with Dr. John Whalen, Cognitive Scientist, Author, and Founder at Brilliant Experience.

    Summary: John has spent his career studying how people actually think, and his conclusion is uncomfortable for anyone who believes their marketing decisions are more rational than they are. In this episode, John explores how synthetic users built from cognitive science principles can fill the massive research gap that most teams quietly ignore, and why removing the human interviewer from the room might be the fastest way to finally hear the truth.

    In this Episode:
    (00:00) - Intro
    (01:13) - In This Episode
    (04:31) - What Are Synthetic Users and Why Do They Matter?
    (10:00) - How Synthetic Users Make Stakeholders Hungry for Real Human Research
    (15:56) - Pre-Testing on Synthetic Users: Shortcut or Smart Step?
    (18:53) - How to Actually Build a Synthetic User: Tools, Layers, and Agentic Systems
    (40:51) - Is the Average Persona Dead? Scale, Diversity, and the World Model
    (43:01) - Asking the Uncomfortable Questions: What AI Agents Reveal That Humans Won't
    (49:30) - Ending the Quant vs. Qual Debate with Statistically Relevant Qualitative Data
    (56:37) - Mining the 'Why' Behind Silent Behavioral Data with Synthetic Users
    (01:02:31) - Designing for Agent Users: The Coming Shift to Human-and-Machine-Centered Design
    (01:05:28) - The Happiness Question: Dogs, Nature, and Staying Analog

    About John
    Dr. John Whalen is a Cognitive Scientist, Author, and Founder of Brilliant Experience, where he applies cognitive science principles to help organizations design products and experiences that align with how people actually think and make decisions. He’s also an educator, teaching two AI customer research courses on Maven. His work explores the intersection of human psychology and marketing, including the emerging practice of pre-testing ideas on synthetic users to give brands a faster and more informed competitive edge.
    He is also the author of a book on the science of designing for the human mind, bringing academic rigor to practical business challenges.

    How Synthetic User Research Works and When to Trust It
    Synthetic user research sounds like something creepy out of a dystopian science fiction film, and John is the first to admit the terminology does nobody any favors. When asked what synthetic users actually are and what they mean for research, he admitted that if he had been on the branding team, he would have pushed hard for something like “dynamic personas” instead. The name creates unnecessary friction before the conversation even starts. And that friction matters when you’re trying to get skeptical executives or meticulous researchers to take the whole thing seriously.

    Under the hood, specialized AI tools simulate how a defined audience segment would respond to a question, concept, or stimulus, without recruiting, scheduling, incentivizing, or waiting on real human participants. John runs a class where he collects genuine human data first, then feeds comparable inputs into these tools to benchmark accuracy head-to-head. The results are pretty wild: AI-generated responses align with real human findings somewhere between 85% and 100% of the time on major topics and consumer needs. That is not a peer-reviewed clinical trial, and John is not pretending otherwise. But 85% alignment is enough signal to stop reflexively dismissing the method and start asking harder, more specific questions about exactly where it fits into a research stack.

    So what does this mean for you and your company? Think of all the decisions that currently live in a black hole of zero structured input. How many product calls, campaign concepts, and messaging pivots happen with nothing more than a conference room full of people who all follow the same talking heads on LinkedIn?
    John argues that low cost, round-the-clock accessibility, and minimal public exposure make these tools a natural fit for precisely those moments: pressure-checking a hypothesis at 11pm, testing whether a pitch direction even makes sense before it touches a client, or deciding whether a concept deserves the time and money required for proper validation.

    “If these are only going to keep getting better and better, which they are, then logically, what kinds of decisions right now go completely by gut and no research, and what could we use to help us frame that?”

    One of the more underappreciated angles John raises is global inclusivity. Large organizations routinely test in the US and Western Europe, then extrapolate those findings to markets in Southeast Asia, Latin America, or Sub-Saharan Africa because local research budgets simply do not exist. Big no-no. Synthetic personas trained on broader, more representative data could at minimum provide directional signals for those markets, making research more geographically honest without a proportional spike in spend.

    The early AI bias problem, where models essentially mirrored the...
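    The head-to-head benchmarking John describes, stripped to its bones, is just comparing what two research runs surface. A minimal sketch, assuming you have already coded both the human interviews and the synthetic-user output into named themes (the data and the `theme_alignment` helper below are our illustration, not John’s actual methodology):

```python
# Hypothetical data and helper -- an illustration, not John's actual methodology.
def theme_alignment(human_themes, synthetic_themes):
    """Share of themes from real human research that the synthetic run also surfaced."""
    human = {t.lower() for t in human_themes}
    synthetic = {t.lower() for t in synthetic_themes}
    return len(human & synthetic) / len(human) if human else 0.0

# Themes coded from (made-up) human interviews vs. a synthetic-user run
human = ["price sensitivity", "trust in brand", "setup friction", "mobile first"]
synthetic = ["price sensitivity", "trust in brand", "setup friction", "support quality"]

print(f"alignment: {theme_alignment(human, synthetic):.0%}")  # alignment: 75%
```

    In a real study you would repeat this per topic and per segment, which is roughly how a headline figure like 85-100% alignment would be produced.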
    1 hr 8 min
  • 212: Tobias Konitzer: The Causal AI revolution and the boomerang effect in marketing decision science
    Mar 24 2026
    Summary: Tobi challenged marketing’s fixation on prediction. He has built highly accurate LTV models, but accuracy alone does not move revenue. Marketing is intervention. Correlation shows patterns; causality tells you what happens when you pull a lever. That shift reshapes experimentation, explains why dynamic allocation can outperform static A/B tests, and highlights how self-learning systems can backfire or get stuck in local maxima. It also fuels his skepticism of unleashing agentic AI on historical data without a causal layer. If you want to change outcomes instead of forecast them, your systems need to understand levers and log decisions you can actually audit.

    (00:00) - Intro
    (01:22) - In This Episode
    (04:07) - Why Predictive Models Fail Without Causal Inference
    (09:49) - How to Validate Causal Impact on Customer Lifetime Value
    (13:04) - Reducing Uncertainty Around Causal Effects by Optimizing Levers, Not Labels
    (17:01) - Why Dynamic Allocation Works Better Than Fixed-Horizon A/B Testing
    (31:54) - The Boomerang Effect and Why Uninformed AI Sabotages Early Results
    (40:15) - Escaping Local Maxima and The Failure of Randomly Initialized Decisioning
    (44:04) - Why Agentic AI Trained on Data Warehouse Correlations Reinforces Bias
    (49:00) - The Power of Composable Decisioning
    (53:06) - How Machine Decisioning Transcends Marketing
    (01:01:41) - Why Clear Priority Hierarchies Improve Executive Decision Making

    About Tobias
    Tobias Konitzer, PhD is VP of AI at GrowthLoop, where he’s chasing closed-loop marketing powered by reinforcement learning, causality, and agentic systems. He’s spent the past decade focused on one core problem: moving beyond prediction to actually influencing outcomes. Previously, Tobi was Chief Innovation Officer at Fenix Commerce, helping major eCommerce brands modernize checkout and delivery with machine learning.
    He also founded Ocurate, a venture-backed startup that predicted customer lifetime value to optimize ad bidding in real time, raising $5.5M and scaling to $500K+ ARR before its acquisition. Earlier, he co-founded PredictWise, building psychographic and behavioral targeting models that drove over $2M in revenue. Tobi earned his PhD in Computational Social Science from Stanford and worked at Facebook Research on large-scale ML and bias correction. Originally from Germany and based in the Bay Area since 2013, he writes frequently about causal thinking, machine decisioning, and the future of marketing.

    Why Predictive Models Fail Without Causal Inference
    Prediction dominates most marketing roadmaps. Teams invest months refining churn models, tightening confidence intervals, and debating which threshold deserves a campaign. Tobi built an entire company on that logic. His team produced highly accurate lifetime value predictions using deep learning and granular event data. The forecasts were sharp. The lift curves were clean. Buyers were impressed.

    Then lifecycle marketers asked a more uncomfortable question: what action should follow the score?

    A predictive model encodes the current trajectory of a customer under existing policies. It describes what will likely happen if nothing changes. Marketing changes things constantly. The moment you intervene, you alter the system that generated the prediction. The forecast reflects yesterday’s conditions, not tomorrow’s strategy.

    “Prediction tells you the future if you do nothing. Causation tells you how to change it.”

    Consider the Prediction Trap. On the left, the status quo labels a person as high churn risk. The function is observation. The outcome is a description of what happens if you leave the system untouched. On the right, a lever gets pulled. The function is intervention.
    The outcome is directional change. That shift in function changes how you work.

    Prediction thinking centers on segmentation:
    Who is likely to churn?
    Who is likely to buy?
    Who looks like high LTV?

    Causal thinking centers on levers:
    Which incentive reduces churn?
    Which sequence increases repeat purchase?
    Which offer raises lifetime value incrementally?

    Tobi often uses an LTV example to expose the trap. Suppose high-LTV customers frequently viewed a specific product early in their journey. A team might redesign the onboarding flow to feature that product more aggressively. The correlation looks persuasive. The causal effect remains unknown.

    Several alternative explanations could drive the pattern:
    The product may correlate with a specific acquisition channel.
    The product may have been highlighted during a limited campaign.
    The product view may signal prior brand familiarity.

    Only an intervention test can estimate incremental impact. Correlation can guide hypothesis generation, but it cannot validate the lever itself.

    Tobi also highlights a deeper issue. Acting on predictions introduces compounding uncertainty across multiple layers:
    The predictive model carries statistical variance.
    The translation from model features to campaign strategy introduces interpretation bias.
    The ...
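    The gap between the correlational read and the causal one can be made concrete with a toy simulation. In the sketch below (our illustration, not Tobi’s actual models), brand familiarity is a hidden confounder that drives both the early product view and LTV; the observational comparison overstates the lift, while a randomized intervention recovers the true effect:

```python
import random

random.seed(0)

def customer(forced_view=None):
    # Hidden confounder: 30% of customers already know the brand
    familiar = random.random() < 0.3
    # Familiar customers are far more likely to view the product organically
    if forced_view is None:
        viewed = random.random() < (0.8 if familiar else 0.2)
    else:
        viewed = forced_view
    # True causal effect of the product view on LTV is only +5
    ltv = 50 + (40 if familiar else 0) + (5 if viewed else 0) + random.gauss(0, 5)
    return viewed, ltv

# Observational data: compare viewers vs. non-viewers as-is
obs = [customer() for _ in range(20000)]
viewers = [ltv for viewed, ltv in obs if viewed]
non_viewers = [ltv for viewed, ltv in obs if not viewed]
naive_lift = sum(viewers) / len(viewers) - sum(non_viewers) / len(non_viewers)

# Intervention: randomize who sees the product (an A/B test)
treated = [customer(forced_view=True)[1] for _ in range(20000)]
control = [customer(forced_view=False)[1] for _ in range(20000)]
causal_lift = sum(treated) / len(treated) - sum(control) / len(control)

print(f"naive (correlational) lift: {naive_lift:.1f}")  # inflated well above 5
print(f"randomized (causal) lift:  {causal_lift:.1f}")  # close to 5
```

    The naive viewer-vs-non-viewer comparison reports a lift several times the true +5, because familiar customers both view more and spend more. Only the randomized arm isolates the lever.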
    1 hr 5 min
  • 211: Jenna Kellner: Overcoming frankenstacks and AI uncertainty with first principles and business judgement
    Mar 17 2026
    What’s up everyone, today we have the pleasure of chatting with Jenna Kellner, VP Marketing at Workleap.

    (00:00) - Intro
    (01:14) - In This Episode
    (04:30) - How to Manage Marketing Tech Debt During Rapid Growth
    (10:10) - How to Prioritize RevOps Tech Debt Without Perfect ROI Models
    (14:23) - Reasoning Through Broken Systems and Imperfect Data
    (19:23) - How High Performers Progress Anyway
    (24:28) - How to Build Confidence With AI Through Small Experiments
    (33:06) - How to Use Exit Planning and Cost-Benefit Analysis for AI Tool Selection
    (35:57) - First principles matter more than tools
    (38:59) - Why Staying Close to Execution Improves Marketing Leadership
    (45:13) - Why Critical Thinking Skills Drive Marketing Career Growth
    (49:33) - How to Build Business Judgment in Technical Marketing Roles
    (53:03) - Why Confidence Without Humility is Dangerous
    (55:47) - How Revenue Leaders Prioritize Daily Energy
    (59:49) - Growing up
    (01:01:10) - Book rec

    Summary: Jenna is a VP of Marketing who can talk about the weeds of messy systems, uncertain decisions, and personal growth. You can’t hide from it: every company accumulates tech debt as teams rush to hit revenue targets. She frames tech debt as a leadership responsibility and urges executives to reinvest in core systems when patchwork begins to outweigh building. If leadership doesn’t get it, the best way to prioritize it is to frame it as opportunity cost and lost leverage that drains revenue the longer you wait. In the face of AI uncertainty, she argues that judgment compounds faster than technical knowledge, and that the marketers who become indispensable blend business awareness, proximity to execution, and decisive action grounded in humility.

    About Jenna
    Jenna Kellner is Vice President of Marketing at Workleap and a revenue-focused marketing leader who has spent more than a decade building marketing teams and scaling companies.
    She brings experience across Enterprise, SMB, D2C, SaaS, two-sided marketplaces, venture studios, and other high-growth environments. Her career spans senior leadership roles at Minerva, On Deck, RBCx, and Ownr, where she led marketing, growth, and revenue functions inside complex, evolving organizations. At RBCx, she served as Chief Growth Officer for Ampli and directed marketing and growth initiatives within a large financial institution. She has also co-founded communities such as GrowthToronto and Little Traders, reflecting her commitment to building networks and businesses in parallel.

    Jenna operates with a strong sense of ownership and accountability, grounded in her belief that every challenge ultimately becomes her responsibility to solve. Recognized as a WXN Top 100 Women in Canada, she focuses on developing high-performing teams that connect strategy to execution and translate marketing into measurable revenue impact.

    The Frankenstein Reality of Managing Tech Debt: How to Manage Marketing Tech Debt During Rapid Growth
    You know it: most marketers are operating inside half-connected systems. No company has a pristine, perfectly synchronized tech stack. Even if they think they do, it doesn’t last. Growth creates pressure, and pressure produces shortcuts. Jenna has seen the same cycle in startups and enterprise environments. In the early days, teams build whatever gets the job done. They start in spreadsheets, layer on point solutions, wire tools together with lightweight integrations, and move fast because revenue matters more than architecture.

    Those early decisions never disappear. They compound. Years later, larger organizations inherit layers of systems that were added at different stages of maturity. Tools do not scale in sync. One platform gets upgraded. Another stays frozen because a team depends on it. Reporting becomes an exercise in orchestration.
    Jenna recalls walking into an organization where a sales leader pulled her weekly report from eight separate tools. That routine consumed time, drained energy, and normalized operational friction.

    “You have to Frankenstein your way through them to get the answers you need.”

    That sentence captures the daily reality inside many marketing and revenue teams. Quarter-end reporting still happens. Board decks still go out. The numbers get assembled through exports, CSV files, manual joins, and late-night reconciliation. Leadership often tolerates the strain because revenue continues to land. But the cost isn’t super visible:
    Reporting cycles stretch longer each quarter.
    Forecast confidence erodes.
    Team morale dips as manual work expands.
    Strategic decisions rely on partial or inconsistent data.

    So how do we get out of this mess? Jenna views this as a leadership obligation. Someone has to decide that cleaning house earns priority alongside pipeline generation. She describes working with a founder who paused other initiatives to repair core systems. The work moved slowly. It required budget discipline and uncomfortable trade-offs. It rebuilt trust in data and freed ...
    1 hr 2 min
No reviews yet