Episodes

  • 178: Guta Tolmasquim: Connecting brand to revenue with attribution algorithms that reflect brand complexity
    Jul 15 2025
    What’s up everyone, today we have the pleasure of sitting down with Guta Tolmasquim, CEO at Purple Metrics.

    Summary: Brand measurement often feels like a polite performance nobody fully believes, and Guta learned this firsthand moving from performance marketing spreadsheets to startup rebrands that showed clear sales bumps everyone could feel. She kept seeing blind spots, like a bank’s soccer sponsorship that quietly cut churn or old LinkedIn pages driving conversions no one tracked. When she built Purple Metrics, she refused to pretend algorithms could explain everything, designing tools that encourage gradual shifts over sudden upheaval. She watched CMOs massage attribution settings to fit their instincts and knew real progress demanded something braver: smaller experiments, simpler language, and the courage to say, “We tried, we learned,” even when results stung. Her TikTok videos in Portuguese became proof that brand work can pay off fast if you track it honestly. If you’re tired of clean stories masking messy reality, her perspective feels like a breath of fresh air.

    How Brand Measurement Connects to Revenue

    Brand measurement drifted away from commercial reality when marketers decided to chase every click and impression. Guta traced this pattern back to the 1970s, when companies decided to separate branding and sales into distinct functions. Before that split, teams treated branding as a sales lever that directly supported revenue. The division created two camps that rarely spoke the same language. One camp focused on lavish creative campaigns, and the other became fixated on dashboards filled with shallow metrics.

    Guta started her career in performance marketing because she valued seeing every dollar accounted for. She described those years as productive but ultimately unsatisfying. She moved to big enterprises and spent nearly a decade trying to make brand lift reports feel credible in boardrooms.
She eventually turned her focus to startups and noticed a clearer path. Startups often have budgets that force prioritization. They pick one initiative, implement it, and measure its direct impact on revenue without dozens of overlapping campaigns.

“When you only have money to do one thing, it becomes obvious what’s working,” Guta explained. “You almost get this A/B test without even planning for it.”

That clarity shaped her view of brand measurement. She learned that disciplined isolation of variables makes results easier to trust. When a startup rebranded, sales moved in a way that confirmed the decision. The data was hard to ignore. Guta saw purchase volumes increase after brand updates, and she knew these signals were stronger than any generic awareness metric. The companies she worked with never relied on sentiment scores alone because they tracked actual transactions.

Guta later built her own product to modernize brand research with a sharper focus on financial outcomes. She designed the system to map brand activities to revenue signals so marketing could prove its impact without resorting to vague reports. The product found traction because it respected the mindset of finance leaders and offered direct evidence that branding drives growth. Guta believed this connection was essential for any team that wants to secure resources and build trust across departments.

Key takeaway: Brand measurement works best when you focus on one clear change at a time and track its impact on revenue without distractions. You can earn credibility with your finance partners by showing how brand decisions move purchase behavior in measurable ways. When you build discipline into measurement and align it with actual sales, you transform branding from a creative exercise into a proven growth lever.

Examples Where Brand Investments Shifted Real Business Outcomes

Brand investments often get treated as trophies that decorate a budget presentation.
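The accidental A/B test Guta describes (ship one change, watch revenue, nothing else moving) can be sketched in a few lines. Everything below is illustrative: the helper name and the daily purchase counts are invented for this example, not Purple Metrics data or methodology.

```python
from statistics import mean

def pre_post_lift(daily_purchases, change_day):
    """Percent change in average daily purchases after a single
    brand change shipped on `change_day` (hypothetical helper)."""
    before = daily_purchases[:change_day]
    after = daily_purchases[change_day:]
    return round((mean(after) - mean(before)) / mean(before) * 100, 1)

# Toy data: a rebrand ships on day 7 of a 14-day window.
purchases = [100, 98, 103, 99, 101, 97, 102,    # before the rebrand
             115, 118, 112, 120, 117, 119, 116]  # after the rebrand
print(pre_post_lift(purchases, 7))  # → 16.7 (percent lift)
```

The number is only readable because one lever moved in the window; run five overlapping campaigns at once and the same arithmetic tells you nothing, which is exactly Guta’s argument for disciplined isolation.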
Guta shared a story that showed how sponsorships can drive specific business results when you track them properly. A Brazilian bank decided to sponsor a soccer championship. On the surface, the campaign looked like a glossy PR move. When Guta’s team measured what they called “mindset metrics,” they found that soccer fans reported higher loyalty toward the bank. The data set off a chain reaction that forced everyone involved to reconsider how they viewed sponsorships.

The bank pulled internal reports and discovered a clear pattern. Fans who followed the soccer sponsorship churned at much lower rates than other customers. Guta said the marketing team realized they were sitting on a revenue engine they never fully understood. They began to see sponsorship as a serious retention tool rather than a vanity spend. That shift did not happen automatically. Someone had to ask whether the big brand push was connected to any measurable outcomes, and then look carefully for the link between sentiment and behavior.

Guta described another client who rebranded their product suite ...
    1 h 7 m
  • 177: Chris O’Neill: GrowthLoop CEO on how AI agent swarms and reinforcement learning boost velocity
    Jul 8 2025
    What’s up everyone, today we have the pleasure of sitting down with Chris O'Neill, CEO at GrowthLoop.

    Summary: Chris explains how leading marketing teams are deploying swarms of AI agents to automate campaign workflows with speed and precision. By assigning agents to tasks like segmentation, testing, and feedback collection, marketers build fast-moving loops that adapt in real time. Chris also breaks down how reinforcement learning helps avoid a sea of sameness by letting campaigns evolve mid-flight based on live data. To support velocity without sacrificing control, top teams are running red team drills, assigning clear data ownership, and introducing internal AI regulation roles that manage risk while unlocking scale.

    The 2025 AI and Marketing Performance Index

    The 2025 AI and Marketing Performance Index that GrowthLoop put together is excellent. We’re honored to have gotten our hands on it before it went live and to unpack it with Chris in this episode. The report answers timely questions a lot of teams are wrestling with: Are top performers ahead of the AI curve, or just focused on solid foundations? Are top performers focused on speed and quantity, or does quality still win in a sea of sameness?

    We’ve chatted with plenty of folks that are betting on patience and polish. But GrowthLoop’s data shows the opposite. 🤖🏃 Top-performing marketing teams are already scaling with AI, and their focus on speed is driving growth. For some, this might be a wake-up call. But for others, it’s confirmation and might seem obvious: teams that are using AI and working fast are growing faster. We all get the why. But the big mystery is the how. So let’s dig into how teams can implement AI to grow faster and how to prepare marketers and marketing ops folks for the next 5 years.

    Reframing AI in Marketing Around Outcomes and Velocity

    Marketing teams love speed. AI vendors promise it. Founders crave it.
The problem is most people chasing speed have no idea where they’re going. Chris prefers velocity. Velocity means you are moving fast in a defined direction. That requires clarity. Not hype. Not generic goals. Clarity.

AI belongs in your toolkit once you know exactly which metric needs to move. Chris puts it plainly: revenue, lifetime value, or cost. Pick one. Write it down. Then explain how AI helps you get there. Not in vague marketing terms. In business terms. If you cannot describe the outcome in a sentence your CFO would nod at, you are wasting everyone’s time.

“Being able to articulate with precision how AI is going to drive and improve your profit and loss statement, that’s where it starts.”

Too many teams start with tools. They get caught up in features and launch pilots with no destination. Chris sees this constantly. The projects that actually work begin with a clearly defined business problem. Only after that do they start choosing systems that will accelerate execution. AI helps when it fits into a system that already knows where it’s going.

Velocity also forces prioritization. If your AI project can't show directional impact on a core business metric, it does not deserve resources. That way you can protect your time, your budget, and your credibility. Chris doesn’t get excited by experiments. He gets excited when someone shows him how AI will raise net revenue by half a percent this quarter. That’s the work.

Key takeaway: Start with a business problem. Choose one outcome: revenue, lifetime value, or cost reduction. Define how AI contributes to that outcome in concrete terms. Use speed only when you know the direction. That way you can build systems that deliver velocity, not chaos.

How to Use Agentic AI for Marketing Campaign Execution

Many marketing teams still rely on AI to summarize campaign data, but stop there. They generate charts, read the output, and then return to the same manual workflows they have used for years. Chris sees this pattern everywhere.
Teams label themselves as “data-driven” while depending on outdated methods like list pulls, rigid segmentation, and one-off blasts that treat everyone in the same group the same way.

Chris calls this “waterfall marketing.” A marketer decides on a goal like improving retention or increasing lifetime value. Then they wait in line for the data team to write SQL, generate lists, and pass it back. That process often takes days or weeks, and the result is usually too narrow or too broad. The entire workflow is slow, disconnected, and full of friction.

Teams that are ahead have moved to agent-based execution. These systems no longer depend on one-off requests or isolated tools. AI agents access a shared semantic layer, interpret past outcomes, and suggest actions that align with business goals. These actions include:

• Identifying the best-fit audience based on past conversions
• Suggesting campaign timing and sequencing
• Launching experiments automatically
• Feeding all results back into a single data source

“You don’t wait ...
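Taken together, the agent actions Chris lists form a closed loop, which is easier to see as code. This is a deliberately toy sketch: the dict-based “semantic layer,” the agent functions, and the numbers are all invented for illustration, and no GrowthLoop API is implied.

```python
# Toy agent loop (illustrative only; no real GrowthLoop API implied).
# The shared "semantic layer" is modeled as a dict of audience -> past results.
semantic_layer = {
    "lapsed_buyers":   {"sends": 1000, "conversions": 80},
    "new_signups":     {"sends": 1000, "conversions": 50},
    "cart_abandoners": {"sends": 1000, "conversions": 120},
}

def pick_audience(layer):
    # Agent 1: rank audiences by past conversion rate.
    return max(layer, key=lambda a: layer[a]["conversions"] / layer[a]["sends"])

def run_experiment(audience):
    # Agent 2: launch the campaign; results are stubbed here.
    return {"sends": 500, "conversions": 70}

def feed_back(layer, audience, result):
    # Agent 3: write outcomes back so the next cycle learns from them.
    layer[audience]["sends"] += result["sends"]
    layer[audience]["conversions"] += result["conversions"]

best = pick_audience(semantic_layer)  # -> "cart_abandoners"
feed_back(semantic_layer, best, run_experiment(best))
```

The point of the loop is that nothing waits on a ticket queue: selection, launch, and feedback all read from and write to the same store, so the next cycle starts from updated numbers.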
    58 m
  • 176: Rajeev Nair: Causal AI and a unified measurement framework
    Jul 1 2025
    What’s up everyone, today we have the pleasure of sitting down with Rajeev Nair, Co-Founder and Chief Product Officer at Lifesight.

    Summary: Rajeev believes measurement only works when it’s unified, or multi-modal: a stack that blends multi-touch attribution, incrementality, media mix modeling, and causal AI, each used for the decision it fits. At Lifesight, that means using causal machine learning to surface hidden experiments in messy historical data and designing geo tests that reveal what actually drives lift. Attribution alone can’t tell you what changed outcomes. Rajeev’s team moved past dashboards and built a system that focuses on clarity, not correlation. Attribution handles daily tweaks. MMM guides long-term planning. Experiments validate what’s real. Each tool plays a role, but none can stand alone.

    About Rajeev

    Rajeev Nair is the Co-Founder and Chief Product Officer at Lifesight, where he’s spent the last several years shaping how modern marketers measure impact. Before that, he led product at Moda and served as a business intelligence analyst at Ebizu. He began his career as a technical business analyst at Infosys, building a foundation in data and systems thinking that still drives his work today.

    Digital Astrology and the Attribution Illusion

    Lifesight started by building traditional attribution tools focused on tracking user journeys and distributing credit across touchpoints using ID graphs. The goal was to help brands understand which interactions influenced conversions. But Rajeev and his team quickly realized that attribution alone didn’t answer the core question their customers kept asking: what actually drove incremental revenue? In response, they shifted gears around 2019, moving toward incrementality testing. They began with exposed versus synthetic control groups, then evolved to more scalable, identity-agnostic methods like geo testing.
This pivot marked a fundamental change in their product philosophy: from mapping behavior to measuring causal impact. Rajeev shares his thoughts on multi-touch attribution and the evolution of the space.

The Dilution of the Term Attribution

Attribution has been hijacked by tracking. Rajeev points straight at the rot. What used to be a way to understand which actions actually led to a customer buying something has become little more than a digital breadcrumb trail. Marketers keep calling it attribution, but what they're really doing is surveillance. They're collecting events and assigning credit based on who touched what ad and when, even if none of it actually changed the buyer’s mind.

The biggest failure here is causality. Rajeev is clear about this. Attribution is supposed to tell you what caused an outcome. Not what appeared next to it. Not what someone happened to click on right before. Actual cause and effect. Instead, we get dashboards full of correlation dressed up as insight. You might see a spike in conversions and assume it was the retargeting campaign, but you’re building castles on sand if you can’t prove causality.

Then comes the complexity problem. Today’s marketing stack is a jungle. You have:

• Paid ads across five different platforms
• Organic content
• Discounts
• Seasonal shifts
• Pricing changes
• Product updates

All these things impact results, but most attribution models treat them like isolated variables. They don’t ask, “What moved the needle more than it would’ve moved otherwise?” They ask, “Who touched the user last before they bought?” That’s not measurement. That’s astrology for marketers.

“Attribution, in today’s marketing context, has just come to mean tracking. The word itself has been diluted.”

Multi-touch attribution doesn’t save you either. It distributes credit differently, but it’s still built on flawed data and weak assumptions. If you’re measuring everything and understanding nothing, you’re just spending more money to stay confused.
Real marketing optimization requires incrementality analysis, not just a prettier funnel chart.

To Measure What Caused a Sale, You Need Experiments

Even with perfect data, attribution keeps lying. Rajeev learned that the hard way. His team chased the attribution grail by building identity graphs so detailed they could probably tell you what toothpaste a customer used. They stitched together first-party and third-party data, mapped the full user journey, and connected every touchpoint from TikTok to in-store checkout. Then they ran the numbers. What came back wasn’t insight. It was statistical noise.

Every marketing team that has sunk months into journey mapping has hit the same wall. At the bottom of the funnel, conversion paths light up like a Christmas tree. Retargeting ads, last-clicked emails, discount codes: they all scream high correlation with purchase. The logic feels airtight until you realize it's just recency bias with a data export. These touchpoints show up because they’re close to conversion. That doesn’t mean they caused it.

“Causality is...
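Geo tests of the kind Rajeev describes are commonly read with a difference-in-differences: the treated region’s change minus the control region’s change is the incremental effect. A bare-bones sketch with made-up numbers (the function name and figures are mine, not Lifesight’s):

```python
def did_lift(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: incremental lift in the treated geo,
    netting out whatever moved the control geo over the same period."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical weekly conversions: the campaign ran only in the treated geo.
treated = {"pre": 400, "post": 520}
control = {"pre": 380, "post": 410}

incremental = did_lift(treated["pre"], treated["post"],
                       control["pre"], control["post"])
print(incremental)  # → 90 conversions attributable to the campaign
```

The control geo absorbs the seasonality, pricing, and product changes that pollute last-touch reports; the 90 here answers the “would it have moved otherwise?” question that attribution alone cannot.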
    1 h 9 m
  • 175: Hope Barrett: SoundCloud’s Martech Leader reflects on their huge messaging platform migration and structuring martech like a product
    Jun 24 2025
    What’s up everyone, today we have the pleasure of sitting down with Hope Barrett, Sr Director of Product Management, Martech at SoundCloud.

    Summary: In twelve weeks, Hope led a full messaging stack rebuild with just three people. They cut 200 legacy campaigns down to what mattered, partnered with MoEngage for execution, and shifted messaging into the product org. Now, SoundCloud ships notifications like features that are part of a core product. Governance is clean, data runs through BigQuery, and audiences sync everywhere. The migration was wild and fast, but incredibly meticulous, and the ultimate gain was making the whole system make sense again.

    About Hope

    Hope Barrett has spent the last two decades building the machinery that makes modern marketing work, long before most companies even had names for the roles she was defining. As Senior Director of Product Management for Martech at SoundCloud, she leads the overhaul of their martech stack, making every tool in the chain pull its weight toward growth. She directs both the performance marketing and marketing analytics teams, ensuring the data is not just collected but used with precision to attract fans and artists at the right cost.

    Before SoundCloud, she spent over six years at CNN scaling their newsletter program into a real asset, not just a vanity list. She laid the groundwork for data governance, built SEO strategies that actually stuck, and made sure editorial, ad sales, and business development all had the same map of who their readers were. Her career also includes time in consulting, digital analytics agencies, and leadership roles at companies like AT&T, Patch, and McMaster-Carr.
Across all of them, she has combined technical fluency with sharp business instincts.

SoundCloud’s Big Messaging Platform Migration and What it Taught Them About Future-Proofing Martech

Diagnosing Broken Martech Starts With Asking Better Questions

Hope stepped into SoundCloud expecting to answer a tactical question: what could replace Nielsen’s multi-touch attribution? That was the assignment. Attribution was being deprecated. Pick something better. What she found was a tangle of infrastructure issues that had very little to do with attribution and everything to do with operational blind spots. Messages were going out, campaigns were triggering, but no one could say how many or to whom with any confidence. The data looked complete until you tried to use it for decision-making.

The core problem wasn’t a single tool. It was a decade of deferred maintenance. The customer engagement platform dated back to 2016. It had been implemented when the vendor’s roadmap was still theoretical, so SoundCloud had built their own infrastructure around it. That included external frequency caps, one-off delivery logic, and measurement layers that sat outside the platform. The platform said it sent X messages, but downstream systems had other opinions. Hope quickly saw the pattern: legacy tooling buried under compensatory systems no one wanted to admit existed.

That initial audit kicked off a full system teardown. The MMP wasn’t viable anymore. Google Analytics was still on Universal. Even the question that brought her in (how to replace MTA) had no great answer. Every path forward required removing layers of guesswork that had been quietly accepted as normal. It was less about choosing new tools and more about restoring the ability to ask direct questions and get direct answers. How many users received a message? What triggered it? Did we actually measure impact or just guess at attribution?

“I came in to answer one question and left rebuilding half the stack.
You start with attribution and suddenly you're gut-checking everything else.”

Hope had done this before. At CNN, she had run full vendor evaluations, owned platform migrations, and managed post-rollout adoption. She knew what bloated systems looked like. She also knew they never fix themselves. Every extra workaround comes with a quiet cost: more dependencies, more tribal knowledge, more reasons to avoid change. Once the platforms can’t deliver reliable numbers and every fix depends on asking someone who left last year, you’re past the point of iteration. You’re in rebuild territory.

Key takeaway: If your team can't trace where a number comes from, the stack isn’t helping you operate. It’s hiding decisions behind legacy duct tape. Fixing that starts with hard questions. Ask what systems your data passes through, which rules live outside the platform, and how long it’s been since anyone challenged the architecture. Clarity doesn’t come from adding more tools. It comes from stripping complexity until the answers make sense again.

Why Legacy Messaging Platforms Quietly Break Your Customer Experience

Hope realized SoundCloud’s customer messaging setup was broken the moment she couldn’t get a straight answer to a basic question: how many messages had been sent? The platform could produce a number, but it was ...
    1 h 3 m
  • 174: Joshua Kanter: A 4-time CMO on the case against data democratization
    Jun 17 2025
    What’s up everyone, today we have the pleasure of sitting down with Joshua Kanter, Co-Founder & Chief Data & Analytics Officer at ConvertML.

    Summary: Joshua spent the earliest parts of his career buried in SQL, only to watch companies hand out dashboards and call it strategy. Teams skim charts to confirm hunches while ignoring what the data actually says. He believes access means nothing without translation. You need people who can turn vague business prompts into clear, interpretable answers. He built ConvertML to guide those decisions. GenAI only raises the stakes. Without structure and fluency, it becomes easier to sound confident and still be completely wrong. That risk scales fast.

    About Joshua

    Joshua started in data analytics at First Manhattan Consulting, then co-founded two ventures: Mindswift, focused on marketing experimentation, and Novantas, a consulting firm for financial services. From there, he rose to Associate Principal at McKinsey, where he helped companies make real decisions with messy data and imperfect information. Then he crossed into operating roles, leading marketing at Caesars Entertainment as SVP of Marketing, where budgets were wild.

    After Caesars, he became a 3-time CMO (basically 4-time): at PetSmart, International Cruise & Excursions, and Encora, each time walking into a different industry with new problems. He now co-leads ConvertML, where he’s focused on making machine learning and measurement actually usable for the people in the trenches.

    Data Democratization Is Breaking More Than It’s Fixing

    Data democratization has become one of those phrases people repeat without thinking. It shows up in mission statements and vendor decks, pitched like some moral imperative. Give everyone access to data, the story goes, and decision-making will become magically enlightened.
But Joshua has seen what actually happens when this ideal collides with reality: chaos, confusion, and a lot of people confidently misreading the same spreadsheet in five different ways.

Joshua isn’t your typical out-of-the-weeds CMO; he’s lived in the guts of enterprise data for 25 years. His first job out of college was grinding SQL for 16 hours a day. He’s been inside consulting rooms, behind marketing dashboards, and at the head of data science teams. Over and over, he’s seen the same pattern: leaders throwing raw dashboards at people who have no training in how to interpret them, then wondering why decisions keep going sideways.

There are several unspoken assumptions built into the data democratization pitch. People assume the data is clean. That it’s structured in a meaningful way. That it answers the right questions. Most importantly, they assume people can actually read it. Not just glance at a chart and nod along, but dig into the nuance, understand the context, question what’s missing, and resist the temptation to cherry-pick for whatever narrative they already had in mind.

“People bring their own hypotheses and they’re just looking for the data to confirm what they already believe.”

Joshua has watched this play out inside Fortune 500 boardrooms and small startup teams alike. People interpret the same report with totally different takeaways. Sometimes they miss what’s obvious. Other times they read too far into something that doesn’t mean anything. They rarely stop to ask what data is not present or whether it even makes sense to draw a conclusion at all.

Giving everyone access to data is great and all… but it only works when people have the skills to use it responsibly. That means more than teaching Excel shortcut keys. It requires real investment in data literacy, mentorship from technical leads, and repeated, structured practice.
Otherwise, what you end up with is a very expensive system that quietly fuels bias, bad decisions, and work for the sake of work.

Key takeaway: Widespread access to dashboards does not make your company data-informed. People need to know how to interpret what they see, challenge their assumptions, and recognize when data is incomplete or misleading. Before scaling access, invest in skills. Make data literacy a requirement. That way you can prevent costly misreads and the costly decisions that follow from them.

How Confirmation Bias Corrupts Marketing Decisions at Scale

Executives love to say they are “data-driven.” What they usually mean is “data-selective.” Joshua has seen the same story on repeat. Someone asks for a report. They already have an answer in mind. They skim the results, cherry-pick what supports their view, and ignore everything else. It is not just sloppy thinking. It’s organizational malpractice that scales fast when left unchecked.

To prevent that, someone needs to sit between business questions and raw data. Joshua calls for trained data translators: people who know how to turn vague executive prompts into structured queries. These translators understand the data architecture, the metrics that matter, and the business logic beneath ...
    1 h 5 m
  • 173: Samia Syed: Dropbox's Director of Growth Marketing on rethinking martech like HR efforts
    Jun 10 2025
    What’s up everyone, today we have the pleasure of sitting down with Samia Syed, Director of Growth Marketing at Dropbox.

    Summary: Samia Syed treats martech like hiring. If it costs more than a headcount, it needs to prove it belongs. She scopes the problem first, tests tools on real data, and talks to people who’ve lived with them, not just vendor reps. Then she tracks usage and outcomes from day one. If adoption stalls or no one owns it, the tool dies. She once watched a high-performing platform get orphaned after a reorg. Great tech doesn’t matter if no one’s accountable for making it work.

    Don’t Buy the Tool Until You’ve Scoped the Job

    Martech buying still feels like the Wild West. Companies drop hundreds of thousands of dollars on tools after a single vendor call, while the same teams will debate for weeks over whether to hire a junior coordinator. Samia calls this out plainly. If a piece of software costs more than a person, why wouldn’t it go through the same process as a headcount request?

    She maps it directly: recruiting rigor should apply to your tech stack. That means running a structured scoping process before you ever look at vendors. In her world, no one gets to pitch software until three things are clear:

    • What operational problem exists right now
    • What opportunities are lost by not fixing it
    • What the strategic unlock looks like if you do

    Most teams skip that. They hear about a product, read a teardown on LinkedIn, and spin up a trial to “explore options.” Then the feature list becomes the job description, and suddenly there’s a contract in legal. At no point did anyone ask whether the team actually needed this, what it was costing them not to have it, or what they were betting on if it worked.

    Samia doesn’t just talk theory. She has seen this pattern lead to ballooning tech stacks and stale tools that nobody uses six months after procurement. A shiny new platform feels like progress, but if no one scoped the actual need, you’re not moving forward.
You’re burying yourself in debt, disguised as innovation.

“Every new tool should be treated like a strategic hire. If you wouldn’t greenlight headcount without a business case, don’t greenlight tech without one either.”

And it goes deeper. You can’t just build a feature list and call that a justification. Samia breaks it into a tiered case: quantify what you lose without the tool, and quantify what you gain with it. How much time saved? How much revenue unlocked? What functions does it enable that your current stack can’t touch? Get those answers first. That way you can decide like a team investing in long-term outcomes, not like a shopper chasing the next product demo.

Key takeaway: Treat every martech investment like a senior hire. Before you evaluate vendors, run a scoping process that defines the current gap, quantifies what it costs you to leave it open, and identifies what your team can achieve once it’s solved. Build a business case with numbers, not just feature wishlists. If you start by solving real problems, you’ll stop paying for shelfware.

Your Martech Stack Is a Mess Because MOPS Wasn’t in the Room Early

Most marketing teams get budget the same way they get unexpected leftovers at a potluck. Something shows up, no one knows where it came from, and now it’s your job to make it work. You get a number handed down from finance. Then you try to retroactively justify it with people, tools, and quarterly goals like you’re reverse-engineering a jigsaw puzzle from the inside out.

Samia sees this happen constantly. Teams make decisions reactively because their budget arrived before their strategy. A renewal deadline pops up, someone hears about a new tool at a conference, and suddenly marketing is onboarding something no one asked for. That’s how you end up with shelfware, disconnected workflows, and tech debt dressed up as innovation.

This is why she pushes for a different sequence. Start with what you want to achieve.
Define the real gaps that exist in your ability to get there. Then use that to build a case for people and platforms. It sounds obvious, but it rarely happens that way. In most orgs, Marketing Ops is left out of the early conversations entirely. They get handed a brief after the budget is locked. Their job becomes execution, not strategy.

“If MOPS is treated like a support team, they can’t help you plan. They can only help you scramble.”

Samia has seen two patterns when MOPS lacks influence. Sometimes the head of MOPS is technically in the room but lacks the confidence, credibility, or political leverage to speak up. Other times, the org’s workflows never gave them a shot to begin with. Everything is set up as a handoff. Business leaders define targets, finance approves the budget, then someone remembers to loop in the people who actually have to make it all run. That structure guarantees misalignment. If you want a smarter stack, you have to fix how decisions get made.

Key takeaway: ...
    1 h
  • 172: Ankur Kothari: A practical guide on implementing AI to improve retention and activation through personalization
    Jun 3 2025
    What’s up everyone, today we have the pleasure of sitting down with Ankur Kothari, Adtech and Martech Consultant who’s worked with big tech names and finance/consulting firms like Salesforce, JPMorgan, and McKinsey. The views and opinions expressed by Ankur in this episode are his own and do not necessarily reflect the official position of his employer.

    Summary: Ankur explains how most AI personalization flops because teams ignore the basics. He helped a brand recover millions just by making the customer journey actually make sense, not by faking it with names in emails. It’s all about fixing broken flows first, using real behavior, and keeping things human even when it’s automated. Ankur is super sharp; he shares a practical maturity framework for AI personalization so you can assess where you currently fit and how you get to the next stage.

    AI Personalization That Actually Increases Retention: A Practical Example

    Most AI personalization in marketing is either smoke, mirrors, or spam. People plug in a tool, slap a customer’s first name on a subject line, then act surprised when the retention numbers keep tanking. The tech isn't broken. The execution is lazy. That’s the part people don’t want to admit.

    Ankur worked with a mid-sized e-commerce brand in the home goods space that was bleeding revenue: $2.3 million a year lost to customers who made one purchase and never returned. Their churn rate sat at 68 percent. Think about that. For every 10 new customers, almost 7 never came back. And they weren’t leaving because the product was bad or overpriced. They were leaving because the whole experience felt like a one-size-fits-all broadcast. No signal, no care, no relevance.

    So he rewired their personalization from the ground up. No gimmicks. No guesswork. Just structured, behavior-based segmentation using first-party data.
    They looked at:
    - Website interactions
    - Purchase history
    - Email engagement
    - Customer service logs
    Then they fed that data into machine learning models to predict what each customer might actually want to do next. From there, they built 27 personalized customer journeys. Not slides in a strategy deck. Actual, functioning sequences that shaped content delivery across the website, emails, and mobile app.
    > “Effective AI personalization is only partly about the tech but more about creating genuinely helpful customer experiences that deliver value rather than just pushing products.”
    The results were wild. Customer retention rose 42 percent. Lifetime value jumped from $127 to $203. Repeat purchase rate grew by 38 percent. Revenue climbed by $3.7 million. ROI hit 7 to 1. One customer who previously spent $45 on a single sustainable item went on to spend more than $600 in the following year after getting dropped into a relevant, well-timed, and non-annoying flow.
    None of this happened because someone clicked “optimize” in a tool. It happened because someone actually gave a damn about what the customer experience felt like on the other side of the screen. The lesson isn’t that AI personalization works. The lesson is that it only works if you use it to solve real customer problems.
    Key takeaway: AI personalization moves the needle when you stop using it as a buzzword and start using it to deliver context-aware, behavior-driven customer experiences. Focus on first-party data that shows how customers interact. Then build distinct journeys that respond to actual behavior, not imagined personas. That way you can increase retention, grow customer lifetime value, and stop lighting your acquisition budget on fire.
    Why AI Personalization Fails Without Fixing Basic Automation First
    Signing up for YouTube ads should have been a clean experience. A quick onboarding, maybe a personalized email congratulating you for launching your first campaign, a relevant tip about optimizing CPV.
    Instead, the email that landed was generic and mismatched: “Here’s how to get started,” despite the fact that the account had already launched its first ad. This kind of sloppiness doesn’t just kill momentum. It exposes a bigger problem: teams chasing personalization before fixing basic logic.
    Ankur saw this exact issue on a much more expensive stage. A retail bank had sunk $2.3 million into an AI-driven loan recommendation engine. Sophisticated architecture, tons of fanfare. Meanwhile, their onboarding emails were showing up late and recommending products users already had. That oversight translated to $3.7 million in missed annual cross-sell revenue. Not because the AI was bad, but because the foundational workflows were broken.
    The failure came from three predictable sources:
    - Teams operated in silos. Innovation was off in its own corner, disconnected from marketing ops and customer experience.
    - The tech stack was split in two. Legacy systems handled core functions, but were too brittle to change. AI was layered on top, using modern platforms that didn’t integrate cleanly.
    - Leaders focused on innovation metrics, while no one owned ...
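    The two failure modes in this episode summary share a fix: route customers by observed behavior, and never recommend what they already have. A minimal sketch of that routing logic; the class, journey names, and thresholds are illustrative assumptions, not details from the episode:

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    """First-party behavior signals, the kind of data the segmentation used."""
    purchases: list = field(default_factory=list)  # product categories owned
    email_opens_30d: int = 0
    site_visits_30d: int = 0
    support_tickets_30d: int = 0

def next_journey(c: Customer) -> str:
    """Pick the next journey from observed behavior, not imagined personas."""
    if c.support_tickets_30d > 0:
        return "service-recovery"            # fix the relationship before selling
    if len(c.purchases) == 1 and c.email_opens_30d == 0:
        return "post-purchase-reactivation"  # the one-and-done buyer going cold
    if c.site_visits_30d >= 3:
        return "cross-sell"                  # actively browsing again
    return "nurture"

def cross_sell_candidates(c: Customer, catalog: list) -> list:
    """Never recommend products the customer already holds (the bank's failure mode)."""
    return [p for p in catalog if p not in c.purchases]
```

    For example, a customer with one purchase and no recent email opens routes to the reactivation journey, and a customer who already holds a checking account is only ever offered the rest of the catalog.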
    53 m
  • 171: Kim Hacker: Reframing tool FOMO, making AI face real work and catching up on AI skills
    May 27 2025
    What’s up everyone, today we have the pleasure of sitting down with Kim Hacker, Head of Business Ops at Arrows.
    Summary: Tool audits miss the mess. If you’re trying to consolidate without talking to your team, you’re probably breaking workflows that were barely holding together. The best ops folks already know this: they’re in the room early, protecting momentum, not patching broken rollouts. Real adoption spreads through peer trust, not playbooks. And the people thriving right now are the generalists automating small tasks, spotting hidden friction, and connecting dots across sales, CX, and product. If that’s you (or you want it to be) keep reading or hit play.
    About Kim
    - Kim started her career in various roles like Design intern and Exhibit designer/consultant
    - She later became an Account exec at a Marketing Agency
    - She then moved over to Sawyer in a Partnerships role and later Customer Onboarding
    - Today Kim is Head of Business Operations at Arrows
    Most AI Note Takers Just Parrot Back Junk
    Kim didn’t set out to torch 19 AI vendors. She just wanted clarity.
    Her team at Arrows was shipping new AI features for their digital sales room, which plugs into HubSpot. Before she went all in on messaging, she decided to sanity check the market. What were other sales teams in the HubSpot ecosystem actually *doing* with AI? Over a dozen calls later, the pattern was obvious: everyone was relying on AI note takers to summarize sales calls and push those summaries into the CRM.
    But no one was talking about the quality. Kim realized if every downstream sales insight starts with the meeting notes, then those notes better be reliable. So she ran her own side-by-side teardown of 22 AI note takers. No configuration. No prompt tuning. Just raw, out-of-the-box usage to simulate what real teams would experience.
    > “If the notes are garbage, everything you build on top of them is garbage too.”
    She was looking for three things: accuracy, actionability, and structure.
    The kind of summaries that help reps do follow-ups, populate deal intelligence, or even just remember the damn call. Out of 22 tools, only *three* passed that bar. The rest ranged from shallow summaries to complete misinterpretations. Some even skipped entire sections of conversations or hallucinated action items that never came up.
    It’s easy to assume an AI-generated summary is “good enough,” especially if it sounds coherent. But sounding clean is not the same as being useful. Most note takers aren’t designed for actual sales workflows. They’re just scraping audio for keywords and spitting out templated blurbs. That’s fine for keeping up appearances, but not for decision-making or pipeline accuracy.
    Key takeaway: Before layering AI on top of your sales stack, audit your core meeting notes. Run a side-by-side test on your current tool, and look for three things: accurate recall, structured formatting, and clear next steps. If your AI notes aren’t helping reps follow up faster or making your CRM smarter, they’re just noise in a different font.
    Why Most Teams Will Miss the AI Agent Wave Entirely
    The vision is seductive. Sales reps won’t write emails. Marketers won’t build workflows. Customer success won’t chase follow-ups. Everyone will just supervise agents that do the work for them. That future sounds polished, automated, and eerily quiet. But most teams are nowhere close. They’re stuck in duplicate records, tool bloat, and a queue of Jira tickets no one’s touching. AI agents might be on the roadmap, but the actual work is still being done by humans fighting chaos with spreadsheets.
    Kim sees the disconnect every day. AI fatigue isn’t coming from overuse. It’s coming from bad framing. “A lot of people talking about AI are just showing the most complex or viral workflows,” she explains.
    “That stuff makes regular folks feel behind.” People see demos built for likes, not for legacy systems, and it creates a false sense that they’re supposed to be automating their entire job by next quarter.
    > “You can’t rely on your ops team to AI-ify the company on their own. Everyone needs a baseline.”
    Most reps haven’t written a good prompt, let alone tried chaining tools together. You can’t go from zero to agent management without a middle step. That middle step is building a culture of experimentation. Start with small, daily use cases. Help people understand how to prompt, what clean AI output looks like, and how to tell when the tool is lying. Get the entire org to that baseline, then layer on tools like Zapier Agents or Relay App to handle the next tier of automation.
    Skipping the basics guarantees failure later. Flashy agents look great in demos, but they don’t compensate for unclear processes or teams that don’t trust automation. If the goal is to future-proof your workflows, the work starts with people, not tools.
    Key takeaway: If your team isn’t fluent in basic AI usage, agent-powered workflows are a pipe dream. Build a shared ...
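    Kim’s three criteria (accurate recall, clear next steps, structured formatting) translate naturally into a pass/fail rubric for a teardown like hers. A minimal sketch; the field names, 0-5 scale, and disqualification rule are assumptions for illustration, not her actual methodology:

```python
from dataclasses import dataclass

@dataclass
class SummaryReview:
    tool: str
    accurate_recall: int     # 0-5: captured what was actually said
    clear_next_steps: int    # 0-5: a rep could follow up from it
    structured: int          # 0-5: consistent, CRM-ready formatting
    hallucinated: bool = False  # invented action items disqualify outright

def passes_bar(r: SummaryReview, min_score: int = 4) -> bool:
    """A tool passes only if it clears the bar on every criterion."""
    if r.hallucinated:
        return False
    return min(r.accurate_recall, r.clear_next_steps, r.structured) >= min_score

reviews = [
    SummaryReview("tool-a", 5, 4, 5),
    SummaryReview("tool-b", 5, 5, 4, hallucinated=True),  # sounded clean, made things up
    SummaryReview("tool-c", 3, 4, 5),                     # shallow recall
]
shortlist = [r.tool for r in reviews if passes_bar(r)]
```

    Scoring on the weakest criterion rather than the average matters here: a summary that sounds coherent but misses the actual conversation should not be rescued by pretty formatting.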
    1 h 1 m