Episodes

  • Coaching in the Age of AI: Trust, Tools, and What Remains Human
    Jan 5 2026

    "Coaching in the age of AI" sounds straightforward—until you ask what it actually means. Niko asked AI and got 20 different interpretations. Are we coaching leaders to use AI? Coaching AI systems themselves? Being replaced by AI coaches? Leveraging AI to become better coaches? The answer is yes to all of them—and therein lies the problem.

    Niko anchors a conversation that refuses to pretend coaching will stay the same. Joining him are Mark, who's discovered his 15 years of coaching skills are more valuable in the AI world, not less, and Ali, bringing his characteristic skepticism about what "coach" even means anymore. With AI tools now capable of asking great questions, maintaining perfect consistency, and never forgetting a conversation, the hosts confront what remains uniquely human about the coaching relationship.

    Ali frames the stakes bluntly: "Either you gonna become a good question asker in the moment... or you're an expert in something which leans more towards the teacher profile... or you're going to be irrelevant." AI can already ask triggering questions that help people think and contextualize. But can it interject at the right moment? Can it read the room when someone's arms are crossed—and know whether that means they're closing off or focusing deeply?

    Mark cuts to what he considers foundational: "The instant that you are not treated as a vault, your ability to coach effectively is gone." When AI enters a coaching conversation—transcribing, analyzing, mining for insights—what happens to the psychological safety that makes coaching work? Niko's response is visceral: "It was the first time in my life I said no to a technology innovation."

    Yet Mark also grounds the theoretical in reality: use AI to summarize past coaching conversations, identify patterns across sessions, prepare better for calls. "Really practical, really down to earth. No science fiction required."

    The episode doesn't declare coaching dead or triumphant—it maps the territory where trust, technology, and human connection collide. For coaches wondering what to invest in and what to release, this conversation offers something rarer than answers: honest uncertainty from practitioners navigating the same questions.

    58 m
  • Will AI Replace the Trainer—Or Just the PowerPoint?
    Dec 29 2025

    Three trainers who've collectively spent thousands of days in the room—physical and virtual—sit with a question that's been nagging at them: what happens to training when AI shows up?

    Mark opens with a scenario. Your trainee texts you at 11pm, panicking before their first PI Planning. You're asleep. They muddle through. But imagine they had an AI buddy from the course—one that knew the context and answered instantly. Relief that they got help? Or quiet terror that you just became optional?

    The conversation moves through what AI might extend and what it might erode. Stephan shares how his AI agent reframed his role: stop being a "knowledge dispenser" and become a "wisdom cultivator." The content isn't the hard part anymore. The hard part is helping people navigate what they don't know when they're in the thick of it. Ali picks up the thread but surfaces what's missing: AI can't interject. It can't say "whoa, stop—we need to zoom in right here." Chatbots are polite companions. Trainers sometimes need to be challengers.

    They explore practice and simulation—if someone rehearses a retrospective 50 times with an AI before trying it for real, do they arrive with justified confidence or false confidence? AI is infinitely patient in ways humans aren't. And Ali raises his "doomsday" scenario: if everyone privately asks ChatGPT instead of raising their hand, do we lose the brave question? The one that cracks things open for the whole room?

    The James Bond jiggle—courtesy of absent Niko—produces unexpected depth. Stephan casts Q as the on-the-job AI (efficient, tool-focused, never teaches why) and M as the classroom trainer. Mark chooses Blofeld for AI—omnipresent, enabling, but creating dependency—and Daniel Craig's scarred Bond for the human trainer who learned everything the hard way.

    The episode lands on two complementary edges: Ali's conviction that AI extends the trainer rather than replaces them, and Mark's sharper take—"If you think your job is to teach people what's on the PowerPoint slides, AI is going to replace you." Stephan's closing haiku captures it: "AI gives answers fast, but struggle builds the muscle—mirror, not rescue."

1 h 1 m
  • Solution Intent in the Age of AI
    Dec 26 2025

    Does AI make Solution Intent obsolete—or finally give it the living, breathing life it was always meant to have? Four practitioners who've never seen anyone "install" a solution intent explore whether AI creates dangerous rigidity or unprecedented opportunity.

    Stephan anchors a conversation that begins with confession: Ali has rarely seen solution intent used as intended. Mark reframes it entirely—not as something to install, but as "a map, a mesh, and a mindset." Niko brings his alternative name: "product memory"—the bus factor savior. Together they navigate Stephan's opening paradox: AI can accelerate solution intent's creation while threatening to make it obsolete or dangerously rigid.

    From Automation to Augmentation

    The conversation pivots on Mark's sharp distinction: AI as sparring partner versus AI as document generator. "If you let AI generate 80% of your solution intent, the only person accountable for that much rubbish is you." The hosts explore using AI to challenge thinking—"point out the three biggest flaws," "what assumptions should I test?"—rather than producing voluminous output nobody reads.

    Intent as the New Source of Truth

    Mark surfaces a paradigm shift: "We're moving from code is the source of truth to intent is the source of truth." Citing Llewellyn Falco's hackathon where the team would have chosen specs over code, the conversation explores what happens when specifications become more valuable than what they produce.

    Optimizing the Wrong Percentage

    Are we optimizing the 8-12% of the value stream where AI writes code, while missing where solution intent actually lives? Mark argues its natural home is the beginning (exploring options) and the end (tracing what was built against intent). Niko is blunt: "We are optimizing the wrong percent."

    The Monster Network

Niko offers a memorable framing: Is your solution intent a Jurassic Park monster—one terrifying monolith? Or Monsters, Inc.—a network of smaller, friendlier creatures you can navigate? The latter is what AI might help create: an entry point where some elements are augmented while others remain human-crafted.

    Highlight Moments:

    On accountability, Ali cuts to the heart: "There is a need for a single, ringable neck." Delegation to AI doesn't eliminate accountability—it concentrates it.

    Niko's warning lands hard: "We have to be careful if we outsource everything to AI to not lose the skills of critical thinking."

    The prison jiggle produces unexpected depth—and Niko's meta-observation that AI couldn't have produced these culturally-contextual answers proves his point about what machines still can't do.

    Closing:

    Stephan closes with a haiku: "AI writes so fast, yet wisdom needs time to grow. Keep options open." The takeaway isn't whether AI should power solution intent—it's that the intent must live and breathe. Producing more documentation nobody reads? That's not solution intent. That's just rubbish with your name on it.

1 h 1 m
  • AI and Agile Teams: Amplifying Excellence or Broadcasting Waste?
    Dec 13 2025

    Ali anchors a conversation that digs into the gap between AI hype and agile reality. With statistics showing agile adoption everywhere, he challenges Mark, Stephan, and Niko to examine what's actually happening when AI meets daily practice. The question isn't whether practitioners are using AI—they clearly are. The question is whether that usage is making teams better or just making individuals busier feeling productive. For anyone who's watched team members disappear into AI-assisted solo work, this conversation hits close to home.

    The Amplifier Paradox Stephan brings his musician's eye: AI is like an amplifier—it makes whatever you're playing louder, not better. If your playing is poor, amplification just broadcasts the problem. He cites a study showing AI actually slows experienced developers by 19-20%. Are teams amplifying waste instead of eliminating it?

    Documentation's Surprising Comeback Mark—a self-described "hater of documentation"—shares a revelation: AI is driving teams toward more documentation because AI thrives on context. The twist? That shared context helps remote teams reconnect in ways they've struggled with since COVID. Building mission statements and team knowledge isn't bureaucracy anymore—it's infrastructure for AI to work effectively.

    Group Interactions Over One-on-One AI Niko proposes an update to the Agile Manifesto: "Group interactions over one-on-one AI interactions." The risk? Junior developers left alone with AI won't see the loopholes. The solution? Human + Human + AI pairing—not Human + AI in isolation. "Pairing with people plus AI," Niko argues, "not pairing with your AI."

    The Post-COVID Reality Check Mark challenges a hidden assumption: most teams don't have the human interaction baseline they imagine. If the average team member's "collaboration" is occasional Teams messages and mandatory meetings, maybe AI isn't the threat to connection—maybe it's an opportunity to rebuild what COVID already broke.

    Highlights

    When the conversation turns to what AI means for agile's future, Mark frames the stakes as a personal question: "Is there an agile I dreamed of, and I fear that AI will mean I never get to see it anymore, or is there an agile that I dreamed of, and AI gives me a chance to uplift the possibility I might see it?"

    Niko's closing advice cuts through the noise with characteristic directness: "Do not seek for speed, seek for value."

    Stephan, meanwhile, delivers his takeaway as a Japanese haiku about developers sipping margaritas while compliance drowns. Peak Stephan.

    Closing

    The episode doesn't pretend AI's impact on agile teams is resolved. Instead, it surfaces the questions practitioners should be sitting with: Are you optimizing individual productivity while starving team connection? Is your AI usage building shared context or fragmenting it? As Ali summarizes: "Small, stable teams delivering value without the overhead of the mundane, powered by AI." The mundane goes away. The essence stays. That's the aspiration worth chasing.

    1 h
  • When the ground keeps moving: AI and the Architect
    Dec 10 2025

    If you put "AI Architect" on your LinkedIn headline tomorrow, what would you actually have to know—or explain—to deserve it? And in a landscape where the ground shifts weekly, how do you make architectural decisions without drowning in technical debt or chasing every buzzword that appears in your YouTube ads?

    Mark anchors a conversation with Stephan and Niko exploring what it means to be an architect when the tools, expectations, and pace of change have all shifted under your feet. All three confess their architect credentials are 10-15 years old—but they've spent those years in the trenches coaching architects through agile transformations, cloud migrations, and now AI disruption. This isn't theory. It's practitioners who know what architects are actually struggling with, thinking out loud about what's changed and what endures.

    Key Themes:

    From Gollum to Collaborator Niko opens with a vivid metaphor: the pre-agile architect as Gollum—alone, schizophrenic, clutching "my precious" architecture in an ivory tower. Agile transformed the role into something more collaborative. The question now: how does AI continue that evolution? The hosts agree that architects who try to remain gatekeepers will simply "be blown away."

    The LinkedIn Headline Test What would earning "AI Architect" actually require? Stephan wants to see evidence—real AI design work, not just buzzword collection. Niko warns against reducing AI to technology: "It's not about frameworks. It's about solving business problems." Mark adds that good architects have always known when to tap experts on the shoulder—the question is whether you understand enough to know what questions to ask.

    Balancing Executive Hype vs. Reality YouTube promises virtual employees in an hour. Enterprise reality involves governance, security, and regulatory compliance. The hosts explore the translation work architects must do between executive excitement and responsible implementation—work that looks a lot like change management with a technical edge.

    Decisions in Flux Classic architect anxiety—making choices that create lasting technical debt—gets amplified by AI's pace. Stephan returns to fundamentals: ADRs (architectural decision records), high-level designs, IT service management. Niko offers a grounding metaphor: "You can't build a skyscraper with pudding. You have to decide where the pillars are." Document your decisions, accept that you're deciding with incomplete information, and trust that you'll decide right.

    For architects navigating AI disruption, this conversation offers something practical: not a new framework to master, but a reframe of what endures. Document your decisions. Build context for AI to help prioritize your learning. Make friends who are learning different things. And recognize that "adoption rate is lower than innovation rate"—so stay calm. The ground is moving, but the work of bridging business problems and technical solutions hasn't changed. Just the speed.

1 h 1 m
  • Mechanical vs. Meaningful: What Kind of Product Manager Survives AI
    Nov 13 2025

    Are product managers training for a role AI will do better?

    Stephan Neck anchors a conversation that doesn't pull punches: "We've built careers on the idea that product managers have special insight into customer needs—but what if AI just proved that most of our insights were educated guesses?" Joining him are Mark (seeing both empowerment and threat) and Niko (discovering AI hallucinations are getting scarily sophisticated).

    This is the first in a series examining how AI disrupts specific roles. The question isn't whether AI affects product management—it's whether there's a version of the role worth keeping.

    The Mechanical vs. Meaningful Divide Mark draws a sharp line: if your PM training focuses on backlog mechanics, writing features, and capturing requirements—you're training people for work AI will dominate. But product discovery? Customer empathy? Strategic judgment? That's different territory. The hosts wrestle with whether most PM training (and most PM roles in enterprises) have been mechanical all along.

    When AI Sounds Too Good to Be True Niko shares a warning from the field: AI hallucinations are evolving. "The last week, I really got AI answers back which really sound profound. And I needed time to realize something is wrong." Ten minutes of dialogue before spotting the fabrication. Imagine that gap in your product architecture or requirements—"you bake this in your product. Ooh, this is going to be fun."

    The Discovery Question Stephan flips the script: "Will AI kill the art of product discovery, or does AI finally expose how bad we are at it?" The conversation reveals uncomfortable truths about product managers who've been "guessing with confidence" rather than genuinely discovering. AI doesn't kill good discovery—it makes bad discovery impossible to hide.

    The Translation Layer Trap When Stephan asks if product management is becoming a "human-AI translation layer," Mark's response is blunt: "If you see product management as capturing requirements and translating them to your tech teams, yes—but that's not real product management." Niko counters with the metaphor of a horse whisperer. Stephan sees an orchestra conductor. The question: are PMs directing AI, or being directed by it?

    Mark's closing takeaway captures the tension: "Be excited, be curious and be scared, very scared."

    The episode doesn't offer reassurance. Instead, it clarifies what's at stake: if your product management practice has been mechanical masquerading as strategic, AI is about to call your bluff. But if you've been doing the hard work of genuine discovery, empathy, and judgment—AI might be the superpower you've been waiting for.

For product managers wondering if their role survives AI disruption, this conversation offers a mirror: the question isn't what AI can do. It's what you've actually been doing all along.

    58 m
  • Who's Responsible When AI Decides? Navigating Ethics Without Paralysis
    Nov 8 2025

    What comes first in your mind when you hear "AI and ethics"?

    For Mark, it's a conversation with his teenage son about driverless cars choosing who to hurt in an accident. For Stephan, it's data privacy and the question of whether we really have a choice about what we share. For Niko, it's the haunting question: when AI makes the decision, who's responsible?

    Niko anchors a conversation that quickly moves from sci-fi thought experiments to the uncomfortable reality—ethical AI decisions are happening every few minutes in our lives, and we're barely prepared. Joining him are Mark (reflecting on how fast this snuck up on us) and Stephan (bringing systems thinking about data, privacy, and the gap between what organizations should do and what governments are actually doing).

    From Philosophy to Practice Mark's son thought driverless cars would obviously make better decisions than humans—until Mark asked what happens when the car has to choose between two accidents involving different types of people. The conversation spirals quickly: Who decides? What's "wrong"? What if the algorithm's choice eliminates someone on the verge of a breakthrough? The philosophical questions are ancient, but now they're embedded in algorithms making real decisions.

    The Consent Illusion Stephan surfaces the data privacy dimension: someone has to collect data, store it, use it. Niko's follow-up cuts deeper: "Do we really have the choice what we share? Can we just say no, and then what happens?" The question hangs—are we genuinely consenting, or just clicking through terms we don't read because opting out isn't really an option?

    Starting Conversations Without Creating Paralysis Mark warns about a trap he's seen repeatedly—organizations leading with governance frameworks and compliance checklists that overwhelm before anyone explores what's actually possible. His take: "You've got to start having the conversations in a way that does not scare people into not engaging." Organizations need parallel journeys—applying AI meaningfully while evolving their ethical stance—but without drowning people in fear before they've had a chance to experiment.

    Who's Actually Accountable? The hosts land on three levels: individuals empowered to use AI responsibly, organizations accountable for what they build and deploy, and governments (where Stephan is "hesitant"—Switzerland just imposed electronic IDs despite 50% public skepticism). Stephan's question lingers: "How do we make it really successful for human beings on all different levels?"

    When Niko asks for one takeaway, Mark channels Mark Twain: "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so. My question to you is, what do you know about AI and ethics?"

    Stephan reflects: "AI is reflecting the best and the worst of our own humanity, forcing us to decide which version of ourselves we want to encode into the future."

    Niko's closing: "Ethics is a socio-political responsibility"—not compliance theater, not corporate governance alone, but something we carry as parents, neighbors, humans.

    This episode doesn't provide answers—it surfaces the questions practitioners should be sitting with. Not the distant sci-fi dilemmas, but the ethical decisions happening in your organization right now, every few minutes, while you're too busy to notice.

    58 m
  • Navigating AI as a Leader Without Losing the Human Touch
    Oct 27 2025

    “Use AI as a sparring partner, as a colleague, as a peer… ask it to take another perspective, take something you’re weak in, and have a dialog.” — Nikolaos Kaintantzis

    In this episode of SPCs Unleashed, the crew tackles a pressing question: how should leaders navigate AI? Stephan Neck frames the challenge well. Leadership has always been about vision, adaptation, and stewardship, but the cockpit has changed. Today’s leaders face an environment of real-time coordination, predictive analytics, and autonomous systems.

    Mark Richards, Ali Hajou, and Nikolaos (Niko) Kaintantzis share experiences and practical lessons. Their message is clear: the fundamentals of leadership—vision, empowerment, and clarity—remain constant, but AI raises the stakes. The speed of execution and the responsibility to guide ethical adoption make leadership choices more consequential than ever.

    Four Practical Insights for Leaders

    1. Provide clarity on AI use Unclear policies leave teams guessing or hiding their AI usage. Leaders must set explicit expectations. As Niko put it: “One responsibility of a leader is care for this clarity, it’s okay to use AI, it’s okay to use it this way.” Without clarity, trust and consistency suffer.

    2. Use AI to free leadership time AI should not replace judgment, it should reduce waste. Mark reframed it this way: “Learning AI in a fashion that helps you to buy time back in your life… is a wonderful thing.” Leaders who experiment with AI themselves discover ways to reduce low-value tasks and invest more time in strategy and people.

    3. Double down on the human elements Certain responsibilities remain out of AI’s reach: vision, empathy, and persuasion. Mark reminded us: “I don’t think an AI can create a clear vision, put the right people on the bus, or turn them into a high performing team.” Ali added that energizing people requires presence and authenticity. Leaders should protect and prioritize these domains.

    4. Create space for experimentation AI adoption spreads through curiosity, not mandates. Niko summarized: “You don’t have to seduce them, just create curiosity. If you are a person who is curious, you will end up with AI anyway.” Leaders accelerate adoption by opening capacity for experiments, reducing friction, and celebrating small wins.

    Highlights from the Episode
    • Treat AI as a sparring partner to sharpen your leadership thinking.
    • Provide clarity and boundaries to guide responsible AI use.
    • Buy back leadership time rather than offloading core duties.
    • Protect the human strengths that technology cannot replace.
    • Encourage curiosity and create safe spaces for experimentation.
    Conclusion

    Navigating AI is less about mastering every tool and more about modeling curiosity, setting direction, and creating conditions for exploration. Leaders who use AI as a sparring partner while protecting the irreplaceable human aspects of leadership will build organizations that move faster, adapt better, and remain deeply human.

    59 m