
Agents Unleashed


By: Stephan Neck, Niko Kaintantzis, Ali Hajou, Mark Richards

Agents Unleashed is a podcast for curious change agents building the next generation of adaptive organizations — where people and AI learn, work, and evolve together.

Hosted by Mark Richards, Ali Hajou, Stephan Neck, and Nikolaos Kaintantzis, the show blends stories from the field with experiments in agility, leadership, and technology. We explore how work is changing — from agile teams to agentic ecosystems — through honest conversation, a dash of mischief, and the occasional metaphor that gets away from us.

We’re not selling frameworks or chasing hype. We’re practitioners figuring it out in real time — curious, hopeful, and sometimes hilariously wrong.
Join us as we unpack what it really means to be adaptive in a world where intelligent agents (human and otherwise) are rewriting the rules of change.

© 2025 Shaping Agility
Categories: Personal Development, Personal Success
Episodios
  • Mechanical vs. Meaningful: What Kind of Product Manager Survives AI
    Nov 13 2025

    Are product managers training for a role AI will do better?

    Stephan Neck anchors a conversation that doesn't pull punches: "We've built careers on the idea that product managers have special insight into customer needs—but what if AI just proved that most of our insights were educated guesses?" Joining him are Mark (seeing both empowerment and threat) and Niko (discovering AI hallucinations are getting scarily sophisticated).

    This is the first in a series examining how AI disrupts specific roles. The question isn't whether AI affects product management—it's whether there's a version of the role worth keeping.

    The Mechanical vs. Meaningful Divide
    Mark draws a sharp line: if your PM training focuses on backlog mechanics, writing features, and capturing requirements—you're training people for work AI will dominate. But product discovery? Customer empathy? Strategic judgment? That's different territory. The hosts wrestle with whether most PM training (and most PM roles in enterprises) has been mechanical all along.

    When AI Sounds Too Good to Be True
    Niko shares a warning from the field: AI hallucinations are evolving. "The last week, I really got AI answers back which really sound profound. And I needed time to realize something is wrong." Ten minutes of dialogue before spotting the fabrication. Imagine that gap in your product architecture or requirements—"you bake this in your product. Ooh, this is going to be fun."

    The Discovery Question
    Stephan flips the script: "Will AI kill the art of product discovery, or does AI finally expose how bad we are at it?" The conversation reveals uncomfortable truths about product managers who've been "guessing with confidence" rather than genuinely discovering. AI doesn't kill good discovery—it makes bad discovery impossible to hide.

    The Translation Layer Trap
    When Stephan asks if product management is becoming a "human-AI translation layer," Mark's response is blunt: "If you see product management as capturing requirements and translating them to your tech teams, yes—but that's not real product management." Niko counters with the metaphor of a horse whisperer. Stephan sees an orchestra conductor. The question: are PMs directing AI, or being directed by it?

    Mark's closing takeaway captures the tension: "Be excited, be curious and be scared, very scared."

    The episode doesn't offer reassurance. Instead, it clarifies what's at stake: if your product management practice has been mechanical masquerading as strategic, AI is about to call your bluff. But if you've been doing the hard work of genuine discovery, empathy, and judgment—AI might be the superpower you've been waiting for.

    For product managers wondering if their role survives AI disruption, this conversation offers a mirror: the question isn't what AI can do. It's what you've actually been doing all along.

    58 m
  • Who's Responsible When AI Decides? Navigating Ethics Without Paralysis
    Nov 8 2025

    What comes first in your mind when you hear "AI and ethics"?

    For Mark, it's a conversation with his teenage son about driverless cars choosing who to hurt in an accident. For Stephan, it's data privacy and the question of whether we really have a choice about what we share. For Niko, it's the haunting question: when AI makes the decision, who's responsible?

    Niko anchors a conversation that quickly moves from sci-fi thought experiments to the uncomfortable reality—ethical AI decisions are happening every few minutes in our lives, and we're barely prepared. Joining him are Mark (reflecting on how fast this snuck up on us) and Stephan (bringing systems thinking about data, privacy, and the gap between what organizations should do and what governments are actually doing).

    From Philosophy to Practice
    Mark's son thought driverless cars would obviously make better decisions than humans—until Mark asked what happens when the car has to choose between two accidents involving different types of people. The conversation spirals quickly: Who decides? What's "wrong"? What if the algorithm's choice eliminates someone on the verge of a breakthrough? The philosophical questions are ancient, but now they're embedded in algorithms making real decisions.

    The Consent Illusion
    Stephan surfaces the data privacy dimension: someone has to collect data, store it, use it. Niko's follow-up cuts deeper: "Do we really have the choice what we share? Can we just say no, and then what happens?" The question hangs—are we genuinely consenting, or just clicking through terms we don't read because opting out isn't really an option?

    Starting Conversations Without Creating Paralysis
    Mark warns about a trap he's seen repeatedly—organizations leading with governance frameworks and compliance checklists that overwhelm before anyone explores what's actually possible. His take: "You've got to start having the conversations in a way that does not scare people into not engaging." Organizations need parallel journeys—applying AI meaningfully while evolving their ethical stance—but without drowning people in fear before they've had a chance to experiment.

    Who's Actually Accountable?
    The hosts land on three levels: individuals empowered to use AI responsibly, organizations accountable for what they build and deploy, and governments (where Stephan is "hesitant"—Switzerland just imposed electronic IDs despite 50% public skepticism). Stephan's question lingers: "How do we make it really successful for human beings on all different levels?"

    When Niko asks for one takeaway, Mark channels Mark Twain: "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so. My question to you is, what do you know about AI and ethics?"

    Stephan reflects: "AI is reflecting the best and the worst of our own humanity, forcing us to decide which version of ourselves we want to encode into the future."

    Niko's closing: "Ethics is a socio-political responsibility"—not compliance theater, not corporate governance alone, but something we carry as parents, neighbors, humans.

    This episode doesn't provide answers—it surfaces the questions practitioners should be sitting with. Not the distant sci-fi dilemmas, but the ethical decisions happening in your organization right now, every few minutes, while you're too busy to notice.

    58 m
  • Navigating AI as a Leader Without Losing the Human Touch
    Oct 27 2025

    “Use AI as a sparring partner, as a colleague, as a peer… ask it to take another perspective, take something you’re weak in, and have a dialog.” — Nikolaos Kaintantzis

    In this episode of SPCs Unleashed, the crew tackles a pressing question: how should leaders navigate AI? Stephan Neck frames the challenge well. Leadership has always been about vision, adaptation, and stewardship, but the cockpit has changed. Today’s leaders face an environment of real-time coordination, predictive analytics, and autonomous systems.

    Mark Richards, Ali Hajou, and Nikolaos (Niko) Kaintantzis share experiences and practical lessons. Their message is clear: the fundamentals of leadership—vision, empowerment, and clarity—remain constant, but AI raises the stakes. The speed of execution and the responsibility to guide ethical adoption make leadership choices more consequential than ever.

    Four Practical Insights for Leaders

    1. Provide clarity on AI use
    Unclear policies leave teams guessing or hiding their AI usage. Leaders must set explicit expectations. As Niko put it: "One responsibility of a leader is care for this clarity, it's okay to use AI, it's okay to use it this way." Without clarity, trust and consistency suffer.

    2. Use AI to free leadership time
    AI should not replace judgment; it should reduce waste. Mark reframed it this way: "Learning AI in a fashion that helps you to buy time back in your life… is a wonderful thing." Leaders who experiment with AI themselves discover ways to reduce low-value tasks and invest more time in strategy and people.

    3. Double down on the human elements
    Certain responsibilities remain out of AI's reach: vision, empathy, and persuasion. Mark reminded us: "I don't think an AI can create a clear vision, put the right people on the bus, or turn them into a high performing team." Ali added that energizing people requires presence and authenticity. Leaders should protect and prioritize these domains.

    4. Create space for experimentation
    AI adoption spreads through curiosity, not mandates. Niko summarized: "You don't have to seduce them, just create curiosity. If you are a person who is curious, you will end up with AI anyway." Leaders accelerate adoption by opening capacity for experiments, reducing friction, and celebrating small wins.

    Highlights from the Episode
    • Treat AI as a sparring partner to sharpen your leadership thinking.
    • Provide clarity and boundaries to guide responsible AI use.
    • Buy back leadership time rather than offloading core duties.
    • Protect the human strengths that technology cannot replace.
    • Encourage curiosity and create safe spaces for experimentation.
    Conclusion

    Navigating AI is less about mastering every tool and more about modeling curiosity, setting direction, and creating conditions for exploration. Leaders who use AI as a sparring partner while protecting the irreplaceable human aspects of leadership will build organizations that move faster, adapt better, and remain deeply human.

    59 m