Episode 146: AI Coaching Meets ICF Standards with guests, Jonathan Passmore & Rebecca Rutschmann
What happens when an AI coach is judged by the same yardstick as a human? We invited executive coach and researcher Jonathan Passmore and AI coaching innovator Rebecca Rutschmann to unpack their new study benchmarking an AI coach against ICF Core Competencies—and the results upend assumptions. The machine reliably demonstrated ACC-level performance and crossed more than half of the PCC markers, especially in crisp summarizing and steady open questioning. That said, we draw a clear line between competence at the basics and the deep, sustained presence required for identity, values, and ethically nuanced conversations.
Across the hour, we explore where AI coaching shines—24/7 availability, structured reflection, accountability loops—and where it still stumbles: longer arcs, emotional complexity, and the tendency to praise rather than challenge. Rebecca argues capabilities are leaping forward with better prompting frameworks, onboarding, and conversational design, pointing to recent builds that reach deeper reflective work. Jonathan counters that human strengths remain decisive: relational humor, embodied presence, lived experience, and ethical maturity that can hold discomfort without defaulting to platitudes. We converge on a future of hybrid models that use AI for pre-work, micro-coaching, and late-night clarity, while reserving human time for complexity and transformation.
We also face the economics. With a surging supply of coaches and falling fees for transactional work, differentiation becomes urgent. If AI can do the basics well, human coaches must elevate to PCC-level craft as a baseline, specialize with domain and identity expertise, and design client journeys that blend AI tools without diluting trust. Finally, we call for new standards: if AI is an orange to the human apple, we need AI-specific metrics for safety, continuity, bias, escalation, and outcome transparency—so clients know what they’re choosing.
Curious where to start? We share practical steps for AI literacy and fluency, plus communities and programs that help you experiment safely and ethically. Subscribe, share this conversation with a colleague who’s on the fence, and leave a review with your take: partner, threat, or both?
Watch the full interview by clicking here.
Find the full article here.
Learn more about Jonathan here.
Learn more about Rebecca here.
Grab your free issue of choice Magazine here - https://choice-online.com/