126: Critique-al Thinking
Tom got AI to critique his sales call. The feedback was detailed, line-by-line, technically correct... and basically useless.
In this episode, we dig into the surprising limitations of LLMs that most people don't seem to be talking about. Not the obvious media fluff about hallucinations or training data or taking everyone's jobs, but the deeper constraint: they can't reorient.
We start with our experiment using an LLM to critique one of our client discovery calls, which led to an observation about what's missing. We talk about what happens when AI conducts research interviews, why care home robots are increasing the workload they're supposed to decrease, and the crucial difference between "reading all the books" and actually understanding what matters.
This isn't anti-AI. It's about being clear about what these tools can and can't do, and why that matters for anyone doing customer research, strategy work, or trying to understand real human problems.
Including-but-not-limited-to:
- Why the AI critique of Tom's sales call was technically brilliant but fundamentally unhelpful
- Boyd's OODA loop and the missing "orientation" capability in LLMs
- What happened when someone showed up to a research call... with an AI interviewer
- The emotion gap: why LLMs can't follow the rich seams of energy in a conversation
- Why LLMs don't know when to pivot and when to push
- Japanese care home robots that create more work than they save, and the "babysitting idiots" effect
- Venkatesh Rao's "it's read all the books" theory of LLM usefulness (and when it actually works)
- How our "expert panel" AI prompt is useful for critique—if you keep your critical thinking switched on
- Why pattern-matching to words isn't the same as understanding context
- You heard it here second? Active inference models: the next wave beyond LLMs?
If you'd like a copy of our experimental "expert panel of dissenters" prompt, email us at tentacles@crownandreach.com and remember the risk: it requires your critical thinking.
References
- Ben Ford ("Commando Dev") on No Way Out Podcast https://podcasts.apple.com/gb/podcast/agentic-ai-thinks-like-boyd-the-ooda-upgrade-llms-cant-touch/id1663685759?i=1000734032438
- Venkatesh Rao https://substack.com/@contraptions
- John Boyd's OODA Loop and Snowmobiling
- JP Castlin's Strategy in Praxis https://strategyinpraxis.substack.com/p/the-only-one-writing-and-ai
- Dave Snowden's Ritual Dissent https://cynefin.io/wiki/Ritual_dissent
Find out more about us and our work at crownandreach.com
Hosted on Acast. See acast.com/privacy for more information.