I Don't Want to Believe, Audiobook by Eduardo Valencia

I Don't Want to Believe

AI, Prediction, and the Discipline of Delay



I Don't Want to Believe

By: Eduardo Valencia
Narrated by: Virtual Voice

Buy now for $3.99



This title uses virtual voice narration

Virtual Voice is computer-generated narration for audiobooks.

The next AI prediction you receive will probably be reasonable. That is exactly the problem. It will arrive as a ranking, a default, a pre-filled field. It will feel like information. It will function as a decision. And the distance between the two will be invisible, because no one designed a moment to notice it.

Eduardo Valencia kept seeing the same pattern: not AI failing, but AI succeeding in ways that quietly displaced the judgment it was supposed to support. A pharmaceutical pricing system where the human always chose the safer number. A language learning platform where instructors overrode the system to make learners feel better, and measurably slowed their retention. A hiring committee that stopped discussing candidates below a certain score threshold. No one decided to exclude anyone. Attention simply narrowed. In each case, the prediction was reasonable. In each case, belief arrived before anyone noticed it had.

I Don't Want to Believe proposes the discipline of delay: not hesitation, but the practice of stating conditions before committing to outcomes. Drawing on Popper's falsifiability as an operational principle, it offers a framework for organizations that want to use AI without being used by it. The book makes three testable claims, and commits to being shelved if any prove wrong. A book about resisting premature belief should be willing to fail on its own terms.

Book 3 of the Thinking AI series. Book 1: AI Requires More Human Intelligence (the human override problem). Book 2: Shadow AI (how ungoverned AI becomes structural dependency).

Categories: Management, Management & Leadership
No reviews yet