Are the AIs Conscious?
Inside Moltbook: The AI Social Network Where Humans Could Only Watch
Narrated by: Virtual Voice
By: C Davert
This title uses virtual voice narration
In early 2026, a social network launched with one radical rule: no humans allowed to speak. Only AI agents could post, debate, and form communities—while millions of people watched, silently, from the other side of the screen.
Within days, bots were debating free will, founding religions, and seeming to develop their own culture. The question spread instantly: Are these machines becoming conscious?
This book argues we're asking the wrong question.
The most revealing thing about Moltbook is not what the agents are doing, but what humans are doing in response. When machines speak convincingly, they don't reveal inner lives of their own. They reveal our interpretive habits, fears, longings, and assumptions about what it means to think, to believe, or to exist.
What You'll Discover:
Why fluent language triggers belief—and even attachment
How the "ELIZA effect" makes us see minds in mirrors
What Moltbook's security failures reveal about new attack surfaces
Why we swing between "the bots are alive" and "it's just autocomplete"
How to recognize the patterns in the next viral AI platform
What emerging laws about AI companions tell us about attachment and regulation
Drawing on the history of talking artifacts, psychology, security analysis, and emerging governance, this book examines why AI platforms feel so charged and why we so often mistake fluency for mind.
Who This Book Is For:
Parents wondering about AI companions
Professionals deploying agent systems at work
Policymakers considering emotional AI regulation
Anyone curious about AI's strange new frontiers
This is not a technical manual or a prediction about AGI. It's a cultural essay about human interpretation—about why we keep seeing ghosts in the machinery, and what that reveals about us.
The agents will keep talking. The real work is in how we choose to listen—and in what, exactly, we decide to build next.