Accidentally Engineering Conscious AI
In this episode, we break down David Chalmers’ essay on whether large language models could ever possess consciousness. Chalmers argues that current systems likely lack subjective experience due to their feedforward architectures and lack of sensory grounding, but maintains that there is no principled reason silicon systems could not be conscious. We unpack the technical hurdles he identifies—such as unified agency, recurrent processing, and global workspaces—and how they form a roadmap for building potentially conscious AI. The discussion also raises the ethical implications of creating artificial systems that might one day deserve moral consideration.