Should We Keep Our Models Ignorant? Lessons from DEEP THOUGHT (and More) About AI Safety After Oxford's Deep Ignorance Study (ceAI - S4, E5)
- Explain the methodology and key findings of the Oxford Deep Ignorance study, including its effectiveness and limitations.
- Analyze how filtering dangerous knowledge creates deliberate “blind spots” in AI models that are both protective and constraining.
- Interpret science fiction archetypes (Deep Thought’s flawed logic, Severance’s controlled consciousness, Golems’ partial truth, Annihilation’s Shimmer) as ethical lenses for AI cultivation.
- Evaluate the trade-offs between tamper-resistance, innovation, and intellectual wholeness in AI.
- Assess how epistemic filters, algorithmic bias, and governance structures shape both safety outcomes and cultural risks.
- Debate the philosophical shift from engineering AI for control (building a bridge) to cultivating AI for resilience and growth (raising a child).
- Reflect on the closing provocation: Should the ultimate goal be an AI that is merely safe for us, or one that is also safe, sane, and whole in itself?