"It Sounds Like Something From Marvel" — Building an Antivirus for AI... With AI | Daniel Hulme (Founder, Conscium)
So why is one of the world's leading AI researchers teaching AI to understand pain and suffering? Well, Daniel Hulme says that if we build an empathetic AI, perhaps even a conscious one, then we'll be safer. His hypothesis is that a "zombie" AI would eat our brains, while an empathetic one would stay aligned with us. So he's building this "antivirus" (with AI, of course), and he's very aware that this sounds crazy, or like "something from Marvel."
That's just some of what broke my brain in this conversation with one of the world's top AI researchers and founders. And Daniel has serious credibility, so I'm not dismissing the threat he sees — you know, the one where we all get turned into paperclips.
Daniel sold his company Satalia to WPP, where he now serves as Chief AI Officer. He’s just founded Conscium, which verifies that AI agents are safe and can do what they promise — and is also researching consciousness and pain. Some of the world’s leading AI thinkers are on the advisory board and Daniel has been in this space for decades: we’ll talk about why, for his PhD, he studied bumblebee brains (yes, really — and it's deeply relevant).
We get into:
- His unified theory of consciousness — his "color wheel" model — and why he thinks consciousness only exists in motion
- Why he believes large language models are ultimately a dead end — and what neuromorphic computing could replace them with
- What bumblebee brains can teach us about building AI that's up to a thousand times more energy efficient
- Why he calls today's AI agents "intoxicated graduates" — and says companies should spend 80% of their time testing them
- The concept of "mind crime" — the idea that we could build conscious AI and accidentally put it through horrendous suffering without realizing it
- His vision of a "protopia" — where AI makes food, healthcare, education, and energy so abundant that people are freed from economic constraints to pursue what actually matters
We future around and find out a lot in this one!
---
Chapters
- (01:39) - "Would a conscious superintelligence be safer than a zombie one?"
- (03:37) - The paperclip problem is not hypothetical
- (05:06) - Conscium's mission — AI safety for humans and for AIs themselves
- (08:50) - "I think I've got my head around consciousness"
- (11:57) - The color wheel model — why consciousness only exists in motion
- (13:58) - Teaching AI morals through evolution, not guardrails
- (17:23) - "Hey Claude, are you conscious?" — how do you test for that?
- (21:07) - What bumblebee brains can teach us about building better AI
- (24:14) - "I think we are completely scaling wrong"
- (29:43) - Why Daniel calls AI agents "intoxicated graduates"
- (32:48) - Companies should spend 80% of their time testing agents
- (38:19) - "What would you do if you were economically free?"
---
Links
- Conscium
- Daniel Hulme on Wikipedia
- Daniel on LinkedIn
---