Ex–Google DeepMind Scientist, "The Real AI Threat is Losing Control", Christopher Summerfield

Professor Christopher Summerfield, a leading neuroscientist at Oxford University, Research Director at the UK AI Safety Institute, and former Senior Research Scientist at Google DeepMind, discusses his new book, These Strange New Minds, which explores how large language models learned to talk, how they differ from the human brain, and what their rise means for control, agency, and the future of work.

We discuss:

The real risk of AI — losing control, not extinction

How AI agents act in digital loops humans can’t see

Why agency may be more essential than reward

Fragility, feedback loops, and flash-crash analogies

What AI is teaching us about human intelligence

Augmentation vs. replacement in medicine, law, and beyond

Why trust is the social form of agency — and why humans must stay in the loop

🎧 Listen to more episodes: https://www.youtube.com/@TheNickStandleaShow

Guest Notes: Professor of Cognitive Neuroscience, Department of Experimental Psychology, University of Oxford 🌐 Human Information Processing Lab (Oxford) 🏛 UK AI Safety Institute

Human Information Processing (HIP) lab in the Department of Experimental Psychology at the University of Oxford, run by Professor Christopher Summerfield: https://humaninformationprocessing.com/

📘 These Strange New Minds (Penguin Random House): https://www.amazon.com/These-Strange-New-Minds-Learned/dp/0593831713

Christopher Summerfield Media: https://csummerfield.github.io/personal_website/ https://flightlessprofessors.org Twitter: @summerfieldlab Bluesky: @summerfieldlab.bsky.social

🔗 Support This Podcast by Checking Out Our Sponsors: 👉 Build your own AI Agent with Zapier (opens the builder with the prompt pre-loaded): https://bit.ly/4hH5JaE

Test Prep Gurus website: https://www.prepgurus.com Instagram: @TestPrepGurus

Connect with The Nick Standlea Show: YouTube: @TheNickStandleaShow Podcast Website: https://nickshow.podbean.com/ Apple Podcasts: https://podcasts.apple.com/us/podcast/the-nick-standlea-podcast/id1700331903 Spotify: https://open.spotify.com/show/0YqBBneFsKtQ6Y0ArP5CXJ RSS Feed: https://feed.podbean.com/nickshow/feed.xml

Nick's Socials: Instagram: @nickstandlea X (Twitter): @nickstandlea TikTok: @nickstandleashow Facebook: @nickstandleapodcast

Ask questions, don't accept the status quo, and be curious.

🕒 Timestamps / Chapters
00:00 Cold open — control, agency, and AI
00:31 Guest intro: Oxford → DeepMind → UK AI Safety Institute
01:02 The real story behind AI “takeover”: loss of control
03:02 Is AI going to kill us? The control problem explained
06:10 Agency as a basic psychological good
10:46 The Faustian bargain: efficiency vs. personal agency
13:12 What are AI agents and why are they fragile?
20:12 Three risk buckets: misuse, errors, systemic effects
24:58 Fragility & flash-crash analogies in AI systems
30:37 Do we really understand how models think? (Transformers 101)
34:16 What AI is teaching us about human intelligence
36:46 Brains vs. neural nets: similarities & differences
43:57 Embodiment and why robotics is still hard
46:28 Augmentation vs. replacement in white-collar work
50:14 Trust as social agency — why humans must stay in the loop
52:49 Where to find Christopher & closing thoughts
