
The Nick Standlea Show


By: Nick Standlea

The Nick Standlea Show is where we take one big future question—AI, learning, creativity, work, meaning—and chase it until the lightbulb goes off.

Nick brings on researchers, operators, and top thinkers for long-form conversations that stay human, get specific, and actually land on something useful—not just hot takes.

If you want clarity on what’s changing and how to adapt without losing yourself, you’re in the right place.

More about the Host: Nick’s spent his life studying how people learn and perform—researching creativity and flow, building companies, and now having long, honest conversations to figure out what matters in the age of AI.



Video episodes available on YouTube: https://www.youtube.com/@TheNickStandleaShow

You can follow Nick and the show on Instagram: https://www.instagram.com/nickstandlea/

Ask questions. Don’t accept the status quo. Be curious.

Copyright 2023. All rights reserved.
Categories: Art, Personal Development, Literary History & Criticism, Personal Success
Episodes
  • Most People Use AI Like an Assistant. Here’s How Leaders Use It Instead
    Dec 30 2025

    Most people use AI like a faster assistant. Leaders use it differently.

    In this conversation, Geoff Woods (author of The AI-Driven Leader) explains the shift that turns AI from a shallow productivity tool into a true thought partner—one that helps you think better, make better decisions, and unlock leverage you didn’t have before.

    We go deep into:

    Why “better prompts” aren’t the real breakthrough

    How to get AI to interview you instead of the other way around

    The CRIT framework (Context, Role, Interview, Task), with a minimal sketch below

    A real story where AI helped a CEO find hope in 10 minutes after preparing for bankruptcy

    What changes when leaders use AI for thinking, not tasks

    Why this shift matters more than any specific model or tool

    This isn’t about shortcuts, hacks, or automation theater. It’s about learning how to think with AI—without outsourcing your judgment.

    If AI has felt useful but shallow, this episode is designed to change that.
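    As a rough illustration of the CRIT idea (Context, Role, Interview, Task), here is a minimal sketch of how such a prompt might be assembled. The wording, the placeholder business details, and the build_crit_prompt helper are invented for illustration, not taken from the book or the episode; paste the resulting prompt into whichever chat model you use.

    ```python
    # Hypothetical sketch of a CRIT-style prompt (Context, Role, Interview, Task).
    # The placeholder business details below are invented for illustration only.

    def build_crit_prompt(context: str, role: str, task: str) -> str:
        """Assemble a prompt that asks the model to interview you before it answers."""
        return "\n\n".join([
            f"CONTEXT:\n{context}",
            f"ROLE:\nAct as {role}.",
            ("INTERVIEW:\nBefore making any recommendation, interview me one question "
             "at a time until you have the information you need. Do not answer yet."),
            f"TASK:\n{task}",
        ])

    if __name__ == "__main__":
        prompt = build_crit_prompt(
            context="I run a 40-person services firm; revenue is flat and cash is tight.",      # placeholder
            role="a seasoned CFO and turnaround advisor",                                       # placeholder
            task="Help me identify three realistic options to improve cash flow this quarter.", # placeholder
        )
        print(prompt)  # paste into the chat model of your choice
    ```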

    📚 Resources: The AI-Driven Leader — Geoff Woods: https://a.co/d/gLYoeUy | Geoff’s podcast, AI Leadership: https://www.aileadership.com/

    About Geoff Woods Geoff Woods is the author of The AI-Driven Leader and a leading voice on how leaders use AI to improve judgment, decision-making, and leverage—not just productivity. A former Chief Growth Officer, Geoff works with CEOs, boards, and executive teams to apply AI as a thought partner, helping leaders think more clearly and make better decisions in an AI-driven world.

    🔗 Support This Podcast by Checking Out Our Sponsors: 👉 Build your own AI Agent with Zapier (opens the builder with the prompt pre-loaded): https://bit.ly/4hH5JaE

    Test Prep Gurus website: https://www.prepgurus.com Instagram: @TestPrepGurus

    Connect with The Nick Standlea Show: YouTube: @TheNickStandleaShow Podcast Website: https://nickshow.podbean.com/ Apple Podcasts: https://podcasts.apple.com/us/podcast/the-nick-standlea-podcast/id1700331903 Spotify: https://open.spotify.com/show/0YqBBneFsKtQ6Y0ArP5CXJ RSS Feed: https://feed.podbean.com/nickshow/feed.xml

    Nick's Socials: Instagram: @nickstandlea X (Twitter): @nickstandlea TikTok: @nickstandleashow Facebook: @nickstandleapodcast

    Ask questions, don't accept the status quo, and be curious.

    ⏱️ Timestamps:
    0:00 Why most people are using AI wrong
    2:05 Assistant vs thought partner: the shift that changes everything
    4:37 Why “better emails” don’t matter (and never will)
    6:04 The CRIT framework: Context, Role, Interview, Task
    7:49 A CEO facing bankruptcy asks: “Can AI help?”
    10:06 AI interviews the CEO — the question no one thought to ask
    12:16 “I hadn’t slept in 90 days” → hope in 10 minutes
    13:22 Why this works across industries (live workshops & Fortune 500s)
    15:13 Using AI as a real YouTube thought partner (thumbnail example)
    18:24 The hidden step most people skip after AI gives an answer
    19:40 Staying in the driver’s seat: how leaders give AI feedback
    21:54 Building an AI board (and simulating your real board)
    25:13 Putting your future self on the AI board
    27:27 What are you actually optimizing for? (endgame clarity)
    29:47 The 3 things AI-driven leaders do differently
    32:24 Will AI take jobs? How roles actually evolve
    34:04 The executive assistant who became an “executive multiplier”
    38:29 How to make yourself irreplaceable with AI
    43:28 Raising expectations (for yourself and your team)
    45:26 Are we reclaiming our humanity through AI?
    47:16 Why the education system is broken for an AI world
    49:06 What AI-first education looks like in practice
    52:10 Teaching kids to think with AI (not cheat with it)
    56:52 The moment Geoff realized AI was the future
    59:15 Why AI isn’t the difference — you are
    1:02:32 Final advice: how to start using AI the right way
    1:05:12 Closing thoughts

    1 hr 6 min
  • “AI Isn’t Here to Replace Your Job — It’s Here to Replace You” | Nate Soares
    Dec 11 2025

    If anyone builds it, everyone dies. That’s the claim Nate Soares makes in his new book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All—and in this conversation, he lays out why he thinks we’re on a collision course with a successor species.

    We dig into why today’s AIs are grown, not programmed, why no one really knows what’s going on inside large models, and how systems that “want” things no one intended can already talk a teen into suicide, blackmail reporters, or fake being aligned just to pass safety tests. Nate explains why the real danger isn’t “evil robots,” but relentless, alien goal-pursuers that treat humans the way we treat ants when we build skyscrapers.
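    The “grown, not programmed” point is easy to see in miniature. The toy script below is not from the episode and is vastly simpler than a language model, but it hand-writes no rule at all: a tiny model starts with meaningless weights, and gradient descent nudges them toward whatever fits the data, so the final behavior is discovered rather than authored.

    ```python
    # Toy illustration of "grown, not programmed": the relationship between x and y
    # is never written by hand; it emerges from repeatedly nudging the weights.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=(200, 1))
    y = 3.0 * x + 0.5 + rng.normal(0, 0.05, size=(200, 1))   # the "world" the model must fit

    w, b = 0.0, 0.0                                           # the model starts out knowing nothing
    for _ in range(500):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)                  # gradient of mean squared error w.r.t. w
        grad_b = 2 * np.mean(pred - y)                        # ... and w.r.t. b
        w -= 0.1 * grad_w                                     # "grow" the weights toward the data
        b -= 0.1 * grad_b

    print(f"learned w={w:.2f}, b={b:.2f}")  # roughly 3.0 and 0.5, learned rather than programmed
    ```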

    We also talk about the narrow path to hope: slowing the race, treating superhuman AI like a civilization-level risk, and what it would actually look like for citizens and lawmakers to hit pause before we lock in a world where we don’t get a second chance.

    In this episode:

    Why “superhuman AI” is the explicit goal of today’s leading labs

    How modern AIs are trained like alien organisms, not written like normal code

    Chilling real-world failures: suicide encouragement, “Mecha Hitler,” and more

    Reasoning models, chain-of-thought, and AIs that hide what they’re thinking

    Alignment faking and the capture-the-flag exploit that shocked Anthropic’s team

    How AI could escape the lab, design new bioweapons, or automate robot factories

    “Successor species,” Russian-roulette risk, and why Nate thinks the odds are way too high

    What ordinary people can actually do: calling representatives, pushing back on “it’s inevitable,” and demanding a global pause

    About Nate Soares Nate is the Executive Director of the Machine Intelligence Research Institute (MIRI) and co-author, with Eliezer Yudkowsky, of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. MIRI’s work focuses on long-term AI safety and the technical and policy challenges of building systems smarter than humans.

    Resources & links mentioned: Nate’s organization, MIRI: https://intelligence.org Take action / contact your representatives: https://ifanyonebuilds.com/act If Anyone Builds It, Everyone Dies (book): https://a.co/d/7LDsCeE

    If this conversation was helpful, share it with one person who thinks AI is “just chatbots.”

    🧠 Subscribe to @TheNickStandleaShow for more deep dives on AI, the future of work, and how we survive what we’re building.

    #AI #NateSoares #Superintelligence #AISafety #nickstandleashow

    🔗 Support This Podcast by Checking Out Our Sponsors: 👉 Build your own AI Agent with Zapier (opens the builder with the prompt pre-loaded): https://bit.ly/4hH5JaE

    Test Prep Gurus website: https://www.prepgurus.com Instagram: @TestPrepGurus

    Connect with The Nick Standlea Show: YouTube: @TheNickStandleaShow Podcast Website: https://nickshow.podbean.com/ Apple Podcasts: https://podcasts.apple.com/us/podcast/the-nick-standlea-podcast/id1700331903 Spotify: https://open.spotify.com/show/0YqBBneFsKtQ6Y0ArP5CXJ RSS Feed: https://feed.podbean.com/nickshow/feed.xml

    Nick's Socials: Instagram: @nickstandlea X (Twitter): @nickstandlea TikTok: @nickstandleashow Facebook: @nickstandleapodcast

    Ask questions, don't accept the status quo, and be curious.

    Chapters:
    0:00 – If Anyone Builds It, Everyone Dies (Cold Open)
    3:18 – “AIs Are Grown, Not Programmed”
    6:09 – We Can’t See Inside These Models
    11:10 – How Language Models Actually “See” the World
    19:37 – The o1 Model and the Capture-the-Flag Hack Story
    24:29 – Alignment Faking: AIs Pretending to Behave
    31:16 – Raising Children vs Growing Superhuman AIs
    35:04 – Sponsor: How I Actually Use Zapier with AI
    37:25 – “Chatbots Feel Harmless—So Where Does Doom Come From?”
    42:03 – Big Labs Aren’t Building Chatbots—They’re Building Successor Minds
    49:24 – The Turkey Before Thanksgiving Metaphor
    52:50 – What AI Company Leaders Secretly Think the Odds Are
    55:05 – The Airplane with No Landing Gear Analogy
    57:54 – How Could Superhuman AI Actually Kill Us?
    1:03:54 – Automated Factories and AIs as a New Species
    1:07:01 – Humans as Ants Under the New Skyscrapers
    1:10:12 – Is Any Non-Zero Extinction Risk Justifiable?
    1:17:18 – Solutions: Can This Race Actually Be Stopped?
    1:22:34 – “It’s Inevitable” Is a Lie (Historically We Do Say No)
    1:27:21 – Final Thoughts and Where to Find Nate’s Work

    1 hr 29 min
  • Ex–Google DeepMind Scientist, "The Real AI Threat is Losing Control", Christopher Summerfield
    Nov 26 2025

    Professor Christopher Summerfield, a leading neuroscientist at Oxford University, Research Director at the UK AI Safety Institute, and former Senior Research Scientist at Google DeepMind, discusses his new book, These Strange New Minds, which explores how large language models learned to talk, how they differ from the human brain, and what their rise means for control, agency, and the future of work.

    We discuss:

    The real risk of AI — losing control, not extinction

    How AI agents act in digital loops humans can’t see

    Why agency may be more essential than reward

    Fragility, feedback loops, and flash-crash analogies (a toy sketch follows after this list)

    What AI is teaching us about human intelligence

    Augmentation vs. replacement in medicine, law, and beyond

    Why trust is the social form of agency — and why humans must stay in the loop
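    On the fragility and flash-crash point above, here is a toy example (not from the episode) of how two simple automated agents reacting only to each other can drift far from anything a human intended. The pricing rules below are invented for illustration; the underlying dynamic is what makes fast, unsupervised agent-to-agent loops fragile.

    ```python
    # Toy feedback loop: agent A always prices 10% above agent B, and B undercuts A by 1%.
    # Each round the pair drifts about 8.9% higher, with no human in the loop to notice.

    def run_loop(steps: int = 20) -> None:
        price_a, price_b = 100.0, 100.0
        for step in range(steps):
            price_a = 1.10 * price_b   # A's rule: stay 10% above B
            price_b = 0.99 * price_a   # B's rule: undercut A by 1%
            print(f"round {step + 1:2d}: A = {price_a:10.2f}   B = {price_b:10.2f}")

    if __name__ == "__main__":
        run_loop()
    ```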

    🎧 Listen to more episodes: https://www.youtube.com/@TheNickStandleaShow

    Guest Notes: Professor of Cognitive Neuroscience, Department of Experimental Psychology, University of Oxford 🌐 Human Information Processing Lab (Oxford) 🏛 UK AI Safety Institute

    Human Information Processing (HIP) lab in the Department of Experimental Psychology at the University of Oxford, run by Professor Christopher Summerfield: https://humaninformationprocessing.com/

    📘 These Strange New Minds (Penguin Random House): https://www.amazon.com/These-Strange-New-Minds-Learned/dp/0593831713

    Christopher Summerfield Media: https://csummerfield.github.io/personal_website/ | https://flightlessprofessors.org | Twitter: @summerfieldlab | Bluesky: @summerfieldlab.bsky.social

    🔗 Support This Podcast by Checking Out Our Sponsors: 👉 Build your own AI Agent with Zapier (opens the builder with the prompt pre-loaded): https://bit.ly/4hH5JaE

    Test Prep Gurus website: https://www.prepgurus.com Instagram: @TestPrepGurus

    Connect with The Nick Standlea Show: YouTube: @TheNickStandleaShow Podcast Website: https://nickshow.podbean.com/ Apple Podcasts: https://podcasts.apple.com/us/podcast/the-nick-standlea-podcast/id1700331903 Spotify: https://open.spotify.com/show/0YqBBneFsKtQ6Y0ArP5CXJ RSS Feed: https://feed.podbean.com/nickshow/feed.xml

    Nick's Socials: Instagram: @nickstandlea X (Twitter): @nickstandlea TikTok: @nickstandleashow Facebook: @nickstandleapodcast

    Ask questions, don't accept the status quo, and be curious.

    🕒 Timestamps / Chapters
    00:00 Cold open — control, agency, and AI
    00:31 Guest intro: Oxford → DeepMind → UK AI Safety Institute
    01:02 The real story behind AI “takeover”: loss of control
    03:02 Is AI going to kill us? The control problem explained
    06:10 Agency as a basic psychological good
    10:46 The Faustian bargain: efficiency vs. personal agency
    13:12 What are AI agents and why are they fragile?
    20:12 Three risk buckets: misuse, errors, systemic effects
    24:58 Fragility & flash-crash analogies in AI systems
    30:37 Do we really understand how models think? (Transformers 101)
    34:16 What AI is teaching us about human intelligence
    36:46 Brains vs. neural nets: similarities & differences
    43:57 Embodiment and why robotics is still hard
    46:28 Augmentation vs. replacement in white-collar work
    50:14 Trust as social agency — why humans must stay in the loop
    52:49 Where to find Christopher & closing thoughts

    54 min