LLMs Won’t Create Superintelligence… Here’s Why


In this episode (#145), the twins talk with Ganesh Krishnan, founder of AIhello and Halzero.ai, about whether LLMs are actually the path to AGI… or whether we’re hitting a ceiling.

We break down one of the biggest debates in AI right now: can today’s models ever reach true intelligence, or is a completely different approach needed?

We dive into why hallucinations happen, why scaling LLMs might not be enough, and why the future of AI could lie in systems that question their own data, learn through observation, and build a model of the world instead of just predicting text.

This episode also explores the difference between hype and reality in AI, how intelligence might actually work, and what it takes to build real products in a space moving this fast.

  • Why LLMs may hit a ceiling

  • The real reason AI hallucinates

  • Why current models blindly trust training data

  • What “world models” are and why they matter

  • Can AI ever question what it learns?

  • The limits of self-driving AI systems

  • Bootstrapping vs hype-driven AI startups

  • Why distribution is now more important than product

If you're building, investing, or just curious about the future of AI, this one will challenge how you think about it.

👉 Let us know in the comments: Are LLMs enough for AGI, or do we need something completely new?

#AI #AGI #LLM #ArtificialIntelligence #Startups #TechPodcast #NFtwins #WorldModels
