The inner workings of Large Language Models

How neural networks learn language

By: Roger Gullhaug
Narrated by: Virtual Voice

This title uses virtual voice narration

Virtual voice is computer-generated narration for audiobooks.

Understand ChatGPT, Claude, and Copilot beyond the buzzwords

Large language models are transforming software development, productivity, and how we interact with computers.

But most people only know how to use them, not how they actually work.

If you have ever wondered:

  • How does ChatGPT know what to say next?

  • What are tokens and embeddings really doing?

  • Why does attention matter so much?

  • What happens inside a transformer layer?

This book will make it click.


A practical, visual, developer-friendly explanation

This is not a dense academic textbook. It is a clear, step-by-step walkthrough that explains complex ideas in plain language, supported by practical illustrations and simple examples.

You will gain a strong intuitive understanding of how modern large language models generate text, learn from data, and represent meaning using mathematics.


Inside you will learn:
  • Tokenization and why you are billed per token

  • Embeddings and vector representations: how meaning becomes math

  • Self-attention, causal masking, and multi-head attention

  • What actually happens inside a transformer block

  • Logits, softmax, decoding, sampling, and temperature

  • How models are trained, fine-tuned, and optimized

  • Why hallucinations happen and what models really understand

  • Common myths and misconceptions about LLMs

  • How the author used LLMs while researching and writing this book


Who this book is for
  • Software developers and engineers

  • Tech professionals and product builders

  • Curious learners who want understanding without heavy jargon

  • Anyone who uses AI tools and wants to know what is happening under the hood


Move beyond using AI and start understanding it

You do not need a math background. You do not need to read research papers.

You just need curiosity.

This book gives you a clear, practical foundation for understanding how large language models work, written to be approachable, visual, and genuinely enjoyable to read.
