AI Hardware, Software, and Architectures Powering Modern Artificial Intelligence

From GPUs and ASICs to CUDA, Accelerators, Compilers, and Runtimes

Audiobook by Practicing Engineers Network

This title uses virtual voice narration (computer-generated narration for audiobooks).


Modern artificial intelligence is not powered by models alone—it is driven by a carefully engineered stack of hardware, software, and system architectures working together at scale.

This book provides a clear, practical, and system-level view of how modern AI actually runs, from silicon to software. It explains how GPUs, ASICs, and emerging accelerators execute neural networks, how CUDA and alternative runtimes expose hardware capabilities, and how compilers, communication libraries, and inference systems transform models into high-performance AI workloads.

Written for engineers, architects, and technically curious professionals, this book goes beyond surface-level explanations and focuses on how real AI systems are designed, optimized, and deployed in production environments.

You will learn:

  • How GPUs, TPUs, NPUs, and AI ASICs are architected for training and inference

  • Why CUDA became central to AI—and how alternatives like ROCm, oneAPI, and Vulkan compare

  • How compilers, graph lowering, and kernel optimization affect performance

  • How distributed training systems scale across multiple accelerators

  • How inference systems balance latency, throughput, and cost

  • Where memory, bandwidth, and communication become bottlenecks

  • How modern AI hardware trends are shaping future system designs

The book emphasizes systems thinking—connecting hardware capabilities to software abstractions and real-world performance tradeoffs. Diagrams, structured explanations, and end-to-end workflows make complex concepts accessible without oversimplifying them.

Whether you are designing AI infrastructure, optimizing performance, evaluating hardware platforms, or simply seeking a deeper understanding of how AI works under the hood, this book serves as both a learning guide and long-term reference.
