The NVIDIA Full Stack
From CUDA Kernels to Cloud-Native AI Deployment
Buy now for $6.40
Narrated by: Virtual Voice
By: Ajit Singh
This title uses virtual voice narration. Virtual Voice is computer-generated narration for audiobooks.
Philosophy
The core philosophy of this book is to present the NVIDIA platform as a cohesive, end-to-end "full stack." Traditional resources often treat CUDA programming, AI frameworks, and model deployment as separate disciplines. This book dismantles those silos. I believe that a modern AI engineer must understand the entire lifecycle of an application: from the low-level CUDA kernels that execute on the hardware, to the high-level Python frameworks used for training, to the cloud-native tools required for scalable, production-grade deployment. This approach provides a holistic understanding that is essential for building efficient, robust, and maintainable systems in the real world.
Key Features
1. Full-Stack Coverage: The only book you need to understand the journey from CUDA C++ and Python kernels to production deployment with Triton and Docker.
2. Beginner to Advanced: Carefully structured to serve undergraduate students (B.Tech) and postgraduate specialists (M.Tech), as well as professional developers.
3. Globally Relevant: The technologies covered (CUDA, PyTorch, Docker, MLOps) are industry standards, making the curriculum compatible with international university syllabi.
4. Code-Intensive: Rich with working code examples, practical exercises, and a complete, explained capstone project.
5. Focus on Optimization: Dedicated chapters on profiling with Nsight tools and inference optimization with TensorRT teach the critical skill of making AI applications fast and efficient.
Who This Book Is For
1. B.Tech/M.Tech Computer Science Students: An ideal textbook or supplementary resource for courses on Parallel Computing, High-Performance Computing, AI/ML, and Cloud Computing.
2. Aspiring AI/ML Engineers: Provides the essential hands-on skills required for a career in building and deploying AI systems.
3. Data Scientists: For those who want to move beyond notebooks and learn how to accelerate their data pipelines and deploy models at scale using RAPIDS and Triton.
4. Software Developers & Researchers: A practical guide for professionals looking to leverage GPU acceleration for their applications, whether in scientific computing, finance, or any other domain.