Torrent details for "Cautaerts N. GPU-Accelerated Computing with Python 3 and CUDA..."
Checked by:
Category:
Language: None
Total Size: 114.9 MB
Info Hash: E4C659627C0C158B905C550A7E82B954259C70AE
Added By: andryold1
Added: April 16, 2026, 10:52 a.m.
Stats: (last updated April 16, 2026, 10:52 a.m.)
| File | Size |
|---|---|
| Cautaerts N. GPU-Accelerated Computing with Python 3 and CUDA...2026.pdf | 114.9 MB |
Uploaded by andryold1 | Size: 114.9 MB | Health: 11 seeders / 14 leechers | Added: 2026-04-16
NOTE
SOURCE: Cautaerts N. GPU-Accelerated Computing with Python 3 and CUDA...2026
-----------------------------------------------------------------------------------
MEDIAINFO
Textbook in PDF format.

Writing high-performance Python code doesn’t have to mean switching to C++. This book shows you how to accelerate Python applications using NVIDIA’s CUDA platform and a modern ecosystem of Python tools and libraries. Aimed at researchers, engineers, and data scientists, it offers a practical yet deep understanding of GPU programming and how to fully exploit modern GPU hardware.

You’ll begin with the fundamentals of CUDA programming in Python using Numba-CUDA, learning how GPUs work and how to write, execute, and debug custom GPU kernels. Building on this foundation, the book explores memory access optimization, asynchronous execution with CUDA streams, and multi-GPU scaling using Dask-CUDA. Performance analysis and tuning are emphasized throughout, using NVIDIA Nsight profilers.

You’ll also learn to use high-level GPU libraries such as JAX, CuPy, and RAPIDS to accelerate numerical Python workflows with minimal code changes. These techniques are applied to real-world examples, including PDE solvers, image processing, physical simulations, and transformer models.

Written by experienced GPU practitioners, this hands-on guide emphasizes reproducible workflows using Python 3.10+, CUDA 12.3+, and tools like the Pixi package manager. By the end, you’ll have future-ready skills for building scalable GPU applications in Python.
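The blurb's first topic, writing custom GPU kernels with Numba-CUDA, comes down to mapping a global thread index onto array elements, often via a grid-stride loop so the kernel handles arrays larger than the thread grid. A rough, CPU-only sketch of that indexing pattern (the function name and loop structure here are illustrative, not taken from the book; in a real Numba-CUDA kernel the loop body runs once per GPU thread):

```python
# CPU-only sketch of the CUDA grid-stride indexing pattern.
# The nested loops stand in for the block/thread grid so the logic
# can be followed (and run) without a GPU.

def saxpy_gridstride(a, x, y, out, blocks, threads_per_block):
    """Compute out[i] = a * x[i] + y[i] using a grid-stride loop."""
    n = len(x)
    stride = blocks * threads_per_block  # total "threads" in the grid
    for block in range(blocks):
        for thread in range(threads_per_block):
            i = block * threads_per_block + thread  # global thread index
            # Grid-stride loop: each thread handles every stride-th element,
            # so the kernel stays correct when n exceeds the thread count.
            while i < n:
                out[i] = a * x[i] + y[i]
                i += stride

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0, 20.0, 30.0, 40.0, 50.0]
out = [0.0] * len(x)
saxpy_gridstride(2.0, x, y, out, blocks=2, threads_per_block=2)
print(out)  # [12.0, 24.0, 36.0, 48.0, 60.0]
```

On a GPU the same body would be decorated as a kernel and launched over a real grid; the indexing arithmetic is what carries over unchanged.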