Torrent details for "Huang K. LLM Design Patterns. A Practical Guide...Efficient AI S…"
Checked by:
Category:
Language: None
Total Size: 6.5 MB
Info Hash: 0D31F2445C3092F0C6EDB5058C3C76522C8346CA
Added By:
Added: April 14, 2026, 11:59 a.m.
Stats: (Last updated: April 14, 2026, 12:18 p.m.)
| File | Size |
|---|---|
| Huang K. LLM Design Patterns. A Practical Guide...Efficient AI Systems 2025.pdf | 6.5 MB |
| Name | Uploader | Size | S/L | Added |
|---|---|---|---|---|
| - | andryold1 | 12.2 MB | 39/47 | 2025-06-11 |
| - | FreeCourseWeb | 13.3 MB | 32/48 | 2023-09-14 |
| - | rqj93067 | 330.2 MB | 38/31 | 2023-06-01 |
NOTE
SOURCE: Huang K. LLM Design Patterns. A Practical Guide...Efficient AI Systems 2025
-----------------------------------------------------------------------------------
COVER

-----------------------------------------------------------------------------------
MEDIAINFO
Textbook in PDF format

Key benefits:
- Learn comprehensive LLM development, including data prep, training pipelines, and optimization
- Explore advanced prompting techniques, such as chain-of-thought, tree-of-thought, RAG, and AI agents
- Implement evaluation metrics, interpretability, and bias detection for fair, reliable models

Description:
This practical guide for AI professionals enables you to build on the power of design patterns to develop robust, scalable, and efficient large language models (LLMs). Written by a global AI expert and popular author driving standards and innovation in Generative AI, security, and strategy, this book covers the end-to-end lifecycle of LLM development and introduces reusable architectural and engineering solutions to common challenges in data handling, model training, evaluation, and deployment.

You’ll learn to clean, augment, and annotate large-scale datasets, architect modular training pipelines, and optimize models using hyperparameter tuning, pruning, and quantization. The chapters help you explore regularization, checkpointing, fine-tuning, and advanced prompting methods, such as reason-and-act, as well as implement reflection, multi-step reasoning, and tool use for intelligent task completion. The book also highlights Retrieval-Augmented Generation (RAG), graph-based retrieval, interpretability, fairness, and RLHF, culminating in the creation of agentic LLM systems.

By the end of this book, you’ll be equipped with the knowledge and tools to build next-generation LLMs that are adaptable, efficient, safe, and aligned with human values.

*Email sign-up and proof of purchase required

Who is this book for?
This book is essential for AI engineers, architects, data scientists, and software engineers responsible for developing and deploying AI systems powered by large language models. A basic understanding of machine learning concepts and experience in Python programming is a must.
What you will learn:
- Implement efficient data prep techniques, including cleaning and augmentation
- Design scalable training pipelines with tuning, regularization, and checkpointing
- Optimize LLMs via pruning, quantization, and fine-tuning
- Evaluate models with metrics, cross-validation, and interpretability
- Understand fairness and detect bias in outputs
- Develop RLHF strategies to build secure, agentic AI systems