Title: Large Language Models for Developers: A Prompt-based Exploration
Author: Oswald Campesato
Publisher: Mercury Learning and Information
Year: 2025
Pages: 1047
Language: English
Format: PDF (true), EPUB
Size: 12.5 MB
This book offers a thorough exploration of Large Language Models (LLMs), guiding developers through the evolving landscape of generative AI and equipping them with the skills to use LLMs in practical applications. Designed for developers with a foundational understanding of machine learning, it covers essential topics such as prompt engineering techniques, fine-tuning methods, attention mechanisms, and quantization strategies for optimizing and deploying LLMs.
Beginning with an introduction to generative AI, the book explains the distinctions between conversational AI and generative models such as GPT-4 and BERT, laying the groundwork for prompt engineering (Chapters 2 and 3). The LLMs used for generating completions for prompts include Llama 3.1 405B, Llama 3, GPT-4o, Claude 3, Google Gemini, and Meta AI. Readers learn the art of creating effective prompts, including advanced methods such as Chain of Thought (CoT) and Tree of Thought prompting. As the book progresses, it details fine-tuning techniques (Chapters 5 and 6), demonstrating how to customize LLMs for specific tasks through methods such as LoRA and QLoRA, and includes Python code samples for hands-on learning. Readers are also introduced to the transformer architecture's attention mechanism (Chapter 8), with step-by-step guidance on implementing self-attention layers. For developers aiming to optimize LLM performance, the book concludes with quantization techniques (Chapters 9 and 10), exploring strategies such as dynamic quantization and probabilistic quantization, which help reduce model size without sacrificing performance.
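To give a flavor of the kind of material the attention chapter works toward, the following is a minimal sketch (not taken from the book) of scaled dot-product self-attention in PyTorch; the function name, tensor shapes, and random example data are illustrative assumptions.

import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # x: (batch, seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projection matrices
    q = x @ w_q                                    # queries
    k = x @ w_k                                    # keys
    v = x @ w_v                                    # values
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # scaled dot products: QK^T / sqrt(d_k)
    weights = F.softmax(scores, dim=-1)            # attention weights over the sequence
    return weights @ v                             # weighted sum of the values

x = torch.randn(1, 4, 8)                           # 1 sequence of 4 tokens, d_model = 8
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)             # shape: (1, 4, 8)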
Although this book is introductory in nature, some knowledge of Python 3.x will certainly be helpful for the code samples. Knowledge of other programming languages (such as Java) can also be helpful because of the exposure to programming concepts and constructs. The less technical knowledge you have, the more diligence will be required to understand the various topics that are covered. If you want to be sure that you can grasp the material in this book, glance through some of the code samples to get an idea of how much is familiar to you and how much is new.
This book contains basic code samples written in Python, and their primary purpose is to show you how to access the functionality of LLMs. Moreover, clarity has a higher priority than more compact code that is harder to understand (and possibly more prone to bugs). If you decide to use any of the code in this book, you ought to subject that code to the same rigorous analysis as the other parts of your code base.
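As a rough idea of what such code samples can look like, here is a minimal sketch (not from the book) that sends a prompt to an LLM through the openai Python package; the model name and the prompt text are placeholder assumptions, and an API key is expected in the OPENAI_API_KEY environment variable.

from openai import OpenAI   # assumes openai >= 1.0 is installed

client = OpenAI()           # reads the API key from OPENAI_API_KEY

response = client.chat.completions.create(
    model="gpt-4o",         # placeholder model name
    messages=[{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}],
)

print(response.choices[0].message.content)   # the generated completion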
Features:
- Covers the full lifecycle of working with LLMs, from model selection to deployment
- Includes practical Python code samples for implementing prompt engineering, fine-tuning, and quantization
- Teaches readers to enhance model efficiency with advanced optimization techniques
- Includes companion files with code and images, available from the publisher
The Target Audience:
This book is intended primarily for people who have a basic knowledge of machine learning, as well as software developers who are interested in working with LLMs. Specifically, this book is for readers who are accustomed to searching online for more detailed information about technical topics. If you are a beginner, there are other books that may be more suitable for you, and you can find them by performing an online search. This book is also intended to reach an international audience of readers with highly diverse backgrounds in various age groups. In addition, this book uses standard English rather than colloquial expressions that might be confusing to those readers. This book provides a comfortable and meaningful learning experience for the intended readers.
Contents:
Preface
About the Contributor
Chapter 1: The Generative AI Landscape
Chapter 2: Prompt Engineering (1)
Chapter 3: Prompt Engineering (2)
Chapter 4: Well-Known LLMs and APIs
Chapter 5: Fine-Tuning LLMs (1)
Chapter 6: LLMs and Fine-Tuning (2)
Chapter 7: What is Tokenization?
Chapter 8: Attention Mechanism
Chapter 9: LLMs and Quantization (1)
Chapter 10: LLMs and Quantization (2)
Index
Download Large Language Models for Developers: A Prompt-based Exploration