Title: Introduction to Foundation Models
Author: Pin-Yu Chen, Sijia Liu
Publisher: Springer
Year: 2025
Pages: 307
Language: English
Format: pdf (true), epub
Size: 42.8 MB

This book offers an extensive exploration of foundation models, guiding readers through the essential concepts and advanced topics that define this rapidly evolving research area. Designed for those seeking to deepen their understanding and contribute to the development of safer and more trustworthy AI technologies, the book is divided into three parts covering the fundamentals, advanced topics in foundation models, and safety and trust in foundation models.

Foundation model is a technical term coined by Bommasani et al. to highlight a significant paradigm shift in Machine Learning. Broadly speaking, foundation models are high-capacity neural networks (e.g., neural networks with billions of trainable parameters) trained on large-scale data (e.g., text data scraped from across the Internet). Once a foundation model is trained, it can be used to solve various downstream Machine Learning tasks. While the training and tuning of foundation models are costly in time and resources, this “one-for-all” methodology deviates from the conventional “one-for-one” principle that trains one specific model for one task. For example, convolutional neural networks (CNNs) are often used in vision tasks such as image recognition or object detection, whereas long short-term memory (LSTM) models are often used in natural language processing tasks such as sentiment classification or summarization. Foundation models change the landscape of Machine Learning research and technology by sparing the need to train task-specific models, thereby providing a unified foundation for different tasks.
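The "one-for-all" idea above can be made concrete with a short sketch: instead of training a CNN for vision and an LSTM for sentiment, a single pretrained language model is simply given different tasks through its prompt. The snippet below is only a minimal illustration, assuming the Hugging Face transformers library is installed; "gpt2" is used purely as a small, freely available stand-in for a foundation model, and the two prompts are invented examples.

# A minimal sketch of the "one-for-all" paradigm: one pretrained language model,
# two different downstream tasks, no task-specific training.
# Assumption: the Hugging Face `transformers` library (with a backend such as
# PyTorch) is installed; "gpt2" is only a small stand-in for a foundation model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Task 1: sentiment classification, phrased as a text-completion prompt.
sentiment_prompt = (
    "Review: The battery dies within an hour.\n"
    "Sentiment (positive or negative):"
)

# Task 2: summarization, phrased as another prompt to the same model.
summary_prompt = (
    "Text: Foundation models are large neural networks trained on web-scale "
    "data and adapted to many downstream tasks.\n"
    "Summary:"
)

for prompt in (sentiment_prompt, summary_prompt):
    result = generator(prompt, max_new_tokens=20, do_sample=False)
    print(result[0]["generated_text"])

The point is not the quality of gpt2's completions but the workflow: the same pretrained weights serve both tasks, and the task is specified entirely in the prompt rather than in the model architecture.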

Part I introduces the core principles of foundation models and Generative AI, presents the technical background of neural networks, delves into the learning and generalization of transformers, and finishes with the intricacies of transformers and in-context learning.

Part II introduces automated visual prompting techniques, prompting LLMs with privacy, memory-efficient fine-tuning methods, and shows how LLMs can be reprogrammed for time-series Machine Learning tasks. It explores how LLMs can be reused for speech tasks, how synthetic datasets can be used to benchmark foundation models, and elucidates machine unlearning for foundation models.

Part III provides a comprehensive evaluation of the trustworthiness of LLMs, introduces jailbreak attacks and defenses for LLMs, presents safety risks when fine-tuning LLMs, introduces watermarking techniques for LLMs, presents robust detection of AI-generated text, elucidates backdoor risks in diffusion models, and presents red-teaming methods for diffusion models.

Mathematical notations are clearly defined and explained throughout, making this book an invaluable resource for both newcomers and seasoned researchers in the field.

Download Introduction to Foundation Models














Posted by: Ingvar16, 2-07-2025, 06:55
 






