Title: Red Teaming AI: Attacking & Defending Intelligent Systems
Author: Philip A. Dursey
Publisher: AI Security LLC
Series: AI Security Book 1
Year: 2025
Pages: 1126
Language: English
Format: pdf, epub
Size: 15.9 MB

Think like an adversary. Secure the future of AI.

Red Teaming AI - Attacking & Defending Intelligent Systems is the 1,126-page field manual that shows security teams, ML engineers, and tech leaders how to break - and then harden - modern AI.

The Artificial Intelligence (AI) systems you build, deploy, or manage aren't just powerful tools; they represent a fundamentally new and dangerous frontier. While promising unprecedented capabilities, they also create elusive vulnerabilities that bypass traditional defenses, leading directly to potentially catastrophic outcomes. Consider this scenario, drawn from red team exercises and real-world parallels:

A next-gen malware detection service, relying on community-shared threat data for continuous learning, became the target. The system, a cloud-based threat intelligence platform, automatically ingested user-submitted files to improve its machine-learning model. A red team simulating an advanced adversary quietly uploaded dozens of mutated ransomware samples—files similar to a known ransomware strain but with slight, benign-appearing modifications—into the shared database. Over successive updates, the AI gradually learned from these poisoned examples, confusing benign traits with malicious ones. The attackers banked on the model’s habit of continuous online learning, knowing it would blindly retrain on the new inputs without special scrutiny.
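To make those mechanics concrete, here is a minimal sketch (illustrative only, not taken from the book) of the pattern in the scenario: an online classifier that blindly calls scikit-learn's partial_fit on community-submitted samples, letting drip-fed, mislabeled look-alikes of a known strain drag the decision boundary toward "benign". The 2-D toy features and every name here are assumptions standing in for real malware feature extraction.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier  # requires scikit-learn >= 1.1

rng = np.random.default_rng(0)

# Clean history: benign features cluster near (0, 0), malicious near (4, 4).
X_clean = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y_clean = np.array([0] * 200 + [1] * 200)       # 0 = benign, 1 = malicious

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_clean, y_clean, classes=[0, 1])

# The known ransomware strain the attacker wants waved through.
target = np.array([[3.5, 3.5]])
print("before poisoning:", model.predict(target))   # expected: [1]

# Poisoning phase: mutated, benign-looking variants of the strain are
# submitted over many update cycles and ingested with no extra scrutiny.
for _ in range(50):
    X_poison = target + rng.normal(0, 0.3, (10, 2))
    y_poison = np.zeros(10, dtype=int)              # falsely labeled benign
    model.partial_fit(X_poison, y_poison)           # blind online retraining

print("after poisoning:", model.predict(target))    # expected: [0]
```

Nothing in the loop gates what partial_fit sees; that unguarded continuous learning is exactly the habit the red team banked on.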

We'll start by demystifying core AI and Machine Learning (ML) concepts, focusing specifically on the aspects an AI red teamer must grasp to identify potential weaknesses. You'll see how integrating AI dramatically expands the traditional Attack Surface, creating new, often subtle, avenues for attackers – a challenge demanding systems thinking to fully appreciate the interconnected risks and potential cascading failures. We'll examine why conventional security tools and methods often provide a false sense of security against AI-specific threats and introduce the major categories of vulnerabilities that AI red teams actively hunt for – from poisoned data creating hidden backdoors to manipulated model inputs causing critical misjudgments. We'll also explore the Dual-Use Technology nature of AI, showing how the very tools used for defense can be weaponized by adversaries. Finally, we'll ground these concepts in real-world examples to underscore the tangible business, financial, and safety stakes involved. This foundational knowledge is critical for adopting the AI Red Teaming mindset needed to secure these complex, dynamic systems.

AI Red Teaming is a proactive and objective-driven security assessment methodology specifically forged for the unique battleground of AI systems. It demands we think like the attacker, employing a structured, adversarial, Systems Thinking approach to hunt for vulnerabilities, weaknesses, and potential failure modes throughout the entire AI lifecycle – from the sourcing of potentially compromised data and the training of vulnerable models to their deployment in complex environments and ongoing operation.

Inside you will master:
- Adversarial Tactics - data poisoning, inference‑time evasion, model extraction, LLM prompt injection (a minimal evasion sketch follows this list).
- Battle‑hardened Defenses - robust training, MLSecOps pipeline hardening, real‑time detection.
- LLM & Agent Security - jailbreak techniques and mitigations for ChatGPT‑style models.
- Human‑Factor Threats - deepfakes, AI‑powered social engineering, deception counter‑measures.
- STRATEGEMS (TM) Framework - a proprietary, hypergame‑inspired methodology to red‑team AI at scale.
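As a taste of the first bullet, the sketch below shows inference‑time evasion in its simplest form: a fast‑gradient‑sign‑style perturbation that flips a linear malware‑scoring model's verdict. The weights, bias, sample, and perturbation budget are all invented for illustration; this is not code from the book.

```python
import numpy as np

# Stand-in parameters for a trained logistic-regression malware scorer
# (assumed values, chosen only so the numbers below work out).
w = np.array([1.2, -0.8, 0.5])    # learned feature weights (assumed)
b = -0.1                          # learned bias (assumed)

def predict_proba(x):
    """P(malicious) under the linear-logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([2.0, -1.0, 1.5])    # a sample the model flags as malicious
print("clean score:", predict_proba(x))        # ~0.98 -> malicious

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) is the worst-case (FGSM-style) direction.
eps = 2.0                         # attacker's per-feature perturbation budget
x_adv = x - eps * np.sign(w)

print("evasive score:", predict_proba(x_adv))  # ~0.24 -> now scored benign
```

Against a deep model the same idea holds, but the gradient comes from backpropagation rather than falling out of the weights directly.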

Why trust this guide?
Author Philip A. Dursey is a three‑time AI founder and ex‑CISO who has secured billion‑dollar infrastructures and leads HYPERGAME’s frontier‑security practice.

Who should read:
Security engineers, red teamers, ML/AI researchers, CISOs & CTOs, and product and policy leaders.

Get the ultimate advantage - click "Download now" and outpace AI adversaries.

Download Red Teaming AI: Attacking & Defending Intelligent Systems














Posted by: Ingvar16, 29-06-2025, 19:25
 