Title: Attacks, Defenses and Testing for Deep Learning
Author: Jinyin Chen, Ximin Zhang, Haibin Zheng
Publisher: Springer
Year: 2024
Pages: 413
Language: English
Format: pdf (true)
Size: 16.1 MB

This book provides a systematic study of the security of Deep Learning. With its powerful learning ability, Deep Learning is widely used in computer vision, Federated Learning, graph neural networks, Reinforcement Learning, and other scenarios. However, in the course of its application, researchers have revealed that Deep Learning is vulnerable to malicious attacks, which can lead to unpredictable consequences. Take autonomous driving as an example: in 2018 there were more than 12 serious autonomous-driving accidents worldwide, involving Uber, Tesla, and other high-tech enterprises. Drawing on the reviewed literature, we need to discover vulnerabilities in Deep Learning through attacks, reinforce its defenses, and test model performance to ensure its robustness.

The book aims to provide a comprehensive introduction to the methods of attacks, defenses, and testing evaluations for deep learning in various scenarios. We focus on application scenarios such as computer vision, Federated Learning, graph neural networks, and Reinforcement Learning, and consider the security issues that arise under different data modalities, model structures, and tasks.

Attacks can be divided into adversarial attacks and poisoning attacks. Adversarial attacks occur during the model testing phase, where the attacker obtains adversarial examples by adding small perturbations to the input. Poisoning attacks occur during the model training phase, where the attacker injects poisoned examples into the training dataset, embedding a backdoor trigger in the trained Deep Learning model.
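To make the test-time case concrete, below is a minimal PyTorch sketch of a one-step FGSM-style adversarial perturbation. It is a generic illustration, not a method from the book; the model, the eps value, and the [0, 1] input range are assumptions.

import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """One-step FGSM: push x in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A small sign-based step keeps the perturbation visually subtle.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()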

An effective defense method is an important guarantee for the application of Deep Learning. Existing defense methods fall into three types: data modification, model modification, and network add-on methods. The data modification defense method performs adversarial defense by transforming the input data. The model modification defense method adjusts the model architecture to defend against attacks. The network add-on method detects adversarial examples by training an adversarial-example detector.
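As a rough illustration of the data-modification category, the sketch below quantizes the input's bit depth before inference, which can flatten out small adversarial perturbations. The function name and the [0, 1] input range are assumptions; this is not the book's specific defense.

import torch

def squeeze_bit_depth(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Quantize inputs in [0, 1] to 2**bits levels; tiny adversarial
    perturbations are often rounded away."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

# A defended forward pass would simply wrap the original model:
# logits = model(squeeze_bit_depth(x))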

Testing deep neural networks is an effective method to measure the security and robustness of Deep Learning models. Through test evaluation, security vulnerabilities and weaknesses in deep neural networks can be identified. By identifying and fixing these vulnerabilities, the security and robustness of the model can be improved.
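One common way to quantify this is robust accuracy: the share of test samples a model still classifies correctly under a given attack. The loop below is a generic sketch; attack stands for any attack routine (for instance the FGSM sketch above) and is not taken from the book.

def robust_accuracy(model, loader, attack, device="cpu"):
    """Fraction of test samples still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(model, x, y)          # e.g. the FGSM sketch above
        pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total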

The book is divided into three main parts: attacks, defenses, and testing. In the attack section, we introduce in detail the attack methods and techniques targeting Deep Learning models.

Chapter 1 introduces a black-box adversarial attack method based on genetic algorithms to address the unsatisfactory success rate of black-box adversarial attacks. This method generates initial perturbations both randomly and with the classic white-box adversarial attack method AM. It then applies a genetic algorithm with designed fitness functions that evaluate and constrain candidate individuals in terms of both attack capability and perturbation control. Through this search it obtains approximately optimal adversarial samples, addressing the problem that most black-box adversarial attack algorithms cannot achieve the success rate expected of white-box attacks. Experimental results show that this method outperforms existing black-box attack methods in terms of attack capability and perturbation control.
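The following is a hedged sketch of the general genetic-algorithm loop such a black-box attack relies on: a population of bounded perturbations is scored by a fitness function that trades attack capability (lowering the true-class probability) against perturbation size, then the best candidates are recombined and mutated. The predict interface, the hyperparameters, and the exact fitness form are assumptions; the chapter's actual fitness design differs in detail.

import numpy as np

def ga_blackbox_attack(predict, x, true_label, pop=20, gens=100,
                       eps=0.05, mut_std=0.01, lam=1.0):
    """Evolve additive perturbations using only the model's output
    probabilities (black-box). Fitness rewards lowering the true-class
    probability while penalizing the perturbation's L2 norm."""
    def fitness(deltas):
        probs = np.array([predict(np.clip(x + d, 0, 1)) for d in deltas])
        norms = np.linalg.norm(deltas.reshape(len(deltas), -1), axis=1)
        return -probs[:, true_label] - lam * norms

    deltas = np.random.uniform(-eps, eps, size=(pop,) + x.shape)
    for _ in range(gens):
        order = np.argsort(-fitness(deltas))        # best candidates first
        elite = deltas[order[: pop // 2]]
        # Crossover: average random parent pairs from the elite, then mutate.
        idx = np.random.randint(len(elite), size=(pop - len(elite), 2))
        children = elite[idx].mean(axis=1)
        children += np.random.normal(0.0, mut_std, children.shape)
        deltas = np.concatenate([elite, np.clip(children, -eps, eps)])
    best = deltas[np.argmax(fitness(deltas))]
    return np.clip(x + best, 0, 1)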

Chapter 2 introduces a Generative Adversarial Network (GAN) for poisoning attacks to solve the problem of poisoned samples being easily detected by defense algorithms. This network consists of a feature extractor, a generator network, and a discriminator network. Under the GAN framework, the generator minimizes the pixel-level loss between poisoned and benign samples, achieving stealthiness by bounding the size of the perturbations. The discriminator evaluates the similarity between poisoned samples and original samples.
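A minimal PyTorch sketch of such a generator/discriminator pair is shown below; the layer sizes, the eps bound, and the module names are assumptions, and the book's actual architecture additionally includes a feature extractor. During training, the generator's pixel loss against the benign image would be combined with the discriminator's similarity score.

import torch
import torch.nn as nn

class PoisonGenerator(nn.Module):
    """Produces a bounded perturbation that is added to a benign image."""
    def __init__(self, eps=0.05):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return torch.clamp(x + self.eps * self.net(x), 0.0, 1.0)

class PoisonDiscriminator(nn.Module):
    """Scores how similar a (possibly poisoned) image is to the benign data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x)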

Chapter 3 introduces a white-box targeted attack to address the neglect of feature extraction's role in deep learning models. This adversarial attack method uses Gradient-weighted Class Activation Mapping (Grad-CAM) to compute channel-space attention and pixel-space attention. Channel-space attention narrows the attention area of the deep neural network, while pixel-space attention mislocalizes the target contours. By combining the two, the method concentrates the perturbation on the target contours, producing adversarial samples that are more effective while introducing less disturbance.
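As a rough sketch of how Grad-CAM attention can steer a perturbation, the function below computes a class activation map from a feature tensor and normalizes it into a spatial mask that could weight a gradient-based perturbation. The chosen feature layer, the 224x224 output size, and the masking step in the comment are assumptions, not the chapter's exact formulation.

import torch
import torch.nn.functional as F

def grad_cam_mask(features, logits, target_class):
    """Grad-CAM: weight feature maps by the gradient of the target logit,
    then normalize the result to [0, 1] for use as a spatial mask.
    Assumes a single-image batch."""
    grads, = torch.autograd.grad(logits[0, target_class], features,
                                 retain_graph=True)
    weights = grads.mean(dim=(2, 3), keepdim=True)       # channel weights
    cam = F.relu((weights * features).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Hypothetical use: x_adv = x + eps * grad_cam_mask(f, logits, y) * x.grad.sign()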

Chapter 4 introduces a new GNN vertical federated learning attack method to address the vulnerability of GVFL in practical applications due to distrust crises. Firstly, it steals global node embeddings and establishes a shadow model for the attack generator on the server side. Secondly, noise is added to node embeddings to confuse the shadow model. Finally, an attack is generated by leveraging gradients between pairs of nodes under the guidance of noisy node embeddings...
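Loosely inspired by this description (the chapter's full pipeline is not reproduced here), the sketch below illustrates only the final step: a hypothetical shadow model built from stolen node embeddings, whose gradients on a noise-confused embedding guide the perturbation of a target node. All names, dimensions, and the single-node simplification are assumptions.

import torch
import torch.nn as nn

# Hypothetical shadow model trained on stolen global node embeddings.
shadow = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 7))

def perturb_node(embedding, label, eps=0.1):
    """Use the shadow model's gradient w.r.t. a noise-confused node
    embedding to craft an adversarial perturbation for the target node."""
    emb = embedding.clone().detach().requires_grad_(True)
    noisy = emb + 0.01 * torch.randn_like(emb)        # confuse the shadow model
    loss = nn.functional.cross_entropy(shadow(noisy).unsqueeze(0),
                                       torch.tensor([label]))
    loss.backward()
    return (emb + eps * emb.grad.sign()).detach()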

Contents:


Download Attacks, Defenses and Testing for Deep Learning














Posted by: Ingvar16, 5-06-2024, 16:32
 