Document Type

Article

Source of Publication

International Journal of Computational Intelligence Systems

Publication Date

5-20-2023

Abstract

Artificial neural networks are now applied across a wide variety of fields and are approaching human-level performance on many tasks. Nevertheless, they are vulnerable to adversarial attacks in the form of small, intentionally designed perturbations that can cause misclassifications, rendering these models unusable in applications where security is critical. The strongest defense against these attacks so far is adversarial training (AT), which improves a model’s robustness by augmenting the training data with adversarial examples. In this work, we show that the performance of AT can be further improved by exploiting the neighborhood of each adversarial example in the latent space to make additional targeted augmentations to the training data. More specifically, we propose a robust selective data augmentation (RSDA) approach to enhance the performance of AT. RSDA complements AT by inspecting the quality of the data from a robustness perspective and performing data transformation operations on specific neighboring samples of each adversarial sample in the latent space. We evaluate RSDA on the MNIST and CIFAR-10 datasets under multiple adversarial attacks. Our experiments show that RSDA yields significantly better results than AT alone on both adversarial and clean samples.
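The two ingredients the abstract describes, adversarial training and latent-space neighborhood augmentation, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: the function names (fgsm, latent_neighbors, rsda_train_step), the use of FGSM as the attack, the horizontal flip as the stand-in transformation, and all hyperparameters are assumptions made for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Craft FGSM adversarial examples: x_adv = clip(x + eps * sign(grad_x loss))."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def latent_neighbors(encoder, x_adv, pool_x, k=3):
    """Indices of the k pool samples nearest to each adversarial example in latent space."""
    with torch.no_grad():
        z_adv, z_pool = encoder(x_adv), encoder(pool_x)
        return torch.cdist(z_adv, z_pool).topk(k, largest=False).indices

def rsda_train_step(model, encoder, opt, x, y, pool_x, pool_y, eps=0.1, k=3):
    """One AT step augmented with transformed latent-space neighbors (RSDA-style)."""
    x_adv = fgsm(model, x, y, eps)
    idx = latent_neighbors(encoder, x_adv, pool_x, k).flatten()
    # Stand-in transformation on the selected neighbors: a horizontal flip.
    x_aug, y_aug = torch.flip(pool_x[idx], dims=[-1]), pool_y[idx]
    batch_x = torch.cat([x, x_adv, x_aug])
    batch_y = torch.cat([y, y, y_aug])
    opt.zero_grad()
    loss = F.cross_entropy(model(batch_x), batch_y)
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy MNIST-shaped setup; the hidden layer doubles as the latent space.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
    encoder = nn.Sequential(nn.Flatten(), model[1], model[2])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
    pool_x, pool_y = torch.rand(256, 1, 28, 28), torch.randint(0, 10, (256,))
    print(rsda_train_step(model, encoder, opt, x, y, pool_x, pool_y))

In this sketch each training batch is the union of clean samples, their adversarial counterparts, and transformed neighbors of the adversarial examples; the paper's actual selection criterion and transformation operations may differ.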

ISSN

1875-6883

Publisher

Springer Science and Business Media LLC

Volume

16

Issue

1

First Page

89

Last Page

89

Disciplines

Computer Sciences

Keywords

Deep Learning, Adversarial Attacks, Adversarial Training, Data Augmentation, Robustness

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Indexed in Scopus

no

Open Access

yes

Open Access Type

Gold: This publication is openly available in an open access journal/series
