Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks
Document Type
Article
Source of Publication
IEEE Access
Publication Date
1-1-2024
Abstract
Deep neural networks (DNNs), while powerful, often lack interpretability and are vulnerable to adversarial attacks. Concept bottleneck models (CBMs), which incorporate intermediate high-level concepts into the model architecture, promise enhanced interpretability. This study examines the robustness of CBMs against adversarial attacks, comparing their clean and adversarial performance with that of standard Convolutional Neural Networks (CNNs). The premise is that CBMs prioritize conceptual integrity and data compression, enabling them to maintain high performance under adversarial conditions by filtering out non-essential variations in the input data. Our extensive evaluations across different datasets and adversarial attacks confirm that CBMs not only maintain higher accuracy but also defend better against a range of adversarial attacks than traditional models. Our findings indicate that CBMs, particularly those trained sequentially, inherently exhibit higher adversarial robustness than their standard CNN counterparts. Additionally, we explore the effects of increasing conceptual complexity and of adversarial training. While adversarial training generally boosts robustness, the improvement varies between CBMs and CNNs, highlighting the role of training strategies in achieving adversarial resilience.
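To make the idea of an intermediate concept layer concrete, below is a minimal PyTorch sketch of a concept bottleneck model, not the authors' implementation: the backbone, layer sizes, and the 112-concept / 200-class setting are illustrative assumptions, and the "sequential training" comment only paraphrases the strategy the abstract mentions.

```python
# Minimal illustrative sketch (not the paper's code): a concept bottleneck model
# maps raw input -> predicted concepts -> predicted label, so the label predictor
# only sees the concept bottleneck, not the raw pixels.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, n_concepts: int, n_classes: int):
        super().__init__()
        # Concept predictor: maps the image to high-level concept logits.
        self.concept_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_concepts),
        )
        # Label predictor: a simple head on top of the concept activations.
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_net(x)
        # Sigmoid turns concept logits into [0, 1] activations forming the bottleneck.
        label_logits = self.label_net(torch.sigmoid(concept_logits))
        return concept_logits, label_logits

# Sequential training (one strategy the abstract refers to) would first fit
# concept_net on concept labels, then freeze it and fit label_net on class labels.
model = ConceptBottleneckModel(n_concepts=112, n_classes=200)  # illustrative sizes
x = torch.randn(4, 3, 64, 64)                                  # dummy batch
concept_logits, label_logits = model(x)
print(concept_logits.shape, label_logits.shape)
```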
DOI Link
ISSN
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Disciplines
Computer Sciences
Keywords
Adversarial attacks, Concept Bottleneck models, Interpretable models, Robustness
Scopus ID
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Recommended Citation
Rasheed, Bader; Abdelhamid, Mohamed; Khan, Adil; Menezes, Igor; and Khatak, Asad Masood, "Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks" (2024). All Works. 6806.
https://zuscholars.zu.ac.ae/works/6806
Indexed in Scopus
yes
Open Access
yes
Open Access Type
Gold: This publication is openly available in an open access journal/series