Document Type
Article
Source of Publication
IEEE Access
Publication Date
1-1-2025
Abstract
Purpose - This research investigates the effectiveness of established vulnerability metrics, such as the Common Vulnerability Scoring System (CVSS), in evaluating attacks on Large Language Models (LLMs), with a focus on Adversarial Attacks (AAs). The study explores the influence of different metric factors in determining vulnerability scores, providing new perspectives on potential enhancements to these metrics. Approach - This study adopts a quantitative approach, calculating and comparing the coefficient of variation of vulnerability scores across 56 adversarial attacks on LLMs. The attacks, sourced from various research papers and obtained through online databases, were evaluated using multiple vulnerability metrics. Scores were determined by averaging the values assessed by three distinct LLMs. Findings - The results indicate that existing scoring systems yield vulnerability scores with minimal variation across different attacks, supporting the hypothesis that current vulnerability metrics are limited in evaluating AAs on LLMs and highlighting the need for the development of more flexible, generalized metrics tailored to such attacks.
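For context, the statistic named in the abstract is the coefficient of variation, CV = standard deviation / mean, computed over per-attack scores that are each the average of three LLM assessments. The following Python sketch illustrates that pipeline under stated assumptions: the attack names and scores are hypothetical placeholders, not the paper's data.

from statistics import mean, stdev

# Hypothetical per-attack scores from three assessor LLMs (0-10 scale).
llm_scores = {
    "attack_01": [7.5, 8.1, 7.8],
    "attack_02": [6.9, 7.2, 7.4],
    "attack_03": [8.0, 7.7, 8.3],
}

# Average the three LLM assessments to get one score per attack.
attack_scores = [mean(scores) for scores in llm_scores.values()]

# Coefficient of variation across attacks: a low CV suggests the metric
# barely discriminates between different adversarial attacks.
cv = stdev(attack_scores) / mean(attack_scores)
print(f"Coefficient of variation: {cv:.3f}")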
DOI Link
ISSN
Disciplines
Computer Sciences
Keywords
Adversarial Attacks, Descriptive Statistics, Large Language Models, Risk Assessment, Vulnerability Metrics
Scopus ID
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
Bahar, Atmane Ayoub Mansour and Wazan, Ahmad Samer, "On the Validity of Traditional Vulnerability Scoring Systems for Adversarial Attacks against LLMs" (2025). All Works. 7341.
https://zuscholars.zu.ac.ae/works/7341
Indexed in Scopus
yes
Open Access
yes
Open Access Type
Gold: This publication is openly available in an open access journal/series