Author First name, Last name, Institution

Atmane Ayoub Mansour Bahar
Ahmad Samer Wazan, Zayed University

Document Type

Article

Source of Publication

IEEE Access

Publication Date

1-1-2025

Abstract

This research investigates the effectiveness of established vulnerability metrics, such as the Common Vulnerability Scoring System (CVSS), in evaluating attacks on Large Language Models (LLMs), with a focus on Adversarial Attacks (AAs). It examines how different metric factors influence vulnerability scores, offering new perspectives on potential enhancements to these metrics.

Approach - The study adopts a quantitative approach, calculating and comparing the coefficient of variation of vulnerability scores across 56 adversarial attacks on LLMs. The attacks, sourced from research papers and online databases, were evaluated using multiple vulnerability metrics, with each score determined by averaging the values assessed by three distinct LLMs.

Findings - The results indicate that existing scoring systems yield vulnerability scores with minimal variation across different attacks. This supports the hypothesis that current vulnerability metrics are limited in evaluating AAs on LLMs and highlights the need for more flexible, generalized metrics tailored to such attacks.
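As a minimal sketch of the scoring procedure the abstract describes: each attack is scored by three LLM assessors, the per-attack score is their mean, and the coefficient of variation (standard deviation divided by mean) is taken across all attacks for a given metric. The data below is hypothetical and illustrative only (three attacks rather than the paper's 56), and the variable names are not from the paper.

    import statistics

    # Hypothetical CVSS-style base scores (0-10) assigned by three
    # LLM assessors to a handful of adversarial attacks.
    scores_by_llm = {
        "attack_A": [7.5, 8.0, 7.8],
        "attack_B": [7.9, 7.6, 8.1],
        "attack_C": [8.2, 7.7, 7.9],
    }

    # Average the three assessors' values to get one score per attack.
    per_attack = [statistics.mean(v) for v in scores_by_llm.values()]

    # Coefficient of variation across attacks: a low value suggests the
    # metric barely discriminates between different attacks, which is
    # the limitation the study reports.
    cv = statistics.stdev(per_attack) / statistics.mean(per_attack)
    print(f"Coefficient of variation: {cv:.3f}")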

ISSN

2169-3536

Disciplines

Computer Sciences

Keywords

Adversarial Attacks, Descriptive Statistics, Large Language Models, Risk Assessment, Vulnerability Metrics

Scopus ID

05006789787

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Indexed in Scopus

yes

Open Access

yes

Open Access Type

Gold: This publication is openly available in an open access journal/series
