"Enhancing Breast Cancer Detection through Vision Transformer Models an" by Bassam M. Kanber, Ahmad Al Smadi et al.
 

Enhancing Breast Cancer Detection through Vision Transformer Models and Adaptive Fine-Tuning

Document Type

Conference Proceeding

Source of Publication

2024 IEEE 7th International Conference on Computer and Communication Engineering Technology (CCET)

Publication Date

8-18-2024

Abstract

Breast cancer (BC) remains a significant global health issue, underscoring the need for accurate and early detection. This paper presents a novel approach that applies Vision Transformer (ViT) models to histopathological images from the BreaKHis dataset for BC diagnosis. Three ViT models (ViT_B_16, ViT_B_32, and ViT_L_32) are implemented and optimized through adaptive fine-tuning, using predefined mean and standard deviation values for normalization, custom data transformations, and different layer-unfreezing strategies. Results demonstrate the effectiveness of the ViT_B_16 model with one layer unfrozen, which achieves 98.12% accuracy and a test loss of 0.0671. Comparative analysis and discussion highlight the ViT models' performance and computational efficiency, positioning them as promising tools for automated BC diagnosis. However, the study is limited by the scope of the BreaKHis dataset and the specific ViT configurations, which may not fully generalize to other datasets or real-world scenarios. Future work will explore data augmentation and transformer variants to enhance generalization across diverse datasets.
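To make the fine-tuning setup described in the abstract concrete, the following is a minimal PyTorch/torchvision sketch of ViT_B_16 with a single unfrozen encoder block, not the authors' code: the ImageNet normalization statistics, input resolution, learning rate, and binary benign/malignant head are assumptions, since the abstract does not specify these values.

```python
# Minimal sketch of adaptive fine-tuning for ViT_B_16 (assumptions noted inline).
import torch
import torch.nn as nn
from torchvision import transforms
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Predefined normalization values; ImageNet statistics are assumed here.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Start from an ImageNet-pretrained ViT_B_16 backbone.
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

# Freeze the whole backbone, then unfreeze only the last encoder block,
# mirroring the "one layer unfrozen" strategy reported in the abstract.
for p in model.parameters():
    p.requires_grad = False
for p in model.encoder.layers[-1].parameters():
    p.requires_grad = True

# Replace the classification head with a two-class (benign/malignant) head;
# the newly created head is trainable by default.
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

# Optimize only the parameters left trainable (assumed Adam, lr=1e-4).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```

Unfreezing more encoder blocks (e.g., `model.encoder.layers[-2:]`) corresponds to the other layer-unfreezing strategies compared in the paper.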

ISBN

979-8-3503-5567-3

Publisher

IEEE

Volume

00

First Page

36

Last Page

40

Disciplines

Computer Sciences | Medicine and Health Sciences

Keywords

Vision Transformer, breast cancer, histopathological images, adaptive fine-tuning, BreaKHis dataset

Indexed in Scopus

no

Open Access

no
