Document Type
Article
Source of Publication
Computers, Materials and Continua
Publication Date
1-1-2022
Abstract
Medical Image Analysis (MIA) is one of the active research areas in computer vision, where brain tumor detection is the most investigated domain among researchers due to its deadly nature. Brain tumor detection in magnetic resonance imaging (MRI) assists radiologists in analyzing the exact size and location of the tumor. However, existing systems do not classify human brain tumors with sufficiently high accuracy. In addition, smart and easily implementable approaches for 2D and 3D medical images are unavailable, which is the main problem in detecting the tumor. In this paper, we investigate various deep learning models for the detection and localization of tumors in MRI. A novel two-tier framework is proposed, where the first tier classifies normal and tumor MRI, followed by tumor region localization in the second tier. Furthermore, we introduce a well-annotated dataset comprising tumor and normal images. The experimental results demonstrate the effectiveness of the proposed framework, achieving 97% accuracy using GoogLeNet on the proposed dataset for the classification task and 83% for the localization task after fine-tuning the pre-trained you only look once (YOLO) v3 model.
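The two-tier flow described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `classify_fn` and `localize_fn` are hypothetical stand-ins for the fine-tuned GoogLeNet classifier and YOLOv3 localizer, and the toy threshold classifier below is invented purely for demonstration.

```python
# Sketch of the two-tier framework from the abstract:
# tier 1 labels an MRI slice as 'normal' or 'tumor'; tier 2
# localizes tumor regions only when tier 1 flags a tumor.
# classify_fn and localize_fn are hypothetical placeholders for
# the fine-tuned GoogLeNet and YOLOv3 models.

def two_tier_detect(image, classify_fn, localize_fn):
    """Return (label, boxes); boxes is empty for normal scans."""
    label = classify_fn(image)      # tier 1: classification
    if label != "tumor":
        return label, []            # skip tier 2 for normal MRI
    boxes = localize_fn(image)      # tier 2: bounding-box localization
    return label, boxes

if __name__ == "__main__":
    # Toy stand-ins: flag high mean intensity as 'tumor' and
    # return one dummy (x, y, w, h) bounding box.
    fake_classifier = lambda img: "tumor" if sum(img) / len(img) > 0.5 else "normal"
    fake_localizer = lambda img: [(10, 10, 40, 40)]
    print(two_tier_detect([0.9, 0.8], fake_classifier, fake_localizer))
    print(two_tier_detect([0.1, 0.2], fake_classifier, fake_localizer))
```

The key design point carried over from the paper is that localization runs only on scans the first tier classifies as tumorous, so normal scans never reach the more expensive detector.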
DOI Link
ISSN
Publisher
Computers, Materials and Continua (Tech Science Press)
Volume
72
First Page
73
Last Page
92
Disciplines
Computer Sciences
Keywords
GoogLeNet, Image classification, MRI, Tumor localization, YOLOv3
Scopus ID
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
Ali, Farman; Khan, Sadia; Abbas, Arbab Waseem; Shah, Babar; Hussain, Tariq; Song, Dongho; El-Sappagh, Shaker; and Singh, Jaiteg, "A Two-Tier Framework Based on GoogLeNet and YOLOv3 Models for Tumor Detection in MRI" (2022). All Works. 4919.
https://zuscholars.zu.ac.ae/works/4919
Indexed in Scopus
yes
Open Access
yes
Open Access Type
Gold: This publication is openly available in an open access journal/series