On the Impact of Deep Learning and Feature Extraction for Arabic Audio Classification and Speaker Identification

Document Type

Conference Proceeding

Source of Publication

2022 IEEE/ACS 19th International Conference on Computer Systems and Applications (AICCSA)

Publication Date

12-8-2022

Abstract

In recent years, machine learning and deep learning algorithms have driven advances in audio and speech recognition. Despite this progress, little attention has been paid to classifying cantillation audio with deep learning. This paper introduces a dataset containing two labeled cantillation styles from six reciters. Deep learning architectures, including convolutional neural networks (CNN) and deep artificial neural networks (ANN), were used to classify the recitation styles from various spectrogram features. The same approach was also applied to classifying the six reciters. The best performance was achieved with a CNN model and Mel spectrograms, yielding a test-set F1-score of 0.99 for recitation-style classification and 1.00 for reciter classification. These results outperform existing work in the literature. The paper also discusses the impact of various audio features and deep learning algorithms as they apply to audio genre and speaker identification tasks.
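
As a rough illustration of the pipeline the abstract describes, the sketch below extracts a log-scaled Mel spectrogram with librosa and passes it to a small 2-D CNN written in PyTorch. The file names, layer sizes, clip length, and hyperparameters are illustrative assumptions only and do not reproduce the paper's actual architecture or settings.

# Minimal sketch: Mel-spectrogram features + a small CNN classifier.
# All paths, labels, and hyperparameters here are placeholders, not the
# configuration used in the paper.
import numpy as np
import librosa
import torch
import torch.nn as nn

def mel_spectrogram(path, sr=22050, n_mels=128, duration=5.0):
    """Load a fixed-length audio clip and return a log-scaled Mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = librosa.util.fix_length(y, size=int(sr * duration))   # pad/trim to fixed length
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)                # shape: (n_mels, frames)

class StyleCNN(nn.Module):
    """Small 2-D CNN over (1, n_mels, frames) spectrogram 'images'."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),                            # global pooling -> (32, 1, 1)
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    # Dry run with random data standing in for a real Mel spectrogram;
    # for real audio, build x from mel_spectrogram("some_recitation.wav").
    x = torch.randn(4, 1, 128, 216)            # batch of 4 fake spectrograms
    logits = StyleCNN(n_classes=2)(x)          # two recitation styles
    print(logits.shape)                        # torch.Size([4, 2])

For reciter identification the same structure applies with n_classes set to the number of reciters (six in the paper's dataset).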

ISBN

979-8-3503-1008-5

Publisher

IEEE

Volume

00

First Page

1

Last Page

8

Disciplines

Computer Sciences

Keywords

Deep learning, Visualization, Machine learning algorithms, Speech recognition, Feature extraction, Classification algorithms, Convolutional neural networks

Indexed in Scopus

no

Open Access

no
