Deepfake Audio Detection via MFCC Features Using Machine Learning
Abstract
Deepfake content is synthetically created or altered using artificial intelligence (AI) techniques so that it appears real; it can include synthesized audio, video, images, and text. Deepfakes can now produce natural-looking content, making them harder to identify. Much progress has been made in detecting video deepfakes in recent years; nevertheless, most investigations into audio deepfake detection have employed the ASVspoof or AVspoof datasets with various machine learning and deep learning algorithms. This research uses machine learning and deep learning approaches to identify deepfake audio. The Mel-frequency cepstral coefficients (MFCC) technique is used to extract the most useful information from the audio. We choose the Fake-or-Real dataset, the most recent benchmark dataset, which was created with text-to-speech models and is divided into four sub-datasets according to audio length and bit rate: for-rerec, for-2-sec, for-norm, and for-original. The experimental results show that the support vector machine (SVM) outperformed the other machine learning (ML) models in terms of accuracy on the for-rerec and for-2-sec sub-datasets, while the gradient boosting model performed best on the for-norm sub-dataset. The VGG-16 model produced highly encouraging results on the for-original sub-dataset and outperforms other state-of-the-art approaches.
Source of Publication
Institute of Electrical and Electronics Engineers (IEEE)
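The MFCC feature extraction mentioned in the abstract can be sketched as follows. This is a minimal numpy-only illustration of the standard pipeline (pre-emphasis, windowed framing, power spectrum, mel filterbank, log, DCT-II); the paper's exact parameter settings are not given here, so the sample rate, frame size, and coefficient counts below are common defaults chosen for illustration, not the authors' configuration.

```python
import numpy as np

def hz_to_mel(hz):
    return 2595.0 * np.log10(1.0 + hz / 700.0)

def mel_to_hz(mel):
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    # Pre-emphasis boosts high frequencies before analysis.
    signal = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Slice the signal into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames *= np.hamming(n_fft)
    # Per-frame power spectrum.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank spanning 0 Hz to the Nyquist frequency.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT-II to decorrelate into cepstral coefficients.
    feats = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return feats @ dct.T  # shape: (n_frames, n_ceps)
```

The resulting per-frame coefficient matrix (or summary statistics over it) is the kind of feature vector that can then be fed to classifiers such as SVM or gradient boosting, as the paper describes.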
Keywords
Deepfakes, Deep learning, Speech synthesis, Training data, Feature extraction, Machine learning algorithms, Data models, Acoustics
Hamza, Ameer; Javed, Abdul Rehman; Iqbal, Farkhund; Kryvinska, Natalia; Almadhor, Ahmad S.; Jalil, Zunera; and Borghol, Rouba, "Deepfake Audio Detection via MFCC Features Using Machine Learning" (2022). All Works. 5544.
Indexed in Scopus