The intricate dance of emotions and psychophysiology: unveiling the secrets of microexpressions

Document Type

Article

Source of Publication

PeerJ Computer Science

Publication Date

4-6-2026

Abstract

Background: Emotion recognition plays a pivotal role in behavioral analysis, mental health assessment, and human-computer interaction. Micro-expressions, which are brief and involuntary facial movements, offer valuable insights into concealed emotions. However, validating micro-expressions remains a challenge due to their subtlety and short duration. This study aims to enhance the validation and classification of micro-expressions by integrating electromyogram (EMG) signals with facial action units (AUs).

Methods: EMG data were collected using the EMG Muscle Sensor Module V3.0, interfaced with an Arduino Mega 2560 microcontroller. To ensure signal clarity, several data filtration techniques were applied to remove noise, motion artifacts, and baseline interference. Features relevant to facial muscle activity were extracted from the cleaned EMG signals and analyzed with convolutional neural network (CNN) and long short-term memory (LSTM) models: the CNN captured spatial patterns in muscle activation, while the LSTM modeled temporal dependencies in the signal sequence.

Results: The CNN-based model achieved 97.62% accuracy in emotion classification, and the LSTM model a comparable 96.47%. These results indicate a high degree of reliability in detecting emotions from EMG signals and their correspondence to facial action units. The study also highlights several limitations of existing emotion recognition frameworks, including reduced accuracy, limited emotion representation, and insufficient dataset diversity.

Conclusions: Integrating EMG signals with facial action units provides a reliable and accurate framework for micro-expression validation. By addressing key limitations of current models, this research contributes to more robust and interpretable emotion recognition systems. Future work will focus on integrating multimodal signals and using more diverse datasets to improve generalizability across populations and environments.
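
The sketch below illustrates the kind of pipeline the abstract describes, not the authors' released code: band-pass and notch filtering of a raw EMG window, followed by compact CNN and LSTM classifiers. The sampling rate (1 kHz), 20-450 Hz pass band, 50 Hz mains notch, window length, number of emotion classes, and the use of SciPy/TensorFlow are illustrative assumptions; the paper's actual preprocessing parameters and architectures may differ.

import numpy as np
from scipy.signal import butter, iirnotch, filtfilt
import tensorflow as tf

FS = 1000          # assumed sampling rate (Hz) of the EMG sensor stream
WINDOW = 1000      # one-second analysis window
N_CLASSES = 6      # assumed number of emotion classes

def clean_emg(raw, fs=FS):
    """Remove baseline drift, out-of-band noise, and mains interference."""
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)   # typical surface-EMG band
    x = filtfilt(b, a, raw)
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)               # 50 Hz power-line notch
    return filtfilt(b_n, a_n, x)

def build_cnn(window=WINDOW, n_classes=N_CLASSES):
    """1-D CNN that learns spatial activation patterns within each EMG window."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.Conv1D(32, 7, activation="relu"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def build_lstm(window=WINDOW, n_classes=N_CLASSES):
    """LSTM that models temporal dependencies across the same windows."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

if __name__ == "__main__":
    raw = np.random.randn(WINDOW)                 # stand-in for one sensor window
    x = clean_emg(raw).reshape(1, WINDOW, 1)
    for model in (build_cnn(), build_lstm()):
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        print(model.predict(x, verbose=0).shape)  # (1, N_CLASSES)

In practice each filtered window would be labeled with the emotion elicited during recording and the models trained on those window/label pairs; the accuracies reported in the abstract refer to the authors' own data and configuration.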

ISSN

2376-5992

Publisher

PeerJ

Volume

12

Disciplines

Computer Sciences | Social and Behavioral Sciences

Keywords

Computer science, Artificial intelligence, Convolutional neural network, Facial electromyography, Electromyography, Facial muscles, Facial expression, Facial recognition system, Emotion recognition, Emotion classification, Feature extraction, Pattern recognition, Computer vision, Machine learning, Deep learning, Artificial neural network, Motion capture

Scopus ID

105035689859

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Indexed in Scopus

yes

Open Access

yes

Open Access Type

Gold: This publication is openly available in an open access journal/series

