A Coarse-to-Fine Facial Landmark Detection Method Based on Self-attention Mechanism

Document Type

Article

Source of Publication

IEEE Transactions on Multimedia

Publication Date

1-1-2021

Abstract

Facial landmark detection in the wild remains a challenging problem in computer vision. Deep learning-based methods currently play a leading role in solving this problem; however, they generally focus on local feature learning and ignore global relationships. Therefore, in this study, a self-attention mechanism is introduced into facial landmark detection. Specifically, a coarse-to-fine facial landmark detection method is proposed that uses two stacked hourglasses as the backbone, with a new landmark-guided self-attention (LGSA) block inserted between them. The LGSA block learns the global relationships between different positions on the feature map and allows feature learning to focus on the locations of landmarks with the help of a landmark-specific attention map, which is generated in the first-stage hourglass model. A novel attentional consistency loss is also proposed to ensure the generation of an accurate landmark-specific attention map. A new channel transformation block is used as the building block of the hourglass model to improve the model's capacity. A coarse-to-fine strategy is adopted both within and between stages to reduce complexity. Extensive experimental results on public datasets demonstrate the superiority of the proposed method over state-of-the-art models.
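To make the idea of the LGSA block concrete, the following is a minimal PyTorch sketch of a non-local self-attention block whose aggregated context is gated by a landmark-specific attention map before being added back to the input features. The class name, layer layout, and gating scheme are illustrative assumptions based only on the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LandmarkGuidedSelfAttention(nn.Module):
    """Hypothetical sketch: non-local self-attention over a feature map,
    re-weighted by a landmark-specific attention map (not the paper's code)."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x, landmark_attn):
        # x: (B, C, H, W) features from the first hourglass
        # landmark_attn: (B, 1, H, W) landmark-specific attention map in [0, 1]
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(x).flatten(2)                     # (B, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)

        # Global affinity between every pair of spatial positions.
        affinity = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, HW, HW)
        context = (affinity @ v).transpose(1, 2).reshape(b, c, h, w)

        # Gate the aggregated context so learning focuses near landmarks,
        # then keep the original features via a residual connection.
        return x + self.out(context) * landmark_attn


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    attn_map = torch.sigmoid(torch.randn(2, 1, 32, 32))
    block = LandmarkGuidedSelfAttention(64)
    print(block(feats, attn_map).shape)  # torch.Size([2, 64, 32, 32])
```

In this reading, the first-stage hourglass would supply both the feature map and the attention map, and the attentional consistency loss described in the abstract would supervise that map against the landmark locations.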

ISSN

1520-9210

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Volume

23

First Page

926

Last Page

938

Disciplines

Computer Sciences

Keywords

Convolutional neural network, facial landmark detection, self-attention mechanism

Scopus ID

85102062752

Indexed in Scopus

yes

Open Access

no
