Towards Enhanced Identification of Emotion from Resource-Constrained Language through a novel Multilingual BERT Approach

Document Type


Source of Publication

ACM Transactions on Asian and Low-Resource Language Information Processing

Publication Date



Abstract

Emotion identification from text has recently gained attention due to its versatile ability to analyze human-machine interaction. This work focuses on detecting emotions from textual data. Languages such as English, Chinese, and German are widely used for text classification; however, limited research has been done on resource-poor oriental languages. Roman Urdu (RU) is a resource-constrained language extensively used across Asia. This work focuses on predicting emotions from RU text. For this, a dataset is collected from different social media domains and, based on Paul Ekman's theory, annotated with six basic emotions: happy, surprise, angry, sad, fear, and disgust. Dense word embedding representations of different languages are adopted that utilize existing pre-trained models. BERT is additionally pre-trained and fine-tuned for the classification task. The proposed approach is compared with baseline machine learning and deep learning algorithms. Additionally, the current work is compared with different approaches to the same task. Based on the empirical evaluation, the proposed approach outperforms the existing state-of-the-art with an average accuracy of 91%.




Publisher

Association for Computing Machinery (ACM)


Subject Area

Computer Sciences


Keywords

Deep learning, Emotion detection, Roman Urdu, Resource-poor language, BERT, Affective computing

Indexed in Scopus


Open Access


Open Access Type

Bronze: This publication is openly available on the publisher’s website but without an open license