Mobile Deep Classification of UAE Banknotes for the Visually Challenged

Document Type

Conference Proceeding

Source of Publication

2022 9th International Conference on Future Internet of Things and Cloud (FiCloud)

Publication Date



This paper proposes an artificial intelligence-powered mobile application for currency recognition to assist people with visual disabilities. The proposed application combines an R-CNN detector, a pre-trained MobileNet V2 convolutional neural network with transfer learning, the Hough transform, and a text-to-speech service to detect and classify a captured banknote and announce the result audibly. To train our AI model, we collect 700 ultra-high-definition images of United Arab Emirates banknotes, covering the front and back faces of each denomination from various distances, angles, and lighting conditions to mitigate overfitting. When triggered, our mobile application captures an image with the device camera; the image is then pre-processed and passed to our on-device currency detector and classifier. Finally, text-to-speech converts the predicted class into an audio signal played through the user’s Bluetooth earpiece. Our results show that our system can be an effective tool for helping the visually challenged identify and differentiate banknotes using increasingly available smartphones. Our banknote classification model was validated using test-set and 5-fold cross-validation methods, achieving average accuracies of 70% and 88%, respectively.



First Page


Last Page



Disciplines

Computer Sciences


Keywords

Visualization, Bluetooth, Transfer learning, Transforms, Cameras, Mobile applications, Classification algorithms

Indexed in Scopus


Open Access