A Comparative Analysis of ISLRS Using CNN and ViT


Abstract

The Indian Sign Language Recognition System (ISLRS) aims to recognize and interpret hand gestures and movements in Indian Sign Language (ISL), in order to facilitate smooth communication between hearing-impaired individuals and the hearing population. This research compares an ISLR system built on a custom convolutional neural network (CNN) architecture with one built on a Vision Transformer (ViT). From the ISL alphabet dataset of 36 classes, the 26 classes corresponding to the letters of the English alphabet are considered in this analysis. The analysis showed that, on this dataset, the ViT outperforms the CNN on the performance metrics considered.
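The CNN side of such a comparison follows a standard pipeline: convolutional feature extraction, pooling, and a 26-way classification head. The abstract does not specify the custom architecture, so the following is only a minimal NumPy sketch of that pipeline under assumed parameters (8 random 3x3 filters, a single conv layer, and a 28x28 grayscale input crop), not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2-D convolution: (H, W) image with (K, kh, kw) kernels -> (K, H-kh+1, W-kw+1)."""
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def softmax(z):
    """Numerically stable softmax producing class probabilities."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical, untrained parameters: 8 random 3x3 filters and a 26-way linear head.
filters = rng.standard_normal((8, 3, 3))
head_w = rng.standard_normal((26, 8))
head_b = np.zeros(26)

def classify(image):
    feat = np.maximum(conv2d(image, filters), 0.0)   # ReLU activation
    pooled = feat.mean(axis=(1, 2))                  # global average pooling -> (8,)
    return softmax(head_w @ pooled + head_b)         # probabilities over 26 letter classes

# A random stand-in for a preprocessed 28x28 hand-gesture crop.
probs = classify(rng.standard_normal((28, 28)))
```

A trained system would learn `filters`, `head_w`, and `head_b` from the ISL alphabet images; the ViT alternative replaces the convolutional feature extractor with patch embeddings and self-attention while keeping the same 26-way softmax output.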

CITATION STYLE

APA

Renjith, S., & Manazhy, R. (2024). A Comparative Analysis of ISLRS Using CNN and ViT. In Lecture Notes in Networks and Systems (Vol. 789 LNNS, pp. 1–9). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-99-6586-1_1
