SNCHAR: Sign language character recognition

Abstract

Deaf and speech-impaired individuals communicate among themselves using sign language, but hearing people often find it difficult to understand. Signing with two hands frequently leads to poorly defined features because the hands overlap. Our project takes the first step toward bridging the communication gap between hearing people and the hearing impaired through sign language. Effectively extending this work from characters to words and full sentences could not only help deaf and speech-impaired people communicate faster and more easily with the outside world, but also boost the development of autonomous systems for understanding and assisting them. Sign language is the preferred method of communication among deaf and hearing-impaired people throughout the world, and its recognition can achieve varying degrees of success using computer vision or other techniques. Sign language is a structured set of gestures in which each gesture carries a specific meaning. We propose SNCHAR as a solution to this problem: a system that allows easy interaction between deaf and hearing-impaired people and those who are not. Here SN stands for Sign language, CHA for Character, and R for Recognition system.

The "SNCHAR: Sign language Character Recognition" system is a Python-based application. It takes live video as input and predicts the letters the user is gesturing in the live feed. It captures frames and locates the hand gesture region by searching for a skin-coloured object, separates that region from the rest of the frame, and feeds it to our pre-trained model. Using the hand gesture as input, the pre-trained model predicts a value that represents a letter of the alphabet, which is then displayed on the screen. The user can hear the predicted text by pressing "P" on the keyboard, and can erase it by pressing "Z".
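As a concrete illustration, the sketch below follows the pipeline the abstract describes: skin-colour segmentation of the live frame, cropping the gesture region, classifying it with a pre-trained model, and the "P"/"Z" keyboard controls. The paper does not publish its code, so the HSV thresholds, the 64x64 grayscale input size, the model file name snchar_model.h5, and the use of pyttsx3 for speech are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an SNCHAR-style pipeline (assumptions noted above).
import string

import cv2
import numpy as np
import pyttsx3
from tensorflow.keras.models import load_model

LETTERS = list(string.ascii_uppercase)      # one class per alphabet letter
model = load_model("snchar_model.h5")       # hypothetical pre-trained model
tts = pyttsx3.init()                        # text-to-speech engine

cap = cv2.VideoCapture(0)                   # live video input
predicted_text = ""

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Locate the hand by skin-colour intensity (threshold values are assumed).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    if contours:
        # Separate the gesture area from the rest of the frame.
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        roi = cv2.resize(roi, (64, 64)).astype("float32") / 255.0

        # The pre-trained model maps the gesture to one alphabet class.
        # A real application would debounce this across frames instead of
        # appending a letter on every iteration.
        probs = model.predict(roi.reshape(1, 64, 64, 1), verbose=0)
        predicted_text += LETTERS[int(np.argmax(probs))]

    cv2.putText(frame, predicted_text, (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("SNCHAR", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("p"):          # speak the predicted text aloud
        tts.say(predicted_text)
        tts.runAndWait()
    elif key == ord("z"):        # erase the predicted text
        predicted_text = ""
    elif key == ord("q"):        # quit
        break

cap.release()
cv2.destroyAllWindows()
```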

Citation (APA)

Chaurasia, A., & Shire, H. (2019). SNCHAR: Sign language character recognition. International Journal of Recent Technology and Engineering, 8(3), 465–468. https://doi.org/10.35940/ijrte.C4226.098319
