
Project Idea | Sign Language Translator for Speech-Impaired

Last Updated : 12 Apr, 2019

Project Title: Sign Language Translator for Speech-Impaired

Introduction: The main objective is to translate sign language into text/speech. The framework provides a helping hand for speech-impaired people to communicate with the rest of the world using sign language, eliminating the middle person who generally acts as a medium of translation. It offers a user-friendly environment by producing speech/text output for a sign-gesture input.

Conceptual framework:
USER INTERFACE:
1. The user operates the application through various buttons in the system:
a. START BUTTON: Starts the application, after which the user (a speech-impaired person) gives input to the application through gestures.
b. PAUSE BUTTON: Pauses the application so that it briefly stops accepting input.
c. RESUME BUTTON: Makes the application continue accepting input after a pause.
d. STOP BUTTON: Stops the application so that the user can close it.
2. Input for the application is provided by any camera attached to the system, which is used to recognize gestures (a minimal capture loop is sketched below).
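
As an illustration of how the camera input and the Start/Pause/Resume/Stop controls could fit together, the following Python sketch reads frames from the attached camera with OpenCV. The key bindings ('s', 'p', 'r', 'q') merely stand in for the buttons described above and are not part of the original design.

import cv2

def capture_gestures():
    cap = cv2.VideoCapture(0)          # default camera attached to the system
    accepting_input = False            # toggled by the Start/Pause/Resume controls

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        cv2.imshow("Sign Language Translator", frame)

        key = cv2.waitKey(1) & 0xFF
        if key == ord('s'):            # START: begin accepting gesture input
            accepting_input = True
        elif key == ord('p'):          # PAUSE: briefly stop accepting input
            accepting_input = False
        elif key == ord('r'):          # RESUME: continue accepting input after a pause
            accepting_input = True
        elif key == ord('q'):          # STOP: close the application
            break

        if accepting_input:
            pass                       # the frame would be passed to the gesture pipeline here

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    capture_gestures()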

OUTPUT:
1. Output for each gesture is shown in a separate window in the form of text.
2. The text output can further be converted into speech using a text-to-speech converter for smoother communication (a minimal sketch follows).
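
A minimal sketch of the text-to-speech step, assuming the gTTS package (a Python wrapper around Google's Text-to-Speech service) is used; the function name and output file path are illustrative only.

from gtts import gTTS

def speak(recognized_text, out_path="output.mp3"):
    """Convert the recognized sign-language text to an audio file."""
    tts = gTTS(text=recognized_text, lang="en")
    tts.save(out_path)      # the saved file can then be played back to the listener
    return out_path

# Example: text produced for a recognized gesture sequence
speak("Hello, how are you?")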

Data structures and Algorithms:
ALGORITHMS USED:

1. Rule-based classifier

2. Background subtraction by detecting skin colour using the HSV (Hue, Saturation, Value) model (an illustrative sketch follows this list).
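
The sketch below illustrates skin-colour segmentation in the HSV colour space with OpenCV; the threshold values are common illustrative choices, not figures taken from this project.

import cv2
import numpy as np

def segment_skin(frame_bgr):
    """Return a binary mask of skin-coloured pixels (the hand region)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    lower_skin = np.array([0, 30, 60], dtype=np.uint8)     # assumed lower HSV bound
    upper_skin = np.array([20, 150, 255], dtype=np.uint8)  # assumed upper HSV bound
    mask = cv2.inRange(hsv, lower_skin, upper_skin)

    # Morphological opening/closing removes small speckles left by the background
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask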

IMPLEMENTATION:
The detection of fingers and palm is based on the method described in The Scientific World Journal, Volume 2014 (2014), Article ID 267872 (http://dx.doi.org/10.1155/2014/267872). After the fingers and palm are detected, hand signs are recognized using a simple rule-based classifier: every possible hand sign is mapped to an appropriate label, and the captured images are matched against these labels. The sequence of labels, based on American Sign Language, is stored in our database. By matching the captured sequence of labels against the database, we predict what the person is trying to convey and convert the sign language into text, which in a further step can be converted to speech using Google's Text-to-Speech API (a simplified classifier sketch follows).
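
The following simplified sketch shows the rule-based classification and sequence-matching idea described above. The feature tuple, the SIGN_MAP rules and the PHRASE_DB entries are hypothetical placeholders; in the actual project the American Sign Language label sequences would live in the MySQL database.

# Hypothetical rules: (number of extended fingers, thumb extended?) -> sign label
SIGN_MAP = {
    (0, False): "fist",
    (1, False): "one",
    (2, False): "two",
    (5, True):  "open_palm",
}

# Hypothetical phrase table: sequence of labels -> English text
PHRASE_DB = {
    ("open_palm", "fist"): "Hello",
    ("two", "open_palm"): "Thank you",
}

def classify_frame(num_fingers, thumb_extended):
    """Map the detected hand features to a sign label using simple rules."""
    return SIGN_MAP.get((num_fingers, thumb_extended), "unknown")

def translate_sequence(labels):
    """Match the captured label sequence against the phrase database."""
    return PHRASE_DB.get(tuple(labels), " ".join(labels))

# Example: two captured frames recognized as 'open_palm' then 'fist'
labels = [classify_frame(5, True), classify_frame(0, False)]
print(translate_sequence(labels))   # -> "Hello"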

Tools Used: OpenCV, Python 3, Matplotlib, MySQL, MATLAB.

Application: The main application of this project is to help the speech-impaired communicate with those who do not know sign language. Because the model is simple, it can also be implemented on smartphones, which we plan to do in the future.

REFERENCE: The Scientific World Journal, Volume 2014 (2014), Article ID 267872. http://dx.doi.org/10.1155/2014/267872

College: National Institute of Technology, Agartala

Faculty Advisor:
Mr. Parthasarathi De, Assistant Professor, NIT Agartala
Email: parthasarathide76@gmail.com / Contact: 08731812590

TEAM MEMBERS:
1. SATYA PRAKASH
2. KAPIL KUMAR AHUJA
3. RAHUL THAKUR
4. VAMSI KRISHNA PENDYALA

