Sign language is a language that uses hand movements and gestures to communicate, primarily with and among people who are deaf or hard of hearing. This paper proposes a system to recognise hand gestures using a deep learning algorithm, the Convolutional Neural Network (CNN), to process images and predict the gestures. The paper presents recognition of the 26 hand gestures of the American Sign Language alphabet. The proposed system contains the following modules: pre-processing and feature extraction, training and testing of the model, and sign-to-text conversion. Different CNN architectures and pre-processing techniques, such as skin masking and Canny edge detection, were designed and tested on our dataset to obtain better recognition accuracy.
Sign language is the most basic and natural way for hearing-impaired people to communicate, yet society often neglects and isolates them. To bridge the gap between hearing and hearing-impaired individuals, one must know sign language. Our aim is to develop an efficient system that helps overcome this communication barrier.
The focus of this work is to recognise American Sign Language using an Android app. The objective is to devise algorithms efficient enough to run on a mobile platform. The system was developed using the 26 hand gestures of American Sign Language, with a dataset containing approximately 3000 images per alphabet. The first task was to pre-process the images obtained for this project.
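As a concrete illustration of this pre-processing step, the sketch below applies an HSV-based skin mask to an RGB frame, one of the techniques named in the abstract. It is written in plain NumPy to stay self-contained (in practice a library such as OpenCV would be used), and the threshold values are illustrative assumptions, not tuned parameters from this work.

```python
import numpy as np

def rgb_to_hsv(img):
    """Convert an RGB float image in [0, 1] to HSV (H in [0, 360))."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    maxc = img.max(axis=-1)
    minc = img.min(axis=-1)
    v = maxc
    delta = maxc - minc
    s = np.where(maxc > 0, delta / np.where(maxc == 0, 1, maxc), 0)
    # Hue is piecewise, depending on which channel is the maximum.
    h = np.zeros_like(maxc)
    nz = delta > 0
    rmax = nz & (maxc == r)
    gmax = nz & (maxc == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    d = np.where(nz, delta, 1)  # avoid division by zero on grey pixels
    h[rmax] = (60 * ((g - b) / d) % 360)[rmax]
    h[gmax] = (60 * ((b - r) / d) + 120)[gmax]
    h[bmax] = (60 * ((r - g) / d) + 240)[bmax]
    return np.stack([h, s, v], axis=-1)

def skin_mask(img, h_max=50, s_min=0.15, s_max=0.9):
    """Binary mask of skin-like pixels (illustrative thresholds)."""
    hsv = rgb_to_hsv(img)
    h, s = hsv[..., 0], hsv[..., 1]
    return (h <= h_max) & (s >= s_min) & (s <= s_max)

# A 1x2 toy "image": one skin-toned pixel and one blue pixel.
img = np.array([[[0.9, 0.6, 0.5], [0.1, 0.2, 0.9]]])
print(skin_mask(img))  # [[ True False]]
```

The mask can then be used to zero out background pixels before feature extraction, so the network sees only the hand region.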
Das, A., Gawde, S., Suratwala, K., & Kalbande, D., as well as Rao, G. A., Syamala, K., Kishore, P. V. V., & Sastry, have performed fundamental research on sign language datasets using the CNN algorithm, achieving satisfactory results in training and testing. The latter proposed system processes selfie sign language images and is tested using stochastic pooling. Mahesh Kumar N B uses MATLAB to perform feature extraction on the dataset and then applies Linear Discriminant Analysis (LDA) for gesture recognition. Anup Kumar, Karun Thankachan and Mevin M. Dominic have developed a system using a Support Vector Machine (SVM); their images are pre-processed using skin segmentation, and appropriate features are extracted from the resulting images. This pre-processing is completed by converting the image to grey scale and then performing HSV thresholding.
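The LDA-based recognition mentioned above can be sketched as follows, assuming scikit-learn; the feature vectors and class labels here are synthetic stand-ins for extracted gesture features, not data from the cited work.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted gesture features: two well-separated
# clusters representing two hand-gesture classes.
X_a = rng.normal(loc=0.0, scale=0.5, size=(50, 8))
X_b = rng.normal(loc=3.0, scale=0.5, size=(50, 8))
X = np.vstack([X_a, X_b])
y = np.array([0] * 50 + [1] * 50)

# LDA projects the features onto directions that maximise class
# separation, then classifies by the nearest class in that space.
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# A new point near the second cluster should be assigned class 1.
print(lda.predict(np.full((1, 8), 3.0))[0])
```

The same fit/predict pattern would apply with real gesture features, only with 26 classes instead of two.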
After pre-processing, the images were fed into the Convolutional Neural Network model for training and testing. Various CNN architectures were designed and tested on our dataset to identify the best architecture for recognising these hand gestures. A mobile application for the proposed system is in progress; it is being developed using Flutter.
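The building blocks shared by all such CNN architectures can be illustrated with a minimal NumPy sketch of a single convolution → ReLU → max-pooling stage; this is a toy forward pass on a hand-made edge pattern, not the trained architecture used in this work.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling with a size x size window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A vertical-edge kernel applied to a toy 6x6 "image" whose right half
# is bright: the convolution responds only along the vertical boundary.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])
feat = max_pool(relu(conv2d(img, kernel)))
print(feat)  # [[0. 2.] [0. 2.]]
```

Stacking several such stages, followed by fully connected layers and a 26-way output, yields the kind of architecture compared in our experiments.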
In the first step, the input is obtained as an image or captured from a video. The captured image is then pre-processed and forwarded to the CNN model, which evaluates the loaded image against the trained model and predicts the sign with the most probable label. The CNN model cannot predict accurately when unprocessed images are fed to it directly; the main issue is its inability to properly cancel out the background. Hence, the images must be processed separately using various image processing techniques.
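The final step of this pipeline, turning the model's 26 output scores into the most probable letter, can be sketched as follows; the scores below are a made-up stand-in for the CNN's actual output.

```python
import numpy as np

LABELS = [chr(ord('A') + i) for i in range(26)]  # the 26 ASL letters

def softmax(scores):
    """Convert raw scores to probabilities (numerically stable)."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def predict_letter(scores):
    """Pick the most probable label from 26 output scores."""
    probs = softmax(np.asarray(scores, dtype=float))
    idx = int(np.argmax(probs))
    return LABELS[idx], float(probs[idx])

# Made-up scores in which index 2 ('C') dominates.
scores = np.zeros(26)
scores[2] = 5.0
letter, prob = predict_letter(scores)
print(letter)  # C
```

The predicted letter is then handed to the sign-to-text module for display in the app.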