Description
Sign language recognition is performed using a CNN (Convolutional Neural Network) model trained on a dataset containing 41 classes: 26 classes for alphabets, 6 classes for words, and the remaining for numbers. A custom dataset has been generated, and the code for it is available in the CNN model.ipynb file.
A sample picture of the generated dataset is shown below.
A picture of a prediction on a local image:
CNN model specification
A Sequential model has been used, with:
- three Conv2D layers,
- three MaxPool2D layers,
- three Dense layers with ReLU activation function.
The output Dense layer uses a softmax activation function with 35 neurons. The Adam optimizer with the categorical cross-entropy loss function is used, and the model is trained for 10 epochs.
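The architecture described above can be sketched in Keras roughly as follows. The filter counts, kernel sizes, dense-layer widths, and the 128x128 grayscale input shape are illustrative assumptions, not the repository's actual values; see the CNN model.ipynb file for the real configuration.

```python
# Hypothetical sketch of the described architecture; exact filter
# counts, kernel sizes, and input shape are assumptions.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPool2D

model = Sequential([
    # Three Conv2D layers, each followed by a MaxPool2D layer
    Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 1)),
    MaxPool2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPool2D((2, 2)),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPool2D((2, 2)),
    Flatten(),
    # Three Dense layers with ReLU activation
    Dense(256, activation="relu"),
    Dense(128, activation="relu"),
    Dense(64, activation="relu"),
    # Output layer: softmax over the 35 output neurons
    Dense(35, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# Training would then look like:
# model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))
```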
Structure of the repo
The repository contains the following structure.
- README.md - A markdown file containing details about the project.
- Main scripts - This folder contains the main script files, which can be used either to generate the model or to run predictions with it.
- Mytrials - This folder contains the files that were used while training the model and for testing the code.
Contribution
Do you have suggestions for improving this project?
Open an issue to share them.
Here are a few things you can work on:
- Improving the model’s accuracy.
- Trying other algorithms such as SVM or k-nearest neighbours.
- Building a web version of the project (for this, see the application.py file).
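As a starting point for the k-nearest-neighbour suggestion above, here is a minimal baseline sketch that classifies flattened image vectors by majority vote among the k closest training examples. The `knn_predict` helper and the toy 2-D points are hypothetical illustrations, not part of the repository.

```python
# Minimal k-nearest-neighbour baseline sketch (hypothetical helper).
from collections import Counter

import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Return the majority label of the k training vectors nearest to query."""
    dists = np.linalg.norm(train_x - query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Toy usage: 2-D points standing in for flattened sign images
train_x = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
train_y = ["A", "A", "B", "B"]
print(knn_predict(train_x, train_y, np.array([0.05, 0.0])))  # → A
```

For real images, each sample would be flattened (e.g. a 128x128 grayscale frame becomes a 16384-dimensional vector) before being passed in.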
Feel free to modify the project and open a pull request.
You can also refer to the article Sign language recognition using Python and OpenCV.