American Sign Language Validator
This is a prototype for real-time ASL alphabet recognition. It provides an automatic feedback mechanism for ASL learners using only a web camera.
Currently it supports only the static signs of the ASL alphabet.
- Python https://www.python.org/
- Conda https://anaconda.cloud/getting-started-with-anaconda-individual-edition
- Jupyter Notebook https://jupyter.org/
- TensorFlow Object Detection API https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html
- TensorFlow Records https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#create-tensorflow-records
- OpenCV https://opencv.org/
- NumPy https://numpy.org/
- MediaPipe https://google.github.io/mediapipe/
- Scikit-Learn https://scikit-learn.org/stable/
- Matplotlib https://matplotlib.org/
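Before running the notebooks, it can help to verify that the dependencies above are importable in the active environment. A minimal sketch (the import names for the packages are assumptions based on their usual aliases, e.g. `cv2` for OpenCV and `sklearn` for Scikit-Learn):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Assumed import names for the dependencies listed above.
required = ["cv2", "numpy", "mediapipe", "sklearn", "matplotlib", "tensorflow"]
print(missing_packages(required))  # empty list means the environment is ready
```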
- Activate the TensorFlow conda environment by running the following command in the terminal: conda activate tensorflow
- Launch Jupyter Notebook by running: jupyter notebook
- After a new browser window opens, open one of the scripts: asl-validator-initial.ipynb or asl-validator-DEV-vNext.ipynb
- Run the cells and consult the comments; skip section 5 "Collect Data" if data collection is not needed
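To give a learner useful real-time feedback, per-frame predictions usually need to be smoothed so that a single noisy frame does not flip the reported sign. The following is an illustrative sketch only; the window size, agreement threshold, and function names are assumptions, not values taken from the notebooks:

```python
from collections import Counter, deque

def stable_prediction(window, min_agreement=0.8):
    """Return the sign that dominates the recent prediction window, else None.

    The 0.8 agreement threshold is an illustrative choice, not a value
    from the repository's notebooks.
    """
    if not window:
        return None
    sign, count = Counter(window).most_common(1)[0]
    return sign if count / len(window) >= min_agreement else None

# Keep the last 10 per-frame predictions and report only stable ones.
recent = deque(maxlen=10)
for frame_prediction in ["A", "A", "A", "A", "A", "A", "A", "A", "B", "A"]:
    recent.append(frame_prediction)
print(stable_prediction(recent))  # 9 of 10 frames agree, so prints "A"
```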
- Jupyter notebook script(s)
- [data] folder that contains the dataset; each sign has a separate folder with its sequences/videos, and each sequence folder contains a NumPy file for each frame
- [logs] folder that contains information on the training process, generated when the script runs; open it with TensorBoard using the command 'tensorboard --logdir "asl-validator/logs"'
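Given the layout described above (one folder per sign, one subfolder per sequence, one NumPy file per frame), a sequence can be reassembled into a single array roughly as follows. This is a sketch under assumptions: the frame filenames (`0.npy`, `1.npy`, ...) and the frame count are guesses, not confirmed by the repository.

```python
import os

import numpy as np

def load_sequence(data_dir, sign, sequence, num_frames=30):
    """Stack the per-frame .npy files of one sequence into a 2-D array.

    Assumes frames are saved as '<data_dir>/<sign>/<sequence>/<i>.npy';
    the naming scheme and default frame count are illustrative guesses.
    """
    frames = [
        np.load(os.path.join(data_dir, sign, str(sequence), f"{i}.npy"))
        for i in range(num_frames)
    ]
    return np.stack(frames)  # shape: (num_frames, keypoints_per_frame)
```

Stacking the frames this way yields the fixed-shape `(frames, keypoints)` input that sequence models typically expect.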
*For more information on how to work with Jupyter Notebook, consult https://jupyter.org/