AbuZar-Babar/SignDataCollection
# SignSpeak

A small data-collection and sign-language recognition project. This repository contains code to collect gesture data, train a model, and run realtime inference.

Repository layout (important files):

- `data/` - collected `.npy` feature files (excluded from git by default)
- `config/actions_config.py` - action / label configuration
- `models/` - saved models and encoders (model files are excluded by `.gitignore`)
- `src/` - application code
  - `collect_data.py` / `collect_data_gui.py` - scripts to collect and label data
  - `train_model.py` - training script (produces a `.h5` model)
  - `realtime_inference.py` - script for realtime camera inference
- `TODO.md` - project TODOs
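The exact contents of `config/actions_config.py` are not shown here, but a typical action/label configuration for this kind of project might look like the following sketch. The action names, sequence length, and feature size below are illustrative assumptions, not the repository's actual values:

```python
# Hypothetical sketch of config/actions_config.py.
# All constants here are assumptions for illustration.

# Gesture classes the collector records and the model predicts.
ACTIONS = ("hello", "thanks", "yes", "no")

# Frames captured per gesture sample, and features per frame
# (e.g. flattened hand-landmark coordinates).
SEQUENCE_LENGTH = 30
FEATURES_PER_FRAME = 126

# Map each action label to an integer class index for training,
# and back again for displaying predictions.
LABEL_TO_INDEX = {action: i for i, action in enumerate(ACTIONS)}
INDEX_TO_LABEL = {i: action for action, i in LABEL_TO_INDEX.items()}
```

Keeping this mapping in one module lets the collection, training, and inference scripts all agree on the label ordering.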

## Quick start

1. Create and activate a virtual environment (recommended):

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Collect data (example):

   ```bash
   python src/collect_data.py
   # or use the GUI
   python src/collect_data_gui.py
   ```

4. Train the model:

   ```bash
   python src/train_model.py
   ```

5. Run realtime inference (requires a camera and a trained model in `models/`):

   ```bash
   python src/realtime_inference.py
   ```
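As a sketch of the data flow between the collection and training steps: each recorded gesture presumably ends up as a `.npy` array under `data/`, which the training script can load back. A minimal round-trip with NumPy is shown below; the `data/<action>/<sample_id>.npy` layout and the array shape are assumptions for illustration, not necessarily what `collect_data.py` actually produces:

```python
import os
import tempfile

import numpy as np

# Assumed layout: data/<action>/<sample_id>.npy, one array per gesture sample.
# Shape (frames, features) is illustrative, e.g. 30 frames x 126 keypoints.
def save_sample(root: str, action: str, sample_id: int, features: np.ndarray) -> str:
    """Write one gesture sample to <root>/<action>/<sample_id>.npy and return its path."""
    action_dir = os.path.join(root, action)
    os.makedirs(action_dir, exist_ok=True)
    path = os.path.join(action_dir, f"{sample_id}.npy")
    np.save(path, features)
    return path

def load_sample(path: str) -> np.ndarray:
    """Read a gesture sample back for training."""
    return np.load(path)

# Round-trip one fake sample in a temporary stand-in for data/.
with tempfile.TemporaryDirectory() as root:
    sample = np.zeros((30, 126), dtype=np.float32)
    path = save_sample(root, "hello", 0, sample)
    restored = load_sample(path)
    assert restored.shape == (30, 126)
```

Grouping samples by action directory keeps labeling implicit in the path, so the training script can infer the class from the parent folder name.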

## Notes

- Large binary files (models, `.npy` datasets) are ignored via `.gitignore`. Store them outside the repo, or use Git LFS if you want them in the remote.
- If you need a GPU-enabled TensorFlow build, install the appropriate `tensorflow` package for your platform.
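If you do opt to keep models and datasets in the remote via Git LFS, the tracking rules live in `.gitattributes`. After running `git lfs track "*.npy" "*.h5"`, it would contain entries like the following (the patterns are assumptions based on the file types mentioned above):

```
# .gitattributes -- hypothetical LFS patterns for this project's large binaries
*.npy filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
```

Remember to commit `.gitattributes` itself so collaborators' clones fetch the LFS objects correctly.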

## Contributing

Feel free to open issues or pull requests. Adding CI (GitHub Actions) to run linting and unit tests would be a welcome contribution.

## License

Add a LICENSE file if you want to set a license for this project.
