Jan 31st, DAH2, India
Slides for the session are available here
- Install virtualenv with the following command

$ pip install virtualenv

On Raspberry Pi, run this instead:

$ sudo /usr/bin/easy_install virtualenv

- Create a virtual environment

$ virtualenv -p python3 venv3

- Activate the virtual environment

$ source venv3/bin/activate

- For OpenCV to work on Raspberry Pi, you need to run the following commands:

$ sudo apt-get install libatlas-base-dev
$ sudo apt-get install libjasper-dev
$ sudo apt-get install libqtgui4
$ sudo apt-get install python3-pyqt5

This beginner exercise will get you familiar with using the OpenCV framework.
- make sure you are inside the folder exercise1

$ python image_read_write.py

Here we will learn to read a live feed from the camera. We will also process frames one by one.
- make sure you are inside the folder exercise2

$ python camera_capture.py

Here we will use pre-trained classifiers for face and eye detection on a user-defined image.
- make sure you are inside the folder exercise3

$ python face_detect.py

Here we will use a trained model to see how well our gestures are being recognized. This model supports 3 gestures.
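Recognition over 3 gestures comes down to the trained model emitting one score per gesture and the code picking the highest. A toy NumPy sketch of that final step (the scores and labels here are made up, not the model's actual classes):

```python
import numpy as np

GESTURES = ["gesture_0", "gesture_1", "gesture_2"]  # hypothetical labels

def classify(scores):
    """Map a model's 3-way output scores to a gesture label and confidence."""
    probs = np.exp(scores - np.max(scores))   # numerically stable softmax
    probs /= probs.sum()
    return GESTURES[int(np.argmax(probs))], float(np.max(probs))

label, confidence = classify(np.array([0.1, 2.5, 0.3]))
print(label)  # gesture_1
```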
- make sure you are inside the folder exercise4

$ cd exercise4

- Now open a python terminal

$ python

- Start executing the model

>>> import Video_Handler
>>> Video_Handler.start_gesture_recognition()

This code helps you recognize and classify different emojis. As of now, only hand emojis are supported.
This project tries to understand user feedback from the gestures users make with their hands. Feel free to train the model on your favorite hand gestures and see if it can detect them later on when your friends make the same gestures.
- Filters to detect the hand.
- CNN for training the model.

Network used: Convolutional Neural Network
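The actual network lives in the exercise code; as a refresher, the core CNN building blocks (convolution, ReLU, max-pooling) can be written out in plain NumPy like this:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most DL frameworks)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Zero out negative activations."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the max over non-overlapping size x size windows."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# One conv -> relu -> pool stage on a toy 6x6 "image"
feat = max_pool(relu(conv2d(np.ones((6, 6)), np.ones((3, 3)))))
print(feat.shape)  # (2, 2)
```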
- make sure you are inside the folder exercise4

$ cd exercise4

- Now install the dependencies from requirements.txt. Try,

$ pip install -r requirements.txt

If you face issues installing, consult us.

- Now open a python terminal

$ python

- Execute the following commands in the python terminal

>>> import Video_Handler

Before going to the next step, please ensure that the folder "gestures" is empty.
Here you record the first gesture:

>>> Video_Handler.save_gestures(0)

Here you record the second gesture:

>>> Video_Handler.save_gestures(1)

Now you create a CSV file corresponding to the gestures:

>>> Video_Handler.createCSV_from_gestures()

After execution of the above line, a file "train_foo.csv" should be created.
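One plausible way such a CSV gets built is by flattening each saved gesture image into one row of pixel values prefixed by its label. A hypothetical sketch (the function, file, and folder names are illustrative, not the exercise's actual logic):

```python
import csv
import numpy as np

def images_to_csv(labeled_images, path):
    """Write one CSV row per image: label followed by flattened pixel values."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for label, img in labeled_images:
            writer.writerow([label] + img.flatten().tolist())

# Two fake 4x4 grayscale "gesture" images, labels 0 and 1
samples = [(0, np.zeros((4, 4), dtype=np.uint8)),
           (1, np.full((4, 4), 255, dtype=np.uint8))]
images_to_csv(samples, "train_demo.csv")

with open("train_demo.csv") as f:
    rows = list(csv.reader(f))
print(len(rows), len(rows[0]))  # 2 17  (label + 16 pixels per row)
```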
Now train your model:

>>> Video_Handler.train(2)

Because you trained with two gestures, the parameter passed is 2.

Now see your model in action:

>>> Video_Handler.start_gesture_recognition()

Credits and references:
