
# Tech Design Forum 2019

Jan 31st, DAH2, India

## Speakers

### Presentation

Slides for the session are available here.

## Set up your environment

1. Install virtualenv:

```shell
$ pip install virtualenv
```

On a Raspberry Pi, run this instead:

```shell
$ sudo /usr/bin/easy_install virtualenv
```

2. Create a virtual environment:

```shell
$ virtualenv -p python3 venv3
```

3. Activate the virtual environment:

```shell
$ source venv3/bin/activate
```

4. For OpenCV to work on a Raspberry Pi, you also need to install these system libraries:

```shell
sudo apt-get install libatlas-base-dev
sudo apt-get install libjasper-dev
sudo apt-get install libqtgui4
sudo apt-get install python3-pyqt5
```

## Exercise 1: Reading and writing images

### Description

This beginner exercise gets you familiar with the OpenCV framework.

1. Make sure you are inside the folder `exercise1`.
2. Run the script:

```shell
$ python image_read_write.py
```

## Exercise 2: Capture camera feed and process frames

### Description

Here we will learn to read a live feed from the camera and process its frames one by one.

1. Make sure you are inside the folder `exercise2`.
2. Run the script:

```shell
$ python camera_capture.py
```

## Exercise 3: Face detection

### Description

Here we will use pre-trained classifiers for face and eye detection on a user-defined image.

1. Make sure you are inside the folder `exercise3`.
2. Run the script:

```shell
$ python face_detect.py
```

## Exercise 4: Gesture Detector - automated feedback system based on gestures

### Description

Here we will use a pre-trained model to see how well our gestures are recognized. The model supports three gestures:

- Gesture 1: index finger up
- Gesture 2: sign of the horns
- Gesture 3: three fingers up

1. Change into the folder `exercise4`:

```shell
$ cd exercise4
```

2. Open a Python terminal:

```shell
$ python
```

3. Start the gesture recognizer:

```python
>>> import Video_Handler
>>> Video_Handler.start_gesture_recognition()
```

## Exercise 5: Gesture Detector - train and recognize your own gestures

This code helps you recognize and classify different emojis. For now, only hand emojis are supported.

### Description

This project tries to understand user feedback from the gestures shown with the hands. Feel free to train the model on your favorite hand gestures and see whether it can detect them later when your friends make the same gestures.

### Functionalities

1. Filters to detect the hand.
2. A CNN for training the model.

### Python Implementation

Network used: Convolutional Neural Network

1. Make sure you are inside the folder `exercise4`:

```shell
$ cd exercise4
```

2. Install the dependencies from `requirements.txt`:

```shell
$ pip install -r requirements.txt
```

If you face issues installing, consult us.

3. Open a Python terminal:

```shell
$ python
```

4. Execute the following commands in the Python terminal:

```python
>>> import Video_Handler
```

Before going to the next step, please ensure that the folder `gestures` is empty.

Record the first gesture:

```python
>>> Video_Handler.save_gestures(0)
```

Record the second gesture:

```python
>>> Video_Handler.save_gestures(1)
```

Create a CSV file corresponding to the gestures:

```python
>>> Video_Handler.createCSV_from_gestures()
```

After the above line executes, a file `train_foo.csv` should be created.
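`createCSV_from_gestures` is the repo's own helper, but the idea behind such a CSV is common: one row per captured gesture image, holding a class label followed by the flattened grayscale pixels. A self-contained sketch of that idea (the 4x4 image size and `train_demo.csv` file name are illustrative, not the repo's actual format):

```python
import csv
import numpy as np

def images_to_csv(labeled_images, path):
    """Write one CSV row per image: the label followed by flattened pixels."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for label, img in labeled_images:
            writer.writerow([label] + img.flatten().tolist())

# Two fake 4x4 grayscale "gesture" images standing in for captured frames.
samples = [(0, np.zeros((4, 4), dtype=np.uint8)),
           (1, np.full((4, 4), 255, dtype=np.uint8))]
images_to_csv(samples, "train_demo.csv")

with open("train_demo.csv") as f:
    rows = list(csv.reader(f))
print(len(rows), len(rows[0]))  # 2 rows, 17 columns (1 label + 16 pixels)
```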

Now train your model:

```python
>>> Video_Handler.train(2)
```

The parameter passed is 2 because you trained with two gestures.

Now see your model in action:

```python
>>> Video_Handler.start_gesture_recognition()
```
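The network behind `Video_Handler.train` is the repo's own; as a hedged sketch of what a comparable small Keras CNN for two gesture classes could look like (the 50x50 grayscale input shape and every layer size here are assumptions for illustration, not the repo's actual architecture):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A small CNN for 2-class gesture classification (sizes are assumptions).
model = keras.Sequential([
    layers.Input(shape=(50, 50, 1)),          # 50x50 grayscale frames
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),    # one output unit per gesture
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A forward pass on a blank frame just to show the output shape.
preds = model.predict(np.zeros((1, 50, 50, 1), dtype=np.float32))
print(preds.shape)  # (1, 2): one probability per gesture class
```

Training with `model.fit` on the rows of the gesture CSV, reshaped back into images, is the step `Video_Handler.train(2)` performs for you.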

## Credits and references

## About

Talk at Tech Design Forum conducted at DBS Hyderabad, 31st Jan, 2019.

## License

MIT - see LICENSE.md and LICENSE.txt.
