CV-Project-1-BUSSIERE-BEAUD

This project aims to determine the heart rate of a person by analysing a one-minute-long video taken by a computer camera.

FIRST STEP - Get the video

To get the video, we need an ordinary camera (such as a laptop camera) and record a roughly 1-minute-long video of ourselves, in this case: Rémi. Then, we use the ffmpeg package* to convert this video into a 15 fps, 640x480 video, as shown below.

*sudo apt install ffmpeg on Debian-based distributions
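As an illustration, a possible ffmpeg invocation could look like this (the file names input.mp4 and output.mp4 are just placeholders for the recording and the converted video):

```
ffmpeg -i input.mp4 -vf scale=640:480 -r 15 output.mp4
```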

SECOND STEP - Get the ROI position

The ROI (Region Of Interest) is important here because we only need the average color of the person's head; the surroundings are considered noise.

To get the ROI, we use a simple solution provided by OpenCV* in Python called Haar cascades.

*pip install opencv-python

The Haar cascade detector returns 4 values (x, y, width, height) describing a rectangle that surrounds the person's head. Haar cascades are pre-trained machine-learning models developed for OpenCV, which you can find here: https://github.com/opencv/opencv/tree/master/data/haarcascades
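A minimal sketch of this step, using the haarcascade_frontalface.xml file provided in this repository (the video file name output.mp4 is only an assumption carried over from the previous step), could be:

```python
import cv2

# Load the pre-trained Haar cascade for frontal faces
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface.xml")

cap = cv2.VideoCapture("output.mp4")
ret, frame = cap.read()

# Haar cascades work on grayscale images
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) rectangle per detected face
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```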

THIRD STEP - Calculate the average color

Once we have the coordinates, we compute the average of the red components of every pixel inside the rectangle, and we repeat this for blue and green. The output is therefore one color per frame.
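A minimal sketch of this computation (reusing the x, y, w, h rectangle from the detection step and the assumed output.mp4 file name) could be:

```python
import cv2

cap = cv2.VideoCapture("output.mp4")
averages = []

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Crop the ROI found by the Haar cascade; OpenCV frames are in BGR order
    roi = frame[y:y + h, x:x + w]
    # Mean of each channel over all ROI pixels -> one color per frame
    mean_b, mean_g, mean_r = roi.reshape(-1, 3).mean(axis=0)
    averages.append([mean_r, mean_g, mean_b])
```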

You can find a picture of the Haar cascade algorithm in the picture section.

We then export a .txt file with all the average colors, so they can be used in an Octave script.
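For instance (assuming the per-frame averages were collected in a list called averages, as in the sketch above, and an illustrative file name averages.txt), the export could be as simple as:

```python
import numpy as np

# One row per frame (R, G, B), readable from Octave with load("averages.txt")
np.savetxt("averages.txt", np.array(averages), fmt="%.4f")
```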

PUT YOUR CODE HERE, NATHAN

POSSIBLE ALTERNATIVES

In order to find the best possible ROI, we need a better algorithm than Haar cascades. That is why we used the MediaPipe* library, which is a much better detector; it is also a pre-trained model, but much more complete. A picture of the results obtained with MediaPipe, where the ROI is more precise, is also provided.

*pip install mediapipe ; more information here: https://github.com/google/mediapipe
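A minimal sketch of the MediaPipe-based detection (still assuming the illustrative output.mp4 file name; MediaPipe returns the bounding box in relative coordinates, which are converted to pixels here) could be:

```python
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection

cap = cv2.VideoCapture("output.mp4")
ret, frame = cap.read()
frame_h, frame_w = frame.shape[:2]

with mp_face.FaceDetection(min_detection_confidence=0.5) as detector:
    # MediaPipe expects RGB images, while OpenCV delivers BGR
    results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.detections:
        box = results.detections[0].location_data.relative_bounding_box
        x, y = int(box.xmin * frame_w), int(box.ymin * frame_h)
        w, h = int(box.width * frame_w), int(box.height * frame_h)
        roi = frame[y:y + h, x:x + w]
```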