# Head pose estimation

Real time human head pose estimation using TensorFlow and OpenCV.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
The code was tested on Ubuntu 20.04.
This repository already provides a pre-trained model for facial landmark detection. Just `git clone` and you are good to go.
```shell
# From your favorite development directory:
git clone https://github.com/yinguobing/head-pose-estimation.git
```
A video file or a webcam index should be assigned through arguments. If no source is provided, the built-in webcam will be used by default.
### With a video file

For any video format that OpenCV supports:

```shell
python3 estimate_head_pose.py --video /path/to/video.mp4
```
### With a webcam

The webcam index should be provided:

```shell
python3 estimate_head_pose.py --cam 0
```
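A minimal sketch of how this source selection might be handled with `argparse`. The flag names follow the commands above; the function name and fallback logic are illustrative, not necessarily the repository's actual implementation:

```python
import argparse

def parse_source(argv=None):
    """Return a video path or webcam index from command-line flags."""
    parser = argparse.ArgumentParser(description="Head pose estimation")
    parser.add_argument("--video", type=str, default=None,
                        help="path to a video file")
    parser.add_argument("--cam", type=int, default=None,
                        help="webcam index")
    args = parser.parse_args(argv)
    # Prefer an explicit video file, then an explicit webcam index;
    # otherwise fall back to the built-in webcam (index 0).
    if args.video is not None:
        return args.video
    if args.cam is not None:
        return args.cam
    return 0
```

The returned value can be passed straight to `cv2.VideoCapture`, which accepts either a file path or a device index.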
## How it works

There are three major steps:

1. **Face detection.** A face detector provides a bounding box containing a human face. The box is then expanded and transformed into a square to suit the needs of later steps.
2. **Facial landmark detection.** A pre-trained deep learning model takes the face image as input and outputs 68 facial landmarks.
3. **Pose estimation.** Given the 68 facial landmarks, the pose is computed by solving a PnP (Perspective-n-Point) problem.
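The PnP step inverts the pinhole projection: it searches for the rotation and translation that best map the 3D face model points onto the detected 2D landmarks. A NumPy sketch of the forward projection that PnP inverts; all numbers (intrinsics, pose, model points) are made up for illustration:

```python
import numpy as np

def project(points_3d, rotation, tvec, camera_matrix):
    """Project 3D model points to image pixels with a pinhole camera."""
    cam = points_3d @ rotation.T + tvec    # model -> camera coordinates
    uvw = cam @ camera_matrix.T            # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide by depth

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                     # identity rotation: face toward the camera
t = np.array([0.0, 0.0, 100.0])   # 100 units in front of the camera

model = np.array([[ 0.0, 0.0, 0.0],    # e.g. a nose-tip landmark
                  [10.0, 0.0, 0.0]])   # a point 10 units to its side
pixels = project(model, R, t, K)
# The point on the optical axis lands on the principal point (320, 240).
```

In practice this inversion is a single call to OpenCV's `cv2.solvePnP`, which takes the 3D model points, the 2D landmarks, and the camera matrix, and returns the rotation and translation vectors.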
The landmarks are detected frame by frame, which makes the pose unstable. A Kalman filter is used to solve this problem; you can draw the original pose to observe the difference.
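To illustrate the stabilization idea, here is a minimal scalar Kalman filter in the spirit of `stabilizer.py`. The class name, constant-state model, and noise values are illustrative assumptions; the repository's actual filter may use a richer state and different parameters:

```python
class ScalarStabilizer:
    """1D Kalman filter: smooths a noisy scalar measurement stream."""

    def __init__(self, process_noise=1e-3, measure_noise=1e-1):
        self.x = 0.0            # state estimate
        self.p = 1.0            # estimate covariance
        self.q = process_noise  # how much the true value may drift per frame
        self.r = measure_noise  # how noisy each measurement is

    def update(self, measurement):
        # Predict: constant-state model, covariance grows by process noise.
        self.p += self.q
        # Correct: blend the prediction with the new measurement.
        k = self.p / (self.p + self.r)       # Kalman gain in [0, 1]
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
        return self.x

stab = ScalarStabilizer()
noisy = [1.0, 1.2, 0.8, 1.1, 0.9]
smoothed = [stab.update(m) for m in noisy]
```

Each pose angle (or landmark coordinate) would get its own filter instance, trading a small lag for a much steadier pose.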
## Retrain the model
This project is licensed under the MIT License - see the LICENSE file for details.
Yin Guobing (尹国冰) - yinguobing
The pre-trained TensorFlow model file is trained with various public data sets which have their own licenses. Please refer to them before using this code.
- 300-W: https://ibug.doc.ic.ac.uk/resources/300-W/
- 300-VW: https://ibug.doc.ic.ac.uk/resources/300-VW/
- LFPW: https://neerajkumar.org/databases/lfpw/
- HELEN: http://www.ifp.illinois.edu/~vuongle2/helen/
- AFW: https://www.ics.uci.edu/~xzhu/face/
- IBUG: https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/
The 3D face model comes from OpenFace; you can find the original file here.
The built-in face detector comes from OpenCV: https://github.com/opencv/opencv/tree/master/samples/dnn/face_detector