PROPOSAL
Predicting Heart Rate from a Face Webcam
This student project aims to develop a method for predicting health metrics, such as heart rate, in real time using a conventional webcam capturing the user's face. The project will integrate face detection, image and video signal processing, and spatial-temporal neural networks to estimate heart rate by analyzing subtle color variations in the face caused by blood flow. The final deliverable will be a demonstrative GitHub repository capable of real-time heart rate prediction.
Pipeline Overview
- Face Detection: The system will first detect the face in the video feed using a reliable face detection model (e.g., Haar cascades, MTCNN, or YOLO).
- Signal Processing: Once the face is detected, the video frames will be preprocessed to isolate the facial regions where blood-flow-related color changes are most prominent.
- Heart Rate Estimation: The processed video data will then be fed into a spatial-temporal neural network designed to capture both spatial features of the facial region and temporal changes that reflect pulse-driven variations, producing a heart rate estimate.
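To make the signal-processing and estimation steps concrete, here is a minimal sketch of the classical remote-photoplethysmography (rPPG) idea the pipeline builds on: average the green channel over the facial region in each frame, then locate the dominant frequency in the plausible heart-rate band. The function name and the synthetic pulse signal below are illustrative assumptions, not part of the proposed deliverable; a real run would use per-frame means from detected face crops instead of the simulated signal.

```python
import numpy as np

def estimate_bpm(green_means, fps):
    """Estimate heart rate (BPM) from a 1-D signal of per-frame
    mean green-channel intensities via an FFT peak search."""
    signal = green_means - np.mean(green_means)        # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequency axis in Hz
    power = np.abs(np.fft.rfft(signal)) ** 2
    # Restrict to the plausible human heart-rate band (0.7-4 Hz = 42-240 BPM).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0

# Synthetic check: a 72 BPM (1.2 Hz) pulse sampled at 30 fps for 20 seconds.
fps, duration, pulse_hz = 30, 20, 1.2
t = np.arange(fps * duration) / fps
fake_signal = (0.5 * np.sin(2 * np.pi * pulse_hz * t)
               + np.random.default_rng(0).normal(0, 0.1, t.size))
print(round(estimate_bpm(fake_signal, fps)))  # → 72
```

The learned model in the final step replaces this hand-crafted frequency analysis, but the same band-limited-pulse assumption remains a useful sanity check and baseline.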
Several publicly available datasets and potential model architectures could be used in this project (see references below). The final pipeline may evolve as the project progresses. Candidates must have experience with deep learning frameworks (PyTorch or TensorFlow) and the HPC resources at ITU.
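As a starting point for the spatial-temporal architecture discussion, a sketch in PyTorch of the kind of model the pipeline envisions is shown below: 3-D convolutions over a short face-video clip, regressing a single heart-rate value. The layer sizes and clip dimensions are placeholder assumptions for illustration, not one of the published architectures in the references.

```python
import torch
import torch.nn as nn

class SpatioTemporalHRNet(nn.Module):
    """Illustrative 3-D CNN: convolutions over space and time in a
    face-video clip, followed by a scalar heart-rate regression head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),               # downsample space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),               # collapse time and space
        )
        self.head = nn.Linear(32, 1)               # scalar heart-rate output

    def forward(self, clip):                       # clip: (batch, 3, frames, H, W)
        x = self.features(clip).flatten(1)
        return self.head(x)

# A batch of two 64-frame, 64x64 RGB clips yields one estimate per clip.
clip = torch.randn(2, 3, 64, 64, 64)
print(SpatioTemporalHRNet()(clip).shape)  # → torch.Size([2, 1])
```

Attention-based variants such as the dual attention network cited below follow the same input/output contract, so a skeleton like this can be swapped out as the project's architecture choices evolve.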
References
Ren, Yuzhuo, Braeden Syrnyk, and Niranjan Avadhanam. “Dual attention network for heart rate and respiratory rate estimation.” 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2021.
Palma Pagano, Tiago, et al. “Machine learning models and facial regions videos for estimating heart rate: a review on Patents, Datasets and Literature.” arXiv e-prints (2022): arXiv-2202.