Falls are among the most common accidents in the elderly and can result in serious injuries such as broken bones and head trauma. Detecting falls promptly, so that patients can be taken to the emergency room in time, is therefore critical. In this project, we propose a method that combines face recognition and action recognition for fall detection. Specifically, we identify seven basic actions of elderly daily life based on skeleton data extracted with the YOLOv7-Pose model. Two deep models, Spatial Temporal Graph Convolutional Network (ST-GCN) and Long Short-Term Memory (LSTM), are employed for action recognition on the skeleton data. Experimental results on our dataset show that the ST-GCN model achieves an accuracy of 90%, which is 7% higher than that of the LSTM model.
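As an illustration only, below is a minimal sketch of the LSTM branch: a two-layer LSTM that classifies a sequence of flattened skeleton keypoints (assuming 17 COCO keypoints with x, y, confidence per frame from YOLOv7-Pose) into the 7 action classes. The layer sizes, input formatting, and class count handling are assumptions for this sketch, not the repository's exact architecture.

```python
# Minimal sketch (assumed layout, not the repo's exact model): LSTM action
# classifier over skeleton keypoint sequences from a pose estimator.
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    def __init__(self, num_keypoints=17, num_classes=7, hidden_size=128):
        super().__init__()
        # Each frame is flattened to (x, y, confidence) per keypoint.
        self.lstm = nn.LSTM(input_size=num_keypoints * 3,
                            hidden_size=hidden_size,
                            num_layers=2,
                            batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, frames, num_keypoints * 3)
        out, _ = self.lstm(x)
        # Classify from the hidden state of the last time step.
        return self.fc(out[:, -1])

if __name__ == "__main__":
    model = SkeletonLSTM()
    dummy = torch.randn(1, 30, 17 * 3)       # a 30-frame skeleton clip
    print(model(dummy).softmax(dim=-1))      # probabilities over 7 actions
```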
Demo video: recog_recording.mp4
Members:
- DAO DUY NGU
- LE VAN THIEN
Instructor: TRAN THI MINH HANH
Installation:
git clone https://github.com/DuyNguDao/Human_Action_LSTM.git
cd Human_Action_LSTM
conda create --name human_action python=3.8
conda activate human_action
pip install -r requirements.txt
Download the YOLOv7-Pose pretrained state dict: yolov7_w6_pose
Run detection on a video file or webcam:
python detect_video.py --fn <path or URL of a video, or 0 for webcam>
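For example (the video path below is illustrative; any local file or stream URL works):
python detect_video.py --fn videos/fall_test.mp4
python detect_video.py --fn 0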