This repository contains the code and documentation for the Person Fall Detection System, developed as part of the Pune Metro Hackathon. The system aims to detect individuals who have fallen onto metro tracks, providing a crucial safety mechanism for metro operations.
The challenge was to create a robust system capable of detecting persons who have fallen onto metro tracks, thereby entering a designated danger zone. This system is designed to raise an alarm in such scenarios, potentially preventing accidents and saving lives.
- Technical Overview
- System Architecture
- Models Used
- Dataset
- Implementation Details
- Results
- Future Improvements
- Contributing
- License
The Person Fall Detection System utilizes two primary components:
- Person Detection: Implemented using YOLOv7
- Track Segmentation: Implemented using YOLOv5
These components work in tandem to identify persons and define the danger zone (metro tracks). The system then calculates the intersection between detected persons and the danger zone to determine if a fall has occurred.
The system follows this high-level workflow:
- Input video frame
- Parallel processing:
  - Person detection using YOLOv7
  - Track segmentation using YOLOv5
- Polygon creation from segmentation mask
- Intersection calculation between person bounding boxes and track polygon
- Alarm triggering based on intersection results (see the sketch below)
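To make this workflow concrete, here is a minimal per-frame sketch. `detect_persons`, `segment_tracks`, `mask_to_polygon`, and `on_alarm` are hypothetical callables standing in for the YOLOv7, YOLOv5, Shapely, and alarm steps described above (possible versions of the detection and polygon helpers are sketched in later sections); this is not the project's exact code.

```python
import cv2
from shapely.geometry import box

def process_video(path, detect_persons, segment_tracks, mask_to_polygon, on_alarm):
    """Run the fall-detection pipeline on a video file.

    detect_persons(frame)  -> list of (x1, y1, x2, y2) person boxes   (YOLOv7 step)
    segment_tracks(frame)  -> binary mask of the metro tracks         (YOLOv5 step)
    mask_to_polygon(mask)  -> Shapely Polygon of the danger zone      (Shapely step)
    on_alarm(frame, box)   -> called for every potential fall event
    """
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # The two models run independently on the same frame.
        person_boxes = detect_persons(frame)
        track_mask = segment_tracks(frame)

        # The danger zone is rebuilt every frame, so it adapts when a metro arrives.
        danger_zone = mask_to_polygon(track_mask)
        if danger_zone is None:
            continue

        # Any person whose box overlaps the track polygon is a potential fall.
        for person_box in person_boxes:
            if box(*person_box).intersects(danger_zone):
                on_alarm(frame, person_box)
    cap.release()
```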
Output showing the track (i.e., the danger zone) being updated every frame; even when a metro arrives, the danger zone adjusts itself automatically:
- Purpose: To detect and localize persons in each frame
- Modifications: Fine-tuned to detect only persons
- Output: Bounding boxes around detected persons (a loading sketch follows below)
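A possible implementation of this detection step, assuming the fine-tuned weights are loaded through the YOLOv7 repository's torch.hub entry point; the actual project may instead call the repo's detect script directly, and the weights path and confidence threshold here are placeholders.

```python
import torch

# Load the fine-tuned YOLOv7 weights via torch.hub (assumes the
# WongKinYiu/yolov7 hubconf 'custom' entry point; path is a placeholder).
model = torch.hub.load('WongKinYiu/yolov7', 'custom', 'weights/person_best.pt')
model.conf = 0.4      # illustrative confidence threshold
model.classes = [0]   # keep only the 'person' class; redundant after person-only
                      # fine-tuning, but a harmless safeguard

def detect_persons(frame):
    """Return person bounding boxes as (x1, y1, x2, y2) tuples."""
    results = model(frame)                 # autoshaped inference on an image array
    boxes = results.xyxy[0].cpu().numpy()  # columns: x1, y1, x2, y2, conf, cls
    return [tuple(b[:4]) for b in boxes]
```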
- Purpose: To segment the metro tracks (danger zone) in each frame
- Modifications: Fine-tuned for instance segmentation of metro tracks
- Output: Segmentation mask of the metro tracks (converted to a polygon as sketched below)
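One straightforward way to turn this segmentation output into the danger-zone polygon used in the workflow is to extract the largest contour of the binary mask and hand its points to Shapely. The sketch below illustrates that idea; it is an assumption about the approach, not the project's exact code.

```python
import cv2
import numpy as np
from shapely.geometry import Polygon

def mask_to_polygon(track_mask):
    """Convert a binary track mask (H x W, values 0/255) into a Shapely Polygon."""
    contours, _ = cv2.findContours(track_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Keep the largest contour, assuming it outlines the track region.
    largest = max(contours, key=cv2.contourArea)
    points = largest.reshape(-1, 2)
    if len(points) < 3:
        return None
    # buffer(0) is a common trick to repair self-intersecting polygons.
    return Polygon(points).buffer(0)
```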
The dataset used for this project was provided by the Pune Metro Hackathon organizers. It includes:
- Images of metro stations and tracks
- Annotations for person locations
- Annotations for track locations and boundaries
This dataset was used to fine-tune both the YOLOv7 and YOLOv5 models for their respective tasks.
- Fine-tuned YOLOv7 on the provided dataset
- Configured to output only person class detections
- Applied to each frame of the input video
- Fine-tuned YOLOv5 for instance segmentation of metro tracks
- Applied to each frame to generate a segmentation mask of the tracks
- Used Shapely library to create a polygon from the track segmentation mask
- For each detected person:
  - Created a polygon from the bounding box
  - Calculated the intersection between the person polygon and the track polygon
- If an intersection exists, the detection is classified as a potential fall event (see the sketch below)
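The intersection check itself takes only a few lines with Shapely. The sketch below illustrates the idea; the overlap-ratio threshold is an illustrative value, not one taken from the project.

```python
from shapely.geometry import Polygon, box

def is_fall(person_box, track_polygon, min_overlap_ratio=0.3):
    """Flag a potential fall if the person box overlaps the track polygon enough.

    person_box: (x1, y1, x2, y2); track_polygon: Shapely Polygon of the danger zone.
    The 0.3 overlap ratio is illustrative, not the project's tuned value.
    """
    person_poly = box(*person_box)
    if not person_poly.intersects(track_polygon):
        return False
    overlap_area = person_poly.intersection(track_polygon).area
    return overlap_area / person_poly.area >= min_overlap_ratio

# Example: a person box that partially overlaps a rectangular track region.
track = Polygon([(0, 300), (640, 300), (640, 480), (0, 480)])
print(is_fall((100, 250, 180, 420), track))  # True: most of the box lies on the track
```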
- Implement real-time processing for live video feeds
- Enhance the system to work under various lighting conditions
- Integrate with metro station alarm systems
- Develop a user interface for security personnel
- Explore the use of 3D sensors for more accurate depth perception
- Integrate a pose estimation model to detect whether a person is leaning towards the track and is about to fall (pose model)
This project is licensed under the MIT License - see the LICENSE file for details.
Developed with ❤️ for the Pune Metro Hackathon