As AI engineers, we love data and we love to see graphs and numbers! So why not surface the inference data on a dashboard to understand the inference better? When a model is deployed on the edge for monitoring, it takes a significant amount of frontend and backend development on top of the deep learning work, from ingesting the live data to displaying the correct output. So I wanted to build a small-scale video analytics tool, to understand which features would be useful for such a tool and what its limitations might be.
- Choose input source - Local, RTSP or Webcam
- Input class threshold
- Set FPS drop warning threshold
- Option to save inference video
- Input class confidence for drift detection
- Option to save poor performing frames
- Display objects in current frame
- Display total detected objects so far
- Display system stats - RAM, CPU and GPU usage
- Display poor-performing classes
- Display minimum, maximum, and average FPS recorded during inference
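The FPS features above (drop warning, min/max/average) can be tracked with a small rolling recorder. This is an illustrative sketch only, not the app's actual implementation; the class and method names are my own:

```python
import time

class FPSStats:
    """Track instantaneous, min, max and average FPS across an inference run."""

    def __init__(self, drop_warn_threshold: float = 15.0):
        self.drop_warn_threshold = drop_warn_threshold  # warn below this FPS
        self._last = None
        self._fps_values = []

    def tick(self):
        """Call once per processed frame; returns the instantaneous FPS
        (None on the first call, since one interval is needed)."""
        now = time.perf_counter()
        if self._last is None:
            self._last = now
            return None
        dt = now - self._last
        if dt <= 0:          # guard against a zero-length interval
            dt = 1e-9
        self._last = now
        fps = 1.0 / dt
        self._fps_values.append(fps)
        return fps

    def dropped(self, fps: float) -> bool:
        """True when the current FPS is below the warning threshold."""
        return fps < self.drop_warn_threshold

    @property
    def summary(self) -> dict:
        """Min / max / average FPS recorded so far."""
        v = self._fps_values
        if not v:
            return {"min": 0.0, "max": 0.0, "avg": 0.0}
        return {"min": min(v), "max": max(v), "avg": sum(v) / len(v)}
```

In a Streamlit loop, `tick()` would be called once per frame and `summary` rendered into the metrics panel at the end of the run.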
- Clone this repo
- Install all the dependencies
- Run `streamlit run app.py` (or `python -m streamlit run app.py` on Windows)
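The setup steps above look roughly like this in a terminal; the repository URL and folder name are placeholders, and a `requirements.txt` is assumed to ship with the repo:

```shell
# Clone the repository (substitute the actual URL)
git clone https://github.com/your-user/your-repo.git
cd your-repo

# Install the dependencies
pip install -r requirements.txt

# Launch the dashboard
streamlit run app.py
# On Windows, if streamlit is not on PATH:
python -m streamlit run app.py
```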
- Updated YOLOv5 to YOLOv11
- Replaced DeepSORT with YOLO's default BoT-SORT tracker
- Bug fixes, refactoring, performance boosters
The input video should be in the same folder as app.py. If you want to deploy the app in the cloud and use it as a web app, download the user-uploaded video to a temporary folder and pass the path and video name to the respective function in app.py. This is a Streamlit limitation (uploads are held in memory rather than written to disk); see Stack Overflow for details.
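The temporary-folder workaround above can be sketched as a small helper; the function name and the `run_inference` entry point are hypothetical, while `st.file_uploader` and `UploadedFile.getbuffer()` are real Streamlit APIs:

```python
import os
import tempfile

def save_upload_to_temp(data, name: str) -> str:
    """Write uploaded video bytes to a temporary folder and return the path.

    Streamlit's file_uploader yields an in-memory object, while OpenCV's
    VideoCapture needs a real filesystem path, hence this round trip.
    """
    tmp_dir = tempfile.mkdtemp()
    path = os.path.join(tmp_dir, name)
    with open(path, "wb") as f:
        f.write(data)
    return path

# In app.py it would be wired up roughly like this (not run here):
#
#   uploaded = st.file_uploader("Upload a video", type=["mp4", "avi"])
#   if uploaded is not None:
#       video_path = save_upload_to_temp(uploaded.getbuffer(), uploaded.name)
#       run_inference(video_path)   # hypothetical entry point in app.py
```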