This project is a web-based application that uses real-time object detection to identify and label objects in an image or video stream. It is built with Next.js, ONNXRuntime, and the YOLOv7 and YOLOv10 models.
Demo at RTOD.vercel.app
demo.mp4
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
To run this project, you will need the following software installed on your machine:
- Node.js
- A web browser
- Clone the repository to your local machine: `git clone https://github.com/juanjaho/real-time-object-detection-web-app.git`
- Navigate to the project directory: `cd real-time-object-detection-web-app`
- Install the necessary dependencies: `npm install` or `yarn install`
- Start the development server: `npm run dev` or `yarn dev`
- Open your web browser and navigate to http://localhost:3000 to view the application.
- Add your custom model to the `/models` directory.
- Update the `RES_TO_MODEL` constant in `components/models/Yolo.tsx` to include your model's resolution and path.
- Modify the `preprocess` and `postprocess` functions in `components/models/Yolo.tsx` to match the input and output requirements of your model (a hypothetical sketch of these changes follows after this list).
- If you encounter a protobuf error while loading your `.onnx` model, the model may not be optimised for the ONNX Runtime WebAssembly backend. Convert it to `.ort` or an optimised `.onnx` using onnxruntime; see ultralytics_pt_to_onnx.md for an example.
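To make the above concrete, here is a minimal sketch of the kind of changes involved. The exact shape of `RES_TO_MODEL` and the signatures of `preprocess`/`postprocess` are defined in `components/models/Yolo.tsx` and may differ from what is assumed here, so treat the names, types, and model paths below as placeholders:

```ts
// Hypothetical sketch only: check components/models/Yolo.tsx for the real
// RES_TO_MODEL shape and the actual preprocess/postprocess signatures.
import { Tensor } from "onnxruntime-web";

// Assumed shape: pairs of [input resolution, model path under /models].
const RES_TO_MODEL: [number[], string][] = [
  [[256, 256], "yolov10n.onnx"],        // existing entry (illustrative)
  [[640, 640], "my_custom_model.onnx"], // your custom model (placeholder name)
];

// Assumed preprocess: read a canvas already drawn at the model resolution and
// return an NCHW float32 tensor with values normalised to [0, 1].
function preprocess(ctx: CanvasRenderingContext2D, width: number, height: number): Tensor {
  const { data } = ctx.getImageData(0, 0, width, height); // RGBA pixels
  const planeSize = width * height;
  const floats = new Float32Array(3 * planeSize);
  for (let i = 0; i < planeSize; i++) {
    floats[i] = data[i * 4] / 255;                     // R plane
    floats[i + planeSize] = data[i * 4 + 1] / 255;     // G plane
    floats[i + 2 * planeSize] = data[i * 4 + 2] / 255; // B plane
  }
  return new Tensor("float32", floats, [1, 3, height, width]);
}

// Assumed postprocess: turn the model's raw output tensor into boxes, scores,
// and class ids. The parsing depends entirely on your model's output layout.
function postprocess(output: Tensor): { box: number[]; score: number; classId: number }[] {
  return []; // fill in according to your model's output format
}
```

For the `.ort` conversion mentioned above, the onnxruntime Python package provides a converter (for example `python -m onnxruntime.tools.convert_onnx_models_to_ort model.onnx`).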
This app can also be installed on your device (desktop or mobile) as a progressive web app (PWA). Here's how:
- Visit the app's URL in a web browser that supports PWAs (such as Google Chrome or Firefox).
- Look for the "Install" or "Add to Homescreen" button in the browser's interface.
- Click the button and follow the prompts to install the app.
- The app will now be installed on your device and can be launched from the home screen like any other app.
This project can be deployed to a web server for public access. For more information on deploying a Next.js application, please see the official Next.js deployment documentation.
- ONNXRuntime - An open-source library for running inference with pre-trained models in a variety of formats (see the usage sketch after this list).
- YOLOv10 - An object detection model used in this project.
- YOLOv7 - An object detection model used in this project.
- Next.js - A JavaScript framework for building server-rendered React applications.
- PWA - A progressive web app that can be installed on a user's device and run offline, providing a native-like experience.
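To give a sense of how ONNXRuntime is used from the browser, below is a minimal ONNX Runtime Web inference sketch. The model path, tensor shape, and input layout are placeholders rather than the ones this project actually uses:

```ts
// Minimal ONNX Runtime Web sketch (placeholder model path and tensor shape).
import * as ort from "onnxruntime-web";

// Create the session once and reuse it; the WASM execution provider runs
// entirely in the browser.
const sessionPromise = ort.InferenceSession.create("/models/yolov10n.onnx", {
  executionProviders: ["wasm"],
});

// pixels must contain 1 * 3 * 256 * 256 preprocessed values for this shape.
export async function detect(pixels: Float32Array) {
  const session = await sessionPromise;

  // Wrap the preprocessed pixels in an NCHW float32 tensor.
  const input = new ort.Tensor("float32", pixels, [1, 3, 256, 256]);

  // Feed the tensor under the model's actual input name and run inference.
  const outputs = await session.run({ [session.inputNames[0]]: input });
  return outputs; // map of output name -> ort.Tensor, to be postprocessed into boxes and labels
}
```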
If you want to contribute to this project, please feel free to submit a pull request. Any contributions, big or small, are greatly appreciated!
Juan Sebastian - Initial work - @juanjaho
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.
- Thank you to @ultralytics for the easy configuration of the YOLOv10 model.
- Thank you to [@THU-MIG](https://github.com/THU-MIG) for creating the YOLOv10 model.
- Thank you to @WongKinYiu for creating the YOLOv7 model.
- Hats off to the ONNXRuntime team for making such a powerful tool accessible to developers.
- Referenced the ONNXRuntime Web Demo for guidance on how to use ONNXRuntime in a web application.
- Thank you to all the contributors to the open-source libraries used in this project.
- Inspiration for this project was taken from my previous project, AnimeArcaneGAN_Mobile.
@article{THU-MIGyolov10,
title={YOLOv10: Real-Time End-to-End Object Detection},
author={Wang, Ao and Chen, Hui and Liu, Lihao and others},
journal={arXiv preprint arXiv:2405.14458},
year={2024},
institution={Tsinghua University},
license={AGPL-3.0}
}
@article{wang2022yolov7,
title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
journal={arXiv preprint arXiv:2207.02696},
year={2022}
}