
Autonomous-Vehicle-Environment-Perception

This repository contains the Pandas Team's implementation of an autonomous-vehicle environment perception system.

Environment perception is a crucial capability for autonomous vehicles. The system must perceive several entities in its field of view, including but not limited to pedestrians, other vehicles, traffic lights, traffic signs, distances to other objects on the road, crosswalks, and sidewalks. In this work, we use a variety of computer vision methods and algorithms to accomplish this task.

Abstract

In this project, we designed and implemented an environment perception system for an autonomous vehicle. The system identifies pedestrians, traffic lights, traffic signs, and other vehicles, estimates distances to them, and detects road edges and pedestrian lanes. It combines a variety of neural networks and machine learning algorithms with classical machine vision techniques such as the Hough transform.
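The Hough transform mentioned above votes edge points into a (rho, theta) accumulator and reads off the strongest line. The following is a minimal NumPy sketch of that idea, not the repository's actual lane-detection code (lanes are handled by PINet); the function name and parameters are illustrative:

```python
import numpy as np

def hough_peak(points, img_diag, n_theta=181, n_rho=200):
    """Vote edge points into a (rho, theta) accumulator; return the peak line."""
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        rho_vals = x * cos_t + y * sin_t            # rho of this point for every theta
        rho_idx = np.clip(np.searchsorted(rhos, rho_vals), 0, n_rho - 1)
        acc[rho_idx, np.arange(n_theta)] += 1       # one vote per (rho, theta) cell
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[r], thetas[t]

# Synthetic edge points on the vertical line x = 50
pts = [(50, y) for y in range(100)]
rho, theta = hough_peak(pts, img_diag=150)
```

For the vertical line above, the accumulator peaks near theta = 0 and rho = 50, recovering the line's normal form x cos(theta) + y sin(theta) = rho. In practice, libraries such as OpenCV provide optimized versions of this routine.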

Below you can see sample frames of the output:


Inference

To run the program, first install the requirements with the command below:

$ pip install -r requirements.txt

Then create a folder named 'weights' in the main directory and download all the weight files from the shared Google Drive folder.

Then place your video in the main folder of this repo and run the following command:

$ python main.py --video yourvideoname.mp4 [--save] [--noshow] [--output-name myoutputvideo.mp4] [--fps]

  • --save saves the output video.

  • --noshow disables the preview of the output.

  • --output-name sets the name of the output video.

  • --fps plots the FPS on the output frames.

"yourvideoname.mp4" is the name of your video file added to the main folder. "myoutputvideo.mp4" is the name you want for your output video.

Afterwards, the program runs and the output video is saved in the specified directory. To view the output while the program is running, do not use the '--noshow' argument.
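The per-frame FPS figure plotted by --fps can be computed by timing each pass through the frame loop. The sketch below illustrates one way to do this; `process_frame` is a stand-in for the real detection pipeline, not a function from this repository:

```python
import time

def process_frame(frame):
    """Placeholder for the detection/segmentation work done per frame."""
    return frame

fps_values = []
prev = time.perf_counter()
for frame in range(5):                    # stand-in for frames read from the video
    process_frame(frame)
    now = time.perf_counter()
    fps_values.append(1.0 / max(now - prev, 1e-9))   # guard against zero delta
    prev = now
```

Each entry of `fps_values` is the instantaneous frame rate, which can then be drawn onto the corresponding output frame.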

There you have it.

Colab Notebook

You can also use the provided Colab notebook, which automatically downloads all the weights and a sample video and runs the program in a matter of seconds!

Simply open the following Colab notebook:

Open In Colab

Cited Works

  1. YOLOv5 (DOI)
  2. SGDepth (GitHub repo; also paper)
  3. PINet (GitHub repo)

Datasets

  1. Traffic-Sign Detection and Classification in the Wild Link
  2. DFG Traffic Sign Data Set Link

Our Team

As Team Pandas, we won 1st place in the 2020-2021 National Rahneshan competition for autonomous vehicles. This contest has been one of the most competitive and challenging in the Rahneshan tournaments, with more than 15 teams competing from top universities in Iran.

Contact us

Feel free to contact us via email or connect with us on LinkedIn.
