
iNeuron-Blind-Navigation

This project attempts to create a system that brings added ease to the visually impaired through navigation, obstacle detection, obstacle-distance identification, and a speech-driven interface that seamlessly integrates applications like Ola and Uber. It was built during PW-Hacks, January 2023.

The problem this Application solves

  • Being visually impaired makes it extremely hard to navigate busy day-to-day life laden with obstacles; even moving through a micro-environment like home or the workplace is a challenge.
  • We attempt to solve this with a two-way model: we work with both the individual and their guardian to create a safe navigation experience for the person.
  • The application can be hosted on any hand-held device, such as a mobile phone, for ease of access.

Technology Stack

Web Application

  • Flask (Python)
  • HTML
  • CSS

Mobile Application

  • Kivy (Python)
  • Android Studio

Getting started

Install the prerequisite modules, namely Flask and Kivy, plus any other modules as applicable.

Running the detection code independently

Obstacle detection

python "object classification/yolo_opencv.py"
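A YOLO-style detector produces many overlapping candidate boxes per object, which are then pruned by confidence filtering and non-max suppression (NMS). The sketch below shows that post-processing step in plain Python; the `(x, y, w, h)` box format and the thresholds are assumptions for illustration, not necessarily what `yolo_opencv.py` uses.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def filter_detections(boxes, scores, conf_threshold=0.5, nms_threshold=0.4):
    """Indices of boxes kept after confidence filtering and greedy NMS."""
    # Consider only confident boxes, strongest first.
    order = sorted((i for i, s in enumerate(scores) if s >= conf_threshold),
                   key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        # Keep a box only if it does not heavily overlap an already-kept one.
        if all(iou(boxes[i], boxes[j]) < nms_threshold for j in kept):
            kept.append(i)
    return kept
```

For example, two near-identical boxes around the same person collapse to one detection, while a distant box survives.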

Obstacle-User distance detection

python "person distance/distance.py"
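A common approach for monocular obstacle-distance scripts is similar triangles: if an object of known real-world width appears `pixel_width` pixels wide, then `distance = known_width * focal_length / pixel_width`, with the focal length calibrated once from a reference photo at a known distance. This is a sketch of that idea under stated assumptions; the numbers are illustrative and the repo's `distance.py` may work differently.

```python
def calibrate_focal_length(known_distance, known_width, pixel_width):
    """Focal length (in pixels) from one reference measurement."""
    return (pixel_width * known_distance) / known_width

def estimate_distance(focal_length, known_width, pixel_width):
    """Distance to the object, in the same unit as known_distance/known_width."""
    return (known_width * focal_length) / pixel_width
```

E.g. if a 40 cm wide object photographed at 100 cm spans 200 px, the calibrated focal length is 500 px, and the same object spanning 100 px later implies it is about 200 cm away.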

Sound-driven instructions

python "speech recognition/speech.py"
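Once a speech-to-text engine returns a transcript, the system has to map the utterance to an action (e.g. booking a cab). The sketch below shows one simple keyword-based way to do that; the phrases and the action names (`book_cab`, `navigate`, `stop`) are hypothetical, not taken from `speech.py`.

```python
def interpret_command(transcript):
    """Map a recognized utterance to an action keyword (keyword matching)."""
    text = transcript.lower()
    if "ola" in text or "uber" in text or "cab" in text:
        return "book_cab"
    if "navigate" in text or "where" in text:
        return "navigate"
    if "stop" in text:
        return "stop"
    return "unknown"
```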

Flask application

python main.py
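A minimal sketch of what a Flask entry point like `main.py` could look like; the route and response text here are assumptions for illustration, not the repo's actual API.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # In the real app this would render the voice-assisted HTML/CSS page.
    return "iNeuron Blind Navigation"

if __name__ == "__main__":
    app.run(debug=True)
```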

Kivy application

python kivy3.py

  • Requires KivyMD as well

Working and Features

  • The web application invokes the voice-assisted system and is receptive to user commands
  • Obstacles always trigger an alert, and when an obstacle comes within a certain safe distance of the individual, a beep sound is raised
  • The user can book an Ola with just a voice instruction
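The alert rule described above can be sketched as a simple threshold check over the stream of estimated obstacle distances. The 1.5 m safe threshold is an assumed value, not one documented in the repo.

```python
SAFE_DISTANCE_M = 1.5  # assumed safe threshold, in metres

def should_beep(distance_m, safe_distance=SAFE_DISTANCE_M):
    """Beep when the obstacle is closer than the safe threshold."""
    return distance_m < safe_distance

def alerts(distances, safe_distance=SAFE_DISTANCE_M):
    """Indices of distance readings that should trigger a beep."""
    return [i for i, d in enumerate(distances) if should_beep(d, safe_distance)]
```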

Snapshots of the Working Web Application

Screenshot (13) Screenshot (14)

Snapshots of the Working Mobile Application

mobile screens

Challenges we ran into and Acknowledgements

  • We faced issues in figuring out the model weights and in designing the cloud-linked mobile application, but found that the COCO dataset and Firebase enabled seamless deployment

Further Enhancements

  • Integrating Face Recognition
  • Including Map API for Navigation

Video Demonstration

Team

Sujan Reddy

https://github.com/sujan-reddy

N.Dharshan

https://github.com/NDharshan
