This is the demo code for Sam_cam, an app designed to help visually impaired people access visual information in photographs. It combines the Segment Anything Model (SAM), Vision Language Models (VLMs), and a Dot display.
A Dot display is required for a visually impaired person to access the visual information. However, the code in this repo can run without a Dot display attached. If you are not visually impaired, you can clone and run this repo to see how the whole demo works.