ViLT-GPT is an innovative application that gives the conversational AI ChatGPT the ability to "see". By integrating OpenAI's large language models (LLMs) and LangChain with the ViLT Vision-and-Language model, the app can answer questions about the content of images. Now you can interact with your images: ask questions and get informative responses.
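Under the hood, the visual question answering step is handled by ViLT. Below is a minimal sketch of that call, assuming the standard `dandelin/vilt-b32-finetuned-vqa` checkpoint from Hugging Face; the app may use a different checkpoint or wrap this behind LangChain.

```python
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Load the ViLT processor and VQA head (checkpoint name assumed)
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

image = Image.open("example.jpg").convert("RGB")  # any local image
question = "How many people are in the picture?"

# Encode the image and question together, run the model, take the top answer
encoding = processor(image, question, return_tensors="pt")
logits = model(**encoding).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print("Answer:", answer)
```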
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
Before running the app, make sure you have the following libraries installed (an install one-liner follows the list; `os` is part of the Python standard library and needs no installation):
- python-dotenv (imported as `dotenv`)
- streamlit
- Pillow (imported as `PIL`)
- transformers
- langchain
- streamlit-extras
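If you prefer to install them directly rather than via the `requirements.txt` step below, a one-liner (PyPI package names assumed to match the imports above):

```bash
pip install python-dotenv streamlit Pillow transformers langchain streamlit-extras
```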
To install and run the project, follow these steps:
- Clone the repository to your local machine.

  ```bash
  git clone https://github.com/your-repository-url.git
  ```
- Go to the cloned repository.

  ```bash
  cd repository-name
  ```
- Create a virtual environment and activate it.

  ```bash
  python -m venv env
  source env/bin/activate  # on Windows: env\Scripts\activate
  ```
- Install the package requirements.

  ```bash
  pip install -r requirements.txt
  ```
- Set the environment variable(s); a sample `.env` is shown after these steps.

  ```bash
  cp .env.example .env
  # set OPENAI_API_KEY in the .env file
  ```
- Run the application.

  ```bash
  streamlit run app.py
  ```

  Streamlit prints a local URL (typically http://localhost:8501); open it in your browser.
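The `.env` file only needs your OpenAI credentials. A minimal example (the value is a placeholder):

```bash
OPENAI_API_KEY=your-openai-api-key
```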
To use this app, follow these steps (a code sketch of the same flow appears after the list):
- Launch the app.
- In the sidebar, click on 'Upload your IMAGE' to upload an image.
- Ask a question related to the uploaded image in the text input field.
- Wait for the processing to finish, and the answer to your question will appear below.
- Click 'Cancel' to stop the process.
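For orientation, the core of `app.py` likely looks something like the sketch below. The widget labels and overall structure are assumptions for illustration; the ViLT checkpoint name follows the standard Hugging Face example.

```python
import streamlit as st
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Checkpoint name assumed; a real app would cache these loads
# with @st.cache_resource to avoid reloading on every rerun.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

st.title("ViLT-GPT")

# Sidebar upload, matching the 'Upload your IMAGE' step above
uploaded = st.sidebar.file_uploader("Upload your IMAGE", type=["png", "jpg", "jpeg"])

if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    st.image(image, caption="Uploaded image")

    question = st.text_input("Ask a question about the image")
    if question:
        with st.spinner("Processing..."):
            encoding = processor(image, question, return_tensors="pt")
            idx = model(**encoding).logits.argmax(-1).item()
        st.write("Answer:", model.config.id2label[idx])
```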
This project is built with:
- Streamlit - the web app framework
- LangChain - the framework for composing LLM calls
- OpenAI - the large language model provider
- ViLT - the Vision-and-Language Transformer from Hugging Face
This project is licensed under the MIT License - see the LICENSE.md file for details.