A powerful visual similarity search app using the CLIP model to match uploaded query images against a gallery, all within an interactive Streamlit UI.
Built with: CLIP, Transformers, Streamlit, PIL, PyTorch, and scikit-learn.
- Upload a query image
- Upload a set of gallery images
- Get the top 3 visually similar matches using CLIP embeddings
- Similarity scores computed with cosine similarity
- Clean, minimal Streamlit UI
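The matching step above can be sketched as follows. This is a minimal illustration using scikit-learn's `cosine_similarity` on precomputed CLIP embeddings; the function name `top_k_matches` is illustrative, not taken from `app.py`:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def top_k_matches(query_emb, gallery_embs, k=3):
    """Return (index, score) pairs for the k gallery embeddings most similar to the query."""
    # cosine_similarity expects 2-D inputs: one query row vs. all gallery rows
    scores = cosine_similarity(query_emb.reshape(1, -1), gallery_embs)[0]
    best = np.argsort(scores)[::-1][:k]  # highest similarity first
    return [(int(i), float(scores[i])) for i in best]
```

The same call works unchanged whether the embeddings come from CLIP or any other encoder, since cosine similarity only sees the vectors.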
- Python 3.8 or higher
```bash
# 1. Clone this repo
git clone https://github.com/rakshath66/clipfindr.git
cd clipfindr

# 2. (Optional) Create a virtual environment
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows

# 3. Install dependencies
pip install -r requirements.txt

# 4. Run the app
streamlit run app.py
```
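For reference, a minimal `requirements.txt` covering the stack above might look like this (package names are the standard PyPI ones; the exact file in the repo may differ):

```text
streamlit
torch
torchvision
transformers
scikit-learn
Pillow
```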
```
clipfindr/
├── app.py              # Streamlit app with CLIP visual search
├── gallery/            # Uploaded gallery images (auto-created)
├── requirements.txt    # Python dependencies
└── README.md           # This file
```
- CLIP from OpenAI
- Hugging Face Transformers
- PyTorch + TorchVision
- Cosine similarity via scikit-learn
- PIL for image handling
- Streamlit for the frontend
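Uploads can arrive in any PIL mode (RGBA PNGs, palette GIFs), so a small helper that normalizes everything to RGB is useful before embedding. A sketch, with a hypothetical name `load_rgb` and an assumed 512-pixel size cap:

```python
from PIL import Image

def load_rgb(path_or_file, max_side=512):
    """Open an image, convert it to RGB, and bound its longest side."""
    img = Image.open(path_or_file).convert("RGB")  # drops alpha/palette modes
    img.thumbnail((max_side, max_side))            # in-place resize, keeps aspect ratio
    return img
```

`Image.open` accepts both file paths and file-like objects, so the same helper works for Streamlit's uploaded-file buffers.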
- Product image deduplication
- Visual search for screenshots
- Reverse lookup from dataset images
- Similar fashion or object search
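Deduplication, for example, falls out of the same embeddings: flag any pair of gallery images whose cosine similarity exceeds a threshold. A sketch (the name `near_duplicates` and the 0.95 threshold are illustrative):

```python
from sklearn.metrics.pairwise import cosine_similarity

def near_duplicates(embs, threshold=0.95):
    """Return index pairs (i, j) with i < j whose cosine similarity exceeds threshold."""
    sims = cosine_similarity(embs)  # full pairwise similarity matrix
    n = len(embs)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if sims[i, j] > threshold]
```

The threshold would need tuning per dataset; near-identical product shots score very close to 1.0, while crops and recolors land lower.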
CLIP is loaded directly from Hugging Face; no access tokens are required. Optionally, if you have an account, run `huggingface-cli login` to avoid rate limits.
- Fork the repo
- Create a branch: `git checkout -b my-feature`
- Make changes and commit: `git commit -m "Add: new feature"`
- Push: `git push origin my-feature`
- Open a pull request
Clean, modular contributions welcome!
MIT License © Rakshath U Shetty
- CLIP-based similarity matching
- Top 3 results with similarity scores
- Clean Streamlit UI
- Save image metadata
- Add text + image matching
- Visual heatmap of similarity
- Optional: add BLIP-based captioning
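The planned text + image matching could reuse CLIP's shared embedding space: score the gallery against both a query image and a query caption, then blend the two rankings. A sketch of the blending step only, with an assumed weight `alpha` (all names here are illustrative, not part of the current app):

```python
import numpy as np

def fused_scores(img_sims, txt_sims, alpha=0.5):
    """Blend image-based and text-based cosine similarities into one ranking score."""
    img_sims = np.asarray(img_sims, dtype=float)
    txt_sims = np.asarray(txt_sims, dtype=float)
    return alpha * img_sims + (1.0 - alpha) * txt_sims
```

With `alpha=1.0` this reduces to the current image-only behavior, so the feature could ship behind a single slider.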