🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
Run OpenAI's CLIP and Apple's MobileCLIP models on iOS to search photos.
Simple implementation of OpenAI CLIP model in PyTorch.
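Several of the repositories above implement CLIP's training objective. As a hedged sketch (not any particular repo's code), CLIP is trained with a symmetric contrastive (InfoNCE) loss over a batch of paired image and text embeddings; the function name and toy dimensions below are illustrative:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalize so dot products become cosine similarities
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix, scaled by temperature
    logits = image_emb @ text_emb.T / temperature
    # Matching image/text pairs lie on the diagonal
    targets = torch.arange(logits.size(0))
    # Average the image-to-text and text-to-image cross-entropies
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Toy batch: 4 image/text embedding pairs of dimension 8
imgs = torch.randn(4, 8)
txts = torch.randn(4, 8)
loss = clip_contrastive_loss(imgs, txts)
```

In real training the embeddings come from the image and text encoders, and the temperature is usually a learned parameter rather than a constant.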
Visual UI analysis tool
[ICLR2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models
A tool for searching local images by text description, powered by Rust + candle + CLIP
[NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology.
The most impactful papers related to contrastive pretraining for multimodal models!
Semantic search demo featuring UForm, USearch, UCall, and Streamlit to visualize and retrieve from image datasets, similar to "CLIP Retrieval"
[ICCV2023] Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer
YouTube video moment search by text or photo
A lightweight deep learning model with a web application that answers image-based questions with a non-generative approach for the VizWiz Grand Challenge 2023, by carefully curating the answer vocabulary and adding a linear layer on top of OpenAI's CLIP model as the image and text encoder
Semantic Emoji Search Plugin for FiftyOne
[ NeurIPS 2023 R0-FoMo Workshop ] Official Codebase for "Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data"
Traverse the space of concepts with a multi-modal similarity index in FiftyOne
Text to image search & Image Similarity Search using @typesense
An implementation of a system for retrieving images from text descriptions and searching for similar photos
OpenAI's CLIP neural network
Flask app to perform image search using semantic matching of input text and images
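The search tools listed here share one core step: rank a pre-computed index of CLIP image embeddings by cosine similarity to a query embedding. A minimal sketch of that ranking step, with illustrative names and toy data (any real app would embed the query and images with a CLIP model first):

```python
import numpy as np

def rank_images(query_emb: np.ndarray,
                image_embs: np.ndarray,
                top_k: int = 3) -> np.ndarray:
    """Return indices of the top_k images most similar to the query embedding."""
    # Normalize so dot products become cosine similarities
    q = query_emb / np.linalg.norm(query_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ q
    # Highest similarity first
    return np.argsort(-sims)[:top_k]

# Toy index of 5 image embeddings; a query identical to image 2 ranks it first
embs = np.random.randn(5, 8)
query = embs[2].copy()
top = rank_images(query, embs)
# top[0] == 2
```

At scale, the brute-force `argsort` is typically replaced with an approximate nearest-neighbor index such as USearch or FAISS.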