Currently, there are no pretrained image search models in OpenSearch.
Having pretrained image search models would make image search plug-and-play, removing unnecessary friction.
It would be good to have the OpenAI CLIP models.
The current workaround is to trace the model yourself and then register it.
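The tracing half of that workaround can be sketched roughly as below. This is a minimal illustration, not OpenSearch's documented procedure: the tiny stand-in module, the 512-dimension choice, and the output filename are all assumptions here (a real setup would trace an actual CLIP image encoder, e.g. from Hugging Face or sentence-transformers). The key step is `torch.jit.trace`, which records the forward pass on an example input and produces a TorchScript artifact of the kind ml-commons can load as a local model.

```python
import torch

class TinyImageEncoder(torch.nn.Module):
    """Stand-in for a CLIP-style image encoder: image tensor -> embedding vector.
    (Hypothetical placeholder; swap in a real pretrained image encoder.)"""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, stride=2)
        self.head = torch.nn.Linear(8, dim)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        # Global-average-pool the feature map, then project to the embedding dim.
        feats = self.conv(pixel_values).mean(dim=(2, 3))
        return self.head(feats)

model = TinyImageEncoder().eval()
example = torch.randn(1, 3, 224, 224)  # CLIP ViT-B/32-style input shape

# Record the forward pass into a TorchScript module and save the artifact;
# this .pt file is what would then be packaged and registered.
traced = torch.jit.trace(model, example)
traced.save("image_encoder.pt")
```

The saved `.pt` file would then be zipped together with whatever config ml-commons expects and registered through its model APIs (the exact packaging details are outside this sketch).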
It would also be cool to have some pretrained multimodal search models, something like VisualBERT.
I've been trying to set up this multimodal model, but the lack of documentation makes it really hard.
Tagging the RFC: [RFC] Support more local model types
Right now ml-commons does not support image embedding models, so it is impossible to add them as pretrained models.
To support this we need to add an image embedding model type: #2622
We will also need an image embedding ingest processor to enable ingesting images into image embedding models.
+1 for interest in native CLIP support in OpenSearch
IanMenendez