@qinst64, let's do some brainstorming here. In general, things in AI move very fast and frequently break, and I think the Ollama team has done a good job of deprecating the endpoint slowly.
I think we can move forward with supporting only `/api/embed` (as the default), dropping the legacy request payload type (the one containing `prompt`).
But maybe a smarter approach would be to simply ask the user to supply the Ollama base URL (i.e. the one without `/api/xxx`) and go from there: check whether the server version is 0.3.0 or above (via `/api/version`) and use the new endpoint if so, otherwise fall back to the old one. The drawback of this approach is that if the server sits behind a custom endpoint (exposed via a proxy), it is a little less flexible. That could be mitigated by also adding an `embedding_endpoint` parameter with a default of `/api/embed`.
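A minimal sketch of that version check, assuming `GET {base}/api/version` returns JSON like `{"version": "0.3.0"}` (as the Ollama REST API does); the helper names (`pick_embed_path`, `detect_embed_path`) are made up for illustration, not existing Chroma APIs:

```python
import json
import urllib.request

def parse_version(v: str) -> tuple:
    # "0.3.0-rc1" -> (0, 3, 0); ignore any pre-release suffix
    core = v.split("-")[0]
    return tuple(int(p) for p in core.split("."))

def pick_embed_path(server_version: str) -> str:
    """Return the embeddings path to use for a given Ollama version."""
    if parse_version(server_version) >= (0, 3, 0):
        return "/api/embed"       # new endpoint (0.3.0+)
    return "/api/embeddings"      # legacy endpoint

def detect_embed_path(base_url: str) -> str:
    # Query a running server; fall back to the legacy path on failure.
    try:
        with urllib.request.urlopen(f"{base_url}/api/version") as resp:
            version = json.load(resp)["version"]
        return pick_embed_path(version)
    except OSError:
        return "/api/embeddings"
```

Tuple comparison handles multi-digit components correctly (e.g. `0.10.1` sorts above `0.3.0`), which a plain string comparison would not.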
Describe the problem
The Ollama docs describe both `api/embeddings` and `api/embed`, and note that the former has been superseded. Will `OllamaEmbeddingFunction` support `api/embed`? By the way, the two endpoints behave differently; how do I migrate?
The input (`prompt` vs `input`) and output (`embedding` vs `embeddings`, 1D list vs 2D list) differences are easy to deal with. But why are the output vectors themselves different?
Describe the proposed solution
Support `api/embed`.
Give instructions for switching from `api/embeddings` to `api/embed` smoothly.
Alternatives considered
No response
Importance
nice to have
Additional Information
No response