Describe your problem
I deployed models locally with Xinference (both an LLM and an embedding model) with API key authorization enabled.
When I configure RAGFlow to use the Xinference models, a 401 error occurs. I noticed the placeholder in the API key text box says "for locally deployed model, ignore this". Does that mean RAGFlow has no authorization support for locally deployed models?
The same API key works fine when I configure it in the OneAPI service.
Thank you for your help.
BTW: I found a Q&A saying that streaming output is not supported in RAGFlow yet and is still being worked on. When can we expect it?
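For reference, this is a minimal sketch of how one might probe the Xinference endpoint directly with the key, outside of RAGFlow. The URL, port, and model name are assumptions for a default local deployment; Xinference exposes an OpenAI-compatible API, so the key goes in a Bearer `Authorization` header:

```python
# Probe a locally deployed Xinference server to confirm the API key is
# accepted. URL, port, and model name below are assumptions; adjust to
# your deployment.
import json
import urllib.request
import urllib.error

XINFERENCE_URL = "http://localhost:9997/v1/chat/completions"  # assumed default port

def build_headers(api_key: str) -> dict:
    # Xinference's OpenAI-compatible endpoints expect a Bearer token.
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }

def probe(api_key: str, model: str = "my-llm") -> int:
    # "my-llm" is a hypothetical model name; use the name you launched with.
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    req = urllib.request.Request(
        XINFERENCE_URL, data=payload, headers=build_headers(api_key)
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status  # 200 means the key was accepted
    except urllib.error.HTTPError as e:
        return e.code  # 401 here means the server rejected the key
```

Calling `probe("sk-xxxx")` against the running server returns 200 when the key is valid, which is how I confirmed the key itself works before pointing OneAPI (and then RAGFlow) at it.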