Pydantic validation error with ['default', 'docker'] #1756
Comments
Repeating the same steps by simply copying the repo and following #1445 causes the same problem, plus the following:
|
@mrepetto-certx, can you please be more specific? I have the same issue. |
Well. To reproduce:
I do not know how to be more specific than that. |
Indeed, that works. But it still requires an embedding mode, which is different from the LLM mode.
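For context, a minimal sketch of what that split looks like in the local settings file (the exact mode values here are assumptions, not taken from this thread):

```yaml
# Hypothetical illustration of the split: llm and embedding are
# configured independently, so each needs its own mode.
llm:
  mode: llamacpp

embedding:
  mode: huggingface
```
|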
I am still getting the same error even when I change to llamacpp. Should I do any prerequisite before doing docker-compose build, such as setting any env variables, downloading any modules, etc.? |
Unfortunately I got the same result. The problem is caused by the split between the llm and embedding lines in the local settings file.
I suggest using Ollama and composing an additional container into the compose file.
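For example, a minimal sketch of an extra Ollama service in docker-compose.yaml (service name and image tag are assumptions):

```yaml
services:
  ollama:
    image: ollama/ollama:latest  # assumed tag; pin a version in practice
    ports:
      - "11434:11434"            # Ollama's default API port
```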
|
I think I can help a little. If you are trying to use Ollama (which you will need to get installed and running first), make these changes:

1. In settings.yaml, change localhost to host.docker.internal in the api_base URL.
2. In docker-compose.yaml, change `dockerfile: Dockerfile.local` to `dockerfile: Dockerfile.external`.
3. In Dockerfile.external, add these extras: `RUN poetry install --extras "ui vector-stores-qdrant llms-ollama embeddings-ollama"`, then rebuild.

You will probably need to run `ollama pull nomic-embed-text` if you get the error about not having nomic. I hope this helps. I was able to finally get it running on my M2 MacBook Air.
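A sketch of what step 1 looks like in settings.yaml, based on the api_base mentioned in the next comment (the surrounding key layout is an assumption):

```yaml
ollama:
  # inside the container, localhost resolves to the container itself,
  # so point at the host's Ollama instead
  api_base: http://host.docker.internal:11434
```
|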
I made these changes:
I am still facing this issue: my Ollama server is running, however when I get |
What about step 1, changing localhost to `api_base: http://host.docker.internal:11434/` in the file settings.yaml? I also get a 404 for http://localhost:11434/api/embeddings, so no issue there.
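A quick way to sanity-check reachability from inside the container, assuming curl is available there (a sketch, not from the thread):

```sh
# The Ollama root endpoint replies "Ollama is running" when it is up.
# /api/embeddings only accepts POST, so a GET 404 there is expected.
curl http://host.docker.internal:11434/
```
|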
What is your take on decoupling it so that Ollama is used as a microservice? Something like:
With the
|
@mrepetto-certx Makes sense to me. Even if people already have Ollama installed, this would just be another instance. You'd still need to tackle the addressing problem, though - it would either need to be http://host.docker.internal:11434/ for host installations or http://ollama:11434/ for Dockerized. Edit: It would also take quite a bit of testing to add the llm and embedding models for the Dockerized method.
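One way to handle the addressing problem might be to parameterize the base URL via an environment variable; a sketch, assuming the settings files support ${VAR:default} expansion:

```yaml
ollama:
  # PGPT_OLLAMA_API_BASE is a hypothetical variable name; it defaults to
  # the Dockerized service and can be overridden for a host-installed Ollama
  api_base: ${PGPT_OLLAMA_API_BASE:http://ollama:11434}
```
|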
Thanks @makeSmartio. I'm experimenting now, with the caveat of having:
to avoid the problem of pulling a new model on every docker compose run. I'll keep you posted.
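The caveat above is elided; a named volume is one way to keep pulled models across runs (a sketch; the mount path is Ollama's default model directory):

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama_data:/root/.ollama  # pulled models survive container re-creation

volumes:
  ollama_data:
```
|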
No way, I keep getting:
What is puzzling is that running:
inside the container works. |
Ok, I managed to make it work and opened pull request #1812. The only thing to remember is to run `docker compose run --rm --entrypoint="bash -c '[ -f scripts/setup ] && scripts/setup'" private-gpt`. |
I tried to run:

```sh
docker compose run --rm --entrypoint="bash -c '[ -f scripts/setup ] && scripts/setup'" private-gpt
```

with a compose file somewhat similar to the repo's:
But I got the following error in return: