Due to recent updates in llama_index, the code breaks when using multiple workers (run-llama/llama_index#13497).

This can be temporarily fixed by pinning the versions in the Pipfile:

```toml
llama-index = "==0.10.30"
...
llama-index-core = "==0.10.30"
```
Thanks for opening this. I'll also keep an eye on the bug thread as this was definitely working with values > 1 before the update.
I did some debugging. It seems that `_model` is missing the second time the `VectorStoreIndex` is used.
This index is initialized in `local-rag/utils/llama_index.py` (line 172 in 72f2550).
If you debug after this line, or print, e.g.:

```python
print(hasattr(index._embed_model, "_model"))
```

you will see that the model is missing the second time: on file upload it exists, on chat it isn't there.
It seems to be a problem in the caching (line 114 in 72f2550). After removing `@st.cache_data(show_spinner=False)`, it seems to work again.
Edit: It is clearly the caching; even running

```python
index = create_index(_documents)
print(hasattr(index._embed_model, "_model"))

index2 = create_index(_documents)
print(hasattr(index2._embed_model, "_model"))
```

loses the model attribute.
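This behavior is consistent with how `st.cache_data` works: it pickles the function's return value and hands back a fresh unpickled copy on each call, so any attribute an object excludes from pickling disappears on the cached copy. Below is a minimal sketch of that mechanism using plain `pickle`; the `Embedding` class and `cached` helper are hypothetical stand-ins, not the real llama_index or Streamlit internals:

```python
import pickle


class Embedding:
    """Stand-in for an embed model wrapper that cannot pickle its backing model."""

    def __init__(self):
        self._model = object()  # pretend this is a loaded ML model

    def __getstate__(self):
        # Exclude the unpicklable model from the pickled state,
        # as wrappers around native/ML objects often do.
        state = self.__dict__.copy()
        state.pop("_model", None)
        return state


def cached(obj):
    """Emulate st.cache_data: store a pickled copy, return unpickled copies."""
    return pickle.loads(pickle.dumps(obj))


original = Embedding()
print(hasattr(original, "_model"))  # True: the live object has it
copy = cached(original)
print(hasattr(copy, "_model"))      # False: lost in the pickle round-trip
```

If this is indeed the cause, Streamlit's documented alternative for unserializable objects such as ML models and database connections is `st.cache_resource`, which returns the same live object instead of a pickled copy; switching the decorator on `create_index` accordingly (untested here) would match the observation that removing the cache decorator makes it work again.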