Why does GPT4AllEmbeddings need internet access? #22640
Replies: 2 comments
-
It happened to me too. I downloaded the model and tested that it works by pointing to the location of the downloaded file, but it still needs Internet access.
-
Hi! I might be too late to help the OP, but I hope my response is useful for future users who come across this discussion. When using the GPT4All embeddings via LangChain, you have to pass `gpt4all_kwargs={'allow_download': False}` to keep the model from reaching out to the internet:

```python
from langchain_community.embeddings import GPT4AllEmbeddings

model_path = "path\\to\\models\\nomic-embed-text-v1.5.f16.gguf"
model = GPT4AllEmbeddings(
    model_name=model_path,
    gpt4all_kwargs={'allow_download': False},
)
embeddings = model.embed_documents([
    'This is a test string to embed,',
    "but I don't care about the result,",
    'I just want to use my model offline.',
])
```

It might be confusing, but this is the intended behavior of the parameter, as you can see in the GPT4All GitHub repo.
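For downstream use, `embed_documents` returns one vector (a list of floats) per input string, and retrieval typically ranks documents by cosine similarity against a query embedding. A minimal pure-Python sketch of that ranking step, using toy 3-dimensional vectors in place of real model output (the vectors and dimensions here are illustrative only, not actual nomic-embed output):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); assumes equal-length, non-zero vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for embed_documents() output
doc_vecs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
query_vec = [1.0, 0.0, 0.0]

# Score every document against the query and pick the best match
scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # → 0
```

Real embedding vectors are just longer lists of floats, so the same ranking logic applies unchanged; vector stores simply do this comparison at scale.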
-
I'm trying to follow the RAG example at https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_rag_agent_llama3_local.ipynb. I was able to get it all working on my Mac, but when I continued my testing in an air-gapped environment, it hung on the "Index" section. After I moved back to an environment that had internet access, it started working again. I then turned off my Wi-Fi, and the procedure below produced an error.
Is there a different embedding I can use that does not reach back out to the internet?