Architecture not yet supported for local LLM inference via Oobabooga
#636
Comments
I hit this issue after updating today. From what I can tell, Oobabooga is a web UI. I don't want a UI, sigh.
I am having the same issue starting interpreter with --local, but on the default Mistral 7B, on the newest commit. Stuck on "Getting started...".
Exactly the same for me too.
Hi, I have the same problem. OS: Windows 10. Previous working Open Interpreter versions for me: 0.1.3 and 0.1.7. I ran interpreter --local, then hit Ctrl+C because it is not doing anything after "Getting started...". Error:
I'm also seeing this error.
I also have the same bug on Windows 11, Python 3.11.
Using an M1 Pro MacBook, had this issue.
Same here! Strange thing is, I run it from a venv. On a pure Linux (22.04) install it works flawlessly (albeit slowly, because the server runs on old hardware). I created the same setup from within WSL2 (Ubuntu 22.04) and now receive the message: "Architecture not yet supported for local LLM inference via Oobabooga".
Same here. I ran it on Windows 10 + WSL, and again with native Ubuntu. Got the same exception.
Same here, is there a solution? :(
Same issue, Windows 11, Python 3.11.
Same issue, MacBook M1 Pro, Python 3.11.
Is there an issue with CPU vs GPU?
Had the same issue with waiting at "Getting started..."; following the instructions of @Sunwood-ai-labs got the interpreter up and running locally again:
C:\Users\u>interpreter --local
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.
[?] Parameter count (smaller is faster, larger is more capable): 7B
[?] Quality (smaller is faster, larger is more capable): Small | Size: 2.6 GB, Estimated RAM usage: 5.1 GB
[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): n
Model found at C:\Users\u\AppData\Local\Open Interpreter\Open Interpreter\models\codellama-7b-instruct.Q2_K.gguf
▌ Model set to TheBloke/CodeLlama-7B-Instruct-GGUF
Open Interpreter will require approval before running code. Use interpreter -y to bypass this. Press CTRL-C to exit.
What would you like to do today?
@ericrallen I can't request review in Ooba. I believe those pull requests will fix many of the "Getting Started", "Server took too long", and "System not supported" issues on Windows.
Same error.
I'm seeing this too. I know that I can run the model locally with Ollama, so I find this odd.
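For anyone who wants to rule out the model itself, here is a quick sanity check outside Open Interpreter. It assumes Ollama is installed and that codellama is the tag you pulled; swap in whatever model name you're actually using:

```bash
# Run the model directly through Ollama; if this works, the weights and
# hardware are fine and the failure is in Open Interpreter's local backend setup.
ollama pull codellama
ollama run codellama "Write a one-line hello world in Python."

# Same check against Ollama's local REST API (default port 11434).
curl http://localhost:11434/api/generate \
  -d '{"model": "codellama", "prompt": "Say hi", "stream": false}'
```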
I have a fork of Ooba that fixes this: https://github.com/Notnaton/ooba. You can install it by running:
pip install git+https://github.com/Notnaton/ooba.git --force-reinstall
Thanks. I installed it via the above command, then launched open-interpreter again, but I'm getting the same error.
I think we can close this one now that LM Studio is powering Open Interpreter’s local models. Please reopen it or ping me if I’m wrong.
LM Studio isn't available on Linux, so many of us are using Ollama instead.
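For the Linux + Ollama route, here is a rough sketch of pointing Open Interpreter at a model served by Ollama. The `ollama/` model prefix and the `--api_base` flag are assumptions about how the 0.1.x CLI routes custom models through LiteLLM, so check `interpreter --help` on your build before relying on them:

```bash
# Sketch: serve a model with Ollama, then ask Open Interpreter to use it.
ollama pull codellama        # fetch the model locally
ollama serve &               # skip if the Ollama service is already running (port 11434)
# Assumed flags: --model with LiteLLM's ollama/ prefix, --api_base for the endpoint.
interpreter --model ollama/codellama --api_base http://localhost:11434
```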
Ollama is great! I think there’s a Linux beta of LM Studio, too.
Describe the bug
Windows 11
I'm trying to run a local model:
interpreter --local --model /TheBloke/CodeLlama-34B-Instruct-GGUF
After the model has been downloaded, it gets stuck on:
Getting started...
Pressing Ctrl+C gives:
setup_text_llm.py", line 56, in setup_text_llm
raise Exception("Architecture not yet supported for local LLM inference via
Oobabooga
. Please runinterpreter
to connect to a cloud model.")Exception: Architecture not yet supported for local LLM inference via
Oobabooga
. Please runinterpreter
to connect to a cloud model.How can be fixed?
Reproduce
interpreter --local --model /TheBloke/CodeLlama-34B-Instruct-GGUF
Expected behavior
Open Interpreter starts with the local model.
Screenshots
Open Interpreter version
0.1.9
Python version
3.11.5
Operating System name and version
Windows 11
Additional context
No response