
Getting stuck at "Getting started..." when running local model on Windows PC #665

Closed
xinranli0809 opened this issue Oct 20, 2023 · 5 comments
Labels
Bug Something isn't working

Comments

@xinranli0809

xinranli0809 commented Oct 20, 2023

Describe the bug

Once the model and Ooba are downloaded, running interpreter --local prints "Getting started..." and then hangs forever.

I suspected some arguments might be wrong when calling the oobabooga server. Upon checking llm.py, at line 44, there is:

# Start oobabooga server
model_dir = "/".join(path.split("/")[:-1])
model_name = path.split("/")[-1]

This does not seem to work on Windows PCs, since Windows paths use \ as the separator. So I used this instead:

# os.path.split uses the platform's separator, so it also works with Windows backslashes
model_dir, model_name = os.path.split(path)
model_dir += "\\"

and got the correct model_dir and model_name values.
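
For illustration, here is a quick sketch (with a hypothetical Windows-style path) of why splitting on "/" fails there while os.path.split handles the separator correctly:

import os

# Hypothetical model path, just to demonstrate the behavior on Windows
path = r"C:\models\mistral\mistral-7b.gguf"

# Original approach: there is no "/" to split on, so nothing gets separated
print("/".join(path.split("/")[:-1]))  # -> "" (empty model_dir)
print(path.split("/")[-1])             # -> the entire path, not the file name

# os.path.split uses the platform's separator, so on Windows it splits correctly
model_dir, model_name = os.path.split(path)
print(model_dir)   # -> C:\models\mistral
print(model_name)  # -> mistral-7b.gguf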

At this stage I still got stuck on "Getting started...", and I realized the code can't get past the while-True loops that look for available ports in a reasonable amount of time (not sure if it's just my PC being too slow?). So instead of searching from port 2000 to 10000, I narrowed the search range down to the ports commonly used by the Gradio web UI.

# Line 47
# Find an open port
while True:
    self.port = random.randint(7860, 7870)
    open_ports = get_open_ports(7860, 7870)
    if self.port not in open_ports:
        break

# Also find an open port for blocking -- it will error otherwise
while True:
    unused_blocking_port = random.randint(7860, 7870)
    open_ports = get_open_ports(7860, 7870)
    if unused_blocking_port not in open_ports:
        break

# Line 94
open_ports = get_open_ports(7860, 7870)

The program was able to load the model quickly and successfully afterward.
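
As a side note, an approach that avoids scanning a port range altogether is to bind to port 0 and let the OS pick an unused port. This is not what llm.py does today, just a minimal sketch of the idea:

import socket

def find_free_port():
    # Binding to port 0 asks the OS to assign any currently unused port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = find_free_port()
print(f"Starting oobabooga on port {port}")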

Reproduce

  1. Model+Ooba already present, using a Windows PC
  2. interpreter --local

Expected behavior

Ooba starts and the model loads in a reasonable amount of time without getting stuck. It would be great to get some feedback on the status of the loading process.

Screenshots

[screenshot attached]

Open Interpreter version

0.1.10

Python version

3.11.6

Operating System name and version

Windows 10

Additional context

No response

xinranli0809 added the Bug (Something isn't working) label on Oct 20, 2023
@Notnaton
Collaborator

Hi, there are some fixes for this already for ooba:

Please check if these pull requests work for you :)

@josedandrade

After 7 hours of waiting, it runs.

I have a 3070 Ti with 8GB VRAM
Windows 11
i7, 40GB RAM

I did not know there was a chatbot with a GUI, but there is.

So if it is stuck, just wait. I should send the whole output; perhaps someone can find interesting info in it.

[screenshots: Mistral1, Mistral2, Mistral3]

@gostopy

gostopy commented Oct 27, 2023

How do I solve this problem? I tried a lot of local models and they all get stuck at "Getting started..."

@Notnaton
Collaborator

Notnaton commented Oct 27, 2023

@ericrallen This is also an ooba issue. I have made a fork for Windows users to use. Should we pin my fork until ooba is merged, so we get fewer broken-Windows issues?

https://github.com/Notnaton/ooba

This is also a duplicate of #636.

@ericrallen
Collaborator

Hey there, folks!

Now that LM Studio is the underlying local model provider, I think we can close this one.

Please reopen it if you still experience the issue in the latest version of Open Interpreter.
