
Getting stuck at "Setting up the language model..." / Windows / local model #688

Closed
Tom1827 opened this issue Oct 24, 2023 · 6 comments
Labels: Bug (Something isn't working), Local Model (Running a local model)

Comments

@Tom1827

Tom1827 commented Oct 24, 2023

Describe the bug

On Windows, trying to get a local model to start, I'm getting stuck every time at the "Setting up the language model..." stage.

I've tried several times, removing open-interpreter, ooba, and the ooba folder in C:\Users\<user>\AppData\Local\ooba, then reinstalling and redownloading Mistral 7B. It has got stuck at a different point each time. The attempt shown in the screenshot has now been running for about two hours.

Reproduce

  • Install per the instructions
  • Run interpreter (i.e. for the first time)
  • Press enter to use local model
  • Wait for the model to download

...then it grinds to a halt at some point while setting up the language model.

Expected behavior

To not get stuck, and finish setting up the language model? :)

Screenshots

[Screenshot: Screenshot 2023-10-25 001421]

Open Interpreter version

0.1.10

Python version

3.10.2

Operating System name and version

Windows 11 Pro

Additional context

Ryzen 7 5800X, 32GB RAM, NVidia 3060Ti 8GB

@Tom1827 added the Bug label Oct 24, 2023
@bmwhite1611

Yeah, I waited like half a day and then closed the window. Now I run interpreter --local and it just says "Getting Started..." forever. I wish I could go back to Llama, but they turned it off for some reason, or else I just can't figure out how to go back to a larger model like that... I should never have uninstalled it...

@sebvannistel

Yeah, same issue here. They still haven't fixed it. I've been waiting over 2 weeks for a fix. #636

@thisdp

thisdp commented Oct 28, 2023

It scans ports from 2000 to 10000 at a rate of 1 port per second on a single thread, so a full scan takes at least 8000 seconds. And in llm.py -> class llm -> __init__, there are at least 3 such scans....
This sucks:

while True:
    self.port = random.randint(2000, 10000)
    open_ports = get_open_ports(2000, 10000)
    if self.port not in open_ports:
        break
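
A much faster alternative is to skip the scan entirely and ask the OS for a free port by binding to port 0. A minimal sketch of the technique, not the library's actual code:

import socket

def find_free_port() -> int:
    # Binding to port 0 makes the OS assign any free ephemeral port;
    # this returns instantly instead of probing 8000 ports one at a time.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

Then __init__ could just do self.port = find_free_port() once, with no scanning loop at all.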

@thisdp
Copy link

thisdp commented Oct 28, 2023

And there's even a bug in path management: llm.py line 45, under Windows, should use "\" not "/".
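
The cleaner fix is to not hardcode a separator at all and let the standard library pick the right one. A sketch, assuming the path is currently built by string concatenation (the directory names here are just for illustration):

import os
from pathlib import Path

# Hypothetical example: build the ooba model directory portably.
# pathlib inserts the correct separator ("\" on Windows, "/" elsewhere).
models_dir = Path.home() / "AppData" / "Local" / "ooba" / "models"

# Equivalent with os.path.join:
models_dir_str = os.path.join(os.path.expanduser("~"), "AppData", "Local", "ooba", "models")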

@ericrallen added the Local Model label Oct 28, 2023
@Notnaton
Collaborator

Notnaton commented Oct 29, 2023

Hi @thisdp, while the PRs are being merged, you can try my fork, which fixes Ooba on Windows. It fixes both the paths and the port scanning, plus some other things.
https://github.com/Notnaton/ooba
pip install git+https://github.com/Notnaton/ooba.git@windows --force-reinstall

@ericrallen
Collaborator

Hey there, folks!

Now that LM Studio is the underlying local model provider, I think we can close this one.

Please reopen it if you still experience the issue in the latest version of Open Interpreter.
