undefined symbol: cget_col_row_stats / 8-bit not working / libsbitsandbytes_cpu.so not found #400
Comments
I have the exact same issue with an Nvidia GPU and Win10; I've tried a fresh install several times and nothing seems to work. Very frustrating, given that yesterday it worked just fine. I shouldn't have done a git pull today; it seems to have broken the UI. |
Same issue on linux as well. I even tried inside a 11.8.0-runtime-ubuntu22.04 Nvidia container |
I'm on CPU. It does work, but I'm not sure I'm getting the best out of it. Still getting short, low-quality responses with very little RP, which is why I did a fresh install.
|
That's just a warning, not a bug |
But it says no GPU detected, falling back to CPU, I'd assume that's not the correct behavior? |
OP doesn't have a GPU, so it's expected behavior. On Windows, I recommend installing using the newly recommended WSL method. |
In my case I had the same issue and I have a gpu passed with --gpus=all inside docker :( |
Unfortunately it's not possible, Microsoft store doesn't work in my country. Is it possible to download the previous working version of UI somewhere? |
You may be better just running an Ubuntu VM, your GPU should pass through |
I have deleted my conda environment and created a new one following the README and now I also can't use 8bit heh
|
Just tried to run: Traceback (most recent call last): During handling of the above exception, another exception occurred: Traceback (most recent call last): |
Installing those older versions had worked for me briefly, then it stopped working again. |
Getting this one as well. |
This may be relevant |
Nothing changed in bitsandbytes. |
Ok I got it
|
Running
For me. |
I think the problem was the recent PyTorch update. |
Doing I am using miniconda so my folder was |
I'm using Anaconda3, so I couldn't do step 2; I just can't find the folders. But I did everything else and was able to launch the UI, and it seems to be working fine right now, thank you! Although I've found those files in |
So I've changed those files in Also |
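The specific file changes are elided in the thread, but the widely shared Windows workaround for this bitsandbytes error is copying the CUDA build of the library over the missing CPU one inside the bitsandbytes package directory. A minimal sketch of that copy step (the DLL filenames are assumptions based on common reports, not taken from this thread, so check which `libbitsandbytes_cuda*.dll` you actually have):

```python
import os
import shutil

def patch_bitsandbytes(pkg_dir: str,
                       cuda_dll: str = "libbitsandbytes_cuda117.dll",
                       cpu_dll: str = "libbitsandbytes_cpu.dll") -> str:
    """Copy the CUDA build of the bitsandbytes library over the CPU stub.

    Filenames here are assumptions; look inside your bitsandbytes
    package directory to see which CUDA build is actually present.
    """
    src = os.path.join(pkg_dir, cuda_dll)
    dst = os.path.join(pkg_dir, cpu_dll)
    if not os.path.exists(src):
        raise FileNotFoundError(src)
    shutil.copyfile(src, dst)
    return dst
```

This only papers over the library lookup; if the underlying CUDA setup is wrong, the 8-bit path can still fail later.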
I had a problem with these instructions which I narrowed down to this line:
PyTorch has now updated to 2.0.0, so running this command will install 2.0.0, but errors occur when running this code with 2.0.0, and using
would install a version of CUDA which is not compatible with PyTorch 2.0.0, resulting in @KirillRepinArt's error:
To fix this, simply install the version of PyTorch immediately preceding 2.0.0. I did this using the command from the PyTorch website instead: I also didn't have to do |
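The exact install command didn't survive the scrape, but the gist of the fix above is pinning PyTorch to the last release before 2.0.0. A small guard like this (a hypothetical helper, not from the thread) captures the version check being described:

```python
def is_pre_2(torch_version: str) -> bool:
    """True if a torch version string (e.g. '1.13.1+cu117') predates 2.0.0,
    the release this thread reports as breaking the 8-bit setup."""
    # Strip any local build tag like '+cu117' before comparing.
    major = int(torch_version.split("+")[0].split(".")[0])
    return major < 2
```

For example, `is_pre_2("1.13.1+cu117")` is `True` while `is_pre_2("2.0.0")` is `False`; the PyTorch website's "previous versions" page lists the matching pip commands per CUDA build.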
This worked for me, thank you! I had to use though As in the previous version now But I had these issues before the last update, and everything that worked previously is also working now, so thanks again! |
I tried the command and got this error: (d:\myenvs\textgen1) D:\text-generation-webui\repositories\GPTQ-for-LLaMa>python setup_cuda.py install |
@gsgoldma I ran into this error as well. Your system CUDA version is 12.0, which isn't compatible with your PyTorch build, which was compiled against CUDA 11.7. You need to downgrade your CUDA toolkit to a version compatible with that PyTorch build. You could try redoing everything with my instructions as well. |
Works on linux with CUDA 12.1: |
Note that on windows, if you have Python 3.10 set as sys path variable, the python 3.10 directory is entirely skipped. So the path is "cd Drive path/users/yourname/etcetcetc/miniconda3/envs/textgen/lib/site-packages/bitsandbytes/". |
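Rather than hand-assembling the site-packages path as above, you can ask Python where a package lives. A sketch that works for any installed package, bitsandbytes included, without importing it:

```python
import importlib.util
import os

def package_dir(name: str) -> str:
    """Return the directory containing an installed package,
    without importing it (useful when the import itself is broken)."""
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(name)
    return os.path.dirname(spec.origin)
```

Run inside the activated conda environment, `package_dir("bitsandbytes")` prints the exact folder the instructions in this thread tell you to edit.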
I got the same issue when using the new one-click installer, even though it is supposed to apply the dirty fixes automatically. The Nvidia GPU is not recognized, and it uses only the CPU when I try --load-in-8bit. CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source? |
cc @jllllll |
There are no dirty fixes anymore. Try this: Also, you may have installed the cpu version of torch. I've seen that happen before, though I don't know the cause. You can try this to replace it:
This will tell you about your torch installation: |
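The diagnostic command itself is elided above, but one common way to get the same information is a short script like this (a sketch that assumes nothing about the install; a CPU-only wheel reports `torch.version.cuda` as `None`, which matches the "accidentally installed the CPU version" case described above):

```python
def torch_report():
    """Return (version, cuda_build, cuda_available), or None if torch is absent."""
    try:
        import torch
    except ImportError:
        return None
    return (torch.__version__, torch.version.cuda, torch.cuda.is_available())

print(torch_report())
```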
you forgot an s |
The text-generation-webui works with the GPU if I don't use the 8-bit flag. I tried all these suggestions; results below. First attempt: added the following line to start-webui.bat before call python server.py, to make sure the environment is the correct one:
|
@MikkoHaavisto Based on that log, it appears that the virtual environment was not properly created for some reason. This is odd, because the script should have told you this, instead it just created an empty environment. I guess I should add a check to see if python was actually installed to the environment or not. Did you not get any errors during install? Try this installer, it may be more reliable: https://github.com/jllllll/one-click-installers/tree/oobabooga-windows-miniconda |
@jllllll I tried that installer. The installation ended after prompting "Press any key to continue . . .", but without any mention of errors, after installing a thousand packages. I think it did that the last time I installed as well. I wonder if it's supposed to say something like "install completed successfully"? I tried to copy all the output from the install, but it just closed the cmd window. Anyway, the 8-bit quantization seems to work now! Thank you! I loaded the model without --load-in-8bit and it took 24GB VRAM (100%). With --load-in-8bit it takes 15.5GB. The webui starts with no errors, and I can generate text in both modes. By the way, the https://huggingface.co/chavinlo/gpt4-x-alpaca model is giving me completely useless and strange answers, but that probably has nothing to do with these issues? Output after successful first install: Starting the web UI... ===================================BUG REPORT===================================
|
@MikkoHaavisto Yeah, there isn't any message for successful installation, mostly because there is no easy way to determine that with batch. As far as strange answers are concerned, you likely just need to adjust the generation parameters. Try these:
For larger models, you should lower the repetition penalty. For alpaca/assistant models, you may need to lower temperature. |
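As a concrete illustration of that advice, the relevant knobs in a typical `transformers`-style generation config look like this (the values are hypothetical starting points, not the ones recommended in the thread):

```python
# Hypothetical starting values illustrating the advice above:
# nudge repetition_penalty toward 1.0 for larger models, and lower
# temperature for alpaca/assistant-style models.
gen_params = {
    "temperature": 0.7,         # try ~0.5 for assistant-tuned models
    "top_p": 0.9,
    "repetition_penalty": 1.1,  # closer to 1.0 for larger models
    "max_new_tokens": 200,
}
```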
for step 2. To get the path to your conda environment on linux:
|
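The command itself didn't survive the scrape, but from inside the activated environment Python can report the same path, since `sys.prefix` points at the root of the active environment (the conda env directory when one is activated), with `site-packages` beneath it:

```python
import sys

def env_root() -> str:
    """Root of the currently active Python environment (sys.prefix).

    Inside an activated conda env this is the env directory,
    e.g. ~/miniconda3/envs/textgen on Linux.
    """
    return sys.prefix

print(env_root())
```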
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment. |
|
Describe the bug
On starting the server, I receive the following error messages:
This is not the same as #388.
Is there an existing issue for this?
Reproduction
Start web UI using the supplied batch file.
Screenshot
Logs
System Info