
undefined symbol: cget_col_row_stats / 8-bit not working / libsbitsandbytes_cpu.so not found #400

Closed
AlexysLovesLexxie opened this issue Mar 18, 2023 · 38 comments
Labels: bug, stale

Comments

@AlexysLovesLexxie

Describe the bug

On starting the server, I receive the following error messages:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
C:\Oobabooga_new\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
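For context, the "argument of type 'WindowsPath' is not iterable" message comes from a substring test being applied to a pathlib object instead of a string. A minimal stdlib repro (illustrative only, not the actual bitsandbytes code):

```python
# Repro of the TypeError above: `in` on a pathlib path fails because paths
# are neither containers nor iterable. PureWindowsPath is used so this runs
# on any OS; on Windows the concrete class (and message) says 'WindowsPath'.
from pathlib import PureWindowsPath

path = PureWindowsPath(r"C:\tools\cuda\bin")

try:
    "cuda" in path  # raises TypeError
except TypeError as exc:
    print(exc)  # e.g. "argument of type 'PureWindowsPath' is not iterable"

# The fix is to compare against the string form of the path:
print("cuda" in str(path))  # True
```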

This is not the same as #388.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

Start web UI using the supplied batch file.

Screenshot

Screenshot 2023-03-18 024607

Logs

None.  See screenshot.

System Info

Windows 11
No GPU, CPU only
CPU: Ryzen 7 6800H
RAM: 32 GB
@AlexysLovesLexxie added the bug label Mar 18, 2023
@KirillRepinArt

KirillRepinArt commented Mar 18, 2023

I have the exact same issue with an Nvidia GPU on Win10; I tried a fresh install several times and nothing seems to work. Very frustrating, given that it worked just fine yesterday. I shouldn't have done git pull today; it seems to have broken the UI.

@fuomag9

fuomag9 commented Mar 18, 2023

Same issue on Linux as well.

I even tried inside an 11.8.0-runtime-ubuntu22.04 Nvidia container

@AlexysLovesLexxie
Author

AlexysLovesLexxie commented Mar 18, 2023 via email

@oobabooga
Owner

That's just a warning, not a bug

@KirillRepinArt

But it says no GPU detected, falling back to CPU; I'd assume that's not the correct behavior?

@oobabooga
Owner

OP doesn't have a GPU, so it's expected behavior

On Windows, I recommend installing using the new WSL recommended method

@fuomag9

fuomag9 commented Mar 18, 2023

OP doesn't have a GPU, so it's expected behavior

On Windows, I recommend installing using the new WSL recommended method

In my case I had the same issue, and I have a GPU passed through with --gpus=all inside Docker :(

@KirillRepinArt

OP doesn't have a GPU, so it's expected behavior

On Windows, I recommend installing using the new WSL recommended method

Unfortunately that's not possible: the Microsoft Store doesn't work in my country. Is it possible to download the previous working version of the UI somewhere?

@olihough86

You may be better off just running an Ubuntu VM; your GPU should pass through.

@oobabooga reopened this Mar 18, 2023
@oobabooga
Owner

I have deleted my conda environment and created a new one following the README and now I also can't use 8bit heh

undefined symbol: cget_col_row_stats

bitsandbytes-foundation/bitsandbytes#112

@KirillRepinArt

KirillRepinArt commented Mar 18, 2023

Just tried to run:
conda install torchvision=0.14.1 torchaudio=0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
and did a git pull in my local folder.
Everything went successfully, but now I'm getting:

Traceback (most recent call last):
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\requests\compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "F:\Program Files (x86)\textgen_webui_04\text-generation-webui\server.py", line 10, in <module>
import gradio as gr
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\gradio\__init__.py", line 3, in <module>
import gradio.components as components
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\gradio\components.py", line 34, in <module>
from gradio import media_data, processing_utils, utils
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\gradio\processing_utils.py", line 19, in <module>
import requests
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\requests\__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\requests\exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\requests\compat.py", line 13, in <module>
import charset_normalizer as chardet
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\charset_normalizer\__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\charset_normalizer\api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer\md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (F:\Anakonda3\envs\textgen_webui_04\lib\site-packages\charset_normalizer\constant.py)

@oobabooga
Owner

Installing those older versions had worked for me briefly, then it stopped working again.

@fuomag9

fuomag9 commented Mar 18, 2023

I have deleted my conda environment and created a new one following the README and now I also can't use 8bit heh

undefined symbol: cget_col_row_stats

TimDettmers/bitsandbytes#112

Getting this one as well.

@oobabooga
Owner

This may be relevant

bitsandbytes-foundation/bitsandbytes#156 (comment)

@oobabooga changed the title from "Required library version not found: libsbitsandbytes_cpu.so." to "libsbitsandbytes_cpu.so not found / undefined symbol: cget_col_row_stats / 8-bit not working" Mar 18, 2023
@oobabooga pinned this issue Mar 18, 2023
@Ph0rk0z
Contributor

Ph0rk0z commented Mar 18, 2023

Nothing changed in bits&bytes.

@oobabooga
Owner

Ok I got it

1. Start over:
conda deactivate
conda remove -n textgen --all
conda create -n textgen python=3.10.9
conda activate textgen
pip3 install torch torchvision torchaudio
cd text-generation-webui
pip install -r requirements.txt
2. Do the dirty fix from "bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats" (bitsandbytes-foundation/bitsandbytes#156 (comment)):
cd /home/yourname/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so
cd -
3. Install cudatoolkit:
conda install cudatoolkit
4. It now works:
python server.py --listen --model llama-7b --lora alpaca-lora-7b --load-in-8bit
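Step 2 above can also be done programmatically. This is a hedged sketch of the same `cp`; the function name and default library name are illustrative, not part of any project's API:

```python
# Copy the GPU build of the bitsandbytes shared library over the CPU stub
# that the fallback loader picks up. Equivalent to:
#   cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so
import shutil
from pathlib import Path

def apply_dirty_fix(bnb_dir: Path, cuda_lib: str = "libbitsandbytes_cuda120.so") -> Path:
    """Overwrite libbitsandbytes_cpu.so with the CUDA binary in bnb_dir."""
    src = bnb_dir / cuda_lib
    dst = bnb_dir / "libbitsandbytes_cpu.so"
    shutil.copyfile(src, dst)
    return dst
```

The directory to pass is the bitsandbytes folder inside the active environment's site-packages.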

@Arargd

Arargd commented Mar 18, 2023

This may be relevant

TimDettmers/bitsandbytes#156 (comment)

Running pip3 install torch torchvision torchaudio on the new commit and replacing the CPU file with the cuda117 file seems to have fixed

undefined symbol: cget_col_row_stats

for me.

@oobabooga
Owner

Nothing changed in bits&bytes.

I think the problem was the recent pytorch update.

@oobabooga changed the title from "libsbitsandbytes_cpu.so not found / undefined symbol: cget_col_row_stats / 8-bit not working" to "undefined symbol: cget_col_row_stats / 8-bit not working / libsbitsandbytes_cpu.so not found" Mar 18, 2023
@fuomag9

fuomag9 commented Mar 18, 2023

Ok I got it […]

Doing git pull and then this worked for me as well!

I am using miniconda so my folder was /home/$USER/.conda/envs/textgen/lib/python3.10/site-packages/bitsandbytes/

@KirillRepinArt

KirillRepinArt commented Mar 18, 2023

conda install cudatoolkit

I'm using Anaconda3, so I couldn't do step 2 (I just couldn't find the folders), but I did everything else and was able to launch the UI. It seems to be working fine right now, thank you!

I did later find those files in F:\Anakonda3\envs\textgen_webui_05\Lib\site-packages\bitsandbytes, though; are those the same files?
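The folder layout differs between installs (Anaconda on Windows uses Lib\site-packages, conda on Linux uses lib/python3.10/site-packages). Rather than guessing, one can ask the interpreter of the active environment where its site-packages is; a small stdlib sketch:

```python
# Print the active environment's site-packages directory, which is where
# the bitsandbytes folder (and its .so/.dll files) lives.
import sysconfig

site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)
```

Run this with the same python that the web UI uses, otherwise it reports a different environment.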

@KirillRepinArt

KirillRepinArt commented Mar 18, 2023

So I've replaced those files in F:\Anakonda3\envs\textgen_webui_05\Lib\site-packages\bitsandbytes, but nothing seems to change; it still gives the warning:
Warning: torch.cuda.is_available() returned False.
It works, but doesn't seem to use the GPU at all.

Also, llama-7b-hf --gptq-bits 4 doesn't work anymore, although it used to in the previous version of the UI; it says CUDA extension not installed.
It was possible before to load llama-13b-hf --auto-devices --gpu-memory 4, but now it just eats all 32 GB of RAM, so I aborted it.

@xNul
Contributor

xNul commented Mar 19, 2023

Ok I got it […]

I had a problem with these instructions which I narrowed down to this line:

pip3 install torch torchvision torchaudio

PyTorch has now updated to 2.0.0, so running this command will install 2.0.0. But errors occur when running this code with 2.0.0, and using

conda install cudatoolkit

would install a version of CUDA which is not compatible with PyTorch 2.0.0, resulting in @KirillRepinArt's error:

Warning: torch.cuda.is_available() returned False.

To fix this, simply install the version of PyTorch immediately preceding 2.0.0. I did this using the command from the PyTorch website instead:
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116

I also didn't have to do conda install cudatoolkit after using this pip command.
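The pinning command above can be parameterized by CUDA tag, since a later comment uses the same versions with cu117. A hedged helper (pure string formatting; the helper name is made up, and the version numbers come from this thread, not from querying PyPI):

```python
# Build the pip command that pins the last pre-2.0 torch stack for a given
# CUDA tag, e.g. "116" or "117".
def torch_pin_command(cuda: str = "116") -> str:
    index = f"https://download.pytorch.org/whl/cu{cuda}"
    return (
        f"pip install torch==1.13.1+cu{cuda} torchvision==0.14.1+cu{cuda} "
        f"torchaudio==0.13.1 --extra-index-url {index}"
    )

print(torch_pin_command("116"))
```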

@KirillRepinArt

KirillRepinArt commented Mar 19, 2023

Ok I got it […]

To fix this, simply install the version of PyTorch immediately preceding 2.0.0. […]

This worked for me, thank you! Though I had to use pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117 for CUDA 11.7, and I also didn't do conda install cudatoolkit.
Now it seems to be working as before: it uses the GPU, and I can load llama-7b-hf --cai-chat --gptq-bits 4.

As in the previous version, --load-in-8bit doesn't work for me anymore; it gives CUDA Setup failed despite GPU being available.
I also can't load --model llama-13b-hf --gptq-bits 4 --cai-chat --auto-devices --gpu-memory 4; it gives me torch.cuda.OutOfMemoryError: CUDA out of memory.

But I had these issues before the last update, and everything that worked previously is also working now, so thanks again!

@gsgoldma

I tried the command and got this error:
(d:\myenvs\textgen1) D:\text-generation-webui\repositories\GPTQ-for-LLaMa>pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
Collecting torch==1.13.1+cu117
Using cached https://download.pytorch.org/whl/cu117/torch-1.13.1%2Bcu117-cp310-cp310-win_amd64.whl (2255.4 MB)
Collecting torchvision==0.14.1+cu117
Using cached https://download.pytorch.org/whl/cu117/torchvision-0.14.1%2Bcu117-cp310-cp310-win_amd64.whl (4.8 MB)
Collecting torchaudio==0.13.1
Using cached https://download.pytorch.org/whl/cu117/torchaudio-0.13.1%2Bcu117-cp310-cp310-win_amd64.whl (2.3 MB)
Requirement already satisfied: typing-extensions in d:\myenvs\textgen1\lib\site-packages (from torch==1.13.1+cu117) (4.5.0)
Requirement already satisfied: numpy in d:\myenvs\textgen1\lib\site-packages (from torchvision==0.14.1+cu117) (1.24.2)
Requirement already satisfied: requests in d:\myenvs\textgen1\lib\site-packages (from torchvision==0.14.1+cu117) (2.28.2)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\myenvs\textgen1\lib\site-packages (from torchvision==0.14.1+cu117) (9.4.0)
Requirement already satisfied: certifi>=2017.4.17 in d:\myenvs\textgen1\lib\site-packages (from requests->torchvision==0.14.1+cu117) (2022.12.7)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\myenvs\textgen1\lib\site-packages (from requests->torchvision==0.14.1+cu117) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in d:\myenvs\textgen1\lib\site-packages (from requests->torchvision==0.14.1+cu117) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\myenvs\textgen1\lib\site-packages (from requests->torchvision==0.14.1+cu117) (1.26.15)
Installing collected packages: torch, torchvision, torchaudio
Attempting uninstall: torch
Found existing installation: torch 2.0.0
Uninstalling torch-2.0.0:
Successfully uninstalled torch-2.0.0
Attempting uninstall: torchvision
Found existing installation: torchvision 0.15.0
Uninstalling torchvision-0.15.0:
Successfully uninstalled torchvision-0.15.0
Attempting uninstall: torchaudio
Found existing installation: torchaudio 2.0.0
Uninstalling torchaudio-2.0.0:
Successfully uninstalled torchaudio-2.0.0
Successfully installed torch-1.13.1+cu117 torchaudio-0.13.1+cu117 torchvision-0.14.1+cu117

(d:\myenvs\textgen1) D:\text-generation-webui\repositories\GPTQ-for-LLaMa>python setup_cuda.py install
running install
d:\myenvs\textgen1\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
d:\myenvs\textgen1\lib\site-packages\setuptools\command\easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running bdist_egg
running egg_info
writing quant_cuda.egg-info\PKG-INFO
writing dependency_links to quant_cuda.egg-info\dependency_links.txt
writing top-level names to quant_cuda.egg-info\top_level.txt
reading manifest file 'quant_cuda.egg-info\SOURCES.txt'
writing manifest file 'quant_cuda.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_ext
d:\myenvs\textgen1\lib\site-packages\torch\utils\cpp_extension.py:358: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
Traceback (most recent call last):
File "D:\text-generation-webui\repositories\GPTQ-for-LLaMa\setup_cuda.py", line 4, in <module>
setup(
File "d:\myenvs\textgen1\lib\site-packages\setuptools\__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
return run_commands(dist)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\dist.py", line 1208, in run_command
super().run_command(command)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "d:\myenvs\textgen1\lib\site-packages\setuptools\command\install.py", line 74, in run
self.do_egg_install()
File "d:\myenvs\textgen1\lib\site-packages\setuptools\command\install.py", line 123, in do_egg_install
self.run_command('bdist_egg')
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\dist.py", line 1208, in run_command
super().run_command(command)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "d:\myenvs\textgen1\lib\site-packages\setuptools\command\bdist_egg.py", line 165, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\command\bdist_egg.py", line 151, in call_command
self.run_command(cmdname)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\dist.py", line 1208, in run_command
super().run_command(command)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "d:\myenvs\textgen1\lib\site-packages\setuptools\command\install_lib.py", line 11, in run
self.build()
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\command\install_lib.py", line 112, in build
self.run_command('build_ext')
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\dist.py", line 1208, in run_command
super().run_command(command)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "d:\myenvs\textgen1\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
_build_ext.run(self)
File "d:\myenvs\textgen1\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run
self.build_extensions()
File "d:\myenvs\textgen1\lib\site-packages\torch\utils\cpp_extension.py", line 499, in build_extensions
_check_cuda_version(compiler_name, compiler_version)
File "d:\myenvs\textgen1\lib\site-packages\torch\utils\cpp_extension.py", line 386, in _check_cuda_version
raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (12.0) mismatches the version that was used to compile
PyTorch (11.7). Please make sure to use the same CUDA versions.
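Roughly, the check that raises here compares the CUDA toolkit detected on the system against the one torch was compiled with. A simplified sketch (the real `_check_cuda_version` in torch has more nuance, e.g. around allowed minor-version combinations):

```python
# Simplified version of torch's build-time CUDA check: the detected toolkit
# must match the compile-time toolkit on major.minor, else a RuntimeError
# like the one above is raised.
def cuda_versions_match(detected: str, compiled: str) -> bool:
    return detected.split(".")[:2] == compiled.split(".")[:2]

print(cuda_versions_match("12.0", "11.7"))  # False -> build fails
print(cuda_versions_match("11.7", "11.7"))  # True
```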

@xNul
Contributor

xNul commented Mar 21, 2023

@gsgoldma I ran into this error as well. Your installed CUDA toolkit is 12.0, which doesn't match the CUDA version your PyTorch build was compiled against (11.7). You need to downgrade your CUDA toolkit to one compatible with that PyTorch build. You could try redoing everything with my instructions as well.

@quarterturn

Works on linux with CUDA 12.1:
NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1

@bucketcat

Ok I got it […]

Note that on Windows, if you have Python 3.10 on your PATH, the python3.10 directory level is skipped entirely, so the path is "cd Drive path/users/yourname/etcetcetc/miniconda3/envs/textgen/lib/site-packages/bitsandbytes/".

@MikkoHaavisto

I got the same issue when using the new one-click installer, even though it is supposed to apply the dirty fixes automatically. The Nvidia GPU is not recognized, and it uses only the CPU when I try --load-in-8bit

CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable

@oobabooga
Owner

cc @jllllll

@jllllll
Contributor

jllllll commented Apr 3, 2023

I got the same issue when using the new one-click-installer, even though it is supposed to do the dirty fixes automatically. Nvidia gpu is not recognized, and it uses only CPU when I try to --load-in-8bit

CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source? CUDA SETUP: Defaulting to libbitsandbytes_cpu.so... argument of type 'WindowsPath' is not iterable

There are no dirty fixes anymore. Try this:
#659 (comment)

Also, you may have installed the CPU version of torch. I've seen that happen before, though I don't know the cause. You can try this to replace it:

python -m pip install torch --index-url https://download.pytorch.org/whl/cu117 --force-reinstall
--OR--
python -m pip install https://download.pytorch.org/whl/cu117/torch-2.0.0%2Bcu117-cp310-cp310-win_amd64.whl  --force-reinstall

This will tell you about your torch installation: python -m torch.utils.collect_env
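A quick way to act on that diagnosis: CPU-only torch wheels carry a "+cpu" local version tag, while CUDA wheels carry "+cuXXX". A hedged helper (the function name is made up) that classifies the string you'd get from `torch.__version__` or the collect_env output:

```python
# Classify a torch version string by its local build tag to see which
# flavor pip actually installed.
def torch_build_flavor(version: str) -> str:
    if "+cu" in version:
        return "cuda " + version.split("+cu", 1)[1]
    if "+cpu" in version:
        return "cpu"
    return "unknown (no local build tag)"

print(torch_build_flavor("2.0.0+cu117"))  # cuda 117
print(torch_build_flavor("2.0.0+cpu"))    # cpu
```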

@belqit

belqit commented Apr 4, 2023

Ok I got it […]
cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so

You forgot an s:
cp libbitsandbytes_cuda120.so libsbitsandbytes_cpu.so

@MikkoHaavisto

MikkoHaavisto commented Apr 4, 2023

I got the same issue when using the new one-click-installer, even though it is supposed to do the dirty fixes automatically. Nvidia gpu is not recognized, and it uses only CPU when I try to --load-in-8bit.
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source? CUDA SETUP: Defaulting to libbitsandbytes_cpu.so... argument of type 'WindowsPath' is not iterable

There are no dirty fixes anymore. Try this: #659 (comment)

Also, you may have installed the cpu version of torch. I've seen that happen before, though I don't know the cause. You can try this to replace it:

python -m pip install torch --index-url https://download.pytorch.org/whl/cu117 --force-reinstall
--OR--
python -m pip install https://download.pytorch.org/whl/cu117/torch-2.0.0%2Bcu117-cp310-cp310-win_amd64.whl  --force-reinstall

This will tell you about your torch installation: python -m torch.utils.collect_env

The text-generation-webui works with the GPU if I don't use the 8-bit flag. I tried all these suggestions; results below.

First attempt: I added the following line to start-webui.bat before call python server.py, to make sure the environment is the correct one:
python -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.37.2-py3-none-any.whl --force-reinstall

Starting the web UI...
Collecting bitsandbytes==0.37.2
Downloading https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.37.2-py3-none-any.whl (13.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.1/13.1 MB 13.1 MB/s eta 0:00:00
Installing collected packages: bitsandbytes
Attempting uninstall: bitsandbytes
Found existing installation: bitsandbytes 0.37.2
Uninstalling bitsandbytes-0.37.2:
Successfully uninstalled bitsandbytes-0.37.2
Successfully installed bitsandbytes-0.37.2

[notice] A new release of pip available: 22.2.1 -> 23.0.1
[notice] To update, run: python.exe -m pip install --upgrade pip

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

CUDA SETUP: Loading binary C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll...
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
The following models are available:

  1. gpt4-x-alpaca
  2. GPTQ-for-LLaMa

Which one do you want to load? 1-2

1

Loading gpt4-x-alpaca...
Warning: torch.cuda.is_available() returned False.
This means that no GPU has been detected.
Falling back to CPU mode.

Second attempt: I instead added the following line to start-webui.bat before call python server.py:
python -m pip install torch --index-url https://download.pytorch.org/whl/cu117 --force-reinstall

Starting the web UI...
Looking in indexes: https://download.pytorch.org/whl/cu117
Collecting torch
Downloading https://download.pytorch.org/whl/cu117/torch-2.0.0%2Bcu117-cp310-cp310-win_amd64.whl (2343.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 GB 1.8 MB/s eta 0:00:00
Collecting networkx
Downloading https://download.pytorch.org/whl/networkx-3.0-py3-none-any.whl (2.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 14.5 MB/s eta 0:00:00
Collecting typing-extensions
Downloading https://download.pytorch.org/whl/typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting sympy
Downloading https://download.pytorch.org/whl/sympy-1.11.1-py3-none-any.whl (6.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.5/6.5 MB 13.8 MB/s eta 0:00:00
Collecting jinja2
Downloading https://download.pytorch.org/whl/Jinja2-3.1.2-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.1/133.1 kB 8.2 MB/s eta 0:00:00
Collecting filelock
Downloading https://download.pytorch.org/whl/filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting MarkupSafe>=2.0
Downloading https://download.pytorch.org/whl/MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl (16 kB)
Collecting mpmath>=0.19
Downloading https://download.pytorch.org/whl/mpmath-1.2.1-py3-none-any.whl (532 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 532.6/532.6 kB 16.3 MB/s eta 0:00:00
Installing collected packages: mpmath, typing-extensions, sympy, networkx, MarkupSafe, filelock, jinja2, torch
Attempting uninstall: mpmath
Found existing installation: mpmath 1.3.0
Uninstalling mpmath-1.3.0:
Successfully uninstalled mpmath-1.3.0
Attempting uninstall: typing-extensions
Found existing installation: typing_extensions 4.5.0
Uninstalling typing_extensions-4.5.0:
Successfully uninstalled typing_extensions-4.5.0
Attempting uninstall: sympy
Found existing installation: sympy 1.11.1
Uninstalling sympy-1.11.1:
Successfully uninstalled sympy-1.11.1
Attempting uninstall: networkx
Found existing installation: networkx 3.0
Uninstalling networkx-3.0:
Successfully uninstalled networkx-3.0
Attempting uninstall: MarkupSafe
Found existing installation: MarkupSafe 2.1.2
Uninstalling MarkupSafe-2.1.2:
Successfully uninstalled MarkupSafe-2.1.2
Attempting uninstall: filelock
Found existing installation: filelock 3.10.6
Uninstalling filelock-3.10.6:
Successfully uninstalled filelock-3.10.6
Attempting uninstall: jinja2
Found existing installation: Jinja2 3.1.2
Uninstalling Jinja2-3.1.2:
Successfully uninstalled Jinja2-3.1.2
Attempting uninstall: torch
Found existing installation: torch 2.0.0
Uninstalling torch-2.0.0:
Successfully uninstalled torch-2.0.0
Successfully installed MarkupSafe-2.1.2 filelock-3.9.0 jinja2-3.1.2 mpmath-1.2.1 networkx-3.0 sympy-1.11.1 torch-2.0.0+cu117 typing-extensions-4.4.0

[notice] A new release of pip available: 22.2.1 -> 23.0.1
[notice] To update, run: python.exe -m pip install --upgrade pip

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('D:/oobabooga-windows/installer_files/env/bin')}
warn(msg)
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: D:\oobabooga-windows\installer_files\env did not contain cudart64_110.dll as expected! Searching further paths...
warn(msg)
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('D:/oobabooga-windows/installer_files/env/bin'), WindowsPath('D:/oobabooga-windows/installer_files/env/Library/mingw-w64/bin'), WindowsPath('D:/oobabooga-windows/installer_files/env/Library/usr/bin'), WindowsPath('C:/Users/user/.dotnet/tools'), WindowsPath('D:/oobabooga-windows/installer_files/env/Library/bin'), WindowsPath('D:/oobabooga-windows/installer_files/env/Scripts')}
warn(msg)
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: D:\oobabooga-windows\installer_files\env;D:\oobabooga-windows\installer_files\env\Library\mingw-w64\bin;D:\oobabooga-windows\installer_files\env\Library\usr\bin;D:\oobabooga-windows\installer_files\env\Library\bin;D:\oobabooga-windows\installer_files\env\Scripts;D:\oobabooga-windows\installer_files\env\bin;D:\oobabooga-windows\installer_files\mamba\condabin;C:\Program Files\Oculus\Support\oculus-runtime;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Git\cmd;C:\Program Files\Microsoft SQL Server\150\Tools\Binn;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn;C:\Program Files\dotnet;C:\Program Files\nodejs;C:\Users\user\AppData\Local\Programs\Python\Python310\Scripts;C:\Users\user\AppData\Local\Programs\Python\Python310;C:\Users\user\AppData\Local\Microsoft\WindowsApps;;C:\Users\user\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\user.dotnet\tools;C:\Users\user\AppData\Roaming\npm did not contain cudart64_110.dll as expected! Searching further paths...
warn(msg)
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/Users/user')}
warn(msg)
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/W11PRO')}
warn(msg)
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('(D:/oobabooga-windows/installer_files/env) $P$G')}
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
warn(msg)
ERROR: libcudart.so could not be read from path: None!
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\main.py:141: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
warn(msg)
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version None
CUDA SETUP: Loading binary C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll...
C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
The following models are available:

  1. gpt4-x-alpaca
  2. GPTQ-for-LLaMa

Which one do you want to load? 1-2

1

Loading gpt4-x-alpaca...
Auto-assiging --gpu-memory 23 for your GPU to try to prevent out-of-memory errors.
You can manually set other values.
Loading checkpoint shards: 0%| | 0/6 [00:22<?, ?it/s]
Traceback (most recent call last):
File "D:\oobabooga-windows\text-generation-webui\server.py", line 274, in
shared.model, shared.tokenizer = load_model(shared.model_name)
File "D:\oobabooga-windows\text-generation-webui\modules\models.py", line 159, in load_model
model = AutoModelForCausalLM.from_pretrained(checkpoint, **params)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
return model_class.from_pretrained(
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2674, in from_pretrained
) = cls._load_pretrained_model(
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2997, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = load_state_dict_into_meta_model(
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 673, in load_state_dict_into_meta_model
set_module_8bit_tensor_to_device(model, param_name, param_device, value=param)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\bitsandbytes.py", line 70, in set_module_8bit_tensor_to_device
new_value = bnb.nn.Int8Params(new_value, requires_grad=False, has_fp16_weights=has_fp16_weights).to(device)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 196, in to
return self.cuda(device)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 160, in cuda
CB, CBt, SCB, SCBt, coo_tensorB = bnb.functional.double_quant(B)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1622, in double_quant
row_stats, col_stats, nnz_row_ptr = get_colrow_absmax(
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1511, in get_colrow_absmax
lib.cget_col_row_stats(ptrA, ptrRowStats, ptrColStats, ptrNnzrows, ct.c_float(threshold), rows, cols)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\ctypes\__init__.py", line 387, in __getattr__
func = self.__getitem__(name)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\ctypes\__init__.py", line 392, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'cget_col_row_stats' not found
Press any key to continue . . .
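The AttributeError at the bottom of this trace means that the library bitsandbytes ended up loading (the CPU build, per the warnings above) simply does not export `cget_col_row_stats`. A minimal way to check whether a given shared library exposes a symbol, using only `ctypes` (the commented-out DLL path is an assumption based on the log above; adjust it to your install):

```python
import ctypes

def has_symbol(lib, name):
    """Return True if the loaded shared library exports `name`."""
    try:
        getattr(lib, name)  # ctypes resolves the symbol lazily on first access
        return True
    except AttributeError:
        return False

# Hypothetical path taken from the log above; adjust to your install:
# lib = ctypes.CDLL(r"C:\Users\user\...\bitsandbytes\libbitsandbytes_cpu.dll")
# print(has_symbol(lib, "cget_col_row_stats"))  # expected False on a CPU-only build
```

If this prints False for the DLL that bitsandbytes logged at "Loading binary ...", the fix is to get a CUDA build of the library loaded instead, as discussed below.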

Third thing: I instead put this into start-webui.bat
python -m torch.utils.collect_env

Starting the web UI...
Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 531.41
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture=9
CurrentClockSpeed=3701
DeviceID=CPU0
Family=205
L2CacheSize=1536
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=3701
Name=Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz
ProcessorType=3
Revision=

Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0+cu117
[pip3] torchaudio==2.0.1
[conda] Could not collect

Rest is the same.

@jllllll
Contributor

@MikkoHaavisto Based on that log, it appears that the virtual environment was not properly created for some reason. This is odd, because the script should have told you this, instead it just created an empty environment. I guess I should add a check to see if python was actually installed to the environment or not. Did you not get any errors during install?

Try this installer, it may be more reliable: https://github.com/jllllll/one-click-installers/tree/oobabooga-windows-miniconda

@MikkoHaavisto

@jllllll I tried that installer. After installing a thousand packages, the installation ended by prompting "Press any key to continue . . .", without any mention of errors. I think it did that the last time I installed as well. I wonder if it's supposed to say something like "install completed successfully"?

I tried to copy all the output from the install, but it just closed the cmd window.

Anyway, 8-bit quantization seems to work now! Thank you! I loaded the model without --load-in-8bit and it took 24 GB of VRAM (100%). With --load-in-8bit it takes 15.5 GB. The web UI starts with no errors, and I can generate text in both modes.

By the way, the https://huggingface.co/chavinlo/gpt4-x-alpaca model is giving me completely useless and strange answers, but that probably has nothing to do with these issues?

Output after successful first install

Starting the web UI...

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

CUDA SETUP: CUDA runtime path found: D:\jillll\one-click-installers\installer_files\env\bin\cudart64_110.dll
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary D:\jillll\one-click-installers\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll...
Loading gpt4-x-alpaca...
Auto-assiging --gpu-memory 23 for your GPU to try to prevent out-of-memory errors.
You can manually set other values.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 6/6 [02:07<00:00, 21.24s/it]
Loaded the model in 128.28 seconds.
Loading the extension "gallery"... Ok.
D:\jillll\one-click-installers\installer_files\env\lib\site-packages\gradio\deprecation.py:40: UserWarning: The 'type' parameter has been deprecated. Use the Number component instead.
warnings.warn(value)
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
D:\jillll\one-click-installers\installer_files\env\lib\site-packages\transformers\generation\utils.py:1219: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Output generated in 10.28 seconds (1.94 tokens/s, 20 tokens, context 57)
Output generated in 9.82 seconds (2.65 tokens/s, 26 tokens, context 97)
Output generated in 5.67 seconds (1.76 tokens/s, 10 tokens, context 1848)
Output generated in 25.02 seconds (2.96 tokens/s, 74 tokens, context 56)

Some selected output when running install.bat the second time

...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
numba 0.56.4 requires numpy<1.24,>=1.18, but you have numpy 1.24.2 which is incompatible.
...
...
...
Requirement already satisfied: MarkupSafe>=2.0 in d:\jillll\one-click-installers\installer_files\env\lib\site-packages (from jinja2->torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (2.1.2)
Requirement already satisfied: mpmath>=0.19 in d:\jillll\one-click-installers\installer_files\env\lib\site-packages (from sympy->torch->openai-whisper->-r extensions\whisper_stt\requirements.txt (line 2)) (1.3.0)
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 1.24.2
Uninstalling numpy-1.24.2:
Successfully uninstalled numpy-1.24.2
Successfully installed numpy-1.23.5
Press any key to continue . . .

It works the same way after running install.bat again, and that end to the install seems similar.

@jllllll
Contributor

jllllll commented Apr 5, 2023

@MikkoHaavisto Yeah, there isn't any message for a successful installation, mostly because there is no easy way to determine that with batch. As for the strange answers, you likely just need to adjust the generation parameters. Try these:

temperature=0.42
top_p=0.9
top_k=25
repetition_penalty=1.1

For larger models, you should lower the repetition penalty. For alpaca/assistant models, you may need to lower temperature.
You can read about these settings here: https://github.com/KoboldAI/KoboldAI-Client/wiki/Settings
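For anyone unsure what these knobs do: temperature rescales the logits before softmax (lower values sharpen the distribution), top_k keeps only the k most likely tokens, and top_p then trims to the smallest set whose cumulative probability reaches p. A pure-Python illustration of the filtering logic (this is not the webui's actual sampler, just a sketch):

```python
import math

def sample_filter(logits, temperature=0.42, top_k=25, top_p=0.9):
    """Return the token indices left eligible after temperature scaling,
    top-k truncation, and top-p (nucleus) truncation."""
    # Temperature rescales logits before softmax; lower T sharpens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-k: keep only the k most likely tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # top-p: trim to the smallest prefix whose cumulative probability >= top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept
```

With logits [2.0, 1.0, 0.0] and temperature 1.0, top_p=0.9 keeps the first two tokens; dropping the temperature to 0.42 sharpens the distribution enough that only the top token survives, which is why lowering temperature makes assistant models less erratic.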

@maxjacu

maxjacu commented Apr 11, 2023

For step 2: to get the path to your conda environment on Linux, run in a terminal
conda info --envs | grep textgen

Ok I got it

1. Start over
conda deactivate
conda remove -n textgen --all
conda create -n textgen python=3.10.9
conda activate textgen
pip3 install torch torchvision torchaudio
cd text-generation-webui
pip install -r requirements.txt
2. Do the dirty fix from TimDettmers/bitsandbytes#156 (comment): https://github.com/TimDettmers/bitsandbytes/issues/156#issuecomment-1462329713
cd /home/yourname/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
cp libbitsandbytes_cuda120.so libbitsandbytes_cpu.so
cd -
3. Install cudatoolkit
conda install cudatoolkit
4. It now works
python server.py --listen --model llama-7b  --lora alpaca-lora-7b  --load-in-8bit

I had a problem with these instructions which I narrowed down to this line:

pip3 install torch torchvision torchaudio

PyTorch has now updated to 2.0.0, so running this command will install 2.0.0, but errors occur when running this code under 2.0.0. In addition, using

conda install cudatoolkit

would install a version of CUDA that is not compatible with PyTorch 2.0.0, resulting in @KirillRepinArt's error:

Warning: torch.cuda.is_available() returned False.

To fix this, simply install the version of PyTorch immediately preceding 2.0.0. I did this using the command from the PyTorch website instead: pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116

I also didn't have to do conda install cudatoolkit after using this pip command.
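A quick way to tell whether an installed torch wheel is a CUDA build at all is the local version tag after the `+` in `torch.__version__` (e.g. `2.0.0+cu117` versus a CPU-only `2.0.0`). A small helper to pull that tag out (`cuda_tag` is a hypothetical name for illustration, not a torch API):

```python
def cuda_tag(torch_version):
    """Extract the CUDA build tag from a torch version string.
    "2.0.0+cu117" -> "cu117"; returns None for CPU-only wheels
    like "2.0.0" or "2.0.0+cpu"."""
    _, _, local = torch_version.partition("+")
    return local if local.startswith("cu") else None

# In a working install you could then check (assumes torch is importable):
# import torch
# print(torch.__version__, cuda_tag(torch.__version__), torch.cuda.is_available())
```

If the tag is None while your driver supports CUDA, you installed a CPU-only wheel and `torch.cuda.is_available()` will return False regardless of the cudatoolkit package.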


github-actions bot commented Dec 5, 2023

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

@github-actions github-actions bot closed this as completed Dec 5, 2023
@gokulcoder7

cd text-generation-webui
(textgen) C:\Windows\System32>cd text-generation-webui
The system cannot find the path specified.
