Required library version not found: libbitsandbytes_cuda100.so. #82
Comments
I tried to follow the instructions in the error message (CUDA SETUP: Something unexpected happened. Please compile from source:) and compiled the library from source, but the issue remains.
Thank you for reporting the problem. Currently, CUDA 10.0 is not supported; CUDA 10.2 is supported, though. CUDA 10.0 was supported in the past, but it required maintaining additional code, since 10.0 does not support all the features of CUDA 10.2. If possible, upgrade to a different CUDA version; this should resolve the issue. I will revisit including CUDA 10.0 at a later time.
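If it is unclear which CUDA version your environment actually exposes, a quick check along the following lines can help before deciding whether to upgrade (a minimal Python sketch; it only reports what PyTorch and the shell environment see and does not change anything):

import os
import shutil

import torch

# CUDA version PyTorch was built against, and whether a GPU is visible.
print("torch.version.cuda:", torch.version.cuda)
print("torch.cuda.is_available():", torch.cuda.is_available())

# Location of the CUDA toolkit compiler, if it is on PATH.
print("nvcc on PATH:", shutil.which("nvcc"))

# bitsandbytes' CUDA detection relies on paths like these (the exact
# behaviour varies by release), so stale entries can make it pick up
# the wrong CUDA version or none at all.
for var in ("LD_LIBRARY_PATH", "CUDA_HOME", "CUDA_PATH"):
    print(var, "=", os.environ.get(var))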
Thanks a lot for the reply. I actually have CUDA 11.6 installed; CUDA_VERSION=100 is just what the error message printed. I tried updating CUDA_VERSION to match my installed version, but I am still seeing the error message.
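For reference, here is a minimal sketch of the from-source rebuild, replayed from Python with subprocess so the whole thing can be run in one go. It follows the commands printed in the CUDA SETUP output, but substitutes the detected CUDA version (116 for CUDA 11.6) for the 100 in the error message; the exact make targets and variables can differ between bitsandbytes releases, so treat this as an illustration rather than the project's official build recipe:

import subprocess

# CUDA version to build against, in the form the Makefile expects
# ("116" for CUDA 11.6). Check `nvcc --version` if unsure.
cuda_version = "116"

steps = [
    # Commands from the CUDA SETUP error output, with the version swapped.
    (None, ["git", "clone", "https://github.com/TimDettmers/bitsandbytes.git"]),
    ("bitsandbytes", ["make", f"CUDA_VERSION={cuda_version}"]),
    ("bitsandbytes", ["python", "setup.py", "install"]),
]

for cwd, cmd in steps:
    # check=True aborts immediately if any step fails.
    subprocess.run(cmd, cwd=cwd, check=True)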
Are you getting the same error message?
I get the same error.
Yes, same error message.
Any progress?
Hey all, Tim is busy writing a new paper and I am completely swamped at work. I should be able to look into this in 7-14 days from now. We're doing this in our free time.
Thanks for your patience,
Titus
Currently, CUDA 10.0 is still not supported. If you still get the libbitsandbytes_cuda100.so error message after installing CUDA 11.6, it means your installed CUDA version is not being detected correctly. I will update the CUDA setup to throw an error instead, to make it clearer what is going on.
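In the meantime, one way to see why the loader falls back to libbitsandbytes.so is to list which compiled binaries the installed package actually contains; if there is no library matching your CUDA version (for example libbitsandbytes_cuda116.so for CUDA 11.6), the error above is expected. A minimal sketch, assuming a standard pip install:

from importlib import util
from pathlib import Path

# Locate the installed bitsandbytes package directory.
spec = util.find_spec("bitsandbytes")
pkg_dir = Path(spec.origin).parent

# The package ships one shared library per supported CUDA version
# (e.g. libbitsandbytes_cuda116.so); libbitsandbytes.so is the
# fallback that the CUDA SETUP log above defaults to.
for so in sorted(pkg_dir.glob("libbitsandbytes*.so")):
    print(so.name)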
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread.
CUDA SETUP: Required library version not found: libbitsandbytes_cuda100.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes.so...
================================================ERROR=====================================
CUDA SETUP: CUDA detection failed! Possible reasons:
CUDA SETUP: If you compiled from source, try again with
make CUDA_VERSION=DETECTED_CUDA_VERSION
for example, make CUDA_VERSION=113
.================================================================================
CUDA SETUP: Something unexpected happened. Please compile from source:
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=100
python setup.py install
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ :1 in │
│ :18 in init │
│ :31 in _load_dependencies │
│ │
│ /root/venv/lib/python3.7/site-packages/transformers/pipelines/__init__.py:727 in pipeline │
│ │
│ 724 │ │ framework=framework, │
│ 725 │ │ task=task, │
│ 726 │ │ **hub_kwargs, │
│ ❱ 727 │ │ **model_kwargs, │
│ 728 │ ) │
│ 729 │ │
│ 730 │ model_config = model.config │
│ │
│ /root/venv/lib/python3.7/site-packages/transformers/pipelines/base.py:257 in │
│ infer_framework_load_model │
│ │
│ 254 │ │ │ │ ) │
│ 255 │ │ │ │
│ 256 │ │ │ try: │
│ ❱ 257 │ │ │ │ model = model_class.from_pretrained(model, **kwargs) │
│ 258 │ │ │ │ if hasattr(model, "eval"): │
│ 259 │ │ │ │ │ model = model.eval() │
│ 260 │ │ │ │ # Stop loading on the first successful load. │
│ │
│ /root/venv/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py:464 in │
│ from_pretrained │
│ │
│ 461 │ │ elif type(config) in cls._model_mapping.keys(): │
│ 462 │ │ │ model_class = _get_model_class(config, cls._model_mapping) │
│ 463 │ │ │ return model_class.from_pretrained( │
│ ❱ 464 │ │ │ │ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, │
│ 465 │ │ │ ) │
│ 466 │ │ raise ValueError( │
│ 467 │ │ │ f"Unrecognized configuration class {config.class} for this kind of AutoM │
│ │
│ /root/venv/lib/python3.7/site-packages/transformers/modeling_utils.py:2231 in from_pretrained │
│ │
│ 2228 │ │ │ model = cls(config, *model_args, **model_kwargs) │
│ 2229 │ │ │
│ 2230 │ │ if load_in_8bit: │
│ ❱ 2231 │ │ │ from .utils.bitsandbytes import get_keys_to_not_convert, replace_8bit_linear │
│ 2232 │ │ │ │
│ 2233 │ │ │ logger.info("Detected 8-bit loading: activating 8-bit loading for this model │
│ 2234 │
│ │
│ /root/venv/lib/python3.7/site-packages/transformers/utils/bitsandbytes.py:10 in │
│ │
│ 7 │ import torch │
│ 8 │ import torch.nn as nn │
│ 9 │ │
│ ❱ 10 │ import bitsandbytes as bnb │
│ 11 │
│ 12 if is_accelerate_available(): │
│ 13 │ from accelerate import init_empty_weights │
│ │
│ /root/venv/lib/python3.7/site-packages/bitsandbytes/__init__.py:6 in │
│ │
│ 3 # This source code is licensed under the MIT license found in the │
│ 4 # LICENSE file in the root directory of this source tree. │
│ 5 │
│ ❱ 6 from .autograd._functions import ( │
│ 7 │ MatmulLtState, │
│ 8 │ bmm_cublas, │
│ 9 │ matmul, │
│ │
│ /root/venv/lib/python3.7/site-packages/bitsandbytes/autograd/_functions.py:5 in │
│ │
│ 2 import warnings │
│ 3 │
│ 4 import torch │
│ ❱ 5 import bitsandbytes.functional as F │
│ 6 │
│ 7 from dataclasses import dataclass │
│ 8 from functools import reduce # Required in Python 3 │
│ │
│ /root/venv/lib/python3.7/site-packages/bitsandbytes/functional.py:13 in │
│ │
│ 10 from typing import Tuple │
│ 11 from torch import Tensor │
│ 12 │
│ ❱ 13 from .cextension import COMPILED_WITH_CUDA, lib │
│ 14 from functools import reduce # Required in Python 3 │
│ 15 │
│ 16 # math.prod not compatible with python < 3.8 │
│ │
│ /root/venv/lib/python3.7/site-packages/bitsandbytes/cextension.py:121 in │
│ │
│ 118 │ │ raise RuntimeError(''' │
│ 119 │ │ CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs to │
│ 120 │ │ If you cannot find any issues and suspect a bug, please open an issue with detal │
│ ❱ 121 │ │ https://github.com/TimDettmers/bitsandbytes/issues''') │
│ 122 │ lib.cadam32bit_g32 │
│ 123 │ lib.get_context.restype = ct.c_void_p │
│ 124 │ lib.get_cusparse.restype = ct.c_void_p │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError:
CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs to fix your environment!
If you cannot find any issues and suspect a bug, please open an issue with detals about your environment:
https://github.com/TimDettmers/bitsandbytes/issues
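Once the library has been rebuilt or reinstalled against the right CUDA version, a quick smoke test like the one below (a sketch, assuming a CUDA-capable GPU is visible) confirms that the import and the 8-bit kernels work before going back through the transformers pipeline:

import torch
import bitsandbytes as bnb  # fails with the RuntimeError above if CUDA setup is still broken

# Run a single 8-bit optimizer step on a dummy parameter as a smoke test.
p = torch.nn.Parameter(torch.randn(16, 16, device="cuda"))
opt = bnb.optim.Adam8bit([p], lr=1e-3)
p.grad = torch.randn_like(p)
opt.step()
print("bitsandbytes imported and its CUDA kernels ran successfully")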