
AttributeError: 'NoneType' object has no attribute 'Llama' #4817

Closed
1 task done
aaron13100 opened this issue Dec 5, 2023 · 15 comments
Labels
bug Something isn't working stale

Comments

@aaron13100

Describe the bug

Whenever I try to load a model in CPU-only mode, the model doesn't load and I see the error message below.

I noticed there is some discussion about this at #4098.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

Try to load a model?

Screenshot

No response

Logs

ERROR:Failed to load the model.
Traceback (most recent call last):
  File "/Users/user/Downloads/text-generation-webui/modules/ui_model_menu.py", line 209, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/Downloads/text-generation-webui/modules/models.py", line 85, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/Downloads/text-generation-webui/modules/models.py", line 250, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/Downloads/text-generation-webui/modules/llamacpp_model.py", line 54, in from_pretrained
    Llama = llama_cpp_lib().Llama
            ^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'Llama'


System Info

OSX 12.7.1
8 gigs of ram.

@aaron13100 aaron13100 added the bug Something isn't working label Dec 5, 2023
@ezesil

ezesil commented Dec 22, 2023

Same problem here: Xeon E5-2660 V1, 40 GB of RAM, Windows Server 2022 Datacenter 21H2, in a Proxmox virtual machine, using the dolphin-2.2.1-mistral-7b.Q4_K_M.gguf model.

@exu-g

exu-g commented Dec 28, 2023

I had this issue as well with an ARM-based VPS (Hetzner Cloud, 8c 16gb).
No issues though with a similarly sized x86 VPS (4c 16gb).

The machines are using KVM for virtualization, both installed with Debian 12.

I deleted the ARM VPS again, but I'm open to doing some debugging if anyone wants me to.

@exu-g

exu-g commented Dec 29, 2023

I managed to work around the issue by explicitly pinning the version of llama-cpp-python in the relevant requirements.txt (requirements_nowheels.txt here, selected by patching one_click.py).

Using the nowheels or cpu_only_noavx2 requirements file is required for me, as I otherwise run into issue #4887 on aarch64. Maybe one of you could try simply adding llama-cpp-python to the requirements file your system uses.

See diff of my changes:

diff --git a/one_click.py b/one_click.py
index 76e8580..acf3cdf 100644
--- a/one_click.py
+++ b/one_click.py
@@ -303,6 +303,8 @@ def update_requirements(initial_installation=False):
         else:
             requirements_file = "requirements_noavx2.txt"
 
+    requirements_file = "requirements_nowheels.txt"
+
     print_big_message(f"Installing webui requirements from file: {requirements_file}")
     print(f"TORCH: {torver}\n")
 
diff --git a/requirements_nowheels.txt b/requirements_nowheels.txt
index f1a49b4..2c494b9 100644
--- a/requirements_nowheels.txt
+++ b/requirements_nowheels.txt
@@ -22,6 +22,7 @@ tensorboard
 transformers==4.36.*
 tqdm
 wandb
+llama-cpp-python==0.2.26
 
 # bitsandbytes
 bitsandbytes==0.41.1; platform_system != "Windows"

@github-actions github-actions bot added the stale label Feb 9, 2024

github-actions bot commented Feb 9, 2024

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

@github-actions github-actions bot closed this as completed Feb 9, 2024
@exu-g

exu-g commented Feb 10, 2024

Bump, this issue is still unresolved.

Here is another way to fix it for me, within the requirements_noavx2.txt file:

diff --git a/requirements_noavx2.txt b/requirements_noavx2.txt
index fc2795cb..73a64ede 100644
--- a/requirements_noavx2.txt
+++ b/requirements_noavx2.txt
@@ -45,6 +45,7 @@ https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/te
 https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda_tensorcores-0.2.38+cu121avx-cp310-cp310-win_amd64.whl; platform_system == "Windows" and python_version == "3.10"
 https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda_tensorcores-0.2.38+cu121avx-cp311-cp311-manylinux_2_31_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"
 https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda_tensorcores-0.2.38+cu121avx-cp310-cp310-manylinux_2_31_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"
+llama-cpp-python==0.2.38; platform_machine == "aarch64"
 
 # CUDA wheels
 https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu121-cp311-cp311-win_amd64.whl; platform_system == "Windows" and python_version == "3.11"

@burrizza

The error probably means that the llama_cpp module could not be imported by the script modules/llamacpp_model.py.
You can maybe get more useful information if you try to load the library without catching the exception. Simply change the lines of that file to

import llama_cpp
"""
try:
    import llama_cpp
except:
    llama_cpp = None
"""

and try to load a model again. The new error message should be more specific, which should make it easier to get help. The same approach works if llama_cpp_cuda or llama_cpp_cuda_tensorcores is the module being loaded. Don't forget to revert these changes after troubleshooting.

Perhaps in the future an error message could be integrated into text-generation-webui for the case where neither llama_cpp, llama_cpp_cuda, nor llama_cpp_cuda_tensorcores can be loaded.
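
A rough sketch of what such a check could look like (the llama_cpp_lib() shown here is written from scratch to illustrate the idea; the function name and module names are assumptions based on the traceback above, not the actual webui code):

import importlib

def llama_cpp_lib():
    # Try the GPU variants first, then the plain CPU build.
    for name in ("llama_cpp_cuda_tensorcores", "llama_cpp_cuda", "llama_cpp"):
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    # Instead of returning None (which later surfaces as the AttributeError
    # reported in this issue), raise an error that names the missing packages.
    raise ImportError(
        "Neither llama_cpp, llama_cpp_cuda, nor llama_cpp_cuda_tensorcores "
        "could be imported. Check that llama-cpp-python is installed in this environment."
    )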

Good luck!

@gianlucasullazzo

Hi,
I removed the try/except and, on Windows, I got this error:
RuntimeError: Failed to load shared library 'C:\localai\installer_files\env\Lib\site-packages\llama_cpp\llama.dll': [WinError 1114] A dynamic link library (DLL) initialization routine failed
I configured for CPU only.
Any ideas?

@burrizza

Hi, I removed the try/except and, on Windows, I got this error: RuntimeError: Failed to load shared library 'C:\localai\installer_files\env\Lib\site-packages\llama_cpp\llama.dll': [WinError 1114] A dynamic link library (DLL) initialization routine failed. I configured for CPU only. Any ideas?

Hello,
did you use the installation with AVX2 or without? If your CPU does not support AVX2, I would try to install again using the following command: pip install -r requirements_cpu_only_noavx2.txt --upgrade --force-reinstall --no-cache-dir

Maybe also interesting: tensorflow - amahendrakar

@gianlucasullazzo

Hi @burrizza, I just reinstalled with noavx2 but got the same error.

@burrizza

burrizza commented Feb 18, 2024

Hello @gianlucasullazzo,
could you please provide your installation steps? I guess something went wrong during the compilation of llama.cpp. I suggest trying the manual installation method to get more details.
I also recommend creating a new ticket, as the problem has now become more specific.

Good luck!

@exu-g

exu-g commented Feb 18, 2024

The error probably means that the llama_cpp module could not be imported by the script modules/llamacpp_model.py

At least on ARM64, llama-cpp-python is not installed at all when using the requirements_noavx2.txt file as automatically selected by one_click.py.
This is why an error is produced when one tries to load a model that uses llama.cpp.

The diff I posted earlier simply adds an install option for ARM64 machines to solve this.
We could also fix one_click.py to select a working requirements_* file on systems affected by this issue.
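
As a sketch of that idea (not a tested patch; choose_requirements_file is a hypothetical helper, and the selection rules are only an assumption based on this thread), one_click.py could pick the file based on the machine architecture:

import platform

def choose_requirements_file(cpu_has_avx2: bool) -> str:
    # Hypothetical helper; requirements file names taken from this thread.
    machine = platform.machine().lower()
    if machine in ("aarch64", "arm64"):
        # requirements_noavx2.txt pulls in no llama-cpp-python wheel on ARM64.
        # Per the diff earlier in this thread, requirements_nowheels.txt still
        # needs a llama-cpp-python line added so pip builds it from source.
        return "requirements_nowheels.txt"
    return "requirements.txt" if cpu_has_avx2 else "requirements_noavx2.txt"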

@gianlucasullazzo Can you share what hardware and OS you're using?
Which requirements_* file is being used by default? Scroll up to the text Installing webui requirements from file: ..., which is framed by lines of asterisks above and below.

@gianlucasullazzo

Hi @RealStickman, @burrizza.
What I did:
Attempt 1: installed with the one-click installer --> error loading the DLL
Attempt 2: over the existing installation, I launched pip install -r requirements_cpu_only_noavx2.txt --upgrade --force-reinstall --no-cache-dir, but got the same error
Attempt 3: deleted everything and, from the downloaded directory, launched pip install -r requirements_cpu_only_noavx2.txt --upgrade --force-reinstall --no-cache-dir --> same error
In the end I restored the one-click installation even though it does not work.
My setup is:
Win11Pro 23H2
Processor: Intel(R) Celeron(R) N5105 @ 2.00 GHz
Installed RAM: 16.0 GB (15.8 GB usable)
System type: 64-bit operating system, x64-based processor

About the file used: I closed cmd, so I no longer have access to that log. Is there a log file inside the install folders?
Thanks a lot

@exu-g

exu-g commented Feb 18, 2024

What Python version are you using? Prebuilt wheels are only specified for Python 3.10 and 3.11 on Windows.

https://github.com/oobabooga/text-generation-webui/blob/main/requirements_cpu_only_noavx2.txt#L33-L34

You can try pip install llama-cpp-python
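
If it still fails after that, a quick check like the one below (just a sketch; the module names come from the traceback and comments in this thread, not from the webui code), run inside the webui's Python environment, shows exactly which import fails and why:

import importlib

# Each name is tried separately so the real import error is printed
# instead of being swallowed by the webui's try/except.
for name in ("llama_cpp", "llama_cpp_cuda", "llama_cpp_cuda_tensorcores"):
    try:
        mod = importlib.import_module(name)
        print(f"{name}: OK ({getattr(mod, '__version__', 'unknown version')})")
    except Exception as exc:
        print(f"{name}: FAILED ({exc})")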

@gianlucasullazzo

I have 3.10.11.
I ran pip install llama-cpp-python, but I get Requirement already satisfied for each package.
Then I relaunched text-generation-webui and, as expected, got the same error loading the DLL.

@delaaxe

delaaxe commented Dec 23, 2024

Same issue on macOS 15.
I can't just add a line to requirements_apple_silicon.txt because there doesn't seem to be a release for llama_cpp_python-0.3.5-cp310-cp310-macosx_15_0_arm64.whl.
Any ideas?
