autoawq aarch64 unavailable #4887

Closed
1 task done
llcool opened this issue Dec 11, 2023 · 3 comments
Labels
bug Something isn't working

Comments

@llcool

llcool commented Dec 11, 2023

Describe the bug

This package is not available for aarch64/arm64 CPUs. Does it also require a supported GPU?
I was trying to install this on Windows 11 on ARM, but that is not supported, so I tried WSL under Windows 11 ARM instead, since the packages are (mostly) available there...

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

Run ./start_linux.sh on any ARM Linux distro.

Check the package's files page at https://pypi.org/project/autoawq/#files: no ARM/aarch64 wheels are listed.
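
A quick way to confirm this programmatically is to list the wheel platform tags that PyPI publishes for autoawq. The snippet below is a minimal sketch using only the Python standard library and PyPI's JSON endpoint; the endpoint and its field names are assumptions to verify rather than something taken from the original report.

# Minimal sketch: list the wheel platform tags published for autoawq on PyPI.
# Assumes the pypi.org JSON endpoint and its "releases"/"filename" fields.
import json
import urllib.request

with urllib.request.urlopen("https://pypi.org/pypi/autoawq/json") as resp:
    data = json.load(resp)

platforms = set()
for files in data["releases"].values():
    for f in files:
        name = f["filename"]
        if name.endswith(".whl"):
            # Wheel filenames end in -<python tag>-<abi tag>-<platform tag>.whl
            platforms.add(name.rsplit("-", 1)[-1].removesuffix(".whl"))

print(sorted(platforms))  # at the time of this issue: no aarch64/arm64 tags appear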

Screenshot

No response

Logs

lloyd@DESKTOP-NG4NOJ4:~/text-generation-webui-main$ ./start_linux.sh
Downloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Linux-aarch64.sh to /home/lloyd/text-generation-webui-main/installer_files/miniconda_installer.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 50.3M  100 50.3M    0     0  3839k      0  0:00:13  0:00:13 --:--:-- 6439k
PREFIX=/home/lloyd/text-generation-webui-main/installer_files/conda
Unpacking payload ...
Installing base environment...
Downloading and Extracting Packages
Downloading and Extracting Packages
Preparing transaction: done
Executing transaction: done
installation finished.
Miniconda version:
conda 23.3.1
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
  current version: 23.3.1
  latest version: 23.11.0
Please update conda by running
    $ conda update -n base -c defaults conda
Or to minimize the number of packages updated during conda update use
     conda install conda=23.11.0
## Package Plan ##
  environment location: /home/lloyd/text-generation-webui-main/installer_files/env
  added / updated specs:
    - python=3.11
The following packages will be downloaded:
    package                    |            build
    ---------------------------|-----------------
    ca-certificates-2023.08.22 |       hd43f75c_0         123 KB
    libffi-3.4.4               |       h419075a_0         139 KB
    openssl-3.0.12             |       h2f4d8fa_0         5.3 MB
    pip-23.3.1                 |  py311hd43f75c_0         3.3 MB
    python-3.11.5              |       h4bb2201_0        15.4 MB
    setuptools-68.0.0          |  py311hd43f75c_0         1.2 MB
    sqlite-3.41.2              |       h998d150_0         1.4 MB
    wheel-0.41.2               |  py311hd43f75c_0         141 KB
    xz-5.4.5                   |       h998d150_0         663 KB
    ------------------------------------------------------------
                                           Total:        27.7 MB

The following NEW packages will be INSTALLED:
  _libgcc_mutex      pkgs/main/linux-aarch64::_libgcc_mutex-0.1-main
  _openmp_mutex      pkgs/main/linux-aarch64::_openmp_mutex-5.1-51_gnu
  bzip2              pkgs/main/linux-aarch64::bzip2-1.0.8-hfd63f10_2
  ca-certificates    pkgs/main/linux-aarch64::ca-certificates-2023.08.22-hd43f75c_0
  ld_impl_linux-aar~ pkgs/main/linux-aarch64::ld_impl_linux-aarch64-2.38-h8131f2d_1
  libffi             pkgs/main/linux-aarch64::libffi-3.4.4-h419075a_0
  libgcc-ng          pkgs/main/linux-aarch64::libgcc-ng-11.2.0-h1234567_1
  libgomp            pkgs/main/linux-aarch64::libgomp-11.2.0-h1234567_1
  libstdcxx-ng       pkgs/main/linux-aarch64::libstdcxx-ng-11.2.0-h1234567_1
  libuuid            pkgs/main/linux-aarch64::libuuid-1.41.5-h998d150_0
  ncurses            pkgs/main/linux-aarch64::ncurses-6.4-h419075a_0
  openssl            pkgs/main/linux-aarch64::openssl-3.0.12-h2f4d8fa_0
  pip                pkgs/main/linux-aarch64::pip-23.3.1-py311hd43f75c_0
  python             pkgs/main/linux-aarch64::python-3.11.5-h4bb2201_0
  readline           pkgs/main/linux-aarch64::readline-8.2-h998d150_0
  setuptools         pkgs/main/linux-aarch64::setuptools-68.0.0-py311hd43f75c_0
  sqlite             pkgs/main/linux-aarch64::sqlite-3.41.2-h998d150_0
  tk                 pkgs/main/linux-aarch64::tk-8.6.12-h241ca14_0
  tzdata             pkgs/main/noarch::tzdata-2023c-h04d1e81_0
  wheel              pkgs/main/linux-aarch64::wheel-0.41.2-py311hd43f75c_0
  xz                 pkgs/main/linux-aarch64::xz-5.4.5-h998d150_0
  zlib               pkgs/main/linux-aarch64::zlib-1.2.13-h998d150_0

Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate /home/lloyd/text-generation-webui-main/installer_files/env
#
# To deactivate an active environment, use
#
#     $ conda deactivate
What is your GPU?

A) NVIDIA
B) AMD (Linux/MacOS only. Requires ROCm SDK 5.6 on Linux)
C) Apple M Series
D) Intel Arc (IPEX)
N) None (I want to run models in CPU mode)

Input>
Collecting package metadata (current_repodata.json): done
Solving environment: done

==> WARNING: A newer version of conda exists. <==
  current version: 23.3.1
  latest version: 23.11.0

Please update conda by running
    $ conda update -n base -c defaults conda
Or to minimize the number of packages updated during conda update use
     conda install conda=23.11.0

## Package Plan ##
  environment location: /home/lloyd/text-generation-webui-main/installer_files/env
  added / updated specs:
    - git
    - ninja

The following packages will be downloaded:
    package                    |            build
    ---------------------------|-----------------
    c-ares-1.19.1              |       h998d150_0         123 KB
    curl-8.4.0                 |       h6ac735f_1          88 KB
    expat-2.5.0                |       h419075a_0         151 KB
    gdbm-1.18                  |       hf59d7a7_4         205 KB
    gettext-0.21.0             |       h0cce8dc_1         3.3 MB
    git-2.40.1                 | pl5340h372b8bf_1        13.1 MB
    icu-73.1                   |       h419075a_0        26.2 MB
    krb5-1.20.1                |       h2e2fba8_1         1.5 MB
    libcurl-8.4.0              |       hfa2bbb0_1         429 KB
    libedit-3.1.20221030       |       h998d150_0         194 KB
    libev-4.33                 |       hfd63f10_1         113 KB
    libnghttp2-1.57.0          |       hb788212_0         735 KB
    libssh2-1.10.0             |       h6ac735f_2         315 KB
    libxml2-2.10.4             |       h045d036_1         806 KB
    ninja-1.10.2               |       hd43f75c_5           8 KB
    ninja-base-1.10.2          |       h59a28a9_5         118 KB
    pcre2-10.42                |       hcfaa891_0         1.3 MB
    perl-5.34.0                |       h998d150_2        12.5 MB
    ------------------------------------------------------------
                                           Total:        61.1 MB
The following NEW packages will be INSTALLED:
  c-ares             pkgs/main/linux-aarch64::c-ares-1.19.1-h998d150_0
  curl               pkgs/main/linux-aarch64::curl-8.4.0-h6ac735f_1
  expat              pkgs/main/linux-aarch64::expat-2.5.0-h419075a_0
  gdbm               pkgs/main/linux-aarch64::gdbm-1.18-hf59d7a7_4
  gettext            pkgs/main/linux-aarch64::gettext-0.21.0-h0cce8dc_1
  git                pkgs/main/linux-aarch64::git-2.40.1-pl5340h372b8bf_1
  icu                pkgs/main/linux-aarch64::icu-73.1-h419075a_0
  krb5               pkgs/main/linux-aarch64::krb5-1.20.1-h2e2fba8_1
  libcurl            pkgs/main/linux-aarch64::libcurl-8.4.0-hfa2bbb0_1
  libedit            pkgs/main/linux-aarch64::libedit-3.1.20221030-h998d150_0
  libev              pkgs/main/linux-aarch64::libev-4.33-hfd63f10_1
  libnghttp2         pkgs/main/linux-aarch64::libnghttp2-1.57.0-hb788212_0
  libssh2            pkgs/main/linux-aarch64::libssh2-1.10.0-h6ac735f_2
  libxml2            pkgs/main/linux-aarch64::libxml2-2.10.4-h045d036_1
  ninja              pkgs/main/linux-aarch64::ninja-1.10.2-hd43f75c_5
  ninja-base         pkgs/main/linux-aarch64::ninja-base-1.10.2-h59a28a9_5
  pcre2              pkgs/main/linux-aarch64::pcre2-10.42-hcfaa891_0
  perl               pkgs/main/linux-aarch64::perl-5.34.0-h998d150_2

Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Collecting torch
  Downloading torch-2.1.1-cp311-cp311-manylinux2014_aarch64.whl.metadata (25 kB)
Collecting torchvision
  Downloading torchvision-0.16.1-cp311-cp311-manylinux2014_aarch64.whl.metadata (6.6 kB)
Collecting torchaudio
  Downloading torchaudio-2.1.1-cp311-cp311-manylinux2014_aarch64.whl.metadata (6.4 kB)
Collecting filelock (from torch)
  Downloading filelock-3.13.1-py3-none-any.whl.metadata (2.8 kB)
Collecting typing-extensions (from torch)
  Downloading typing_extensions-4.9.0-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch)
  Downloading sympy-1.12-py3-none-any.whl (5.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7/5.7 MB 6.0 MB/s eta 0:00:00
Collecting networkx (from torch)
  Downloading networkx-3.2.1-py3-none-any.whl.metadata (5.2 kB)
Collecting jinja2 (from torch)
  Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.1/133.1 kB 5.4 MB/s eta 0:00:00
Collecting fsspec (from torch)
  Downloading fsspec-2023.12.1-py3-none-any.whl.metadata (6.8 kB)
Collecting numpy (from torchvision)
  Using cached numpy-1.26.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (62 kB)
Collecting requests (from torchvision)
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Using cached Pillow-10.1.0-cp311-cp311-manylinux_2_28_aarch64.whl.metadata (9.5 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Downloading MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision)
  Using cached charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->torchvision)
  Using cached idna-3.6-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision)
  Using cached urllib3-2.1.0-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision)
  Using cached certifi-2023.11.17-py3-none-any.whl.metadata (2.2 kB)
Collecting mpmath>=0.19 (from sympy->torch)
  Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 4.4 MB/s eta 0:00:00
Downloading torch-2.1.1-cp311-cp311-manylinux2014_aarch64.whl (84.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 84.2/84.2 MB 5.5 MB/s eta 0:00:00
Downloading torchvision-0.16.1-cp311-cp311-manylinux2014_aarch64.whl (14.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 6.4 MB/s eta 0:00:00
Downloading torchaudio-2.1.1-cp311-cp311-manylinux2014_aarch64.whl (1.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 6.2 MB/s eta 0:00:00
Downloading Pillow-10.1.0-cp311-cp311-manylinux_2_28_aarch64.whl (3.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.5/3.5 MB 6.3 MB/s eta 0:00:00
Downloading filelock-3.13.1-py3-none-any.whl (11 kB)
Downloading fsspec-2023.12.1-py3-none-any.whl (168 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 168.9/168.9 kB 5.2 MB/s eta 0:00:00
Downloading networkx-3.2.1-py3-none-any.whl (1.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 6.7 MB/s eta 0:00:00
Using cached numpy-1.26.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Downloading typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Using cached certifi-2023.11.17-py3-none-any.whl (162 kB)
Using cached charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (136 kB)
Using cached idna-3.6-py3-none-any.whl (61 kB)
Downloading MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (28 kB)
Using cached urllib3-2.1.0-py3-none-any.whl (104 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.3 certifi-2023.11.17 charset-normalizer-3.3.2 filelock-3.13.1 fsspec-2023.12.1 idna-3.6 jinja2-3.1.2 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.2 pillow-10.1.0 requests-2.31.0 sympy-1.12 torch-2.1.1 torchaudio-2.1.1 torchvision-0.16.1 typing-extensions-4.9.0 urllib3-2.1.0
Collecting py-cpuinfo==9.0.0
  Using cached py_cpuinfo-9.0.0-py3-none-any.whl (22 kB)
Installing collected packages: py-cpuinfo
Successfully installed py-cpuinfo-9.0.0
Already up to date.
*******************************************************************
* Installing extensions requirements.
*******************************************************************

Collecting SpeechRecognition==3.10.0 (from -r extensions/openai/requirements.txt (line 1))
  Using cached SpeechRecognition-3.10.0-py2.py3-none-any.whl (32.8 MB)
Collecting flask_cloudflared==0.0.14 (from -r extensions/openai/requirements.txt (line 2))
  Using cached flask_cloudflared-0.0.14-py3-none-any.whl.metadata (4.6 kB)
Collecting sse-starlette==1.6.5 (from -r extensions/openai/requirements.txt (line 3))
  Using cached sse_starlette-1.6.5-py3-none-any.whl.metadata (6.7 kB)
Collecting tiktoken (from -r extensions/openai/requirements.txt (line 4))
  Using cached tiktoken-0.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (6.6 kB)
Requirement already satisfied: requests>=2.26.0 in ./installer_files/env/lib/python3.11/site-packages (from SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (2.31.0)
Collecting Flask>=0.8 (from flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2))
  Using cached flask-3.0.0-py3-none-any.whl.metadata (3.6 kB)
Collecting starlette (from sse-starlette==1.6.5->-r extensions/openai/requirements.txt (line 3))
  Using cached starlette-0.33.0-py3-none-any.whl.metadata (5.8 kB)
Collecting regex>=2022.1.18 (from tiktoken->-r extensions/openai/requirements.txt (line 4))
  Using cached regex-2023.10.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (40 kB)
Collecting Werkzeug>=3.0.0 (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2))
  Using cached werkzeug-3.0.1-py3-none-any.whl.metadata (4.1 kB)
Requirement already satisfied: Jinja2>=3.1.2 in ./installer_files/env/lib/python3.11/site-packages (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (3.1.2)
Collecting itsdangerous>=2.1.2 (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2))
  Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting click>=8.1.3 (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2))
  Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting blinker>=1.6.2 (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2))
  Using cached blinker-1.7.0-py3-none-any.whl.metadata (1.9 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (2023.11.17)
Collecting anyio<5,>=3.4.0 (from starlette->sse-starlette==1.6.5->-r extensions/openai/requirements.txt (line 3))
  Using cached anyio-4.1.0-py3-none-any.whl.metadata (4.5 kB)
Collecting sniffio>=1.1 (from anyio<5,>=3.4.0->starlette->sse-starlette==1.6.5->-r extensions/openai/requirements.txt (line 3))
  Using cached sniffio-1.3.0-py3-none-any.whl (10 kB)
Requirement already satisfied: MarkupSafe>=2.0 in ./installer_files/env/lib/python3.11/site-packages (from Jinja2>=3.1.2->Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (2.1.3)
Using cached flask_cloudflared-0.0.14-py3-none-any.whl (6.4 kB)
Using cached sse_starlette-1.6.5-py3-none-any.whl (9.6 kB)
Using cached tiktoken-0.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (2.0 MB)
Using cached flask-3.0.0-py3-none-any.whl (99 kB)
Using cached regex-2023.10.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (783 kB)
Using cached starlette-0.33.0-py3-none-any.whl (70 kB)
Using cached anyio-4.1.0-py3-none-any.whl (83 kB)
Using cached blinker-1.7.0-py3-none-any.whl (13 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached werkzeug-3.0.1-py3-none-any.whl (226 kB)
Installing collected packages: Werkzeug, sniffio, regex, itsdangerous, click, blinker, tiktoken, SpeechRecognition, Flask, anyio, starlette, flask_cloudflared, sse-starlette
Successfully installed Flask-3.0.0 SpeechRecognition-3.10.0 Werkzeug-3.0.1 anyio-4.1.0 blinker-1.7.0 click-8.1.7 flask_cloudflared-0.0.14 itsdangerous-2.1.2 regex-2023.10.3 sniffio-1.3.0 sse-starlette-1.6.5 starlette-0.33.0 tiktoken-0.5.2
Collecting ngrok==0.* (from -r extensions/ngrok/requirements.txt (line 1))
  Using cached ngrok-0.12.1-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (16 kB)
Using cached ngrok-0.12.1-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (4.4 MB)
Installing collected packages: ngrok
Successfully installed ngrok-0.12.1
Collecting elevenlabs==0.2.24 (from -r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached elevenlabs-0.2.24-py3-none-any.whl.metadata (811 bytes)
Collecting pydantic<2.0,>=1.10 (from elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached pydantic-1.10.13-py3-none-any.whl.metadata (149 kB)
Collecting ipython>=7.0 (from elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached ipython-8.18.1-py3-none-any.whl.metadata (6.0 kB)
Requirement already satisfied: requests>=2.20 in ./installer_files/env/lib/python3.11/site-packages (from elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (2.31.0)
Collecting websockets>=11.0 (from elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached websockets-12.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (6.6 kB)
Collecting decorator (from ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting jedi>=0.16 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached jedi-0.19.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting matplotlib-inline (from ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached matplotlib_inline-0.1.6-py3-none-any.whl (9.4 kB)
Collecting prompt-toolkit<3.1.0,>=3.0.41 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached prompt_toolkit-3.0.41-py3-none-any.whl.metadata (6.5 kB)
Collecting pygments>=2.4.0 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached pygments-2.17.2-py3-none-any.whl.metadata (2.6 kB)
Collecting stack-data (from ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached stack_data-0.6.3-py3-none-any.whl.metadata (18 kB)
Collecting traitlets>=5 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached traitlets-5.14.0-py3-none-any.whl.metadata (10 kB)
Collecting pexpect>4.3 (from ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached pexpect-4.9.0-py2.py3-none-any.whl.metadata (2.5 kB)
Requirement already satisfied: typing-extensions>=4.2.0 in ./installer_files/env/lib/python3.11/site-packages (from pydantic<2.0,>=1.10->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (4.9.0)
Requirement already satisfied: charset-normalizer<4,>=2 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.20->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.20->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.20->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.20->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1)) (2023.11.17)
Collecting parso<0.9.0,>=0.8.3 (from jedi>=0.16->ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached parso-0.8.3-py2.py3-none-any.whl (100 kB)
Collecting ptyprocess>=0.5 (from pexpect>4.3->ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Collecting wcwidth (from prompt-toolkit<3.1.0,>=3.0.41->ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached wcwidth-0.2.12-py2.py3-none-any.whl.metadata (14 kB)
Collecting executing>=1.2.0 (from stack-data->ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached executing-2.0.1-py2.py3-none-any.whl.metadata (9.0 kB)
Collecting asttokens>=2.1.0 (from stack-data->ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached asttokens-2.4.1-py2.py3-none-any.whl.metadata (5.2 kB)
Collecting pure-eval (from stack-data->ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached pure_eval-0.2.2-py3-none-any.whl (11 kB)
Collecting six>=1.12.0 (from asttokens>=2.1.0->stack-data->ipython>=7.0->elevenlabs==0.2.24->-r extensions/elevenlabs_tts/requirements.txt (line 1))
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Using cached elevenlabs-0.2.24-py3-none-any.whl (16 kB)
Using cached ipython-8.18.1-py3-none-any.whl (808 kB)
Using cached pydantic-1.10.13-py3-none-any.whl (158 kB)
Using cached websockets-12.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (131 kB)
Using cached jedi-0.19.1-py2.py3-none-any.whl (1.6 MB)
Using cached pexpect-4.9.0-py2.py3-none-any.whl (63 kB)
Using cached prompt_toolkit-3.0.41-py3-none-any.whl (385 kB)
Using cached pygments-2.17.2-py3-none-any.whl (1.2 MB)
Using cached traitlets-5.14.0-py3-none-any.whl (85 kB)
Using cached stack_data-0.6.3-py3-none-any.whl (24 kB)
Using cached asttokens-2.4.1-py2.py3-none-any.whl (27 kB)
Using cached executing-2.0.1-py2.py3-none-any.whl (24 kB)
Using cached wcwidth-0.2.12-py2.py3-none-any.whl (34 kB)
Installing collected packages: wcwidth, pure-eval, ptyprocess, websockets, traitlets, six, pygments, pydantic, prompt-toolkit, pexpect, parso, executing, decorator, matplotlib-inline, jedi, asttokens, stack-data, ipython, elevenlabs
Successfully installed asttokens-2.4.1 decorator-5.1.1 elevenlabs-0.2.24 executing-2.0.1 ipython-8.18.1 jedi-0.19.1 matplotlib-inline-0.1.6 parso-0.8.3 pexpect-4.9.0 prompt-toolkit-3.0.41 ptyprocess-0.7.0 pure-eval-0.2.2 pydantic-1.10.13 pygments-2.17.2 six-1.16.0 stack-data-0.6.3 traitlets-5.14.0 wcwidth-0.2.12 websockets-12.0
Collecting git+https://github.com/oobabooga/whisper.git (from -r extensions/whisper_stt/requirements.txt (line 2))
  Cloning https://github.com/oobabooga/whisper.git to /tmp/pip-req-build-0yjblcj6
  Running command git clone --filter=blob:none --quiet https://github.com/oobabooga/whisper.git /tmp/pip-req-build-0yjblcj6
  Resolved https://github.com/oobabooga/whisper.git to commit 958ee4f6e1e65425ba02c440fc083089d58f5c71
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: SpeechRecognition==3.10.0 in ./installer_files/env/lib/python3.11/site-packages (from -r extensions/whisper_stt/requirements.txt (line 1)) (3.10.0)
Collecting soundfile (from -r extensions/whisper_stt/requirements.txt (line 3))
  Using cached soundfile-0.12.1-py2.py3-none-any.whl (24 kB)
Collecting ffmpeg (from -r extensions/whisper_stt/requirements.txt (line 4))
  Using cached ffmpeg-1.4-py3-none-any.whl
Requirement already satisfied: requests>=2.26.0 in ./installer_files/env/lib/python3.11/site-packages (from SpeechRecognition==3.10.0->-r extensions/whisper_stt/requirements.txt (line 1)) (2.31.0)
Collecting numba (from openai-whisper==20230918->-r extensions/whisper_stt/requirements.txt (line 2))
  Using cached numba-0.58.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl.metadata (2.7 kB)
Requirement already satisfied: numpy in ./installer_files/env/lib/python3.11/site-packages (from openai-whisper==20230918->-r extensions/whisper_stt/requirements.txt (line 2)) (1.26.2)
Collecting tqdm (from openai-whisper==20230918->-r extensions/whisper_stt/requirements.txt (line 2))
  Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting more-itertools (from openai-whisper==20230918->-r extensions/whisper_stt/requirements.txt (line 2))
  Using cached more_itertools-10.1.0-py3-none-any.whl.metadata (33 kB)
Collecting tiktoken==0.3.3 (from openai-whisper==20230918->-r extensions/whisper_stt/requirements.txt (line 2))
  Using cached tiktoken-0.3.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (1.6 MB)
Requirement already satisfied: regex>=2022.1.18 in ./installer_files/env/lib/python3.11/site-packages (from tiktoken==0.3.3->openai-whisper==20230918->-r extensions/whisper_stt/requirements.txt (line 2)) (2023.10.3)
Collecting cffi>=1.0 (from soundfile->-r extensions/whisper_stt/requirements.txt (line 3))
  Using cached cffi-1.16.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (1.5 kB)
Collecting pycparser (from cffi>=1.0->soundfile->-r extensions/whisper_stt/requirements.txt (line 3))
  Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/whisper_stt/requirements.txt (line 1)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/whisper_stt/requirements.txt (line 1)) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/whisper_stt/requirements.txt (line 1)) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/whisper_stt/requirements.txt (line 1)) (2023.11.17)
Collecting llvmlite<0.42,>=0.41.0dev0 (from numba->openai-whisper==20230918->-r extensions/whisper_stt/requirements.txt (line 2))
  Using cached llvmlite-0.41.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (4.8 kB)
Using cached cffi-1.16.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (466 kB)
Using cached more_itertools-10.1.0-py3-none-any.whl (55 kB)
Using cached numba-0.58.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl (3.4 MB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached llvmlite-0.41.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (42.6 MB)
Building wheels for collected packages: openai-whisper
  Building wheel for openai-whisper (pyproject.toml) ... done
  Created wheel for openai-whisper: filename=openai_whisper-20230918-py3-none-any.whl size=798448 sha256=853b5cd76ddac2a81edf85fd4972a0a7106ccbc3ef49dc64a80ab61522eb19a4
  Stored in directory: /tmp/pip-ephem-wheel-cache-5etvygp0/wheels/35/e7/4f/cd878f35d6cb5bf819c592f299ff25b6c0cf5a74e1c6576eba
Successfully built openai-whisper
Installing collected packages: ffmpeg, tqdm, pycparser, more-itertools, llvmlite, tiktoken, numba, cffi, soundfile, openai-whisper
  Attempting uninstall: tiktoken
    Found existing installation: tiktoken 0.5.2
    Uninstalling tiktoken-0.5.2:
      Successfully uninstalled tiktoken-0.5.2
Successfully installed cffi-1.16.0 ffmpeg-1.4 llvmlite-0.41.1 more-itertools-10.1.0 numba-0.58.1 openai-whisper-20230918 pycparser-2.21 soundfile-0.12.1 tiktoken-0.3.3 tqdm-4.66.1
Requirement already satisfied: ipython in ./installer_files/env/lib/python3.11/site-packages (from -r extensions/silero_tts/requirements.txt (line 1)) (8.18.1)
Collecting num2words (from -r extensions/silero_tts/requirements.txt (line 2))
  Using cached num2words-0.5.13-py3-none-any.whl.metadata (12 kB)
Collecting omegaconf (from -r extensions/silero_tts/requirements.txt (line 3))
  Using cached omegaconf-2.3.0-py3-none-any.whl (79 kB)
Collecting pydub (from -r extensions/silero_tts/requirements.txt (line 4))
  Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting PyYAML (from -r extensions/silero_tts/requirements.txt (line 5))
  Using cached PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (2.1 kB)
Requirement already satisfied: decorator in ./installer_files/env/lib/python3.11/site-packages (from ipython->-r extensions/silero_tts/requirements.txt (line 1)) (5.1.1)
Requirement already satisfied: jedi>=0.16 in ./installer_files/env/lib/python3.11/site-packages (from ipython->-r extensions/silero_tts/requirements.txt (line 1)) (0.19.1)
Requirement already satisfied: matplotlib-inline in ./installer_files/env/lib/python3.11/site-packages (from ipython->-r extensions/silero_tts/requirements.txt (line 1)) (0.1.6)
Requirement already satisfied: prompt-toolkit<3.1.0,>=3.0.41 in ./installer_files/env/lib/python3.11/site-packages (from ipython->-r extensions/silero_tts/requirements.txt (line 1)) (3.0.41)
Requirement already satisfied: pygments>=2.4.0 in ./installer_files/env/lib/python3.11/site-packages (from ipython->-r extensions/silero_tts/requirements.txt (line 1)) (2.17.2)
Requirement already satisfied: stack-data in ./installer_files/env/lib/python3.11/site-packages (from ipython->-r extensions/silero_tts/requirements.txt (line 1)) (0.6.3)
Requirement already satisfied: traitlets>=5 in ./installer_files/env/lib/python3.11/site-packages (from ipython->-r extensions/silero_tts/requirements.txt (line 1)) (5.14.0)
Requirement already satisfied: pexpect>4.3 in ./installer_files/env/lib/python3.11/site-packages (from ipython->-r extensions/silero_tts/requirements.txt (line 1)) (4.9.0)
Collecting docopt>=0.6.2 (from num2words->-r extensions/silero_tts/requirements.txt (line 2))
  Using cached docopt-0.6.2-py2.py3-none-any.whl
Collecting antlr4-python3-runtime==4.9.* (from omegaconf->-r extensions/silero_tts/requirements.txt (line 3))
  Using cached antlr4_python3_runtime-4.9.3-py3-none-any.whl
Requirement already satisfied: parso<0.9.0,>=0.8.3 in ./installer_files/env/lib/python3.11/site-packages (from jedi>=0.16->ipython->-r extensions/silero_tts/requirements.txt (line 1)) (0.8.3)
Requirement already satisfied: ptyprocess>=0.5 in ./installer_files/env/lib/python3.11/site-packages (from pexpect>4.3->ipython->-r extensions/silero_tts/requirements.txt (line 1)) (0.7.0)
Requirement already satisfied: wcwidth in ./installer_files/env/lib/python3.11/site-packages (from prompt-toolkit<3.1.0,>=3.0.41->ipython->-r extensions/silero_tts/requirements.txt (line 1)) (0.2.12)
Requirement already satisfied: executing>=1.2.0 in ./installer_files/env/lib/python3.11/site-packages (from stack-data->ipython->-r extensions/silero_tts/requirements.txt (line 1)) (2.0.1)
Requirement already satisfied: asttokens>=2.1.0 in ./installer_files/env/lib/python3.11/site-packages (from stack-data->ipython->-r extensions/silero_tts/requirements.txt (line 1)) (2.4.1)
Requirement already satisfied: pure-eval in ./installer_files/env/lib/python3.11/site-packages (from stack-data->ipython->-r extensions/silero_tts/requirements.txt (line 1)) (0.2.2)
Requirement already satisfied: six>=1.12.0 in ./installer_files/env/lib/python3.11/site-packages (from asttokens>=2.1.0->stack-data->ipython->-r extensions/silero_tts/requirements.txt (line 1)) (1.16.0)
Using cached num2words-0.5.13-py3-none-any.whl (143 kB)
Using cached PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (732 kB)
Installing collected packages: pydub, docopt, antlr4-python3-runtime, PyYAML, num2words, omegaconf
Successfully installed PyYAML-6.0.1 antlr4-python3-runtime-4.9.3 docopt-0.6.2 num2words-0.5.13 omegaconf-2.3.0 pydub-0.25.1
Collecting deep-translator==1.9.2 (from -r extensions/google_translate/requirements.txt (line 1))
  Using cached deep_translator-1.9.2-py3-none-any.whl (30 kB)
Collecting beautifulsoup4<5.0.0,>=4.9.1 (from deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1))
  Using cached beautifulsoup4-4.12.2-py3-none-any.whl (142 kB)
Requirement already satisfied: requests<3.0.0,>=2.23.0 in ./installer_files/env/lib/python3.11/site-packages (from deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (2.31.0)
Collecting soupsieve>1.2 (from beautifulsoup4<5.0.0,>=4.9.1->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1))
  Using cached soupsieve-2.5-py3-none-any.whl.metadata (4.7 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in ./installer_files/env/lib/python3.11/site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in ./installer_files/env/lib/python3.11/site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./installer_files/env/lib/python3.11/site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in ./installer_files/env/lib/python3.11/site-packages (from requests<3.0.0,>=2.23.0->deep-translator==1.9.2->-r extensions/google_translate/requirements.txt (line 1)) (2023.11.17)
Using cached soupsieve-2.5-py3-none-any.whl (36 kB)
Installing collected packages: soupsieve, beautifulsoup4, deep-translator
Successfully installed beautifulsoup4-4.12.2 deep-translator-1.9.2 soupsieve-2.5
TORCH: 2.1.1

*******************************************************************
* Installing webui requirements from file: requirements_noavx2.txt
*******************************************************************
WARNING: Skipping torch-grammar as it is not installed.
Uninstalled torch-grammar
Requirement already satisfied: SpeechRecognition==3.10.0 in ./installer_files/env/lib/python3.11/site-packages (from -r extensions/openai/requirements.txt (line 1)) (3.10.0)
Requirement already satisfied: flask_cloudflared==0.0.14 in ./installer_files/env/lib/python3.11/site-packages (from -r extensions/openai/requirements.txt (line 2)) (0.0.14)
Requirement already satisfied: sse-starlette==1.6.5 in ./installer_files/env/lib/python3.11/site-packages (from -r extensions/openai/requirements.txt (line 3)) (1.6.5)
Requirement already satisfied: tiktoken in ./installer_files/env/lib/python3.11/site-packages (from -r extensions/openai/requirements.txt (line 4)) (0.3.3)
Collecting tiktoken (from -r extensions/openai/requirements.txt (line 4))
  Using cached tiktoken-0.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (6.6 kB)
Requirement already satisfied: requests>=2.26.0 in ./installer_files/env/lib/python3.11/site-packages (from SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (2.31.0)
Requirement already satisfied: Flask>=0.8 in ./installer_files/env/lib/python3.11/site-packages (from flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (3.0.0)
Requirement already satisfied: starlette in ./installer_files/env/lib/python3.11/site-packages (from sse-starlette==1.6.5->-r extensions/openai/requirements.txt (line 3)) (0.33.0)
Requirement already satisfied: regex>=2022.1.18 in ./installer_files/env/lib/python3.11/site-packages (from tiktoken->-r extensions/openai/requirements.txt (line 4)) (2023.10.3)
Requirement already satisfied: Werkzeug>=3.0.0 in ./installer_files/env/lib/python3.11/site-packages (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (3.0.1)
Requirement already satisfied: Jinja2>=3.1.2 in ./installer_files/env/lib/python3.11/site-packages (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (3.1.2)
Requirement already satisfied: itsdangerous>=2.1.2 in ./installer_files/env/lib/python3.11/site-packages (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (2.1.2)
Requirement already satisfied: click>=8.1.3 in ./installer_files/env/lib/python3.11/site-packages (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (8.1.7)
Requirement already satisfied: blinker>=1.6.2 in ./installer_files/env/lib/python3.11/site-packages (from Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (1.7.0)
Requirement already satisfied: charset-normalizer<4,>=2 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in ./installer_files/env/lib/python3.11/site-packages (from requests>=2.26.0->SpeechRecognition==3.10.0->-r extensions/openai/requirements.txt (line 1)) (2023.11.17)
Requirement already satisfied: anyio<5,>=3.4.0 in ./installer_files/env/lib/python3.11/site-packages (from starlette->sse-starlette==1.6.5->-r extensions/openai/requirements.txt (line 3)) (4.1.0)
Requirement already satisfied: sniffio>=1.1 in ./installer_files/env/lib/python3.11/site-packages (from anyio<5,>=3.4.0->starlette->sse-starlette==1.6.5->-r extensions/openai/requirements.txt (line 3)) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in ./installer_files/env/lib/python3.11/site-packages (from Jinja2>=3.1.2->Flask>=0.8->flask_cloudflared==0.0.14->-r extensions/openai/requirements.txt (line 2)) (2.1.3)
Using cached tiktoken-0.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (2.0 MB)
Installing collected packages: tiktoken
  Attempting uninstall: tiktoken
    Found existing installation: tiktoken 0.3.3
    Uninstalling tiktoken-0.3.3:
      Successfully uninstalled tiktoken-0.3.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
openai-whisper 20230918 requires tiktoken==0.3.3, but you have tiktoken 0.5.2 which is incompatible.
Successfully installed tiktoken-0.5.2
Collecting git+https://github.com/oobabooga/torch-grammar.git (from -r temp_requirements.txt (line 23))
  Cloning https://github.com/oobabooga/torch-grammar.git to /tmp/pip-req-build-ingvnv5t
  Running command git clone --filter=blob:none --quiet https://github.com/oobabooga/torch-grammar.git /tmp/pip-req-build-ingvnv5t
  Resolved https://github.com/oobabooga/torch-grammar.git to commit 82850b5383a629f3b0fa1fba7d8f2aba3185ddb2
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Ignoring bitsandbytes: markers 'platform_system == "Windows"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Windows" and python_version == "3.11"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring llama-cpp-python: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Windows" and python_version == "3.11"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring auto-gptq: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Ignoring exllama: markers 'platform_system == "Windows" and python_version == "3.11"' don't match your environment
Ignoring exllama: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring exllama: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring exllama: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring exllama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring exllama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring exllama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring exllama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Windows" and python_version == "3.11"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring exllamav2: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Ignoring flash-attn: markers 'platform_system == "Windows" and python_version == "3.11"' don't match your environment
Ignoring flash-attn: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring flash-attn: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring flash-attn: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring flash-attn: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring flash-attn: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring flash-attn: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring flash-attn: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Ignoring llama-cpp-python-cuda: markers 'platform_system == "Windows" and python_version == "3.11"' don't match your environment
Ignoring llama-cpp-python-cuda: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring llama-cpp-python-cuda: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring llama-cpp-python-cuda: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring llama-cpp-python-cuda: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring llama-cpp-python-cuda: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring llama-cpp-python-cuda: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring llama-cpp-python-cuda: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Ignoring gptq-for-llama: markers 'platform_system == "Windows" and python_version == "3.11"' don't match your environment
Ignoring gptq-for-llama: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring gptq-for-llama: markers 'platform_system == "Windows" and python_version == "3.9"' don't match your environment
Ignoring gptq-for-llama: markers 'platform_system == "Windows" and python_version == "3.8"' don't match your environment
Ignoring gptq-for-llama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Ignoring gptq-for-llama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"' don't match your environment
Ignoring gptq-for-llama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.9"' don't match your environment
Ignoring gptq-for-llama: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.8"' don't match your environment
Collecting ctransformers==0.2.27+cu121 (from -r temp_requirements.txt (line 88))
  Downloading https://github.com/jllllll/ctransformers-cuBLAS-wheels/releases/download/AVX/ctransformers-0.2.27+cu121-py3-none-any.whl (15.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.6/15.6 MB 6.4 MB/s eta 0:00:00
Collecting accelerate==0.25.* (from -r temp_requirements.txt (line 1))
  Using cached accelerate-0.25.0-py3-none-any.whl.metadata (18 kB)
Collecting colorama (from -r temp_requirements.txt (line 2))
  Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting datasets (from -r temp_requirements.txt (line 3))
  Using cached datasets-2.15.0-py3-none-any.whl.metadata (20 kB)
Collecting einops (from -r temp_requirements.txt (line 4))
  Using cached einops-0.7.0-py3-none-any.whl.metadata (13 kB)
Collecting exllamav2==0.0.10 (from -r temp_requirements.txt (line 5))
  Using cached exllamav2-0.0.10-py3-none-any.whl.metadata (401 bytes)
Collecting gradio==3.50.* (from -r temp_requirements.txt (line 6))
  Using cached gradio-3.50.2-py3-none-any.whl.metadata (17 kB)
Collecting markdown (from -r temp_requirements.txt (line 7))
  Using cached Markdown-3.5.1-py3-none-any.whl.metadata (7.1 kB)
Collecting numpy==1.24.* (from -r temp_requirements.txt (line 8))
  Using cached numpy-1.24.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (5.6 kB)
Collecting optimum==1.14.0 (from -r temp_requirements.txt (line 9))
  Using cached optimum-1.14.0-py3-none-any.whl.metadata (17 kB)
Collecting pandas (from -r temp_requirements.txt (line 10))
  Using cached pandas-2.1.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (18 kB)
Collecting peft==0.6.* (from -r temp_requirements.txt (line 11))
  Using cached peft-0.6.2-py3-none-any.whl.metadata (23 kB)
Requirement already satisfied: Pillow>=9.5.0 in ./installer_files/env/lib/python3.11/site-packages (from -r temp_requirements.txt (line 12)) (10.1.0)
Requirement already satisfied: pyyaml in ./installer_files/env/lib/python3.11/site-packages (from -r temp_requirements.txt (line 13)) (6.0.1)
Requirement already satisfied: requests in ./installer_files/env/lib/python3.11/site-packages (from -r temp_requirements.txt (line 14)) (2.31.0)
Collecting safetensors==0.4.1 (from -r temp_requirements.txt (line 15))
  Using cached safetensors-0.4.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.8 kB)
Collecting scipy (from -r temp_requirements.txt (line 16))
  Using cached scipy-1.11.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (60 kB)
Collecting sentencepiece (from -r temp_requirements.txt (line 17))
  Using cached sentencepiece-0.1.99-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (1.3 MB)
Collecting tensorboard (from -r temp_requirements.txt (line 18))
  Using cached tensorboard-2.15.1-py3-none-any.whl.metadata (1.7 kB)
Collecting transformers==4.35.* (from -r temp_requirements.txt (line 19))
  Using cached transformers-4.35.2-py3-none-any.whl.metadata (123 kB)
Requirement already satisfied: tqdm in ./installer_files/env/lib/python3.11/site-packages (from -r temp_requirements.txt (line 20)) (4.66.1)
Collecting wandb (from -r temp_requirements.txt (line 21))
  Using cached wandb-0.16.1-py3-none-any.whl.metadata (9.8 kB)
Collecting bitsandbytes==0.41.1 (from -r temp_requirements.txt (line 26))
  Using cached bitsandbytes-0.41.1-py3-none-any.whl.metadata (9.8 kB)
ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11; 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10; 1.7.2 Requires-Python >=3.7,<3.11; 1.7.3 Requires-Python >=3.7,<3.11; 1.8.0 Requires-Python >=3.8,<3.11; 1.8.0rc1 Requires-Python >=3.8,<3.11; 1.8.0rc2 Requires-Python >=3.8,<3.11; 1.8.0rc3 Requires-Python >=3.8,<3.11; 1.8.0rc4 Requires-Python >=3.8,<3.11; 1.8.1 Requires-Python >=3.8,<3.11
ERROR: Could not find a version that satisfies the requirement autoawq==0.1.7 (from versions: none)
ERROR: No matching distribution found for autoawq==0.1.7
Command '. "/home/lloyd/text-generation-webui-main/installer_files/conda/etc/profile.d/conda.sh" && conda activate "/home/lloyd/text-generation-webui-main/installer_files/env" && python -m pip install -r temp_requirements.txt --upgrade' failed with exit status code '1'.

Exiting now.
Try running the start/update script again.

System Info

Architecture:           aarch64
  CPU op-mode(s):       32-bit, 64-bit
  Byte Order:           Little Endian
CPU(s):                 8
  On-line CPU(s) list:  0-7
Vendor ID:              ARM
  Model:                0
  Thread(s) per core:   1
  Core(s) per socket:   4
  Socket(s):            1
  Stepping:             r0p0
  BogoMIPS:             38.40
  Flags:                fp asimd aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ilrcpc flagm
  Model name:           Cortex-A78C
    Model:              0
    Thread(s) per core: 1
    Core(s) per socket: 2
    Socket(s):          1
    Stepping:           r0p0
    BogoMIPS:           38.40
    Flags:              fp asimd aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ilrcpc flagm
  Model:                0
  Thread(s) per core:   1
  Core(s) per socket:   1
  Socket(s):            1
  Stepping:             r0p0
  BogoMIPS:             38.40
  Flags:                fp asimd aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ilrcpc flagm
  Model:                0
  Thread(s) per core:   1
  Core(s) per socket:   1
  Socket(s):            1
Caches (sum of all):
  L1d:                  512 KiB (8 instances)
  L1i:                  512 KiB (8 instances)
  L2:                   8 MiB (8 instances)
  L3:                   8 MiB (1 instance)
Vulnerabilities:
  Gather data sampling: Not affected
  Itlb multihit:        Not affected
  L1tf:                 Not affected
  Mds:                  Not affected
  Meltdown:             Vulnerable
  Mmio stale data:      Not affected
  Retbleed:             Not affected
  Spec rstack overflow: Not affected
  Spec store bypass:    Not affected
  Spectre v1:           Mitigation; __user pointer sanitization
  Spectre v2:           Mitigation; CSV2, but not BHB
  Srbds:                Not affected
  Tsx async abort:      Not affected
@llcool llcool added the bug Something isn't working label Dec 11, 2023
@llcool
Author

llcool commented Dec 12, 2023

I found the problem: the one_click.py script fails to detect my CPU correctly, so I hard-coded
requirements_file = "requirements_cpu_only_noavx2.txt", deleted the installer_files directory, and re-ran ./start_linux.sh.
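
A slightly less brittle variant of this workaround, sketched here purely as an illustration and not as the actual one_click.py logic, would be to pick the requirements file from the reported machine architecture so x86_64 installs keep their current behaviour; the select_requirements_file helper name is hypothetical.

# Hypothetical helper, not the real one_click.py code: choose a requirements file
# from the machine architecture so non-x86_64 hosts fall back to the CPU-only list.
import platform

def select_requirements_file() -> str:
    machine = platform.machine().lower()
    if machine in ("x86_64", "amd64"):
        # x86_64 keeps the default requirements with the prebuilt GPU wheels
        return "requirements_noavx2.txt"
    # aarch64/arm64 (and anything else) uses the CPU-only requirements
    return "requirements_cpu_only_noavx2.txt"

print(select_requirements_file())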

@exu-g

exu-g commented Feb 10, 2024

As determined by @llcool already, autoawq does not provide distributions for aarch64 CPUs. I've opened an issue about potential support here

In the meantime, limiting autoawq to x86_64 only could work.

diff --git a/requirements_noavx2.txt b/requirements_noavx2.txt
index fc2795cb..b9a47a87 100644
--- a/requirements_noavx2.txt
+++ b/requirements_noavx2.txt
@@ -64,4 +64,4 @@ https://github.com/jllllll/GPTQ-for-LLaMa-CUDA/releases/download/0.1.1/gptq_for_
 https://github.com/jllllll/GPTQ-for-LLaMa-CUDA/releases/download/0.1.1/gptq_for_llama-0.1.1+cu121-cp311-cp311-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"
 https://github.com/jllllll/GPTQ-for-LLaMa-CUDA/releases/download/0.1.1/gptq_for_llama-0.1.1+cu121-cp310-cp310-linux_x86_64.whl; platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.10"
 https://github.com/jllllll/ctransformers-cuBLAS-wheels/releases/download/AVX/ctransformers-0.2.27+cu121-py3-none-any.whl
-autoawq==0.1.8; platform_system == "Linux" or platform_system == "Windows"
+autoawq==0.1.8; platform_machine == "x86_64" and (platform_system == "Linux" or platform_system == "Windows")

I'll open a PR in a bit
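
For anyone who wants to check that a marker like this evaluates the way they expect on their own machine, the packaging library (the same PEP 508 marker machinery pip relies on) can evaluate it directly; the snippet below is an illustrative sketch, not part of the proposed PR.

# Evaluate the proposed environment marker for the current interpreter/platform.
# Requires the `packaging` package (pip install packaging).
from packaging.markers import Marker

marker = Marker(
    'platform_machine == "x86_64" and '
    '(platform_system == "Linux" or platform_system == "Windows")'
)

# True on x86_64 Linux/Windows, False on aarch64, so pip would skip autoawq there.
print(marker.evaluate())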

@elkay

elkay commented Mar 2, 2024

Replying as I am also looking for an aarch64 install. Following.
