request: macOS installation instructions #1

Closed
YoungPhlo opened this issue Dec 24, 2024 · 11 comments · Fixed by #2

Comments

@YoungPhlo
Contributor

The goal is to create installation instructions that work for Mac devices.
I want to use this issue as an archive of thoughts as I work through the setup.

Here are my specs:

- Operating System
ProductName:            macOS
ProductVersion:         14.1
BuildVersion:           23B2073
KernelVersion:          23.1.0

- Environment
Shell:       /bin/zsh (arm64)
Homebrew:    4.4.6
Python:      3.13.0
@YoungPhlo
Contributor Author

Using my text editor, I asked Sonnet 3.5 for help making install.sh, using install.bat as an example, and it gave me this:

#!/bin/bash

# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate

# Install base requirements
pip install flask laion_clap librosa numpy

# Install PyTorch with MPS support (nightly version for best M1/M2/M3 support)
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu

# Create run script
cat > run.sh << 'EOF'
#!/bin/bash
source venv/bin/activate
python3 sound-similarity-browser.py
open http://localhost:5000/
EOF

# Make run script executable
chmod +x run.sh

# Run the application
./run.sh

but install.sh needs to be made executable before it can be run, so run chmod +x install.sh in the terminal first
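For reference, the full sequence on a fresh checkout would be something like this (assuming the script above is saved as install.sh in the repo root):

chmod +x install.sh
./install.sh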

@YoungPhlo
Contributor Author

The script from Sonnet doesn't work because laion_clap is pinned to numpy==1.23.5, and Python 3.13 (the default for macOS 14.1?) is not compatible with numpy==1.23.5.

Error:

(samplebrowser) dev@mac-m3pro: sample-browser % pip install laion_clap
Collecting laion_clap
  Using cached laion_clap-1.1.6-py3-none-any.whl.metadata (26 kB)
Collecting numpy==1.23.5 (from laion_clap)
  Using cached numpy-1.23.5.tar.gz (10.7 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [33 lines of output]
      Traceback (most recent call last):
        File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 112, in get_requires_for_build_wheel
          backend = _build_backend()
                    ^^^^^^^^^^^^^^^^
        File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
          obj = import_module(mod_path)
                ^^^^^^^^^^^^^^^^^^^^^^^
        File "/opt/homebrew/Cellar/python@3.12/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
          return _bootstrap._gcd_import(name[level:], package, level)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
        File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
        File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
        File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
        File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
        File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
        File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
        File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
        File "<frozen importlib._bootstrap_external>", line 995, in exec_module
        File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
        File "/Users/dev/Apps/IDE/CursorPy/cursor-portable-data/tmp/pip-build-env-d633a9m9/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 16, in <module>
          import setuptools.version
        File "/Users/dev/Apps/IDE/CursorPy/cursor-portable-data/tmp/pip-build-env-d633a9m9/overlay/lib/python3.12/site-packages/setuptools/version.py", line 1, in <module>
          import pkg_resources
        File "/Users/dev/Apps/IDE/CursorPy/cursor-portable-data/tmp/pip-build-env-d633a9m9/overlay/lib/python3.12/site-packages/pkg_resources/__init__.py", line 2172, in <module>
          register_finder(pkgutil.ImpImporter, find_on_path)
                          ^^^^^^^^^^^^^^^^^^^
      AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
(samplebrowser) dev@mac-m3pro: sample-browser % 

Issue: numpy/numpy#23808
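Before retrying, it's probably worth confirming which interpreter the venv actually picked up, since the traceback above shows Homebrew's python3.12 rather than 3.13 (just a sanity check, not part of the install script):

source venv/bin/activate
python --version
which python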

@YoungPhlo
Contributor Author

Sonnet is suggesting:

1. Use Python 3.11 instead of 3.12 (LAION CLAP and some of its dependencies aren't fully compatible with 3.12 yet)
2. Install a newer version of numpy first (1.26.0 or later has proper Python 3.11 support)

If you've already created the virtual environment with Python 3.12, you'll need to:
1. Delete the existing venv directory: `rm -rf venv`
2. Make sure Python 3.11 is installed on your system (on Mac: `brew install python@3.11`)
3. Run the updated install script

This should resolve the dependency conflicts and allow you to install LAION CLAP successfully.

After running brew list | grep python I see I don't have python@3.11 installed, so I'm going to use python@3.10 and see if that works.
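If it helps anyone following along, the switch should look roughly like this (untested beyond my machine, and assuming Homebrew puts python3.10 on your PATH):

brew install python@3.10
rm -rf venv
python3.10 -m venv venv
source venv/bin/activate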

@YoungPhlo
Contributor Author

OK, new error related to
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu:

(samplebrowser) dev@mac-m3pro: sample-browser % python3 sound-similarity-browser.py 
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████| 48.0/48.0 [00:00<00:00, 57.2kB/s]
vocab.txt: 100%|█████████████████████████████████████████████████████████████████████████| 232k/232k [00:00<00:00, 2.96MB/s]
tokenizer.json: 100%|████████████████████████████████████████████████████████████████████| 466k/466k [00:00<00:00, 9.22MB/s]
config.json: 100%|█████████████████████████████████████████████████████████████████████████| 570/570 [00:00<00:00, 3.12MB/s]
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████| 25.0/25.0 [00:00<00:00, 142kB/s]
vocab.json: 100%|████████████████████████████████████████████████████████████████████████| 899k/899k [00:00<00:00, 26.1MB/s]
merges.txt: 100%|████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 35.4MB/s]
tokenizer.json: 100%|██████████████████████████████████████████████████████████████████| 1.36M/1.36M [00:00<00:00, 38.6MB/s]
config.json: 100%|█████████████████████████████████████████████████████████████████████████| 481/481 [00:00<00:00, 2.58MB/s]
vocab.json: 100%|████████████████████████████████████████████████████████████████████████| 899k/899k [00:00<00:00, 49.8MB/s]
merges.txt: 100%|████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 37.8MB/s]
tokenizer.json: 100%|██████████████████████████████████████████████████████████████████| 1.36M/1.36M [00:00<00:00, 46.5MB/s]
config.json: 100%|█████████████████████████████████████████████████████████████████████| 1.72k/1.72k [00:00<00:00, 7.30MB/s]
/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/torch/functional.py:539: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:4312.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
model.safetensors: 100%|██████████████████████████████████████████████████████████████████| 499M/499M [00:04<00:00, 104MB/s]
Some weights of RobertaModel were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Load our best checkpoint in the paper.
Downloading laion_clap weight files...
100% [....................................................................] 1863587645 / 1863587645
Download completed!
Load Checkpoint...
Traceback (most recent call last):
  File "/Users/dev/code/ml/music/classification/sample-browser/sound-similarity-browser.py", line 112, in <module>
    browser = SoundSimilarityBrowser()
  File "/Users/dev/code/ml/music/classification/sample-browser/sound-similarity-browser.py", line 15, in __init__
    self.model.load_ckpt()
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/laion_clap/hook.py", line 1
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/laion_clap/hook.py", line 113, in load_ckpt
    ckpt = load_state_dict(ckpt, skip_params=True)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/laion_clap/clap_module/fact
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/laion_clap/clap_module/factory.py", line 54, in load_state_dict
    checkpoint = torch.load(checkpoint_path, map_location=map_location)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/torch/serialization.py", li
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/torch/serialization.py", line 1488, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. 
        (1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
        (2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
        WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.

Sonnet is suggesting an edit to the sound-similarity-browser.py file, but I really want to avoid that, especially if this is already working on Windows. Here is its suggestion:

This error is occurring because of recent changes in PyTorch 2.6 regarding the torch.load security features. The LAION-CLAP model checkpoint contains NumPy arrays that need special handling with the new PyTorch loading mechanism.
Here's how to fix it:
    def __init__(self, cache_file: str = "embeddings_cache.jsonl"):
        torch.serialization.add_safe_globals([np.core.multiarray.scalar])
        
        self.model = laion_clap.CLAP_Module(enable_fusion=False)
        self.model.load_ckpt()
        self.cache_file = cache_file
        self.embeddings_cache: Dict[str, List[float]] = {}
        self.load_cache()
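An alternative that avoids touching sound-similarity-browser.py at all would be to pin PyTorch below 2.6 in install.sh, since the weights_only default only changed in 2.6 (a sketch, not something I've verified against every dependency):

pip install "torch<2.6" torchvision torchaudio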

@YoungPhlo
Contributor Author

YoungPhlo commented Dec 24, 2024

The original install.bat file says:

pip install -U torch==2.4.0+cu121 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

So it makes sense to edit the install.sh file to include PyTorch 2.4.0 and maybe that'll fix the issues:

#!/bin/bash

# Create and activate virtual environment
python3.10 -m venv --prompt samplebrowser venv
source venv/bin/activate

# Install base requirements
pip install --upgrade pip setuptools wheel packaging
pip install flask laion_clap librosa

# Install PyTorch 2.4.0 (the version pinned in install.bat; the macOS arm64 wheels include MPS support)
pip install torch==2.4.0 torchvision torchaudio

# Create run script
cat > run.sh << 'EOF'
#!/bin/bash
source venv/bin/activate
python3 sound-similarity-browser.py
open http://localhost:5000/
EOF

# Make run script executable
chmod +x run.sh

# Run the application
./run.sh

Slight edits to the original:

  • remove numpy from the base pip install, since laion_clap pins its own version
  • add --prompt for venv naming
  • swap the PyTorch nightly for the pinned 2.4.0 release
  • update pip, setuptools, wheel, and packaging before installing deps
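Since the comments above mention MPS, a quick way to confirm the Metal backend actually ended up available in the venv (just a sanity check, not part of the original scripts):

source venv/bin/activate
python -c "import torch; print(torch.backends.mps.is_available(), torch.backends.mps.is_built())"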

@YoungPhlo
Contributor Author

Not sure what it is about localhost on Mac, but when I go to http://localhost:5000/ in Chrome or Safari it doesn't work. Safari is just completely blank, but Chrome shows this error:

Access to localhost was denied
You don't have authorization to view this page.
HTTP ERROR 403

Even when I edit the sound-similarity-browser.py file for Flask to use a different random port (like 5183) I get the same error.

The weird thing is visiting http://localhost:5000/ in Safari will trigger the script and the models will load in the background. All the activity still shows in the terminal.

I see * Running on http://127.0.0.1:5000 in the console as the script is loading. Turns out 127.0.0.1 works where localhost does not.
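For anyone else hitting the 403, the simplest fix is just to browse to http://127.0.0.1:5000/ instead of localhost. If you'd rather make it explicit in the app, the run call could be written like this (a hypothetical edit; the actual call in sound-similarity-browser.py may differ):

if __name__ == "__main__":
    # Serve on the loopback IP explicitly so the printed URL matches what works in the browser
    app.run(host="127.0.0.1", port=5000, debug=True)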

@YoungPhlo
Contributor Author

Screenshot 2024-12-24 at 10 30 32 AM

@YoungPhlo
Contributor Author

A small folder of percs took 4 seconds to process 🔥
This script appears to use the GPU on my M3, but I couldn't really tell because it happened so fast. I'll test another folder.
Bugs:

  • The sound won't play after matching

Screenshots:


Screenshot 2024-12-24 at 10 36 22 AM Screenshot 2024-12-24 at 10 37 26 AM

Errors in console:

INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 09:35:01] "GET / HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 09:35:01] "GET /favicon.ico HTTP/1.1" 404 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:30:14] "GET / HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:01] "GET /process-folder?folder_path=/Users/dev/Dropbox/Public/Production/User/Metro%20Drumkit/Percs HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "POST /search HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/%2FUsers%2Fdev%2FDropbox%2FPublic%2FProduction%2FUser%2FMetro%20Drumkit%2FPercs%2FMB%20Perc%20(3).wav HTTP/1.1" 308 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/%2FUsers%2Fdev%2FDropbox%2FPublic%2FProduction%2FUser%2FMetro%20Drumkit%2FPercs%2FMB%20Perc%20(12).wav HTTP/1.1" 308 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/%2FUsers%2Fdev%2FDropbox%2FPublic%2FProduction%2FUser%2FMetro%20Drumkit%2FPercs%2FMB%20Perc%20(9).wav HTTP/1.1" 308 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/%2FUsers%2Fdev%2FDropbox%2FPublic%2FProduction%2FUser%2FMetro%20Drumkit%2FPercs%2FMB%20Perc%20(7).wav HTTP/1.1" 308 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/%2FUsers%2Fdev%2FDropbox%2FPublic%2FProduction%2FUser%2FMetro%20Drumkit%2FPercs%2FMB%20Perc%20(8).wav HTTP/1.1" 308 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/%2FUsers%2Fdev%2FDropbox%2FPublic%2FProduction%2FUser%2FMetro%20Drumkit%2FPercs%2FMB%20Perc%20(5).wav HTTP/1.1" 308 -
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/Users/dev/Dropbox/Public/Production/User/Metro%20Drumkit/Percs/MB%20Perc%20(3).wav HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 1536, in __call__
    return self.wsgi_app(environ, start_response)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 1514, in wsgi_app
    response = self.handle_exception(e)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 1511, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 919, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 917, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 902, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/Users/dev/code/ml/music/classification/sample-browser/sound-similarity-browser.py", line 139, in serve_audio
    return send_file(filepath)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/helpers.py", line 511, in send_file
    return werkzeug.utils.send_file(  # type: ignore[return-value]
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/werkzeug/utils.py", line 428, in send_file
    stat = os.stat(path)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dev/code/ml/music/classification/sample-browser/Users/dev/Dropbox/Public/Production/User/Metro Drumkit/Percs/MB Perc (3).wav'
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/Users/dev/Dropbox/Public/Production/User/Metro%20Drumkit/Percs/MB%20Perc%20(9).wav HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 1536, in __call__
    return self.wsgi_app(environ, start_response)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 1514, in wsgi_app
    response = self.handle_exception(e)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 1511, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 919, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 917, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/app.py", line 902, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/Users/dev/code/ml/music/classification/sample-browser/sound-similarity-browser.py", line 139, in serve_audio
    return send_file(filepath)
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/flask/helpers.py", line 511, in send_file
    return werkzeug.utils.send_file(  # type: ignore[return-value]
  File "/Users/dev/code/ml/music/classification/sample-browser/venv/lib/python3.10/site-packages/werkzeug/utils.py", line 428, in send_file
    stat = os.stat(path)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dev/code/ml/music/classification/sample-browser/Users/dev/Dropbox/Public/Production/User/Metro Drumkit/Percs/MB Perc (9).wav'
INFO:werkzeug:127.0.0.1 - - [24/Dec/2024 10:36:29] "GET /serve-audio/%2FUsers%2Fdev%2FDropbox%2FPublic%2FProduction%2FUser%2FMetro%20Drumkit%2FPercs%2FMB%20Perc%20(6).wav HTTP/1.1" 308 -
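The 500s above look like the leading slash of the absolute file path is being dropped when Flask decodes the /serve-audio/ URL (the 308 redirects collapse the double slash), so send_file ends up resolving the path relative to the project folder. A minimal, untested workaround inside serve_audio might be (assuming the route passes the rest of the URL in as filepath):

def serve_audio(filepath):
    # Re-add the leading "/" that gets stripped from absolute paths in the URL
    if not filepath.startswith("/"):
        filepath = "/" + filepath
    return send_file(filepath)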

@YoungPhlo
Contributor Author

Processed Sliced beats folder from FL Studio. 89 files of varying length, some 30s long.
Only took 7s to process.
Audio still not playing in browser.
Screenshot 2024-12-24 at 10 55 43 AM

Before and after view of my CPU/GPU while processing. 100% CPU means it's probably not using the GPU huh? 🤔

Screenshot 2024-12-24 at 10 55 09 AM Screenshot 2024-12-24 at 10 55 21 AM

@YoungPhlo
Contributor Author

OK, Sonnet gave me a few ideas, but fixing the audio path issues in the script is probably beyond the scope of this issue. Going to open a PR with changes to the README on how to install this on a Mac.

@YoungPhlo
Contributor Author

One other weird thing I encountered using this repo on macOS: because app.run is set to debug=True, it's very hard to kill the server once it's up and running. Even if I Ctrl+C in the terminal, the server keeps running in the background.

I added a piece to install.sh that hopefully addresses this, but I have not tested it extensively.

python3 sound-similarity-browser.py &
PID=$!  # Capture the background process ID
sleep 15
open http://127.0.0.1:5000/
# Wait for Ctrl+C
trap "kill $PID" INT
wait $PID

If I hit Ctrl+C and then tried to run the original command again (python sound-similarity-browser.py), I would get this error:

Address already in use
Port 5000 is in use by another program. Either identify and stop that program, or start the server with a different port.
On macOS, try disabling the 'AirPlay Receiver' service from System Preferences -> General -> AirDrop & Handoff.
dev@mac-m3pro: sample-browser % 

Which also means maybe we should use a port other than 5000.
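If the port does get stuck, the leftover debug-reloader process can usually be found and killed by hand with standard macOS tooling (nothing specific to this repo):

# Find whatever is still listening on port 5000 and kill it
lsof -ti tcp:5000 | xargs kill -9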

Also, any time I made an edit to sound-similarity-browser.py, the server would restart with the new changes even if I had "killed the server"... until I added the process ID capture.

These are just additional thoughts. I'm leaving this alone for now.
