hooks: torch: suppress generation of symlinks for shared libs #836

Merged · 2 commits · Dec 3, 2024
14 changes: 14 additions & 0 deletions _pyinstaller_hooks_contrib/stdhooks/hook-torch.py
@@ -73,6 +73,20 @@ def _infer_nvidia_hiddenimports():
logger.info("hook-torch: inferred hidden imports for CUDA libraries: %r", nvidia_hiddenimports)
hiddenimports += nvidia_hiddenimports

# On Linux, prevent binary dependency analysis from generating symbolic links for libraries from `torch/lib` to
# the top-level application directory. These symbolic links seem to confuse `torch` about the location of its
# shared libraries (likely because code in one of the libraries looks up the library file's location, but does
# not fully resolve it), and prevent it from finding dynamically-loaded libraries in the `torch/lib` directory,
# such as `torch/lib/libtorch_cuda_linalg.so`. The issue was observed with earlier versions of `torch` builds
# provided by https://download.pytorch.org/whl/torch, specifically 1.13.1+cu117, 2.0.1+cu117, and 2.1.2+cu118;
# later versions do not seem to be affected. The wheels provided on PyPI do not seem to be affected either, even
# for torch 1.13.1, 2.0.1, and 2.1.2. However, these symlinks should not be necessary on Linux in general, so
# there should be no harm in suppressing them for all versions.
#
# The `bindepend_symlink_suppression` hook attribute requires PyInstaller >= 6.11, and is no-op in earlier
# versions.
bindepend_symlink_suppression = ['**/torch/lib/*.so*']
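The failure mode described in the comment above can be illustrated with a small sketch: if library code derives a search directory from its own path as given (a symlink in the top-level application directory) instead of the fully resolved path, it looks in the wrong place. The file and directory names below are made up for the illustration.

```python
import os
import tempfile

# Hypothetical layout: the real library lives in <app>/torch/lib, while
# binary dependency analysis creates a symlink to it in <app> itself.
workdir = tempfile.mkdtemp()
libdir = os.path.join(workdir, "torch", "lib")
os.makedirs(libdir)
real_lib = os.path.join(libdir, "libtorch_cuda_linalg.so")
open(real_lib, "w").close()

link = os.path.join(workdir, "libtorch_cuda_linalg.so")
os.symlink(real_lib, link)

# Naive lookup: directory of the path as given (the symlink) -- points at
# the top-level directory, not torch/lib.
naive_dir = os.path.dirname(link)
# Correct lookup: resolve the symlink first, then take the directory.
resolved_dir = os.path.dirname(os.path.realpath(link))

print(naive_dir.endswith(os.path.join("torch", "lib")))     # -> False
print(resolved_dir.endswith(os.path.join("torch", "lib")))  # -> True
```

Suppressing the symlink removes the ambiguous path entirely, so even a non-resolving lookup sees only the real location under `torch/lib`.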

# The Windows nightly build for torch 2.3.0 added dependency on MKL. The `mkl` distribution does not provide an
# importable package, but rather installs the DLLs in <env>/Library/bin directory. Therefore, we cannot write a
# separate hook for it, and must collect the DLLs here. (Most of these DLLs are missed by PyInstaller's binary
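The glob pattern `**/torch/lib/*.so*` is intended to cover shared libraries (including versioned names such as `.so.1`) under any `torch/lib` directory. A rough sketch of which paths it is meant to cover, using the standard-library `fnmatch` module (PyInstaller's own matching may differ in detail; in `fnmatch`, `*` also matches path separators, so `**` behaves like `*`):

```python
import fnmatch

pattern = "**/torch/lib/*.so*"
# Hypothetical collected-binary paths, for illustration only.
candidates = [
    "app/torch/lib/libtorch_cuda_linalg.so",  # plain shared library
    "app/torch/lib/libc10.so.1",              # versioned shared library
    "app/torch/bin/torch_shm_manager",        # not a .so, not in torch/lib
]
matches = [p for p in candidates if fnmatch.fnmatch(p, pattern)]
print(matches)
# -> ['app/torch/lib/libtorch_cuda_linalg.so', 'app/torch/lib/libc10.so.1']
```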
7 changes: 7 additions & 0 deletions news/834.update.rst
@@ -0,0 +1,7 @@
(Linux) Update ``torch`` hook to suppress creation of symbolic links to
the top-level application directory for the shared libraries discovered
during binary dependency analysis in the ``torch/lib`` directory. This fixes
issues with ``libtorch_cuda_linalg.so`` not being found in spite of it
being collected, as observed with certain ``torch`` builds provided by
https://download.pytorch.org/whl/torch (e.g., ``1.13.1+cu117``,
``2.0.1+cu117``, and ``2.1.2+cu118``).
49 changes: 49 additions & 0 deletions tests/test_pytorch.py
@@ -12,6 +12,7 @@

import pytest

from PyInstaller import isolated
from PyInstaller.utils.tests import importorskip


@@ -28,6 +29,54 @@ def test_torch(pyi_builder):
""")


@importorskip('torch')
def test_torch_cuda_linalg(pyi_builder):
# Check that CUDA is available.
@isolated.decorate
def _check_cuda():
import torch
return torch.cuda.is_available() and torch.cuda.device_count() > 0

if not _check_cuda():
pytest.skip(reason="CUDA not available.")

pyi_builder.test_source("""
import torch

# Solve the following system of equations:
# x + 2y - 2z = -15
# 2x + y - 5z = -21
# x - 4y + z = 18
#
# Solution: x=-1, y=-4, z=3

cuda_device = torch.device('cuda')
print(f"Using device: {cuda_device}")

A = torch.tensor([
[1, 2, -2],
[2, 1, -5],
[1, -4, 1],
], dtype=torch.float, device=cuda_device)

b = torch.tensor([
[-15],
[-21],
[18],
], dtype=torch.float, device=cuda_device)

print(f"A={A}")
print(f"b={b}")

x = torch.linalg.solve(A, b)
print(f"x={x}")

assert x[0] == -1
assert x[1] == -4
assert x[2] == 3
""")


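The expected solution used in the assertions above can be cross-checked by substituting (x, y, z) = (-1, -4, 3) into the three equations with plain Python arithmetic:

```python
# Cross-check of the expected solution for the system used in the test above:
#   x + 2y - 2z = -15
#  2x +  y - 5z = -21
#   x - 4y +  z =  18
A = [
    [1, 2, -2],
    [2, 1, -5],
    [1, -4, 1],
]
b = [-15, -21, 18]
x = [-1, -4, 3]

for row, rhs in zip(A, b):
    lhs = sum(coeff * val for coeff, val in zip(row, x))
    assert lhs == rhs, (lhs, rhs)
print("solution verified")  # -> solution verified
```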
# Test with a torchaudio transform that uses TorchScript, which requires
# access to the transforms' sources.
@importorskip('torchaudio')