[ROCm] Guard triton backend call around cuda.is_available (pytorch#143570)

To resolve: pytorch/test-infra#6082

Calling into Triton's get_backend_options initialises CUDA, which breaks CPU-only environments that have HIP installed.

Pull Request resolved: pytorch#143570
Approved by: https://github.com/atalman, https://github.com/jeffdaily
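A minimal sketch of the guard logic this commit applies, with hypothetical stand-ins (`select_configs` and a simplified `build_rocm_gemm_configs`) so it runs without torch installed. On a CPU-only ROCm build, `torch.version.hip` is truthy but `torch.cuda.is_available()` is False, so the `and` short-circuits and the ROCm tuning path, whose Triton call would initialise the runtime, is never reached.

```python
def build_rocm_gemm_configs(configs):
    # Hypothetical stand-in: the real helper rewrites kernel configs for
    # ROCm (e.g. forcing num_stages to 1, since pipelining has no benefit).
    return [{**c, "num_stages": 1} for c in configs]


def select_configs(hip_version, cuda_available, configs):
    # hip_version mirrors torch.version.hip (a version string on ROCm
    # builds, None otherwise); cuda_available mirrors
    # torch.cuda.is_available(). Both must hold before touching the
    # ROCm path, so a CPU-only host with HIP installed skips it.
    if hip_version and cuda_available:
        return build_rocm_gemm_configs(configs)
    return configs


# ROCm build with a visible GPU: configs are rewritten.
print(select_configs("6.2", True, [{"num_stages": 2}]))
# ROCm build on a CPU-only host: configs pass through untouched.
print(select_configs("6.2", False, [{"num_stages": 2}]))
```

The same pattern guards both hunks below: the check for a ROCm *build* (`torch.version.hip`) is paired with a check that a device is actually *usable* (`torch.cuda.is_available()`).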
jataylo authored and pytorchmergebot committed Dec 19, 2024
1 parent c46cfc2 commit 6617257
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion torch/_inductor/kernel/conv.py
@@ -79,7 +79,7 @@ def conv3d_grid(n, c, d, h, w, meta):
)

     # On ROCm convert num_stages to 1 as pipelining provides no benefit
-    if torch.version.hip:
+    if torch.version.hip and torch.cuda.is_available():
         platform_configs = build_rocm_gemm_configs(platform_configs)


2 changes: 1 addition & 1 deletion torch/_inductor/kernel/mm_common.py
@@ -388,7 +388,7 @@ def filtered_configs(
)

     # On ROCm convert num_stages to improve performance
-    if torch.version.hip:
+    if torch.version.hip and torch.cuda.is_available():
         mm_platform_configs = build_rocm_gemm_configs(mm_platform_configs)
         extra_mm_platform_configs = build_rocm_gemm_configs(extra_mm_platform_configs)
         int8_platform_configs = build_rocm_gemm_configs(int8_platform_configs)
