
c_stdlib_version in conda_build_config.yaml ignored when cuda is used #1950

Open
minrk opened this issue Jun 14, 2024 · 1 comment
minrk commented Jun 14, 2024

Solution to issue cannot be found in the documentation.

  • I checked the documentation.

Issue

Packages that optionally use the CUDA compilers and specify c_stdlib_version: 2.17 # [linux] in conda_build_config.yaml are still getting builds with cuda_compiler=None, c_stdlib_version=2.12. I'm guessing this relates to the complex zip_keys for cuda and c_stdlib_version not handling conda_build_config at the highest priority. I'm not 100% certain it's CUDA-related, but I've seen it twice (openmpi and lammps); both use the CUDA compilers and have no-CUDA variants that still want 2.17.
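For reference, the kind of feedstock entry involved looks roughly like this (a sketch; the actual openmpi/lammps entries may differ slightly):

```yaml
# recipe/conda_build_config.yaml (sketch of the entry that gets ignored)
c_stdlib_version:   # [linux]
  - "2.17"          # [linux]
```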

This can be worked around in this specific case by specifying os_version: cos7 in conda-forge.yml, but presumably something is wrong in the rendering.
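The workaround mentioned above would look something like this in conda-forge.yml (a sketch; only the platforms the feedstock actually builds for need entries):

```yaml
# conda-forge.yml (sketch of the os_version workaround)
os_version:
  linux_64: cos7
```

Setting os_version to cos7 forces the CentOS 7 images/sysroot, which implies glibc 2.17, without touching the zip_keys machinery at all.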

Example: conda-forge/lammps-feedstock#198, which migrates from sysroot_linux-64 2.17 pinning to c_stdlib_version pinning, and the new pin has no effect.

Installed packages

conda                     24.3.0          py310hbe9552e_0    conda-forge
conda-build               24.3.0          py310hbe9552e_1    conda-forge
conda-forge-conda-plugins 0.1.dev28+gf2fa78e.d20240613          pypi_0    pypi
conda-forge-pinning       2023.10.29.22.14.35      hd8ed1ab_0    conda-forge
conda-index               0.4.0              pyhd8ed1ab_0    conda-forge
conda-libmamba-solver     23.11.1            pyhd8ed1ab_0    conda-forge
conda-lock                2.4.2              pyhd8ed1ab_0    conda-forge
conda-pack                0.7.1              pyhd8ed1ab_0    conda-forge
conda-package-handling    2.2.0              pyh38be061_0    conda-forge
conda-package-streaming   0.9.0              pyhd8ed1ab_0    conda-forge
conda-smithy              3.36.2          unix_pyh707e725_0    conda-forge
conda-tree                1.1.0              pyhd8ed1ab_2    conda-forge

Environment info

mamba version : 1.5.8
     active environment : None
            shell level : 0
       user config file : /Users/minrk/.condarc
 populated config files : /Users/minrk/conda/.condarc
                          /Users/minrk/.condarc
          conda version : 24.3.0
    conda-build version : 24.3.0
         python version : 3.10.13.final.0
                 solver : libmamba (default)
       virtual packages : __archspec=1=m1
                          __conda=24.3.0=0
                          __mpich=4.2.1=0
                          __osx=14.5=0
                          __unix=0=0
       base environment : /Users/minrk/conda  (writable)
      conda av data dir : /Users/minrk/conda/etc/conda
  conda av metadata url : None
           channel URLs : https://conda.anaconda.org/minrk/label/fenics-windows/osx-arm64
                          https://conda.anaconda.org/minrk/label/fenics-windows/noarch
                          https://conda.anaconda.org/conda-forge/osx-arm64
                          https://conda.anaconda.org/conda-forge/noarch
          package cache : /Users/minrk/conda/pkgs
                          /Users/minrk/.conda/pkgs
       envs directories : /Users/minrk/conda/envs
                          /Users/minrk/.conda/envs
               platform : osx-arm64
             user-agent : conda/24.3.0 requests/2.31.0 CPython/3.10.13 Darwin/23.5.0 OSX/14.5 solver/libmamba conda-libmamba-solver/23.11.1 libmambapy/1.5.8
                UID:GID : 501:20
             netrc file : /Users/minrk/.netrc
           offline mode : False
@jakirkham (Member) commented:
Likely less of an issue now that we are using GLIBC 2.17 by default

That said, the issue is that all associated zip_keys need to be defined together. These come from the conda-forge global pinning; copying them below for completeness:

zip_keys:
  # For CUDA, c_stdlib_version/cdt_name is zipped below with the compilers.
  -                             # [linux and os.environ.get("CF_CUDA_ENABLED", "False") != "True"]
    - c_stdlib_version          # [linux and os.environ.get("CF_CUDA_ENABLED", "False") != "True"]
    - cdt_name                  # [linux and os.environ.get("CF_CUDA_ENABLED", "False") != "True"]
  -                             # [unix]
    - c_compiler_version        # [unix]
    - cxx_compiler_version      # [unix]
    - fortran_compiler_version  # [unix]
    - c_stdlib_version          # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cdt_name                  # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cuda_compiler             # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cuda_compiler_version     # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - docker_image              # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM", "").startswith("linux-")]
  -                             # [win64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cuda_compiler             # [win64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cuda_compiler_version     # [win64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]

Agree that just defining os_version is an effective solution that bypasses this need

Ideally we would have some way to replace values for one key in zip_keys without needing to redefine the rest. Unfortunately we lack the tooling at the moment to do so (and this can be a bit hairy in some cases)
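Concretely, "all associated zip_keys need to be defined" means a feedstock override has to supply every key in the same zip group. For the non-CUDA linux case above, that group is c_stdlib_version plus cdt_name, so a working override might look like this (a sketch; cdt_name cos7 is assumed to be the CDT matching a glibc 2.17 sysroot):

```yaml
# recipe/conda_build_config.yaml — sketch: override the whole
# (c_stdlib_version, cdt_name) zip group, not just one key
c_stdlib_version:   # [linux]
  - "2.17"          # [linux]
cdt_name:           # [linux]
  - cos7            # [linux]
```

When CF_CUDA_ENABLED is set, c_stdlib_version instead sits in the larger zip with the compiler versions, cuda_compiler(_version), and docker_image, so the override would have to enumerate all of those in lockstep, which is why this approach gets hairy for CUDA feedstocks.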
