This repository has been archived by the owner on Sep 1, 2023. It is now read-only.

Proposal: Rename cuda-cccl-impl to cccl. #2

Closed
bdice opened this issue Aug 9, 2023 · 17 comments

Comments

@bdice
Contributor

bdice commented Aug 9, 2023

Background

Recently, CCCL has migrated to a new unified repo (https://github.com/NVIDIA/cccl). This is now the official home for Thrust/CUB/libcudacxx, and those libraries will be packaged/shipped as a single entity ("CCCL") from CCCL version 2.2.0 onwards. The first release (2.2.0) and compatibility policies for CCCL are being formalized by @jrhemstad right now.

From recent conversations around the CCCL 2.2.0 release, it is likely that most users will want to refer to CCCL by its own versions (e.g. 2.1.0) rather than by the CUDA Toolkit (CTK) versions (e.g. 12.0.90) that shipped that CCCL version (i.e. cuda-cccl-impl versions, not cuda-cccl versions, in the present naming scheme).

There is interest from CCCL maintainers/developers and users (including myself) in making this package, currently named cuda-cccl-impl, the official conda-forge package for users to find and install CCCL. However, the current name cuda-cccl-impl is unclear: it implies that this package is an "implementation" or otherwise contains internals and isn't meant to be the front door for CCCL users.

Proposal

Proposal: rename cuda-cccl-impl (this feedstock/package) to cccl.

Going forward, this package is intended to be a user-friendly way to install and use CCCL in a conda environment. The current name doesn't make it sound that way, and renaming to cccl would help.

The existing double-versioning scheme, where cuda-cccl (versioned like the CTK) depends on cuda-cccl-impl (versioned like CCCL), is still useful per the original design (previously discussed with most of those cc'd below), and cuda-cccl would need to be updated to depend on the new name cccl. Currently this should be easy to do, since only one version each of cuda-cccl and cuda-cccl-impl exists on conda-forge.

Additionally, this proposal would not introduce any conflicts with the nvidia channel CTK packages. No packages named cccl exist on any channel today. https://anaconda.org/search?q=cccl

Releases

I also propose a clarification of the versioning policies / release schedules that would be used by this package, which corresponds to the new monorepo and drafts about release policies for that repo. The cccl package would be tied to the releases at https://github.com/NVIDIA/cccl. New tags of that repo would result in new releases of cccl, even if that CCCL version has not been incorporated into a CTK release yet. This is aligned with the current practice of pulling cuda-cccl-impl files from the Thrust/CUB/libcudacxx repos, while the "target platform" packages (see below) are made from the CUDA Toolkit package tarballs.

The key results for users would be:

  • Users desiring the latest CCCL should install cccl
    • This is aligned with the CCCL guidance to "live at head", i.e. use the latest versions available
  • Users desiring a specific CCCL version X.Y.Z should install cccl=X.Y.Z
  • Users desiring a CCCL that was released with CUDA Toolkit version X.Y should install cuda-cccl=X.Y.*
    • CUDA >= 12.0 only, no backports are planned at this time
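In concrete terms, the outcomes above would translate into install commands like the following (package names per this proposal; the pinned versions are placeholders, not a statement of what has been published):

```shell
# Live at head: latest CCCL release from conda-forge (proposed package name)
conda install -c conda-forge cccl

# Pin a specific CCCL version X.Y.Z (2.2.0 used as an example)
conda install -c conda-forge "cccl=2.2.0"

# Get the CCCL that was released with CUDA Toolkit X.Y (12.0 used as an example)
conda install -c conda-forge "cuda-cccl=12.0.*"
```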

Alternatives

Alternative 1: cuda-cccl-impl could be just a metapackage that depends on cccl, and cccl would ship the actual CCCL contents.
Alternative 2: cccl could be just a metapackage that depends on cuda-cccl-impl, and cuda-cccl-impl would ship the actual CCCL contents.

I don't think Alternative 1 or 2 provides much real utility over the main Proposal above, but they could be evaluated if others feel differently.

Additional Context / "target platform" packages

The cuda-cccl-impl package ships its contents in include/{cub,cuda,nv,thrust}/ and aims to serve users building their packages with a CCCL that may be newer than the CCCL that ships as an internal component of the user's installed CTK (see also CCCL / CTK compatibility policy, draft in NVIDIA/cccl#291 at the time of writing).

There are also "target platform" packages like cuda-cccl_linux-64 which exist to serve the CUDA Toolkit's internal uses of CCCL, specifically packages required for cuda-cudart-dev_linux-64, which is in turn a dependency of cuda-nvcc_linux-64. The "target platform" packages are not affected by this proposal. Those "target platform" packages ship their contents in paths like targets/x86_64-linux/include/{cub,cuda,nv,thrust} and do not conflict with this package.

cc: @jakirkham @wmaxey @jrhemstad @manopapad @kkraus14 @robertmaynard

@jakirkham
Member

cc @adibbley

@jakirkham
Member

cc @vyasr

@kkraus14

I'm happy with any of the directions proposed. Thanks everyone for pushing this forward!

@robertmaynard

robertmaynard commented Aug 10, 2023

I agree with proposal 1 and don't see the value in the alternatives given how recent cuda-cccl-impl is and how much more involved the reasoning would be with the alternatives.

@vyasr

vyasr commented Aug 10, 2023

I concur, I don't see much benefit in adding extra metapackage layers of indirection.

@jrhemstad

All y'all are smarter than me, so I'll defer to you. My only goal is to make it as easy for people to use CCCL from whatever their preferred source is.

@bdice
Contributor Author

bdice commented Aug 18, 2023

@jakirkham It looks like renaming this requires a new staged recipe. https://conda-forge.org/docs/orga/guidelines.html#renaming-packages

Is filing a new staged recipe and archiving this feedstock the right process for moving forward? If so, I can help with that.

@jakirkham
Member

Yeah that sounds like a good plan. Please link that PR here for reference

@bdice
Contributor Author

bdice commented Aug 18, 2023

Staged recipe PR created: conda-forge/staged-recipes#23722

@jakirkham
Member

Thanks Bradley! 🙏

@jakirkham
Member

cc @adibbley

@jakirkham
Member

jakirkham commented Aug 18, 2023

Also, I think we want to add this to cuda-cccl-impl to prevent clobbering (as is being done in cccl):

```yaml
requirements:
  run_constrained:
    # Prevent clobbering with cccl
    - cccl <0a0
```
Edit: We can also add this to old packages via a repodata hotfix.
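For reference, such a hotfix would amount to adding a `constrains` entry (the repodata counterpart of `run_constrained`) to the already-published cuda-cccl-impl records. A rough sketch of the idea; the function name and structure here are illustrative, since the real patch would go through conda-forge's repodata-patches machinery:

```python
def add_cccl_constraint(repodata: dict) -> dict:
    """Add a run_constrained-style entry to existing cuda-cccl-impl builds.

    Illustrative only: actual hotfixes are applied via conda-forge's
    repodata-patches feedstock, not a standalone function like this.
    """
    for packages_key in ("packages", "packages.conda"):
        for record in repodata.get(packages_key, {}).values():
            if record.get("name") == "cuda-cccl-impl":
                constrains = record.setdefault("constrains", [])
                if "cccl <0a0" not in constrains:
                    # `cccl <0a0` matches no real version, so installing any
                    # cccl build alongside these packages becomes a conflict
                    constrains.append("cccl <0a0")
    return repodata
```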

@bdice
Contributor Author

bdice commented Aug 18, 2023

@jakirkham I think you only need run_constrained in one direction, right? I'd prefer to fix (and repodata patch) cuda-cccl-impl and not hold that constraint code in the new cccl, if that's viable. I tentatively accepted your change to the cccl recipe but would like to switch it to a fix/repodata patch for cuda-cccl-impl.

@bdice
Contributor Author

bdice commented Aug 18, 2023

^ That would be more similar to how we handled archival of thrust when it was a standalone feedstock. conda-forge/thrust-feedstock#19

@jakirkham
Member

Sure let's give that a try

@bdice
Contributor Author

bdice commented Aug 18, 2023

@jakirkham I opened #3 for now, and I'll open a repodata patch PR after the cccl staged recipe and #3 are merged.

@jakirkham
Member

The cccl feedstock is now live: https://github.com/conda-forge/cccl-feedstock
