
[FEA]: Update available cmake targets #203

Closed · 1 task done · jrhemstad opened this issue Jul 11, 2023 · 15 comments · Fixed by #244
Labels: infrastructure (Shared CMake, github, etc. infrastructure)

@jrhemstad (Collaborator)

Is this a duplicate?

Area

Infrastructure

Is your feature request related to a problem? Please describe.

As a user of CCCL + cmake, I would like to have a convenient cmake target to link against that makes all of the CCCL code available on my include path as well as set any potential compiler flags.

Describe the solution you'd like

I would like a cccl::cccl target that I can link against that provides Thrust/CUB/libcu++ to my application.

target_link_libraries(my_app PRIVATE cccl::cccl)

This should effectively provide the same experience as if I were using CCCL packaged with the CTK, meaning I can #include anything from <thrust/...>, <cub/...>, or <cuda/...>. Furthermore, the thrust::device backend should be CUDA and the thrust::host backend should be cpp. There is no expectation that any of the other backends will be available.

I would also like individual targets for Thrust/CUB/libcu++ exposed under the cccl:: namespace, e.g., cccl::thrust, cccl::cub, cccl::libcudacxx.

To make the common case easy and convenient, I'd like to not have to run any additional cmake functions to expose the targets beyond add_subdirectory (or whatever else is used by tools like CPM).
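
For illustration, a minimal consumer project under this proposal might look like the following. This is a sketch of the desired workflow rather than an existing API; the my_app name and project layout are hypothetical.

cmake_minimum_required(VERSION 3.21)
project(my_app LANGUAGES CXX CUDA)

# Pull in CCCL from a local checkout; CPM/FetchContent would work the same way.
add_subdirectory(cccl)

add_executable(my_app main.cu)

# A single target provides the Thrust/CUB/libcu++ headers and any needed flags.
target_link_libraries(my_app PRIVATE cccl::cccl)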

Describe alternatives you've considered

No response

Additional context

For CUB and libcu++, I expect this will not be an issue, but it will be a departure from the status quo for Thrust, where you currently have to invoke a cmake function to select which execution backends you want before the link targets are exposed.

I believe the overwhelmingly common case is that people use the CUDA backend for thrust::device and sequential cpp backend for thrust::host. We should optimize to make this common case easy and convenient with the cccl::cccl target.

We can still require invoking a cmake function in order to expose more customized targets where people wish to configure alternative backends for thrust::device/host.

@alliepiper (Collaborator)

My biggest concern is the use case of "I use Thrust with a CPU parallel backend and don't want to enable the CUDA language or the Thrust CUDA backend at all."

If we always provide a cccl::cccl/cccl::thrust target that defaults to CPP/CUDA host/device, we must enable CUDA and the Thrust CUDA backend, which can be expensive and problematic:

  1. What if the current environment doesn't have the CTK installed? Including CCCL will cause CMake to try to enable CUDA and search for a CUDA compiler, then fail when it's not found.
  2. On windows especially, enabling the CUDA language in CMake adds significant overhead to the configure step.

My initial thought to work around this is to still provide the subproject packages, so a user can pull in Thrust alone (find_package(Thrust) / add_subdirectory(cccl/thrust)) without any configured backends and selectively enable the ones they want (see the sketch below). But if they include CCCL, I think it's a reasonable expectation that the CTK will be present and desired, so a cccl::thrust target can be configured as described above.
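
For a CPU-only consumer, that flow could look roughly like this. A sketch assuming Thrust's documented thrust_create_target helper and a TBB device backend; it never enables the CUDA language.

# Pull in Thrust alone, with no backends configured yet.
add_subdirectory(cccl/thrust)

# Create an interface target with CPP as the host backend and TBB as the device backend.
thrust_create_target(ThrustTBB HOST CPP DEVICE TBB)

target_link_libraries(my_app PRIVATE ThrustTBB)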

@alliepiper (Collaborator)

The other concern is how we spell the target names. We're in a bit of a mess right now with capitalization conventions:

libcudacxx::libcudacxx vs. CUB::CUB vs. Thrust::Thrust

Introducing cccl::libcudacxx is fine, but cccl::cub and cccl::thrust would be different spellings than the native packages, which is somewhat ugly. Do we:

a. Keep the existing capitalization for the cccl:: targets (cccl::libcudacxx, cccl::Thrust, cccl::CUB)
b. Use lowercase for cccl:: targets and change the packages to also use all lowercase (I do not like this, unnecessarily disruptive for users)
c. Have mismatched capitalization between the cccl targets and the subproject targets (I do not like this, unnecessarily inconsistent)
d. Use lowercase targets for cccl:: and provide lowercase targets in the subprojects, while also keeping the mixed/upper case targets around as aliases, perhaps eventually deprecating them down the road (nice compromise, but feels hacky).

@miscco (Collaborator) commented Jul 11, 2023

I am seconding Allison that we should provide support for users to easily include individual projects on their own.

That said, I imagine the more common use case will be including a cccl::something target.

Given that we are introducing new targets, I would greatly appreciate it if we could settle on the common cmake practice and follow it for all repos. I am saying that without even knowing what the common cmake practice is. But having three different spellings is just crazy.

@jrhemstad (Collaborator, Author) commented Jul 11, 2023

If we always provide a cccl::cccl/cccl::thrust target that defaults to CPP/CUDA host/device, we must enable CUDA and the Thrust CUDA backend, which can be expensive and problematic:

Hm, yeah, that's more complicated than I originally thought about.

So what if instead of requiring invoking a cmake function to populate the targets, we make them a cmake option like:

option(CCCL_ENABLE_CUDA "Enable CUDA targets" ON)
option(CCCL_ENABLE_THRUST_OPENMP_BACKEND "Enable the OpenMP backend for Thrust" OFF)
option(CCCL_ENABLE_THRUST_TBB_BACKEND "Enable the TBB backend for Thrust" OFF)

So then, if I'm in a non-CUDA context, I can do:

CPMAddPackage(
    NAME CCCL
    GITHUB_REPOSITORY nvidia/cccl
    OPTIONS "CCCL_ENABLE_CUDA OFF" "CCCL_ENABLE_THRUST_OPENMP_BACKEND ON"
)

target_link_libraries(my_app PRIVATE cccl::cccl) # This won't link against the cccl::CUB target, nor will it require the CUDA language

a. Keep the existing capitalization for the cccl:: targets (cccl::libcudacxx, cccl::Thrust, cccl::CUB)

Yeah, do this. I didn't intentionally change the capitalization; I just wasn't sure what it was to begin with, and I don't have a dog in this race 🙂.

I am seconding Allison that we should provide support for users to easily include individual projects on their own.

That was always the intention. I'm trying to provide a target that makes using everything easier.

@alliepiper (Collaborator)

So what if instead of requiring invoking a cmake function to populate the targets, we make them a cmake option like:

What about CCCL_THRUST_DEFAULT_HOST_BACKEND and CCCL_THRUST_DEFAULT_DEVICE_BACKEND variables that can be set to CPP, OMP, TBB, or CUDA to control the configuration of the default cccl::Thrust target?
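
In that scheme, consumer code might look like the following sketch. The variable names are the ones proposed above; everything else (my_app, the cccl directory) is hypothetical.

set(CCCL_THRUST_DEFAULT_HOST_BACKEND CPP)
set(CCCL_THRUST_DEFAULT_DEVICE_BACKEND CUDA)
add_subdirectory(cccl)
target_link_libraries(my_app PRIVATE cccl::Thrust)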

Or, if the user would rather handle Thrust configuration without creating a cccl::Thrust target, they could always just do something like:

find_package(cccl COMPONENTS libcudacxx CUB) # cccl::cccl only provides libcu++ and CUB
find_package(Thrust) # Find thrust separately
thrust_configure_target(MyThrust ...) # Create a custom Thrust target
target_link_libraries(cccl::cccl INTERFACE MyThrust) # Add the custom Thrust target to cccl::cccl

@jrhemstad (Collaborator, Author)

What about CCCL_THRUST_DEFAULT_HOST_BACKEND and CCCL_THRUST_DEFAULT_DEVICE_BACKEND variables that can be set to CPP, OMP, TBB, or CUDA to control the configuration of the default cccl::Thrust target?

I like this even better than what I said.

Would it still make sense to have a CCCL_ENABLE_CUDA option (or something similar)?

I'm thinking of the use case similar to Thrust where someone may want to use libcu++ in a purely host-only environment where CUDA isn't available. How would we enable them to do that?

@alliepiper (Collaborator)

Would it still make sense to have a CCCL_ENABLE_CUDA option (or something similar)?

The libcu++ package is a lot more lightweight than Thrust, and it shouldn't have any issues -- we just generate a single libcudacxx::libcudacxx interface target with the include directory set.

Thrust is the only package that has any real complexity here, since libcudacxx and CUB do not have optional dependencies. So as long as the Thrust use case is covered, we should be fine; I don't think we'll need any extra options for those.

@alliepiper (Collaborator)

I am saying that without even knowing what the common cmake practice is. But having three different spellings is just crazy.

I believe that the CMake convention is to use the common project spelling. Looking at how the projects are spelled in the READMEs, etc., libcudacxx, Thrust, and CUB are what the projects are commonly called, so we used those spellings.

@alliepiper (Collaborator)

Which reminds me -- the CCCL project is named CCCL and discovered via find_package(CCCL), so we should name these targets CCCL::CCCL, CCCL::libcudacxx, CCCL::Thrust, and CCCL::CUB.
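
In consumer terms, that would look like the following sketch (my_app is hypothetical; the target names are the ones proposed in this comment):

find_package(CCCL REQUIRED)
target_link_libraries(my_app PRIVATE CCCL::CCCL)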

@jrhemstad (Collaborator, Author)

I don't think we'll need any extra options for those

Aren't libcu++ and CUB set up to enable the CUDA language such that if you attempt to use them in a context where CUDA isn't available then cmake will fail?

@miscco added the infrastructure label on Jul 12, 2023
@alliepiper (Collaborator)

Aren't libcu++ and CUB set up to enable the CUDA language such that if you attempt to use them in a context where CUDA isn't available then cmake will fail?

Nope -- they're both simple interface targets that only provide include directories. Thrust is much more complicated and has to search for dependencies depending on the backends, so that's the only project we need to be careful with.
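
For reference, a header-only interface target of that kind boils down to something like the following simplified sketch. This is not the actual package code, and CCCL_SOURCE_DIR is a hypothetical variable standing in for wherever the headers live.

# An interface target that only carries an include path; no languages get enabled.
add_library(_libcudacxx INTERFACE)
target_include_directories(_libcudacxx INTERFACE "${CCCL_SOURCE_DIR}/libcudacxx/include")
add_library(libcudacxx::libcudacxx ALIAS _libcudacxx)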

@alliepiper self-assigned this on Jul 12, 2023
@robertmaynard (Contributor)

I would prefer we investigate using find_package(CCCL COMPONENTS) to control thrust host/device selection.

  • find_package(CCCL) -> Default to the CUDA device backend, with OpenMP and TBB disabled
  • find_package(CCCL COMPONENTS THRUST_TBB) -> Use TBB as the device backend

If users need a more complex setup for the host backend, they can use the Thrust functions to do that.

@jrhemstad (Collaborator, Author)

I would prefer we investigate using find_package(CCCL COMPONENTS) to control thrust host/device selection.

How do you do that through CPM?

@robertmaynard (Contributor)

I would prefer we investigate using find_package(CCCL COMPONENTS) to control thrust host/device selection.

How do you do that through CPM?

CPMFindPackage has support for components via the FIND_ARGS parameter. Since CPMAddPackage doesn't use find_package at all, this would have no impact on that workflow.

Depending on the logic, the CCCL-config could map from the COMPONENTS syntax to the proposed variables (e.g., CCCL_ENABLE_CUDA). That would provide a nice interface both to users who use find_package and to those who want to use CCCL directly from source.
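
Concretely, that might look like this sketch, based on the FIND_ARGS parameter described above; treat the argument name as an assumption and check the CPM.cmake docs for the exact spelling in your version.

CPMFindPackage(
    NAME CCCL
    GITHUB_REPOSITORY nvidia/cccl
    FIND_ARGS COMPONENTS THRUST_TBB
)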

@alliepiper (Collaborator)

I would prefer we investigate using find_package(CCCL COMPONENTS) to control thrust host/device selection.

I have thought about this approach as well. We do use find_package(CCCL COMPONENTS Thrust CUB libcudacxx) to select subprojects, and we also support find_package(Thrust COMPONENTS TBB OMP CPP CUDA) to pre-load the backends. However, this Thrust component syntax still requires calling thrust_configure_target(...) to build a target with the backends mapped to host/device.

To me, find_package(CCCL COMPONENTS Thrust_TBB) is ambiguous -- is this meant to be host=cpp device=tbb, or host=tbb device=cuda? Assuming either would also be confusingly different from how the Thrust package works when used directly. We could introduce components like Thrust_CPP_TBB, but I think we can do better.

Thrust also supports this feature that we could use for CCCL: https://github.com/NVIDIA/cccl/blob/main/thrust/thrust/cmake/README.md#configure-target-from-cache-options

# In CCCL package
find_package(Thrust)
# The CCCL::Thrust target will be configured from advanced cache options:
thrust_configure_target(CCCL::Thrust FROM_OPTIONS
  HOST_OPTION CCCL_THRUST_HOST_SYSTEM
  DEVICE_OPTION CCCL_THRUST_DEVICE_SYSTEM
  ADVANCED
)
# User code:
set(CCCL_THRUST_HOST_SYSTEM TBB)
set(CCCL_THRUST_DEVICE_SYSTEM CUDA)
find_package(CCCL)
target_link_libraries(foodledoo PRIVATE CCCL::Thrust)

rapids-bot (bot) pushed a commit that referenced this issue on Jul 19, 2023
The Thrust target is configured via cache options as discussed in #203.

Closes #203.
jrhemstad pushed a commit that referenced this issue on Aug 7, 2023
* Fix indention.

* Add CCCL::CCCL, CCCL::libcudacxx, CCCL::CUB, and CCCL::Thrust targets.

The Thrust target is configured via cache options as discussed in #203.

Closes #203.

* Mark the CCCL::Thrust options as advanced.

* Guard against rebuilding pre-existing targets.

* Make the imported `CCCL::CCCL` target global in our CMake package.

This ensures that users will be able to use it from their builds.
jrhemstad added a commit that referenced this issue on Aug 8, 2023
* Add initial skeleton for example cmake project.

* Add support for nvcc-specific matrix.

* Remove erroneous runs-on

* Fix path to compute-matrix script.

* debug

* debug.

* debug.

* lol, use hyphen not underscore.

* Get absolute path for matrix file.

* Fix reference to matrix_file var.

* Remove extraneous matrix_query

* Use composite action instead of reusable workflow for matrix.

* Fix quotes around composite action inputs.

* Need to checkout repo to get local composite action.

* Fix quotes.

* Add shell to composite action.

* Specify shell in step.

* Correctly use inputs for composite action.

* debug

* Fix expansion.

* Remove debug.

* Remove debug.

* Fix indention.

* Add CCCL::CCCL, CCCL::libcudacxx, CCCL::CUB, and CCCL::Thrust targets.

The Thrust target is configured via cache options as discussed in #203.

Closes #203.

* (Not working) Update to use new targets.

* Make CPM point to the PR branch instead.

* Escape quotes in jq query string.

* Use underscore instead of hyphen to avoid needing quotes in query.

* Use alternative syntax for jq query.

* Missing quote.

* Add post processing to set nvcc-specific matrix outputs.

* Quotes around matrix output.

* Single quotes.

* Make CCCL::CCCL target GLOBAL.

* Remove old compute-matrix reusable workflow.

* Fix usage message.

* Update devcontainer script to reference nvcc field.

* Removed unused GITHUB_OUTPUT var.

* EOF

* Fix quotes.

* Add more complete example file.

* Add top level cmake file for building/running all the examples.

* Enable examples to run via CI.

* Add missing newlines.

* Enable overriding CPM source for CI.

* Add top level job for running examples.

* Add devcontainer_version field to matrix file.

* Update make_devcontainers script to read devcontainer_version.

* Add job to get devcontainer_version and pass to dependent jobs.

* Add and use input for devcontainer_version.

* Avoid unnecessarily using compute-matrix action.

* Use yaml alias/anchor for cuda versions.

* [skip tests] Update devcontainer version job name.

* Update to CUDA 12.2 and devcontainer 23.08.

* Update devcontainer.json image versions and update to CUDA 12.2.

* Add more docs.

* Mark the CCCL::Thrust options as advanced.

* Guard against rebuilding pre-existing targets.

* Make the imported `CCCL::CCCL` target global in our CMake package.

This ensures that users will be able to use it from their builds.

* Include cmake/CCCLUtilities

* Don't use magic number.

* Update CPM to point to cccl/main.

* Update README.

* Add some more comments to the cmakelists.txt.

* Update examples/README

* Get devcontainer version from prior job.

* Hard code example job to only build/run example_project.

* Set error handling options in run-as-coder shell.

* Add license.

* Fix spacing.

* Add license headers.

* [skip-tests]

* Remove accidental submodule?

* Split cccl tests from examples in naming schemes.

Move the CCCL example targets/tests to `cccl.example.*`

* Remove unnecessary include.

This file is included in the top-level CMakeLists.txt.

* Add options to allow configuration of example CPM repo.

* Add CMakePresets.json with recipe for testing examples.

* Update PR workflow to run examples using presets and new options.

* Specify the CUDA compiler for the example config step.

---------

Co-authored-by: Allison Vacanti <alliepiper16@gmail.com>
Co-authored-by: Allison Vacanti <users.noreply.github.com>