[FEA]: Update available cmake targets #203
My biggest concern is the use case of "I use Thrust with a CPU parallel backend and don't want to enable the CUDA language or the Thrust CUDA backend at all." If we always provide a …
My initial thought to work around this is to still provide the subproject packages, so a user can just pull in Thrust (…)
The other concern is how we spell the target names. We're in a bit of a mess right now with capitalization conventions:
Introducing a …

a. Keep the existing capitalization for the …
I am seconding Allison that we should provide support for users to easily include individual projects on their own. That said, I imagine the more common use case to be an include-everything one. Given that we are introducing new targets, I would greatly appreciate it if we could just settle on the common CMake practice and follow that for all repos. I am saying that not even knowing what the common CMake practice is. But having three different spellings is just crazy.
Hm, yeah, that's more complicated than I originally thought. So what if, instead of requiring invoking a cmake function to populate the targets, we make them a cmake option like:
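A sketch of what such cache options might look like (the option names here are illustrative, not a settled spelling):

```cmake
# Illustrative sketch only: these option/variable names are hypothetical,
# not the final API. The idea is that backends are chosen via the cache
# rather than via a function call.
option(CCCL_ENABLE_THRUST "Create the Thrust targets when CCCL is added" ON)
set(CCCL_THRUST_DEVICE_SYSTEM CUDA CACHE STRING "Device backend for the Thrust target")
set(CCCL_THRUST_HOST_SYSTEM CPP CACHE STRING "Host backend for the Thrust target")
```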
So then, if I'm in a non-CUDA context, I can do:
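Something along these lines (again with hypothetical option names) could select a CPU-only configuration before pulling CCCL in:

```cmake
# Hypothetical: choose a CPU-only Thrust configuration before adding CCCL,
# so the CUDA language never needs to be enabled.
set(CCCL_THRUST_DEVICE_SYSTEM TBB CACHE STRING "" FORCE)
set(CCCL_THRUST_HOST_SYSTEM CPP CACHE STRING "" FORCE)
add_subdirectory(cccl)  # or the equivalent CPM call
```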
Yeah, do this. I didn't intentionally change the capitalization, I just wasn't sure what it was to begin with and I don't have a dog in this race 🙂.
That was always the intention. I'm trying to provide a target that makes using everything easier.
What about …? Or, if the user would rather handle Thrust configuration without creating a …
I like this even better than what I said. Would it still make sense to have a …? I'm thinking of a use case similar to Thrust's, where someone may want to use libcu++ in a purely host-only environment where CUDA isn't available. How would we enable them to do that?
The libcu++ package is a lot more lightweight than Thrust, and it shouldn't have any issues -- we just generate a single … Thrust is the only package that has any real complexity here, since libcudacxx and CUB do not have optional dependencies. So as long as the Thrust use case is covered, we should be fine; I don't think we'll need any extra options for those.
I believe that the CMake convention is to use the common project spelling. Looking at how the projects are spelled in the READMEs, etc.: …
Which reminds me -- the CCCL project is named |
Aren't libcu++ and CUB set up to enable the CUDA language, such that if you attempt to use them in a context where CUDA isn't available, cmake will fail?
Nope -- they're both simple interface targets that only provide include directories. Thrust is much more complicated and has to search for dependencies depending on the backends, so that's the only project we need to be careful with. |
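As a rough sketch of what such a header-only interface target looks like in CMake (this is illustrative shape, not the actual CCCL build code; names are hypothetical):

```cmake
# Minimal header-only interface target: no compiled sources, no CUDA
# language requirement -- consumers just inherit the include path.
add_library(_cccl_cub INTERFACE)
add_library(CCCL::CUB ALIAS _cccl_cub)
target_include_directories(_cccl_cub INTERFACE "${CMAKE_CURRENT_SOURCE_DIR}/cub")
```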
I would prefer we investigate using
If users need a more complex setup for the host backend, they can use the Thrust functions to do that. |
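Thrust's CMake package documents a `thrust_create_target` function for this; a usage sketch (the target and application names are chosen here for illustration -- check Thrust's cmake README for the exact signature):

```cmake
# Create a custom Thrust target with a TBB device backend.
# "ThrustTBB" and "my_app" are illustrative names.
find_package(Thrust REQUIRED CONFIG)
thrust_create_target(ThrustTBB HOST CPP DEVICE TBB)
target_link_libraries(my_app PRIVATE ThrustTBB)
```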
How do you do that through CPM? |
Depending on the logic the |
I have thought about this approach as well. We do use … To me, … Thrust also supports a feature that we could use for CCCL: https://github.com/NVIDIA/cccl/blob/main/thrust/thrust/cmake/README.md#configure-target-from-cache-options
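Per that README, Thrust can populate a target from cache options instead of explicit arguments; a sketch of the pattern (see the linked doc for the authoritative option names):

```cmake
# Sketch of Thrust's "configure target from cache options" feature:
# the host/device systems become user-settable cache variables
# (e.g. via -DTHRUST_DEVICE_SYSTEM=TBB on the cmake command line),
# and the target is built from whatever the cache says.
find_package(Thrust REQUIRED CONFIG)
thrust_create_target(Thrust FROM_OPTIONS)
```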
* Fix indention.
* Add CCCL::CCCL, CCCL::libcudacxx, CCCL::CUB, and CCCL::Thrust targets. The Thrust target is configured via cache options as discussed in #203. Closes #203.
* Mark the CCCL::Thrust options as advanced.
* Guard against rebuilding pre-existing targets.
* Make the imported `CCCL::CCCL` target global in our CMake package. This ensures that users will be able to use it from their builds.
* Add initial skeleton for example cmake project.
* Add support for nvcc-specific matrix.
* Remove errnoeous runs-on
* Fix path to compute-matrix script.
* debug
* debug.
* debug.
* lol, use hyphen not underscore.
* Get absolute path for matrix file.
* Fix reference to matrix_file var.
* Remove extraneous matrix_query
* Use composite action instead of reusable workflow for matrix.
* Fix quotes around composite action inputs.
* Need to checkout repo to get local composite action.
* Fix quotes.
* Add shell to composite action.
* Specify shell in step.
* Correctly use inputs for composite action.
* debug
* Fix expansion.
* Remove debug.
* Remove debug.
* Fix indention.
* Add CCCL::CCCL, CCCL::libcudacxx, CCCL::CUB, and CCCL::Thrust targets. The Thrust target is configured via cache options as discussed in #203.
* (Not working) Update to use new targets.
* Make CPM point to the PR branch instead.
* Escape quotes in jq query string.
* Use underscore instead of hyphen to avoid needing quotes in query.
* Use alternative syntax for jq query.
* Missing quote.
* Add post processing to set nvcc-specific matrix outputs.
* Quotes around matrix output.
* Single quotes.
* Make CCCL::CCCL target GLOBAL.
* Remove old compute-matrix reusable workflow.
* Fix usage message.
* Update devcontainer script to reference nvcc field.
* Removed unused GITHUB_OUTPUT var.
* EOF
* Fix quotes.
* Add more complete example file.
* Add top level cmake file for building/running all the examples.
* Enable examples to run via CI.
* Add missing newlines.
* Enable overriding CPM source for CI.
* Add top level job for running examples.
* Add devcontainer_version field to matrix file.
* Update make_devcontainers script to read devcontainer_version.
* Add job to get devcontainer_version and pass to dependent jobs.
* Add and use input for devcontainer_version.
* Avoid unnecessarily using compute-matrix action.
* Use yaml alias/anchor for cuda versions.
* [skip tests] Update devcontainer version job name.
* Update to CUDA 12.2 and devcontainer 23.08.
* Update devcontainer.json image versions and update to CUDA 12.2.
* Add more docs.
* Mark the CCCL::Thrust options as advanced.
* Guard against rebuilding pre-existing targets.
* Make the imported `CCCL::CCCL` target global in our CMake package. This ensures that users will be able to use it from their builds.
* Include cmake/CCCLUtilities
* Don't use magic number.
* Update CPM to point to cccl/main.
* Update README.
* Add some more comments to the cmakelists.txt.
* Update examples/README
* Get devcontainer version from prior job.
* Hard code example job to only build/run example_project.
* Set error handling options in run-as-coder shell.
* Add license.
* Fix spacing.
* Add license headers.
* [skip-tests]
* Remove accidental submodule?
* Split cccl tests from examples in naming schemes. Move the CCCL example targets/tests to `cccl.example.*`
* Remove unnecessary include. This file is included in the top-level CMakeLists.txt.
* Add options to allow configuration of example CPM repo.
* Add CMakePresets.txt.json with recipe for testing examples.
* Update PR workflow to run examples using presets and new options.
* Specify the CUDA compiler for the example config step.

---------

Co-authored-by: Allison Vacanti <alliepiper16@gmail.com>
Co-authored-by: Allison Vacanti <users.noreply.github.com>
Is this a duplicate?
Area
Infrastructure
Is your feature request related to a problem? Please describe.
As a user of CCCL + cmake, I would like a convenient cmake target to link against that makes all of the CCCL code available on my include path and sets any necessary compiler flags.
Describe the solution you'd like
I would like a `cccl::cccl` target that I can link against that provides Thrust/CUB/libcu++ to my application. This should effectively provide the same experience as if I were using CCCL packaged with the CTK, meaning I can `#include` anything from `<thrust/...>`, `<cub/...>`, or `<cuda/...>`. Furthermore, the `thrust::device` backend should be `CUDA` and the `thrust::host` backend should be `cpp`. There is no expectation that any of the other backends will be available.

I would also like individual targets for Thrust/CUB/libcu++ exposed under the `cccl::` namespace, e.g., `cccl::thrust`, `cccl::cub`, `cccl::libcudacxx`.

In order to make the common case easy and convenient, I'd like to not have to run any additional cmake functions to expose the targets beyond `add_subdirectory` (or whatever else is used by things like CPM).

Describe alternatives you've considered
No response
Additional context
For CUB and libcu++, I expect this will not be an issue, but it will be a departure from the status quo for Thrust, where you currently have to invoke a cmake function to select which execution backends you want before the link targets are exposed.

I believe the overwhelmingly common case is that people use the `CUDA` backend for `thrust::device` and the sequential `cpp` backend for `thrust::host`. We should optimize to make this common case easy and convenient with the `cccl::cccl` target.

We can still require invoking a cmake function to expose more customized targets where people wish to configure alternative backends for `thrust::device`/`thrust::host`.
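The common case described in this request, sketched with CPM (hypothetical until the targets exist; the final target spelling was still under discussion in this thread):

```cmake
# Hypothetical end-state usage: fetch CCCL and link the umbrella target.
# Assumes CPM.cmake has already been included; "my_app" is illustrative.
CPMAddPackage("gh:NVIDIA/cccl#main")
target_link_libraries(my_app PRIVATE cccl::cccl)
```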