
Ensure that headers in `<cuda/*>` can be built with a C++ only compiler #3472

Merged
miscco merged 1 commit into NVIDIA:main from make_stream_ref_available_on_host on Jan 22, 2025

Conversation

@miscco (Collaborator) commented on Jan 22, 2025

We were never really clear about what we support, so be nice and make sure those headers can also be included without a CUDA compiler.

Fixes #3372
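
For illustration, a minimal host-only translation unit of the kind this change is meant to support: it includes `<cuda/*>` headers and is compiled with a plain C++ compiler rather than nvcc. The file name and compile command below are assumptions for the sketch, not part of this PR.

```cpp
// check_host_only.cpp -- sketch of building against <cuda/*> headers without a CUDA compiler.
// Assumed invocation (include path is a placeholder):
//   g++ -std=c++17 -I<cccl>/libcudacxx/include -c check_host_only.cpp
#include <cuda/std/version> // version macros, no CUDA toolkit calls required
#include <cuda/stream_ref>  // the header from the linked issue

int main()
{
  // The point is simply that this translation unit compiles with a C++-only compiler.
  return 0;
}
```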

@miscco requested review from a team as code owners on January 22, 2025 09:01
@miscco force-pushed the make_stream_ref_available_on_host branch from 77a2f26 to ee84fca on January 22, 2025 09:15
@miscco force-pushed the make_stream_ref_available_on_host branch from ee84fca to c364017 on January 22, 2025 09:23

🟩 CI finished in 2h 02m: Pass: 100%/135 | Total: 2d 10h | Avg: 25m 52s | Max: 1h 20m | Hits: 393%/23303
  • 🟩 cub: Pass: 100%/38 | Total: 1d 04h | Avg: 45m 26s | Max: 1h 18m | Hits: 146%/3540

    🟩 cpu
      🟩 amd64              Pass: 100%/36  | Total:  1d 03h | Avg: 45m 24s | Max:  1h 18m | Hits: 146%/3540  
      🟩 arm64              Pass: 100%/2   | Total:  1h 32m | Avg: 46m 09s | Max: 47m 27s
    🟩 ctk
      🟩 12.0               Pass: 100%/5   | Total:  4h 06m | Avg: 49m 21s | Max:  1h 04m | Hits: 148%/885   
      🟩 12.5               Pass: 100%/2   | Total:  2h 14m | Avg:  1h 07m | Max:  1h 08m
      🟩 12.6               Pass: 100%/31  | Total: 22h 25m | Avg: 43m 24s | Max:  1h 18m | Hits: 145%/2655  
    🟩 cudacxx
      🟩 ClangCUDA18        Pass: 100%/2   | Total:  1h 49m | Avg: 54m 33s | Max: 57m 48s
      🟩 nvcc12.0           Pass: 100%/5   | Total:  4h 06m | Avg: 49m 21s | Max:  1h 04m | Hits: 148%/885   
      🟩 nvcc12.5           Pass: 100%/2   | Total:  2h 14m | Avg:  1h 07m | Max:  1h 08m
      🟩 nvcc12.6           Pass: 100%/29  | Total: 20h 36m | Avg: 42m 38s | Max:  1h 18m | Hits: 145%/2655  
    🟩 cudacxx_family
      🟩 ClangCUDA          Pass: 100%/2   | Total:  1h 49m | Avg: 54m 33s | Max: 57m 48s
      🟩 nvcc               Pass: 100%/36  | Total:  1d 02h | Avg: 44m 56s | Max:  1h 18m | Hits: 146%/3540  
    🟩 cxx
      🟩 Clang14            Pass: 100%/4   | Total:  3h 08m | Avg: 47m 05s | Max: 51m 23s
      🟩 Clang15            Pass: 100%/1   | Total: 47m 45s | Avg: 47m 45s | Max: 47m 45s
      🟩 Clang16            Pass: 100%/1   | Total: 46m 03s | Avg: 46m 03s | Max: 46m 03s
      🟩 Clang17            Pass: 100%/1   | Total: 51m 48s | Avg: 51m 48s | Max: 51m 48s
      🟩 Clang18            Pass: 100%/7   | Total:  4h 59m | Avg: 42m 48s | Max: 57m 48s
      🟩 GCC7               Pass: 100%/2   | Total:  1h 31m | Avg: 45m 43s | Max: 46m 09s
      🟩 GCC8               Pass: 100%/1   | Total: 50m 49s | Avg: 50m 49s | Max: 50m 49s
      🟩 GCC9               Pass: 100%/2   | Total:  1h 33m | Avg: 46m 56s | Max: 47m 55s
      🟩 GCC10              Pass: 100%/1   | Total: 54m 14s | Avg: 54m 14s | Max: 54m 14s
      🟩 GCC11              Pass: 100%/1   | Total: 48m 00s | Avg: 48m 00s | Max: 48m 00s
      🟩 GCC12              Pass: 100%/3   | Total:  1h 26m | Avg: 28m 47s | Max: 50m 25s
      🟩 GCC13              Pass: 100%/8   | Total:  4h 13m | Avg: 31m 44s | Max: 50m 42s
      🟩 MSVC14.29          Pass: 100%/2   | Total:  2h 12m | Avg:  1h 06m | Max:  1h 08m | Hits: 147%/1770  
      🟩 MSVC14.39          Pass: 100%/2   | Total:  2h 27m | Avg:  1h 13m | Max:  1h 18m | Hits: 145%/1770  
      🟩 NVHPC24.7          Pass: 100%/2   | Total:  2h 14m | Avg:  1h 07m | Max:  1h 08m
    🟩 cxx_family
      🟩 Clang              Pass: 100%/14  | Total: 10h 33m | Avg: 45m 15s | Max: 57m 48s
      🟩 GCC                Pass: 100%/18  | Total: 11h 18m | Avg: 37m 42s | Max: 54m 14s
      🟩 MSVC               Pass: 100%/4   | Total:  4h 39m | Avg:  1h 09m | Max:  1h 18m | Hits: 146%/3540  
      🟩 NVHPC              Pass: 100%/2   | Total:  2h 14m | Avg:  1h 07m | Max:  1h 08m
    🟩 gpu
      🟩 h100               Pass: 100%/2   | Total: 35m 57s | Avg: 17m 58s | Max: 19m 21s
      🟩 v100               Pass: 100%/36  | Total:  1d 04h | Avg: 46m 58s | Max:  1h 18m | Hits: 146%/3540  
    🟩 jobs
      🟩 Build              Pass: 100%/31  | Total:  1d 02h | Avg: 50m 46s | Max:  1h 18m | Hits: 146%/3540  
      🟩 DeviceLaunch       Pass: 100%/1   | Total: 20m 20s | Avg: 20m 20s | Max: 20m 20s
      🟩 GraphCapture       Pass: 100%/1   | Total: 18m 43s | Avg: 18m 43s | Max: 18m 43s
      🟩 HostLaunch         Pass: 100%/3   | Total:  1h 06m | Avg: 22m 16s | Max: 25m 30s
      🟩 TestGPU            Pass: 100%/2   | Total: 46m 49s | Avg: 23m 24s | Max: 24m 49s
    🟩 sm
      🟩 90                 Pass: 100%/2   | Total: 35m 57s | Avg: 17m 58s | Max: 19m 21s
      🟩 90a                Pass: 100%/1   | Total: 16m 50s | Avg: 16m 50s | Max: 16m 50s
    🟩 std
      🟩 17                 Pass: 100%/14  | Total: 12h 38m | Avg: 54m 12s | Max:  1h 08m | Hits: 147%/2655  
      🟩 20                 Pass: 100%/24  | Total: 16h 07m | Avg: 40m 19s | Max:  1h 18m | Hits: 143%/885   
    
  • 🟩 libcudacxx: Pass: 100%/37 | Total: 6h 45m | Avg: 10m 58s | Max: 33m 22s | Hits: 676%/10061

    🟩 cpu
      🟩 amd64              Pass: 100%/35  | Total:  6h 38m | Avg: 11m 22s | Max: 33m 22s | Hits: 676%/10061 
      🟩 arm64              Pass: 100%/2   | Total:  7m 28s | Avg:  3m 44s | Max:  3m 58s
    🟩 ctk
      🟩 12.0               Pass: 100%/5   | Total: 41m 44s | Avg:  8m 20s | Max: 21m 47s | Hits: 677%/2470  
      🟩 12.5               Pass: 100%/2   | Total: 42m 05s | Avg: 21m 02s | Max: 33m 22s
      🟩 12.6               Pass: 100%/30  | Total:  5h 22m | Avg: 10m 44s | Max: 32m 28s | Hits: 676%/7591  
    🟩 cudacxx
      🟩 ClangCUDA18        Pass: 100%/4   | Total:  1h 06m | Avg: 16m 36s | Max: 21m 49s
      🟩 nvcc12.0           Pass: 100%/5   | Total: 41m 44s | Avg:  8m 20s | Max: 21m 47s | Hits: 677%/2470  
      🟩 nvcc12.5           Pass: 100%/2   | Total: 42m 05s | Avg: 21m 02s | Max: 33m 22s
      🟩 nvcc12.6           Pass: 100%/26  | Total:  4h 15m | Avg:  9m 49s | Max: 32m 28s | Hits: 676%/7591  
    🟩 cudacxx_family
      🟩 ClangCUDA          Pass: 100%/4   | Total:  1h 06m | Avg: 16m 36s | Max: 21m 49s
      🟩 nvcc               Pass: 100%/33  | Total:  5h 39m | Avg: 10m 17s | Max: 33m 22s | Hits: 676%/10061 
    🟩 cxx
      🟩 Clang14            Pass: 100%/4   | Total: 21m 53s | Avg:  5m 28s | Max:  7m 30s
      🟩 Clang15            Pass: 100%/1   | Total:  4m 25s | Avg:  4m 25s | Max:  4m 25s
      🟩 Clang16            Pass: 100%/1   | Total:  5m 01s | Avg:  5m 01s | Max:  5m 01s
      🟩 Clang17            Pass: 100%/1   | Total:  7m 40s | Avg:  7m 40s | Max:  7m 40s
      🟩 Clang18            Pass: 100%/8   | Total:  1h 44m | Avg: 13m 05s | Max: 24m 17s
      🟩 GCC7               Pass: 100%/2   | Total:  6m 57s | Avg:  3m 28s | Max:  3m 39s
      🟩 GCC8               Pass: 100%/1   | Total:  3m 39s | Avg:  3m 39s | Max:  3m 39s
      🟩 GCC9               Pass: 100%/2   | Total:  7m 31s | Avg:  3m 45s | Max:  3m 55s
      🟩 GCC10              Pass: 100%/1   | Total:  4m 05s | Avg:  4m 05s | Max:  4m 05s
      🟩 GCC11              Pass: 100%/1   | Total:  4m 04s | Avg:  4m 04s | Max:  4m 04s
      🟩 GCC12              Pass: 100%/1   | Total:  4m 15s | Avg:  4m 15s | Max:  4m 15s
      🟩 GCC13              Pass: 100%/8   | Total:  1h 28m | Avg: 11m 03s | Max: 32m 28s
      🟩 MSVC14.29          Pass: 100%/2   | Total: 46m 18s | Avg: 23m 09s | Max: 24m 31s | Hits: 677%/4950  
      🟩 MSVC14.39          Pass: 100%/2   | Total: 54m 50s | Avg: 27m 25s | Max: 28m 16s | Hits: 676%/5111  
      🟩 NVHPC24.7          Pass: 100%/2   | Total: 42m 05s | Avg: 21m 02s | Max: 33m 22s
    🟩 cxx_family
      🟩 Clang              Pass: 100%/15  | Total:  2h 23m | Avg:  9m 34s | Max: 24m 17s
      🟩 GCC                Pass: 100%/16  | Total:  1h 58m | Avg:  7m 25s | Max: 32m 28s
      🟩 MSVC               Pass: 100%/4   | Total:  1h 41m | Avg: 25m 17s | Max: 28m 16s | Hits: 676%/10061 
      🟩 NVHPC              Pass: 100%/2   | Total: 42m 05s | Avg: 21m 02s | Max: 33m 22s
    🟩 gpu
      🟩 v100               Pass: 100%/37  | Total:  6h 45m | Avg: 10m 58s | Max: 33m 22s | Hits: 676%/10061 
    🟩 jobs
      🟩 Build              Pass: 100%/32  | Total:  5h 09m | Avg:  9m 40s | Max: 33m 22s | Hits: 676%/10061 
      🟩 NVRTC              Pass: 100%/2   | Total: 54m 25s | Avg: 27m 12s | Max: 32m 28s
      🟩 Test               Pass: 100%/2   | Total: 39m 54s | Avg: 19m 57s | Max: 24m 17s
      🟩 VerifyCodegen      Pass: 100%/1   | Total:  2m 04s | Avg:  2m 04s | Max:  2m 04s
    🟩 sm
      🟩 90                 Pass: 100%/1   | Total: 12m 25s | Avg: 12m 25s | Max: 12m 25s
      🟩 90a                Pass: 100%/2   | Total: 17m 23s | Avg:  8m 41s | Max: 13m 09s
    🟩 std
      🟩 17                 Pass: 100%/15  | Total:  2h 40m | Avg: 10m 40s | Max: 26m 34s | Hits: 676%/7430  
      🟩 20                 Pass: 100%/21  | Total:  4h 03m | Avg: 11m 36s | Max: 33m 22s | Hits: 675%/2631  
    
  • 🟩 thrust: Pass: 100%/37 | Total: 19h 56m | Avg: 32m 20s | Max: 1h 20m | Hits: 181%/9180

    🟩 cmake_options
      🟩 -DTHRUST_DISPATCH_TYPE=Force32bit Pass: 100%/2   | Total: 39m 18s | Avg: 19m 39s | Max: 22m 43s
    🟩 cpu
      🟩 amd64              Pass: 100%/35  | Total: 19h 08m | Avg: 32m 48s | Max:  1h 20m | Hits: 181%/9180  
      🟩 arm64              Pass: 100%/2   | Total: 47m 55s | Avg: 23m 57s | Max: 25m 20s
    🟩 ctk
      🟩 12.0               Pass: 100%/5   | Total:  3h 02m | Avg: 36m 35s | Max:  1h 09m | Hits: 141%/1836  
      🟩 12.5               Pass: 100%/2   | Total:  2h 17m | Avg:  1h 08m | Max:  1h 10m
      🟩 12.6               Pass: 100%/30  | Total: 14h 35m | Avg: 29m 11s | Max:  1h 20m | Hits: 190%/7344  
    🟩 cudacxx
      🟩 ClangCUDA18        Pass: 100%/2   | Total: 47m 56s | Avg: 23m 58s | Max: 24m 09s
      🟩 nvcc12.0           Pass: 100%/5   | Total:  3h 02m | Avg: 36m 35s | Max:  1h 09m | Hits: 141%/1836  
      🟩 nvcc12.5           Pass: 100%/2   | Total:  2h 17m | Avg:  1h 08m | Max:  1h 10m
      🟩 nvcc12.6           Pass: 100%/28  | Total: 13h 47m | Avg: 29m 33s | Max:  1h 20m | Hits: 190%/7344  
    🟩 cudacxx_family
      🟩 ClangCUDA          Pass: 100%/2   | Total: 47m 56s | Avg: 23m 58s | Max: 24m 09s
      🟩 nvcc               Pass: 100%/35  | Total: 19h 08m | Avg: 32m 48s | Max:  1h 20m | Hits: 181%/9180  
    🟩 cxx
      🟩 Clang14            Pass: 100%/4   | Total:  1h 49m | Avg: 27m 26s | Max: 28m 03s
      🟩 Clang15            Pass: 100%/1   | Total: 28m 32s | Avg: 28m 32s | Max: 28m 32s
      🟩 Clang16            Pass: 100%/1   | Total: 25m 52s | Avg: 25m 52s | Max: 25m 52s
      🟩 Clang17            Pass: 100%/1   | Total: 26m 48s | Avg: 26m 48s | Max: 26m 48s
      🟩 Clang18            Pass: 100%/7   | Total:  2h 29m | Avg: 21m 21s | Max: 29m 17s
      🟩 GCC7               Pass: 100%/2   | Total: 56m 04s | Avg: 28m 02s | Max: 28m 31s
      🟩 GCC8               Pass: 100%/1   | Total: 29m 22s | Avg: 29m 22s | Max: 29m 22s
      🟩 GCC9               Pass: 100%/2   | Total:  1h 00m | Avg: 30m 13s | Max: 30m 13s
      🟩 GCC10              Pass: 100%/1   | Total: 29m 00s | Avg: 29m 00s | Max: 29m 00s
      🟩 GCC11              Pass: 100%/1   | Total: 30m 13s | Avg: 30m 13s | Max: 30m 13s
      🟩 GCC12              Pass: 100%/1   | Total: 31m 56s | Avg: 31m 56s | Max: 31m 56s
      🟩 GCC13              Pass: 100%/8   | Total:  2h 40m | Avg: 20m 01s | Max: 32m 17s
      🟩 MSVC14.29          Pass: 100%/2   | Total:  2h 18m | Avg:  1h 09m | Max:  1h 09m | Hits: 138%/3672  
      🟩 MSVC14.39          Pass: 100%/3   | Total:  3h 02m | Avg:  1h 00m | Max:  1h 20m | Hits: 209%/5508  
      🟩 NVHPC24.7          Pass: 100%/2   | Total:  2h 17m | Avg:  1h 08m | Max:  1h 10m
    🟩 cxx_family
      🟩 Clang              Pass: 100%/14  | Total:  5h 40m | Avg: 24m 19s | Max: 29m 17s
      🟩 GCC                Pass: 100%/16  | Total:  6h 37m | Avg: 24m 49s | Max: 32m 17s
      🟩 MSVC               Pass: 100%/5   | Total:  5h 21m | Avg:  1h 04m | Max:  1h 20m | Hits: 181%/9180  
      🟩 NVHPC              Pass: 100%/2   | Total:  2h 17m | Avg:  1h 08m | Max:  1h 10m
    🟩 gpu
      🟩 v100               Pass: 100%/37  | Total: 19h 56m | Avg: 32m 20s | Max:  1h 20m | Hits: 181%/9180  
    🟩 jobs
      🟩 Build              Pass: 100%/31  | Total: 18h 18m | Avg: 35m 25s | Max:  1h 20m | Hits: 135%/7344  
      🟩 TestCPU            Pass: 100%/3   | Total: 51m 08s | Avg: 17m 02s | Max: 36m 01s | Hits: 365%/1836  
      🟩 TestGPU            Pass: 100%/3   | Total: 47m 06s | Avg: 15m 42s | Max: 16m 35s
    🟩 sm
      🟩 90a                Pass: 100%/1   | Total: 12m 47s | Avg: 12m 47s | Max: 12m 47s
    🟩 std
      🟩 17                 Pass: 100%/14  | Total:  9h 16m | Avg: 39m 44s | Max:  1h 09m | Hits: 136%/5508  
      🟩 20                 Pass: 100%/21  | Total: 10h 00m | Avg: 28m 36s | Max:  1h 20m | Hits: 247%/3672  
    
  • 🟩 cudax: Pass: 100%/20 | Total: 1h 49m | Avg: 5m 28s | Max: 18m 20s | Hits: 363%/522

    🟩 cpu
      🟩 amd64              Pass: 100%/16  | Total:  1h 39m | Avg:  6m 11s | Max: 18m 20s | Hits: 363%/522   
      🟩 arm64              Pass: 100%/4   | Total: 10m 29s | Avg:  2m 37s | Max:  2m 45s
    🟩 ctk
      🟩 12.0               Pass: 100%/1   | Total: 13m 38s | Avg: 13m 38s | Max: 13m 38s | Hits: 381%/261   
      🟩 12.5               Pass: 100%/2   | Total: 10m 12s | Avg:  5m 06s | Max:  5m 23s
      🟩 12.6               Pass: 100%/17  | Total:  1h 25m | Avg:  5m 02s | Max: 18m 20s | Hits: 344%/261   
    🟩 cudacxx
      🟩 nvcc12.0           Pass: 100%/1   | Total: 13m 38s | Avg: 13m 38s | Max: 13m 38s | Hits: 381%/261   
      🟩 nvcc12.5           Pass: 100%/2   | Total: 10m 12s | Avg:  5m 06s | Max:  5m 23s
      🟩 nvcc12.6           Pass: 100%/17  | Total:  1h 25m | Avg:  5m 02s | Max: 18m 20s | Hits: 344%/261   
    🟩 cudacxx_family
      🟩 nvcc               Pass: 100%/20  | Total:  1h 49m | Avg:  5m 28s | Max: 18m 20s | Hits: 363%/522   
    🟩 cxx
      🟩 Clang14            Pass: 100%/1   | Total:  3m 18s | Avg:  3m 18s | Max:  3m 18s
      🟩 Clang15            Pass: 100%/1   | Total:  3m 08s | Avg:  3m 08s | Max:  3m 08s
      🟩 Clang16            Pass: 100%/1   | Total:  3m 04s | Avg:  3m 04s | Max:  3m 04s
      🟩 Clang17            Pass: 100%/1   | Total:  3m 03s | Avg:  3m 03s | Max:  3m 03s
      🟩 Clang18            Pass: 100%/4   | Total: 23m 39s | Avg:  5m 54s | Max: 14m 57s
      🟩 GCC10              Pass: 100%/1   | Total:  3m 01s | Avg:  3m 01s | Max:  3m 01s
      🟩 GCC11              Pass: 100%/1   | Total:  3m 05s | Avg:  3m 05s | Max:  3m 05s
      🟩 GCC12              Pass: 100%/2   | Total: 21m 22s | Avg: 10m 41s | Max: 18m 20s
      🟩 GCC13              Pass: 100%/4   | Total: 10m 30s | Avg:  2m 37s | Max:  2m 44s
      🟩 MSVC14.36          Pass: 100%/1   | Total: 13m 38s | Avg: 13m 38s | Max: 13m 38s | Hits: 381%/261   
      🟩 MSVC14.39          Pass: 100%/1   | Total: 11m 29s | Avg: 11m 29s | Max: 11m 29s | Hits: 344%/261   
      🟩 NVHPC24.7          Pass: 100%/2   | Total: 10m 12s | Avg:  5m 06s | Max:  5m 23s
    🟩 cxx_family
      🟩 Clang              Pass: 100%/8   | Total: 36m 12s | Avg:  4m 31s | Max: 14m 57s
      🟩 GCC                Pass: 100%/8   | Total: 37m 58s | Avg:  4m 44s | Max: 18m 20s
      🟩 MSVC               Pass: 100%/2   | Total: 25m 07s | Avg: 12m 33s | Max: 13m 38s | Hits: 363%/522   
      🟩 NVHPC              Pass: 100%/2   | Total: 10m 12s | Avg:  5m 06s | Max:  5m 23s
    🟩 gpu
      🟩 v100               Pass: 100%/20  | Total:  1h 49m | Avg:  5m 28s | Max: 18m 20s | Hits: 363%/522   
    🟩 jobs
      🟩 Build              Pass: 100%/18  | Total:  1h 16m | Avg:  4m 14s | Max: 13m 38s | Hits: 363%/522   
      🟩 Test               Pass: 100%/2   | Total: 33m 17s | Avg: 16m 38s | Max: 18m 20s
    🟩 sm
      🟩 90                 Pass: 100%/1   | Total:  2m 42s | Avg:  2m 42s | Max:  2m 42s
      🟩 90a                Pass: 100%/1   | Total:  2m 44s | Avg:  2m 44s | Max:  2m 44s
    🟩 std
      🟩 17                 Pass: 100%/4   | Total: 12m 42s | Avg:  3m 10s | Max:  4m 49s
      🟩 20                 Pass: 100%/16  | Total:  1h 36m | Avg:  6m 02s | Max: 18m 20s | Hits: 363%/522   
    
  • 🟩 cccl_c_parallel: Pass: 100%/2 | Total: 10m 10s | Avg: 5m 05s | Max: 8m 13s

    🟩 cpu
      🟩 amd64              Pass: 100%/2   | Total: 10m 10s | Avg:  5m 05s | Max:  8m 13s
    🟩 ctk
      🟩 12.6               Pass: 100%/2   | Total: 10m 10s | Avg:  5m 05s | Max:  8m 13s
    🟩 cudacxx
      🟩 nvcc12.6           Pass: 100%/2   | Total: 10m 10s | Avg:  5m 05s | Max:  8m 13s
    🟩 cudacxx_family
      🟩 nvcc               Pass: 100%/2   | Total: 10m 10s | Avg:  5m 05s | Max:  8m 13s
    🟩 cxx
      🟩 GCC13              Pass: 100%/2   | Total: 10m 10s | Avg:  5m 05s | Max:  8m 13s
    🟩 cxx_family
      🟩 GCC                Pass: 100%/2   | Total: 10m 10s | Avg:  5m 05s | Max:  8m 13s
    🟩 gpu
      🟩 v100               Pass: 100%/2   | Total: 10m 10s | Avg:  5m 05s | Max:  8m 13s
    🟩 jobs
      🟩 Build              Pass: 100%/1   | Total:  1m 57s | Avg:  1m 57s | Max:  1m 57s
      🟩 Test               Pass: 100%/1   | Total:  8m 13s | Avg:  8m 13s | Max:  8m 13s
    
  • 🟩 python: Pass: 100%/1 | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s

    🟩 cpu
      🟩 amd64              Pass: 100%/1   | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s
    🟩 ctk
      🟩 12.6               Pass: 100%/1   | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s
    🟩 cudacxx
      🟩 nvcc12.6           Pass: 100%/1   | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s
    🟩 cudacxx_family
      🟩 nvcc               Pass: 100%/1   | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s
    🟩 cxx
      🟩 GCC13              Pass: 100%/1   | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s
    🟩 cxx_family
      🟩 GCC                Pass: 100%/1   | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s
    🟩 gpu
      🟩 v100               Pass: 100%/1   | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s
    🟩 jobs
      🟩 Test               Pass: 100%/1   | Total: 44m 41s | Avg: 44m 41s | Max: 44m 41s
    

👃 Inspect Changes

Modifications in project?

Project
CCCL Infrastructure
+/- libcu++
CUB
Thrust
CUDA Experimental
python
CCCL C Parallel Library
Catch2Helper

Modifications in project or dependencies?

Project
CCCL Infrastructure
+/- libcu++
+/- CUB
+/- Thrust
+/- CUDA Experimental
+/- python
+/- CCCL C Parallel Library
+/- Catch2Helper

🏃‍ Runner counts (total jobs: 135)

# Runner
92 linux-amd64-cpu16
17 linux-amd64-gpu-v100-latest-1
15 windows-amd64-cpu16
10 linux-arm64-cpu16
1 linux-amd64-gpu-h100-latest-1-testing

@miscco merged commit 9c1541a into NVIDIA:main on Jan 22, 2025
147 of 150 checks passed
@miscco deleted the make_stream_ref_available_on_host branch on January 22, 2025 15:07
davebayer pushed a commit to davebayer/cccl that referenced this pull request Jan 22, 2025
davebayer pushed a commit to davebayer/cccl that referenced this pull request Jan 22, 2025
davebayer added a commit to davebayer/cccl that referenced this pull request Jan 22, 2025
update docs

update docs

add `memcmp`, `memmove` and `memchr` implementations

implement tests

Use cuda::std::min/max in Thrust (NVIDIA#3364)

Implement `cuda::std::numeric_limits` for `__half` and `__nv_bfloat16` (NVIDIA#3361)

* implement `cuda::std::numeric_limits` for `__half` and `__nv_bfloat16`

Cleanup util_arch (NVIDIA#2773)

Deprecate thrust::null_type (NVIDIA#3367)

Deprecate cub::DeviceSpmv (NVIDIA#3320)

Fixes: NVIDIA#896

Improves `DeviceSegmentedSort` test run time for large number of items and segments (NVIDIA#3246)

* fixes segment offset generation

* switches to analytical verification

* switches to analytical verification for pairs

* fixes spelling

* adds tests for large number of segments

* fixes narrowing conversion in tests

* addresses review comments

* fixes includes

Compile basic infra test with C++17 (NVIDIA#3377)

Adds support for large number of items and large number of segments to `DeviceSegmentedSort` (NVIDIA#3308)

* fixes segment offset generation

* switches to analytical verification

* switches to analytical verification for pairs

* addresses review comments

* introduces segment offset type

* adds tests for large number of segments

* adds support for large number of segments

* drops segment offset type

* fixes thrust namespace

* removes about-to-be-deprecated cub iterators

* no exec specifier on defaulted ctor

* fixes gcc7 linker error

* uses local_segment_index_t throughout

* determine offset type based on type returned by segment iterator begin/end iterators

* minor style improvements

Exit with error when RAPIDS CI fails. (NVIDIA#3385)

cuda.parallel: Support structured types as algorithm inputs (NVIDIA#3218)

* Introduce gpu_struct decorator and typing

* Enable `reduce` to accept arrays of structs as inputs

* Add test for reducing arrays-of-struct

* Update documentation

* Use a numpy array rather than ctypes object

* Change zeros -> empty for output array and temp storage

* Add a TODO for typing GpuStruct

* Documentation updates

* Remove test_reduce_struct_type from test_reduce.py

* Revert to `to_cccl_value()` accepting ndarray + GpuStruct

* Bump copyrights

---------

Co-authored-by: Ashwin Srinath <shwina@users.noreply.github.com>

Deprecate thrust::async (NVIDIA#3324)

Fixes: NVIDIA#100

Review/Deprecate CUB `util.ptx` for CCCL 2.x (NVIDIA#3342)

Fix broken `_CCCL_BUILTIN_ASSUME` macro (NVIDIA#3314)

* add compiler-specific path
* fix device code path
* add _CCC_ASSUME

Deprecate thrust::numeric_limits (NVIDIA#3366)

Replace `typedef` with `using` in libcu++ (NVIDIA#3368)

Deprecate thrust::optional (NVIDIA#3307)

Fixes: NVIDIA#3306

Upgrade to Catch2 3.8  (NVIDIA#3310)

Fixes: NVIDIA#1724

refactor `<cuda/std/cstdint>` (NVIDIA#3325)

Co-authored-by: Bernhard Manfred Gruber <bernhardmgruber@gmail.com>

Update CODEOWNERS (NVIDIA#3331)

* Update CODEOWNERS

* Update CODEOWNERS

* Update CODEOWNERS

* [pre-commit.ci] auto code formatting

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

Fix sign-compare warning (NVIDIA#3408)

Implement more cmath functions to be usable on host and device (NVIDIA#3382)

* Implement more cmath functions to be usable on host and device

* Implement math roots functions

* Implement exponential functions

Redefine and deprecate thrust::remove_cvref (NVIDIA#3394)

* Redefine and deprecate thrust::remove_cvref

Co-authored-by: Michael Schellenberger Costa <miscco@nvidia.com>

Fix assert definition for NVHPC due to constexpr issues (NVIDIA#3418)

NVHPC cannot decide at compile time where the code would run, so _CCCL_ASSERT within a constexpr function breaks it.

Fix this by always using the host definition which should also work on device.

Fixes NVIDIA#3411
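
As an illustration of the constexpr problem this commit describes, here is a minimal sketch; the `MY_ASSERT` macro is hypothetical and not the actual `_CCCL_ASSERT` definition. An assert with separate host and device expansions cannot be resolved inside a constexpr function by a compiler that does not decide host/device at compile time, whereas the host expansion is usable in both places.

```cpp
// Hypothetical assert macro (not the real _CCCL_ASSERT): device passes would
// expand to a device-only trap, host passes to <cassert>'s assert.
#include <cassert>

#if defined(__CUDA_ARCH__)
#  define MY_ASSERT(expr) ((expr) ? void(0) : __trap())
#else
#  define MY_ASSERT(expr) assert(expr)
#endif

// A compiler that cannot tell which expansion applies inside a constexpr
// function breaks here; always using the host expansion, as the fix above
// does, sidesteps that.
constexpr int clamped(int value, int hi)
{
  MY_ASSERT(hi >= 0);
  return value > hi ? hi : value;
}

static_assert(clamped(7, 5) == 5, "still usable in constant evaluation");
```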

Extend CUB reduce benchmarks (NVIDIA#3401)

* Rename max.cu to custom.cu, since it uses a custom operator
* Extend types covered by min.cu to all fundamental types
* Add some notes on how to collect tuning parameters

Fixes: NVIDIA#3283

Update upload-pages-artifact to v3 (NVIDIA#3423)

* Update upload-pages-artifact to v3

* Empty commit

---------

Co-authored-by: Ashwin Srinath <shwina@users.noreply.github.com>

Replace and deprecate thrust::cuda_cub::terminate (NVIDIA#3421)

`std::linalg` accessors and `transposed_layout` (NVIDIA#2962)

Add round up/down to multiple (NVIDIA#3234)

[FEA]: Introduce Python module with CCCL headers (NVIDIA#3201)

* Add cccl/python/cuda_cccl directory and use from cuda_parallel, cuda_cooperative

* Run `copy_cccl_headers_to_aude_include()` before `setup()`

* Create python/cuda_cccl/cuda/_include/__init__.py, then simply import cuda._include to find the include path.

* Add cuda.cccl._version exactly as for cuda.cooperative and cuda.parallel

* Bug fix: cuda/_include only exists after shutil.copytree() ran.

* Use `f"cuda-cccl @ file://{cccl_path}/python/cuda_cccl"` in setup.py

* Remove CustomBuildCommand, CustomWheelBuild in cuda_parallel/setup.py (they are equivalent to the default functions)

* Replace := operator (needs Python 3.8+)

* Fix oversights: remove `pip3 install ./cuda_cccl` lines from README.md

* Restore original README.md: `pip3 install -e` now works on first pass.

* cuda_cccl/README.md: FOR INTERNAL USE ONLY

* Remove `$pymajor.$pyminor.` prefix in cuda_cccl _version.py (as suggested under NVIDIA#3201 (comment))

Command used: ci/update_version.sh 2 8 0

* Modernize pyproject.toml, setup.py

Trigger for this change:

* NVIDIA#3201 (comment)

* NVIDIA#3201 (comment)

* Install CCCL headers under cuda.cccl.include

Trigger for this change:

* NVIDIA#3201 (comment)

Unexpected accidental discovery: cuda.cooperative unit tests pass without CCCL headers entirely.

* Factor out cuda_cccl/cuda/cccl/include_paths.py

* Reuse cuda_cccl/cuda/cccl/include_paths.py from cuda_cooperative

* Add missing Copyright notice.

* Add missing __init__.py (cuda.cccl)

* Add `"cuda.cccl"` to `autodoc.mock_imports`

* Move cuda.cccl.include_paths into function where it is used. (Attempt to resolve Build and Verify Docs failure.)

* Add # TODO: move this to a module-level import

* Modernize cuda_cooperative/pyproject.toml, setup.py

* Convert cuda_cooperative to use hatchling as build backend.

* Revert "Convert cuda_cooperative to use hatchling as build backend."

This reverts commit 61637d6.

* Move numpy from [build-system] requires -> [project] dependencies

* Move pyproject.toml [project] dependencies -> setup.py install_requires, to be able to use CCCL_PATH

* Remove copy_license() and use license_files=["../../LICENSE"] instead.

* Further modernize cuda_cccl/setup.py to use pathlib

* Trivial simplifications in cuda_cccl/pyproject.toml

* Further simplify cuda_cccl/pyproject.toml, setup.py: remove inconsequential code

* Make cuda_cooperative/pyproject.toml more similar to cuda_cccl/pyproject.toml

* Add taplo-pre-commit to .pre-commit-config.yaml

* taplo-pre-commit auto-fixes

* Use pathlib in cuda_cooperative/setup.py

* CCCL_PYTHON_PATH in cuda_cooperative/setup.py

* Modernize cuda_parallel/pyproject.toml, setup.py

* Use pathlib in cuda_parallel/setup.py

* Add `# TOML lint & format` comment.

* Replace MANIFEST.in with `[tool.setuptools.package-data]` section in pyproject.toml

* Use pathlib in cuda/cccl/include_paths.py

* pre-commit autoupdate (EXCEPT clang-format, which was manually restored)

* Fixes after git merge main

* Resolve warning: AttributeError: '_Reduce' object has no attribute 'build_result'

```
=========================================================================== warnings summary ===========================================================================
tests/test_reduce.py::test_reduce_non_contiguous
  /home/coder/cccl/python/devenv/lib/python3.12/site-packages/_pytest/unraisableexception.py:85: PytestUnraisableExceptionWarning: Exception ignored in: <function _Reduce.__del__ at 0x7bf123139080>

  Traceback (most recent call last):
    File "/home/coder/cccl/python/cuda_parallel/cuda/parallel/experimental/algorithms/reduce.py", line 132, in __del__
      bindings.cccl_device_reduce_cleanup(ctypes.byref(self.build_result))
                                                       ^^^^^^^^^^^^^^^^^
  AttributeError: '_Reduce' object has no attribute 'build_result'

    warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============================================================= 1 passed, 93 deselected, 1 warning in 0.44s ==============================================================
```

* Move `copy_cccl_headers_to_cuda_cccl_include()` functionality to `class CustomBuildPy`

* Introduce cuda_cooperative/constraints.txt

* Also add cuda_parallel/constraints.txt

* Add `--constraint constraints.txt` in ci/test_python.sh

* Update Copyright dates

* Switch to https://github.com/ComPWA/taplo-pre-commit (the other repo has been archived by the owner on Jul 1, 2024)

For completeness: The other repo took a long time to install into the pre-commit cache; so long that it led to timeouts in the CCCL CI.

* Remove unused cuda_parallel jinja2 dependency (noticed by chance).

* Remove constraints.txt files, advertise running `pip install cuda-cccl` first instead.

* Make cuda_cooperative, cuda_parallel testing completely independent.

* Run only test_python.sh [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Try using another runner (because V100 runners seem to be stuck) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Fix sign-compare warning (NVIDIA#3408) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Revert "Try using another runner (because V100 runners seem to be stuck) [skip-rapids][skip-matx][skip-docs][skip-vdc]"

This reverts commit ea33a21.

Error message: NVIDIA#3201 (comment)

* Try using A100 runner (because V100 runners still seem to be stuck) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Also show cuda-cooperative site-packages, cuda-parallel site-packages (after pip install) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Try using l4 runner (because V100 runners still seem to be stuck) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Restore original ci/matrix.yaml [skip-rapids]

* Use for loop in test_python.sh to avoid code duplication.

* Run only test_python.sh [skip-rapids][skip-matx][skip-docs][skip-vdc][skip pre-commit.ci]

* Comment out taplo-lint in pre-commit config [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Revert "Run only test_python.sh [skip-rapids][skip-matx][skip-docs][skip-vdc][skip pre-commit.ci]"

This reverts commit ec206fd.

* Implement suggestion by @shwina (NVIDIA#3201 (review))

* Address feedback by @leofang

---------

Co-authored-by: Bernhard Manfred Gruber <bernhardmgruber@gmail.com>

cuda.parallel: Add optional stream argument to reduce_into() (NVIDIA#3348)

* Add optional stream argument to reduce_into()

* Add tests to check for reduce_into() stream behavior

* Move protocol related utils to separate file and rework __cuda_stream__ error messages

* Fix synchronization issue in stream test and add one more invalid stream test case

* Rename cuda stream validation function after removing leading underscore

* Unpack values from __cuda_stream__ instead of indexing

* Fix linting errors

* Handle TypeError when unpacking invalid __cuda_stream__ return

* Use stream to allocate cupy memory in new stream test

Upgrade to actions/deploy-pages@v4 (from v2), as suggested by @leofang (NVIDIA#3434)

Deprecate `cub::{min, max}` and replace internal uses with those from libcu++ (NVIDIA#3419)

* Deprecate `cub::{min, max}` and replace internal uses with those from libcu++

Fixes NVIDIA#3404

Fix CI issues (NVIDIA#3443)

Remove deprecated `cub::min` (NVIDIA#3450)

* Remove deprecated `cuda::{min,max}`

* Drop unused `thrust::remove_cvref` file

Fix typo in builtin (NVIDIA#3451)

Moves agents to `detail::<algorithm_name>` namespace (NVIDIA#3435)

uses unsigned offset types in thrust's scan dispatch (NVIDIA#3436)

Default transform_iterator's copy ctor (NVIDIA#3395)

Fixes: NVIDIA#2393

Turn C++ dialect warning into error (NVIDIA#3453)

Uses unsigned offset types in thrust's sort algorithm calling into `DispatchMergeSort` (NVIDIA#3437)

* uses thrust's dynamic dispatch for merge_sort

* [pre-commit.ci] auto code formatting

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

Refactor allocator handling of contiguous_storage (NVIDIA#3050)

Co-authored-by: Michael Schellenberger Costa <miscco@nvidia.com>

Drop thrust::detail::integer_traits (NVIDIA#3391)

Add cuda::is_floating_point supporting half and bfloat (NVIDIA#3379)

Co-authored-by: Michael Schellenberger Costa <miscco@nvidia.com>

Improve docs of std headers (NVIDIA#3416)

Drop C++11 and C++14 support for all of cccl (NVIDIA#3417)

* Drop C++11 and C++14 support for all of cccl

---------

Co-authored-by: Bernhard Manfred Gruber <bernhardmgruber@gmail.com>

Deprecate a few CUB macros (NVIDIA#3456)

Deprecate thrust universal iterator categories (NVIDIA#3461)

Fix launch args order (NVIDIA#3465)

Add `--extended-lambda` to the list of removed clangd flags (NVIDIA#3432)

add `_CCCL_HAS_NVFP8` macro (NVIDIA#3429)

Add `_CCCL_BUILTIN_PREFETCH` (NVIDIA#3433)

Drop universal iterator categories (NVIDIA#3474)

Ensure that headers in `<cuda/*>` can be built with a C++ only compiler (NVIDIA#3472)

Specialize __is_extended_floating_point for FP8 types (NVIDIA#3470)

Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <miscco@nvidia.com>

Moves CUB kernel entry points to a detail namespace (NVIDIA#3468)

* moves emptykernel to detail ns

* second batch

* third batch

* fourth batch

* fixes cuda parallel

* concatenates nested namespaces

Deprecate block/warp algo specializations (NVIDIA#3455)

Fixes: NVIDIA#3409

Refactor CUB's util_debug (NVIDIA#3345)
davebayer pushed a commit to davebayer/cccl that referenced this pull request Jan 29, 2025
miscco added a commit to miscco/cccl that referenced this pull request Feb 3, 2025
miscco added a commit that referenced this pull request Feb 3, 2025
Ensure that headers in `<cuda/*>` can be built with a C++ only compiler (#3472) (#3651)

* Ensure that headers in `<cuda/*>` can be built with a C++ only compiler (#3472)

* Avoid warnings for MSVC

Successfully merging this pull request may close these issues.

[BUG]: Libcudaxx headers such as <cuda/stream_ref> should compile even without CUDA compilers