# Adding CUDNN Frontend and using it for CUDA NN Convolution (#19470)
### Description

Added the CUDNN Frontend and used it for NHWC convolutions, with optional fusion of the activation.

#### Backward compatibility

- Existing models that contain FusedConv can still run.
- If ORT is built with cuDNN 8, the cuDNN Frontend is not built into the binary; the old kernels (using cuDNN backend APIs) are used instead.

#### Major changes

- For cuDNN 9, the cuDNN Frontend is used to fuse convolution and bias when the provider option `fuse_conv_bias=1` is set.
- Removed the FusedConv fusion from the graph transformer for the CUDA provider, so FusedConv nodes will no longer be added to graphs for the CUDA EP.
- Updated the cmake files for the cuDNN settings. The search order for the cuDNN installation at build time is:
  * the environment variable `CUDNN_PATH`
  * the `onnxruntime_CUDNN_HOME` cmake extra define. When building via build.py/build.sh, it can be passed with the `--cudnn_home` parameter, or with the environment variable `CUDNN_HOME` if `--cudnn_home` is not used
  * the cudnn python package installation directory, e.g. python3.xx/site-packages/nvidia/cudnn
  * the CUDA installation path

#### Potential issues

- If ORT is built with cuDNN 8, the FusedConv fusion is no longer applied automatically, so some models might see a performance regression. Users who still want the FusedConv operator for performance have several workarounds: use an older version of onnxruntime, or use an older version of ORT to save the optimized onnx model and then run it with the latest ORT. We believe the majority of users will have moved to cuDNN 9 by the 1.20 release (cuDNN 9 will have been the default in ORT and PyTorch for three months at that point), so the impact is small.
- The cuDNN graph uses TF32 by default, and TF32 cannot be disabled through the `use_tf32` CUDA provider option. If accuracy issues are encountered (for example in testing), set the environment variable `NVIDIA_TF32_OVERRIDE=0` to disable TF32. The documentation of `use_tf32` needs to be updated accordingly.
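As an illustrative sketch of the runtime options described above (the model path and flag values are placeholders; the `-i` key|value syntax is the same one used by the perf command in the Motivation section):

```shell
# Enable NHWC convolution and the new convolution+bias fusion on the CUDA EP.
onnxruntime_perf_test -e cuda -I -q -r 100 -d 1 -i 'prefer_nhwc|1 fuse_conv_bias|1' resnet50.onnx

# cuDNN graphs use TF32 by default and ignore the use_tf32 provider option;
# if accuracy suffers, disable TF32 with NVIDIA's environment override.
NVIDIA_TF32_OVERRIDE=0 onnxruntime_perf_test -e cuda -I -q -r 100 -d 1 -i 'prefer_nhwc|1 fuse_conv_bias|1' resnet50.onnx
```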
#### Follow-ups

This is one of the PRs that target enabling NHWC convolution in the CUDA EP by default when the device supports it. Other changes will follow to make that possible: (1) enable `prefer_nhwc` by default for devices with sm >= 70; (2) make `fuse_conv_bias=1` the default after more testing; (3) add other NHWC operators (like Resize or UpSample).

### Motivation and Context

The new CUDNN Frontend library provides the functionality to fuse operations and new heuristics for kernel selection. Here it fuses the convolution with the pointwise bias operation. On the [NVIDIA ResNet50](https://pytorch.org/hub/nvidia_deeplearningexamples_resnet50/) we get a performance boost from 49.1144 ms to 42.4643 ms per inference on a 2560x1440 input (`onnxruntime_perf_test -e cuda -I -q -r 100 -d 1 -i 'prefer_nhwc|1' resnet50.onnx`).

---------

Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Maximilian Mueller <maximilianm@nvidia.com>
1 parent 0e708de · commit 1391354 · 45 changed files with 1,806 additions and 559 deletions.
New file (`@@ -0,0 +1,111 @@`), a cmake module that locates cuDNN and defines imported targets:

```cmake
add_library(CUDNN::cudnn_all INTERFACE IMPORTED)

find_path(
    CUDNN_INCLUDE_DIR cudnn.h
    HINTS $ENV{CUDNN_PATH} ${CUDNN_PATH} ${Python_SITEARCH}/nvidia/cudnn ${CUDAToolkit_INCLUDE_DIRS}
    PATH_SUFFIXES include
    REQUIRED
)

file(READ "${CUDNN_INCLUDE_DIR}/cudnn_version.h" cudnn_version_header)
string(REGEX MATCH "#define CUDNN_MAJOR [1-9]+" macrodef "${cudnn_version_header}")
string(REGEX MATCH "[1-9]+" CUDNN_MAJOR_VERSION "${macrodef}")

function(find_cudnn_library NAME)
    find_library(
        ${NAME}_LIBRARY ${NAME} "lib${NAME}.so.${CUDNN_MAJOR_VERSION}"
        HINTS $ENV{CUDNN_PATH} ${CUDNN_PATH} ${Python_SITEARCH}/nvidia/cudnn ${CUDAToolkit_LIBRARY_DIR}
        PATH_SUFFIXES lib64 lib/x64 lib
        REQUIRED
    )

    if(${NAME}_LIBRARY)
        add_library(CUDNN::${NAME} UNKNOWN IMPORTED)
        set_target_properties(
            CUDNN::${NAME} PROPERTIES
            INTERFACE_INCLUDE_DIRECTORIES ${CUDNN_INCLUDE_DIR}
            IMPORTED_LOCATION ${${NAME}_LIBRARY}
        )
        message(STATUS "${NAME} found at ${${NAME}_LIBRARY}.")
    else()
        message(STATUS "${NAME} not found.")
    endif()
endfunction()

find_cudnn_library(cudnn)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(
    LIBRARY REQUIRED_VARS
    CUDNN_INCLUDE_DIR cudnn_LIBRARY
)

if(CUDNN_INCLUDE_DIR AND cudnn_LIBRARY)
    message(STATUS "cuDNN: ${cudnn_LIBRARY}")
    message(STATUS "cuDNN: ${CUDNN_INCLUDE_DIR}")
    set(CUDNN_FOUND ON CACHE INTERNAL "cuDNN Library Found")
else()
    set(CUDNN_FOUND OFF CACHE INTERNAL "cuDNN Library Not Found")
endif()

target_include_directories(
    CUDNN::cudnn_all
    INTERFACE
    $<INSTALL_INTERFACE:include>
    $<BUILD_INTERFACE:${CUDNN_INCLUDE_DIR}>
)

target_link_libraries(
    CUDNN::cudnn_all
    INTERFACE
    CUDNN::cudnn
)

if(CUDNN_MAJOR_VERSION EQUAL 8)
    find_cudnn_library(cudnn_adv_infer)
    find_cudnn_library(cudnn_adv_train)
    find_cudnn_library(cudnn_cnn_infer)
    find_cudnn_library(cudnn_cnn_train)
    find_cudnn_library(cudnn_ops_infer)
    find_cudnn_library(cudnn_ops_train)

    target_link_libraries(
        CUDNN::cudnn_all
        INTERFACE
        CUDNN::cudnn_adv_train
        CUDNN::cudnn_ops_train
        CUDNN::cudnn_cnn_train
        CUDNN::cudnn_adv_infer
        CUDNN::cudnn_cnn_infer
        CUDNN::cudnn_ops_infer
    )
elseif(CUDNN_MAJOR_VERSION EQUAL 9)
    find_cudnn_library(cudnn_cnn)
    find_cudnn_library(cudnn_adv)
    find_cudnn_library(cudnn_graph)
    find_cudnn_library(cudnn_ops)
    find_cudnn_library(cudnn_engines_runtime_compiled)
    find_cudnn_library(cudnn_engines_precompiled)
    find_cudnn_library(cudnn_heuristic)

    target_link_libraries(
        CUDNN::cudnn_all
        INTERFACE
        CUDNN::cudnn_adv
        CUDNN::cudnn_ops
        CUDNN::cudnn_cnn
        CUDNN::cudnn_graph
        CUDNN::cudnn_engines_runtime_compiled
        CUDNN::cudnn_engines_precompiled
        CUDNN::cudnn_heuristic
    )
endif()

mark_as_advanced(CUDNN_INCLUDE_DIR)
```
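A hedged sketch of how this module's search order plays out in practice: the `HINTS` above consult `CUDNN_PATH` (environment or cmake variable) first, then the Python `nvidia/cudnn` package location, then the CUDA toolkit directories. The paths and the pip package name below are illustrative assumptions, not values from this commit:

```shell
# Point the build at a specific cuDNN install via the environment variable
# that the find_path/find_library HINTS consult first.
export CUDNN_PATH=/opt/cudnn-9.1

# Alternatively, pass it through the ORT build script; --cudnn_home feeds the
# onnxruntime_CUDNN_HOME cmake extra define mentioned in the description.
./build.sh --use_cuda --cudnn_home "$CUDNN_PATH" --config Release

# Or rely on a pip-installed cuDNN: the module also searches
# ${Python_SITEARCH}/nvidia/cudnn, which such a package would populate.
pip install nvidia-cudnn-cu12
```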
New file (`@@ -0,0 +1,12 @@`), fetching the cudnn_frontend dependency:

```cmake
include(FetchContent)
FetchContent_Declare(
    cudnn_frontend
    URL ${DEP_URL_cudnn_frontend}
    URL_HASH SHA1=${DEP_SHA1_cudnn_frontend}
)

set(CUDNN_FRONTEND_BUILD_SAMPLES OFF)
set(CUDNN_FRONTEND_BUILD_UNIT_TESTS OFF)
set(CUDNN_FRONTEND_BUILD_PYTHON_BINDINGS OFF)
set(CUDNN_PATH ${onnxruntime_CUDNN_HOME})
FetchContent_MakeAvailable(cudnn_frontend)
```
Change to the `CudaResource` enum, adding a handle for the new fuse_conv_bias option:

```diff
@@ -19,4 +19,5 @@ enum CudaResource : int {
   enable_skip_layer_norm_strict_mode_t,
   prefer_nhwc_t,
   use_tf32_t,
+  fuse_conv_bias_t
 };
```