Add ROCM to the DGL build #3
Conversation
These are the result of iterative work on the hipified files, so we're prefetching some replacements I didn't discover until later. The source files also need some modification to make this all work.
This is just the output of the hipify-inplace.sh and hipify-tensoradapter.py scripts with no further modifications. I think it's easier to review changes to the actual HIP source rather than trying to think about what the hipify script will do, and since there are a fair number of changes to review, that seems worth it. In the end we should have a bunch of HIP source files and .prehip CUDA files they can be regenerated from. Then we can handle organization however we want: restore the originals and have hipification be part of the build process (a sketch of what that could look like follows), have the HIP versions on a separate branch, etc. I'll check in the .prehip files in a separate commit to keep things a bit cleaner.
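For the build-process option, here is a minimal CMake sketch of regenerating HIP sources from checked-in .prehip originals at build time. This is illustrative only, not the logic of the actual scripts: it assumes hipify-perl from ROCm is on the PATH, and the src/ layout and variable names are placeholders.

# Hypothetical sketch: regenerate HIP sources from checked-in .prehip files.
find_program(HIPIFY_PERL hipify-perl REQUIRED)

set(HIPIFIED_DIR "${CMAKE_BINARY_DIR}/hipified_src")
file(GLOB_RECURSE PREHIP_SOURCES "${CMAKE_SOURCE_DIR}/src/*.prehip")
set(GENERATED_HIP_SOURCES "")
foreach(prehip IN LISTS PREHIP_SOURCES)
  # src/array/cuda/foo.cu.prehip -> <build>/hipified_src/array/cuda/foo.cu
  file(RELATIVE_PATH rel "${CMAKE_SOURCE_DIR}/src" "${prehip}")
  string(REGEX REPLACE "\\.prehip$" "" rel "${rel}")
  set(hipified "${HIPIFIED_DIR}/${rel}")
  get_filename_component(hipified_dir "${hipified}" DIRECTORY)
  add_custom_command(
    OUTPUT  "${hipified}"
    COMMAND ${CMAKE_COMMAND} -E make_directory "${hipified_dir}"
    COMMAND ${CMAKE_COMMAND} -E copy "${prehip}" "${hipified}"
    COMMAND ${HIPIFY_PERL} -inplace "${hipified}"
    DEPENDS "${prehip}"
    COMMENT "hipifying ${rel}")
  list(APPEND GENERATED_HIP_SOURCES "${hipified}")
endforeach()
# GENERATED_HIP_SOURCES would then be listed as sources of the dgl target.

The copy-then-hipify-in-place step keeps the checked-in .prehip files untouched, so nothing in the source tree is rewritten during a build.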
These all get the .prehip extension appended.
In my porting, I was finding it really annoying that everything in DGL was hardcoded to the directory build/ and that it created build/ directories in various source subdirectories (which meant cleaning and rebuilding was fraught). I modified things so that all sub-builds happen in the main build directory. There were also some bugs in the shell scripts and I cleaned them up a bit to make them more robust. Not all of this is strictly required for ROCM to build, so we might want to strip it out. I already stripped out various warning silencing for that reason.
@jeffdaily PTAL. This is based on my other changes. Really only
cuda_add_library(gpu_cache STATIC ${gpu_cache_src})
target_include_directories(gpu_cache PRIVATE "third_party/HugeCTR/gpu_cache/include")
target_include_directories(dgl PRIVATE "third_party/HugeCTR/gpu_cache/include")
list(APPEND DGL_LINKER_LIBS gpu_cache)
message(STATUS "Build with HugeCTR GPU embedding cache.")
elseif(USE_ROCM)
  set_source_files_properties(${gpu_cache_src} PROPERTIES LANGUAGE HIP)
  add_library(gpu_cache STATIC ${gpu_cache_src})
I'm aware we have a hip_add_library but I'm not sure if or when it is preferred over just add_library.
Huh, the HIP docs just use add_library: https://rocm.docs.amd.com/en/latest/conceptual/cmake-packages.html#using-hip-in-cmake. I can look into hip_add_library, although this appears to work.
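For reference, a minimal sketch of the plain add_library route the linked ROCm docs describe. It is not the DGL CMakeLists; it assumes CMake 3.21 or newer (the first release where HIP is a first-class language) and uses a placeholder source list.

cmake_minimum_required(VERSION 3.21)   # HIP language support arrived in 3.21
project(gpu_cache_demo LANGUAGES CXX HIP)

# Placeholder standing in for the hipified gpu_cache sources.
set(gpu_cache_src hip/nv_gpu_cache.cu)

# Mark the files as HIP so CMake compiles them with the HIP toolchain;
# a plain add_library is then enough -- no hip_add_library macro needed.
set_source_files_properties(${gpu_cache_src} PROPERTIES LANGUAGE HIP)
add_library(gpu_cache STATIC ${gpu_cache_src})
target_include_directories(gpu_cache PRIVATE "third_party/HugeCTR/gpu_cache/include")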
Adds CMake options to find ROCM (for a modern ROCM version) and configure it to build the source files with CUDA extensions.
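A hedged sketch of the kind of option and package plumbing that summary describes. The USE_ROCM name mirrors the diff above; the install path and imported target names are assumptions about a standard ROCm install, not the exact changes in this PR.

option(USE_ROCM "Build DGL's GPU kernels with ROCm/HIP instead of CUDA" OFF)

if(USE_ROCM)
  # Modern ROCm installs ship CMake package config files under /opt/rocm.
  list(APPEND CMAKE_PREFIX_PATH "/opt/rocm")
  enable_language(HIP)
  find_package(hip REQUIRED)               # provides hip::host / hip::device targets
  list(APPEND DGL_LINKER_LIBS hip::host)
endif()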
In my porting, I was finding it really annoying that everything in DGL was hardcoded to the directory build/, as it made building with different configurations painful. The build also created build/ directories in various source subdirectories (which meant cleaning and rebuilding was fraught). I modified things so that all sub-builds happen in the main build directory. There were also some bugs in the sub-build shell scripts and I cleaned them up a bit to make them more robust.
Not all of this is strictly required for ROCM to build, so we might want to strip it out. I already stripped out the changes I made to silence various warnings for that reason.
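As an illustration of the single-build-directory idea, a minimal sketch follows. The sub-project path is a placeholder, and DGL's real sub-builds go through shell scripts rather than add_subdirectory, so this only shows the general CMake mechanism.

# Keep a sub-project's objects inside the top-level build tree by giving
# add_subdirectory an explicit binary directory, instead of letting a helper
# script create a build/ directory next to the sub-project's sources.
# "third_party/some_subproject" is a placeholder path.
add_subdirectory(
  "${CMAKE_SOURCE_DIR}/third_party/some_subproject"
  "${CMAKE_BINARY_DIR}/third_party/some_subproject")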