Align directories and codegen with stock PyTorch #310

Merged · 68 commits · Sep 26, 2024

Commits
3314db8  Buffer src file for easier build (xytintel, May 22, 2024)
ea7e949  v0.0, build-able version (ZhiweiYan-96, May 23, 2024)
b686646  Inherit AT_PER_OPERATOR_HEADERS (ZhiweiYan-96, May 28, 2024)
eda1f6a  [cmake] softlink to templates in main lib (ZhiweiYan-96, Jun 4, 2024)
30fc0be  Align file structure to PyTorch (ZhiweiYan-96, Jun 13, 2024)
7b75f9a  Remove useless code (ZhiweiYan-96, Jun 13, 2024)
ab38df7  [Not buildable] big kernels, error include (ZhiweiYan-96, Jun 17, 2024)
09b9000  Buildable version (ZhiweiYan-96, Jun 19, 2024)
311be29  remove useless comments (ZhiweiYan-96, Jun 19, 2024)
3ac45a7  Add tool for rm headers (ZhiweiYan-96, Jun 19, 2024)
056d91f  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Jun 25, 2024)
65e99fe  typo (ZhiweiYan-96, Jun 25, 2024)
9681a66  fix div, arrange, and, or, clamp_scalar, max, min, flip, acos, acosh,… (ZhiweiYan-96, Jun 26, 2024)
9f4333b  fix random, max/min_values, SoftMax, scatter/fill/add/reduce, index (ZhiweiYan-96, Jun 27, 2024)
54b6f3d  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Jun 28, 2024)
41cb936  stash change for backup (ZhiweiYan-96, Jul 1, 2024)
b86c4a9  build version (ZhiweiYan-96, Jul 3, 2024)
e4800a4  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Jul 5, 2024)
f0a0a4e  Reuse pytorch native backendwhitelist strategy (ZhiweiYan-96, Jul 9, 2024)
95b1935  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Jul 11, 2024)
fde70b3  Remove index.tensor (ZhiweiYan-96, Jul 15, 2024)
0bfdf0a  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Jul 16, 2024)
242fa9c  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Jul 17, 2024)
4eb271a  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Jul 17, 2024)
5ce3ac0  fallback std_var_stub (ZhiweiYan-96, Jul 24, 2024)
b01ba7c  Skip std var acc failure (ZhiweiYan-96, Jul 30, 2024)
59af0a6  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Aug 7, 2024)
8a911a0  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Aug 7, 2024)
d30c054  fix xpu::pair template argument substitution issue (ZhiweiYan-96, Aug 8, 2024)
d28f45d  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Aug 8, 2024)
e98846f  coding style, remove cuda in yaml (ZhiweiYan-96, Aug 8, 2024)
25650e0  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Aug 12, 2024)
dcec695  rm soft link (ZhiweiYan-96, Aug 12, 2024)
bbd50fc  Enable aten::histogram (majing921201, Aug 12, 2024)
c2673e9  fix clang-format (majing921201, Aug 12, 2024)
980fc61  fix build error (ZhiweiYan-96, Aug 13, 2024)
4d0a011  add sort (majing921201, Aug 13, 2024)
aedd878  fix multiple definitions issue (ZhiweiYan-96, Aug 13, 2024)
2ef7e46  Merge branch 'zhiwei/codegen' of https://github.com/intel/torch-xpu-o… (ZhiweiYan-96, Aug 13, 2024)
6b7751f  fix link issue (ZhiweiYan-96, Aug 13, 2024)
d7a2d96  enable index_fill (majing921201, Aug 14, 2024)
52a0bb5  bug fix (guoyejun, Aug 14, 2024)
bb4ba3f  Merge branch 'zhiwei/codegen' of https://github.com/intel/torch-xpu-o… (guoyejun, Aug 14, 2024)
6b53fad  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Aug 16, 2024)
99d51f9  bug fix (ZhiweiYan-96, Aug 16, 2024)
9f24f01  reflection_pad1d, index_fill, sign, remainder (ZhiweiYan-96, Aug 16, 2024)
562e4ed  use keepdim arguments in aminmax (ZhiweiYan-96, Aug 16, 2024)
67fee75  remove contiguous in aminmax (ZhiweiYan-96, Aug 16, 2024)
cfa47d5  skip unsafe_masked_index_put_accumulate_xpu (ZhiweiYan-96, Aug 19, 2024)
5a7d774  rm std_var skip list (ZhiweiYan-96, Aug 19, 2024)
c93ac3c  Remove used functions (leizhenyuan, Aug 21, 2024)
3848570  initialize grad_input to zero in upsamplenearest (ZhiweiYan-96, Aug 26, 2024)
2e364fa  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Aug 27, 2024)
5badf42  Use xpu nonzero kernel (ZhiweiYan-96, Aug 29, 2024)
283c6f7  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Aug 29, 2024)
33cebd2  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Sep 11, 2024)
ab1cba4  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Sep 11, 2024)
953c8c9  skip scatter add uts (ZhiweiYan-96, Sep 12, 2024)
948d01c  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Sep 12, 2024)
c0e68cb  skip multinomial cases (ZhiweiYan-96, Sep 12, 2024)
ad7f5b8  skip scatter_add uts (ZhiweiYan-96, Sep 13, 2024)
14c1b24  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Sep 13, 2024)
bd194fb  codegen (fengyuan14, Sep 20, 2024)
1dd030f  Delete yaml/templates (chunhuanMeng, Sep 24, 2024)
9bd57e7  Merge branch 'main' into zhiwei/codegen (chunhuanMeng, Sep 25, 2024)
85fb11b  fix soft link command (chunhuanMeng, Sep 25, 2024)
e0c8ac0  fix windows soft link command (chunhuanMeng, Sep 25, 2024)
f71cfcb  Merge branch 'main' into zhiwei/codegen (ZhiweiYan-96, Sep 26, 2024)
Files changed

.lintrunner.toml: 4 changes (2 additions, 2 deletions)

@@ -56,8 +56,8 @@ code = 'CLANGFORMAT'
 include_patterns = [
     'src/aten/*.h',
     'src/aten/*.cpp',
-    'src/aten/sycl/*.h',
-    'src/aten/sycl/*.cpp',
+    'src/ATen/native/xpu/sycl/*.h',
+    'src/ATen/native/xpu/sycl/*.cpp',
     'aten/src/ATen/*.h',
     'aten/src/ATen/mps/**/*.mm',
     'aten/src/ATen/xpu/**/*.h',
cmake/BuildFlags.cmake: 4 changes (4 additions, 0 deletions)

@@ -47,6 +47,10 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" OR CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
   list(APPEND SYCL_HOST_FLAGS -O0)
 endif(CMAKE_BUILD_TYPE MATCHES Debug)
 
+if(USE_PER_OPERATOR_HEADERS)
+  list(APPEND SYCL_HOST_FLAGS -DAT_PER_OPERATOR_HEADERS)
+endif()
+
 # -- Kernel flags (SYCL_KERNEL_OPTIONS)
 # The fast-math will be enabled by default in SYCL compiler.
 # Refer to [https://clang.llvm.org/docs/UsersManual.html#cmdoption-fno-fast-math]
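For background: AT_PER_OPERATOR_HEADERS selects between ATen's monolithic headers and its fine-grained per-operator headers. A minimal sketch of the include pattern the define toggles in an ATen source file, using headers that appear in this PR's SparseTensor.cpp diff below (the exact ops headers depend on which operators a translation unit calls):

// What -DAT_PER_OPERATOR_HEADERS switches between in an ATen source file.
#ifndef AT_PER_OPERATOR_HEADERS
// Monolithic headers: declare every ATen operator, slower to compile.
#include <ATen/Functions.h>
#include <ATen/NativeFunctions.h>
#else
// Per-operator headers: only the declarations this file actually needs.
#include <ATen/ops/_nnz_native.h>
#endif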
cmake/Codegen.cmake: 64 changes (59 additions, 5 deletions)

@@ -3,7 +3,7 @@ if(Codegen_GPU_cmake_included)
 endif()
 set(Codegen_GPU_cmake_included true)
 
-set(BUILD_TORCH_XPU_ATEN_GENERATED "${CMAKE_BINARY_DIR}/aten/src/ATen/xpu")
+set(BUILD_TORCH_XPU_ATEN_GENERATED "${CMAKE_BINARY_DIR}/xpu/ATen/")
 file(MAKE_DIRECTORY ${BUILD_TORCH_XPU_ATEN_GENERATED})
 
 set(RegisterXPU_PATH ${BUILD_TORCH_XPU_ATEN_GENERATED}/RegisterXPU.cpp)
@@ -43,10 +43,64 @@ function(GEN_BACKEND file_yaml)
   )
 endfunction(GEN_BACKEND)
 
-GEN_BACKEND(
-  xpu_functions.yaml
-  XPUNativeFunctions.h
-  RegisterXPU.cpp)
 
+set(RegisterXPU_PATH ${BUILD_TORCH_XPU_ATEN_GENERATED}/RegisterXPU.cpp)
+set(XPUFallback_PATH ${TORCH_XPU_OPS_ROOT}/src/ATen/native/xpu/XPUFallback.template)
+function(GEN_XPU file_yaml)
+  set(generated_files "")
+  foreach(f ${ARGN})
+    list(APPEND generated_files "${BUILD_TORCH_XPU_ATEN_GENERATED}/${f}")
+  endforeach()
+  file(GLOB_RECURSE depend_files ${TORCH_XPU_OPS_ROOT}/yaml/${file_yaml})
+  set(CODEGEN_TEMPLATE ${TORCH_XPU_OPS_ROOT}/yaml/)
+
+  # Codegen prepare process
+  if(WIN32)
+    string(REPLACE "/" "\\" LinkPATH "${CODEGEN_TEMPLATE}templates")
+    string(REPLACE "/" "\\" TargetPATH "${CMAKE_SOURCE_DIR}/aten/src/ATen/templates")
+    execute_process(COMMAND cmd /c mklink /D ${LinkPATH} ${TargetPATH})
+    string(REPLACE "/" "\\" RegisterXPU_PATH_BACKSLASH "${RegisterXPU_PATH}")
+    string(REPLACE "/" "\\" XPUFallback_PATH_BACKSLASH "${XPUFallback_PATH}")
+    set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_PATH_BACKSLASH} ">>" ${RegisterXPU_PATH_BACKSLASH})
+  else()
+    execute_process(COMMAND ln -s ${CMAKE_SOURCE_DIR}/aten/src/ATen/templates ${CODEGEN_TEMPLATE}) # soft link to pytorch templates
+    set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_PATH} ">>" ${RegisterXPU_PATH})
+  endif()
+
+  add_custom_command(
+    OUTPUT ${generated_files}
+    COMMAND
+      "${PYTHON_EXECUTABLE}" -m torchgen.gen
+      --source-path ${TORCH_XPU_OPS_ROOT}/yaml/
+      --install-dir ${BUILD_TORCH_XPU_ATEN_GENERATED}
+      --per-operator-headers
+      --static-dispatch-backend
+      --backend-whitelist=XPU
+    COMMAND
+      ${REGISTER_FALLBACK_CMD}
+    # Codegen post-process
+    COMMAND "${PYTHON_EXECUTABLE}" ${TORCH_XPU_OPS_ROOT}/tools/codegen/remove_headers.py --register_xpu_path ${RegisterXPU_PATH}
+    ${SIMPLE_TRACE}
+    WORKING_DIRECTORY ${TORCH_ROOT}
+    DEPENDS
+      ${depended_files}
+      ${TORCH_XPU_OPS_ROOT}/yaml/native/${file_yaml}
+      ${XPUFallback_PATH}
+  )
+endfunction(GEN_XPU)
+
+# GEN_BACKEND(
+#   xpu_functions.yaml
+#   XPUNativeFunctions.h
+#   RegisterXPU.cpp)
+
+GEN_XPU(
+  native_functions.yaml
+  XPUFunctions.h
+  RegisterXPU.cpp
+)
+
 list(APPEND xpu_generated_src ${RegisterXPU_PATH})
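To make the custom command concrete, here is a rough sketch of what the generated RegisterXPU.cpp amounts to. This is assumed from torchgen's typical output shape, not copied from the PR's actual generated file; the operator choice and the wrapper name `wrapper_XPU_abs` are hypothetical. torchgen.gen, restricted by --backend-whitelist=XPU, emits one m.impl(...) registration per operator with an XPU entry, and the REGISTER_FALLBACK_CMD step then appends XPUFallback.template so unimplemented ops can still dispatch.

// Illustrative sketch only; not the actual generated RegisterXPU.cpp.
#include <ATen/core/Tensor.h>
#include <torch/library.h>

namespace at::native::xpu {
// Hypothetical generated wrapper for one whitelisted operator.
at::Tensor wrapper_XPU_abs(const at::Tensor& self) {
  // Placeholder body for the sketch; a real generated wrapper forwards to
  // the corresponding SYCL kernel implementation instead.
  return self.clone();
}
} // namespace at::native::xpu

// Bind the wrapper to the aten::abs schema under the XPU dispatch key.
TORCH_LIBRARY_IMPL(aten, XPU, m) {
  m.impl("abs", at::native::xpu::wrapper_XPU_abs);
}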
src/ATen/native/sparse/SparseTensor.cpp: 6 changes (1 addition, 5 deletions)

@@ -5,13 +5,9 @@
 #include <ATen/core/op_registration/adaption.h>
 #include <torch/library.h>
 
-#ifndef AT_PER_OPERATOR_HEADERS
-#include <ATen/Functions.h>
-#include <ATen/NativeFunctions.h>
-#else
 #include <ATen/ops/_nnz_native.h>
 #include <ATen/ops/_sparse_coo_tensor_with_dims_and_tensors_native.h>
-#endif
+#include <ATen/ops/_values_native.h>
 
 namespace at::native::xpu {
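A plausible reading of this change, tying it to the BuildFlags.cmake hunk above: since the build now passes -DAT_PER_OPERATOR_HEADERS whenever USE_PER_OPERATOR_HEADERS is set, the monolithic-header branch of the guard is never taken here, so the file standardizes on unconditional per-operator <ATen/ops/...> includes and adds the newly needed _values_native.h alongside them.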