
Use new API to register custom ExecuTorch kernels into ATen #2937

Closed
wants to merge 1 commit

Conversation

larryliu0820 (Contributor)

Summary:
Retry of D55713944
Use `WRAP_TO_ATEN` to register custom ExecuTorch kernels with PyTorch.

This PR adds installation logic for `libcustom_ops_aot_lib.so` to `setup.py`. This ensures that we build `libcustom_ops_aot_lib.so`, install it to the correct location (`<site-packages>/executorch/examples/models/llama2/custom_ops/libcustom_ops_aot_lib.so`), and can then load it via `torch.ops.load_library`.

Differential Revision: D55907749
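The load path described in the summary can be sketched as follows. This is a hypothetical helper, not code from the PR: only the install destination and the use of `torch.ops.load_library` come from the summary, and the torch call is left commented so the sketch stays stdlib-only.

```python
# Sketch: locate the installed AOT custom-ops library and (optionally) load it.
# The path mirrors the install destination described in the PR summary.
import os
import sysconfig


def custom_ops_lib_path() -> str:
    """Return the expected install path of libcustom_ops_aot_lib.so."""
    site_packages = sysconfig.get_paths()["purelib"]
    return os.path.join(
        site_packages,
        "executorch", "examples", "models", "llama2", "custom_ops",
        "libcustom_ops_aot_lib.so",
    )


path = custom_ops_lib_path()
print(path)
# import torch
# torch.ops.load_library(path)  # registers the custom kernels into ATen
```

Once loaded, the kernels become visible under the `torch.ops` namespace like any other registered operator.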


pytorch-bot bot commented Apr 9, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2937

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 70c9622 with merge base 6e43135:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Apr 9, 2024
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D55907749

larryliu0820 added a commit that referenced this pull request Apr 9, 2024
@larryliu0820 larryliu0820 force-pushed the export-D55907749 branch 2 times, most recently from d9bdadc to 1a67378 Compare April 9, 2024 07:31
larryliu0820 added a commit that referenced this pull request Apr 11, 2024

Reviewed By: lucylq

Differential Revision: D55907749


@facebook-github-bot (Contributor)

This pull request has been merged in c322685.

@dbort left a comment (Contributor)

The structure of the edits to setup.py looks OK. My main worry is about the maintainability of the naming.

@@ -144,6 +144,8 @@ option(EXECUTORCH_BUILD_COREML "Build the Core ML backend" OFF)

option(EXECUTORCH_BUILD_CUSTOM "Build the custom kernels" OFF)

option(EXECUTORCH_BUILD_CUSTOM_OPS_AOT "Build the custom ops lib for AOT" OFF)
This name will be too vague over time: are we only going to have one library with custom ops? Or is this llama-specific? If it is just one global library, how do we decide which kinds of custom ops get to live in it?

Yeah, I agree. These custom ops were examples/models/llama-specific at the time they were added, though things have changed since then.

@@ -144,6 +144,8 @@ option(EXECUTORCH_BUILD_COREML "Build the Core ML backend" OFF)

option(EXECUTORCH_BUILD_CUSTOM "Build the custom kernels" OFF)

option(EXECUTORCH_BUILD_CUSTOM_OPS_AOT "Build the custom ops lib for AOT" OFF)

This name is too similar to EXECUTORCH_BUILD_CUSTOM. If you ask a random person what the difference is between EXECUTORCH_BUILD_CUSTOM and EXECUTORCH_BUILD_CUSTOM_OPS_AOT, they probably wouldn't have a good answer.

Really, EXECUTORCH_BUILD_CUSTOM should probably have a more descriptive name. If that's too disruptive, please update the help strings for both options to make it more clear how they differ, and help users understand which one to enable and when.

if(EXECUTORCH_BUILD_CUSTOM)
set(EXECUTORCH_BUILD_OPTIMIZED ON)
endif()

if(EXECUTORCH_BUILD_CPUINFO)
# --- cpuinfo
set(ORIGINAL_CMAKE_POSITION_INDEPENDENT_CODE_FLAG

Since this is an internal temp var, it should have a name like _original_cmake_position_independent_code_flag. Similar below.

@@ -25,7 +25,7 @@ if(NOT TORCH_ROOT)
set(TORCH_ROOT ${EXECUTORCH_ROOT}/third-party/pytorch)
endif()

set(_common_compile_options -Wno-deprecated-declarations)
set(_common_compile_options -Wno-deprecated-declarations -fPIC)

Please add a comment explaining why this sets -fPIC manually instead of using CMAKE_POSITION_INDEPENDENT_CODE.

larryliu0820 (Author)

I try to avoid using CMAKE_POSITION_INDEPENDENT_CODE as much as possible since it's a global variable. We have to use it for third-party libs because we can't change their CMakeLists.txt, but for this one we can tweak the compile options for this specific lib.
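The per-target alternative described here could look like the following sketch (the target name `custom_ops_aot_lib` is illustrative, not taken from the PR):

```cmake
# Global approach (avoided here): flips position-independent code on for
# every target in the build, which is why it is reserved for third-party
# libs whose CMakeLists.txt we cannot edit.
# set(CMAKE_POSITION_INDEPENDENT_CODE ON)

# Per-target approach: only this library is compiled with -fPIC.
target_compile_options(custom_ops_aot_lib PRIVATE -fPIC)

# Equivalent, using the per-target property instead of a raw flag:
set_target_properties(custom_ops_aot_lib
                      PROPERTIES POSITION_INDEPENDENT_CODE ON)
```

The property form is generally preferred since CMake then picks the correct flag spelling for each compiler.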

@@ -88,6 +88,11 @@ def pybindings(cls) -> bool:
def xnnpack(cls) -> bool:
return cls._is_env_enabled("EXECUTORCH_BUILD_XNNPACK", default=False)

@classmethod
@property
def llama_custom_ops(cls) -> bool:

Would be good to remove `llama_` here since that name isn't part of the env vars.
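The env-var-backed flag under discussion, with the suggested rename applied, might look like the sketch below. The class and helper names are illustrative (the PR's actual class and `_is_env_enabled` implementation may differ); only the env var name `EXECUTORCH_BUILD_CUSTOM_OPS_AOT` comes from the diff context.

```python
# Sketch: a build flag read from an environment variable, named after the
# env var itself rather than with a "llama_" prefix.
import os


class ShouldBuild:
    @classmethod
    def _is_env_enabled(cls, name: str, default: bool = False) -> bool:
        """Treat common truthy strings as 'on'; fall back to the default."""
        value = os.environ.get(name)
        if value is None:
            return default
        return value.lower() in ("1", "on", "true", "yes")

    @classmethod
    def custom_ops_aot(cls) -> bool:  # was: llama_custom_ops
        return cls._is_env_enabled("EXECUTORCH_BUILD_CUSTOM_OPS_AOT", default=False)


os.environ["EXECUTORCH_BUILD_CUSTOM_OPS_AOT"] = "ON"
print(ShouldBuild.custom_ops_aot())
```

Note the original diff stacks `@classmethod` with `@property`; that combination was later removed from CPython (3.13), so a plain classmethod is used in this sketch.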

@mergennachin mergennachin mentioned this pull request Apr 26, 2024
Labels: ciflow/trunk, CLA Signed, fb-exported, Merged