libtorch: new recipe #24759
base: master
Conversation
XNNPACK was not correctly added to project dependencies. Prefer namespaced targets, if possible.
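As a sketch of what consuming a namespaced target could look like from the recipe side (the actual target name exported by the Conan xnnpack recipe may differ; `xnnpack::xnnpack` is an assumption here, not taken from the PR), the recipe's `generate()` could remap the dependency to the name PyTorch's CMake code expects:

```python
from conan.tools.cmake import CMakeDeps

# Hypothetical fragment: force a namespaced CMake target name for the
# xnnpack dependency so patched PyTorch CMake code can link against it.
def generate(self):
    deps = CMakeDeps(self)
    deps.set_property("xnnpack", "cmake_target_name", "xnnpack::xnnpack")
    deps.generate()
```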
Hooks produced the following warnings for commit 87a1370: libtorch/2.4.0@#f680755600363ae5e29186ad5b798792

Conan v1 pipeline ❌ Failure in build 6.
Note: To save resources, CI tries to finish as soon as an error is found. For this reason you might find that not all the references have been launched, or not all the configurations for a given reference. Also, take into account that we cannot guarantee the order of execution, as it depends on CI workload and worker availability.

Conan v2 pipeline ❌ Failure in build 8.
The v2 pipeline failed. Please review the errors and note that passing it is required for pull requests to be merged. In case this recipe is still not ported to Conan 2.x, please ping.
Hello @valgur, thanks for this amazing PR. Do you plan to continue working on it? 🤞 Having libtorch in Conan would be so neat. Since OpenMPI is now available, do you plan to let the user enable the distributed feature?
```python
tc.variables["BLAS"] = self._blas_cmake_option_value
tc.variables["MSVC_Z7_OVERRIDE"] = False
```
Incidentally, this also needs:

```python
tc.variables["CMAKE_CXX_EXTENSIONS"] = True
```

I tested this while running a build that uses a `compiler.cppstd`. If it uses a non-GNU standard (which for other packages it must be), ATen breaks with the same error as pytorch/QNNPACK#67. This converts `-std=c++17`, for example, to `-std=gnu++17`. It's probably not necessary on Windows, but it also shouldn't hurt.
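To make the fix concrete, here is a minimal sketch of the relevant `generate()` fragment, assuming the recipe already creates a `CMakeToolchain` (the surrounding structure is illustrative):

```python
from conan.tools.cmake import CMakeToolchain

def generate(self):
    tc = CMakeToolchain(self)
    # ATen needs GNU language extensions: without this, a strict
    # -std=c++17 coming from compiler.cppstd breaks the build
    # (see pytorch/QNNPACK#67). This makes CMake emit -std=gnu++17.
    tc.variables["CMAKE_CXX_EXTENSIONS"] = True
    tc.generate()
```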
Thanks! TODO: add `gnu_extensions=True` to `check_min_cppstd()`.
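For reference, a hedged sketch of what that `validate()` call could look like (`check_min_cppstd` from `conan.tools.build` accepts a `gnu_extensions` flag in Conan 2; the required standard level here is assumed to be 17):

```python
from conan.tools.build import check_min_cppstd

def validate(self):
    # Require C++17 with GNU extensions (-std=gnu++17), since ATen
    # does not build in strict -std=c++17 mode.
    check_min_cppstd(self, 17, gnu_extensions=True)
```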
```python
    whole_archive = f"-WHOLEARCHIVE:{lib_fullpath}"
else:
    lib_fullpath = os.path.join(lib_folder, f"lib{libname}.a")
    whole_archive = f"-Wl,--whole-archive,{lib_fullpath},--no-whole-archive"
```
Thanks for your work on this PR. I am not using this library directly, but found it by following some GitHub issues on whole-archive linking.
For this line: I wonder if it is possible to do this with `-Wl,--push-state,--pop-state`? See e.g.
https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_LINK_LIBRARY_USING_FEATURE.html#loading-a-whole-static-library
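As a self-contained sketch of the difference, the flag string could be built either with the `--whole-archive`/`--no-whole-archive` pair (as in the PR) or with `--push-state`/`--pop-state`, which restores all linker state afterwards instead of only toggling one option back. The function name and structure below are illustrative, not from the recipe:

```python
import os

def whole_archive_flags(lib_folder, libname, msvc=False, use_push_state=False):
    """Build linker flags that force every object of a static library in.

    With use_push_state=True, the GNU linker's --push-state/--pop-state
    brackets the library so that *all* linker state is restored after it,
    rather than only switching --whole-archive back off.
    """
    if msvc:
        lib_fullpath = os.path.join(lib_folder, f"{libname}.lib")
        return f"-WHOLEARCHIVE:{lib_fullpath}"
    lib_fullpath = os.path.join(lib_folder, f"lib{libname}.a")
    if use_push_state:
        return f"-Wl,--push-state,--whole-archive,{lib_fullpath},--pop-state"
    return f"-Wl,--whole-archive,{lib_fullpath},--no-whole-archive"
```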
```python
self.options.rm_safe("with_mkldnn")
if not is_apple_os(self) or self.settings.os not in ["Linux", "Android"]:
    del self.options.with_nnpack
self.options.with_itt = self.settings.arch in ["x86", "x86_64"]
```
This line overrides the manually chosen value of `with_itt`. So even if it is set to `False`, this line sets it to `True` on x86/x86_64, which may not be the expected behaviour.
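To avoid clobbering a user-provided value, the arch-based default could be applied only when the option is still unset. A self-contained model of the intended logic (plain Python, not the Conan options API):

```python
def effective_with_itt(user_value, arch):
    """Return the with_itt value the recipe should actually use.

    ITT is only available on x86/x86_64, so it is forced off elsewhere;
    on x86/x86_64 an explicit user choice is respected instead of being
    unconditionally overwritten, and only an unset option (None) falls
    back to the default of True.
    """
    if arch not in ("x86", "x86_64"):
        return False
    if user_value is None:  # user did not set the option
        return True
    return user_value
```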
```python
@property
def _use_nnpack_family(self):
    return any(self.options.get_safe(f"with_{name}") for name in ["nnpack", "qnnpack", "xnnpack"])
```
`with_xnnpack` cannot be deleted, because it is used unsafely on line 284: `if self.options.with_xnnpack:`
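One way to make such a use safe on platforms where the option has been deleted is `get_safe()`, which returns a default instead of raising. A minimal stand-alone model of that behaviour (this mimics Conan's options object; it is not the real API):

```python
class Options:
    """Tiny model of Conan's options: attribute access raises when an
    option does not exist, while get_safe() returns a default instead."""

    def __init__(self, **values):
        self._values = values

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        if name.startswith("_") or name not in self._values:
            raise AttributeError(f"option '{name}' does not exist")
        return self._values[name]

    def get_safe(self, name, default=None):
        return self._values.get(name, default)

opts = Options(with_nnpack=True)  # with_xnnpack was deleted on this platform
# opts.with_xnnpack would raise AttributeError; get_safe degrades gracefully:
value = opts.get_safe("with_xnnpack", False)
```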
Thanks for the comment, but I don't see the `with_xnnpack` option being deleted anywhere? This specific line queries the value, it does not remove the option.
Sorry for the missing context. `with_xnnpack` must be deleted for the macOS build, since XNNPACK is not yet supported on Mac. In that case this line might be a problem.
I managed to build libtorch on Linux and macOS ARM with some modifications based on @valgur's work.
If you are interested, you can see the actual version:
https://github.com/joda01/imagec-recipes/actions/runs/12983296316
Thanks! I'll definitely take a look.
```python
    self.requires("vulkan-loader/1.3.268.0")
if self.options.with_mimalloc:
    self.requires("mimalloc/2.1.7")
```
Build on macOS needs pybind:

```python
if is_apple_os(self):
    self.requires("pybind11/2.13.6")
```
Are you sure it's required specifically on macOS? I might have missed it on Linux due to having it available on my system.
I tried it without, but had no success. You can have a look at
https://github.com/joda01/imagec-recipes/actions/runs/12983296316
```python
# Keep only a restricted set of vendored dependencies.
# Do it before build() to limit the amount of files to copy.
allowed = ["pocketfft", "kineto", "miniz-2.1.0"]
```
For the macOS build, some more third-party libs are needed (using the recipe's existing `is_apple_os(self)` check in place of the original `self.is_mac_os` attribute, which is not a Conan API):

```python
allowed = ["pocketfft", "kineto", "miniz-2.1.0"]
if is_apple_os(self):
    allowed = ["pocketfft", "kineto", "miniz-2.1.0", "opentelemetry-cpp", "protobuf"]
```
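The allow-listing above can be expressed as a small pure function, which also makes the macOS special case easy to test in isolation. The directory names come from this thread; the function itself is illustrative, not the recipe's code:

```python
def vendored_dirs_to_remove(third_party_dirs, apple_os=False):
    """Return the vendored third_party directories to delete,
    keeping only an allow-listed subset."""
    allowed = {"pocketfft", "kineto", "miniz-2.1.0"}
    if apple_os:
        # macOS reportedly still needs these vendored copies for now.
        allowed |= {"opentelemetry-cpp", "protobuf"}
    return sorted(d for d in third_party_dirs if d not in allowed)
```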
Hmm... I would prefer to unvendor these and use Conan versions. I'll have to do more testing on macOS.
I totally agree with you! It's a fast workaround.
@valgur Thanks a lot for your initial work providing a libtorch recipe!
Summary
Changes to recipe: libtorch/2.4.0
Motivation
Tensors and Dynamic neural networks in Python with strong GPU acceleration.
https://github.com/pytorch/pytorch
Details
Continues from #5100 by @SpaceIm.
CUDA, HIP and SYCL backends are currently disabled since the PR is complex enough already and these can be addressed in a follow-up PR. Vulkan and Metal (TODO) should be usable as GPU backends currently.
The distributed feature is disabled as well, to limit the scope and due to openmpi not yet being available (#18980). Android and iOS builds are probably broken and need testing.
Non-OpenBLAS BLAS backends are probably not usable due to OpenBLAS being required for LAPACK. A separate LAPACK recipe would be required to fix that (such as #23798).
Closes #6861.
TODO: pocketfft and unvendor.