* Upgrade presets for PyTorch 2.5.0
saudet committed Oct 26, 2024
1 parent d336eaf commit 0d3cf6d
Showing 260 changed files with 2,471 additions and 1,783 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -9,7 +9,7 @@
* Build FFmpeg with zimg to enable zscale filter ([pull #1481](https://github.com/bytedeco/javacpp-presets/pull/1481))
* Enable PulseAudio support for FFmpeg on Linux ([pull #1472](https://github.com/bytedeco/javacpp-presets/pull/1472))
* Virtualize `btCollisionWorld`, `btOverlapFilterCallback`, `btOverlapCallback` from Bullet Physics SDK ([pull #1475](https://github.com/bytedeco/javacpp-presets/pull/1475))
-* Upgrade presets for OpenCV 4.10.0, FFmpeg 7.1, Spinnaker 4.0.0.116 ([pull #1524](https://github.com/bytedeco/javacpp-presets/pull/1524)), DNNL 3.5.3, OpenBLAS 0.3.28, CMINPACK 1.3.9, GSL 2.8, CPython 3.13.0, NumPy 2.1.2, SciPy 1.14.1, LLVM 19.1.2, LibRaw 0.21.2 ([pull #1520](https://github.com/bytedeco/javacpp-presets/pull/1520)), Tesseract 5.4.1, libffi 3.4.6, CUDA 12.6.0, cuDNN 9.3.0, NCCL 2.22.3, nvCOMP 4.0.0, OpenCL 3.0.16, NVIDIA Video Codec SDK 12.2.72, PyTorch 2.4.0 ([pull #1466](https://github.com/bytedeco/javacpp-presets/pull/1466)), SentencePiece 0.2.0, TensorFlow Lite 2.17.0, TensorRT 10.3.0.26, Triton Inference Server 2.48.0, ONNX 1.17.0, ONNX Runtime 1.19.2, TVM 0.17.0, and their dependencies
+* Upgrade presets for OpenCV 4.10.0, FFmpeg 7.1, Spinnaker 4.0.0.116 ([pull #1524](https://github.com/bytedeco/javacpp-presets/pull/1524)), DNNL 3.5.3, OpenBLAS 0.3.28, CMINPACK 1.3.9, GSL 2.8, CPython 3.13.0, NumPy 2.1.2, SciPy 1.14.1, LLVM 19.1.2, LibRaw 0.21.2 ([pull #1520](https://github.com/bytedeco/javacpp-presets/pull/1520)), Tesseract 5.4.1, libffi 3.4.6, CUDA 12.6.0, cuDNN 9.3.0, NCCL 2.22.3, nvCOMP 4.0.0, OpenCL 3.0.16, NVIDIA Video Codec SDK 12.2.72, PyTorch 2.5.0 ([pull #1466](https://github.com/bytedeco/javacpp-presets/pull/1466)), SentencePiece 0.2.0, TensorFlow Lite 2.17.0, TensorRT 10.3.0.26, Triton Inference Server 2.48.0, ONNX 1.17.0, ONNX Runtime 1.19.2, TVM 0.17.0, and their dependencies

### January 29, 2024 version 1.5.10
* Introduce `macosx-arm64` builds for PyTorch ([pull #1463](https://github.com/bytedeco/javacpp-presets/pull/1463))
2 changes: 1 addition & 1 deletion README.md
@@ -223,7 +223,7 @@ Each child module in turn relies by default on the included [`cppbuild.sh` scrip
* NVIDIA Video Codec SDK 12.2.x https://developer.nvidia.com/nvidia-video-codec-sdk
* OpenCL 3.0.x https://github.com/KhronosGroup/OpenCL-ICD-Loader
* MXNet 1.9.x https://github.com/apache/incubator-mxnet
-* PyTorch 2.4.x https://github.com/pytorch/pytorch
+* PyTorch 2.5.x https://github.com/pytorch/pytorch
* SentencePiece 0.2.0 https://github.com/google/sentencepiece
* TensorFlow 1.15.x https://github.com/tensorflow/tensorflow
* TensorFlow Lite 2.17.x https://github.com/tensorflow/tensorflow
2 changes: 1 addition & 1 deletion platform/pom.xml
@@ -292,7 +292,7 @@
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
-<version>2.4.0-${project.version}</version>
+<version>2.5.0-${project.version}</version>
</dependency>
<dependency>
<groupId>org.bytedeco</groupId>
6 changes: 3 additions & 3 deletions pytorch/README.md
@@ -9,7 +9,7 @@ Introduction
------------
This directory contains the JavaCPP Presets module for:

-* PyTorch 2.4.0 https://pytorch.org/
+* PyTorch 2.5.0 https://pytorch.org/

Please refer to the parent README.md file for more detailed information about the JavaCPP Presets.

@@ -48,14 +48,14 @@ We can use [Maven 3](http://maven.apache.org/) to download and install automatic
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
-<version>2.4.0-1.5.11-SNAPSHOT</version>
+<version>2.5.0-1.5.11-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
-<version>2.4.0-1.5.11-SNAPSHOT</version>
+<version>2.5.0-1.5.11-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
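The version strings touched throughout this commit follow the presets' convention of prefixing the upstream library version to the JavaCPP Presets version. A minimal shell sketch of how a coordinate such as `2.5.0-1.5.11-SNAPSHOT` is composed (the variable names here are illustrative, not taken from the build scripts):

```shell
# Compose the Maven artifact version used by the presets:
# <upstream PyTorch version>-<JavaCPP Presets version>
PYTORCH_VERSION=2.5.0
PRESETS_VERSION=1.5.11-SNAPSHOT
echo "${PYTORCH_VERSION}-${PRESETS_VERSION}"
```

This is why a single upstream upgrade fans out across every pom.xml in the diff: only the first component of each `<version>` element changes.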
6 changes: 4 additions & 2 deletions pytorch/cppbuild.sh
@@ -38,7 +38,7 @@ if [[ $PLATFORM == windows* ]]; then
export PYTHON_BIN_PATH=$(which python.exe)
fi

-PYTORCH_VERSION=2.4.1
+PYTORCH_VERSION=2.5.0

export PYTORCH_BUILD_VERSION="$PYTORCH_VERSION"
export PYTORCH_BUILD_NUMBER=1
@@ -129,7 +129,7 @@ mkdir -p "$PYTHON_INSTALL_PATH"

export CFLAGS="-I$CPYTHON_PATH/include/ -I$PYTHON_LIB_PATH/include/python/ -L$CPYTHON_PATH/lib/ -L$CPYTHON_PATH/libs/"
export PYTHONNOUSERSITE=1
-$PYTHON_BIN_PATH -m pip install --target=$PYTHON_LIB_PATH setuptools==67.6.1 pyyaml==6.0.1 typing_extensions==4.8.0
+$PYTHON_BIN_PATH -m pip install --target=$PYTHON_LIB_PATH setuptools==67.6.1 pyyaml==6.0.2 typing_extensions==4.8.0
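The script sets `PYTHONNOUSERSITE=1` before invoking pip so that packages from the user's per-user site-packages cannot leak into the build. A small standalone sketch of what that environment variable does (assuming a standard CPython is on the PATH as `python3`):

```shell
# PYTHONNOUSERSITE=1 makes CPython ignore the per-user site-packages
# directory, so only the pinned packages installed into $PYTHON_LIB_PATH
# are visible to the PyTorch build. sys.flags.no_user_site reports it as 1.
PYTHONNOUSERSITE=1 python3 -c 'import sys; print(sys.flags.no_user_site)'
```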

case $PLATFORM in
linux-x86)
@@ -184,6 +184,7 @@ sedinplace 's/ build_deps()/ build_deps(); sys.exit()/g' setup.py
sedinplace 's/AND NOT DEFINED ENV{CUDAHOSTCXX}//g' cmake/public/cuda.cmake
sedinplace 's/CMAKE_CUDA_FLAGS "/CMAKE_CUDA_FLAGS " --use-local-env /g' CMakeLists.txt

+sedinplace '/pycore_opcode.h/d' torch/csrc/dynamo/cpython_defs.c functorch/csrc/dim/dim*
sedinplace 's/using ExpandingArrayDouble/public: using ExpandingArrayDouble/g' ./torch/csrc/api/include/torch/nn/options/pooling.h

# allow setting the build directory and passing CUDA options
@@ -192,6 +193,7 @@ sedinplace 's/var.startswith(("BUILD_", "USE_", "CMAKE_"))/var.startswith(("BUIL

# allow resizing std::vector<at::indexing::TensorIndex>
sedinplace 's/TensorIndex(c10::nullopt_t)/TensorIndex(c10::nullopt_t none = None)/g' aten/src/ATen/TensorIndexing.h
+sedinplace 's/TensorIndex(std::nullopt_t)/TensorIndex(std::nullopt_t none = None)/g' aten/src/ATen/TensorIndexing.h
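PyTorch 2.5 migrated many `c10::optional` uses to `std::optional`, so the build script now patches both spellings of the `TensorIndex` constructor to give it a default argument. A hedged, standalone sketch of what this substitution does, using plain GNU sed on a scratch file instead of the repository's `sedinplace` helper (the input line is illustrative, not copied from TensorIndexing.h):

```shell
# Reproduce the substitution applied to TensorIndexing.h: give the
# std::nullopt_t constructor a default argument so that
# std::vector<at::indexing::TensorIndex> can be resized from Java.
# Note: sed -i without a suffix argument is GNU sed syntax.
tmp="$(mktemp)"
echo 'TensorIndex(std::nullopt_t)' > "$tmp"
sed -i 's/TensorIndex(std::nullopt_t)/TensorIndex(std::nullopt_t none = None)/g' "$tmp"
cat "$tmp"
rm -f "$tmp"
```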

# add missing declarations
sedinplace '/using ExampleType = ExampleType_;/a\
2 changes: 1 addition & 1 deletion pytorch/platform/gpu/pom.xml
@@ -12,7 +12,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
-<version>2.4.0-${project.parent.version}</version>
+<version>2.5.0-${project.parent.version}</version>
<name>JavaCPP Presets Platform GPU for PyTorch</name>

<properties>
2 changes: 1 addition & 1 deletion pytorch/platform/pom.xml
@@ -12,7 +12,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
-<version>2.4.0-${project.parent.version}</version>
+<version>2.5.0-${project.parent.version}</version>
<name>JavaCPP Presets Platform for PyTorch</name>

<properties>
2 changes: 1 addition & 1 deletion pytorch/pom.xml
@@ -11,7 +11,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch</artifactId>
-<version>2.4.0-${project.parent.version}</version>
+<version>2.5.0-${project.parent.version}</version>
<name>JavaCPP Presets for PyTorch</name>

<dependencies>
4 changes: 2 additions & 2 deletions pytorch/samples/pom.xml
@@ -12,14 +12,14 @@
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
-<version>2.4.0-1.5.11-SNAPSHOT</version>
+<version>2.5.0-1.5.11-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
-<version>2.4.0-1.5.11-SNAPSHOT</version>
+<version>2.5.0-1.5.11-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
@@ -45,4 +45,8 @@ public class AcceleratorHooksInterface extends Pointer {
public native @Cast("c10::DeviceIndex") byte exchangeDevice(@Cast("c10::DeviceIndex") byte device);

public native @Cast("c10::DeviceIndex") byte maybeExchangeDevice(@Cast("c10::DeviceIndex") byte device);

+public native @Cast("bool") boolean isPinnedPtr(@Const Pointer data);

+public native Allocator getPinnedMemoryAllocator();
}
@@ -33,6 +33,6 @@ public class AdaptiveAvgPool1dImplCloneable extends Module {
* and submodules in the cloned module are different from those in the
* original module. */
public native @SharedPtr("torch::nn::Module") @ByVal Module clone(
-@Const @ByRef(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+@Const @ByRef(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @SharedPtr("torch::nn::Module") @ByVal Module clone();
}
@@ -33,6 +33,6 @@ public class AdaptiveAvgPool2dImplCloneable extends Module {
* and submodules in the cloned module are different from those in the
* original module. */
public native @SharedPtr("torch::nn::Module") @ByVal Module clone(
-@Const @ByRef(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+@Const @ByRef(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @SharedPtr("torch::nn::Module") @ByVal Module clone();
}
@@ -33,6 +33,6 @@ public class AdaptiveAvgPool3dImplCloneable extends Module {
* and submodules in the cloned module are different from those in the
* original module. */
public native @SharedPtr("torch::nn::Module") @ByVal Module clone(
-@Const @ByRef(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+@Const @ByRef(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @SharedPtr("torch::nn::Module") @ByVal Module clone();
}
@@ -33,6 +33,6 @@ public class AdaptiveLogSoftmaxWithLossImplCloneable extends Module {
* and submodules in the cloned module are different from those in the
* original module. */
public native @SharedPtr("torch::nn::Module") @ByVal Module clone(
-@Const @ByRef(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+@Const @ByRef(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @SharedPtr("torch::nn::Module") @ByVal Module clone();
}
@@ -33,6 +33,6 @@ public class AdaptiveMaxPool1dImplCloneable extends Module {
* and submodules in the cloned module are different from those in the
* original module. */
public native @SharedPtr("torch::nn::Module") @ByVal Module clone(
-@Const @ByRef(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+@Const @ByRef(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @SharedPtr("torch::nn::Module") @ByVal Module clone();
}
@@ -33,6 +33,6 @@ public class AdaptiveMaxPool2dImplCloneable extends Module {
* and submodules in the cloned module are different from those in the
* original module. */
public native @SharedPtr("torch::nn::Module") @ByVal Module clone(
-@Const @ByRef(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+@Const @ByRef(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @SharedPtr("torch::nn::Module") @ByVal Module clone();
}
@@ -33,6 +33,6 @@ public class AdaptiveMaxPool3dImplCloneable extends Module {
* and submodules in the cloned module are different from those in the
* original module. */
public native @SharedPtr("torch::nn::Module") @ByVal Module clone(
-@Const @ByRef(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+@Const @ByRef(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @SharedPtr("torch::nn::Module") @ByVal Module clone();
}
@@ -33,6 +33,6 @@ public class AlphaDropoutImplCloneable extends Module {
* and submodules in the cloned module are different from those in the
* original module. */
public native @SharedPtr("torch::nn::Module") @ByVal Module clone(
-@Const @ByRef(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+@Const @ByRef(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @SharedPtr("torch::nn::Module") @ByVal Module clone();
}
2 changes: 1 addition & 1 deletion pytorch/src/gen/java/org/bytedeco/pytorch/AnyModule.java
@@ -391,7 +391,7 @@ public class AnyModule extends Pointer {

/** Creates a deep copy of an {@code AnyModule} if it contains a module, else an
* empty {@code AnyModule} if it is empty. */
-public native @ByVal AnyModule clone(@ByVal(nullValue = "std::optional<torch::Device>(c10::nullopt)") DeviceOptional device);
+public native @ByVal AnyModule clone(@ByVal(nullValue = "std::optional<torch::Device>(std::nullopt)") DeviceOptional device);
public native @ByVal AnyModule clone();

/** Assigns a module to the {@code AnyModule} (to circumvent the explicit
48 changes: 24 additions & 24 deletions pytorch/src/gen/java/org/bytedeco/pytorch/Argument.java
@@ -37,50 +37,50 @@ public class Argument extends Pointer {
public Argument(
@StdString BytePointer name/*=""*/,
@Const @ByRef(nullValue = "c10::TypePtr(nullptr)") Type.TypePtr type,
-@ByVal(nullValue = "std::optional<int32_t>(c10::nullopt)") IntOptional N,
-@ByVal(nullValue = "std::optional<c10::IValue>(c10::nullopt)") IValueOptional default_value,
+@ByVal(nullValue = "std::optional<int32_t>(std::nullopt)") IntOptional N,
+@ByVal(nullValue = "std::optional<c10::IValue>(std::nullopt)") IValueOptional default_value,
@Cast("bool") boolean kwarg_only/*=false*/,
-@ByVal(nullValue = "std::optional<c10::AliasInfo>(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, type, N, default_value, kwarg_only, alias_info); }
+@ByVal(nullValue = "std::optional<c10::AliasInfo>(std::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, type, N, default_value, kwarg_only, alias_info); }
private native void allocate(
@StdString BytePointer name/*=""*/,
@Const @ByRef(nullValue = "c10::TypePtr(nullptr)") Type.TypePtr type,
-@ByVal(nullValue = "std::optional<int32_t>(c10::nullopt)") IntOptional N,
-@ByVal(nullValue = "std::optional<c10::IValue>(c10::nullopt)") IValueOptional default_value,
+@ByVal(nullValue = "std::optional<int32_t>(std::nullopt)") IntOptional N,
+@ByVal(nullValue = "std::optional<c10::IValue>(std::nullopt)") IValueOptional default_value,
@Cast("bool") boolean kwarg_only/*=false*/,
-@ByVal(nullValue = "std::optional<c10::AliasInfo>(c10::nullopt)") AliasInfoOptional alias_info);
+@ByVal(nullValue = "std::optional<c10::AliasInfo>(std::nullopt)") AliasInfoOptional alias_info);
public Argument() { super((Pointer)null); allocate(); }
private native void allocate();
public Argument(
@StdString String name/*=""*/,
@Const @ByRef(nullValue = "c10::TypePtr(nullptr)") Type.TypePtr type,
-@ByVal(nullValue = "std::optional<int32_t>(c10::nullopt)") IntOptional N,
-@ByVal(nullValue = "std::optional<c10::IValue>(c10::nullopt)") IValueOptional default_value,
+@ByVal(nullValue = "std::optional<int32_t>(std::nullopt)") IntOptional N,
+@ByVal(nullValue = "std::optional<c10::IValue>(std::nullopt)") IValueOptional default_value,
@Cast("bool") boolean kwarg_only/*=false*/,
-@ByVal(nullValue = "std::optional<c10::AliasInfo>(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, type, N, default_value, kwarg_only, alias_info); }
+@ByVal(nullValue = "std::optional<c10::AliasInfo>(std::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, type, N, default_value, kwarg_only, alias_info); }
private native void allocate(
@StdString String name/*=""*/,
@Const @ByRef(nullValue = "c10::TypePtr(nullptr)") Type.TypePtr type,
-@ByVal(nullValue = "std::optional<int32_t>(c10::nullopt)") IntOptional N,
-@ByVal(nullValue = "std::optional<c10::IValue>(c10::nullopt)") IValueOptional default_value,
+@ByVal(nullValue = "std::optional<int32_t>(std::nullopt)") IntOptional N,
+@ByVal(nullValue = "std::optional<c10::IValue>(std::nullopt)") IValueOptional default_value,
@Cast("bool") boolean kwarg_only/*=false*/,
-@ByVal(nullValue = "std::optional<c10::AliasInfo>(c10::nullopt)") AliasInfoOptional alias_info);
+@ByVal(nullValue = "std::optional<c10::AliasInfo>(std::nullopt)") AliasInfoOptional alias_info);

public Argument(
@StdString BytePointer name,
@ByVal Type.TypePtr fake_type,
@ByVal Type.TypePtr real_type,
-@ByVal(nullValue = "std::optional<int32_t>(c10::nullopt)") IntOptional N,
-@ByVal(nullValue = "std::optional<c10::IValue>(c10::nullopt)") IValueOptional default_value,
+@ByVal(nullValue = "std::optional<int32_t>(std::nullopt)") IntOptional N,
+@ByVal(nullValue = "std::optional<c10::IValue>(std::nullopt)") IValueOptional default_value,
@Cast("bool") boolean kwarg_only/*=false*/,
-@ByVal(nullValue = "std::optional<c10::AliasInfo>(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, fake_type, real_type, N, default_value, kwarg_only, alias_info); }
+@ByVal(nullValue = "std::optional<c10::AliasInfo>(std::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, fake_type, real_type, N, default_value, kwarg_only, alias_info); }
private native void allocate(
@StdString BytePointer name,
@ByVal Type.TypePtr fake_type,
@ByVal Type.TypePtr real_type,
-@ByVal(nullValue = "std::optional<int32_t>(c10::nullopt)") IntOptional N,
-@ByVal(nullValue = "std::optional<c10::IValue>(c10::nullopt)") IValueOptional default_value,
+@ByVal(nullValue = "std::optional<int32_t>(std::nullopt)") IntOptional N,
+@ByVal(nullValue = "std::optional<c10::IValue>(std::nullopt)") IValueOptional default_value,
@Cast("bool") boolean kwarg_only/*=false*/,
-@ByVal(nullValue = "std::optional<c10::AliasInfo>(c10::nullopt)") AliasInfoOptional alias_info);
+@ByVal(nullValue = "std::optional<c10::AliasInfo>(std::nullopt)") AliasInfoOptional alias_info);
public Argument(
@StdString BytePointer name,
@ByVal Type.TypePtr fake_type,
@@ -93,18 +93,18 @@ public Argument(
@StdString String name,
@ByVal Type.TypePtr fake_type,
@ByVal Type.TypePtr real_type,
-@ByVal(nullValue = "std::optional<int32_t>(c10::nullopt)") IntOptional N,
-@ByVal(nullValue = "std::optional<c10::IValue>(c10::nullopt)") IValueOptional default_value,
+@ByVal(nullValue = "std::optional<int32_t>(std::nullopt)") IntOptional N,
+@ByVal(nullValue = "std::optional<c10::IValue>(std::nullopt)") IValueOptional default_value,
@Cast("bool") boolean kwarg_only/*=false*/,
-@ByVal(nullValue = "std::optional<c10::AliasInfo>(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, fake_type, real_type, N, default_value, kwarg_only, alias_info); }
+@ByVal(nullValue = "std::optional<c10::AliasInfo>(std::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, fake_type, real_type, N, default_value, kwarg_only, alias_info); }
private native void allocate(
@StdString String name,
@ByVal Type.TypePtr fake_type,
@ByVal Type.TypePtr real_type,
-@ByVal(nullValue = "std::optional<int32_t>(c10::nullopt)") IntOptional N,
-@ByVal(nullValue = "std::optional<c10::IValue>(c10::nullopt)") IValueOptional default_value,
+@ByVal(nullValue = "std::optional<int32_t>(std::nullopt)") IntOptional N,
+@ByVal(nullValue = "std::optional<c10::IValue>(std::nullopt)") IValueOptional default_value,
@Cast("bool") boolean kwarg_only/*=false*/,
-@ByVal(nullValue = "std::optional<c10::AliasInfo>(c10::nullopt)") AliasInfoOptional alias_info);
+@ByVal(nullValue = "std::optional<c10::AliasInfo>(std::nullopt)") AliasInfoOptional alias_info);
public Argument(
@StdString String name,
@ByVal Type.TypePtr fake_type,