
Use NCCL in comm.py if available #78

Merged: colesbury merged 2 commits into pytorch:master from the nccl branch on Oct 14, 2016

Conversation

colesbury (Member)

No description provided.


def test_bcast(self):
    if torch.cuda.device_count() < 2:
        raise unittest.SkipTest("Only one GPU detected")


@@ -0,0 +1,234 @@
import os


def _loadlib():
    global lib
    dir = os.path.dirname(os.path.abspath(__file__))
    path = "{0}/../../lib/{1}".format(dir, libname)
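(The excerpt above is only a fragment; for orientation, here is a minimal sketch of the lazy-loading pattern it suggests, with the `ctypes` call and the `libname` value filled in by us as assumptions rather than taken from the PR.)

```python
import ctypes
import os

lib = None
libname = 'libnccl.so'  # assumed value; the PR defines its own libname


def _loadlib():
    global lib
    dir = os.path.dirname(os.path.abspath(__file__))
    path = "{0}/../../lib/{1}".format(dir, libname)
    # Raises OSError if the NCCL shared library was not built/installed,
    # which is what lets callers treat NCCL as optional.
    lib = ctypes.cdll.LoadLibrary(path)
```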




for tensor in tensors:
    if not tensor.is_contiguous():
        return False
    if not hasattr(tensor, 'get_device'):
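(The loop above is cut off mid-function; a hedged completion is sketched below. The function name `_check_inputs` and the duplicate-device rule are our guesses about the intent, not the PR's exact code.)

```python
def _check_inputs(tensors):
    # NCCL collectives need contiguous CUDA tensors, one per device;
    # return False instead of raising so callers can fall back to the
    # non-NCCL broadcast/reduce path.
    devices = set()
    for tensor in tensors:
        if not tensor.is_contiguous():
            return False
        if not hasattr(tensor, 'get_device'):
            return False  # CPU tensors have no get_device()
        device = tensor.get_device()
        if device in devices:
            return False  # at most one tensor per GPU
        devices.add(device)
    return True
```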


class NcclError(RuntimeError):
    def __init__(self, status):
        self.status = status
        msg = '{0}: {1}'.format(status_codes.get(status), status)
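(`status_codes` is evidently a dict mapping NCCL result codes to readable names. A sketch with a few illustrative entries follows; the real table covers every `ncclResult_t` value and the entries shown here are assumptions.)

```python
# Hypothetical subset of the code-to-name table used by NcclError above.
status_codes = {
    1: 'Unhandled Cuda Error',
    2: 'System Error',
    3: 'Internal Error',
}


def check_error(status):
    # 0 is ncclSuccess; anything else becomes a Python exception.
    if status != 0:
        raise NcclError(status)
```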


data_type, op, root, comm[i], cudaStream()))


def bcast(inputs, root=0):
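(Only the signature survives in this excerpt. Below is a sketch of what such a broadcast wrapper typically does, assuming module-level helpers like `communicator()`, `nccl_types`, and `cudaStream()` that the call fragment above hints at; none of these names are confirmed by the diff shown here.)

```python
def bcast(inputs, root=0):
    # Broadcast the tensor on device `root` to every other participating GPU.
    comm = communicator(inputs)  # assumed: one NCCL communicator per device
    for i, tensor in enumerate(inputs):
        with torch.cuda.device(tensor.get_device()):
            check_error(lib.ncclBcast(
                ctypes.c_void_p(tensor.data_ptr()), tensor.numel(),
                nccl_types[tensor.type()], root, comm[i], cudaStream()))
```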


class _CudaBase(object):
    is_cuda = True

    def type(self, *args, **kwargs):


-DCMAKE_C_FLAGS="$C_FLAGS" \
-DCMAKE_CXX_FLAGS="$C_FLAGS $CPP_FLAGS"
make install
cp "lib/libnccl.so" "${INSTALL_DIR}/lib/libnccl.so"


@colesbury colesbury closed this Oct 7, 2016
@colesbury colesbury reopened this Oct 7, 2016
@colesbury colesbury merged commit 6b830bc into pytorch:master Oct 14, 2016
@colesbury colesbury deleted the nccl branch October 14, 2016 21:44
soumith pushed a commit that referenced this pull request Apr 18, 2017
Fix compilation error when compiling with 'clang -x cuda'.
cpuhrsch pushed a commit to cpuhrsch/pytorch that referenced this pull request Jul 26, 2019
…librosa-based `MEL`. Add missing docstring params. (pytorch#78)

* Bug fix: Use correct device for MEL2 functions so MEL2 works on CUDA tensors

* Rename classes in line with PyTorch standards. Remove redundant
slow librosa-based `MEL`. Add missing docstring params.

* fix param names
lly-zero-one pushed a commit to lly-zero-one/pytorch that referenced this pull request Feb 8, 2020
…rs. (pytorch#33)

To run the test: cmake . && make cpptest && ./expr_test

Refactor the RefHandle class. (pytorch#34)

Add convenience operator for Expr.

clang-format change (pytorch#35)

Adding Var, Let and eval_context support. (pytorch#36)

Add LLVM JIT class for online codegen

Refactor llvm codegen

fix caps of LlvmJit

Generate code for integer arithmetic

Test all arithmetic ops with LLVM

Fix rtti

Compat with llvm 7 and 8

Add support for tensor expressions. (pytorch#38)

Add Casting support so mixed dtypes are supported.
Add basic dtype and logging support. This should be merged with PyTorch during integration.

clang-format fix (pytorch#39)

Extend dtypes to support vector types (pytorch#40)

Support LLVM 9 too

Disambiguate dependent type name with template keyword

Remove empty scalar.h

Add basic support for statements. (pytorch#41)

Add support for For, Ramp, Block, Load, Store and Broadcast.
Add support for Buffer.

Adding Stmt evaluation support. (pytorch#42)

Use third_party/googletest from pytorch

Remove nnc/tests/googletest submodule

Move nnc tld to torch/csrc/jit/compiler

Add a README (probably temporary) for jit/compiler

Move from namespace nnc to torch::jit::compiler

Refactor JIT class to isolate no-rtti pieces

Adding comparison operator to Var. (pytorch#43)

Fix typo in README.md

Use absolute imports and pragma once

Use absolute includes in new llvm_jit.h

Build non-LLVM compiler stuff with libtorch

Minimal asmjit codegen from the tensor IR

fix pessimizing moves

IR printer

fix printer bug

Add printer to build system.

Add data structure for schedule support and Split.

clang-format using the new template

Add IRMutator and basic support to substitute Var in Expr and Stmts.

Change the default count of RefCounted to zero.
Merge Expr(node) and Expr::make(node).

Add basic lowering to the tensor expression trees.

fix the schedule_test

fixed lowering

LLVM code generation for simple loops

bugfixes

refcount fixing self-assignment

Make LOG(FATAL) noreturn
Enable Werror

Adding statement conversion for SplitWithTail

Add a reference tests for Split

clang-format

A functional reference check for schedule tests.

clang-format

Add support for Float immediates.

Get absolute path for ASMJIT_DIR (pytorch#24)

Silence deprecation warnings from LLVM

Include legacy PassManager for debug printing

Set code model to medium to avoid indirect jumps in generated asm

Fix argument type of input float buffers

Add support for Casts in LLVM codegen.

Add a complete tensor+lower+llvm test

Enable the failing test

Enable export of compile_commands.json.

Floating point arithmetic

Test fp32 mul using compute expr

Broadcast add test using compute expr

Update to LLVM 9

Implementation of Broadcast for LLVM.

Add Buffer operator() overload, and some other minor features

Cleanup use of ConstantInt API.

fix accidental experimental changes

Change the Compute interface to bring the dim sizes and names together

clang-format

refactor Buffer into its own files

Add support for vector casts in LLVM CodeGen

Implement masked loads and stores.

Implement vector masked loads and stores.

Add a PaddedBuffer test util

Improve the user interface for SimpleIREvaluator

Add a test for Block codegen.

Fix gtest include path

clang-format

Add expressions and support for Max and Min. (pytorch#5)

Rename compiler to tensorexpr and move files around to be more similar to other pytorch parts. (pytorch#6)

Summary:

1. Move compiler to tensorexpr folder
2. Move files from src and include to the same folder (and remove src and include folders)
3. Rename .cc to .cpp

Add missing include <math.h> (pytorch#7)

Change isnan to std::isnan. It breaks my clang builds. (pytorch#8)

Change the SimpleIREvaluator frontend (pytorch#9)

Add RefHandle for subclass

Make LLVM dependency optional. (pytorch#10)

[wip] Basic fuser pass to select texpr subgraphs

Revert "[wip] Basic fuser pass to select texpr subgraphs"

This reverts commit a9d9919.

Revert changes to the main pytorch CMakeLists.txt (for now).

Add a test for aten::_cast_Float lowering. (pytorch#12)

Hook tensorexp up to the main build, and switch to c10 logging

More ATen op tests. (pytorch#16)

Fix some missing returns

Include tests back to the 'all' target. (pytorch#14)

Even more ATen op tests. (pytorch#18)

Test for relu ATen op. (pytorch#19)

Add intrinsics function support. (pytorch#20)

Remove fmax/fmin, as they are already covered by the Max/Min operators (pytorch#21)

refactor CallNode and BaseCallNode, so we can have a common concrete base class for visitors. (pytorch#22)

This is the first step to add other call types.

Add FunctionCall to use existing tensors (pytorch#23)

Add the ability to use an existing tensor expression in other compute functions. (pytorch#24)

fixing broken compilation on mac/clang

adding IRnode for Compare-Select Ops and their LLVM Codegen

Fix Werror. (pytorch#26)

Add tests for some transcendental ops. (pytorch#27)

Add Allocate and Free support. (pytorch#29)

Add Eval and test basic alloc support.
Add Lowering support for buffer allocation for intermediate tensors.

Tensor expr fuser pass for extremely simple expressions

Make fusion work for arbitrary buffer/tensor combinations of inputs (pytorch#30)

fix Let02 test

Access inputs and intermediates uniformly through Tensors (pytorch#31)

fix Let02 test (pytorch#32)

adding LLVM Codegen for Let

modifying CMakeLists.txt to enable ninja test && minor update for LLVM Codegen for Let (handling XQ's comment)

Adding ComputeInline support. (pytorch#35)

Fix broken tests (pytorch#36)

Make tx fuser work with arbitrary ranks

[fuser] Broadcast args

Improve naming of arg broadcasting function

Test cases for tensorexpr fusion (pytorch#37)

CompareSelect Op: Addressing XQ and Owen's comments

modifying CMakeLists.txt to enable ninja test && minor update for LLVM Codegen for Let (handling XQ's comment)

CompareSelect Op: Addressing XQ and Owen's comments

Sketch sufficient support for constants to get constant alpha working. (pytorch#40)

* Refactor to use a switch statement over Node kinds.

* Sketch sufficient support for constants to get constant alpha working.

Fix indices when inlining non-leaf calls (pytorch#39)

Fixing the inline ordering issue (pytorch#43)

Solve more problems with the inliner

Avoid creating redundant and/or improperly ordered Constant's in fused subgraphs. (pytorch#42)

Move fuser-styled tests to schedule_test (pytorch#44)

Add aten::sub to the new fuser. (pytorch#46)

Refactor CodeGen from SimpleIREval (pytorch#47)

Inline all the things (pytorch#45)

clang-format for atent_test.cpp

Eliminate a ton of warnings for my own sanity. (pytorch#48)

Add support for type promotion/demotion. (pytorch#50)

Flesh out new fuser coverage to several more ops. (pytorch#51)

Adding the first basic CudaCodeGen. (pytorch#52)

aten tests for eq, ge, gt, le, lt

support for aten ops: eq

support for more aten ops: ge, gt, le, lt, ne

Minimal CMake change to link LLVM to libtorch

Fix issues causing assertion failures in llvm debug builds

Fatal on unimplemented llvm codegen ops (Allocate, etc.)

Optionally compile tx fuser kernels with llvm

Test for 2D broadcasted with large dims to show vectorization

Updated isSupported for increased op coverage. (pytorch#54)

Refactor LLVMCodeGen to compile kernel in constructor

Cmake integration to PT codebase (pytorch#28)

With this change our code blends with the usual PyTorch code and is built the usual way. I added a cmake option to specify where to look for LLVM; if it's not specified, LLVM is not used.

An example of invocation (from the root of pytorch repo):

```
USE_LLVM=/path/to/llvm9/install  python setup.py develop
```

This command will build libtorch.{a,so} and other libraries, and tensorexpr code will be a part of it.

The tests will be built in build/bin/test_tensorexpr (I've ported only one test so far). So, invocation of the tests will be:

```
build/bin/test_tensorexpr
```

Remove old padded_buffer.{cpp,h}. (pytorch#56)

Add support for code generation of Log10 intrinsics with LLVM. (pytorch#57)

Remove tests/test_utils.h: inline what's still used and nuke what's unused. (pytorch#58)

Move Fuser tests (tests/tests.py) to test/test_tensorexpr.py. (pytorch#59)

Remove old CMakeLists and README.txt

Add support for vectorized and unmasked loads and stores with LLVM. (pytorch#62)

Enable CodeGen-level optimizations in LLVM. (pytorch#63)

Add Bind/GPUBlock/GPUThread support. (pytorch#64)

Bind/run interface to CodeGen (pytorch#60)

* Bind/run interface to CodeGen

* Make LLVMCodeGen implement CodeGen interface

* Allow bind/run to be unimplemented for the moment (CUDA)

* Cache compilation result

* Two nasty bugs: forgot virtual dtor, forgot to clear bindings after run()

Fix ambiguity in CreateExtractElementCall (0ull can be a Value*, I guess?) (pytorch#65)

Allow constants as lhs/rhs args (not just alpha) (pytorch#66)

Use correct tensor type for fuser output (pytorch#67)

clang-format

Rename 'compiler' namespace to 'tensorexpr'.

Include all built llvm targets (pytorch#68)

Switch back to linking only the native LLVM target. (pytorch#69)

Virtual dtors for IRVisitor/IRMutator (pytorch#70)

Add semicolon to make nvcc compile (pytorch#71)

Enable NVRTC for the GPU backend. (pytorch#74)

Fix non-CUDA testing. (pytorch#75)

Getting fused (a)Sin(h), (a)Cos(h), (a)Tan(h), abs working with the interpreter (pytorch#73)

* Getting fused (a)Sin(h), (a)Cos(h), (a)Tan(h), abs working with the interpreter

* take the interpreter path only when ENABLE_LLVM is not set

remove the leak tests, as we will get rid of refcounting (pytorch#76)

Implement aten::min, max, and clamp (pytorch#72)

* Implement aten::min, max, and clamp

* Propagate NaNs like std::max/min

* Change NaN propagation in interpreter too

clang-format tensorexpr/tests.h (pytorch#77)

Refactor UniqueNameManager into its own files. (pytorch#79)

refactor cuda_codegen (pytorch#80)

simplify nvrtc major, minor versions (pytorch#81)

Allow CodeGen to take Var args (interpreter support only) (pytorch#78)

* Test demonstrating dynamic shape

* Allow binding of Vars to args in interpreter

* Pass BufferArgs to LLVMCodeGen

* clang-format-diff

[LLVMCodeGen] Refactor kernel constructor to be less sprawling (pytorch#82)

* Member TM to TM_ in LLVMCodeGen

* [LLVMCodeGen] Add helper for getContext

* [LLVMCodeGen] Refactor type support

* [LLVMCodeGen] Refactor kernel emission
lly-zero-one pushed a commit to lly-zero-one/pytorch that referenced this pull request Feb 8, 2020
* Test demonstrating dynamic shape

lly-zero-one pushed a commit to lly-zero-one/pytorch that referenced this pull request Feb 18, 2020
* Test demonstrating dynamic shape

lly-zero-one pushed a commit to lly-zero-one/pytorch that referenced this pull request Mar 1, 2020
…rs. (pytorch#33)

(TE Interpreter) Support for floor, ceil, trunc, remainder, sqrt and improving tests (pytorch#83)

* Getting fused (a)Sin(h), (a)Cos(h), (a)Tan(h), abs working with the interpreter
* take the interpreter path only when ENABLE_LLVM is not set
* cleaning up the tests for the new aten ops
* (TE Interpret) adding support for floor, ceil, trunc, remainder and improving tests

Add Cond and Mod to SimpleIREval (pytorch#84)

[LLVMCodeGen] Support dynamic shapes by binding Var args (pytorch#86)

* [LLVMCodeGen] Support dynamic shapes by binding Var args

* Test llvm dynamic shape codegen using Tensor

Add SplitWithMask core support. (pytorch#87)

Add Cuda tests for SplitWithMask (pytorch#88)

Disable DEBUG_PRINT (pytorch#89)

Remove some debug prints (pytorch#90)

Fix the no-CUDA build. (pytorch#92)

Add support for multiple outputs from the fused subgraph. (pytorch#91)

Remove RefCounting (pytorch#93)

Add some comments for KernelScope. Address comments. (pytorch#94)

Completely remove refcount.h (pytorch#95)

fix the fuser pass (pytorch#97)

Rename Kernel to KernelArena (pytorch#98)

Add support for fusion through ConstantChunk ops. (pytorch#96)

Fix implicit noexcept deduction warning. (pytorch#99)

Make llvm tests conditional on USE_LLVM (pytorch#100)

* Make llvm tests conditional on USE_LLVM

* Use the right macro and add to gtest harness

* clang-format

Refactor ComputeNode into ComputeValue, to be able to handle arbitrary multi-output operators. (pytorch#101)

Improve Stmt pretty printing from TensorExprFuser (pytorch#102)

Add support for IfThenElse (pytorch#103)

Add end-to-end support and a PyTorch fuser example on CudaCodeGen (pytorch#104)

fix rebase errors (pytorch#105)

fixes to build on system without LLVM and CUDA (pytorch#107)

* fixes to build on system without LLVM and CUDA

* minor edit: fixes to build on system without LLVM and CUDA

Add support for aten::cat to the new fuser. (pytorch#106)

Bail out of fusion if we don't have a complete tensor type (for now). (pytorch#108)

Standardize codegen call() interface and remove bind/run (pytorch#109)

* Standardize codegen call() interface and remove bind/run

* revert undef USE_CUDA

Clean up sketchy handling of scalar args in llvm codegen (pytorch#110)

Test 2D dynamic shapes (pytorch#112)

clang-format (pytorch#113)

Add LLVM codegen for a lot of transcendental ops. (pytorch#115)

Fix bug with binary math intrinsics. (pytorch#116)

Use CUDA for 3-arg test (pytorch#117)

Refactor CudaCodeGen into generic registration, so we can have both the Cuda and non-Cuda builds. (pytorch#118)

Add instructions on how to rebase on master.

Dynamic shape support in CUDA codegen (pytorch#120)

* Dynamic shape support in CUDA codegen

* free cuda memory

Disable GPU fuser. Revive the Cuda tests (pytorch#121)

Add ExecutionCounter to detect whether the underlying code is executed. (pytorch#122)

Adding GPU index flattening to support arbitrary elementwise ops and broadcasting. (pytorch#126)

fix a bug: map kLog to Intrin::log (pytorch#124)

Allow scalar variables as inputs (pytorch#125)

clang-format (pytorch#127)

Format python tests with `black` (pytorch#128)

Add support for fusion in nested blocks. (pytorch#129)

Teach the LLVM JIT to use dlsym to resolve symbols. (pytorch#130)

Factor out kernel codegen from tx fusion pass (pytorch#131)

Use standard JIT logging in TX fuser.

Move memory management classes (KernelArena, KernelScope, KernelScopedObject) to a separate file. (pytorch#132)

(IR Interpreter) Adding more operators: Erfc, Expm1, frac, lgamma, neg, sigmoid, reciprocal, relu (pytorch#133)

Add erfc to llvm codegen (pytorch#134)

Squash some warnings (pytorch#135)

(IR interpreter) addcmul (pytorch#137)

* (IR interpreter) addcmul

Remove IRNode. CodeGen accepts only Stmt. Add ExprEval utility wrapper. (pytorch#138)

Add the benchmark from NNC (pytorch#141)

Fix verifier errors in LLVM codegen when conditional loads feed directly into concats. (pytorch#143)

Strength reduction peephole for pow(). (pytorch#144)

Fix incorrect pow(x, 0) case. (pytorch#145)

Use `const Value*` where possible (pytorch#146)

Make Broadcast work (pytorch#147)

$ python benchmarks/tensorexpr/benchmark.py broadcast_3args --device gpu --mode fwd --jit_mode trace

Fixed CudaCodeGen output streams. Switch to __ldg by default (pytorch#148)

Add ElementWise support (pytorch#150)

Fix an assertion failure when merging constants into aten::cat fusions. (pytorch#151)

adding LLVM support ops: sigmoid, relu, neg, addcmul, reciprocal, lgamma, expm1 (pytorch#149)

* adding LLVM support for a few ops

add findllvm
resistor pushed a commit to resistor/pytorch that referenced this pull request Mar 4, 2020
* Test demonstrating dynamic shape

nailimixaM pushed a commit to nailimixaM/pytorch that referenced this pull request Jan 13, 2021
# This is the 1st commit message:

Add Gaussian negative log likelihood loss

# This is the commit message #2:

flake8 compliance of test file

# This is the commit message #3:

flake8 compliance loss math description

# This is the commit message #4:

flake8 compliance loss docstring

# This is the commit message #5:

Fix tests and docs

# This is the commit message #6:

Add loss to init script

# This is the commit message #7:

Change eps

# This is the commit message #8:

Fix test and docs

# This is the commit message #9:

Cleaner docs and fix tests

# This is the commit message #10:

Update docs for var clamping change

# This is the commit message #11:

Fix overridetests

# This is the commit message #12:

Fix reduction mode bug and var view bug

# This is the commit message #13:

Update class init to have kwargs

# This is the commit message #14:

Add note and reference to docs

# This is the commit message #15:

Fix typos

# This is the commit message #16:

Preserve memory format in qconv op (#49533)

Summary:
* qconv used to return NHWC no matter the input format
* this change returns NCHW format if the input was NCHW

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49533

Test Plan:
pytest test/quantization/test_quantized_op.py::\
TestQuantizedConv::test_qconv2d_preserve_mem_format

Fixes https://github.com/pytorch/pytorch/issues/47295

Reviewed By: kimishpatel

Differential Revision: D25609205

Pulled By: axitkhurana

fbshipit-source-id: 83f8ca4a1496a8a4612fc3da082d727ead257ce7

# This is the commit message #17:

Added linalg.inv (#48261)

Summary:
This PR adds `torch.linalg.inv` for NumPy compatibility.

`linalg_inv_out` uses in-place operations on provided `result` tensor.

I modified `apply_inverse` to accept a tensor of Int instead of std::vector; that way we can write a function similar to `linalg_inv_out` but removing the error checks and device memory synchronization.

I fixed `lda` (leading dimension parameter which is max(1, n)) in many places to handle 0x0 matrices correctly.
Zero batch dimensions are also working and tested.

Ref https://github.com/pytorch/pytorch/issues/42666
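As a quick illustration of the NumPy-compatible behavior described above, including batched input (a sketch using only the public API named in this message; the well-conditioned test matrix is our choice):

```python
import torch

# Batch of 4x4 matrices, nudged toward diagonal dominance so inversion is stable.
A = torch.randn(3, 4, 4, dtype=torch.float64) + 4 * torch.eye(4, dtype=torch.float64)
Ainv = torch.linalg.inv(A)
I = torch.eye(4, dtype=torch.float64).expand(3, 4, 4)
assert torch.allclose(A @ Ainv, I, atol=1e-8)
```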

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48261

Reviewed By: ngimel

Differential Revision: D25690129

Pulled By: mruberry

fbshipit-source-id: edb2d03721f22168c42ded8458513cb23dfdc712

# This is the commit message #18:

Mod lists to neutral+descriptive terms in caffe2/docs (#49803)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49803

Per "https://fb.workplace.com/groups/e/permalink/3320810064641820/" we can no longer use the terms "whitelist" and "blacklist", and editing any file containing them results in a critical error signal. Let's embrace the change.
This diff changes "blacklist" to "blocklist" in a number of non-interface contexts (interfaces would require more extensive testing and might interfere with reading stored data, so those are deferred until later).

Test Plan: Sandcastle

Reviewed By: vkuzo

Differential Revision: D25686924

fbshipit-source-id: 117de2ca43a0ea21b6e465cf5082e605e42adbf6

# This is the commit message #19:

Improve docs for scatter and gather functions (#49679)

Summary:
- Add warning about non-unique indices
- And note that these functions don't broadcast
- Add missing `torch.scatter` and `torch.scatter_add` doc entries
- Fix parameter descriptions
- Improve code examples to make indexing behaviour easier to understand

Closes gh-48214
Closes gh-26191
Closes gh-37130
Closes gh-34062
xref gh-31776

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49679

Reviewed By: mruberry

Differential Revision: D25693660

Pulled By: ngimel

fbshipit-source-id: 4983e7b4efcbdf1ab9f04e58973b4f983e8e43a4

# This is the commit message #20:

removes more unused THC functions (#49788)

Summary:
per title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49788

Reviewed By: mruberry

Differential Revision: D25693328

Pulled By: ngimel

fbshipit-source-id: 244a096214d110e4c1a94f2847ff8457f1afb0d1

# This is the commit message #21:

[pt][quant] Make the CUDA fake quantize logic consistent with CPU fake quantize logic (#49808)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49808

In PyTorch, it uses `dst = std::nearbyint(src * inv_scale) + zero_point` instead of the LEGACY  `dst = std::nearbyint(src * inv_scale + zero_point)`. However, the CUDA implementation doesn't match this. This Diff makes the CPU and CUDA implementation consistent.

- FBGEMM code pointer: https://github.com/pytorch/FBGEMM/blob/master/include/fbgemm/QuantUtils.h#L76-L80
- PyTorch code pointer:
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/affine_quantizer.cpp#L306
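The difference is easiest to see on a tie value. Python's built-in `round()` also rounds halves to even, like `std::nearbyint` in the default rounding mode, so it can stand in for both formulas (illustration only, not the kernel code):

```python
src, inv_scale, zero_point = 0.5, 1.0, 1

legacy = round(src * inv_scale + zero_point)  # round(1.5) -> 2 (ties to even)
fixed = round(src * inv_scale) + zero_point   # round(0.5) -> 0, then +1 -> 1
assert legacy == 2 and fixed == 1             # the two formulas disagree
```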

Test Plan: CI

Reviewed By: dskhudia

Differential Revision: D25694235

fbshipit-source-id: 0a615e559132aafe18543deac1ea5028dd840cb9

# This is the commit message #22:

[numpy] `torch.erfinv`: promote integer inputs to float (#49155)

Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49155

Reviewed By: ngimel

Differential Revision: D25664234

Pulled By: mruberry

fbshipit-source-id: 630fd1d334567d78c8130236a67dda0f5ec02560

# This is the commit message #23:

[reland] Early terminate when CUDA assert were thrown (#49799)

Summary:
This is a reland of https://github.com/pytorch/pytorch/issues/49527.

Fixed the slow test not running properly in py36, because capture_output was introduced in py37.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49799

Reviewed By: janeyx99

Differential Revision: D25692616

Pulled By: walterddr

fbshipit-source-id: 9c5352220d632ec8d7464e5f162ffb468a0f30df

# This is the commit message #24:

Fix typo in complex autograd docs (#49755)

Summary:
Update complex autograd docs to fix a typo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49755

Reviewed By: mruberry

Differential Revision: D25692649

Pulled By: soulitzer

fbshipit-source-id: 43c2113b4c8f2d1828880102189a5a9b887dc784

# This is the commit message #25:

Revert D25690129: [pytorch][PR] Added linalg.inv

Test Plan: revert-hammer

Differential Revision:
D25690129 (https://github.com/pytorch/pytorch/commit/8554b58fbdd865c760d92bfa50c1119cc8fc65e9)

Original commit changeset: edb2d03721f2

fbshipit-source-id: 8679ea18e637423d35919544d2b047a62ac3abd8

# This is the commit message #26:

Creation of test framework for Sparse Operators (#48488)

Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48488

Reviewed By: ngimel

Differential Revision: D25696487

Pulled By: mruberry

fbshipit-source-id: dc4f57c6628f62b74dd321f3f6b0fff86f25b040

# This is the commit message #27:

Revert D25692616: [pytorch][PR] [reland] Early terminate when CUDA assert were thrown

Test Plan: revert-hammer

Differential Revision:
D25692616 (https://github.com/pytorch/pytorch/commit/e6a215592ea5b7f7f7e59e89116b507089bfb8d0)

Original commit changeset: 9c5352220d63

fbshipit-source-id: dade8068cad265d15ee908d98abe0de5b81a195d

# This is the commit message #28:

[quant][graphmode][fx] Standalone module support {input/output}_quantized_idxs (#49754)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49754

This PR adds the support for {input/output}_quantized_idxs for standalone module.

if input_quantized_idxs = [] and output_quantized_idxs = [], the standalone module will expect float
input and produce float output, and will quantize the input and dequantize the output internally

if input_quantized_idxs = [0] and output_quantized_idxs = [0], the standalone module will expect quantized
input and produce quantized output; the input will be quantized in the parent module, and the output will be dequantized
in the parent module as well, similar to current quantized modules like nn.quantized.Conv2d

For more details, please see the test case
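A schematic of the two settings described above (the key names come from this message; the enclosing config structure is elided and the snippet is illustrative only):

```python
# Positions listed here are exchanged as quantized tensors with the parent
# module; empty lists mean float-in/float-out with internal quant/dequant.
standalone_module_config = {
    "input_quantized_idxs": [0],
    "output_quantized_idxs": [0],
}
```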

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_standalone_module

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D25684692

fbshipit-source-id: 900360e01c0e35b26fe85f4a887dc1fd6f7bfb66

# This is the commit message #29:

Clip small scales to fp16 min

Summary: When the FC output min/max range is very small, we want to enforce a cutoff on the scale parameter to better generalize for future values that could fall beyond the original range.

Test Plan:
More analysis about the output distributions can be found in N425166

An example workflow using fp16 min clipping is f240972205

Reviewed By: jspark1105

Differential Revision: D25681249

fbshipit-source-id: c4dfbd3ee823886afed06e6c2eccfc29d612f7e6

# This is the commit message #30:

Revert D25684692: [quant][graphmode][fx] Standalone module support {input/output}_quantized_idxs

Test Plan: revert-hammer

Differential Revision:
D25684692 (https://github.com/pytorch/pytorch/commit/89b4899ea5363fd69872c0cabf0dedea2dc533c8)

Original commit changeset: 900360e01c0e

fbshipit-source-id: 8b65fa8fbc7b364fbddb5f23cc696cd9b7db98cd

# This is the commit message #31:

[numpy] `torch.digamma` : promote integer inputs to float (#48302)

Summary:
**BC-breaking Note:**

This PR updates PyTorch's digamma function to be consistent with SciPy's special.digamma function. This changes the result of the digamma function on the nonpositive integers, where the gamma function is not defined. Since the gamma function is undefined at these points, the (typical) derivative of the logarithm of the gamma function is also undefined at these points, and for negative integers this PR updates digamma to return NaN. For zero, however, it returns -inf to be consistent with SciPy.

Interestingly, SciPy made a similar change, which was noticed by at least one user: https://github.com/scipy/scipy/issues/9663#issue-396587679.

SciPy's returning of negative infinity at zero is intentional:
https://github.com/scipy/scipy/blob/59347ae8b86bcc92c339efe213128f64ab6df98c/scipy/special/cephes/psi.c#L163

This change is consistent with the C++ standard for the gamma function:
https://en.cppreference.com/w/cpp/numeric/math/tgamma
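Concretely, after this change (values per the note above):

```python
import torch

x = torch.tensor([0.0, -1.0, -2.5])
torch.digamma(x)
# ~> tensor([-inf, nan, 1.1031]): -inf at zero, NaN at the negative
#    integers, and finite values elsewhere on the negative axis
```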

**PR Summary:**
Reference https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48302

Reviewed By: ngimel

Differential Revision: D25664087

Pulled By: mruberry

fbshipit-source-id: 1168e81e218bf9fe5b849db0e07e7b22e590cf73

# This is the commit message #32:

early termination of CUDA tests (#49869)

Summary:
This is follow up on https://github.com/pytorch/pytorch/issues/49799.

* uses `torch.cuda.synchronize()` to validate CUDA asserts instead of inspecting the error message.
* removes non-CUDA tests.

Hopefully this can reproduce why slow_tests fails but the normal test does not, since the test still runs for >1 min.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49869

Reviewed By: mruberry

Differential Revision: D25714385

Pulled By: walterddr

fbshipit-source-id: 04f8ccb50d8c9ee42826a216c49baf90285b247f

# This is the commit message #33:

[*.py] Rename "Arguments:" to "Args:" (#49736)

Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619

# This is the commit message #34:

Support the `in` operator with str (#47057)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47057

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D24863370

Pulled By: ansley

fbshipit-source-id: 5d17165b06052f0a4676537c5f6757083185a591

# This is the commit message #35:

[NNC] masked fill (#49627)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49627

There was a bug in the test that was hidden by the `If eager mode doesn't support a dtype/op/device combo` try/catch, so cuda wasn't being tested. The fix is just to rename `aten::masked_fill` to `aten_masked_fill`.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D25696409

Pulled By: eellison

fbshipit-source-id: 83de1f5a194df54fe317b0035d4a6c1aed1d19a0

# This is the commit message #36:

[JIT] Constant prop getattr (#49806)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49806

Fix for https://github.com/pytorch/pytorch/issues/47089

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D25696791

Pulled By: eellison

fbshipit-source-id: 914c17b8effef7f4f341775ac2b8150ee4703efd

# This is the commit message #37:

fx quant: hook up ConvTranspose{n}d (#49717)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49717

Quantization of `ConvTranspose{n}d` is supported in Eager mode. This PR
adds the support for FX graph mode.

Note: this currently only works in `qnnpack` because per-channel weights
are not supported by quantized conv transpose. In a future PR we should throw
an error when someone tries to quantize a ConvTranspose model with per-channel
weight observers until this is fixed.

Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_conv_transpose_1d
python test/test_quantization.py TestQuantizeFxOps.test_conv_transpose_2d
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25674636

fbshipit-source-id: b6948156123ed55db77e6337bea10db956215ae6

# This is the commit message #38:

fx quant: split linear test cases (#49740)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49740

1. Separates the module and functional linear test cases.
2. Combines the test case which tests for linear bias observation into
the main linear test case, as requested in
https://github.com/pytorch/pytorch/pull/49628.

Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_linear_module
python test/test_quantization.py TestQuantizeFxOps.test_linear_functional
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25681272

fbshipit-source-id: 0ed0ebd5afb8cdb938b530f7dbfbd79798eb9318

# This is the commit message #39:

Implement torch.linalg.qr (#47764)

Summary:
I am opening this PR early to have a place to discuss design issues.
The biggest difference between `torch.qr` and `numpy.linalg.qr` is that the former takes a boolean parameter `some=True`, while the latter takes a string parameter `mode='reduced'` which can be one of the following:

`reduced`
this is completely equivalent to `some=True`, and both are the default.

`complete`
this is completely equivalent to `some=False`.

`r`
this returns only `r` instead of a tuple `(r, q)`. We have already decided that we don't want different return types depending on the parameters, so I propose to return `(r, empty_tensor)` instead. I **think** that in this mode it will be impossible to implement the backward pass, so we should raise an appropriate error in that case.

`raw`
in this mode, it returns `(h, tau)` instead of `(q, r)`. Internally, `h` and `tau` are obtained by calling lapack's `dgeqrf` and are later used to compute the actual values of `(q, r)`. The numpy docs suggest that these might be useful to call other lapack functions, but at the moment none of them is exposed by numpy and I don't know how often it is used in the real world.
I suppose implementing the backward pass needs attention: the most straightforward solution is to use `(h, tau)` to compute `(q, r)` and then use the normal logic for `qr_backward`, but there might be faster alternatives.

`full`, `f`
alias for `reduced`, deprecated since numpy 1.8.0

`economic`, `e`
similar to `raw` but it returns only `h` instead of `(h, tau)`. Deprecated since numpy 1.8.0

To summarize:
* `reduced`, `complete` and `r` are straightforward to implement.

* `raw` needs a bit of extra care, but I don't know how high priority it is: since it is used rarely, we might want to not support it right now and maybe implement it in the future?

  * I think we should just leave `full` and `economic` out, and possibly add a note to the docs explaining what you need to use instead
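For reference, here is how the three straightforward modes line up with the old API (usage sketch based on the mapping described above):

```python
import torch

a = torch.randn(5, 3)

q, r = torch.linalg.qr(a, mode='reduced')     # == torch.qr(a, some=True); default
q2, r2 = torch.linalg.qr(a, mode='complete')  # == torch.qr(a, some=False)
res = torch.linalg.qr(a, mode='r')            # R only; the other element of the
                                              # returned pair is an empty tensor
```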

/cc mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47764

Reviewed By: ngimel

Differential Revision: D25708870

Pulled By: mruberry

fbshipit-source-id: c25c70a23a02ec4322430d636542041e766ebe1b

# This is the commit message #40:

Fix errata (#49903)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49903

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25718411

Pulled By: ansley

fbshipit-source-id: 0cc365c5a53077752dc1c5a5c4a65b873baa3604

# This is the commit message #41:

Update gather documentation to allow index.shape[k] <= input.shape[k] rather than ==. (#41887)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41887
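An example of the relaxed rule (illustrative only):

```python
import torch

inp = torch.arange(12).reshape(3, 4)
idx = torch.tensor([[0, 1]])  # index.shape[k] <= input.shape[k] for every k
torch.gather(inp, 1, idx)     # tensor([[0, 1]])
```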

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D22680014

Pulled By: gchanan

fbshipit-source-id: b162fccabc22a1403c0c43c1131f0fbf4689a79d

# This is the commit message #42:

Enable tests using named temp files on Windows (#49640)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49640

Reviewed By: ngimel

Differential Revision: D25681548

Pulled By: malfet

fbshipit-source-id: 0e2b25817c98d749920cb2b4079033a2ee8c1456

# This is the commit message #43:

added fuse_op and list_construct - list_unpack pass

Summary: Added fuse_op and list_construct and list_unpack pass

Test Plan:
jit_graph_opt_test.py
jit_graph_optimizer_test.cc
sparsenn_fused_operator_test.py

Reviewed By: qizzzh

Differential Revision: D25715079

fbshipit-source-id: fa976be53135a83f262b8f2e2eaedadd177f46c4

# This is the commit message #44:

Clean up type annotations in caffe2/torch/nn/modules (#49938)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49938

Test Plan: Sandcastle tests

Reviewed By: xush6528

Differential Revision: D25718705

fbshipit-source-id: 6a9e3e6d17aa458726cd32aa0a71a63c51b601d9

# This is the commit message #45:

[Tensorexpr] Copying header files in tensorexpr dir (#49933)

Summary:
Previously, header files from jit/tensorexpr were not copied; this PR should enable copying.

This will allow other OSS projects like Glow to use TE.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49933

Reviewed By: Krovatkin, mruberry

Differential Revision: D25725927

Pulled By: protonu

fbshipit-source-id: 9d5a0586e9b73111230cacf044cd7e8f5c600ce9

# This is the commit message #46:

Clean up some type annotations in caffe2/torch/quantization (#49942)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49942

Upgrades type annotations from Python2 to Python3

Test Plan: Sandcastle tests

Reviewed By: vkuzo

Differential Revision: D25717551

fbshipit-source-id: 1b63dc485ecf6641641b05f7ce095ae1d2d87346

# This is the commit message #47:

Revert D25718705: Clean up type annotations in caffe2/torch/nn/modules

Test Plan: revert-hammer

Differential Revision:
D25718705 (https://github.com/pytorch/pytorch/commit/891759f8609f300203d41cccc7337089b38858bd)

Original commit changeset: 6a9e3e6d17aa

fbshipit-source-id: 1a4ef0bfdec8eb8e7ce149bfbdb34a4ad8d964b6

# This is the commit message #48:

added List as an option to the unflattened_size (#49838)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/49743

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49838

Reviewed By: mruberry

Differential Revision: D25727971

Pulled By: ngimel

fbshipit-source-id: 60142dae84ef107f0083676a2a78ce6b0472b7e1

# This is the commit message #49:

Fix auto exponent issue for torch.pow (#49809)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49809

Fixes https://github.com/pytorch/xla/issues/2688 #46936

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D25724176

Pulled By: anjali411

fbshipit-source-id: 16287a1f481e9475679b99d6fb45de840da225be

# This is the commit message #50:

Adding JIT support for cuda streams and events (#48020)

Summary:
=======

This PR addresses the following:

 * Adds JIT support for CUDA Streams
 * Adds JIT support for CUDA Events
 * Adds JIT support for CUDA Stream context manager
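For orientation, the eager-mode stream/event API being surfaced here looks like the sketch below; the scripted form is what this PR adds (see the test command that follows), and the example itself is ours:

```python
import torch

s = torch.cuda.Stream()
e = torch.cuda.Event(enable_timing=True)

with torch.cuda.stream(s):       # run work on a non-default stream
    x = torch.randn(1000, device='cuda')
    y = x * 2
    e.record()                   # record an event on the current stream

torch.cuda.current_stream().wait_event(e)  # order later work after the event
```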

Testing:
======

python test/test_jit.py -v TestCUDA

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48020

Reviewed By: navahgar

Differential Revision: D25725749

Pulled By: nikithamalgifb

fbshipit-source-id: b0addeb49630f8f0c430ed7badeca43bb9d2535c

# This is the commit message #51:

Remove THPWrapper (#49871)

Summary:
Remove `THPWrapper` from PyTorch C code since it is not used anymore; because we have dropped Python 2 compatibility, its usage can be replaced by capsule objects (`PyCapsule_New`, `PyCapsule_CheckExact`, `PyCapsule_GetPointer` and `PyCapsule_GetDestructor`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49871

Reviewed By: mruberry

Differential Revision: D25715038

Pulled By: albanD

fbshipit-source-id: cc3b6f967bbe0dc42c692adf76dff4e4b667fdd5

# This is the commit message #52:

Enable test_fusions TanhQuantize (#49970)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49970

enable test_fusions:test_tanhquantize

Test Plan: https://internalfb.com/intern/testinfra/testrun/6755399469176694

Reviewed By: hyuen

Differential Revision: D25732684

fbshipit-source-id: b8479e43b5248ba5510f0c78c993d534d3ffc2b0

# This is the commit message #53:

[numpy] `torch.rsqrt` : promote integer inputs to float (#47909)

Summary:
Reference https://github.com/pytorch/pytorch/issues/42515

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47909

Reviewed By: ngimel

Differential Revision: D25730876

Pulled By: mruberry

fbshipit-source-id: c87a8f686e1dd64e511640e0278021c4a584ccf2

# This is the commit message #54:

Accept input tensor with 0-dim batch size for MultiLabelMarginLoss (#46975)

Summary:
Fix for one of the layers listed in https://github.com/pytorch/pytorch/issues/12013 or https://github.com/pytorch/pytorch/issues/38115

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46975

Reviewed By: mruberry

Differential Revision: D25719980

Pulled By: ngimel

fbshipit-source-id: 83414bad37c0b004bc7cced04df8b9c89bdba3e6

# This is the commit message #55:

Fix a KaTeX crash and many docstring issues (#49684)

Summary:
The first commit fixes the `MultiheadAttention` docstrings, which are causing a cryptic KaTeX crash.

The second commit fixes many documentation issues in `torch/_torch_docs.py`, and closes gh-43667 (missing "Keyword arguments" headers). It also fixes a weird duplicate docstring for `torch.argmin`; there's more of these, it looks like they were written based on whether the C++ implementation has an overload. That makes little sense to a Python user though, and the content is simply duplicate.

The `Shape:` heading for https://pytorch.org/docs/master/generated/torch.nn.MultiheadAttention.html looked bad, here's what it looks like with this PR:

[screenshot: https://user-images.githubusercontent.com/98330/102797488-09a44e00-43b0-11eb-8788-acdf4e936f2f.png]

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49684

Reviewed By: ngimel

Differential Revision: D25730909

Pulled By: mruberry

fbshipit-source-id: d25bcf8caf928e7e8e918017d119de12e10a46e9

# This is the commit message #56:

Remove incorrect usage of layout(std430) on uniform buffers, which is now correctly treated as an error in the latest release of the Vulkan SDK. (#49572)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49572

Differential Revision: D25729888

Test Plan: Imported from OSS

Reviewed By: SS-JIA

Pulled By: AshkanAliabadi

fbshipit-source-id: 15dd4acef3dfae72f03e7e3085b1ff5936becf3d

# This is the commit message #57:

quant docs: add common errors section (#49902)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49902

Adds a common errors section, and details the two errors
we see often on the discuss forums, with recommended solutions.

Test Plan: build the docs on Mac OS; the new section renders correctly.

Reviewed By: supriyar

Differential Revision: D25718195

Pulled By: vkuzo

fbshipit-source-id: c5ef2b24831d18d57bbafdb82d26d8fbf3a90781

# This is the commit message #58:

[quant] Quantizable LSTM (#49671)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49671

- Introduces the `torch.nn.quantizable` namespace
- Adds the `torch.nn.quantizable.LSTM` module

The point of the `quantizable` namespace is to separate the purely quantized modules from the modules that could be quantized through a normal quantization flow, but are not using the quantized kernels explicitly.
That means the quantizable modules are functionally and numerically equivalent to the FP ones and can be used instead of the FP ones without any loss.

The main difference between the `torch.nn.LSTM` and the `torch.nn.quantizable.LSTM` is that the former one does not support observation for the linear layers, because all the computation is internal to the `aten` namespace.
The `torch.nn.quantizable.LSTM`, however, uses explicit linear layers that can be observed for further quantization.
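A drop-in usage sketch (module path as named above; the constructor signature is assumed to mirror `nn.LSTM`):

```python
import torch

lstm = torch.nn.quantizable.LSTM(input_size=8, hidden_size=16, num_layers=1)
x = torch.randn(5, 3, 8)   # (seq_len, batch, input_size)
out, (h, c) = lstm(x)      # numerically equivalent to nn.LSTM, but the inner
                           # Linear layers can now be observed for quantization
```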

Test Plan: Imported from OSS

Differential Revision: D25663870

Reviewed By: vkuzo

Pulled By: z-a-f

fbshipit-source-id: 70ff5463bd759b9a7922571a5712d3409dfdfa06

# This is the commit message #59:

[PyTorch] Decouple version numbers from c10 and caffe2 targets (#49905)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49905

There's a size regression in model delivery in D25682312. Only the model version numbers are used; however, the dependency on the entire c10 (128 KB) is pulled in.

This diff is to decouple the version numbers into a separate header file, versions.h. Other targets referring only to version numbers can then depend on `caffe2:version_headers`.
ghstack-source-id: 119161467

Test Plan: CI

Reviewed By: xcheng16, guangyfb

Differential Revision: D25716601

fbshipit-source-id: 07634bcf46eacfefa4aa75f2e4c9b9ee30c6929d

# This is the commit message #60:

Revert D25719980: [pytorch][PR] Accept input tensor with 0-dim batch size for MultiLabelMarginLoss

Test Plan: revert-hammer

Differential Revision:
D25719980 (https://github.com/pytorch/pytorch/commit/6b56b71e61e14bf4de5b371f0d8f2f2029065b31)

Original commit changeset: 83414bad37c0

fbshipit-source-id: 27eddd711a2b9e0adbc08bfab12100562e63ac21

# This is the commit message #61:

Improve `torch.flatten` docs and add tests to test_view_ops (#49501)

Summary:
Addresses https://github.com/pytorch/pytorch/issues/39474

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49501

Reviewed By: mruberry

Differential Revision: D25734450

Pulled By: soulitzer

fbshipit-source-id: 993667dd07acd81a4616465e0a3b94bde449193e

# This is the commit message #62:

Fix inf norm grad (reland) (#48611)

Summary:
Reland of https://github.com/pytorch/pytorch/issues/48122

Does this result in a regression? No significant regression observed.

Timer script:
```
import torch
from torch.utils.benchmark import Timer

setup="""
a = torch.rand((2, 2), requires_grad=True)
gradient = torch.ones(2)
"""

stmt="""
torch.autograd.grad(torch.norm(a, dim=(0,), keepdim=False), a, gradient)
"""

timer = Timer(stmt, setup)

print(timer.timeit(10000))
print(timer.collect_callgrind(100))
```
Note: small matrix, keepdim is False, and dims is non-empty

Before change
```
Runtime   37.37 us
1 measurement, 10000 runs , 1 thread

                           All          Noisy symbols removed
    Instructions:     15279045                   15141710
    Baseline:             4257                       3851
100 runs per measurement, 1 thread
```

After change
```
Runtime 36.08 us
1 measurement, 10000 runs , 1 thread

                           All          Noisy symbols removed
    Instructions:     15296974                   15153534
    Baseline:             4257                       3851
100 runs per measurement, 1 thread
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48611

Reviewed By: albanD, mruberry

Differential Revision: D25309997

Pulled By: soulitzer

fbshipit-source-id: 5fb950dc9259234342985c0e84ada25a7e3814d6

# This is the commit message #63:

Revert D25734450: [pytorch][PR] Improve `torch.flatten` docs and add tests to test_view_ops

Test Plan: revert-hammer

Differential Revision:
D25734450 (https://github.com/pytorch/pytorch/commit/730965c246192c94c804e5ac4a95f175dca2fb18)

Original commit changeset: 993667dd07ac

fbshipit-source-id: 603af25311fc8b29bb033167f3b2704da79c3147

# This is the commit message #64:

Remove flops warnings from the default profiler use case (#49896)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49896

Add missing check for with_flops option set

Test Plan:
python test/test_profiler.py
CI

Reviewed By: xuzhao9, ngimel

Differential Revision: D25716930

Pulled By: ilia-cher

fbshipit-source-id: 0da0bbb6c1a52328f665237e503406f877b41449

# This is the commit message #65:

[c10/**] Fix typos (#49815)

Summary:
All pretty minor. I avoided renaming `class DestructableMock` to `class DestructibleMock` and similar such symbol renames (in this PR).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49815

Reviewed By: VitalyFedyunin

Differential Revision: D25734507

Pulled By: mruberry

fbshipit-source-id: bbe8874a99d047e9d9814bf92ea8c036a5c6a3fd

# This is the commit message #66:

Back out "[pytorch][PR] Preserve memory format in qconv op" (#49994)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49994

Revert preserving memory format in qconv op because it is negatively affecting performance; we will revert this revert after fixing all issues

Test Plan: pytest fbcode/caffe2/test/quantization/test_quantized_op.py

Reviewed By: kimishpatel

Differential Revision: D25731279

fbshipit-source-id: 908dbb127210a93b27ada7ccdfa531177edf679a

# This is the commit message #67:

Making ops c10-full: list of optional tensors (#49138)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49138

See for details: https://fb.quip.com/QRtJAin66lPN

We need to model optional types explicitly, mostly for schema inference. So we cannot pass a `Tensor?[]` as `ArrayRef<Tensor>`; instead we need to pass it as an optional type. This PR changes it to `torch::List<c10::optional<Tensor>>`. It also makes the ops that were blocked by this c10-full.

## Backwards Compatibility

- This should not break the Python API because the representation in Python is the same and python_arg_parser just transforms the python list into a `List<optional<Tensor>>` instead of into a `List<Tensor>` (see the sketch after this list).
- This should not break serialized models because there's some logic that allows loading a serialized `List<Tensor>` as `List<optional<Tensor>>`, see https://github.com/pytorch/pytorch/pull/49138/files#diff-9315f5dd045f47114c677174dcaa2f982721233eee1aa19068a42ff3ef775315R57
- This will break backwards compatibility for the C++ API. There is no implicit conversion from `ArrayRef<Tensor>` (which was the old argument type) to `List<optional<Tensor>>`. One common call pattern is `tensor.index({indices_tensor})`, where indices_tensor is another `Tensor`, and that will continue working because the `{}` initializer_list constructor for `List<optional<Tensor>>` can take `Tensor` elements that are implicitly converted to `optional<Tensor>`, but another common call pattern was `tensor.index(indices_tensor)`, where previously, the `Tensor` got implicitly converted to an `ArrayRef<Tensor>`, and to implicitly convert `Tensor -> optional<Tensor> -> List<optional<Tensor>>` would be two implicit conversions, which C++ doesn't allow. So those call sites have to be rewritten to `tensor.index({indices_tensor})`.
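
To illustrate the first point above, from Python nothing changes when indexing with a tensor; a minimal sketch:

```py
import torch

x = torch.ones(4, 4, 4)
indices = torch.tensor([0, 2])
# Still works unchanged: python_arg_parser builds the List[Optional[Tensor]]
# behind the scenes instead of a List[Tensor].
y = x[indices]
```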

ghstack-source-id: 119269131

Test Plan:
## Benchmarks (C++ instruction counts):
### Forward
#### Script
```py
from torch.utils.benchmark import Timer

counts = Timer(
    stmt="""
        auto t = {{op call to measure}};
    """,
    setup="""
        using namespace torch::indexing;
        auto x = torch::ones({4, 4, 4});
    """,
    language="cpp",
).collect_callgrind(number=1_000)
print(counts)
```
#### Results
|  Op call                                                              |before   |after   |delta  |      |
|------------------------------------------------------------------------|---------|--------|-------|------|
|x[0] = 1                                                                |11566015 |11566015|0      |0.00% |
|x.index({0})                                                            |6807019  |6801019 |-6000  |-0.09%|
|x.index({0, 0})                                                         |13529019 |13557019|28000  |0.21% |
|x.index({0, 0, 0})                                                      |10677004 |10692004|15000  |0.14% |
|x.index({"..."})                                                        |5512015  |5506015 |-6000  |-0.11%|
|x.index({Slice(None, None, None)})                                      |6866016  |6936016 |70000  |1.02% |
|x.index({None})                                                         |8554015  |8548015 |-6000  |-0.07%|
|x.index({false})                                                        |22400000 |22744000|344000 |1.54% |
|x.index({true})                                                         |27624088 |27264393|-359695|-1.30%|
|x.index({"...", 0, true, Slice(1, None, 2), torch::tensor({1, 2})})|123472000|123463306|-8694|-0.01%|

### Autograd
#### Script
```py
from torch.utils.benchmark import Timer

counts = Timer(
    stmt="""
        auto t = {{op call to measure}};
    """,
    setup="""
        using namespace torch::indexing;
        auto x = torch::ones({4, 4, 4}, torch::requires_grad());
    """,
    language="cpp",
).collect_callgrind(number=1_000)
print(counts)
```
Note: the script measures the **forward** path of an op call with autograd enabled (i.e. calls into VariableType). It does not measure the backward path.

#### Results
|  Op call                                                              |before   |after   |delta  |      |
|------------------------------------------------------------------------|---------|--------|-------|------|
|x.index({0})                                                            |14839019|14833019|-6000| 0.00% |
|x.index({0, 0})                                                         |28342019|28370019|28000| 0.00% |
|x.index({0, 0, 0})                                                      |24434004|24449004|15000| 0.00% |
|x.index({"..."})                                                       |12773015|12767015|-6000| 0.00% |
|x.index({Slice(None, None, None)})                                      |14837016|14907016|70000| 0.47% |
|x.index({None})                                                        |15926015|15920015|-6000| 0.00% |
|x.index({false})                                                        |36958000|37477000|519000| 1.40% |
|x.index({true})                                                         |41971408|42426094|454686| 1.08% |
|x.index({"...", 0, true, Slice(1, None, 2), torch::tensor({1, 2})}) |168184392|164545682|-3638710| -2.16% |

Reviewed By: bhosmer

Differential Revision: D25454632

fbshipit-source-id: 28ab0cffbbdbdff1c40b4130ca62ee72f981b76d

# This is the commit message #68:

Add type annotations to _tensorboard_vis.py and hipify_python.py (#49834)

Summary:
closes gh-49833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49834

Reviewed By: mruberry

Differential Revision: D25725341

Pulled By: malfet

fbshipit-source-id: 7454c7afe07a3ff829826afe02aba05b7f649d9b

# This is the commit message #69:

Run test_type_hints first (#49748)

Summary:
Since it is sort of a linter check and fails frequently

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49748

Reviewed By: vkuzo

Differential Revision: D25682980

Pulled By: malfet

fbshipit-source-id: 7dba28242dced0277bad56dc887d3273c1e9e575

# This is the commit message #70:

Update update_s3_htmls.yml (#49934)

Summary:
It is currently running on forks and generates a lot of failure messages for the owners of those forks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49934

Reviewed By: mruberry

Differential Revision: D25739552

Pulled By: seemethere

fbshipit-source-id: 0f9cc430316c0a5e9972de3cdd06d225528c81c2

# This is the commit message #71:

Improve `torch.flatten` docs and add tests to test_view_ops (#49501)

Summary:
Addresses https://github.com/pytorch/pytorch/issues/39474

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49501

Reviewed By: mrshenli

Differential Revision: D25740586

Pulled By: soulitzer

fbshipit-source-id: 3d7bdbab91eb208ac9e6832bb766d9d95a00c103

# This is the commit message #72:

move to non-legacy magma v2 headers (#49978)

Summary:
We recently (https://github.com/pytorch/pytorch/issues/7582) dropped magma v1 support, but we were still including the legacy compatibility headers and using functions only provided by them.
This changes the includes to the new magma_v2 header and fixes the triangular solve functions to use the v2-style API that takes a magma_queue.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49978

Reviewed By: mrshenli

Differential Revision: D25752499

Pulled By: ngimel

fbshipit-source-id: 26d916bc5ce63978b341aefb072af228f140637d

# This is the commit message #73:

Enforce c10-fullness for all ops (#49619)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49619

This is a minimal-change PR that enforces that all operators are c10-full by making it the default.

This does not clean up any code yet; that will happen in PRs stacked on top. But this PR already ensures
that there are no non-c10-full ops left and that no non-c10-full ops will be introduced anymore.
ghstack-source-id: 119269182

Test Plan: waitforsandcastle

Reviewed By: bhosmer

Differential Revision: D25650198

fbshipit-source-id: efc53e884cb53193bf58a4834bf148453e689ea1

# This is the commit message #74:

.circleci: Ignore unbound variables for conda (#50053)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50053

For some reason conda likes to re-activate the conda environment when attempting this install,
which means that a deactivate is run and some variables might not exist when that happens,
namely CONDA_MKL_INTERFACE_LAYER_BACKUP from libblas. So let's just ignore unbound variables
when it comes to the conda installation commands.

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Test Plan: Imported from OSS

Reviewed By: samestep

Differential Revision: D25760737

Pulled By: seemethere

fbshipit-source-id: 9e7720eb8a4f8028dbaa7bcfc304e5c1ca73ad08

# This is the commit message #75:

Construct CppSignatureGroup from NativeFunction (#49245)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49245

This will make it easier to implement the POC in
https://github.com/peterbell10/pytorch/commit/d534f7d4c555a37fd178c143098b8537a5a05d61
see also https://github.com/pytorch/pytorch/pull/45666

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D25594005

Pulled By: ezyang

fbshipit-source-id: e458d3dc3a765ec77425761b9b17f23769cecf9e

# This is the commit message #76:

Tighten up error checking on manual_kernel_registration (#49341)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49341

I noticed that #49097 was using manual_kernel_registration incorrectly,
so this diff tightens up the testing so that:

1. We don't generate useless wrapper functions when manual_kernel_registration
is on (it's not going to be registered, so it does nothing).

2. manual_kernel_registration shouldn't affect generation of functions in
Functions.h; if you need to stop bindings, use manual_cpp_binding.

3. Combining structured and manual_kernel_registration is a hard error.

4. We raise an error if you set dispatch and manual_kernel_registration at the
same time.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D25594003

Pulled By: ezyang

fbshipit-source-id: 655b10e9befdfd8bc95f1631b2f48f995a31a59a

# This is the commit message #77:

codegen: Resolve overload ambiguities created by defaulted arguments (#49348)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49348

This is a redux of #45666 post refactor, based off of
https://github.com/peterbell10/pytorch/commit/d534f7d4c555a37fd178c143098b8537a5a05d61
Credit goes to peterbell10 for the implementation.

Fixes #43945.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D25594004

Pulled By: ezyang

fbshipit-source-id: c8eb876bb3348308d6dc8ba7bf091a2a3389450f

# This is the commit message #78:

Move default or no default logic into native.argument (#49489)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49489

Previously, it was done at one use site, but that meant other use
sites didn't get the right logic.  Pushing it in makes sure everyone
gets it.

I also fixed one case of confusion where defn() was used to define a decl().
If you want to define a declaration with no defaults, say no_default().decl(),
which is more direct and will give us code reviewers a clue as to whether you
should have pushed this logic in.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D25595407

Pulled By: ezyang

fbshipit-source-id: 89c664f0ed4d95699794a0d3123d54d0f7e4cba4

# This is the commit message #79:

Make use_c10_dispatcher: full mandatory for structured kernels (#49490)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49490

No reason to let people do the legacy thing for the brand new kernel.
This simplifies the codegen.  I had to port the two structured kernels
to this new format.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D25595406

Pulled By: ezyang

fbshipit-source-id: b5931873379afdd0f3b00a012e0066af05de0a69

# This is the commit message #80:

Add trace batching forward/backward rule (#49979)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49979

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D25734379

Pulled By: ejguan

fbshipit-source-id: 8f9346afaf324e7ab17bafd6ecc97eed8442fd38

# This is the commit message #81:

[pytorch] add threshold_backward batching for vmap (#49881)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49881

title

Test Plan: pytest test/test_vmap.py -v -k "BatchedGrad"

Reviewed By: zou3519

Differential Revision: D25711289

fbshipit-source-id: f1856193249fda70da41e36e15bc26ea7966b510

# This is the commit message #82:

torch.xlogy: Use wrapped_scalar_tensor / gpu_with_scalars to speed up GPU kernel. (#49926)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49926

While investigating https://github.com/pytorch/pytorch/issues/49758, I changed the xlogy kernel to use the recommended wrapped_scalar_tensor pattern instead of moving the scalar to the GPU as a tensor.
While this doesn't avoid a synchronization (there is no synchronization in the move, as it's done via fill), this does significantly speed up the GPU kernel (by almost 50%, benchmark in PR comments).

From looking at the nvprof output, it looks like this code path avoids broadcasting.  Aside: this seems unnecessary, as there is nothing special from the point-of-view of broadcasting whether the Tensor
is ()-sized or marked as a wrapped_scalar.  Still, this is a useful change to make as we avoid extra kernel launches and dispatches to create and fill the tensor.
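
For reference, the affected path is hit whenever one argument of `torch.xlogy` is a Python scalar; a small sketch (run on a CUDA tensor to exercise the GPU kernel discussed above):

```py
import torch

x = torch.rand(1000)      # use torch.rand(1000, device="cuda") for the GPU path
y = torch.xlogy(2.0, x)   # the scalar 2.0 is wrapped host-side, not materialized on the device
z = torch.xlogy(x, 3.0)   # a scalar in the other position takes the same path
```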

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25724215

Pulled By: gchanan

fbshipit-source-id: 4adcd5d8b3297502672ffeafc77e8af80592f460

# This is the commit message #83:

[BE] unified run_process_no_exception code (#49774)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49774

Reviewed By: janeyx99

Differential Revision: D25756811

Pulled By: walterddr

fbshipit-source-id: 4d2b3bd772572764ff96e5aad70323b58393e332

# This is the commit message #84:

prohibit assignment to a sparse tensor (#50040)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/48225 by prohibiting assignment to a sparse Tensor.
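
A small sketch of the now-prohibited pattern (hypothetical values):

```py
import torch

i = torch.tensor([[0, 2]])
v = torch.tensor([1.0, 2.0])
s = torch.sparse_coo_tensor(i, v, (4,))
# s[0] = 5.0  # with this change: raises an error instead of misbehaving silently
```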

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50040

Reviewed By: mrshenli

Differential Revision: D25757125

Pulled By: zou3519

fbshipit-source-id: 3db6f48932eb10bf6ca5e97a6091afcabb60e478

# This is the commit message #85:

Suppress "statement is unreachable" warning (#49495)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49495

Compiling PyTorch currently generates a large number of warnings like this:
```
caffe2/aten/src/ATen/core/builtin_function.h(105): warning: statement is unreachable
```
The offending code
```
  std::string pretty_print_schema() const override {
    TORCH_INTERNAL_ASSERT(false);
    return "";
  }
```
has an unreachable return which prevents a "no return" warning.

We resolve the situation by using NVCC's pragma system to suppress this warning within this function.

Test Plan:
The warning appears when running:
```
buck build mode/dev-nosan //caffe2/torch/fb/sparsenn:test
```
As well as a number of other build commands.

Reviewed By: ngimel

Differential Revision: D25546542

fbshipit-source-id: 71cddd4fdb5fd16022a6d7b2daf0e6d55e6e90e2

# This is the commit message #86:

[ONNX] Handle Sub-block index_put in _jit_pass_onnx_remove_inplace_ops_for_onnx (#48734)

Summary:
For the added UT and existing UTs, this code is independent and ready for review.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48734

Reviewed By: izdeby

Differential Revision: D25502677

Pulled By: bzinodev

fbshipit-source-id: 788b4eaa5e5e8b5df1fb4956fbd25928127bb199

# This is the commit message #87:

Dont inlinine intermediates on cpu (#49565)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49565

Test Plan: Imported from OSS

Reviewed By: Krovatkin, ZolotukhinM

Differential Revision: D25688271

Pulled By: eellison

fbshipit-source-id: 9ea7858e2db4fb31292e04440fc72ee04623c688

# This is the commit message #88:

Drop unused imports from scripts (#49956)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49956

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727347

fbshipit-source-id: 74d0a08aa0cfd0f492688a2b8278a0c65fd1deba

# This is the commit message #89:

Drop unused imports from leftovers (#49953)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49953

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727348

fbshipit-source-id: b3feef80b9b4b535f1bd4060dace5b1a50bd5e69

# This is the commit message #90:

Clean up some type annotations in caffe2/contrib/aten/gen_op (#49945)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49945

Upgrades type annotations from Python2 to Python3

Test Plan: Sandcastle tests

Reviewed By: xush6528

Differential Revision: D25717502

fbshipit-source-id: 718d93e8614e9d050f4da1c6bd4ac892bab98154

# This is the commit message #91:

[ONNX] Modified var_mean symbolic to support more combinations of dims (#48949)

Summary:
In the existing implementation of var_mean, values of dim have to be sequential and start with zero. The formats listed below cause scenarios with incompatible dimensions for the Sub node:
-> dim[1, 2]
-> dim[0, 2]
-> dim[2, 0]

The changes in this PR allow such formats to be supported in var_mean
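
In eager mode these dim combinations already work; the fix is about matching that behavior during ONNX export. A sketch using one of the formats listed above:

```py
import torch

x = torch.randn(2, 3, 4)
var, mean = torch.var_mean(x, dim=[0, 2])  # non-sequential / non-zero-starting dims
print(var.shape, mean.shape)               # torch.Size([3]) torch.Size([3])
```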

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48949

Reviewed By: houseroad

Differential Revision: D25540272

Pulled By: SplitInfinity

fbshipit-source-id: 59813a77ff076d138655cc8c17953358f62cf137

# This is the commit message #92:

introduce a flag to disable aten::cat in TE (#49579)

Summary:
introduce a flag to disable aten::cat in TE

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49579

Reviewed By: eellison

Differential Revision: D25763758

Pulled By: Krovatkin

fbshipit-source-id: c4f4a8220964813202369a3383057e77e7f10cb0

# This is the commit message #93:

Complex backward for indexing, slicing, joining, and mutating ops (#49552)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49552

This PR:
1. Migrates independent autograd test for `hstack`, `dstack`, `vstack`, `movedim`, `moveaxis` from `test_autograd.py` to the new `OpInfo` based tests.
2. Migrates autograd test for `gather`, `index_select` from the method_tests to the new `OpInfo` based tests.
3. Enables complex backward for `stack, gather, index_select, index_add_` and adds tests for complex autograd for all the above mentioned ops.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25682511

Pulled By: anjali411

fbshipit-source-id: 5d8f89db4a9ec340ab99a6196987d44a23e2c6c6

# This is the commit message #94:

[FX] fix Graph python_code return type annotation (#49931)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49931

This fixes #49932. The `maybe_return_annotation` was not being passed by reference, so it was never getting modified.
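
The underlying gotcha is plain Python: rebinding a parameter inside a function is invisible to the caller, so an "out parameter" needs a mutable container. A minimal sketch (the actual fix in `Graph.python_code` may differ in detail):

```py
def set_annotation(ann):
    ann = "-> torch.Tensor"  # rebinds the local name only; the caller sees nothing

def set_annotation_fixed(ann):
    ann[0] = "-> torch.Tensor"  # mutates the shared list, visible to the caller

ann = [""]
set_annotation_fixed(ann)
assert ann[0] == "-> torch.Tensor"
```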

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D25725582

Pulled By: esqu1

fbshipit-source-id: 4136ff169a269d6b98f0b8e14d95d19e7c7cfa71

# This is the commit message #95:

[TensorExpr] Fix LLVM 10 build after LLVM API changes

Summary: Use `llvm::CodeGenFileType` for llvm-10+

Test Plan: local build

Reviewed By: asuhan

Differential Revision: D25694990

fbshipit-source-id: c35d973ef2669929715a94da5dd46e4a0457c4e8

# This is the commit message #96:

unit test for fc parallelization aot (#50056)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50056

buck test //caffe2/caffe2/contrib/fakelowp/test:test_chunkingnnpi -- --fallback-classic

Test Plan: https://our.intern.facebook.com/intern/testinfra/testrun/7036874446100155

Reviewed By: venkatacrc

Differential Revision: D25731079

fbshipit-source-id: 4aa4ffc641659cd90bf4670d28cb43e43ae76dcd

# This is the commit message #97:

Fix return value of _vmap_internals._get_name (#49951)

Summary:
This appears to have been a copy-paste error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49951

Reviewed By: mrshenli

Differential Revision: D25757099

Pulled By: zou3519

fbshipit-source-id: e47cc3b0694645bd0025326bfe45852ef0266adf

# This is the commit message #98:

Fix grammar typo in readme.md (#50000)

Summary:
missing `

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50000

Reviewed By: ezyang

Differential Revision: D25759608

Pulled By: mrshenli

fbshipit-source-id: 4dbe06b8978ae5b2b9b66cde163dab4bd8ee2257

# This is the commit message #99:

Fixing error in Readme.md. (#50033)

Summary:
Fix incorrect command in readme.
Fix incorrect url in readme.
Add url for dockerfile.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50033

Reviewed By: ezyang

Differential Revision: D25759567

Pulled By: mrshenli

fbshipit-source-id: 2a3bc88c8717a3890090ddd0d6657f49d14ff05a

# This is the commit message #100:

Revert D25763758: [pytorch][PR] introduce a flag to disable aten::cat in TE

Test Plan: revert-hammer

Differential Revision:
D25763758 (https://github.com/pytorch/pytorch/commit/9e0b4a96e48132190220820684033a77a92e8a33)

Original commit changeset: c4f4a8220964

fbshipit-source-id: 98775ad9058b81541a010e646b0cf4864854be3e

# This is the commit message #101:

Patch death tests/fork use after D25292667 (part 3)

Summary: (Note: this ignores all push blocking failures!)

Test Plan: unit tests

Differential Revision: D25775357

fbshipit-source-id: 0ae3c59181bc123d763ed9c0d05c536998ae5ca0

# This is the commit message #102:

fixes indices computation for trilinear interpolate backwards (#50084)

Summary:
https://github.com/pytorch/pytorch/issues/48675 had some typos in the indices computations, so results for trilinear interpolation where height is not equal to width were wrong. This PR fixes that.
cc xwang233

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50084

Reviewed By: BIT-silence

Differential Revision: D25777083

Pulled By: ngimel

fbshipit-source-id: 71be545628735fe875b7ea30bf6a09df4f2fae5c

# This is the commit message #103:

Run mypy on more test files (#49658)

Summary:
Improves one annotation for `augment_model_with_bundled_inputs`

Also adds a comment saying not to work on caffe2 type annotations; that's not worth the effort - those ignores can stay as they are.

xref gh-16574

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49658

Reviewed By: heitorschueroff

Differential Revision: D25757721

Pulled By: ezyang

fbshipit-source-id: 44c396d8da9ef3f41b97f9c46a528f0431c4b463

# This is the commit message #104:

Run mypy over test/test_utils.py (#49654)

Summary:
This caught one incorrect annotation in `cpp_extension.load`.

xref gh-16574.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49654

Reviewed By: heitorschueroff

Differential Revision: D25757691

Pulled By: ezyang

fbshipit-source-id: 145ce3ae532cc585d9ca3bbd5381401bad0072e2

# This is the commit message #105:

quant: ensure observers do not crash for empty Tensors (#49800)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49800

Ensures that having a Tensor with 0 elements does not crash observers.
Note: it's illegal to pass Tensors with 0 elements to reductions such
as min and max, so we gate this out before the logic hits min/max.

This should not be hit often in practice, but it's coming up
during debugging of some RCNN models with test inputs.
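
A sketch of the guard (hypothetical helper name; the real change lives inside the observer implementations):

```py
import torch

def min_max_or_skip(x):
    # reductions like torch.min/torch.max raise on empty tensors, so bail out first
    if x.numel() == 0:
        return None, None
    return torch.min(x), torch.max(x)

print(min_max_or_skip(torch.empty(0)))         # (None, None) instead of a RuntimeError
print(min_max_or_skip(torch.tensor([1., 2.])))
```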

Test Plan:
```
python test/test_quantization.py TestObserver.test_zero_numel
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25693230

fbshipit-source-id: d737559697c98bd923356edacba895835060bb38

# This is the commit message #106:

quant: nice error message on convtranspose with per-channel weight (#49899)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49899

The per-channel weight observer is not supported for conv transpose yet. This adds an
error message that fails instantly instead of making the user wait until after
calibration/training finishes.
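
A sketch of how the failure now surfaces early (assuming the default fbgemm qconfig, whose weight observer is per-channel):

```py
import torch

m = torch.nn.ConvTranspose2d(2, 2, kernel_size=3)
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# torch.quantization.prepare(m)  # with this change: fails immediately, not after calibration
```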

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_convtranspose_per_channel_fails_early
python test/test_quantization.py TestQuantizeFx.test_convtranspose_per_channel_fails_early
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25717151

fbshipit-source-id: 093e5979030ec185e3e0d56c45d7ce7338bf94b6

# This is the commit message #107:

quant: throw a nice error message for allclose with quantized inputs (#49802)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49802

Currently `torch.allclose` is not supported with quantized inputs.
Throw a nice error message instead of a cryptic one.
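
A runnable version of the calls in the test plan below (with hypothetical quantization parameters):

```py
import torch

x_fp32 = torch.randn(3)
y_fp32 = torch.randn(3)
torch.allclose(x_fp32, y_fp32)  # supported

x_int8 = torch.quantize_per_tensor(x_fp32, scale=0.1, zero_point=0, dtype=torch.qint8)
y_int8 = torch.quantize_per_tensor(y_fp32, scale=0.1, zero_point=0, dtype=torch.qint8)
# torch.allclose(x_int8, y_int8)  # now raises a clear error message
```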

Test Plan:
```
torch.allclose(x_fp32, y_fp32)

torch.allclose(x_int8, y_int8)
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D25693538

fbshipit-source-id: 8958628433adfca3ae6ce215f3e3ec3c5e29994c

# This is the commit message #108:

eager quant: fix error with removing forward hooks (#49813)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49813

https://github.com/pytorch/pytorch/issues/49739 reports a crash
where removing forward hooks results in a

```
RuntimeError: OrderedDict mutated during iteration
```

Unfortunately I cannot repro this inside the PyTorch module, but the issue
author has a good point and we should not mutate the dict inside
of the iteration.
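
The general pattern being avoided, in miniature:

```py
from collections import OrderedDict

hooks = OrderedDict(a=1, b=2)

# Buggy: removing entries while iterating over the dict itself
#   for k in hooks:
#       del hooks[k]   # RuntimeError: OrderedDict mutated during iteration

# Safe: snapshot the keys first, then mutate
for k in list(hooks.keys()):
    del hooks[k]
```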

Test Plan:
```
// test plan from https://github.com/pytorch/pytorch/pull/46871 which
// originally added this
python test/test_quantization.py TestEagerModeQATOps
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25698725

fbshipit-source-id: 13069d0d5017a84038c8f7be439a3ed537938ac6

# This is the commit message #109:

[JIT] Remove buffer metadata serialization forward-compat gate (#49990)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49990

**Summary**
This commit removes the forward-compatibility gate for buffer metadata
serialization. It was introduced to allow versions of fbcode
binaries statically linked against older versions of PyTorch (without
buffer metadata in JIT) to deserialize archives produced by new versions
of PyTorch. Enough time has probably passed that these old binaries
don't exist anymore, so it should be safe to remove the gate.

**Test Plan**
Internal tests.

Test Plan: Imported from OSS

Reviewed By: xw285cornell

Differential Revision: D25743199

Pulled By: SplitInfinity

fbshipit-source-id: 58d82ab4362270b309956826e36c8bf9d620f081

# This is the commit message #110:

Add an option to disable aten::cat in TE (re-revert) (#50101)

Summary:
This reverts commit ace78ddb6a2bdbf03f08c69767eba57306dd69ed.

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50101

Reviewed By: eellison

Differential Revision: D25784785

Pulled By: Krovatkin

fbshipit-source-id: cbb3d377e03303f6c8c71f4c59c6d90ab40d55f7

# This is the commit message #111:

[distributed] Provide parameter to pass GPU ID in barrier function (#49069)

Summary:
On a multi-GPU node, the mapping between ranks and GPUs can differ.
Provide an optional parameter to specify the GPU device number for the
allreduce operation in the barrier function.

Add test cases to validate barrier device_ids.
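
A minimal sketch of the new parameter (assuming a torchrun-style launcher that sets LOCAL_RANK, and the NCCL backend):

```py
import os
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
local_gpu = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_gpu)
# Pin the barrier's allreduce to this rank's actual GPU, which may differ
# from the rank index on a multi-GPU node.
dist.barrier(device_ids=[local_gpu])
```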

Signed-off-by: Jagadish Krishnamoorthy <jagdish.krishna@gmail.com>

Fixes https://github.com/pytorch/pytorch/issues/48110

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49069

Reviewed By: mrshenli

Differential Revision: D25658528

Pulled By: rohan-varma

fbshipit-source-id: 418198b6224c8c1fd95993b80c072a8ff8f02eec

# This is the commit message #112:

[RPC] Relax some profiling tests (#49983)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49983

We have observed very rare flakiness in some profiling tests recently,
i.e.: . However, we were not able to reproduce these even with thousands of
runs on the CI machines where the failure was originally reported. As a result,
relaxing these tests and re-enabling them to reduce failure rates.
ghstack-source-id: 119352019

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D25739416

fbshipit-source-id: 4dbb6b30f20d3af94ba39f4a7ccf4fb055e440bc

# This is the commit message #113:

support building with conda installed libraries (#50080)

Summary:
This should fix a bunch of shared library compilation errors when libraries are installed in the conda lib or lib64 folders.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50080

Reviewed By: seemethere

Differential Revision: D25781923

Pulled By: walterddr

fbshipit-source-id: 78a74925981d65243b98bb99a65f1f2766e87a2f

# This is the commit message #114:

Fix store based barrier to only use 'add'. (#49930)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49930

Certain store implementations don't work well when we use get() and
add() on the same key. To avoid this issue, we only use add() in the store
based barrier. The buggy store implementations can't be properly fixed due to
legacy reasons.
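
A sketch of the add()-only idea (hypothetical key name; calling add(key, 0) reads the counter without ever calling get()):

```py
import datetime
import torch.distributed as dist

# Rank 0 hosts the store (is_master=True); other ranks connect with is_master=False.
store = dist.TCPStore("127.0.0.1", 29500, 2, True, datetime.timedelta(seconds=30))
store.add("barrier_key", 1)              # each rank increments on arrival
while store.add("barrier_key", 0) < 2:   # poll via add(0) instead of get()
    pass
```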

Test Plan:
1) unit tests.
2) waitforbuildbot

Reviewed By: osalpekar

Differential Revision: D25725386

fbshipit-source-id: 1535e2629914de7f78847b730f8764f92cde67e7

# This is the commit message #115:

[caffe2][a10] Move down pragma pop to properly suppress warning 4522 (#49233)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49233

As the comments on line 160 say, we should suppress this overly aggressive warning with MSVC:
```
caffe2\tensorbody.h_ovrsource#header-mode-symlink-tree-only,headers\aten\core\tensorbody.h(1223): warning C4522: 'at::Tensor': multiple assignment operators specified
```

However, in order to remove the warning, the closing brace of the class must be between the `#pragma warning` push and its corresponding pop. Move the pop down to ensure that.

Test Plan: Built locally using clang for Windows without buck cache, confirmed the warning resolved

Reviewed By: bhosmer

Differential Revision: D25422447

fbshipit-source-id: c1e1c66fb8513af5f9d4e3c1dc48d0070c4a1f84

# This is the commit message #116:

Drop unused imports from caffe2/python (#49980)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49980

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727359

fbshipit-source-id: c4f60005b10546423dc093d31d46deb418352286

# This is the commit message #117:

Update MultiHeadAttention docstring (#49950)

Summary:
Fixes MultiHeadAttention docstring.

Currently, https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html#torch.nn.MultiheadAttention
is

<img width="648" alt="Screen Shot 2020-12-29 at 21 06 43" src="https://user-images.githubusercontent.com/2459423/103311124-cd10cc00-4a19-11eb-89c9-0ee261364963.png">

and with the fix will be

<img width="648" alt="Screen Shot 2020-12-29 at 22 41 35" src="https://user-images.githubusercontent.com/2459423/103315838-0dc31200-4a27-11eb-82e2-ca8f13d713a1.png">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49950

Reviewed By: mrshenli

Differential Revision: D25732573

Pulled By: zhangguanheng66

fbshipit-source-id: b362f3f617ab26b0dd25c3a0a7d4117e522e620c

# This is the commit message #118:

Revert D25757691: [pytorch][PR] Run mypy over test/test_utils.py

Test Plan: revert-hammer

Differential Revision:
D25757691 (https://github.com/pytorch/pytorch/commit/c86cfcd81da46b5e8226441edb58f0b11a97f215)

Original commit changeset: 145ce3ae532c

fbshipit-source-id: 3dfd68f0c42fc074cde15c6213a630b16e9d8879

# This is the commit message #119:

Enable distribution validation if __debug__ (#48743)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/47123
Follows https://github.com/pyro-ppl/pyro/pull/2701

This turns on `Distribution` validation by default. The motivation is to favor beginners by providing helpful error messages. Advanced users focused on speed can disable validation by calling
```py
torch.distributions.Distribution.set_default_validate_args(False)
```
or by disabling individual distribution validation via `MyDistribution(..., validate_args=False)`.

In practice I have found many beginners forget or do not know about validation. Therefore I have [enabled it by default](https://github.com/pyro-ppl/pyro/pull/2701) in Pyro. I believe PyTorch could also benefit from this change. Indeed validation caught a number of bugs in `.icdf()` methods, in tests, and in PPL benchmarks, all of which have been fixed in this PR.
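
For example, the `Categorical.log_prob()` case noted in the release concerns below now fails loudly under the default validation:

```py
import torch

d = torch.distributions.Categorical(probs=torch.tensor([0.4, 0.6]))
d.log_prob(torch.tensor([0, 1]))   # values in the integer support: fine
# d.log_prob(torch.tensor([0.5]))  # with validation on by default: ValueError
```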

## Release concerns
- This may slightly slow down some models. Concerned users may disable validation.
- This may cause new `ValueErrors` in models that rely on unsupported behavior, e.g. `Categorical.log_prob()` applied to continuous-valued tensors (only {0,1}-valued tenso…
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this pull request Sep 20, 2021
Removing cudatoolkit and most sed calls
facebook-github-bot pushed a commit that referenced this pull request Apr 2, 2022
Summary:
X-link: pytorch/pytorch-canary#78

Pull Request resolved: #75039

It didn't match torch.nn.MultiheadAttention. Now it does.
ghstack-source-id: 152815449

Test Plan: updated tests

Reviewed By: zrphercule

Differential Revision: D34929186

fbshipit-source-id: 1eaee615bafd5a6f058f1faefa54f8f4aa01c92e
pytorchmergebot pushed a commit that referenced this pull request Apr 2, 2022
Summary:
X-link: pytorch/pytorch-canary#78

Pull Request resolved: #75039

It didn't match torch.nn.MultiheadAttention. Now it does.
ghstack-source-id: 152815449

Test Plan: updated tests

Reviewed By: zrphercule

Differential Revision: D34929186

fbshipit-source-id: 1eaee615bafd5a6f058f1faefa54f8f4aa01c92e
(cherry picked from commit 00eea72)
hubertlu-tw added a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022
* FusedRMSNorm/"T5LayerNorm" based on FusedLayerNorm (pytorch#1274)

* FusedRMSNorm based on FusedLayerNorm

* refactor duplicated kernels

* delete comments

* delete comments

* cleanup

* cleanup

* cleanup, fixed clobbering forward_affine_mixed_dtypes

* fix pybind naming and add MixedFused test

* undo skipping

* check elementwise_affine

* Update tests/L0/run_fused_layer_norm/test_fused_layer_norm.py

Oof, nice catch, thanks

Co-authored-by: Masaki Kozuki <masaki.kozuki.2014@gmail.com>

Co-authored-by: Masaki Kozuki <masaki.kozuki.2014@gmail.com>

* fix and generate docs for FusedRMSNorm (pytorch#1285)

* [FusedRMSNorm doc] document where epsilon is added (pytorch#1295)

* [FusedRMSNorm doc] add epsilon to formula

* correct

* better wording

* Fix some bugs

* Optimize HostRMSNormGradient and HostApplyRMSNorm for AMD GPUs

* Fix NaN issues in FusedRMSNorm

* Update test_fused_layer_norm.py

* Skip test_fused_layer_norm.TestAutocastFusedRMSNorm on ROCm

* Use at::cuda::warp_size() instead of at::cuda::getCurrentDeviceProperties()->warpSize

Co-authored-by: eqy <eddiey@nvidia.com>
Co-authored-by: Masaki Kozuki <masaki.kozuki.2014@gmail.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
pytorchmergebot pushed a commit that referenced this pull request May 12, 2023
When a tensor is resized, a reference array to its sizes may become invalid. Make a copy in advance.

<details>
<summary>ASAN report</summary>

```
=================================================================
==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390
READ of size 8 at 0x61000013d790 thread T0
    #0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154
    #1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215
    #2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69
    #3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177
    #4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-
v11/bits/stl_algobase.h:1162
    #5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/
stl_algobase.h:1211
    #6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/s
tl_algobase.h:1219
    #7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_alg
obase.h:1556
    #8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188
    #9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341
    #10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/Variab
leTypeManual.cpp:408
    #11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
> >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    #12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::Sy
mInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::Disp
atchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
    #13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    #14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    #15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Ten
sor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)
const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
    #16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c
10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
    #17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144
    #18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847
    #19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pyto
rch/torch/csrc/autograd/VariableTypeManual.cpp:243
    #20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10
::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFu
nctionIntoFunctor.h:13
    #21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor
 const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c
10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor
.h:480
    #22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    #23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    #24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co
nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at
en/src/ATen/core/dispatch/Dispatcher.h:639
    #25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>,
c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
    #26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137
    #27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452
    #28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us
er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417
    #29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419
    #30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344
    #31 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #32 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #33 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #34 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #35 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #36 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #37 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #38 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    #39 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    #40 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    #41 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    #42 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #43 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #44 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #45 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #46 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    #47 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    #48 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    #49 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    #50 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #51 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #52 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #53 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #54 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #55 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #56 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #57 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #58 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #59 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #60 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #61 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #62 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    #63 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #64 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #65 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #66 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #67 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #68 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #69 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #70 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #71 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #72 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #73 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #74 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #75 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #76 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #77 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #78 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    #79 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #80 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #81 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #82 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #83 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #84 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #85 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #86 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #87 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    #88 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #89 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #90 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #91 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #92 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #93 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #94 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #95 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #96 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    #97 0x3ffa2c8ab9b in PyVectorcall_Call Objects/call.c:267
    #98 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    #99 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    #100 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    #101 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #102 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #103 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #104 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #105 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #106 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #107 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    #108 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #109 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #110 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #111 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #112 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #113 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #114 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #115 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #116 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #117 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #118 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #119 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #120 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #121 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #122 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #123 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    #124 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    #125 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    #126 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    #127 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #128 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #129 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #130 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #131 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #132 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #133 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #134 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #135 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #136 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #137 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #138 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #139 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    #140 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #141 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #142 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #143 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #144 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #145 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #146 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #147 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #148 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #149 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    #150 0x3ffa2c8ad17 in _PyObject_Call Objects/call.c:305
    #151 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    #152 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    #153 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #154 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #155 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #156 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #157 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #158 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #159 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #160 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #161 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #162 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #163 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #164 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #165 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    #166 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #167 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #168 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #169 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #170 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #171 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #172 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #173 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    #174 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    #175 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    #176 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    #177 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #178 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #179 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #180 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #181 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #182 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #183 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #184 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #185 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #186 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #187 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #188 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #189 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #190 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #191 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #192 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #193 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #194 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #195 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    #196 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    #197 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    #198 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    #199 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #200 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #201 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #202 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #203 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #204 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #205 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #206 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #207 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #208 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #209 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #210 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #211 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    #212 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #213 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #214 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #215 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #216 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #217 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #218 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #219 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #220 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #221 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    #222 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #223 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #224 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #225 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #226 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #227 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #228 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #229 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #230 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
    #231 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
    #232 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
    #233 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
    #234 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #235 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #236 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #237 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #238 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #239 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #240 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #241 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #242 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #243 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #244 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #245 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #246 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
    #247 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #248 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #249 0x3ffa2e05447 in call_function Python/ceval.c:5891
    #250 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #251 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #252 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
    #253 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #254 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #255 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #256 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
    #257 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215

0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800)
freed by thread T0 here:
    #0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl() /home/user/pytorch/c10/core/TensorImpl.cpp:75

previously allocated by thread T0 here:
    #0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498
    #2 0x3ff76f79e17  (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17)

SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const
Shadow bytes around the buggy address:
  0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa
  0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1115867==ABORTING
```
</details>

<details>
<summary>Additional backtraces (not full)</summary>

Memory deallocation:
```
#0  operator delete (ptr=0x61000013d740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75
#2  0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291
#3  0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370
#4  0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80
#5  0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90
#6  0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173
#7  0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (
    this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#8  c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=...,
    args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
#9  0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
#10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96
#11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
#12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
#13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
#14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
#15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401
```

Memory access:
```
#0  c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215
#1  0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) at /home/user/pytorch/c10/core/SymInt.cpp:69
#2  0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) at /home/user/pytorch/c10/core/SymInt.h:177
#3  0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162
#4  0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211
#5  0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219
#6  0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556
#7  0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188
#8  0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341
#9  0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408
#10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
#12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
#13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
#14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
#15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
#16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
#17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
#18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...)
    at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243
```
</details>
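
Stripped of the dispatcher machinery, the two backtraces above reduce to a non-owning view outliving its owner: `torch::ADInplaceOrView::resize_` compares `self.sym_sizes()`, an `ArrayRef` pointing into the `TensorImpl`'s storage, after the functionalization kernel has already destroyed the tensor that owned that storage. Below is a minimal sketch of the same failure pattern, using hypothetical `View`/`Sizes` stand-ins rather than the real c10 types:

```
// Minimal sketch (hypothetical View/Sizes types, not PyTorch code) of the
// pattern above: a non-owning view borrows a buffer from an owner, the owner
// is destroyed (like ~TensorImpl in the deallocation trace), and a later
// element-wise comparison reads the freed buffer.
#include <cstddef>
#include <vector>

struct View {  // non-owning, analogous to c10::ArrayRef<c10::SymInt>
  const long* data;
  std::size_t len;
};

bool equals(View a, View b) {
  if (a.len != b.len) return false;
  for (std::size_t i = 0; i < a.len; ++i)
    if (a.data[i] != b.data[i]) return false;  // reads through borrowed pointer
  return true;
}

struct Sizes {  // stands in for the TensorImpl-owned size storage
  std::vector<long> v{2, 3};
  View view() const { return {v.data(), v.size()}; }
};

int main() {
  std::vector<long> fresh{2, 3};
  View other{fresh.data(), fresh.size()};
  auto* s = new Sizes();
  View stale = s->view();  // borrows the buffer owned by s->v
  delete s;                // frees that buffer; `stale` now dangles
  return equals(stale, other) ? 0 : 1;  // ASAN: heap-use-after-free (READ)
}
```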
Pull Request resolved: #101064
Approved by: https://github.com/Skylion007, https://github.com/albanD
pytorchmergebot pushed a commit that referenced this pull request May 15, 2023
arguments() returns a reference to a vector member of the object returned by the schema() call.
When the object returned by schema() is destroyed, the vector is deallocated along with it;
its lifetime is not extended.

This issue was detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` with ASAN.
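
A minimal, self-contained sketch of that lifetime pattern and one conventional fix (binding the returned object to a named local so the vector outlives its uses). The `Schema`/`schema()` pair is a hypothetical stand-in, not the real c10 API:

```
// Minimal sketch of the lifetime bug described above: arguments() hands out
// a reference to a member of whatever object schema() returned, so when that
// object is a temporary, the reference dangles as soon as the full
// expression ends.
#include <string>
#include <vector>

struct Schema {
  std::vector<std::string> args;
  const std::vector<std::string>& arguments() const { return args; }
};

Schema schema() {  // returns by value: a temporary at the call site
  return Schema{{"self", "other"}};
}

int main() {
  // Buggy: the temporary Schema dies at the semicolon, so `bad` dangles.
  // (Lifetime extension does not apply through the arguments() call.)
  const auto& bad = schema().arguments();
  (void)bad;  // iterating `bad` here would be a heap-use-after-free

  // Fix: bind the owner to a named local so the vector outlives every use.
  const auto s = schema();
  for (const auto& a : s.arguments()) {
    (void)a;  // safe: `s` keeps the vector alive for the whole loop
  }
  return 0;
}
```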

<details>
<summary>ASAN output</summary>

```
==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8
READ of size 8 at 0x60d0005a5790 thread T0
    #0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028
    #1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821
    #2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617
    #3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
    #4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
    #5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
    #6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
    #7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
    #8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543
    #9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #14 0x3ffab105447 in call_function Python/ceval.c:5891
    #15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #17 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142
    #20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #23 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #25 0x3ffab105447 in call_function Python/ceval.c:5891
    #26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #34 0x3ffab105447 in call_function Python/ceval.c:5891
    #35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #41 0x3ffab105447 in call_function Python/ceval.c:5891
    #42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #50 0x3ffab105447 in call_function Python/ceval.c:5891
    #51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #59 0x3ffab105447 in call_function Python/ceval.c:5891
    #60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267
    #67 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #80 0x3ffab105447 in call_function Python/ceval.c:5891
    #81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #87 0x3ffab105447 in call_function Python/ceval.c:5891
    #88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #102 0x3ffab105447 in call_function Python/ceval.c:5891
    #103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #109 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #111 0x3ffab105447 in call_function Python/ceval.c:5891
    #112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #119 0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305
    #120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #128 0x3ffab105447 in call_function Python/ceval.c:5891
    #129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #137 0x3ffab105447 in call_function Python/ceval.c:5891
    #138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #152 0x3ffab105447 in call_function Python/ceval.c:5891
    #153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #159 0x3ffab105447 in call_function Python/ceval.c:5891
    #160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #166 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #170 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #174 0x3ffab105447 in call_function Python/ceval.c:5891
    #175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #183 0x3ffab105447 in call_function Python/ceval.c:5891
    #184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #194 0x3ffab105447 in call_function Python/ceval.c:5891
    #195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #209 0x3ffab105447 in call_function Python/ceval.c:5891
    #210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #218 0x3ffab105447 in call_function Python/ceval.c:5891
    #219 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #222 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #229 0x3ffab105447 in call_function Python/ceval.c:5891
    #230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #236 0x3ffab105447 in call_function Python/ceval.c:5891
    #237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #243 0x3ffab105447 in call_function Python/ceval.c:5891
    #244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290

0x60d0005a5790 is located 80 bytes inside of 136-byte region [0x60d0005a5740,0x60d0005a57c8)
freed by thread T0 here:
    #0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145

previously allocated by thread T0 here:
    #0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
    #2 0x3fff5849ecf  ([stack]+0xb2ecf)

SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&)
Shadow bytes around the buggy address:
  0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
  0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd
  0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
  0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa
  0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1134126==ABORTING
```

Additional backtraces (not full):
Allocation:
```
#0  __memset_z196 () at ../sysdeps/s390/memset-z900.S:144
#1  0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>,
    stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599
#2  0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW)
    at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039
#3  0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
#4  0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
#5  0x000003ff414042a0 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=...,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464
#6  0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98
#7  0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648
#8  0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342
#9  0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (
    this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409
#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=...,
    __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862
#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878
#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...},
    field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769
#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...})
    at /home/user/pytorch/aten/src/ATen/core/type.cpp:725
#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383
#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781
#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>,
    args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
```

Deallocation:
```
#0  operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020,
    __p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
#2  0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate (
    __a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496
#3  0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr (
    this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74
#4  0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538
#5  0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184
#6  0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#7  0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#8  0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#9  0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348
#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168
#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
```
</details>
Pull Request resolved: #101400
Approved by: https://github.com/zou3519
jcaip pushed a commit that referenced this pull request May 23, 2023
arguments() returns a vector member of the object returned by the schema() call.
When the object returned by schema() is destroyed, the vector is deallocated as well;
its lifetime is not extended.

This issue was detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` with ASAN.
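
A minimal C++ sketch of the lifetime pattern (hypothetical, simplified stand-in types rather than the actual c10/torch classes):

```cpp
#include <string>
#include <vector>

struct Argument { std::string name; };

struct FunctionSchema {
  std::vector<Argument> args;
  const std::vector<Argument>& arguments() const { return args; }
};

// Stand-in for the mobile Method API: schema() returns its result by value.
struct Method {
  FunctionSchema schema() const { return FunctionSchema{{{"self"}}}; }
};

int main() {
  Method m;
  // BUG: the temporary FunctionSchema is destroyed at the end of this
  // statement, so `args` dangles and iterating over it is a use-after-free.
  const std::vector<Argument>& args = m.schema().arguments();
  (void)args;

  // FIX: bind the schema to a local so it outlives the vector reference.
  FunctionSchema schema = m.schema();
  const std::vector<Argument>& ok = schema.arguments();
  (void)ok;
  return 0;
}
```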

<details>
<summary>ASAN output</summary>

```
==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8
READ of size 8 at 0x60d0005a5790 thread T0
    #0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028
    #1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821
    #2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617
    #3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
    #4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
    #5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
    #6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
    #7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
    #8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543
    #9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #14 0x3ffab105447 in call_function Python/ceval.c:5891
    #15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #17 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142
    #20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #23 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #25 0x3ffab105447 in call_function Python/ceval.c:5891
    #26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #34 0x3ffab105447 in call_function Python/ceval.c:5891
    #35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #41 0x3ffab105447 in call_function Python/ceval.c:5891
    #42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #50 0x3ffab105447 in call_function Python/ceval.c:5891
    #51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #59 0x3ffab105447 in call_function Python/ceval.c:5891
    #60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267
    #67 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #80 0x3ffab105447 in call_function Python/ceval.c:5891
    #81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #87 0x3ffab105447 in call_function Python/ceval.c:5891
    #88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #102 0x3ffab105447 in call_function Python/ceval.c:5891
    #103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #109 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #111 0x3ffab105447 in call_function Python/ceval.c:5891
    #112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #119 0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305
    #120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #128 0x3ffab105447 in call_function Python/ceval.c:5891
    #129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #137 0x3ffab105447 in call_function Python/ceval.c:5891
    #138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #152 0x3ffab105447 in call_function Python/ceval.c:5891
    #153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #159 0x3ffab105447 in call_function Python/ceval.c:5891
    #160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #166 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #170 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #174 0x3ffab105447 in call_function Python/ceval.c:5891
    #175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #183 0x3ffab105447 in call_function Python/ceval.c:5891
    #184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #194 0x3ffab105447 in call_function Python/ceval.c:5891
    #195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #209 0x3ffab105447 in call_function Python/ceval.c:5891
    #210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #218 0x3ffab105447 in call_function Python/ceval.c:5891
    #219 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #222 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #229 0x3ffab105447 in call_function Python/ceval.c:5891
    #230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #236 0x3ffab105447 in call_function Python/ceval.c:5891
    #237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #243 0x3ffab105447 in call_function Python/ceval.c:5891
    #244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290

0x60d0005a5790 is located 80 bytes inside of 136-byte region [0x60d0005a5740,0x60d0005a57c8)
freed by thread T0 here:
    #0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145

previously allocated by thread T0 here:
    #0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
    #2 0x3fff5849ecf  ([stack]+0xb2ecf)

SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&)
Shadow bytes around the buggy address:
  0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
  0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd
  0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
  0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa
  0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1134126==ABORTING
```

Additional backtraces (not full):
Allocation:
```
#0  __memset_z196 () at ../sysdeps/s390/memset-z900.S:144
#1  0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>,
    stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599
#2  0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW)
    at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039
#3  0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
#4  0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
#5  0x000003ff414042a0 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=...,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464
#6  0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98
#7  0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648
#8  0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342
#9  0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (
    this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409
#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=...,
    __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862
#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878
#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...},
    field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769
#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...})
    at /home/user/pytorch/aten/src/ATen/core/type.cpp:725
#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383
#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781
#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>,
    args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
```

Deallocation:
```
#0  operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020,
    __p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
#2  0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate (
    __a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496
#3  0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr (
    this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74
#4  0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538
#5  0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184
#6  0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#7  0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#8  0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#9  0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348
#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168
#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
```
</details>
Pull Request resolved: #101400
Approved by: https://github.com/zou3519
pytorchmergebot pushed a commit that referenced this pull request May 26, 2023
Three functions attempt out-of-bounds reads. Disable them until the sleef library is fixed.
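
The report below flags the vector gather `vgather_vf_p_vi2` inside sleef's `rempif` argument reduction: a gather loads one table element per lane, so a single lane index past the end of the lookup table turns the whole load into an out-of-bounds read. A minimal sketch of that failure class (hypothetical `gather4` helper and table size, not sleef's actual code):

```cpp
#include <cstdio>

// Scalar emulation of a 4-lane vector gather: every lane performs its load,
// including lanes whose result the caller later discards, just like the
// hardware instruction.
static void gather4(const float* table, const int idx[4], float out[4]) {
  for (int lane = 0; lane < 4; ++lane)
    out[lane] = table[idx[lane]];  // OOB read if idx[lane] >= table length
}

int main() {
  static const float table[3] = {1.0f, 2.0f, 3.0f};
  const int idx[4] = {0, 1, 2, 3};  // lane 3 indexes one past the end
  float out[4];
  gather4(table, idx, out);  // ASAN: global-buffer-overflow, READ of size 4
  // Typical fixes: clamp or mask the indices, or pad the table by the
  // vector width so stray lanes stay inside the allocation.
  std::printf("%f\n", out[3]);
  return 0;
}
```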

<details>
<summary>ASAN report</summary>

```
=================================================================
==2030580==ERROR: AddressSanitizer: global-buffer-overflow on address 0x03ff70f54570 at pc 0x03ff6704e960 bp 0x03ffce128940 sp 0x03ffce128930
READ of size 4 at 0x03ff70f54570 thread T0
    #0 0x3ff6704e95f in vgather_vf_p_vi2 /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129
    #1 0x3ff6704e95f in rempif /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:550
    #2 0x3ff6704e95f in Sleef_cosf4_u10vxe2 /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:1021
    #3 0x3ff67029cfb in Sleef_cosf4_u10 /home/user/pytorch/build/sleef/src/libm/disps390x_128.c:182
    #4 0x3ff55d21941 in at::vec::ZVECTOR::Vectorized<float, void> at::vec::ZVECTOR::Vectorized<float, void>::mapSleef<float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2)), float, 0>(float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2))) const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:991
    #5 0x3ff5689ad01 in at::vec::ZVECTOR::Vectorized<float, void>::cos() const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:1074
    #6 0x3ff5685df97 in at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}::operator()(at::vec::ZVECTOR::Vectorized<float, void>) const /home/user/pytorch/aten/src/ATen/cpu/vml.h:71
    #7 0x3ff5689b691 in void at::vec::map<float, at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1}, 0>(at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)#1} const&, float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vec/functional_base.h:239
    #8 0x3ff5685e0df in void at::vml::ZVECTOR::vcos<float>(float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vml.h:71
    #9 0x3ff563fdde3 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    #10 0x3ff5648e4a3 in operator() /home/user/pytorch/aten/src/ATen/TensorIterator.h:406
    #11 0x3ff5663cae1 in callback_fn<at::TensorIteratorBase::loop_2d_from_1d<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> >(const at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)>&)::<lambda(char**, const int64_t*, int64_t, int64_t)> > /home/user/pytorch/c10/util/FunctionRef.h:43
    #12 0x3ff4d45a933 in c10::function_ref<void (char**, long const*, long, long)>::operator()(char**, long const*, long, long) const /home/user/pytorch/c10/util/FunctionRef.h:64
    #13 0x3ff4d455133 in at::internal::serial_for_each(c10::ArrayRef<long>, c10::ArrayRef<long>, char**, unsigned long, c10::function_ref<void (char**, long const*, long, long)>, at::Range) /home/user/pytorch/aten/src/ATen/TensorIteratorInternal.h:52
    #14 0x3ff4d43b703 in at::TensorIteratorBase::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:777
    #15 0x3ff4d43ab59 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:749
    #16 0x3ff5648e851 in for_each<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> > /home/user/pytorch/aten/src/ATen/TensorIterator.h:421
    #17 0x3ff563fe5f9 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    #18 0x3ff56400915 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    #19 0x3ff56400f1d in at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&) /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
    #20 0x3ff4f303007 in void at::native::DispatchStub<void (*)(at::TensorIteratorBase&), at::native::cos_stub>::operator()<at::native::structured_cos_out&>(c10::DeviceType, at::native::structured_cos_out&) /home/user/pytorch/aten/src/ATen/native/DispatchStub.h:158
    #21 0x3ff4f2edb3f in at::native::structured_cos_out::impl(at::Tensor const&, at::Tensor const&) /home/user/pytorch/aten/src/ATen/native/UnaryOps.cpp:330
    #22 0x3ff526ef739 in wrapper_CPU_cos /home/user/pytorch/build/aten/src/ATen/RegisterCPU.cpp:4307
    #23 0x3ff52c651d9 in operator() /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    #24 0x3ff52c651d9 in call /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463
    #25 0x3ff5076df2f in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    #26 0x3ff5009a93f in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:103
    #27 0x3ff5009a93f in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:639
    #28 0x3ff5009a93f in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)>::call(at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
    #29 0x3ff5009a93f in at::_ops::cos::call(at::Tensor const&) /home/user/pytorch/build/aten/src/ATen/Operators_0.cpp:2215
    #30 0x3ff7d813741 in at::Tensor::cos() const /home/user/pytorch/build/aten/src/ATen/core/TensorBody.h:2107
    #31 0x3ff7dc0f2b7 in operator() /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2953
    #32 0x3ff7dc0faf7 in THPVariable_cos /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2955
    #33 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    #34 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #35 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #36 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    #37 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #38 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #39 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #40 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #41 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #42 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #43 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #44 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/torch/csrc/utils/python_dispatch.cpp:175
    #45 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
    #46 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
    #47 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
    #48 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:43
    #49 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:691
    #50 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
    #51 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
    #52 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
    #53 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
    #54 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
    #55 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
    #56 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:590
    #57 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
    #58 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
    #59 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
    #60 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    #61 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    #62 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    #63 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    #64 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    #65 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    #66 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    #67 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #68 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #69 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    #70 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #71 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #72 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #73 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #74 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    #75 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    #76 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    #77 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #78 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #79 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #80 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #81 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #82 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #83 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #84 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #85 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    #86 0x3ffa5feb289 in call_function Python/ceval.c:5891
    #87 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #88 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #89 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #90 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #91 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #92 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #93 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #94 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #95 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #96 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #97 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #98 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #99 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #100 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #101 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #102 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/torch/csrc/utils/python_dispatch.cpp:175
    #103 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::operator()(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
    #104 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)#1}::_FUN(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
    #105 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
    #106 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:43
    #107 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:691
    #108 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
    #109 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
    #110 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
    #111 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
    #112 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
    #113 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
    #114 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:590
    #115 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
    #116 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
    #117 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
    #118 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    #119 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    #120 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    #121 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    #122 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    #123 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    #124 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    #125 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #126 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #127 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    #128 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #129 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #130 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #131 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #132 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    #133 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    #134 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    #135 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #136 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #137 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #138 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #139 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #140 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #141 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #142 0x3ffa5e87d2b in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #143 0x3ffa5e882dd in method_vectorcall Objects/classobject.c:83
    #144 0x3ffa5e836d3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #145 0x3ffa5e84b6f in _PyObject_CallFunctionVa Objects/call.c:485
    #146 0x3ffa5e84f2d in callmethod Objects/call.c:557
    #147 0x3ffa5e85039 in PyObject_CallMethod Objects/call.c:577
    #148 0x3ff7f7efa05 in torch::handle_torch_function_no_python_arg_parser(c10::ArrayRef<pybind11::handle>, _object*, _object*, char const*, _object*, char const*, torch::TorchFunctionName) /home/user/pytorch/torch/csrc/utils/python_arg_parser.cpp:338
    #149 0x3ff7eb09b67 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:827
    #150 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
    #151 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
    #152 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
    #153 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
    #154 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
    #155 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
    #156 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
    #157 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #158 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #159 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
    #160 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #161 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #162 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #163 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #164 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    #165 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    #166 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    #167 0x3ffa5e84027 in _PyObject_MakeTpCall Objects/call.c:215
    #168 0x3ffa5fd767b in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #169 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    #170 0x3ffa5feb289 in call_function Python/ceval.c:5891
    #171 0x3ffa5fe5ad1 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #172 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #173 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #174 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #175 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #176 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    #177 0x3ffa5feb289 in call_function Python/ceval.c:5891
    #178 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #179 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #180 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #181 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #182 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    #183 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #184 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #185 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #186 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #187 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #188 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #189 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #190 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #191 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #192 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #193 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #194 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #195 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #196 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #197 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #198 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #199 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #200 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #201 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #202 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #203 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #204 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #205 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #206 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
    #207 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #208 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #209 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #210 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #211 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #212 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #213 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #214 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
    #215 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
    #216 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
    #217 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
    #218 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #219 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #220 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #221 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #222 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #223 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #224 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #225 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
    #226 0x3ffa5feb289 in call_function Python/ceval.c:5891
    #227 0x3ffa5fe5b21 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #228 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #229 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #230 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #231 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    #232 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #233 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #234 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #235 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #236 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #237 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #238 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #239 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    #240 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #241 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #242 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #243 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #244 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #245 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #246 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #247 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
    #248 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
    #249 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
    #250 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
    #251 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #252 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #253 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
    #254 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
    #255 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267

0x03ff70f54570 is located 0 bytes to the right of global variable 'Sleef_rempitabsp' defined in '/home/user/pytorch/third_party/sleef/src/libm/rempitab.c:986:34' (0x3ff70f53f00) of size 1648
SUMMARY: AddressSanitizer: global-buffer-overflow /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 in vgather_vf_p_vi2
Shadow bytes around the buggy address:
  0x10007fee1ea850: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea860: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea870: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea890: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10007fee1ea8a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[f9]f9
  0x10007fee1ea8b0: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10007fee1ea8f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==2030580==ABORTING
```
</details>
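
The `[f9]` byte bracketed in the shadow dump is the global-redzone shadow byte for the faulting address. As a sanity check, ASan's standard shadow mapping (one shadow byte per 8 application bytes, plus a platform shadow offset; the offset below, `1ULL << 52`, is the one implied by the shadow addresses in this dump) reproduces exactly that byte's address. This is a minimal sketch of the calculation, not part of the report:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* One shadow byte covers 8 application bytes; the offset is the
     * one implied by the shadow addresses in the dump above. */
    const uint64_t kShadowOffset = 1ULL << 52;       /* 0x10000000000000 */
    const uint64_t fault_addr    = 0x3ff70f54570ULL; /* from the report  */
    uint64_t shadow = (fault_addr >> 3) + kShadowOffset;
    /* Prints 0x10007fee1ea8ae -- the bracketed f9 (global redzone)
     * byte in the shadow dump. */
    printf("shadow byte at 0x%" PRIx64 "\n", shadow);
    return 0;
}
```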

It reproduces when running `pytest -v test/test_ops.py -k test_python_ref__refs_cos_cpu_bfloat16` under AddressSanitizer on s390x.
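
For context, the failure class is an out-of-bounds gather: an index computed for the `Sleef_rempitabsp` lookup table lands one element past its end. The following self-contained illustration uses a hypothetical table name and element type; only the 1648-byte size is taken from the report:

```c
#include <stdio.h>

/* Hypothetical stand-in for the 1648-byte global table from the
 * report (412 four-byte elements; 412 * 4 == 1648). */
static const float table[412];

float gather_one(int idx) {
    /* idx == 412 reads "0 bytes to the right" of the table: ASan
     * flags the load as a global-buffer-overflow even when the raw
     * read happens to return without crashing. */
    return table[idx];
}

int main(void) {
    printf("%f\n", gather_one(412)); /* out-of-bounds read */
    return 0;
}
```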

See also: shibatch/sleef#464

Pull Request resolved: #102266
Approved by: https://github.com/malfet