TypeError: 'tuple' object is not callable #188

Closed
halfslicestudio opened this issue Jul 31, 2022 · 8 comments

Comments

@halfslicestudio commented Jul 31, 2022

Describe the bug
Everything runs fine up until the point of actually starting training, then I get the error in the title: "TypeError: 'tuple' object is not callable".

To Reproduce
Start training with a custom 256x256 dataset.

ERROR

Traceback (most recent call last):
File "stylegan3/train.py", line 286, in
main() # pylint: disable=no-value-for-parameter
File "/usr/lib/python3/dist-packages/click/core.py", line 764, in call
return self.main(*args, **kwargs)
File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "stylegan3/train.py", line 281, in main
launch_training(c=c, desc=desc, outdir=opts.outdir, dry_run=opts.dry_run)
File "stylegan3/train.py", line 96, in launch_training
subprocess_fn(rank=0, c=c, temp_dir=temp_dir)
File "stylegan3/train.py", line 47, in subprocess_fn
training_loop.training_loop(rank=rank, **c)
File "/home/ubuntu/ai4/stylegan3/training/training_loop.py", line 278, in training_loop
loss.accumulate_gradients(phase=phase.name, real_img=real_img, real_c=real_c, gen_z=gen_z, gen_c=gen_c, gain=phase.interval, cur_nimg=cur_nimg)
File "/home/ubuntu/ai4/stylegan3/training/loss.py", line 81, in accumulate_gradients
loss_Gmain.mean().mul(gain).backward()
File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/init.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 253, in apply
return user_fn(self, *args)
File "/home/ubuntu/ai4/stylegan3/torch_utils/ops/grid_sample_gradfix.py", line 52, in backward
grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid)
File "/home/ubuntu/ai4/stylegan3/torch_utils/ops/grid_sample_gradfix.py", line 63, in forward
grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False, output_mask)
TypeError: 'tuple' object is not callable
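
For context: after pytorch/pytorch#76814 (PyTorch 1.11 and later), torch._C._jit_get_operation returns a (callable, overload_names) tuple instead of the bare operator, so the unpatched lookup in torch_utils/ops/grid_sample_gradfix.py binds op to a tuple and calling it fails. A minimal sketch that reproduces the symptom on a recent PyTorch install (illustration only, not part of the repo):

import torch

# On PyTorch >= 1.11 this returns a (callable, overload_names) tuple,
# not a bare callable, so op(...) raises "'tuple' object is not callable".
op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
print(type(op))  # <class 'tuple'> on newer releases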

@Zhuo-Feng

Hello, you may try PyTorch 1.10.0 to solve this problem. The command is: !pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio===0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

@hai-kreate

Another potential fix is to extract op correctly in your local setup:

-op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
+op, _ = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
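
A version-tolerant variant of the same lookup is sketched below (an illustration, not the upstream patch): it unpacks the tuple only when the newer API is in use, so the file keeps working on older PyTorch releases as well.

import torch

# Newer PyTorch (>= 1.11, after pytorch/pytorch#76814) returns
# (callable, overload_names); older releases return the callable directly.
_ret = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
op = _ret[0] if isinstance(_ret, tuple) else _ret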

@hamediut

Another potential fix is to extract op correctly in your local setup:

-op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
+op, _ = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')

This worked for me on PyTorch 1.12.0, thank you :)

@exceedzhang

Hello, you may try PyTorch 1.10.0 to solve this problem. The command is: !pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio===0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

The system runs normally after reinstalling PyTorch 1.10.0. Thank you!

CUDA 11.3

conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=11.3 -c pytorch -c conda-forge

@benx13 commented Apr 10, 2023

Another potential fix is to extract op correctly in your local setup:

-op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
+op, _ = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')

This fixed it for me on torch==1.13.1. Thanks.

@xfiax commented Apr 15, 2023

Another potential fix is to extract op correctly in your local setup:

-op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
+op, _ = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')

Hi, I'm using Colab. How can I apply this method? I tried installing PyTorch 1.10.0, but it doesn't solve the issue for me.
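
One way to apply the one-line patch from a Colab cell is to rewrite grid_sample_gradfix.py in place before launching training. A rough sketch, assuming the repository was cloned to /content/stylegan3 (adjust the path to your checkout):

from pathlib import Path

# Hypothetical helper cell: swap the old lookup for the tuple-unpacking one.
path = Path('/content/stylegan3/torch_utils/ops/grid_sample_gradfix.py')
src = path.read_text()
src = src.replace(
    "op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')",
    "op, _ = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')",
)
path.write_text(src)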

nurpax pushed a commit that referenced this issue Apr 26, 2023
Adapt to newer _jit_get_operation API that changed in
pytorch/pytorch#76814

for #188, #193
@jannehellsten (Contributor)

Should be fixed by c233a91. Sorry for the inconvenience.

@changbuyuan

Hello, you may try PyTorch 1.10.0 to solve this problem. The command is: !pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio===0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

Sorry, I tried this method and re-ran train.py, but the same error still appeared.

Traceback (most recent call last):
File "/content/drive/MyDrive/colab-sg3/stylegan3/train.py", line 317, in <module>
main() # pylint: disable=no-value-for-parameter
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/content/drive/MyDrive/colab-sg3/stylegan3/train.py", line 312, in main
launch_training(c=c, desc=desc, outdir=opts.outdir, dry_run=opts.dry_run)
File "/content/drive/MyDrive/colab-sg3/stylegan3/train.py", line 97, in launch_training
subprocess_fn(rank=0, c=c, temp_dir=temp_dir)
File "/content/drive/MyDrive/colab-sg3/stylegan3/train.py", line 48, in subprocess_fn
training_loop.training_loop(rank=rank, **c)
File "/content/drive/MyDrive/colab-sg3/stylegan3/training/training_loop.py", line 279, in training_loop
loss.accumulate_gradients(phase=phase.name, real_img=real_img, real_c=real_c, gen_z=gen_z, gen_c=gen_c, gain=phase.interval, cur_nimg=cur_nimg)
File "/content/drive/MyDrive/colab-sg3/stylegan3/training/loss.py", line 81, in accumulate_gradients
loss_Gmain.mean().mul(gain).backward()
File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 274, in apply
return user_fn(self, *args)
File "/content/drive/MyDrive/colab-sg3/stylegan3/torch_utils/ops/grid_sample_gradfix.py", line 50, in backward
grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid)
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/content/drive/MyDrive/colab-sg3/stylegan3/torch_utils/ops/grid_sample_gradfix.py", line 59, in forward
grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
TypeError: 'tuple' object is not callable

phcerdan pushed a commit to phcerdan/stylegan3 that referenced this issue Nov 25, 2023
Adapt to newer _jit_get_operation API that changed in
pytorch/pytorch#76814

for NVlabs#188, NVlabs#193
whatsnewsisyphus added a commit to whatsnewsisyphus/stylegan3-fun that referenced this issue Feb 22, 2024
The code does not work with any currently available stable version of PyTorch, meaning it breaks on Colab out of the box. The fix is simple:

From: NVlabs@c233a91

Adapt to newer _jit_get_operation API that changed in
pytorch/pytorch#76814

for NVlabs#188, NVlabs#193