When trying to use the Google Colab demo to infer custom images, I encountered the following error:
```python
# Inference the uploaded images
#@markdown `DDPM_STEPS`: Number of DDPM steps for sampling<br>
DDPM_STEPS = 200 #@param {type:"slider", min:10, max:1000, step:10}
#@markdown `FIDELITY_WEIGHT`: Balance the quality (lower number) and fidelity (higher number)<br>
FIDELITY_WEIGHT = 0.97 #@param {type:"slider", min:0, max:1, step:0.01}
#@markdown `UPSCALE`: The upscale for super-resolution, 4x SR by default<br>
UPSCALE = 4.0 #@param {type:"slider", min:1.0, max:16.0, step:0.5}
#@markdown `SEED`: The random seed for sampling<br>
SEED = 42 #@param {type:"slider", min:0, max:10000, step:1}
#@markdown `TILE_OVERLAP`: The overlap between tiles, between 0 to 64<br>
TILE_OVERLAP = 32 #@param {type:"slider", min:0, max:60, step:2}
#@markdown `VQGANTILE_SIZE`: The size for VQGAN tile operation in pixel, min 512.<br>
VQGANTILE_SIZE = 1280 #@param {type:"slider", min:512, max:2000, step:2}
#@markdown `Aggregation_Sampling`: Use Aggregation Sampling if the expected resolution is not 512x512<br>
Aggregation_Sampling = False #@param {type:"boolean"}
#@markdown `Enable_Tile`: Enable tile to handle large resolution beyond 1024x1024<br>
Enable_Tile = False #@param {type:"boolean"}

VQGANTILE_STRIDE = int(VQGANTILE_SIZE * 0.9)

if Enable_Tile:
    !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --vqgantile_stride {VQGANTILE_STRIDE} --vqgantile_size {VQGANTILE_SIZE} --colorfix_type 'adain'
elif Aggregation_Sampling:
    !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --colorfix_type 'adain'
else:
    !python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --seed {SEED} --colorfix_type 'adain'
```
```
Traceback (most recent call last):
  File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py", line 7, in <module>
    import torchvision
  File "/usr/local/lib/python3.10/site-packages/torchvision/__init__.py", line 6, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils
  File "/usr/local/lib/python3.10/site-packages/torchvision/_meta_registrations.py", line 164, in <module>
    def meta_nms(dets, scores, iou_threshold):
  File "/usr/local/lib/python3.10/site-packages/torch/library.py", line 440, in inner
    handle = entry.abstract_impl.register(func_to_register, source)
  File "/usr/local/lib/python3.10/site-packages/torch/_library/abstract_impl.py", line 30, in register
    if torch._C._dispatch_has_kernel_for_dispatch_key(self.qualname, "Meta"):
RuntimeError: operator torchvision::nms does not exist
```
Any help in resolving this issue would be greatly appreciated.
I guess the installed torchvision version does not match the installed torch version?
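The `operator torchvision::nms does not exist` error is typically raised when torch and torchvision come from different release lines (e.g. a Colab runtime upgrading torch underneath a pinned torchvision). As a minimal sketch of how one might sanity-check the pair before running inference, the helper below compares minor versions against a small mapping of known-compatible releases. The `KNOWN_PAIRS` table is an assumption covering only a few recent releases, not an exhaustive compatibility matrix; the torchvision README publishes the authoritative one.

```python
# Hypothetical helper: verify that the installed torch/torchvision pair
# comes from matching release lines before running the StableSR scripts.
# KNOWN_PAIRS is an assumption listing a few torch -> torchvision pairs;
# it is NOT the full official compatibility matrix.
KNOWN_PAIRS = {
    "1.13": "0.14",
    "2.0": "0.15",
    "2.1": "0.16",
    "2.2": "0.17",
}

def minor(version: str) -> str:
    """Return the 'major.minor' prefix of a version like '2.1.0+cu118'."""
    return ".".join(version.split("+")[0].split(".")[:2])

def pair_is_compatible(torch_version: str, torchvision_version: str) -> bool:
    """True if torchvision's minor version is the one built for this torch."""
    expected = KNOWN_PAIRS.get(minor(torch_version))
    return expected is not None and minor(torchvision_version) == expected
```

In a Colab cell one could then check `pair_is_compatible(torch.__version__, torchvision.__version__)` and, if it returns `False`, reinstall a matching pair (for example `!pip install torch==2.1.0 torchvision==0.16.0`, versions assumed for illustration) and restart the runtime.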