
determinism (seeding) not working for transform RandSimulateLowResolutiond #7911

Closed
jwc-rad opened this issue Jul 10, 2024 · 1 comment · Fixed by #8057
Labels
bug Something isn't working

Comments


jwc-rad commented Jul 10, 2024

Describe the bug
Determinism (seeding) is not working for the dictionary transform RandSimulateLowResolutiond, although it does work for the array transform RandSimulateLowResolution.

To Reproduce
Steps to reproduce the behavior:

import torch
from monai.utils import set_determinism
from monai.transforms import (
    Compose,
    EnsureChannelFirst, EnsureChannelFirstd,
    RandSimulateLowResolution, RandSimulateLowResolutiond,
    RandGaussianNoise, RandGaussianNoised,
)

seed = 12345
check_tfms = []
for _ in range(2):
    # seed both PyTorch and MONAI before each run
    torch.manual_seed(seed)
    set_determinism(seed)

    tfm = Compose([
        EnsureChannelFirstd(keys='image', channel_dim=0),
        RandSimulateLowResolutiond(keys='image', prob=1, zoom_range=[0.5, 1],
                                   downsample_mode='nearest', upsample_mode='trilinear'),
    ])

    tx = torch.randn(1, 64, 64, 64)
    ty = tfm({'image': tx})
    check_tfms.append(ty['image'])

# with identical seeding the two outputs should match, so this should print 0
print(((check_tfms[0] - check_tfms[1])**2).sum())

Expected behavior
The code runs the seeded transform twice, so the two results should be identical, but the print shows a non-zero value.

Screenshots
Other transforms, including the array transform RandSimulateLowResolution, produce zero for the same code.
[screenshot of the non-zero output omitted]
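For contrast, a minimal array-transform version of the same check (a sketch adapted from the snippet above; the random input is already channel-first, so the EnsureChannelFirst step is skipped) prints zero:

import torch
from monai.utils import set_determinism
from monai.transforms import Compose, RandSimulateLowResolution

seed = 12345
outs = []
for _ in range(2):
    set_determinism(seed)  # also seeds any Compose created afterwards
    tfm = Compose([
        RandSimulateLowResolution(prob=1.0, zoom_range=[0.5, 1],
                                  downsample_mode='nearest', upsample_mode='trilinear'),
    ])
    outs.append(tfm(torch.randn(1, 64, 64, 64)))  # (C, H, W, D) input

# prints 0 here: the array transform picks up the seed correctly
print(((outs[0] - outs[1])**2).sum())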

Environment

Output of python -c "import monai; monai.config.print_debug_info()":

Printing MONAI config...

MONAI version: 1.3.1
Numpy version: 1.26.4
Pytorch version: 2.2.2+cu118
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 96bfda0
MONAI file: /home//anaconda3/envs/py39torch2/lib/python3.9/site-packages/monai/__init__.py

Optional dependencies:
Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.
ITK version: NOT INSTALLED or UNKNOWN VERSION.
Nibabel version: 5.2.1
scikit-image version: 0.22.0
scipy version: 1.13.1
Pillow version: 10.2.0
Tensorboard version: NOT INSTALLED or UNKNOWN VERSION.
gdown version: 5.2.0
TorchVision version: 0.17.2+cu118
tqdm version: 4.66.4
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: 5.9.0
pandas version: 2.2.2
einops version: 0.8.0
transformers version: NOT INSTALLED or UNKNOWN VERSION.
mlflow version: NOT INSTALLED or UNKNOWN VERSION.
pynrrd version: NOT INSTALLED or UNKNOWN VERSION.
clearml version: NOT INSTALLED or UNKNOWN VERSION.

For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

Printing system config...

System: Linux
Linux version: Ubuntu 22.04.4 LTS
Platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
Processor: x86_64
Machine: x86_64
Python version: 3.9.19
Process name: python
Command: ['python', '-c', 'import monai; monai.config.print_debug_info()']
Open files: [popenfile(path='/home/jwchoi/.vscode-server/data/logs/20240710T132329/ptyhost.log', fd=19, position=2015, mode='a', flags=33793), popenfile(path='/home/jwchoi/.vscode-server/data/logs/20240710T132329/remoteagent.log', fd=24, position=488, mode='a', flags=33793), popenfile(path='/home/jwchoi/.vscode-server/data/logs/20240710T132329/network.log', fd=25, position=0, mode='a', flags=33793)]
Num physical CPUs: 16
Num logical CPUs: 32
Num usable CPUs: 32
CPU usage (%): [9.2, 7.5, 5.2, 5.1, 5.2, 5.2, 5.1, 5.1, 5.2, 37.7, 5.2, 5.1, 5.2, 5.2, 6.0, 5.2, 5.1, 5.1, 5.2, 5.2, 5.2, 6.0, 5.2, 5.9, 5.1, 25.6, 5.2, 5.2, 5.2, 5.2, 6.8, 43.6]
CPU freq. (MHz): 4
Load avg. in last 1, 5, 15 mins (%): [0.1, 0.1, 0.0]
Disk usage (%): 5.2
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 125.0
Available memory (GB): 121.3
Used memory (GB): 2.5

Printing GPU config...

Num GPUs: 1
Has CUDA: True
CUDA version: 11.8
cuDNN enabled: True
NVIDIA_TF32_OVERRIDE: None
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE: None
cuDNN version: 8700
Current device: 0
Library compiled for CUDA architectures: ['sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_37', 'sm_90']
GPU 0 Name: NVIDIA GeForce RTX 4090
GPU 0 Is integrated: False
GPU 0 Is multi GPU board: False
GPU 0 Multi processor count: 128
GPU 0 Total memory (GB): 23.6
GPU 0 CUDA capability (maj.min): 8.9

25benjaminli (Contributor) commented

@jwc-rad I found and resolved the issue: the line self.sim_lowres_tfm.set_random_state(seed, state) was omitted in the dictionary transform (i.e., the wrapped helper transform was never seeded). Submitting a pull request for this; thanks for bringing it up.
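For reference, a workaround along the lines of the fix described above could look like the following sketch (the actual change landed in the linked pull request; sim_lowres_tfm is the internal attribute name quoted in the comment):

from monai.transforms import RandSimulateLowResolutiond

class SeededRandSimulateLowResolutiond(RandSimulateLowResolutiond):
    """Sketch of the fix: forward the random state to the wrapped array transform."""

    def set_random_state(self, seed=None, state=None):
        super().set_random_state(seed, state)
        # the dictionary transform delegates the actual work to an internal
        # RandSimulateLowResolution stored as self.sim_lowres_tfm; seeding it
        # here is the step that was missing upstream
        self.sim_lowres_tfm.set_random_state(seed, state)
        return self

Dropping this subclass into the reproduction above in place of RandSimulateLowResolutiond should make the printed difference zero on affected MONAI versions.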
