2022-09-26 18:01:16,777 - mmflow - INFO - Multi-processing start method is `fork`
2022-09-26 18:01:16,777 - mmflow - INFO - OpenCV num_threads is `20`
2022-09-26 18:01:16,814 - mmflow - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GPU 0: GeForce RTX 2080 Ti
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.8.0
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  - CuDNN 7.6.5
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.9.0
OpenCV: 4.6.0
MMCV: 1.6.1
MMFlow: 0.5.1+cc140b4
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.2
------------------------------------------------------------
2022-09-26 18:01:16,815 - mmflow - INFO - Distributed training: False
2022-09-26 18:01:17,159 - mmflow - INFO - Config:
model = dict(
    type='RAFT',
    num_levels=4,
    radius=4,
    cxt_channels=128,
    h_channels=128,
    encoder=dict(
        type='RAFTEncoder', in_channels=3, out_channels=256, net_type='Basic',
        norm_cfg=dict(type='IN'),
        init_cfg=[
            dict(type='Kaiming', layer=['Conv2d'], mode='fan_out', nonlinearity='relu'),
            dict(type='Constant', layer=['InstanceNorm2d'], val=1, bias=0)
        ]),
    cxt_encoder=dict(
        type='RAFTEncoder', in_channels=3, out_channels=256, net_type='Basic',
        norm_cfg=dict(type='SyncBN'),
        init_cfg=[
            dict(type='Kaiming', layer=['Conv2d'], mode='fan_out', nonlinearity='relu'),
            dict(type='Constant', layer=['SyncBatchNorm2d'], val=1, bias=0)
        ]),
    decoder=dict(
        type='RAFTDecoder', net_type='Basic', num_levels=4, radius=4, iters=12,
        corr_op_cfg=dict(type='CorrLookup', align_corners=True),
        gru_type='SeqConv',
        flow_loss=dict(type='SequenceLoss', gamma=0.85),
        act_cfg=dict(type='ReLU')),
    freeze_bn=True,
    train_cfg=dict(),
    test_cfg=dict(iters=32))
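(Aside on the model block above: with num_levels=4 and radius=4 the correlation lookup yields 4 * (2*4+1)^2 = 324 channels, which matches the Conv2d(324, 256, ...) in the motion encoder printed further down, and SequenceLoss(gamma=0.85) is the exponentially weighted per-iteration L1 loss from the RAFT paper. Below is a minimal sketch of that weighting, assuming the standard RAFT formulation; mmflow's actual implementation also applies a maximum-flow cutoff and other details not shown here.)

```python
# Minimal sketch of SequenceLoss(gamma=0.85) under the usual RAFT formulation:
# an L1 loss per refinement iteration, weighted by gamma**(N - 1 - i) so later
# iterations count more, masked by the sparse KITTI `valid` map.
import torch

def sequence_loss(flow_preds, flow_gt, valid, gamma=0.85):
    """flow_preds: list of (B, 2, H, W); flow_gt: (B, 2, H, W); valid: (B, H, W) float mask."""
    n = len(flow_preds)
    loss = flow_gt.new_zeros(())
    for i, pred in enumerate(flow_preds):
        weight = gamma ** (n - 1 - i)
        l1 = (pred - flow_gt).abs().sum(dim=1)  # per-pixel L1 over (u, v)
        loss = loss + weight * (valid * l1).sum() / valid.sum().clamp(min=1)
    return loss
```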
img_norm_cfg = dict(mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=False)
crop_size = (288, 960)
kitti_train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', sparse=True),
    dict(type='ColorJitter', asymmetric_prob=0.0, brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1592356687898089),
    dict(type='Erase', prob=0.5, bounds=[50, 100], max_num=3),
    dict(type='SpacialTransform', spacial_prob=0.8, stretch_prob=0.8, crop_size=(288, 960), min_scale=-0.2, max_scale=0.4, max_stretch=0.2),
    dict(type='RandomCrop', crop_size=(288, 960)),
    dict(type='Normalize', mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=False),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['imgs', 'flow_gt', 'valid'],
         meta_keys=['filename1', 'filename2', 'ori_filename1', 'ori_filename2', 'filename_flow', 'ori_filename_flow', 'ori_shape', 'img_shape', 'erase_bounds', 'erase_num', 'scale_factor'])
]
kitti_train = dict(
    type='KITTI2015',
    data_root='data/kitti2015',
    pipeline=[
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations', sparse=True),
        dict(type='ColorJitter', asymmetric_prob=0.0, brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1592356687898089),
        dict(type='Erase', prob=0.5, bounds=[50, 100], max_num=3),
        dict(type='SpacialTransform', spacial_prob=0.8, stretch_prob=0.8, crop_size=(288, 960), min_scale=-0.2, max_scale=0.4, max_stretch=0.2),
        dict(type='RandomCrop', crop_size=(288, 960)),
        dict(type='Normalize', mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=False),
        dict(type='DefaultFormatBundle'),
        dict(type='Collect', keys=['imgs', 'flow_gt', 'valid'],
             meta_keys=['filename1', 'filename2', 'ori_filename1', 'ori_filename2', 'filename_flow', 'ori_filename_flow', 'ori_shape', 'img_shape', 'erase_bounds', 'erase_num', 'scale_factor'])
    ],
    test_mode=False)
kitti_test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', sparse=True),
    dict(type='InputPad', exponent=3),
    dict(type='Normalize', mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=False),
    dict(type='TestFormatBundle'),
    dict(type='Collect', keys=['imgs'],
         meta_keys=['flow_gt', 'valid', 'filename1', 'filename2', 'ori_filename1', 'ori_filename2', 'ori_shape', 'img_shape', 'img_norm_cfg', 'scale_factor', 'pad_shape', 'pad'])
]
kitti2015_val_test = dict(
    type='KITTI2015',
    data_root='data/kitti2015',
    pipeline=[
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations', sparse=True),
        dict(type='InputPad', exponent=3),
        dict(type='Normalize', mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=False),
        dict(type='TestFormatBundle'),
        dict(type='Collect', keys=['imgs'],
             meta_keys=['flow_gt', 'valid', 'filename1', 'filename2', 'ori_filename1', 'ori_filename2', 'ori_shape', 'img_shape', 'img_norm_cfg', 'scale_factor', 'pad_shape', 'pad'])
    ],
    test_mode=True)
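(Aside on the test pipeline above: InputPad with exponent=3 reads as padding so that height and width become divisible by 2**3 = 8, matching the 1/8-resolution features RAFT works on. A quick shape check, assuming the usual 375 x 1242 KITTI 2015 resolution, which is not printed in this log:)

```python
# Sketch of the shape effect of InputPad(exponent=3): pad H and W up to the
# next multiple of 2**3 = 8. The padding mode/side is not shown in this log,
# so treat this only as a shape check, not mmflow's exact implementation.
def padded_shape(h, w, exponent=3):
    multiple = 2 ** exponent
    pad = lambda x: (x + multiple - 1) // multiple * multiple
    return pad(h), pad(w)

print(padded_shape(375, 1242))  # (376, 1248) for a typical KITTI 2015 frame
```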
data = dict(
    train_dataloader=dict(samples_per_gpu=2, workers_per_gpu=5, drop_last=True, shuffle=True, persistent_workers=True),
    val_dataloader=dict(samples_per_gpu=1, workers_per_gpu=5, shuffle=False, persistent_workers=True),
    test_dataloader=dict(samples_per_gpu=1, workers_per_gpu=2, shuffle=False),
    train=dict(
        type='KITTI2015',
        data_root='data/kitti2015',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', sparse=True),
            dict(type='ColorJitter', asymmetric_prob=0.0, brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1592356687898089),
            dict(type='Erase', prob=0.5, bounds=[50, 100], max_num=3),
            dict(type='SpacialTransform', spacial_prob=0.8, stretch_prob=0.8, crop_size=(288, 960), min_scale=-0.2, max_scale=0.4, max_stretch=0.2),
            dict(type='RandomCrop', crop_size=(288, 960)),
            dict(type='Normalize', mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=False),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['imgs', 'flow_gt', 'valid'],
                 meta_keys=['filename1', 'filename2', 'ori_filename1', 'ori_filename2', 'filename_flow', 'ori_filename_flow', 'ori_shape', 'img_shape', 'erase_bounds', 'erase_num', 'scale_factor'])
        ],
        test_mode=False),
    val=dict(
        type='KITTI2015',
        data_root='data/kitti2015',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', sparse=True),
            dict(type='InputPad', exponent=3),
            dict(type='Normalize', mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=False),
            dict(type='TestFormatBundle'),
            dict(type='Collect', keys=['imgs'],
                 meta_keys=['flow_gt', 'valid', 'filename1', 'filename2', 'ori_filename1', 'ori_filename2', 'ori_shape', 'img_shape', 'img_norm_cfg', 'scale_factor', 'pad_shape', 'pad'])
        ],
        test_mode=True),
    test=dict(
        type='KITTI2015',
        data_root='data/kitti2015',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', sparse=True),
            dict(type='InputPad', exponent=3),
            dict(type='Normalize', mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5], to_rgb=False),
            dict(type='TestFormatBundle'),
            dict(type='Collect', keys=['imgs'],
                 meta_keys=['flow_gt', 'valid', 'filename1', 'filename2', 'ori_filename1', 'ori_filename2', 'ori_shape', 'img_shape', 'img_norm_cfg', 'scale_factor', 'pad_shape', 'pad'])
        ],
        test_mode=True))
log_config = dict(
    interval=50,
    hooks=[dict(type='TextLoggerHook'), dict(type='TensorboardLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = 'https://download.openmmlab.com/mmflow/raft/raft_8x2_100k_mixed_368x768.pth'
resume_from = None
workflow = [('train', 1)]
optimizer = dict(type='AdamW', lr=0.000125, betas=(0.9, 0.999), eps=1e-08, weight_decay=1e-05, amsgrad=False)
optimizer_config = dict(grad_clip=dict(max_norm=1.0))
lr_config = dict(policy='OneCycle', max_lr=0.000125, total_steps=50100, pct_start=0.05, anneal_strategy='linear')
runner = dict(type='IterBasedRunner', max_iters=50000)
checkpoint_config = dict(by_epoch=False, interval=5000)
evaluation = dict(interval=5000, metric='EPE')
work_dir = 'raft_kitti2015_debug'
auto_resume = False
gpu_ids = [0]
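(Aside: the block above is the fully expanded config that mmflow dumps at startup. It can be reproduced and inspected offline by loading the config file with mmcv; the config path below is a guess at the file behind this run, since it is not printed in the log.)

```python
# Reload and inspect the config that produced the dump above.
# The file name is an assumption, not something recorded in this log.
from mmcv import Config

cfg = Config.fromfile('configs/raft/raft_8x2_50k_kitti2015_288x960.py')
print(cfg.pretty_text)        # same expanded form as the dump above
print(cfg.runner.max_iters)   # 50000
print(cfg.optimizer.lr)       # 0.000125
```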
2022-09-26 18:01:17,159 - mmflow - INFO - Set random seed to 2002780012, deterministic: False
2022-09-26 18:01:17,220 - mmflow - INFO - initialize RAFTEncoder with init_cfg [{'type': 'Kaiming', 'layer': ['Conv2d'], 'mode': 'fan_out', 'nonlinearity': 'relu'}, {'type': 'Constant', 'layer': ['InstanceNorm2d'], 'val': 1, 'bias': 0}]
2022-09-26 18:01:17,287 - mmflow - INFO - initialize RAFTEncoder with init_cfg [{'type': 'Kaiming', 'layer': ['Conv2d'], 'mode': 'fan_out', 'nonlinearity': 'relu'}, {'type': 'Constant', 'layer': ['SyncBatchNorm2d'], 'val': 1, 'bias': 0}]
2022-09-26 18:01:17,306 - mmflow - INFO - RAFT(
  (encoder): RAFTEncoder(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
    (in1): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
    (relu): ReLU(inplace=True)
    (res_layer1): ResLayer(
      (0): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in1): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (relu): ReLU()
      )
      (1): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in1): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (relu): ReLU()
      )
    )
    (conv2): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1))
    (res_layer2): ResLayer(
      (0): BasicBlock(
        (conv1): Conv2d(64, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
        (in1): InstanceNorm2d(96, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv2): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in2): InstanceNorm2d(96, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (relu): ReLU()
        (downsample): Sequential(
          (0): Conv2d(64, 96, kernel_size=(1, 1), stride=(2, 2))
          (1): InstanceNorm2d(96, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in1): InstanceNorm2d(96, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv2): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in2): InstanceNorm2d(96, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (relu): ReLU()
      )
    )
    (res_layer3): ResLayer(
      (0): BasicBlock(
        (conv1): Conv2d(96, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
        (in1): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in2): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (relu): ReLU()
        (downsample): Sequential(
          (0): Conv2d(96, 128, kernel_size=(1, 1), stride=(2, 2))
          (1): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in1): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (in2): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (relu): ReLU()
      )
    )
  )
  init_cfg=[{'type': 'Kaiming', 'layer': ['Conv2d'], 'mode': 'fan_out', 'nonlinearity': 'relu'}, {'type': 'Constant', 'layer': ['InstanceNorm2d'], 'val': 1, 'bias': 0}]
  (decoder): RAFTDecoder(
    (corr_block): CorrelationPyramid(
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
    (corr_lookup): CorrLookup()
    (encoder): MotionEncoder(
      (corr_net): Sequential(
        (0): ConvModule(
          (conv): Conv2d(324, 256, kernel_size=(1, 1), stride=(1, 1))
          (activate): ReLU(inplace=True)
        )
        (1): ConvModule(
          (conv): Conv2d(256, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (activate): ReLU(inplace=True)
        )
      )
      (flow_net): Sequential(
        (0): ConvModule(
          (conv): Conv2d(2, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
          (activate): ReLU(inplace=True)
        )
        (1): ConvModule(
          (conv): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (activate): ReLU(inplace=True)
        )
      )
      (out_net): Sequential(
        (0): ConvModule(
          (conv): Conv2d(256, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (activate): ReLU(inplace=True)
        )
      )
    )
    (gru): ConvGRU(
      (conv_z): ModuleList(
        (0): ConvModule(
          (conv): Conv2d(384, 128, kernel_size=(1, 5), stride=(1, 1), padding=(0, 2))
          (activate): Sigmoid()
        )
        (1): ConvModule(
          (conv): Conv2d(384, 128, kernel_size=(5, 1), stride=(1, 1), padding=(2, 0))
          (activate): Sigmoid()
        )
      )
      (conv_r): ModuleList(
        (0): ConvModule(
          (conv): Conv2d(384, 128, kernel_size=(1, 5), stride=(1, 1), padding=(0, 2))
          (activate): Sigmoid()
        )
        (1): ConvModule(
          (conv): Conv2d(384, 128, kernel_size=(5, 1), stride=(1, 1), padding=(2, 0))
          (activate): Sigmoid()
        )
      )
      (conv_q): ModuleList(
        (0): ConvModule(
          (conv): Conv2d(384, 128, kernel_size=(1, 5), stride=(1, 1), padding=(0, 2))
          (activate): Tanh()
        )
        (1): ConvModule(
          (conv): Conv2d(384, 128, kernel_size=(5, 1), stride=(1, 1), padding=(2, 0))
          (activate): Tanh()
        )
      )
    )
    (flow_pred): XHead(
      (layers): Sequential(
        (0): ConvModule(
          (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (activate): ReLU(inplace=True)
        )
      )
      (predict_layer): Conv2d(256, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (mask_pred): XHead(
      (layers): Sequential(
        (0): ConvModule(
          (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (activate): ReLU(inplace=True)
        )
      )
      (predict_layer): Conv2d(256, 576, kernel_size=(1, 1), stride=(1, 1))
    )
    (flow_loss): SequenceLoss()
  )
  (context): RAFTEncoder(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
    (bn1): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (res_layer1): ResLayer(
      (0): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn1): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn2): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (1): BasicBlock(
        (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn1): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn2): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
    )
    (conv2): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1))
    (res_layer2): ResLayer(
      (0): BasicBlock(
        (conv1): Conv2d(64, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
        (bn1): SyncBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn2): SyncBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
        (downsample): Sequential(
          (0): Conv2d(64, 96, kernel_size=(1, 1), stride=(2, 2))
          (1): SyncBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn1): SyncBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn2): SyncBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
    )
    (res_layer3): ResLayer(
      (0): BasicBlock(
        (conv1): Conv2d(96, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
        (bn1): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn2): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
        (downsample): Sequential(
          (0): Conv2d(96, 128, kernel_size=(1, 1), stride=(2, 2))
          (1): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn1): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (bn2): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
    )
  )
  init_cfg=[{'type': 'Kaiming', 'layer': ['Conv2d'], 'mode': 'fan_out', 'nonlinearity': 'relu'}, {'type': 'Constant', 'layer': ['SyncBatchNorm2d'], 'val': 1, 'bias': 0}]
)
2022-09-26 18:01:17,678 - mmflow - INFO - dataset size 200
/home/liyang/mmflow1/mmflow/apis/train.py:132: UserWarning: SyncBN is only supported with DDP. To be compatible with DP, we convert SyncBN to BN. Please use dist_train.sh which can avoid this error.
  warnings.warn(
2022-09-26 18:01:19,647 - mmflow - INFO - load checkpoint from http path: https://download.openmmlab.com/mmflow/raft/raft_8x2_100k_mixed_368x768.pth
2022-09-26 18:01:19,690 - mmflow - INFO - Start running, host: liyang@server-blue, work_dir: /home/liyang/mmflow1/raft_kitti2015_debug
2022-09-26 18:01:19,691 - mmflow - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) OneCycleLrUpdaterHook
(NORMAL ) CheckpointHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook
--------------------
before_train_epoch:
(VERY_HIGH ) OneCycleLrUpdaterHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook
--------------------
before_train_iter:
(VERY_HIGH ) OneCycleLrUpdaterHook
(LOW ) IterTimerHook
--------------------
after_train_iter:
(ABOVE_NORMAL) OptimizerHook
(NORMAL ) CheckpointHook
(LOW ) IterTimerHook
(LOW ) EvalHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook
--------------------
after_train_epoch:
(NORMAL ) CheckpointHook
(LOW ) EvalHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook
--------------------
before_val_epoch:
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook
--------------------
before_val_iter:
(LOW ) IterTimerHook
--------------------
after_val_iter:
(LOW ) IterTimerHook
--------------------
after_val_epoch:
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook
--------------------
after_run:
(VERY_LOW ) TextLoggerHook
(VERY_LOW ) TensorboardLoggerHook
--------------------
2022-09-26 18:01:19,691 - mmflow - INFO - workflow: [('train', 1)], max: 50000 iters
2022-09-26 18:01:19,691 - mmflow - INFO - Checkpoints will be saved to /home/liyang/mmflow1/raft_kitti2015_debug by HardDiskBackend.
2022-09-26 18:01:39,807 - mmflow - INFO - Iter [50/50000]	lr: 7.348e-06, eta: 5:31:43, time: 0.398, data_time: 0.011, memory: 5444, loss_flow: 1.4945, loss: 1.4945, grad_norm: 12.1196
2022-09-26 18:01:59,155 - mmflow - INFO - Iter [100/50000]	lr: 9.744e-06, eta: 5:26:36, time: 0.387, data_time: 0.003, memory: 5444, loss_flow: 1.3797, loss: 1.3797, grad_norm: 10.7855
2022-09-26 18:02:21,129 - mmflow - INFO - Iter [150/50000]	lr: 1.214e-05, eta: 5:39:13, time: 0.439, data_time: 0.051, memory: 5444, loss_flow: 1.4180, loss: 1.4180, grad_norm: 13.1344
2022-09-26 18:02:41,209 - mmflow - INFO - Iter [200/50000]	lr: 1.454e-05, eta: 5:37:29, time: 0.402, data_time: 0.004, memory: 5444, loss_flow: 1.2529, loss: 1.2529, grad_norm: 11.7746
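(Aside: the learning rates in these first log lines are consistent with the OneCycle warm-up defined in the config. With max_lr=1.25e-4, total_steps=50100 and pct_start=0.05, the warm-up lasts 2505 steps before the rate anneals linearly for the rest of training. The sketch below approximately reproduces the logged values, assuming mmcv's default div_factor=25, i.e. an initial lr of max_lr/25 = 5e-6, which is not printed in the log.)

```python
# Hedged reconstruction of the warm-up phase of the OneCycle schedule above.
# Assumes the default div_factor=25; `step` is taken as the 0-based iteration index.
max_lr, total_steps, pct_start = 1.25e-4, 50100, 0.05
initial_lr = max_lr / 25
warmup_steps = pct_start * total_steps          # 2505 steps

def lr_at(step):
    return initial_lr + (max_lr - initial_lr) * min(step / warmup_steps, 1.0)

for it in (50, 100, 150, 200):
    print(it, f'{lr_at(it - 1):.3e}')
# -> 7.347e-06, 9.742e-06, 1.214e-05, 1.453e-05
# (log shows 7.348e-06, 9.744e-06, 1.214e-05, 1.454e-05)
```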