Support for U-net from mmseg in terratorch #80

Open

romeokienzler opened this issue Aug 6, 2024 · 5 comments · Fixed by #67

@romeokienzler
Collaborator

No description provided.

@romeokienzler romeokienzler added this to the 24.8.2 milestone Aug 6, 2024
@Joao-L-S-Almeida Joao-L-S-Almeida linked a pull request Aug 6, 2024 that will close this issue
@romeokienzler
Collaborator Author

To test:

git clone https://github.com/IBM/terratorch.git
cd terratorch
git checkout add/unet
pip install -r requirements/required.txt
pip install --upgrade .
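
As a quick smoke test after installing the branch, something like the following could confirm that a UNet entry actually ends up registered. This is a hedged sketch only: the terratorch.registry import path and the registry names are assumptions, not verified against the add/unet branch.

# Hedged smoke test: does the add/unet branch register a UNet decoder/backbone?
# Import path and registry names below are assumptions.
import terratorch  # importing triggers registration side effects
from terratorch.registry import BACKBONE_REGISTRY, DECODER_REGISTRY

print([name for name in DECODER_REGISTRY if "unet" in name.lower()])
print([name for name in BACKBONE_REGISTRY if "unet" in name.lower()])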

@romeokienzler
Collaborator Author

@Michal-Muszynski can we close?

@romeokienzler
Collaborator Author

romeokienzler commented Dec 9, 2024

@Michal-Muszynski reopening, command failing:
jbsub -queue x86_6h -cores 1+1 -require v100 -mem 64g -interactive bash

terratorch predict -c /dccstor/geofm-finetuning/AGB_GHG_downstream_task/AGB_torchgeo_agb_mmu/gfm_brazil_2022/first_watsonx_model/platform_testing/config_files/terratorch_agb-GFM_finetune_brazil_dataset_class_update.yaml --ckpt_path /dccstor/geofm-finetuning/AGB_GHG_downstream_task/AGB_torchgeo_agb_mmu/gfm_brazil_2022/first_watsonx_model/forContainer_v0/checkpoints/'epoch=94.ckpt' --predict_output_dir /dccstor/geofm-finetuning/AGB_GHG_downstream_task/AGB_torchgeo_agb_mmu/gfm_brazil_2022/first_watsonx_model/platform_testing/outputs/outputs_HLS_L30_Cloud_Free_2019/outputs_HLS_L30_Cloud_Free_2019_v100/ --data.init_args.predict_data_root /dccstor/hhr-weather/gs_karukinka/HLS_L30_Cloud_Free/2019/ --data.init_args.predict_dataset_bands [0,1,2,3,4,5,6,7,8,9] --data.init_args.predict_output_bands [1,2,3,4,5,6] --out_dtype float32

On that one I get (terratorch v0.99.7):

 File "/dccstor/wfm/users/rkie/gitco/Prithvi-EO-2.0/.venv/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 2584, in load_state_dict                                                                                          
    raise RuntimeError(                                                                                                                                                                                                                       
RuntimeError: Error(s) in loading state_dict for PixelwiseRegressionTask:                                                                                                                                                                     
        Missing key(s) in state_dict: "model.encoder._timm_module.patch_embed.projection.weight", "model.encoder._timm_module.patch_embed.projection.bias", "model.encoder._timm_module.patch_embed.norm.weight", "model.encoder._timm_module.
patch_embed.norm.bias", "model.encoder._timm_module.stages_0.blocks.0.norm1.weight", "model.encoder._timm_module.stages_0.blocks.0.norm1.bias", "model.encoder._timm_module.stages_0.blocks.0.attn.w_msa.relative_position_bias_table", "model
.encoder._timm_module.stages_0.blocks.0.attn.w_msa.relative_position_index", "model.encoder._timm_module.stages_0.blocks.0.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_0.blocks.0.attn.w_msa.qkv.bias", "model.encoder._timm_mo
dule.stages_0.blocks.0.attn.w_msa.proj.weight", "model.encoder._timm_module.stages_0.blocks.0.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_0.blocks.0.norm2.weight", "model.encoder._timm_module.stages_0.blocks.0.norm2.bias", "
model.encoder._timm_module.stages_0.blocks.0.ffn.layers.0.0.weight", "model.encoder._timm_module.stages_0.blocks.0.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_0.blocks.0.ffn.layers.1.weight", "model.encoder._timm_module.stage
s_0.blocks.0.ffn.layers.1.bias", "model.encoder._timm_module.stages_0.blocks.1.norm1.weight", "model.encoder._timm_module.stages_0.blocks.1.norm1.bias", "model.encoder._timm_module.stages_0.blocks.1.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_0.blocks.1.attn.w_msa.relative_position_index", "model.encoder._timm_module.stages_0.blocks.1.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_0.blocks.1.attn.w_msa.qkv.bias", "model.encode$._timm_module.stages_0.blocks.1.attn.w_msa.proj.weight", "model.encoder._timm_module.stages_0.blocks.1.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_0.blocks.1.norm2.weight", "model.encoder._timm_module.stages_0.blocks.1.norm$.bias", "model.encoder._timm_module.stages_0.blocks.1.ffn.layers.0.0.weight", "model.encoder._timm_module.stages_0.blocks.1.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_0.blocks.1.ffn.layers.1.weight", "model.encoder._timm_mo$ule.stages_0.blocks.1.ffn.layers.1.bias", "model.encoder._timm_module.stages_0.downsample.norm.weight", "model.encoder._timm_module.stages_0.downsample.norm.bias", "model.encoder._timm_module.stages_0.downsample.reduction.weight", "model$encoder._timm_module.stages_0.norm.weight", "model.encoder._timm_module.stages_0.norm.bias", "model.encoder._timm_module.stages_1.blocks.0.norm1.weight", "model.encoder._timm_module.stages_1.blocks.0.norm1.bias", "model.encoder._timm_mod$le.stages_1.blocks.0.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_1.blocks.0.attn.w_msa.relative_position_index", "model.encoder._timm_module.stages_1.blocks.0.attn.w_msa.qkv.weight", "model.encoder._timm_$odule.stages_1.blocks.0.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_1.blocks.0.attn.w_msa.proj.weight", "model.encoder._timm_module.stages_1.blocks.0.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_1.blocks.0.norm2$weight", "model.encoder._timm_module.stages_1.blocks.0.norm2.bias", "model.encoder._timm_module.stages_1.blocks.0.ffn.layers.0.0.weight", "model.encoder._timm_module.stages_1.blocks.0.ffn.layers.0.0.bias", "model.encoder._timm_module.sta$es_1.blocks.0.ffn.layers.1.weight", "model.encoder._timm_module.stages_1.blocks.0.ffn.layers.1.bias", "model.encoder._timm_module.stages_1.blocks.1.norm1.weight", "model.encoder._timm_module.stages_1.blocks.1.norm1.bias", "model.encoder.$timm_module.stages_1.blocks.1.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_1.blocks.1.attn.w_msa.relative_position_index", "model.encoder._timm_module.stages_1.blocks.1.attn.w_msa.qkv.weight", "model.encod$r._timm_module.stages_1.blocks.1.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_1.blocks.1.attn.w_msa.proj.weight", "model.encoder._timm_module.stages_1.blocks.1.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_1.block$.1.norm2.weight", "model.encoder._timm_module.stages_1.blocks.1.norm2.bias", "model.encoder._timm_module.stages_1.blocks.1.ffn.layers.0.0.weight", "model.encoder._timm_module.stages_1.blocks.1.ffn.layers.0.0.bias", "model.encoder._timm_m$dule.stages_1.blocks.1.ffn.layers.1.weight", "model.encoder._timm_module.stages_1.blocks.1.ffn.layers.1.bias", "model.encoder._timm_module.stages_1.downsample.norm.weight", "model.encoder._timm_module.stages_1.downsample.norm.bias", "mod$l.encoder._timm_module.stages_1.downsample.reduction.weight", "model.encoder._timm_module.stages_1.norm.weight", "model.encoder._timm_module.stages_1.norm.bias", "model.encoder._timm_module.stages_2.blocks.0.norm1.weight", 
"model.encoder$_timm_module.stages_2.blocks.0.norm1.bias", "model.encoder._timm_module.stages_2.blocks.0.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_2.blocks.0.attn.w_msa.relative_position_index", "model.encoder._timm_m$dule.stages_2.blocks.0.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_2.blocks.0.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_2.blocks.0.attn.w_msa.proj.weight", "model.encoder._timm_module.stages_2.blocks.0.attn.$_msa.proj.bias", "model.encoder._timm_module.stages_2.blocks.0.norm2.weight", "model.encoder._timm_module.stages_2.blocks.0.norm2.bias", "model.encoder._timm_module.stages_2.blocks.0.ffn.layers.0.0.weight", "model.encoder._timm_module.st$ges_2.blocks.0.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_2.blocks.0.ffn.layers.1.weight", "model.encoder._timm_module.stages_2.blocks.0.ffn.layers.1.bias", "model.encoder._timm_module.stages_2.blocks.1.norm1.weight", "mode$.encoder._timm_module.stages_2.blocks.1.norm1.bias", "model.encoder._timm_module.stages_2.blocks.1.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_2.blocks.1.attn.w_msa.relative_position_index", "model.encode$
._timm_module.stages_2.blocks.1.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_2.blocks.1.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_2.blocks.1.attn.w_msa.proj.weight", "model.encoder._timm_module.stages_2.block$
.1.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_2.blocks.1.norm2.weight", "model.encoder._timm_module.stages_2.blocks.1.norm2.bias", "model.encoder._timm_module.stages_2.blocks.1.ffn.layers.0.0.weight", "model.encoder._timm_$
odule.stages_2.blocks.1.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_2.blocks.1.ffn.layers.1.weight", "model.encoder._timm_module.stages_2.blocks.1.ffn.layers.1.bias", "model.encoder._timm_module.stages_2.blocks.2.norm1.weigh$
", "model.encoder._timm_module.stages_2.blocks.2.norm1.bias", "model.encoder._timm_module.stages_2.blocks.2.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_2.blocks.2.attn.w_msa.relative_position_index", "mod$
l.encoder._timm_module.stages_2.blocks.2.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_2.blocks.2.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_2.blocks.2.attn.w_msa.proj.weight", "model.encoder._timm_module.stage$
_2.blocks.2.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_2.blocks.2.norm2.weight", "model.encoder._timm_module.stages_2.blocks.2.norm2.bias", "model.encoder._timm_module.stages_2.blocks.2.ffn.layers.0.0.weight", "model.encod$
r._timm_module.stages_2.blocks.2.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_2.blocks.2.ffn.layers.1.weight", "model.encoder._timm_module.stages_2.blocks.2.ffn.layers.1.bias", "model.encoder._timm_module.stages_2.blocks.3.no$
m1.weight", "model.encoder._timm_module.stages_2.blocks.3.norm1.bias", "model.encoder._timm_module.stages_2.blocks.3.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_2.blocks.3.attn.w_msa.relative_position_ind$
x", "model.encoder._timm_module.stages_2.blocks.3.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_2.blocks.3.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_2.blocks.3.attn.w_msa.proj.weight", "model.encoder._timm_mod$
le.stages_2.blocks.3.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_2.blocks.3.norm2.weight", "model.encoder._timm_module.stages_2.blocks.3.norm2.bias", "model.encoder._timm_module.stages_2.blocks.3.ffn.layers.0.0.weight", "mo$
el.encoder._timm_module.stages_2.blocks.3.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_2.blocks.3.ffn.layers.1.weight", "model.encoder._timm_module.stages_2.blocks.3.ffn.layers.1.bias", "model.encoder._timm_module.stages_2.bl$
cks.4.norm1.weight", "model.encoder._timm_module.stages_2.blocks.4.norm1.bias", "model.encoder._timm_module.stages_2.blocks.4.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_2.blocks.4.attn.w_msa.relative_pos$
tion_index", "model.encoder._timm_module.stages_2.blocks.4.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_2.blocks.4.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_2.blocks.4.attn.w_msa.proj.weight", "model.encoder.$
timm_module.stages_2.blocks.4.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_2.blocks.4.norm2.weight", "model.encoder._timm_module.stages_2.blocks.4.norm2.bias", "model.encoder._timm_module.stages_2.blocks.4.ffn.layers.0.0.wei$
ht", "model.encoder._timm_module.stages_2.blocks.4.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_2.blocks.4.ffn.layers.1.weight", "model.encoder._timm_module.stages_2.blocks.4.ffn.layers.1.bias", "model.encoder._timm_module.st$
ges_2.blocks.5.norm1.weight", "model.encoder._timm_module.stages_2.blocks.5.norm1.bias", "model.encoder._timm_module.stages_2.blocks.5.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_2.blocks.5.attn.w_msa.rel$
tive_position_index", "model.encoder._timm_module.stages_2.blocks.5.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_2.blocks.5.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_2.blocks.5.attn.w_msa.proj.weight", "model$
encoder._timm_module.stages_2.blocks.5.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_2.blocks.5.norm2.weight", "model.encoder._timm_module.stages_2.blocks.5.norm2.bias", "model.encoder._timm_module.stages_2.blocks.5.ffn.layer$
.0.0.weight", "model.encoder._timm_module.stages_2.blocks.5.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_2.blocks.5.ffn.layers.1.weight", "model.encoder._timm_module.stages_2.blocks.5.ffn.layers.1.bias", "model.encoder._timm_$
odule.stages_2.blocks.6.norm1.weight", "model.encoder._timm_module.stages_2.blocks.6.norm1.bias", "model.encoder._timm_module.stages_2.blocks.6.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_2.blocks.6.attn.$
_msa.relative_position_index", "model.encoder._timm_module.stages_2.blocks.6.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_2.blocks.6.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_2.blocks.6.attn.w_msa.proj.weight$
, "model.encoder._timm_module.stages_2.blocks.6.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_2.blocks.6.norm2.weight", "model.encoder._timm_module.stages_2.blocks.6.norm2.bias", "model.encoder._timm_module.stages_2.blocks.6.$
fn.layers.0.0.weight", "model.encoder._timm_module.stages_2.blocks.6.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_2.blocks.6.ffn.layers.1.weight", "model.encoder._timm_module.stages_2.blocks.6.ffn.layers.1.bias", "model.encod$
r._timm_module.stages_2.blocks.7.norm1.weight", "model.encoder._timm_module.stages_2.blocks.7.norm1.bias", "model.encoder._timm_module.stages_2.blocks.7.attn.w_msa.relative_position_bias_table", "model.encoder._timm_module.stages_2.block$
.7.attn.w_msa.relative_position_index", "model.encoder._timm_module.stages_2.blocks.7.attn.w_msa.qkv.weight", "model.encoder._timm_module.stages_2.blocks.7.attn.w_msa.qkv.bias", "model.encoder._timm_module.stages_2.blocks.7.attn.w_msa.pr$
j.weight", "model.encoder._timm_module.stages_2.blocks.7.attn.w_msa.proj.bias", "model.encoder._timm_module.stages_2.blocks.7.norm2.weight", "model.encoder._timm_module.stages_2.blocks.7.norm2.bias", "model.encoder._timm_module.stages_2.$
locks.7.ffn.layers.0.0.weight", "model.encoder._timm_module.stages_2.blocks.7.ffn.layers.0.0.bias", "model.encoder._timm_module.stages_2.blocks.7.ffn.layers.1.weight", "model.encoder._timm_module.stages_2.blocks.7.ffn.layers.1.bias", "mo$
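
Not part of the reported command, but a minimal diagnostic sketch for this kind of failure: load the checkpoint with plain PyTorch and inspect its stored keys to see whether they lack the "model.encoder._timm_module." prefix that the freshly built PixelwiseRegressionTask expects. The checkpoint filename is the one passed via --ckpt_path above; everything else here is illustrative.

# Hedged diagnostic sketch: compare checkpoint keys with what the model expects.
# Uses only plain PyTorch; "epoch=94.ckpt" is the checkpoint from the command above.
import torch

ckpt = torch.load("epoch=94.ckpt", map_location="cpu")
keys = sorted(ckpt["state_dict"].keys())

# If these start with "model.encoder." but not "model.encoder._timm_module.",
# the checkpoint was saved against a differently wrapped encoder and the keys
# need remapping (or a matching terratorch version) before load_state_dict succeeds.
print([k for k in keys if "patch_embed" in k][:5])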

@romeokienzler romeokienzler reopened this Dec 9, 2024
@romeokienzler
Collaborator Author

jbsub -queue x86_6h -cores 1+1 -require a100_80gb -mem 64g -interactive bash

terratorch predict -c /dccstor/geofm-finetuning/AGB_GHG_downstream_task/AGB_torchgeo_agb_mmu/gfm_brazil_2022/first_watsonx_model/platform_testing/config_files/terratorch_agb-GFM_finetune_brazil_dataset_class_update.yaml --ckpt_path /dccstor/geofm-finetuning/AGB_GHG_downstream_task/AGB_torchgeo_agb_mmu/gfm_brazil_2022/first_watsonx_model/forContainer_v0/checkpoints/'epoch=94.ckpt' --predict_output_dir /dccstor/geofm-finetuning/AGB_GHG_downstream_task/AGB_torchgeo_agb_mmu/gfm_brazil_2022/first_watsonx_model/platform_testing/outputs/outputs_HLS_L30_Cloud_Free_2019/outputs_HLS_L30_Cloud_Free_2019_a100/  --data.init_args.predict_data_root /dccstor/hhr-weather/gs_karukinka/HLS_L30_Cloud_Free/2019/ --data.init_args.predict_dataset_bands [0,1,2,3,4,5,6,7,8,9] --data.init_args.predict_output_bands [1,2,3,4,5,6] --out_dtype float32





@Joao-L-S-Almeida
Member

Currently we have built-in UNet and ASPPHead decoders (they were properly included in the registry in #453).
@Michal-Muszynski @romeokienzler
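
For reference, a hedged sketch of how the built-in UNet decoder might be wired up through the model factory. The factory class, backbone name, decoder entry name, and keyword arguments below are assumptions based on this comment and #453, not verified against the released code.

# Hedged sketch only: names and arguments are assumptions; adjust to the actual
# registry entries and factory signature exposed by your terratorch version.
from terratorch.models import EncoderDecoderFactory

factory = EncoderDecoderFactory()
model = factory.build_model(
    task="segmentation",
    backbone="prithvi_eo_v2_300",             # any registered backbone
    decoder="UNetDecoder",                     # built-in UNet decoder mentioned above
    decoder_channel_list=[256, 128, 64, 32],   # kwarg name is an assumption
    num_classes=2,
)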
