
How to embed video encoder module from pytorch? #148

Open
zyhzyh88 opened this issue Dec 4, 2023 · 3 comments

Comments

@zyhzyh88

zyhzyh88 commented Dec 4, 2023

No description provided.

@1049451037
Member

We need more detailed information. What do you mean by "embed video encoder module from pytorch"?

@zyhzyh88
Author

zyhzyh88 commented Dec 5, 2023

Sorry! I already have a video encoder written in PyTorch. How can I fully embed this module into the sat framework?

@1049451037
Member

1049451037 commented Dec 5, 2023

Just replace the model with your PyTorch module in the fine-tuning script (sat models are just ordinary PyTorch modules):

model, args = ViTFinetuneModel.from_pretrained(args.from_pretrained, args)
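Since sat models are ordinary `nn.Module`s, the swap amounts to constructing your own module where the fine-tuning script builds the sat one. A minimal sketch, where `MyVideoEncoder` and its shapes are hypothetical placeholders for your existing encoder:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an existing video encoder; any nn.Module works here.
class MyVideoEncoder(nn.Module):
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, hidden, kernel_size=3, padding=1)
        self.mlp = nn.Linear(hidden, hidden)

    def forward(self, video):  # video: (batch, channels, frames, height, width)
        feat = self.conv(video).mean(dim=(2, 3, 4))  # global average pool
        return self.mlp(feat)

# In the fine-tuning script, instead of
#   model, args = ViTFinetuneModel.from_pretrained(args.from_pretrained, args)
# construct your own module and hand it to the rest of the training loop:
model = MyVideoEncoder()
out = model(torch.randn(2, 3, 8, 16, 16))
print(out.shape)  # torch.Size([2, 64])
```

The rest of the sat training pipeline (optimizer setup, data loading, checkpointing) then operates on `model` exactly as it would on a sat-built one.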

One more thing: you may need to add a disable_untrainable_params method to your model, to control which parameters are trained:

from types import MethodType

from sat.helpers import print_rank0  # sat's rank-aware print helper

def disable_untrainable_params(self):
    """Freeze every parameter whose name matches none of the enabled patterns."""
    total_trainable = 0
    enable = ['mlp']  # substrings of parameter names that should stay trainable
    for n, p in self.named_parameters():
        if any(e.lower() in n.lower() for e in enable):
            total_trainable += p.numel()
            print_rank0(n)
        else:
            p.requires_grad_(False)
    print_rank0("***** Total trainable parameters: " + str(total_trainable) + " *****")

# Bind it as a method so it receives the model instance as `self` when called;
# a plain attribute assignment would not bind `self`.
model.disable_untrainable_params = MethodType(disable_untrainable_params, model)
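As a self-contained illustration of the freezing pattern above, here is a sketch with a toy two-layer model (the name `model` and the `'mlp'` pattern mirror the snippet; `types.MethodType` binds the function so it can be called as a method):

```python
import types
import torch.nn as nn

# Toy model standing in for a sat model; only 'mlp' parameters should stay trainable.
model = nn.ModuleDict({
    "mlp": nn.Linear(4, 4),
    "attn": nn.Linear(4, 4),
})

def disable_untrainable_params(self):
    enable = ["mlp"]
    for n, p in self.named_parameters():
        if not any(e.lower() in n.lower() for e in enable):
            p.requires_grad_(False)  # freeze everything outside the enabled set

# Bind as a method so sat (or your own code) can call it on the instance.
model.disable_untrainable_params = types.MethodType(disable_untrainable_params, model)
model.disable_untrainable_params()

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['mlp.weight', 'mlp.bias']
```

After the call, an optimizer built from `filter(lambda p: p.requires_grad, model.parameters())` will update only the enabled parameters.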
