Remove accelerate.prepare hooks and FP32 conversion #858
When I use `accelerator.prepare(model)`, the model gets some hooks used in accelerate. After I'm done with the training and want to save the model, those hooks are still somewhere in the model's `.state_dict()` and I can't figure out where exactly or how to remove them. This is a problem especially when I later want to load the model without accelerate and run it in FP16 with other models: despite the model being loaded in FP16, an accelerate decorator is still somewhere converting the output to FP32.

Is there a way I can remove all such hooks from a model trained with accelerate? I have tried `remove_hook_from_module`, but it doesn't work. Thank you!
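For context, a minimal sketch of the symptom being described (a hypothetical reproduction, not from the thread; it assumes a GPU, since fp16 mixed precision requires one):

```python
import torch
from accelerate import Accelerator
from accelerate.hooks import remove_hook_from_module

accelerator = Accelerator(mixed_precision="fp16")
model = accelerator.prepare(torch.nn.Linear(4, 4))

x = torch.randn(2, 4, device=accelerator.device)
print(model(x).dtype)  # torch.float32 -- prepare() wraps forward to upcast outputs

# remove_hook_from_module only detaches big-model-inference hooks
# (device placement, offloading); it does not touch the forward wrapper,
# which is why it has no effect here.
remove_hook_from_module(model, recurse=True)
print(model(x).dtype)  # still torch.float32
```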
Comments

There is no util to remove those for now (it's not the Hooks, which we use for big model inference, but more wrappers around the forward), but I see why you'd want a util like this! @muellerzr Would you have time to work on that?
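To illustrate the distinction drawn here, below is a toy stand-in (plain PyTorch, not accelerate's actual implementation) for the kind of wrapper `prepare()` installs around `forward`. Because it is an ordinary function shadowing `model.forward`, hook-removal utilities never see it:

```python
import torch
from torch import nn

# A bf16 module: its forward normally returns bf16 outputs.
model = nn.Linear(4, 4).to(torch.bfloat16)
original_forward = model.forward

def wrapped_forward(*args, **kwargs):
    # Mimic accelerate's mixed-precision wrapper: call the real forward,
    # then upcast the result to FP32.
    return original_forward(*args, **kwargs).float()

# The wrapper is a plain instance attribute shadowing nn.Module.forward;
# it is neither a registered hook nor an entry in state_dict().
model.forward = wrapped_forward

x = torch.randn(2, 4, dtype=torch.bfloat16)
print(model(x).dtype)  # torch.float32 -- the wrapper upcasts

# Deleting the instance attribute restores the class-level forward.
del model.forward
print(model(x).dtype)  # torch.bfloat16 again
```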
@sgugger yep! Will assign myself
Hi @sgugger, thank you for your answer! I actually use [snippet elided] before saving it, but this decorator is still there.
Yes, like I said, there is nothing to remove it yet. The goal would be for this snippet of code to work once Zach is done :-)

Hi @DavidePaglieri, should be fixed with #860!
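With the fix in place, the intended save path looks roughly like the sketch below. Assumptions not in the thread: a recent accelerate release that includes the fix, and a GPU for fp16 mixed precision; the `keep_fp32_wrapper` keyword and its default have varied across versions.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")
model = accelerator.prepare(torch.nn.Linear(4, 4))

# ... training loop ...

# unwrap_model returns the underlying model; with the fix it can also strip
# the FP32 output wrapper (keep_fp32_wrapper=False requests this explicitly
# in releases that have the keyword).
unwrapped = accelerator.unwrap_model(model)
accelerator.save(unwrapped.state_dict(), "model.pt")

# The checkpoint can then be loaded and run in FP16 without accelerate.
clean = torch.nn.Linear(4, 4).cuda().half()
clean.load_state_dict(torch.load("model.pt"))
x = torch.randn(2, 4, device="cuda", dtype=torch.float16)
print(clean(x).dtype)  # torch.float16
```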