Let GradientState know active dataloaders and reset the remainder #1162

Merged
7 commits merged on Mar 7, 2023

Conversation

muellerzr (Collaborator) commented Mar 7, 2023

Adjusts GradientState so it knows:

  • which dataloader is entered/exited on __iter__
  • the full list of dataloaders we are currently iterating over

It also moves the logic for setting the end of the dataloader into GradientState._add_dataloader and GradientState._remove_dataloader (see the sketch below).

Solves #960 and #1050
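
To make the flow concrete, here is a minimal sketch of the bookkeeping described above. The method names _add_dataloader/_remove_dataloader come from this description; the attribute names and method bodies are illustrative assumptions, not the merged implementation:

```python
class GradientState:
    """Sketch: tracks which prepared dataloaders are currently being iterated.

    NOTE: a simplified stand-in for illustration, not accelerate.state.GradientState.
    """

    def __init__(self):
        self.end_of_dataloader = False       # True while yielding the last batch
        self.active_dataloader = None        # innermost dataloader in flight
        self.dataloader_references = [None]  # stack of every active dataloader

    def _set_end_of_dataloader(self, end_of_dataloader):
        self.end_of_dataloader = end_of_dataloader

    def _add_dataloader(self, dataloader):
        # Called when a dataloader enters __iter__: push it and make it active.
        self.active_dataloader = dataloader
        self.dataloader_references.append(self.active_dataloader)

    def _remove_dataloader(self, dataloader):
        # Called when a dataloader is exhausted: pop it, fall back to the
        # enclosing dataloader (or None), and reset the end-of-dataloader
        # flag so leftover state from a finished loop cannot leak into the
        # next one (the "reset the remainder" behavior from the PR title).
        self.dataloader_references.remove(dataloader)
        self.active_dataloader = self.dataloader_references[-1]
        self.end_of_dataloader = False


# Illustrative flow for nested iteration (e.g. an eval loop inside training);
# plain strings stand in for real dataloaders.
state = GradientState()
state._add_dataloader("train_dl")     # outer loop starts
state._add_dataloader("eval_dl")      # inner loop starts
state._set_end_of_dataloader(True)    # last eval batch
state._remove_dataloader("eval_dl")   # inner loop ends: the outer loader becomes active again
assert state.active_dataloader == "train_dl"
assert not state.end_of_dataloader
state._remove_dataloader("train_dl")  # outer loop ends
```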

@muellerzr added the bug (Something isn't working) and enhancement (New feature or request) labels on Mar 7, 2023
@muellerzr muellerzr requested a review from sgugger March 7, 2023 16:43
HuggingFaceDocBuilderDev commented Mar 7, 2023

The documentation is not available anymore as the PR was closed or merged.

@muellerzr muellerzr marked this pull request as draft March 7, 2023 16:50
@muellerzr muellerzr marked this pull request as ready for review March 7, 2023 16:53
@muellerzr muellerzr linked an issue Mar 7, 2023 that may be closed by this pull request
sgugger (Collaborator) left a comment


LGTM, thanks!

@muellerzr force-pushed the dataloader-multistate branch from 83688e9 to c56b897 on March 7, 2023 18:58
Development

Successfully merging this pull request may close these issues.

A BUG? when performing gradient accumulation