Fix caching #190

Merged 7 commits from 189-fix-caching into master on Jun 10, 2022
Conversation

ejm714 (Collaborator) commented Jun 9, 2022

Main changes:

  • downloads the public checkpoint file to the top-level model_cache_dir (this supports lookups in the correct place on future runs)
  • if a cached checkpoint file exists (and no other checkpoint was passed in), puts the cached path on the checkpoint property in TrainConfig

These changes allow the previously downloaded file to actually be used on future runs; previously, we were seeing "downloading model weights" on every run. A sketch of the resulting lookup logic is below.
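For illustration, here is a minimal sketch of the caching pattern these bullets describe, assuming a hypothetical cache directory, weights URL, and resolver function; it is not the project's actual API:

```python
from pathlib import Path
from typing import Optional
from urllib.request import urlretrieve

# hypothetical stand-ins for the real cache dir and public weights host
MODEL_CACHE_DIR = Path.home() / ".cache" / "zamba"
WEIGHTS_URL = "https://example.com/weights/{name}"  # placeholder URL


def resolve_checkpoint(checkpoint: Optional[Path], public_checkpoint_name: str) -> Path:
    """Return a checkpoint path, reusing the cached copy when one exists."""
    # a checkpoint that was explicitly passed in always wins
    if checkpoint is not None:
        return checkpoint

    cached = MODEL_CACHE_DIR / public_checkpoint_name
    if not cached.exists():
        # cache miss: download into the top-level cache dir so that
        # future runs look in the same place and find the file
        MODEL_CACHE_DIR.mkdir(parents=True, exist_ok=True)
        print("downloading model weights")
        urlretrieve(WEIGHTS_URL.format(name=public_checkpoint_name), cached)

    # cache hit (or fresh download): this path goes on TrainConfig.checkpoint
    return cached
```

On the first run this downloads and caches the file; on every later run the exists() check short-circuits and the cached path is reused, which is the behavior this PR fixes.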

Closes #189

ejm714 requested a review from pjbull June 9, 2022 19:51
github-actions bot commented Jun 9, 2022

@@ -106,7 +106,7 @@ def test_video(model, chimp_video_path, tmp_path):
     )

     # output to disk
-    assert anatomy_info.shape == (8, 46)
+    assert anatomy_info.shape == (10, 46)
ejm714 (Collaborator, Author) commented on the diff:

Unrelated to caching; this is due to the new densepose model release: https://github.com/facebookresearch/detectron2/tree/main/projects/DensePose#whats-new

ejm714 (Collaborator, Author) commented Jun 9, 2022

All tests are now passing; this is ready for review.

pjbull merged commit d64e4f7 into master Jun 10, 2022
pjbull deleted the 189-fix-caching branch June 10, 2022 00:35
Merging this pull request closes issue #189: Cached model does not get used in next run