
chore(deps): update dependency accelerate to v0.34.2 #273

Open

renovate[bot] wants to merge 1 commit into main from renovate/accelerate-0.x

Conversation


@renovate renovate bot commented Dec 1, 2023

This PR contains the following updates:

Package | Change
accelerate | ==0.24.1 -> ==0.34.2

Release Notes

huggingface/accelerate (accelerate)

v0.34.2

Compare Source

v0.34.1: Patchfix

Compare Source

Bug fixes

  • Fixes an issue where processed DataLoaders could no longer be pickled (#3074, thanks to @byi8220)
  • Fixes an issue when using FSDP where default_transformers_cls_names_to_wrap would split _no_split_modules into individual characters instead of keeping it as a list of layer names (#3075)
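The FSDP fix above is an instance of a classic Python pitfall: iterating a bare string where a list of names is expected yields single characters. A minimal sketch in plain Python (not accelerate's actual code; the function names here are hypothetical):

```python
# Illustrative sketch of the bug class fixed in #3075: a string passed where
# a list of layer-class names is expected gets split into characters.

def split_module_names(names):
    # Buggy behaviour: iterating a string yields its characters, not names.
    return [n for n in names]

# A single class name such as "T5Block" is silently split apart...
assert split_module_names("T5Block") == ["T", "5", "B", "l", "o", "c", "k"]

# ...whereas the robust approach is to normalise a bare string to a list.
def normalize_module_names(names):
    return [names] if isinstance(names, str) else list(names)

assert normalize_module_names("T5Block") == ["T5Block"]
assert normalize_module_names(["T5Block", "Linear"]) == ["T5Block", "Linear"]
```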

Full Changelog: huggingface/accelerate@v0.34.0...v0.34.1

v0.34.0: StatefulDataLoader Support, FP8 Improvements, and PyTorch Updates!

Compare Source

Dependency Changes

  • Updated Safetensors Requirement: The library now requires safetensors version 0.4.3.
  • Added support for Numpy 2.0: The library now fully supports numpy 2.0.0.

Core

New Script Behavior Changes
  • Process Group Management: PyTorch now requires users to destroy process groups after training. The accelerate library will handle this automatically with accelerator.end_training(), or you can do it manually using PartialState().destroy_process_group().
  • MLU Device Support: Added support for saving and loading RNG states on MLU devices by @huismiling
  • NPU Support: Corrected backend and distributed settings when using transfer_to_npu, ensuring better performance and compatibility.
DataLoader Enhancements
  • Stateful DataLoader: We are excited to announce early support for the StatefulDataLoader from torchdata, allowing better handling of data-loading state. Enable it by passing use_stateful_dataloader=True to the DataLoaderConfiguration; when calling load_state(), the DataLoader will automatically resume from its last step, with no need to re-iterate through batches it has already seen.
  • Decoupled Data Loader Preparation: The prepare_data_loader() function is now independent of the Accelerator, giving you more flexibility in which API level you would like to use.
  • XLA Compatibility: Added support for skipping initial batches when using XLA.
  • Improved State Management: Bug fixes and enhancements for saving/loading DataLoader states, ensuring smoother training sessions.
  • Epoch Setting: Introduced the set_epoch function for MpDeviceLoaderWrapper.
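The value of a stateful loader is easiest to see in a toy model. This is a conceptual sketch in plain Python (not torchdata's actual StatefulDataLoader implementation, though the state_dict()/load_state_dict() method names mirror its interface): iteration state is checkpointed and restored, so a resumed loader continues from its last step instead of replaying earlier batches.

```python
# Toy stateful loader: the current position is part of its saved state.
class TinyStatefulLoader:
    def __init__(self, data):
        self.data = list(data)
        self.index = 0

    def __iter__(self):
        while self.index < len(self.data):
            batch = self.data[self.index]
            self.index += 1
            yield batch

    def state_dict(self):
        return {"index": self.index}

    def load_state_dict(self, state):
        self.index = state["index"]

loader = TinyStatefulLoader([10, 20, 30, 40])
it = iter(loader)
first_two = [next(it), next(it)]      # consume two batches
checkpoint = loader.state_dict()      # "save" mid-epoch

resumed = TinyStatefulLoader([10, 20, 30, 40])
resumed.load_state_dict(checkpoint)   # resume from step 2
remaining = list(resumed)

assert first_two == [10, 20]
assert remaining == [30, 40]
```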
FP8 Training Improvements
  • Enhanced FP8 Training: Fully Sharded Data Parallelism (FSDP) and DeepSpeed now work seamlessly with TransformerEngine FP8 training, including better defaults for the quantized FP8 weights.
  • Integration baseline: We've added a new suite of examples and benchmarks to ensure that our TransformerEngine integration works exactly as intended. These scripts run one half using 🤗 Accelerate's integration and the other half with raw TransformerEngine, giving users a clear example of what we do under the hood with accelerate and a good sanity check that nothing breaks over time. Find them here
  • Import Fixes: Resolved issues with import checks for TransformerEngine that caused downstream problems.
  • FP8 Docker Images: We've added new docker images for TransformerEngine and accelerate as well. Use docker pull huggingface/accelerate@gpu-fp8-transformerengine to quickly get an environment going.

torchpippy no more, long live torch.distributed.pipelining

  • With the latest PyTorch release, torchpippy is now fully integrated into torch core, and as a result we are exclusively supporting the PyTorch implementation from now on.
  • There are breaking changes to the examples that come from this shift. Namely:
    • Tracing of inputs is done with the shape each GPU will see, rather than the size of the total batch. So for 2 GPUs, one should pass in an input of [1, n, n] rather than [2, n, n] as before.
    • We no longer support encoder/decoder models. PyTorch tracing for pipelining no longer supports them, so the t5 example has been removed.
    • Computer-vision model support currently does not work: there are tracing issues with ResNets that we are actively looking into.
  • If any of these changes are too disruptive, we recommend pinning your accelerate version. If the lack of encoder/decoder support is actively blocking your inference with pippy, please open an issue and let us know; we can look into restoring the old torchpippy support if needed.
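The new tracing contract above can be summarized as simple shape arithmetic: example inputs are shaped per GPU, i.e. batch divided by the number of GPUs, not the full batch. A hedged sketch in plain Python (no torch; the helper name is hypothetical):

```python
# Compute the per-GPU example-input shape for pipeline tracing, under the
# new torch.distributed.pipelining convention described above.

def per_gpu_example_shape(full_batch_shape, num_gpus):
    batch, *rest = full_batch_shape
    assert batch % num_gpus == 0, "batch must divide evenly across GPUs"
    return [batch // num_gpus, *rest]

n = 512
# For 2 GPUs and a total batch of 2, trace with [1, n, n], not [2, n, n].
assert per_gpu_example_shape([2, n, n], num_gpus=2) == [1, n, n]
```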

Fully Sharded Data Parallelism (FSDP)

  • Environment Flexibility: Environment variables are now fully optional for FSDP, simplifying configuration. You can now create a FullyShardedDataParallelPlugin yourself manually, with no need for environment patching:

    from accelerate import FullyShardedDataParallelPlugin

    fsdp_plugin = FullyShardedDataParallelPlugin(...)

  • FSDP RAM-efficient loading: Added a utility to enable RAM-efficient model loading (by setting the proper environment variable). This is generally needed if you are not using accelerate launch and need to ensure the env variables are set up properly for model loading:

    from accelerate.utils import enable_fsdp_ram_efficient_loading, disable_fsdp_ram_efficient_loading

    enable_fsdp_ram_efficient_loading()
  • Model State Dict Management: Enhanced support for unwrapping model state dicts in FSDP, making it easier to manage distributed models.

New Examples

Bug Fixes

New Contributors


v0.33.0: MUSA backend support and bugfixes

Compare Source

MUSA backend support and bugfixes

Small release this month, with key focuses on some added support for backends and bugs:

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v0.32.1...v0.33.0

v0.32.1

Compare Source

v0.32.0: Profilers, new hooks, speedups, and more!

Compare Source

Core

Distributed Data Parallelism

FSDP

XPU

XLA

Examples

Full Changelog

New Contributors

Full Changelog: huggingface/accelerate@v0.31.0...v0.32.0

v0.31.0: Better support for sharded state dict with FSDP and Bugfixes

Compare Source

Core

FSDP

Megatron

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v0.30.1...v0.31.0

v0.30.1: Bugfixes

Compare Source

Patchfix

Full Changelog: huggingface/accelerate@v0.30.0...v0.30.1

v0.30.0: Advanced optimizer support, MoE DeepSpeed support, add upcasting for FSDP, and more

Compare Source

Core


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever the PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot added the dependencies Pull requests that update a dependency file label Dec 1, 2023

codecov bot commented Dec 1, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 37.00%. Comparing base (6530de4) to head (8cc06e0).
Report is 1 commit behind head on main.

Current head 8cc06e0 differs from pull request most recent head e01f6db

Please upload reports for the commit e01f6db to get more accurate results.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #273   +/-   ##
=======================================
  Coverage   37.00%   37.00%           
=======================================
  Files          23       23           
  Lines        1481     1481           
  Branches      202      202           
=======================================
  Hits          548      548           
  Misses        925      925           
  Partials        8        8           
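As a sanity check, the 37.00% project coverage in the table above is simply hits divided by total lines:

```python
# Verify the Codecov figure: coverage = hits / lines, from the diff table.
hits, lines = 548, 1481
coverage = round(100 * hits / lines, 2)
assert coverage == 37.0  # matches the reported 37.00%
```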

☔ View full report in Codecov by Sentry.

@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.25.0 chore(deps): update dependency accelerate to v0.26.0 Jan 10, 2024
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.26.0 chore(deps): update dependency accelerate to v0.26.1 Jan 11, 2024
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.26.1 chore(deps): update dependency accelerate to v0.27.0 Feb 10, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch 2 times, most recently from 6738e00 to c03be08 Compare February 13, 2024 16:47
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.27.0 chore(deps): update dependency accelerate to v0.27.2 Feb 13, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from c03be08 to 529abe5 Compare March 12, 2024 17:05
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.27.2 chore(deps): update dependency accelerate to v0.28.0 Mar 12, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from 529abe5 to f27a701 Compare April 5, 2024 14:59
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.28.0 chore(deps): update dependency accelerate to v0.29.0 Apr 5, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from f27a701 to 46fc1d9 Compare April 5, 2024 20:24
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.29.0 chore(deps): update dependency accelerate to v0.29.1 Apr 5, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from 46fc1d9 to 0ae38f8 Compare April 9, 2024 13:49
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.29.1 chore(deps): update dependency accelerate to v0.29.2 Apr 9, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from 0ae38f8 to d978882 Compare April 17, 2024 17:43
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.29.2 chore(deps): update dependency accelerate to v0.29.3 Apr 17, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from d978882 to 3d307ca Compare May 3, 2024 16:25
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.29.3 chore(deps): update dependency accelerate to v0.30.0 May 3, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from 3d307ca to 8cc06e0 Compare May 10, 2024 19:17
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.30.0 chore(deps): update dependency accelerate to v0.30.1 May 10, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from 8cc06e0 to e01f6db Compare June 7, 2024 15:53
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.30.1 chore(deps): update dependency accelerate to v0.31.0 Jun 7, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from e01f6db to aab84a4 Compare July 3, 2024 17:28
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.31.0 chore(deps): update dependency accelerate to v0.32.0 Jul 3, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from aab84a4 to dd3f09d Compare July 4, 2024 16:22
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.32.0 chore(deps): update dependency accelerate to v0.32.1 Jul 4, 2024
@renovate renovate bot force-pushed the renovate/accelerate-0.x branch from dd3f09d to cb0c4b6 Compare July 23, 2024 17:42
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.32.1 chore(deps): update dependency accelerate to v0.33.0 Jul 23, 2024
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.33.0 chore(deps): update dependency accelerate to v0.34.0 Sep 3, 2024
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.34.0 chore(deps): update dependency accelerate to v0.34.1 Sep 5, 2024
@renovate renovate bot changed the title chore(deps): update dependency accelerate to v0.34.1 chore(deps): update dependency accelerate to v0.34.2 Sep 5, 2024