
[huggingface_pytorch] Training - DLC for Transformers to 4.46.0 - Accelerate 1.0.1 - PyTorch 2.3 #4393

Open · wants to merge 11 commits into base: master
Conversation

JingyaHuang (Contributor)

This is a standby PR; we need to wait for two important major releases: transformers 4.45.0 and accelerate 1.0.0.

  • transformers: 4.46.0
  • datasets: 3.0.2
  • evaluate: 0.4.3
  • accelerate: 1.0.1
  • torch: 2.3.0
  • diffusers: 0.31.0
  • trl: 0.11.4
  • peft: 0.13.2
  • flash-attn: 2.6.3
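
For reference, the same stack expressed as pip-style pins (a sketch; the actual Dockerfiles may install some of these from prebuilt wheels or with additional constraints):

transformers==4.46.0
datasets==3.0.2
evaluate==0.4.3
accelerate==1.0.1
torch==2.3.0
diffusers==0.31.0
trl==0.11.4
peft==0.13.2
flash-attn==2.6.3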

Note:

  • If merging this PR should also close the associated Issue, please add that Issue # to the Linked Issues section on the right.

  • All PRs are checked weekly for staleness. This PR will be closed if not updated in 30 days.

Description

Tests run

NOTE: By default, docker builds are disabled. In order to build your container, please update dlc_developer_config.toml and specify the framework to build in "build_frameworks"
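
For this PR, a minimal sketch of that change might look like the following (the section header and list syntax are assumptions; check the file's existing layout):

[build]
# Enable docker builds for the HuggingFace PyTorch images touched by this PR
build_frameworks = ["huggingface_pytorch"]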

  • I have run builds/tests on commit for my changes.
Confused about how to run tests? Try using the helper utility...

Assuming your remote is called origin (you can find out more with git remote -v)...

  • Run default builds and tests for a particular buildspec - also commits and pushes changes to remote; Example:

python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -cp origin

  • Enable specific tests for a buildspec or set of buildspecs - also commits and pushes changes to remote; Example:

python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -t sanity_tests -cp origin

  • Restore TOML file when ready to merge

python src/prepare_dlc_dev_environment.py -rcp origin

NOTE: If you are creating a PR for a new framework version, please ensure success of the standard, rc, and efa sagemaker remote tests by updating the dlc_developer_config.toml file:

  • sagemaker_remote_tests = true
  • sagemaker_efa_tests = true
  • sagemaker_rc_tests = true

Additionally, please run the sagemaker local tests in at least one revision:

  • sagemaker_local_tests = true
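
Putting the flags above together, the relevant part of dlc_developer_config.toml might look like this sketch (section placement is an assumption; keep the file's existing structure):

[test]
# Run the standard, EFA, and release-candidate SageMaker remote tests
sagemaker_remote_tests = true
sagemaker_efa_tests = true
sagemaker_rc_tests = true
# Run the SageMaker local tests in at least one revision
sagemaker_local_tests = true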

Formatting

DLC image/dockerfile

Builds to Execute


Fill out the template and check the boxes for the builds you'd like to execute.

Note: Replace <X.Y> with the major.minor framework version (e.g. 2.2) you would like to start.

  • build_pytorch_training_<X.Y>_sm

  • build_pytorch_training_<X.Y>_ec2

  • build_pytorch_inference_<X.Y>_sm

  • build_pytorch_inference_<X.Y>_ec2

  • build_pytorch_inference_<X.Y>_graviton

  • build_tensorflow_training_<X.Y>_sm

  • build_tensorflow_training_<X.Y>_ec2

  • build_tensorflow_inference_<X.Y>_sm

  • build_tensorflow_inference_<X.Y>_ec2

  • build_tensorflow_inference_<X.Y>_graviton
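
For example, for the PyTorch 2.3 training images this PR touches, you would check build_pytorch_training_2.3_sm and build_pytorch_training_2.3_ec2.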

Additional context

PR Checklist

  • I've prepended the PR tag with the frameworks/jobs this applies to : [mxnet, tensorflow, pytorch] | [ei/neuron/graviton] | [build] | [test] | [benchmark] | [ec2, ecs, eks, sagemaker]
  • If the PR changes affect SM tests, I've modified dlc_developer_config.toml in my PR branch by setting sagemaker_tests = true and efa_tests = true
  • If this PR changes existing code, the change is fully backward compatible with pre-existing code. (Non-backward-compatible changes need special approval.)
  • (If applicable) I've documented below the DLC image/dockerfile this relates to
  • (If applicable) I've documented below the tests I've run on the DLC image
  • (If applicable) I've reviewed the licenses of updated and new binaries and their dependencies to make sure all licenses are on the Apache Software Foundation Third Party License Policy Category A or Category B license list. See https://www.apache.org/legal/resolved.html.
  • (If applicable) I've scanned the updated and new binaries to make sure they do not have vulnerabilities associated with them.

NEURON/GRAVITON Testing Checklist

  • When creating a PR:
  • I've modified dlc_developer_config.toml in my PR branch by setting neuron_mode = true or graviton_mode = true

Benchmark Testing Checklist

  • When creating a PR:
  • I've modified dlc_developer_config.toml in my PR branch by setting ec2_benchmark_tests = true or sagemaker_benchmark_tests = true
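
For reference, the toggles from the two checklists above as a dlc_developer_config.toml sketch (defaults shown as false; section placement is an assumption):

[dev]
neuron_mode = false      # set true for Neuron PRs
graviton_mode = false    # set true for Graviton PRs

[test]
ec2_benchmark_tests = false        # set true for EC2 benchmark PRs
sagemaker_benchmark_tests = false  # set true for SageMaker benchmark PRs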

Pytest Marker Checklist

  • (If applicable) I have added the marker @pytest.mark.model("<model-type>") to the new tests which I have added, to specify the Deep Learning model that is used in the test (use "N/A" if the test doesn't use a model)
  • (If applicable) I have added the marker @pytest.mark.integration("<feature-being-tested>") to the new tests which I have added, to specify the feature that will be tested
  • (If applicable) I have added the marker @pytest.mark.multinode(<integer-num-nodes>) to the new tests which I have added, to specify the number of nodes used on a multi-node test
  • (If applicable) I have added the marker @pytest.mark.processor(<"cpu"/"gpu"/"eia"/"neuron">) to the new tests which I have added, if a test is specifically applicable to only one processor type
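
A minimal sketch of a new test carrying all four markers (the test name, integration string, and marker values here are hypothetical; 'image' is the DLC image URI fixture used by the existing test suite):

import pytest


@pytest.mark.model("N/A")                 # Deep Learning model used by the test; "N/A" if none
@pytest.mark.integration("safety_check")  # feature being tested
@pytest.mark.multinode(2)                 # number of nodes; only for multi-node tests
@pytest.mark.processor("gpu")             # only if the test applies to one processor type
def test_example_feature(image):
    # Placeholder body; a real test would exercise the feature inside the image
    assert image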

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@JingyaHuang JingyaHuang requested review from a team as code owners October 28, 2024 15:02
@aws-deep-learning-containers-ci (bot) added labels on Oct 28, 2024: build (Reflects file change in build folder), huggingface (Reflects file change in huggingface folder), sagemaker_tests, Size:S (Determines the size of the PR), test (Reflects file change in test folder)
@Captainia (Contributor)

Pasting safety check logs

=================================== FAILURES ===================================
_ test_safety_file_exists_and_is_valid[669063966089.dkr.ecr.us-west-2.amazonaws.com/pr-huggingface-pytorch-training:2.3.0-transformers4.46.0-gpu-py311-cu121-ubuntu20.04-pr-4393-2024-10-29-17-20-31] _
[gw2] linux -- Python 3.8.0 /usr/local/bin/python

image = '669063966089.dkr.ecr.us-west-2.amazonaws.com/pr-huggingface-pytorch-training:2.3.0-transformers4.46.0-gpu-py311-cu121-ubuntu20.04-pr-4393-2024-10-29-17-20-31'

@pytest.mark.model("N/A")
@pytest.mark.skipif(is_canary_context(), reason="Skipping test because it does not run on canary")
def test_safety_file_exists_and_is_valid(image):
    """
    Checks if the image has a safety report at the desired location and fails if any of the
    packages in the report have failed the safety check.

    :param image: str, image uri
    """
    repo_name, image_tag = image.split("/")[-1].split(":")
    # Make sure this container name doesn't conflict with the safety check test container name
    container_name = f"{repo_name}-{image_tag}-safety-file"
    # Add null entrypoint to ensure command exits immediately
    run(
        f"docker run -id " f"--name {container_name} " f"--entrypoint='/bin/bash' " f"{image}",
        hide=True,
        warn=True,
    )

    try:
        # Check if file exists
        docker_exec_cmd = f"docker exec -i {container_name}"
        safety_file_check = run(f"{docker_exec_cmd} test -f {SAFETY_FILE}", warn=True, hide=True)
        assert safety_file_check.ok, f"Safety file existence test failed for {image}"

        file_content = run(f"{docker_exec_cmd} cat {SAFETY_FILE}", warn=True, hide=True)
        raw_scan_result = json.loads(file_content.stdout)
        safety_report_object = SafetyPythonEnvironmentVulnerabilityReport(report=raw_scan_result)

        # processing safety reports
        report_log_template = "SAFETY_REPORT ({status}) [pkg: {pkg}] [installed: {installed}] [vulnerabilities: {vulnerabilities}]"
        failed_count = 0
        for report_item in safety_report_object.report:
            if report_item.scan_status == "FAILED":
                failed_count += 1
                LOGGER.error(
                    report_log_template.format(
                        status="FAILED",
                        pkg=report_item.package,
                        installed=report_item.installed,
                        vulnerabilities=[
                            entry for entry in report_item.vulnerabilities if not entry.ignored
                        ],
                    )
                )
>       assert failed_count == 0, f"{failed_count} package/s failed safety test for {image} !!!"
E       AssertionError: 1 package/s failed safety test for 669063966089.dkr.ecr.us-west-2.amazonaws.com/pr-huggingface-pytorch-training:2.3.0-transformers4.46.0-gpu-py311-cu121-ubuntu20.04-pr-4393-2024-10-29-17-20-31 !!!
E       assert 1 == 0

Looks like we are missing the safety file

@JingyaHuang (Contributor, Author)

Thanks, @Captainia. Could you elaborate more on what the safety file is and how I should add it?
