fix llama readme #3339

Closed

wants to merge 93 commits into from

Commits on Apr 8, 2024

  1. Commit 55ed5ab
  2. Commit 1f1f357

Commits on Apr 9, 2024

  1. Update docs (#2919) (#2923)

    Summary:
    Pull Request resolved: #2919
    
    The Sphinx note directive doesn't render well in Markdown. Remove it to avoid causing confusion.
    
    Reviewed By: mergennachin, cccclai
    
    Differential Revision: D55881939
    
    fbshipit-source-id: a4252f0b70593ecd97e5cc352c601e772a9c222a
    (cherry picked from commit dc7e4d5)
    
    Co-authored-by: Hansong Zhang <hsz@meta.com>
    pytorchbot and kirklandsign authored Apr 9, 2024
    Commit 9383bda
  2. Use unified path /data/local/tmp/llama (#2899) (#2924)

    Summary: Pull Request resolved: #2899
    
    Reviewed By: mergennachin
    
    Differential Revision: D55829514
    
    Pulled By: kirklandsign
    
    fbshipit-source-id: 3e5d222b969c7b13fc8902dbda738edb3cb898dc
    (cherry picked from commit 3e256ff)
    
    Co-authored-by: Hansong Zhang <hsz@fb.com>
    pytorchbot and kirklandsign authored Apr 9, 2024
    Commit e193c71
  3. Refine LLM getting started guide for uniformity, fix critical errors (#2911)
    
    Summary:
    Update the LLM getting started guide for uniform tone and tense, informally following the Google developer documentation style guide: https://developers.google.com/style. Also resolve a number of outstanding issues with incorrect or misleading documentation and steps.
    
    For reference, here are links to the current and proposed LLM guide:
    https://docs-preview.pytorch.org/pytorch/executorch/2911/llm/getting-started.html (proposed)
    https://pytorch.org/executorch/main/llm/getting-started.html (live)
    
    Pull Request resolved: #2911
    
    Reviewed By: Gasoonjia, byjlw
    
    Differential Revision: D55867181
    
    Pulled By: GregoryComer
    
    fbshipit-source-id: 5e865eaa4a0ae52845963b15c221a3d272431448
    (cherry picked from commit 01bac3d)
    GregoryComer authored and mergennachin committed Apr 9, 2024
    Commit 166a635
  4. Update docs for the demo app. (#2921)

    Summary:
    Pull Request resolved: #2921
    overriding_review_checks_triggers_an_audit_and_retroactive_review
    Oncall Short Name: executorch
    
    Differential Revision: D55885790
    
    fbshipit-source-id: bb62a42b74ecdfb2e1f6bcebab979e2e8fcf0a3c
    (cherry picked from commit 9ba8bc9)
    shoumikhin authored and mergennachin committed Apr 9, 2024
    Commit 002ae53
  5. Fixing minor issues in llama2 7b repro (#2926)

    Summary:
    Pull Request resolved: #2926
    
    Fixing issues we've seen in #2907 and #2805
    
    bypass-github-export-checks
    bypass-github-pytorch-ci-checks
    bypass-github-executorch-ci-checks
    
    Reviewed By: iseeyuan, cccclai
    
    Differential Revision: D55893925
    
    fbshipit-source-id: c6e0264d868cb487faf02f95ff1bd223cbcc97ac
    (cherry picked from commit 6db9d72)
    mergennachin committed Apr 9, 2024
    Commit fa4d88d
  6. Update iphone 15 pro benchmarking numbers (#2927)

    Summary:
    Pull Request resolved: #2927
    
    ATT
    
    Created from CodeHub with https://fburl.com/edit-in-codehub
    
    Reviewed By: mergennachin
    
    Differential Revision: D55895703
    
    fbshipit-source-id: 5466b44224b8ebf7b88d846354683da0c1f6a801
    (cherry picked from commit ce447dc)
    kimishpatel authored and mergennachin committed Apr 9, 2024
    Commit 4e7aebf
  7. Fix generation speed calculation. (#2932)

    Summary:
    Pull Request resolved: #2932
    overriding_review_checks_triggers_an_audit_and_retroactive_review
    Oncall Short Name: executorch
    
    Differential Revision: D55904722
    
    fbshipit-source-id: 6057bc75f812e5ae9dd057bbed7291a539d80ff6
    (cherry picked from commit 8cabeac)
    shoumikhin authored and mergennachin committed Apr 9, 2024
    Commit 46566b5
  8. exclude mutated buffer (#2876)

    Summary:
    Pull Request resolved: #2876
    
    Fixing the constant tagging for mutable buffers. A buffer shouldn't be tagged if it's going to be mutated by the delegate; this is more common in hardware backends.
    
    Will follow up and test having the delegate consume the mutation
    
    Reviewed By: mcr229, angelayi
    
    Differential Revision: D55812844
    
    fbshipit-source-id: e0be4c2dc295141d673cccb1aeecee45894b1e70
    (cherry picked from commit 599cfde)
    cccclai authored and mergennachin committed Apr 9, 2024
    Commit 2fe7543

Commits on Apr 10, 2024

  1. Make minor updates to LLM guide setup instructions (#2940) (#2959)

    Summary:
    Minor updates to the prerequisite section of the LLM getting started guide. Passing -s to pyenv install prevents a prompt if Python 3.10 is already installed (with the flag, it just silently continues). Additionally, under pyenv, we should be using python, not python3. I also added a little bit of wording on env management.
    
    Pull Request resolved: #2940
    
    Test Plan: Ran LLM guide prerequisite section on an m1 mac with pyenv-virtualenv.
    
    Reviewed By: byjlw
    
    Differential Revision: D55913382
    
    Pulled By: GregoryComer
    
    fbshipit-source-id: 7f04262b025db83b8621c972c90d3cdc3f029377
    (cherry picked from commit 218f643)
    
    Co-authored-by: Gregory Comer <gregoryjcomer@gmail.com>
    pytorchbot and GregoryComer authored Apr 10, 2024
    Commit 69bae6e
  2. resolve_buck.py: Add an entry for darwin-x86_64 (#2868)

    Summary:
    Version hash reported by
    https://github.com/facebook/buck2/releases/download/2024-02-15/buck2-x86_64-apple-darwin.zst
    
    Pull Request resolved: #2868
    
    Reviewed By: Olivia-liu
    
    Differential Revision: D55914146
    
    Pulled By: GregoryComer
    
    fbshipit-source-id: b9882900acfd4cb6f74eda90a7c99bdb119ec122
    (cherry picked from commit de7fdaa)
    dbort authored and mergennachin committed Apr 10, 2024
    Commit cd2779a

Commits on Apr 11, 2024

  1. Refine the LLM manual (focus on the debugging and profiling part) (#2952) (#2971)
    
    Summary:
    Pull Request resolved: #2952
    
    * Some auto-formatting by my VSCode (remove extra spaces)
    * Remove imports that have been imported in previous part of the doc
    * Other minor changes to keep consistency across the doc
    * Link a screenshot instead of using the raw table because the original table is illegible:
     {F1482781056}
    
    Reviewed By: GregoryComer
    
    Differential Revision: D55938344
    
    fbshipit-source-id: 699abb9ebe1196ab73d90a3d08d60be7aa0d8688
    (cherry picked from commit e733f2d)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 11, 2024
    Commit 16c3afc

Commits on Apr 12, 2024

  1. Commit 6a67ec2
  2. Add llama2 readme in examples/README (#2992) (#2993)

    Summary:
    Pull Request resolved: #2992
    
    We should promote the llama2 page more in https://github.com/pytorch/executorch/tree/main/examples/
    
    bypass-github-export-checks
    bypass-github-pytorch-ci-checks
    bypass-github-executorch-ci-checks
    
    Reviewed By: iseeyuan
    
    Differential Revision: D56018978
    
    fbshipit-source-id: cbbc7bd2ea4ce55e564bd6b4a2900f623599dde6
    (cherry picked from commit e641ffc)
    
    Co-authored-by: Mergen Nachin <mnachin@meta.com>
    pytorchbot and mergennachin authored Apr 12, 2024
    Commit be476a5
  3. Add the missing import generate_etrecord to doc Getting Started with LLM (#2977) (#2997)
    
    Summary:
    Pull Request resolved: #2977
    
    As titled
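    
    For context, a hedged sketch of how the added import fits into that guide's flow, using the SDK-era module path; the exact names and signatures in the current docs may differ:
    
    ```
    import copy
    
    from executorch.sdk import generate_etrecord  # the import the doc was missing
    
    def export_with_etrecord(edge_manager, partitioner, path="etrecord.bin"):
        # Keep a copy of the edge-dialect program before delegation so the ETRecord
        # can pair it with the final ExecuTorch program for later inspection.
        edge_copy = copy.deepcopy(edge_manager)
        et_program = edge_manager.to_backend(partitioner).to_executorch()
        generate_etrecord(path, edge_copy, et_program)
        return et_program
    ```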
    
    Reviewed By: Gasoonjia
    
    Differential Revision: D55992093
    
    fbshipit-source-id: 7864c330bd86af5d4127cacfd47e96f1e6666bfb
    (cherry picked from commit cb9caa3)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 12, 2024
    Commit ca69051
  4. Commit 31bb2ea

Commits on Apr 15, 2024

  1. Add required deps to pyproject.toml

    These pip dependencies need to be present to build the pip wheel.
    
    Also, change the version to a stub that looks less like a real version,
    until we can hook up the logic to get the version from the git repo
    state.
    dbort committed Apr 15, 2024
    Commit 28f1c8c
  2. Install build requirements in pre_build_script.sh

    Manually install build requirements because `python setup.py
    bdist_wheel` does not install them.
    dbort committed Apr 15, 2024
    Commit 72854c8
  3. Have setup.py unset HOME when running buck2

    setup.py is sometimes run as root in docker containers. buck2 doesn't
    allow running as root unless $HOME is owned by root or does not exist.
    So temporarily undefine it while configuring cmake, which runs buck2 to
    get some source lists.
    
    Also, the buck2 daemon can sometimes get stuck on the CI workers. Try
    killing it before starting the build, ignoring any failures.
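    
    A minimal sketch of the pattern described above (a hypothetical helper, not the actual setup.py code), dropping HOME only for the child process:
    
    ```
    import os
    import subprocess
    
    def run_cmake_without_home(cmake_args):
        env = os.environ.copy()
        # buck2 refuses to run as root unless $HOME is owned by root or unset,
        # so drop HOME only in the environment passed to this child process.
        env.pop("HOME", None)
        subprocess.run(["cmake", *cmake_args], env=env, check=True)
    ```
    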
    dbort committed Apr 15, 2024
    Commit 4280334
  4. Add project.ignore to .buckconfig to reduce watched files

    Some CI jobs can fail with "OS file watch limit reached" when running
    buck2. This section should reduce the number of files that it tries to
    watch.
    dbort committed Apr 15, 2024
    Commit 1352d4e
  5. Don't recurse submodules when building wheels

    Change the build-wheels workflow to only fetch the first layer of
    submodules. ExecuTorch only needs the first layer of submodules to
    build its pip package, but the `build_wheels_*.yaml` workflows will
    recursively fetch all submodules by default.
    
    Fetching all submodules can also cause `buck2` to fail because it will
    try to watch too many files.
    
    This change makes `buck2` work on the CI runners, speeds up the jobs,
    and reduces disk/network usage.
    dbort committed Apr 15, 2024
    Commit d5cbc09
  6. Build pybindings and link in backends when building pip wheels

    Always build the pybindings when building the pip wheel.
    
    Always link in XNNPACK.
    
    On macOS, also link in MPS. Core ML can't build on the worker machine,
    though, because the version of macOS is too old; Core ML requires some
    features introduced in macOS 10.15.
    dbort committed Apr 15, 2024
    Commit dc1ca98
  7. Wrap std::isnan/std::isinf in the portable operators

    Passing the `std::` functions directly to unary_ufunc_realhb_to_bool
    can cause "error: cannot resolve overloaded function ‘isinf’ based
    on conversion to type ‘torch::executor::FunctionRef<bool(double)>’"
    in some compilation environments.
    
    Might be because these functions can be templatized, or because they
    became constexpr in C++23.
    dbort committed Apr 15, 2024
    Commit 638433f

Commits on Apr 16, 2024

  1. Decouple custom ops in llama_transformer.py Part 1/N (#3005) (#3052)

    Summary:
    This is a no-op
    
    Pull Request resolved: #3005
    
    Test Plan:
    CI
    
    Run with
    
    `python -m examples.models.llama2.export_llama -c stories110M.pt -p params.json -kv --use_sdpa_with_kv_cache -X`
    
    and with
    
    `python -m examples.models.llama2.export_llama -c stories110M.pt -p params.json -kv -X`
    
    Make sure both work
    
    Reviewed By: cccclai
    
    Differential Revision: D56048177
    
    Pulled By: mergennachin
    
    fbshipit-source-id: 3ac9ac5c34f6fe215de1cfe8b5ddc7aae3635359
    (cherry picked from commit 488afc5)
    
    Co-authored-by: Mergen Nachin <mnachin@meta.com>
    cccclai and mergennachin authored Apr 16, 2024
    Commit 60bf405

Commits on Apr 17, 2024

  1. add more instructions and examples on Delegation (#3042)

    * add more instructions and examples on Delegation (#2973)
    
    Summary:
    Pull Request resolved: #2973
    
    as title.
    
    Reviewed By: vmpuri, byjlw
    
    Differential Revision: D55988177
    
    fbshipit-source-id: 8cdc953118ecd22e8e9a809f0dd716a30a7fc117
    (cherry picked from commit 17c64a3)
    
    * replace Executorch with ExecuTorch to fix lint error
    
    ---------
    
    Co-authored-by: Songhao Jia <gasoonjia@meta.com>
    pytorchbot and Gasoonjia authored Apr 17, 2024
    Commit ca7eba9
  2. Decouple custom ops in llama_transformer.py Part 2/N (#3007) (#3061)

    Summary:
    Pull Request resolved: #3007
    
    Keep llama_transformer.py looking like the stock implementation, so that it can be reused everywhere.
    
    Do a module swap.
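    
    A generic sketch of the module-swap pattern (the class names here are hypothetical, not the actual ExecuTorch code):
    
    ```
    import torch
    
    def swap_modules(model: torch.nn.Module, stock_cls, custom_cls):
        # Recursively replace each instance of the stock module with the custom
        # one, so llama_transformer.py itself stays identical to the stock code.
        for name, child in model.named_children():
            if isinstance(child, stock_cls):
                setattr(model, name, custom_cls(child))
            else:
                swap_modules(child, stock_cls, custom_cls)
        return model
    ```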
    
    Reviewed By: cccclai
    
    Differential Revision: D56048640
    
    fbshipit-source-id: 76de1b09b7f5d79422bb3b32bc830a9a7ecd935c
    (cherry picked from commit 74eb8b3)
    
    Co-authored-by: Mergen Nachin <mnachin@meta.com>
    cccclai and mergennachin authored Apr 17, 2024
    Commit 87d3748
  3. Cherry-pick commits for executorch_no_prim_ops (#3025)

    * Add executorch_no_prim_ops target (#2934)
    
    Summary:
    Pull Request resolved: #2934
    
    Currently `libexecutorch.a` always contains the prim ops. This becomes a problem when a binary contains two "versions" of `libexecutorch.a`, causing a double registration of the prim ops.
    
    For example, `libA.so` depends on `libexecutorch.a`, and a binary `B` depends on both `libA.so` and `libexecutorch.a`. Since both `libexecutorch.a` and `libA.so` contain the prim ops, they will be registered twice.
    
    In this PR I created another library `executorch_no_prim_ops` for `libA.so` to depend on.
    
    Reviewed By: cccclai, kirklandsign
    
    Differential Revision: D55907752
    
    fbshipit-source-id: 755a9b8d5f6f7cf44d011b83bfdc18be6da1aa05
    (cherry picked from commit d309e9d)
    
    * Fix failing CI jobs caused by #2934 (#2961)
    
    Summary:
    Pull Request resolved: #2961
    
    Fix these 3 CI job failures caused by #2934 (D55907752):
    
    * Apple / build-frameworks-ios / macos-job
    * trunk / test-arm-backend-delegation / linux-job
    * trunk / test-coreml-delegate / macos-job
    
    Reviewed By: kirklandsign
    
    Differential Revision: D55950023
    
    fbshipit-source-id: 6166d9112e6d971d042df1400442395d8044c3b3
    (cherry picked from commit d993797)
    
    * [NOT-CLEAN-CP] Fix 3 CI jobs (#3006)
    
    Summary:
    * [NOT APPLICABLE IN RELEASE] Apple / build-frameworks-ios / macos-job
    
    We removed libcustom_ops_lib.a in #2916 so we need to remove it from `build_apple_frameworks.sh`.
    
    * [NOT APPLICABLE IN RELEASE] Lint / lintrunner / linux-job
    
    Remove extra line in backends/qualcomm/quantizer/utils.py
    
    * pull / unittest / macos (buck2) / macos-job
    
    Fix it by using `executorch_no_prim_ops` instead of `executorch` in MPS and CoreML.
    
    Pull Request resolved: #3006
    
    Reviewed By: lucylq
    
    Differential Revision: D56048430
    
    Pulled By: larryliu0820
    
    fbshipit-source-id: 9dcb476eea446ea3aba566d595167c691fb00eec
    (cherry picked from commit 5b7c4ba)
    
    ---------
    
    Co-authored-by: Mengwei Liu <larryliu@meta.com>
    Co-authored-by: Mengwei Liu <larryliu@fb.com>
    3 people authored Apr 17, 2024
    Commit e7e9e06
  4. Fix tutorial for Qualcomm AI Engine Direct Backend (#2956) (#3026)

    Summary:
    We had some refactors recently and need to update the tutorial and CMake.
    
    See #2955 for issues.
    
    Pull Request resolved: #2956
    
    Reviewed By: mcr229, cccclai
    
    Differential Revision: D55947725
    
    Pulled By: kirklandsign
    
    fbshipit-source-id: f23af28b9a8fe071223d8ffa922a6cd4e49efe61
    (cherry picked from commit c7fd394)
    kirklandsign authored Apr 17, 2024
    Commit 212e91f
  5. Android demo app tutorial fix for XNNPACK and QNN (#2962) (#3027)

    Summary:
    * Update tutorial due to recent changes.
    * Clean up setup.sh for app helper lib build.
    
    Pull Request resolved: #2962
    
    Reviewed By: cccclai
    
    Differential Revision: D55951189
    
    Pulled By: kirklandsign
    
    fbshipit-source-id: 2c95e8580145b039f503e7cd99a4003867f8dbb0
    (cherry picked from commit 26365f1)
    kirklandsign authored Apr 17, 2024
    Commit 925f674
  6. Skip annotate boolean input (#2957) (#3051)

    * Skip annotate boolean input (#2957)
    
    Summary:
    Pull Request resolved: #2957
    
    ghstack-source-id: 222200589
    exported-using-ghexport
    
    It only makes sense to quantize fp tensors, not booleans. Add a check to make sure only fp tensors are annotated in the quantizer
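    
    A hedged sketch of such a check, assuming a torch.fx node whose "val" meta carries an example tensor (names illustrative only, not the actual quantizer code):
    
    ```
    import torch
    
    def is_annotatable(node) -> bool:
        # Only floating-point tensors make sense to quantize; skip bool (and any
        # other non-fp) inputs so the quantizer does not annotate them.
        val = node.meta.get("val", None)
        return isinstance(val, torch.Tensor) and val.dtype.is_floating_point
    ```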
    
    Reviewed By: jerryzh168
    
    Differential Revision: D55946526
    
    fbshipit-source-id: d94bfee38ab2d29fc9672ab631b4d5d0c5239d25
    
    * fix lint
    cccclai authored Apr 17, 2024
    Commit e078e93

Commits on Apr 18, 2024

  1. Update doc-build.yml (#3045) (#3098)

    Summary: Pull Request resolved: #3045
    
    Reviewed By: clee2000
    
    Differential Revision: D56201946
    
    Pulled By: svekars
    
    fbshipit-source-id: 4212c24b02a1229ff06137b0d437b4e8c5dd454e
    (cherry picked from commit c73bfc0)
    
    Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
    pytorchbot and svekars authored Apr 18, 2024
    Commit 59fa8e3
  2. Update doc-build.yml (#3071) (#3099)

    Summary:
    Move noindex logic to the build job
    
    Pull Request resolved: #3071
    
    Reviewed By: clee2000
    
    Differential Revision: D56218857
    
    Pulled By: svekars
    
    fbshipit-source-id: 69dff489d98eee046d69185a6c03d62fbae37a16
    (cherry picked from commit 5d7949d)
    
    Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
    pytorchbot and svekars authored Apr 18, 2024
    Commit 0ad7043
  3. move mask as sdpa input instead of attribute (#3036) (#3114)

    Summary:
    Pull Request resolved: #3036
    
    sdpa (https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) takes the attention mask as an input; refactor the sdpa module so its inputs more closely match the sdpa inputs
    ghstack-source-id: 222650466
    exported-using-ghexport
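    
    A sketch of the refactor's shape (illustrative only, not the actual module): the mask becomes a forward() argument rather than module state, mirroring F.scaled_dot_product_attention:
    
    ```
    import torch
    import torch.nn.functional as F
    
    class SDPA(torch.nn.Module):
        def forward(self, q, k, v, attn_mask):
            # The attention mask is now a forward() input, matching the signature
            # of torch.nn.functional.scaled_dot_product_attention.
            return F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
    ```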
    
    Reviewed By: mergennachin
    
    Differential Revision: D56119739
    
    fbshipit-source-id: d9adda66e540abc518b7ffb6a5ebd2aab1626b3b
    (cherry picked from commit b341223)
    cccclai authored Apr 18, 2024
    Commit 3fe0e70
  4. Documentation for Vulkan Delegate (#3113) (#3124)

    Summary:
    Pull Request resolved: #3113
    
    imported-using-ghimport
    
    Test Plan: Imported from OSS
    
    Reviewed By: cccclai
    
    Differential Revision: D56279743
    
    Pulled By: SS-JIA
    
    fbshipit-source-id: af55cdf2d8518c582b7d8deccb731c6bc442a1c9
    (cherry picked from commit 414cd05)
    
    Co-authored-by: Sicheng Jia <ssjia@fb.com>
    pytorchbot and SS-JIA authored Apr 18, 2024
    Commit c9811bc

Commits on Apr 19, 2024

  1. [RELEASE-ONLY] Pin Xcode projects to release/0.2 branch (#3155)

    * Pin Xcode projects to release/0.2 branch
    
    * Update the version for the iOS frameworks upload workflow
    shoumikhin authored Apr 19, 2024
    Commit 27e1a62
  2. Core ML Has Added Index_Put Support, No Need to Skip Anymore (#2975) (#3157)
    
    Summary:
    It was a workaround to skip `aten.index_put` op in Core ML delegation, at the cost of partitioning the Llama model into 13 pieces.
    
    For better performance, we prefer to delegate the whole model to Core ML. Since Core ML has added the [necessary support](apple/coremltools#2190), it is time to revert this workaround
    
    Pull Request resolved: #2975
    
    Reviewed By: kirklandsign
    
    Differential Revision: D56002979
    
    Pulled By: cccclai
    
    fbshipit-source-id: e7a7c8c43706cb57eba3e6f720b3d713bec5065b
    (cherry picked from commit 7d4bafc)
    
    Co-authored-by: yifan_shen3 <yifan_shen3@apple.com>
    pytorchbot and yifan_shen3 authored Apr 19, 2024
    Commit aa3f22c
  3. Add a simple sdpa (#3037) (#3166)

    Summary:
    Pull Request resolved: #3037
    
    Add a simple sdpa so it decomposes into simpler ops, instead of the decomposition of F.scaled_dot_product_attention, which produces 29 ops including `torch.where`
    ```
    def forward(self, q, k, v):
        aten_mul_scalar = executorch_exir_dialects_edge__ops_aten_mul_Scalar(q, 0.5946035575013605);  q = None
        aten_full_default = executorch_exir_dialects_edge__ops_aten_full_default([8, 8], True, dtype = torch.bool, layout = torch.strided, device = device(type='cpu'), pin_memory = False)
        aten_arange_start_step = executorch_exir_dialects_edge__ops_aten_arange_start_step(0, 8, layout = torch.strided, device = device(type='cpu'), pin_memory = False)
        aten_unsqueeze_copy_default = executorch_exir_dialects_edge__ops_aten_unsqueeze_copy_default(aten_arange_start_step, -2);  aten_arange_start_step = None
        aten_arange_start_step_1 = executorch_exir_dialects_edge__ops_aten_arange_start_step(0, 8, layout = torch.strided, device = device(type='cpu'), pin_memory = False)
        aten_unsqueeze_copy_default_1 = executorch_exir_dialects_edge__ops_aten_unsqueeze_copy_default(aten_arange_start_step_1, -1);  aten_arange_start_step_1 = None
        aten_sub_tensor = executorch_exir_dialects_edge__ops_aten_sub_Tensor(aten_unsqueeze_copy_default, aten_unsqueeze_copy_default_1);  aten_unsqueeze_copy_default = aten_unsqueeze_copy_default_1 = None
        aten_le_scalar = executorch_exir_dialects_edge__ops_aten_le_Scalar(aten_sub_tensor, 0);  aten_sub_tensor = None
        aten_logical_and_default = executorch_exir_dialects_edge__ops_aten_logical_and_default(aten_le_scalar, aten_full_default);  aten_le_scalar = aten_full_default = None
        aten_full_like_default = executorch_exir_dialects_edge__ops_aten_full_like_default(aten_logical_and_default, 0, dtype = torch.float32, pin_memory = False, memory_format = torch.preserve_format)
        aten_logical_not_default = executorch_exir_dialects_edge__ops_aten_logical_not_default(aten_logical_and_default);  aten_logical_and_default = None
        aten_scalar_tensor_default = executorch_exir_dialects_edge__ops_aten_scalar_tensor_default(-inf, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'))
        aten_where_self = executorch_exir_dialects_edge__ops_aten_where_self(aten_logical_not_default, aten_scalar_tensor_default, aten_full_like_default);  aten_logical_not_default = aten_scalar_tensor_default = aten_full_like_default = None
        aten_permute_copy_default = executorch_exir_dialects_edge__ops_aten_permute_copy_default(k, [0, 1, 3, 2]);  k = None
        aten_mul_scalar_1 = executorch_exir_dialects_edge__ops_aten_mul_Scalar(aten_permute_copy_default, 0.5946035575013605);  aten_permute_copy_default = None
        aten_expand_copy_default = executorch_exir_dialects_edge__ops_aten_expand_copy_default(aten_mul_scalar, [1, 1, 8, 8]);  aten_mul_scalar = None
        aten_view_copy_default = executorch_exir_dialects_edge__ops_aten_view_copy_default(aten_expand_copy_default, [1, 8, 8]);  aten_expand_copy_default = None
        aten_expand_copy_default_1 = executorch_exir_dialects_edge__ops_aten_expand_copy_default(aten_mul_scalar_1, [1, 1, 8, 8]);  aten_mul_scalar_1 = None
        aten_view_copy_default_1 = executorch_exir_dialects_edge__ops_aten_view_copy_default(aten_expand_copy_default_1, [1, 8, 8]);  aten_expand_copy_default_1 = None
        aten_bmm_default = executorch_exir_dialects_edge__ops_aten_bmm_default(aten_view_copy_default, aten_view_copy_default_1);  aten_view_copy_default = aten_view_copy_default_1 = None
        aten_view_copy_default_2 = executorch_exir_dialects_edge__ops_aten_view_copy_default(aten_bmm_default, [1, 1, 8, 8]);  aten_bmm_default = None
        aten_add_tensor = executorch_exir_dialects_edge__ops_aten_add_Tensor(aten_view_copy_default_2, aten_where_self);  aten_view_copy_default_2 = aten_where_self = None
        aten__softmax_default = executorch_exir_dialects_edge__ops_aten__softmax_default(aten_add_tensor, -1, False);  aten_add_tensor = None
        aten_expand_copy_default_2 = executorch_exir_dialects_edge__ops_aten_expand_copy_default(aten__softmax_default, [1, 1, 8, 8]);  aten__softmax_default = None
        aten_view_copy_default_3 = executorch_exir_dialects_edge__ops_aten_view_copy_default(aten_expand_copy_default_2, [1, 8, 8]);  aten_expand_copy_default_2 = None
        aten_expand_copy_default_3 = executorch_exir_dialects_edge__ops_aten_expand_copy_default(v, [1, 1, 8, 8]);  v = None
        aten_view_copy_default_4 = executorch_exir_dialects_edge__ops_aten_view_copy_default(aten_expand_copy_default_3, [1, 8, 8]);  aten_expand_copy_default_3 = None
        aten_bmm_default_1 = executorch_exir_dialects_edge__ops_aten_bmm_default(aten_view_copy_default_3, aten_view_copy_default_4);  aten_view_copy_default_3 = aten_view_copy_default_4 = None
        aten_view_copy_default_5 = executorch_exir_dialects_edge__ops_aten_view_copy_default(aten_bmm_default_1, [1, 1, 8, 8]);  aten_bmm_default_1 = None
        return (aten_view_copy_default_5,)
    ```
    After applying the diff, we remove the following ops
    ```
        %aten_full_like_default : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.full_like.default](args = (%aten_index_tensor_2, 0), kwargs = {dtype: torch.float32, pin_memory: False, memory_format: torch.preserve_format})
    
        %aten_logical_not_default : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.logical_not.default](args = (%aten_index_tensor_2,), kwargs = {})
    
        %aten_scalar_tensor_default : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.scalar_tensor.default](args = (-inf,), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu})
    
        %aten_where_self : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.where.self](args = (%aten_logical_not_default, %aten_scalar_tensor_default, %aten_full_like_default), kwargs = {})
    
        %aten_mul_scalar : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.mul.Scalar](args = (%aten_permute_copy_default_3, 0.5946035575013605), kwargs = {})
        ...
        %aten_mul_scalar_1 : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.mul.Scalar](args = (%aten_permute_copy_default_6, 0.5946035575013605), kwargs = {})
    ```
    but introduce an add:
    ```
        %aten_add_tensor_3 : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.add.Tensor](args = (%aten_mul_tensor_11, %aten_index_tensor_2), kwargs = {})
    ```
    ghstack-source-id: 223152096
    exported-using-ghexport
    
    Reviewed By: mergennachin, kimishpatel
    
    Differential Revision: D56119737
    
    fbshipit-source-id: ec8e875f0a4c4ec67b7493e4872c9a5b081e6de7
    (cherry picked from commit cf78107)
    cccclai authored Apr 19, 2024
    Commit efb7cf3
  4. Docs for lower smaller models to mps/coreml/qnn (#3146) (#3178)

    Summary:
    Pull Request resolved: #3146
    
    ghstack-source-id: 223235858
    
    Reviewed By: mcr229, kirklandsign
    
    Differential Revision: D56340028
    
    fbshipit-source-id: ef06142546ac54105ae87007cd82369917a22b3e
    (cherry picked from commit d47f9fe)
    cccclai authored Apr 19, 2024
    Commit 36eb9c8

Commits on Apr 20, 2024

  1. qnn end to end flow for stories model (#3038) (#3182)

    Summary:
    Pull Request resolved: #3038
    
    Patch a few changes including:
    - support bool tensor type
    - support fp16 and fix the 8w8a quantization.
    - add two non-supported ops (slice_scatter and index_put) in common_defs.py
    
    stories model working end to end:
    AOT:
    fp16:
    ```
    python -m examples.models.llama2.export_llama -kv --qnn -c stories110M.pt -p params.json
    ```
    quantize:
    ```
    python -m examples.models.llama2.export_llama -kv --qnn --pt2e_quantize qnn_8a8w -c stories110M.pt -p params.json
    ```
    
    Runtime:
    ```
    /llama_main --model_path=llama2_fp16_qnn_2.21.pte  --tokenizer_path=tokenizer.bin --prompt="Once"
    ```
    Output:
    ```
    Once upon a time, there was a little girl named Lily. She loved to play outside and explore the world around her. One day, she went on a walk with her mommy and they found a beautiful landscape with lots of trees and flowers.
    Lily said, "Mommy, this place is so pretty! Can we take a picture?"
    Mommy replied, "Of course, Lily! Let's take a picture to remember the original place we found."
    After they took the picture, they continued their walk and saw a bird flying in the sky. Lily said, "MomPyTorchObserver {"prompt_tokens":2,"generated_tokens":125,"model_load_start_ms":1713226585936,"model_load_end_ms":1713226586909,"inference_start_ms":1713226586909,"inference_end_ms":1713226590363,"prompt_eval_end_ms":1713226586966,"first_token_ms":1713226586994,"aggregate_sampling_time_ms":23,"SCALING_FACTOR_UNITS_PER_SECOND":1000}
    I 00:00:04.436699 executorch:runner.cpp:414] 	Prompt Tokens: 2    Generated Tokens: 125
    I 00:00:04.436703 executorch:runner.cpp:420] 	Model Load Time:		0.973000 (seconds)
    I 00:00:04.436732 executorch:runner.cpp:430] 	Total inference time:		3.454000 (seconds)		 Rate: 	36.189925 (tokens/second)
    I 00:00:04.436735 executorch:runner.cpp:438] 		Prompt evaluation:	0.057000 (seconds)		 Rate: 	35.087719 (tokens/second)
    I 00:00:04.436739 executorch:runner.cpp:449] 		Generated 125 tokens:	3.397000 (seconds)		 Rate: 	36.797174 (tokens/second)
    I 00:00:04.436742 executorch:runner.cpp:457] 	Time to first generated token:	0.085000 (seconds)
    I 00:00:04.436744 executorch:runner.cpp:464] 	Sampling time over 127 tokens:	0.023000 (seconds)
    [INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
    [INFO] [Qnn ExecuTorch]: Destroy Qnn context
    ```
    
    The stories model is too small and sensitive to quantization.
    ghstack-source-id: 223199545
    exported-using-ghexport
    
    Reviewed By: mergennachin, kirklandsign
    
    Differential Revision: D56119738
    
    fbshipit-source-id: daf5563fe51a677f302e09ae8a9fb80e6bda72c5
    (cherry picked from commit 3257c66)
    cccclai authored Apr 20, 2024
    Commit 7b29ad2

Commits on Apr 22, 2024

  1. Fix build-framework-ios CI job (#2996) (#3186)

    Summary:
    As titled. `build_apple_frameworks.sh` copies all the exported headers out, and in #2934 `//executorch/schema:program` is being moved to `exported_deps`, causing `build_apple_frameworks.sh` to be unable to copy the generated headers `program_generated.h` and `scalar_type_generated.h`.
    
    This PR fixes it by moving it back to `deps`.
    
    Pull Request resolved: #2996
    
    Reviewed By: kirklandsign
    
    Differential Revision: D56028952
    
    Pulled By: larryliu0820
    
    fbshipit-source-id: 2cd4999154877b0ac7b49cd1f54d518cba34b2f2
    (cherry picked from commit 3b727a7)
    larryliu0820 authored Apr 22, 2024
    Commit 4142cf6
  2. ETRecord ser/de handling "None" outputs and more (#3039) (#3191)

    Summary:
    Pull Request resolved: #3039
    
    For the ease of communication, let me assign nicknames to the files related to this diff:
    * File A: *caffe2/torch/_export/serde/serialize.py*
    * File B: *executorch/exir/serde/serialize.py*
    * File C: *executorch/exir/serde/export_serialize.py*
    
    Recently, we noticed that error `torch._export.serde.serialize.SerializeError: Unable to deserialize output node Argument(as_none=[])` (P1210590561) was thrown from File B when deserializing ETRecord. It's possible that the error has been there since the beginning, but we've just never tested that logic path.
    
    In this diff, I made a fix on File B to resolve this particular issue. I also added handling for the "None" output case in the sdk logic. ***Keep on reading if you don't think the code changes make sense:***
    
    I explored the history of file changes. In chronological order:
    1. D48258552, `deserialize_graph_output()` was copied from File A to File B, with some modifications made. The `deserialize_graph_output()` in File B overrides that in File A due to polymorphism.
    2. D52446586, File C was created by ***copying*** File A. As a result of this diff, the `deserialize_graph_output()` in File B now overrides that in File C.
    3. Also in D52446586, the `deserialize_graph_output()` in File A had some significant changes; File C got the new version of `deserialize_graph_output()`. But this diff didn't update the `deserialize_graph_output()` in File B.
    4. D55391674 added the handling for "None" outputs to File A.
    
    This diff brings (parts of) File C up-to-date with File A, and makes `deserialize_graph_output()` in File B properly override that in File A.
    
    In the future, we should figure out how to keep File C and File A in sync. Recently, File C was broken because it didn't stay in sync with File A in D54855251 and had to be fixed by D55776877. There will be a design review session this Friday to discuss consolidating the serialization code for edge and export.
    
    Reviewed By: tarun292
    
    Differential Revision: D56091104
    
    fbshipit-source-id: 20c75ddc610c3be7ab2bb62943419d3b8b2be079
    (cherry picked from commit 89cfa73)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 22, 2024
    Commit a94459a
  3. Handle empty (size=0) tensor in Inspector (#2998) (#3192)

    Summary:
    Pull Request resolved: #2998
    
    Empty tensors are not handled so they throw errors.
     {F1484412951}
    
    Reviewed By: tarun292
    
    Differential Revision: D56027102
    
    fbshipit-source-id: a8dab52d9ba7eb0784a72493e9888cf63aefbb76
    (cherry picked from commit f14dc83)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 22, 2024
    Commit 281134d
  4. Cherry pick #3006 (#3190)

    * Fix build-framework-ios CI job (#2996)
    
    Summary:
    As titled. `build_apple_frameworks.sh` copies all the exported headers out, and in #2934 `//executorch/schema:program` is being moved to `exported_deps`, causing `build_apple_frameworks.sh` to be unable to copy the generated headers `program_generated.h` and `scalar_type_generated.h`.
    
    This PR fixes it by moving it back to `deps`.
    
    Pull Request resolved: #2996
    
    Reviewed By: kirklandsign
    
    Differential Revision: D56028952
    
    Pulled By: larryliu0820
    
    fbshipit-source-id: 2cd4999154877b0ac7b49cd1f54d518cba34b2f2
    
    * Fix 3 CI jobs (#3006)
    
    Summary:
    * Apple / build-frameworks-ios / macos-job
    
    We removed libcustom_ops_lib.a in #2916 so we need to remove it from `build_apple_frameworks.sh`.
    
    * Lint / lintrunner / linux-job
    
    Remove extra line in backends/qualcomm/quantizer/utils.py
    
    * pull / unittest / macos (buck2) / macos-job
    
    Fix it by using `executorch_no_prim_ops` instead of `executorch` in MPS and CoreML.
    
    Pull Request resolved: #3006
    
    Reviewed By: lucylq
    
    Differential Revision: D56048430
    
    Pulled By: larryliu0820
    
    fbshipit-source-id: 9dcb476eea446ea3aba566d595167c691fb00eec
    
    * Revert "Fix build-framework-ios CI job (#2996)"
    
    This reverts commit e365c5b.
    
    ---------
    
    Co-authored-by: Mengwei Liu <larryliu@fb.com>
    huydhn and larryliu0820 authored Apr 22, 2024
    Commit dcd5c44
  5. Handle missing data types. (#2984) (#3134)

    Summary:
    **Changes**
    - The runtime was failing if it encountered a datatype not supported by the Core ML framework. The changes add support for all the datatypes supported by coremltools; essentially, if `CoreMLBackend` can export a model, the runtime can execute it. Complex types are not supported because `coremltools` doesn't support them.
    
    - Improves and cleans the multiarray copying code.
    
    - Adds portable ops to CoreML executor so that it can run a partitioned model.
    
    **Testing**
    - Tested partitioned model `coreml_stories.pte`
    - Added multiarray copying tests.
    
    Pull Request resolved: #2984
    
    Reviewed By: kirklandsign
    
    Differential Revision: D56003795
    
    Pulled By: shoumikhin
    
    fbshipit-source-id: fa1c7846f9510d68c359aed6761aedb2d10c6f46
    (cherry picked from commit d731866)
    
    Co-authored-by: Gyan Sinha <gyanendra_sinha@apple.com>
    pytorchbot and cymbalrush authored Apr 22, 2024
    Commit f008e12
  6. Commit 773da4d
  7. Add a pure python wrapper to pybindings.portable_lib (#3137) (#3218)

    Summary:
    Pull Request resolved: #3137
    
    When installed as a pip wheel, we must import `torch` before trying to import the pybindings shared library extension. This will load libtorch.so and related libs, ensuring that the pybindings lib can resolve those runtime dependencies.
    
    So, add a pure python wrapper that lets us do this when users say `import executorch.extension.pybindings.portable_lib`
    
    We only need this for OSS, so don't bother doing this for other pybindings targets.
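    
    A minimal sketch of that wrapper pattern (not the literal file contents; the underscore-prefixed module name is taken from the `_portable_lib.so` mentioned elsewhere in this log):
    
    ```
    # portable_lib.py (sketch): a pure-python shim around the native extension.
    import torch  # noqa: F401  # loads libtorch.so and friends before the C extension
    
    # Re-export everything from the compiled module, assumed to live next to this
    # wrapper under a leading-underscore name.
    from executorch.extension.pybindings._portable_lib import *  # noqa: F401,F403
    ```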
    
    Reviewed By: orionr, mikekgfb
    
    Differential Revision: D56317150
    
    fbshipit-source-id: 920382636732aa276c25a76163afb7d28b1846d0
    (cherry picked from commit 969aa96)
    
    Co-authored-by: Dave Bort <dbort@meta.com>
    pytorchbot and dbort authored Apr 22, 2024
    Commit 67d0dd7

Commits on Apr 23, 2024

  1. Remove unused extension/aot_util directory (#3216) (#3226)

    Summary:
    The AOT util extension was removed a while back, but the directory and README still exist. This PR cleans them up. Note that the aot_util sources were deleted previously, so this is not a functional change.
    
    Pull Request resolved: #3216
    
    Test Plan: CI. This is not a functional change, as it changes only a README file.
    
    Reviewed By: metascroy
    
    Differential Revision: D56436216
    
    Pulled By: GregoryComer
    
    fbshipit-source-id: 2f8b8cee20b7a3efb25a1ef1df3ebd69f3b512c9
    (cherry picked from commit 67f3376)
    
    Co-authored-by: Gregory Comer <gregoryjcomer@gmail.com>
    pytorchbot and GregoryComer authored Apr 23, 2024
    Commit c79666a
  2. Fix dynamic linking issues with prebuilt pip packages (#3049)

    * Build pybindings with -D_GLIBCXX_USE_CXX11_ABI=0 to match libtorch.so
    
    libtorch.so builds with the old glibc ABI, so we need to as well,
    for any source files that include torch headers.
    
    * Set the RPATH of _portable_lib.so so it can find libtorch
    
    pip wheels will need to be able to find the torch libraries. On Linux,
    the .so has non-absolute dependencies on libs like "libtorch.so" without
    paths; as long as we `import torch` first, those dependencies will work.
    
    But Apple dylibs do not support non-absolute dependencies, so we need
    to tell the loader where to look for its libraries. The LC_LOAD_DYLIB
    entries for the torch libraries will look like "@rpath/libtorch.dylib",
    so we can add an LC_RPATH entry to look in a directory relative to the
    installed location of our _portable_lib.so file.
    
    To see these LC_* values, run `otool -l _portable_lib*.so`.
    
    * Disable wheel delocation on macos
    
    The executorch build system will ensure that .dylib/.so files have
    LC_LOAD_DYLIB and LC_RPATH entries that will work when they're
    installed.
    
    Delocating (i.e., making copies of the .dylibs that ET's libs depend on)
    will break any libs that depend on the torch libraries if users ever
    import both `torch` and the executorch library. Both import paths must
    load exactly the same file, not just a copy of it.
    
    * Implement smoke_test.py for pip wheel jobs
    
    This script is run by CI after building the executorch wheel. Before
    running this, the job will install the matching torch package as well as
    the newly-built executorch package and its dependencies.
    
    For now we test the export of a simple model, and try executing it using
    the runtime pybindings.
    
    Test Plan:
    ```
    ./install_requirements.sh
    python build/packaging/smoke_test.py
    ```
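    
    As a rough illustration of what such a smoke test might look like (a hedged sketch, not the actual build/packaging/smoke_test.py; the pybindings entry points and return shapes are assumptions):
    
    ```
    import torch
    
    from executorch.exir import to_edge
    from executorch.extension.pybindings.portable_lib import (
        _load_for_executorch_from_buffer,
    )
    
    class Add(torch.nn.Module):
        def forward(self, x, y):
            return x + y
    
    # Export a trivial model, lower it to an ExecuTorch program, then run it
    # through the runtime pybindings and compare against eager PyTorch.
    inputs = (torch.ones(2, 2), torch.ones(2, 2))
    et_program = to_edge(torch.export.export(Add(), inputs)).to_executorch()
    module = _load_for_executorch_from_buffer(et_program.buffer)
    (result,) = module.forward(inputs)
    assert torch.allclose(result, Add()(*inputs))
    ```
    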
    dbort authored Apr 23, 2024
    Commit 1fd5562
  3. Update some SDK docs from MVP (#3212) (#3230)

    Summary:
    Pull Request resolved: #3212
    
    Doc changes including:
    1. Remove instructions for Buck because we're moving away from it and only use CMake now and going forward;
    2. Remove "Coming soon" from features that have since landed;
    3. Formatting.
    
    Reviewed By: Jack-Khuu
    
    Differential Revision: D56433016
    
    fbshipit-source-id: fffa283b4a04438866d84765a65377dcf8a88837
    (cherry picked from commit b41f763)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 23, 2024
    Commit 4cce1cf
  4. Commit 24ecd04
  5. Enable doc upload for tags, disable for release branches (#3153) (#3243)

    Summary:
    - Disabled doc upload for branches like release/x.x
    - Enabled publishing for tags.
    
    Tested locally:
    ```
    export GITHUB_REF=refs/tags/v3.1.4-rc5
    bash test-version.sh
    ```
    ```
    # test-version.sh
    if [[ "${GITHUB_REF}" =~ ^refs/tags/v([0-9]+\.[0-9]+)\.* ]]; then
      TARGET_FOLDER="${BASH_REMATCH[1]}"
    else
      TARGET_FOLDER="main"
    fi
    echo "Target folder: ${TARGET_FOLDER}"
    ```
    Output:
    ```
    Target folder: 3.1
    ```
    One more:
    ```
    export GITHUB_REF=refs/tags/v1.15.4
    bash test-version.sh
    ```
    Output:
    ```
    Target folder: 1.15
    ```
    
    Pull Request resolved: #3153
    
    Reviewed By: dbort
    
    Differential Revision: D56445037
    
    Pulled By: svekars
    
    fbshipit-source-id: e7328523dfe308e8921c1e4f365d9a757d053191
    (cherry picked from commit ee8c3a6)
    
    Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
    pytorchbot and svekars authored Apr 23, 2024
    Commit 1ba292a
  6. Update Core ML Backend Doc (#3188) (#3249)

    Summary:
    Update Core ML backend doc on:
    1. Partitioner
    2. Quantizer
    
    Pull Request resolved: #3188
    
    Reviewed By: shoumikhin
    
    Differential Revision: D56481126
    
    Pulled By: cccclai
    
    fbshipit-source-id: 925a107a210094e035a816a15c70d9aedd5bd369
    (cherry picked from commit c004efe)
    
    Co-authored-by: yifan_shen3 <yifan_shen3@apple.com>
    pytorchbot and yifan_shen3 authored Apr 23, 2024
    Commit 47bc4aa
  7. bundled program alpha document (#3224) (#3252)

    Summary:
    Pull Request resolved: #3224
    
    as title
    
    Reviewed By: tarun292, Jack-Khuu
    
    Differential Revision: D56446890
    
    fbshipit-source-id: fc3dc6bb2349cd7ca4a8e998e528176dd9fb7679
    (cherry picked from commit 783e932)
    
    Co-authored-by: Songhao Jia <gasoonjia@meta.com>
    pytorchbot and Gasoonjia authored Apr 23, 2024
    Commit ba6e318
  8. Fix executor_runner_mps and mpsdelegate linking with pybind (#3222) (#3248)
    
    Summary:
    Summary of changes:
    - fixes the mps_executor_runner build - previously it would fail to build due to incorrect linking with the portable ops
    - fixes `mpsdelegate` linking with `pybind` lib
    - added tests to check correctness directly through pybind
    - added a helper file (`bench_utils.py`) to help measure models forward pass between PyTorch MPS and ExecuTorch MPS
    
    Testing (will run both AOT and runtime if MPS was built with pybind):
    - `./install_requirements.sh --pybind mps`
    - invoke a single unit test: `python3 -m unittest backends.apple.mps.test.test_mps_indexing_ops -v -k test_mps_indexing_get_1`.
    - invoke all tests from a file: `python3 -m unittest backends.apple.mps.test.test_mps_indexing_ops -v`
    
    cc cccclai , shoumikhin
    
    Pull Request resolved: #3222
    
    Reviewed By: shoumikhin
    
    Differential Revision: D56447888
    
    Pulled By: cccclai
    
    fbshipit-source-id: 5cbbcbf8df34f29e23a1854df72f764337a9df76
    (cherry picked from commit 6c30eea)
    
    Co-authored-by: Denis Vieriu <dvieriu@apple.com>
    pytorchbot and DenisVieriu97 authored Apr 23, 2024
    Commit b045b3c
  9. update sdk delegate integration (#3246) (#3258)

    Summary:
    Pull Request resolved: #3246
    
    As title
    
    Reviewed By: tarun292
    
    Differential Revision: D56479387
    
    fbshipit-source-id: c324d2b46dc7f849dfb42b3452c6a82f24aa9319
    (cherry picked from commit cf487f1)
    
    Co-authored-by: Chen Lai <chenlai@meta.com>
    pytorchbot and cccclai authored Apr 23, 2024
    Commit 3d7a24c
  10. Dynamically determine the version of the pip package. (#3259)

    Use the logic from
    https://github.com/pytorch/torcharrow/blob/15a7f7124d4c73c8c541547aef072264baab63b7/setup.py#L21
    to play nicely with the pytorch ecosystem CI build environment.
    
    Test Plan:
    ```
    $ ./install_requirements.sh
    ...
    Successfully installed executorch-0.2.0a0+1ba292a
    
    $ python
    >>> from executorch import version
    >>> version.__version__
    '0.2.0a0+1ba292a'
    >>> version.git_version
    '1ba292ae4071c4eede8ea14e8f10ffd973a085b4'
    >>> ^D
    
    $ grep Version /home/dbort/.conda/envs/executorch-tmp/lib/python3.10/site-packages/executorch-0.2.0a0+1ba292a.dist-info/METADATA
    Metadata-Version: 2.1
    Version: 0.2.0a0+1ba292a
    ```
    
    Temporarily commented out the call to `setup()` in `setup.py` then
    imported it.
    
    ```
    $ python
    >>> from setup import Version
    >>> Version.string
    '0.2.0a0+1ba292a'
    >>> Version.git_hash
    '1ba292ae4071c4eede8ea14e8f10ffd973a085b4'
    >>> Version.write_to_python_file("/tmp/version.py")
    >>> ^D
    $ cat /tmp/version.py
    from typing import Optional
    __all__ = ["__version__", "git_version"]
    __version__ = "0.2.0a0+1ba292a"
    git_version: Optional[str] = '1ba292ae4071c4eede8ea14e8f10ffd973a085b4'
    ```
    
    ```
    $ BUILD_VERSION="5.5.5" python
    >>> from setup import Version
    >>> Version.string
    '5.5.5'
    ```
    dbort authored Apr 23, 2024
    Commit 214371d

Commits on Apr 24, 2024

  1. Update Profiling Section in XNNPACK Delegate Docs (#3237) (#3261)

    Summary:
    Pull Request resolved: #3237
    
    Updating Profiling Section of the docs
    
    The main point is pointing to the SDK Profiling Tutorial for how to get XNNPACK profiling information
    
    Reviewed By: metascroy, cccclai
    
    Differential Revision: D56439491
    
    fbshipit-source-id: 1d724ffae6d89e8769ea427cb37b4ec85fe3452f
    (cherry picked from commit 329184a)
    
    Co-authored-by: Max Ren <maxren@meta.com>
    pytorchbot and mcr229 authored Apr 24, 2024
    Commit a0bd7fa
  2. Commit 66783f4
  3. Add index.Tensor and aten.logical_not (#3221) (#3267)

    Summary:
    Add missing llama ops for MPS delegate:
    - `index.Tensor`
    - `logical_not`
    
    `index.put` works correctly for generating 1 token, but gives incorrect results on the 2nd token. This remains disabled.
    
    Summary of changes:
    - Adds missing llama2 ops
    - Adds support for launching Metal kernels instead of MPSGraph ops (if MPSGraph doesn't have the support)
    
    cc cccclai , shoumikhin
    
    Pull Request resolved: #3221
    
    Reviewed By: shoumikhin
    
    Differential Revision: D56447710
    
    Pulled By: cccclai
    
    fbshipit-source-id: 778a485df5e67d1afd006b42f07b69c8a3961223
    (cherry picked from commit 02a6b66)
    
    Co-authored-by: Denis Vieriu <dvieriu@apple.com>
    pytorchbot and DenisVieriu97 authored Apr 24, 2024
    Commit 75484d9
  4. Specify OSX deployment target for python package. (#3194) (#3279)

    Summary:
    Pull Request resolved: #3194
    overriding_review_checks_triggers_an_audit_and_retroactive_review
    Oncall Short Name: executorch
    
    Differential Revision: D56405473
    
    fbshipit-source-id: 785709e8acc1b07e57825b278c3e0a355641e13a
    (cherry picked from commit a7a9ab3)
    
    Co-authored-by: Anthony Shoumikhin <shoumikhin@meta.com>
    pytorchbot and shoumikhin authored Apr 24, 2024
    Commit 915164e
  5. Pin CoreMLTools 7.2 (#3170) (#3281)

    Summary:
    It is more stable to pin a release branch of CoreMLTools. We will periodically update it when necessary
    
    Pull Request resolved: #3170
    
    Reviewed By: cccclai
    
    Differential Revision: D56373108
    
    Pulled By: shoumikhin
    
    fbshipit-source-id: d6a96813f07df97abbf8f4ca75e2aae2666372b1
    (cherry picked from commit cb77763)
    
    Co-authored-by: yifan_shen3 <yifan_shen3@apple.com>
    pytorchbot and yifan_shen3 authored Apr 24, 2024
    Commit ed07890
  6. Add iPad support to demo apps. (#3251) (#3256)

    Summary:
    Pull Request resolved: #3251
    
    .
    
    Reviewed By: cccclai
    
    Differential Revision: D56488666
    
    fbshipit-source-id: d63a08b4abdf055607948229be88f0c7762948ab
    (cherry picked from commit 1eaed2b)
    
    Co-authored-by: Anthony Shoumikhin <shoumikhin@meta.com>
    pytorchbot and shoumikhin authored Apr 24, 2024
    Commit e783942
  7. Update apple.yml (#3287)

    shoumikhin authored Apr 24, 2024
    Commit eabdeb0
  8. Fix typo in sub & clean up (#3100) (#3253)

    Summary: Pull Request resolved: #3100
    
    Reviewed By: kirklandsign
    
    Differential Revision: D56255838
    
    fbshipit-source-id: b6567320b557aeb287db66b43447db9caabebd13
    (cherry picked from commit e69a662)
    
    Co-authored-by: Manuel Candales <mcandales@meta.com>
    pytorchbot and manuelcandales authored Apr 24, 2024
    Commit da94594
  9. fix qnn install link (#3260) (#3271)

    Summary:
    Pull Request resolved: #3260
    
    As titled, the link was wrong.
    
    Reviewed By: kirklandsign
    
    Differential Revision: D56498322
    
    fbshipit-source-id: 42708b5f7a634f1c01e05af4c897d0c6da54d724
    (cherry picked from commit e9d7868)
    cccclai authored Apr 24, 2024
    Commit 0dd6639
  10. update docs (#3286)

    kirklandsign authored Apr 24, 2024
    Commit ad80c6b
  11. Fix a small inconsistency on the SDK debugging page (#3247) (#3290)

    Summary:
    Pull Request resolved: #3247
    
    so that the code is consistent with the text description
    
    Reviewed By: dbort
    
    Differential Revision: D56481274
    
    fbshipit-source-id: f303b966ebf3e07b510ef825c7bc09eaecd89554
    (cherry picked from commit ca8e589)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 24, 2024
    Commit f0a0a20
  12. SDK tutorial doc update (#3238) (#3292)

    Summary:
    Pull Request resolved: #3238
    
    fix some links, remove outdated commands
    
    Reviewed By: GregoryComer
    
    Differential Revision: D56453800
    
    fbshipit-source-id: 8bd86a593f8c5b9342e61ab2d129473d315b57a8
    (cherry picked from commit f89c312)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 24, 2024
    Commit dbb7e26
  13. Fix broken links on the coreml tutorial page (#3250) (#3293)

    Summary: Pull Request resolved: #3250
    
    Reviewed By: dbort
    
    Differential Revision: D56487125
    
    fbshipit-source-id: 502019365de043a7e07bb0d766134b334ee115ba
    (cherry picked from commit ba0caf8)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 24, 2024
    Commit 6044404
  14. delegation debug page (#3254) (#3294)

    Summary:
    Pull Request resolved: #3254
    
    Create a new page for the new util functions Chen and I made to debug delegations. These functions were well received within the team as well as by partner teams, including modai, so I think it's important to call them out in our documentation. The examples were copied from the LLM manual but reworded a bit to flow naturally in this doc.
    
    bypass-github-export-checks
    bypass-github-pytorch-ci-checks
    bypass-github-executorch-ci-checks
    
    Reviewed By: cccclai
    
    Differential Revision: D56491214
    
    fbshipit-source-id: 162b4ae75e79730218b0d669d1ec2a7a914b933c
    (cherry picked from commit bf9888f)
    
    Co-authored-by: Olivia Liu <olivialpx@meta.com>
    pytorchbot and Olivia-liu authored Apr 24, 2024
    Commit 147579a
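    As a rough illustration of the flow that page documents, here is a minimal sketch of inspecting delegation for a toy model lowered to XNNPACK; the import paths and helper names follow the LLM manual of that era and should be treated as assumptions, and `tabulate` is an extra dependency.

    ```python
    # Sketch only: summarize which operators were delegated after to_backend().
    # Import paths and helper names are assumptions based on the LLM manual.
    import torch
    from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
    from executorch.exir import to_edge
    from executorch.exir.backend.utils import get_delegation_info
    from tabulate import tabulate

    model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
    exported = torch.export.export(model, (torch.randn(1, 4),))
    edge = to_edge(exported).to_backend(XnnpackPartitioner())

    info = get_delegation_info(edge.exported_program().graph_module)
    print(info.get_summary())                      # totals for delegated vs. non-delegated ops
    df = info.get_operator_delegation_dataframe()  # per-operator breakdown
    print(tabulate(df, headers="keys", tablefmt="fancy_grid"))
    ```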
  15. Add delegate time scale converter to Inspector (#3240) (#3297)

    Summary:
    Pull Request resolved: #3240
    
    The time scale of reported delegate events might differ from the time scale of CPU events. This diff adds support for providing a callable that Inspector can invoke to convert the time scale of delegate events, ensuring consistency across delegated and non-delegated events.
    
    Reviewed By: Jack-Khuu
    
    Differential Revision: D55298701
    
    fbshipit-source-id: e888e51b602c7e1ec8cb9e05ac052280daa12823
    (cherry picked from commit b7b40ac)
    
    Co-authored-by: Tarun Karuturi <tkaruturi@meta.com>
    pytorchbot and tarun292 authored Apr 24, 2024
    Commit 759fd12
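    A minimal sketch of how such a converter might be supplied, assuming an ETDump and ETRecord were generated earlier; the `delegate_time_scale_converter` parameter name and the callable's signature are inferred from the commit description, not a confirmed API.

    ```python
    # Sketch only: the parameter name and callable signature below are
    # assumptions inferred from the commit description.
    from executorch.sdk import Inspector


    def delegate_to_cpu_time_scale(event_name, raw_timestamp):
        # Illustrative conversion: delegate timestamps reported in microseconds,
        # CPU events in nanoseconds.
        return raw_timestamp * 1_000


    inspector = Inspector(
        etdump_path="etdump.etdp",
        etrecord="etrecord.bin",
        delegate_time_scale_converter=delegate_to_cpu_time_scale,
    )
    inspector.print_data_tabular()
    ```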
  16. move code under executorch/example (#3176) (#3307)

    Summary:
    Pull Request resolved: #3176
    This diff moves the LLM manual code from outside GitHub (Dave's and Georgey's copies) into the executorch codebase so it can be pointed to directly.
    After this diff, //executorch/examples/llm_manual becomes the only source of truth for our LLM manual code.
    
    Reviewed By: byjlw, dbort
    
    Differential Revision: D56365058
    
    fbshipit-source-id: 97280fc0ca955caabb6056cddbb72102ed711f2c
    (cherry picked from commit b6e54d0)
    
    Co-authored-by: Songhao Jia <gasoonjia@meta.com>
    pytorchbot and Gasoonjia authored Apr 24, 2024
    Commit b218b18
  17. add dynamic export into llm manual (#3202) (#3308)

    Summary:
    Pull Request resolved: #3202
    
    This diff adds dynamic export to the LLM manual, including code and related comments.
    It also updates other documentation for better understanding.
    
    Reviewed By: dbort
    
    Differential Revision: D56365041
    
    fbshipit-source-id: 5ce4c15206a2923c4d54811cefca03f72869c719
    (cherry picked from commit 66a350b)
    
    Co-authored-by: Songhao Jia <gasoonjia@meta.com>
    pytorchbot and Gasoonjia authored Apr 24, 2024
    Commit 31300d0
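    For flavor, a self-contained sketch of the kind of dynamic-shape export the manual now shows; the tiny stand-in model and the dimension name are illustrative assumptions, not the manual's actual nanoGPT code.

    ```python
    # Sketch only: export with a dynamic sequence-length dimension.
    # TinyLM is a stand-in for the manual's model; names are assumptions.
    import torch
    from torch.export import Dim, export


    class TinyLM(torch.nn.Module):
        block_size = 256  # assumed context length

        def __init__(self):
            super().__init__()
            self.emb = torch.nn.Embedding(100, 16)
            self.head = torch.nn.Linear(16, 100)

        def forward(self, tokens):
            return self.head(self.emb(tokens))


    model = TinyLM()
    example_inputs = (torch.randint(0, 100, (1, 8), dtype=torch.long),)

    # Dimension 1 (sequence length) may vary at runtime, up to block_size.
    dynamic_shapes = ({1: Dim("token_dim", max=model.block_size)},)
    traced = export(model, example_inputs, dynamic_shapes=dynamic_shapes)
    print(traced.graph_signature)
    ```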
  18. Commit b3fb810
  19. [RELEASE ONLY] Android custom op registration (#3284)

    * [Android] Fix upload workflow for release
    
    * [RELEASE ONLY] Android custom op registration
    kirklandsign authored Apr 24, 2024
    Commit 904e989
  20. Audit and update the pip package metadata (#3265)

    Fill out the recommended `project` keys, most of which will affect the
    web page that PyPI will render for the `executorch` package.
    
    See
    https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#about-your-project
    for the latest guidance.
    
    Use
    https://github.com/pytorch/pytorch/blob/a21327e0b03cc18850a0608be2d9c5bd38fd4646/setup.py#L1394
    as a guide for the actual values.
    
    Add a README-wheel.md file that will be included in the wheel, and will
    become the main page contents on PyPI.
    
    Test Plan:
    * Installed the package with `./install_requirements.sh`
    * Looked at the files under ~/miniconda3/envs/executorch/lib/python3.10/site-packages/executorch-0.2.0a0+1a499e0.dist-info. METADATA and LICENSE both contain the new metadata.
    dbort authored Apr 24, 2024
    Commit d1cf0a6
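    A small sketch along the lines of the test plan above: after installing, the new `project` metadata can be read back from the installed distribution. The fields printed here are illustrative.

    ```python
    # Sketch only: read back the wheel metadata after install, mirroring the
    # test plan's check of the dist-info METADATA file.
    from importlib import metadata

    md = metadata.metadata("executorch")
    for field in ("Name", "Version", "Summary", "License", "Requires-Python"):
        print(f"{field}: {md.get(field)}")
    ```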
  21. Update Llava README.md (#3309)

    Simplify the instruction.
    iseeyuan authored Apr 24, 2024
    Commit 2a1ae4f
  22. Use relative links in llm/getting-started.md (#3244) (#3310)

    Summary:
    Use relative markdown links instead of full URLs. This way, the docs will always point to a consistent branch.
    
    Pull Request resolved: #3244
    
    Test Plan: Clicked on all modified links in the rendered docs preview: https://docs-preview.pytorch.org/pytorch/executorch/3244/llm/getting-started.html
    
    Reviewed By: Gasoonjia
    
    Differential Revision: D56479234
    
    Pulled By: dbort
    
    fbshipit-source-id: 45fb25f017c73df8606c3fb861acafbdd82fec8c
    (cherry picked from commit b560864)
    
    Co-authored-by: Dave Bort <dbort@meta.com>
    pytorchbot and dbort authored Apr 24, 2024
    Commit 861abb1
  23. [pyproject.toml] Add a dependency on torch==2.3 (#3277)

    * Fix lint
    
    Remove `requires-python = ">=3.10"`. This caused the linter to use a new
    `with` syntax that was added in Python 3.10, but we want to eventually
    support older versions of Python.
    
    * [ci] Look on pytorch servers when installing pip deps
    
    When installing the executorch pip package for CI jobs, look on the
    pytorch servers when resolving dependencies. This lets the executorch
    package depend on pytorch pre-release and nightly versions.
    
    Also run the llava setup with `-x` to make it easier to debug failures.
    
    * [pyproject.toml] Add a dependency on `torch==2.3`
    
    This is the version that the release/0.2 version of executorch depends
    on. We should not pick this back into main.
    dbort authored Apr 24, 2024
    Commit 399138e
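    A quick, illustrative check of the pin in an installed environment (not part of the change itself):

    ```python
    # Sketch only: confirm the environment's torch matches the release/0.2 pin.
    from importlib import metadata

    torch_version = metadata.version("torch")
    assert torch_version.startswith("2.3"), f"expected torch 2.3.x, got {torch_version}"
    print(f"torch {torch_version} satisfies the torch==2.3 pin")
    ```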
  24. Update readme. (#3301) (#3302)

    Summary:
    Pull Request resolved: #3301
    overriding_review_checks_triggers_an_audit_and_retroactive_review
    Oncall Short Name: executorch
    
    Differential Revision: D56517032
    
    fbshipit-source-id: ec2f7fbb1111daf8bd529e0917be698bac3435f4
    (cherry picked from commit 5b0030f)
    
    Co-authored-by: Anthony Shoumikhin <shoumikhin@meta.com>
    pytorchbot and shoumikhin authored Apr 24, 2024
    Commit b53c97d
  25. Fix LLAMA app (#3228) (#3283)

    Summary:
    Pull Request resolved: #3228
    
    Fix a UI thread issue causing a crash.
    
    Reviewed By: cccclai
    
    Differential Revision: D56447006
    
    fbshipit-source-id: 02eff27d4b4cd108c95b664d04679d4f92aaf5db
    (cherry picked from commit 4389442)
    kirklandsign authored Apr 24, 2024
    Commit c07cfc9
  26. update typos (#3300) (#3321)

    Summary:
    Pull Request resolved: #3300
    
    This diff addresses part of Ali's comments in our tracer sheet (https://docs.google.com/spreadsheets/d/1PoJt7P9qMkFSaMmS9f9j8dVcTFhOmNHotQYpwBySydI/edit#gid=0). Specifically:
    
    "NanoGPT" -> "nanoGPT"
    "CoreML" -> "Core ML"
    "ExecuTorch Codebase" -> "ExecuTorch codebase"
    "Android Phone" -> "Android phone"
    "How to build Mobile Apps" -> "How to Build Mobile Apps"
    
    It also shortens the following two column names to avoid overlapping:
    "occurrences_in_delegated_graphs" -> "# in_delegated_graphs"
    "occurrences_in_non_delegated_graphs" -> "# in_non_delegated_graphs"
    
    Reviewed By: Jack-Khuu
    
    Differential Revision: D56513601
    
    fbshipit-source-id: 7015c2c5b94b79bc6c57c533ee812c9e58ab9d56
    (cherry picked from commit b669056)
    
    Co-authored-by: Songhao Jia <gasoonjia@meta.com>
    pytorchbot and Gasoonjia authored Apr 24, 2024
    Commit 59cda5d
  27. update memory planning docs (#3270) (#3319)

    Summary: Pull Request resolved: #3270
    
    Reviewed By: JacobSzwejbka
    
    Differential Revision: D56503511
    
    Pulled By: lucylq
    
    fbshipit-source-id: d9e39f32adf39761652feaccdb73344b4550a094
    (cherry picked from commit de0c233)
    
    Co-authored-by: Lucy Qiu <lfq@meta.com>
    pytorchbot and lucylq authored Apr 24, 2024
    Commit ed1992c
  28. Update README-wheel.md to document what's linked into pybindings (#3323)

    * Stop linking MPS into the prebuilt pip wheel
    
    We haven't tested this, and we'd prefer to have both MPS and Core ML
    working. Remove it for now, putting macOS and Linux on equal footing.
    
    * Update README-wheel.md to document what's linked into pybindings
    
    Since pybindings can be built with many configurations, it's important
    to tell users what's actually present in the wheel.
    dbort authored Apr 24, 2024
    Commit d1175cf
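    As a rough sketch of what the wheel's pybindings expose from Python, assuming a `model.pte` file exported earlier; the module path follows the portable build documented for the wheel, and the exact call signature should be treated as an assumption.

    ```python
    # Sketch only: load and run a .pte with the wheel's portable pybindings.
    # "model.pte" is a placeholder; the call signature is an assumption.
    import torch
    from executorch.extension.pybindings.portable_lib import _load_for_executorch

    program = _load_for_executorch("model.pte")
    outputs = program.forward([torch.randn(1, 4)])
    print(outputs)
    ```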
  29. Update git clone instructions (#3306)

    Summary:
    
    Will land once we create a new tag
    mergennachin authored Apr 24, 2024
    Commit c78c45b
  30. Remove the sorting of the nodes from partitioning (not needed for now, as Custom Metal kernels are not yet enabled) (#3327)
    
    DenisVieriu97 authored Apr 24, 2024
    Commit a1d881a
  31. Commit 2d75a0b
  32. llama2 readme (#3315) (#3326)

    Summary:
    - add a note on embedding quantization, for llama3
    - re-order export args to be the same as llama2; `group_size` was missing `--`
    
    Pull Request resolved: #3315
    
    Reviewed By: cccclai
    
    Differential Revision: D56528535
    
    Pulled By: lucylq
    
    fbshipit-source-id: 4453070339ebdb3d782b45f96fe43d28c7006092
    (cherry picked from commit 34f59ed)
    
    Co-authored-by: Lucy Qiu <lfq@meta.com>
    pytorchbot and lucylq authored Apr 24, 2024
    Commit fdd266c
  33. Update readme. (#3331)

    * Update readme.
    
    Summary: .
    
    Reviewed By: cccclai
    
    Differential Revision: D56532283
    
    fbshipit-source-id: 62d7c9e8583fdb5c9a1b2e781e80799c06682aae
    (cherry picked from commit ce1e9c1)
    
    * Update readme.
    
    Summary: .
    
    Reviewed By: cccclai
    
    Differential Revision: D56535633
    
    fbshipit-source-id: 070a3b0af9dea234f8ae4be01c37c03b4e0a56e6
    (cherry picked from commit 035aee4)
    shoumikhin authored Apr 24, 2024
    Commit ef376d6
  34. Update custom kernel registration API (#3330)

    Summary: As titled
    
    Reviewed By: lucylq
    
    Differential Revision: D56532035
    
    (cherry picked from commit 73ad1fb)
    larryliu0820 authored Apr 24, 2024
    Commit f05fdd3
  35. Inspector APIs page (#3335)

    Summary:
    The old screenshot had an outdated event block name and event names. The new screenshot was taken from a recent real run.
    
    bypass-github-export-checks
    bypass-github-pytorch-ci-checks
    bypass-github-executorch-ci-checks
    
    Reviewed By: tarun292, Jack-Khuu
    
    Differential Revision: D56447799
    
    fbshipit-source-id: 040fe45311c9aa8e8a1a0f6756ebda5f0ebbdebf
    (cherry picked from commit 9c99fe1)
    Olivia-liu authored Apr 24, 2024
    Commit f937935

Commits on Apr 25, 2024

  1. Commit 52aa8cf