This repository has been archived by the owner on Oct 25, 2024. It is now read-only.

add peft model support in deepspeed sharded mode #884

Merged
merged 3 commits into main from peft_deepspeed on Dec 9, 2023

Conversation

@sywangyi (Contributor) commented Dec 7, 2023

Type of Change

feature or bug fix or documentation or others
API changed or not

Description

detail description
JIRA ticket: xxx
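
For context, below is a minimal sketch of what loading a PEFT (LoRA) adapter for DeepSpeed sharded (tensor-parallel) inference can look like. It is not taken from this PR's diff: the model id, adapter path, merge-and-unload step, bfloat16 dtype, and environment-variable handling are illustrative assumptions.

```python
import os

import deepspeed
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder identifiers -- substitute your own base model and adapter.
base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "path/to/peft_adapter"

world_size = int(os.environ.get("WORLD_SIZE", "1"))
local_rank = int(os.environ.get("LOCAL_RANK", "0"))

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the PEFT adapter, then merge it into the base weights so DeepSpeed
# shards a plain transformers model rather than the PEFT wrapper modules.
model = PeftModel.from_pretrained(model, adapter_id)
model = model.merge_and_unload()

# Shard the merged model across the launched ranks for tensor-parallel inference.
engine = deepspeed.init_inference(
    model,
    tensor_parallel={"tp_size": world_size},
    dtype=torch.bfloat16,
    replace_with_kernel_inject=False,
)

prompt = "DeepSpeed sharded inference with a PEFT adapter:"
inputs = tokenizer(prompt, return_tensors="pt").to(f"cuda:{local_rank}")
outputs = engine.module.generate(**inputs, max_new_tokens=32)
if local_rank == 0:
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Launched with a multi-process launcher (e.g. `deepspeed --num_gpus 2 script.py`), each rank holds only its shard of the model; merging the adapter first is one common way to keep the module structure that DeepSpeed partitions, rather than the extra PEFT wrapper layers.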

Expected Behavior & Potential Risk

the expected behavior triggered by this PR

How has this PR been tested?

how to reproduce the test (including hardware information)

Dependency Change?

any library dependency introduced or removed

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
@sywangyi (Contributor, Author) commented Dec 7, 2023

@lkk12014402

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Signed-off-by: VincyZhang <wenxin.zhang@intel.com>
@VincyZhang VincyZhang merged commit 370ca35 into main Dec 9, 2023
13 checks passed
@VincyZhang VincyZhang deleted the peft_deepspeed branch December 9, 2023 05:19
delock pushed a commit to delock/intel-extension-for-transformers that referenced this pull request Dec 16, 2023
Labels: None yet
Projects: None yet
3 participants