This repository has been archived by the owner on Oct 25, 2024. It is now read-only.

Commit 57eef5e
Fix config parameters (#909)
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
lvliang-intel authored Dec 14, 2023
1 parent 0f0bf22 commit 57eef5e
Showing 2 changed files with 3 additions and 211 deletions.
6 changes: 3 additions & 3 deletions intel_extension_for_transformers/neural_chat/config.py
@@ -81,7 +81,7 @@ class ModelArguments:
         },
     )
     use_fast_tokenizer: bool = field(
-        default=False,
+        default=True,
         metadata={
             "help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."
         },
@@ -309,7 +309,7 @@ class FinetuningArguments:
         },
     )
     lora_all_linear: bool = field(
-        default=True,
+        default=False,
         metadata={"help": "if True, will add adaptor for all linear for lora finetuning"},
     )
     task: Optional[str] = field(
@@ -319,7 +319,7 @@ class FinetuningArguments:
         },
     )
     do_lm_eval: bool = field(
-        default=True,
+        default=False,
         metadata={"help": "whether to run the LM evaluation with EleutherAI/lm-evaluation-harness"},
     )
     lm_eval_tasks: Optional[List[str]] = field(
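
The fields above use the standard dataclasses field(default=..., metadata={"help": ...}) pattern, so the commit only flips the default values picked up when an argument is not supplied explicitly. Below is a minimal, self-contained sketch (not the project's actual class) illustrating how the new defaults surface; the class name FinetuningArgumentsSketch and the lm_eval_tasks default are assumptions for the example.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FinetuningArgumentsSketch:
    # Mirrors the fields touched by this commit, with the post-commit defaults.
    lora_all_linear: bool = field(
        default=False,
        metadata={"help": "if True, will add adaptor for all linear for lora finetuning"},
    )
    do_lm_eval: bool = field(
        default=False,
        metadata={"help": "whether to run the LM evaluation with EleutherAI/lm-evaluation-harness"},
    )
    lm_eval_tasks: Optional[List[str]] = field(
        default=None,  # illustrative default; not taken from the diff
        metadata={"help": "tasks to run with lm-evaluation-harness"},
    )

args = FinetuningArgumentsSketch()
print(args.lora_all_linear)  # False -- the new default after this commit
print(args.do_lm_eval)       # False -- the new default after this commit

In practice this means LoRA adapters are no longer added to all linear layers and LM evaluation is skipped unless the caller opts in, while the fast tokenizer is now used by default.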

The second changed file was deleted in this commit (accounting for the remaining 208 deletions); its name is not shown in this view.
