Added notebook example of NeuralChat finetuning for LLaMA2 and MPT (#240)

* Added notebook example of NeuralChat finetuning on NVIDIA GPU for LLaMA2 and MPT with both LoRA and QLoRA.

  Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

* Added notebook example of NeuralChat finetuning on Intel Xeon CPU for LLaMA2 and MPT with LoRA.

  Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

---------

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
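For context, a minimal sketch of the kind of LoRA finetuning setup such notebooks typically cover, using Hugging Face `transformers` and `peft`. The model name, adapter rank, and target modules below are illustrative placeholders, not values taken from the added notebooks.

```python
# Minimal LoRA setup sketch (assumed example, not the notebook contents).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; an MPT checkpoint works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with LoRA adapters: only small low-rank matrices are
# trained while the original weights stay frozen.
lora_config = LoraConfig(
    r=8,                                  # adapter rank (placeholder value)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports how few parameters LoRA actually updates
```

QLoRA follows the same pattern but loads the base model in 4-bit quantized form before attaching the adapters, which is what makes GPU-memory-constrained finetuning of LLaMA2 and MPT practical.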