
my-alpaca

Try the Stanford Alpaca / Alpaca-LoRA recipe on Colab and AutoDL.

Step by Step

Colab (the free Colab tier is too slow to complete one epoch)

AutoDL

  • finetune (see the fine-tuning sketch after this list)

    • nohup sh run.sh my_alpaca/autodl/finetune.py > autodl.log 2>&1 &
  • inference_llama (see the inference sketch after this list)

    • sh run.sh my_alpaca/autodl/inference_llama.py
  • inference_llama_gradio (see the Gradio sketch after this list)

    • sh run.sh my_alpaca/autodl/inference_llama_gradio.py
  • inference_alpaca_lora

    • sh run.sh my_alpaca/autodl/inference_alpaca_lora.py
  • inference_alpaca_lora_gradio

    • sh run.sh my_alpaca/autodl/inference_alpaca_lora_gradio.py
  • Tips

    • AutoDL's network is very slow, so LLaMA cannot be downloaded there and has to be uploaded slowly from a local machine. I have built an image that already contains the LLaMA model; if you need it, leave your AutoDL account name and I will share the image on the platform (it seems images can only be shared with a specified account).
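
The fine-tuning step follows the Stanford Alpaca / Alpaca-LoRA recipe: freeze the LLaMA base weights and train small low-rank (LoRA) adapters on the instruction data. The exact contents of my_alpaca/autodl/finetune.py are not reproduced here, so the snippet below is only a minimal sketch, assuming the usual Hugging Face transformers/peft/datasets stack; the model path, data file, and hyperparameters are illustrative assumptions, not the values the script actually uses.

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Illustrative assumptions; finetune.py may use different paths and settings.
BASE_MODEL = "decapoda-research/llama-7b-hf"   # local or hub path to the LLaMA weights
DATA_PATH = "alpaca_data.json"                 # Stanford Alpaca instruction data

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token      # LLaMA has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)

# Freeze the base model and add trainable low-rank adapters (LoRA).
model = get_peft_model(
    model,
    LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    ),
)

def tokenize(example):
    # Format one Alpaca record into a single training prompt.
    prompt = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example.get('input', '')}\n\n"
        f"### Response:\n{example['output']}"
    )
    return tokenizer(prompt, truncation=True, max_length=512)

dataset = load_dataset("json", data_files=DATA_PATH)["train"].map(tokenize)

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(
        output_dir="lora-alpaca",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=3e-4,
        fp16=True,
        logging_steps=10,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-alpaca")  # saves only the LoRA adapter weights

Saving with save_pretrained stores only the small adapter weights, which is what the inference_alpaca_lora scripts later load on top of the base model.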
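
The two plain inference entry points differ mainly in whether a LoRA adapter is applied: inference_llama generates from the base LLaMA weights alone, while inference_alpaca_lora loads the fine-tuned adapter on top of them. The sketch below shows both paths under the same assumptions as above; the base-model and adapter paths and the prompt format are placeholders, not copied from the scripts.

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative assumptions; the real scripts may use different paths.
BASE_MODEL = "decapoda-research/llama-7b-hf"   # base LLaMA weights
LORA_WEIGHTS = "lora-alpaca"                   # adapter produced by the fine-tuning step

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)

# inference_alpaca_lora-style generation attaches the adapter on top of the base
# model; skip this line for inference_llama-style generation from the base weights.
model = PeftModel.from_pretrained(model, LORA_WEIGHTS)

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Build an Alpaca-style prompt and decode the model's response.
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate("Give three tips for staying healthy."))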
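
The *_gradio variants expose the same generation function through a small web UI, which is convenient because the process runs on a remote AutoDL or Colab machine. A minimal sketch, assuming a generate() helper like the one in the previous sketch (a placeholder stands in for it here so the snippet runs on its own):

import gradio as gr

def generate(instruction: str) -> str:
    # Placeholder: the real scripts call the LLaMA / Alpaca-LoRA generation code
    # shown in the previous sketch.
    return f"(model output for: {instruction})"

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(lines=4, label="Instruction"),
    outputs=gr.Textbox(label="Response"),
    title="Alpaca-LoRA demo",
)
# share=True requests a public URL, useful when the server runs on a remote machine.
demo.launch(server_name="0.0.0.0", share=True)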

References

Articles

Repositories

Papers

  • 2020 - Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation
  • 2021 - LoRA: Low-Rank Adaptation of Large Language Models
  • 2022 - Self-Instruct: Aligning Language Models with Self-Generated Instructions
  • 2023 - LLaMA: Open and Efficient Foundation Language Models
  • 2023 - ChatDoctor: A Medical Chat Model Fine-Tuned on LLaMA Model Using Medical Domain Knowledge
  • 2023 - RecAlpaca: Low-Rank LLaMA Instruct-Tuning for Recommendation
  • 2023 - LLaMA-Adapter: Efficient Fine-Tuning of Language Models with Zero-Init Attention

Models

Datasets
