Try Alpaca
- nohup sh run.sh my_alpaca/autodl/finetune.py > autodl.log 2>&1 &
- sh run.sh my_alpaca/autodl/inference_llama_gradio.py
- sh run.sh my_alpaca/autodl/inference_alpaca_lora_gradio.py
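The fine-tune command above backgrounds the job with nohup so it keeps running after an SSH disconnect, with all output redirected to autodl.log. A minimal sketch of that pattern, using a harmless stand-in command since the real job depends on this repo's run.sh:

```shell
# Same nohup pattern as the finetune command above; the echo/sleep job
# is a stand-in for the real training script.
nohup sh -c 'echo training started; sleep 1; echo done' > demo.log 2>&1 &
pid=$!        # remember the background PID so the job can be stopped later
wait "$pid"   # in a real run you would log out instead and check back later
cat demo.log  # inspect progress (for the real run: tail -f autodl.log)
```

To stop a real run early, use `kill "$pid"` instead of waiting for it to finish.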
Tips
- AutoDL's network is very slow, so downloading LLaMA on the platform is impractical; the model has to be uploaded slowly from a local machine. I have built an image that already contains the LLaMA model. If you need it, leave your AutoDL account name and I will share the image on the platform (it seems images can only be shared with a specified account).
- Alpaca: A Strong, Replicable Instruction-Following Model
- GPT fine-tuning in practice: training my own ChatGPT
- Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality
- ChatGPT-Techniques-Introduction-for-Everyone
- llama
- stanford_alpaca
- alpaca-lora
- FastChat
- Chinese-ChatLLaMA
- BELLE
- Chinese-LLaMA-Alpaca
- Luotuo-Chinese-LLM
- Chinese-Vicuna
- Chinese-alpaca-lora
- Japanese-Alpaca-LoRA
- thai-buffala-lora-7b-v0-1
- 2020-Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation [paper]
- 2021-LoRA: Low-Rank Adaptation of Large Language Models [paper]
- 2022-SELF-INSTRUCT: Aligning Language Model with Self-Generated Instructions [paper]
- 2023-LLaMA: Open and Efficient Foundation Language Models [paper]
- 2023-ChatDoctor: A Medical Chat Model Fine-Tuned on LLaMA Model Using Medical Domain Knowledge
- 2023-RecAlpaca: Low-Rank LLaMA Instruct-Tuning for Recommendation
- 2023-LLaMA-Adapter: Efficient Fine-Tuning of Language Models with Zero-Init Attention
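The LoRA paper listed above is the technique behind alpaca-lora's cheap fine-tuning: the pretrained weight matrix stays frozen and only a low-rank update is trained. A minimal NumPy sketch of the idea (layer size and rank are hypothetical, not taken from this repo's configs):

```python
# Sketch of the LoRA idea (Hu et al., 2021): instead of updating the full
# weight W, train a low-rank update B @ A with rank r << min(d, k).
import numpy as np

d, k, r = 512, 512, 8          # hypothetical layer shape and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, initialised small
B = np.zeros((d, r))                     # trainable, initialised to zero

def forward(x):
    # Because B starts at zero, the adapted layer initially matches the
    # frozen model exactly; training only updates A and B.
    return x @ (W + B @ A).T

x = rng.standard_normal((1, k))
# Trainable parameters drop from d*k to r*(d+k):
print(d * k, r * (d + k))   # prints: 262144 8192
```

This is why LoRA checkpoints are tiny compared to full fine-tunes: here the trainable update is 8192 parameters instead of 262144, roughly a 32x reduction.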