New conversion script (#545)
comex committed Apr 2, 2023
1 parent 5b70e7d commit a7d6214
Showing 16 changed files with 1,075 additions and 1,305 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -150,10 +150,10 @@ ls ./models
 65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model

 # install Python dependencies
-python3 -m pip install torch numpy sentencepiece
+python3 -m pip install -r requirements.txt

 # convert the 7B model to ggml FP16 format
-python3 convert-pth-to-ggml.py models/7B/ 1
+python3 convert.py models/7B/

 # quantize the model to 4-bits (using method 2 = q4_0)
 ./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
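For context, the step that typically follows these commands is running inference on the quantized model. A minimal sketch, assuming the main binary has already been built (e.g. with make) and using the file paths from the steps above; the exact flags come from llama.cpp's main example and are not part of this diff:

# run inference with the 4-bit quantized 7B model (sketch, not part of this commit)
./main -m ./models/7B/ggml-model-q4_0.bin -n 128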
299 changes: 0 additions & 299 deletions convert-ggml-to-pth.py

This file was deleted.
