[IDEA] Auto convert MODEL_PATH model in .env (#19)
* Create convert.py

Has to be run manually.

Checks the type of the model that MODEL_PATH in .env points to: if it is the old ggml format, it converts it to ggjt and then closes; if it is already ggjt, it simply exits.

The code that checks the model type is at line 863, in `def lazy_load_file(path: Path) -> ModelPlus:`.

It could be interesting to call this automatically from startllm: after conversion, rename your old file to oldmodel.bin, rename the new file to the name set in .env, and then use it.
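The check-and-convert flow described above could be sketched roughly like this. This is a minimal sketch, not the actual convert.py: the magic constants follow the llama.cpp model file headers, `run_conversion` is a hypothetical helper standing in for the real conversion logic, and the rename step mirrors the idea from this commit message.

```python
import struct
from pathlib import Path

# File magics used by llama.cpp model files (first 4 bytes, little-endian uint32).
GGML_MAGIC = 0x67676D6C  # old unversioned "ggml" format
GGJT_MAGIC = 0x67676A74  # newer "ggjt" format

def detect_format(path: Path) -> str:
    """Return 'ggml', 'ggjt', or 'unknown' based on the file's magic number."""
    with open(path, "rb") as f:
        magic = struct.unpack("<I", f.read(4))[0]
    if magic == GGML_MAGIC:
        return "ggml"
    if magic == GGJT_MAGIC:
        return "ggjt"
    return "unknown"

def maybe_convert(model_path: str) -> None:
    """If MODEL_PATH points at an old ggml file, convert it; otherwise exit."""
    path = Path(model_path)
    fmt = detect_format(path)
    if fmt == "ggjt":
        print("Model is already ggjt; nothing to do.")
        return
    if fmt == "ggml":
        new_path = path.with_name("new.bin")
        run_conversion(path, new_path)  # hypothetical: stands in for convert.py's logic
        # Follow-up idea from this commit: keep the old file around and swap the
        # converted file in under the name that .env already expects.
        path.rename(path.with_name("oldmodel.bin"))
        new_path.rename(path)
```

Calling `maybe_convert` from startllm before loading the model would make the conversion transparent: the .env entry never changes, and the original file survives as oldmodel.bin.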

* Update README.md

---------

Co-authored-by: su77ungr <69374354+su77ungr@users.noreply.github.com>
alxspiker and su77ungr authored May 12, 2023
1 parent 6d9364a commit 1230b2c
Showing 2 changed files with 1,169 additions and 2 deletions.
6 changes: 4 additions & 2 deletions README.md
@@ -137,8 +137,10 @@ Type `exit` to finish the script.

### Convert GGML model to GGJT-ready model v1 (for truncation error or not supported models)

- > from huggingface download [tokenizer.model](https://huggingface.co/chavinlo/gpt4-x-alpaca/resolve/main/tokenizer.model), [convert.py](https://github.com/ggerganov/llama.cpp/blob/master/convert.py) from llamacpp and put them in the parent folder of my alpaca7b ggml model named model.bin
- > ``` python .\convert.py .\models\ --outfile new.bin ``` [see discussion](https://github.com/su77ungr/CASALIOY/issues/10#issue-1706854398)
+ 1. Download ready-to-use models
+ > from huggingface download [huggingFace](https://huggingface.co/)
+ 2. Convert locally
+ > ``` python convert.py --outfile new.bin ``` [see discussion](https://github.com/su77ungr/CASALIOY/issues/10#issue-1706854398)

## How does it work? 👀
