
Update Llava README.md
Simplify the instructions.
iseeyuan authored Apr 24, 2024
1 parent eabdeb0 commit 6b959c6
Showing 1 changed file with 1 addition and 4 deletions.
5 changes: 1 addition & 4 deletions examples/models/llava_encoder/README.md
```diff
@@ -5,10 +5,7 @@ In this example, we initiate the process of running multi modality through ExecuTorch.
 
 ## Instructions
 Note that this folder does not host the pretrained LLava model.
-- To have Llava available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLava github. Follow the licence in the specific repo when using L
-- Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh`, to replace the steps above.
+- Run `examples/models/llava_encoder/install_requirements.sh`.
 - Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The llava_encoder.pte file will be generated.
 - Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in llama2 folder for optimization.
```
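Taken together, the simplified instructions amount to a short setup-and-export flow. A minimal sketch, assuming an existing ExecuTorch checkout with `cmake-out/executor_runner` already built (this is an unverified illustration of the steps, not a script from the repo):

```shell
# Sketch of the post-change instructions; assumes the executorch repo is
# cloned and its CMake runner has been built beforehand.
cd executorch

# Install the example-specific dependencies (replaces the older multi-step setup).
bash examples/models/llava_encoder/install_requirements.sh

# Export the encoder; llava_encoder.pte is written to the current directory.
python3 -m examples.portable.scripts.export --model_name="llava_encoder"

# Run the exported program with portable (unoptimized) kernels.
./cmake-out/executor_runner --model_path ./llava_encoder.pte
```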

