diff --git a/examples/models/llava_encoder/README.md b/examples/models/llava_encoder/README.md
index a074fa6133..76224e4145 100644
--- a/examples/models/llava_encoder/README.md
+++ b/examples/models/llava_encoder/README.md
@@ -5,10 +5,7 @@ In this example, we initiate the process of running multi modality through Execu
 ## Instructions
 Note that this folder does not host the pretrained LLava model.
-- To have Llava available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLava github. Follow the licence in the specific repo when using L
-- Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh`, to replace the steps above.
+- Run `examples/models/llava_encoder/install_requirements.sh`.
 - Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The llava_encoder.pte file will be generated.
 - Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in llama2 folder for optimization.