While running the interactive inference I am getting the error below saying the llava module cannot be found, and I am not sure why. Can you please provide any input on this?

(myenv) root@statefulset-0:/app# python -m inference_interactive \
    --video_path videos/city-bird \
    --ckpt_path checkpoints/lgvi-i \
    --request "I have this incredible shot of a pelican gliding in the sky, but there's another bird also captured in the frame. Can you help me make the picture solely about the pelican?"
/root/miniconda3/envs/myenv/lib/python3.9/site-packages/diffusers/models/transformers/transformer_2d.py:34: FutureWarning: Transformer2DModelOutput is deprecated and will be removed in version 1.0.0. Importing Transformer2DModelOutput from diffusers.models.transformer_2d is deprecated and this will be removed in a future version. Please use from diffusers.models.modeling_outputs import Transformer2DModelOutput, instead.
deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
Traceback (most recent call last):
  File "/root/miniconda3/envs/myenv/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/myenv/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/app/inference_interactive.py", line 14, in <module>
    from rovi.pipelines.pipeline_rovi_mllm import RoviPipelineMLLM
  File "/app/rovi/pipelines/pipeline_rovi_mllm.py", line 31, in <module>
    from rovi.llm.llava.conversation import conv_templates, SeparatorStyle
  File "/app/rovi/llm/llava/__init__.py", line 1, in <module>
    from .model import LlavaLlamaForCausalLM
  File "/app/rovi/llm/llava/model/__init__.py", line 1, in <module>
    from .language_model.llava_llama import LlavaLlamaForCausalLM, LlavaConfig
  File "/app/rovi/llm/llava/model/language_model/llava_llama.py", line 27, in <module>
    from ..llava_arch import LlavaMetaModel, LlavaMetaForCausalLM
  File "/app/rovi/llm/llava/model/llava_arch.py", line 24, in <module>
    from llava.constants import IGNORE_INDEX, IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_PATCH_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
ModuleNotFoundError: No module named 'llava'
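For context, the last frame of the traceback shows that rovi/llm/llava/model/llava_arch.py imports the absolute package name llava rather than the relative rovi.llm.llava, so the import only resolves if the directory containing the vendored llava package (/app/rovi/llm in this layout) is on sys.path, or if a standalone llava package is installed. Below is a minimal workaround sketch, assuming the /app layout from the traceback; the path value and the smoke-test import are illustrative, not part of the repo:

import sys

# Make the vendored LLaVA package importable under its absolute name.
# Assumption: /app/rovi/llm is the directory that contains the llava
# package, per the file paths in the traceback above.
sys.path.insert(0, "/app/rovi/llm")

# Smoke test: the exact import that failed inside llava_arch.py.
from llava.constants import IGNORE_INDEX, IMAGE_TOKEN_INDEX

Equivalently, from the shell and without touching any code: PYTHONPATH=/app/rovi/llm python -m inference_interactive ... (same assumption about the layout).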