
Currently unable to run the demo on either Colab or Hugging Face. #19

Open
kuangxiaoye opened this issue Jul 13, 2023 · 3 comments

@kuangxiaoye

Colab

When I run the Colab demo, it crashes with an error message when it reaches the following code section.

import decord
import matplotlib.pyplot as plt
import numpy as np
from collections import OrderedDict

import torch
import torchvision.transforms as transforms
import torchvision.transforms._transforms_video as transforms_video

import sys
sys.path.insert(0, './')
from lavila.data.video_transforms import Permute
from lavila.data.datasets import get_frame_ids, video_loader_by_frames
from lavila.models.models import VCLM_OPENAI_TIMESFORMER_BASE_GPT2
from lavila.models.tokenizer import MyGPT2Tokenizer

After running this cell, Colab crashes and cannot continue.

The log is as follows:

Timestamp Level Message
Jul 13, 2023, 9:19:25 PM WARNING WARNING:root:kernel 84307101-331a-4430-b1d6-b5446bb9c947 restarted
Jul 13, 2023, 9:19:25 PM WARNING WARNING:root:kernel 84307101-331a-4430-b1d6-b5446bb9c947 restarted
Jul 13, 2023, 9:19:25 PM INFO KernelRestarter: restarting kernel (1/5), keep random ports
Jul 13, 2023, 9:19:25 PM WARNING what(): random_device could not be read
Jul 13, 2023, 9:19:25 PM WARNING terminate called after throwing an instance of 'std::runtime_error'

The Colab runtime configuration is as follows:
CPU: true
GPU: false
Do I need to upgrade to Colab Pro?

Hugging Face

When I open https://huggingface.co/spaces/nateraw/lavila, the error is as follows:

Runtime error (the full traceback is reproduced in the container logs below).
Container logs:

===== Application Startup at 2023-06-16 03:59:42 =====

/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torchvision/transforms/_functional_video.py:5: UserWarning: The _functional_video module is deprecated. Please use the functional module instead.
warnings.warn(
/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torchvision/transforms/_transforms_video.py:25: UserWarning: The _transforms_video module is deprecated. Please use the transforms module instead.
warnings.warn(
######USING ATTENTION STYLE: frozen-in-time

100%|████████████████████████████████████████| 335M/335M [00:02<00:00, 143MiB/s]
=> Loading CLIP (ViT-B/16) weights
_IncompatibleKeys(missing_keys=['temporal_embed', 'blocks.0.timeattn.qkv.weight', 'blocks.0.timeattn.qkv.bias', 'blocks.0.timeattn.proj.weight', 'blocks.0.timeattn.proj.bias', 'blocks.0.norm3.weight', 'blocks.0.norm3.bias', 'blocks.1.timeattn.qkv.weight', 'blocks.1.timeattn.qkv.bias', 'blocks.1.timeattn.proj.weight', 'blocks.1.timeattn.proj.bias', 'blocks.1.norm3.weight', 'blocks.1.norm3.bias', 'blocks.2.timeattn.qkv.weight', 'blocks.2.timeattn.qkv.bias', 'blocks.2.timeattn.proj.weight', 'blocks.2.timeattn.proj.bias', 'blocks.2.norm3.weight', 'blocks.2.norm3.bias', 'blocks.3.timeattn.qkv.weight', 'blocks.3.timeattn.qkv.bias', 'blocks.3.timeattn.proj.weight', 'blocks.3.timeattn.proj.bias', 'blocks.3.norm3.weight', 'blocks.3.norm3.bias', 'blocks.4.timeattn.qkv.weight', 'blocks.4.timeattn.qkv.bias', 'blocks.4.timeattn.proj.weight', 'blocks.4.timeattn.proj.bias', 'blocks.4.norm3.weight', 'blocks.4.norm3.bias', 'blocks.5.timeattn.qkv.weight', 'blocks.5.timeattn.qkv.bias', 'blocks.5.timeattn.proj.weight', 'blocks.5.timeattn.proj.bias', 'blocks.5.norm3.weight', 'blocks.5.norm3.bias', 'blocks.6.timeattn.qkv.weight', 'blocks.6.timeattn.qkv.bias', 'blocks.6.timeattn.proj.weight', 'blocks.6.timeattn.proj.bias', 'blocks.6.norm3.weight', 'blocks.6.norm3.bias', 'blocks.7.timeattn.qkv.weight', 'blocks.7.timeattn.qkv.bias', 'blocks.7.timeattn.proj.weight', 'blocks.7.timeattn.proj.bias', 'blocks.7.norm3.weight', 'blocks.7.norm3.bias', 'blocks.8.timeattn.qkv.weight', 'blocks.8.timeattn.qkv.bias', 'blocks.8.timeattn.proj.weight', 'blocks.8.timeattn.proj.bias', 'blocks.8.norm3.weight', 'blocks.8.norm3.bias', 'blocks.9.timeattn.qkv.weight', 'blocks.9.timeattn.qkv.bias', 'blocks.9.timeattn.proj.weight', 'blocks.9.timeattn.proj.bias', 'blocks.9.norm3.weight', 'blocks.9.norm3.bias', 'blocks.10.timeattn.qkv.weight', 'blocks.10.timeattn.qkv.bias', 'blocks.10.timeattn.proj.weight', 'blocks.10.timeattn.proj.bias', 'blocks.10.norm3.weight', 'blocks.10.norm3.bias', 'blocks.11.timeattn.qkv.weight', 'blocks.11.timeattn.qkv.bias', 'blocks.11.timeattn.proj.weight', 'blocks.11.timeattn.proj.bias', 'blocks.11.norm3.weight', 'blocks.11.norm3.bias', 'head.weight', 'head.bias'], unexpected_keys=[])
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/configuration_utils.py", line 616, in _get_config_dict
resolved_config_file = cached_path(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/utils/hub.py", line 284, in cached_path
output_path = get_from_cache(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/utils/hub.py", line 562, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "app.py", line 148, in
Pipeline(),
File "app.py", line 69, in init
self.model = VCLM_OPENAI_TIMESFORMER_BASE_GPT2(
File "./lavila/models/models.py", line 914, in VCLM_OPENAI_TIMESFORMER_BASE_GPT2
gpt2 = GPT2LMHeadModel.from_pretrained(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1833, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/configuration_utils.py", line 534, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/configuration_utils.py", line 561, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/configuration_utils.py", line 649, in _get_config_dict
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like gpt2 is not the path to a directory containing a config.json file.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

I hope you can help me successfully run the demo. Thank you very much.


nateraw commented Jul 18, 2023

Hey there, I'm having a look at the demo on Hugging Face now. It seems there's a dependency issue that cropped up. Discussed a bit here: gradio-app/gradio#4912. Will ping when fixed.


nateraw commented Jul 18, 2023

OK the demo works again on Hugging Face.

I had a hard time setting this up in Colab because it seems we need Python 3.6, while Colab uses 3.10. I started looking into workarounds, but it became a dependency nightmare 🙁.

@fgvfgfg564

It seems the Hugging Face demo is failing again due to dependency issues.
