Describe the bug
The linked docs state that datasets is a List[str]. However, for the model with repo_id=Arch4ngel/pochita-plushie-v2, model_info() returns datasets of type str.

Reproduction
from huggingface_hub import model_info
info = model_info(repo_id="Arch4ngel/pochita-plushie-v2", files_metadata=True)
Logs
ModelInfo(id='Arch4ngel/pochita-plushie-v2', author='Arch4ngel', sha='1ae5bf2e531bdd891a02cd51eb402af44d24ce6d', created_at=datetime.datetime(2022, 12, 29, 15, 18, 26, tzinfo=datetime.timezone.utc), last_modified=datetime.datetime(2023, 3, 16, 19, 41, 51, tzinfo=datetime.timezone.utc), private=False, disabled=False, downloads=26, downloads_all_time=None, gated=False, gguf=None, inference=None, likes=1, library_name='diffusers', tags=['diffusers', 'pytorch', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard', 'dataset:Arch4ngel/pochita_v2', 'license:creativeml-openrail-m', 'autotrain_compatible', 'endpoints_compatible', 'diffusers:StableDiffusionPipeline', 'region:us'], pipeline_tag='text-to-image', mask_token=None, card_data={'base_model': None, 'datasets': 'Arch4ngel/pochita_v2', 'eval_results': None, 'language': None, 'library_name': None, 'license': 'creativeml-openrail-m', 'license_name': None, 'license_link': None, 'metrics': None, 'model_name': None, 'pipeline_tag': None, 'tags': ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard'], 'widget': [{'text': 'pochita plushie goes fishing'}]}, widget_data=[{'text': 'pochita plushie goes fishing'}], model_index=None, config={'diffusers': {'_class_name': 'StableDiffusionPipeline'}}, transformers_info=None, trending_score=None, siblings=[RepoSibling(rfilename='.gitattributes', size=1477, blob_id='c7d9f3332a950355d5a77d85000f05e6f45435ea', lfs=None), RepoSibling(rfilename='README.md', size=967, blob_id='a46199d1d25961b2a7f6afdffeafd91ca849bed0', lfs=None), RepoSibling(rfilename='feature_extractor/preprocessor_config.json', size=518, blob_id='0d9d33b883843d1b370da781f3943051067e1b2c', lfs=None), RepoSibling(rfilename='model_index.json', size=577, blob_id='86934934847de0105ac23a3d1bb3d2f2c911b1fe', lfs=None), RepoSibling(rfilename='safety_checker/config.json', size=4895, 
blob_id='bcfa996ef254bce2b6222ea224314a8c629e8eb4', lfs=None), RepoSibling(rfilename='safety_checker/pytorch_model.bin', size=1216064769, blob_id='c8835557a0d3af583cb06c7c154b7e54a069c41d', lfs=BlobLfsInfo(size=1216064769, sha256='16d28f2b37109f222cdc33620fdd262102ac32112be0352a7f77e9614b35a394', pointer_size=135)), RepoSibling(rfilename='scheduler/scheduler_config.json', size=317, blob_id='fe196bbd55e42e07581cc571347beda8bce89d45', lfs=None), RepoSibling(rfilename='text_encoder/config.json', size=612, blob_id='adf10c84a44fdfd1212940c5ada4716cbcb27bdc', lfs=None), RepoSibling(rfilename='text_encoder/pytorch_model.bin', size=492309793, blob_id='cd60c8e7580c7e9b0776e82aac6a689441c9e1b4', lfs=BlobLfsInfo(size=492309793, sha256='a20d724dacc3f94e73753462567b578a6395b725e17b4ddaeddf0dd3bc148729', pointer_size=134)), RepoSibling(rfilename='tokenizer/merges.txt', size=524619, blob_id='76e821f1b6f0a9709293c3b6b51ed90980b3166b', lfs=None), RepoSibling(rfilename='tokenizer/special_tokens_map.json', size=472, blob_id='2c2130b544c0c5a72d5d00da071ba130a9800fb2', lfs=None), RepoSibling(rfilename='tokenizer/tokenizer_config.json', size=806, blob_id='e67c855cfb2f0b62b11b07a593f20603c983c8d2', lfs=None), RepoSibling(rfilename='tokenizer/vocab.json', size=1059962, blob_id='469be27c5c010538f845f518c4f5e8574c78f7c8', lfs=None), RepoSibling(rfilename='unet/config.json', size=1068, blob_id='cad3da81355f76fba64ec4f8d166f8ef7592d345', lfs=None), RepoSibling(rfilename='unet/diffusion_pytorch_model.bin', size=3438375973, blob_id='16c830153a5eff07a27d204314bf099ba444382e', lfs=BlobLfsInfo(size=3438375973, sha256='a7982e038abeae13bf1d43e3d692c932f14040d9e982fb968352fefe0c491f42', pointer_size=135)), RepoSibling(rfilename='vae/config.json', size=629, blob_id='054aecb170d9bdd886c54f4abc3de925df9a8002', lfs=None), RepoSibling(rfilename='vae/diffusion_pytorch_model.bin', size=334715313, blob_id='a9b64f956a1741ccb1df1bc90916ad6131b7cab0', lfs=BlobLfsInfo(size=334715313, 
sha256='f3d17aa398e021a2600c4a8b570060056a069a1b7b6f8ace78ffa52fff69395d', pointer_size=134))], spaces=[], safetensors=None, security_repo_status=None)
System info
- huggingface_hub version: 0.26.2
- Platform: macOS-15.0.1-arm64-arm-64bit
- Python version: 3.11.7
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: /Users/adityaborikar/.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers: osxkeychain
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: N/A
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: N/A
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.26.4
- pydantic: N/A
- aiohttp: N/A
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /Users/adityaborikar/.cache/huggingface/hub
- HF_ASSETS_CACHE: /Users/adityaborikar/.cache/huggingface/assets
- HF_TOKEN_PATH: /Users/adityaborikar/.cache/huggingface/token
- HF_STORED_TOKENS_PATH: /Users/adityaborikar/.cache/huggingface/stored_tokens
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
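Until the typing is aligned with the API, a caller-side workaround is to normalize the field before use. A minimal sketch, assuming only that the card-data datasets value may arrive as a bare string, a list, or None (the normalize_datasets helper is illustrative, not part of huggingface_hub):

```python
from typing import List, Optional, Union


def normalize_datasets(value: Optional[Union[str, List[str]]]) -> List[str]:
    """Coerce a card-data 'datasets' value into a list of str."""
    if value is None:
        return []
    # The Hub may return a single dataset id as a bare string; wrap it.
    if isinstance(value, str):
        return [value]
    return list(value)


# With the value from the report above:
normalize_datasets("Arch4ngel/pochita_v2")  # -> ["Arch4ngel/pochita_v2"]
```

Applied to the reproduction above, this would be called as normalize_datasets(info.card_data.datasets).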
Hello @adiaholic, thanks a lot for reporting this! 🤗 I just opened PR #2644 to fix the typing and align it with the API.
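For illustration, the typing fix could look like the following. CardDataSketch is a hypothetical stand-in, not the actual huggingface_hub ModelCardData class; the point is that the annotation admits both shapes the API actually returns:

```python
from dataclasses import dataclass
from typing import List, Optional, Union


# Hypothetical stand-in for the card-data model (not the real
# huggingface_hub class): 'datasets' accepts a single id or a list.
@dataclass
class CardDataSketch:
    datasets: Optional[Union[str, List[str]]] = None


single = CardDataSketch(datasets="Arch4ngel/pochita_v2")
many = CardDataSketch(datasets=["ds-a", "ds-b"])
```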