
The dot in the model name when using auto_map will cause a path parsing error. #35082

Closed · 2 of 4 tasks
hvlgo opened this issue Dec 4, 2024 · 13 comments

hvlgo commented Dec 4, 2024

System Info

transformers version: 4.40.2
Platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Python version: 3.10.4
Huggingface_hub version: 0.26.2
Safetensors version: 0.4.5
Accelerate version: 1.1.1
Accelerate config: not found
PyTorch version (GPU?): 2.0.1+cu117 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('xxx/xxx-1.1', trust_remote_code=True, token=True)

Expected behavior

config.json:
{
  ...,
  "auto_map": {
    "AutoConfig": "configuration_xxx.xxxConfig",
    "AutoModelForCausalLM": "modeling_xxx.xxxForPrediction"
  },
}
When I use the above config and code to load my custom model with auto_map, an error occurs if the model's name contains a dot (.):

ModuleNotFoundError: No module named 'transformers_modules.xxx-1'

It seems that the dot in the name is mistakenly treated as a package/directory separator. How can this issue be resolved?
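For illustration, a minimal sketch of why the dot breaks the lookup (the name mapping below is a simplification for illustration only, not the actual transformers internals):

repo_id = "xxx/xxx-1.1"
# Hypothetical mapping of the repo id onto a dotted dynamic-module name.
module_name = "transformers_modules." + repo_id.replace("/", ".")
print(module_name)             # transformers_modules.xxx.xxx-1.1
print(module_name.split("."))  # ['transformers_modules', 'xxx', 'xxx-1', '1']
# Python treats every "." in a module name as a package separator, so "xxx-1.1"
# is split into "xxx-1" and "1"; the import machinery then looks for a package
# named "xxx-1", which matches the 'transformers_modules.xxx-1' error above.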

hvlgo added the bug label Dec 4, 2024
@Rocketknight1
Member

Hi @hvlgo, this should have been fixed by #29175. Can you pip install --upgrade transformers and let me know if the issue is still present?

@hvlgo
Author

hvlgo commented Dec 9, 2024

Oh, but my code is based on transformers 4.40.2. Can you tell me which release first included that fix? My code doesn't support the latest version of transformers, but I can try using the version that fixed this issue.

@hvlgo
Author

hvlgo commented Dec 9, 2024

I see that version 4.40.2 was released on May 7, but that PR (#29175) was merged on Feb 23. It's quite puzzling.

@Rocketknight1
Member

Yes, that is quite puzzling. Can you try loading the model with the latest version of transformers (even if that isn't compatible with the rest of your code) and confirm the issue is fixed?

@hvlgo
Author

hvlgo commented Dec 9, 2024

Okay, I'll try it now.

@hvlgo
Author

hvlgo commented Dec 9, 2024

Yes, the bug remains in v4.47.0.

@hvlgo
Author

hvlgo commented Dec 9, 2024

The following is the error message, which I have anonymized:
Traceback (most recent call last):
  File "D:\CC\test.py", line 4, in <module>
    model = AutoModelForCausalLM.from_pretrained('org/model-1.1', trust_remote_code=True, token=True)
  File "D:\miniconda\envs\py310\lib\site-packages\transformers\models\auto\auto_factory.py", line 553, in from_pretrained
    model_class = get_class_from_dynamic_module(
  File "D:\miniconda\envs\py310\lib\site-packages\transformers\dynamic_module_utils.py", line 553, in get_class_from_dynamic_module
    return get_class_in_module(class_name, final_module, force_reload=force_download)
  File "D:\miniconda\envs\py310\lib\site-packages\transformers\dynamic_module_utils.py", line 250, in get_class_in_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\DD\.cache\huggingface\modules\transformers_modules\org\model-1.1\cccccccc\modeling_model.py", line 10, in <module>
    from .aa_generation_mixin import aaGenerationMixin
ModuleNotFoundError: No module named 'transformers_modules.org.model-1'
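For reference, one possible workaround (an assumption, not something confirmed by the maintainers in this thread) is to download the repository into a local directory whose name contains no dot and load from that path, so the derived dynamic-module name avoids the ".":

from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM

# Hypothetical workaround: snapshot the repo into a dot-free local folder first.
# "org/model-1.1" is the anonymized repo id from the traceback above; "model-1_1"
# is a made-up local directory name chosen only to avoid the ".".
local_dir = snapshot_download("org/model-1.1", local_dir="model-1_1")
model = AutoModelForCausalLM.from_pretrained(local_dir, trust_remote_code=True)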

@Rocketknight1
Member

Understood - to help us debug this, is it possible to try loading a model with the same path on Mac or Linux? If this is a Windows-specific bug, that gives us a lot of information about what the cause could be!

@hvlgo
Author

hvlgo commented Dec 9, 2024

I have tried it on macOS, and the same issue occurs. I haven't tried it on Linux.

@Rocketknight1
Member

Got it - and last question, is it possible to share the custom model code where the issue occurs, or is it private?

@hvlgo
Author

hvlgo commented Dec 10, 2024

The model is private, but this bug isn't tied to most of the model code. I can provide the structure of the model code and the part related to this bug:
--configuration_model.py
--modeling_model.py
--aa_generation_mixin.py
and in modeling_model.py there is the line: from .aa_generation_mixin import aaGenerationMixin
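For illustration, the failure can be reproduced outside transformers with a stand-in for that layout (the paths and file contents below are placeholders mirroring the anonymized report, not the private code):

import importlib, os, sys, tempfile

# Recreate a cached layout like transformers_modules/org/model-1.1/ with the two
# placeholder files described above, then import it by its dotted name.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "transformers_modules", "org", "model-1.1")
os.makedirs(pkg_dir)
for d in (os.path.join(root, "transformers_modules"),
          os.path.join(root, "transformers_modules", "org"),
          pkg_dir):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "aa_generation_mixin.py"), "w") as f:
    f.write("class aaGenerationMixin:\n    pass\n")
with open(os.path.join(pkg_dir, "modeling_model.py"), "w") as f:
    f.write("from .aa_generation_mixin import aaGenerationMixin\n")

sys.path.insert(0, root)
# Raises: ModuleNotFoundError: No module named 'transformers_modules.org.model-1',
# because the "." in "model-1.1" is read as a package boundary.
importlib.import_module("transformers_modules.org.model-1.1.modeling_model")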

@hvlgo
Author

hvlgo commented Dec 13, 2024

Could you please confirm if you can reproduce this issue? Do you have plans to fix it?


github-actions bot commented Jan 6, 2025

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.
