A dot in the model name causes a path parsing error when using auto_map #35082
Comments
Oh, but my code is based on transformers 4.40.2. Can you tell me which version officially introduced this update? My code doesn't support the latest version of transformers, but I can try using the version that fixed this issue.
I see that version 4.40.2 was updated on May 7, but this issue was merged on Feb 23. It's quite puzzling.
Yes, that is quite puzzling. Can you try loading the model with the latest version of transformers (even if that isn't compatible with the rest of your code) and confirm the issue is fixed?
Okay, I'll try it now.
Yes, the bug remains in v4.47.0.
The following is the error message, which I have anonymized:
Understood - to help us debug this, is it possible to try loading a model with the same path on Mac or Linux? If this is a Windows-specific bug, that gives us a lot of information about what the cause could be!
I have tried it on Mac, and the same issue occurs. I haven't tried it on Linux.
Got it - and last question, is it possible to share the custom model code where the issue occurs, or is it private?
The model is private, but this bug is not related to most of the code in the model. I can provide the structure of the model code and the part of the code related to this bug:
Could you please confirm if you can reproduce this issue? Do you have plans to fix it?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
System Info
transformers version: 4.40.2
Platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Python version: 3.10.4
Huggingface_hub version: 0.26.2
Safetensors version: 0.4.5
Accelerate version: 1.1.1
Accelerate config: not found
PyTorch version (GPU?): 2.0.1+cu117 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
Who can help?
No response
Reproduction
from transformers import AutoModelForCausalLM

# Load a custom remote-code model whose repo name contains a dot ("xxx-1.1")
model = AutoModelForCausalLM.from_pretrained('xxx/xxx-1.1', trust_remote_code=True, token=True)
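For context, my rough understanding of what happens with trust_remote_code=True (a simplified sketch, not the exact transformers internals): the custom modeling files are cached locally and imported under a dotted Python module path that embeds the repo name, so the repo name itself becomes part of a package path.

# Simplified sketch (assumption, not the actual transformers source): the remote
# code for "xxx/xxx-1.1" is imported under a module path roughly like
#   transformers_modules.<repo_name>.modeling_xxx
repo_name = "xxx-1.1"
dynamic_module = "transformers_modules." + repo_name + ".modeling_xxx"
print(dynamic_module)  # transformers_modules.xxx-1.1.modeling_xxx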
Expected behavior
config.json:
{
...,
"auto_map": {
"AutoConfig": "configuration_xxx.xxxConfig",
"AutoModelForCausalLM": "modeling_xxx.xxxForPrediction"
},
}
When I use the above config and code to load my custom model with auto_map, an error occurs if my model's name contains a ".":
ModuleNotFoundError: No module named 'transformers_modules.xxx-1'
It seems that the "." in the name is mistakenly recognized as a directory separator. How can this issue be resolved?
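For what it is worth, my guess (an assumption, not confirmed against the transformers source) is that Python simply splits the dynamically built module path at every ".", and a possible workaround is to copy the model into a directory whose name contains no dots before loading. A minimal sketch of both, assuming a local copy of the model:

# The dotted repo name is split at every ".", so the import stops at 'xxx-1':
print("transformers_modules.xxx-1.1.modeling_xxx".split("."))
# ['transformers_modules', 'xxx-1', '1', 'modeling_xxx']

# Possible workaround sketch (assumption, not an official fix): copy the model
# files into a dot-free directory name and load from there.
import shutil
from transformers import AutoModelForCausalLM

src = "xxx/xxx-1.1"   # original local path whose final component contains a dot
dst = "xxx/xxx-1_1"   # hypothetical dot-free copy
shutil.copytree(src, dst, dirs_exist_ok=True)
model = AutoModelForCausalLM.from_pretrained(dst, trust_remote_code=True)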