Add T5 GGUF loading support #33389
base: main
Conversation
Does your code allow loading T5 Encoder XXL (the one used in Flux)?
@bonlime Yes, it works with the AutoModelForTextEncoding class. I've tested with the T5 encoder from the exact repo you linked, but I didn't commit an independent test block for the T5 encoder, since the example output (raw embedding vectors) would have made the code messy.
Thanks for your work @junejae, and sorry for the delay! Just a few nits. There are also a few merge conflicts, can you fix them?
parsed_parameters["config"]["tie_word_embeddings"] = False
parsed_parameters["config"]["is_gated_act"] = True
Why do we need to hardcode these?
I've found that almost every GGUF version of T5 out there is actually Flan-T5, and those config values need to be set when the weights are mapped back to Transformers' T5 class.
I didn't want to dig deeper and risk unexpected breakage, so I just hardcoded them. Feel free to fix it in a cleverer way!
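A sketch of what the hardcoded overrides amount to, with the explanatory comment the reviewer asks for below (a plain dict stands in for the parsed GGUF config here):

```python
# Most public T5 GGUF files were converted from Flan-T5 checkpoints.
# Flan-T5 uses the gated-GELU feed-forward variant and does not tie the
# input/output embeddings, so the loader forces both settings when
# mapping the GGUF metadata back to a Transformers T5 config.
parsed_parameters = {"config": {}}
parsed_parameters["config"]["tie_word_embeddings"] = False  # Flan-T5: untied embeddings
parsed_parameters["config"]["is_gated_act"] = True          # Flan-T5: gated activation
```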
Thanks for the explanation. Can you add a comment to explain this? This is probably something that we need to fix in the future!
@SunMarc
tests/quantization/ggml/test_ggml.py
Outdated
if (
    "SelfAttention" in quantized_name
    and "SelfAttention" in original_name
):
Let's test all layers
Suggested change (remove this filter):
if (
    "SelfAttention" in quantized_name
    and "SelfAttention" in original_name
):
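The suggestion above can be sketched as follows: drop the SelfAttention filter and compare every shared parameter. This is a hypothetical helper using plain lists to stand in for the two models' state_dicts; it is not the PR's actual test code.

```python
def worst_abs_diff(quantized_sd, original_sd):
    """Compare *all* shared parameters (not only SelfAttention ones) and
    return (parameter_name, max absolute difference) for the worst layer."""
    worst_name, worst_diff = None, -1.0
    for name, q_values in quantized_sd.items():
        o_values = original_sd[name]
        diff = max(abs(q - o) for q, o in zip(q_values, o_values))
        if diff > worst_diff:
            worst_name, worst_diff = name, diff
    return worst_name, worst_diff

# Toy state_dicts (hypothetical names/values):
quantized = {"SelfAttention.q": [0.1, 0.2], "DenseReluDense.wo": [1.0, 2.004]}
original = {"SelfAttention.q": [0.1, 0.2], "DenseReluDense.wo": [1.0, 2.0]}
```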
@SunMarc
I've tested your suggestion before committing, and I found that only one type of weight fails torch.testing.assert_close: the DenseReluDense.wo weights. The error log looks like this:
AssertionError: Tensor-likes are not close!
Mismatched elements: 401401 / 524288 (76.6%)
Greatest absolute difference: 0.0037021636962890625 at index (367, 773) (up to 1e-05 allowed)
Greatest relative difference: 0.0004879237385466695 at index (17, 619) (up to 1.3e-06 allowed)
I'm not familiar with situations like this, so I may need your help. Do you have any ideas?
It looks like the DenseReluDense.wo weights are not the same in the GGUF and the original model. I'm not sure why that is. Could you print the weights to compare them? I don't know if this is just an accuracy issue or if the weights are totally different.
Sorry for the delay.
I've printed both of them, and I think it is just an accuracy issue.
# gguf model (repetitio/flan-t5-small, flan-t5-small-f16.gguf)
tensor([[ 2.2873e-02, -1.6003e-01, 1.9275e-01, ..., 4.2480e-01,
2.3706e-01, -1.1108e-01],
[-1.1694e-01, 3.3911e-01, -2.9694e-02, ..., -2.0239e-01,
3.3862e-01, 2.6587e-01],
[ 3.5913e-01, -3.4106e-01, -3.6597e-01, ..., -1.2695e-01,
-2.2125e-02, -5.1819e-02],
...,
[ 5.2216e-02, -2.9443e-01, -1.6882e-01, ..., -8.0688e-02,
-2.5391e-01, -8.2779e-04],
[ 3.0542e-01, -3.6335e-03, 5.0879e-01, ..., 7.5317e-02,
3.4326e-01, 3.1470e-01],
[ 1.2158e+00, 5.9113e-02, -3.2568e-01, ..., -3.1323e-01,
2.7026e-01, 1.9165e-01]], device='cuda:0')
# original model (google/flan-t5-small)
tensor([[ 2.2874e-02, -1.6004e-01, 1.9277e-01, ..., 4.2484e-01,
2.3705e-01, -1.1111e-01],
[-1.1692e-01, 3.3915e-01, -2.9694e-02, ..., -2.0241e-01,
3.3858e-01, 2.6584e-01],
[ 3.5917e-01, -3.4098e-01, -3.6591e-01, ..., -1.2691e-01,
-2.2127e-02, -5.1824e-02],
...,
[ 5.2229e-02, -2.9446e-01, -1.6881e-01, ..., -8.0677e-02,
-2.5395e-01, -8.2794e-04],
[ 3.0539e-01, -3.6335e-03, 5.0860e-01, ..., 7.5305e-02,
3.4325e-01, 3.1474e-01],
[ 1.2160e+00, 5.9106e-02, -3.2575e-01, ..., -3.1322e-01,
2.7032e-01, 1.9162e-01]], device='cuda:0')
Feel free to update the rtol so that the tests pass!
Also make sure to fix the CI!
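As a sketch of what loosening the tolerance means: torch.testing.assert_close accepts a vs. b when |a - b| <= atol + rtol * |b|, and the float32 defaults (rtol=1.3e-6, atol=1e-5) match the "allowed" values in the error log above. Tolerances sized to the logged differences (~3.7e-3 absolute, ~4.9e-4 relative) let the f16 round-trip noise pass. A toy example, not the real weights:

```python
import torch

torch.manual_seed(0)
ref = torch.randn(64, 64)
# Simulate f16 round-trip noise of roughly the size seen in the log above.
approx = ref + 1e-3 * torch.randn(64, 64)

# The float32 defaults (rtol=1.3e-6, atol=1e-5) would reject this;
# tolerances sized to the observed error accept it.
torch.testing.assert_close(approx, ref, rtol=1e-3, atol=6e-3)
```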
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
What does this PR do?
Add T5 GGUF loading support
Due to the nature of T5's architecture, I decided to replicate gguf's conversion logic, so the final code got messy.
I tried to avoid any logical conflicts between T5's and the existing model architectures, but feel free to edit the code if you find any mistakes I haven't noticed.
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Link: Community contribution: Adding GGUF support for more architectures #33260
- Did you update the documentation per the documentation guidelines? Here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc @LysandreJik @ArthurZucker , could you please review this PR?