
Add T5 GGUF loading support #33389

Open · wants to merge 10 commits into main
Conversation

@junejae (Contributor) commented Sep 9, 2024

What does this PR do?

Add T5 GGUF loading support

Due to the nature of T5's architecture, I decided to replicate gguf's conversion logic, so the final code is somewhat messy.
I tried to avoid any logical conflicts between T5 and the existing model architectures, but feel free to edit the code if you find any mistakes I missed.

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@SunMarc @LysandreJik @ArthurZucker , could you please review this PR?

@bonlime commented Sep 11, 2024

Does your code allow loading T5 Encoder XXL (the one used in Flux)?
Example files here:
https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf/tree/main

@junejae (Contributor, Author) commented Sep 11, 2024

@bonlime Yes, it works with the AutoModelForTextEncoding class. I've tested with the T5 encoder from the exact repo you linked, but I didn't commit a separate test block for the T5 encoder, since the example output (an embedding vector) would clutter the code.
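
For anyone wanting to try this, a minimal loading sketch (the repo id is the one linked above; the exact `.gguf` filename is a guess, and the `gguf_file` keyword path is the feature this PR adds):

```python
def load_t5_encoder_from_gguf(
    repo_id: str = "city96/t5-v1_1-xxl-encoder-gguf",
    gguf_file: str = "t5-v1_1-xxl-encoder-f16.gguf",  # hypothetical filename
):
    """Sketch: load a GGUF T5 encoder through the gguf_file code path."""
    # Imports are kept inside the function so the sketch can be defined
    # even where transformers/torch are not installed.
    from transformers import AutoModelForTextEncoding, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
    model = AutoModelForTextEncoding.from_pretrained(repo_id, gguf_file=gguf_file)
    return tokenizer, model
```

Calling this downloads the GGUF file and dequantizes the weights into a regular T5 encoder, so it needs network access and a fair amount of RAM for the XXL variant.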

@SunMarc (Member) left a comment:

Thanks for your work @junejae, and sorry for the delay! Just a few nits. There are also a few merge conflicts; can you fix those as well?

Comment on lines +97 to +98
parsed_parameters["config"]["tie_word_embeddings"] = False
parsed_parameters["config"]["is_gated_act"] = True
@SunMarc (Member):

Why do we need to hardcode these?

@junejae (Contributor, Author):

I've found that almost every gguf version of T5 out there is actually flan-t5, and those config values need to be set when mapping back to transformers' T5 class.
I didn't want to dig deeper and risk unexpected issues, so I just hardcoded them. Feel free to fix it in a cleverer way!
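
In other words, the parsing step pins the two flags to flan-t5's defaults. A minimal sketch of what the hardcoded lines amount to (the dict shape mirrors the `parsed_parameters` structure shown in the diff above):

```python
# Sketch of the hardcoded T5 GGUF config flags from this PR.
# Most public T5 GGUFs were converted from flan-t5 checkpoints, which use
# gated activations and do not tie the input/output embeddings.
parsed_parameters = {"config": {}}
parsed_parameters["config"]["tie_word_embeddings"] = False
parsed_parameters["config"]["is_gated_act"] = True
```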

@SunMarc (Member) commented Oct 3, 2024:

Thanks for the explanation. Can you add a comment to explain this? This is probably something that we need to fix in the future!

tests/quantization/ggml/test_ggml.py (thread resolved)
src/transformers/modeling_gguf_pytorch_utils.py (thread resolved, outdated)
@junejae junejae requested a review from SunMarc October 3, 2024 11:37
@junejae (Contributor, Author) commented Oct 3, 2024

@SunMarc
I've resolved conflicts and added more tests. Could you please review it again?

Comment on lines 505 to 508
if (
"SelfAttention" in quantized_name
and "SelfAttention" in original_name
):
@SunMarc (Member):

Let's test all layers

Suggested change
if (
"SelfAttention" in quantized_name
and "SelfAttention" in original_name
):

@junejae (Contributor, Author) commented Oct 5, 2024:

@SunMarc
I tested your suggestion before committing, and I found that only one type of weight has an issue with torch.testing.assert_close.

The affected weights are DenseReluDense.wo, and the error log looks like this:

AssertionError: Tensor-likes are not close!

Mismatched elements: 401401 / 524288 (76.6%)
Greatest absolute difference: 0.0037021636962890625 at index (367, 773) (up to 1e-05 allowed)
Greatest relative difference: 0.0004879237385466695 at index (17, 619) (up to 1.3e-06 allowed)

I'm not familiar with situations like this, so I may need your help. Do you have any ideas?

@SunMarc (Member):

It looks like the DenseReluDense.wo weights are not the same in the gguf and the original model, and I'm not sure why. Could you print the weights to compare them? I don't know if this is just an accuracy issue or if the weights are totally different.

@junejae (Contributor, Author):

Sorry for the delay.
I've printed both of them, and I think it is just an accuracy issue.

# gguf model (repetitio/flan-t5-small, flan-t5-small-f16.gguf)
tensor([[ 2.2873e-02, -1.6003e-01,  1.9275e-01,  ...,  4.2480e-01,
          2.3706e-01, -1.1108e-01],
        [-1.1694e-01,  3.3911e-01, -2.9694e-02,  ..., -2.0239e-01,
          3.3862e-01,  2.6587e-01],
        [ 3.5913e-01, -3.4106e-01, -3.6597e-01,  ..., -1.2695e-01,
         -2.2125e-02, -5.1819e-02],
        ...,
        [ 5.2216e-02, -2.9443e-01, -1.6882e-01,  ..., -8.0688e-02,
         -2.5391e-01, -8.2779e-04],
        [ 3.0542e-01, -3.6335e-03,  5.0879e-01,  ...,  7.5317e-02,
          3.4326e-01,  3.1470e-01],
        [ 1.2158e+00,  5.9113e-02, -3.2568e-01,  ..., -3.1323e-01,
          2.7026e-01,  1.9165e-01]], device='cuda:0')

# original model (google/flan-t5-small)
tensor([[ 2.2874e-02, -1.6004e-01,  1.9277e-01,  ...,  4.2484e-01,
          2.3705e-01, -1.1111e-01],
        [-1.1692e-01,  3.3915e-01, -2.9694e-02,  ..., -2.0241e-01,
          3.3858e-01,  2.6584e-01],
        [ 3.5917e-01, -3.4098e-01, -3.6591e-01,  ..., -1.2691e-01,
         -2.2127e-02, -5.1824e-02],
        ...,
        [ 5.2229e-02, -2.9446e-01, -1.6881e-01,  ..., -8.0677e-02,
         -2.5395e-01, -8.2794e-04],
        [ 3.0539e-01, -3.6335e-03,  5.0860e-01,  ...,  7.5305e-02,
          3.4325e-01,  3.1474e-01],
        [ 1.2160e+00,  5.9106e-02, -3.2575e-01,  ..., -3.1322e-01,
          2.7032e-01,  1.9162e-01]], device='cuda:0')
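
For context, differences on the order of a few 1e-3 against weights of this scale are what float16 storage alone produces, which supports the accuracy-issue reading. A quick numpy sketch (assuming the gguf tensors are stored in f16, as the filename flan-t5-small-f16.gguf suggests):

```python
import numpy as np

# Stand-in weights roughly on the scale of the wo matrix printed above.
rng = np.random.default_rng(0)
w = rng.standard_normal((512, 1024)).astype(np.float32)

# Round-trip through float16, as an f16 gguf tensor would be stored.
w_f16 = w.astype(np.float16).astype(np.float32)

max_abs = float(np.abs(w - w_f16).max())
# float16 round-to-nearest has relative error up to 2**-11 (~4.9e-4), so
# absolute errors of a few 1e-3 on the largest entries are expected.
assert max_abs < float(np.abs(w).max()) * 2.0**-10
```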

@SunMarc (Member):

Feel free to update the rtol so that the tests pass!
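
A relaxed comparison along those lines, sketched here with numpy's assert_allclose for illustration (torch.testing.assert_close accepts the same rtol/atol keywords; the two values are corresponding entries from the dumps above):

```python
import numpy as np

# Corresponding entries from the gguf and original dumps above.
dequantized = np.float32(0.30542)
original = np.float32(0.30539)

# The default rtol (1e-7) rejects fp16-level noise; widening rtol/atol to
# the f16 noise floor lets the comparison pass.
np.testing.assert_allclose(dequantized, original, rtol=5e-3, atol=1e-3)
```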

tests/quantization/ggml/test_ggml.py (thread resolved)
@SunMarc (Member) commented Oct 3, 2024

Make sure to fix the CI as well!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
