Blip: get/set input embeddings correctly #34152
Changes from all commits
```diff
@@ -1768,11 +1768,12 @@ def forward(
                 decoder_attention_mask=decoder_attention_mask,
                 output_attentions=output_attentions,
                 output_hidden_states=output_hidden_states,
-                return_dict=return_dict,
+                return_dict=True,  # toggle for easier access to loss/logits below
                 labels=labels,
             )
-            loss = outputs.loss if return_dict else outputs[0]
-            logits = outputs.logits if return_dict else outputs[1]
+            loss = outputs.loss
+            logits = outputs.logits
+            outputs = outputs.to_tuple() if not return_dict else outputs

         if not return_dict:
             output = (logits, vision_outputs, query_outputs, outputs)
```

Review discussion on the `return_dict=True` change:

- Reviewer: Sorry 😐 realized this would break torch.script or FX export compatibility, so maybe `False` by default? (I might be wrong tho, but I don't think it's supported.)
- Author: Yeah, torchscript is not supported for BLIP afaik, and the tests are disabled therefore. I guess in that case we don't need it to be `False`.
- Reviewer: No, but you could script only the LM model and not the full model, no?
- Author: I added torchscript tests and they are passing currently. The FX test cannot be added because the model architecture is not in the supported list. I don't think we should do […].
- Reviewer: Okay, sounds good!
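The pattern this hunk adopts — always requesting a `ModelOutput` from the inner language model and only converting back to a tuple at the boundary — can be illustrated on its own. A minimal sketch, assuming a hypothetical wrapper function; `language_model` stands for any Hugging Face model that returns a `ModelOutput` exposing `.loss`, `.logits`, and `.to_tuple()`:

```python
def call_language_model(language_model, model_inputs, labels, return_dict: bool):
    # Always ask the inner LM for a ModelOutput, so loss/logits can be read
    # by attribute instead of by positional index.
    outputs = language_model(**model_inputs, labels=labels, return_dict=True)
    loss = outputs.loss
    logits = outputs.logits
    # Convert back to a plain tuple only at the boundary, preserving the
    # documented contract for callers that passed return_dict=False.
    outputs = outputs.to_tuple() if not return_dict else outputs
    return loss, logits, outputs
```

This keeps the attribute access (`outputs.loss`) unconditional while still honoring the caller's `return_dict` choice, which is what makes the two removed ternaries unnecessary.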
```diff
@@ -1810,6 +1811,12 @@ def __init__(self, config: Blip2Config):
         # Initialize weights and apply final processing
         self.post_init()

+    def get_input_embeddings(self):
+        return self.embeddings.word_embeddings
+
+    def set_input_embeddings(self, value):
+        self.embeddings.word_embeddings = value
+
     @add_start_docstrings_to_model_forward(BLIP_2_TEXT_WITH_PROJECTION_INPUTS_DOCSTRING)
     @replace_return_docstrings(output_type=Blip2TextModelOutput, config_class=Blip2Config)
     def forward(
```
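With the getter/setter in place, the generic `PreTrainedModel` machinery (e.g. `resize_token_embeddings`, weight tying) can reach the text embeddings. A hypothetical usage sketch — the checkpoint name is a plausible BLIP-2 ITM checkpoint chosen for illustration, not taken from this PR:

```python
from transformers import Blip2TextModelWithProjection

model = Blip2TextModelWithProjection.from_pretrained("Salesforce/blip2-itm-vit-g")

emb = model.get_input_embeddings()   # nn.Embedding returned by the new getter
print(emb.num_embeddings, emb.embedding_dim)

# resize_token_embeddings goes through get/set_input_embeddings internally,
# e.g. after adding new tokens to the tokenizer:
model.resize_token_embeddings(emb.num_embeddings + 2)
```

Before this PR, calls like `resize_token_embeddings` would fail on these classes because the base-class accessors had nothing to delegate to.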
```diff
@@ -2233,11 +2240,12 @@ def forward(
                 decoder_attention_mask=decoder_attention_mask,
                 output_attentions=output_attentions,
                 output_hidden_states=output_hidden_states,
-                return_dict=return_dict,
+                return_dict=True,  # toggle for easier access to loss/logits below
                 labels=labels,
             )
-            loss = outputs.loss if return_dict else outputs[0]
-            logits = outputs.logits if return_dict else outputs[1]
+            loss = outputs.loss
+            logits = outputs.logits
+            outputs = outputs.to_tuple() if not return_dict else outputs

         if not return_dict:
             output = (logits, vision_outputs, query_outputs, outputs)
```
```diff
@@ -2389,6 +2397,12 @@ def __init__(self, config: Blip2Config):
         # Initialize weights and apply final processing
         self.post_init()

+    def get_input_embeddings(self):
+        return self.embeddings.word_embeddings
+
+    def set_input_embeddings(self, value):
+        self.embeddings.word_embeddings = value
+
     @add_start_docstrings_to_model_forward(BLIP2_IMAGE_TEXT_RETRIEVAL_INPUTS_DOCSTRING)
     @replace_return_docstrings(output_type=Blip2ImageTextMatchingModelOutput, config_class=Blip2Config)
     def forward(
```
Review discussion on the new `get_input_embeddings`/`set_input_embeddings` methods:

- Reviewer: I think if there is a `text_config`, we could automatically deduce from the key, which would here be `text_model`, which one to call? (Thinking general API-wise!)
- Author: Hmm, I see that in `PreTrainedModel` we try to get the method from `base_model`, and prob we can fall back to that by indicating the `base_model_prefix`. I am not very sure yet how the prefix is used when loading the model, so lemme quickly check that the state dict is still correctly loaded.
- Author: Update: yes, the idea works and loading happens the same way as without the `base_model_prefix`. But some of the tests will fail because of the composite nature of `BlipConfig` (`test_correct_missing_keys`). I will take this as noted and add it to my TODO list, but I believe it would force us to refactor `from_pretrained` to work well with composite models.
- Reviewer: Okay
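For context, the fallback being discussed looks roughly like the following. This is a simplified paraphrase of the `PreTrainedModel` accessors rather than the exact transformers source: when a subclass does not override the methods, they delegate to the submodule named by `base_model_prefix`.

```python
import torch.nn as nn

class PreTrainedModelSketch(nn.Module):
    # Subclasses set this to the attribute name of their backbone,
    # e.g. "text_model" in the composite case discussed above.
    base_model_prefix = ""

    def get_input_embeddings(self) -> nn.Module:
        base_model = getattr(self, self.base_model_prefix, self)
        if base_model is not self:
            # Delegate to the wrapped backbone's own implementation.
            return base_model.get_input_embeddings()
        raise NotImplementedError

    def set_input_embeddings(self, value: nn.Module):
        base_model = getattr(self, self.base_model_prefix, self)
        if base_model is not self:
            base_model.set_input_embeddings(value)
        else:
            raise NotImplementedError
```

This explains the trade-off raised in the thread: the prefix-based fallback avoids per-class overrides, but because `base_model_prefix` also influences checkpoint key handling in `from_pretrained`, composite configs like `BlipConfig` can trip tests such as `test_correct_missing_keys`.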