Fix mask slicing for models with HybridCache #35681
Conversation
For gemma2 it was supposed to work!
Not aligned with the removal of the attention mask slicing though! Let's run the slow tests on the PR.
if attention_mask.shape[-1] <= 1:  # when decoding
    attention_mask = attention_mask[:, :, :, -self.sliding_window :]
This is super important! Why is it removed?
I know it is counter-intuitive, but `_flash_attention_forward` takes the attention mask to pad / unpad the input itself.
Thus you need the slicing, otherwise this operation fails, see the blame!
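For readers following along, here is a minimal sketch (made-up shapes, not code from this PR) of why the slicing matters for the FA2 path: the pad/unpad step expects the 2D mask length to match the cached key/value length of the sliding layer.

```python
import torch

sliding_window = 4

# [bs, seq_len] padding mask as produced by `generate`; 0 marks left-padding.
attention_mask = torch.tensor([
    [0, 0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1],
])

# Sliding-window layers of the HybridCache only hold the last `sliding_window`
# key/value positions, so the mask handed to the pad/unpad logic is sliced to
# the same length; otherwise the unpadding shapes no longer line up.
attention_mask = attention_mask[:, -sliding_window:]
print(attention_mask.shape)  # torch.Size([2, 4])
```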
Indeed, I was too fast on this one, the HybridCache behaves slightly differently than I remembered. There was still an issue in the slicing during prefill for FA2 though!
cc @ArthurZucker slicing is now correct for all lengths (beyond the sliding window) and all attention functions. To not break dynamo tracing, the simplest solution IMO is to propagate the necessary `last_cache_position` explicitly.
PS: slow tests are the same as on main for both models (and the new tests showing full equivalence all pass).
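To illustrate the idea (a hypothetical helper, not necessarily the exact code that landed): once the sequence goes past the sliding window, the 4D mask has to be sliced with an offset, and that offset is derived from a plain `last_cache_position` integer rather than something read out of a tensor inside the layer.

```python
import torch

def slice_window_mask(causal_mask: torch.Tensor, last_cache_position: int, sliding_window: int) -> torch.Tensor:
    """Slice a 4D mask of shape [bs, 1, q_len, max_cache_len] for a sliding-window layer."""
    # Prefill may process more tokens than the window, so keep at least q_len columns.
    effective_len = max(causal_mask.shape[-2], sliding_window)
    # Beyond the sliding window, the kept columns must end at the last cache position.
    offset = max(0, last_cache_position - effective_len)
    return causal_mask[:, :, :, offset : offset + effective_len]

# Decoding a single token with max_cache_len=16 and 10 positions already seen:
mask = torch.zeros(1, 1, 1, 16)
print(slice_window_mask(mask, last_cache_position=10, sliding_window=4).shape)  # torch.Size([1, 1, 1, 4])
```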
Okay, I understand the motivation, and as we have kwargs I'd be in favor of passing them through kwargs, but explicit is also fine.
Wondering whether we want to do a lot of work for the user or not. I am also thinking a bit more about continuous batching, where you want to prepare 2 different tensors: you have the index to fill and the cache index to slice, so it will be a bit aligned with this.
TLDR: we do need to patch Gemma2 and Cohere, so let's iterate a bit! Only bothered by the case that won't happen in practice!
# In case a 4d mask is passed directly without using `generate`, we have to rely on cache_position
# It will break dynamo tracing but there is no way around it (and it should never happen in practice)
last_cache_position = (
    attention_mask.shape[-1] if attention_mask.dim() == 2 else cache_position[-1].item()
)
If in practice it never happens, let's remove this one.
Thanks! IMO we should remove the 4d-mask-without-`generate` case, as the user is expected to provide a correct mask, unless you are in the decoder layer, and then you slice depending on the layer.
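For context on the dynamo remark, a small illustration (made-up tensors, only meant to show the tracing concern) of the two ways of obtaining the last cache position quoted above:

```python
import torch

attention_mask_2d = torch.ones(2, 12)  # standard [bs, seq_len] mask from `generate`
cache_position = torch.arange(12)      # positions of the tokens being processed

# Reading the length from the mask's shape is a plain Python int available at
# trace time, so torch.compile / dynamo can trace it without a graph break.
last_cache_position = attention_mask_2d.shape[-1]

# Extracting it from a tensor value is data-dependent: `.item()` forces a graph
# break under dynamo, which is why it is only kept as a fallback for the 4D case.
last_cache_position_fallback = cache_position[-1].item()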
# In case a 4d mask is passed directly without using `generate`, we have to rely on cache_position
# It will break dynamo tracing but there is no way around it (and it should never happen in practice)
last_cache_position = (
    attention_mask.shape[-1] if attention_mask.dim() == 2 else cache_position[-1].item()
)
For Gemma 2 I would say passing a mask of the correct shape might be on the user's side to check.
* correctly slice
* check mask
* Update modular_gemma2.py
* fix
* add tests
* fix typo
* finally fix mask slicing
* Finally correctly slice in all cases!!
* add test for all attention functions
* small fix in tests
* trick around dynamo tracing issue
* last update
* more robust
* kwargs propagation
* make it explicit for checkpointing
* apply modular
What does this PR do?
As per the title. Models with HybridCache need to correctly slice the key/value states when using FA2, as the inputs need to be unpadded on the right as well (and the mask has shape [bs, seq_len]). Moreover, mask slicing was wrong in all cases where the sequence length is larger than the sliding window. It is currently broken and leads to garbage generation when using padding. This fixes it.
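A minimal sketch of the mismatch being described (made-up shapes and a hypothetical snippet, not the PR's exact code): with FA2 the 2D mask drives the unpadding, so the key/value states coming out of the pre-allocated HybridCache have to be cut down to the same length.

```python
import torch

bs, num_kv_heads, head_dim, sliding_window = 2, 4, 8, 16
seq_len = 10                              # tokens seen so far (here still below the window)

attention_mask = torch.ones(bs, seq_len)  # [bs, seq_len] mask consumed by FA2's unpad step
# The HybridCache pre-allocates sliding layers to `sliding_window` positions, so the
# returned key/value states can be longer than the mask.
key_states = torch.randn(bs, num_kv_heads, sliding_window, head_dim)
value_states = torch.randn(bs, num_kv_heads, sliding_window, head_dim)

# Slice the cached keys/values down to the mask length so the unpadding shapes agree.
if attention_mask is not None and key_states.shape[-2] > attention_mask.shape[-1]:
    key_states = key_states[:, :, : attention_mask.shape[-1]]
    value_states = value_states[:, :, : attention_mask.shape[-1]]

print(key_states.shape)  # torch.Size([2, 4, 10, 8])
```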