Fix CI for VLMs #35690
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Asked the infra team. It can't be downloaded in the EU region ...
Which (full) test name? @Cyrilvallez told me we might need to introduce a @need_a10_gpu decorator if we don't have another way.
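As a rough sketch (an assumption, not existing transformers API), such a decorator could gate a test on the detected GPU name:

import unittest

import torch


def need_a10_gpu(test_case):
    # Skip unless the first CUDA device reports an A10 in its name (hypothetical check).
    has_a10 = torch.cuda.is_available() and "A10" in torch.cuda.get_device_name(0)
    return unittest.skipUnless(has_a10, "test requires an A10 GPU")(test_case)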
    [[0.9148, -1.4148, 3.8040], [3.3443, 1.9478, 0.2080], [1.6604, 2.8184, -0.3618]]
).to(torch_device)

self.assertTrue(
-    torch.allclose(outputs.vision_model_output.last_hidden_state[0, :3, :3], expected_slice, atol=1e-4)
+    torch.allclose(outputs.vision_model_output.last_hidden_state[0, :3, :3], expected_slice, atol=1e-1)
)
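For reference, a small hypothetical way (assuming the test's outputs and expected_slice are in scope) to see how far the runner's values drift from the recorded slice before settling on a tolerance:

actual = outputs.vision_model_output.last_hidden_state[0, :3, :3]
# Largest absolute difference between the runner's values and the recorded reference.
max_abs_diff = (actual - expected_slice).abs().max().item()
print(f"max abs diff: {max_abs_diff}")  # if this exceeds 1e-4 on the T4, the old atol was too tight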
Somehow I feel such changes are quite large, and I don't really know what happened in the history of commits ...
But first question: are you checking this on a T4 runner?
Yep, running everything on T4. I believe the user who added the test didn't trigger the slow tests with our runners, hence the difference.
The test fails when I go back to the commit where it was added.
Hmmm, I am the one working on Kosmos2 and I ran the test like hundreds of times 😆.
Maybe torch version changes matter here. Anyway, I can take a look at this specific model and make sure.
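A quick way to capture the environment details being compared here (standard torch attributes, shown only as an illustration):

import torch

# Print the torch build, CUDA toolkit version, and GPU name to compare across runners.
print(torch.__version__, torch.version.cuda, torch.cuda.get_device_name(0))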
Oh, this particular test was added by a community user, after the model release. But yeah, double checking would not hurt
😢
Cool, the decorator would definitely help a lot! The tests are ...
LGTM apart from the tests that are a bit of an edge case, even if fixed!
@@ -481,49 +480,6 @@ def test_batched_generation(self):
        outputs = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        self.assertEqual(outputs, EXPECTED_OUTPUT)

-    @slow
-    @require_bitsandbytes
-    def test_llava_index_error_bug(self):
not sure why we are removing this one?
The original bug was related to extending the attention mask after we merge inputs the old way, and the bug cannot happen anymore since we removed the old-style merging.
See the decorator.
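To illustrate what extending the attention mask after merging refers to, here is a rough sketch under assumptions (not the actual removed transformers code): once image features are spliced into the text embeddings, the mask has to grow to cover the extra image positions, which is where the original index error came from.

import torch

attention_mask = torch.ones(1, 5, dtype=torch.long)  # mask for 5 text tokens
num_image_tokens = 576  # illustrative value, e.g. patches from one image
# Old-style merging appended ones for every inserted image position.
extended_mask = torch.cat(
    [attention_mask, attention_mask.new_ones(1, num_image_tokens)], dim=-1
)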
* fix some easy test
* more tests
* remove logit check here also
* add require_torch_large_gpu in Emu3
What does this PR do?
As per the title, the main changes are:
* Some expected_text values were outdated, so updated them.

Note: the Emu3/Aria flex-attn test will still be failing and needs a different PR to fix the way attention is dispatched on some VLMs. Apparently it never worked for Emu3-like models.
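A sketch of how the require_torch_large_gpu decorator mentioned in the commits might be applied to an Emu3 integration test, assuming it is exposed from transformers.testing_utils like the other test decorators (the class and test names below are assumptions, not necessarily what this PR touches):

import unittest

from transformers.testing_utils import require_torch_large_gpu, slow


@slow
@require_torch_large_gpu
class Emu3IntegrationTest(unittest.TestCase):
    def test_generation(self):
        ...  # integration test body that needs more GPU memory than a T4 offers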