
skip flash attn ut on Hopper #55148

Merged (2 commits, Jul 6, 2023)

Conversation

Wong4j
Collaborator

@Wong4j Wong4j commented Jul 5, 2023

PR types

Others

PR changes

Others

Description

The test_flash_attention.py UT fails on H100 because flash attention is implemented only for sm75 and sm8x; see flash_attn.cpp#L293.

This PR skips the UT on unsupported architectures.

Error message:

ERROR: test_all (test_flash_attention.TestFlashAttentionAPI)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/paddle/paddle/build/python/paddle/fluid/tests/unittests/test_flash_attention.py", line 192, in test_all
    out, _ = flash_attention(
  File "/opt/paddle/paddle/build/python/paddle/nn/functional/flash_attention.py", line 83, in flash_attention
    (result_attention, result_softmax,) = _C_ops.flash_attn(
OSError: (External) `is_sm8x || is_sm75` check failed at /opt/paddle/paddle/build/third_party/flashattn/src/extern_flashattn/csrc/flash_attn/flash_attn.cpp:292 (at /opt/paddle/paddle/paddle/phi/kernels/gpu/flash_attn_kernel.cu:142)
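The architecture check behind the skip can be sketched as follows. This is a minimal illustration, not the PR's actual code: the helper name `is_flash_attn_supported` is hypothetical, and in a real Paddle test the compute capability would be queried from the runtime (e.g., something like `paddle.device.cuda.get_device_capability()`) rather than passed in by hand.

```python
def is_flash_attn_supported(major: int, minor: int) -> bool:
    """Return True for the GPU architectures flash attention supports.

    Per the error message above, flash attention is implemented only for
    sm75 (Turing) and sm8x (Ampere/Ada); Hopper is sm90, so it is excluded.
    """
    is_sm75 = major == 7 and minor == 5
    is_sm8x = major == 8  # covers sm80, sm86, sm89, ...
    return is_sm75 or is_sm8x


# A unittest-style skip using this check (hypothetical usage):
#
#   @unittest.skipIf(not is_flash_attn_supported(major, minor),
#                    "flash attention requires sm75 or sm8x")
#   def test_all(self): ...
```

With this guard, the test is simply skipped on H100 (sm90) instead of raising the `is_sm8x || is_sm75` check failure shown above.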

@paddle-bot

paddle-bot bot commented Jul 5, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added contributor External developers status: proposed labels Jul 5, 2023
@Wong4j Wong4j added the NVIDIA label Jul 5, 2023
Contributor

@XieYunshen XieYunshen left a comment


LGTM

@XieYunshen XieYunshen merged commit 07e5552 into PaddlePaddle:develop Jul 6, 2023
@paddle-bot

paddle-bot bot commented Jul 6, 2023

Your PR has been merged into the repository. An official integration test will be conducted later. Stay tuned.

cqulilujia pushed a commit to cqulilujia/Paddle that referenced this pull request Jul 24, 2023
* skip flash attn ut on Hopper

* minor change