[CI] Compile with pytorch 2.4.0.dev20240514
tridao committed Jul 11, 2024
1 parent da11d1b commit 116b05f
Showing 3 changed files with 4 additions and 4 deletions.
.github/workflows/publish.yml (2 changes: 1 addition, 1 deletion)
@@ -44,7 +44,7 @@ jobs:
 # manylinux docker image, but I haven't figured out how to install CUDA on manylinux.
 os: [ubuntu-20.04]
 python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
-torch-version: ['2.0.1', '2.1.2', '2.2.2', '2.3.1', '2.4.0.dev20240512']
+torch-version: ['2.0.1', '2.1.2', '2.2.2', '2.3.1', '2.4.0.dev20240514']
 cuda-version: ['11.8.0', '12.2.2']
 # We need separate wheels that either uses C++11 ABI (-D_GLIBCXX_USE_CXX11_ABI) or not.
 # Pytorch wheels currently don't use it, but nvcr images have Pytorch compiled with C++11 ABI.
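The matrix pins an exact nightly because each wheel is compiled against one specific torch build and ABI. As a minimal sketch of the distinction the ABI comment above is drawing, a consumer environment can inspect both properties at runtime (the version string below is just the pinned example; torch.compiled_with_cxx11_abi() is PyTorch's public helper):

    import torch

    # A wheel from this workflow only loads against the torch build it was
    # compiled for, so check both the version and the C++11 ABI setting.
    print(torch.__version__)                # e.g. "2.4.0.dev20240514"
    print(torch.compiled_with_cxx11_abi())  # False for PyPI wheels, True in nvcr images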
flash_attn/__init__.py (2 changes: 1 addition, 1 deletion)
@@ -1,4 +1,4 @@
-__version__ = "2.6.0"
+__version__ = "2.6.0.post1"
 
 from flash_attn.flash_attn_interface import (
     flash_attn_func,
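Downstream code can confirm it picked up the post-release bump; a minimal sketch, assuming flash-attn 2.6.0.post1 is the installed wheel:

    import flash_attn
    from flash_attn import flash_attn_func  # re-exported by the __init__.py above

    assert flash_attn.__version__ == "2.6.0.post1"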
training/Dockerfile (4 changes: 2 additions, 2 deletions)
@@ -85,7 +85,7 @@ RUN pip install transformers==4.25.1 datasets==2.8.0 pytorch-lightning==1.8.6 tr
 RUN pip install git+https://github.com/mlcommons/logging.git@2.1.0
 
 # Install FlashAttention
-RUN pip install flash-attn==2.6.0
+RUN pip install flash-attn==2.6.0.post1
 
 # Install CUDA extensions for fused dense
-RUN pip install git+https://github.com/HazyResearch/flash-attention@v2.6.0#subdirectory=csrc/fused_dense_lib
+RUN pip install git+https://github.com/HazyResearch/flash-attention@v2.6.0.post1#subdirectory=csrc/fused_dense_lib
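The wheel pin and the git tag are bumped together so the prebuilt kernels and the fused-dense extension come from the same release. A rough in-container sanity check might look like this (a sketch; the fused_dense_lib import name mirrors how flash_attn loads the extension and is an assumption here):

    from importlib.metadata import version

    # Both Dockerfile installs should resolve to the same release.
    assert version("flash-attn") == "2.6.0.post1"

    import fused_dense_lib  # the extension built from csrc/fused_dense_lib (assumed module name)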
