[MRG] relax the FP8 CUDA arch limitation to SM89 #549

Merged: 2 commits, Aug 21, 2024
Changes from 1 commit
10 changes: 5 additions & 5 deletions torchtitan/float8.py
@@ -23,9 +23,9 @@
from torchtitan.parallelisms import ParallelDims


-def _is_sm90_or_later():
-    # Float8 is only supported on H100+ GPUs
-    return torch.cuda.is_available() and torch.cuda.get_device_capability() >= (9, 0)
+def _is_sm89_or_later():
+    # Float8 is only supported on SM89 or later (e.g. H100, or Ada GPUs such as the RTX 4090)
+    return torch.cuda.is_available() and torch.cuda.get_device_capability() >= (8, 9)


class Float8Handler:
@@ -35,9 +35,9 @@ def __init__(self, job_config: JobConfig, parallel_dims: ParallelDims):
        float8_config = job_config.float8
        if not float8_config.enable_float8_linear:
            return
-        if not _is_sm90_or_later():
+        if not _is_sm89_or_later():
             logger.warning(
-                "Failed to swap to Float8Linear because SM90 or later is not available",
+                "Failed to swap to Float8Linear because SM89 or later is not available",
             )
             return
         try:
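
For reference, below is a minimal, self-contained sketch of how the relaxed capability check behaves. The helper mirrors the function added in this diff; the __main__ probe and the example GPUs named in the comments are illustrative assumptions, not part of the PR.

import torch

def _is_sm89_or_later():
    # Compute capability (8, 9) is Ada (e.g. RTX 4090, L4/L40); (9, 0) is Hopper (H100).
    # Python tuples compare lexicographically, so (9, 0) >= (8, 9) also passes.
    return torch.cuda.is_available() and torch.cuda.get_device_capability() >= (8, 9)

if __name__ == "__main__":
    if torch.cuda.is_available():
        # e.g. an RTX 4090 reports (8, 9), an A100 reports (8, 0)
        print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))
    print("float8 path enabled:", _is_sm89_or_later())

With this change, FP8 training is gated on compute capability 8.9 and above rather than 9.0, so Ada-generation GPUs pass the check in addition to Hopper.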