Lint fixes #1 torchao/dtypes #827

Merged: 3 commits, Sep 6, 2024
ruff.toml (7 additions, 0 deletions)
@@ -2,6 +2,8 @@
 # We plan to add files in chunks using the 'include' list below.
 # To add a new path: Simply add it to the 'include' list.
 # Example: To lint all files in every subfolder of 'test', add "test/**/*"
+# To exclude a file type: Simply add it to the 'exclude' list below.
+# Example: To exclude all Markdown files, add "**/*.md"
 include = [
 "torchao/float8/inference.py",
 "torchao/float8/float8_utils.py",
@@ -10,4 +12,9 @@ include = [
 "torchao/float8/float8_tensor.py",
 "torchao/quantization/linear_activation_weight_observer.py",
 "test/quantization/test_observer.py",
+"torchao/dtypes/*"
 ]
+
+exclude = [
+"**/*.md"
+]
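For context on how the two lists interact: ruff only checks paths matched by 'include', and 'exclude' patterns are honored on top of that, so a file must match an include glob and no exclude glob to be linted. A minimal sketch of the resulting behavior (condensed; the real include list is longer, and the README path is illustrative):

# Condensed illustration of the merged config, not the full file
include = [
    "torchao/dtypes/*",  # e.g. torchao/dtypes/uint4.py is now linted
]
exclude = [
    "**/*.md",           # e.g. a torchao/dtypes/README.md would be skipped
]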
torchao/dtypes/__init__.py (2 additions, 1 deletion)
@@ -1,4 +1,5 @@
 from .nf4tensor import NF4Tensor, to_nf4
+
 # from ..prototype.dtypes.uint2 import UInt2Tensor, BitnetTensor
 from .uint4 import UInt4Tensor
 from .affine_quantized_tensor import (
@@ -21,7 +22,7 @@
 __all__ = [
 "NF4Tensor",
 "to_nf4",
-"UInt4Tensor"
+"UInt4Tensor",
 "AffineQuantizedTensor",
 "to_affine_quantized_intx",
 "to_affine_quantized_intx_static",
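One note on why the trailing comma added after "UInt4Tensor" matters beyond style: adjacent string literals in Python are concatenated at parse time, so the missing comma silently fused two __all__ entries. A minimal standalone sketch of the failure mode (not the real module):

# Adjacent literals with no comma between them merge into one string:
__all__ = [
    "NF4Tensor",
    "to_nf4",
    "UInt4Tensor"             # missing comma here...
    "AffineQuantizedTensor",  # ...so this fuses with the line above
]
print(__all__)
# ['NF4Tensor', 'to_nf4', 'UInt4TensorAffineQuantizedTensor']

With the fused entry, a wildcard import (from torchao.dtypes import *) raises AttributeError, since the module has no attribute named "UInt4TensorAffineQuantizedTensor".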