Replace black and blackdoc with ruff-format (#9506)
* Replace black with ruff-format

* Fix formatting mistakes moving mypy comments

* Replace black with ruff in the contributing guides
Armavica authored Oct 17, 2024
1 parent 1e579fb commit b9780e7
Showing 33 changed files with 41 additions and 70 deletions.
7 changes: 1 addition & 6 deletions .pre-commit-config.yaml
@@ -15,20 +15,15 @@ repos:
# Ruff version.
rev: 'v0.6.9'
hooks:
- id: ruff-format
- id: ruff
args: ["--fix", "--show-fixes"]
# https://github.com/python/black#version-control-integration
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 24.8.0
hooks:
- id: black-jupyter
- repo: https://github.com/keewis/blackdoc
rev: v0.3.9
hooks:
- id: blackdoc
exclude: "generate_aggregations.py"
additional_dependencies: ["black==24.8.0"]
- id: blackdoc-autoupdate-black
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.11.2
hooks:
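For reference, a sketch of how the hooks section of `.pre-commit-config.yaml` reads after this change (abridged; the `repo:` line sits above the excerpted hunk, so the astral-sh ruff-pre-commit URL is assumed, not shown in the diff):

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit  # assumed; above the hunk
    # Ruff version.
    rev: 'v0.6.9'
    hooks:
      - id: ruff-format
      - id: ruff
        args: ["--fix", "--show-fixes"]
```

Running `ruff-format` before `ruff` lets the lint hook's `--fix` pass operate on already-formatted code.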
3 changes: 1 addition & 2 deletions CORE_TEAM_GUIDE.md
@@ -271,8 +271,7 @@ resources such as:
[NumPy documentation guide](https://numpy.org/devdocs/dev/howto-docs.html#documentation-style)
for docstring conventions.
- [`pre-commit`](https://pre-commit.com) hooks for autoformatting.
- [`black`](https://github.com/psf/black) autoformatting.
- [`flake8`](https://github.com/PyCQA/flake8) linting.
- [`ruff`](https://github.com/astral-sh/ruff) autoformatting and linting.
- [python-xarray](https://stackoverflow.com/questions/tagged/python-xarray) on Stack Overflow.
- [@xarray_dev](https://twitter.com/xarray_dev) on Twitter.
- [xarray-dev](https://discord.gg/bsSGdwBn) discord community (normally only used for remote synchronous chat during sprints).
2 changes: 1 addition & 1 deletion ci/min_deps_check.py
@@ -3,6 +3,7 @@
publication date. Compare it against requirements/min-all-deps.yml to verify the
policy on obsolete dependencies is being followed. Print a pretty report :)
"""

from __future__ import annotations

import itertools
@@ -16,7 +17,6 @@

CHANNELS = ["conda-forge", "defaults"]
IGNORE_DEPS = {
"black",
"coveralls",
"flake8",
"hypothesis",
1 change: 0 additions & 1 deletion ci/requirements/all-but-dask.yml
@@ -3,7 +3,6 @@ channels:
- conda-forge
- nodefaults
dependencies:
- black
- aiobotocore
- array-api-strict
- boto3
8 changes: 2 additions & 6 deletions doc/contributing.rst
@@ -549,11 +549,7 @@ Code Formatting

xarray uses several tools to ensure a consistent code format throughout the project:

- `Black <https://black.readthedocs.io/en/stable/>`_ for standardized
code formatting,
- `blackdoc <https://blackdoc.readthedocs.io/en/stable/>`_ for
standardized code formatting in documentation,
- `ruff <https://github.com/charliermarsh/ruff/>`_ for code quality checks and standardized order in imports
- `ruff <https://github.com/astral-sh/ruff>`_ for formatting, code quality checks and standardized order in imports
- `absolufy-imports <https://github.com/MarcoGorelli/absolufy-imports>`_ for absolute instead of relative imports from different files,
- `mypy <http://mypy-lang.org/>`_ for static type checking on `type hints
<https://docs.python.org/3/library/typing.html>`_.
@@ -1069,7 +1065,7 @@ PR checklist
- Test the code using `Pytest <http://doc.pytest.org/en/latest/>`_. Running all tests (type ``pytest`` in the root directory) takes a while, so feel free to only run the tests you think are needed based on your PR (example: ``pytest xarray/tests/test_dataarray.py``). CI will catch any failing tests.
- By default, the upstream dev CI is disabled on pull request and push events. You can override this behavior per commit by adding a ``[test-upstream]`` tag to the first line of the commit message. For documentation-only commits, you can skip the CI per commit by adding a ``[skip-ci]`` tag to the first line of the commit message.
- Test the code using `Pytest <http://doc.pytest.org/en/latest/>`_. Running all tests (type ``pytest`` in the root directory) takes a while, so feel free to only run the tests you think are needed based on your PR (example: ``pytest xarray/tests/test_dataarray.py``). CI will catch any failing tests.
- By default, the upstream dev CI is disabled on pull request and push events. You can override this behavior per commit by adding a ``[test-upstream]`` tag to the first line of the commit message. For documentation-only commits, you can skip the CI per commit by adding a ``[skip-ci]`` tag to the first line of the commit message.
- **Properly format your code** and verify that it passes the formatting guidelines set by `Black <https://black.readthedocs.io/en/stable/>`_ and `Flake8 <http://flake8.pycqa.org/en/latest/>`_. See `"Code formatting" <https://docs.xarray.dev/en/stable/contributing.html#code-formatting>`_. You can use `pre-commit <https://pre-commit.com/>`_ to run these automatically on each commit.
- **Properly format your code** and verify that it passes the formatting guidelines set by `ruff <https://github.com/astral-sh/ruff>`_. See `"Code formatting" <https://docs.xarray.dev/en/stable/contributing.html#code-formatting>`_. You can use `pre-commit <https://pre-commit.com/>`_ to run these automatically on each commit.
- Run ``pre-commit run --all-files`` in the root directory. This may modify some files. Confirm and commit any formatting changes.
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -233,7 +233,7 @@ extend-exclude = [

[tool.ruff.lint]
# E402: module level import not at top of file
# E501: line too long - let black worry about that
# E501: line too long - let the formatter worry about that
# E731: do not assign a lambda expression, use a def
extend-safe-fixes = [
"TID252", # absolute imports
7 changes: 2 additions & 5 deletions xarray/backends/common.py
@@ -206,13 +206,10 @@ def load(self):
For example::
class SuffixAppendingDataStore(AbstractDataStore):
def load(self):
variables, attributes = AbstractDataStore.load(self)
variables = {'%s_suffix' % k: v
for k, v in variables.items()}
attributes = {'%s_suffix' % k: v
for k, v in attributes.items()}
variables = {"%s_suffix" % k: v for k, v in variables.items()}
attributes = {"%s_suffix" % k: v for k, v in attributes.items()}
return variables, attributes
This function will be called anytime variables or attributes
2 changes: 1 addition & 1 deletion xarray/backends/file_manager.py
@@ -65,7 +65,7 @@ class CachingFileManager(FileManager):
Example usage::
manager = FileManager(open, 'example.txt', mode='w')
manager = FileManager(open, "example.txt", mode="w")
f = manager.acquire()
f.write(...)
manager.close() # ensures file is closed
2 changes: 0 additions & 2 deletions xarray/backends/h5netcdf_.py
@@ -474,7 +474,6 @@ def open_datatree(
driver_kwds=None,
**kwargs,
) -> DataTree:

from xarray.core.datatree import DataTree

groups_dict = self.open_groups_as_dict(
@@ -520,7 +519,6 @@ def open_groups_as_dict(
driver_kwds=None,
**kwargs,
) -> dict[str, Dataset]:

from xarray.backends.common import _iter_nc_groups
from xarray.core.treenode import NodePath
from xarray.core.utils import close_on_error
1 change: 0 additions & 1 deletion xarray/backends/netCDF4_.py
@@ -710,7 +710,6 @@ def open_datatree(
autoclose=False,
**kwargs,
) -> DataTree:

from xarray.core.datatree import DataTree

groups_dict = self.open_groups_as_dict(
4 changes: 2 additions & 2 deletions xarray/backends/plugins.py
@@ -80,7 +80,7 @@ def backends_dict_from_pkg(


def set_missing_parameters(
backend_entrypoints: dict[str, type[BackendEntrypoint]]
backend_entrypoints: dict[str, type[BackendEntrypoint]],
) -> None:
for _, backend in backend_entrypoints.items():
if backend.open_dataset_parameters is None:
@@ -89,7 +89,7 @@ def set_missing_parameters(


def sort_backends(
backend_entrypoints: dict[str, type[BackendEntrypoint]]
backend_entrypoints: dict[str, type[BackendEntrypoint]],
) -> dict[str, type[BackendEntrypoint]]:
ordered_backends_entrypoints = {}
for be_name in STANDARD_BACKENDS_ORDER:
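The plugins.py hunks add a trailing comma after the lone parameter, which keeps the wrapped, one-parameter-per-line form stable under the formatter. A toy sketch of that style (the name and simplified types here are hypothetical, not xarray's actual code):

```python
def sort_backends_sketch(
    backend_entrypoints: dict[str, type],  # trailing comma keeps the wrapped form
) -> dict[str, type]:
    """Toy stand-in: return the entrypoints sorted by backend name."""
    return dict(sorted(backend_entrypoints.items()))


print(sort_backends_sketch({"b": int, "a": str}))  # {'a': <class 'str'>, 'b': <class 'int'>}
```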
4 changes: 0 additions & 4 deletions xarray/backends/zarr.py
@@ -496,7 +496,6 @@ def open_store(
zarr_version=None,
write_empty: bool | None = None,
):

zarr_group, consolidate_on_close, close_store_on_close = _get_open_params(
store=store,
mode=mode,
@@ -542,7 +541,6 @@ def open_group(
zarr_version=None,
write_empty: bool | None = None,
):

zarr_group, consolidate_on_close, close_store_on_close = _get_open_params(
store=store,
mode=mode,
@@ -1338,7 +1336,6 @@ def open_groups_as_dict(
zarr_version=None,
**kwargs,
) -> dict[str, Dataset]:

from xarray.core.treenode import NodePath

filename_or_obj = _normalize_path(filename_or_obj)
@@ -1385,7 +1382,6 @@ def open_groups_as_dict(


def _iter_zarr_groups(root: ZarrGroup, parent: str = "/") -> Iterable[str]:

parent_nodepath = NodePath(parent)
yield str(parent_nodepath)
for path, group in root.groups():
3 changes: 1 addition & 2 deletions xarray/convert.py
@@ -1,5 +1,4 @@
"""Functions for converting to and from xarray objects
"""
"""Functions for converting to and from xarray objects"""

from collections import Counter

2 changes: 1 addition & 1 deletion xarray/core/dataarray.py
@@ -1582,7 +1582,7 @@ def sel(
Do not try to assign values when using any of the indexing methods
``isel`` or ``sel``::
da = xr.DataArray([0, 1, 2, 3], dims=['x'])
da = xr.DataArray([0, 1, 2, 3], dims=["x"])
# DO NOT do this
da.isel(x=[0, 1, 2])[1] = -1
1 change: 0 additions & 1 deletion xarray/core/datatree.py
@@ -801,7 +801,6 @@ def _replace_node(
data: Dataset | Default = _default,
children: dict[str, DataTree] | Default = _default,
) -> None:

ds = self.to_dataset(inherit=False) if data is _default else data

if children is _default:
3 changes: 1 addition & 2 deletions xarray/core/formatting.py
@@ -1,5 +1,4 @@
"""String formatting routines for __repr__.
"""
"""String formatting routines for __repr__."""

from __future__ import annotations

10 changes: 7 additions & 3 deletions xarray/core/groupby.py
@@ -235,7 +235,9 @@ def to_array(self) -> DataArray:
T_Group = Union["T_DataArray", _DummyGroup]


def _ensure_1d(group: T_Group, obj: T_DataWithCoords) -> tuple[
def _ensure_1d(
group: T_Group, obj: T_DataWithCoords
) -> tuple[
T_Group,
T_DataWithCoords,
Hashable | None,
@@ -462,7 +464,10 @@ def factorize(self) -> EncodedGroups:
)
# NaNs; as well as values outside the bins are coded by -1
# Restore these after the raveling
mask = functools.reduce(np.logical_or, [(code == -1) for code in broadcasted_codes]) # type: ignore[arg-type]
mask = functools.reduce(
np.logical_or, # type: ignore[arg-type]
[(code == -1) for code in broadcasted_codes],
)
_flatcodes[mask] = -1

midx = pd.MultiIndex.from_product(
@@ -1288,7 +1293,6 @@ def _concat_shortcut(self, applied, dim, positions=None):
return self._obj._replace_maybe_drop_dims(reordered)

def _restore_dim_order(self, stacked: DataArray) -> DataArray:

def lookup_order(dimension):
for grouper in self.groupers:
if dimension == grouper.name and grouper.group.ndim == 1:
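The `factorize` hunk above reshapes a one-line `functools.reduce` call into a multi-line one while keeping the mypy suppression attached to the `np.logical_or` argument it targets. A minimal runnable sketch of the same mask-combining pattern (the code arrays here are made up for illustration):

```python
import functools

import numpy as np

# Mark positions that are -1 (NaN / out-of-bins) in any of the code arrays,
# mirroring the factorize() hunk above. Illustrative data only.
broadcasted_codes = [np.array([0, -1, 2]), np.array([-1, 1, 2])]
mask = functools.reduce(np.logical_or, [(code == -1) for code in broadcasted_codes])
print(mask.tolist())  # [True, True, False]
```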
2 changes: 1 addition & 1 deletion xarray/core/indexes.py
@@ -1768,7 +1768,7 @@ def indexes_equal(


def indexes_all_equal(
elements: Sequence[tuple[Index, dict[Hashable, Variable]]]
elements: Sequence[tuple[Index, dict[Hashable, Variable]]],
) -> bool:
"""Check if indexes are all equal.
3 changes: 2 additions & 1 deletion xarray/core/parallel.py
@@ -372,7 +372,8 @@ def _wrapper(

# ChainMap wants MutableMapping, but xindexes is Mapping
merged_indexes = collections.ChainMap(
expected["indexes"], merged_coordinates.xindexes # type: ignore[arg-type]
expected["indexes"],
merged_coordinates.xindexes, # type: ignore[arg-type]
)
expected_index = merged_indexes.get(name, None)
if expected_index is not None and not index.equals(expected_index):
8 changes: 6 additions & 2 deletions xarray/core/resample.py
@@ -188,7 +188,9 @@ def _interpolate(self, kind="linear", **kwargs) -> T_Xarray:


# https://github.com/python/mypy/issues/9031
class DataArrayResample(Resample["DataArray"], DataArrayGroupByBase, DataArrayResampleAggregations): # type: ignore[misc]
class DataArrayResample( # type: ignore[misc]
Resample["DataArray"], DataArrayGroupByBase, DataArrayResampleAggregations
):
"""DataArrayGroupBy object specialized to time resampling operations over a
specified dimension
"""
@@ -329,7 +331,9 @@ def asfreq(self) -> DataArray:


# https://github.com/python/mypy/issues/9031
class DatasetResample(Resample["Dataset"], DatasetGroupByBase, DatasetResampleAggregations): # type: ignore[misc]
class DatasetResample( # type: ignore[misc]
Resample["Dataset"], DatasetGroupByBase, DatasetResampleAggregations
):
"""DatasetGroupBy object specialized to resampling a specified dimension"""

def map(
1 change: 0 additions & 1 deletion xarray/core/variable.py
@@ -839,7 +839,6 @@ def _getitem_with_mask(self, key, fill_value=dtypes.NA):
dims, indexer, new_order = self._broadcast_indexes(key)

if self.size:

if is_duck_dask_array(self._data):
# dask's indexing is faster this way; also vindex does not
# support negative indices yet:
6 changes: 4 additions & 2 deletions xarray/namedarray/_array_api.py
@@ -79,7 +79,8 @@ def astype(


def imag(
x: NamedArray[_ShapeType, np.dtype[_SupportsImag[_ScalarType]]], / # type: ignore[type-var]
x: NamedArray[_ShapeType, np.dtype[_SupportsImag[_ScalarType]]], # type: ignore[type-var]
/,
) -> NamedArray[_ShapeType, np.dtype[_ScalarType]]:
"""
Returns the imaginary component of a complex number for each element x_i of the
@@ -111,7 +112,8 @@ def imag(


def real(
x: NamedArray[_ShapeType, np.dtype[_SupportsReal[_ScalarType]]], / # type: ignore[type-var]
x: NamedArray[_ShapeType, np.dtype[_SupportsReal[_ScalarType]]], # type: ignore[type-var]
/,
) -> NamedArray[_ShapeType, np.dtype[_ScalarType]]:
"""
Returns the real component of a complex number for each element x_i of the
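The `imag`/`real` hunks also illustrate the second bullet of the commit message: when the formatter wraps a signature, a trailing `# type: ignore` comment must move to the line that actually triggers the error. A toy sketch (not xarray's implementation) of the wrapped, positional-only style:

```python
def imag_sketch(
    x: complex,  # a comment here stays attached to the parameter it describes
    /,
) -> float:
    """Toy stand-in for the wrapped `imag` signature above."""
    return x.imag


print(imag_sketch(3 + 4j))  # 4.0
```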
2 changes: 1 addition & 1 deletion xarray/plot/utils.py
@@ -986,7 +986,7 @@ def legend_elements(
This is useful for obtaining a legend for a `~.Axes.scatter` plot;
e.g.::
scatter = plt.scatter([1, 2, 3], [4, 5, 6], c=[7, 2, 3])
scatter = plt.scatter([1, 2, 3], [4, 5, 6], c=[7, 2, 3])
plt.legend(*scatter.legend_elements())
creates three legend elements, one for each color with the numerical
3 changes: 0 additions & 3 deletions xarray/tests/test_cftime_offsets.py
@@ -1673,7 +1673,6 @@ def test_new_to_legacy_freq_anchored(year_alias, n):
),
)
def test_legacy_to_new_freq_pd_freq_passthrough(freq, expected):

result = _legacy_to_new_freq(freq)
assert result == expected

@@ -1699,7 +1698,6 @@ def test_legacy_to_new_freq_pd_freq_passthrough(freq, expected):
),
)
def test_new_to_legacy_freq_pd_freq_passthrough(freq, expected):

result = _new_to_legacy_freq(freq)
assert result == expected

@@ -1786,7 +1784,6 @@ def test_date_range_no_freq(start, end, periods):
)
@pytest.mark.parametrize("has_year_zero", [False, True])
def test_offset_addition_preserves_has_year_zero(offset, has_year_zero):

with warnings.catch_warnings():
warnings.filterwarnings("ignore", message="this date/calendar/year zero")
datetime = cftime.DatetimeGregorian(-1, 12, 31, has_year_zero=has_year_zero)
5 changes: 4 additions & 1 deletion xarray/tests/test_coordinates.py
@@ -64,7 +64,10 @@ def test_init_index_error(self) -> None:
Coordinates(indexes={"x": idx})

with pytest.raises(TypeError, match=".* is not an `xarray.indexes.Index`"):
Coordinates(coords={"x": ("x", [1, 2, 3])}, indexes={"x": "not_an_xarray_index"}) # type: ignore[dict-item]
Coordinates(
coords={"x": ("x", [1, 2, 3])},
indexes={"x": "not_an_xarray_index"}, # type: ignore[dict-item]
)

def test_init_dim_sizes_conflict(self) -> None:
with pytest.raises(ValueError):
4 changes: 1 addition & 3 deletions xarray/tests/test_dataset.py
@@ -297,9 +297,7 @@ def test_repr(self) -> None:
var2 (dim1, dim2) float64 576B 1.162 -1.097 -2.123 ... 1.267 0.3328
var3 (dim3, dim1) float64 640B 0.5565 -0.2121 0.4563 ... -0.2452 -0.3616
Attributes:
foo: bar""".format(
data["dim3"].dtype
)
foo: bar""".format(data["dim3"].dtype)
)
actual = "\n".join(x.rstrip() for x in repr(data).split("\n"))
print(actual)