Separate const of ndarray from const of its data. #491

Merged (7 commits) · Jul 26, 2024
Changes from 4 commits
29 changes: 16 additions & 13 deletions docs/api_extra.rst
@@ -549,6 +549,12 @@ section <ndarrays>`.

.. cpp:class:: template <typename... Args> ndarray

.. cpp:var:: is_ro

A constant static boolean that is true if the array's data is read-only.
This is determined by the class template arguments, not by any dynamic
properties of the referenced array.
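
   For illustration (not part of the diff): since ``is_ro`` depends only on the
   class template arguments, it can be checked at compile time. A minimal
   sketch, assuming ``<nanobind/ndarray.h>`` is included and ``nb`` aliases the
   nanobind namespace:

       namespace nb = nanobind;

       // is_ro reflects the declared type, not any runtime property of a bound array:
       static_assert(!nb::ndarray<>::is_ro);            // unconstrained: writable
       static_assert(!nb::ndarray<float>::is_ro);       // non-const scalar type
       static_assert(nb::ndarray<const float>::is_ro);  // const scalar type
       static_assert(nb::ndarray<nb::ro>::is_ro);       // explicit nb::ro annotation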

.. cpp:function:: ndarray() = default

Create an invalid array.
@@ -678,14 +684,19 @@ section <ndarrays>`.
In a multi-device/GPU setup, this function returns the ID of the device
storing the array.

-.. cpp:function:: const Scalar * data() const
+.. cpp:function:: Scalar * data() const

-   Return a const pointer to the array data.
+   Return a pointer to the array data.
+   If :cpp:var:`is_ro` is true, a pointer-to-const is returned.

-.. cpp:function:: Scalar * data()
+.. cpp:function:: template <typename... Ts> auto& operator()(Ts... indices)

-   Return a mutable pointer to the array data. Only enabled when `Scalar` is
-   not itself ``const``.
+   Return a reference to the element stored at the provided index/indices.
+   If :cpp:var:`is_ro` is true, a reference-to-const is returned.
+   Note that ``sizeof(Ts)`` must match :cpp:func:`ndim()`.
+
+   This accessor is only available when the scalar type and array dimension
+   were specified as template parameters.
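
   Taken together, ``data()`` and ``operator()`` now derive constness from the
   array's element type rather than from the const qualification of the
   ndarray object. A sketch of both cases, assuming the usual ``nb`` alias and
   ``<type_traits>``:

       // Writable element type: data() returns double *, a(i) returns double &.
       void scale(nb::ndarray<double, nb::ndim<1>> a) {
           for (size_t i = 0; i < a.shape(0); ++i)
               a(i) *= 2.0;                              // mutable reference
       }

       // Read-only element type: the same members return const-qualified results.
       double sum(const nb::ndarray<const double, nb::ndim<1>> &a) {
           static_assert(std::is_const_v<std::remove_pointer_t<decltype(a.data())>>);
           double s = 0.0;
           for (size_t i = 0; i < a.shape(0); ++i)
               s += a(i);                                // reference to const
           return s;
       }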

.. cpp:function:: template <typename... Extra> auto view()

@@ -698,14 +709,6 @@ section <ndarrays>`.
``shape()``, ``stride()``, and ``operator()`` following the conventions
of the `ndarray` type.

-.. cpp:function:: template <typename... Ts> auto& operator()(Ts... indices)
-
-   Return a mutable reference to the element at stored at the provided
-   index/indices. ``sizeof(Ts)`` must match :cpp:func:`ndim()`.
-
-   This accessor is only available when the scalar type and array dimension
-   were specified as template parameters.

Review thread on this hunk (resolved):

Owner: What's the reason for this removal? AFAIK this operator is still there, and it should be documented.

hpkfft (contributor, author): It's not removed; it's on lines 692-696. The diff algorithm got confused (from a human perspective) and spliced things together weirdly.

There had been two data() functions documented, with and without const-qualified this. Now there is only one, as the return value is const-qualified based on the writability of the actual data and not on the const qualification of the ndarray object itself.

I did move the documentation of operator() above that of view(). I thought it would be helpful for readers to see operator() next to data() so they could see both and decide what better suits their purpose. Also, view() documents that it provides operator() following the same conventions as ndarray, so it seemed nice to document all of ndarray's functions before discussing view().

hpkfft (contributor, author): Or, lines 807-814 if you're looking at the diff after the merge commit...

Data types
^^^^^^^^^^

4 changes: 2 additions & 2 deletions docs/ndarray.rst
@@ -33,14 +33,14 @@ Binding functions that take arrays as input
-------------------------------------------

A function that accepts a :cpp:class:`nb::ndarray\<\> <ndarray>`-typed parameter
-(i.e., *without* template parameters) can be called with *any* array
+(i.e., *without* template parameters) can be called with *any* writable array
from any framework regardless of the device on which it is stored. The
following example binding declaration uses this functionality to inspect the
properties of an arbitrary input array:

.. code-block:: cpp

m.def("inspect", [](nb::ndarray<> a) {
m.def("inspect", [](const nb::ndarray<>& a) {
printf("Array data pointer : %p\n", a.data());
printf("Array dimension : %zu\n", a.ndim());
for (size_t i = 0; i < a.ndim(); ++i) {
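Conversely, a binding that should also accept read-only inputs (e.g., a NumPy
array created with ``writeable=False``) can opt in with the ``nb::ro``
annotation. A hedged sketch; the function name is illustrative and ``m`` is
the usual NB_MODULE handle:

    m.def("inspect_ro", [](const nb::ndarray<nb::ro> &a) {
        // data() returns a pointer-to-const; this binding promises not to write.
        printf("Array data pointer : %p\n", (void *) a.data());
        printf("Array dimension    : %zu\n", a.ndim());
    });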
86 changes: 55 additions & 31 deletions include/nanobind/ndarray.h
@@ -109,7 +109,7 @@ template <size_t N> using ndim = typename detail::ndim_shape<std::make_index_seq
template <typename T> constexpr dlpack::dtype dtype() {
static_assert(
detail::is_ndarray_scalar_v<T>,
"nanobind::dtype<T>: T must be a floating point or integer variable!"
"nanobind::dtype<T>: T must be a floating point or integer type!"
);

dlpack::dtype result;
@@ -265,53 +265,83 @@ template <typename T> struct ndarray_arg<T, enable_if_t<T::is_device>> {
 template <typename... Ts> struct ndarray_info {
     using scalar_type = void;
     using shape_type = void;
+    constexpr static bool is_ro = false;
     constexpr static auto name = const_name("ndarray");
     constexpr static ndarray_framework framework = ndarray_framework::none;
     constexpr static char order = '\0';
 };

 template <typename T, typename... Ts> struct ndarray_info<T, Ts...> : ndarray_info<Ts...> {
     using scalar_type =
-        std::conditional_t<ndarray_traits<T>::is_float || ndarray_traits<T>::is_int ||
-                           ndarray_traits<T>::is_bool || ndarray_traits<T>::is_complex,
-                           T, typename ndarray_info<Ts...>::scalar_type>;
+        std::conditional_t<
+            detail::is_ndarray_scalar_v<T> &&
+            std::is_void_v<typename ndarray_info<Ts...>::scalar_type>,
+            T, typename ndarray_info<Ts...>::scalar_type>;
+
+    constexpr static bool is_ro = ndarray_info<Ts...>::is_ro ||
+        (detail::is_ndarray_scalar_v<T> && std::is_const_v<T>);
 };

+template <typename... Ts> struct ndarray_info<ro, Ts...> : ndarray_info<Ts...> {
+    constexpr static bool is_ro = true;
+};
+
 template <ssize_t... Is, typename... Ts> struct ndarray_info<shape<Is...>, Ts...> : ndarray_info<Ts...> {
-    using shape_type = shape<Is...>;
+    using shape_type =
+        std::conditional_t<
+            std::is_void_v<typename ndarray_info<Ts...>::shape_type>,
+            shape<Is...>, typename ndarray_info<Ts...>::shape_type>;
 };

 template <typename... Ts> struct ndarray_info<c_contig, Ts...> : ndarray_info<Ts...> {
+    static_assert(ndarray_info<Ts...>::order == '\0'
+                  || ndarray_info<Ts...>::order == 'C',
+                  "The order can only be set once.");
     constexpr static char order = 'C';
 };

 template <typename... Ts> struct ndarray_info<f_contig, Ts...> : ndarray_info<Ts...> {
+    static_assert(ndarray_info<Ts...>::order == '\0'
+                  || ndarray_info<Ts...>::order == 'F',
+                  "The order can only be set once.");
     constexpr static char order = 'F';
 };

 template <typename... Ts> struct ndarray_info<numpy, Ts...> : ndarray_info<Ts...> {
+    static_assert(ndarray_info<Ts...>::framework == ndarray_framework::none
+                  || ndarray_info<Ts...>::framework == ndarray_framework::numpy,
+                  "The framework can only be set once.");
     constexpr static auto name = const_name("numpy.ndarray");
     constexpr static ndarray_framework framework = ndarray_framework::numpy;
 };

 template <typename... Ts> struct ndarray_info<pytorch, Ts...> : ndarray_info<Ts...> {
+    static_assert(ndarray_info<Ts...>::framework == ndarray_framework::none
+                  || ndarray_info<Ts...>::framework == ndarray_framework::pytorch,
+                  "The framework can only be set once.");
     constexpr static auto name = const_name("torch.Tensor");
     constexpr static ndarray_framework framework = ndarray_framework::pytorch;
 };

 template <typename... Ts> struct ndarray_info<tensorflow, Ts...> : ndarray_info<Ts...> {
+    static_assert(ndarray_info<Ts...>::framework == ndarray_framework::none
+                  || ndarray_info<Ts...>::framework == ndarray_framework::tensorflow,
+                  "The framework can only be set once.");
     constexpr static auto name = const_name("tensorflow.python.framework.ops.EagerTensor");
     constexpr static ndarray_framework framework = ndarray_framework::tensorflow;
 };

 template <typename... Ts> struct ndarray_info<jax, Ts...> : ndarray_info<Ts...> {
+    static_assert(ndarray_info<Ts...>::framework == ndarray_framework::none
+                  || ndarray_info<Ts...>::framework == ndarray_framework::jax,
+                  "The framework can only be set once.");
     constexpr static auto name = const_name("jaxlib.xla_extension.DeviceArray");
     constexpr static ndarray_framework framework = ndarray_framework::jax;
 };


NAMESPACE_END(detail)
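
The static_asserts added to these specializations turn conflicting annotations
into readable compile-time errors. Illustrative (hypothetical) failures,
assuming the nb alias:

    using ok = nb::ndarray<float, nb::c_contig>;  // single order annotation: fine
    // Instantiating either of the following fails to compile:
    //   nb::ndarray<nb::c_contig, nb::f_contig>  -> "The order can only be set once."
    //   nb::ndarray<nb::numpy, nb::pytorch>      -> "The framework can only be set once."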


template <typename Scalar, typename Shape, char Order> struct ndarray_view {
static constexpr size_t Dim = Shape::size;

@@ -375,7 +405,10 @@ template <typename... Args> class ndarray {
template <typename...> friend class ndarray;

using Info = detail::ndarray_info<Args...>;
-    using Scalar = typename Info::scalar_type;
+    static constexpr bool is_ro = Info::is_ro;
+    using Scalar = std::conditional_t<is_ro,
+                                      std::add_const_t<typename Info::scalar_type>,
+                                      typename Info::scalar_type>;

ndarray() = default;
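
    // What the conditional works out to for a few instantiations; a sketch
    // that assumes these member aliases are publicly accessible, as the
    // signatures in the documentation above suggest:
    //
    //     static_assert(std::is_same_v<nb::ndarray<double>::Scalar, double>);
    //     static_assert(std::is_same_v<nb::ndarray<const double>::Scalar, const double>);
    //     // nb::ro const-qualifies the deduced scalar type even if spelled non-const:
    //     static_assert(std::is_same_v<nb::ndarray<nb::ro, double>::Scalar, const double>);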

@@ -387,7 +420,7 @@
template <typename... Args2>
explicit ndarray(const ndarray<Args2...> &other) : ndarray(other.m_handle) { }

-    ndarray(std::conditional_t<std::is_const_v<Scalar>, const void *, void *> data,
+    ndarray(std::conditional_t<is_ro, const void *, void *> data,
size_t ndim,
const size_t *shape,
handle owner,
@@ -397,11 +430,11 @@
int32_t device_id = 0) {
m_handle = detail::ndarray_create(
(void *) data, ndim, shape, owner.ptr(), strides, &dtype,
-            std::is_const_v<Scalar>, device_type, device_id);
+            is_ro, device_type, device_id);
m_dltensor = *detail::ndarray_inc_ref(m_handle);
}

-    ndarray(std::conditional_t<std::is_const_v<Scalar>, const void *, void *> data,
+    ndarray(std::conditional_t<is_ro, const void *, void *> data,
std::initializer_list<size_t> shape,
handle owner,
std::initializer_list<int64_t> strides = { },
@@ -415,7 +448,7 @@
m_handle = detail::ndarray_create(
(void *) data, shape.size(), shape.begin(), owner.ptr(),
(strides.size() == 0) ? nullptr : strides.begin(), &dtype,
-            std::is_const_v<Scalar>, device_type, device_id);
+            is_ro, device_type, device_id);

m_dltensor = *detail::ndarray_inc_ref(m_handle);
}
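
    // With the constructor keyed on is_ro, const data can be handed to Python
    // without a cast. A hedged sketch (the static table and function name are
    // illustrative; an empty nb::handle() is assumed to be an acceptable owner
    // for data with static storage duration):
    //
    //     m.def("lookup_table", []() {
    //         static const float table[4] = { 1.0f, 2.0f, 4.0f, 8.0f };
    //         // Accepted as const void * because the array type is read-only.
    //         return nb::ndarray<nb::numpy, const float, nb::shape<4>>(
    //             table, { 4 }, nb::handle());
    //     });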
@@ -471,35 +504,26 @@
size_t itemsize() const { return ((size_t) dtype().bits + 7) / 8; }
size_t nbytes() const { return ((size_t) dtype().bits * size() + 7) / 8; }

-    const Scalar *data() const {
-        return (const Scalar *)((const uint8_t *) m_dltensor.data + m_dltensor.byte_offset);
-    }
-
-    template <typename T = Scalar, std::enable_if_t<!std::is_const_v<T>, int> = 1>
-    Scalar *data() {
+    Scalar *data() const {
         return (Scalar *) ((uint8_t *) m_dltensor.data +
                            m_dltensor.byte_offset);
     }

-    template <typename T = Scalar,
-              std::enable_if_t<!std::is_const_v<T>, int> = 1, typename... Ts>
-    NB_INLINE auto &operator()(Ts... indices) {
+    template <typename... Ts>
+    NB_INLINE auto& operator()(Ts... indices) const {
         return *(Scalar *) ((uint8_t *) m_dltensor.data +
                             byte_offset(indices...));
     }

-    template <typename... Ts> NB_INLINE const auto & operator()(Ts... indices) const {
-        return *(const Scalar *) ((const uint8_t *) m_dltensor.data +
-                                  byte_offset(indices...));
-    }
-

template <typename... Extra> NB_INLINE auto view() const {
using Info2 = typename ndarray<Args..., Extra...>::Info;
-        using Scalar2 = typename Info2::scalar_type;
+        using Scalar2 = std::conditional_t<Info2::is_ro,
+                                           std::add_const_t<typename Info2::scalar_type>,
+                                           typename Info2::scalar_type>;
         using Shape2 = typename Info2::shape_type;

-        constexpr bool has_scalar = !std::is_same_v<Scalar2, void>,
-                       has_shape = !std::is_same_v<Shape2, void>;
+        constexpr bool has_scalar = !std::is_void_v<Scalar2>,
+                       has_shape = !std::is_void_v<Shape2>;

static_assert(has_scalar,
"To use the ndarray::view<..>() method, you must add a scalar type "
@@ -523,8 +547,8 @@
private:
template <typename... Ts>
NB_INLINE int64_t byte_offset(Ts... indices) const {
-        constexpr bool has_scalar = !std::is_same_v<Scalar, void>,
-                       has_shape = !std::is_same_v<typename Info::shape_type, void>;
+        constexpr bool has_scalar = !std::is_void_v<Scalar>,
+                       has_shape = !std::is_void_v<typename Info::shape_type>;

static_assert(has_scalar,
"To use ndarray::operator(), you must add a scalar type "
Expand All @@ -542,7 +566,7 @@ template <typename... Args> class ndarray {
int64_t index = 0;
((index += int64_t(indices) * m_dltensor.strides[counter++]), ...);

-        return (int64_t) m_dltensor.byte_offset + index * sizeof(typename Info::scalar_type);
+        return (int64_t) m_dltensor.byte_offset + index * sizeof(Scalar);
} else {
return 0;
}
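
Restated for reference: the offset arithmetic above multiplies each index by
DLPack's element-count stride and scales by sizeof(Scalar) once at the end. A
standalone sketch with illustrative names (not nanobind API):

    #include <cstdint>
    #include <cstdio>

    // Mirrors byte_offset for a 2-D double array; strides count elements.
    std::int64_t byte_offset_2d(std::int64_t i, std::int64_t j,
                                const std::int64_t strides[2],
                                std::int64_t base_bytes) {
        std::int64_t index = i * strides[0] + j * strides[1]; // element index
        return base_bytes + index * (std::int64_t) sizeof(double);
    }

    int main() {
        const std::int64_t strides[2] = { 4, 1 }; // C-contiguous 3x4 array
        // Element (2, 3) begins at byte (2*4 + 3) * 8 = 88:
        std::printf("%lld\n", (long long) byte_offset_2d(2, 3, strides, 0));
        return 0;
    }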
96 changes: 95 additions & 1 deletion tests/test_ndarray.cpp
@@ -24,6 +24,43 @@ namespace nanobind {
}
#endif

template<bool expect_ro, bool is_shaped, typename... Ts>
bool check_ro(const nb::ndarray<Ts...>& a) { // Pytest passes five doubles
static_assert(std::remove_reference_t<decltype(a)>::is_ro == expect_ro);
static_assert(std::is_const_v<std::remove_pointer_t<decltype(a.data())>>
== expect_ro);
auto vd = a.template view<double, nb::ndim<1>>();
static_assert(std::is_const_v<std::remove_pointer_t<decltype(vd.data())>>
== expect_ro);
static_assert(std::is_const_v<std::remove_reference_t<decltype(vd(0))>>
== expect_ro);
auto vcd = a.template view<const double, nb::ndim<1>>();
static_assert(std::is_const_v<std::remove_pointer_t<decltype(vcd.data())>>);
static_assert(std::is_const_v<std::remove_reference_t<decltype(vcd(0))>>);

bool pass = vd.data() == a.data() && vcd.data() == a.data();
if constexpr (!expect_ro) {
vd(1) = 1.414214;
pass &= vcd(1) == 1.414214;
}
if constexpr (is_shaped) {
static_assert(std::is_const_v<std::remove_reference_t<decltype(a(0))>>
== expect_ro);
auto v = a.view();
static_assert(std::is_const_v<std::remove_pointer_t<decltype(v.data())>>
== expect_ro);
static_assert(std::is_const_v<std::remove_reference_t<decltype(v(0))>>
== expect_ro);
pass &= v.data() == a.data();
if constexpr (!expect_ro) {
a(2) = 2.718282;
v(4) = 16.0;
}
}
pass &= vcd(3) == 3.14159;
return pass;
}

NB_MODULE(test_ndarray_ext, m) {
m.def("get_is_valid", [](const nb::ndarray<nb::ro> &t) {
return t.is_valid();
@@ -90,6 +127,57 @@ NB_MODULE(test_ndarray_ext, m) {
[](const nb::ndarray<float, nb::c_contig,
nb::shape<-1, -1, 4>> &) {}, "array"_a.noconvert());

m.def("check_rw_by_value",
[](nb::ndarray<> a) {
return check_ro</*expect_ro=*/false, /*is_shaped=*/false>(a);
});
m.def("check_ro_by_value_ro",
[](nb::ndarray<nb::ro> a) {
return check_ro</*expect_ro=*/true, /*is_shaped=*/false>(a);
});
m.def("check_rw_by_value_float64",
[](nb::ndarray<double, nb::ndim<1>> a) {
return check_ro</*expect_ro=*/false, /*is_shaped=*/true>(a);
});
m.def("check_ro_by_value_const_float64",
[](nb::ndarray<const double, nb::ndim<1>> a) {
return check_ro</*expect_ro=*/true, /*is_shaped=*/true>(a);
});

m.def("check_rw_by_const_ref",
[](const nb::ndarray<>& a) {
return check_ro</*expect_ro=*/false, /*is_shaped=*/false>(a);
});
m.def("check_ro_by_const_ref_ro",
[](const nb::ndarray<nb::ro>& a) {
return check_ro</*expect_ro=*/true, /*is_shaped=*/false>(a);
});
m.def("check_rw_by_const_ref_float64",
[](nb::ndarray<double, nb::ndim<1>> a) {
return check_ro</*expect_ro=*/false, /*is_shaped=*/true>(a);
});
m.def("check_ro_by_const_ref_const_float64",
[](const nb::ndarray<const double, nb::ndim<1>>& a) {
return check_ro</*expect_ro=*/true, /*is_shaped=*/true>(a);
});

m.def("check_rw_by_rvalue_ref",
[](nb::ndarray<>&& a) {
return check_ro</*expect_ro=*/false, /*is_shaped=*/false>(a);
});
m.def("check_ro_by_rvalue_ref_ro",
[](nb::ndarray<nb::ro>&& a) {
return check_ro</*expect_ro=*/true, /*is_shaped=*/false>(a);
});
m.def("check_rw_by_rvalue_ref_float64",
[](nb::ndarray<double, nb::ndim<1>>&& a) {
return check_ro</*expect_ro=*/false, /*is_shaped=*/true>(a);
});
m.def("check_ro_by_rvalue_ref_const_float64",
[](nb::ndarray<const double, nb::ndim<1>>&& a) {
return check_ro</*expect_ro=*/true, /*is_shaped=*/true>(a);
});

m.def("check_order", [](nb::ndarray<nb::c_contig>) -> char { return 'C'; });
m.def("check_order", [](nb::ndarray<nb::f_contig>) -> char { return 'F'; });
m.def("check_order", [](nb::ndarray<>) -> char { return '?'; });
@@ -123,7 +211,7 @@ NB_MODULE(test_ndarray_ext, m) {
[](nb::ndarray<float, nb::c_contig, nb::shape<2, 2>>) { return 0; },
"array"_a);

m.def("inspect_ndarray", [](nb::ndarray<> ndarray) {
m.def("inspect_ndarray", [](const nb::ndarray<>& ndarray) {
printf("Tensor data pointer : %p\n", ndarray.data());
printf("Tensor dimension : %zu\n", ndarray.ndim());
for (size_t i = 0; i < ndarray.ndim(); ++i) {
@@ -285,6 +373,12 @@ NB_MODULE(test_ndarray_ext, m) {
v(i, j) *= std::complex<float>(-1.0f, 2.0f);
}, "x"_a.noconvert());

m.def("fill_view_6", [](nb::ndarray<std::complex<float>, nb::shape<2, 2>, nb::c_contig, nb::device::cpu> x) {
auto v = x.view<nb::shape<4>>();
for (size_t i = 0; i < v.shape(0); ++i)
v(i) = -v(i);
}, "x"_a.noconvert());

#if defined(__aarch64__)
m.def("ret_numpy_half", []() {
__fp16 *f = new __fp16[8] { 1, 2, 3, 4, 5, 6, 7, 8 };