
Allow broadcasting in the numerical representations of standard operations #2609

Merged
merged 26 commits into from Jun 2, 2022

Changes from 24 commits

Commits (26)
d317048
commit old changes
dwierichs May 23, 2022
dde9478
intermed
dwierichs May 24, 2022
393784f
clean up, move broadcast dimension first
dwierichs May 24, 2022
d3bc1b3
Merge branch 'parameter-broadcasting-1' into parameter-broadcasting-2
dwierichs May 26, 2022
2ade7cf
update tests that manually set ndim_params for default ops
dwierichs May 26, 2022
c438d45
pin protobuf<4.21.0
dwierichs May 26, 2022
2fb80bd
improve shape coersion order
dwierichs May 26, 2022
bb9bf14
changelog formatting
dwierichs May 26, 2022
bfe8d77
Merge branch 'parameter-broadcasting-1' into parameter-broadcasting-2
dwierichs May 28, 2022
24088d8
broadcasted pow tests
dwierichs May 29, 2022
e318ce2
attribute test, ControlledQubitUnitary update
dwierichs May 29, 2022
32d1286
test kwargs attributes
dwierichs May 29, 2022
9956534
Apply suggestions from code review
dwierichs May 30, 2022
bb1538a
changelog
dwierichs May 30, 2022
4cf4d94
review
dwierichs May 30, 2022
f88dc2c
remove prints
dwierichs May 30, 2022
6b5cf8b
explicit attribute supports_broadcasting tests
dwierichs May 30, 2022
28cfc4c
tests disentangle
dwierichs May 31, 2022
9846d23
fix
dwierichs May 31, 2022
20e689a
PauliRot broadcasted identity compatible with TF
dwierichs May 31, 2022
e12edbf
rename "batched" into "broadcasted" for uniform namespace
dwierichs May 31, 2022
ab77959
old TF version support in qubitunitary unitarity check
dwierichs May 31, 2022
a4062e7
python3.7 support
dwierichs May 31, 2022
bc3f5f1
merge
dwierichs May 31, 2022
cfa60f6
Apply suggestions from code review
dwierichs Jun 2, 2022
0a454f0
linebreak
dwierichs Jun 2, 2022
142 changes: 120 additions & 22 deletions doc/releases/changelog-dev.md
@@ -4,34 +4,125 @@

<h3>New features since last release</h3>

* Parameter broadcasting within operations and tapes was introduced.

[(#2575)](https://github.com/PennyLaneAI/pennylane/pull/2575)
[(#2590)](https://github.com/PennyLaneAI/pennylane/pull/2590)
[(#2609)](https://github.com/PennyLaneAI/pennylane/pull/2609)

Parameter broadcasting refers to passing parameters with a (single) leading additional
dimension (compared to the expected parameter shape) to `Operator`s.
Introducing this concept involves multiple changes:

1. New class attributes
- `Operator.ndim_params` contains the expected number of dimensions for each parameter
of an operator.
- `Operator.batch_size` contains the size of an additional parameter-broadcasting axis,
if present.
- `QuantumTape.batch_size` contains the `batch_size` of its operations (see logic below).
- `Device.capabilities()["supports_broadcasting"]` is a Boolean flag indicating whether a
device is natively able to apply broadcasted operators.
2. New functionalities
- `Operator`s use their new `ndim_params` attribute to set their new attribute `batch_size`
at instantiation. `batch_size=None` corresponds to unbroadcasted operators.
- `QuantumTape`s automatically determine their new `batch_size` attribute from the
`batch_size`s of their operations. For this, all `Operators` in the tape must have the same
`batch_size` or `batch_size=None`. That is, mixing broadcasted and unbroadcasted `Operators`
is allowed, but mixing broadcasted `Operators` with differing `batch_size` is not,
similar to NumPy broadcasting.
- A new tape `batch_transform` called `broadcast_expand` was added. It transforms a single
tape with `batch_size!=None` (broadcasted) into multiple tapes with `batch_size=None`
(unbroadcasted) each.
- `Device`s can handle broadcasted `QuantumTape`s by applying `broadcast_expand` internally if
the new flag `capabilities()["supports_broadcasting"]` is set to `False` (the default),
i.e. if they do not support broadcasting natively.
3. Feature support
- Many parametrized operations now have the attribute `ndim_params` and
allow arguments with a broadcasting dimension in their numerical representations.
This includes all gates in `ops/qubit/parametric_ops` and `ops/qubit/matrix_ops`.
The broadcast dimension is the first dimension of these representations.
Note that for most operations the broadcasted parameter has to be passed as an `array`,
not as a Python `list` or `tuple`.

**Example**

Instantiating a rotation gate with a one-dimensional array leads to a broadcasted `Operation`:

```pycon
>>> op = qml.RX(np.array([0.1, 0.2, 0.3], requires_grad=True), 0)
>>> op.batch_size
3
```

Its matrix is correspondingly augmented by a leading dimension of size `batch_size`:

```pycon
>>> np.round(op.matrix(), 4)
tensor([[[0.9988+0.j    , 0.    -0.05j  ],
         [0.    -0.05j  , 0.9988+0.j    ]],

        [[0.995 +0.j    , 0.    -0.0998j],
         [0.    -0.0998j, 0.995 +0.j    ]],

        [[0.9888+0.j    , 0.    -0.1494j],
         [0.    -0.1494j, 0.9888+0.j    ]]], requires_grad=True)
>>> op.matrix().shape
(3, 2, 2)
```

An operator with the `ndim_params` attribute determines at instantiation whether (and with
which `batch_size`) its input parameters are broadcasted. A tape containing such an operation
will detect the `batch_size` and inherit it:

```pycon
>>> with qml.tape.QuantumTape() as tape:
...     qml.apply(op)
>>> tape.batch_size
3
```

A tape may contain broadcasted and unbroadcasted `Operation`s

```pycon
>>> with qml.tape.QuantumTape() as tape:
...     qml.apply(op)
...     qml.RY(1.9, 0)
>>> tape.batch_size
3
```

but not `Operation`s with differing (non-`None`) `batch_size`s:

```pycon
>>> with qml.tape.QuantumTape() as tape:
...     qml.apply(op)
...     qml.RY(np.array([1.9, 2.4]), 0)
ValueError: The batch sizes of the tape operations do not match, they include 3 and 2.
```

Once a valid broadcasted tape has been created, we can expand it into unbroadcasted tapes with
the new `broadcast_expand` transform and execute the three resulting tapes independently.

```pycon
>>> with qml.tape.QuantumTape() as tape:
...     qml.apply(op)
...     qml.RY(1.9, 0)
...     qml.apply(op)
...     qml.expval(qml.PauliZ(0))
>>> tapes, fn = qml.transforms.broadcast_expand(tape)
>>> len(tapes)
3
>>> dev = qml.device("default.qubit", wires=1)
>>> fn(qml.execute(tapes, dev, None))
array([-0.33003414, -0.34999899, -0.38238817])
```

However, devices will handle this automatically under the hood:

```pycon
>>> qml.execute([tape], dev, None)[0]
array([-0.33003414, -0.34999899, -0.38238817])
```

* Boolean mask indexing of the parameter-shift Hessian
[(#2538)](https://github.com/PennyLaneAI/pennylane/pull/2538)

The `argnum` keyword argument for `param_shift_hessian`
is now allowed to be a two-dimensional Boolean `array_like`.
Only the indicated entries of the Hessian will then be computed.
A particularly useful example is the computation of the diagonal
@@ -117,11 +208,17 @@
for `qml.QueuingContext.update_info` in a variety of places.
[(#2612)](https://github.com/PennyLaneAI/pennylane/pull/2612)

* `BasisEmbedding` can now optionally accept an integer argument instead of a list of bits.
[(#2601)](https://github.com/PennyLaneAI/pennylane/pull/2601)

Example:

`qml.BasisEmbedding(4, wires = range(4))` is now equivalent to
`qml.BasisEmbedding([0,1,0,0], wires = range(4))` (because `4=0b100`).

* Introduced a new `is_hermitian` property to determine if an operator can be used in a measurement process.
[(#2629)](https://github.com/PennyLaneAI/pennylane/pull/2629)
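
  A minimal sketch of the expected behaviour (an illustration, not part of the original entry):

  ```pycon
  >>> qml.PauliX(0).is_hermitian
  True
  >>> qml.RX(0.5, wires=0).is_hermitian
  False
  ```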

<h3>Breaking changes</h3>

* The `qml.queuing.Queue` class is now removed.
@@ -163,7 +260,8 @@
as trainable do not have any impact on the QNode output.
[(#2584)](https://github.com/PennyLaneAI/pennylane/pull/2584)

* `QNode`'s now can interpret variations on the interface name, like `"tensorflow"` or `"jax-jit"`, when requesting backpropagation.
* `QNode`'s now can interpret variations on the interface name, like `"tensorflow"`
or `"jax-jit"`, when requesting backpropagation.
[(#2591)](https://github.com/PennyLaneAI/pennylane/pull/2591)
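
  A sketch of a call this enables (my own example, assuming TensorFlow is installed and that
  `default.qubit` supports backpropagation with the TensorFlow interface):

  ```pycon
  >>> dev = qml.device("default.qubit", wires=1)
  >>> @qml.qnode(dev, interface="tensorflow", diff_method="backprop")
  ... def circuit(x):
  ...     qml.RX(x, wires=0)
  ...     return qml.expval(qml.PauliZ(0))
  ```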

* Fixed a bug for `diff_method="adjoint"` where incorrect gradients were
1 change: 1 addition & 0 deletions pennylane/math/single_dispatch.py
@@ -48,6 +48,7 @@ def _i(name):
# qml.SparseHamiltonian are not automatically 'unwrapped' to dense NumPy arrays.
ar.register_function("scipy", "to_numpy", lambda x: x)
ar.register_function("scipy", "shape", np.shape)
ar.register_function("scipy", "ndim", np.ndim)
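
A short sketch of what this registration enables (my own example, assuming `qml.math` dispatches
`scipy.sparse` matrices to the `"scipy"` autoray backend, as the neighbouring `shape` registration
suggests):

```python
import scipy.sparse
import pennylane as qml

# With the new registration, `qml.math.ndim` resolves to `np.ndim` for the scipy
# backend instead of failing to dispatch.
mat = scipy.sparse.eye(4, format="csr")
print(qml.math.ndim(mat))   # expected: 2
print(qml.math.shape(mat))  # pre-existing registration: (4, 4)
```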


def _scatter_element_add_numpy(tensor, index, value):
30 changes: 21 additions & 9 deletions pennylane/operation.py
@@ -190,29 +190,38 @@ def expand_matrix(base_matrix, wires, wire_order):
# TODO[Maria]: In future we should consider making ``utils.expand`` differentiable and calling it here.
wire_order = Wires(wire_order)
n = len(wires)
interface = qml.math._multi_dispatch(base_matrix) # pylint: disable=protected-access
shape = qml.math.shape(base_matrix)
batch_dim = shape[0] if len(shape) == 3 else None
interface = qml.math.get_interface(base_matrix) # pylint: disable=protected-access

# operator's wire positions relative to wire ordering
op_wire_pos = wire_order.indices(wires)

identity = qml.math.reshape(
qml.math.eye(2 ** len(wire_order), like=interface), [2] * len(wire_order) * 2
qml.math.eye(2 ** len(wire_order), like=interface), [2] * (len(wire_order) * 2)
)
axes = (list(range(n, 2 * n)), op_wire_pos)
# The first axis entries are range(n, 2n) for batch_dim=None and range(n+1, 2n+1) else
axes = (list(range(-n, 0)), op_wire_pos)

# reshape op.matrix()
op_matrix_interface = qml.math.convert_like(base_matrix, identity)
mat_op_reshaped = qml.math.reshape(op_matrix_interface, [2] * n * 2)
shape = [batch_dim] + [2] * (n * 2) if batch_dim else [2] * (n * 2)
mat_op_reshaped = qml.math.reshape(op_matrix_interface, shape)
mat_tensordot = qml.math.tensordot(
mat_op_reshaped, qml.math.cast_like(identity, mat_op_reshaped), axes
)

unused_idxs = [idx for idx in range(len(wire_order)) if idx not in op_wire_pos]
# permute matrix axes to match wire ordering
perm = op_wire_pos + unused_idxs
mat = qml.math.moveaxis(mat_tensordot, wire_order.indices(wire_order), perm)
sources = wire_order.indices(wire_order)
if batch_dim:
perm = [p + 1 for p in perm]
sources = [s + 1 for s in sources]

mat = qml.math.reshape(mat, (2 ** len(wire_order), 2 ** len(wire_order)))
mat = qml.math.moveaxis(mat_tensordot, sources, perm)
shape = [batch_dim] + [2 ** len(wire_order)] * 2 if batch_dim else [2 ** len(wire_order)] * 2
mat = qml.math.reshape(mat, shape)

return mat
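
A hedged usage sketch of the broadcasting support this hunk adds (my own example; it assumes
`expand_matrix` keeps the signature shown above and that `qml.RX.compute_matrix` accepts
broadcasted angles, as enabled elsewhere in this PR):

```python
import numpy as np
import pennylane as qml
from pennylane.operation import expand_matrix

# Three broadcasted single-qubit RX matrices, stacked along a leading batch axis.
base = qml.RX.compute_matrix(np.array([0.1, 0.2, 0.3]))
print(base.shape)  # (3, 2, 2)

# Expanding onto a two-qubit wire order keeps the batch axis in front.
expanded = expand_matrix(base, wires=[1], wire_order=[0, 1])
print(expanded.shape)  # expected: (3, 4, 4)
```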

@@ -804,7 +813,9 @@ def label(self, decimals=None, base_label=None, cache=None):

if len(qml.math.shape(params[0])) != 0:
# assume that if the first parameter is matrix-valued, there is only a single parameter
# this holds true for all current operations and templates
# this holds true for all current operations and templates unless parameter broadcasting
# is used
# TODO[dwierichs]: Implement a proper label for broadcasted operators
if (
cache is None
or not isinstance(cache.get("matrices", None), list)
@@ -926,7 +937,8 @@ def _check_batching(self, params):
]
if not qml.math.allclose(first_dims, first_dims[0]):
raise ValueError(
f"Batching was attempted but the batched dimensions do not match: {first_dims}."
"Broadcasting was attempted but the broadcasted dimensions "
f"do not match: {first_dims}."
)
self._batch_size = first_dims[0]

@@ -1409,7 +1421,7 @@ def matrix(self, wire_order=None):
canonical_matrix = self.compute_matrix(*self.parameters, **self.hyperparameters)

if self.inverse:
canonical_matrix = qml.math.conj(qml.math.T(canonical_matrix))
canonical_matrix = qml.math.conj(qml.math.moveaxis(canonical_matrix, -2, -1))

if wire_order is None or self.wires == Wires(wire_order):
return canonical_matrix
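
A small NumPy sketch (my own) of why `moveaxis(-2, -1)` replaces the plain transpose here: it
swaps only the two matrix axes, so a leading broadcast axis stays in place.

```python
import numpy as np

mats = np.arange(12).reshape(3, 2, 2)   # broadcasted matrices, batch axis first

# Swap only the matrix axes; the batch axis is untouched.
print(np.moveaxis(mats, -2, -1).shape)  # (3, 2, 2)

# A plain transpose would also move the batch axis and scramble the batch.
print(np.transpose(mats).shape)         # (2, 2, 3)
```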
2 changes: 1 addition & 1 deletion pennylane/ops/functions/matrix.py
@@ -141,6 +141,6 @@ def _matrix(tape, wire_order=None):

for op in tape.operations:
U = matrix(op, wire_order=wire_order)
unitary_matrix = qml.math.dot(U, unitary_matrix)
unitary_matrix = qml.math.tensordot(U, unitary_matrix, axes=[[-1], [-2]])

return unitary_matrix
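
A small NumPy sketch (my own) of why `tensordot` over `axes=[[-1], [-2]]` generalizes the
previous `dot` call: it reproduces the ordinary matrix product and places a leading broadcast
axis of the first factor in front of the result.

```python
import numpy as np

U = np.random.rand(3, 2, 2)  # broadcasted operator matrix, batch size 3
V = np.random.rand(2, 2)     # accumulated, unbroadcasted matrix

out = np.tensordot(U, V, axes=[[-1], [-2]])
print(out.shape)  # (3, 2, 2)

# Each batch element equals the ordinary matrix product.
assert np.allclose(out[0], U[0] @ V)
```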
32 changes: 32 additions & 0 deletions pennylane/ops/qubit/attributes.py
@@ -199,3 +199,35 @@ def __contains__(self, obj):
representation using ``np.linalg.eigvals``, which fails for some tensor types that the matrix
may be cast in on backpropagation devices.
"""

supports_broadcasting = Attribute(
[
"QubitUnitary",
"ControlledQubitUnitary",
"DiagonalQubitUnitary",
"RX",
"RY",
"RZ",
"PhaseShift",
"ControlledPhaseShift",
"Rot",
"MultiRZ",
"PauliRot",
"CRX",
"CRY",
"CRZ",
"CRot",
"U1",
"U2",
"U3",
"IsingXX",
"IsingYY",
"IsingZZ",
]
)
"""Attribute: Operations that support parameter broadcasting.

For such operations, the input parameters are allowed to have a single leading additional
broadcasting dimension, creating the operation with a ``batch_size`` and leading to
broadcasted tapes when used in a ``QuantumTape``.
"""